A Comparison of Internal Fixation and Bipolar Hemiarthroplasty for the Treatment of Reverse Oblique Intertrochanteric Femoral Fractures in Elderly Patients
Purpose To compare the clinical and radiological results of internal fixation using the proximal femoral nail system versus bipolar hemiarthroplasty (BHA) for reverse oblique intertrochanteric hip fractures in elderly patients. Materials and Methods From January 2005 to July 2012, we reviewed the medical records of 53 patients who had been treated surgically for reverse oblique intertrochanteric fracture and followed up for a minimum of two years. All patients were ≥70 years of age and were divided into two groups for retrospective evaluation: one group treated with internal fixation using the proximal femoral nail system (31 cases) and the other treated with BHA (22 cases). Results Ambulation began significantly earlier and pain at three months postoperatively was significantly lower in the BHA group. However, by 24 months postoperatively, the internal fixation group exhibited higher Harris hip scores and correspondingly less pain than the BHA group. There were no significant differences in union rate, duration of hospitalization or lateral wall fracture healing between the two groups. Four patients in the internal fixation group underwent reoperation. Conclusion In the treatment of intertrochanteric fractures of the reverse oblique type, open reduction and internal fixation should be considered the better choice for patients with good health and bone quality. However, in cases of severe comminution of the fracture and poor bone quality, BHA is an alternative offering advantages including early ambulation, less pain at early stages, and a lower risk of reoperation.
INTRODUCTION
Amongst unstable intertrochanteric femoral fractures, the type 3 fracture, defined by the Association for Osteosynthesis/Orthopaedic Trauma Association (AO/OTA) classification, is characterized by a reverse oblique fracture line 1) , and its distal fracture fragment tends to be displaced inwardly due to a loss of the medial cortical bone, resulting in fractures that are extremely unstable, both anatomically and mechanically, as compared to other intertrochanteric femoral fractures 2) . Recently, results from a small number of studies have demonstrated favorable clinical outcomes for elderly osteoporosis patients with type 2 intertrochanteric femoral fractures who underwent bipolar hemiarthroplasty (BHA) 3,4) . Additionally, a few studies have reported results from open reduction and internal fixation using various fixation devices in type 3 intertrochanteric femoral fractures, which are expected to be very unstable 2,5-9) . However, there have been no published reports comparing the clinical and medical imaging results of BHA and internal fixation. Therefore, the present study compared clinical results via a retrospective analysis of either open reduction-internal fixation (OR-IF group) or BHA for treating type 3 intertrochanteric femoral fractures, as defined by the AO/OTA classification, in elderly patients ≥70 years of age.
MATERIALS AND METHODS
Amongst a total of 982 surgical cases of intertrochanteric femoral fractures treated between January 2005 and July 2012, 85 cases were type 3 reverse oblique intertrochanteric femoral fractures based on the AO/OTA classification. Of these, 18 cases were excluded for reasons including being <70 years of age, an inability to ambulate before the injury, and having one or more accompanying or pathological fractures. Of the remaining 67 cases, the 53 cases with available follow-up data were analyzed retrospectively. The inclusion criteria for intertrochanteric femoral fractures comprised type 3 fractures and cases presenting free lateral wall fracture fragments owing to one or more additional fracture lines on the coronal or sagittal planes of the femoral lateral wall. This study protocol was approved by the institutional review board of Gwangju Veterans Hospital (GJVH-IRB No. 14-9-7).
Study patients were divided into either the OR-IF group (31 cases) or the BHA group (22 cases), and the reduction status of the OR-IF group was evaluated in three categories based on the classification system of Fogagnolo et al. 10) , which was slightly modified from that of Baumgaertner et al. 11) As a result, 26 cases were classified as good and two as acceptable, while three cases were found to be poor. For internal fixation in the OR-IF group, the gamma locking nail (Trochanteric gamma locking nail; Stryker Trauma GmbH, Schönkirchen, Germany), proximal femoral nail (Synthes, Paoli, Switzerland), and proximal femoral nail antirotation (Synthes) were utilized in 11, 9, and 11 cases, respectively. A cementless, double-tapered C2 femoral stem with a rectangular cross-section (Lima Corporate, Udine, Italy) was utilized for all cases in the BHA group. The age of the total patient population ranged from 70 to 86 years and averaged 77.6 years. There were 16 male and 37 female patients. The average follow-up period was 42.84 months (range, 24-68 months). The Harris hip score was used to evaluate hip function 12) , and the Singh index 13) was used to assess the degree of osteoporosis. No statistical differences were noted between the study groups (i.e., OR-IF group and BHA group; Table 1). Forty-seven patients (88.7%) had one or more accompanying diseases, with hypertension the most prevalent (28 cases), followed by diabetes, cardiovascular diseases, cerebrovascular diseases, and pulmonary diseases.
All cases in which accompanying lateral wall fractures were found at presentation to our hospital, or in which type 1 or 2 intertrochanteric femoral fractures progressed during hospitalization, were categorized in advance of the analysis according to the timing of the lateral wall fracture (i.e., at the time of injury versus before surgery). Due to the inherent nature of the retrospective study design, it was impossible to describe clear criteria for the selection of operation methods. However, BHA was generally chosen for patients with one or more internal diseases, such as hypertension, cardiovascular diseases, diabetes, and renal diseases. Similarly, hemiarthroplasty was generally chosen for patients in whom early ambulation was required despite poor compliance, and for those in whom a high risk of internal fixation failure was anticipated owing to severe comminution of the femoral cortical bone. For both groups, anesthesia risk was assessed as well 14) (Table 1).
Operation and Post-operative Treatment
On average, operations were performed 3.5 days after the injury. Of 53 cases, 50 (94.3%) underwent operations within one week (Table 2). Utilizing a fracture table and image intensifier, closed reduction was attempted in the OR-IF group until satisfactory reduction was achieved; otherwise, various surgical tools (e.g., a clamp and Hohmann retractor) were employed through a small incision around the fracture fragments to aid the reduction. No direct incision was made over the fracture site in any case. In cases where the reduction of fracture fragments had to be maintained, a thick Steinmann pin was inserted to hold the fracture prior to the introduction of the proximal femoral nail. In the BHA group, fixation of the lesser and greater trochanters was performed only after the final femoral stem had been inserted and before assembly of the bipolar femoral head with the stem, in order to complete the reduction of the joint and restore leg length. If the lesser trochanter was fractured, the displaced lesser trochanter was not separated from the iliopsoas tendon but instead was placed manually in its anatomical position prior to fixation by tying either the lesser trochanter alone or both the greater and lesser trochanters with a steel wire. In order to achieve stable fixation in cases involving a fractured and displaced lateral wall, the trochanters were placed in their anatomical positions prior to the use of a Dall-Miles cable or steel wire tied in a figure-of-eight around the lower part of the lesser trochanter and the upper part of the greater trochanter. If stable fixation of the greater trochanter or lateral wall was not achieved, additional holes were made in the femur and greater trochanter to allow the steel wire to be tied in a figure-of-eight for further stability. In some cases, fixation could not be achieved with a steel wire because of small fracture fragments or severe comminution.

Preventive medication for deep vein thrombosis or ectopic ossification was not used, and a compressive dressing was applied after surgery. Intravenous antibiotics were administered for five days after surgery. Immediately after surgery, patients performed active flexion and extension exercises of the knee and ankle. One day after surgery, patients began sitting up in bed. Wheelchair mobilization was allowed two days after surgery, while further ambulation was permitted depending on the degree of pain and on the bone union and reduction status confirmed by imaging examinations.
Assessment Method
Patient data were analyzed retrospectively. Specifically, operation time, the amount of blood loss, and postoperative complications were compared between the groups. Reoperations and their operative methods were also analyzed. To compare short-term mortality, the difference in mortality within two years of surgery was examined. Two independent orthopedic residents who did not perform any of the included surgeries carried out the clinical and medical imaging examinations and evaluations. The medical imaging evaluation comprised anteroposterior and lateral radiographs taken after surgery, which were compared and reviewed against the most recent radiographs.
1) Evaluation of mortality
Because the present study included elderly patients ≥70 years of age, short-term mortality, defined as death within two years after surgery, was analyzed. Regardless of the follow-up period, comparisons were based on the time of investigation; patient information was obtained from the termination date of insurance from the National Health Insurance Corporation of Korea, as well as the date of death reported to a government office. Furthermore, actual death and time of death were confirmed retrospectively via hospital medical records and phone interviews.
2) Clinical assessment
For both groups, operation time was recorded from anesthesia administration to the end of the surgery. Further, the amount of blood loss, blood transfusion requirements, postoperative complications, and reoperation cases were assessed. Clinical parameters were monitored before and after surgery and at the 3-, 6-, 12-, and 24-month follow-ups. Function was evaluated using the Harris hip score 12) , and joint pain was assessed via visual analog scale (VAS) scores (0=no pain, 100=unbearable pain) 15) .
3) Medical imaging assessment
Cases accompanied by a fracture of the lateral wall were analyzed in both groups. These cases were subdivided into those in which the lateral wall fracture occurred at the time of injury, and those in which a fracture diagnosed as type 1 or 2 according to the AO/OTA classification at the time of injury progressed to type 3 due to a lateral wall fracture sustained during hospitalization. As part of the medical imaging evaluation of the OR-IF group, the bone union period was monitored and defined as the time taken for the formation of 3-4 cortical callus bridges on anteroposterior and lateral radiographs taken after surgery, together with the absence of tenderness or pain on weight-bearing. Bone union was evaluated separately for the major fracture line (the reverse oblique intertrochanteric fracture line) and the minor fracture line (the lateral wall fracture). Moreover, other complications, including secondary varus deformity assessed via changes in the neck-shaft angle, displacement of the distal fracture fragment, breakage of fixation devices, perforation of the femoral head, loosening of a screw or blade, and fracture nonunion, were investigated by comparing radiographs taken immediately after surgery and at the final follow-up.
For the medical imaging assessment of the BHA group, acetabular erosion, limb length measurements of both legs, and ectopic ossification according to the Brooker classification 16) were monitored. Radiographs were assessed for radiolucent lines accompanied by a sclerotic line according to the method of Gruen et al. 17) Osteolysis, vertical offset, and stability of the femoral stem were measured at the final follow-up according to the method of Engh et al. 18) In addition, bone union of the lateral wall fracture, as well as the time taken for union, were recorded and compared. The vertical offset of the femoral stem was measured based on the method previously described by Callaghan et al. 19) : the distance between the center of a small hole in the proximal part of the femoral stem and the middle of the lesser trochanter was measured. The medical imaging assessments were performed by two orthopedic residents who were not involved in the surgeries. To assess the reliability of the two evaluators' measurements, kappa coefficients were calculated; both evaluators showed good agreement for all measured parameters (K1=0.88, K2=0.81).
Statistical Analysis
The Mann-Whitney and chi-square tests were performed using the PASW Statistics software ver. 18.0 (IBM Co., Armonk, NY, USA). A P-value <0.05 was considered statistically significant.
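As a hedged illustration only (not the authors' code), the same tests, plus the kappa agreement check described in the imaging assessment above, can be reproduced with SciPy and scikit-learn instead of PASW/SPSS. The blood-loss values and rater labels below are hypothetical placeholders; the 2x2 table uses the lateral wall nonunion counts reported later in the Results (2/28 in the OR-IF group, 4/20 in the BHA group).

```python
# Illustrative sketch only (not the authors' code): the named tests in
# SciPy/scikit-learn. Placeholder data except where noted.
from scipy.stats import mannwhitneyu, chi2_contingency
from sklearn.metrics import cohen_kappa_score

# Continuous outcome (e.g., intraoperative blood loss in mL), two groups.
bha_blood_loss = [250, 300, 320, 280, 310]    # hypothetical values
orif_blood_loss = [120, 150, 140, 160, 130]   # hypothetical values
u_stat, p_val = mannwhitneyu(bha_blood_loss, orif_blood_loss,
                             alternative="two-sided")
print(f"Mann-Whitney U = {u_stat}, P = {p_val:.3f}")

# Categorical outcome: lateral wall nonunion vs. union per group.
table = [[2, 26],   # OR-IF: nonunion, union (counts from the Results)
         [4, 16]]   # BHA:   nonunion, union
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, P = {p:.3f} (dof = {dof})")

# Inter-observer agreement between the two residents (categorical grades).
rater1 = ["good", "good", "poor", "acceptable", "good"]   # hypothetical
rater2 = ["good", "good", "poor", "good", "good"]         # hypothetical
print(f"kappa = {cohen_kappa_score(rater1, rater2):.2f}")
```

With expected cell counts this small, Fisher's exact test (scipy.stats.fisher_exact) would arguably be preferable to the chi-square test.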
RESULTS

Mortality Assessment
Of all 67 cases, 14 had less than two years of follow-up. Among these 14 cases, 12 were excluded due to death: five from the OR-IF group and seven from the BHA group (P=0.125). Of the 12 excluded cases, nine died within one year of surgery: four from the OR-IF group and five from the BHA group (P=0.268). Lastly, of the nine cases that died within one year of surgery, two (one from each group) died within three months (P=0.742).
Comparisons of Operative Blood Loss and Operation Time
Regarding the amount of blood loss during surgery, the BHA group lost an average of 293.2 mL (range, 200-400 mL) while the OR-IF group lost 142.3 mL (range, 70-260 mL), a significant difference between the groups (P<0.001). In contrast, there was no difference in operation time, defined as the time from the administration of anesthesia to the end of surgery, between the groups: the average operation time was 67.0 minutes (range, 40-90 minutes) in the BHA group and 67.3 minutes (range, 40-90 minutes) in the OR-IF group (P=0.960; Table 2).
Clinical Assessment
In the evaluation of early ambulation after surgery, the average time to partial weight-bearing walking with a walker or crutches was 9.3 days (range, 4-14 days) in the BHA group versus 13.6 days (range, 9-21 days) in the OR-IF group, significantly shorter in the BHA group (P=0.032). Although numerically shorter in the OR-IF group, the hospitalization period was not significantly different between the groups (35.8 days [range, 24-58 days] and 40.4 days [range, 21-60 days] in the OR-IF and BHA groups, respectively; P=0.365). The Harris hip score was evaluated four times, from three months after surgery through the end of the follow-up period (i.e., 24 months after surgery). The BHA group showed better outcomes at the measurement performed three months after surgery (P=0.043), while no differences were found at the 6- and 12-month measurements (P=0.546 and P=0.436, respectively). In contrast, the OR-IF group had better outcomes than the BHA group at the last assessment, performed 24 months after surgery (P<0.001; Table 3).
Medical Image Assessment
Of the 31 cases in the OR-IF group, 28 were accompanied by a fracture of the lateral wall; 20 of 22 cases in the BHA group exhibited lateral wall fractures. In one case, a patient was diagnosed with a type 2 intertrochanteric femoral fracture when hospitalized but went on to develop a type 3 reverse oblique fracture due to an additional lateral wall fracture sustained during the hospital stay (Fig. 1). In the OR-IF group, the most frequent position of the femoral head screw was Zone 5 (n=25) 20) . After the first surgery, 26 of 28 cases in the OR-IF group and 16 of 20 cases in the BHA group displayed union of the minor fracture line. The average time to this union was 9.2 months (range, 6-20 months) and 9.1 months (range, 4-15 months) in the BHA and OR-IF groups, respectively. No statistical differences in union of the minor fracture line (P=0.089) or the average time to union (P=0.154) were detected between the groups. Of the 31 cases in the OR-IF group, with the exception of one case of conversion to an artificial joint due to early loss of reduction, one case of BHA due to periprosthetic fracture, one case of additional surgery attributable to nonunion, and one case managed nonoperatively, 27 cases achieved union of the major fracture line after the first surgery. The average time to major fracture line union was 6.9 months (range, 4-13 months). Regarding the radiographic evaluation, neither radiolucent lines of more than 2 mm suggestive of loosening of the femoral stem nor vertical offsets of more than 5 mm were observed at the final follow-up. Furthermore, no osteolysis was detected. None of the subjects experienced a limb length discrepancy of >10 mm. Stability of the femoral stem, assessed according to the method of Engh et al. 18) , demonstrated bony fixation in 19 cases (86.4%) and fibrous stable fixation in three (13.6%). No case showed ectopic ossification. Five cases in the BHA group and three in the OR-IF group showed a displacement >5 mm during union of the lateral wall; however, this difference did not reach statistical significance (P=0.327; Table 4, Fig. 2).
Evaluation of Complications and Reoperation
Reoperation was performed in four cases, all from the OR-IF group. One required BHA due to a periprosthetic fracture. Another underwent removal of the internal fixation device because the patient complained of severe discomfort (Fig. 3), and a third received an autogenous bone graft and additional internal fixation using a metal plate and screws due to widespread bony defects and nonunion. In the last case, BHA was performed due to early loss of reduction (within two months of the first operation). Of the two cases of nonunion of the major fracture line in the OR-IF group, one was revised with bone grafting and additional plating, which achieved successful bone union. The other case remains under long-term follow-up and is monitored regularly, as no significant pain was noted and the patient declined reoperation in favor of using crutches (Table 5, Fig. 4). In the BHA group, one subject complained vigorously of irritation from the internal fixation device; specifically, the patient felt discomfort from the prominent steel wire used for fixation of the lateral wall. Even though the lateral wall was displaced upwards due to nonunion, conservative treatment is being provided.
DISCUSSION
Type 3 reverse oblique intertrochanteric femoral fractures, as defined by the AO/OTA classification, account for 2% of total hip fractures and 5% of intertrochanteric femoral fractures 5) . This fracture type indicates a case in which the fracture line extends distally, toward the lateral femoral cortex, beyond the vastus ridge 21) . In such cases, the support afforded by the medial cortical bone is lost, resulting in a distal fracture fragment that tends to be displaced inwardly, making the fracture very anatomically and mechanically unstable 2) . It has been characterized as a four-part burst fracture in elderly patients 22) . These burst fracture fragments can be classified into four types: a proximal fragment including the femoral head; an anterior fragment including the intertrochanteric line; a posterior fragment including the intertrochanteric crest; and a distal fragment including the femoral shaft 22) . It has been reported that such cases require careful treatment, as the lateral wall is often fractured and bone defects can be widespread 22) . Anatomically, the lateral wall denotes the lateral femoral cortex from the vastus ridge to the distal part. Although it is a part of the greater trochanter, the lateral wall also constitutes the most proximal extension of the femoral shaft and therefore serves as a lateral buttress during bone union. Because a high reoperation rate has been associated with cases involving one or more lateral wall fractures, accurate and rigid fixation has been considered an important prognostic factor for type 3 intertrochanteric femoral fractures accompanied by lateral wall fracture 23) . In a study conducted by Haidukewych et al. 5) , 49 cases of reverse oblique intertrochanteric femoral fractures were analyzed. All cases in the study had additional fracture lines that were nondisplaced and mostly extended toward the proximal greater trochanter.
The study authors commented that surgeons should pay extra attention when an intramedullary nail is inserted through the greater trochanter where the fracture line extends, especially if a proximal femoral intramedullary nail is being used.
In the present study of elderly patients ≥70 years old, 48 had accompanying lateral wall fractures, indicating that it is not unusual to find this type of fracture in conjunction with type 3 intertrochanteric femoral fractures, which most often occur in elderly patients. As mentioned, we encountered one case in which a patient diagnosed with a type 2 intertrochanteric femoral fracture advanced to type 3 due to an additional lateral wall fracture sustained during hospitalization before surgery. Therefore, the utmost care should be taken with patients who have severe osteoporosis and a type 2 intertrochanteric femoral fracture characterized by a thin lateral wall. From our results, we determined that the minor fracture line (the lateral wall fracture) takes more time to achieve union than the major fracture line. This may be attributable, at least in part, to the force generated by the abductor muscles acting on the lateral wall fragment. Additional investigations are warranted, as accurate assessment may be difficult to achieve with such a small number of cases. In the treatment of reverse oblique intertrochanteric femoral fractures, sliding hip screws are not recommended due to surgical difficulty and lack of stable fixation, which elevates the reoperation rate 8-fold 6,24) . Therefore, it has recently been reported that intramedullary nails may be useful in such cases. Intramedullary fixation of type 3 intertrochanteric femoral fractures is characterized by short operative times and a small amount of blood loss 7,8) ; however, inward displacement of the distal fragment, complications such as loss of reduction or delayed union, and failure of internal fixation devices may require reoperation in elderly osteoporotic patients 9) . Although we experienced difficulties in the reduction of comminuted reverse oblique fractures, reduction was performed as anatomically as possible in order to achieve rigid internal fixation, resulting in favorable outcomes at the final follow-up. However, in the treatment of unstable reverse oblique fractures in osteoporotic patients, either reoperation in elderly patients who may already have comorbidities, or prolonged bed rest to avoid weight bearing because of complications (such as loss of reduction after internal fixation), may be risky and burdensome. In addition, recent reports have demonstrated that primary BHA allows patients to ambulate earlier with a low failure rate 3,4) . This surgical approach (i.e., BHA) was therefore studied to determine whether it confers favorable results in reverse oblique intertrochanteric femoral fractures. Although BHA allows early ambulation and lowers the risks of reduction failure and fixation-related complications, its inherent surgical disadvantages (e.g., a large surgical incision, greater blood loss, and longer operative time), as well as other risks (e.g., loosening of the artificial joint, acetabular erosion, infection, and dislocation), have hindered its widespread application. BHA for intertrochanteric femoral fractures is expected to involve a larger amount of blood loss and longer operative times than reduction and internal fixation, yet the expectation of longer operative times was not supported by the data from the present study.
Specifically, we did not find any difference in surgical times between the OR-IF and BHA groups. This may be attributable to the longer time required to position fracture patients on the operating table and to perform reduction in the OR-IF group, even though the actual time from incision to suture was short. BHA in reverse oblique intertrochanteric femoral fractures offers its own advantages: there was no difference in operative time compared with internal fixation, early ambulation was achieved, patients rarely complained of early pain, and reoperation was not necessary. It should be noted that, although the difference was not statistically significant, there were four cases of nonunion of the lateral wall in the BHA group (out of 20 cases), whereas only two of 28 cases occurred in the OR-IF group. This may have been attributable to injury to the soft tissues attached to the abductor muscles and lateral wall during the anterolateral approach. Therefore, the optimal approach should be carefully selected for intertrochanteric femoral fractures accompanied by a lateral wall fracture treated via BHA. In addition, three of 22 cases experienced a limb length discrepancy, a slightly higher rate than in our previous experience with BHA. This was likely the result of difficulty in accurately identifying the greater trochanter tip due to the lateral wall fracture; additionally, complete reduction of the comminuted fracture of the lesser trochanter, which can serve as a reference for the estimation of limb length, was not achieved. Therefore, further efforts are required to produce more accurate limb length measurements in BHA.
In cases where comminuted fractures of the greater trochanter accompany a reverse oblique intertrochanteric femoral fracture, anatomical reduction and rigid fixation of the bone fragments are important because of consequences including hip pain and changes in the lever arm from the center of the hip to the abductor muscles' point of action, which can weaken the abductors and potentially result in dislocation of the artificial joint 25) . Various surgical techniques are utilized for treating unstable intertrochanteric femoral fractures via BHA 26-28) . In the present study, we utilized a tension band wiring technique that fixes the lesser trochanter with a circular steel wire; another steel wire was then passed through the upper greater trochanter and the lesser trochanter in a figure-of-eight 28) . The advantages of this technique include its simplicity, its minimal effect on surgical costs, and the avoidance of the bursitis, bone resorption, and hardware damage associated with the metal plates employed for rigid fixation, while still permitting bone union 26) . In particular, it reduces operation time, which may be crucial for faster recovery in elderly patients. On the other hand, we found five cases in the BHA group in which displacement (>5 mm) was noted despite the use of a steel wire for fixation. Although the fractured lateral wall was fixed with the steel wire in a figure-of-eight, the displacements we noted may indicate how difficult it is to maintain reduction against the pull of the abductor muscles. It should be noted that in cases where the lateral wall fracture line lies parallel to the ground, there is a greater likelihood of lateral wall fragment displacement, as stable wire fixation is difficult to achieve.
The present study had several limitations. First, owing to the retrospective design, we were not able to clearly define the criteria for selection of the surgical approach. Second, the follow-up period (i.e., two years) was relatively short. Lastly, the number of cases was small due to the low incidence of type 3 fractures. Therefore, future prospective studies with larger sample sizes and longer follow-up durations are warranted. Given their significance, further investigations of the various surgical techniques used to reattach the greater trochanter, as well as of the precautions required during BHA surgery, should also be performed.
CONCLUSION
In the final follow-up of elderly patients (≥70 years old) with reverse oblique intertrochanteric femoral fractures, favorable clinical outcomes in terms of pain and function were demonstrated where successful anatomical fracture reduction and rigid internal fixation were achieved. Therefore, stable internal fixation may be a good choice for healthy patients with good bone quality who can reasonably expect to live long lives. Appropriate reduction can be achieved through careful examination of the fracture type before surgery. Owing to its benefits (e.g., early ambulation, pain improvement, and low risk of reoperation), BHA may be chosen for cases with severe comminution and poor bone quality, for patients at higher risk of early fixation failure, for those with short remaining life expectancy, and for patients who require early ambulation because a high risk of complications is expected from long-term bed rest.
"year": 2015,
"sha1": "184c5915c838eacd9fac6404c3eab4c8077289c6",
"oa_license": "CCBYNC",
"oa_url": "https://europepmc.org/articles/pmc4972720?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "184c5915c838eacd9fac6404c3eab4c8077289c6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Competing Nuclear Quantum Effects and Hydrogen-Bond Jumps in Hydrated Kaolinite
Recent work has shown that the dynamics of hydrogen bonds in pure clays are affected by nuclear quantum fluctuations, with different effects for the hydrogen bonds holding different layers of the clay together and for those within the same layer. At the clay–water interface there is an even wider range of types of hydrogen bond, suggesting that the quantum effects may be yet more varied. We apply classical and thermostated ring polymer molecular dynamics simulations to show that nuclear quantum effects accelerate hydrogen-bond dynamics to varying degrees. By interpreting the results in terms of the extended jump model of hydrogen-bond switching, we can understand the origins of these effects in terms of changes in the quantum kinetic energy of hydrogen atoms during an exchange. We also show that the extended jump mechanism is applicable not only to the hydrogen bonds involving water, but also those internal to the clay.
Hydrogen bonding is central to the properties of clays, 1−4 with H-bonds not only holding together the layers of clay materials but also controlling their interactions with surrounding solvents such as water. Clay−water systems show a rich variety of hydrogen-bond types and strengths. These interactions are crucial in processes such as the adsorption of solute on the surface of the clay, 5,6 its wetting, 7−9 and the swelling of the material by intercalation between the layers. 10,11 Recent work by the authors 4 has shown that the properties of dry clays can be affected by nuclear quantum effects (NQEs) such as zero-point energy, leading to weaker hydrogen bonds. The magnitude of these quantum effects depends on whether the H-bonds held together different clay layers or acted within a single layer; in clay−water systems, there is a wider range of H-bonding, with clays accepting H-bonds from the surrounding water, and in some cases able to donate hydrogen bonds to the water. The range of H-bond strengths exhibited by clay−water systems 1−3 raises the possibility that NQEs may affect different types of hydrogen bond to different degrees.
In this Letter, we use classical and ring polymer molecular dynamics to show that NQEs do indeed affect the hydrogen bonds in clay−water systems to different degrees. By interpreting our results using the angular jump model of hydrogen-bond switching, 12−14 we are able to understand these quantum effects in terms of the change in confinement of a water molecule as it undergoes a jump. We show further that the angular jump mechanism is present in H-bonds that do not involve water, hinting at its universality.
We focus on NQEs in hydrated kaolinite [Al 2 Si 2 O 5 (OH) 4 ], a 1:1-type dioctahedral clay whose layers comprise an octahedral sheet containing aluminum and a tetrahedral sheet containing silicon. Pure kaolinite contains H-bonds both within a clay layer and between neighboring layers; the kaolinite−water interface adds H-bonds donated by water molecules to the silica sheets and H-bonds between water and the aluminol sheets, in which either water or clay O−H groups may be the donor. 2,3 As in ref 4, since our main focus is on the importance of NQEs, we use the CLAYFF-TRPMD force field developed in that work, which extends the popular CLAYFF force field 15−18 to give accurate vibrational spectra in path integral simulations. Similarly, the q-TIP4P/F force field was used to model water; 19 this model accounts well for the experimental properties of pure water in path integral simulations, without requiring a significant computational expense. Classical molecular dynamics (MD), path-integral molecular dynamics (PIMD), 20 and thermostated ring polymer molecular dynamics (TRPMD) 21 simulations were carried out using the i-PI 22,23 and LAMMPS 24 codes, with NQEs accounted for by the PIMD and TRPMD calculations. To avoid the problem of artificial electric fields at kaolinite−water interfaces described in ref 25, two types of system were simulated: one in which both clay layers were oriented with the aluminol sheets toward the water and one with both silica sheets oriented toward water. Example simulation inputs as well as further details are given in the Supporting Information.
To identify events in which an H-bond donor changed its acceptor, the stable-states picture of chemical reactions was used, 26 as in refs 12 and 27. That is, a hydrogen-bond switch occurs whenever a donor oxygen O, whose initial acceptor is O_a, forms a hydrogen bond with a new acceptor O_b. We use a strict geometrical criterion for H-bonding, as described in the Supporting Information. The different types of H-bond studied in this Letter are given in Figure 1. We began by calculating the hydrogen-bond lifetime correlation functions, 27 C(t) = ⟨n_R(0)[1 − n_P(t)]⟩, with ⟨⋯⟩ the average over all possible H-bonds. n_R(t) is 1 if the H-bond is intact at time t and 0 otherwise, and n_P(t) is 1 if the original H-bond is no longer intact at time t, but another H-bond has been formed with the same donor O−H group. Absorbing boundary conditions are used, so that once a hydrogen bond is broken and a new one formed with the same donor, n_R(t) = 0 thereafter. By fitting to an exponential decay, C(t) ≃ exp(−t/τ0), we find the hydrogen-bond exchange time τ0. In path integral simulations the n_R(t) and n_P(t) functions are evaluated using the ring-polymer centroid. Figure 2 shows the lifetime correlation function C(t) from MD and TRPMD calculations for the various types of H-bond present in these simulations. In Figure 2b, the dynamics of intralayer hydrogen bonds are split into those of H-bonds that remain in the minimum-energy structure (i.e., at zero temperature, denoted T = 0 K) and those of H-bonds whose donor would participate in interlayer H-bonds at zero temperature but participates in an intralayer H-bond at finite temperature (denoted T > 0 K). This distinction was shown to be important in ref 4, in which the NQEs on the two types of intralayer hydrogen bond were shown to be very different. The resulting H-bond exchange times τ0 are shown in Table 1 and are in accordance with existing results showing that hydrogen bonding between pairs of water molecules is weaker than with the (hydrophilic) aluminol surface and stronger than with the (hydrophobic) silica surface. 1,2 The exchange times increase in the order water−silica < water−water < water−aluminol. Table 1 shows that hydrogen bonds donated by aluminol O−H groups and accepted by water are weaker than those donated by water and accepted by the aluminol layer, with the latter being 50% longer lived. Due to the higher electronegativity of Al than of H, the partial negative charge of oxygen in water is greater than that in an aluminol O−H group, making water the stronger H-bond donor. This difference in hydrogen-bonding strength was previously observed in ab initio MD simulations. 2,3 For intralayer H-bonds, there are two distinct time scales: T > 0 K H-bonds are much shorter lived than intralayer H-bonds that persist at zero temperature. For the T = 0 K intralayer H-bonds, the activation of an exchange involves rearrangement of the relatively rigid solid structure, meaning that its characteristic time is the longest observed. In Figure 2, some correlation functions are not single exponentials, particularly for water−aluminol and finite-temperature intralayer H-bonds. This behavior has been observed previously for hydrogen-bonding dynamics at water−mineral interfaces and attributed to a distribution of exchange times τ0. 28,29 Although all of the quantum effects on H-bond exchange times are small, there is variation among the different types.
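A minimal sketch of how the exchange time τ0 could be extracted in practice is given below; this is not the authors' analysis code. It assumes that the geometric H-bond criterion and the stable-states (absorbing-boundary) condition have already been applied upstream, so that each donor O−H group is reduced to the frame index of its first completed acceptor switch; the function names and the synthetic data are illustrative.

```python
# Minimal sketch (not the authors' code) of extracting tau0 from C(t),
# given one first-completed-switch frame index per donor O-H group.
import numpy as np

def lifetime_correlation(switch_frames, n_frames):
    """C(t): fraction of H-bonds whose first completed switch has not
    yet occurred by frame t (equals 1 at t = 0, decays toward 0)."""
    c = np.zeros(n_frames)
    for t_switch in switch_frames:
        # The bond contributes 1 up to (but excluding) its switch frame;
        # absorbing boundaries mean it never contributes again.
        c[:min(t_switch, n_frames)] += 1.0
    return c / len(switch_frames)

def exchange_time(c, dt):
    """Fit C(t) ~ exp(-t/tau0) via linear regression on log C(t)."""
    t = np.arange(len(c)) * dt
    mask = c > 0.05                     # drop the noisy tail
    slope, _ = np.polyfit(t[mask], np.log(c[mask]), 1)
    return -1.0 / slope                 # tau0, in the same units as dt

# Hypothetical usage with synthetic switch frames (0.25 ps per frame).
rng = np.random.default_rng(0)
switch_frames = rng.geometric(p=0.01, size=500)
c = lifetime_correlation(switch_frames, n_frames=400)
print(f"tau0 ~ {exchange_time(c, dt=0.25):.1f} ps")   # expect ~25 ps
```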
For the H-bonds involving water, we note that the longer the classical exchange time τCL, the larger the quantum effect, with water−silica H-bonds having the smallest value of each and water−aluminol H-bonds the largest. This is in accord with previous results suggesting that more strongly bound systems, with longer lifetimes, are more susceptible to NQEs, in the context of ion solvation 30,31 and proton transfer reactions. 32 Since clays are a significant medium for contaminant removal from soil, 6 this result suggests that NQEs may increase the rate of this process. However, the results for kaolinite's internal H-bonds do not fit this trend; although the T > 0 K intralayer H-bonds are longer lived than water−silica H-bonds, there are essentially no NQEs within the error bars. In addition, despite having the same classical exchange time, water−water and interlayer H-bonds incur different quantum effects. Crucially, the differing ratios of classical to quantum exchange times imply that NQEs in clay−water systems cannot be straightforwardly accounted for by running classical simulations at elevated temperatures.
To better understand the different magnitudes of quantum effect seen for hydrated kaolinite, we note that the time τ0 is a key ingredient in the extended jump model of H-bond switching, in which hydrogen bonds change their acceptors through a large-angle jump of the donor O−H group; 14 this has been observed in a range of systems, 12,33−35 including mineral−water interfaces. 36−38 While it has been shown that water molecules donating hydrogen bonds to a material also change their acceptor via a large-angle jump, it is not yet known whether hydrogen bonds internal to clay minerals also undergo these exchanges; investigating this allows us to probe the applicability of the mechanism to systems beyond water. While the trajectories of H-bonds involving water look qualitatively very similar to those observed in pure water, 12,27 the donor−acceptor distances inside the clay (panels (d)−(f) in Figure 3) behave nonmonotonically, and the trajectories are asymmetric about the jump. This asymmetry reflects the fact that 95% of interlayer H-bonds become intralayer H-bonds after undergoing a jump, while only 25% of intralayer H-bonds become interlayer H-bonds. Jumps in which both the initial and final acceptors are in the same layer are much more symmetric, as shown in the Supporting Information. This discrepancy may appear at first to be at odds with detailed balance, indicating that jumps do not happen independently; rather, the equilibrium distribution of H-bonds is maintained by reversals of jumps. The populations of the different H-bond types in dry kaolinite, according to our previous work, 4 are 0.51:0.19:0.10:0.20 for interlayer, T = 0 K intralayer, T > 0 K intralayer, and dangling (i.e., without an H-bond acceptor) bonds, respectively. The nonmonotonic behavior is less straightforward to understand: future work will focus on investigating whether this is due to the complexity of the mechanism or to limited statistics. The ratios of jumps to different types of H-bond acceptor are also instructive: for water molecules initially donating an H-bond to the aluminol surface, the ratio of final acceptors on the aluminol surface to those that are water molecules is around 1:1; for donation to the silica surface, this ratio is closer to 1:3, illustrating the higher strength of H-bonds formed with the aluminol surface. As shown in the Supporting Information, NQEs make very little difference to the trajectories of H-bond jumps involving water, in accord with the results of ref 39. To understand the origin of these NQEs, we computed the centroid virial estimate of the quantum kinetic energy tensor of the jumping H atom during an exchange, for the H-bond types in Table 1. The inset illustrates this tensor as an ellipsoid during a jump event: at the transition state, the H atom is able to move toward either the initial or final acceptor, meaning its confinement is low in the perpendicular direction; it is only weakly H-bonded, meaning that it is relatively confined in the parallel direction.
At the transition state, the component of the kinetic energy parallel to the O−H bond increases, since the strength of the H-bond to the acceptors is relatively low, while the perpendicular component decreases, indicating a greater delocalization in this direction as the O−H bond can move just as easily toward the initial and the final acceptor.
The inset of Figure 4 shows this in more detail. Qualitatively, these results are in accord with previous work. 4 These two effects combine to give the total quantum kinetic energy, which generally decreases at the transition state. Table 2 shows the changes in kinetic energy on going to the transition state, decomposed into parallel and perpendicular components. The change in kinetic energy does not by itself account for the observed quantum effect, as the average potential energy will also change, but a greater decrease in the reaction barrier due to kinetic energy correlates fairly well with larger quantum effects in τ0 (see Table 1).
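The parallel/perpendicular decomposition used in Table 2 can be illustrated with a short sketch (not the authors' code): given a 3×3 quantum kinetic energy tensor for the jumping H atom, assumed here to be available from a centroid virial estimator in the path-integral code, the component along the O−H bond is obtained by projecting onto the bond unit vector, and the perpendicular part is the remainder of the trace. All numerical values below are placeholders.

```python
# Hedged sketch (not the authors' code): splitting a per-atom quantum
# kinetic energy tensor into O-H-parallel and perpendicular components.
import numpy as np

def parallel_perpendicular(T, r_oh):
    """Return (parallel, perpendicular) parts of tr(T) with respect to
    the O->H bond vector r_oh."""
    u = r_oh / np.linalg.norm(r_oh)   # unit vector along the O-H bond
    t_par = u @ T @ u                 # projection onto the bond axis
    t_perp = np.trace(T) - t_par      # sum over the two normal directions
    return t_par, t_perp

# Placeholder tensor (meV) and O->H vector (angstrom) for one frame.
T = np.diag([150.0, 110.0, 105.0])
r_oh = np.array([0.95, 0.10, 0.05])
t_par, t_perp = parallel_perpendicular(T, r_oh)
print(f"parallel: {t_par:.1f} meV, perpendicular: {t_perp:.1f} meV")
```

Averaging these components over many jump events, aligned on the jump time, would yield kinetic energy trajectories like those discussed above.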
A significant cause of the quantum effects in H-bond exchange times is the change in confinement of the H atom, leading to a decrease in the free energy barrier to undergoing a jump. The effects of motion parallel and perpendicular to the O−H bond compete with each other, with the latter dominating in all cases. This competition of effects has been observed before for hydrogen-bonded systems, 19,31,40 with the dominant effect determined by the type of motion that contributes most to H-bond breaking. 40 This means that, while the NQEs in our simulations are extremely subtle, they act as a direct probe of the change in geometry that occurs on going to the transition state. Since only a few hundred jump events were collected for intralayer H-bonds that are only present at finite temperatures, the kinetic energy trajectories are extremely noisy (see the Supporting Information), meaning that the apparent lack of quantum effects in their exchange time τ0 cannot yet be fully understood. As in Figure 3, the symmetry of the quantum kinetic energy trajectories indicates the types of H-bond that are preferentially formed during a jump: H-bonds initially donated from water to the silica layer have highly asymmetric trajectories because half of the final acceptors are water molecules; for this reason, the section of the water−silica trajectory with t > 0 is very similar to that of the water−water trajectory. On the other hand, H-bonds donated to the aluminol layer have much more symmetric trajectories, since a greater proportion of the final acceptors are also in this layer.
In this Letter, we have shown that hydrated clays exhibit a variety of nuclear quantum effects, which are rationalized using the angular jump mechanism of hydrogen-bond exchange. The different degrees of quantum effect are found to be a manifestation of the change in confinement that an H-bond experiences at the jump transition state. Future work will focus on further understanding the complex jump mechanism in and around clays, paying attention to the nonmonotonicity in trajectories, as well as on the effect of quantum fluctuations at the interface between materials and ionic solutions, 41 on surfaces at which water molecules dissociate, 42 and on using more sophisticated potentials. 43
"year": 2023,
"sha1": "a93558a8f70ac33756217da22f3550c317cc2d73",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "e52cd8e04fd130fe7a7e43e3293749bb220ee249",
"s2fieldsofstudy": [
"Materials Science",
"Physics",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Access to Orphan Drugs: A Comprehensive Review of Legislations, Regulations and Policies in 35 Countries
Objective To review existing regulations and policies utilised by countries to enable patient access to orphan drugs. Methods A review of the literature (1998 to 2014) was performed to identify relevant, peer-reviewed articles. Using content analysis, we synthesised regulations and policies for access to orphan drugs by type and by country. Results Fifty-seven articles and 35 countries were included in this review. Six broad categories of regulation and policy instruments were identified: national orphan drug policies, orphan drug designation, marketing authorization, incentives, marketing exclusivity, and pricing and reimbursement. The availability of orphan drugs depends on each country's legislation and regulations, including national orphan drug policies, orphan drug designation, marketing authorization, marketing exclusivity and incentives such as tax credits to encourage research, development and marketing. The majority of countries (27/35) had orphan drug legislation in place. Access to orphan drugs depends on each country's pricing and reimbursement policies, which varied widely between countries. High prices and insufficient evidence often prevent orphan drugs from meeting traditional health technology assessment criteria, especially cost-effectiveness, which may influence access. Conclusions Overall, many countries have implemented a combination of legislations, regulations and policies for orphan drugs in the last two decades. While these may enable the availability of and access to orphan drugs, there are critical differences between countries in the range and types of legislations, regulations and policies implemented. Importantly, China and India, two of the largest countries by population size, both lack national legislation for orphan medicines and rare diseases, which could have substantial negative impacts on their patient populations with rare diseases.
Introduction
Orphan drugs are medicines or vaccines intended to treat, prevent or diagnose a rare disease. Examples of rare diseases include genetic diseases, rare cancers, infectious tropical diseases and degenerative diseases. The definition of rare diseases varies across jurisdictions but typically considers disease prevalence, severity and the existence of alternative therapeutic options. In the United States (US), rare diseases are defined as a disease or condition affecting fewer than 200,000 patients in the country (that is, about 6.4 in 10,000 people) [1], while the European Union (EU) identifies a rare disease as a life-threatening or chronically debilitating condition affecting no more than 5 in 10,000 people [1]. An estimated 6000-8000 rare diseases exist today, affecting approximately 6-8% of the world's population [1-4]. A recent systematic review [5] of cost-of-illness studies on 10 rare diseases (including cystic fibrosis and haemophilia) found limited published information overall [5]. The availability of information ranges from none to little between diseases, and estimated total costs of illness also range substantially between studies conducted in different countries; for example, the lifetime cost of cystic fibrosis in Germany was estimated at €858,604 per patient in 2007, while US data suggest €1,907,384 in 2006 [5].
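As a rough worked example of how the two prevalence definitions above compare, the snippet below converts the US patient-count threshold into a rate per 10,000 and the EU rate into an equivalent patient count; the population figures are approximate assumptions, not values taken from the article.

```python
# Worked arithmetic (illustrative only) comparing the two prevalence
# thresholds; population figures are rough assumptions.
us_population = 313e6             # assumed US population, early 2010s
us_threshold_patients = 200_000   # US Orphan Drug Act threshold
print(f"US: ~{us_threshold_patients / us_population * 10_000:.1f} per 10,000")

eu_population = 500e6             # assumed EU population
eu_rate_per_10k = 5               # EU threshold: no more than 5 in 10,000
print(f"EU: ~{eu_rate_per_10k / 10_000 * eu_population:,.0f} patients")
```

This reproduces the article's figure of roughly 6.4 per 10,000 for the US threshold, and shows that the EU rate corresponds to roughly 250,000 patients across the assumed EU population, so the two definitions are of comparable stringency.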
Availability of and access to medicines are important to reduce the morbidity and mortality of rare diseases. For instance, until the recent availability of pirfenidone, a lung transplant was the only treatment option for patients with idiopathic pulmonary fibrosis, a rare disease with a 50% chance of survival at 3 years [6]. Despite the need for and importance of availability of and access to orphan drugs, there is a paucity of available treatments for rare diseases. Less than one in ten patients with rare diseases receives disease-specific treatment [7]. Drug development for rare diseases is often limited by the prohibitive cost of investing in an original pharmaceutical agent with poor profit potential, given the small patient population for each rare disease indication. Under human rights principles, patients with rare diseases have equal rights to medicines as other patients with more prevalent diseases (e.g., diabetes). They should not be excluded from the benefits of medical advances merely because of the rarity of their illness [1,3]. In this context, many governments and authorities have established legislations, regulations and policies to encourage the research and development of orphan drugs [3,4,8] and to address licensing regulations and the pricing and reimbursement of these drugs [4,8-10]; such economic and regulatory incentives are important public health decisions.
It is important to understand the regulatory and policy initiatives for orphan drugs that exist in different countries, and their differences, to improve research and policy development for the treatment of rare diseases. However, existing articles in this field have predominantly either summarized regulations and policies in a single country or continent, or discussed the effect of one or a few regulations/policies influencing access to these important medicines. The aim of this study was to review, as comprehensively and systematically as possible, the range and types of existing legislations, regulations and policies utilised by countries to enable the availability and accessibility of orphan drugs.
Article Selection and Data Collection
From the database/journal searches, 23,904 titles/abstracts were retrieved. The title and abstract of all retrieved articles were reviewed by the lead author (TG) for relevance; subsets of the results were checked by a second author (ZB or CYL). If there was any ambiguity regarding a paper, the full-text article was retrieved and reviewed for relevance. After removing duplicates and titles/abstracts unrelated to orphan drugs or rare diseases, we identified 113 peer-reviewed, English-language articles. We included original articles, reviews, commentaries and opinions if they described legislations/regulations/policies for orphan drugs and relevant health services. Of these, only 58 articles were relevant to legislations/regulations/policies for orphan drugs; these articles were read in full by TG, with guidance from ZB and CYL. Six more articles were identified from the references of the retrieved articles; thus 64 articles were considered against our study inclusion and exclusion criteria, with no significant bias found that would affect the cumulative evidence reported (Table 1). Based on these criteria, a further 7 articles were excluded and 57 articles were included in the final analysis (Fig 1).
Analysis
We reviewed the literature systematically to ensure that the narrative synthesis produced was sourced from the most complete collection of relevant literature possible. Thematic analysis of the articles was conducted, with new regulation/policy categories added as needed and relevant sub-categories created, until no more themes were identified and saturation was deemed to have been reached. Using the regulation/policy categories generated by this analysis, we described the range and types of legislation/regulation/policy in each included country that affect the availability of and access to orphan drugs.
For the purposes of this review, availability of and access to orphan drugs were defined as follows. Availability of an orphan drug was defined as whether it had obtained a relevant marketing authorization (and orphan designation if necessary) [3]. Access to orphan drugs was defined as the enabling of individuals, in their financial and physical ability, to obtain and receive relevant care involving orphan medicines; access was commonly determined by coverage status, reimbursement, and price [3].
Results
The 57 included articles involved 35 countries (including 21 EU countries and 5 Asian countries), among them the US, Canada, Australia, Taiwan, Singapore and Japan. We did not find any articles reporting legislations/regulations/policies for orphan drugs in Latin American or African countries. We noted that there have been few original articles (22 of 57) in this field. Our review of the 57 articles generated 6 themes with 13 subcategories: national orphan drug policy, orphan drug designation, orphan drug marketing authorization, marketing exclusivity, incentives, and pricing and reimbursement. These themes describe the political or regulatory mechanisms utilised and their influence on patient access to orphan medicines. Table 2 summarises these categories. We summarized the environment for availability of and access to orphan drugs in each of the included countries by these categories (Table 3). S3 Appendix (General Characteristics of Included Studies) summarizes the articles included in this study. We summarize our findings by theme below.

Table 1 (excerpt). Inclusion and exclusion criteria for countries covered. Inclusion: countries with a publicly funded health system and the ability to institute policy and regulation in various ways to facilitate access to orphan medicines; orphan drug policy and regulation implemented by countries outside these domains that facilitate access to orphan medicines were also considered in the review. Exclusion: articles that did not describe any specific legislations/regulations/policies for orphan drugs.
National Orphan Drug Policy
National Orphan Drug Policy was one of the 6 themes identified from our review and included: orphan drug legislation, national rare disease plans, cross-border regulation, and orphan drug designation. Orphan drug legislation. Orphan drug legislation is used by a number of countries to encourage research, development and marketing of orphan drugs [19,31,42]. The US was the first country to establish national orphan drug legislation, with the Orphan Drug Act passed in 1983 [10]. Japan was the second country to implement orphan drug legislation, in 1993 [1,40]. Australia was also one of the first countries to develop orphan drug legislation; Australia's Therapeutic Goods Act 1990 was amended in 1997 to include the full Australian orphan drug policy [3]. EU legislation for orphan drugs (Regulation (CE) N°141/2000) was implemented in 2000. Taiwan and Singapore also have specific legislation pertaining to orphan drugs (the Rare Disease and Orphan Drug Act and the Medicines Act Chapter 176, Section 9, respectively) [1]. Orphan drug legislation is intended to address the challenges of prohibitive product development costs and limited profit potential due to the smaller market size for each rare disease [3,31,42]. Such legislation includes a variety of incentives to encourage orphan drug research, development and marketing. These often include tax credits for orphan drug research costs; several years of marketing exclusivity [1,12,23,29,31,40,42], which prevents marketing approval of a generic or brand-name drug for the same rare disease indication; free scientific advice such as protocol assistance; fast-track/priority review for marketing authorization of orphan-designated products; and pre-licensing access initiatives, including off-label and compassionate use programmes [4,10,13,14,20].
It is important to note that patient advocacy was instrumental in the formation of orphan drug legislation such as the US Orphan Drug Act and EU Regulation (CE) N°141/2000 [12]. Patients frequently form patient organisations as "surrogate pressure groups" and influence prescribers, regulatory agencies and political bodies in matters of availability of and access to orphan drugs [9,12,21,23,32,43-45]. Examples include international organisations such as the National Organisation for Rare Disorders (NORD) in the US and the European Organisation for Rare Diseases (EURODIS). These groups focus on improving care of rare diseases through better access to information as well as individual patient access to orphan drugs and associated treatment [14,21,22,27,32,43-46]. Patient advocacy groups often lobby third-party payers, or governments funding healthcare, to provide full reimbursement of orphan drugs regardless of their high price [44]. Patient advocacy groups may also form partnerships with regulatory agencies, for example, EURODIS with the European Medicines Agency (EMA) [23].
National Rare Disease Plans. National plans for rare diseases generally aim to create a regulatory framework for access to services, treatment and information, research stimulation, and patient advocacy [9,14,21,27,43,46]. National rare disease plans differ from orphan drug legislation in that they often do not put specific legislation into place. Most often, they indicate the initial 'readiness' of a country to respond in the field of orphan drugs and rare diseases [14,43]. These plans include a framework and documentation of a national shared vision in the field of orphan drugs [14,20-22,27,46]. For example, five neighbouring European countries, Bulgaria, Greece, Macedonia, Romania and Serbia, have established national plans for rare diseases [27]. The effectiveness of such plans in terms of availability (orphan drug designations and marketing authorizations) and access (lower prices and positive reimbursement decisions) may be affected by national purchasing power and budget, as well as by decision-making criteria for pricing and reimbursement policies [10,20,22,27].
Cross-border regulation. The EU is unique in that it is the only entity with a centralised procedure for orphan drug designation and marketing approval extending across its member countries. Cross-border regulation is of particular importance in the context of rare diseases because patients often do not receive treatment due to inadequate access to orphan drugs, as well as inadequate domestic availability of related specialised clinicians and facilities [2]. Directive 2011/24/EU clarifies patients' rights to cross-border healthcare. This directive grants patients with a rare disease within the EU the right to EU-wide healthcare services if the national healthcare system is not able to provide the essential treatment domestically within a reasonable timeframe [2]. However, due to differences in national pricing and reimbursement policies across the EU, patients still experience differential access to orphan drugs [2].
Orphan Drug Designation. Orphan designations are often based upon severity (life-threatening or chronically debilitating conditions) and unmet need (no therapeutic alternative, or the new product provides significant clinical benefit) [4,47]. This basis is often further split between prevalence and economic criteria [29]. Prevalence criteria consider specific definitions of orphan diseases and national patient prevalence, while economic criteria consider whether the expected sales of a drug product would cover the initial investment costs associated with research and development [4,31]. Differences in prevalence criteria are usually the primary reason for differing definitions of rare diseases and orphan drugs across jurisdictions [4]. There is often a lack of quantity and quality of clinical evidence for orphan drugs, due to the limited number of patients available for clinical trials [4,47,48]. Orphan drug designation may also allow drugs for non-orphan diseases to gain market access [14]. Oncology products account for the greatest number of orphan drug designations in the US (32.5% of all orphan designations), and similarly in Europe, Japan and Australia [1,3,4,31].
Orphan Drug Marketing Authorization
Assessment for marketing authorization of orphan drugs has been critical in promoting the availability of orphan medicines and is often the same as for non-orphan medicinal products in non-EU countries [4,12,16,28,39,41]. For example, in countries such as Australia, the US and Japan, marketing authorization procedures are largely identical to those for non-orphan drugs [4,12,16,28,39,41]. Similarly, procedures are the same for orphan and non-orphan drugs in countries currently without orphan drug legislation, such as Canada and Israel [1,3,13]. However, marketing authorization procedures differ in the EU. In the EU, decisions on orphan designation are made by the Committee for Orphan Medicinal Products (COMP) of the European Medicines Agency, while marketing authorization decisions are made by the Committee for Medicinal Products for Human Use (CHMP), the same committee as for non-orphan drugs. A single marketing authorization is granted by the CHMP, with the aim of ensuring that patients with rare diseases have equal access to orphan drugs independent of member state. In a study of 11 countries (Australia, Canada, England, France, Germany, Hungary, Netherlands, Poland, Slovakia, Switzerland and the US) by Blankart et al. [3], all 11 implemented similar standards for the approval of orphan drugs. In smaller nations such as Serbia or Macedonia, the process is simplified if the drug has been authorized in other, larger nations [16,27,28,39]. This may affect the timely availability of orphan drugs in smaller nations, because pharmaceutical companies tend to apply for authorization in the US or EU first [3]. Countries often rely upon the same studies to assess the clinical effectiveness of orphan drugs, and orphan drugs are often evaluated using the same criteria, including severity and unmet need. However, differences are often found in the interpretation of study results, which may affect the outcome of marketing authorizations [3]. The "success rate", the proportion of orphan medicines that receive marketing approval after receiving an orphan designation, was reported to average as low as 10.9% of all orphan designations granted in the EU in the first ten years of EU orphan drug legislation (2000-2010). Success rates were similar in the US, at 15.9% in the 28 years since implementation of the 1983 Orphan Drug Act [25]. Low success rates are likely due to the differences between the approval criteria for orphan drug designation and those for marketing authorization. Research and development incentives for orphan drugs likely result in large numbers of applications and orphan drug designations; however, stricter criteria for marketing authorization mean that many products that received an orphan designation may not ultimately be approved for marketing [25].

Accelerated Procedures. Some countries have accelerated procedures to ensure timely availability of orphan drugs to the market [3,4]. These procedures include priority review, fast-track approval and accelerated approval [29]. Although these processes are applicable to both orphan and non-orphan medicines, they are more readily applied to orphan drugs; nevertheless, orphan drugs do not automatically qualify for accelerated procedures. In some countries, less rigid criteria are used for the evaluation of the therapeutic value of orphan drugs [48]. Criteria regarding unmet need, severity, and high clinical efficacy must be met for accelerated review.
In the US, for instance, priority review is granted to orphan drugs that demonstrate major advances in treatment or meet a significant unmet need. Iloprost, an orphan drug for the treatment of pulmonary arterial hypertension, received priority review in the US in December 2004, with a positive outcome within 6 months as compared to the regular 10-month assessment period [3]. An accelerated assessment usually takes about half the time needed for the standard marketing authorization process (~150 days versus a year or longer) [3,29].
Incentives
There are financial and non-financial incentives to ensure availability and access to orphan drugs. We summarize these below.
Financial Incentives. Financial incentives utilised worldwide include research grants, tax credits/corporate tax reductions, marketing exclusivity, and user fee waivers [1,12,17,31,41,42,49]. These provisions exist as a means to allow firms to recover research and development costs, which would not otherwise be possible from sales of orphan drugs given their small market sizes. These incentives generally help to increase the availability of orphan drugs [3]; Blankart et al. [3] found that only 10% of clinical trials for orphan drugs would have been conducted without such financial incentives.
Non-Financial Incentives. Non-financial incentives we identified include fast-track approval, pre-licensing access (in the form of compassionate or off-label access) and scientific advice, that is, free protocol assistance and/or development consultation [1,3,4,10,29]. Garau et al. [4] investigated a selection of seven EU member states and found that four countries (France, Italy, Spain and the Netherlands) allow pre-licensing access to orphan drugs but encourage the collection of additional clinical data to prove therapeutic benefit. Pre-licensing access allows the importation of orphan drugs that are available in other countries but currently unauthorized domestically. Pre-licensing access is often the most common route by which patients access orphan drugs in many countries, often through procedures such as 'named patient procedures'. Such use may be granted to an individual or a group of patients with a serious or life-threatening disease where there is no alternative therapeutic option [4,29]. The need is determined by the responsible physician and patient [38]; each application is evaluated in light of relevant evidence and the advice of scientific communities [38]. While pre-licensing access may be granted, access by individual patients is rarely reimbursed by public health insurance [3]; a notable exception is Turkey's national reimbursement scheme, which enables both availability (through importation) and access to orphan drugs when these drugs are otherwise unavailable, unauthorized and inaccessible [3,4,10,38]. Free scientific advice, including protocol assistance, is provided by regulatory authorities to increase the quality of clinical trials and study protocols and to increase the likelihood of successful marketing authorization and subsequent reimbursement applications [4,20,29].
Marketing Exclusivity
Marketing exclusivity is generally implemented as part of a package of incentives to encourage pharmaceutical companies in the research and development of orphan drugs. This allows firms several years to recover drug development costs. During the marketing exclusivity period, regulatory agencies cannot approve a generic or brand-name drug for the same rare disease indication [3,31]. However, the same drug can receive approval for a different disease indication, and no limits are currently in place globally on the number of drugs that may be approved for the same rare disease profile [3,31]. Picavet et al. [29] studied orphan drug policies in Europe and found that the period of marketing exclusivity can be challenged if the orphan drug lacks supply, is sufficiently profitable, or if another drug is clinically superior to the existing orphan drug. These exceptions are mirrored in the US and worldwide. However, to date, the EU and the US have not withdrawn market exclusivity status for any drug, despite increasing profitability [3,4,31,40,44].
Monopolisation. Marketing exclusivity is a strong incentive for the development of orphan drugs worldwide. However, there are concerns regarding monopolisation and manufacturers' high prices for orphan drugs [3]. Patients with rare diseases often have a high willingness to pay, given the limited therapeutic alternatives and the life-threatening or chronically debilitating nature of many rare diseases. Therefore, third-party payers are generally forced to pay the manufacturer's high price, leading to payment under a monopoly-based price scheme [3,4,29,31]. Monopoly-based pricing schemes can affect patient access to orphan drugs as well as non-orphan drugs, given the pressure to contain increasing health (including pharmaceutical) expenditures [3,4,19,20,23,27,29,31,33,36,44,50,51]. However, contrasting evidence regarding the effect of marketing exclusivity on the creation of a market monopoly has been suggested. Turnover of the first orphan drug authorized for a rare disease indication is linked to an increased likelihood of 'follow-on' orphan drug research and development, as marketing authorization for the first orphan drug may indicate feasible development of future drugs for the same rare disease. Arguments rejecting claims of market monopolies commonly attribute the occurrence of a single orphan drug for a single rare disease to the small market size, which is unable to attract competition [18].
Pricing
Pricing of orphan drugs is often referred to as 'black box' pricing due to the lack of literature on orphan drug pricing mechanisms [25,30]. Pricing of orphan drugs is unique in that the costs of research and development must be recovered from a small number of patients. Given this, marketing exclusivity, and the lack of therapeutic alternatives, orphan drugs are relatively expensive, often exceeding €100,000 per patient per year [15,25,30,52] (e.g., Replagal for Fabry disease, a rare X-linked lysosomal storage disease, costs on average US$265,987.20 per patient per year [3]). Generally, there are no large variations in ex-factory (manufacturer) prices for orphan drugs between countries with different pricing and reimbursement systems [30]. Rather, the heterogeneity in price of and access to orphan drugs across countries is possibly due to national budget constraints and political pressures. Orphan drugs with multiple orphan indications, those for chronic treatments, and those with demonstrated improvements in overall quality of life or survival are associated with higher annual prices. Repurposed orphan drugs, those orally administered, and those for which an alternative treatment is available are associated with lower annual treatment prices [30]. The variability in access to and use of orphan medicines is comparable to that of other newly authorised, non-orphan drugs in the EU [24].
Free versus Fixed Pricing. Fixed pricing, adopted by many EU countries and other countries such as Japan and Canada, often involves one of two methods. The first is reference pricing, whereby a country compares the price requested by the manufacturer with the price in other countries [19]. Countries that use reference pricing tend to have comparable drug prices [3,42], but orphan drugs still command relatively high prices. The second method is prices set at the discretion of governmental and regulatory bodies: prices remain fixed because the respective agency will 'fix' the price at a level it determines optimal. This includes measures such as "cost plus" pricing, set at the cost of research and development plus a profit percentage [29]. Free pricing sets prices at the manufacturer's discretion [40] and is commonly used in the US and Germany [3,26]. Fixed pricing models tend to exhibit moderately to significantly lower acquisition prices, averaging around 40% less than free pricing models [3,10].
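To make the two fixed-pricing methods concrete, the sketch below (ours, not drawn from the reviewed literature) computes a simple external reference price and a "cost plus" price; all figures, and the choice of the median as the reference statistic, are invented assumptions for illustration.

```python
# Hypothetical illustration of the two fixed-pricing methods described above.
# All prices and parameters are invented for the example; no real drug data is used.

def reference_price(foreign_prices):
    """External reference pricing: benchmark against prices in other countries.
    Using the median is one common choice; jurisdictions differ."""
    prices = sorted(foreign_prices)
    n = len(prices)
    mid = n // 2
    return prices[mid] if n % 2 else (prices[mid - 1] + prices[mid]) / 2

def cost_plus_price(rnd_cost, expected_patients, profit_margin=0.10):
    """'Cost plus' pricing: R&D cost recovered over the expected patient base,
    plus an agreed profit percentage."""
    return (rnd_cost / expected_patients) * (1 + profit_margin)

print(reference_price([82_000, 95_000, 110_000]))     # 95000
print(round(cost_plus_price(400_000_000, 5_000), 2))  # 88000.0
```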
Reimbursement
Coverage and reimbursement of orphan drugs have been widely regarded as the most important determinants of patient access to orphan medicines [1,4,9,12,16,37,41,53,54]. Orphan drugs that are not covered by insurance systems are practically inaccessible to patients due to their high cost [33,42], and even when they are covered, patient cost-sharing (through co-payments or coinsurance) can still limit access. This theme includes 4 subthemes: health technology assessment, co-payments, post-marketing surveillance and managed entry agreements.
Health Technology Assessment. Health technology assessment (HTA) is often utilised to assess the value of medicinal products, including orphan drugs [13,34,35,55,56]. Criteria most commonly include measures of cost-effectiveness based upon indices such as quality-adjusted life years (QALYs) and incremental cost-effectiveness ratios (ICERs) [4,10,56]. However, standard HTA practices and standards of evidence that require formal cost-effectiveness analyses and randomised controlled trials are often not strictly applied to orphan drugs, given the typical lack of data regarding clinical efficacy and the burden of the disease, the lack of appropriate diagnosis and trained health professionals, and small patient populations [3,4,29,54,57,58]. Because of these evidence gaps, a higher level of uncertainty on clinical efficacy, safety, incremental cost-effectiveness and budgetary impact is accepted for orphan drugs in many countries [48].
While orphan drugs often do not meet traditional cost-effectiveness criteria, they may be reimbursed by payers in some countries because other factors are taken into account in reimbursement decisions [17]. These include therapeutic value, budget impact, impact on clinical practice, pricing and reimbursement practices globally, patient organisations, economic importance, ethical arguments and the political climate [17]. Differences in the standards of evidence required in reimbursement decisions across countries may explain differences in coverage. One study [4] found that only 69% of 43 potentially available EMA-granted orphan drugs were reimbursed in Sweden. In England and Wales, only 2 of the 43 available orphan drugs received positive recommendations from the National Institute for Health and Care Excellence. Of 28 orphan drugs reviewed in Scotland, 15 (54%) were reimbursed. Finally, 94% and 100% of all launched orphan drugs were reimbursed in Italy and France, respectively. These effects were attributed to differences in pricing and reimbursement strategies, as well as decision-making criteria, in the aforementioned countries [4]. In particular, while France and Italy apply a standard of evidence that requires proven clinical value and measures of innovation, neither country requires a formal cost-effectiveness analysis for orphan drugs [4]. These countries consider literature reviews and cohort studies when clinical and cost-effectiveness evidence from manufacturers is limited [4]. While they take into account the high price of orphan medicines, these drugs are often still reimbursed due to their relatively low budget impact resulting from small patient populations [4]. Countries that require a standard of evidence including formal clinical and cost-effectiveness analyses often have lower coverage than countries that utilise alternative standards of evidence [4].
Additional considerations by countries can also include the "rule of rescue" (the value of rescuing a life regardless of cost) and equity of access; both criteria are often considered for orphan drugs in Canada and Israel [13]. Similarly, reimbursement decision-making authorities in Turkey do not require pharmaco-economic analysis for orphan drugs. Furthermore, all orphan drugs entering the market in Turkey are reimbursed without any co-payment [38].
Proposed HTA solutions for orphan drugs include 'multi-criteria decision analysis', which considers measures of rarity, clinical effectiveness, level of research undertaken, level of uncertainty around effectiveness, manufacturing complexity, follow-up measures, disease severity, available alternatives and budget impact. Healthcare resources are then allocated on the basis of the performance of each drug against these criteria until the associated budget is consumed [55].
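As an illustration of how such a multi-criteria allocation could operate, the following sketch scores hypothetical drugs against weighted criteria and funds them in descending rank order until the budget is consumed. The criteria weights, scores (on a common "higher is better" 0-10 scale), costs and budget are all invented assumptions, not values from [55].

```python
# Minimal sketch of the multi-criteria decision analysis (MCDA) allocation
# described above. All numbers below are illustrative assumptions.

CRITERIA_WEIGHTS = {"rarity": 0.2, "clinical_effectiveness": 0.3,
                    "disease_severity": 0.3, "budget_impact": 0.2}

def mcda_score(scores):
    # scores: criterion -> value on a common 0-10 "higher is better" scale
    return sum(CRITERIA_WEIGHTS[c] * v for c, v in scores.items())

def allocate(drugs, budget):
    """drugs: list of (name, annual_cost, {criterion: score}). Returns funded names."""
    ranked = sorted(drugs, key=lambda d: mcda_score(d[2]), reverse=True)
    funded, remaining = [], budget
    for name, cost, _ in ranked:
        if cost <= remaining:
            funded.append(name)
            remaining -= cost
    return funded

drugs = [("drug_a", 4_000_000, {"rarity": 9, "clinical_effectiveness": 7,
                                "disease_severity": 8, "budget_impact": 5}),
         ("drug_b", 9_000_000, {"rarity": 6, "clinical_effectiveness": 9,
                                "disease_severity": 7, "budget_impact": 3})]
print(allocate(drugs, 10_000_000))  # ['drug_a']; drug_b no longer fits the budget
```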
Co-Payments. Access to orphan drugs may be affected by considerable patient co-payments or coinsurance [3], which are out-of-pocket costs for patients. Patient co-payments for prescription drugs can be substantial in some countries such as the US, Canada and Switzerland; for instance, monthly co-payments may be as high as $90 for prescription medicines in the US, or coinsurance of ~30% of the drug's cost. It is important to note that co-payments for these medicines are not found equally among the countries of this review. For example, in countries such as, but not limited to, the Netherlands and Poland, no co-payment is required for drugs, including orphan drugs, included in the national reimbursement list [3]. Countries often have 'catastrophic coverage' to protect against the risk of excessive out-of-pocket expenditure. In the US, healthcare plans approved by Medicare cover 95% of drug costs after patient payments of US$4350 per year have been reached [3]. Similarly, in Canada, Public Service Health Care plans increase coverage from 80% to 100% of total drug costs after co-payments of US$2814 per patient per year have been reached [3].
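The quoted catastrophic-coverage rules can be turned into a back-of-envelope calculation. The sketch below assumes a simplified US-style plan (30% coinsurance up to US$4350 of cumulative patient payments, then 95% coverage); real plan designs are more complex, so this is only an illustration of the arithmetic.

```python
# Back-of-envelope illustration of the US catastrophic-coverage rule quoted
# above (95% coverage after US$4350 of patient payments). Plan details are
# simplified assumptions.

def us_out_of_pocket(annual_drug_cost, coinsurance=0.30, threshold=4350.0):
    # Patient pays coinsurance until cumulative payments hit the threshold,
    # then 5% of remaining costs (the plan covers 95%).
    pre_cap = min(annual_drug_cost * coinsurance, threshold)
    covered_spend = pre_cap / coinsurance       # spend that produced pre_cap
    post_cap = max(annual_drug_cost - covered_spend, 0) * 0.05
    return pre_cap + post_cap

# 4350 + 0.05 * (100000 - 14500) = 8625.0
print(round(us_out_of_pocket(100_000), 2))
```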
Post-marketing surveillance. Clinical evidence requirements at the time of orphan designation and marketing approval may be relaxed if post-marketing surveillance programmes are used. These mechanisms are often utilised to enable early approval of, and access to, drugs for serious or life-threatening illnesses. These programmes are in place to ensure that if clinical efficacy requirements are not met, the drug will no longer be provided [48,57,59-62]. For example, as of December 2012, sorafenib, an orphan drug for the treatment of renal cell carcinoma, was subject to post-marketing surveillance in Italy to confirm the drug's clinical efficacy in patients, following relaxed clinical evidence requirements at the time of approval [60].
Managed Entry Agreements. Managed entry agreements (MEAs) are being increasingly utilised worldwide as 'innovative reimbursement approaches' to fund high-cost (orphan) drugs. These require the manufacturer to enter into an agreement with the payer involving negotiation of performance targets based on expected health improvements. These schemes are utilised as an alternative approach to provide coverage, with restrictions, for drugs that may not otherwise be covered [60]. MEAs often take two formats: performance-based schemes and financial-based arrangements [60].
Performance-based schemes aim to provide assurance of cost-effectiveness and link performance to the reimbursement of (orphan) drugs. These schemes can provide reimbursement conditional on post-marketing clinical evidence and surveillance [60]. If performance targets are not reached, drug prices are reduced to maintain satisfactory cost-effectiveness relationships [12,13,23,26], and patient access is enabled under strict criteria. Financial-based schemes exist to address healthcare payers' concerns regarding the cost and budget impact of orphan drugs. Financial-based schemes take a variety of forms, including 'cost capping' (beyond a cost threshold the drug is provided at a discount or at zero cost), utilisation capping (any number of doses and/or cycles beyond an agreed amount results in financial consequences, e.g., price-volume agreements), and free and/or discounted initiation (treatment is free up to a specified number of doses). One study found that of 7 EU countries, Italy had the highest number of MEAs for orphan drugs, followed by the Netherlands, England and Wales, Sweden, and Belgium; no data were available for France and Germany [60]. The reasons for these differences are unclear, but how uncertainty and value are perceived and defined are possible explanations for inter-country differences in the use of MEAs [60]. Orphan drugs including sorafenib (for the treatment of renal cell carcinoma), nilotinib (for the treatment of gastrointestinal stromal tumours) and temsirolimus (for the treatment of renal cell carcinoma) were subject to both performance-based and financial-based MEAs by the Italian Medicines Agency as of December 2012 [4,60]. The objective of these schemes was to 'verify and control the appropriateness of the prescription and to circumscribe the level of uncertainty around the drug' when clinical efficacy was in question [60].
Discussion
This article reports a comprehensive literature review of legislations, regulations, and policies used in 35 countries to enable the availability of and access to orphan drugs in the last two decades. Existing reviews in this field predominantly either summarized rare disease and orphan drug regulations and policies in a single country or continent, or discussed the effect of a single or few regulations/policies influencing access to these important medicines. The breadth and depth of our review provides important understanding and appraisal of the topic. We summarized the wide range of legislations, regulations, and policies that influence access to orphan drugs on an international scale and examined similarities and differences in legislations/regulations/policies between countries. We identified 6 major types of regulations/policies each with subcategories.
We included 21 countries from the EU. Orphan drug designation, marketing authorization and 10-year marketing exclusivity are common to EU countries. We also noted key differences between the 35 countries included in the review: differences in pricing and reimbursement policies and budgetary considerations across countries may result in inequities in access to orphan drugs [3,4,19,20,22,27].
Overall, the majority of countries (27 of 35) included in this review have orphan drug legislation in place (either independent legislation or legislation as part of the EU), but only 18 countries had established a national plan for rare diseases and orphan drugs. This is likely because a country that has established national orphan drug legislation may not require a national plan. It is also possible that national plans are less prevalent because they are at the discretion of the individual country, while national legislation for orphan drugs in EU countries is governed by EU Regulation (CE) N°141/2000 [12]. Countries without orphan drug legislation are China, India, Canada, Israel, Macedonia, Serbia, Switzerland and Turkey. Notably, China and India, the two largest countries, each with a population in excess of one billion individuals, have neither orphan drug legislation nor a national rare disease plan in place.
The EU, US, Japan, Taiwan and Australia all have independent pathways for the marketing authorization of orphan drugs. In these countries, orphan drugs tend to meet criteria for accelerated procedures based upon unmet need or disease severity. Accelerated procedures can shorten the marketing authorization timeframe by almost half, but countries vary in their implementation of accelerated marketing authorization procedures [3,10,29]. Other countries have identical processes for the marketing authorization of orphan and non-orphan drugs.
Financial and non-financial incentives are commonly used, with only 9 of 35 countries found to have no financial or non-financial incentive of any kind for orphan medicines. A common challenge for many countries is the inability to adequately implement proposed incentives for orphan drugs due to budgetary constraints [20,22,27]. Pre-licensing access, in the form of compassionate or off-label access to orphan drugs, is common (17 of 35 countries) and allows the importation of unauthorized medicines on a named-patient or patient-group basis. However, where it exists, pre-licensing access is rarely reimbursed by public health insurance (with exceptions such as Turkey) [1,3,4,20,27,33,34,38,49,61]; where it is reimbursed, it promotes both the availability of, and access to, these medicines regardless of positive marketing authorization or inclusion on a national reimbursement list [4,38]. Thus, there is not always a clear distinction between 'availability' and 'access' in the field of orphan drugs. Marketing exclusivity for orphan medicines is widely used (26 of 35 countries) and ranges from 5 to 10 years on average. Marketing exclusivity is attractive to pharmaceutical companies and policy-makers worldwide in fostering orphan drug research and development. However, the efficacy of marketing exclusivity in promoting patient access to orphan medicines is not apparent [3,42], and the formation of 'mini-monopolies' is a major concern [3,4,18,27,29,31,36,44].
Access to orphan drugs continues to be limited by high prices. Fixed pricing schemes are common across the countries in the review (16 of 35 countries). However, fixed pricing models are not without limitations; price fluctuations may be explained by international differences in purchasing power parity [19,20,22,26,27].
Reimbursement of orphan drugs is probably the most important factor determining patient access to orphan drugs, given their high costs [3,12,20,50]. HTA, particularly cost-effectiveness analysis, has an important impact on reimbursement decisions for all drugs, including orphan drugs [3,4,13,29,38,62]. Twenty-nine countries consider cost-effectiveness in their assessment of orphan drugs (e.g., the United Kingdom), but many also consider other factors such as unmet need, human value and solidarity (e.g., Sweden); thus, countries often accept a "more limited evidence base" for orphan drugs compared to non-orphan drugs. Interestingly, Hungary has established a separate HTA process for orphan drugs. There is worldwide debate about extending HTA beyond the typical therapeutic benefits, risks and costs to include a more systematic consideration of ethical/equity factors such as the 'rule of rescue' [13,29,38,48].
Patient co-payments in some countries, such as the US and Canada, pose significant barriers to patient access, as do the high acquisition costs of orphan drugs [3,4,10]. Thus, regardless of orphan drug availability, patient access can often be substantially restricted by out-of-pocket costs [1,4,9,12,16,37,41,42,53,54]. Of the 35 countries in this review, 33 provide some reimbursement for orphan drugs; reimbursement is generally dependent upon whether orphan drugs are approved in the country or included on the national reimbursement list. In China and India, which have no orphan drug legislation or associated incentives, costs for orphan medicines are largely self-funded by patients out-of-pocket. Seven of the 35 countries (Canada, Germany, Sweden, US, Switzerland, Denmark and Greece) have dedicated co-payment protection programs that provide financial support once an annual co-payment amount is exceeded, protecting against the risk of excessive out-of-pocket expenditure. Interestingly, countries including, but not limited to, Australia, Italy and the Netherlands have special programmes reimbursing orphan drugs (namely, Australia's Life-saving drug program, Italy's 5% Agenzia Italiana del Farmaco (AIFA) fund for reimbursing orphan drugs not yet marketed, and the Dutch Policy Rule for Expensive Hospital and Orphan Drugs, which supports hospitals financially for prescribing orphan drugs) that are separate from their national drug coverage programs for non-orphan drugs [3,4,10,15,18,37].
Managed entry agreements are an innovative and increasingly used approach to enable access to high-cost orphan and non-orphan drugs in situations where there is a lack of sufficient evidence for coverage of promising technologies that may benefit patients [13,17,26,48,58-60,63].
While many developed countries such as the US, Japan, Australia, and EU countries have established a range of legislations/regulations/policies for orphan drugs, many Asian countries lag behind. A few Asian countries, such as Japan, Singapore and Taiwan, have made progress in this area. In particular, Taiwan reimburses 70% to 100% of the cost of orphan drugs for low-income families under the Rare Disease Prevention and Medicine Law, through the Department of Health/Bureau of National Health Insurance [1,35]. However, China and India continue to lack national legislation for orphan medicines and rare diseases [1,12,23,28,29,31,35,40]. Given their large populations, the lack of measures to support access to orphan drugs has substantial negative impacts on their patient populations with rare diseases [23,28].
Our study has some limitations. First, publication and outcome reporting bias may have influenced which findings were published, depending on their nature or direction [8]. We were also limited to the English-language literature; publications in other languages were not included. Second, study selection included only articles published in peer-reviewed journals; grey literature was excluded. This was to ensure an academic level of accuracy through the peer-review process. Finally, while it was beyond the scope of this review to examine the impacts of regulations and policies for orphan drugs, we noted that published evidence was limited and often relied on superficial measures, such as the number of orphan designations, the number of orphan drugs granted marketing authorization, and the number of orphan drugs on reimbursement lists. Nevertheless, with consideration of these limitations, the review provides a better understanding of the types of legislations, regulations and policies influencing patient access to orphan drugs. Future research should identify legislations/regulations/policies for orphan drugs in Latin American and African countries. Research is also needed to compare prices of (a sample of) orphan drugs, and the numbers of designated versus marketing-approved orphan drugs, in light of the differences in legislations/regulations/policies across countries.
Conclusions
Overall, many countries have adopted a combination of regulations and policies for orphan drugs in the last two decades. While these may enable the availability of and access to orphan drugs, there are critical differences between countries in the range and types of regulations and policies implemented. Marketing exclusivity remains critical to incentivising the research and development of orphan drugs, but it poses risks, most notably monopolisation and high prices for orphan drugs, which may limit patient access to these needed medicines. Importantly, China and India, each with a population in excess of one billion individuals, lack national legislation for orphan medicines and rare diseases, which could have substantial negative impacts on their patient populations with rare diseases.
"year": 2015,
"sha1": "b68a36cde1c3d6d44e64f3b112d70a8768b64328",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0140002&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "caf528a62869258703e257fac3d245e4b7f1abbb",
"s2fieldsofstudy": [
"Law",
"Medicine"
],
"extfieldsofstudy": [
"Business",
"Medicine"
]
} |
Analytical Approach to Study Sensing Properties of Graphene Based Gas Sensor
Over the past years, carbon-based materials, and especially graphene, have been among the most popular materials for sensing applications. Graphene possesses outstanding electrical and physical properties that make it favorable for use as a transducer in gas sensor structures. Graphene experiences remarkable changes in its physical and electrical properties when exposed to various gas molecules. Therefore, in this study, a set of new analytical models is developed to investigate the energy band structure, the density of states (DOS), the velocity of charged carriers and the I-V characteristics of graphene after molecular (CO, NO2, H2O) adsorption. The results show that gas adsorption modulates the energy band structure of graphene, leading to a variation of the energy bandgap and thus of the DOS. Consequently, graphene converts to a semiconducting material, which affects its conductivity and, together with the DOS variation, modulates the velocity and I-V characteristics of the graphene. These parameters are important factors that can be implemented as sensing parameters and can be used to analyze and develop new graphene-based sensors.
Introduction
Electronics and sensors provide valuable advantages for humanity in vital areas such as life sciences, health care, and security. Electrical sensors are electronic devices capable of detecting physical quantities in their environment and converting them into measurable electrical signals that can be displayed on a monitor [1-3]. Sensors can be designed, fabricated, and employed for various applications, subject to the physical quantity to be measured [4-8]. Depending on the type, sensors can measure specific changes in the properties of different materials such as liquids and gases. One of the most important applications of sensors is the detection of specific gases [9-11]. There are a variety of harmful and toxic gases threatening human life in many working and living environments.

Adsorption of gas molecules on the graphene surface leads to the transition of graphene from a metallic to a semiconducting energy state and modifies the electrical conductance of the graphene channel in a graphene-based FET. The changes in the energy band structure, bandgap, and the number of charged carriers in the transistor channel modify the density of states, carrier concentration, and thus carrier velocity, thereby modulating the current-voltage properties of the sensor.
The presented study is somewhat analogous to our previous work [44] but differs mainly in the sensor substrate structure and in introducing the on-site energy parameter, E′0, for the adsorbed gas molecules. In terms of structure, graphene is used as the substrate, providing a higher surface-to-volume ratio, which is more favorable for sensing applications. Graphene has a 2D structure (a GNR is 1D); thus, the number of molecules that can be adsorbed on graphene is higher, meaning graphene provides a larger detection range. Using graphene instead of a GNR also offers benefits in the fabrication process in terms of cost and complexity.
On the other hand, in the previous model, the on-site energies of the adsorbed gas molecules were not considered, and the energies of the carbon atoms of the GNR and of the adsorbed molecules were assumed to be the same. In the current models, we differentiate between the different types of gas molecules and the carbon atoms of the graphene.
Here, three representative gases, CO, NO2, and H2O, are chosen for sensing purposes. These are well-known gases that are extensively used in industrial and medical applications. We use three different gases to evaluate the proposed models and to show that the models can predict the response to different types of gas molecules. In addition, we use perfect graphene for the mathematical formulation and modeling of the gas sensor. For future studies, the current modeling technique can be modified to apply to defected graphene as well, as defected graphene can provide higher sensitivity and faster response and recovery times [45,46].
Materials and Methods
In the graphene energy band structure, the valence and conduction bands overlap at the Dirac points, so graphene has no energy gap [47]. The electrical and physical properties of graphene depend mostly on the hopping energies and on-site energies of its carbon atoms. Moreover, in the graphene energy band structure, the highest valence band and the lowest conduction band are mainly responsible for the conductivity and electrical characteristics [48]. Thus, in our model, only these two bands are considered in the calculations. Based on this concept, we start with the modeling of the graphene energy band structure around the Fermi energy level.
Energy Band Structure Modeling
To calculate the low-energy bands of graphene, the tight-binding technique is adopted. In our tight-binding model, graphene as a 2D material is treated under the assumption that there is only one orbital per atom, leading to the matrix form of the Schrödinger equation [48]:

E φn = Σm Hnm φm,

where Hnm is the Hamiltonian operator matrix, φn (φm) is the column vector representing the wave function in unit-cell n (m), and E is the energy [49]. The schematic of the graphene structure, with a pair of carbon atoms per unit-cell, is shown in Figure 2. To calculate the energy dispersion relation for graphene, we need to solve the above Schrödinger equation.
Thus, we first construct the Hamiltonian matrices for the graphene unit cells. Each graphene unit-cell (the n-th unit cell) has four nearest-neighbor unit cells, and each unit cell consists of two carbon atoms. Therefore, based on the single-orbital-per-carbon-atom assumption, a (2 × 2) Hamiltonian matrix describing the valence and conduction bands of the energy band structure is obtained. In the Hamiltonian matrices, the off-diagonal elements coupling neighboring carbon atoms of the graphene are set to t, while the rest are zero. Defining

h(k) = t (1 + 2 e^(i kx a) cos(ky b)),

where a = 3a0/2 and b = √3 a0/2, a1 and a2 are the lattice vectors, kx and ky are the wave vector components, t represents the hopping energy parameter between graphene atoms, and E0 is the on-site energy of a carbon atom, the Eigen equation for the Eigenvalue E can finally be obtained as [48]:

E(k) = E0 ± |h(k)| = E0 ± t √(1 + 4 cos(ky b) cos(kx a) + 4 cos²(ky b)).
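As a check on the reconstructed dispersion, the following minimal Python sketch (ours, not part of the original paper) evaluates E(k) with the parameters stated above; the sampled k-path near a Dirac point is an arbitrary illustrative choice.

```python
# Numerical sketch of the pristine-graphene tight-binding dispersion
# E(k) = E0 +/- t*sqrt(1 + 4*cos(ky*b)**2 + 4*cos(ky*b)*cos(kx*a)), with
# t = 2.7 eV, E0 = 0, a = 3*a0/2, b = sqrt(3)*a0/2 as defined in the text.

import numpy as np

t, E0, a0 = 2.7, 0.0, 1.42e-10          # eV, eV, m
a, b = 1.5 * a0, (np.sqrt(3) / 2) * a0  # lattice projections used in h(k)

def bands(kx, ky):
    """Return (valence, conduction) energies at wave vector (kx, ky)."""
    c = np.cos(ky * b)
    mag = t * np.sqrt(1.0 + 4.0 * c * c + 4.0 * c * np.cos(kx * a))
    return E0 - mag, E0 + mag

# Sample along kx with ky fixed at a Dirac line (ky*b = 2*pi/3 gives
# cos(ky*b) = -1/2, so the gap closes at kx = 0):
ky_dirac = (2 * np.pi / 3) / b
for kx in np.linspace(0, np.pi / a, 5):
    Ev, Ec = bands(kx, ky_dirac)
    print(f"kx*a = {kx * a:5.2f}  Ev = {Ev:7.3f} eV  Ec = {Ec:7.3f} eV")
```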
To investigate the sensing properties of the graphene, we assumed the adsorption of NO2, CO, and H2O gases. Each gas molecule prefers different configurations on the graphene plane. In our study, the orientation and molecular distance from the graphene surface are considered according to work presented in [45]. On the other hand, each gas molecule has specific hopping energy when adsorbed on the graphene surface that is dependent on the type and configuration of the molecule on the substrate. After calculating the corresponding hopping energy parameter for each gas, we can apply the molecular adsorption effects on the electrical and physical properties of the graphene. The Similar to Equation (2), according to the tight-binding model based on the nearest neighbor approximation, the Hamiltonian matrixes for the unit cell 'n' and its four nearest neighbors considering the adsorbed molecule can be described as: where E 0 and E 0 are the on-site energies of the carbon atom and the adsorbed molecule, and t the hopping energy between the adsorbate and a carbon atom. By summation of individual matrixes over the n th unit cell and its four neighbors, and then calculating the determinants of the h(k), the energy dispersion model for graphene, considering the molecular adsorption effects, is achieved as: where the on-site energy of the adsorbed molecule is represented by E 0 and t' is the hopping integral parameter that shows the hopping energy between the carbon atom and the adsorbate. The value of the E 0 is set to be zero as the origin of the energy [42]. The value of the t is fixed to be t = 2.7 eV, but t' will have different values depending on the nature of the target gas molecule. In the case of the graphene without gas, the value of the t' is zero; by tuning it to non-zero quantities, the gas adsorption effects can be applied.
To investigate the sensing properties of graphene, we assume the adsorption of NO2, CO, and H2O gases. Each gas molecule prefers a different configuration on the graphene plane; in our study, the orientation and the molecular distance from the graphene surface are taken from the work presented in [45]. Each gas molecule also has a specific hopping energy when adsorbed on the graphene surface, which depends on the type and configuration of the molecule on the substrate. After calculating the corresponding hopping energy parameter for each gas, we can apply the molecular adsorption effects to the electrical and physical properties of graphene. The hopping energy parameter of molecules adsorbed on a substrate can be obtained as follows [50]:

tαβ = tR (dR/dαβ)²,

where tαβ is the hopping energy parameter between the substrate and the adsorbate, tR is the hopping parameter between the atoms of the substrate (carbon atoms for graphene), dR = 1.42 Å is the carbon-carbon bond length in graphene, and dαβ represents the distance between the adsorbate and the substrate. The resulting hopping energies for the adsorbed gases are presented in Table 1. The adsorption effects of these three gases on the energy bandgap of graphene were investigated. The band structure analysis indicates that molecular adsorption can modify the band structure of graphene, as illustrated in Figure 4. It can be seen that after the adsorption of CO, NO2, and H2O molecules, the energy gap of graphene shows remarkable increases, with CO and H2O having the lowest and highest impact on the bandgap, respectively.
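A small helper, shown below, evaluates this scaling for candidate adsorption distances. Note that both the inverse-square (Harrison-type) form used here and the distances in the example are our assumptions standing in for the gas-specific geometries taken from [45]; they are not the values of Table 1.

```python
# Sketch of the distance scaling for adsorbate hopping energies. The
# inverse-square form and the distances d_ab below are assumptions for
# illustration; replace d_ab with the gas-specific geometries from [45].

t_R = 2.7        # eV, carbon-carbon hopping in graphene
d_R = 1.42e-10   # m, carbon-carbon bond length

def hopping(d_ab, t_r=t_R, d_r=d_R):
    """t_ab = t_R * (d_R / d_ab)**2 (assumed Harrison-type scaling)."""
    return t_r * (d_r / d_ab) ** 2

for gas, d_ab in {"CO": 3.0e-10, "NO2": 2.7e-10, "H2O": 3.5e-10}.items():
    print(f"{gas:4s} d = {d_ab * 1e10:.2f} A  ->  t' = {hopping(d_ab):.3f} eV")
```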
Carrier Velocity Model
When gas molecules are adsorbed on the graphene surface, they modulate the energy bandgap and also the carrier concentration. The bandgap variation leads to a variation of the density of states. According to the literature, the velocity of carriers is a function of the density of states and the carrier concentration. We therefore model the gas adsorption effects on the density of states and carrier concentration, and study their effect on the carrier velocity in the form of I-V characteristics.

It has been reported in previous studies that the average velocity of the carriers is determined by the density of states and the carrier density at any moment, as given by [25]:

v = (1/n) ∫ v(E) D(E) f(E) dE, (11)

where f(E) = 1/(1 + exp((E − Ef)/kBT)) is the Fermi distribution function, D(E) is the density of states and n is the carrier concentration. The magnitude of the velocity is given as

v(E) = √(2E/m*),

where m* is the electron effective mass. To calculate the velocity, we first need to find D(E) and the carrier concentration. The DOS of a material describes the number of states per energy interval at each energy level that can be occupied by electrons; it is calculated from the band structure over the graphene surface area A, where the wave propagation vector kx in the x-direction is obtained by inverting the dispersion relation E(k). The gas adsorption effect on the graphene density of states as a function of the wave vector is illustrated in Figure 5. It can be seen that the DOS changes after molecular adsorption. The variation of the DOS is not the same for all gases, because of the different hopping energy values for each gas. In addition, the DOS increases after gas adsorption, caused by the bandgap opening in the graphene energy band structure induced by the molecular adsorption. Therefore, gas adsorption changes the probability of energy states being occupied by electrons, which affects the electrical properties of the sensor. Based on Figure 5, the DOS experiences only small changes upon gas adsorption, but a small change in the DOS can lead to remarkable effects in the electrical properties of the sensor.
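Numerically, D(E) can be approximated by sampling the dispersion over the Brillouin zone and histogramming the eigenvalues, as in the sketch below; the mesh size and binning are numerical choices of ours, not model parameters, and the result is in arbitrary units.

```python
# Numerical sketch of the density of states: sample E(k) for the pristine
# dispersion over a k-mesh and histogram the eigenvalues of both bands.

import numpy as np

t, a0 = 2.7, 1.42e-10
a, b = 1.5 * a0, (np.sqrt(3) / 2) * a0

kx = np.linspace(-np.pi / a, np.pi / a, 400)
ky = np.linspace(-np.pi / b, np.pi / b, 400)
KX, KY = np.meshgrid(kx, ky)
C = np.cos(KY * b)
E = t * np.sqrt(1 + 4 * C**2 + 4 * C * np.cos(KX * a))
energies = np.concatenate([E.ravel(), -E.ravel()])  # valence + conduction

hist, edges = np.histogram(energies, bins=200, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print("DOS (arb. units) near E = 0:",
      float(hist[np.argmin(np.abs(centers))]))  # small, as expected for graphene
```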
The next step is to calculate the carrier concentration. The concentration of carriers is a function of the density of states and the Fermi function [42]. Based on Equations (13) and (11), and with the normalized Fermi function η = (Ef − Eg)/(kB T) and x = (E − Eg)/(kB T), the final equation for the carrier concentration of graphene, considering the gas-molecule adsorption effect, is obtained as Equation (15). Based on Equation (15), the carrier concentration of graphene is presented in Figure 6. The carrier concentration is calculated to be around 10^14 for our graphene, which is consistent with the literature [51,52].
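A minimal numerical sketch of this substitution is given below: in the variables x and η the occupancy becomes 1/(1 + exp(x − η)), and the concentration reduces to a dimensionless Fermi-type integral. The order of the integral depends on the DOS, which the extraction above does not preserve, so the order j = 1/2 used here is only an example.

```python
import numpy as np

# Dimensionless Fermi-type integral F_j(eta) = Int_0^inf x^j / (1 + exp(x - eta)) dx,
# the form the carrier concentration reduces to after substituting
# x = (E - Eg)/(kB*T) and eta = (Ef - Eg)/(kB*T).  The order j depends on the
# DOS; j = 1/2 is shown purely as an example.
trapz = getattr(np, "trapezoid", np.trapz)   # NumPy 1.x / 2.x compatibility

def fermi_integral(j, eta, x_max=60.0, num=200001):
    x = np.linspace(0.0, x_max, num)
    return trapz(x ** j / (1.0 + np.exp(x - eta)), x)

for eta in (-2.0, 0.0, 2.0):
    print(f"eta = {eta:+.1f}  F_1/2 = {fermi_integral(0.5, eta):.4f}")
```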
Now, according to Equation (11), to model the carrier velocity, the numerator is calculated as Equation (16). Having E = x·kB T + Eg, the normalized Fermi function η, and x, Equation (16) can be rewritten as Equation (17), where kB is the Boltzmann constant and T is the temperature. Finally, the carrier velocity for graphene, considering the molecular adsorption effects, is obtained as Equation (18). The parameters t' and E0' exert the gas-adsorption effects on the velocity of the carriers. Based on Equation (18), the velocity of the carriers for bare graphene is depicted in Figure 7.
The investigation of the carrier-velocity variation after gas adsorption is performed in the form of a current-voltage analysis. Hence, we need to obtain the I-V characteristics based on the carrier velocity. Based on the relation between the I-V and the velocity, the current can be written as a function of the carrier velocity, where A represents the surface area through which the carriers travel, n represents the carrier concentration, and q is the electrical charge. Finally, based on Equation (20), the electrical properties of the gas sensor can be investigated. The I-V characteristics of the gas sensor under exposure to CO, NO2, and H2O are plotted in Figure 8. According to Figure 8, the current decreases after molecular adsorption, and the sensor exhibits different behavior for each target gas molecule. Upon adsorption of gas molecules on the graphene, two phenomena occur that alter the current of the sensor. First, the atomic forces between graphene and the adsorbed molecules, induced by the covalent bonds, change the graphene structure; this alters the band structure and energy bandgap of graphene and therefore modifies its electrical conductivity and density of states and, subsequently, the current-voltage properties. Second, the transfer of charge between substrate and adsorbate changes the carrier concentration on the graphene surface. The variation of the DOS and carrier concentration affects the velocity of the carriers, which changes the I-V of the sensor. After gas adsorption, the graphene energy bandgap increases; thus the conductivity decreases and the current is reduced. Moreover, the response of the gas sensor to the adsorbed molecules is calculated and presented in Figure 9. The response of the sensor is highest for H2O and lowest for CO molecules. This result is consistent with the band-structure and I-V analysis, as we saw a larger bandgap increment and current decrement after H2O adsorption than for the other molecules.
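The sketch below mirrors this relation, I = q·n·v·A, and adds a common relative-response metric |I_gas − I_0|/I_0. All numerical values are placeholders, and the response definition is an assumption, since the exact formula behind Figure 9 is not preserved in this extraction.

```python
# Current as a function of carrier velocity, I = q * n * v * A, plus a common
# relative response metric.  All numbers are placeholders, and the response
# definition is an assumption rather than the paper's exact formula.
q = 1.602e-19        # electrical charge (C)
A = 1.0e-14          # surface area the carriers travel through (m^2), placeholder

def current(n, v):
    """I = q * n * v * A, with n a volumetric carrier concentration (1/m^3)."""
    return q * n * v * A

I0 = current(n=1.0e24, v=1.0e5)                  # bare graphene, placeholder values
adsorbed = {"CO": (0.95e24, 0.98e5),
            "NO2": (0.85e24, 0.93e5),
            "H2O": (0.70e24, 0.88e5)}            # placeholder (n, v) after adsorption
for gas, (n, v) in adsorbed.items():
    I = current(n, v)
    print(f"{gas:3s}: I = {I:.3e} A, response = {abs(I - I0) / I0:.1%}")
```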
For future investigations and further evaluation of the models, a comparison with the results of similar studies, such as first-principles calculations or experimental data, can be conducted; we could not find similar work at the present time. However, according to the obtained results and the trends of the plots, the proposed models show reasonable performance. In addition, the ranges of values of the I-V, velocity, band-structure, and other plots indicate reasonable quantities, and thus the proposed models appear to be valid.
Conclusions
In this study, a series of analytical models for the detection of gas molecules using a graphene-FET-based platform is developed and applied to CO, NO2, and H2O detection. Based on the results, it was shown that the energy dispersion relation of graphene varies when exposed to the gas molecules. The DOS and carrier-concentration parameters were calculated, and the variation of the DOS after molecular adsorption of the gases was monitored. Based on the DOS and carrier-concentration models, the carrier-velocity model was developed and analyzed in the form of current-voltage properties. The I-V analysis indicates that the current of the sensor decreases after the adsorption of the gas molecules. After gas adsorption, the graphene energy bandgap changes, which affects the conductivity of the graphene and increases the channel resistance. On the other hand, after molecular adsorption, the charge transfer between molecules modifies the number of carriers and hence the velocity of the charge carriers on the graphene plane, which modulates the I-V characteristics. The largest and smallest current reductions occur after H2O and CO adsorption, respectively. This implies that the response of the sensor is not the same for different gases. Finally, the main outcomes of this study are: (i) the analytical formulation, which allows convenient use of the results in simulators, and (ii) the demonstrated selectivity of the sensor with respect to different gases. | 2020-03-12T10:30:54.733Z | 2020-03-01T00:00:00.000 | {
"year": 2020,
"sha1": "d986c942c65d17dfc0ab31d67ba65625ba544284",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/20/5/1506/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c019155433b18c2cc786c55de86e9d72751a717d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine",
"Computer Science"
]
} |
56641683 | pes2o/s2orc | v3-fos-license | Rethinking Data Sharing and Human Participant Protection in Social Science Research : Applications from the Qualitative Realm
While data sharing is becoming increasingly common in quantitative social inquiry, qualitative data are rarely shared. One factor inhibiting data sharing is a concern about human participant protections and privacy. Protecting the confidentiality and safety of research participants is a concern for both quantitative and qualitative researchers, but it raises specific concerns within the epistemic context of qualitative research. Thus, the applicability of emerging protection models from the quantitative realm must be carefully evaluated for application to the qualitative realm. At the same time, qualitative scholars already employ a variety of strategies for human-participant protection implicitly or informally during the research process. In this practice paper, we assess available strategies for protecting human participants and how they can be deployed. We describe a spectrum of possible data management options, such as de-identification and applying access controls, including some already employed by the Qualitative Data Repository (QDR) in tandem with its pilot depositors. Throughout the discussion, we consider the tension between modifying data or restricting access to them, and retaining their analytic value. We argue that developing explicit guidelines for sharing qualitative data generated through interaction with humans will allow scholars to address privacy concerns and increase the secondary use of their data.
Introduction
While data sharing is becoming increasingly common in quantitative social inquiry, qualitative data are still rarely shared. One of the major factors inhibiting data sharing is a concern about human participant protections and privacy. Protecting the confidentiality and safety of research participants is a consideration for both quantitative and qualitative researchers, but it raises specific worries within the epistemic context of qualitative research. Thus, the applicability of emerging protection models from the quantitative realm must be carefully evaluated for elements appropriate for the qualitative realm. At the same time, qualitative scholars already employ a variety of strategies for human-participant protection implicitly or informally during the research process, so part of the challenge is lessened if data repositories help researchers draw on their familiarity and comfort with these and enhance them in the process.
In this practice paper, we draw on our experiences working at the Qualitative Data Repository (QDR) to assess available approaches for protecting human participants and how they can be deployed in the qualitative realm in particular. We describe a spectrum of possible data management options that can be used individually or in combination, such as de-identification and applying access controls. We also review some use-case applications by the repository in tandem with its pilot depositors. Throughout the discussion, we consider the tension between modifying data or restricting access to them, and retaining their analytic value. We argue that domain data professionals, cognizant of the needs of social scientific scholarly communities, can develop explicit but flexible guidelines for sharing qualitative data generated through interaction with humans that allow scholars to address privacy concerns throughout their work process. This, in turn, will make their collected data shareable and increase their secondary use for analytical or pedagogical purposes.
Impossible to Share?
All those records had now been burned: Even before the controversy began, Goffman felt as though their ritual incineration was the only way she could protect her friend-informers from police scrutiny after her book was published (Lewis-Kraus, 2016).
Until recently, sociologist Alice Goffman's approach to protecting her research participants was the norm in qualitative social science, even with data far less sensitive than her ethnographic study of crime and policing in low-income communities in Philadelphia (Goffman, 2014). A lack of awareness about the need for and benefits of data sharing, limited practicable strategies for protecting participants, and structurally conservative institutional review boards (IRBs) all combined to dissuade researchers from even attempting to share their data. Even more fundamentally, not thinking of the qualitative materials they collect as "data" with inherent value beyond their own study, many social scientists have remained oblivious to the developing technologies, practices and scientific infrastructure that make sharing that is both legal and ethical newly possible.
The tide is turning, however: open science and research transparency are becoming established as disciplinary norms, and funding agencies as well as journals are developing mandates for making articles, data, and software available to the scientific community and the public at large. Simultaneously, textual, audio, video and other types of qualitative data are becoming more immediately obtainable, and those collected in digital formats are increasingly easy to distribute. Each of these factors leads to an increased interest in managing and sharing qualitative data, but also raises concerns about how to openly share those involving human subjects both ethically and safely. The tensions between the broad vision of open access and the long-standing demand to protect the people whose information researchers use are important, but should not be declared irreconcilable. The most fruitful way forward is for institutions that fund data collection, that store data for sharing, and that publish academic work making knowledge claims on the basis of these data - in collaboration with the researchers themselves - to develop policies and procedures that are consistent with relevant legal and ethical obligations ensuring the wellbeing and privacy of research participants.
The Qualitative Data Repository (QDR, www.qdr.org), hosted by Syracuse University, went online in 2014. It was established as a domain repository with the explicit mission to provide a home for qualitative and multi-method primary data, which might otherwise remain invisible in the social science research community. QDR serves this mission most directly by offering a user-friendly platform that enables researchers from around the world and across all social science disciplines to publish their data projects in a reliable digital venue and thus make them durably discoverable (via indexing and use of digital object identifiers or DOIs), citable (by suggesting an accurate and complete bibliographical record), intelligible to others (by providing narrative documentation and structured metadata) and, ideally, linked to the original researcher's and others' publications that use them (by using CrossRef/DataCite article-data linking).
More broadly, QDR's staff - which includes the authors of this paper - has learned during these early years that its key role is to cultivate the repository's intended user community. QDR has consequently been at the forefront of efforts to promote and support the sharing of qualitative social science data, not simply by providing technical infrastructure, but by working closely with individual data depositors to curate their qualitative data for preservation and reuse and by creating useful guidance materials that address the various stages of a research project (see https://qdr.syr.edu/guidance). When provided education in the basics of data management, social scientists become well-positioned to undertake their work with the goal of sharing in mind from the planning stages. In the course of the repository operations, we have found that the biggest impact we can have is to encourage qualitative researchers to start seeing what they do as "data collection" and its artifacts as stand-alone scholarly products that are publishable and deserve intellectual recognition.
Qualitative data sharing works best when researchers are able to capitalize on their closeness to the human sources of their rich materials and on existing feelings of responsibility for and skills in protecting those sources. By giving researchers both credit for and control over their data work, we believe repositories can partner with them to advance the cause of safe and ethical data sharing. Drawing specific lessons from an initial set of pilot studies, each with its own challenges, we at QDR developed strategies to coach researchers about the options at their disposal to share even sensitive qualitative data. These strategies fit within the research data lifecycle, from planning through data publication.
Planning for Data Collection
The main insight throughout QDR's pilot projects has been the advantage of early and thorough data management planning oriented towards the later sharing of data (Karcher et al., 2016). However, many standard approaches borrowed from quantitative research are difficult to apply directly to qualitative research. For example, as a general rule of thumb, QDR recommends that scholars do not collect identifying information where it is not substantively needed for the purpose of the study. However, the nature of qualitative interviews often produces a paper (or e-mail) trail to schedule the interview where direct identifiers (names, phone numbers, addresses) abound. Complicating the situation even further, researchers often build lasting relationships spanning multiple interviews with their participants, making such a strategy inapplicable. The objection to data sharing most commonly raised by qualitative researchers themselves combines this integral role they as individuals play in the research process and the very richness of the contextual material typically gathered (Fink, 2000).
We propose to reconsider this "closeness" of the investigators, as we find that it makes them best positioned to undertake the necessary modifications to received strategies that can enable reasonable data sharing without introducing harm to the participants the researchers know so well. Instead of making the sharing and archiving of qualitative data particularly challenging, the embeddedness of researchers in their research site should be thought of as a resource, a deep foundation of knowledge of local circumstances and expectations. Thus a scholar would be able to decide in advance what might be the right secure location to store any contact information necessary for his or her ongoing interactions in the field: one example could be a notebook, separate from the digital transcriptions of interviews, that they keep with them at all times because of a fear that their rented apartment in the field can be accessed without their knowledge; another, a file encrypted on a memory stick, locked in a cabinet once back at their home institution, where negligence is a greater concern than unauthorized searches. Additionally, scholars who have decided on such basic data management rules in advance can use them to easily train any transcribers or other research assistants they work with in the chosen privacy protocols. Even more importantly, they can present a cogent argument during their IRB application process (i.e., before the rules are put in action) why a given option that does not involve destroying collected materials is the right choice for a given research project. Crucially, all of these downstream advantages can only be realized if the idea of sharing the data is pursued from the earliest project planning stages.
Another key aspect of qualitative data gathering concerns the informed consent procedure. As Bishop (2009, p. 261) notes, many qualitative researchers (often to accommodate what they think IRBs expect) use highly restrictive terms of consent, even where risks are minimal and research participants would not object to data sharing. Beyond requesting affirmative consent to share the collected data, researchers can and should tailor the details of their consent procedure to the locale and cultural context - and qualitative researchers can use their close interaction with participants to gain a better sense of the most appropriate form of consent. For example, we talked to one researcher studying former civil war combatants who found (somewhat to her surprise) that her interviewees were reassured by the detailed written consent forms she used. In other contexts, written consent might have the opposite effect. The guiding principle, however, applies to both those scenarios: the researcher needs to make intentional choices and provide clear documentation of them. Even if the decision is for verbal collective-based consent, for instance, justified on the basis of a traditional understanding of authority to grant such in the group the researcher is studying, this result and its rationale will be recorded and presented as documentation alongside the actual transcripts (full or redacted further, which should be another discrete option) of the group interviews.
Those choices themselves should be based on a thoughtful and realistic assessment of both the probability and degree of risk of harm, as weighed against the benefits of the research itself and the sharing of the data (Van Den Eynden, 2008). This sort of "risk-benefit calculation" is quite familiar to IRBs from the biomedical research realm where they originated (Beauchamp, 2011), and its logic remains broadly pertinent for social science work, both qualitative and quantitative.
Data Collection
Continuing on the topic of participants' consent, the ideal stance - which qualitative researchers should find an extension of their general ethical position with regard to the people they study - would be to involve participants in the process of data sharing. Given the extended interactions qualitative researchers typically have with their interviewees, they are able to explain the nature, purpose and benefits of data sharing and also get a sense of the types of risks their participants might be worried about (or not) and thus calibrate an initial hypothetical assessment.
What we at QDR advise researchers to do, in order to facilitate consent that grants full agency to the participants, is to offer them a range of data sharing options they can agree to. To illustrate, researchers can present separate Yes-No choices for: full audio recording; partial note taking; having one's words quoted in later research outputs, with or without attribution; and archiving, respectively, again only the transcripts or transcripts and audio recordings if both were made during the interview.¹ The customizable selections allow for a meaningful negotiation between interviewer and interviewees in a way that permits the latter to tailor their choice in a way that seems optimal to them. In all cases, participants hold a "veto" over sharing their data, but researchers should be careful to perform (together with their IRB and, later, repository staff) an individual risk assessment even where interviewees agree to data sharing.
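A compact way to picture this menu of options is as a per-participant consent record. The sketch below is purely illustrative (the field names are our own, not QDR's schema), but it shows how the Yes-No choices listed above combine to determine what may be archived.

```python
from dataclasses import dataclass

# Illustrative per-participant consent record mirroring the option list above.
# Field names are hypothetical, not QDR's actual consent schema.
@dataclass
class ConsentRecord:
    audio_recording: bool
    note_taking: bool
    quote_in_outputs: bool
    quote_with_attribution: bool   # only meaningful if quote_in_outputs is True
    archive_transcript: bool
    archive_audio: bool            # only meaningful if audio_recording is True

    def sharable_items(self):
        """Return which artifacts this participant has agreed to archive."""
        items = []
        if self.archive_transcript:
            items.append("transcript")
        if self.audio_recording and self.archive_audio:
            items.append("audio")
        return items

participant = ConsentRecord(True, True, True, False, True, False)
print(participant.sharable_items())   # -> ['transcript']
```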
Only careful planning before the start of the project can ensure truly informed consent (conceptualized, as we do above, as an interactive and ongoing process between researcher and participant) during the face-to-face collection stage.
Data Curation and Publication
In most cases, researchers will seek to remove personal aspects of the data for publication. There are significant challenges in de-identifying quantitative research like surveys (Kennickell, 1997), but possible attack vectors as well as possible solutions tend to be technical (e.g., adding noise to data, collapsing categories, masking or obscuring metadata records, etc.). For qualitative data, there is no alternative to a manual, context-driven procedure. While automated tools can assist by flagging possible indirect identifiers like specific dates and locations, the researcher, ideally in consultation with a data management specialist, needs to individually remove or alter numerous instances of indirect identifiers. Some more traditional strategies for de-identification, which underlie the more recent computerized implementations, can nevertheless be more easily applied. These include broadening categories or partially reducing content. As an illustration of the first type, "I graduated from Dartmouth in 1985" becomes "I graduated [from a liberal arts college] in [1985-1990]" and/or "[in the mid-1980s]". The resulting statement expands the population in which the interviewee falls, while also retaining critical factual information the original statement was meant to convey.
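The kind of automated assistance mentioned above can be as simple as pattern-matching likely indirect identifiers and queueing them for manual review. The sketch below uses deliberately naive, illustrative patterns and is not any particular tool's implementation.

```python
import re

# Flag possible indirect identifiers (years, dates, named institutions) for
# manual review.  Patterns are illustrative and deliberately simple; real
# de-identification remains a manual, context-driven judgment.
PATTERNS = {
    "year": re.compile(r"\b(19|20)\d{2}\b"),
    "month-day": re.compile(r"\b(January|February|March|April|May|June|July|"
                            r"August|September|October|November|December)\s+\d{1,2}\b"),
    "institution": re.compile(r"\b[A-Z][a-z]+ (University|College|Hospital)\b"),
}

def flag_identifiers(text):
    """Return (label, matched text, position) tuples sorted by position."""
    hits = []
    for label, pattern in PATTERNS.items():
        hits += [(label, m.group(0), m.start()) for m in pattern.finditer(text)]
    return sorted(hits, key=lambda h: h[2])

sample = "I graduated from Dartmouth College in 1985 and was interviewed on June 3."
for label, token, pos in flag_identifiers(sample):
    print(f"{pos:3d} {label:12s} {token}")
```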
One QDR pilot project was chosen specifically to try out such a strategy for carefully de-identifying over 100 interview transcripts in a way that retained their analytical richness and inferential usefulness (Dunning and Camp, 2015). QDR curation staff worked closely with the researchers to develop a clear and comprehensive protocol of anonymization rules addressing all the detailed substantive categories that made sense in the context of this project. The resulting protocol has been archived as part of the project's documentation files (Dunning et al., 2015) and serves both as a transparent explanation of the adaptations made from the original material for a secondary user of the collection, and also as a creative model for methodological learning. A similar logic can be applied when citing such interviews in a publication. Details about interviews can be referenced either by broad categories (e.g., "city council member, Buenos Aires region, Fall 2012") or, if unfeasible, by assigning codes to the interviewees and locations (e.g., "Interviewee 1; City A").
An alternative to such content-based data redaction can be achieved by either publishing only a subset of interviews or publishing only selected relevant extracts from them. In fact, Dunning and Camp also made the first of those additional choices, since the full data collection they and other co-authors engaged in had produced several hundred interviews. To make some data sharing practicable, while still keeping the sensitive data about political clientelism safe, the researchers selected only one geographical cluster of the several locations where interviews were conducted. Again, the choice was deliberate (random sampling across different sites could have provided similar numerical reduction, but would have eliminated the social-network aspect that was of critical importance for the research question) and clearly documented.
An interesting twist on how quantitative reduction of qualitative data can be used to protect human participants' confidentiality and safety can be seen in the so-called Active Citation compilations commissioned by QDR (Moravcsik, 2014). In this type of project, the authors contribute annotations that supplement the arguments of a formal publication on a micro level. In some cases (Ellett, 2015; Rich, 2015), researchers who had started from the default position of no possibility of sharing ultimately felt sufficiently comfortable to use briefer or redacted excerpts of interviews at the specific points in their texts where they were trying to substantiate an empirical claim. Clearly, such partial provision of data only works for this specific type of transparency technique and does not satisfy more ambitious research transparency and data access needs. But when the judgment calls of what was annotated, what types of excerpts were provided and what were excluded are openly detailed in the published project documentation, it does present a better alternative than zero primary data availability.
While the techniques discussed above all rest primarily with the researcher, another important strategy in protecting participants' privacy is not achievable without engaging a professional repository in the sharing process. This concerns the various levels of access controls, which range from simple registration requirements, to requirements to submit a research proposal for secondary data use and sign a special usage agreement, to timed embargos, or to on-site-only access. The in-depth understanding qualitative researchers have of their participants can inform the nature of access controls as well. With access options available at the file (i.e., individual interview) level, a single study can contain anything from open, identified data to unpublished data - depending on both the consent of the participant and the researcher's risk assessment. Given its flexibility, such differential access will probably become more widely used as more qualitative data are shared, as well as used, especially via institutional repositories. Restricting access might also be the only option in cases where de-identification is impossible due to the medium used, for example, for video and audio recordings. While it might be possible to blur or distort identifying elements, this is both expensive and destroys important qualities of the data.
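One way to make the file-level granularity concrete is to model the tiers named above as an ordered scale and attach a tier to each file. The sketch below is illustrative only; the tier names are ours rather than QDR's.

```python
from enum import IntEnum

# Ordered access tiers, least to most restrictive, mirroring the levels
# described above.  Tier names are illustrative, not QDR's terminology.
class AccessTier(IntEnum):
    OPEN = 0
    REGISTRATION = 1
    APPROVED_PROPOSAL = 2
    EMBARGOED = 3
    ONSITE_ONLY = 4
    UNPUBLISHED = 5

project = {
    "interview_01.txt": AccessTier.OPEN,               # participant consented to open sharing
    "interview_02.txt": AccessTier.APPROVED_PROPOSAL,  # usage agreement required
    "interview_03_audio.wav": AccessTier.ONSITE_ONLY,  # medium cannot be de-identified
}

def accessible(files, user_tier):
    """Files a user cleared to `user_tier` may access."""
    return [name for name, tier in files.items() if tier <= user_tier]

print(accessible(project, AccessTier.APPROVED_PROPOSAL))
```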
With all these strategies, there is a trade-off between sharing and risk to privacy, between ease of access and the protection of sensitive data. QDR's goal is to teach scholars how they can reduce the trade-off to its optimal point, i.e., to share as much as possible without introducing undue additional risk. Still, just as for the research itself, where risks exist, scholars need to balance them against the numerous benefits of publicly available data. Qualitative social scientists, in particular, should do so in close collaboration with research participants, on the one end of the research process, and with domain repositories, on the other, where the staff is deeply familiar with both social science convention and qualitative methodology. QDR provides guidance for researchers throughout the research lifecycle: from planning, through handling data securely in the field, to preparing them for publication. Its published projects and training materials showcase a growing list of examples of successful data management and sharing. Researchers can apply the broader lessons learned from these materials, together with their own expertise, to arrive at context-sensitive solutions for their qualitative project.
And while some kinds of data and some ways of sharing will always be more problematic than others, often the objections to any sharing of qualitative data are based on discussing the most difficult end of the spectrum, while simultaneously envisioning the least constrained ways of sharing. While a pioneer in facilitating the sharing of qualitative data, QDR does not advocate such an imaginary "handing over the data" (Sieber, 1988) without sufficient preparation. Various tools for advance planning and constrained sharing, in the cases where that is appropriate, should allow even the more difficult cases to be handled properly, with great benefits to the scholarly enterprise and to individual research projects. If research is conducted with eventual sharing in mind, and if scholars are familiar with the available strategies, then the kinds of dilemmas that for a long time have forced researchers into promises of data destruction become the exceptions and no longer the rule.
A Cautionary Ending (for would-be data incinerators)
Earlier that day, I'd taken a train from New York to Philadelphia because I wanted to track down at least one of Goffman's subjects, and I was pretty sure I had figured out where some of Chuck's surviving family lived (Singal, 2015).
While Alice Goffman's motivation to protect her Philadelphian interlocutors was certainly admirable, the way she went about it seems to have caused a lot of negative scrutiny of her work and, most unfortunately, not a lot of protection in the end. As a new institution trying to learn from the theoretical advances and good practices of archivists and information scientists, QDR's current prescriptions rest on trying to maximize those of the so-called "Five Safes" (Corti and Welpton, 2015) that are and will remain most relevant for qualitative work in the social sciences. While "safe data" and "safe outputs" can rarely be produced from sensitive qualitative materials, educating researchers how to be "safe people" and how to plan for "safe projects" - when accessing such data and using them for secondary analysis - and providing long-term "safe settings" for the data, including via de-identification and appropriate access controls, will remain QDR's focus in its future work with researchers.
Going forward, QDR will continue to build upon the lessons referenced above from its early deposits by pursuing interactions with all relevant actors along various avenues in the social science enterprise. We are currently distilling key insights from our first three years of operations into an expanding suite of guidance documents. QDR continues to improve its technical platform, in order to offer researchers fine-grained control over and easy application of access controls. To further facilitate sharing of sensitive data, we are conducting a multi-prong outreach effort to the U.S. IRB community. By building relationships between repositories and IRBs, we hope to improve the review of data-sharing provisions in IRB applications and ensure that data sharing is not unnecessarily impeded by IRB protocols. Most importantly, the repository's primary enduring commitment remains to future depositors and the participants in their projects, to work together to present creative solutions for qualitative data management and sharing in a way that is both ethical and productive.
"year": 2017,
"sha1": "451a35649180c943b5047ee0d66326d01a279667",
"oa_license": "CCBY",
"oa_url": "http://datascience.codata.org/articles/10.5334/dsj-2017-043/galley/712/download/",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "451a35649180c943b5047ee0d66326d01a279667",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
27238105 | pes2o/s2orc | v3-fos-license | Microcanonical analysis of a nonequilibrium phase transition
Microcanonical analysis is a powerful method for studying phase transitions of finite-size systems. This method has been used so far only for studying phase transitions of equilibrium systems, which can be described by microcanonical entropy. I show that it is possible to perform microcanonical analysis of a nonequilibrium phase transition, by generalizing the concept of microcanonical entropy. A one-dimensional asymmetric diffusion process is studied as an example where such a generalized entropy can be explicitly found, and the microcanonical method is used to analyze a nonequilibrium phase transition of a finite-size system.
Microcanonical analysis is a powerful method for studying phase transitions of finite-size systems [1,2]. In this approach, the form of the microcanonical entropy is examined to see whether there is a convex region. Existence of such a region signals the onset of an inhomogeneity, and the system is considered to undergo a first-order phase transition in this region. The phase transition in microcanonical analysis is a well-defined concept even for a finite-size system, which is in contrast to most of the traditional canonical-ensemble approaches, where a phase transition is defined only for infinite-size systems, in terms of singular behavior of physical quantities in this limit. The microcanonical analysis has been applied for studying phase transitions of various finite-size systems such as spin models [3-6], atomic clusters and nuclei [2,7], polymers [8-15], peptides [16], and proteins [17-21].
Since only equilibrium systems can be described in terms of microcanonical entropy, the microcanonical method has been used exclusively for analyzing equilibrium phase transitions so far. In this Letter, I show that it is possible to apply this method to a nonequilibrium phase transition [22-28], by a proper generalization of the concept of microcanonical entropy.
Let us first briefly review the connection between the convex region of microcanonical entropy and the phase transition [1,2]. We consider a finite closed system with a conserved quantity, say energy E, and denote the number of corresponding microstates as Ω_L(E), where the subscript denotes the dependence on the system size L. The microcanonical entropy is then defined as S_L(E) = log Ω_L(E) (1), where we use units with k_B = 1. Now suppose we construct a larger system by assembling two identical subsystems of energy E and size L. We let the two subsystems make thermal contact, but let the coupling between the two systems be weak enough that the total energy is E_tot = 2E = E_A + E_B, where E_A and E_B are the energy values of the two subsystems.
We then examine the qualitative features of the probability distribution of the energy values of the subsystems. If S_L(E) is concave, then P(E, E) ≥ P(E_-, E_+) for any E_± with E_- < E < E_+, so the homogeneous distribution of energy among the subsystems is preferred. On the other hand, if there is a convex region in S_L(E), so that one can find values E_1, E_2 and 0 < p < 1 satisfying S_L(pE_1 + (1-p)E_2) < p S_L(E_1) + (1-p) S_L(E_2) (2), then there are values E_± such that P(E, E) < P(E_-, E_+), so that an inhomogeneous distribution is favored, and we say that the system is in the region of the first-order phase transition. The argument can easily be generalized to the case of subsystems of different sizes, more than two subsystems, and multiple conserved quantities [1,2]. We note that the only relevant property of Ω_L(E) exploited in the argument is that when a conserved quantity Q = Q_A + Q_B of the total system is distributed over two subsystems A and B, the probability distribution of Q_A and Q_B is proportional to the product of the Ωs, P(Q_A, Q_B) ∝ Ω_L(Q_A) Ω_{L'}(Q_B) (3), where L and L' denote the sizes of the subsystems A and B. We can also impose a certain boundary condition at the interface between the subsystems, in which case P(Q_A, Q_B) becomes a conditional probability. Therefore, it is clear that even for a nonequilibrium system, if the probability of the distribution of a conserved quantity Q among subsystems under an appropriate boundary condition can be expressed in the form of Eq. (3), then we can consider Ω_L(Q) as the generalized density, and its log as the generalized entropy, which can then be used as the target of the microcanonical analysis. This is the main claim of this Letter.
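As a purely numerical illustration of this criterion (with a synthetic entropy curve, not the model studied below), the sketch compares the homogeneous split (E, E) with unequal splits (E_-, E_+) under P(E_A, E_B) ∝ Ω_L(E_A) Ω_L(E_B):

```python
import numpy as np

# Synthetic S(E) = log Omega(E) with a convex dip near E = 50; under
# P(E_A, E_B) ~ Omega(E_A) * Omega(E_B), the dip makes some unequal split
# (E-, E+) more probable than the homogeneous split (E, E).
E = np.arange(0, 101)
S = 10.0 * np.log1p(E) - 3.0 * np.exp(-((E - 50.0) ** 2) / 128.0)  # toy entropy

def log_P(Ea, Eb):
    return S[Ea] + S[Eb]      # log of the joint weight, up to a constant

E0 = 50
splits = [(E0 - d, E0 + d) for d in range(1, E0)]
best = max(splits, key=lambda s: log_P(*s))
print(f"homogeneous (E,E): log P = {log_P(E0, E0):.3f}")
print(f"best split {best}: log P = {log_P(*best):.3f}")
```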
As an example of a nonequilibrium model on which microcanonical analysis can be performed, we consider a diffusion model where particles of two types, labelled 1 and 2, move asymmetrically on a periodic lattice of length L [23-25,29]. Treating the vacancy as a particle with label 0, the transition rates g_αβ for the particle exchange of the type (α, β) → (β, α) at neighboring sites are given as [24,25,29]

g_10 = g_02 = 1, g_12 = q, g_21 = 1, (4)

with all other components of g being zero. We note that the numbers of both types of particles are conserved separately, which we will denote as n_1 and n_2. The matrix representation of the stationary state for this process has already been found [24,25,29]: the stationary weight of a configuration is given by the trace of a product of matrices G_{β_k}, where β_k denotes the particle type at the k-th site, and the components of the three infinite-dimensional matrices G_β (β = 0, 1, 2) are given explicitly in [24,25,29]. Now let us suppose that there are vacancies at sites a and b. The whole periodic lattice can be divided into two regions bounded by these two sites, and we would like to obtain the conditional probability for the particle numbers in these two regions being n_A = (n_1^A, n_2^A) and n_B = (n_1^B, n_2^B). From the matrix-product form of the stationary state, one sees that this probability factorizes into a product of two factors, one for each region, where δ(a, b) = δ_{a,b} denotes the Kronecker delta, which vanishes when the indices are not equal. Therefore, the conditional probability for the steady state is expressed in the form of Eq. (3), where the generalized density Ω_L(n) for a system of size L is read off from this factorization, with the system size L including one vacancy. We analyze the phase transition of the current model by performing the microcanonical analysis on the generalized entropy S_L(n) = log Ω_L(n). It is expressed in terms of (L/2)×(L/2) submatrices of G_β, which can be computed exactly for given values of q and L [24,25,29]. By performing analytic computations, Monte Carlo simulation, mean-field calculations [24], and partition-function-zero analysis [25], it has been argued that this system undergoes a nonequilibrium phase transition in the limit L → ∞. There is a q_c > 1 such that the system remains homogeneous for q ≥ q_c, but inhomogeneities of the particle densities appear for a certain range of particle numbers when q < q_c. In fact, the latter can be considered as a region of first-order transition between the fluid and the condensed phases, as will be elaborated below.
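Although the exact Ω_L(n) requires the matrix-product computation cited above, the dynamics of Eq. (4) itself is easy to simulate; the following is a minimal rejection-style kinetic Monte Carlo sketch (our own illustrative code, not the authors'):

```python
import random

# Exchange dynamics of Eq. (4) on a periodic lattice via rejection kinetic
# Monte Carlo: pick a random bond (k, k+1) and swap the ordered pair
# (a, b) -> (b, a) with probability g[(a, b)] / g_max.
# Labels: 0 = vacancy, 1 and 2 = the two particle species.
def simulate(L=100, n1=30, n2=30, q=0.5, sweeps=10000, seed=1):
    rng = random.Random(seed)
    g = {(1, 0): 1.0, (0, 2): 1.0, (1, 2): q, (2, 1): 1.0}  # all other rates are zero
    g_max = max(g.values())
    state = [1] * n1 + [2] * n2 + [0] * (L - n1 - n2)
    rng.shuffle(state)
    for _ in range(sweeps * L):
        k = rng.randrange(L)
        a, b = state[k], state[(k + 1) % L]
        rate = g.get((a, b), 0.0)
        if rate > 0.0 and rng.random() < rate / g_max:
            state[k], state[(k + 1) % L] = b, a
    return state

final = simulate()
print("".join(str(s) for s in final))  # at small q one may observe clustering
```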
From the viewpoint of microcanonical analysis, the criterion for a first-order transition is the existence of a nonconcave region in the microcanonical entropy, a set of points where one can find a direction with positive second derivative [1,2]. For the current model, where the conserved quantity n is discrete, I examined discretized second derivatives. The generalized entropy function S_L(n_1, n_2) is shown in the left panels of Figures 1 and 2 for L = 10 and L = 100, respectively, for various values of q. Note that the entropy has a symmetry with respect to the line n_1 = n_2, due to the invariance under the simultaneous application of the particle-type exchange 1 ↔ 2 and the parity inversion k ↔ -k. We see that for small enough values of q, a nonconcave region appears in the generalized entropy, enclosed by dashed lines in Figure 1 and denoted as gray regions in Figure 2. As q increases, the nonconcave region shrinks, and eventually disappears for large enough values of q. We find that the nonconcave region always includes a part of the line n_1 = n_2. The second derivative at such a point is also largest along the (1,1) direction, which tells us that for sufficiently small q, when the system is divided into subsystems with respect to a pair of vacancies, it is most probable that there is an inhomogeneity in the total particle numbers, but with equal numbers of the two species on both sides. Note that this is an exact statement for a finite value of L, in contrast to the results of previous works, where the limit L → ∞ was considered [24,25].
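The discretized test can be written down directly: flag a lattice point (n_1, n_2) as nonconcave when the second difference S(n+d) − 2S(n) + S(n−d) is positive along some direction d. The sketch below applies this to a synthetic symmetric surface, since the exact S_L requires the matrix products above.

```python
import numpy as np

# Flag lattice points where the discrete second difference
# S(n + d) - 2 S(n) + S(n - d) is positive along some direction d.
# S here is a synthetic symmetric surface with a convex dip on the diagonal.
def nonconcave_points(S, directions=((1, 0), (0, 1), (1, 1), (1, -1))):
    flagged = set()
    N1, N2 = S.shape
    for d1, d2 in directions:
        for i in range(abs(d1), N1 - abs(d1)):
            for j in range(abs(d2), N2 - abs(d2)):
                second = S[i + d1, j + d2] - 2.0 * S[i, j] + S[i - d1, j - d2]
                if second > 0.0:
                    flagged.add((i, j))
    return flagged

n = np.arange(40)
N1, N2 = np.meshgrid(n, n, indexing="ij")
S = np.log1p(N1) + np.log1p(N2) - 2.0 * np.exp(-((N1 - 20) ** 2 + (N2 - 20) ** 2) / 60.0)
pts = nonconcave_points(S)
print(len(pts), "nonconcave points; on the diagonal:", any(i == j for i, j in pts))
```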
The cross sections of S_L(n_1, n_2) along the line n_1 = n_2 = n, S_L(n, n), are also displayed in the right panels of Figures 1 and 2, where the concave envelopes are denoted by dashed lines whenever they exist. These correspond to the regions of the first-order transition, whose upper and lower boundaries ρ_± in the space of the particle density ρ ≡ n/L (0 ≤ ρ < 0.5) are drawn as functions of q to produce a phase diagram in Figure 3, for L = 10 and L = 100. The mean-field result q(ρ) = (1 + 6ρ)/(1 + 2ρ) in the limit L → ∞ is shown in Figure 3 with a dashed line for comparison [24], where q(ρ) is the inverse function of ρ_±(q). The high-density side of the phase boundary, ρ ≥ ρ_+, corresponds to the condensed phase. Note that there is a q_1(L) such that ρ_- = 0 for q ≤ q_1(L), in which case the low-density phase ρ = ρ_- = 0 is just the vacuum without any particles present. For q > q_1(L), the low-density phase ρ ≤ ρ_- is the fluid phase. As q increases, ρ_± approach each other and eventually merge at the critical point q = q_c(L), after which the system is in a homogeneous phase. The values of q_1 and q_c for L = 10 and L = 100 are indicated by arrows in Figure 3. The mean-field predictions for these parameters are q_1(∞) = 1 and q_c(∞) = 2, as can easily be read off from the analytic expression for q(ρ).
The regions q ≤ q_1, q_1 < q < q_c, and q ≥ q_c have been called pure, mixed, and disordered phases [24]. However, microcanonical analysis shows that in ρ space, each of the regions q ≤ q_1 and q_1 < q < q_c is divided into a vacuum (or fluid) phase (ρ ≤ ρ_-), a condensed phase (ρ ≥ ρ_+), and the phase-coexistence region (ρ_- < ρ < ρ_+). The situation is analogous to the two-dimensional Ising model with conserved magnetization M and temperature T. When one simply considers the T dependence, there is a critical temperature T_c such that the system is in a disordered phase for T ≥ T_c and an ordered phase for T < T_c. However, by examining the M-dependent behavior of the system, one realizes that the ordered phase in fact gets divided into an up-spin phase, a down-spin phase, and the region of the first-order transition between the up and down phases.
I also plot q_c(L) and q_1(L) as functions of L in Figure 4. Both q_c(L) and q_1(L) approach their mean-field values q_c(∞) = 2 and q_1(∞) = 1.
The current work also clarifies the physical meaning of a previous work based on the partition function zeros (PFZs) [25]. There, a partition function of the form Σ_{n_1,n_2} Ω_L(n_1, n_2) x^{n_1+n_2} was constructed, where x was called the chemical potential. Then the PFZs in the complex plane of x were analyzed to claim that there is a first-order transition as L → ∞, for sufficiently small values of q. It is obvious that Ω_L(n) was used implicitly as the generalized density of states, but it was not explained why Ω_L(n) should have such a special status. Also, the physical meaning of the chemical potential was unclear, because Ω_L(n) was regarded as describing the particles on a periodic lattice of size L, which is an isolated system. The current work not only justifies the use of Ω_L(n) as a generalized density, via the factorization of Eq. (3), but also shows that Ω_L(n) in the PFZ approach describes a subsystem of size L-1 bounded by a pair of vacancies, rather than the whole system. Then the chemical potential x can be considered as a parameter describing the rest of the system, whose size is much larger than L, acting as an infinite-size particle reservoir. The microcanonical analysis is more general, since the phase transition is well defined for a system with a finite size. In fact, the notion of a finite-size nonequilibrium phase transition itself is introduced for the first time in the current work via microcanonical analysis, which would be a subject of much interest for future study. This work was supported by the National Research Foundation of Korea, funded by the Ministry of Education, Science, and Technology (NRF-2014R1A1A2058188).
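The PFZ construction referenced here is mechanical once Ω_L(n) is in hand: form the polynomial Σ_n Ω_L(n) x^n and locate its complex roots. A toy sketch follows, with a synthetic Ω, since the exact one needs the matrix products above.

```python
import numpy as np

# Build Z(x) = sum_n Omega(n) * x**n and find its complex zeros.  Omega(n)
# here is a synthetic positive sequence standing in for the model's exact
# generalized density.
omega = np.exp(3.0 * np.log1p(np.arange(30)))   # toy Omega(n) = (n + 1)^3, n = 0..29
coeffs = omega[::-1]                            # np.roots expects highest degree first
zeros = np.roots(coeffs)

# A zero approaching the positive real axis as the system grows is the
# signature of a transition in a PFZ analysis.
pos = zeros[zeros.real > 0]
closest = pos[np.argmin(np.abs(pos.imag))]
print("zero closest to the positive real axis:", closest)
```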
FIG. 1: The generalized entropy function S_L(n_1, n_2) for L = 10 is displayed with distinct symbols for points belonging to different ranges of function values, for (a) q = 0.5, (c) q = 0.7, and (e) q = 2.0. The nonconcave region is enclosed by dashed lines. The cross sections along the diagonal line n_1 = n_2 in figures (a), (c), and (e) are displayed in figures (b), (d), and (f). The concave envelopes are drawn in figures (b) and (d) with dashed lines as visual guides.
FIG. 3: The phase boundary ρ_±(q) for L = 10 and L = 100. The mean-field result in the limit L → ∞ is shown with a dashed line for comparison.
FIG. 4: The critical values q_c (solid line) and q_1 (dashed line) as functions of the system size L. The first-order phase transition exists for q < q_c. The low-density phase is a vacuum phase for q ≤ q_1 and a fluid phase for q_1 < q < q_c. | 2015-07-06T18:56:41.000Z | 2015-06-25T00:00:00.000 | {
"year": 2016,
"sha1": "406751bccc14998799bbf30e77d3ee0049573df1",
"oa_license": null,
"oa_url": "https://arxiv.org/pdf/1506.07654",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "406751bccc14998799bbf30e77d3ee0049573df1",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Physics"
]
} |
12267356 | pes2o/s2orc | v3-fos-license | Synthesis and anticancer evaluation of 1,3,4-oxadiazoles, 1,3,4-thiadiazoles, 1,2,4-triazoles and Mannich bases.
A series of 5-(pyridin-4-yl)-N-substituted-1,3,4-oxadiazol-2-amines (3a-d), 5-(pyridin-4-yl)-N-substituted-1,3,4-thiadiazol-2-amines (4a-d) and 5-(pyridin-4-yl)-4-substituted-1,2,4-triazole-3-thiones (5a-d) were obtained by the cyclization of hydrazinecarbothioamide derivatives 2a-d derived from isonicotinic acid hydrazide. Aminoalkylation of compounds 5a-d with formaldehyde and various secondary amines furnished the Mannich bases 6a-p. The structures of the newly synthesized compounds were confirmed on the basis of their spectral data and elemental analyses. All the compounds were screened for their in vitro anticancer activity against six human cancer cell lines and normal fibroblast cells. Sixteen of the tested compounds exhibited significant cytotoxicity against most cell lines. Among these derivatives, the Mannich bases 6j, 6m and 6p were found to exhibit the most potent activity. The Mannich base 6m showed more potent cytotoxic activity against gastric cancer NUGC (IC50=0.021 µM) than the standard CHS 828 (IC50=0.025 µM). Normal fibroblast cells WI38 were affected to a much lesser extent (IC50>10 µM).
About 13% of deaths of human beings throughout the world are caused by cancer, which is characterized by uncontrolled cell growth, metastasis, and invasion. 1) Among the existing cancer therapies, chemotherapy has turned out to be one of the most significant treatments in cancer management. 2) However, the most important clinical problem related to the use of new therapeutic strategies is the high toxicity of new potentially cytotoxic compounds. Thus, in the light of existing problems in cancer therapy, discovery of novel, efficient, safe and selective anticancer agents is a thrust area for medicinal chemists. 3) The ring-closure reactions of carbohydrazides to prepare 1,3,4-oxadiazoles, 1,3,4-thiadiazoles and 1,2,4-triazoles are well known and have been thoroughly studied. 1,3,4-Oxadiazoles are an important class of heterocyclic compounds with a wide range of biological activities. 4-6) Among these, a few different substituted 1,3,4-oxadiazoles have exhibited potent antitumor activities. 7-10) Recently, a series of new 1,3,4-oxadiazole derivatives incorporating a pyridine moiety were synthesized and developed as potential telomerase inhibitors. 11) On the other hand, the literature presents the variously substituted 1,3,4-thiadiazoles as compounds with a broad spectrum of biological activity. 12-14) The anticancer potential of 2-amino-1,3,4-thiadiazole derivatives has been documented by in vitro and in vivo studies. High expectations for therapies are attached to the extensive ongoing research conducted on the group of 2-amino-1,3,4-thiadiazoles substituted with a 2,4-dihydroxyphenyl group. 15-17) The small and simple triazole nucleus is present in compounds aimed at evaluating new entities that possess antimicrobial, antitubercular, anticonvulsant, antidepressant, antimalarial and antiinflammatory activities. 18,19) A large number of 1,2,4-triazoles have been incorporated into a wide variety of therapeutically interesting drug candidates possessing anticancer activities. 20,21) Literature survey reveals that important chemotherapeutics, such as Vorozole, Letrozole and Anastrozole (Fig. 1), which contain a substituted 1,2,4-triazole ring, are currently being used for the treatment of breast cancer. 22) Moreover, the aminoalkylation of aromatic substrates by the Mannich reaction is of considerable importance for the synthesis and modification of biologically active compounds. 23) Mannich bases are common pharmacophores in the design and development of anticancer chemotherapeutic agents. 24-27) Some Mannich bases are reported to exhibit activity in vitro against murine P388 lymphocytic leukemia cells. 28) In the design of new compounds, development of hybrid molecules through the combination of different pharmacophores in one structure may lead to compounds with increased biological activity. However, reviewing the literature reveals that the potential anticancer activity of isonicotinic acid hydrazide derivatives and Mannich bases derived from 1,2,4-triazoles has not yet been thoroughly investigated. Prompted by the aforementioned findings, and in continuation of our previous work on discovering the anticancer potential of triazole derivatives, 29) in the present study it was planned to synthesize hybrid compounds that comprise a pyridine ring and variant heterocyclic ring systems in order to identify new candidates that may be of value in designing new, potent, selective and less toxic anticancer agents.
All the synthesized compounds were evaluated for their in vitro cytotoxicity against six human cancer cell lines and normal fibroblast cells and their structure-activity relationship (SAR) was investigated.
Results and Discussion
Chemistry  The reaction sequence employed for the synthesis of the title compounds is shown in Charts 1 and 2. The reaction of isonicotinic acid hydrazide 1 with various aryl isothiocyanates afforded the corresponding hydrazinecarbothioamide derivatives 2a-d. 30,31) Oxidative cyclization of the latter compounds in 4 N NaOH using a mixture of I2/KI (5%) furnished the corresponding 5-(pyridin-4-yl)-N-substituted-1,3,4-oxadiazol-2-amines (3a-d) 32,33) via elimination of H2S. On the other hand, 5-(pyridin-4-yl)-N-substituted-1,3,4-thiadiazol-2-amines (4a-d) 32,34) were obtained by cyclization of 2a-d upon reaction with cold conc. H2SO4. The structures of the synthesized compounds were confirmed on the basis of their spectral data and elemental analyses. The IR spectra of the hydrazinecarbothioamide derivatives 2a-d displayed signals in the ranges of 1672-1684 and 1249-1255 cm^-1 corresponding to C=O and C=S, respectively. Meanwhile, the 1H-NMR spectra of these compounds showed three singlet signals corresponding to the NH groups in the range of δ 9.71-10.82 ppm.

Chart 1. Synthesis of Compounds 3a-d and 4a-d
Chart 2. Synthesis of Compounds 5a-d and 6a-p

The carbothioamides 2a-d underwent cyclization upon heating under reflux with 4 N aq. NaOH to afford the 5-(pyridin-4-yl)-4-substituted-1,2,4-triazole-3-thiones (5a-d). 35-37) The mercaptotriazole derivatives are reported to be present in thione-thiol tautomeric forms in solution. 38) The structure elucidation of the triazoles 5a-d revealed their presence predominantly in the thione form, which was further confirmed by their conversion to their Mannich bases. Thus, treatment of compounds 5a-d with each of N-methylpiperazine, morpholine, diethylamine and benzylpiperidine in the presence of formaldehyde solution afforded the corresponding Mannich bases 6a-p. The 1H-NMR spectra of the Mannich bases 6a-p displayed additional signals due to the -N-CH2-N- groups in the range of δ 5.25-5.78 ppm, integrating for two protons, beside signals belonging to the methylpiperazine, morpholine, diethylamine or benzylpiperidine moieties. Furthermore, the 13C-NMR spectra of compounds 6a-p exhibited signals in the ranges of δ 69.77-77.35 ppm (N-CH2-N) and δ 169.41-171.68 ppm (C=S).
In Vitro Cytotoxicity Effect on the Growth of Human Cancer Cell Lines The heterocyclic compounds prepared in this study were evaluated according to standard protocols for their in vitro cytotoxicity against six human cancer cell lines, including cells derived from human gastric cancer (NUGC), human colon cancer (DLD1), human liver cancer (HA22T and HEPG2), nasopharyngeal carcinoma (HONE1), and human breast cancer (MCF), as well as against normal fibroblast cells (WI38). For comparison purposes, CHS 828, a pyridyl cyanoguanidine, was used as a standard antitumor drug 39) (Fig. 2). The IC50 values in micromolar (µM) are listed in Table 1. Sixteen of the tested compounds showed cytotoxic activity with IC50 values <1 µM. The Mannich bases 6j, 6m, and 6p were found to be the most potent derivatives. All the synthesized compounds were tested for their cytotoxicity against normal fibroblast cells, as many anticancer drugs are toxic not only to cancer cells but also to normal ones. The results obtained showed that normal fibroblast cells (WI38) were affected to a much lesser extent (IC50 >10 µM).
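Although the cytotoxicity assays themselves follow standard protocols not detailed here, a brief sketch may help readers unfamiliar with how an IC50 value is typically obtained from raw dose-response measurements. The following Python snippet fits a four-parameter logistic (Hill) curve; every value and name in it is an illustrative assumption, not data or code from this study.

```python
# Illustrative only: one common way to derive an IC50 from a dose-response
# series, by fitting a four-parameter logistic (Hill) curve. The data and
# parameter guesses below are assumptions for the sketch.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ic50, slope):
    # Four-parameter logistic: viability as a function of drug concentration.
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

conc = np.array([0.001, 0.01, 0.1, 1.0, 10.0])        # hypothetical doses (uM)
viability = np.array([0.98, 0.90, 0.55, 0.15, 0.05])  # fraction of control

popt, _ = curve_fit(hill, conc, viability, p0=[0.0, 1.0, 0.1, 1.0], maxfev=10000)
print(f"Estimated IC50: {popt[2]:.3f} uM")
```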
Structure-Activity Relationship In the present study, on correlating the structures of the synthesized oxadiazoles, thiadiazoles, and triazoles with their anticancer activity, it was observed that the electronegativity of the substituents on the aromatic rings modulated the magnitude of activity. Thus, compounds bearing 4-chlorophenyl and 4-bromophenyl pharmacophores were found to be the only active compounds. Among the oxadiazole derivatives 3a-d, only compound 3c showed selective, moderate activity against the gastric and colon cancer cell lines. On the other hand, the 1,3,4-thiadiazole derivatives 4c and 4d exhibited broad-spectrum activity. Compound 4d demonstrated almost equipotent activity against the gastric cancer cell line NUGC (IC50 = 0.028 µM) compared to the standard CHS 828; 4d also showed almost fourfold higher activity against the liver cancer cell lines compared to its 4-chlorophenyl-substituted analog 4c. The superior cytotoxic activity of 1,3,4-thiadiazole derivatives compared to their bioisosteres, the 1,2,4-oxadiazoles and 1,3,4-oxadiazoles, was previously reported. 3) On the other hand, the 4-bromophenyl-substituted mercaptotriazole 5d showed more potent activity against liver cancer HA22T than 5c.
To verify the effects of structural modification on the anticancer activity, sixteen Mannich bases 6a-p incorporating methylpiperazine, morpholine, diethylamine and benzylpiperidine moieties were synthesized starting from the 1,2,4-triazoles 5a-d. The screening results again revealed the superior anticancer activity of the 4-chlorophenyl and 4-bromophenyl substituted derivatives compared to their p-tolyl and 4-methoxyphenyl congeners. The p-tolyl derivatives proved to be almost devoid of activity, while only one 4-methoxyphenyl derivative was active.
Methylpiperazine- and morpholine-derived Mannich bases were found to display significant cytotoxic activity. The 4-bromophenyl derivative 6m, incorporating a methylpiperazine moiety, showed more potent cytotoxic activity against gastric cancer NUGC (IC50 = 0.021 µM) compared to the standard CHS 828 and exhibited significant activity against colon cancer DLD1 and liver cancer HEPG2. Meanwhile, the 4-chlorophenyl derivative 6j, incorporating a morpholine moiety, was found to be the most potent among all the tested Mannich bases against liver cancer HEPG2 (IC50 = 0.028 µM).
Considering the Mannich bases bearing a diethylamino moiety, only the 4-chlorophenyl derivative 6k showed cytotoxic activity against four cell lines, while 6c and 6o exhibited selective activity against breast cancer MCF.
Among the 4-methoxyphenyl Mannich bases, cytotoxic activity was displayed only by the benzylpiperidine derivative 6h. This dramatic increase in activity may be attributed to the presence of the benzyl moiety. Moreover, the 4-bromophenyl derivative 6p exhibited almost equipotent cytotoxic activity against gastric NUGC and nasopharyngeal carcinoma HONE1 compared to the standard CHS 828, as well as the highest potency against colon cancer.
From the previous findings, it may be concluded that the presence of the bioactive methylpiperazine, morpholine, and benzylpiperidine moieties plays an important role in conferring cytotoxic activity. Thus, although some of the compounds were not the most potent, their selective activity against particular cell lines makes them of interest for further development as anticancer drugs.
Conclusion
This study presented the synthesis and characterization of a series of oxadiazoles, thiadiazoles, triazoles, and sixteen Mannich bases derived from 1,2,4-triazoles. All the synthesized compounds were evaluated for their in vitro anticancer activity against six human cancer cell lines and normal fibroblast cells, and their SAR was investigated. The screening results revealed that the presence of the 4-chlorophenyl and 4-bromophenyl substituents was essential for activity. The Mannich bases demonstrated promising cytotoxic activity. Most of the compounds that exhibited anticancer activity were nontoxic to normal fibroblast cells, suggesting that they may serve as lead compounds for the further development of new drugs.
Experimental
Chemistry All melting points were determined on a Stuart apparatus, and the values given are uncorrected. IR spectra (KBr, cm−1) were recorded on a Shimadzu IR 435 spectrophotometer (Faculty of Pharmacy, Cairo University, Egypt). 1H- and 13C-NMR spectra were recorded on a Bruker Ascend 400 MHz spectrometer (Microanalytical Unit, Faculty of Pharmacy, Cairo University, Egypt) using tetramethylsilane (TMS) as internal standard. Chemical shift values are reported in ppm on the δ scale. Electron impact (EI) mass spectra were recorded on a Hewlett Packard 5988 spectrometer (Microanalysis Center, Cairo University, Egypt). Elemental analyses were carried out at the Microanalysis Center, Cairo University, Egypt; found values were within ±0.35% of the theoretical ones. The progress of the reactions was monitored using thin layer chromatography (TLC) sheets precoated with UV fluorescent silica gel (Merck 60F254) and visualized using a UV lamp.
"year": 2015,
"sha1": "19f07e2407f5eda55341d8011b173fb12d477216",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/cpb/63/5/63_c15-00059/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "518604bfd0ff00d7598af743a51fa669b79206e6",
"s2fieldsofstudy": [
"Chemistry",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
Oxytocin and Estrogen Receptor β in the Brain: An Overview
Oxytocin (OT) is a neuropeptide synthesized primarily by neurons of the paraventricular and supraoptic nuclei of the hypothalamus. These neurons have axons that project into the posterior pituitary and release OT into the bloodstream to promote labor and lactation; however, OT neurons also project to other brain areas where it plays a role in numerous brain functions. OT binds to the widely expressed OT receptor (OTR), and, in doing so, it regulates homeostatic processes, social recognition, and fear conditioning. In addition to these functions, OT decreases neuroendocrine stress signaling and anxiety-related and depression-like behaviors. Steroid hormones differentially modulate stress responses and alter OTR expression. In particular, estrogen receptor β activation has been found to both reduce anxiety-related behaviors and increase OT peptide transcription, suggesting a role for OT in this estrogen receptor β-mediated anxiolytic effect. Further research is needed to identify modulators of OT signaling and the pathways utilized and to elucidate molecular mechanisms controlling OT expression to allow better therapeutic manipulations of this system in patient populations.
diffusing across neural tissue (6). In this review, we focus on the function of OT in the brain and its modulation by estrogens.
OXYTOCIN RECEPTORS: BRAIN DISTRIBUTION AND FUNCTION
Oxytocin signals through OT receptors (OTRs), which are G protein-coupled receptors that, upon binding OT, activate the Gq protein subunit and ultimately excite the cell. Autoradiographic studies have identified OTR expression in several regions of the rat brain, including the olfactory system, basal ganglia, hippocampus, central amygdala, and hypothalamus (7). The generation of a knock-in mouse strain in which Venus, a variant of yellow fluorescent protein, is under the regulation of the Otr promoter sequence facilitated the identification of OTR-expressing cells in additional brain regions, e.g., the median raphe nucleus and the lateral hypothalamus. This mouse model has been valuable in identifying the phenotype of the cells expressing OTRs. For instance, OTRs have been found in serotonergic neurons (8), implicating serotonergic involvement in OT's anxiolytic effects in depression and anxiety.
Central OT is important in homeostatic processes, such as thermoregulation (9), food intake (10), and mating (11, 12). OT also plays an important role in maternal behavior. Female rats that received and exhibited high maternal care showed higher levels of OTRs in various limbic regions of the brain, including the bed nucleus of the stria terminalis (BNST), central nucleus of the amygdala (CeA), lateral septum (LS), PVN, and medial preoptic area (MPOA). Additionally, central administration of an OTR antagonist (OTA) completely eliminated the elevated licking and grooming behaviors seen in the high maternal behavior animals, suggesting that OTRs mediate maternal behaviors (13).
Additional insights into the function of OT and the OTR are gained from the examination of genetically engineered mouse models. Female OT knockout (OTKO) mice show normal parturition and maternal behavior but are unable to nurse their pups, demonstrating that, in the mouse, OT is not necessary for maternal behavior or labor but is essential for milk ejection (14). Compared to the normal maternal behavior observed in OTKOs, OTR knockout (OTRKO) mice show deficits in maternal behaviors, demonstrated by their longer latency for pup retrieval (15). OT signaling is also implicated in social behavior, and OTKOs and OTRKOs both showed deficits in social memory. Wild-type animals investigate a novel conspecific for a longer period of time than a familiar animal, whereas OTKOs and OTRKOs show similar investigation times for both novel and familiar animals (15, 16). Although OTR levels remain unaltered in OTKOs (14,16,17), OTKOs demonstrated increased OTR sensitivity, as measured by increased grooming following central OT administration (17).
Unlike OTKOs, OTRKO males display increased aggression in the resident-intruder task (15). It is possible that this elevated aggressive behavior in the OTRKOs is mediated by a lack of OT signaling in the CeA, since administration of OT into the CeA of male rats decreased aggressive behavior (18). Interestingly, OTKO offspring generated from a homozygous breeding scheme demonstrated an increased aggression phenotype as compared to those bred from heterozygous parents, suggesting that OT from the heterozygous dam can prevent the aggressive phenotype in the OTKO pups (15). Although these changes in behavior may relate to the absence of OT or the OTR, these phenotypic changes could be due to compensatory mechanisms that occur during development to overcome the absence of OT signaling.
Furthermore, selective knockout of OTRs in the LS showed that OT plays a bi-directional role in fear regulation dependent on social context. Animals exposed to a non-fearful conspecific [positive social encounter; (19)] or to social defeat [negative social encounter; (20)] during contextual fear conditioning showed reduced or increased fear, respectively, compared to controls. Intra-LS administration of OTA or of a virally linked Cre-recombinase to knockdown OTR expression prevented the altered fear response mediated by the social stimulus (19,20). These data demonstrate that the OT/OTR system enhances memory of social interactions, reducing fear after positive and enhancing fear after negative social interactions.
Various factors influence OT signaling. OTRs are largely expressed centrally but their regional localization varies across species. For instance, mice and rats both express OTRs in the ventromedial nucleus of the hypothalamus [VMH (7, 8)], but OTRs are not expressed in this region in rabbits (7). These species-specific differences in localization may account for different responses to OT, for example, mice and rats respond differently to OT administration (21). Additionally, mice and humans show different OTR localization. For example, OTR-Venus immunoreactivity was seen in the mouse hippocampus (8) but, in humans, OTRs were not localized to this area (22). OTR signaling also changes during development in rats with transient developmental patterns displayed postnatally, an adult-like expression pattern seen around postnatal day 21, and increased OTR quantities into adulthood (23). Additionally, OT signaling differs between males and females. Female rats were found to have fewer OTRs in the BNST, VMH, and medial amygdala compared to males (24), and in humans, men and women were found to respond differently to intranasal OT administration (25,26). These differences between males and females may relate to hormone differences, which alter OT signaling and are discussed in more detail in a later section.
OXYTOCIN REGULATION OF HYPOTHALAMIC-PITUITARY-ADRENAL AXIS
Oxytocin release from neurons of the PVN and the presence of OTRs within the PVN suggests the possibility that OT can directly modulate the stress reactive hypothalamic-pituitary-adrenal (HPA) axis. The HPA axis responds to stressors and activates neurons residing in the PVN causing increased synthesis and secretion of corticotropin-releasing factor (CRF). The release of CRF into the hypophyseal portal system enhances synthesis and release of adrenocorticotropic hormone (ACTH) from the anterior pituitary. In turn, ACTH acts on the adrenal cortex to stimulate release of glucocorticoids [cortisol in humans and corticosterone (CORT) in rats and mice]. Increased levels of circulating glucocorticoids can further inhibit HPA axis activity via glucocorticoid and mineralocorticoid receptors in the brain as well as acting upon specific brain sites to modulate behaviors (27).
Oxytocin can putatively impact several sites within the HPA axis. PVN neurons that project to the median eminence release OT into the hypophyseal portal vasculature to stimulate adrenal glucocorticoid release by potentiating the actions of CRF at the anterior pituitary level, in a similar fashion to the closely related neuropeptide vasopressin (28). By contrast, OT neurons in the PVN that project into the forebrain and release OT in response to stressors (29) exert anxiolytic actions (5). Intracerebroventricular (ICV) administration of OT decreases not only circulating CORT levels but also ACTH levels following exposure to stressors in rats (30,31) and mice (32,33), and central infusion of OT into the PVN inhibits HPA axis reactivity via modulation of CRF neuronal activity (34). Using the restraint stress paradigm in association with OT administration (ICV), Windle et al. (31) demonstrated the presence of an OT-sensitive forebrain stress circuit involving the dorsal hippocampus, ventrolateral septum, and PVN.
Endogenous OT levels are also sufficient to alter HPA axis reactivity. ICV injection of OTA resulted in elevated ACTH and CORT levels prior to behavioral testing, suggesting that endogenous OT can suppress HPA axis reactivity (34,35). Additionally, administration of OTA via retrodialysis into the PVN resulted in increased ACTH and CORT release, indicating that endogenous OT can inhibit PVN neurons (35). Female OTKO mice show elevated CORT levels following acute and repeated shaker stress compared to wild-type littermates (33), demonstrating a definitive role for OT in regulating HPA axis reactivity to stress.
Interestingly, OT also promotes social buffering in response to stress, similar to the effect seen with fear (19). Female prairie voles subjected to restraint stress demonstrated an increase in anxiety-like behaviors and CORT levels when recovering alone but not when recovering with a male partner, which also corresponded to an increase in OT release in the PVN of these females. Intra-PVN OT injections reduced CORT and anxiety-related behaviors when animals recovered alone, whereas intra-PVN OTA administration prevented social buffering. These observations suggest that PVN OT signaling is necessary and sufficient for social buffering effects in response to stress in prairie voles (36).
OXYTOCIN REGULATION OF ANXIETY AND DEPRESSIVE BEHAVIORS
Oxytocin is strongly implicated in social bond formation and social behavior [for review, see Ref. (37)] but may also play a role in psychiatric disorders, such as anxiety and depression. The effect of OT in these disorders may be related to abnormal social behavior, but OT may also independently impact these disorders via regulation of the HPA axis. Dysregulation of the HPA axis and an increased response to stressors are commonly seen in anxiety and mood disorders (38). In a clinical study with pediatric and adult participants, cerebrospinal fluid and plasma OT levels were found to be higher in participants who had lower anxiety (39). However, severe anxiety symptoms may be related to over-activation of the OT system, as women with elevated OT levels were more likely to report being anxious on a daily basis (40). Reduced nocturnal levels of OT have been reported in depressed individuals; however, numerous studies have also reported no differences compared to healthy controls (41). This variability across studies for anxiety and depression may relate to OT levels corresponding more to personality traits than to symptoms of depression or anxiety (42). Despite these inconsistencies in the data concerning OT levels in psychiatric disorders, a recent meta-analysis suggests that OT may be beneficial in the treatment of anxiety and depression (43).
Oxytocin signaling during early development may contribute to later anxiety. Prairie vole pups exposed to a single injection of OT on postnatal day 1 demonstrated an increase in serotonergic axon density in the anterior hypothalamus, cortical amygdala, and VMH but not in the PVN or medial amygdala. Such effects on serotonergic neurons could be a mechanism by which OT affects emotional behaviors, since serotonin is strongly linked to mood, and serotonin dysregulation is seen in depression and anxiety disorders (44).
In adult animals, OT administration reduces anxiety-related behaviors in the elevated plus maze (30,45,46) and open field assay (8, 11). OT administration centrally (30), to the medial prefrontal cortex (46), to the CeA (11), and to the PVN (45) was sufficient to reduce anxiety-related behaviors. Chronic central OT administration reduced anxiety in rats bred for high levels of anxiety-related behaviors (47). Further support for the role of OT in reducing anxiety comes from studies of OTKO mice, with OTKO females showing increased anxiety-related behaviors compared to their wild-type counterparts (32,33). The effect of OT was sex-dependent, as male OTKOs showed reduced anxiety-related behaviors (32,48).
Oxytocin also reduces measures of depression in the forced swim test (FST) and tail suspension test. In the FST, rats treated with an OT analog spent less time immobile and more time swimming and climbing the walls of the chamber than saline-treated animals, indicating an antidepressant effect (49). Similarly, ICV OT or OTA administration showed a dose-dependent decrease or increase in immobility in both assays, respectively (50,51). Interestingly, the antidepressant effect of OT was not blocked by a selective OTA, suggesting that OT's antidepressant effects are not OTR-mediated (50).
REGULATION OF OXYTOCIN FUNCTION BY STEROID HORMONES
Steroid hormones are a broad family of hormones that includes the estrogens, androgens, progestins, mineralocorticoids, and glucocorticoids. These hormones can readily cross the cell membrane, where they bind and activate their respective intracellular receptors. Steroid receptor proteins have DNA- and ligand-binding domains, and unliganded steroid receptors are maintained in an inactive state by a complex of chaperone proteins (52). Upon ligand binding, the receptors dimerize, translocate into the nucleus, bind DNA promoters, and recruit cofactors and transcription machinery to promote gene transcription (53). Steroid hormones have been found to alter OT signaling. Estrogens can act in a synergistic manner with OT, not only by enhancing its anxiolytic effects (54) but also by increasing OTR levels in the mouse brain (55). In humans, a single dose of estradiol was sufficient to increase plasma OT levels in women (56). Similarly, testosterone alters OTR expression differently depending on brain region (21). Progesterone is important in pregnancy maintenance, and in vitro studies found that progesterone could inhibit OT binding to the OTR (57). Also, treatment with a synthetic glucocorticoid significantly altered OTR expression in various brain regions, such as the amygdala, BNST, and VMH (58).
Understanding OT regulation by sex steroids is important since anxiety and depressive disorders show a large gender disparity (38), which may be related to circulating steroid hormone levels. Testosterone has been shown to decrease HPA axis activity (59,60), whereas estrogens can either increase (60,61) or decrease (62,63) HPA axis activity, and these alterations may in part occur through modulation of OT activity. The differences in the observed effects of estrogens on behavior and neuroendocrine responses to stress may relate to their differential activity on ERα and ERβ. Activation of ERα can increase HPA axis activity, whereas activation of ERβ has the opposite effect (61,64).
Although ERα-mediated activity modulates OTR transcription, ERβ-mediated activity has been found to alter Ot mRNA levels (65,66). Moreover, androgen modulation of OT appears to be mediated in part by the testosterone metabolite 3β-diol, which activates ERβ, allowing binding to the Ot promoter to increase Ot mRNA (67).
ESTROGEN RECEPTOR β AND OXYTOCIN INTERACTIONS IN REGULATION OF HPA AXIS AND ANXIETY-RELATED BEHAVIORS
Activation of ERβ reduces HPA axis activity, as seen by reductions in ACTH and CORT levels, in mice (68) and rats (60,69,70) following a stressor. ERβ receptors are expressed widely throughout the brain and often overlap with ERα expression (71), except in the PVN of rats, where only ERβ is expressed (72). Interestingly, approximately 85% of OT neurons in the PVN co-express ERβ (72), and activation of ERβ within the PVN, with the ERβ-specific agonist diarylpropionitrile (DPN) or the testosterone metabolite 3β-diol, reduces HPA axis activity following restraint stress in rats (61,73). Treatment with estradiol increases Ot mRNA expression in the brains of wild-type mice, but not in ERβ knockout (ERβKO) mice, in both males (65) and females (66). This ERβ-mediated increase in Ot mRNA was specific to the PVN and was not seen in the MPOA, SON (65), medial amygdala, or VMH (66). The substantial overlap in the distribution of ERβ and OT in the PVN suggests a potential interaction between the two in the regulation of HPA axis activity. As previously discussed, activation of ERβ reduced HPA axis reactivity and anxiety-like behaviors in rats and mice (64,68,70). ICV treatment with OTA, however, blocked the ERβ agonist-mediated reduction of anxiety-related behaviors and CORT secretion (70), suggesting interaction between ERβ signaling pathways and OTergic pathways in the control of anxiety-related behaviors and HPA axis reactivity in stress. Currently, the mechanisms involved in the crosstalk between these two pathways are not completely understood.

Figure 1. (A) (1) Estrogen enters the cell and binds to inactive ERβ. In its inactive form, ERβ is bound to a complex of chaperone proteins. (2) Upon binding its ligand, ERβ dimerizes. (3) The dimerized receptor binds to the composite hormone response element (cHRE) of the oxytocin promoter. Co-activators, such as SRC-1 and CBP, are recruited along with the transcription machinery to promote transcription. (4) Ultimately, the oxytocin peptide is produced. (B) Oxytocin signaling in the brain. Oxytocin is produced in neurons of the PVN and supraoptic nucleus (SON). Oxytocin neurons from both regions project to the posterior pituitary (P Pit). In addition to this release, the PVN also sends oxytocin projections throughout the brain (5). ERβ is expressed in approximately 85% of neurons in the rodent PVN but not in the SON (72). This suggests that ERβ could play a role in increasing oxytocin production in regions important in responding to stress and can subsequently influence brain areas that express oxytocin receptors [shown in blue; (8, 13)]. BNST, bed nucleus of the stria terminalis; DR, dorsal raphe nucleus; LS, lateral septum; MnR, median raphe nucleus; PFC, prefrontal cortex.
Recent studies have begun to investigate the complex interaction between ERβ and the Ot promoter. Using a mouse hypothalamic cell line expressing ERβ and OT, Sharma et al. (74) demonstrated Ot promoter occupancy by ERβ. The Ot promoter has a composite hormone response element, which allows for steroid receptor binding and regulation of Ot gene transcription by ERs and other members of the nuclear receptor family, but not by the other steroid hormone receptors (75). Treatment of a neuronal cell line with the ERβ agonists 3β-diol, DPN, or estradiol elicited increases in Ot mRNA levels and Ot promoter occupancy (67,74). In tandem with ERβ occupancy of the Ot promoter, CREB-binding protein (CBP) and steroid receptor coactivator (SRC)-1 were found to occupy the Ot promoter, leading to increased acetylation of histone H4 in the presence of 3β-diol. Taken together, the data suggest that, in the presence of 3β-diol, ERβ binds the Ot promoter and recruits the ligand-dependent coactivator SRC-1, which binds CBP and forms a functional complex that acetylates histone H4 to drive Ot gene expression (74). The role of ERβ in OT signaling at the molecular level and its larger role in OT signaling throughout the brain are summarized in Figure 1. Further studies are needed to determine the extent of ERβ binding to the Ot promoter, the co-activators recruited, and how this interaction modulates HPA axis function in vivo.
CONCLUSION
Oxytocin has a wide range of roles in the brain, offering interesting and important directions for research. Current data suggest that the OT neurons of the PVN provide the principal OTergic innervation of the forebrain. The function of OT, through OTRs, is regionally specific; however, the localization of OTRs varies across species, age, and sex, so separating the effects of these variables is necessary to determine how animal studies translate to humans. Modulators of the OT system, particularly the steroid hormones, also provide additional regulatory targets, since OT modulates HPA axis reactivity and participates in many diverse functions. In particular, ERβ is expressed by many neurons of the PVN, and ERβ activation increases OT synthesis and reduces anxiety and neuroendocrine responses in animals. Hence, such targets may be fruitful directions for future focus.
FUNDING
The research programs of the authors have been funded by NS039951 (RJH) and CH062512 (SKM).
"year": 2015,
"sha1": "cc85dfb99f4f4719080d11aa4c2b3ba8d522a680",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fendo.2015.00160/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cc85dfb99f4f4719080d11aa4c2b3ba8d522a680",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Allometric Equations for Shrub and Short-Stature Tree Aboveground Biomass within Boreal Ecosystems of Northwestern Canada
Aboveground biomass (AGB) of short-stature shrubs and trees contains a substantial part of the total carbon pool within boreal ecosystems. These ecosystems, however, are changing rapidly due to climate-mediated atmospheric changes, with an overall observed decline in woody plant AGB in boreal northwestern Canada. Allometric equations provide a means to quantify woody plant AGB and are useful for understanding aboveground carbon stocks as well as changes through time in unmanaged boreal ecosystems. In this paper, we provide allometric equations, regression coefficients, and error statistics to quantify the total AGB of shrubs and short-stature trees. We provide species- and genus-specific as well as multispecies allometric models for shrub and tree species commonly found in northwestern boreal forest and peatland ecosystems. We found that the three-dimensional field variable (volume) provided the most accurate prediction of shrub multispecies AGB (R² = 0.79, p < 0.001), as opposed to the commonly used one-dimensional variable (basal diameter) measured on the longest and thickest stem (R² = 0.23, p < 0.001). Short-stature tree AGB was most accurately predicted by stem diameter measured at 0.3 m along the stem length (R² = 0.99, p < 0.001) rather than by stem length (R² = 0.29, p < 0.001). Via the two-dimensional variable cross-sectional area, small-stature shrub AGB was combined with small-stature tree AGB within one single allometric model (R² = 0.78, p < 0.001). The AGB models provided in this paper will improve our understanding of shrub and tree AGB within rapidly changing boreal environments.
Introduction
Ecosystems in northwestern Canada are changing rapidly due to a warming climate, drier conditions, an extended growing season, and climate-mediated increases in the frequency and intensity of disturbances, such as wildfire, permafrost thaw, insect and pathogen outbreaks, and anthropogenic natural resource extraction (e.g., [1-3]). One of the significant outcomes of climate-mediated change in these environments is the increased abundance of short-stature vegetation, such as shrubs [4,5] and low-productivity and juvenile trees, in particular where wildfire disturbance sets ecosystems back to an early successional stage post-fire [4] or in the rapidly changing transition zones between elevated forests and adjacent peatlands due to permafrost thaw [6]. Increased shrub cover influences important ecosystem functions, such as energy balance and hydrology [6] at local to regional scales and greenhouse gas/carbon-climate cycle feedbacks at national to global scales [6,7], which could ultimately exacerbate these changes [8,9]. Shrubs and short-stature trees contain a substantial part of the total aboveground carbon of unmanaged boreal forest and peatland ecosystems, although specific numbers remain largely unknown.
Study Area
Shrub and tree species examined in this study were destructively sampled across the southern margin of the sporadic to discontinuous permafrost zone of the Taiga Plains and Taiga Shield ecozones (Figure 1). The mean annual air temperature varies from −2.5 °C (Fort Simpson, Taiga Plains) to between −3 and −4 °C (Yellowknife, Taiga Shield), while cumulative annual precipitation is 390 mm (Fort Simpson) and 360 mm (Yellowknife), respectively [19,20].
The geology is dominated by lacustrine plains overlain by peatlands with fine- to coarse-textured lacustrine and till in the mid-boreal Taiga Plains, which transitions towards the high-boreal Taiga Shield into nearly level to rolling and hilly bedrock [19,20]. In the Taiga Plains, approximately 25%-50% of the land surface area is covered by peatlands, consisting of peat plateaus underlain by permafrost, bogs, and fens [21]. While shrubs dominantly grow in the transition zones between peat plateaus and bogs or fens, short-stature trees grow within upland forests or on elevated peat plateaus. In the high-boreal Taiga Shield, peatlands cover an area of less than 5% and occupy hollows within and between bedrock exposures [20]. The dominant vegetation in the mid-boreal Taiga Plains consists of black spruce (Picea mariana) and mixed hardwood and softwood forests containing similar species as found in the Taiga Shield ecozone. Taiga Shield ecosystems are dominated by black spruce and jack pine (Pinus banksiana) with abundance of paper birch (Betula papyrifera) and trembling aspen (Populus tremuloides) [19,20].
Shrub Measurements, Destructive Sampling, and Processing
To determine the AGB of small-stature shrubs and trees growing in the mid-boreal Taiga Plains and high-boreal Taiga Shield ecoregions, we derived allometric models for five common shrub genera and species (Alnus spp., Betula spp., Dasiphora fruticosa, Salix spp., and Shepherdia canadensis) and four common tree genera and species (Betula papyrifera, Picea spp., Populus balsamifera, and Populus tremuloides). Plant individuals were measured and destructively sampled within 65 peatland and upland forest ecosystems distributed across the two ecoregions in late July/early August 2018 and 2019. Field sample locations were situated in late-successional sites and in sites disturbed by wildland fire within the last 50 years (Figure 1, Table 1) in order to represent the high variability of boreal ecosystem disturbance by wildfire and permafrost thaw in our allometric models.

Harvested plants were located within <10 m of field transects (Figure 2a), which were set up randomly within each site. Between one and three transects were installed per site. Field transects were 25 m in length, starting in upland forests, traversing into peatlands, and crossing the forest-peatland transition zone perpendicularly. This setup was chosen in order to capture the shrub abundance within the transition zones between peat plateaus and bogs/fens, where the largest and fastest ecosystem changes have been observed [6]. Shrubs were located along the transect by their distance from the transect start and were selected within standing heights ranging from 0.5 up to 3.5 m. A shrub individual was selected when it was alive and mostly free of leaf or stem disturbances. Up to five shrub individuals were sampled per transect.

Measured 1D variables for each individual shrub sample included maximum height (herein "max height" (m)), average maximum height (m), number of individual stems, and basal diameter of each stem (cm). Heights were measured using a tape measure, and diameters were measured with a caliper. Previous studies have examined the relationship between the AGB of a single stem and a 1D independent variable measured on the same stem [11,16]. However, boreal shrubs are commonly multi-stemmed. In order to analyze how accurately 1D field-measured variables from a single stem can predict the AGB of the entire plant, we tested the basal diameter of the thickest stem (herein "max basal diameter" (cm)) and the length of the longest stem (herein "max stem length" (m)) within allometric relationships.
To test a 2D variable, we transformed each basal diameter to a cross-sectional area (cm²) and summed these to a total cross-sectional area per shrub individual. For the 3D variable, we measured the extent of the uppermost foliage layer perpendicular to the transect (herein "width" (m)) and parallel to the transect (herein "line-intercept cover" (m)) using a tape measure. The 3D shrub volume (m³) was then calculated as follows:

volume (m³) = max height (m) × line-intercept cover (m) × width (m) (1)

Following the measurements in situ, shrubs were clipped directly above the soil surface (Figure 2c) and stored in paper bags for further processing. Dead stems were not harvested. In the laboratory, harvested samples were air dried for up to four months, separated into stem and leaf parts, and oven dried at 60 °C for 48 h [22]. Twigs and fruits were included as leaf parts. The total AGB was determined as dry weight (g) by weighing each shrub part (woody and leafy) and summing the dry weights of all parts per shrub individual.
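As a quick illustration of the 2D and 3D predictor calculations described above, here is a minimal Python sketch. The function names are ours, and the volume follows the box-type product of Equation (1) as given above.

```python
# Minimal sketch of the 2D and 3D predictor calculations described above.
# Function names are ours; volume follows Equation (1).
import math

def shrub_volume_m3(max_height_m, line_intercept_cover_m, width_m):
    # 3D predictor, Equation (1): box volume spanned by the three extents.
    return max_height_m * line_intercept_cover_m * width_m

def total_cross_sectional_area_cm2(basal_diameters_cm):
    # 2D predictor: circular cross-sectional area per stem, summed per shrub.
    return sum(math.pi * (d / 2.0) ** 2 for d in basal_diameters_cm)

# Example: a three-stemmed shrub, 1.2 m tall, with a 0.8 m x 0.6 m foliage extent.
print(shrub_volume_m3(1.2, 0.8, 0.6))                   # 0.576 m^3
print(total_cross_sectional_area_cm2([1.5, 2.0, 1.0]))  # ~5.69 cm^2
```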
Tree Measurements, Sampling, and Processing
Small-stature tree AGB was collected in late July 2019 along 20 random transects adjacent to the permanent sample plots set up by the Canadian Forest Service, located near Fort Liard, Northwest Territories (NWT), Canada. Live trees were chosen from the understory and open areas across different height ranges, determined in 0.5 m intervals up to ≤4.5 m. In all cases, samples were mostly free of foliage disturbance/mortality and stem blemishes. In situ 1D tree measurements included stem length (m) and stem diameter (cm) measured at 0.03, 0.15, 0.30, and 1.3 m along the stem length, starting from the average ground surface surrounding the tree. Stem diameters were transformed to cross-sectional area (cm²) per tree individual to provide a 2D variable analogous to that for shrubs and thus offer the potential for a joint shrub and juvenile/low-productivity tree allometric equation. 3D volume was not measured for trees because tree AGB can best be predicted with diameter, stem length, or both variables combined (e.g., [14]). Following measurements in situ, trees were cut as close to the ground surface as possible and packed into large paper bags to be transported back to the University of Lethbridge. In the laboratory, trees were separated into stem, branch, and leaf components after air drying of up to four months. Branches were cut off directly at the stem. Twigs and fruits were included as leaf parts, while bark was included as part of the stem. Dead branches were not included in the analysis. Oven drying and biomass derivation were completed using the methods described above for shrubs.
Derivation of Aboveground Biomass Allometric Equations
In situ structural measurements of harvested shrubs and trees were used to determine the most accurate 1D, 2D, or 3D independent variables to predict AGB. Three different forms of single-variable regression analysis, which are most commonly used in biomass allometry [11,16,17,23-25], were tested for each 1D, 2D, and 3D independent variable per shrub and tree genus and species and for the pooled data. These were used to determine the most descriptive regression model of AGB for genus/species, multispecies shrubs, and multispecies trees, as well as for all trees and shrubs combined. The first allometric biomass model (2) uses linear regression of the log-transformed dependent (y) and independent in situ (x) variables (herein "linear logarithmic regression (LLR)"):

ln(y) = ln(β) + α·ln(x) (2)

where α and β are the regression coefficients. The back-transformation to an arithmetic scale was achieved using Equation (3):

ŷ = e^(ln(β) + α·ln(x)) (3)

However, the back-transformation resulted in a skewed distribution of ŷ [23]. Baskerville [23] reported a general underestimation of 10%-20% when back-transforming the logarithmic regression estimates of AGB without correcting for skewness. To set this into context, we compared the results of LLR with the results of linear logarithmic regression with correction (herein "LLRC") (4)-(6):

ln(ŷ) = ln(β) + α·ln(x) (4)

ε = e^(MSE/2) (5)

ŷ = e^(ln(β) + α·ln(x)) · ε (6)

where ε represents a multiplicative correction factor of the back-transformation, with MSE being the mean square error of the regression [23,25]. This correction removes the bias in Equation (3), which occurs following the back-transformation from a normal distribution of ln(ŷ) for a given ln(x). LLR results are presented only to set LLRC model results into context, and usage is not recommended without the provided correction factor.

To avoid the problem of skewness, a majority of the research on biomass allometry has used the untransformed nonlinear relationship between dependent and independent variables to predict tree [17] and shrub [11,16,17] AGB. These models use iterative nonlinear least squares regression via a power function (herein "nonlinear least squares regression (NLS)") with an additive error term ε:

y = β·x^α + ε (7)

The NLS function for biomass prediction is available in statistical software packages (e.g., the "nls" function in the "stats" package in R [26]). The default nonlinear equation implemented in R assumes a homogeneous variance of the regression residuals [25]. However, tree and shrub biomass data can be inherently heteroscedastic, such that the assumption of homogeneous residual variance across the range of the independent variable could lead to biased model predictions [25]. Although weights can be specified in NLS functions, these should be applied when both arithmetic and logarithmic variances do not show uniformity [23]. Our AGB data showed uniform variances on arithmetic scales for most species and on logarithmic scales for all species. To understand the performance of using a non-weighted nonlinear model that does not address potential heteroscedasticity within the data, we compared NLS (7) with the results of the logarithmic-based models LLR (2)-(3) and LLRC (4)-(6).
The modeled biomass results of these three allometric models were evaluated using the root mean square error (RMSE), coefficient of determination (R²), and regression residual analysis. Residual analysis was performed using visual inspection of the relationships between dependent and independent variables, as well as the total percentage error (%) derived via Equation (8):

Total error (%) = 100 × (Σŷ − Σy) / Σy (8)

where ŷ and y are the predicted and measured AGB values, respectively, summed over all individuals. The significance of the differences between the genus/species-specific AGB model means and the multispecies model means was evaluated with the t-test for equal variances and Welch's t-test for unequal variances. Equality of variances was tested with the F-test.
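These evaluation statistics are standard; the sketch below shows one plausible implementation (the summation form of Equation (8) follows the reconstruction given above, and the observed/predicted arrays are illustrative).

```python
# Minimal sketch of the evaluation statistics named above: RMSE, R^2, and the
# total percentage error of Equation (8). Data values are illustrative.
import numpy as np

def rmse(obs, pred):
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

def r_squared(obs, pred):
    ss_res = np.sum((obs - pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((obs - np.mean(obs)) ** 2)  # total sum of squares
    return float(1.0 - ss_res / ss_tot)

def total_percentage_error(obs, pred):
    # Equation (8): summed prediction error relative to summed observed AGB.
    return float(100.0 * (np.sum(pred) - np.sum(obs)) / np.sum(obs))

obs = np.array([120.0, 310.0, 55.0, 480.0])   # measured AGB (g), illustrative
pred = np.array([135.0, 290.0, 60.0, 500.0])  # modeled AGB (g), illustrative
print(rmse(obs, pred), r_squared(obs, pred), total_percentage_error(obs, pred))
```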
Biomass Allometric Models
In total, we used 1D, 2D, and 3D input variables and three different forms of regression (LLR, LLRC, and NLS) to predict shrub AGB. Regression coefficients and standard errors were calculated for (a) total shrub AGB per genus/species; (b) all shrubs combined per ecoregion; and (c) all shrubs within both ecoregions combined. Here, (c) represents a multispecies shrub allometric equation that can be applied to all shrubs across the southern half of the study region (NWT, Canada; Figure 1). For short-stature trees, we examined 1D and 2D predictor variables (3D tree volume predictors were not measured) and the three different regression equations described above (LLR, LLRC, and NLS). Similar to shrubs, we have provided the allometric models that most accurately predict (a) total AGB for each tree species; (b) all tree species combined per ecoregion; and (c) for all trees within the two ecoregions combined (herein "multispecies trees"). In addition to individual "shrub" and "tree" models, we have provided a 2D input variable allometric model for total AGB prediction for small-stature shrubs and trees combined (herein "general shrubs and trees"). With the "general shrub and tree" model, we explored the utility of a single 1D (stem length) or 2D (cross-sectional area) variable for combined shrub and tree AGB prediction to understand whether these variables scale uniformly between shrubs and trees. Such combined approaches may have utility for rapid assessment in the field or using less invasive observation techniques (e.g., unmanned airborne vehicles or laser scanning) where vegetation species and type may be indeterminate.
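To illustrate how such a tiered model structure (genus/species-specific models with a pooled multispecies fallback) might be applied in practice, here is a small Python sketch. The coefficient values below are placeholders, not the values from Tables S3-S5.

```python
# Sketch of applying tiered allometric coefficients (power form y = beta*x^alpha).
# Coefficient values are placeholders, NOT the published values from Tables S3-S5.
COEFFS = {
    "Alnus spp.":   {"beta": 260.0, "alpha": 1.05},  # genus-specific (placeholder)
    "Salix spp.":   {"beta": 240.0, "alpha": 1.10},  # genus-specific (placeholder)
    "multispecies": {"beta": 250.0, "alpha": 1.08},  # pooled fallback (placeholder)
}

def predict_agb_g(volume_m3, taxon="multispecies"):
    """Predict total AGB (g) from shrub volume, falling back to the
    multispecies model when no taxon-specific coefficients exist."""
    c = COEFFS.get(taxon, COEFFS["multispecies"])
    return c["beta"] * volume_m3 ** c["alpha"]

print(predict_agb_g(0.58, "Alnus spp."))   # genus-specific prediction
print(predict_agb_g(0.58, "Betula spp."))  # falls back to the multispecies model
```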
Results and Discussion
The ranges and averages (± standard deviation) of the predictor variables for the 205 shrubs and 106 trees are provided in Supplementary Material 1, Tables S1 and S2, respectively. Regression coefficients, correction factors, and standard errors are presented in Supplementary Material 2, Tables S3-S5, and can be input into Equations (2)-(7). For all allometric models, regression coefficients were positive, indicating increasing biomass with an increasing predictor variable for total AGB, as was found by Lambert et al. [14].
The applicability of multispecies models to predict total AGB for single genus and species has been shown for shrubs by He et al. [16]. Similarly, the differences between our modeled means of total AGB via the multispecies equations and the genus/species-specific modeled means were not significant (p > 0.05). Furthermore, the difference between the modeled means of shrub and tree biomass in previously burned sites and the unburned sites was not significant (p > 0.05). Species-specific coefficients for input into the allometric Equations (2)-(7) are provided in Supplementary Material 2, Table S3, using volume as the predictor, and Supplementary Material 2, Table S4, using cross-sectional area as the predictor.
Comparison of 1D, 2D, and 3D Variables for Shrub Total AGB Prediction
For the genus/species-specific 1D-based models using max stem length or max basal diameter as input into each of the three allometric models (LLR, LLRC, and NLS), Betula spp., Dasiphora fruticosa, and Salix spp. had lower R² (0.005 ≤ R² ≤ 0.325) compared to Shepherdia canadensis (0.433 ≤ R² ≤ 0.809, Table S6). In addition, Dasiphora fruticosa was the only species for which stem length was not significantly related (p > 0.05) to the dependent variable of measured total AGB. For the multispecies shrub models (pooled for all shrub genera and species), the use of 1D predictor variables yielded the lowest model fits (RMSE), ranging from 262 (NLS) to 318 (LLRC) g for max stem length and 252 (NLS) to 388 (LLRC) g for max basal diameter (Table 2). These results are in contrast with previous allometric models for boreal [11,16,17] or subtropical [27] shrubs, where the 1D variable basal diameter of the longest stem had provided the most accurate prediction of total AGB. However, 1D field variables, although related to the dependent variable (p < 0.001) (with the exception of max stem length of Dasiphora fruticosa), did not explain total AGB variability when considering all stems of the entire plant (0.228 ≤ R² ≤ 0.335, Table 2, Figure 3a,b). This was true for each genus/species model as well as for the multispecies equation, with the exception of Shepherdia canadensis. For this species, max basal diameter was a similarly good predictor variable (R² = 0.809) to cross-sectional area (R² = 0.738) and volume (R² = 0.765, Table S6). The performance of the total AGB models increased for all other genera and species, as well as for all genera and species combined, when using the 3D predictor variable of volume, with R² ranging between 0.684 (Betula spp. and Dasiphora fruticosa) and 0.882 (Alnus spp.). The RMSEs for the multispecies models ranged from 141 (NLS) to 144 (LLR and LLRC) g, with R² of ~0.790 using any of the three models (LLR, LLRC, and NLS; Table 2). Figure 3a-d shows the relationship between the three models for 1D, 2D, and 3D variables for the multispecies shrub AGB.
Assuming no change in the multiplier (β) (which might be considered analogous to a density attribute), scaling from a 1D measurement to a 3D property requires a higher exponent (α) compared to scaling from a 2D or 3D measurement (Table 3). Therefore, predictions that are extrapolated from lower to higher dimensions contain more inherent model-based uncertainty than predictions requiring no dimensional extrapolation. However, field volume observations consisted of three single measurements and therefore might contain a high overall measurement uncertainty compared to a single 1D measurement. The exact quantity of model vs. field measurement error propagation is unknown, but the net outcome of the tests performed shows that 3D volume produced the highest AGB model accuracies, followed by 2D and then 1D models. Better model performance and linearity via the 3D predictor variable of volume may also be explained by the structural variability of multi-stemmed shrubs. For example, shrub stems can grow comparably long while being simultaneously thinner rather than being shorter but thicker, so the total dry weight of the shrub with the longer stems may be lower than the dry weight of a shrub that has shorter but thicker stems. This variability is represented in the scatterplots of Figure 3a,b and illustrates that shrub structural variability cannot be sufficiently explained using a measurement from one single perspective alone. The structural heterogeneity of shrubs is a function of site conditions, such as nutrient, water, and light availability. To capture structural heterogeneity, a measurement is needed that describes the shrub structure from three different perspectives. For example, a taller shrub with a single stem will be narrower in width and cover than a shrub with many stems extending in multiple directions. If we assume that the shrub with many stems has a larger width, then it is also likely that the shrub with many stems will have more biomass (dry weight). We demonstrated that neither max stem length nor max basal diameter could be used to predict the dry weight of multi-stemmed shrubs. Volume, however, captured the shrub extent and directional growth and therefore predicted total AGB with less total model uncertainty. The exception of better model performance using max basal diameter for Shepherdia canadensis can be explained by the comparably low number of stems for each harvested individual (<12 stems per plant) and the observed uniform growth of this species in the areas sampled.
A second alternative to volume is the measurement of basal diameters for all stems per plant, converted to cross-sectional area and summed. This is because (a) stem count is represented and (b) shrubs with larger stem counts usually have greater extents (width and line-intercept cover) and dry weight compared to shrubs with lower stem counts. To improve the accuracy of AGB predictions for northern boreal shrubs, measurements to determine volume (max height, line-intercept cover, and width) are recommended. These can be measured rapidly in the field. 1D measures may take slightly less time for shrub individuals that have developed a low number of stems. However, 1D measurements result in both over-and underestimation of AGB depending on the regression form used, especially for shrubs with a high number of stems. 2D cross-sectional area provides the second-best predictions of shrub AGB, although it is slightly more time intensive to measure the basal diameter of each stem per shrub individual. Here, the time required increases with number of stems.
Comparison of Regression Models for Shrub Total AGB Prediction
With regard to the model comparisons (LLR, LLRC, and NLS), NLS produced the best model fits for each shrub genus and species (Supplementary Material 3, Table S6) as well as for the multispecies data (Table 2). Here, we found significant differences between the 1D and 3D models. RMSE varied between 251.68 g (NLS, using max basal diameter) and 317.95 g (LLRC, using max stem length) and was greatly reduced with volume as the input variable (RMSE between 141.04 g (NLS) and 144.12 g (LLRC)). For the multispecies data, NLS and LLRC overestimated total AGB for all 1D, 2D, and 3D models, while LLR consistently underestimated total AGB (Table 2). NLS produced the best model fit, independent of the variable used, while the 3D models of all regression forms resulted in the best AGB predictions (RMSE = 141.04 g, R² = 0.790 (NLS); RMSE = 144.08 g, R² = 0.788 (LLR); RMSE = 144.12 g, R² = 0.788 (LLRC); Table 2). However, although NLS produces slightly better model fits, nonlinear models require an even variance of errors across the domain of the predictor variable in order to perform valid comparisons of model uncertainties and regression coefficients amongst datasets (e.g., [23]). Residual analysis of our models showed that the errors were free of heteroscedasticity. However, when models are transferred to different areas and data, we recommend using the LLRC-based models. This is because biomass data can contain natural heteroscedastic variation. Heteroscedasticity needs to be accounted for in model development to ensure that the model results do not contain bias [25]. For example, Mascaro et al. [25] reported a bias of ~100% overestimation when extending predicted small-tree (diameter at breast height (DBH) range 2-12 cm) aboveground biomass to stand-level biomass using NLS. For model transfer purposes, we have provided regression coefficients and error statistics not only for our best models based on NLS but also for our LLRC models (Supplementary Material 2, Table S3).
Comparison of 1D and 2D Variables for Tree Total AGB Prediction
For the short-stature tree AGB equations, diameter measured at 0.3 m stem length resulted in better model fits compared to stem length for each genus and species (Supplementary Material 3, Table S7). For the multispecies data, total AGB prediction using stem length (1D variable) provided the lowest goodness of fit (highest RMSE and lowest R², Table 4, Figure 4), while inclusion of stem diameter at 0.3 m, or, equivalently, cross-sectional area at 0.3 m, improved predictions by 87% using LLRC or NLS (Table 4).
For multispecies tree total AGB prediction, the exponent α was greater for the 1D model based on stem length and close to unity for the 1D diameter-at-0.3-m model and, equivalently, the 2D cross-sectional-area-at-0.3-m model (Table 5). The 2D cross-sectional area model produced results equivalent to the 1D diameter model but allowed us to effectively reconcile tree with shrub AGB predictions. A combined prediction of shrub and tree AGB has not yet been developed for this study region, but it would allow the prediction of AGB for boreal short-stature shrubs and trees (<4.5 m) as well as tall-stature trees (>4.5 m) in just a few steps (see Section 3.5). When transferring these allometric models to different areas within the region, we recommend measuring the diameter at 0.3 m stem length.
Comparison of Regression Models for Tree Total AGB Prediction
For the prediction of tree total AGB per genus/species and for all genera/species combined, NLS achieved the lowest RMSE, highest R², and lowest total percentage error relative to measured biomass (<−5% to 4%), while the dependent and independent variables were significantly related (p < 0.001, Table 4 and Table S7). LLR predictions resulted in the highest RMSE, similar R², and the highest total percentage error relative to the measured biomass compared with LLRC and NLS for each genus/species and for all data combined. The single exception was the prediction of total AGB for Picea spp. based on stem length (Table S7). Using LLR, the prediction based on stem length resulted in an underestimation of total AGB of −51%, and an underestimation of −20% when using diameter or cross-sectional area, respectively, as the input variable for the multispecies models. LLRC had comparably lower RMSE, similar R², and underestimated total multispecies AGB by −6% (stem length) to <−10% (diameter at 0.3 m, cross-sectional area at 0.3 m). In order to address potential heteroscedasticity effects, we have provided the regression coefficients for both NLS and LLRC models with cross-sectional area (measured at 0.3 m stem length) as the predictor variable (Supplementary Material 2, Table S4).
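For reference, a small helper computing the error statistics used throughout this comparison; defining total percentage error as the difference between summed predicted and summed measured biomass is an assumption about the paper's convention:

```python
import numpy as np

def error_stats(y_obs, y_pred):
    """RMSE, R^2, and total percentage error of predictions vs. measurements."""
    y_obs = np.asarray(y_obs, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = float(np.sqrt(np.mean((y_obs - y_pred) ** 2)))
    r2 = 1.0 - np.sum((y_obs - y_pred) ** 2) / np.sum((y_obs - y_obs.mean()) ** 2)
    total_pct_err = 100.0 * (y_pred.sum() - y_obs.sum()) / y_obs.sum()
    return rmse, float(r2), float(total_pct_err)
```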
Comparison of Regression Models for General Shrub and Tree Total AGB Prediction
The prediction of total AGB for shrubs and trees combined resulted in predictive capability similar to the multispecies shrub and multispecies short-stature tree models (R² ≥ 0.770, p < 0.001). This was determined using the 2D independent variable of cross-sectional area, measured at the base for shrubs and at 0.3 m stem length for trees (Table 6, Figure 5). Relating modeled total AGB to measured total AGB showed no evident bias in the 2D-based prediction of combined shrub and tree AGB in comparison to the 3D multispecies shrub and 2D multispecies tree AGB models. This is depicted in Figure 6, which shows modeled AGB in relation to measured AGB for the 2D general shrub and tree AGB model (Figure 6e,f) in comparison to the 3D multispecies shrub (Figure 6a,b) and 2D multispecies tree (Figure 6c,d) AGB models. Similar to the multispecies shrub models, NLS achieved the lowest RMSE and highest R² (RMSE = 94.80 g, R² = 0.776). The RMSE increased by 53% for the LLR model (RMSE = 202.17 g, R² = 0.770) and by 54% for the LLRC model (RMSE = 206.37 g, R² = 0.770). Compared to Ali et al. [27], who derived best model fits for combined shrub and tree AGB prediction using the diameter of the longest stem and total plant height combined, our model results show that AGB of boreal plants can be predicted with a simpler one-variable model using cross-sectional area. For our combined models, the exponent α was greater for the 1D model based on stem length and close to unity for the 2D cross-sectional-area-at-0.3-m model (Table 7). However, stem length of shrubs and trees was also, if more weakly, related to measured total AGB (p < 0.001, Table 6) and thus represents an alternative to cross-sectional area with potential for rapid field measurement or non-invasive observation situations (e.g., remote sensing via airborne lidar), where it may be acceptable to trade accuracy at the individual sample level for greater overall population representation. For model transfer purposes, we recommend the use of the LLRC regression coefficients and correction factor in order to address heteroscedasticity.

Figure 6. Measured total AGB related to modeled total AGB using LLRC and NLS for the 3D-based equations for multispecies shrubs (a,b) and the 2D-based equations for multispecies trees (c,d) as well as general shrubs and trees (e,f).
Conclusions
Our analysis has shown that AGB of shrubs can be modeled with higher accuracy when using a 3D field variable, such as volume. Short-stature tree AGB can be predicted most accurately with the stem diameter measured at 0.3 m stem length. In addition, we found that shrub AGB can be reconciled with short-stature tree AGB when using total stem cross-sectional area as the predictor variable. Based on the two best models, we have provided regression coefficients and error statistics for the modeling of short-stature shrub and tree AGB for the region of sporadic to discontinuous permafrost in NWT, Canada. For model uncertainty propagation to total AGB predictions, we have provided the standard error of each model coefficient. We have provided species- and genus-specific as well as multispecies allometric models for shrubs and trees commonly found in boreal forest and peatland ecosystems. These equations are necessary for improving understanding and quantification of biomass change and the potential implications for carbon pools in northern environments, which are highly susceptible to climate change.
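Since the conclusions point to using the published coefficient standard errors for uncertainty propagation, a first-order (delta-method) sketch is shown below; ignoring the covariance between β and α is an assumption that can over- or understate the propagated error when the coefficients are strongly correlated:

```python
import numpy as np

def agb_prediction_se(x, beta, alpha, se_beta, se_alpha):
    """First-order standard error of AGB = beta * x**alpha given the
    coefficient standard errors (coefficient covariance assumed negligible)."""
    x = np.asarray(x, dtype=float)
    y = beta * x ** alpha
    d_beta = x ** alpha       # partial derivative with respect to beta
    d_alpha = y * np.log(x)   # partial derivative with respect to alpha
    return np.sqrt((d_beta * se_beta) ** 2 + (d_alpha * se_alpha) ** 2)
```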
Supplementary Materials: The following are available online at http://www.mdpi.com/1999-4907/11/11/1207/s1, Table S1: Descriptive statistics by shrub species and genus by plant compartment (range of values in parentheses, followed by average ± standard deviation); Table S2: Descriptive statistics by tree species by plant compartment (range of values in parentheses, followed by average ± standard deviation); Table S3: Volume-based regression coefficient estimates with error statistics to be input into Equations (4)-(7) as appropriate to derive shrub AGB; Table S4: Volume-based regression coefficient estimates with error statistics to be input into Equations (4)-(7) as appropriate to calculate tree AGB; Table S5: Volume-based regression coefficients with error statistics to be input into Equations (4)-(7) as appropriate to calculate general shrub and tree AGB; Table S6: Model performances for shrub total AGB prediction per genus/species (all p values < 0.001); Table S7: Model performances for tree total AGB prediction per genus/species (all p values < 0.001). | 2020-11-19T09:08:25.959Z | 2020-11-16T00:00:00.000 | {
"year": 2020,
"sha1": "c1561528c69272a590de7d896882e62016b8f921",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1999-4907/11/11/1207/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "501d1337297952c66dc769be5dc07c5a4f997c0b",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
28351122 | pes2o/s2orc | v3-fos-license | Assessment of atherosclerotic risk among patients with epilepsy on valproic acid, lamotrigine, and carbamazepine treatment
Objective: To compare the long-term effects of carbamazepine (CBZ), valproic acid (VPA), and lamotrigine (LTG) as monotherapy on the markers of vascular risk. Methods: The present cross-sectional study was carried out at the Department of Neurology, Jinnah Postgraduate Medical Centre (JPMC), Karachi, Pakistan, from 2012 to 2013. We selected 120 adult patients with epilepsy and 40 control subjects. The patients with epilepsy were divided into 3 groups according to the use of antiepileptic drugs (AEDs) (CBZ, n = 40; VPA, n = 40; and LTG, n = 40). All participants' total cholesterol (TC), triglycerides (TG), low-density lipoprotein cholesterol (LDL-c), very low-density lipoprotein cholesterol (VLDL-c), high-density lipoprotein cholesterol (HDL-c), ratio of TC/HDL-c, ratio of LDL-c/HDL-c, body mass index (BMI), and blood pressure were determined. Results: In patients with epilepsy, CBZ and VPA treatment caused a noteworthy increase in the concentrations of TG, TC, and LDL-c compared with LTG treatment and the control group (p<0.001). The HDL-c significantly decreased in CBZ-, VPA-, and LTG-treated patients as compared with controls (p<0.001). The ratios of LDL-c/HDL-c and TC/HDL-c significantly increased in VPA- and CBZ-treated groups compared with the LTG-treated and control groups, while the ratio was also considerably elevated in patients treated with CBZ as compared with the patients treated with VPA. The weight and BMI of the patients treated with AEDs were higher (p<0.01). Conclusion: Patients with epilepsy on CBZ or VPA have altered vascular risk markers that may lead to atherosclerosis, while LTG-treated patients have less alteration in lipid profile.
Epilepsy is a chronic non-communicable neurological disorder that influences people of all ages, races, and social classes all over the world. There are more than 50 million people in the world suffering from epilepsy; 80% of these people live in developing regions, and, annually, 2.4 million new cases arise. 1 It is reported that with proper diagnosis and treatment three-fourths of patients could lead normal lives. It was estimated that in Pakistan the prevalence of epilepsy is approximately 10 per 1000 people. 2 Epileptic subjects receive long-term treatment with AEDs; this may cause an alteration in serum lipids and lipoproteins and thus increase the risk of atherosclerosis in epileptic patients. Derangement in lipid profile is considered a significant threatening factor for atherosclerosis. 3 Use of certain AEDs has been linked with an augmented risk of atherosclerosis, suggesting that there could be increased chances of ischemic heart disease in patients with epilepsy. 4,5 A recent study concluded that the process of atherosclerosis accelerates in patients with epilepsy treated with AEDs, whether enzyme-inducing or enzyme-inhibiting AEDs. 6 Derangement in lipid profile by valproic acid (VPA) treatment is debatable; it is reported that it may decrease, increase, or have no effect on the lipid profile. 5,[7][8][9][10] In the literature, there is disagreement regarding the effect of AED therapy on the advancement of atherosclerosis. There is very limited literature on this topic in this part of the world, while most of the published studies have been conducted in Western countries. However, our diet, culture, and genetics differ from the Western world; hence, studies on the consequences of AED treatment in our local population were essential. Therefore, this study was designed to compare the long-term effects of carbamazepine (CBZ), VPA, and lamotrigine (LTG) as monotherapy on the markers of vascular risk.
Methods. The present cross-sectional study was carried out at the Department of Neurology, Jinnah Postgraduate Medical Centre (JPMC), Karachi, Pakistan, which is a tertiary care hospital, from 2012 to 2013. We selected 120 adult patients with epilepsy from the outpatient clinic by the convenience sampling technique (55 males and 65 females). The epileptic patients were divided into 3 groups according to the antiepileptic drug (AED) used (CBZ, n = 40, 18 males and 22 females; VPA, n = 40, 18 males and 22 females; and LTG, n = 40, 19 males and 21 females). All included patients had used an AED as monotherapy for at least 2 years before the start of the study. Patients on other AEDs or any other regular medication, alcoholics, and pregnant or lactating females were excluded. Patients with a history of ischemic heart disease, hypertension, diabetes mellitus, cigarette smoking, endocrine disorders, or autoimmune diseases, or receiving medication that could affect blood lipid levels, were also excluded. Physical examination of all studied subjects was performed, and age, duration, onset, and etiology of epilepsy, as well as duration of AED use, were noted. Forty healthy, age- and gender-matched individuals (26 males and 14 females) were recruited from the general population as the control group. The Ethical Committee of JPMC, Karachi, approved the present study, and written consent was taken from all patients and control subjects.
Five milliliters of blood was drawn from an antecubital vein of each patient, and after centrifugation the serum was analyzed for lipid profile. All samples were collected after 12-14 hours of fasting. The measurement of total cholesterol (TC), triglycerides (TG), and high-density lipoprotein cholesterol (HDL-c) was carried out using the cholesterol oxidase phenol aminophenazone method, with kits supplied by Randox Laboratories Ltd (London, England). Very low-density lipoprotein cholesterol (VLDL-c) and low-density lipoprotein cholesterol (LDL-c) were calculated using the Friedewald formula, that is, LDL-c = TC − (HDL-c + TG/5) and VLDL-c = TG/5.
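A minimal sketch of the stated lipid calculations, assuming all concentrations in mg/dL (the unit for which the TG/5 approximation of VLDL-c holds):

```python
def lipid_fractions(tc, hdl_c, tg):
    """Friedewald calculation: VLDL-c = TG/5 and LDL-c = TC - (HDL-c + TG/5).
    Inputs in mg/dL; the approximation is not valid for very high TG."""
    vldl_c = tg / 5.0
    ldl_c = tc - (hdl_c + vldl_c)
    return ldl_c, vldl_c
```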
The data were analyzed using the Statistical Package for Social Sciences (SPSS Inc., Chicago, IL, USA) version 16. Comparison between the basic characteristics of the control group and the patients with epilepsy in the 3 groups, and of the biochemical parameters among all 4 groups, was carried out by one-way analysis of variance (ANOVA), and the categorical variable (gender) was compared by a Chi-square test. A p-value <0.05 was taken as the level of significance.
Results. The comparison of baseline characteristics of the patients in all 4 groups is shown in Table 1. There was a significant difference in the duration of treatment (p<0.01). The body mass index (BMI) and body weight of the antiepileptic-treated groups were notably high (p<0.01) compared with the age-matched control group (Table 1). The levels of TC, TG, and LDL-c significantly increased in CBZ- and VPA-treated patients compared with the LTG-treated and control groups (p<0.001), while no significant change was observed between the LTG-treated and control groups. The HDL-c significantly decreased in VPA-, CBZ-, and LTG-treated patients compared with controls (p<0.001). The levels of VLDL-c considerably increased in CBZ-treated patients compared with the VPA- and LTG-treated groups and controls (p<0.001), while in the VPA-treated patients it was higher than in the LTG-treated group (p<0.001) (Table 2). The ratios of TC/HDL-c and LDL-c/HDL-c significantly increased in the VPA- and CBZ-treated groups compared with controls and the LTG-treated group, and the ratio also significantly increased in the CBZ-treated group compared with the VPA-treated group (Figures 1 & 2).
Discussion. In the present study, epileptic patients had significantly increased body weight and BMI. Numerous studies have reported variable effects of AEDs on body weight. Three studies observed weight gain with VPA, [11][12][13] and both weight gain 14 and no change in weight 12,13 have been linked with CBZ. However, no change in weight was reported with LTG. 13,15 In our study, it was observed that the levels of TC, TG, and LDL-c significantly increased in CBZ- and VPA-treated patients compared with the LTG-treated and control groups, while no substantial difference was observed between the LTG-treated and control groups. Several similar and dissimilar results were found in the literature. Results of several studies have shown that CBZ and phenytoin (PHT) (enzyme-inducing AEDs) significantly elevate the levels of TC, LDL-c, and triglycerides. 7,16,17 They further suggested that the increased thickness of the intima-media of the common carotid artery in patients treated with PHT or CBZ could be due to deposition of TC and LDLs. Chuang et al 7 reported that the length of treatment with CBZ, PHT, and VPA makes a significant difference to the speeding up of atherosclerosis and plays the main role in this process. 7 The alteration in lipid profile is considered a significant threatening factor for atherosclerosis because elevated levels of LDLs play an imperative role in atherosclerotic development by increasing endothelial permeability, withholding lipoproteins inside the intima of blood vessels, conscripting inflammatory cells, and creating foam cells. 3,18 In the present study, the results of the LTG effects on lipid profile are similar to other studies; 7,19,20 these studies also showed no effects of LTG on lipid profile. Moreover, a study reported an improvement in vascular risk factors when a patient is shifted from an enzyme-inducing AED to LTG (a non-enzyme-inducing AED). 19
A few studies documented no noteworthy differences in lipid profile parameters among the different AED groups; only the HDL-c level was reduced in patients taking VPA. 21,22 In the epilepsy literature, derangement of the lipid profile by VPA treatment is found debatable. 5,7-10 Some studies observed no effect, 7,8 while others established a noteworthy alteration in lipid profile. 9,10,23,24 One study 9 reported a considerable elevation in total cholesterol and LDLs after 12 months of VPA treatment, but levels of TG and HDLs remained the same. 9 In our study, the HDL-c significantly decreased in VPA-, CBZ-, and LTG-treated patients compared with controls. The present results are compatible with several studies. 21,22 The deranged lipid levels in adult patients with epilepsy can be explained by the fact that such patients usually exercise less and are physically less active, and this could be a reason for the low HDL-c concentration in epileptic patients. People with epilepsy, and sometimes physicians themselves, are reluctant to include physical exercise programs as a supportive treatment because of the fear that exercise would trigger seizures, or it may be due to a lack of information. 25 On the contrary, the literature recommends the inclusion of exercise while treating epileptic patients because it has a positive outcome for epilepsy control and for improving the quality of life of epileptic subjects.
The ratios of TC/HDL-c and LDL-c/HDL-c significantly increased in the VPA- and CBZ-treated groups compared with controls and the LTG-treated group, while the ratio also significantly increased in the CBZ-treated group compared with the VPA-treated group. It is considered that the ratios among the different fractions of cholesterol (TC/HDL-c; LDL-c/HDL-c) are important forecasters of atherosclerosis development. Decreased ratios of TC/HDL-c and LDL-c/HDL-c have potent protective effects against atherosclerosis and vice versa. 26 In our epileptic patients, the elevated ratios of TC/HDL-c and LDL-c/HDL-c indicate a higher atherosclerosis risk.
Diet affects the lipid profile levels; therefore, to exclude the influence of diet, we collected all samples from only one public sector hospital. In a country like Pakistan, for several reasons, almost all people who use the public sector hospitals' facilities are from the low socio-economic class. Therefore, their dietary patterns are almost similar, and it seems that the change in lipid profile in patients with epilepsy was due to AEDs.
There are several limitations to our study. First, we did not measure other biochemical vascular risk factors. Second, the thickness of the intima-media of the carotid artery was not determined because of the unavailability of the facilities. Third, a history of the subjects' physical activities could not be taken. Fourth, detailed information on possible differences in dietary habits among groups was not collected, and it is also a single-center study, so our results cannot be generalized.
In conclusion, patients with epilepsy on CBZ or VPA have altered vascular risk markers that may lead to atherosclerosis, while LTG-treated patients have less alteration in lipid profile. Antiepileptic drug treatment also increases body weight and BMI. Hence, careful monitoring of body weight and lipid profile is needed to detect and manage any significant change. The choice of drug when treating patients should also be taken into consideration. | 2018-04-03T02:18:40.745Z | 2017-04-01T00:00:00.000 | {
"year": 2017,
"sha1": "abbf314df1e80049dedc26f2f8ef33e77a6ad06d",
"oa_license": "CCBY",
"oa_url": "https://europepmc.org/articles/pmc5726816?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "abbf314df1e80049dedc26f2f8ef33e77a6ad06d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
52891978 | pes2o/s2orc | v3-fos-license | Lipoprotein-Associated Phospholipase A2 is Linked with Poor Cardio-Metabolic Profile in Patients with Ischemic Stroke: A Study of Effects of Statins
ABSTRACT Objectives: The objective of the study is to investigate the effects of statins on the lipoprotein-associated phospholipase A2 (Lp-PLA2) mass in patients with ischemic stroke. Materials and Methods: A total of 59 patients aged 43-69 years with cerebral stroke were compared to 39 healthy controls matched for age and body weight. The patients were divided into 32 patients on statin therapy, assigned as statin users, and 27 patients not on statin therapy, assigned as nonstatin users. Anthropometric and biochemical measurements were performed, including lipid profile and inflammatory biomarkers. Results: Stroke patients on statin therapy showed comparably low Lp-PLA2 (29.82 ± 3.19 IU/mL) relative to nonstatin-user stroke patients (15.58 ± 5.73 IU/mL). Lp-PLA2 mass levels were positively correlated with body mass index, blood pressure changes, total cholesterol, triglycerides, very low-density lipoprotein, and stroke risk (SR) percentage. Conclusions: Patients on statins with ischemic stroke had low Lp-PLA2 mass levels compared to nonstatin users with ischemic stroke. Lp-PLA2 mass levels were higher in men than in women and correlated with lipid profile and SR in patients with ischemic stroke.
Introduction
Stroke is the second leading cause of death worldwide, accounting for about 11% of total deaths. [1] From 1990 to 2010, the number of strokes declined by about 10%. Most strokes happen in those over 65 years old. The threat of stroke increases exponentially from 30 years of age; 95% of strokes take place at age 45 years and older, but stroke can occur at any age, including childhood. [2] In addition, 40% of global stroke deaths occur in elderly South Asians. [3] Stroke is more common in males than females by about 25%, but stroke-induced death is about 60% more common in females than males. [4] Stroke is a medical emergency in which reduced cerebral blood flow results in brain ischemia and cellular death. Stroke is divided into two types, thrombotic stroke (85%) and hemorrhagic stroke (15%), while a stroke with an idiopathic cause is called a cryptogenic stroke. [5] Stroke leads to focal brain dysfunction, which itself damages cerebral cells, creating a secondary ischemic area called the ischemic penumbra. [8] During ischemia, energy and adenosine triphosphate decline, causing neuronal cell membrane damage and interruption of ion pumping. These changes trigger the release of excitatory neurotransmitters such as glutamate, which act on N-methyl-D-aspartate receptors, causing neuronal cell death. [9] Indeed, neuronal and endothelial damage activates the release of pro-inflammatory mediators, leading to the development of brain edema and further brain injury. [10] Therefore, ischemic stroke triggers a cascade of pathological episodes including neuronal excitability, intracellular calcium overload, disturbances of ion homeostasis, lipid peroxidation, and free radical production. [11] These events occur in a time-dependent manner: neuronal excitability changes happen within minutes, while the inflammatory changes occur within hours after stroke. These sequential events might explain the differences in the levels of inflammatory biomarkers following stroke attacks. [12] Inflammation plays a crucial role in the pathophysiology of ischemic stroke, since neuronal damage provokes the release of inflammation-related molecules that activate the immune system. These inflammatory mediators can escape into the cerebrospinal fluid and bloodstream; thus, they are readily detectable following ischemic stroke. [13] Lipoprotein-associated phospholipase A2 (Lp-PLA2) is a phospholipase A2 enzyme encoded by the PLA2G7 gene; it is one of the platelet-activating factor acetylhydrolases, a 45-kDa protein of 441 amino acids. Lp-PLA2 is mainly bound to low-density lipoprotein (LDL) (about 80%) and, to a lesser extent, to high-density lipoprotein (HDL) (about 20%). Lp-PLA2 is produced by inflammatory cells and is responsible for the hydrolysis of oxidized LDL phospholipids. [14] Furthermore, Lp-PLA2 is correlated with atherosclerosis, since it is produced in atherosclerotic plaque, and it is regarded as a biomarker of cardiac diseases. In addition, Lp-PLA2 has been identified as a novel biomarker for coronary artery disease and ischemic stroke. [15] High Lp-PLA2 serum levels are associated with a two-fold increase in the risk of stroke and with poor outcomes; thus, Lp-PLA2 emerges as an excellent predictor of the incidence of stroke. [16] Statins are 3-hydroxy-3-methylglutaryl coenzyme A reductase inhibitors; they inhibit de novo cholesterol biosynthesis and lower LDL as well as triglyceride (TG) levels.
Statins are effective for the prevention of ischemic heart disease and cerebrovascular accidents through LDL-dependent and LDL-independent effects. [17] On the other hand, most statins produce differential effects on Lp-PLA2 activity and mass despite a comparable reduction in LDL serum levels. In contrast, pravastatin increases Lp-PLA2 activity and reduces Lp-PLA2 mass. [18] Therefore, the objective of the present study was to investigate the effect of statins on the Lp-PLA2 mass in patients with ischemic stroke.
Materials and Methods
In this cross-sectional study, a total of 59 patients aged 43-69 years (28 males and 31 females) with cerebral stroke were recruited from the stroke unit and compared to 39 healthy controls matched for age and body weight. All stroke patients were diagnosed by a consultant neurologist and kept under supervision. A full medical history and details of current pharmacotherapy were taken from each patient. The patients were divided into two groups: Group A, 32 patients (17 males and 15 females) previously and currently on statin therapy, assigned as statin users; and Group B, 27 patients (11 males and 16 females) not on statin therapy, assigned as nonstatin users. All patients were selected according to the guidelines and diagnostic criteria of the American Academy of Neurology. [19] The study procedures were carried out in accordance with the Declaration of Helsinki. All patients and healthy subjects gave informed verbal consent for their participation in this study. The study was approved by the Ethical Committee and Clinical Research Editorial Board, College of Medicine, Al-Mustansiriyia University, Baghdad, Iraq, and was conducted from June to September 2017.
Inclusion criteria
Patients with recent cerebral stroke admitted to the stroke unit, with or without statin therapy.
Exclusion criteria
Exclusion criteria were as follows: hemorrhagic stroke, diabetes mellitus, thyroid disorders, valvular heart diseases, acute and chronic liver diseases, acute and chronic kidney disorders, rheumatic and connective tissue diseases, sepsis, acute and chronic infections, complicated stroke, and psychiatric and mental disorders.
Anthropometric measurements
Body weight, height, and body mass index (BMI) were recorded; BMI = weight (kg)/height² (m²). Systolic and diastolic blood pressures were recorded in the supine position from the left arm by an automated digital sphygmomanometer.
Pulse pressure = SBP − DBP; mean arterial pressure = DBP + (SBP − DBP)/3. [20]
Biochemical measurements
After an overnight fast, 10 mL of venous blood was drawn with a sterile needle and plastic syringe from the antecubital area of each patient in the morning; 5 mL was put into a plain tube for routine investigations and 5 mL into an ethylenediaminetetraacetic acid (EDTA) tube for the assessment of the inflammatory biomarkers.
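A short sketch of the anthropometric indices defined above; the mean arterial pressure expression is the conventional one-third pulse pressure estimate:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def pulse_pressure(sbp, dbp):
    """Pulse pressure from systolic and diastolic pressures (mmHg)."""
    return sbp - dbp

def mean_arterial_pressure(sbp, dbp):
    """MAP estimated as DBP plus one third of the pulse pressure."""
    return dbp + (sbp - dbp) / 3.0
```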
Measurement of inflammatory markers
The blood samples were centrifuged for 10 min, and the separated sera were used for evaluation of the Lp-PLA2 serum concentration, namely the mass level of Lp-PLA2, by a specific ELISA kit method in IU/mL (SEA867Hu, double-antibody sandwich, Wuhan USCN Business Co. Ltd, China). High-sensitivity C-reactive protein (CRP) serum levels were assessed by a specific ELISA kit method in mg/dL (Cat. No. ABIN1115432, Wuhan USCN Business Co. Ltd, China). Each sample was measured twice, and the mean of the two values was used to reduce error.
Statistical analysis
Data analysis was performed using the Statistical Package for the Social Sciences (IBM SPSS Statistics for Windows, version 20.0; Armonk, NY, USA: IBM Corp). Results are presented as mean ± SD, percentage, and number. An unpaired Student's t-test was used to determine significant differences between two groups, while a two-way ANOVA test was used to assess differences among multiple groups. Pearson correlation was used to estimate the correlation of the Lp-PLA2 serum concentration with the other study parameters. Differences were considered significant when P < 0.05.
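The named tests map onto standard scipy routines, as sketched below; note that scipy's f_oneway is a one-way ANOVA, shown here as the minimal stand-in for the ANOVA described, and the group arrays and function names are hypothetical:

```python
from scipy import stats

def compare_two_groups(statin_users, nonstatin_users):
    """Unpaired Student's t-test between two groups of measurements."""
    return stats.ttest_ind(statin_users, nonstatin_users)

def compare_all_groups(*groups):
    """ANOVA across several groups (one-way variant via scipy)."""
    return stats.f_oneway(*groups)

def lp_pla2_correlation(lp_pla2_mass, other_parameter):
    """Pearson correlation of Lp-PLA2 mass with another study parameter."""
    return stats.pearsonr(lp_pla2_mass, other_parameter)
```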
Results
Of a total of 98 recruited subjects, 60.20% were patients with stroke and 39.80% were healthy controls. Regarding gender and race, 52.54% of subjects were female and 47.45% male, with a high percentage of the white race (91.5%) compared with the black race (8.5%). Among the patients with stroke, about 57.62% had a positive family history compared with 42.37% with a negative family history.
In the present study, 54.23% of the stroke patients were currently on statin therapy compared with 45.76% not on statin therapy. The duration of statin therapy was 3.12 ± 1.05 years. In addition, several associated diseases were recorded in these stroke patients, including hypertension (74.57%), dyslipidemia (67.79%), ischemic heart disease (18.64%), and asthma (3.38%). Other pharmacotherapy and characteristics are illustrated in Table 1. With regard to the cardio-metabolic profile, statin users showed a better cardio-metabolic profile than nonstatin users in AI, AC, and CRR, and had high-sensitivity CRP (hsCRP) similar (P > 0.05) to that of the healthy controls; the other parameters differed significantly (P < 0.01). Statin therapy was associated with an improvement in lipid profile, AI, AC, CRR, and stroke risk (SR) compared with nonstatin users (P < 0.01). Insignificant differences between statin and nonstatin users were observed in BMI and blood pressure levels. Moreover, stroke patients on statin therapy showed comparably low Lp-PLA2 (29.82 ± 3.19 IU/mL) relative to nonstatin-user stroke patients (15.58 ± 5.73 IU/mL) [Table 2].
Lp-PLA2 mass levels were positively correlated with BMI, blood pressure changes, TC, TG, VLDL, AC, CRR, SR percentage, and hsCRP, but negatively correlated with HDL serum levels in statin-user stroke patients. On the other hand, nonstatin users showed high hsCRP, which correlated strongly with Lp-PLA2 mass (P < 0.01) compared with P < 0.05 in statin users [Table 3].
In the present study, univariate logistic regression illustrated that Lp-PLA2 is an independent predictor of SR in patients with a poor metabolic profile [Table 4]. Furthermore, the differential effects of statins on Lp-PLA2 mass levels were 9.8 ± 2.5 with atorvastatin, 8.95 ± 2.6 with rosuvastatin, 10.11 ± 2.1 with simvastatin, and 12.9 ± 3.1 with fluvastatin; thus, rosuvastatin produced the most powerful reduction effect on Lp-PLA2 mass levels compared with the other types of statins, but not to a statistically significant level (P > 0.05) [Figure 1].
Besides, there was a gender difference in Lp-PLA2 mass levels in patients with ischemic stroke. Male patients had higher Lp-PLA2 mass levels than female patients among both statin and nonstatin users, but the differences were nonsignificant [P > 0.05, Figure 2].
Discussion
Stroke remains a main cause of permanent disability and death worldwide. [23] Inflammation plays an important role in the development of ischemic stroke by a relatively unknown mechanism, since stroke may break the equilibrium between inflammatory and anti-inflammatory reactions. [24] Therefore, inhibition of inflammatory reactions could lessen damage and help recover neurological functions. Moreover, inhibition of systemic inflammation might affect the subsequent outcomes and could deteriorate and worsen cerebral repair and functional recovery following ischemic stroke. LDL-associated Lp-PLA2 is linked with atherogenic potential, while HDL-associated Lp-PLA2, although present at low levels, plays an important role in anti-atherogenic effects. [25] In the present study, patients with ischemic stroke were linked with a poor cardio-metabolic profile, presenting with dyslipidemia, hypertension, and overweight, in agreement with previous findings. [26] In contrast, Doehner et al. reported a protective role of obesity and overweight on stroke outcome, including functional recovery and a reduced mortality rate. [27] Therefore, the obesity paradox may protect against metabolic complications in stroke patients. [28] In the current study, most of the stroke patients were hypertensive with poor therapeutic control, in agreement with the animal model study of Domin et al., which illustrated hypertension as an important risk factor for ischemic stroke. [29] Moreover, Sultan et al. showed a strong association between dyslipidemia and SR, as in the present study. [30] Many observational studies have illustrated associations between lipid biomarkers, mainly TC, LDL, and low HDL, and the incidence of ischemic stroke due to atherogenic effects, prothrombotic changes, and abnormalities in the fibrinolytic cascade. [31,32] Our results revealed elevated high-sensitivity CRP levels in patients with ischemic stroke compared with healthy controls, since increments in hsCRP serum levels are associated with a high risk of cerebrovascular events due to inhibition of endothelial nitric oxide. Stroke patients on statin therapy revealed comparably low hsCRP compared with the nonstatin users, which might be due to the anti-inflammatory effect of statins. In addition, hsCRP serum levels are regarded as a predictive factor for the incidence and recurrence of ischemic stroke. [33] On the other hand, the study of Guo et al. showed that high CRP at admission predicts poststroke complications. [34] Interestingly, Lp-PLA2 mass serum levels in the present study were high in stroke patients compared with the controls, as demonstrated by the recent study of Tian et al., which showed an association between high Lp-PLA2 mass serum levels, but not Lp-PLA2 activity, and SR. [35] This is the main reason for the selection of Lp-PLA2 mass serum levels, rather than Lp-PLA2 activity, to evaluate the potential inflammatory role in ischemic stroke in the present study, as Lp-PLA2 mass serum levels show less time-dependent biological variability than Lp-PLA2 activity. Moreover, levels of Lp-PLA2 activity are linked with TIA and complicated vascular events, whereas Lp-PLA2 mass levels are more specific and accurate than Lp-PLA2 activity in their association with SR in general populations. [36] The findings of the current study point out that patients with ischemic stroke showed elevated levels of Lp-PLA2 mass compared with the controls, corresponding with Polupanov et al., a study that demonstrated a significant association between high Lp-PLA2 mass levels and ischemic stroke.
[37] Likewise, there is a higher expression of Lp-PLA2 and its products in carotid plaque in patients with ischemic stroke. Therefore, higher Lp-PLA2 mass levels were linked with the development and progression of the atherosclerotic changes that are responsible for atherogenesis and ischemic stroke. [38] Many studies have revealed significant differences in Lp-PLA2 mass levels in relation to ethnicity, [39] but ethnic differences minimally affected the results of the present study, since 91.5% of our patients were of the white race compared with 8.5% of the black race.
Definitely, the present study suggests that high Lp-PLA2 mass levels could be used as a biomarker for ischemic stroke, since they are regarded as useful for SR stratification, mainly in Chinese and general populations. [40] The findings of our study revealed that statin therapy produced a significant reduction in Lp-PLA2 mass levels in statin users compared with nonstatin users among patients with ischemic stroke, corresponding with the study of Safarova et al., which revealed a significant effect of statins on the reduction of Lp-PLA2 mass levels in patients with progressive atherosclerosis. [41] In contrast, Otsuka et al.
showed insignificant effects of statins on Lp-PLA2 activity at atherosclerotic lesions, which necessitates the addition of anti-inflammatory agents for further diminution of Lp-PLA2 activity to prevent the progression of atherosclerotic lesions. [42] Moreover, evidence indicates that Lp-PLA2 is also present in central nervous system microglia; thus, central inhibition of Lp-PLA2 activity by statins or a selective Lp-PLA2 inhibitor (GSK2647544) may lead to significant amelioration of cognitive function following ischemic stroke. [43] Furthermore, different statins, including simvastatin, fluvastatin, lovastatin, and atorvastatin, lessen Lp-PLA2 mass serum levels in correspondence with the reduction of LDL-cholesterol serum levels, [44] but pravastatin might increase the serum levels of Lp-PLA2 mass by an unknown mechanism. [45] In addition, the study of Ren et al. demonstrated the potential of rosuvastatin to reduce Lp-PLA2 mass serum levels, due either to the reduction of total LDL or to a preferential reduction effect on Lp-PLA2 mass serum levels. [46] As well, simvastatin reduces LDL-associated Lp-PLA2 through reduction of LDL and augmentation of the clearance of the LDL-associated enzyme. [47] These findings are consistent with the findings of the present study, which illustrated the potent effect of rosuvastatin compared with the lowest effect of simvastatin, which might be due to the different half-lives of the statins. [48] Moreover, rosuvastatin improves endothelial function, inflammatory biomarkers, and vaspin serum levels in obese patients, [49] which might explain its ameliorative effect on Lp-PLA2 mass serum levels in patients with ischemic stroke. Concerning gender differences in Lp-PLA2 mass serum levels, males showed higher levels than females among both statin and nonstatin users. Similar results were revealed by different studies; [50,51] in contrast, a recent study by Lu et al. disclosed higher inflammatory biomarkers in women than in men following cardiovascular ischemic disorders. [52] Therefore, the present study gives an important innovative insight: ischemic stroke patients had higher Lp-PLA2 levels compared with healthy controls, which also correlated with poor cardio-metabolic profile and SR. Statins significantly reduce Lp-PLA2 levels and reduce SR.
Conclusions
Patients on statins with ischemic stroke had low Lp-PLA2 mass levels compared to nonstatin users with ischemic stroke. Lp-PLA2 mass levels were higher in men than in women and correlated with lipid profile and SR in patients with ischemic stroke. | 2018-10-14T17:01:43.719Z | 2018-10-01T00:00:00.000 | {
"year": 2018,
"sha1": "c39d5719c237cfcfb08f12ca80a6b75d3de049fe",
"oa_license": "CCBYNCSA",
"oa_url": "http://www.thieme-connect.de/products/ejournals/pdf/10.4103/jnrp.jnrp_97_18.pdf",
"oa_status": "GOLD",
"pdf_src": "Thieme",
"pdf_hash": "92598c7119b0822f528c59a97a237907cf105338",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
122601804 | pes2o/s2orc | v3-fos-license | Tunable diode laser absorption spectroscopy of argon metastable atoms in Ar/C2H2 dusty plasmas
The tunable diode laser absorption spectroscopy method was used to measure Ar metastable density in order to study the dust growth process in hydrocarbon-containing plasmas. A simple model was proposed that successfully interprets the experimental results of pristine plasmas. The model is also suitable for explaining the influence of dust particle size on metastable density and for examining the dust growth process. The metastable density responded sensitively to the formation of dust particles and their growth in processing plasmas. Using metastable density as an indicator is, therefore, a non-intrusive and effective method for the study of the dust growth process in hydrocarbon-containing plasmas.
Introduction
The interest in plasma-particle interactions in dusty plasmas has grown enormously during the last decade [1,2]. The increased interest was mainly caused by applied research related to materials science [3]- [5] and, recently, also with regard to plasma diagnostics [6]- [8]. Powder formation has been a critical concern for the microelectronics industry, because dust contamination can severely reduce the yield and performance of fabricated devices. Submicron particles deposited on the surface of process wafers can obscure device regions, cause voids and dislocations and reduce the adhesion of thin films [9,10].
In hydrocarbon-containing plasmas, a large amount of dust particles is effectively and continuously created and trapped inside the plasma. Dust particle size and density can reach significant values resulting in a recognizable decrease of transmitted laser light and a remarkable increase of scattered laser light [11]. A dust void is subsequently created at the center of the dust cloud under the action of the ion drag force [12], which becomes significant once the dust particle size and density reach critical values. Meanwhile, a new dust generation forms in the free space. The new generation grows both in size and number density until it is also pushed out.
The growth process of dust generation in a chemically reactive plasma can be divided into three steps [10,13]: nucleation, agglomeration and accretion. In the nucleation phase, the (sub)nanometer-sized protoparticles are formed as a result of homogeneous or heterogeneous processes. Later, in the agglomeration step, these protoparticles, after reaching a critical density, merge together, forming particles with sizes of a few to tens of nanometers. The particles of this size quickly acquire a negative electric charge by ion and electron collection, which requires the plasma to reorganize. In order to compensate the electron losses, the effective electron temperature increases [14], resulting in an increase of ionization and excitation. This reorganization of the plasma is called the α-γ transition [15]. After this transition the particles accrete neutral and ionic monomers to grow into larger particles until all of them are driven away. The accretion process is relatively slow, and no new particle is formed in the accreting dust cloud. The time evolution of the dust growth, therefore, is parameterized by three characteristic times: T_trans, the time the processing plasma needs to reach the α-γ transition; T_void, the time for the appearance of the dust void; and T_p, the growth period of one dust generation.
Tunable diode laser absorption spectroscopy (TDLAS) can support the optimization of industrial plasma processes by permitting highly specific, accurate, and non-intrusive realtime monitoring of species densities. TDLAS offers significant advantages compared with conventional spectroscopy. The spectral width of the laser radiation (a few megahertzs) is much smaller than the width (a few gigahertzs) of the Doppler-broadened absorption profile. The sensitivity and the signal-to-noise ratio are also increased in TDLAS due to the use of a highpower coherent source. The high sensitivity gives TDLAS the ability to detect and measure gas temperature and low concentrations. For example, such measurements have been successfully shown during Al sputtering from a magnetron discharge [17,18]. Moreover, an advantage of TDLAS in studying dusty plasma is that the presence of dust particles does not influence the metastable density measurement. The scattering and absorption effects can be taken out from the intensity profile.
The metastable density is an important plasma parameter. The metastable atoms are abundant and energetic. So far, the role and effects of metastable atoms in dusty plasmas have not been considered to a large extent, especially in the formation and growth of dust particles in processing plasmas. There has been a lack of consideration of these species in such a plasma environment. In the present paper, we study the influence of dust particle size and density on argon metastable density. Metastable density, consequently, will be used as an indicator to investigate the dust growth process in hydrocarbon-containing plasmas.
Experimental set-up
The experiments were performed in the PULVA reactor (figure 1) [19]- [22]. It consists of a vacuum chamber of 40 cm diameter. Two caps each with a small slit (2 mm × 30 mm) were used to decrease the absorption length to 25 cm in order to avoid saturation of the absorption line. A typical absorption spectrum, together with the etalon signal, is displayed in the inset of figure 1. The chamber is pumped by a turbo molecular pump with a pumping speed of 260 liters s −1 , which is backed by a membrane pump. The residual gas pressure is 10 −4 Pa. An adjustable butterfly valve is mounted between the pump and the chamber. Working gases are introduced into the vacuum chamber by two separate flow controllers. As the process gas, an argon/acetylene gas mixture was used. The discharge was driven at 13.56 MHz by a radio frequency (rf) generator coupled to the bottom electrode by a matching network. The bottom electrode has a diameter of 13 cm and is situated near the center of the chamber; the chamber wall serves as the other electrode. Measurements were carried out at working pressures of 1-15 Pa and at rf powers of 1-60 W. The typical electron temperature and the density of pure Ar plasma are 2 eV and 2 × 10 15 m −3 (at 10 W), respectively [23,24].
The laser system consists of a tunable single-mode diode laser and a control unit for diode temperature and diode current (Toptica DL 100). The diode laser utilizes an extended cavity laser set-up with optical feedback into the laser diode from the first order of a spectrally selective grating. The laser light traverses a polarization filter and is directed to a beam splitter. The transmitted light is registered by a photo diode behind a Fabry-Perot etalon to monitor the light frequency [17]. The second light beam traverses the plasma chamber and is detected by a second photodiode.
Figure (caption): Argon term diagram of the 1s and 2p levels (in Paschen's notation). Energy values were taken from the NIST database [16].
Without plasma, the laser intensity, as measured by the second photodiode, linearly increases. In the case of a burning discharge, the photodiode signal shows pronounced variations in the neighborhood of a wavelength of 811.53 nm due to absorption by excited argon atoms (see figure 2). The transmitted intensity follows the Lambert-Beer law of absorption. Under the assumption of a constant Ar density inside the plasma region, and the Doppler broadening of the absorption line, the atom density n_m is related to the integrated absorption profile κ(ν) [26] by

n_m = (4 ε0 m_e c)/(e0² f) ∫ κ(ν) dν = (2 ε0 m_e c κ0)/(e0² f λ0) √(8π k T/m_a),   (1)

where κ0 is the absorption coefficient in the center of the profile, ε0 is the dielectric constant, c is the speed of light, m_e and e0 are the electron mass and charge, respectively, m_a is the atomic mass of Ar, k is the Boltzmann constant, λ0 = 811.53 nm is the central wavelength of the investigated transition and f is the optical oscillator strength for the investigated transition. The width of the absorption signal is related to the temperature T of Ar atoms by the equation

T = m_a λ0² (Δν)² / (8 k ln 2),   (2)

where Δν is the effective full-width at half-maximum of the measured absorption profile. An advantage of metastable argon is that argon essentially consists of the ⁴⁰Ar isotope (99.6%) with zero nuclear magnetic moment; therefore the transmitted photodiode signal can be fitted by a single Doppler profile. A least-squares fit was used to analyze the data. The fit quality was excellent with deviations, on a point-to-point comparison, of less than 1%. The accuracy of the atom density obtained via equation (1) is essentially determined by the estimated accuracy of the underlying oscillator strength f of ±25%, an estimated accuracy of the absorption length inside the plasma of 5% and a negligible statistical uncertainty of less than 0.1%, adding up to an estimated overall accuracy for the absolute mean argon atom density of about ±25%.
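A sketch of how a measured transmission scan could be reduced to κ0, Δν, T, and n_m via equations (1) and (2); the oscillator strength value, the initial guesses, and the function names are assumptions, and the actual analysis may fit differently:

```python
import numpy as np
from scipy.constants import c, e, k, m_e, epsilon_0
from scipy.optimize import curve_fit

M_AR = 39.948 * 1.66053906660e-27   # Ar atomic mass (kg)
LAM0 = 811.53e-9                    # central wavelength (m)
F_OSC = 0.51                        # oscillator strength (assumed value; +-25% per the text)

def doppler_profile(nu, kappa0, nu0, dnu_fwhm):
    """Gaussian absorption coefficient kappa(nu) with peak kappa0 and FWHM dnu."""
    sigma = dnu_fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return kappa0 * np.exp(-0.5 * ((nu - nu0) / sigma) ** 2)

def evaluate_scan(nu, I, I0, L=0.25):
    """Lambert-Beer: kappa = -ln(I/I0)/L over the 25 cm path, then Eqs. (2) and (1)."""
    kappa = -np.log(I / I0) / L
    p0 = (kappa.max(), nu[np.argmax(kappa)], 1.0e9)            # initial guess (~1 GHz FWHM)
    (kappa0, nu0, dnu), _ = curve_fit(doppler_profile, nu, kappa, p0=p0)
    T = M_AR * LAM0**2 * dnu**2 / (8.0 * k * np.log(2.0))      # Eq. (2)
    n_m = (2.0 * epsilon_0 * m_e * c * kappa0
           / (e**2 * F_OSC * LAM0)) * np.sqrt(8.0 * np.pi * k * T / M_AR)  # Eq. (1)
    return n_m, T
```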
Metastable density in pristine argon plasma and the quantitative treatment
The dependence of the argon metastable atom number density n m on the input rf power has been obtained experimentally in a pure argon discharge (see figure 3). As can be seen, the metastable density monotonically increases with power with a tendency to saturate before 50 W.
The behavior of the metastable atoms can be explained in the framework of a simple model. We use the balance equation for the metastable atoms:

∂n_m/∂t = Σ_i G_i − Σ_j L_j,   (3)

where G_i are the rates of different production processes and L_j are the rates of losses. The spatial distribution of metastables is not uniform, and in our case n_m is a 'line-of-sight'-averaged quantity.
As long as we are interested in a stationary solution (∂n_m/∂t = 0), the balance equation can be solved algebraically with respect to n_m. The production channels are independent of the density of metastables. Most of the loss rates are proportional to n_m, i.e. L_j = n_m ν_j, where ν_j is the frequency of the corresponding loss channel. The diffusion is also represented in this linear form, assuming that the coefficient of excitation energy accommodation on surfaces equals 1. Metastable pooling, an essentially nonlinear loss channel, is negligible in our conditions. Thus, the solution to equation (3) is given as

n_m = (Σ_i G_i) / (Σ_j ν_j).   (4)
The frequency of diffusion loss (de-excitation due to collisions with walls) is accounted for by the term D m /l 2 eff , where D m = 3.3 × 10 3 cm 2 s −1 [30] is the diffusion coefficient of metastable argon atoms in the parent gas at 3 Pa, and l eff is the effective diffusion length.
We may drop the terms accounting for losses in two-and three-body collisions with background atoms, since they start to play a role at pressures at least one order of magnitude higher [31].
The final result describing the dependence of n m on plasma parameters is obtained as where n g is the ground state density, n e the plasma density and k exc the rate coefficient for the ground state excitation into the metastable level. k exc is defined as usual as a product of electron velocity and cross section, averaged over the velocity electron distribution function (EDF). We take the cross section for ground state excitation from the work [32], and take an average over a Maxwellian EDF. As a result, we get k exc (T eff e ) as a function of the effective electron temperature T eff e . With equation (5) we got a plasma density dependence of metastables. Now let us take advantage of the fact that in low-pressure CCP discharges n e is to a great extent proportional to input power P, whereas the mean electron energy does not change with power. This fact allows us to formulate a relation between power and plasma density scales n e = k P, with the proportionality coefficient k = 2 × 10 8 cm −3 W −1 [23].
Finally, we have two fitting parameters T eff e and l eff for the experimental curve. A good fit to the experimental data yields the values T eff e0 = 1.8 eV and l eff = 3.0 cm (see figure 3). It is a well-established fact in the literature that for the low-pressure case of CCP discharge the EDF is bi-Maxwellian. The majority of electrons have a temperature in the range of 0.3-1 eV, and 10-20% have a temperature in the range of 3-4 eV (see, for example, [23,34,35]). Thus, the value of T eff e0 = 1.8 eV appears reasonable. The value of l eff is also realistic for our geometry. (In the case of pure spherical or cylindrical geometry l eff = R r /π or l eff = R r /2.4, where R r is the corresponding radius.) The input power dependence of argon metastable number density can be understood as follows. At low powers the diffusion losses are dominated by diffusion and n m rises almost linearly (with power or plasma density). Then the losses through electron collisions take over, and at higher powers n m tends to a constant value. In between there is a transition region, where both loss channels play an equally important role.
Time evolution of metastable density in hydrocarbon-containing plasma
Similar to small molecular ions [11], the temporal behavior of the metastable density also follows the periodic variation of the plasma as well as of the particle density and size. As can be seen in figure 4, the metastable density drops instantly when C2H2 is inserted due to the reactions between metastable atoms and the processing gas, producing reactive radicals that ignite the neutral growth channel of dust particles [33]:

C2H2 + Ar* → C2H + H + Ar,   (6)

C2H + Ar* → C2 + H + Ar.   (7)
The formed radicals further react with C2H2 to form larger radicals, e.g.

C2H + C2H2 → C4H2 + H   (8)
and

C2 + C2H2 → C4 + H2   (9)
A significant difference exists between the argon metastable densities before introducing the acetylene and after switching off the acetylene (figure 4). In the former case, we have a pure Ar plasma with low metastable density, whereas in the latter case we have dusty plasma of nanohydrocarbon particles giving rise to high metastable density.
The dust productivity of the Ar/C2H2 plasma is so high that only the slow phases, such as the accretion and dust-expelling phases, can be resolved. In order to gain insight into the nucleation phase, an Ar/CH4 rf plasma was employed to extend T_trans, so that the evolution of the metastable density during this transient time can be visualized. In the nucleation phase there exists a period where the metastable density slightly increases (marked by a red circle in figure 5). The increase in the metastable density from 1.8 × 10^14 to 2.2 × 10^14 m^-3 (~25%) is small compared with the change at the α-γ transition, but it cannot be neglected. In the nucleation phase, the particle balance for the metastables comprises the direct excitation by electrons, the loss by electron collisions, the loss by diffusion and the loss by reactions with hydrocarbon species. At this stage the hydrocarbon species are growing and accumulating in the plasma, so the loss of metastables in reactions with hydrocarbon species should steadily increase, which explains the roughly 60 s long decrease of the metastable density in the first half of the nucleation phase. The increase of the metastable density in the latter half of the nucleation phase therefore implies a slight rise of the electron temperature. This change in electron temperature can only be explained by the appearance and growth of negative ions in the plasma. The dust growth process can therefore be associated with the growth of hydrocarbon negative ions.
In order to understand dust growth and the role of metastable atoms in processing plasmas, it is therefore important to investigate the dependence of the metastable density on the dust particles. The results of these studies are presented in the following section, which focuses on the interactions between dust particles and metastable atoms in a dusty plasma, as well as on the influence of the dust particles on the metastable density.
Influence of grown dust particles on metastable density
Normally, in a void-free dust-dense plasma, the metastable density is significantly higher than in the pristine plasma (metastable density at plasma powers below 10 W in figure 6). In the presence of a dust void (powers above 10 W), however, the plasma separates into two parts: a pristine plasma inside the void and a dusty plasma outside it. The expansion of the dust void with increasing plasma power brings the metastable density closer and closer to the pristine-plasma value (figure 6).
Generally speaking, the presence of dust particles has two competing effects on the metastable density. On the one hand, dust particles collect electrons from the plasma; in order to sustain the discharge, the electron temperature has to increase, which enhances the excitation rate of the metastables. On the other hand, the dust particles act as quenching surfaces destroying metastables [22]. Depending on the balance between these two effects, which depend on the dust size and density, the metastable density will be higher in the dusty plasma if the enhancement effect dominates, and lower if the quenching effect prevails.

Figure 6 (caption): Metastable density in pristine and dusty argon plasmas. The dusty plasma was prepared by running a processing plasma of a 10 sccm Ar and 2.5 sccm C2H2 gas mixture at 10 W for 5 min. When the acetylene flow was switched off, the plasma power was reduced to 1 W and then increased stepwise up to 50 W, while the metastable density was measured.
Considering equation (5) under the influence of dust particles, the effective diffusion length has to be adjusted to include the diffusion of metastables to the dust particles, with the corresponding total effective length l_tot [22]:

1/l_tot^2 = 1/l_eff^2 + 1/l_D^2,   (10)

where l_D = 1/(n_d S_d) is the effective length for diffusion of metastables to the dust particles, with n_d and S_d being the dust particle density and surface area, respectively. The source term in equation (5) also changes, owing to the increase of the effective electron temperature and, consequently, of the excitation rate coefficient. Using the model of section 3 to fit the measured metastable density in the dusty plasma gives a remarkably high effective electron temperature, T^eff_e ≈ 25 eV. This extreme value suggests that other excitation channels, e.g. cascades from higher levels, must play a considerable role in the metastable balance equation. Calling the total rate of cascade excitation k^c_exc, the metastable density in the dusty plasma can be written as

n_m = (k_exc + k^c_exc) n_g n_e / [ D_m / l_tot^2 + (k_quench + k^m_exc) n_e ].   (11)

This equation will be used below to evaluate the influence of the dust particle size and density.
The influence of dust size on metastable density
The dusty plasma with mono-disperse grown particles was produced in an argon rf plasma in continuous mode by adding a short pulse of acetylene. The length of the C2H2 pulse, τ_C2H2, was chosen to fulfill T_trans < τ_C2H2 < T_void. This constraint ensures that the dust density is the same for all chosen pulse lengths: after the α-γ transition no new (or only a few more) particles are created, which guarantees that the total dust particle number remains the same. Moreover, no dust void was observed, which means that the total volume occupied by the dust also remains constant. Meanwhile, a longer pulse produces bigger particles. The size of the particles produced by an acetylene pulse of length τ_C2H2 is approximately the same as the size of the dust particles in a processing plasma at time t = τ_C2H2 after the acetylene flow is added. The dependence of the dust size on the acetylene pulse length can therefore be regarded as the growth of the dust particle size in the processing plasma.
By comparing the metastable densities of the dusty plasmas prepared with different acetylene pulse lengths τ_C2H2, we obtain the relation between the metastable density and the dust particle size. Using equation (11), the relation between dust particle size and metastable density, in the case of constant dust density, can be expressed as

n_m = A / (B S_d^2 + C),   (12)

with A = (k_exc + k^c_exc) n_g n_e, B = n_d^2 D_m and C = D_m/l_eff^2 + (k_quench + k^m_exc) n_e. The metastable density in dusty plasmas produced by different acetylene pulse lengths was measured and is plotted in figure 7. The dependence of the metastable density on the acetylene pulse length is well fitted by the function y = c_1/(c_2 τ_C2H2^2 + c_3). According to equation (12), the particle surface in a hydrocarbon-containing plasma after the α-γ transition is therefore proportional to t. This result also agrees with the assumption made in [11] that the product n_d S_d depends linearly on time (n_d being constant in this case). The particle radius must therefore be proportional to the square root of time,

r ∝ t^(1/2),

which also means dr/dt ∝ t^(-1/2). This particle size growth rate is intermediate between the ionic (∝ t^(-2/3)) and neutral (constant in time) growth rates [36], which suggests a combination of the two growth mechanisms in hydrocarbon-containing rf plasmas. The particle radius scaling (∝ t^(1/2)) is also in agreement with measurements of dust growth in an argon/silane plasma [38].

Figure 8 (caption): Metastable density of a pulsed rf plasma at different duty cycles using an Ar/C2H2 gas mixture. The metastable density tends to reach a steady-state value, indicating an equilibrium dust density inside the plasma.
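The quoted fit can be reproduced with a standard least-squares routine; the data below are synthetic stand-ins for the measured points of figure 7, generated from the model itself.

```python
import numpy as np
from scipy.optimize import curve_fit

# Equation (12) predicts n_m = A / (B*S_d^2 + C); if the particle surface
# S_d grows linearly with the acetylene pulse length tau, the measured
# density should follow y = c1 / (c2*tau^2 + c3).
def model(tau, c1, c2, c3):
    return c1 / (c2 * tau**2 + c3)

tau = np.array([5.0, 10.0, 20.0, 30.0, 40.0, 60.0])   # pulse length, s (synthetic)
n_m = model(tau, 2.0e14, 4.0e-4, 1.0) * (1 + 0.03 * np.random.randn(tau.size))

popt, _ = curve_fit(model, tau, n_m, p0=(1e14, 1e-4, 1.0))
print(popt)   # S_d ∝ tau  =>  r ∝ sqrt(t), i.e. dr/dt ∝ t^(-1/2)
```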
The influence of dust density on metastable density
It is more difficult to change the dust density of the dusty plasma generated by a processing plasma than to change the dust size. In the continuous rf plasma mode, the plasma proceeds to the γ mode shortly after the acetylene is turned on (figure 4), and the total number of dust particles does not change from then until the end of the dust growth period.
We therefore employed pulsed rf plasmas to create plasmas of different dust densities. The dust density in a pulsed rf plasma is determined by its dust-confining ability. As can be seen in figure 8, after a certain time the metastable density in a pulsed rf plasma reaches a steady-state value, implying that the dust density in the plasma has also reached its equilibrium value. Consider the balance equation for the dust particles,

dn_d/dt = G_d - L_d.

The density of the dust confined in a pulsed plasma depends, besides the plasma power, on two important parameters: the processing gas flow Φ_C2H2 and the duty cycle D. The dust generation rate G_d is obviously proportional to D and to the amount of processing gas available in the plasma, which is determined by the C2H2 flow. Therefore G_d can reasonably be expressed as

G_d = a_1 Φ_C2H2 D,   (16)

where a_1 is a parameter that depends on the plasma power; since the plasma power was fixed at 10 W here, a_1 is considered constant. The loss rate L_d of the dust particles is roughly proportional to the off time of the plasma, during which the particles lose their charge and some of them can escape to the chamber wall; L_d is therefore inversely proportional to the duty cycle. Meanwhile, L_d depends linearly on the dust particle density n_d, since the uncharged dust is mainly driven out by diffusion in collisions with the background gas. Thus L_d can be written as

L_d = a_2 n_d / D,   (17)

where, like a_1, a_2 is a constant at fixed plasma power. In steady-state conditions the two rates are equal (L_d = G_d), which fixes the confined dust density. Combining equations (16) and (17), we obtain for the confined dust density in a pulsed rf plasma

n_d = (a_1/a_2) Φ_C2H2 D^2.   (18)

However, the interpretation of the results in the case of a pulsed rf discharge is not as straightforward as in the continuous discharge. In order to compare the measured metastable densities of plasmas at different duty cycles, one has to adopt a new quantity: the ratio of the metastable densities of the dusty and pristine plasmas at the same duty cycle and plasma conditions. This ratio is representative of the dust population in the pulse-modulated dusty plasma. The dependence of the density ratio on the duty cycle for dusty plasmas prepared with two different acetylene flows (5 and 1 sccm) is shown in figure 9. As already mentioned, the metastable density depends on the balance between the two opposing effects of the dust particles. The loss of metastable atoms on the dust particle surfaces is linearly proportional to the dust density. The metastable excitation rate is, however, exponentially sensitive to the electron temperature [29], which in turn depends more or less quadratically on the portion of charge residing on the particles [40]. According to equation (18), with increasing duty cycle the density of confined dust should increase quadratically. The metastable density ratio, however, starts from unity and decreases to a shallow minimum before increasing to a plateau at larger duty cycles (figure 9). The plateau appearing at larger duty cycles is a consequence of the close-packing effect, which reduces the particle charge when the particles are too close to each other; this effect becomes important at high particle density. With increasing dust density, the charge portion, and thus the metastable density, cannot increase any further. The saturation occurs at a 40% duty cycle for the 5 sccm flow and at 90% for the 1 sccm flow (figure 9).
The dust densities of these two plasmas, as calculated from equation (18), are nearly equal (8100 a_1/a_2 for the 1 sccm case at 90% duty cycle and 8000 a_1/a_2 for the 5 sccm case at 40% duty cycle), which confirms our estimate of the confined dust density in pulse-modulated plasmas.
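The steady-state estimate (18) can be checked directly for the two saturation points quoted above; the unknown constants a_1 and a_2 cancel in the comparison.

```python
# Equation (18): n_d = (a1/a2) * Phi * D^2, with Phi the C2H2 flow (sccm)
# and D the duty cycle (in percent, following the text).
def confined_dust_density(flow_sccm, duty_pct):
    return flow_sccm * duty_pct**2   # in units of a1/a2

print(confined_dust_density(1, 90))  # 8100 (1 sccm flow, 90% duty cycle)
print(confined_dust_density(5, 40))  # 8000 (5 sccm flow, 40% duty cycle)
```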
Conclusions
The argon metastable density was measured in rf plasmas and compared with a simple model for the metastable density. The model explains the trend of the metastable density with respect to changes of the plasma input power. In order to apply this model to the case of dusty plasmas, the excitation channel by cascades from higher levels has to be taken into account.
The metastable density responds closely to the time evolution of the dust growth in the processing plasma. From the temporal behavior of the metastable density we can identify the times T_trans, T_void and T_p for the α-γ transition, the appearance of the dust void and the dust particle growth period, respectively. The metastable density can be used as an indicator of changes in the plasma, especially in the electron temperature and density, as well as to trace the phase transitions in a processing plasma between the nucleation, agglomeration, accretion and dust-expelling phases.
The change in the metastable density of dusty plasmas with respect to pristine plasmas is the result of the balance between two effects that depend on the dust particle size and density: the loss of metastable atoms due to quenching at the dust particle surfaces and the increase of the excitation rate due to the increase of the electron temperature. In a void-free dust-dense plasma, the metastable density is much higher than in a pristine plasma, since the electron temperature is significantly increased. At low dust density, in contrast, the loss effect can outweigh the enhancement effect, and the metastable density in the dusty plasma is then lower.
By comparing the metastable density in dusty plasmas with different dust sizes and densities produced by hydrocarbon-containing plasmas, one can conclude that:
1. The dust particle radius growth rate after the α-γ transition in a hydrocarbon-containing plasma is proportional to t^(-1/2), which is a combination of the neutral and ionic growth mechanisms.
2. The confined dust density in a pulsed processing plasma is proportional to the processing gas flow and to the square of the duty cycle.
By measuring the metastable density, TDLAS can therefore be used as an indirect tool to study the dust growth process in processing plasmas. | 2019-04-20T13:09:06.629Z | 2009-03-01T00:00:00.000 | {
"year": 2009,
"sha1": "a8c8099d2c9a85b4d9a12d1a85ebe1ef92371295",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/1367-2630/11/3/033020",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "47b03a0aa4dbdb025a0c3de23bb63ce2eaa02fa7",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
248727092 | pes2o/s2orc | v3-fos-license | Site Characterization Data Model and GIS-based Tools for Offshore Engineering Projects
Offshore engineering projects require the management of a huge amount of heterogeneous georeferenced data (among others metocean, geophysical, geotechnical, and environmental), which calls for a Data Model with data visualization and data analytics features on a common geographic basis. A Digital Data Platform (DDP) has been developed in a GIS environment with the aim of speeding up the engineering design process (i.e. minimizing routine operations), preventing misalignment of the data originating from different sources, from Owner to Suppliers, and avoiding any potential loss of information. The proposed GIS architecture is composed of two main components: i) the Data Model geodatabase, and ii) the GIS-Model Toolbar add-in. The proposed development represents a step forward in the definition of a common specification and dictionary for offshore project execution, overcoming the current bottlenecks and inefficiencies in the design phases between the project owner and the engineering contractor. The paper illustrates the "what" and the "how", and in particular: i) the geodatabase and Data Model framework, ii) the required parameters to be organized and stored for offshore engineering design, and iii) the implementation of the widgets (i.e. GIS-based tools). Its application to a case-study project with practical examples is presented.
Introduction
For Oil & Gas and Energy Companies, the availability of data management systems guaranteeing the quality of information is crucial. With reference to the complex offshore environment, it is essential that the data archives and interfaces remain easily manageable, coherent, fully adaptable, and scalable. For this reason, information and data standards for different data sources have been developed over the last decades. Geographic Information Systems (GIS) have gained worldwide acceptance among Operators and are now the most used platform for the management, mapping, and analysis of geospatial data for offshore engineering projects. At Company corporate level, GIS has become the central repository for storing all geospatial data. Operational implementations of GIS in support of SURF (Subsea Umbilicals, Risers, and Flowlines) field development are relatively new. Marrannes et al. [1] (2012) presented, from a Contractor's experience and perspective, the GIS application used on several large construction projects performed offshore. Savazzi et al. [2] (2015) presented, from an Owner's experience, a Ground Model as an integrated high-value database to manage, store and use the huge amount of geotechnical and geological data throughout the various project engineering phases.
As common practice in the offshore industry, metocean design parameters are assessed in accordance with internationally accepted recommended practices such as DNV-RP-C205 [4] or Standard API RP 2MET [5], which give guidance for the modelling, analysis and prediction of environmental conditions, as well as for calculating the environmental loads acting on structures. The scientific literature on metocean Data Models is very limited. An accurate database of operational metocean parameters was developed by Graaff et al. [3] (2012); it provided basic information for planning offshore installations and operations, as well as input for the design of offshore platforms, pipelines, and structures in general.
Some examples of Data Models used to store critical information, analyze data and manage data geospatially in a referenced database are: SSDM (Seabed Survey Data Model) [6], developed by the geomatics committee of the IOGP (International Association of Oil & Gas Producers); ArcMarine [7] (an ESRI Data Model), which represents a new approach to archiving marine survey data; PODS [8] (Pipeline Open Data Standard), developed as an asset-oriented spatial Data Model; and UPDM [9] (Utility and Pipeline Data Model), a geodatabase Data Model template for operators of pipe networks in the gas and hazardous liquids industries.
The development of a comprehensive geotechnical database involves: i) the collection of borehole data from various reputable sources, ii) the validation of the data in terms of accuracy and redundancy, and iii) the standardization and organization of the geotechnical information for incorporation into the database [3]. A standard form of borehole data was suggested and implemented in a web-based GIS application by Chang and Park [9] (2004). More recently, Sun et al. [10] (2014) highlighted the advantages of an integrated GIS-based system for geotechnical data to reliably predict spatial geotechnical information, with specific application to the Seoul area in Korea. Turer & Bell [11] (2010) presented a method of managing site survey data based on geo-information management; the methodology defined a GIS Data Model for seabed survey data interpretation that satisfies GIS query, visualization, and data exchange purposes. Priya & Dodagoudar [13] (2017) presented a methodology for building a digitally formatted and integrated spatial database using geotechnical data and a geographic information system. Another example was given by S. Varnell et al. [14] (2010), in which geological, geophysical, and geotechnical data were added to a project GIS database during a feasibility phase.
For offshore projects, the Data Life Cycle process begins at the conceptual phase and ends with the decommissioned asset. During this cycle, the data shall be managed and audited through data/information governance activities to ensure that the owner's requirements are always fulfilled. Given the amount of data that needs to be collected, it is mandatory to reach a common language and data repository from the very beginning of the project among all the different stakeholders: owners, suppliers, and subcontractors. All the existing standards and guidance, which have been developed to meet specific needs and purposes, are undoubtedly necessary for the value chain of engineering projects; however, a unified Data Model covering the design basis parameters for offshore projects is still missing. The main aims of this research are: i) to draft an extended Data Model dedicated to the design parameters of offshore projects, ii) to provide an orientation towards a common dictionary for offshore industry players, and iii) to facilitate the entire design data life cycle for the exchange and reuse of data in any phase of project development via a GIS unified data platform.
GIS-based Digital Data Platform
A GIS-based Digital Data Platform (DDP) has been developed with the purpose of pulling into a common view, through a visualization map and a graphical interface, all the data required for engineering design, from the conceptual phase to the so-called Life-of-Field (Figure 1).
The DDP comprises two main components: i) the Data Model geodatabase, and ii) the GIS-Model Tools (i.e. the add-in) (Figure 2). The former is the repository of all the geographical data and related information involved in the project: its main purpose is to act as a "single source of truth" for the different phases of the engineering, representing a data source that is consistent, coherent, continually updated over time and easily shareable among the different actors involved in the project. The latter, through ad hoc GIS widgets, allows these data to be displayed and analyzed in graphical and numerical formats according to different geometries and related properties.
The presented Site Characterization Data Model includes the following datasets (Table 1): I. metocean data (i.e. wave, current and wind climate and extreme data, weather and hydrology); II. geotechnical, geophysical, bathymetric and topographic data; III. cartographic data (coastline, orthophotos, coastal structure geometries); IV. environmental data (i.e. marine growth, water quality, soil quality, habitat mapping); V. third-party interaction data (i.e. combining marine traffic data and offshore assets). The geodatabase is organized as an ESRI File Geodatabase (*.gdb) containing several feature classes for the GIS features, such as the coastline, metocean and geotechnical monitoring stations, as well as the project field layout and facilities; this kind of data is visualized graphically using ArcGIS Desktop (ArcMap).
The engineering data contained in the Data Model geodatabase are organized as a series of "Bitemporal Data" tables, as shown in Figure 3. The main rules for the organization of the data tables are:
-The primary key of each table is a single unique object identifier, independent of any other kind of data; no composite primary keys are used, and no business meaning is stored in the primary key.
-The foreign keys are used only when the external data that is referenced is not versioned; the data consistency is verified mainly using external procedures that validate the data in the geodatabase during the data loading process.
-Two specific fields are used to declare the rows that are valid at a specific time (start and end validity), reflecting the changes to the objects that the data represent.
-Two different fields are used to handle the data corrections (start and end assertion), reflecting the changes made to correct mistakes.
The bi-temporal fields are used to handle the validity of the records in the geodatabase, as explained above, and are not used to store any business value related to the meaning of the data itself. Using the bi-temporal fields, it is possible to avoid any data deletion: new and updated data are added to the system, and the old data are marked as obsolete.
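As an illustration of how the bi-temporal convention supports such queries (the actual platform uses an ESRI geodatabase with C# components; the Python sketch below, with hypothetical field names, only shows the filtering logic):

```python
from dataclasses import dataclass
from datetime import date

# Illustrative bi-temporal record: the validity interval describes when the
# real-world fact holds; the assertion interval describes when the row was
# believed correct. Field names are hypothetical, not the actual schema.
@dataclass
class Record:
    oid: int            # single surrogate primary key, no business meaning
    value: float
    valid_start: date
    valid_end: date
    assert_start: date
    assert_end: date

def current_view(rows, as_of_valid, as_of_assert):
    """Rows valid at `as_of_valid` as asserted at `as_of_assert`;
    superseded rows are never deleted, only closed out."""
    return [r for r in rows
            if r.valid_start <= as_of_valid < r.valid_end
            and r.assert_start <= as_of_assert < r.assert_end]
```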
The GIS-Model Toolbar is an ESRI add-in: the generic GIS functionalities of the ArcGIS Desktop platform have been extended through the development of a set of specific software components, accessible directly from the ArcMap toolbar, that integrate all the available operations. The add-in has been developed using the C# programming language and the Microsoft .NET platform, and it is based on the ESRI "ArcObjects SDK for .NET" technology.
The main functionalities of the GIS-Model Toolbar are:
-Import Functions: information can be added (and then updated) using the "GIS-Model Toolbar" import functions. The input data are organized as Excel sheets with a well-defined structure, adapted to the needs of the client and the project; the data are then integrated into the geodatabase in data structures specifically tailored to handle the subsequent operations with maximum efficiency.
-Computational Models: specific Matlab computational models have been developed to process the site characterization and geographical data stored in the geodatabase; these models are easily accessible through the GUI (Graphical User Interface) functions of the GIS-Model Toolbar. The results of the elaborations (relational and geographical datasets) are then stored in the geodatabase for visualization through the GIS and the reporting system, as well as for further elaborations.
-Reporting System: as part of the ESRI add-in, specific reports have been developed to offer a meaningful representation of the stored data and of the elaboration results; these reports can be either dynamic tables or graphical plots and charts, and they are displayed directly inside the GIS environment using the available GUI functions.
Site Characterization Data Model
The Site Characterization Module of the DDP is composed of: i) Metocean Data, ii) Geophysical Data, iii) Geotechnical Data, iv) Environmental Data, and v) 3rd Party Interaction Data. All these data represent the basis of any design for offshore projects.
The hierarchical structure of the Data Model is based on three concatenated levels: Level 1 defines the geometrical properties, Level 2 defines the data typology, and Level 3 provides the attribute values of the physical parameters.
• LEVEL 1: Definition of the Geometry (e.g. point, area or linear element);
• LEVEL 2: Definition of the Cluster Type (e.g. metocean/geotechnical/geophysical etc.);
• LEVEL 3: Definition of the Attribute Values (i.e. Master Tables).
The Attribute Values for each category of data type have been defined with reference to the most common International Standards and Recommended Guidelines.
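The three concatenated levels can be pictured as a nested structure; the sketch below uses hypothetical keys and values, meant only to show how a Level 1 geometry carries a Level 2 cluster type and Level 3 master-table attributes.

```python
# Hypothetical instance of the three-level hierarchy (names are illustrative).
feature = {
    "level1_geometry": {"type": "Point", "coords": (12.4924, 41.8902)},
    "level2_cluster": "metocean",
    "level3_attributes": {          # Level 3 master-table values
        "Hs_m": 2.1, "Tp_s": 7.8, "Tz_s": 5.9,
        "gamma": 3.3, "Hmax_m": 3.9,
    },
}
```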
With reference to Metocean Data, the accepted industry parameters embrace marine current, wave and wind conditions and all meteorological and hydrological parameters. The loads are limited to wind, waves and current, typically represented by: i) climate and ii) extremes. The representativeness of the metocean conditions over a geographical area is crucial given the complexity of the environment: each Point describes a homogeneous area limited by some boundaries, and when applied in GIS each point within this area maintains the same conditions. An example is shown in Figure 4. The climate consists of scatter diagrams of the main variables, such as Hs vs Direction, Ws vs Direction, Hs vs Tp, etc., whereas the extremes represent events at specified return periods applied for structural verification. For waves these include: i) a stationary sea state with a duration of 1 hour to 3 or 6 hours, with an energy spectrum such as JONSWAP, Pierson-Moskowitz or Ochi-Hubble; ii) single waves H-T representing individual waves for time-dependent fatigue assessment. Examples of common parameters used to describe the sea states are: Hs (m); Tz (s); Tp (s); Tp (s) 5%; Tp (s) 95%; γ; Hmax (m); Tmax (s). The parameters used to describe the wind are the 10-minute mean wind speed Ws10 at a height of 10 m and the mean direction.
Appropriate gust factors are generally applied to convert wind statistics from averaging periods other than 10 minutes, depending on the frequency location of a spectral gap, when such a gap is present. Unless data indicate otherwise, a Weibull distribution can be assumed for the arbitrary 10-minute mean wind speed Ws10 at each height z above the sea water level. The parameter used to describe the marine current is the current speed Cs at different levels along the water column. Note that the total current velocity at a given location (x, y) should be taken as the vector sum of the current components present: wind-generated, tidal and circulation currents. The Level 3 Master Tables associated with climate and extremes are listed in Table 2 and Table 3, respectively.
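A minimal sketch of the Weibull assumption for Ws10 follows; the shape and scale values are placeholder site parameters, not values from the paper, and the return-level estimate is a deliberately crude i.i.d. approximation.

```python
import numpy as np
from scipy import stats

# Weibull model for the 10-minute mean wind speed Ws10 at a given height.
# Shape (k) and scale (lambda) below are assumed placeholder fit values.
shape, scale = 2.0, 8.5            # m/s scale; both assumed
ws10 = stats.weibull_min(c=shape, scale=scale)

print(ws10.mean())                 # climate: mean 10-minute wind speed, m/s
# Crude 1-year return-level estimate: one exceedance among the
# 365.25*24*6 ten-minute intervals of a year, assuming independence.
print(ws10.ppf(1 - 1 / (365.25 * 24 * 6)))
```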
Another example is given for the geotechnical data. Customized Master Tables have been developed to collect the geotechnical design parameters necessary to perform the foundation design (e.g. shallow or deep) and the pipe-soil interaction analysis. Regarding the foundation design, the first step (Level 1) is the definition of a geometry, i.e. a polygon around the structure locations to be analyzed (example shown in Figure 5a). The polygons are characterized by specific geotechnical parameters, a name and a unique identifier ID. The further step (Level 3) is the collection of the physical parameters. The geotechnical characterization includes a large number of soil parameters, specific to the different soil conditions and geotechnical issues (the main parameters are listed in Table 4). The Master Tables have been designed for all types of foundations, ranging from slender piles and caissons to shallow foundations and rock berms, with the purpose of analyzing short- and long-term settlement, verifying stability against horizontal, vertical and overturning failure, and performing installation analyses and liquefaction checks.
With reference to soil-pipe interaction, Level 1 shall be defined through consecutive pipeline sections characterized by homogeneous soil parameters, as shown in Figure 5b. The geotechnical zonation along the pipeline is generally performed by integrating the available geophysical and geotechnical data from the survey results. The physical parameters required for the pipe-soil interaction are listed in Table 5. The main analyses that can be performed are those relevant to the pipeline embedment calculation and to the assessment of the soil resistance during vertical, axial, and lateral pipe movements.
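A sketch of how homogeneous pipeline sections (Level 1) could carry their geotechnical attributes (Level 3) is shown below; the KP values and soil parameters are invented for illustration and follow the spirit of Table 5 rather than its actual content.

```python
# Hypothetical zonation of a pipeline route into homogeneous soil sections.
# KP = kilometre point along the route; all values are illustrative.
pipeline_sections = [
    {"id": "S01", "kp_from": 0.0, "kp_to": 2.3,
     "soil": "soft clay", "su_kPa": 5.0, "gamma_sub_kN_m3": 6.5},
    {"id": "S02", "kp_from": 2.3, "kp_to": 5.8,
     "soil": "dense sand", "phi_deg": 38.0, "gamma_sub_kN_m3": 10.0},
]

def section_at(kp):
    """Return the homogeneous section containing a given KP, if any."""
    return next((s for s in pipeline_sections
                 if s["kp_from"] <= kp < s["kp_to"]), None)
```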
GIS-based Tools
Dedicated widgets have been implemented to allow the user to work within the DDP. Some functionalities are: i) uploading project data into the geodatabase according to different formats (shape, raster, table), ii) visualizing them, and iii) running external models (the so-called GIS widgets) capable of performing engineering analyses. Specifically, the widgets exploit the "loose coupling architecture" on which the DDP is based: they are accessed in the GUI through a tool called the GIS-Model Toolbar add-in (Figure 6a, b). The basic functions of the GIS-Model Toolbar add-in are:
• Import Data button (Figure 6a), the tool that allows new data to be imported into the geodatabase according to a standard-defined structure, as described in the previous section.
• Show Data (Figure 7a, b and c), a tool that allows data at different points to be shown on the map; one example is given in Figure 9. Data can be presented in tabular or graphical formats. For example, data relating to wind (speed, direction, etc.) are structured into time intervals (e.g. months, seasons or the whole year), distributions, return periods, etc. (Figure 7a, b and c).
The GIS widgets are based on Matlab scripts dedicated to engineering assessments. The initial tools included in the first DDP release refer to coastal analysis, aiming to assess the wave and current impact on the shore and to provide structural analysis. The coastal application widgets include the following models (see Figure 8):
• Wave Models:
− Wave Spectrum Model, based on the JONSWAP wave spectrum [4];
− Breakwater Run-Up Model, to obtain the wave run-up for impermeable and permeable slopes [15];
− Nominal stable diameter for Rubble Mound Breakwaters, to define the stable grain size under wave and current loads [15];
− Wave Kinematics Model, to calculate wavelength, group velocity, wave particle velocity, acceleration and displacement (horizontal and vertical), and subsurface pressure [4];
• Coastal Evolution Model, solving the Pelnard-Considère equation [15];
• Cross-shore Model, solving the wave energy balance equation and implementing Soulsby's (1997) formulations.
Calculations are performed as an ordered set of steps dealing with the different processing stages, generating a geographical feature that represents the results of the processing. The widget execution is organized into i) a pre-processing, ii) an execution and iii) a post-processing phase. Pre-processing allows operators to set up all the parameters required to execute the model (the input data needed to run the widget are retrieved directly from the geodatabase and refer to the offshore "point" closest to the target location; in this way, the data of interest are immediately associated with the widgets); the execution button initiates the calculation in the Matlab codes embedded in the platform; and the post-processing displays the results directly in the GIS, enabling data reading and interpretation. A first application is presented below. Data acquired during the project feasibility phase were processed to obtain design basis data to be uploaded into the DDP. The geodatabase was set up to provide a common repository during project execution, whereas the coastal models were applied by operators to assess the short-term and long-term impact of a new jetty construction.
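As an illustration of the computation wrapped by the Wave Spectrum Model widget (implemented in Matlab in the actual DDP), the following Python sketch evaluates a JONSWAP variance density; the normalization uses a common approximate form of the DNV-RP-C205 expression rather than the full recommended-practice fit, and all numerical inputs are placeholders.

```python
import numpy as np

def jonswap(f, hs, tp, gamma=3.3):
    """Simplified JONSWAP variance density S(f) [m^2 s] for significant
    wave height hs [m] and peak period tp [s]; f in Hz (f > 0)."""
    fp = 1.0 / tp
    sigma = np.where(f <= fp, 0.07, 0.09)          # peak-width parameters
    r = np.exp(-((f - fp) ** 2) / (2 * sigma**2 * fp**2))
    # Approximate normalization (valid for roughly 1 < gamma < 5) so that
    # the zeroth moment m0 satisfies hs ~ 4*sqrt(m0):
    alpha = 5.0 / 16.0 * hs**2 * fp**4 * (1.0 - 0.287 * np.log(gamma))
    return alpha * f**-5 * np.exp(-1.25 * (fp / f) ** 4) * gamma**r

f = np.linspace(0.02, 0.5, 500)      # frequency axis, Hz (placeholder range)
S = jonswap(f, hs=2.1, tp=7.8)       # placeholder sea state
m0 = np.trapz(S, f)
print(4 * np.sqrt(m0))               # should be close to the input Hs
```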
The Coastal Evolution Model graphically shows how a coastline will be modified by the construction of a new groin. To activate the GIS widget, the GIS operator first has to select the coastline and transect layers to be considered for the coastline evolution (Figure 9).
Conclusions
The proposed development has been established with the main aim to overcome the current bottlenecking and inefficiency on offshore project design phases between the project Owner and the engineering Contractor. The conventional approach which foreseen the delivery of hundreds of documents and scattered data should be substituted in favor of a unified delivery including all the required georeferenced data and studies. A unified technical dictionary for offshore design parameters is still missing in the industry community and the "common language" can be achieved only by the jointly contribution of industrial (owner and contractors) and certification companies. With the proposed development we achieved that data archived within the Digital Data Platform are integrated, stored, analyzed and easily managed during the entire data life cycle of the project. The main peculiarity of the system consists in the customization of a unified Data Model. The digital platform concept is an iterative and dynamic process which needs to be continually updated as new data become available. With reference to developed GIS widgets no limitations are foreseen thus giving an opportunity for further developments. | 2022-05-13T15:03:05.539Z | 2022-05-11T00:00:00.000 | {
"year": 2022,
"sha1": "1be696eecb81673bc6ca2d0b9db303e77caf567e",
"oa_license": "CCBYNC",
"oa_url": "https://ojs.bilpublishing.com/index.php/hsme/article/download/4568/3692",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3a0457c0133801d58644289c15389dcda92c6e5f",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": []
} |
15334969 | pes2o/s2orc | v3-fos-license | Quantum Field Theory Description of Tunneling in the Integer Quantum Hall Effect
We study the tunneling between two quantum Hall systems, along a quasi one-dimensional interface. A detailed analysis relates microscopic parameters, characterizing the potential barrier, with the effective field theory model for the tunneling. It is shown that the phenomenon of fermion number fractionalization is expected to occur, either localized in conveniently modulated barriers or in the form of free excitations, once lattice effects are taken into account. This opens the experimental possibility of an observation of fractional charges with internal structure, close to the magnetic length scale. The coupling of the system to external gauge fields is performed, leading us to the exact quantization of the Hall conductivity at the interface. The field theory approach is well supported by a numerical diagonalization of the microscopic Hamiltonian.
I. INTRODUCTION
The relevance of the edges in the quantum Hall effect was stressed several years ago in a seminal paper by Halperin [1]. The basic idea is that the existence of gapless excitations at the edges, which follows from the general principle of gauge invariance, provides a mechanism for the universal character of the quantum Hall effect, or in other words, for the presence of plateaus in the Hall conductivity, independently of factors such as the degree of disorder and the sample geometry. After this initial observation, the subject was left untouched for a while until the works by Stone [2] and Wen [3,4], which showed many interesting connections with the quantum field theory of (1+1)-dimensional models, in particular a relation to Kac-Moody algebras [5]. One of the consequences of these works is that they may serve as a promising tool for the study of tunneling between quantum Hall systems [6]. Such a research program is fully supported by the recent advances in the fabrication of microstructures, which may give a real opportunity for testing ideas originating from (1+1)-dimensional quantum field theory.
Motivated by this possibility, in this paper we concentrate on the integer quantum Hall effect in the simplest situation, viz., tunneling along a quasi one-dimensional interface, taking into account that both samples have their first Landau levels completely filled.
We make contact here with the theory of fermion number fractionalization [7-9], which has found applications in the condensed matter physics of polymers [10,11] and of superfluid 3He [12], and has recently been suggested to play an analogous role at the interface of quantum Hall systems [13,14]. In fact, as we will show, similarly to the case of polyacetylene [10,11,15,16], lattice effects may induce the presence of fractional charges moving along the interface. It is interesting, however, to observe that in the present situation the excitations have a peculiar internal structure, composed of two "subparticles", each one localized around one of the edges.
This paper is organized as follows: in section II we present simple arguments leading to the effective model of tunneling. This will be useful to show what is to be expected from the more rigorous analysis of further sections. In section III we build the effective model starting from the microscopic definition of an interface. For the sake of a more complete analysis, and in order to study the conductivity, we also consider the presence of additional external gauge fields. In section IV we explore some physical consequences of the model defined in the previous section, such as charge trapping in specific barriers or via an Aharonov-Bohm effect. The Hall conductivity is found to be quantized, regardless of the presence of localized states or disorder at the interface. In section V we study lattice effects in the tunneling, showing that lattice distortions may occur, associated with a gap in the fermion spectrum as well as with fractionally charged excitations at the interface. Distortions are represented by a complex bosonic field interacting with the fermion system. In section VI we perform a numerical analysis of the problem of charge trapping in a modulated barrier, obtaining very good agreement between the microscopic system and the effective (1+1)-dimensional field theory. In section VII we conclude our discussion and point out directions for future investigations. Finally, in order to make the paper as self-contained as possible, there is an appendix on fermion number fractionalization in field theory, where we show in detail the computation of the fermionic current for the model of tunneling, according to the method of Goldstone and Wilczek [8].
II. HEURISTIC MODEL
We will show how the general form of the effective model of tunneling may be found through phenomenological arguments, relying only on a few basic assumptions [13]. This will provide some motivation before we undertake the more complicated task of a complete microscopic analysis. In this section we assume the absence of two-body interactions and electromagnetic perturbations.
Our problem is to study what happens when we bring two planar samples close together along a common plane, taking into account that both of them have their first Landau levels completely filled. As one sample gets closer to the other, there will be some tunneling through an approximately one-dimensional interface. A sufficient condition for the presence of tunneling is that there be a certain degree of disorder at the interface and that the distance between the samples be of the order of the magnetic length ℓ.
If we consider one of the samples as the square −L/2 ≤ x ≤ L/2, −L + Λ ≤ y ≤ Λ, with Λ ∼ ℓ, its edge can be defined by −Λ ≤ y ≤ Λ. The system is under the influence of a magnetic field B = Bẑ, and its edge may be physically generated via the introduction of an electric field E = Eŷ, which keeps the region y ≥ Λ free of electrons (at zero temperature). Working in the Landau gauge, A = (−By, 0), we have eigenfunctions localized only in the y direction. According to Stone [2], we can define the charge density operator at the edge as

j(x) = ∫_{−Λ}^{Λ} dy φ†(x, y) φ(x, y),   (2.1)

where φ(x, y) is the field operator in second quantization, constructed as a sum over first Landau level states only (which is a good approximation in the case of strong magnetic fields).
We expect the low energy excitations at the edge to be associated with deformations of the quantum Hall droplet contained in −Λ ≤ y ≤ Λ [2,17]. The Fourier expansion of j(x), as given by (2.1), may be recovered from the right- and left-moving edge fields ψ_R(x) and ψ_L(x). The tunneling process is then described by the Hamiltonian

H_t = −∫ dx [ t(x) ψ†_R(x) ψ_L(x) + t*(x) ψ†_L(x) ψ_R(x) ].   (2.3)

Taking t ≡ −(t_1 − it_2), and using the chiral representation for the γ matrices, in which γ^5 is diagonal and σ_1, σ_2 and σ_3 are the Pauli matrices, it is a simple matter to show that we can obtain (2.3) from the following Lagrangian:

L = ψ̄ iγ^µ ∂_µ ψ − ψ̄ (t_1 + i t_2 γ^5) ψ.   (2.6)

The fact that t may depend on the space and time variables opens interesting experimental possibilities, related to the phenomenon of fermion number fractionalization. We will come back to this point later.
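The algebra behind this identification can be made explicit; the following is a reconstruction in the chiral basis γ^0 = σ_1, γ^5 = σ_3 (a conventional choice, assumed here), with ψ = (ψ_R, ψ_L)^T.

```latex
% Reconstruction of the chiral decomposition of the mass-like term in (2.6),
% in the assumed basis \gamma^0 = \sigma_1, \gamma^5 = \sigma_3:
\bar\psi\,(t_1 + i t_2\gamma^5)\,\psi
  = t_1\left(\psi_R^\dagger\psi_L + \psi_L^\dagger\psi_R\right)
  + i t_2\left(-\psi_R^\dagger\psi_L + \psi_L^\dagger\psi_R\right)
  = -\,t\,\psi_R^\dagger\psi_L - t^*\,\psi_L^\dagger\psi_R ,
\qquad t \equiv -(t_1 - i t_2),
% which reproduces the tunneling Hamiltonian (2.3).
```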
III. MICROSCOPIC DERIVATION
In our phenomenological description, given by the Lagrangian (2.6), the basic input is the tunneling amplitude t. A deeper question, therefore, is how to derive (2.6) microscopically, starting from the exact Hamiltonian of the system. Only in this way would we know how to obtain the tunneling amplitude from a specific potential barrier, thus characterizing the relevant class of microscopic structures for the observation of interesting phenomena.
A. COMPUTATION IN THE ABSENCE OF EXTERNAL GAUGE FIELDS
Let us consider a system of independent electrons in the two-dimensional (x, y) plane, under the influence of a magnetic field B = Bẑ and confined only in the x direction by the strip |x| < L/2. We will be interested in the limit L → ∞. A simple model of the interface is described by the Hamiltonian

H = (1/2m)(P − eA)^2 + eV(x, y).   (3.1)

The first term in the Hamiltonian represents a non-interacting two-dimensional electron gas. The second term can be decomposed as the sum of two contributions,

eV(x, y) = eV_1(y) + eV_2(x, y),

where the potential V_1(y) is a parabolic barrier, eV_1(y) = −Ay^2, which means, as we will see, that for a certain range of the chemical potential µ there will be a region |y| < y_0 completely free of electrons.
By varying µ we can make y_0 range over (0, ∞). The other piece of V(x, y), the term V_2(x, y), breaks translation invariance in the x direction, generating a modulated tunneling amplitude along the interface. It is clear that the Hamiltonian (3.1) is unbounded from below, but this does not present any problem in our approach: we could regularize the potential V_1(y) so as to be well behaved for |y| ≫ ℓ, which would not change the physics of tunneling at the interface.
When g(x) = 0, we have an exactly soluble model. In this case, imposing periodic boundary conditions in the x direction, we can look for a solution of the time independent Schrödinger equation of the form ϕ(x, y) = exp(ik_n x) ξ(y), where k_n = 2πn/L and n is an integer. The equation for ξ(y) is

{ (1/2m)[(k_n + eBy)^2 + P_y^2] − Ay^2 } ξ(y) = E ξ(y).   (3.7)

Equation (3.7) therefore represents a one-dimensional harmonic oscillator, with eigenfunctions expressed in terms of H_p, the Hermite polynomial of order p, and with energies labelled by p and k_n, where ω_c = 1/mℓ^2 is the cyclotron frequency. If the external magnetic field B is high enough, we can limit the Hilbert space to the first Landau level. Therefore, from now on we will restrict the state space to the set of normalized wavefunctions ϕ_n(x, y) ≡ ϕ_{n,0}(x, y) of (3.11). The electric current associated with a wavefunction ϕ_n(x, y) can be immediately computed.
We see from (3.13) that n > 0 and n < 0 define, respectively, electrons moving to the right and to the left directions along the x axis. We consider α > ∼ ℓ, which is equivalent of saying that the wavefunctions are spread in the y direction whithin a region of order ℓ.
This means that two wavefunctions, one with n > 0 and the other with n < 0, will have some overlap only if their orbitals lie within a distance of order ℓ from the line y = 0. It is thus enough to consider b = ℓ in the expression for V_2(x, y) if one wants to produce a modulated tunneling amplitude at the interface.
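The orbit centers quoted above follow from completing the square in (3.7). The sketch below is a reconstruction (ħ = 1, eB = 1/ℓ²) in which the definition of α is an assumption chosen to be consistent with y_n = α⁴k_n/ℓ², up to the sign convention for k_n.

```latex
% Reconstruction: completing the square in (3.7), with eB = 1/\ell^2, \hbar = 1.
% The definition of \alpha below is an assumption, not quoted from the text.
\frac{(k_n + y/\ell^2)^2}{2m} - A y^2
  = \frac{1}{2}\,m\tilde\omega^2\,(y + y_n)^2
  + \frac{k_n^2}{2m}\Big(1 - \frac{\alpha^4}{\ell^4}\Big),
\qquad
\tilde\omega^2 \equiv \omega_c^2 - \frac{2A}{m},
\qquad
y_n \equiv \frac{\alpha^4 k_n}{\ell^2},
\qquad
\frac{1}{\alpha^4} \equiv \frac{1}{\ell^4} - 2mA .
% Since \alpha > \ell, the k_n-dependent energy
% E_{p,k_n} = \tilde\omega\,(p + 1/2) + (k_n^2/2m)(1 - \alpha^4/\ell^4)
% decreases with |k_n|: lower energies correspond to larger |y_n|, as stated.
```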
For each value of the chemical potential µ in the first Landau level, there is a coordinate y_0 such that all the states contained in the region |y| < y_0 are empty. The value of y_0 is a function of µ and enters our model as a phenomenological parameter. Since α ≳ ℓ, it is adequate to take y_0 ∼ ℓ, so that there is some overlap between wavefunctions situated on opposite sides of the interface.
According to (3.11), the state centered at y_0 (or −y_0) defines a value of k_n (or k_{−n} = −k_n) given by k̄ = y_0 ℓ^2/α^4. This state has the quantum number n̄ = k̄L/2π. All we need to do from now on is to find a theory for the modes near k̄ and −k̄.
Using the wavefunctions found for the case g(x) = 0, given by (3.11), we may write the second quantized field operator as

φ = Σ_{n<0} a^R_n ϕ_n(x, y) + Σ_{n>0} a^L_n ϕ_n(x, y).   (3.14)

In this representation the second quantized Hamiltonian becomes ∫ d^2x φ† H φ, where H is the first quantized Hamiltonian (3.1).
In order to identify the filled states of our model with a "Dirac sea" in the effective theory, it is necessary to make some redefinitions of the operators a^R_n and a^L_n. After a suitable transformation of these operators, equation (3.14) is rewritten accordingly. In the "large box" limit, the set of modes k_n becomes dense, and we can define the continuum theory in terms of the fields ψ_R(x) and ψ_L(x). In order to completely identify the filled states with the "Dirac sea", it is also necessary to redefine the energy, setting to zero the energy of the modes k̄ and −k̄; in this way (3.10) is shifted correspondingly. The overlap integrals appearing in the resulting Hamiltonian are Gaussian and can be computed exactly (3.28).
Substituting the proposed form for g(x) into (3.24), we obtain where, according to (3.29) and (3.30), The Hamiltonian (3.31) describes the complete system, and we want to study only the modes associated with tunneling, which are given approximately by |k| <k. We have to select in (3.31) only those degrees of freedom involved in the tunneling process, retaining the most relevant couplings. Therefore, we obtain from (3.31), a new effective Hamiltonian where We can retain in H I ef f , only the first three terms. In fact, using (3.33) and (3.34) we have That is, the ratio between c 2 (−2k, 0) and the factor 1 √ 2 e −(kℓ) 2 in the tunneling amplitude is e −2(kℓ) 2 . Now, ifk ∼ 1/ℓ, we have e −2(kℓ) 2 ∼ e −2 ∼ 10 −1 , which indeed shows that we can keep only the first three terms of (3.35), to investigate the tunneling.
The expressions for E_{k−k̄} and E_{k+k̄} can be linearized around k = 0; using (3.22), this yields (3.38). Taking these approximations into account, we obtain the effective Hamiltonian, which in coordinate space agrees with (2.3).
B. INTRODUCTION OF GAUGE FIELDS
We will now obtain the effective Hamiltonian for the interface, taking into account the presence of an external gauge field a_µ; we thus have to consider the corresponding, more general microscopic Hamiltonian. Its eigenfunctions can be written in terms of those of the previous subsection by means of a unitary operator U, defined along a path c in the (x, y) plane. The second quantized unitary operator U takes a corresponding form; using (3.21), neglecting terms of the type a^{R†}_k a^L_{k'}, and substituting k = k' = 0 in the expressions for c̃_2, c_3 and c_4, given by (3.28), or alternatively using the definitions (3.41-3.42), we can compute H^I_eff = U_I^† H^I_{0,eff} U_I. To this end it is important to know the relevant operator products, which, with D_x ≡ ∂/∂x − iea_1(x, 0), yield the transformed tunneling and kinetic terms; from (3.58) and (3.59) we obtain (3.60). The gauge invariant Lagrangian associated with this Hamiltonian is (3.61). It is not difficult to understand why this Lagrangian is the correct generalization of (2.6) in the presence of gauge fields. A first attempt to implement gauge invariance would be to replace γ^µ∂_µ in (2.6) (we now set v = 1) by the covariant derivative D̸ = γ^0(∂_0 + iea_0) + γ^1(∂_1 + iea_1). This is, however, only a partial answer. It is necessary to take into account that the operators ψ_R(x) and ψ_L(x) create hole states at different positions of the two-dimensional plane; in this way, if a gauge transformation is performed, ψ_R and ψ_L will be multiplied by different phase factors. An additional requirement is that the effective model of tunneling be gauge invariant at the classical level; this condition follows from the decoupling between the modes at the interface and the bulk degrees of freedom. Considering a gauge transformation with parameter α(x, y), where φ ≡ 2y_0 e ∂_2α|_{y=0} is the relative phase implied by the physical distinction between ψ_R and ψ_L, as fields defined on edges separated by 2y_0, a simple check shows that (3.61) indeed satisfies all the above conditions.
We have not considered, in the microscopic derivation, the presence of two-body interactions. Indeed, since the wavefunctions at the edges are delocalized in the x direction, their low electric charge densities mean that the Coulomb repulsion generates only logarithmic effects on the spectrum [18]. The Coulomb potential may be relevant, however, in situations where we have localized states induced by specific configurations of the tunneling amplitude.
IV. CHARGE TRAPPING AND CONDUCTIVITY AT THE INTERFACE
Let us explore some of the consequences of the effective model of tunneling, given by (3.61). We consider first the case a_0 = a_1 = a_2 = 0. This model may exhibit the phenomenon of charge fractionalization [7], intrinsically related to the global behaviour of the external fields f_1(x) and f_2(x). A gradient expansion of the fermion current J^µ, computed assuming that f_1 and f_2 are slowly varying fields in space-time, yields [8] (see appendix)

J^µ = (1/2π) ε^{µν} ∂_ν arctan(f_2/f_1).   (4.1)

For a configuration given by f_1 = ε, ε → 0, and f_2 = A tanh(λx), we obtain from the expression for J^µ a total charge +1/2 or −1/2, localized near x = 0, when ε → 0^+ or ε → 0^−, respectively. The role of f_1 is limited to providing a regularization of (4.1). The two possibilities for the total charge are associated with a zero mode in the spectrum, which in turn implies a doubly degenerate state: the occupied zero mode, with charge 1/2, and the unoccupied one, with charge −1/2. According to the discussion of section III, this profile of f corresponds to the modulating potential (4.2). This potential could, in principle, be manufactured with the techniques used in the fabrication of microstructures. A difficulty is that the spatial period of (4.2) occurs at scales (∼100 Å) not yet easily manageable by present day technology. However, it is possible that disordered interfaces (which must contain, in Fourier space, modes close to 2k̄), modulated at adequate scales by a function qualitatively similar to tanh(λx), offer a more practical way to observe the occurrence of fermion number fractionalization.
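The fractional values can be checked numerically from (4.1): the trapped charge is the winding of arctan(f_2/f_1) across the soliton, as in this sketch.

```python
import numpy as np

# Numerical check of the Goldstone-Wilczek result (4.1):
# Q = (1/2pi) [theta(+inf) - theta(-inf)], with theta = arctan(f2/f1).
def trapped_charge(eps, A=1.0, lam=2.0):
    x = np.linspace(-50.0, 50.0, 400001)
    f1 = eps * np.ones_like(x)             # regulator f1 = eps
    f2 = A * np.tanh(lam * x)              # soliton profile
    theta = np.unwrap(np.arctan2(f2, f1))  # continuous winding of arctan(f2/f1)
    return (theta[-1] - theta[0]) / (2.0 * np.pi)

print(trapped_charge(+1e-8))   # -> +0.5  (eps -> 0+)
print(trapped_charge(-1e-8))   # -> -0.5  (eps -> 0-)
```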
Let us now investigate the introduction of perturbing gauge fields into the system. There is a curious Aharonov-Bohm effect, which shows how a magnetic flux may control the amount of charge localized at the interface. In order to see it, we consider a_0 = 0, f = const., and a magnetic flux Φ confined to the interior of a very thin imaginary solenoid crossing the plane at the point (x, y) = (0, 0). In this way, the exponential factor appearing in (3.61) is exp(−ieΦγ_5) for x = 0^− and exp(ieΦγ_5) for x = 0^+. According to (4.1), this discontinuity will induce an accumulation of charge Φ/π at x = 0.
In a "soliton" profile, such as f = iA tanh(λx), the zero mode may be explicitly computed [19] and the localization region inferred to be

δ ∼ lim_{x→∞} 2xv / ∫_0^x |f(y)| dy.   (4.3)
We also note that, in the situations we have been discussing, the charge concentrated at the interface will have an interesting internal structure: the relation between Fourier space and coordinate space (the y direction) will make any charge distribution have two equivalent and disjoint pieces, localized at the R and L edges. The numerical analysis of section VI will show this effect very clearly.
Since we know how the system is coupled to gauge fields, we can compute the conductivity at the interface, as a response to small electric fields.
The current density that crosses the interface, j_⊥, flowing from one edge to the other, may be calculated by differentiating the generating functional of the model (3.61) with respect to the gauge transformation parameter α = α(x, y), where F_µν = ∂_µ a_ν − ∂_ν a_µ. In this way, from (4.6) and (4.4), we obtain j_⊥ in terms of E_x, the x component of the electric field. We may also study the influence of a small electric field pointing in the y direction: in this case we have a_0 = a_1 = 0 and a_2 = E_y t, where E_y represents the y component of the electric field, and (4.1) can be applied directly. Therefore, the above relations tell us that the response to external electric fields is given (in the usual units) by the conductivity tensor

σ_xx = σ_yy = 0,   σ_xy = −σ_yx = e^2/h,

showing the quantization of the Hall conductivity at the interface. It is interesting to note that this result is strongly tied to the imposition of gauge invariance and is independent of the specific configuration of the tunneling amplitude, even when it is associated with localized states. The above argument may be regarded as an exact version, worked out for a particular problem, of the fundamental works of Halperin [1] and Laughlin [21], since here we do not have to take any averages over magnetic fluxes.
V. LATTICE EFFECTS
The quantum Hall effect is observed in approximately two-dimensional electron systems, confined at the interface of semiconductor devices such as Si-SiO2 or GaAs-Ga_{1−x}Al_xAs. The latter has been preferred in recent years due to its high-mobility parameters, allowing for very precise measurements of the conductivity and other physical quantities [22].
For a study of lattice effects, let us consider the two-dimensional electron gas as having a certain thickness 2L_⊥, experimentally found to be around 50 Å, and interacting with a three-dimensional atomic lattice of macroscopic size, in fact much larger than any relevant characteristic length in the process of tunneling. The lattice structure is relatively complex.
The GaAs lattice, for instance, has a zincblende structure, consisting of two interpenetrating fcc sublattices, one of Ga and the other of As atoms. The inter-atomic distance, s, is close to 5 Å. We will study the effects of the lattice on the tunneling through an approach analogous to that of Takayama et al. [15] for the case of polyacetylene.
The fortunate fact that the magnetic length is nearly 100 Å in the quantum Hall effect means that, in order to estimate couplings, we can simplify the discussion by assuming the lattice to have a simple cubic structure, with atoms linked through a harmonic potential. Therefore, the lattice potential may be expressed as a sum of single atomic potentials centered at sites x_i = s(n_1 x̂ + n_2 ŷ + n_3 ẑ), with n_1, n_2, n_3 integers, where we consider distortions, represented by ξ(x_i), in a linear approximation.
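The displayed form of the lattice potential is not preserved in the source; a hedged reconstruction consistent with the description above (a sum of single atomic potentials, linearized in the distortions ξ) is

V(\vec{x}) \;=\; \sum_{i} V_{0}\!\left(\vec{x} - \vec{x}_{i} - \vec{\xi}(\vec{x}_{i})\right) \;\simeq\; \sum_{i} \left[\, V_{0}(\vec{x} - \vec{x}_{i}) \;-\; \vec{\xi}(\vec{x}_{i}) \cdot \vec{\nabla} V_{0}(\vec{x} - \vec{x}_{i}) \,\right].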
We will be interested in static distortions of the lattice, so that we can write the total Hamiltonian of the system (quasi one-dimensional interface + lattice) as […], where H^I_eff is given by (3.43), K ∼ 10⁻⁶ Å⁻³ is the force constant associated to distortions, and φ(x) is defined from (3.21), according to φ(x, y) → φ(x, y, z) = […], which corresponds to three-dimensional wavefunctions confined in |z| < L_⊥. The sum in (5.2) is carried over nearest neighbors. We are also assuming the absence of additional external gauge fields.
A convenient approximation is to replace sums over sites in (5.1) and (5.2) by integrals.
We find, then, […] and […]. As an estimate, we can write V_0(x) = −4πZe²s² δ³(x), where e² = 1/137 is the fine structure constant and Z ≃ 2 is an effective atomic number. Substituting the above quantities in […], the first term on the RHS of (5.5) is a constant and may be shifted to zero. As we noticed in section III, only certain Fourier components of V(x) will be relevant in the tunneling:

[…]   (5.6)
From (5.6) we get, neglecting gradients of σ and ϕ, […]. Therefore, we see that only σ_1 must be considered in our analysis. On the other hand, the quadratic term in ξ in the Hamiltonian (5.4) turns out to be (restricted to σ_1) […], where, assuming that σ_1 and ϕ are dominated by slow modes, we have substituted the above trigonometric functions by their averaged values.
Defining the complex field σ = σ_1 exp(iϕ) and using the approximations (5.7) and (5.8), we find, for the total Hamiltonian of the system, […]. It is useful to work in Fourier space, through

[…]   (5.10)

We now define a variational ansatz for the Hamiltonian (5.9): let us consider a class of fields σ̃(q), parametrized by ∆_1, ∆_2 ≥ 0, according to […]. This means, roughly, that we are taking the distortion field σ(x) as different from zero only in a certain neighborhood of the interface, given, in Fourier space, by the two variational parameters ∆_1 and ∆_2. Substituting (5.11) in (5.9) we find, after straightforward computations, the Hamiltonian […], where […], […], and […] (5.16). In order to study the variational problem, we will consider the path-integral formalism, defining the generating functional […], where […]. We will simplify our discussion, assuming that t_1, t_2 = 0, or in other words, that we have a very clean interface, without any degree of disorder or modulating potentials. We can integrate over the fermion fields in (5.17), neglecting variations of |σ| and ϕ. This may be considered as the first term in a gradient expansion of the fermion determinant. We get, then, […], with […], where […]. In the computation of (5.21) it is important to consider the presence of a cutoff at k̃ in the fermion theory. The variational strategy is to find extremes of (5.20), in the space of configurations of σ(x) and also in the space of parameters ∆_1 and ∆_2. Let us perform this analysis in two steps: first, we consider a fixed pair (∆_1, ∆_2) in order to find the (constant) field σ̄(∆_1, ∆_2) which extremizes S_eff. Second, we look for extremes of S_eff[σ̄(∆_1, ∆_2)] in the space (∆_1, ∆_2).
We obtain, in the first step, the gap in the fermion spectrum, […] (5.22), and the effective action, evaluated for σ̄(∆_1, ∆_2), […]. It is readily seen that the second step in the variational analysis, ∂S_eff/∂∆_1 = ∂S_eff/∂∆_2 = 0, leads to ∂[…]/∂∆_{1,2} = 0 (5.26). This equation has, in fact, many solutions, which determine a curve in the (∆_1, ∆_2) plane.
An estimate yields an isotropic solution ∆_1 ∼ ∆_2 ∼ 10. If we look at (5.11), we see that the degeneracy in the solutions of (5.26) could mean a "torsion" in the lattice displacements, if the parameters ∆_1 and ∆_2 were non-trivially dependent on q_x. We assume, however, that the isotropic solution represents a mean field for the fluctuations of ∆_1 and ∆_2. In any case, the gap, as given by (5.22), does not depend on the degeneracy of ∆_1 and ∆_2: substituting (5.26) into (5.22), we find two possible "vacua", physically measurable as a gap at the interface.
We can, in the same way, find non-trivial solutions of the Euler-Lagrange equations for the complex field σ. We will have, in general, solitons which interpolate between the vacuum values of ⟨σ⟩. The width of the transition region is given by the square root of the ratio between the kinetic and mass term coefficients in the action (5.20). This quantity is η^(−1/2). Using ∆_1, ∆_2 ∼ 10, we have η^(−1/2) ∼ 10 Å. Since σ changes its sign in this specific configuration, we will have solitons carrying charges ±1/2 (compared to the electron charge) propagating along the interface. Using equation (4.3) we find that the charge will be spread, along the interface, in a region of length ∼ 500 Å. Here, as in the case of polyacetylene, we can also have soliton-antisoliton (polaron) excitations. One could suppose that the soliton states would be found as midgap excitations, but since the gap is estimated to be small, it would be hard to observe any kind of soliton production threshold through variations of the sample temperature. We point out, however, that there is a mechanism, related to Coulomb repulsion, which raises the soliton energy up to more clearly observable scales. The argument is as follows. As we have already noticed, the edges are associated to different wave-numbers of the fields ψ_R and ψ_L.
This means that all the charge-density profiles of these excitations will be symmetrically displaced at opposite sides of the interface. In this way, the Coulomb repulsion will raise the soliton energy by ∼ (e/4)²/(2ℓ) ∼ 3 × 10⁻⁶ Å⁻¹, which is close to the gap between Landau levels (∼ 4 × 10⁻⁶ Å⁻¹). We see, therefore, that the Coulomb interaction, which was playing a minor role in the theory so far, has an important participation in the soliton spectrum.
Another interesting point is the possibility of an enhanced electric conductivity at the interface, via soliton excitations. The same phenomenon was conjectured to be present in polyacetylene but, in view of the relatively small polymer filaments, the solitons probably do not contribute directly to the conductivity [23,24]. In our case, however, the interface may be constructed many orders of magnitude longer than the magnetic length, in such a way that the solitons could be expected to be relevant in the conduction process.
VI. NUMERICAL ANALYSIS
In order to test the accuracy of the approximations made in sec. III, we performed a numerical investigation of the model of tunneling defined by relation (3.1):

[…]   (6.3)
The elementary calculation of ⟨n|H_0|m⟩ leads to a diagonal matrix with its elements given by (3.10), with p = 0. In order to evaluate ⟨n|H_I|m⟩ we must choose a specific modulating potential barrier. Since we want to explore the occurrence of fermion number fractionalization, we may take a function f(x) with the same asymptotic behaviour as the modulating potential considered in sec. IV or that of the predicted solitons of sec. V. Some numerical improvement is obtained from the consideration of […]. In order to obtain the charge distribution along the interface we must evaluate

|ψ_n(x, y)|² = Σ_{ij} (c_i^n)* c_j^n φ_i(x, y)* φ_j(x, y) ,   (6.5)

where c_i^n is the i-th component of the n-th eigenvector of (6.1). The result for the localized state (zero mode) is shown in fig. 2a. On the other hand, a typical delocalized state in the band, far away from the gap, is depicted in fig. 2b. Note that in the localized state we have peaks at both sides of the interface. The integral of the two peaks is close to 1/2, so that at each side of the interface there is an accumulated charge of 1/4, measured in units of the electron charge. In the numerical computation the total charge is not exactly 1/2, in view of finite size effects, as was numerically verified.
It is interesting to analyse the "vacuum" structure of the theory. For this aim it is enough to integrate expression (6.5) in the y direction. We define, then, a projected charge density, |ψ_n(x)|² ≡ ∫_{−∞}^{∞} dy |ψ_n(x, y)|². In the field theory context, the vacuum charge density is evaluated by means of […] (6.6). Note that in (6.6) not only the lower band but also the upper band is considered in the calculation. The numerical result, shown in fig. 3, agrees with the field theory expression. In this computation one observes that the states in the upper band cancel the states in the lower band, in such a way that the charge density of the "vacuum" turns out to be determined only by the zero mode. This fact suggests that the approximation of linearizing the energy near the Fermi energy is indeed a very good approximation. This result also confirms that our matrix Hamiltonian is adequate to obtain the physics at the interface, without interference from the bulk. Therefore, the equivalence of the two calculations presented above is clear evidence that the "Dirac sea" of the field theory model reproduces accurately the completely filled lower band.
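The diagonalization procedure of this section can be illustrated with a short numerical sketch. The code below is not the paper's actual computation (the original edge-state basis and tunneling matrix elements are not reproduced here); it is a minimal Python analogue, assuming a hypothetical sine basis and barrier profile f(x), that builds the truncated matrix ⟨n|H_0 + H_I|m⟩, diagonalizes it, and reconstructs real-space densities exactly as in eq. (6.5):

import numpy as np

# Minimal sketch of the truncated-basis diagonalization of sec. VI.
# Basis: particle-in-a-box states on [0, Lbox], a stand-in for the
# paper's edge-state basis (which is not reproduced here).
Lbox, N, M = 10.0, 60, 400                  # box size, basis size, grid points
x = np.linspace(0.0, Lbox, M)
dx = x[1] - x[0]
phi = np.array([np.sqrt(2.0 / Lbox) * np.sin(n * np.pi * x / Lbox)
                for n in range(1, N + 1)])  # phi[i] = phi_i(x)

E0 = np.array([(n * np.pi / Lbox) ** 2 for n in range(1, N + 1)])
f = 5.0 * np.tanh(2.0 * (x - Lbox / 2))     # hypothetical soliton-like profile
H = np.diag(E0) + (phi @ (f[:, None] * phi.T)) * dx  # <n|H0|m> + <n|H_I|m>

w, c = np.linalg.eigh(H)                    # c[:, n] = n-th eigenvector

# Eq. (6.5): |psi_n(x)|^2 = sum_ij (c_i^n)* c_j^n phi_i(x) phi_j(x)
def density(n):
    psi = c[:, n] @ phi                     # psi_n(x) on the grid
    return np.abs(psi) ** 2

rho = density(0)                            # lowest eigenstate
print("norm of |psi_0|^2:", (rho * dx).sum())  # ~1 up to discretization error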
VII. CONCLUSION
We studied the tunneling across a quasi one-dimensional interface in the integer quantum Hall effect. The particular form of the electron spectrum at the edges allowed a mapping from the microscopic definition of the interface to a relativistic (1 + 1)-dimensional quantum field theory. Gauge fields and lattice effects were considered in the description. Regarding the coupling to gauge fields, the Hall conductivity was found to be quantized, independently of the possible induction of localized states by a non-uniform tunneling amplitude. We also obtained a peculiar Aharonov-Bohm effect, which shows the influence of magnetic fluxes on the charge concentrated at the interface. The study of interactions between edge excitations and a three-dimensional lattice showed us a natural mechanism for the generation of fractionally charged solitons propagating along the interface. They are associated with topologically stable solutions of the Euler-Lagrange equations for a complex scalar field. We point out that these excitations may contribute strongly to the σ_xx component of the conductivity tensor.
A numerical test supported the field theory approximations in the case of charge trapping in a modulated barrier.
We believe that an experimental investigation of the above predicted phenomena is crucial. We thank […] for conversations. This work was supported by CNPq (Brazil).
APPENDIX A: FRACTIONAL FERMION NUMBER
In order to make this paper self-contained, we will show in this appendix how the fermion current (4.1) may be obtained. We will compute it through the adiabatic method, as outlined by Goldstone and Wilczek [8]. A more rigorous approach may be found in ref. [9].
Let us consider the following fermionic Lagrangian in (1 + 1) dimensions, […], where f_1 and f_2 are classical external fields, and the γ matrices are defined in (2.5). This model may exhibit the phenomenon of fermion number fractionalization, according to the topology of the fields f_1 and f_2.
The model is invariant under rotations in the (f_1, f_2) plane. This is related to global chiral transformations. That is, considering […], where |f| = (f_1² + f_2²)^{1/2}, we see that a rotation in the plane of coordinates (f_1, f_2) by an angle Θ gives tan⁻¹(f_2/f_1) → tan⁻¹(f_2/f_1) + Θ, which can be absorbed by a global chiral transformation. The existence of this symmetry will be important to establish the topological nature of J^µ.
Substituting the expansion S_0(k + q) = S_0(q) + k_ν (∂/∂q_ν) S_0(q) in the above expression, we get […]. Since Tr[γ^µ γ^ν γ_5] = 2ε^{µν}, we obtain […]. Performing a Wick rotation, q_0 → iq_0, the above integral yields i ∫ d²q m/(q² + m²)² = 2πi […]. In this way, using m = −cf_1(x_0), we find […]. As we mentioned, the theory has chiral invariance. This means that we must write the current J^µ(x_0) in a chirally invariant way. In other words, it must be invariant under rotations in the (f_1, f_2) plane. Therefore, we are led to […] (A17). It is interesting to note that, from a perturbative calculation and taking into account the chiral symmetry of the model, it was possible to find the non-perturbative expression (A17).
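The chirally invariant expression (A17) itself is not preserved in the extraction. For reference, the standard Goldstone-Wilczek result for a Lagrangian of this type, which is presumably what (A17) states, is

J^{\mu} \;=\; \frac{1}{2\pi}\, \epsilon^{\mu\nu}\, \partial_{\nu} \tan^{-1}\!\left(\frac{f_{2}}{f_{1}}\right),

so that the total charge Q = ∫ dx J⁰ depends only on the asymptotic winding of the (f_1, f_2) field, in agreement with the topological argument of the appendix.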
The dependence of J^µ on the topology of the fields f_1 and f_2 may be clearly seen by considering, for example, the total charge […].

FIG. 2. The charge density at the interface, |ψ(x, y)|², and its level curves. Fig. 2a shows the profile of the charge trapped in the potential barrier. Fig. 2b shows the charge density of a delocalized state. | 2014-10-01T00:00:00.000Z | 1994-05-26T00:00:00.000 | {
"year": 1994,
"sha1": "67a70d713baf1fd6725f364e72b8b5c200d455cd",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/9405081",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "89d1218aa16b9650c05c8f384ac46d2db89ee119",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
17923421 | pes2o/s2orc | v3-fos-license | Nitroglycerine-Induced Nitrate Tolerance Compromises Propofol Protection of the Endothelial Cells against TNF-α: The Role of PKC-β 2 and NADPH Oxidase
Continuous treatment with organic nitrates causes nitrate tolerance and endothelial dysfunction, which involves the protein kinase C (PKC) signaling pathway and NADPH oxidase activation. We determined whether chronic administration of nitroglycerine compromises the protective effects of propofol against tumor necrosis factor alpha (TNF-α)-induced toxicity in endothelial cells via PKC-β2-dependent NADPH oxidase activation. Primary cultured human umbilical vein endothelial cells were either left untreated or treated with TNF-α (40 ng/mL) alone or in the presence of the specific PKC-β2 inhibitor CGP53353 (1 μM), nitroglycerine (10 μM), propofol (100 μM), propofol plus nitroglycerine, or CGP53353 plus nitroglycerine, respectively, for 24 hours. TNF-α increased the levels of superoxide, NOx (nitrate and nitrite), malondialdehyde, and nitrotyrosine production, accompanied by increased protein expression of p-PKC-β2 and gp91phox and by endothelial cell apoptosis, and all these changes were further enhanced by nitroglycerine. CGP53353 and propofol, respectively, reduced TNF-α-induced oxidative stress and cell toxicity. CGP53353 completely prevented TNF-α-induced oxidative stress and cell toxicity in the presence or absence of nitroglycerine, while the protective effects of propofol were neutralized by nitroglycerine. It is concluded that nitroglycerine compromises the protective effects of propofol against TNF-α stimulation in endothelial cells, primarily through PKC-β2-dependent NADPH oxidase activation.
Introduction
Ischemic heart disease is a leading cause of death in many regions. The mortality of myocardial infarction remains significant despite advancements in surgical techniques and pharmacological therapies. Organic nitrates (such as nitroglycerin and L-arginine) are still useful drugs and have been widely used for the prevention and treatment of ischemic heart disease for more than 100 years [1,2]. However, these drugs are also known to induce nitrate tolerance after prolonged, continuous, or high-dose administration, which leads to the abolishment of the clinical or hemodynamic response to organic nitrates [3] and subsequently induces endothelial dysfunction [4]. It has been reported that nitrate tolerance and endothelial dysfunction are associated with increased vascular production of reactive oxygen species (ROS) via mechanisms that involve increased protein kinase C (PKC) and NADPH oxidase activation and eNOS uncoupling in the vascular endothelium [4-6]. Interestingly, circulating proapoptotic inflammatory cytokines (such as tumor necrosis factor alpha (TNF-α)), which are increased during myocardial infarction and atherosclerosis, may promote the production of ROS and the subsequent induction of cardiomyocyte and endothelial cell apoptosis [7]. A further study showed that TNF-α-induced human endothelial cell apoptosis involved the activation of PKC [8]. Despite these observations, whether or not organic nitrates aggravate TNF-α-induced endothelial cell apoptosis, and the underlying mechanisms by which PKC isoforms exert adverse effects in this pathology, remain unclear.
Propofol, an anesthetic with demonstrated antioxidant properties [9], has shown protective effects against ischemia-reperfusion injury in various models [10-12]. We previously reported that propofol dose-dependently reduced TNF-α-induced apoptosis in primary cultured human umbilical vein endothelial cells (HUVECs) [13]. Our further study showed that the supplementation of L-arginine exacerbated TNF-α-induced cellular toxicity by enhancing oxidative stress and nitrative stress, which was neutralized by propofol treatment [14]. It is unknown, however, whether or not propofol achieves these effects via inhibition of PKC-β2, a PKC isoform that may play a major role in TNF-α-induced human endothelial cell apoptosis [15]. Of interest, propofol has been shown to activate several PKC isoforms in cardiomyocytes [16-18], which may represent an important cellular mechanism of propofol-induced myocardial protection in the setting of ischemia-reperfusion injury. However, in all these studies, the effect of propofol on PKC-β2 was not reported, nor has it been investigated in endothelial cells under conditions of nitrate tolerance. In the present study, we hypothesized that nitrate tolerance induced by organic nitrates compromises the protective effects of propofol against TNF-α-induced toxicity in endothelial cells. Our data suggest that nitroglycerine supplementation promoted PKC-β2 activation in HUVECs subjected to TNF-α stimulation, which subsequently increased the activation of NADPH oxidase and compromised the protective effects of propofol against TNF-α-induced damage.
Cell Culture.
Primary cultured HUVECs were prepared using established procedures as previously described [13]. Briefly, cells were digested from the umbilical vein with 0.1% collagenase I (w/v) at 37°C for 20 min, after which they were cultured in 0.1% (w/v) gelatin-coated flasks in Medium 199 supplemented with 10% fetal bovine serum, 15 mg/L ECGS, 2 mM glutamine, 100 units/mL penicillin, and 100 μg/mL streptomycin in an atmosphere of 5% CO₂ at 37°C. The medium was changed every 2-3 days until the ECs reached confluence. Cultured cells were identified as ECs by their morphology, and the presence of the Factor VIII-related antigen was detected using an indirect immunocytochemistry method as described [19]. The purity of HUVECs in culture was higher than 95%, and passages 2-4 were used in the research.
Experimental Conditions.
When the cells were at 70% confluence, the cultured cells were randomly divided into the following groups: cells were either left untreated (control group, Con.) or treated with 40 ng/mL TNF-α (TNF-α group, T) alone, or with TNF-α in the presence of 1 μM CGP53353 (CGP) (TNF-α + CGP group, T + C), 10 μM nitroglycerine (NTG) (TNF-α + NTG group, T + N), 100 μM propofol (TNF-α + propofol group, T + P), NTG plus propofol (TNF-α + NTG + propofol group, T + N + P), or NTG plus CGP53353 (TNF-α + NTG + CGP53353 group, T + N + C), respectively, for 24 hours. In specific groups, cultured cells were pretreated with propofol for 30 min before the other treatments. The concentration of TNF-α used to induce apoptosis in the present study was chosen on the basis of our previous studies [13], which demonstrated that TNF-α at the dose of 40 ng/mL could significantly induce EC apoptosis. The concentration of NTG adopted is according to studies [20,21] which demonstrated that NTG at the dose of 10 μM could induce nitrate tolerance. The choice of concentration of the PKC-β2 inhibitor was based on the finding that 1 μM CGP53353 could selectively inhibit PKC-β2 activation in our previous study [22]. In our preliminary study, propofol at the dose of 100 μM reversed TNF-α (40 ng/mL)-induced cell injury, but propofol at 100 μM per se did not cause apparent apoptosis under the present experimental conditions in the absence of TNF-α stimulation. Therefore, we chose 100 μM as the treatment dose of propofol for the further mechanistic study.
Determination of Lipid Peroxidation.
The content of malondialdehyde (MDA), a marker of lipid peroxidation, was measured to evaluate the oxidative injury of ECs. After homogenization on ice in normal saline, the MDA levels of the supernatants of cell samples were determined by the thiobarbituric acid colorimetric method using an MDA assay kit (Jiancheng Co., Nanjing, China) as described [23,24]. The results were expressed as nanomoles per milligram protein (nmol/mg protein).
Determination of the Levels of NOx, O₂⁻, and Nitrotyrosine.
Cultured cells were homogenized in ice-cold PBS and centrifuged at 3,000 × g for 15 minutes at 4°C for supernatant collection. The supernatant protein concentration was determined via a Lowry assay kit (Bio-Rad, CA, USA). Concentrations of nitrites (NO₂⁻) and nitrates (NO₃⁻), the stable end products of nitric oxide (NO), were determined by the Griess reaction as previously described [13]. NOx levels were expressed as μmol/L protein. O₂⁻ production was determined via the lucigenin chemiluminescence method [25,26]. The supernatant samples were loaded with dark-adapted lucigenin (5 μM) and read in 96-well microplates by a luminometer (GloMax, Promega). Light emission, expressed as mean light units (MLU)/min/100 μg protein, was recorded for 5 minutes. Nitrotyrosine levels (μg/mg protein) in the collected supernatant were determined by chemiluminescence detection via the Nitrotyrosine Assay Kit per the manufacturer's protocol (Millipore, USA).
Detection of Apoptosis by Flow Cytometry.
DNA fragments which are lost from apoptotic nuclei and nuclear DNA content can be easily measured by flow cytometry after nucleic acid staining with specific fluorochromes. Briefly, cells (1 × 10⁶) were harvested and processed as described [14]. The cells were then subjected to Annexin-V-FLUOS staining and analyzed using a flow cytometer (Beckman Coulter, Brea, CA) according to the manufacturer's protocol. Electronic compensation of the instrument was used to exclude overlapping of the two emission spectra. All measurements were performed with the same instrument settings.
Statistical Analysis.
All data are expressed as mean ± S.E.M. Significance was evaluated by one-way analysis of variance (ANOVA) followed by Tukey's test. The GraphPad Prism software program (GraphPad Software Inc., San Diego, CA, USA) was used for statistical analysis. P < 0.05 was considered statistically significant.
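As an illustration of this analysis pipeline only (this is not the authors' script, and the group names and values below are hypothetical), a one-way ANOVA followed by Tukey's test can be run in Python with scipy and statsmodels:

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Hypothetical MDA readings (nmol/mg protein) for three groups, n = 7 each.
groups = {
    "Con": rng.normal(2.0, 0.3, 7),
    "T":   rng.normal(4.5, 0.4, 7),
    "T+P": rng.normal(3.0, 0.4, 7),
}

# One-way ANOVA across the groups.
f_stat, p_val = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, P = {p_val:.4f}")

# Tukey's HSD post hoc test on the pooled observations.
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))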
Cell Viability and LDH Release.
The cytotoxicity of the cultured endothelial cells was assessed by MTT assay and LDH release. As shown in Figure 1(a), cell viability was significantly reduced after TNF-α stimulation as compared with control, and this reduction was reversed by propofol treatment. The supplementation of nitroglycerine further exacerbated the TNF-α-induced reduction in cell viability. Propofol treatment improved but did not restore the viability of the cells subjected to TNF-α stimulation in the presence of nitroglycerine. By contrast, CGP53353, a selective inhibitor of PKC-β2, reversed the reduced cell viability induced by TNF-α with or without the presence of nitroglycerine. Stimulation with TNF-α resulted in a significant increase in LDH release in the medium of cultured HUVECs (Figure 1(b)). Addition of nitroglycerine further increased TNF-α-induced LDH release. Both propofol and CGP53353 significantly reduced the TNF-α-induced LDH release. By contrast, propofol reduced but did not reverse the levels of LDH release in the presence of nitroglycerine.
(Figure 1 legend) Representatives of the C, T, T + N, T + P, T + N + P, T + C, and T + N + C groups, respectively. All results are expressed as mean ± S.E.M., n = 7; *P < 0.05 compared with Con., T + N, T + P, T + C, and T + N + C; **P < 0.01 compared with Con., T + P, T + C, and T + N + C.
Endothelial Cell Apoptosis.
TNF-α stimulation significantly increased the apoptotic index (Figure 2). Nitroglycerine further increased TNF-α-induced apoptotic cell death. On the other hand, CGP53353 and propofol significantly attenuated the cell apoptosis induced by TNF-α. Propofol attenuated but did not prevent the apoptotic cell death induced by the combination of nitroglycerine and TNF-α, which was profoundly decreased by treatment with CGP53353. The patterns of apoptotic index results obtained from TUNEL staining were similar to those obtained by flow cytometry (data not shown).

Since oxidative stress plays an important role in the progress of nitrate tolerance and endothelial dysfunction [4], we measured superoxide production and MDA, a marker of lipid peroxidation. As shown in Figure 3, the levels of superoxide and MDA were significantly increased in HUVECs subjected to TNF-α stimulation as compared to the control group, and this increase was prevented by treatment with propofol or CGP53353. Addition of nitroglycerine further promoted the production of superoxide and MDA, which was neutralized by propofol treatment but reversed by CGP53353 treatment.
NO and Nitrotyrosine Production.
We next determined the production of NOx and nitrotyrosine in HUVECs. Stimulation with TNF-α increased the levels of NOx and nitrotyrosine production, and nitroglycerine further increased their levels (Figure 4). Propofol treatment had no effect on NOx production in the cells subjected to TNF-α stimulation alone or in combination with nitroglycerine, but it significantly decreased TNF-α-induced production of nitrotyrosine. By contrast, CGP53353 prevented TNF-α-induced NOx production and the nitroglycerine-mediated increase of NOx production, and it reversed TNF-α-induced production of nitrotyrosine whether or not nitroglycerine was present.
Protein Expression of p-PKC-β2 and gp91phox.
We previously found that PKC-β2 activation played a critical role in TNF-α-induced oxidative stress in endothelial cells [27], and a further study showed that gp91phox, but not p22phox, played an important role in TNF-α-induced ROS production and HUVEC apoptosis [15]. Therefore, the present study measured the protein expression of p-PKC-β2 and gp91phox, one of the membrane subunits of NADPH oxidase, which catalyzes the generation of superoxide and is the major source of ROS in the cardiovascular system [28]. As shown in Figure 5, the protein expression of p-PKC-β2 and gp91phox was significantly increased in HUVECs subjected to TNF-α stimulation as compared to that of the control group, and this increase was prevented by treatment with propofol or CGP53353. Addition of nitroglycerine further increased the protein expression of p-PKC-β2 and gp91phox, which was neutralized by propofol treatment but reversed by CGP53353 treatment.
Discussion
In the present study, we examined the protective effects of propofol against TNF-α-induced toxicity in human umbilical vein endothelial cells in the presence or absence of nitrate tolerance. We demonstrated that propofol inhibited or prevented the adverse effects of TNF-α stimulation in the cultured endothelial cells. Furthermore, our results demonstrated that chronic treatment with nitroglycerine further exacerbated TNF-α-induced cell toxicity by promoting PKC-β2 activation, with subsequently increased activation of NADPH oxidase, and ultimately neutralized the protective effects of propofol. This is the first study showing the role of PKC-β2 activation in nitroglycerin-induced nitrate tolerance, which compromises the protective effects of propofol in endothelial cells subjected to TNF-α stimulation.
Endothelial dysfunction is implicated in a variety of cardiovascular diseases, such as hypercholesterolemia, atherosclerosis, hypertension, diabetes, and heart failure (see for review [29]). A relationship has been suggested to exist between inflammation and endothelial dysfunction [30]. TNF-α, one of the most important proinflammatory cytokines, is well known to increase ROS production in the endothelium and subsequently induce endothelial dysfunction [31]. This is well demonstrated by our present study showing that TNF-α resulted in a significant increase of LDH release and cell apoptosis, accompanied by increased superoxide and NOx production, elevated levels of the lipid peroxidation product MDA, and increased production of nitrotyrosine, a nitration product formed by peroxynitrite-mediated nitration of protein tyrosine residues. All these changes except NOx production were suppressed or prevented by propofol, an anesthetic with demonstrated antioxidant properties [9]. However, the precise mechanisms by which propofol attenuates TNF-α-induced oxidative stress and endothelial dysfunction are not clear.
Endothelial NADPH oxidase is a major source of superoxide in blood vessels and is implicated in the oxidative stress accompanying various vascular diseases [32,33]. NADPH oxidase contains two membrane-bound subunits, gp91phox (Nox2) and p22phox, and cytoplasmic subunits such as p47phox, p67phox, and a low-molecular-weight G protein (rac1 or rac2) [34]. Many protein kinase pathways are involved in the regulation of NADPH oxidase activation, among which the PKC family seems to play an important role [35]. Activation of PKC isoforms has been shown to play important or critical roles in NADPH oxidase activation [36,37]. PKC-β2 is preferentially upregulated in failing human hearts [38], which is accompanied by increased levels of TNF-α production [39] and NADPH oxidase activation [40]. Therefore, the interplay of PKC-β2 and NADPH oxidase may play critical roles in mediating cellular damage in situations associated with increased TNF-α production, such as AMI, heart failure, and diabetes, as well as during cardiac surgery using cardiopulmonary bypass. In the present study, propofol prevented TNF-α-induced overexpression of p-PKC-β2 and gp91phox in endothelial cells. Of interest, the selective PKC-β2 inhibitor CGP53353 had effects similar to those of propofol. Therefore, we assumed that propofol preserves endothelial cells through inhibition of the PKC-β2 activation signaling pathway, including inhibition of NADPH oxidase.
Although there are reports on PKC involvement in NADPH oxidase activation after TNF-α stimulation in cultured HUVECs [8], especially under nitrate tolerance conditions, the major or specific PKC isoform that is involved and the precise regulatory mechanism remain unknown. Our previous study demonstrated that activation of the PKC-β2 pathway, rather than that of other PKC isoforms, played the dominant role in ROS production in this context [15]. However, a new finding in the present study is that PKC-β2 activation was involved in NADPH oxidase activation under the condition of nitrate tolerance, a well-known phenomenon in which the clinical or hemodynamic response to organic nitrates (such as nitroglycerin or L-arginine) is attenuated or abolished after prolonged, continuous, or high-dose nitrate treatment (see for review [2]). In the present study, supplementation of nitroglycerine further increased the apoptosis of endothelial cells and the activation of PKC-β2 induced by TNF-α stimulation, accompanied by enhanced levels of gp91phox and ROS production, which were reversed by selective inhibition of PKC-β2 with CGP53353. This suggests that excessive activation of PKC-β2 and subsequent activation of NADPH oxidase play a critical role in the adverse effects induced by nitrate tolerance. Of interest, propofol treatment reversed the increased levels of superoxide, MDA, and nitrotyrosine, the elevated protein expression of PKC-β2 and gp91phox, and the LDH release and cell apoptosis in the endothelial cells after TNF-α stimulation. In the presence of nitroglycerine administration, however, propofol attenuated but did not completely prevent these changes induced by TNF-α stimulation. This means that chronic treatment with nitroglycerin neutralized the protective effects of propofol.
In summary, the results from the present study indicate that nitrate tolerance further exacerbated TNF-α-induced human vascular endothelial cell injury and increased ROS production via PKC-β2-dependent activation of endothelial NADPH oxidase, and that the protective effects of propofol were compromised by nitroglycerine administration in experimental settings associated with persistent TNF-α stimulation. Further studies need to be performed in endothelial cells deficient in the targeted kinase, derived from gene-knockout animals or gene-silenced with specific antisense constructs, to confirm the findings of the current study.
"year": 2013,
"sha1": "f7b837912921eec33948bf04d66efc6d1e513cf3",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/omcl/2013/678484.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8b0b8dc178193ee1c6bf469dd0565a3142a7a8be",
"s2fieldsofstudy": [
"Biology",
"Chemistry",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
211085241 | pes2o/s2orc | v3-fos-license | Plasmids Shaped the Recent Emergence of the Major Nosocomial Pathogen Enterococcus faecium.
Enterococcus faecium is a gut commensal of humans and animals but is also listed on the WHO global priority list of multidrug-resistant pathogens. Many of its antibiotic resistance traits reside on plasmids and have the potential to be disseminated by horizontal gene transfer. Here, we present the first comprehensive population-wide analysis of the pan-plasmidome of a clinically important bacterium, by whole-genome sequence analysis of 1,644 E. faecium isolates from hospital, commensal, and animal sources. Long-read sequencing on a selection of isolates resulted in the completion of 305 plasmids that exhibited high levels of sequence modularity. We further investigated the entirety of all plasmids of each isolate (plasmidome) using a combination of short-read sequencing and machine-learning classifiers. Clustering of the plasmid sequences unraveled different E. faecium populations with a clear association with hospitalized patient isolates, suggesting different optimal configurations of plasmids in the hospital environment. The characterization of these populations allowed us to identify common mechanisms of plasmid stabilization, such as toxin-antitoxin systems, and genes exclusively present in particular plasmidome populations, exemplified by copper resistance, phosphotransferase systems, or bacteriocin genes potentially involved in niche adaptation. Based on the distribution of k-mer distances between isolates, we concluded that plasmidomes rather than chromosomes are most informative for source specificity of E. faecium. IMPORTANCE Enterococcus faecium is one of the most frequent nosocomial pathogens of hospital-acquired infections. E. faecium has gained resistance against most commonly available antibiotics, most notably against ampicillin, gentamicin, and vancomycin, which renders infections difficult to treat. Many antibiotic resistance traits, in particular vancomycin resistance, can be encoded on autonomous extrachromosomal elements called plasmids. These sequences can be disseminated to other isolates by horizontal gene transfer and confer novel mechanisms of source specificity. In our study, we elucidated the total plasmid content, referred to as the plasmidome, of 1,644 E. faecium isolates by using short- and long-read whole-genome technologies in combination with a machine-learning classifier. This was fundamental to investigate the full collection of plasmid sequences present in our collection (pan-plasmidome) and to observe the potential transfer of plasmid sequences between E. faecium hosts. We observed that E. faecium isolates from hospitalized patients carried a larger number of plasmid sequences than isolates from other sources, and we elucidated different configurations of plasmidome populations in the hospital environment. We assessed the contribution of different genomic components and observed that plasmid sequences have the highest contribution to source specificity. Our study suggests that E. faecium plasmids are regulated by complex ecological constraints rather than physical interaction between hosts.
We constructed a core gene alignment for 1,644 isolates of E. faecium clade A. This alignment was filtered for recombination, and the remaining variable sites were analyzed to classify the 1,644 isolates into 85 sequence clusters (SCs) using hierBAPS (postBNGBAPS.2 group) (see Data Set S1 in the supplemental material). In total, 955 genes (orthologous groups) were used to reconstruct the population phylogeny of our E. faecium collection (Fig. 1A) (https://microreact.org/project/BJKGTJPTQ).
In accordance with previous E. faecium population studies, we split the 1,644 E. faecium isolates into clade A1 and non-clade A1 isolates (Fig. 1). Hospitalized patient isolates (1,142) were mostly designated clade A1 (1,098; 96%), representing the most frequent source in this clade (1,098/1,227 [89%]). We also identified clade A1 isolates in nonhospitalized persons (18) and pets (102) (Fig. 1B). Furthermore, pet isolates represented the biggest nonhospital source (78%) present in clade A1 (Fig. 1B). These pet isolates were mainly from dogs from the Netherlands, randomly selected in an unbiased nationwide survey of healthy pet owners with no recent antibiotic usage history. In this survey, cocarriage of vancomycin-resistant E. faecium between owners and dogs was not observed (14).
Human community isolates from nonhospitalized patients were widely dispersed over the phylogenetic tree outside clade A1 (Fig. 1A). Farm animal isolates, represented in this study mostly by isolates from poultry and pigs, clustered in clade A distinct from the hospital clade A1 in polyphyletic groups, confirming that there is no distinct clade A2 representing isolates from farm animals (2,7), in contrast to what was reported previously (6). Pig and poultry isolates were grouped in a limited number of distinct SCs, with 88% of pig isolates grouping in SCs 29 and 30 and 93% of poultry isolates grouping in SCs 24, 25, and 35 (Data Set S1).
Completed plasmid sequences show extensive modularity. To elucidate whether plasmids have shaped the observed E. faecium population structure, we first fully resolved the plasmids of E. faecium by performing Oxford Nanopore Technologies (ONT) sequencing and subsequently constructing hybrid assemblies for 62 E. faecium isolates. These isolates were selected to capture the highest plasmidome variability present in our 1,644 clade A E. faecium isolates, based on PlasmidSPAdes predictions (15) and a homology search against a curated database of replication initiator proteins in enterococci (11), as previously described (13) (see Text S1).
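Selecting a subset of isolates that "captures the highest plasmidome variability" can be approached with a greedy maximum-diversity heuristic. The sketch below is not the authors' actual selection procedure (which combined PlasmidSPAdes predictions with a replicon database; see Text S1); it only illustrates farthest-point sampling on a hypothetical precomputed pairwise distance matrix:

import numpy as np

def farthest_point_selection(dist, k, seed=0):
    """Greedy max-min selection of k indices from a pairwise distance matrix."""
    selected = [seed]
    # Distance from every isolate to the closest already-selected one.
    min_dist = dist[seed].copy()
    while len(selected) < k:
        nxt = int(np.argmax(min_dist))  # most dissimilar remaining isolate
        selected.append(nxt)
        min_dist = np.minimum(min_dist, dist[nxt])
    return selected

# Example with a random symmetric distance matrix for 100 isolates.
rng = np.random.default_rng(1)
d = rng.random((100, 100))
d = (d + d.T) / 2
np.fill_diagonal(d, 0.0)
print(farthest_point_selection(d, k=10))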
Hybrid assemblies resulted in 48 completed (finished) chromosome sequences (and 14 chromosomes distributed among two or more contigs), 305 plasmids, and 6 phage sequences present in single circular contigs (Data Set S1). The 48 complete chromosomes ranged in size from 2.42 to 3.01 Mbp. Hospitalized patient isolates (n = 32) had the largest chromosomes (mean, 2.82 Mbp), whereas poultry isolates (n = 2) carried the smallest chromosomes (mean, 2.42 Mbp). Notably, hospitalized patient isolates had up to 20% larger chromosomes than E. faecium from other sources, which highlights the considerable genomic flexibility of this organism.
We characterized these plasmids using a standard classification (11) based on (i) presence of replication initiator proteins (RIP) (Data Set S1) and (ii) presence of relaxases (MOB) (Data Set S1). A considerable proportion of plasmids (48/294 [16%]) were multireplicon plasmids, with plasmids encoding up to four different RIP gene families, indicating a high degree of plasmid modularity (see Fig. S1). This was most prominent in Rep_1 and Inc18 family plasmids, which contained at least one other RIP with a frequency of 1.0 (8/8) and 0.53 (30/57) (Fig. 2B), respectively. The predominant RIP family RepA_N (n = 82) was mainly encoded by large plasmids (mean plasmid length, 155.3 kbp) and was less frequently associated with other RIP sequences (n = 15, 18%) (Fig. 2B). Plasmids encoding the Rep_3 family (n = 56; mean plasmid length, 12.4 kbp) and Rep_trans (n = 24; mean plasmid length, 25.7 kbp) were less frequently present in multireplicon plasmids (n = 6, 11%) (Fig. 2B). No RIP family was characterized for 11 plasmids (mean plasmid length, 9.6 kbp).

FIG 1 (legend, continued) Isolates selected for long-read sequencing are indicated with + under long-read selection. Isolates were colored based on their isolation source: hospitalized patients (red), nonhospitalized persons (blue), pet (green), pig (pink), poultry (brown), and other sources (black). Arrow in the RAxML tree indicates the internal node 1227 used to split the clade A1 and non-clade A1 isolates. (B) For each isolation source (x axis), we specified the count and percentage (y axis) of isolates belonging or not to clade A1.

FIG 2 Overview of completed plasmid sequences (n = 305). (A) Pairwise Mash distances (k = 21, s = 1,000) of the completed plasmid sequences (n = 305) were transformed into a distance matrix and clustered using hierarchical clustering (ward.D2). Node positions in the dendrogram were used to sort and represent in different panels: (i) isolation source, (ii) replication initiator gene (RIP), and (iii) plasmid size (kbp) of the completed plasmid sequences. (B) Intersection plot of the combination of RIP and relaxases found in the set of completed plasmid sequences with associated RIP sequences (n = 294).
Hospitalized patient isolates have the largest predicted plasmidome sizes. To predict the plasmidome content of the other 1,582 E. faecium isolates that were only sequenced with short-read technology, we previously used the information derived from the completed plasmid sequences to develop and validate a machine-learning classifier called mlplasmids (13). The classifier achieved an accuracy of 0.95 and an F1 score (the harmonic mean of precision and recall) of 0.92 on a test set of E. faecium sequences generated by short-read sequencing. A more extensive description of the classifier validation and its performance compared to that of existing plasmid prediction tools can be found in the study by Arredondo-Alonso et al. (13).
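For reference, the reported metrics are standard binary-classification quantities; a minimal sketch with scikit-learn (using hypothetical labels, not the actual mlplasmids test set) is:

from sklearn.metrics import accuracy_score, f1_score

# Hypothetical labels: 1 = plasmid-derived contig, 0 = chromosome-derived.
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 0, 1, 0]
print("accuracy:", accuracy_score(y_true, y_pred))
print("F1:", f1_score(y_true, y_pred))  # F1 = 2PR / (P + R)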
mlplasmids was applied to the present collection of E. faecium isolates, resulting in an average of 240,324 bp (52 contigs) predicted as plasmid derived per isolate, while the average number of chromosome-derived base pairs was 2,619,359 bp (113 contigs) per isolate. mlplasmids did not predict plasmid-derived contigs in four isolates, including one isolate that was previously described as plasmid-free (64/3, named E2364 in this study) (16).
We observed significant differences in the number of base pairs predicted as plasmid derived depending on the source of the E. faecium isolates (P < 0.05) (Fig. 3A). The predicted plasmidome size of isolates from hospitalized patients was considerably larger (mean, 276.16 kbp; P < 0.05) than that of isolates from other sources (Fig. 3). This finding is in line with previous reports which showed that isolates from clade A1 are enriched for mobile genetic elements (6, 17).
Plasmidome populations are strongly associated with isolation source. To structure the pan-plasmidome of E. faecium, we determined pairwise distances between isolates based on the k-mer content of their predicted plasmidomes. We computed a neighbor-joining tree (BioNJ) to cluster E. faecium isolates exclusively on the basis of gain and loss of plasmid sequences (Fig. 4A). During this analysis, 37 isolates were excluded, as they showed no signatures of plasmid carriage based on their distribution of pairwise distances (see Fig. S3).
To evaluate the core genome clonality of isolates clustering in the same plasmidome population, we incorporated information regarding isolation source and SCs into the plasmidome tree (Fig. 4A) and the core genome phylogeny (Fig. 4B). Isolates with similar plasmidome content but different SCs were positioned in different parts of the core genome phylogeny (Fig. 4B), which could be indicative of horizontal transmission of plasmid sequences.
To quantify and formalize these observations of horizontal or vertical transfer of plasmid sequences, we estimated clusters of isolates with similar plasmidomes. The k-mer distances of the plasmidomes were clustered using hierarchical clustering (ward.D2), and we estimated an optimal number of 26 clusters (average silhouette width, 0.45) (Fig. S4A). To enable meaningful statistical inferences, we only considered clusters that contained more than 50 isolates and had an average silhouette width, as a measure of goodness of fit, higher than 0.3 (Fig. S4B). This resulted in 9 clusters that are referred to as plasmidome populations 1 to 9 (Fig. 3B, S4, and S5). We then calculated the SC diversity of all isolates of each plasmidome population (Simpson index) and tested for enrichment of particular isolation sources (Fig. S4B). However, these plasmidome populations may be driven by the k-mer content of large plasmid sequences and could obscure the potential transfer of small plasmid sequences between isolates. An extensive evaluation of the plasmidome populations and potential transfer of the complete plasmid sequences obtained in our study is described in Text S1.

[Figure legend] (B) Pairwise Mash distances (k = 21, s = 1,000) of plasmid-predicted contigs in 1,607 isolates were transformed into a distance matrix and clustered using hierarchical clustering (ward.D2). Based on the quantile function of our defined gamma distribution, we grouped isolates in five blocks: black (0 to 0.01), red (0.01 to 0.25), orange (0.25 to 0.5), yellow (0.5 to 0.75), and white (0.75 to 1.0). The dissimilarity matrix of the isolates was visualized as a heat map colored based on the previous blocks. We incorporated the defined plasmid populations (n = 9) and isolation source information on the top and left dendrograms, respectively.
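A hedged Python sketch of the clustering and silhouette analysis described above, assuming a precomputed pairwise distance matrix D (scipy's "ward" linkage stands in for R's ward.D2 here, an approximation rather than the authors' exact call):

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Stand-in for the Mash distance matrix of the predicted plasmidomes.
D = rng.random((200, 200))
D = (D + D.T) / 2
np.fill_diagonal(D, 0.0)

Z = linkage(squareform(D), method="ward")

# Scan cluster counts and keep the one with the best average silhouette width.
best_k, best_s = None, -1.0
for k in range(2, 40):
    labels = fcluster(Z, t=k, criterion="maxclust")
    s = silhouette_score(D, labels, metric="precomputed")
    if s > best_s:
        best_k, best_s = k, s
print(f"optimal k = {best_k}, average silhouette width = {best_s:.2f}")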
To evaluate the influence of factors other than source category on the plasmidome clustering, we modeled the observed plasmid k-mer distances using three linear regression models with three different covariates: source, isolation time, and geographical distance between pairs of isolates. We observed that modeling k-mer distances using source alone explained 39% of the variance present in the plasmid k-mer distances, whereas using time (difference in years between the isolates) as covariate explained 29% of the variance. Geographical distance between isolates explained less than 1% of the variance. Finally, we incorporated the three predictors into a multiple linear regression model, which increased the variance explained to 43%. This showed that isolation source was the most important predictor of plasmidome clustering, but the difference in time between strains must also be considered: isolates circulating during the same period of time are more likely to share plasmid sequences. Geographical distance between isolates seems not relevant to explain the observed clustering, which suggests a high mobility and spread of E. faecium plasmid sequences.
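A minimal sketch of such a variance decomposition, assuming a long-format table of isolate pairs with their plasmid k-mer distance and covariates (the column names and values here are hypothetical, not the authors' data):

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format table: one row per pair of isolates.
pairs = pd.DataFrame({
    "kmer_dist":   [0.12, 0.80, 0.35, 0.90, 0.20, 0.75],
    "same_source": [1, 0, 1, 0, 1, 0],           # shared isolation source
    "year_diff":   [0, 6, 1, 8, 2, 5],           # |t1 - t2| in years
    "geo_km":      [10, 900, 50, 1200, 5, 700],  # geographical distance
})

for formula in ("kmer_dist ~ same_source",
                "kmer_dist ~ year_diff",
                "kmer_dist ~ geo_km",
                "kmer_dist ~ same_source + year_diff + geo_km"):
    fit = smf.ols(formula, data=pairs).fit()
    print(f"{formula}: R^2 = {fit.rsquared:.2f}")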
Restriction modification systems, but not CRISPR-Cas, could act as barriers of horizontal gene transfer. The absence of CRISPR-Cas systems in clade A1 isolates was previously postulated as a plausible explanation for a nondiscriminatory accumulation of plasmid sequences in clade A1 isolates (6,18). However, we only observed a CRISPR-Cas system in a single non-clade A1 isolate and no occurrence of the recently described Jet system in any of the isolates (19). The absence of a CRISPR-Cas system is therefore unlikely to result in a higher and different plasmidome content of clade A1 isolates from hospitalized patients.
Recently, a novel defense mechanism consisting of a restriction modification (RM) system was postulated as contributing to the subspeciation of E. faecium (20). The specificity of the RM system resides in the S subunit, which binds to different DNA sequences by two target recognition domains. In our collection, we also identified the S subunit (WP_002287733) as present and enriched in clade A1 isolates (P < 0.05), whereas the subunits M and R were identical in both clade A1 and non-clade A1 isolates and always present together with the S subunit. Furthermore, we identified 8 novel S subunit variants in our set of 62 isolates with complete genome sequences. Of these, four variants (E1774_00555, E7313_02981, E4413_00571, and E4438_00276) were significantly enriched in clade A1, while two other variants (E0139_00520 and E4227_02943) were enriched in non-clade A1 isolates, which reinforces the hypothesis that different RM systems contribute to the differentiation of plasmidome content between isolation sources (Text S1).
Characterization of genes driving the plasmidome populations. To identify which genes were potentially driving the observed plasmidome populations (n = 9), we determined, for each plasmidome population, which genes were present in more than 95% of its isolates and defined those as plasmidome population core genes. We further characterized these genes using eggNOG to retrieve the clusters of orthologous genes (COG) and associated KEGG pathways. These plasmidome population core genes were then searched against our set of complete plasmid sequences to identify the type of replicon sequences bearing these genes, such as large RepA_N or Inc18 plasmids.
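As an illustration only (using a hypothetical gene presence/absence matrix, not the authors' pipeline), the ">95% of isolates" core-gene rule per plasmidome population can be computed as:

import pandas as pd

# Hypothetical presence/absence matrix: rows = isolates, columns = genes.
pa = pd.DataFrame(
    [[1, 1, 0], [1, 1, 1], [1, 0, 1], [1, 1, 1]],
    index=["iso1", "iso2", "iso3", "iso4"],
    columns=["geneA", "geneB", "geneC"],
)
population = pd.Series(["pop1", "pop1", "pop2", "pop2"], index=pa.index)

# A gene is a 'plasmidome population core gene' if present in >95% of the
# isolates assigned to that population.
core = pa.groupby(population).mean() > 0.95
for pop, row in core.iterrows():
    print(pop, list(row.index[row]))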
Most of the plasmidome population core genes belonged to COG S (unknown function) and COG L (DNA replication, recombination, and repair) (Fig. S6; Data Set S1). Within these two COG groups, we identified functions such as toxin-antitoxin (TA) systems involved in the stabilization of large plasmid sequences (e.g., RelE/AbrB, MazEF, and HicAC systems), and a type IV TA "innate immunity" bacterial abortive infection (Abi) system that protects bacteria from the spread of a phage infection (AbiEi/AbiEii). This TA system interferes with phage RNA synthesis, enables stabilization of mobile genetic elements (21), and was extensively described in lactococcal plasmids (22).
Interestingly, we identified some plasmidome population core genes that were present only in particular populations. For plasmidome population 1 (pig and nonhospitalized person isolates), we identified a copper resistance operon (tcrYAZB) among the plasmidome population core genes. Copper was commonly used as a growth-promoting agent for pigs (23), but high levels of copper result in toxicity for bacterial cells, and the tcrYAZB operon provides a plasmid-borne survival mechanism to tolerate high concentrations of this heavy metal. In addition, we identified the glycopeptide resistance-encoding vanA gene cluster as part of the plasmidome population core of this population. These genes were harbored on a RepA_N conjugative plasmid of 140 kbp (LR132068.1 and LR135180.1) and colocalized with genes encoding plasmid stabilization systems (RelE/AbrB and AbiEi/AbiEii), which may explain the persistence of this large plasmid in the population.
Plasmidome population 2 (poultry associated) also contained plasmidome population core genes that were exclusively core in this population. These included a bile salt hydrolase (BSH; choloylglycine hydrolase) and a putative tetronasin resistance-encoding permease gene. BSH is involved in the deconjugation (hydrolysis) of bile acids, which have antimicrobial activity, especially against Gram-positive bacteria (24). Therefore, acquisition of BSH could serve as a selective advantage for E. faecium in gut colonization. In a recent review, BSHs have been described as the gatekeepers of bile acid metabolism and host-microbe cross talk in the gastrointestinal tract (25). In addition, as mentioned, homology searches revealed hits only for E. faecium strains isolated from chicken, but we also obtained hits for Enterococcus cecorum (100% amino acid [AA] similarity), a species mainly found in birds. In both species, BSH was located downstream of the same site-specific recombinase, which strongly suggests HGT between these two species. We also observed a tetronasin resistance gene as a plasmidome population core gene. The presence of this gene on a mobile element among E. faecium poultry isolates was previously described and may be the result of selective pressure due to the wide use of ionophores, e.g., tetronasin, for coccidiosis prophylaxis in poultry (26). Interestingly, this gene is often colocated on a plasmid with Tn1546, encoding vancomycin resistance, and TA systems.
In the case of the hospital-associated plasmidome populations (3, 5, 6, 7, 8, and 9), we characterized some genes present in all of these populations. Of these, a locus of three genes putatively encodes an ABC transport system: one gene encodes an ATP-binding protein and the other two encode permeases. These genes were assigned to COG V (defense mechanisms) and were similar to the previously described vex locus of Streptococcus pneumoniae. In S. pneumoniae, this gene cluster was initially linked to vancomycin tolerance (27), but Moscoso and coauthors disproved these results (27, 28). Protein analysis of the ATP-binding protein Vex2 revealed the presence of domains with similarity to lipoprotein/bacteriocin/macrolide export systems, which may suggest that this system is involved in antibiotic resistance. We also observed antimicrobial resistance genes, such as aminoglycoside resistance (aacA-aphD) and erythromycin resistance (erm) genes, present in the plasmidome population core of all the hospitalized patient populations.
In line with the hypothesis of different routes of hospital adaptation, we observed some plasmidome population core genes that are only present as core in some plasmidome populations associated with hospitalized patients. We observed the presence of a bacteriocin with homology to BacA in populations 5, 7, and 8 and previously described as a plasmid-borne bacteriocin in E. faecalis (29). BacA can act as a more evolved toxin-antitoxin system in which not only daughter cells but also cells from the same generation not bearing the BacA plasmid are excluded. Furthermore, it was demonstrated that plasmid dissemination was more prominent under conditions of fluctuations in the population of E. faecium, since BacA activity exclusively affects dividing cells (29). We also observed a complete phosphotransferase system putatively involved in mannose/fructose/sorbose utilization present in the plasmidome cores of populations 6 and 7. This may provide novel pathways for the utilization of complex carbohydrates in these hospital-associated populations.
A complete characterization of the plasmidome population core genes and the complete plasmid sequences in which these genes are located can be found in Text S1.
Plasmidome content is the major genomic component driving niche specificity. To assess which of the genomic components (chromosome or plasmidome) contributed most to source specificity, we compared the distributions of k-mer pairwise distances using three different inputs: (i) whole-genome contigs, (ii) chromosome-derived contigs, and (iii) plasmid-derived contigs. We hypothesized that, for source-specific components, k-mer distances between pairs of isolates belonging to the same source would be lower than those for pairs of isolates from different or random sources. This difference can reflect the strength of the association between niche and genomic component (whole-genome, chromosome-derived, and plasmid-derived contigs). We followed a bootstrap approach to compare the averaged k-mer pairwise distances of (i) pairs of isolates from the same isolation source (within-source group), (ii) pairs of isolates belonging to different isolation sources (between-source group), and (iii) pairs of isolates selected at random (random group).
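A minimal numpy sketch of this bootstrap comparison, assuming a hypothetical pairwise distance matrix D and an array of source labels (this is an illustration, not the authors' code):

import numpy as np

def mean_pair_distance(D, idx_a, idx_b, rng, n_pairs=1000):
    """Average distance over randomly sampled pairs (one index from each set)."""
    i = rng.choice(idx_a, n_pairs)
    j = rng.choice(idx_b, n_pairs)
    keep = i != j                      # discard self-pairs
    return D[i[keep], j[keep]].mean()

def bootstrap_source(D, sources, target, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    within_idx = np.flatnonzero(sources == target)
    between_idx = np.flatnonzero(sources != target)
    all_idx = np.arange(len(sources))
    within, between, randoms = [], [], []
    for _ in range(n_iter):
        within.append(mean_pair_distance(D, within_idx, within_idx, rng))
        between.append(mean_pair_distance(D, within_idx, between_idx, rng))
        randoms.append(mean_pair_distance(D, all_idx, all_idx, rng))
    return np.mean(within), np.mean(between), np.mean(randoms)

Repeating bootstrap_source for each genomic component (whole-genome, chromosome-derived, and plasmid-derived distances) yields the within/between/random averages compared in Fig. 5.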
Whole-genome contigs explained most of the source specificity of all the isolation sources except for nonhospitalized person isolates, based on the highest k-mer pairwise distance differences between isolates from the same source (within source) and randomly selected isolates (Fig. 5 and S7).
However, with the exception of nonhospitalized person isolates, the plasmidome contribution was higher than the chromosome contribution to explain source specificity. This was based on the highest difference in k-mer pairwise distances between isolates from the same (within-source group) and different (between-source group) sources when comparing the plasmidome versus the chromosome (Fig. 5 and S7).
Most notably, we observed significant similarities of the whole genome and chromosome of dog and hospitalized patient isolates (positive difference, 0.20; P < 0.05) but a significant dissimilarity between these two sources when considering their plasmidomes (negative difference, −0.13; P < 0.05) (Fig. 5 and S7). In addition, pig and nonhospitalized person isolates had significantly similar plasmidomes, as observed by a small difference in k-mer distances (positive difference, 0.15; P < 0.05), corroborating the postulated exchange of plasmid sequences between these two groups (Fig. S7).
DISCUSSION
We used a combination of ONT long-read and Illumina short-read technologies to perform a comprehensive analysis of the pan-plasmidome of the nosocomial pathogen E. faecium which has evolved in different niches. The high number of multireplicon plasmids consisting of several combinations of RIP families confirmed the high levels of mosaicism previously observed for E. faecium plasmids, which challenges the classification of Enterococcal plasmids based on RIP schemes (30).
We observed that the total plasmidome size of isolates from hospitalized patients was substantially larger than that from animal isolates and isolates from nonhospitalized persons. Moreover, clustering of k-mer pairwise distances from our set of predicted plasmid sequences revealed a high level of diversity in E. faecium plasmidomes. We estimated the potential contribution of different genomic components (whole genome, chromosome, and plasmid) to source specificity and observed that the plasmidome explains source specificity in dogs and hospitalized patients, while their corresponding core genomes share an evolutionary history. This finding suggests that either the hospital-adapted population was founded by a host jump from the canine population or, alternatively, the host jump happened in the other direction. In line with previous reports (31,32), we observed that nonhospitalized person isolates in our collection shared their plasmidomes with pig isolates, which indicates an exchange of plasmids or strains between both sources.
Source specificity of plasmid sequences was highest in pigs and poultry isolates and significantly differed from the other sources, but also, the plasmidomes of clinical isolates were highly dissimilar to isolates from other sources. This suggests that the pan-plasmidome of E. faecium plays a role in the emergence of this organism as a nosocomial pathogen of major importance. There was not, however, a single preferred plasmidome configuration for hospital patient isolates, but rather, these isolates were associated with six different plasmidome populations, indicating different possible routes of plasmid acquisition within the hospital environment.
The existence of distinct host-associated plasmidome populations indicates that the dissemination of plasmids within the E. faecium population is restricted. The presence of particular S subunit variants belonging to a type I RM system enriched either in clade A1 isolates or non-clade A1 isolates in the E. faecium population suggests that they play an active role as HGT barriers between isolates from different sources (20). Restriction modification systems potentially limit the exchange of plasmid sequences and might contribute to source specificity. In a few cases, we observed the presence of single isolates from a specific source in plasmidome populations dominated by a different source, as exemplified in the case of plasmidome population 4 (dog enriched) and the hospitalized patient isolate E8172. In this case, we identified a similar RepA_N conjugative plasmid potentially transmitted between dogs and that particular hospitalized patient's isolate. The presence of identical S subunit variants between hospitalized patient and dog isolates (clade A1 enriched) could enable an occasional exchange of plasmid sequences between different sources.

[Fig. 5 legend: distributions of scaled pairwise distances for chromosome-derived (first column), plasmid-predicted (second column), and whole-genome (third column) contigs, compared between all isolation sources. Each row corresponds to one isolation source, with pairwise-distance distributions against dog (green), hospitalized patient (red), nonhospitalized person (blue), pig (pink), poultry (brown), and random isolates (black), averaged over a bootstrap approach (100 iterations). Deviation of same-source pairs to the left of the random group indicates higher source specificity of that genomic component; deviation to the right indicates lower specificity than expected by chance.]
Exploration of the core genes of the predicted plasmidome populations revealed that most plasmid genes are poorly characterized. We further characterized some of the plasmid genes with an unknown function as toxin-antitoxin systems. The widespread occurrence of these selfish systems is indicative of their importance in plasmid maintenance and stabilization. Previous reports have shown a high prevalence of particular toxin-antitoxin systems, such as mazEF, in E. faecium clinical isolates (33). They could contribute to the stabilization of plasmid-mediated antibiotic resistance by the maintenance of a single plasmid structure and might thus provide an interesting alternative target for antibiotic therapy.
We also identified a set of copper resistance genes (tcrYAZB operon) in the core plasmidome of population 1 (pig and nonhospitalized associated). Copper was used as a growth-promoting agent in piglets (34), and high levels of copper are toxic for most bacterial species. The acquisition of copper resistance genes may have contributed to the adaptation of E. faecium to environmental constraints imposed by pig farming. Recently, Gouliouris et al. also described the same copper resistance operon as overrepresented in pig isolates, thus confirming that this set of plasmid-borne genes has played an important role in E. faecium survival in farms (35). Those plasmid genes were identified in our set of complete plasmid sequences and were present in a RepA_N conjugative plasmid (140 kbp) identified in pig and nonhospitalized isolates. Furthermore, we identified a BSH gene widely present in the poultry-associated plasmidome population. E. faecium was previously characterized as one of the microorganisms with the highest level of BSH activity in the intestines of chickens (36) and capable of developing new mechanisms to tolerate a high concentration of bile salts (37). The BSH gene described here could be functionally responsible for the bile tolerance of poultry isolates.
The presence of several plasmid genes involved in carbohydrate metabolism and utilization in plasmidome populations associated with hospitalized patients may indicate the acquisition of novel pathways to process complex carbohydrates. This observation is in line with previous reports (6,38) in which phosphotransferase systems enriched in clade A1 isolates and encoded by mobile genetic elements were fundamental for E. faecium during gastrointestinal (GI) tract colonization. The high frequency of plasmid genes with an unknown function or corresponding to hypothetical proteins could mask the presence of other plasmid-mediated mechanisms contributing to niche adaptation. This highlights the importance of further functional studies to elucidate the roles of these plasmid genes.
The observations that plasmid sequences are highly informative for source specificity and that particular genes may have a clear benefit for E. faecium in particular niches suggest that the distribution of plasmid genes among E. faecium isolates is regulated by complex ecological constraints rather than by opportunities arising from physical interactions between different sources, and thus contributes to niche adaptation. Of note, this approach does not calculate the contribution of a single genomic sequence but of the whole genomic component (plasmid or chromosome) to niche specificity. Small chromosomal alterations or rearrangements could also be involved and play an important role in niche specificity.
Based on our findings, isolation source was the most important predictor of the observed plasmidome clustering, indicating that isolates from the same niche can exchange plasmid sequences within the same time frame. Combining extensive short- and long-read sequencing of a large collection of isolates from a diverse set of sources, as reported here for E. faecium, may serve as a broadly applicable approach to study the pan-plasmidome of evolutionarily and ecologically diverse populations.
MATERIALS AND METHODS
Genomic DNA sequencing, assembly, and characterization of plasmids. A detailed description of Illumina and ONT sequencing is available in Text S1 in the supplemental material and in the study by Arredondo-Alonso et al. (13), which includes a full description of the ONT selection of E. faecium isolates (n = 62) and the consecutive hybrid assembly using Unicycler (39). Characterization of fully assembled plasmids is also described in Text S1.
Population genomic analysis. Pangenomes for the entire genome data set (1,684 strains) and the clade A data set (1,644 strains) were created using Roary (40) with default settings. A core gene alignment was generated using the -mafft option in Roary, resulting in a core gene alignment of 859 genes for the entire data set and of 978 genes for the clade A data set. To estimate recombination events and to remove them from the core genome alignment, we used BratNextGen with default settings, including 20 hidden Markov model (HMM) iterations, 100 permutations run in parallel on a cluster, and a 5% significance level, similar to those in earlier publications (41,42). To determine sequence clusters (SCs) in the core genome alignment from which significant recombinations had been removed, we used 5 estimation runs of the hierBAPS method (43) with 3 levels of hierarchy and the prior upper bound for the number of clusters ranging in the interval 50 to 200. All runs converged to the same estimate of the posterior mode clustering. We considered the second level of hierarchy (postBNGBAPS.2) to determine SCs in our collection. To estimate a phylogenetic tree, we used RAxML (44) with the GTR+Gamma model on a core gene alignment stripped of recombination. The bootstrap option was disabled in RAxML due to an extremely long runtime.
To observe the presence of the restriction modification system described by Huo et al. (20), we retrieved the nucleotide sequences of the S subunit (WP_002287733.1), M subunit (WP_002287732.1), and R subunit (WP_002287735.1) from the E. faecium genome sequence (NZ_GG688488). We screened for the presence of these subunits in our entire collection of isolates (1,644) using Abricate (version 0.8.2), defining a 95% minimum identity and 90% coverage as thresholds. Later, we focused our analysis on the set of complete genome isolates (62) and performed a multiple-sequence alignment at the protein level of all the S subunits identified, using Clustal Omega (version 1.2.4) (47). Based on the multiple-sequence alignment, we defined 8 novel S subunit variants that were tested for enrichment in either clade A1 or non-clade A1 isolates using a Fisher exact test with the function fisher.test from the R stats package (version 3.4.4).
Predicting the plasmidome content of short-read sequenced E. faecium isolates. To determine the plasmidome content of the remaining 1,582 isolates, we used mlplasmids (13). mlplasmids (version 1.0.0) was run specifying the "Enterococcus faecium" model and a minimum contig length of 1,000 bp. For further analysis, we discarded predicted contigs with a posterior probability lower than 0.7 of belonging to the assigned class (chromosome/plasmid; https://gitlab.com/sirarredondo/efaecium_population/raw/master/Files/mlplasmids_prediction/prediction_svm.tsv). Differences in the numbers of chromosome- and plasmid-derived base pairs predicted by mlplasmids between hospitalized patient isolates and other isolation sources were assessed using the Kruskal-Wallis test (significance threshold, 0.05) available in the ggpubr package (version 0.1.7) (48).
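A minimal R sketch of this filtering and testing step is given below; the data frame `pred` and its column names are assumptions standing in for the actual mlplasmids output table.

```r
# Post-processing of mlplasmids predictions (column names are hypothetical).
# pred: data.frame with columns isolate, source, prediction ("plasmid"/"chromosome"),
#       posterior_prob, contig_length (only contigs >= 1,000 bp were classified).
keep <- pred[pred$posterior_prob >= 0.7, ]   # discard low-confidence assignments

# Plasmid-derived base pairs per isolate
bp <- aggregate(contig_length ~ isolate + source,
                data = keep[keep$prediction == "plasmid", ], FUN = sum)

# Differences in plasmidome size across isolation sources
kruskal.test(contig_length ~ factor(source), data = bp)
```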
We calculated pairwise Mash distances (k = 21, s = 1,000; version 1.1) between isolates (n = 1,640), only considering plasmid-predicted contigs. We reconstructed a plasmidome tree with the bioNJ algorithm implemented in the R ape package (version 5.1) using the computed Mash distances (49,50). The resulting phylogenetic tree was midrooted using the midpoint function in the R phangorn package (version 2.4.0) (51). To improve the resolution of the bioNJ tree, we examined the distribution of the computed Mash distances and fitted a gamma distribution using the fitdist function (distr = "gamma" and method = "mle") available in the R fitdistrplus package (52). We discarded isolates with an average pairwise Mash distance greater than 0.12, which was calculated using the qgamma function (P = 0.9, shape = 2.344073, rate = 35.870449, lower.tail = TRUE) in the R stats package (version 3.4.4). All remaining isolates (n = 1,607) were used to reconstruct the plasmidome tree.
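The gamma-based outlier filter can be sketched as follows in R; `avg_dist` (the mean pairwise Mash distance of each isolate to all others, named by isolate) is an assumed input, and the fitted parameters will of course differ from the values quoted above.

```r
# Discard isolates with an atypically large average plasmidome distance.
library(fitdistrplus)                        # fitdist(), as used in the paper

fit    <- fitdist(avg_dist, distr = "gamma", method = "mle")
cutoff <- qgamma(0.9,
                 shape = fit$estimate["shape"],
                 rate  = fit$estimate["rate"])        # the paper obtains ~0.12
keep_isolates <- names(avg_dist)[avg_dist <= cutoff]  # retained for the bioNJ tree
```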
We used the function NbClust (method = "ward.D2" and index = "silhouette") available in the R NbClust package (version 3.0) (53) to determine the optimal number of clusters derived from the pairwise Mash distances. We computed hierarchical clustering using the hcut function (method = "ward.D2", isdiss = TRUE, k = 26) and cut the resulting dendrogram specifying 26 clusters. From the resulting clusters, we defined plasmidome populations (n = 9) based on two criteria: (i) clusters with more than 50 isolates and (ii) an average silhouette width greater than 0.3.
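In base R, the clustering and the two population criteria can be sketched as below, assuming `mash_dist` is the symmetric distance matrix; this mirrors, but is not identical to, the NbClust/hcut calls cited above.

```r
# Ward clustering of plasmidome distances and definition of plasmidome populations.
library(cluster)                      # silhouette()

d  <- as.dist(mash_dist)
hc <- hclust(d, method = "ward.D2")
cl <- cutree(hc, k = 26)              # 26 clusters, the optimum suggested by NbClust

sil     <- silhouette(cl, d)
avg_sil <- tapply(sil[, "sil_width"], cl, mean)   # mean silhouette per cluster
size    <- table(cl)

# Plasmidome populations: clusters with >50 isolates and average silhouette >0.3
populations <- names(size)[size > 50 & avg_sil[names(size)] > 0.3]
```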
Correlation of plasmidome populations and isolation sources was determined using a one-sided Fisher exact test (alternative = "greater") from the fisher.test function (R stats package, version 3.4.4), and naive P values were adjusted using the Benjamini-Hochberg (BH) method implemented in the p.adjust function (R stats package, version 3.4.4). We considered an adjusted P value threshold of 0.05 to determine enrichment of isolation sources in specific plasmidome populations. We incorporated metadata and plasmidome population information into the plasmidome bioNJ tree and the E. faecium core genome tree using the R ggtree package (version 1.13.3). The Simpson index based on SC diversity (postBNGBAPS.2 group) (Data Set S1) and its associated 95% confidence interval from 1,000 bootstrap replications were computed using the R package iNEXT (version 2.0.19) (54).
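The enrichment test amounts to a one-sided Fisher test per population-source pair followed by BH correction. A compact R sketch, with per-isolate vectors `pop` and `src` as assumed inputs, is:

```r
# Source enrichment per plasmidome population (illustrative implementation).
enrich <- expand.grid(pop = unique(pop), src = unique(src),
                      stringsAsFactors = FALSE)
enrich$p <- apply(enrich, 1, function(row) {
  in_pop <- pop == row["pop"]
  is_src <- src == row["src"]
  # Force a complete 2x2 table even when a cell class is absent
  tab <- table(factor(in_pop, c(TRUE, FALSE)), factor(is_src, c(TRUE, FALSE)))
  fisher.test(tab, alternative = "greater")$p.value   # one-sided enrichment
})
enrich$p_adj <- p.adjust(enrich$p, method = "BH")     # Benjamini-Hochberg
subset(enrich, p_adj < 0.05)                          # enriched source/population pairs
```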
We evaluated the influence of two other covariates (time and distance) on the clustering derived from Mash distances. For each pair of isolates, we determined (i) whether they belonged to the same or different isolation sources, (ii) the time difference (in years) between their isolation dates, and (iii) the geographical distance between them. To calculate the geographical distance, we considered the latitude and longitude of each isolate and used the distm function (R geosphere package, version 1.5-7). We fitted three linear regression models (function lm in the R stats package, version 3.4.4) with the pairwise Mash distances as response and the previously defined covariates as predictors. For each model, we retrieved its adjusted R² to quantify the percentage of variance explained by each covariate. We combined all three covariates in a multiple linear regression model using the function lm (R stats package, version 3.4.4) and further evaluated the observed correlations by performing a permutation test with the function lmp from the package lmPerm (version 2.1.0) (55).
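The per-covariate variance decomposition reduces to fitting single- and multiple-covariate linear models. The sketch below assumes a pair-level data frame `pairs_df` with hypothetical column names.

```r
# Variance in pairwise Mash distance explained by each covariate.
# pairs_df: one row per isolate pair, with columns mash (distance), same_source
#           (logical), dt_years (time difference), geo_m (geographic distance).
r2 <- sapply(c("same_source", "dt_years", "geo_m"), function(v) {
  summary(lm(reformulate(v, response = "mash"), data = pairs_df))$adj.r.squared
})
full <- lm(mash ~ same_source + dt_years + geo_m, data = pairs_df)
summary(full)$adj.r.squared
# Coefficients were further validated by a permutation test (lmPerm::lmp in the paper).
```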
Contribution of genomic components to source specificity. To evaluate the contribution of genomic components to source specificity, we considered three different inputs: (i) Mash pairwise distances from whole-genome contigs, (ii) Mash pairwise distances from chromosome-derived contigs, and (iii) Mash pairwise distances from plasmid-derived contigs. Pairwise distances were scaled using the scale function (scale = TRUE, center = TRUE) from the R stats package (version 3.4.4). For each isolation source (hospitalized patient, dog, poultry, pig, and nonhospitalized person), we used a bootstrap approach (100 iterations) to calculate the average pairwise distances of 50 random isolates belonging to the following combinations: (i) pairs of isolates belonging to the same niche (within-source group), (ii) pairs of isolates belonging to different niches (between-source group), and (iii) pairs of isolates belonging to random isolation sources (random group). This random group consisted of an artificial group in which we merged 50 random isolates belonging to any of the five isolation sources after sampling 100 isolates from each of the sources to avoid overrepresentation of hospitalized patient isolates. This random group was used to statistically assess whether the distributions of pairwise distances belonging to the within-source and between-source groups differed from that of random pairwise distances. We used a one-way analysis of variance (ANOVA) test (aov function, R stats package version 3.4.4) and computed differences in the observed means using Tukey's honestly significant difference (HSD) function available in the R stats package (version 3.4.4). Significant (adjusted P < 0.05) positive and negative observed differences of the means were considered indications of niche adaptation similarity and dissimilarity, respectively.
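Given the bootstrap output (e.g., a `res` data frame with columns within/between/random, as in the illustrative bootstrap sketch shown earlier), the ANOVA and Tukey HSD step is a few lines of R:

```r
# Test whether within-source and between-source distances differ from random pairs.
# res: data.frame with columns within, between, random (bootstrap averages).
long <- data.frame(dist  = c(res$within, res$between, res$random),
                   group = rep(c("within", "between", "random"), each = nrow(res)))
fit <- aov(dist ~ group, data = long)
TukeyHSD(fit)   # significant positive/negative mean differences ~ similarity/dissimilarity
```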
Estimating the core plasmidome of the defined populations. We used Roary (version 3.8) (40) to define the orthologous groups (OGs) present in each plasmidome population, defining a threshold of 95% amino-acid-level similarity and not splitting paralogues. We defined the core plasmidome of each population as the total number of core genes (OGs present in more than 99% of isolates) and soft-core genes (OGs present in more than 95% but less than 99% of the isolates). To group these core plasmidome genes into different COG categories, we used eggNOG (version 1.0.3-5-g6972f60) with the translate option and the bacterial database (4.5.1) provided.
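Once a gene presence/absence matrix is available (e.g., exported from Roary), the core/soft-core definition used here is a one-liner per class, as in this R sketch with an assumed binary matrix `pa`:

```r
# Core plasmidome from a presence/absence matrix (rows = OGs, columns = isolates).
freq      <- rowMeans(pa > 0)                           # fraction of isolates carrying each OG
core      <- rownames(pa)[freq > 0.99]                  # core genes (>99% of isolates)
soft_core <- rownames(pa)[freq > 0.95 & freq <= 0.99]   # soft-core genes (95-99%)
plasmidome_core <- c(core, soft_core)
```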
Data availability. The complete code used to generate the analysis reported in the manuscript is publicly available at the following GitLab repository: https://gitlab.com/sirarredondo/efaecium _population.
Illumina NextSeq 500/MiSeq reads of the 1,644 E. faecium isolates used in this study have been deposited in the following European Nucleotide Archive (ENA) public project: PRJEB28495. Oxford Nanopore Technologies MinION reads used to complete the 62 E. faecium genomes are available under the following figshare projects: 10.6084/m9.figshare.7046804 and 10.6084/m9.figshare.7047686.
Hybrid assemblies generated by Unicycler (v.0.4.1) are available under the ENA and NCBI project PRJEB28495 and are also retrievable at the following GitLab repository: https://gitlab.com/sirarredondo/efaecium_population/tree/master/Files/Unicycler_assemblies. Annotation of the complete genome sequences generated in this study is available on NCBI under BioProject PRJEB28495.
SUPPLEMENTAL MATERIAL
Supplemental material is available online only. TEXT S1, DOCX file, 0.1 MB. | 2020-02-13T09:23:22.518Z | 2020-02-11T00:00:00.000 | {
"year": 2020,
"sha1": "25d560395de62528cc9ce95e3cf4e0e834a41255",
"oa_license": "CCBY",
"oa_url": "https://mbio.asm.org/content/mbio/11/1/e03284-19.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "70db373adec45de639da4288d23ead704c588c3f",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
119630685 | pes2o/s2orc | v3-fos-license | Large time behavior of the heat kernel
In this paper we study the large time behavior of the (minimal) heat kernel $k_P^M(x,y,t)$ of a general time independent parabolic operator $L=u_t+P(x, \partial_x)$ which is defined on a noncompact manifold $M$. More precisely, we prove that $$\lim_{t\to\infty} e^{\lambda_0 t}k_P^{M}(x,y,t)$$ always exists. Here $\lambda_0$ is the generalized principal eigenvalue of the operator $P$ in $M$.
Introduction
Let $k_P^M(x,y,t)$ be the (minimal) heat kernel of a time-independent parabolic operator $Lu = u_t + P(x,\partial_x)u$ which is defined on a noncompact Riemannian manifold $M$. Denote by $\lambda_0$ the generalized (Dirichlet) principal eigenvalue of the operator $P$ in $M$.
Over the past three decades, there have been a large number of works devoted to large time estimates of the heat kernel in various settings (see for example the following monographs and survey articles [2,4,6,8,9,14,15,16,17,19,20,21], and the references therein). Despite the wide diversity of the results in this field, the following basic question has not been fully answered.
Question 1.1 Does $\lim_{t\to\infty} e^{\lambda_0 t}\,k_P^M(x,y,t)$ always exist?
The aim of this paper is to give a complete answer to Question 1.1 for arbitrary $P$ and $M$. The following theorem [12] gives only a partial answer to the above question (see also [3,9,14,18,19]).

Theorem 1.2 If the operator $P-\lambda_0$ is subcritical in $M$, then $\lim_{t\to\infty} e^{\lambda_0 t}k_P^M(x,y,t)=0$. If $P-\lambda_0$ is positive-critical in $M$ (that is, $\int_0^\infty e^{\lambda_0 t}k_P^M(x,y,t)\,\mathrm{d}t=\infty$, and the ground states $\varphi$ and $\varphi^*$ of $P-\lambda_0$ and $P^*-\lambda_0$, respectively, satisfy $\varphi^*\varphi\in L^1(M)$), then
$$\lim_{t\to\infty} e^{\lambda_0 t}k_P^M(x,y,t)=\frac{\varphi(x)\varphi^*(y)}{\int_M \varphi(z)\varphi^*(z)\,\mathrm{d}z}\,.$$
Moreover, if one assumes further that $P$ is a formally symmetric operator ($P=P^*$), then in the null-critical case $\lim_{t\to\infty} e^{\lambda_0 t}k_P^M(x,y,t)=0$.

The main result of the present paper is the following theorem, which answers the author's conjecture [12, Remark 1.4] about the existence of the limit in the null-critical nonsymmetric case.

Theorem 1.3 The limit $\lim_{t\to\infty} e^{\lambda_0 t}k_P^M(x,y,t)$ exists for all $x,y\in M$, and the limit is positive if and only if the operator $P-\lambda_0$ is positive-critical.
Moreover, let $G_{P-\lambda}^M(x,y)$ be the minimal positive Green function of the elliptic operator $P-\lambda$ on $M$. Then
$$\lim_{t\to\infty} e^{\lambda_0 t}k_P^M(x,y,t)=\lim_{\lambda\nearrow\lambda_0}(\lambda_0-\lambda)\,G_{P-\lambda}^M(x,y).$$

The proof of Theorem 1.3 hinges on Lemma 4.1, which is a slight extension of a lemma of Varadhan (see [19, Lemma 9, page 259] or [14, pp. 192–193]). Varadhan proved his lemma for positive-critical operators on $\mathbb{R}^d$ using a purely probabilistic approach. Our key observation is that the assertion of Varadhan's lemma is valid under the weaker assumption that the skew product operator $\bar P = P\otimes I + I\otimes P$ is critical in $\bar M = M\times M$, where $I$ is the identity operator on $M$. We note that if $\bar P$ is subcritical in $\bar M$, then by Theorem 1.2, the heat kernel of $\bar P$ on $\bar M$ tends to zero as $t\to\infty$. Since the heat kernel of $\bar P$ is equal to the product of the heat kernels of its factors, it follows that if $\bar P$ is subcritical in $\bar M$, then $\lim_{t\to\infty} k_P^M(x,y,t)=0$. In Section 4, we formulate and give a purely analytic proof of Lemma 4.1. Our proof of the lemma is in fact a translation of Varadhan's proof into the analytic framework. It uses the large time behaviors of the parabolic capacitory potential and of the heat content (see Section 3).
The proof of Theorem 1.3 is given in Section 5. We conclude the paper with some open problems which are closely related to the large time behavior of the heat kernel (see Section 6).
Remark 1.5
In the null-recurrent case, the heat kernel may decay very slowly as $t\to\infty$, and one can construct a complete Riemannian manifold $M$ such that all its Riemannian products $M^j$, $j\geq 1$, are null-recurrent (see [5]).

Remark 1.6 We would like to point out that the results of this paper are also valid for an elliptic operator $P$ in divergence form, and also for a strongly elliptic operator $P$ with locally bounded coefficients.
Preliminaries
Let $P$ be a linear, second order, elliptic operator defined in a noncompact, connected, $C^3$-smooth Riemannian manifold $M$ of dimension $d$. Here $P$ is an elliptic operator with real, Hölder continuous coefficients which in any coordinate system $(U; x_1,\dots,x_d)$ has the form
$$P(x,\partial_x)u=-\sum_{i,j=1}^{d}a_{ij}(x)\partial_i\partial_j u+\sum_{i=1}^{d}b_i(x)\partial_i u+c(x)u, \tag{2.1}$$
where $\partial_i=\partial/\partial x_i$. We assume that for every $x\in M$ the real quadratic form
$$\sum_{i,j=1}^{d}a_{ij}(x)\xi_i\xi_j,\qquad \xi\in\mathbb{R}^d,$$
is positive definite. The formal adjoint of $P$ is denoted by $P^*$. We consider the parabolic operator $L$,
$$Lu=u_t+Pu \qquad \text{on } M\times(0,\infty).$$
Throughout this paper we always assume that $\lambda_0\geq 0$.
For every $j\geq 1$, consider the Dirichlet heat kernel $k_P^{M_j}(x,y,t)$ of the parabolic operator $L$ in $M_j\times(0,\infty)$, where $\{M_j\}_{j=1}^{\infty}$ is an exhaustion of $M$ by smooth, relatively compact domains. By the maximum principle, $\{k_P^{M_j}(x,y,t)\}_{j=1}^{\infty}$ is an increasing sequence which converges to $k_P^M(x,y,t)$, the minimal heat kernel of the parabolic operator $L$ in $M$. If $\int_0^{\infty}k_P^M(x,y,t)\,\mathrm{d}t<\infty$ (respectively, $\int_0^{\infty}k_P^M(x,y,t)\,\mathrm{d}t=\infty$), then $P$ is said to be a subcritical (respectively, critical) operator in $M$ [14]. It can be easily checked that for $\lambda\leq\lambda_0$, the heat kernel $k_{P-\lambda}^M$ of the operator $P-\lambda$ is equal to $e^{\lambda t}k_P^M(x,y,t)$. Since we are interested in the asymptotic behavior of $e^{\lambda_0 t}k_P^M(x,y,t)$, we assume throughout the paper (unless otherwise stated) that $\lambda_0=0$.
It is well known that if $\lambda_0>0$, then $P$ is subcritical in $M$. Clearly, $P$ is critical (respectively, subcritical) in $M$ if and only if $P^*$ is critical (respectively, subcritical) in $M$. Furthermore, if $P$ is critical in $M$, then $C_P(M)$, the cone of all positive solutions of the equation $Pu=0$ in $M$, is a one-dimensional cone. In this case, $\varphi\in C_P(M)$ is called a ground state of the operator $P$ in $M$ [12,14]. We denote the ground state of $P^*$ by $\varphi^*$.
The ground state $\varphi$ is a global positive solution of the equation $Pu=0$ in $M$ [12,14]. In the critical case, the ground state $\varphi$ (respectively, $\varphi^*$) is a positive invariant solution of the operator $P$ (respectively, $P^*$) in $M$ (see for example [12,14]). That is,
$$\int_M k_P^M(x,y,t)\,\varphi(y)\,\mathrm{d}y=\varphi(x) \qquad \text{for all }(x,t)\in M\times(0,\infty), \tag{2.5}$$
and the analogous identity holds for $\varphi^*$ and the heat kernel of $P^*$.

Remark 2.2 Let $\mathbf{1}$ be the constant function on $M$, taking at any point $x\in M$ the value 1. Suppose that $P\mathbf{1}=0$. Then $P$ is subcritical (respectively, positive-critical, null-critical) in $M$ if and only if the corresponding diffusion process is transient (respectively, positive-recurrent, null-recurrent) [14]. In fact, since we are interested in the critical case, it is natural to use the $h$-transform with $h=\varphi$. So,
$$P^{\varphi}u:=\frac{1}{\varphi}\,P(\varphi u).$$
Note that $P^{\varphi}$ is null-critical (respectively, positive-critical) if and only if $P$ is null-critical (respectively, positive-critical), and the ground states of $P^{\varphi}$ and $(P^{\varphi})^*$ are $\mathbf{1}$ and $\varphi^*\varphi$, respectively. Moreover,
$$k_{P^{\varphi}}^M(x,y,t)=\frac{\varphi(y)}{\varphi(x)}\,k_P^M(x,y,t).$$
Therefore, throughout the paper (unless otherwise stated), we assume that

(A) $P\mathbf{1}=0$, and $P$ is a critical operator in $M$.
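For the reader's convenience, we note that the displayed relation between $k_{P^{\varphi}}^M$ and $k_P^M$ can be checked directly from the definitions; the following short verification is a reader's check under the assumptions above, not part of the original argument. Since $P\varphi=0$, the function $u(x,t):=\varphi(x)^{-1}k_P^M(x,y,t)\varphi(y)$ satisfies
$$u_t+P^{\varphi}u=\frac{\varphi(y)}{\varphi(x)}\Big[\partial_t k_P^M(x,y,t)+P_x\,k_P^M(x,y,t)\Big]=0,$$
while $u(\cdot,t)\to\delta_y$ as $t\to 0$, since $\varphi(x)^{-1}\delta_y(x)\varphi(y)=\delta_y(x)$; the minimality of $u$ is inherited from that of $k_P^M$. Similarly, $P^{\varphi}\mathbf{1}=\varphi^{-1}P\varphi=0$, so $\mathbf{1}$ is indeed a positive solution of $P^{\varphi}u=0$ in $M$.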
It is well known that on a general noncompact manifold $M$, the solution of the Cauchy problem for the parabolic equation $Lu=0$ is not uniquely determined (see for example [10] and the references therein). On the other hand, under Assumption (A), there is a unique minimal solution of the Cauchy problem and of certain initial-boundary value problems for bounded initial and boundary conditions. More precisely, we mean the limit of the solutions $u_j$ of the corresponding initial-boundary value problems posed on the exhaustion $\{M_j\}$ (the problems (2.7)).

Remark 2.5 It can be easily checked that the sequence $\{u_j\}$ is indeed a converging sequence which converges to a solution of the initial-boundary value problem (2.7).
Auxiliary results
Lemma 3.1 Let $B\Subset M$ be a smooth, relatively compact domain, and let $B^*:=M\setminus\bar B$. Let $w$ be the heat content of $B^*$, i.e., the minimal solution of the initial-boundary value problem
$$Lu=0 \ \text{ in } B^*\times(0,\infty),\qquad u=0 \ \text{ on } \partial B\times(0,\infty),\qquad u=1 \ \text{ on } B^*\times\{0\}. \tag{3.1}$$
Then $w$ is a decreasing function of $t$, and $\lim_{t\to\infty}w(x,t)=0$ locally uniformly in $M$.
Proof: Clearly,
$$w(x,t)=\int_{B^*}k_P^{B^*}(x,y,t)\,\mathrm{d}y. \tag{3.2}$$
It follows that $0<w<1$ in $B^*\times(0,\infty)$. Let $\varepsilon>0$. By the semigroup identity and (3.2),
$$w(x,t+\varepsilon)=\int_{B^*}k_P^{B^*}(x,y,t)\,w(y,\varepsilon)\,\mathrm{d}y\leq\int_{B^*}k_P^{B^*}(x,y,t)\,\mathrm{d}y=w(x,t).$$
Hence, $w$ is a decreasing function of $t$, and therefore $\lim_{t\to\infty}w(x,t)$ exists. We denote the limit function by $v$. So, $0\leq v<1$ and $v$ is a solution of the elliptic equation $Pu=0$ in $B^*$ which satisfies $u=0$ on $\partial B$. Therefore, $1-v$ is a positive solution of the equation $Pu=0$ in $B^*$ which satisfies $u=1$ on $\partial B$. On the other hand, it follows from the criticality assumption that $1$ is the minimal positive solution of the equation $Pu=0$ in $B^*$ which satisfies $u=1$ on $\partial B$. Thus, $1\leq 1-v$, and therefore, $v=0$.
Corollary 3.2 Let $W$ be the parabolic capacitory potential of $B^*$, i.e., the minimal solution of the problem $Lu=0$ in $B^*\times(0,\infty)$ which satisfies $u=1$ on $\partial B\times(0,\infty)$ and $u=0$ on $B^*\times\{0\}$. Then $\lim_{t\to\infty}W(x,t)=1$ locally uniformly. Indeed, by the uniqueness of the minimal solution,
$$W(x,t)=1-w(x,t),$$
where $w$ is the heat content of $B^*$. Therefore, the corollary follows directly from Lemma 3.1.
Varadhan's lemma
In this section, we give a purely analytic proof of a lemma of Varadhan [19, Lemma 9, page 259] for a slightly more general case. We consider the Riemannian product manifold $\bar M:=M\times M$. A point in $\bar M$ is denoted by $\bar x=(x_1,x_2)$. Let $P_{x_i}$, $i=1,2$, denote the operator $P$ in the variable $x_i$, and let $\bar P=P_{x_1}+P_{x_2}$ be the skew product operator defined on $\bar M$. We denote by $\bar L$ the corresponding parabolic operator. Note that if $\bar P$ is critical in $\bar M$, then $P$ is critical in $M$. Moreover, if $P$ is positive-critical in $M$, then $\bar P$ is positive-critical in $\bar M$.
By comparison of $u_1$ with the parabolic capacitory potential of $B^*$, we obtain (4.7). On the other hand, it follows from (4.6) and Lemma 3.1 that there exists $T>0$ such that $|u_2(\bar x,t)|\leq\varepsilon$ for all $\bar x\in\bar K$ and $t>T$. Combining (4.5) and (4.7), we obtain that $|u(x_1,t)-u(x_2,t)|\leq 2\varepsilon$ for all $x_1,x_2\in K$ and $t>T$. Since $\varepsilon$ is arbitrary, the lemma is proved.
Proof of Theorem 1.3
Without loss of generality, we may assume that $P\mathbf{1}=0$, where $P$ is a null-critical operator in $M$. We need to prove that $\lim_{t\to\infty}k_P^M(x,y,t)=0$. Consider again the Riemannian product manifold $\bar M:=M\times M$ and let $\bar P=P_{x_1}+P_{x_2}$ be the corresponding skew product operator which is defined on $\bar M$. If $\bar P$ is subcritical on $\bar M$, then by Theorem 1.2, $\lim_{t\to\infty}k_{\bar P}^{\bar M}(\bar x,\bar y,t)=0$. Since $k_{\bar P}^{\bar M}(\bar x,\bar y,t)=k_P^M(x_1,y_1,t)\,k_P^M(x_2,y_2,t)$, it follows that $\lim_{t\to\infty}k_P^M(x,y,t)=0$. Therefore, it remains to prove the theorem for the case where $\bar P$ is critical in $\bar M$. Fix a nonnegative, bounded, continuous function $f\not\equiv 0$ such that $\varphi^*f\in L^1(M)$, and consider the solution
$$v(x,t):=\int_M k_P^M(x,y,t)\,f(y)\,\mathrm{d}y. \tag{5.1}$$
Let $t_n\to\infty$. Then by passing to a subsequence, we may assume that for any $t\in\mathbb{R}$ the function $v(x,t+t_n)$ converges to a nonnegative solution $u$ of the equation $Lu=0$ in $M\times\mathbb{R}$. Invoking Lemma 4.1 (Varadhan's lemma), we see that $u(x,t)=\alpha(t)$. Since $u$ solves the parabolic equation $Lu=0$, it follows that $\alpha(t)$ is a nonnegative constant $\alpha$.
We claim that $\alpha=0$. Suppose to the contrary that $\alpha>0$. The assumption that $\varphi^*f\in L^1(M)$ and (2.5) imply that for any $t>0$
$$\int_M \varphi^*(x)\,v(x,t)\,\mathrm{d}x=\int_M \varphi^*(x)\,f(x)\,\mathrm{d}x<\infty.$$
On the other hand, by the null-criticality, Fatou's lemma, and (5.1) we have
$$\infty=\alpha\int_M \varphi^*(x)\,\mathrm{d}x\leq\liminf_{n\to\infty}\int_M \varphi^*(x)\,v(x,t+t_n)\,\mathrm{d}x<\infty.$$
Hence $\alpha=0$, and therefore
$$\lim_{n\to\infty}v(x,t+t_n)=0 \qquad \text{locally uniformly in } M\times\mathbb{R}. \tag{5.2}$$
Using the parabolic Harnack inequality and (2.5), we obtain that
$$k_P^M(x,y,t+t_n)\leq C(x)\,\varphi^*(y) \tag{5.3}$$
for all $x,y\in M$ and $t+t_n>1$ (see [12]). Now let $t_n\to\infty$ be a sequence such that $\lim_{n\to\infty}k_P^M(x,y,t+t_n)$ exists for all $(x,y,t)\in M\times M\times\mathbb{R}$. We denote the limit function by $u(x,y,t)$. It is enough to show that any such $u$ is the zero solution. Recall that as a function of $x$ and $t$, $u\in H^+(M\times\mathbb{R})$ (see [12]). Moreover, (5.3), the semigroup identity, and the dominated convergence theorem imply that
$$u(x,z,t+1)=\int_M u(x,y,t)\,k_P^M(y,z,1)\,\mathrm{d}y.$$
It follows that either $u=0$, or $u$ is a strictly positive function. On the other hand, Fatou's lemma and (5.2) imply that
$$\int_M u(x,y,t)\,f(y)\,\mathrm{d}y\leq\liminf_{n\to\infty}v(x,t+t_n)=0.$$
Since $f\gneqq 0$, it follows that $u=0$.

Let $P$ be an elliptic operator of the form (2.1) such that $\lambda_0\geq 0$, and let $v\in C_P(M)$ and $v^*\in C_{P^*}(M)$. It is well known [13] that
$$\int_M k_P^M(x,y,t)\,v(y)\,\mathrm{d}y\leq v(x),\qquad \int_M v^*(x)\,k_P^M(x,y,t)\,\mathrm{d}x\leq v^*(y) \tag{5.4}$$
for all $t>0$. The parabolic Harnack inequality and (5.4) imply that
$$k_P^M(x,y,t)\leq C\,v(x)\,v^*(y) \tag{5.5}$$
for all $x,y\in M$ and $t>1$ (see [12]). Recall that in the critical case, $v$ and $v^*$ are in fact the ground states $\varphi$ and $\varphi^*$ of $P$ and $P^*$ respectively, and by (2.5), we have equalities in (5.4). We now use Theorems 1.2 and 1.3, estimate (5.5), and the dominated convergence theorem to strengthen Lemma 4.1 for initial conditions which satisfy a certain integrability condition (Corollary 5.1). Hence, if the integrability condition of Corollary 5.1 is not satisfied, then the large time behavior of the minimal solution of the Cauchy problem may be complicated. The following example of W. Kirsch and B. Simon [11] demonstrates this phenomenon.
Example 5.2 Consider the heat equation in $\mathbb{R}^d$. Let $R_j=e^{e^j}$, and let $f$ be the initial datum constructed in [11]. Let $u$ be the minimal solution of the Cauchy problem with initial data $f$. Then for $t\sim R_jR_{j+1}$ one has that $u(0,t)\sim 2+(-1)^j$, and thus $u(0,t)$ does not have a limit. Note that by Lemma 4.1, for $d=1$, $u(x,t)$ has exactly the same asymptotic behavior as $u(0,t)$ for all $x\in\mathbb{R}$.
Remarks and open problems
In this section, we mention some general open problems that are related to the large time behavior of the heat kernel. The first conjecture deals with the exact long time asymptotics of the heat kernel. The answer to this conjecture seems to be closely related to the question of the existence of a $\lambda_0$-invariant positive solution (see [7,13]). The second conjecture was posed by the author [12, Conjecture 3.6]. As noticed in [12], if the conjecture is true, then Theorem 1. | 2018-05-08T18:34:23.813Z | 2002-06-26T00:00:00.000 | {
"year": 2002,
"sha1": "7c20443471102c71ea328a4454afa64e084865d6",
"oa_license": "elsevier-specific: oa user license",
"oa_url": "https://doi.org/10.1016/s0022-1236(03)00110-1",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "330999eb422e4d018bb29eaae429c0fd9f2f9f27",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": []
} |
216392447 | pes2o/s2orc | v3-fos-license | Occurrence of overweight in schoolchildren and analysis of agreement between anthropometric methods
Abstract The child population is strongly affected by obesity. Accessible and reliable strategies for the diagnosis of obesity are therefore of utmost importance. The aim of this study was to identify childhood obesity according to the WHO (World Health Organization) categories: malnourished, healthy weight, overweight and obese. Measures of height, Body Mass Index (BMI), Waist Circumference (WC) and Triceps Skinfold Thickness (TSF) were collected from 449 children, aged 7 to 10 years, from a municipal school in Araras/SP. A Spearman correlation test was performed between the BMI, WC and TSF variables. In addition, a cross tabulation was performed between the results found by the different methods, constructing a 2x2 contingency table with the absolute frequency of boys and girls classified as "without overweight" and "with overweight". The agreement between methods was analyzed by the kappa index. In the results, 28.3% of children presented overweight according to BMI, with a higher prevalence in boys. Overall, the results found with TSF showed strong correlation with both BMI and WC (rs=0.7994 and rs=0.7519, respectively). The same was observed when data were analyzed separately by sex. When the TSF data were crossed with BMI and WC, the kappa index demonstrated satisfactory agreement (0.4419 and 0.5161, respectively). TSF can thus be suggested as a method for body composition assessment and cardiometabolic risk evaluation in children.
INTRODUCTION
Obesity is defined as the excessive accumulation of body fat and can be determined by the percentage of adipose tissue present in the individual. It is a chronic disease considered a worldwide problem, reaching epidemic proportions in both developing and developed countries, regardless of gender and age 1 .
The child population is also strongly affected by obesity, which is a worrying scenario. Currently in Brazil, 34.8% of girls and 25.9% of boys are overweight, and these figures have been increasing over the decades 2,3 . The obesity phenotype is highly associated with the lifestyle changes brought about by contemporary society, in which children consume large amounts of foods with high levels of fat and carbohydrate, together with reduced levels of physical activity 4,5 .
Overweight at this stage of life is a risk factor for a number of health problems, which may manifest at an early age or later in life. Among these, we can highlight physiological and metabolic complications such as elevated blood pressure, dyslipidemia, cardiovascular disease and type 2 diabetes; postural changes accompanied by joint pain; as well as psychological problems related to low self-esteem and self-concept 6-9 .
There is evidence that a great number of obese children and adolescents will remain obese when they reach adulthood 9 . Therefore, the older and the more overweight the individual, the more difficult the condition will be to reverse, owing to incorporated eating habits and established metabolic changes. For that reason, strategies to combat this condition have become important, not only in pursuing weight loss but also in diagnosing overweight in an accessible and reliable way. Thus, both the Body Mass Index (BMI) 10 and the skinfold method 11 are widely used for diagnosis. However, studies that use skinfolds usually measure the thickness of more than one skinfold, such as the triceps, subscapular and suprailiac folds 12 . Thus, the use of the triceps skinfold (TSF) alone was proposed here as a reliable method to identify overweight in school-age children.
The present study aimed to identify the nutritional status and body fat content of children aged 7 to 10 years in a municipal school of the city of Araras, state of São Paulo, Brazil. To this end, the BMI, TSF and WC methods were used, the results obtained were correlated, and the agreement between the methods was compared, in addition to providing support for the best choice of method for the analysis of body composition of school-age children.
METHOD
The present study was classified as a cross-sectional, observational and quantitative field study. Anthropometric measurements of height, body mass, WC and TSF were collected from 449 children (209 boys and 240 girls) aged 7 to 10 years from a municipal school in the city of Araras/SP. All children were previously instructed to wear shorts and a t-shirt and to remain barefoot for the measurement of body mass and height. Participants' body mass was measured using a Filizola® scale with 100 g divisions and a maximum load of 150 kg, and a portable Sanny® stadiometer was used for height measurement. These two measures were used to calculate BMI, in which the body mass value in kilograms (kg) is divided by the square of the height in meters (m²). The BMI value was used to classify the nutritional status of the subjects as malnourished, healthy weight, overweight or obese. For this, we used the percentile table proposed by the World Health Organization, considering gender and age 13 , and the cutoff values proposed by the Division of Nutrition, Physical Activity and Obesity of the Centers for Disease Control and Prevention 14 .
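As an illustration of the classification step (not the software used in the study), the following R sketch computes BMI and assigns the four categories from a BMI-for-age percentile. The percentile itself is assumed to come from the sex- and age-specific reference tables, which are not reproduced here, and the 5th/85th/95th percentile cut points are the commonly used CDC values, stated as an assumption.

```r
# BMI computation and percentile-based classification (illustrative only).
bmi <- function(mass_kg, height_m) mass_kg / height_m^2

# bmi_pct: BMI-for-age percentile from sex/age reference tables (assumed input).
classify_bmi <- function(bmi_pct) {
  cut(bmi_pct,
      breaks = c(-Inf, 5, 85, 95, Inf),
      labels = c("malnourished", "healthy weight", "overweight", "obese"),
      right  = FALSE)   # e.g., >=85th and <95th percentile -> overweight
}

bmi(32, 1.35)                    # 17.56 kg/m^2
classify_bmi(c(3, 50, 88, 97))   # one example child per category
```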
To obtain the WC, all children were positioned in the anatomical position, with a measuring tape in a horizontal plane over the umbilical scar, and the measurements were analyzed using WC percentiles according to gender and age 15 .
To obtain the TSF, a Sanny® adipometer was used. The anatomical point of the TSF was determined parallel to the longitudinal axis of the arm on its posterior face, at the midpoint between the superolateral edge of the acromion and the olecranon. For the analysis of the percentiles, the gender and age of the children were considered 16 . The same Physical Education teacher performed all evaluations.
A Spearman correlation test was performed between the BMI, WC and TSF variables, with statistical significance set at p < 0.05. The test was executed in the BioEstat 5.3® program.
A cross tabulation was also performed between the results found by the different methods, constructing a 2x2 contingency table with the absolute frequency of boys and girls classified as "non-overweight" and "overweight". Then, the kappa index analysis was performed, thus allowing evaluation of the agreement between the methods of diagnosis of obesity. The kappa index was classified as proposed by Landis and Koch 17 and was also computed in the BioEstat 5.3® program.
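For reproducibility outside BioEstat, both statistics can be computed in a few lines of R. The sketch below is illustrative only, with `x`, `y` as paired numeric measurements and `a`, `b` as the dichotomized classifications.

```r
# Spearman correlation between two anthropometric measures
cor.test(x, y, method = "spearman")        # returns rs and its p-value

# Cohen's kappa from a 2x2 contingency table of dichotomized classifications
lev <- c("non-overweight", "overweight")
cohen_kappa <- function(a, b) {
  tab <- table(factor(a, lev), factor(b, lev))           # 2x2 contingency table
  po  <- sum(diag(tab)) / sum(tab)                       # observed agreement
  pe  <- sum(rowSums(tab) * colSums(tab)) / sum(tab)^2   # agreement expected by chance
  (po - pe) / (1 - pe)
}
```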
All procedures followed the principles of the Ethics Committee for Human Research, under protocol CAAE number 64158617.3.0000.5385. As the study involved minors, parents and/or guardians signed an Informed Consent form agreeing to data collection.

RESULTS

Table 1 shows the results regarding the occurrence of overweight and obesity in the children participating in the study, in total and divided by gender. Overall, 28.3% of the 449 children were overweight (127 children) according to BMI: 31% of males (n = 65) and 25.9% of females (n = 62). On the other hand, girls had a higher occurrence of the overweight category, while the number of obese children was higher among boys (Table 1).
The Lilliefors test showed that the sample was non-normally distributed. Thus, the Spearman coefficient was used; when BMI and TSF were correlated, we observed rs = 0.7994 (p<0.05), revealing a strong correlation (Figure 1A). Strong correlations between the methods were also observed when the analyses were performed separately by gender, with rs = 0.7901 (p<0.05) for boys and rs = 0.8220 (p<0.05) for girls (Figures 1C and 1E, respectively). The same pattern was noticed for the correlations of WC with TSF, with rs = 0.7519 (p<0.05) for the whole sample (Figure 1B), rs = 0.7653 (p<0.05) for boys (Figure 1D) and rs = 0.7550 (p<0.05) for girls (Figure 1F). The cross tabulations of the results obtained by BMI, WC and TSF are presented in 2x2 contingency tables: Table 2 crosses the BMI and TSF results and Table 3 crosses the WC and TSF results. When the BMI and TSF data were crossed, the kappa index revealed satisfactory agreement, both for the whole sample (0.454) and for boys (0.4260) and girls (0.4527) (Table 2). Satisfactory agreement was also found when the WC and TSF results were crossed, with a kappa index of 0.5161 for the entire sample, 0.5381 for boys and 0.4970 for girls (Table 3).
DISCUSSION
In our study, we found excess weight in 28.3% of children (15.6% overweight and 12.7% obese) (Table 1). These numbers may be related to the fact that several social factors currently contribute to increased physical inactivity and decreased energy expenditure in children, among them the advancement of technology, which makes electronic games more attractive, and policy-related factors, such as decreased safety for leisure activities on the streets and the reduction of free spaces for such activities 18 . Corroborating our finding, studies report an alarming number of overweight children 19,20 .
Between genders, boys had a higher occurrence of overweight than girls, accompanied by a higher occurrence of obesity (Table 1). Contrasting data have been reported in the literature, with studies showing a higher occurrence in girls 21 or even finding no difference between the genders 19 . Therefore, we can suggest that obesity at school age is not correlated with gender.
According to literature data 19-21 , BMI is widely used for the diagnosis of overweight and obesity when dealing with large populations, supported by several studies that have shown it to be a reliable method for this purpose. A study of children in a public school with a mean age of 9.2 ± 1 years showed good agreement between BMI and the skinfold method 22 . Similar results were obtained in comparisons with the dual-energy X-ray absorptiometry (DEXA) method, considered the gold standard for body composition evaluation 23 . In the present study, we observed that the measurement of TSF thickness alone shows good agreement with BMI and may be a new strategy for investigating the nutritional status of school-age children. Moreover, it is also a fast, practical and relatively low cost method, with easy-to-interpret results. Other studies have compared the results obtained by BMI and the skinfold method 11,24 . Januário et al. 11 investigated the occurrence of obesity in 200 children from 8 to 10 years old in public schools in Londrina-PR. The authors observed a kappa index of 0.43 for boys and 0.50 for girls. These results are similar to those observed in our study, 0.43 for boys and 0.45 for girls (Table 2); according to the classification proposed by Landis and Koch 17 , these values indicate moderate agreement between the methods. It should be taken into consideration that in our study only the TSF was evaluated, while the study by Januário et al. 11 used both the triceps and subscapular folds. Thus, this study proposes that TSF measurement alone can be a new method for the diagnosis of obesity in school-age children, given the strong correlation with BMI (Figure 1) and the moderate agreement by the kappa index (Table 3), revealing that TSF may also be a good method for the diagnosis of cardiometabolic risk in this population, since WC has been proposed as an important tool for this purpose 25,26 . However, it is important to highlight that the present study used more than one doubly indirect method to obtain body fat 12 . Although the literature describes the gold standard technique 23 as more accurate, the results obtained by measuring the TSF corroborate the other methods used to analyze body composition. In view of this, it would be interesting to perform a comparative analysis between the method used in this study and a gold standard technique, in an attempt to obtain consistent and homogeneous information among them, in order to establish a new method with potential for easy applicability and a favorable cost-benefit ratio.
CONCLUSION
In the present study, it was possible to observe that approximately one in four children aged 7 to 10 years is overweight, and this occurrence is higher among boys. The TSF method showed a strong correlation with both BMI and WC, with satisfactory agreement between the methods. Thus, we can conclude that TSF can be a good method for investigating body composition and cardiometabolic risk in schoolchildren of both sexes.
Funding
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. This study was funded by the authors. Conceived and designed the experiments: RDL. Analyzed the data: RDL, RMP, VRM, RSC, PHC. Contributed reagents/materials/analysis tools: PHC. Wrote the paper: RDL, RMP, VRM, RSC, PHC. All authors read and approved the final version of the manuscript. | 2020-03-12T10:20:34.020Z | 2020-02-27T00:00:00.000 | {
"year": 2020,
"sha1": "efb61ede082b9498d5544279aa742a4126294b66",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/rbcdh/v22/1415-8426-rbcdh-22-e67037.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "f3871c73c8b21a184cc8bdc4686b67d534dab550",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14124474 | pes2o/s2orc | v3-fos-license | Unified description of charge and spin excitations of stripes in cuprates
We study stripes in cuprates within the one-band and the three-band Hubbard model. Magnetic and charge excitations are described within the time-dependent Gutzwiller approximation. A variety of experiments (charge profile from resonant soft X-ray scattering, incommensurability vs. doping, optical excitations, magnetic excitations, etc.) are described within the same approach.
Introduction
If the mechanism of high-Tc superconductivity is electronic, understanding the excitation spectrum is as important as understanding phonons was for the development of BCS theory. In this regard, charge and spin inhomogeneous states, often found in strongly correlated systems, are interesting because they can support new collective modes, "electronic phonons", that would not be present in a weakly interacting fluid. For example, stripes in cuprates have many oscillatory modes that can couple with carriers and eventually lead to pairing and superconductivity or anomalous Fermi liquid properties.
Cuprates, unlike BCS superconductors, are unique, and therefore their remarkable properties may well be rooted in details. For example, a recent study shows that superconductivity in La2−xSrxCuO4 appears only when incommensurate low-energy scattering parallel to the CuO bond is present, and not when the scattering is on the diagonals (at very low doping) or when the system is in a more conventional Fermi liquid phase at high doping [1]; thus the excitation spectra must be understood in detail. In recent years we have developed a method to compute collective excitations of inhomogeneous strongly correlated systems based on a time-dependent Gutzwiller approximation (GA), named GA+RPA [2,3,4,5,6].
Many theoretical descriptions of cuprates are suited for only one experiment. Here we show that within the same approach one can reproduce a variety of experimental features of La2−xSrxCuO4 in the charge and magnetic channel. Results in the one-band Hubbard model (1BHM) and the three-band Hubbard model (3BHM) are quite similar. We compare results from both models.
For the 3BHM we use the following LDA parameter set: ǫp − ǫd = 3.3 eV for the splitting between the diagonal energies of a hole in the copper d- and oxygen p-orbital, tpd = 1.5 eV (tpp = 0.6 eV) for the p-d (p-p) hopping integral, Ud = 9.4 eV (Up = 4.7 eV) for the repulsion between two holes on the same Cu (O) orbital, and Upd = 0.8 eV for the Cu-O repulsion [7]. For the 1BHM results reported here we use t = 353.7 meV for the nearest-neighbor hopping, U = 8t for the effective on-site repulsion, and t′ = −0.2t for the next-nearest-neighbor hopping.
Ground state properties
Static properties are computed within the Gutzwiller approximation. Probes which are sensitive to the Cu and O atomic character need to be addressed with the 3BHM. In Fig. 1 we show the evolution of the charge profile with doping in the 3BHM for metallic stripes parallel to the CuO bond. The charge profile in real space has a width of about 4 lattice sites [8]. The width of the stripes defines two regimes: for low doping (a), the stripes do not overlap and are therefore weakly interacting; for high doping (b),(c), stripes overlap.
Recently Abbamonte and collaborators have shown that our charge profile predicted in the 3BHM [8] is in excellent agreement with the profile measured with resonant soft X-ray scattering [9].
The overall shape of the charge profile in the 1BHM is similar to the one in the 3BHM (see Ref. [10]). One also finds a non-overlapping regime (d > 4) and an overlapping regime (d ≤ 4).
For large and negative −t′/t (not shown here) one finds that checkerboard states are favored [12].
In Fig. 2 we show the energy for metallic stripes as a function of doping for the 1BHM. In each curve the charge periodicity perpendicular to the stripe is fixed and takes integer values (from left to right) d = 10, 9, ..., 3. In these computations stripes are bond-centered (BC). The energy for site-centered (SC) stripes is slightly lower at small doping, but the difference decreases with the system size. At optimum doping both structures become degenerate.
A small difference with respect to the 3BHM is that in the latter BC stripes are more stable at low doping and become quasidegenerate with SC ones at optimum doping. Thus the role of SC and BC is interchanged. We expect the three-band results to be more reliable in this respect.
In the inset we plot the incommensurability vs. doping, taking into account the range of stability of each solution, and compare with experiments from Refs. [11]. The result is a Devil's staircase. Up to x ≈ 1/8 the plateaux are short and correspond to a number of added holes per unit length along the stripe close to ν ∼ 0.5. As doping increases one jumps from one solution to the other and the density of stripes increases with doping. This explains the behavior of the incommensurability ǫ = 1/(2d) = x/(2ν) ≈ x as seen in neutron scattering experiments in this doping range. For x > 1/8 the right branch of the d = 4 solution is more stable than the ν ≈ 0.5 and d = 3 solutions due to the overlap effect, and one gets a wider plateau explaining the saturation of the incommensurability seen in the experiment.
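A small numeric illustration of the relation ǫ = 1/(2d) = x/(2ν) (in R, with the assumption ν ≈ 0.5 below x = 1/8 and a fixed d = 4 above) reproduces the linear rise and the saturation:

```r
# Incommensurability vs. doping: eps ~ x for x <= 1/8 (nu ~ 0.5), then saturation
# at eps = 1/(2*4) = 1/8 once the stripe spacing d is locked at 4.
eps <- function(x, nu = 0.5) ifelse(x <= 1/8, x / (2 * nu), 1 / (2 * 4))
x <- c(0.05, 0.08, 0.10, 0.125, 0.15)
cbind(x, eps = eps(x))
```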
Some details are slightly different in the 3BHM. The ǫ = 1/8 plateau starts at lower doping and ends at higher doping improving the agreement with experiment [8]. For doping x > 1/8 holes populate the stripes and therefore the baseline in Fig. 1 increases.
For doping x > 0.195 (x > 0.225) in the one-band (three-band) model we find the d = 3 stripe (ǫ = 1/6 ≈ 0.17) to become the lowest energy solution. At this doping a variety of different solutions becomes close in energy and the present saddle-point approximation breaks down. Probably around this doping a quantum melting of stripes takes place. It is also possible that the d = 4 stripe solution phase separates with the overdoped Fermi liquid, skipping the d = 3 solution. This scenario would also produce a large ǫ = 1/8 plateau in the incommensurate scattering. In this case long-range Coulomb effects have to be taken into account [13].
One can see from the plot of the energy in Fig. 2 that the electron chemical potential µ = −∂E/∂x is approximately constant in the low doping regime (weakly interacting stripes), whereas it decreases in the overlapping stripe regime. The same behavior was found in a dynamical mean-field study of the one-band Hubbard model [14]. This is in qualitative agreement with experiment [15], although we find a larger rate of change of µ with doping than in the experiment. It is also possible that this is due to phase separation between d = 4 stripes and the overdoped Fermi liquid plus long-range Coulomb effects [13].
Magnetic excitations
The dispersion relation for magnetic excitations in the insulator, computed within GA+RPA applied to the 1BHM, is shown in Fig. 3 and is in excellent agreement with the experiment. The dispersion is weakly sensitive to t′/t and strongly sensitive to U/t. A larger U/t produces a flatter dispersion in the (π, 0) - (π/2, π/2) direction, which does not agree with the experiment [4].
The collective excitations in the stripe phase with the same parameter set have been reported in Ref. [5]. One obtains a resonance energy at 65 meV to be compared with the experimental value 55 meV [17]. A slight decrease of U improves the agreement of the magnetic excitations with experiment as reported in Ref. [6] but produces a charge transfer gap in the insulator which is 10% too low. The present parameter set is a compromise between the charge and magnetic channel. Details of the magnetic response and the doping dependence are discussed in an accompanying paper [18].
Optical conductivity
The optical conductivity is qualitatively similar in the 3BHM and the 1BHM. In Fig. 4 we show the result in the 3BHM. The charge transfer (CT) gap in the insulator is close to 2 eV, in good agreement with the experiment. In the 1BHM the CT gap is also at 2 eV. This is mainly determined by our value of U = 2.8 eV, confirming our parameter choice.
Doping induces a doping-dependent mid-infrared (MIR) band and Drude weight at zero energy, since our stripes are metallic. The MIR band is due to collective lateral fluctuations of the stripe. These lateral fluctuations form a band of excitations, as for a violin string, and the MIR band corresponds to its zero momentum component.
The "string" fluctuation band is massive due to pinning of the commensurate stripes to the underlaying lattice. However, both the experiment [19] and our computations [3] show that the mass decreases with doping. This is rooted in the quasi-degeneracy found among BC and SC stripes and indicates that stripes form a floating phase at optimum doping. MIR band 0 0.08 ∼ 500 ∼ 250 * [19] Magnon (π/2, π/2) 0 293 286 ± 5 [16] Magnon (π, 0) 0 326 333 ± 7 * [16] Resonance 0 1/8 65 55 [17] The MIR band produces an absorption at low energy which explains the failure of the Drude model at optimum doping. This indicates that anomalous Fermi liquid properties are rooted in the low energy excitations of the floating phase.
In the 1BHM the MIR band is at energy ∼ 0.5 eV for doping 0.08, which is higher than in the experiment (Table 1). Experimentally the band is located at ∼ 0.5 eV at very low doping and softens with doping faster than what we found. This may be due to a finite size effect, since our optical conductivity computations are done on system sizes much smaller than those used for the computations of magnetic properties (16 × 4 and 40 × 40, respectively). Table 1 summarizes the energy of selected excitations in the 1BHM with the present parameter set. It is remarkable that such a simple model with only three parameters is able to provide a reasonable description of a variety of excitations in different channels plus several ground state properties.
Discussion and Conclusions
In cuprates, true magnetic long-range order is sometimes detected in experiment [11]; however, more often, Bragg peaks are not found. Two possibilities arise: the system may be close to a quantum critical point in the disordered phase (dynamic stripes), or the system may be on the ordered side of the transition but long-range order is not observed because of disorder (glassy stripes). Dynamic stripes have quasi-long-range order in time and space, whereas glassy stripes at low temperatures have practically long-range order in time but short-range order in space, as in a structural glass. This latter situation is clear, for example, in the magnetic channel at low doping in muon spin-relaxation experiments, where below a certain temperature the system develops a static (on the muon time scale) magnetic field which is not necessarily accompanied by Bragg peaks [20]; thus the spin rotational symmetry is broken without long-range order. In these quantum or classically disordered cases our results apply as long as the energy scale is not too low (typically tens of meV). How this reflects on the Fermi surface is discussed separately [21].
To conclude, we have shown that the GA+RPA approximation allows for a unified description of collective modes in the charge and magnetic channels of striped cuprates. Results are similar in the one-band and three-band Hubbard models, although some details may differ, in which case the 3BHM is expected to be more accurate. These modes are likely to play an important role in the anomalous properties of cuprates.
A cutting-edge immunoinformatics approach for design of multi-epitope oral vaccine against dreadful human malaria
Human malaria is a pathogenic disease mainly caused by Plasmodium falciparum, which was responsible for about 405,000 deaths globally in the year 2018. To date, several vaccine candidates have been evaluated for prevention, but all have failed to produce optimal results at various preclinical/clinical stages. This study designs polypeptide vaccines (PVs) against human malaria that cover almost all stages of the Plasmodium life-cycle, using 5 genome-derived predicted antigenic proteins (GDPAP). For the development of a multi-immune inducer, 15 PVs were initially designed using a T-cell epitope ensemble covering >99% of the human population, together with linear B-cell epitopes, with or without adjuvants. The immune simulation of the PVs showed higher levels of T-cell and B-cell activity compared with positive and negative vaccine controls. Furthermore, in silico cloning of the PVs and codon optimization, followed by enhanced expression within a Lactococcus lactis host system, was also explored. Although the study has sound theoretical and in silico findings, in vitro/in vivo evaluation seems imperative to warrant the immunogenicity and safety of the PVs towards management of P. falciparum infection in the future.
Introduction
The World Health Organization has documented almost 405,000 deaths among 228 million infections globally from human malaria [1]. Five species of Plasmodium, i.e., P. falciparum, P. vivax, P. malariae, P. ovale and P. knowlesi, are responsible for the disease, of which P. falciparum is the most lethal. About 99.7% and 62.8% of the cases documented in the African and South-East Asian regions, respectively, were attributed to P. falciparum (Pf), which further supports this fact [2]. In recent findings, P. vivax has also been found capable of causing severe malaria among populations living in sub-tropical countries [3]. Currently, the only preferred option for human malaria is cost-intensive chemotherapy [4,5], because no cutting-edge, effective human malaria vaccine is yet accessible that can provide protection for most of the worldwide population, including endemic regions. On the other hand, decades of exhaustive research have led to the development of a total of 44 malaria vaccine candidates, comprising 19 subunit, 10 DNA, 10 recombinant vector, 1 recombinant protein and 4 live/attenuated vaccine preparations, of which merely 7 vaccines are intended for the human host (http://www.violinet.org/). Most of these vaccines are either single- or multi-antigen formulations derived from various life-cycle stages of the parasites P. falciparum, P. vivax, P. yoelii, P. berghei and P. chabaudi [6,7]. For instance, one Pf vaccine combination involves the multi-antigens MSP1, MSP2 and RESA derived from the blood stage [8], while NYVAC-Pf7 includes the antigens CS, SSP2, LSA1, MSP1, AMA1, SERA and Pfs25 from multiple stages of the pathogenic life-cycle [9]. Besides these, P. falciparum reticulocyte-binding homologue 5 (PfRH5) has also been reported as a good antigen for malaria vaccine development [10,11], eliciting human monoclonal antibodies in a vaccine trial [12]. Most of the aforesaid vaccines were found to elicit immune responses but unfortunately failed to clear phase-III clinical trials owing to rapid waning of vaccine efficacy caused by geographical antigenic variation and human leukocyte antigen (HLA) allelic diversity [3,13-15]. Apart from these factors, apoptosis of infected erythrocytes and their inability to express HLA class I molecules on the cell surface, which assists the parasite in avoiding the cytotoxic T lymphocyte (CTL) response, is another aspect [16-18]. Thus, there is a pressing need for the development of innovative vaccines, using reverse vaccinology together with immunoinformatics, that can target the majority of the stages of the parasite's life-cycle with species-level conservation, so as to cover the worldwide human population [19].
In the last two decades, the reverse vaccinology strategy has been extensively exploited by research groups worldwide for genome-wide screening of vaccine antigens against several pathogens such as Neisseria meningitidis serogroup B, P. falciparum and Leishmania [20-24]. It has progressed synergistically with the onset of immunoinformatics, another cost-effective and rapid strategy for prediction of B- and T-cell epitopes present on antigenic proteins and for targeted population coverage analysis [25-29]. In recent years, the aforementioned strategies have been used frequently by various researchers in the design of novel vaccines against diseases such as Dengue [30], Schistosomiasis [31], Fascioliasis [32], Encephalitis [33], Lassa fever [34], Neonatal meningitis [35] and H7N9 influenza A [36]. Furthermore, Toll-like receptors (TLRs), e.g., TLR-2, TLR-4 and TLR-9, typically present in the plasma membrane of host cells, recognize pathogen-associated molecular patterns (PAMPs), which provokes phagocytosis and develops innate immune responses through production of cytokines, interleukins and antibodies that prohibit parasite entry in the pre-erythrocytic stage of malaria [37-40]. To the best of our knowledge, this is one of the first computational studies on the design of a multi-epitope based oral vaccine against human malaria. Overall, this investigation focuses on the design of 15 innovative polypeptide vaccines (PVs) utilizing predicted B- and/or T-cell epitopes sourced from 5 genome-derived predicted antigenic proteins (GDPAP), assembled together with specific linkers and adjuvants, against P. falciparum malaria [24].
Methodology
The methodological flow chart depicting the strategy for development of the innovative PVs is presented in Fig. 1. The steps include, among others: (v) designing of test PVs and of positive as well as negative polypeptide vaccine controls using a chimeric technique, (vi) tertiary structure prediction and molecular docking of PVs with TLR2 and TLR4 receptors, (vii) characterization of structural and functional properties, viz. secondary structure, physicochemical, adhesion, antigenicity, allergenicity, solubility and biological activity of the leading PVs, (viii) immune simulation of the leading PVs, (ix) molecular docking of the leading PVs with protective antibodies (IgG1 and IgG3), (x) molecular dynamics of the leading PVs complexed with TLR2 and TLR4, and (xi) in silico cloning and expression of potent PVs in Lactococcus lactis. The accomplishment of the aforementioned steps required various bioinformatics tools, which are listed in Table 1.
B-cell epitopes prediction
Linear (16-mer) and conformational B-cell epitopes were predicted using the BCPREDS and DiscoTope tools, respectively.
Forecast of T-cell epitopes
The linear B-cell epitope sequences (as forecasted in section 2.2) were used as input for the forecast of HLA class I and II restricted T-cell epitopes through the IEDB-based consensus strategy, with threshold criteria of binding affinity (IC50) ≤ 500 nM and percentile rank ≤ 3, respectively.
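As a concrete illustration of this screening step, the sketch below filters peptide-allele predictions by the two thresholds named above. This is a minimal sketch in Python: the input file name, its column names and the CSV format are assumptions made for illustration, not the actual IEDB output schema.

```python
# Hypothetical filtering of IEDB-style predictions by IC50 and percentile rank.
import csv

def filter_tcell_epitopes(rows, ic50_max=500.0, rank_max=3.0):
    """rows: iterable of dicts with 'peptide', 'allele', 'ic50', 'rank' keys."""
    kept = []
    for r in rows:
        if float(r["ic50"]) <= ic50_max and float(r["rank"]) <= rank_max:
            kept.append((r["peptide"], r["allele"]))
    return kept

with open("iedb_predictions.csv") as fh:      # hypothetical export file
    epitopes = filter_tcell_epitopes(csv.DictReader(fh))
print(len(epitopes), "peptide-allele pairs pass both thresholds")
```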
Forecast of population coverage and selection of T-cell epitope ensemble
The IEDB-based population coverage tool was used for the predicted population coverage (PPC) analysis of the forecasted T-cell epitopes with their corresponding HLA-binding alleles. Further, the HLA class I and class II epitope ensembles were developed as described previously [24]. Finally, the HLA class I and II epitope ensembles were mapped to the forecasted continuous B-cell epitopes.
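The ensemble-selection criteria of Ref. [24] are not reproduced here; the sketch below shows one simple way such a selection can be framed, namely as a greedy set cover that repeatedly picks the epitope adding the most not-yet-covered HLA alleles. The epitope-allele sets are toy placeholders, not real prediction data.

```python
# Greedy set-cover sketch for building an epitope ensemble over HLA alleles.
def build_ensemble(epitope_alleles, target_alleles):
    """Pick epitopes until the target HLA alleles are covered (or no gain)."""
    covered, ensemble = set(), []
    while covered != target_alleles:
        best = max(epitope_alleles,
                   key=lambda e: len(epitope_alleles[e] - covered),
                   default=None)
        if best is None or not (epitope_alleles[best] - covered):
            break                      # nothing left that adds coverage
        ensemble.append(best)
        covered |= epitope_alleles[best]
        del epitope_alleles[best]
    return ensemble, covered

toy = {"YTLTAGVCV": {"HLA-A*02:01", "HLA-A*02:06"},
       "YFNDDIKQF": {"HLA-B*35:01"},
       "EPITOPE03": {"HLA-A*02:01"}}  # toy epitope-allele map
ens, cov = build_ensemble(dict(toy),
                          {"HLA-A*02:01", "HLA-A*02:06", "HLA-B*35:01"})
print(ens, sorted(cov))
```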
Prediction of cytokine responses
Predictions of cytokine response induction, i.e., IL-4, IL-10 and IFN-γ, were carried out for the epitope ensembles using the IL-4Pred, IL-10Pred and IFNepitope tools, respectively.
Designing of multi-epitope PVs
In this study, the new multi-epitope PVs were developed using the linker EAAAK (L1) at the N-terminus, with or without adjuvant, following Ali et al. [30], where Cholera toxin B subunit (A: accession no. AIE88420.1) and 50S ribosomal protein L7/L12 (B: UniProt accession no. P9WHE3) were used as adjuvants against TLR-2 (PDB ID: 2Z7X) and TLR-4 (PDB ID: 4G8A), respectively. During PV design, the epitopes were coupled with linkers according to the following strategies: HLA class I epitopes with GGGS (L2); HLA class II epitopes with GPGPG (L3); B-cell epitopes with L2 or L3; HLA class I and II epitopes with L3; and HLA class II epitopes and B-cell epitopes with L3. The adjuvants were coupled with epitopes using linker L1, which was also employed to connect the adjuvant with HLA class I and B-cell epitopes [41-45].
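To make the linker scheme concrete, the sketch below assembles one hypothetical construct string. All epitope and adjuvant sequences shown are placeholders, and the exact block order of each real PV follows Table 3 rather than this simplified layout.

```python
# Assumed simplified layout: adjuvant + L1 + class I block + L3 + class II/B block.
L1, L2, L3 = "EAAAK", "GGGS", "GPGPG"

def assemble_pv(adjuvant, hla1_epitopes, hla2_epitopes, bcell_epitopes):
    core = L2.join(hla1_epitopes) + L3 + L3.join(hla2_epitopes + bcell_epitopes)
    return adjuvant + L1 + core if adjuvant else core

pv = assemble_pv(
    "MIKLKFGVFFTVLLSSAYA",                  # placeholder adjuvant sequence
    ["YTLTAGVCV", "YFNDDIKQF"],             # HLA class I epitopes (joined by L2)
    ["AAAKKAAAKKAAA"],                      # HLA class II epitopes (joined by L3)
    ["KNKEKALIIKDMEKLF"])                   # linear B-cell epitopes (joined by L3)
print(len(pv), pv[:60])
```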
Tertiary structure prediction and molecular docking of PVs with TLR2 and TLR4 receptors
Tertiary structures of the PVs were predicted using the RaptorX tool. Refinement and validation of the 3D structures were carried out with the ModRefiner and PROCHECK tools, respectively. Molecular docking of the PVs with the receptor complexes TLR2-TLR1 (PDB ID: 2Z7X) and TLR4-MD2 (PDB ID: 4G8A) was performed using the ClusPro 2.0 tool. PVs designed without adjuvant, or with the TLR2- and TLR4-specific adjuvants, were docked with the receptors TLR2-TLR1 and TLR4-MD2, respectively. The ligands Escherichia coli heat-labile enterotoxin type IIB B-pentamer (C1; PDB ID: 1QB5) and the carbohydrate recognition and neck domains of surfactant protein A (C2; PDB ID: 1R13) were used as controls for docking with the receptors TLR2 and TLR4, respectively [46,47].
Characterization of structural and functional properties of leading PVs with positive vaccine controls
The self-assembling protein nanoparticle (SAPN) from P. falciparum, FMP014 (C3), and a fusion protein from Staphylococcus aureus (C4) were selected as positive vaccine controls, as detailed previously by Kaba et al. [48] and Ahmadi et al. [49], respectively, for comparative evaluation of several properties of the leading PVs. The physico-chemical properties [Grand Average of Hydropathy (GRAVY), molecular weight, isoelectric point (pI) and half-life] were calculated using the ExPASy-ProtParam tool. The antigenic properties were predicted with the VaxiJen 2.0, ANTIGENpro, Kolaskar-Tongaonkar protein antigenicity prediction and Secret-AAR tools. Further, the recombinant protein solubility was predicted using the RPSP, Protein-Sol, CamSol and SOLPro tools. The analysis of secondary structure elements (alpha helix, extended strand and random coil) was performed using the PSIPRED tool, and tertiary structure analysis was carried out using ModRefiner and PROCHECK. The biological function and allergenicity were evaluated with the DeepGOPlus and AllergenFP tools, respectively.
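Several of these physico-chemical metrics can be reproduced locally with Biopython's ProtParam module, as sketched below; the sequence shown is a placeholder, not PV1A or PV3B.

```python
# Local computation of ProtParam-style metrics with Biopython.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQ"  # placeholder
pa = ProteinAnalysis(seq)
print("GRAVY:            %.3f" % pa.gravy())             # < 0 => hydrophilic
print("Molecular weight: %.1f Da" % pa.molecular_weight())
print("Isoelectric pt:   %.2f" % pa.isoelectric_point())
print("Instability idx:  %.2f (stable if < 40)" % pa.instability_index())
```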
Immune simulation of leading PVs
The best docked complexes (in terms of lowest docking energy) of PVs with the receptors TLR2 and TLR4 were chosen for the immune simulation study using the C-ImmSim tool, along with two positive vaccine controls (C3, C4), as mentioned in section 2.8, and one negative vaccine control (C5), so as to compare the simulation results. C5 was designed using suitable linkers and non-binding HLA class I and II epitopes, applying the same strategies as used in the PVs. The non-epitopes were screened using the criteria of the 14 lowest-ranking HLA class I (HLA-A*0201, -B*5301) and 3 lowest-ranking HLA class II (HLA-DRB1-0411) peptides, as predicted by the IEDB-based consensus method, in a randomly selected highly variable erythrocyte membrane protein 1 (PfEMP1: PF3D7_0617400.1). C-ImmSim is an agent-based model simulator, which forecasts the induction of immune responses (cellular and humoral) along with T-cell and B-cell epitope prediction [50]. The default simulation parameters were chosen, except for the HLA alleles, the number of antigens (10,000) and the time steps [51]. The host HLA alleles (HLA-A*02:01, HLA-B*53:01 and HLA-DRB1*04:11) were selected based on prevalent alleles associated with human malaria [52-55]. The time steps 1, 42 and 84 were selected following Kaba et al. [48].
Molecular dynamics of leading PVs complexed with TLR2/TLR4
Molecular dynamics of the top two docked complexes, PV1A-TLR2 and PV3B-TLR4, were performed through the iMODS server to explain the collective protein motions in internal coordinates through normal mode analysis (NMA). The NMA in dihedral coordinates naturally mimics the combined functional motions of protein molecules modelled as a set of atoms connected by harmonic springs [58].
Codon optimization and in silico cloning of leading PVs
The DNA coding sequences of the oral PVs (PV1A and PV3B) were optimized for elevated protein expression using the Java Codon Adaptation Tool (JCat) with the following options: i) Lactococcus lactis (strain IL1403) as expression host, ii) avoid rho-independent transcription terminators, iii) avoid prokaryotic ribosome binding sites, and iv) avoid cleavage sites of restriction enzymes. Further, for in silico cloning of the PV1A and PV3B cDNAs (with stop codon), the SnapGene software was used, with insertion at the FspI restriction site (position 6006) of the plasmid vector pIL1 (GenBank accession number: HM021326) [59].
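The optimization criteria above can be sanity-checked with a few lines of code. The sketch below verifies the GC content against the 30-70% window discussed in section 3, and confirms the absence of an internal FspI site in the insert (so that the cloning site stays unique). The CDS shown is a placeholder, not the actual JCat output.

```python
# Post-optimization sanity checks on a placeholder coding sequence.
FSP_I = "TGCGCA"                                    # FspI recognition site

def gc_content(seq):
    seq = seq.upper()
    return 100.0 * (seq.count("G") + seq.count("C")) / len(seq)

def check_cds(cds):
    ok_gc = 30.0 <= gc_content(cds) <= 70.0         # optimal expression window
    no_fspi = FSP_I not in cds.upper()              # keep the cloning site unique
    return ok_gc and no_fspi

cds = "ATGGCTAAAGAAGCTGCAGCTAAAGGTTCTTAA"            # placeholder CDS with stop
print("GC = %.1f%%, passes checks: %s" % (gc_content(cds), check_cds(cds)))
```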
Results and discussion
According to the VIOLIN database (accessed on June 26, 2019), a total of 16 vaccines are available so far against P. falciparum, targeting different life-cycle stages, but none has succeeded in getting approval from the FDA, USA for worldwide marketing [60]. RTS,S/AS01 is the world's first European Medicines Agency (EMA)-approved malaria vaccine, with partial protection in young children (36.3%), for use only in the Sub-Saharan African region, along with severe adverse effects (24.2%-28.4%) and incurable adverse effects (1.5%-2.5%) [61,62]. In addition, its efficacy declined to almost zero after the 4th year and was negative in the 5th year [63]. These facts warrant exhaustive efforts/research towards the development of a more effective PV that can elicit robust immune responses globally. The present study is an extension of our previous report [24] that exploits 5 homologous antigens conserved among the human malaria parasites P. falciparum, P. vivax, P. ovale and P. malariae (with a minimum of 38.62% identity, recognized through the BLASTp tool) as a potential platform for designing PVs [64].
Prediction of B-and T-cell epitopes for screening of epitope ensemble
In recent years, epitope-based vaccine design has emerged as a strategy employed by researchers worldwide for the development of efficient PVs against numerous diseases such as leishmaniasis and malaria. In this context, the exploitation of computational approaches is not only cost-effective for vaccine development but also reduces the time and risk of failure of experimental studies [26,27,65,66]. In this study, 82 continuous B-cell epitopes were forecasted from the 5 GDPAP using BCPREDS (Supplementary Table S1). These 82 continuous B-cell epitopes were found to contain a total of 433 T-cell epitopes, comprising 142 HLA class I epitopes and 291 HLA class II epitopes (Supplementary Table S2). The T-cell epitopes were forecasted from the pool of predicted continuous B-cell epitopes because antigen presentation to T-cells is supposed to be more efficient if the antigen is also recognized by B-cells. In addition, an antigen-specific B-cell may present multiple T-cell epitopes to the immune system, thereby enhancing its ability to be triggered in a specific manner [67-69]. Further, based on the PPC analysis, an epitope ensemble of 13 HLA class I epitopes with 98.75% and 3 HLA class II epitopes with 56.85% world coverage was designed using the criteria described previously (Table 2) [24]. A combined set of 16 HLA class I and II epitopes in the ensemble revealed human population coverage of at most 99.46% (world) and at least 94.47% (South America) (Fig. 2). The aforementioned criteria involved the screening of cross-presented epitopes among different sets of HLA-binding alleles in a selected population with higher PPC and VaxiJen score. The technique of identifying such 'promiscuous' epitopes that cover diverse HLA alleles of an affected population is highly desirable, as such epitopes could enhance vaccine efficacy [51]. Concerning the HLA class I epitope ensemble of P. falciparum, the epitopes YTLTAGVCV (T1) and YFNDDIKQF (T5), which covered 56.56% and 39.26% of the world population, respectively, were also reported in a similar study conducted by Pritam et al. [24].
Induction of cytokine responses of epitope ensemble
In the case of malaria, the adaptive immune system elicits both cellular and humoral immune responses, which are associated with T and B lymphocytes, respectively. However, mainly the CD4+ T lymphocytes (helper T cells, Th), whose Th1 and Th2 subsets elicit IFN-γ and IL-4, respectively, regulate the malaria infection [68,70]. Besides these, TLRs are also involved in the activation of different signalling cascades that ultimately express the genes of pro-inflammatory cytokines like IFN-γ [71]. IFN-γ is associated with depletion of liver-stage parasites [72,73]. This is also supported by the present study, where the epitopes T2, T7, T8, T10 and T11 were found to induce the IFN-γ response, and T1, T2, T3, T4, T5, T6, T7, T8, T9, T11, T12, T13, T14 and T16 the IL-4 response (Supplementary Table S3). Among the epitope ensemble, T14 was recorded as one of the potent candidates to induce the IL-10 response, which has been found to suppress pathogenic inflammatory responses in the control of the malaria parasite [74].

Table 3. Order of linkers, epitopes and adjuvants used in designing the 15 polypeptide vaccines and the positive as well as negative vaccine controls.
Design of PVs for malaria
Linear B-cell epitopes are linked to antibody generation, and identification of such epitopes using traditional approaches is not only costly but also time-consuming and technically difficult [75]. To overcome these issues, the present study predicted T-cell epitopes using linear B-cell epitopes as input instead of the whole antigen, so as to minimize the size of the PVs while eliciting both cellular (T-cell epitope) and humoral (B-cell epitope) immune responses. Further, the non-toxic nature of adjuvants A and B also helps in the production of several cytokines (e.g., IFN-γ, TNF-α, IL-2, IL-4, IL-6, IL-12) through induction of dendritic cells, B-cells, macrophages and T-cells, which ultimately boosts antibody concentrations, as reported in several studies on various disease-causing agents including human rotavirus, HIV, Helicobacter pylori and Influenza virus [76-79]. Therefore, 15 PVs were designed from the T-cell epitope ensemble and/or the linear B-cell epitopes carrying the ensemble, with different linkers as well as adjuvants responsible for the activation of the TLR2 and TLR4 receptors pertaining to malaria. Initially, five non-adjuvant PVs (PV1-PV5) were designed, followed by incorporation of TLR2- and TLR4-binding specific adjuvants, which resulted in the design of 10 adjuvanted PVs, i.e., PV1A-PV5A and PV1B-PV5B (Table 3). The EAAAK linker was incorporated at the N-terminus of the PVs, as it is stiff and prevents the assembly of the adjuvant with other vaccine domains [80,81]. Although adjuvants are found to enhance the immunogenicity of vaccines, they may cause toxicity or adverse reactions. Therefore, we designed 5 PVs without adjuvants: PV1 contains only T-cell epitopes (HLA class I and II), joined together using linkers L2 and L3; PV2 exploits merely linear B-cell epitopes attached with linker L3; in PV3, both T- and B-cell epitopes were joined with linkers L2 and L3; and PV4 exploits merely linear B-cell epitopes attached with linker L2. Between these two linkers, L3 is a universal linker that can enhance proteasomal processing along with immunogenicity, while L2 is a flexible linker that can stimulate a better immune response [42,77,82]. As an exemplary vaccine should induce a multi-immune response (B- and T-cell immune responses), both T- and B-cell epitopes were used in the design of the further PVs so as to elicit humoral/cellular responses [83]. PV3 and PV5 differ from each other with respect to the linkers (L3 and L2, respectively) used for joining the continuous B-cell epitopes. In the design of the negative polypeptide vaccine control, linkers L2 and L3 were employed to connect non-binding HLA class I and II T-cell epitopes (Table 3). Fig. 3(a, b) depicts the exemplar design of PV1 and PV3 with adjuvants A and B, i.e., PV1A and PV3B. The advantages of the linkers and adjuvants used in the present study for designing multi-epitope malaria PVs, namely enhanced antigen processing and presentation ability as well as immunogenicity, have also been revealed by several contemporary researchers working on other diseases [36,84,85]. Also, the cost-effective Cholera toxin B subunit adjuvant is a cytokine inducer (Th1 and Th2 responses), which increases antibody titers [86].
Thus, the use of both T-cell and B-cell epitopes together with linkers and adjuvants can increase the potential of PVs to induce multiple immune responses.
Molecular docking of PVs with receptors TLR2 and TLR4
The TLRs, especially the surface ones, viz. TLR2 and TLR4, are present not only on immune cells but also on epithelial cells and fibroblasts; they recognize PAMPs and bridge the innate and adaptive immunity of the host by regulating the balance between Th1- and Th2-type responses [87-90]. For example, the merozoite stage of P. falciparum releases glycosylphosphatidylinositol (GPI)-anchored surface antigens, which act as ligands recognized by both TLR1-TLR2 heterodimers and TLR4 homodimers of host immune cells. Such events result in decreasing the parasitic load of the host by triggering the production of various pro- and anti-inflammatory cytokines as well as antibody isotype switching [38-40,91,92]. Thus, for enhanced protection, the selection of the respective TLR2 and TLR4 mucosal protein adjuvants A (CTB) and B (50S ribosomal L7/L12) in the designed PVs could be a good choice against P. falciparum [77,78,86,93]. Even combining two distinct TLR agonists into an adjuvanted subunit vaccine has shown synergistic protective efficacy [94,95]. Altogether, these facts led to the hypothesis of using agonists of both the TLR2 and TLR4 receptors (A and B, respectively) in the designed PVs, and subsequently docking experiments were performed to reveal the possible associations between the PVs and TLRs [96,97]. For molecular docking, the tertiary structures of the 15 PVs were predicted, revealing >80% of amino acids in favoured regions. Overall, 22 docking studies were carried out using the ClusPro 2.0 tool, including the controls C1 and C2 against the receptors TLR2 and TLR4, respectively (Table 4). This resulted in a total of 18 docked models, i.e., M1 to M18, covering 16 PV complexes and 2 controls. It is quite interesting to note that the PVs designed without adjuvants were also able to dock with TLR2 and TLR4, with energy scores better than the controls, except PV3. Therefore, they might be capable of eliciting innate immunity [98-100], in good agreement with earlier studies regarding the rapid production of IFN-γ [101,102]. Among the 15 designed PVs, PV3, PV5A and PV4B could not be docked by the ClusPro tool with their respective receptors, so a total of 12 PVs were docked successfully (Table 4). Based on the overall docking score, PV1A (−1275.5) and PV3B (−1269.2), against the receptors TLR2 and TLR4 respectively, were selected as the leading PVs for further structural and functional analysis (Fig. 4).
Comparative evaluation of structural and functional properties of leading PVs with positive as well as negative vaccine controls
The negative GRAVY values of both PVs, PV1A (−0.377) and PV3B (−0.479), point towards their hydrophilic nature (i.e., exposure on the outer surface); therefore, they may elicit an elevated humoral immune response [93]. Generally, in vitro protein stability is indicated by an instability index < 40. On this basis, the present study identified PV1A and PV3B as stable proteins, with instability index values of 36.35 and 26.22, respectively. Moreover, the in vivo half-lives of PV1A and PV3B were > 10 h, reflecting stabilities that might enhance the durability as well as the strength of the immune response [104,105]. The leading PVs, i.e., PV1A and PV3B, were predicted as probable antigens using several antigenicity forecasting tools, viz. VaxiJen, ANTIGENpro, protein antigenicity prediction and Secret-AAR including SPAN, at default threshold values. Non-allergenicity of PV1A and PV3B was forecasted by the AllergenFP tool at a threshold value > 0.8. The secondary structure analysis (SSA) of a protein is beneficial for understanding its folding, stability and function [106-110]. In this context, the present study revealed alpha-helix contents of 31.31 and 25.75%, β-strand contents of 9.89 and 16.71%, and coil contents of 58.79 and 57.53% for PV1A and PV3B, respectively (Fig. 5). The predicted tertiary structures of PV1A and PV3B were refined by the ModRefiner tool, and the Ramachandran plots exhibited favoured regions of 92.3 and 91.4% and allowed regions of 6 and 5.7%, respectively. These values indicate the high quality and stability of the refined protein structure models, as described previously [111] (Fig. 6). Further, PV1A and PV3B were forecasted to possess 8 and 18 linear as well as 104 and 315 discontinuous B-cell epitopes, respectively, at default thresholds (Supplementary Table S4). These leading PVs were also predicted to be involved in multi-organism processes and in immune system processes and cell adhesion, respectively, as predicted by the DeepGOPlus tool, which is based on a deep convolutional neural network model and the Gene Ontology (GO) scheme. The overall structural and functional analysis of the leading PVs showed properties comparable to the positive vaccine controls C3 and C4 (Table 5). Thus, the leading polypeptide vaccines PV1A and PV3B have the capability to induce both humoral and cellular immune responses. However, orally administered polypeptide vaccines suffer from poor stability, insolubility, weak bioavailability and low immunogenicity owing to the acidic environment of the upper GI tract and inefficient delivery to the mucosa-associated lymphoid tissue. Therefore, a genetically engineered L. lactis expression host can be used for production and delivery of vaccine antigens due to several advantageous properties, viz. easy and safe production and storage, survival in the gastric environment, and self-adjuvanticity [112,113].
Immune simulation of leading PVs
Table 5. Comparative evaluation of structural and functional properties of positive vaccine controls (C3, C4) and leading polypeptide vaccines (PV1A and PV3B).

In the course of human malaria infection, pro-inflammatory (TNF-α, IFN-γ and IL-12) and anti-inflammatory (IL-4 and IL-10) cytokines are produced by Th1 and Th2 cells, respectively [114]. In addition, cytotoxic T lymphocytes, natural killer cells and macrophages are activated by elicitation of IL-4, which helps to control the pathogen [115,116]. Even the most successful malaria vaccine candidate, RTS,S, was reported to elicit IFN-γ, IL-2, IgG titers, and activation of CD4+ T-cell responses [72,117]. Against this background, the present study involved immune simulations of PV1A and PV3B using the C-ImmSim tool along with the positive vaccine controls (C3, C4) (Table 6). C3 is a self-assembling polypeptide nanoparticle (SAPN) based P. falciparum malaria vaccine candidate that elicits IFN-γ, TNF-α, IL-4, IL-10 and IgG antibody titers in mice [48,118]. C4 is a novel fusion protein of Staphylococcus aureus that induced a high titer of specific antibody (IgG1 and IgG2a) responses and decreased viable cell counts through elicitation of a mixture of Th1, Th2, and Th17 immune responses. The simulation results displayed no alteration in antigen level or immunogenic responses, except generation of IFN-γ, for the negative control (C5), while the positive vaccine controls as well as PV1A/PV3B showed a drastic decrease in antigen counts, which ultimately reached zero after the 5th day of injection (Supplementary Fig. 1). Besides this, they were also involved in the elicitation of B lymphocyte, cytotoxic T lymphocyte, helper T lymphocyte and macrophage responses, leading to generation of cytokines (IFN-γ, TGF-β, IL-2, IL-10 and IL-12) as well as antibody titers (IgG + IgM and IgG1 + IgG2). The generation of a high level of IgM points towards a better primary immune response, while the decrease in antigen level together with enhancement of the B-cell population and antibodies (IgM, IgG1 + IgG2 and IgG + IgM) further reflects good secondary and tertiary immune responses. These results agree well with the earlier findings of Shey et al. [66]. Utilizing a similar in silico approach, the leading PVs designed and characterized in the present study were compared with the wet-lab experimental data of Kaba et al. [48] and Ahmadi et al. [49] (Table 6). The predicted immune simulation results indicated the elicitation of macrophages and B and T lymphocytes for the production of cytokines (IFN-γ, TGF-β, IL-2, IL-10 and IL-12) as well as antibodies (IgG + IgM and IgG1 + IgG2) against the proposed top two PVs, similar to the observations obtained by the aforementioned research groups in mice. Fig. 7 summarizes the comparative account of the immune simulations of C3, C4 and one of the leading PVs (PV1A), which has a higher potential to induce protective immune responses, possibly owing to the use of the Cholera toxin B subunit adjuvant. Therefore, the design strategy used in PV1A/PV3B could be highly effective in stimulating immune responses. Moreover, the validity of immunoinformatics tools for prediction of epitopes, protective immune response analysis, construction of chimeric multi-epitope vaccines, assessment of vaccine safety as well as efficacy, and immunization modelling has been demonstrated in the last five years, with more than 500 publications in the PMC database, assisting the preclinical and clinical studies of several vaccine projects including Hepatitis B Virus, Dengue, Schistosoma haematobium, Treponema pallidum, S. aureus, Trypanosoma cruzi, Helicobacter pylori, Middle East Respiratory Syndrome Coronavirus and Zika virus [26,45,119,120]. Therefore, the use of bioinformatics tools for prediction of antigenicity, epitopes and molecular interactions is a convenient and adequate approach in vaccine design and development [47,84,121].
Molecular docking of leading PVs with antibodies IgG1 and IgG3
When an antigen interacts with an antibody, it induces the humoral immune response and helps in clearance of the pathogen. The IgG antibodies (named in order of decreasing abundance IgG1, IgG2, IgG3, and IgG4) are among the most abundant pathogen-neutralizing molecules found in human serum. These antibodies share >90% amino acid sequence identity, but each subclass has exclusive effector properties including half-life, epitope binding, immune complex formation, complement activation, triggering of effector cells and placental transport. Moreover, the IgG profile of a given individual is determined by their inherited allotypes, which can potentially influence the clinical manifestation of the immune response [122]. However, broadly neutralizing antibodies (bNAbs) have been found in a rare population of patients that control the infection [123-125]. These bNAbs tend to target conserved antigenic regions exposed on the outer surfaces of a pathogen across the circulating strains. Here, in the present study, a protein-protein global docking method (ClusPro server) was used to reveal the shape complementarity between the PVs (as ligands) and the interacting domains of the antibodies IgG1 and IgG3 (as receptors), to eliminate the need for long-term exposure of malaria patients to the selected antigen mimetics PV1A and PV3B involving the epitopes (B1, B4 and B5) of P. falciparum strains. These antibodies could be considered bNAbs if they are found to have well-detectable neutralization activity in wet-lab experimental studies [126-130]. Furthermore, the respective source proteins P28, P25 and MSP1 of the epitopes B1, B4 and B5 have been characterized as leading vaccine candidates [130,131]. Also, the antibodies IgG1 and IgG3 have been found to be associated with human malaria protection [132,133]. Thus, a structure-based vaccinology approach could be exploited to predict the probability of potent PVs that might be able to block infection even more effectively [134]. These data motivated the molecular interaction studies of the leading PVs (PV1A as well as PV3B), along with co-crystallized control epitopes, towards the antibodies IgG1 and IgG3.

Molecular dynamics of leading PVs complexed with TLR2/TLR4

The iMODS server relates the necessary protein dynamics to their normal modes [136].
The respective eigenvalues for PV1A and PV3B complexed with TLR2 and TLR4 were found 1.064E −06 and 7.498E−09 that indicated the greater stability of complex PV1A-TLR2. The individual and cumulative variances associated to each normal mode were inversely related to the eigenvalue. The covariance matrix indicated the coupling between pairs of residues, i.e. whether they experience correlated (red), uncorrelated (white) or anticorrelated (blue) motions whereas elastic network graph characterizes pairs of atoms connected by springs and each dot in the graph represented one spring between the corresponding pair of atoms [137].
Codon optimization, in silico cloning and expression of PV1A and PV3B
The lengths of the obtained cDNAs for PV1A and PV3B were 1092 bp and 1992 bp, respectively. The Codon Adaptation Index (CAI) values for PV1A and PV3B were 0.9857 and 0.9584, respectively; for reliable codon optimization, the CAI value should lie between 0.9 and 1.0 [138]. The GC contents of the improved DNA sequences of PV1A and PV3B were 42.12% and 43.12%, which lie within the optimal range (30% to 70%) for easy expression in a suitable host [139]. Although P. falciparum antigens can be expressed in E. coli, they require codon harmonization (reduction of amino acid misincorporation) to improve immunogenicity [140]. In the present study, the solubilization probability of the recombinant proteins (PV1A and PV3B) when expressed in E. coli, as revealed by the bioinformatics tools RPSP, Protein-Sol, CamSol and SOLPro, was lower compared with the positive vaccine controls (C3, C4), which indicated the need to look for an alternative expression host (Table 5). Accordingly, L. lactis was used as an expression host alternative to E. coli owing to the following advantageous properties: (i) generally recognized as safe (GRAS) microorganism; (ii) lack of an outer membrane; (iii) insignificant extracellular proteolytic activity; (iv) free of endotoxins; (v) no lipopolysaccharide contamination; (vi) accommodates cysteine-rich proteins; (vii) availability of both inducible and constitutive genetic control systems; (viii) able to express prone-to-aggregate and/or difficult-to-purify proteins; (ix) presentation to the host immune system in the context of micro-particles, avoiding the immunotolerance normally provoked by oral delivery of soluble antigens; and (x) a codon bias similar to P. falciparum, which makes it an efficient protein expression and secretion system, delivering proteins to the outer surface where they can easily interact with the host immune system [113,141-143]. In recent years, several wet-lab studies have confirmed the utilization of L. lactis as an expression host to produce properly folded, pure and stable chimeric and/or single antigenic proteins of many pathogens, eliciting high levels of functional antibodies/cytokines, including P. falciparum [144-148], Mycobacterium bovis [149], Mycobacterium tuberculosis [150], Helicobacter pylori [151], Polish avian H5N1 influenza [152], cancer [153] and Staphylococcus aureus [154]. Moreover, L. lactis-mediated delivery of DNA vaccines also leads to the expression of post-translationally modified antigens by host cells, resulting in presentation of conformationally restricted epitopes to the immune system for induction of both cellular and humoral immune responses [112]. With the aforementioned properties, the last two decades have witnessed the use of genetically engineered L. lactis systems as effective oral vaccine vehicles for delivering antigens of viruses, bacteria and parasites to elicit both systemic and mucosal immunity [155-158]. Finally, the sizes of the PV1A and PV3B recombinant DNAs (obtained after insertion of the cDNAs into the pIL1 expression vector) were 7477 bp and 8377 bp, respectively; the inserts lie inside the ORF and can be translated into the respective protein sequences with four additional amino acids (MCKC) at the N-terminus (Fig. 10).
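The size arithmetic of the cloning step can be illustrated with a toy simulation of blunt-end insertion at an FspI site (recognition sequence TGC|GCA). The vector and CDS below are short placeholders, not pIL1 or the real cDNAs; the actual work used SnapGene with pIL1 (HM021326), giving the 7477/8377 bp constructs quoted above.

```python
# Toy blunt-end insertion at an FspI site; lengths simply add up.
FSP_I = "TGCGCA"                                  # blunt cutter: TGC | GCA

def insert_at_fspi(vector, cds):
    pos = vector.upper().find(FSP_I)
    if pos < 0:
        raise ValueError("vector has no FspI site")
    cut = pos + 3                                 # blunt cut inside the site
    return vector[:cut] + cds + vector[cut:]

vector = "AAAA" * 10 + FSP_I + "TTTT" * 10        # 86 bp placeholder vector
cds = "ATG" + "GCT" * 8 + "TAA"                   # 30 bp placeholder CDS
rec = insert_at_fspi(vector, cds)
assert len(rec) == len(vector) + len(cds)         # sizes add up, as in Fig. 10
print(len(vector), "+", len(cds), "=", len(rec))
```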
Therefore, an ideal multi-epitope polypeptide vaccine should be composed of a series of epitopes and/or adjuvants that can elicit simultaneous and strong innate and adaptive (humoral and cellular) immune responses, involving T- and B-cell responses against the targeted malaria pathogen. In contrast to traditional killed/live-attenuated or single-epitope vaccines, multi-epitope vaccines have distinctive properties, such as the inclusion of numerous HLA-restricted epitopes derived from different antigens of various Plasmodium species/strains that can be recognized by various T-cells; the inclusion of additional components with adjuvant capability to enhance immunogenicity as well as long-lasting immunity; and the removal of unnecessary parts that can trigger pathogenicity or adverse effects. Well-designed multi-epitope vaccines with such advantages should become powerful prophylactic and therapeutic agents against malaria infection. However, the present problems in the field of multi-epitope vaccine design include the selection of appropriate candidate antigens and the systematic arrangement of their immunodominant epitopes for effective oral delivery through virus-like particles and SAPNs. The present study successfully utilized immunoinformatics tools to predict a suitable epitope ensemble of the target proteins for designing a multi-epitope oral malaria vaccine.
Conclusion
Surprisingly, no licensed malaria vaccine is yet available in the market to protect worldwide human populations, despite decades of research. One of the major bottlenecks of malaria vaccine development is the immune escape mechanism of the pathogen through antigenic variation and/or HLA diversity. The PVs designed in the present study (PV1A and PV3B) may overcome these issues, as they possess both B- and T-cell epitopes derived from 5 antigenic proteins involving multiple stages of the pathogen life-cycle, with worldwide human population coverage of 99.46%. Moreover, these PVs have a high potential to elicit both innate (TLR2 and TLR4) and adaptive (cellular and humoral) immune responses. However, this warrants further experimental validation so as to evaluate their efficacy in preclinical studies. Supplementary data to this article can be found online at https://doi.org/10.1016/j.ijbiomac.2020.04.191.
Funding
This study was self-financed and did not receive any grant from a funding agency.
Declaration of competing interest
The authors declare that they have no conflicts of interest.
Empirical Performance Evaluation of Raster-to-Vector Conversion Methods: A Study on Multi-Level Interactions between Different Factors
SUMMARY Many factors, such as noise level in the original image and the noise-removal methods that clean the image prior to performing a vectorization, may play an important role in affecting the line detection of raster-to-vector conversion methods. In this paper, we propose an empirical performance evaluation methodology that is coupled with a robust statistical analysis method to study many factors that may affect the quality of line detection. Three factors are studied: noise level, noise-removal method, and the raster-to-vector conversion method. Eleven mechanical engineering drawings, three salt-and-pepper noise levels, six noise-removal methods, and three commercial vectorization methods were used in the experiment. The Vector Recovery Index (VRI) of the detected vectors was the criterion used for the quality of line detection. A repeated measures ANOVA analyzed the VRI scores. The statistical analysis shows that all the studied factors affected the quality of line detection. It also shows that two-way interactions between the studied factors affected line detection.
Introduction and Literature Review
The empirical performance evaluation of raster-to-vector conversion methods is an important topic in the area of graphics recognition [1], [2]. Comparing the performance of a vectorization method with third-party methods will not only show whether one's own method outperforms the others but will also gauge the maturity of the raster-to-vector methods being studied. The outputs of the systems (methods) are compared with those of others using test images and a selected performance evaluation criterion, which may include time and storage [3]-[5]. However, it is highly recommended to perform such a test (or, more precisely, a large systematic test) among many methods (research prototypes and/or commercial software) using a unified platform with suitable test images and a proper performance evaluation method. It is also desirable to study the effect of other factors (processes) on the quality of vectorization systems. These factors may include noise type, quantity of noise, and noise-removal methods. Performing such an experiment is not only complex and labor intensive, but it may also require a proper statistical test to analyze the output of the experiment. Phillips and Chhabra [6] performed an empirical test on three automatic raster-to-vector converters (two commercial and one research prototype). EditCostIndex was the performance evaluation criterion. The experiment tested the detection of straight lines (dashed and solid), arcs, circles, and text. The test data were synthetic images with their corresponding ground truth images in VEC format. The VEC file format was also introduced in that work. Three vectorization software applications (two commercial and one research prototype) were used: I/Vector (Vectory), VPstudio, and MDUS. However, no noise was incorporated into the images used in the experiment. To reveal the weaknesses and strengths of the systems (methods), the authors suggested increasing the degradation of the test images and using more complex drawings in future studies.
Chhabra and Phillips [7] performed another empirical test to evaluate complete vectorization systems. The major advantage of this test compared with previous works is that real scanned images were used, with their ground truth data generated manually. The work also describes how to set up such experiments. Because of the time-intensive process of converting real paper images into usable scanned images, only a small subset (ten images) of the available paper images was used. Generating the ground truth data and aligning them with the real scanned images was another labor-intensive part of the experiment. The ground truth data were saved as VEC files. The empirical test involved four vectorization systems (three commercial and one research prototype): Scan2CAD, TracTrix, Vectory, and VrLiu. Again, EditCostIndex was used as the performance evaluation criterion.
The focus of Wenyin et al. [8] was on solid arc detection in raster images. Seven images (four synthesized and three scanned) were used in the test. The ground truth data were saved as VEC files. The overall average of the VRI scores was used as the ultimate measure of performance. The test involved two research prototypes (the methods of Dave Elliman [9] and Xavier Hilaire [10]) with their parameters fixed during the test. No commercial software was tested. Four types of noise were introduced into the images: Gaussian, high-frequency, hard pencil, and geometry distortion.
Further research by Wenyin [11] was also on solid arc detection from engineering drawings. Twelve real scanned and synthetically generated images were used (some images were corrupted by artificial noise). The ground truth data were saved as VEC files. Again, the systems were run as black boxes with fixed parameters, and no user intervention was allowed during the test. The overall average of the VRI was the unique measure of performance. Two research prototypes (the methods of Song JiQiang and Dave Elliman [9]) were evaluated. The recognition of arcs was more challenging because the test images were complex and contained tangent arcs that were hard to precisely locate.
The main theme in Wenyin's study [12] was arc segmentation in engineering drawings. The new elements in this experiment included the use of eighteen new images (six real scanned and twelve noisy versions). The ground truth data were saved as VEC files. The images were distorted by a moderate amount of salt-and-pepper noise. The average of the VRI was the performance evaluation criterion. However, an updated version of the VRI formula was used.
The updated formula, VRI = √(D_v · (1 − F_v)), uses the geometric mean rather than the arithmetic mean. Three research raster-to-vector software packages (the methods of Dave Elliman [9], Daniel Keysers & Thomas Breuel [13], and Xavier Hilaire [14]) were used in this test. However, the noise effect was not large, and some noisy images obtained high scores when compared with their original clean versions [12].
Shafait et al. [15] shifted the way of representing the ground truth data and the performance evaluation. Five real scanned images were used. Ground truth data were generated manually from the scanned drawings and stored in TIF image files. The vectorial score was the criterion rather than VRI. The experiment used four vectorization software packages, of which three were commercial: VPstudio (VP), Vectory (Vec), and Scan2CAD (S2CAD); one was a research method (VrLiu). No noise was used in this experiment.
Al-khaffaf et al. [16] proposed a methodology to study the effect of many factors that may affect line detection. The three factors were noise-removal method, noise level, and raster-to-vector method. The study was performed using six noise-removal methods: kFill [17], [18], Enhanced kFill (EkFill) [19], Activity Detector (AD) [20], and their respective enhanced counterparts Algorithm A (AlgA) [21], Algorithm B (AlgB) [22], and Algorithm C (AlgC) [23]. Three noise levels were studied: 5%, 10%, and 15%. Three commercial raster-to-vector conversion methods were tested: VP, S2CAD, and Vec. The experiment used eleven images from GREC'03 and GREC'07. VRI was the performance evaluation measure. Although this paper studied many factors, the statistical method that analyzed the VRI scores could answer only limited questions about which factors significantly affect line detection. The interactions between the studied factors were also not shown.
Al-khaffaf et al. [24] studied the performance of two research prototypes (VrLiu [25] and Qgar-Lamiroy [26]) and three commercial software (VP, S2CAD, and Vec). The work also included studying the performance of many versions of one commercial software. The study created new test images, and VRI was the performance index. No artificial noise was used.
From the review above, we conclude that empirical performance evaluation tests are already becoming a trend within the graphics recognition community. These studies have usually been performed during contests attached to the International Workshop on Graphics Recognition (GREC). One advantage of such contests is the adoption of the contest data and evaluation methods by other researchers in their work and publications [1], [2].
With all the advantages brought by the previous studies, there are still some issues and shortcomings that need tackling by researchers, such as the following: 1. There is insufficient research on the effect of noise on the raster-to-vector conversion process; existing studies either use an unspecified small amount of noise (such as the study of Wenyin [12]) or perform the test on clean images (such as the study of Shafait et al. [15]).
2. The interaction between noise level and noise removal is not studied in the context of raster-to-vector conversion. The interaction could be considered obvious when we look at it as an image processing problem, in which more noise in the image makes it more difficult to remove the noise. However, the interaction between these factors in the context of document image analysis and recognition is not trivial, and it still needs to be studied. Here, we can cite the work of Wenyin [12], in which the author sheds some light on the effect of salt-and-pepper noise on the quality of line detection.
3. In the case of using noise, it has not been shown which method (if any) removes the noise before performing vectorization. Hence, the effect of the interaction between noise-removal methods and raster-to-vector methods is not clear. Here, we recall the question of Karl Tombre [2]: "Do we actually test the quality of the de-noising method or the recognition capabilities of the method?" The interaction between the noise-removal method and the vectorization algorithm is not obvious. As is demonstrated in this paper, vectorization methods have varying sensitivity to noise-removal methods: a vectorization method would perform line detection better if the noise were removed using one noise-removal method rather than another (Sect. 4.2.1 of this paper).
4. When noise is used, it is not stated how much noise is added to the image. Also, the effect of using many levels of noise is not studied.
5. The interaction between many factors (noise-removal method and vectorization method, for example) that may affect line detection has not been studied yet.
6. The effect of the resolution factor also has not been studied.
This paper tackles the first five issues stated above. Many types of noise may appear in scanned images; however, in this study, the focus was on salt-and-pepper noise only, and other types of noise were not covered. The resolution factor was not studied because of the lack of availability of images with different resolutions within the graphics recognition community: the available image datasets usually have one resolution. Most of the available datasets were created during the Arc Segmentation Contests attached to the GREC workshop, namely the GREC'03, GREC'05, GREC'07, and GREC'09 editions. The creation of a new dataset is, for the time being, time consuming and labor intensive (it is necessary to search for many paper drawings, scan the drawings at different resolutions, and perform ground truthing for all of them). However, it may be a good subject for another study.
The rest of this paper is organized as follows. Section 2 shows the limitation of a related study and how it is avoided in the proposed methodology. Section 3 presents the steps of the proposed methodology. The steps include selecting/corrupting the images dataset, removing the noise, vectorizing the images, measuring the performance index, and statistically analyzing the performance index scores. The details of the statistical test are presented in Sect. 4. This includes the preliminary statistical test on data, analysis of each variable, and the analysis of the interaction between variables. The conclusions of this work are presented in Sect. 5.
Background and the Proposed Method
The suitability of a vectorization method in terms of line detection can be judged by its ability to recognize line features correctly and thoroughly. Line features include end points, line width, line style, line shape, and center (for arcs). Because line detection usually follows other image analysis stages, its action upon the image is affected by prior stages that change the image content. Among the many factors affecting the quality of the detected vectors are the amount of noise in the original raster image, the noise-removal method, and the vectorization algorithm that detects lines and their features. A recent study [12] gives some insight into how noise affects the resulting vector data, but only one unspecified level of salt-and-pepper noise was added to the images. The authors also did not study the effect of different noise-removal methods on the quality of line detection. The interactions between different factors (vectorization and noise removal, for example) were also not studied. The total number of VRI values analyzed was only 54 (#images * 3 * #vectorization methods = 6 * 3 * 3), which is not enough for a more stringent analysis of the results. Because the study used only one noise level, the effect of noise can only be sensed by looking at the VRI values and/or the arithmetic mean. Another limitation of this methodology is that it prevents the researcher from performing a rigorous study on the effect of each factor and the effects of interaction between the different factors.
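By contrast, a factorial design with several noise levels permits exactly this kind of rigorous analysis. The sketch below is a minimal illustration in Python with statsmodels of a repeated-measures ANOVA on VRI scores, with the image as the subject and the three factors as within-subject variables; the CSV file and its column names are assumptions about a long-format results table, not an actual data file from this study.

```python
# Repeated-measures ANOVA over a hypothetical long-format VRI table.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# One VRI row per image x noise x cleaner x vectorizer cell (fully balanced).
df = pd.read_csv("vri_scores.csv")   # columns: image, noise, cleaner, vectorizer, VRI
res = AnovaRM(df, depvar="VRI", subject="image",
              within=["noise", "cleaner", "vectorizer"]).fit()
print(res)                           # F-tests for main effects and interactions
```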
This paper proposes a new methodology that studies many factors that may affect the quality of line detection. As in many previous studies, VRI is the performance index. The noise factor, used in other studies, is also inspected here; however, our study uses it in a more systematic way, and we also study many levels of noise. The third factor is noise removal (newly studied). As opposed to other methodologies that can only evaluate raster-to-vector methods, the proposed methodology can be used to study the interactions between many different factors affecting the quality of line detection, and it enables a proper analysis based on a statistical test. Other methodologies focus on studying only one factor (vectorization), and their treatment of other factors is limited. Because the proposed methodology relies on statistical analysis, it can detect significant improvements in performance for any studied algorithm in the context of line detection. Other methodologies can only show the performance of the tested vectorization methods, which may not be enough when more stringent criteria are required to select a method for specific industrial problems. Figure 1 shows the details of the proposed methodology. The next section presents the details of each step.
Selecting/Corrupting the Images
Real scanned images of mechanical engineering drawings were selected. This type of image contains straight lines as well as circular arcs. The images from the GREC'03 and GREC'07 contests [11], [15] were used because ground truth files were readily available for the performance evaluation task.
Uniform salt-and-pepper noise was added to each image in an independent manner (i.e., the original images were always used to generate the noisy images). In this way, bias was avoided, and the data are suitable for the statistical analysis. We used the same amounts of noise as in our previous study [16]. Because the image data in this area of research (graphics recognition) consist mostly of binary (black-and-white) images, the highest noise level used (15%) is not small. High noise values distort the fine lines in such drawings, which may render the vectorized image useless and the process of vectorization meaningless. In this research, the focus is not on the noise-removal factor alone but on many factors that may affect line detection; hence, it is reasonable to use three noise levels taken arbitrarily in the range from 5% to 15%. This ensures that the noisy images are still usable (it is practical to vectorize them) and that the interaction between the factors can be studied. The total number of images created by corrupting the images with three levels of noise was #images * #noise levels = 11 * 3 = 33.
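To make the corruption step concrete, the sketch below shows one way to implement it in Python. The paper does not spell out the noise model beyond "uniform salt-and-pepper", so the assumption here is that a noise level of p flips a randomly chosen fraction p of the pixels of a binary image; the dataset loading is left out.

```python
import numpy as np

def add_salt_and_pepper(image, level, rng=None):
    """Flip a fraction `level` of the pixels of a binary (0/1) image.

    On black-and-white drawings, salt-and-pepper noise amounts to
    inverting random pixels: black becomes white ("salt") and white
    becomes black ("pepper").
    """
    rng = np.random.default_rng() if rng is None else rng
    noisy = image.copy()
    flip = rng.random(image.shape) < level  # ~level of all pixels selected
    noisy[flip] = 1 - noisy[flip]           # invert the selected pixels
    return noisy

# Each noisy image is generated from the pristine original, never from
# another noisy copy, keeping the three noise conditions independent:
# originals = [...]  # the 11 GREC drawings with ground truth (assumed loaded)
# noisy = {p: [add_salt_and_pepper(img, p) for img in originals]
#          for p in (0.05, 0.10, 0.15)}    # 11 * 3 = 33 noisy images
```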
Removing the Noise
All noisy images were then cleaned by six salt-and-pepper noise-removal methods, the same methods as in our previous study [16]. The parameters for the noise-removal methods were set as follows: the window size was set to 3 * 3 pixels for all algorithms. The parameters for AD were set according to the values suggested by Simard and Malvar [20] to perform strong noise removal. For AlgA, AlgB, and AlgC, LT (Length Threshold) was set to 4, 5, and 6 for 5%, 10%, and 15% noise, respectively. The total number of images created by cleaning all the noisy images using the six noise-removal algorithms was #noisy images * #noise-removal methods = 33 * 6 = 198.
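The six benchmarked algorithms (kFill, EkFill, AD, AlgA/B/C) are only named here, so the sketch below substitutes a plain 3 * 3 median filter as a stand-in to illustrate the shape of the cleaning step — one cleaned image out per noisy image in — not as an implementation of any of the six methods.

```python
from scipy.ndimage import median_filter

def clean_3x3(noisy):
    """Stand-in cleaning step: a 3*3 median filter removes isolated
    salt-and-pepper pixels from a binary image. The real experiment
    used kFill, EkFill, AD and AlgA/B/C with the parameters quoted
    above; this function only mirrors their interface."""
    return median_filter(noisy, size=3)

# Applying every cleaning method to every noisy image yields the
# 33 * 6 = 198 cleaned images that enter the vectorization stage:
# cleaned = {(p, name): [method(img) for img in imgs]
#            for p, imgs in noisy.items()
#            for name, method in methods.items()}  # `methods`: name -> fn
```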
Vectorizing the Images
The cleaned images were then vectorized by commercial software. The three vectorization software applications used in our previous study [16] were also used here. The applications vectorized the cleaned images and saved the detected vectors as DXF files. These files were then converted to VEC files, which have a simple format and are easier to handle with the performance evaluation tool. Our interest was in the automatic conversion process; thus, most software features that could be used manually to enhance the detection were not used.
Some parameters needed by the three vectorization software applications were pre-set prior to applying vectorization, to ensure consistency between the different software. The measuring units were unified, and the drawing type was set to Mechanical Engineering. Other parameters and thresholds were left unchanged.
The total number of vector images created by vectorizing the cleaned images using the three vectorization software was #cleaned images * #vectorization software = 198 * 3 = 594.
Measuring the Performance Index Scores
The VRI of the detected vectors was the criterion used to judge the quality of vector detection. The performance evaluation method† compared the detected vector file with the ground truth file and output the VRI score. VRI is an objective performance evaluation index for line detection algorithms (vectorization software in our case) that works at the vector level. The VRI index is a combination of two metrics: the vector detection rate D_v and the vector false alarm rate F_v. The VRI is calculated as in Eq. (1) below:

VRI = sqrt(D_v * (1 − F_v)) (1)
The vector detection rate (D_v) is defined by two terms: line basic quality and fragmentation quality. Line basic quality represents the accuracy of the detection of line attributes, which include end points, width, line style, line shape, and center (for arcs), compared with the attributes of the ground truth data. Fragmentation quality measures the fragmentation of the detected line compared with the ground truth line. The vector false alarm rate (F_v) measures the probability of a detected line being a false alarm. The VRI value is in the range of 0 to 1, with higher values indicating better vector recovery. VRI is a well-accepted criterion for empirical performance evaluation of raster-to-vector methods and has been used in several editions of the Arc Segmentation Contests held in conjunction with GREC. The total number of VRI scores obtained by running the performance evaluation tool is equal to the total number of vector images generated by the vectorization.
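Since the body of Eq. (1) did not survive in this copy, the helper below encodes the combination of D_v and F_v as the geometric mean used in the GREC arc-segmentation contest literature; treat the exact form as an assumption recovered from that literature rather than from this paper.

```python
import math

def vri(d_v: float, f_v: float) -> float:
    """Vector Recovery Index from the detection rate D_v and the
    false-alarm rate F_v, both in [0, 1]. Assumed form:
    VRI = sqrt(D_v * (1 - F_v)); higher means better vector recovery."""
    return math.sqrt(d_v * (1.0 - f_v))

print(vri(1.0, 0.0))    # perfect detection -> 1.0
print(vri(0.81, 0.19))  # -> 0.81
```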
Statistically Analyzing the Performance Index Scores
Our experiment included three factors, with several levels for each factor. Hence, hundreds of VRI scores generated by applying the three factors to the images needed analysis. Simple statistics (such as the mathematical mean) are not sufficient to show the significance of a specific factor, nor can they directly explain the interactions between the different factors. A repeated measure ANOVA is a suitable statistical analysis method for our experiment, considering that many factors were involved in the study and each subject (image) contributed more than one score (measurement). ANOVA is a well-known statistical method, and its use helps extract more information on the interaction between two or more studied factors; that is the reason why ANOVA is used in this study. The methodology coupled with ANOVA could be used to find the best match of off-the-shelf noise-removal methods (for example) with raster-to-vector conversion methods. The methodology, however, is not limited to the three factors studied in this paper; it is also applicable to other factors stemming from research preferences.
Experimental Results and Discussions
The work flow of the experiment can be summarized as follows. The eleven raster images were distorted with the three noise levels and then cleaned by the six noise-removal methods. The cleaned images were then vectorized by the three commercial raster-to-vector software applications. One VRI value (score) was computed from each detected vector file and its corresponding ground truth vector file. A total of 594 separate VRI values should have been generated by the performance evaluation stage shown in Fig. 1, but some values could not be generated, reducing the number of VRI values to 588. The VRI values were then analyzed by a repeated measure ANOVA. The values that could not be generated were related to AD and AlgC when the noise level was set to 15%; the number of connected components generated turned out to be larger than the space allocated in the implementation. Because some data are missing, we had an unequal-n design and therefore report the estimated marginal means (EMM) rather than the observed means, avoiding the bias incurred in calculating the mathematical mean when some data are missing.
For clarity purposes, the three variables (Vectorization, Cleaning, Noise) are shown in italics when referenced in the text.
Because we have many factors to study and each image was used in measuring more than one VRI score, a repeated measure ANOVA was used to analyze the VRI scores.
There are three requirements for using this statistical test [27]: (i) order effects should be avoided — this condition was guaranteed because we used a separate copy of the image before applying any treatment (one level of a factor); (ii) the data in each cell should be normally distributed (some departure from normality is accepted); and (iii) the sphericity condition should not be violated.
The second and third requirements above are explained below: 1. Before proceeding with a repeated measure ANOVA, the data need to be validated for the analysis. The Shapiro-Wilk test checks the normality of the data in each cell of the design. Equations (2) and (3) show the null hypothesis and the alternative hypothesis for the Shapiro-Wilk test, respectively:

H0: the data in the cell come from a normally distributed population (2)
H1: the data in the cell do not come from a normally distributed population (3)

If the significance (ρ) of the Shapiro-Wilk test is less than or equal to .05, then the null hypothesis is rejected and the alternative hypothesis is accepted. The normality test shows that 50 out of 54 cells are normally distributed (i.e., we fail to reject the null hypothesis). At this stage, the data were ready to be analyzed. GLM repeated measure† was used to analyze the resulting VRI values. We had three factors: noise level, noise-removal method, and vectorization. Hence, three independent variables (IV) were created: Noise [three levels: 5%, 10%, and 15%], Cleaning [six methods: kFill, EkFill, AD, AlgA, AlgB, and AlgC], and Vectorization [three software: VP, Vec, and S2CAD]. One dependent variable (DV) was created (VRI).
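The paper ran this analysis in SPSS (GLM repeated measures); an equivalent sketch in Python is shown below. The long-format table and its column names (image, Noise, Cleaning, Vectorization, VRI) are hypothetical, and statsmodels' AnovaRM requires a balanced design, so the few missing VRI values would first have to be imputed or the affected images dropped.

```python
import pandas as pd
from scipy.stats import shapiro
from statsmodels.stats.anova import AnovaRM

# df: one row per VRI score, with columns
#   image (subject id), Noise (5/10/15), Cleaning (six methods),
#   Vectorization (VP/Vec/S2CAD), VRI (dependent variable).
df = pd.read_csv("vri_scores.csv")  # hypothetical file

# Requirement (ii): Shapiro-Wilk normality check in each design cell.
for key, cell in df.groupby(["Noise", "Cleaning", "Vectorization"]):
    stat, p = shapiro(cell["VRI"])
    if p <= 0.05:  # reject H0 of normality for this cell
        print("non-normal cell:", key, f"p={p:.3f}")

# Three within-subject factors, one dependent variable:
result = AnovaRM(df, depvar="VRI", subject="image",
                 within=["Noise", "Cleaning", "Vectorization"]).fit()
print(result)
```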
The analysis of the VRI scores using a repeated measure ANOVA is explained in the following sections. All studied factors and two-way interactions were shown to be significant (Sig. ≤ .05). Hence, the Within-Subjects Contrasts (Table 1) can be further interpreted.
† SPSS menu item: Analyze -> General Linear Model -> Repeated Measures
Single-Factor Effects
The next three sections present the effect of the three factors independently. For each factor, the effect of each level on the VRI index is shown.
Noise-Removal Effect
The Cleaning factor was significant, F(5, 45) = 54.816, ρ < .001. The differences in means (Table 3) between AD and the other five algorithms were significant†. This difference can be seen in Fig. 3, which indicates a low performance for AD compared to the other noise-removal methods. The mean differences between each pair of the other five algorithms were not significant, indicating that their performances were similar.
Noise Level Effect
The Noise factor was significant, F(2, 18) = 29.077, ρ < .001. The difference in means (Table 1) between 5% and 10% noise was not significant, F(1, 9) = 1.549, ρ = .245. However, the difference between 10% and 15% noise was significant, F(1, 9) = 74.618, ρ < .001. This difference can be observed in Fig. 4. The pair-wise comparisons in Table 4 show that the difference between the EMM of VRI for 5% and 10% noise was not significant, whereas there were significant differences between 10% and 15% noise and between 5% and 15% noise. This indicates that the VRI performance index did not drop much when the amount of noise in the image was increased from 5% to 10%, whereas the performance dropped significantly when the noise level was increased from 10% to 15%.
Multi-Factor Interaction Effects
As opposed to the previous three sections, in which each factor was studied independently, the next three sections present the two-way interactions between the three factors. For each pair of factors, we studied the interaction effect of their different levels on the VRI index.
Two-Way Interaction: Vectorization * Cleaning Effect
The effect of the combination of the two factors Vectorization*Cleaning was significant. Considering the first two levels of Vectorization (Vec vs. VP), the AlgB vs. AD contrast of Cleaning was significant, F(1, 9) = 10.444, ρ < .05. This indicates that the mean difference in VRI between AlgB and AD was not the same across the two levels of Vectorization (Vec and VP). The EMM of AD within Vec was considerably lower than that of the first four levels (Fig. 5), indicating low performance of AD within the Vec software. The performance of AlgC was also low compared to the first four methods, which indicates that the performance of Vec dropped when used with the AD and AlgC noise-removal methods.
Considering the second two levels of Vectorization (VP vs. S2CAD), the five contrasts of Cleaning were significant, which indicates that the mean differences in VRI between kFill vs. AlgA, AlgA vs. EkFill, EkFill vs. AlgB, AlgB vs. AD, and AD vs. AlgC were not the same across the two levels of Vectorization (VP and S2CAD). S2CAD was more sensitive to the cleaning methods, which shows as a sharp oscillation of performance (Fig. 5). The results of this section can be summarized as follows: S2CAD shows poor performance when used with the AD noise-removal method; Vec shows significantly poor performance when working with AD and AlgC; and VP shows little sensitivity when working with the different noise-removal algorithms. VP is thus considered stable and may be used with any of the six noise-removal methods.
Two-Way Interaction: Vectorization * Noise Effect
The effect of the combination of the two factors Vectorization*Noise was significant, F(3.937, 35.434) = 35.155, ρ < .001. To know at which levels the difference of the means was significant, we refer to Table 1. The Vectorization*Noise contrast tested the hypothesis that the mean of the specified Noise contrast is the same across the three Vectorization levels. All four contrasts were significant, indicating that the mean difference of VRI values was not the same in the three software applications across the three noise levels. Intuitively, increasing the noise level caused a drop in VRI values for all tested vectorization software in general. This confirms the results of Sect. 4.1.1, which demonstrated a large difference among the three software applications in their effect on VRI scores. The performance of S2CAD dropped sharply when the noise level was increased (Fig. 6). Vec had a moderate drop in performance when the noise level was increased. VP had the best resistance to noise; its performance dropped only slightly when the noise level was increased. VP's performance at the 15% noise level was better than that of Vec and S2CAD at all three noise levels. However, VP showed unexpected behavior at 5% noise, where it scored lower than in the other two cases (10% and 15%). This unusual case is difficult to interpret because we are dealing with the software as a black box whose content (such as the raster-to-vector algorithm) is not usually known. Similar unusual behavior has been reported in past studies, such as that performed by Wenyin [12], in which some noisy images obtained higher VRI than their original clean images.
Two-Way Interaction: Cleaning * Noise Effect
The effect of the combination of the two factors Cleaning*Noise was significant, F(3.471, 31.243) = 5.108, ρ < .01. To know at which levels the differences of the means were significant, we refer to Table 1.
The Cleaning*Noise contrast tested the hypothesis that the mean of the specified Noise contrast is the same across the six levels of Cleaning. Only three contrasts were significant, indicating that the VRI values for these contrasts were not the same across the two noise levels (10% and 15%) for the four methods of Cleaning involved (EkFill, AlgB, AD, and AlgC).
The sixth contrast (10% vs. 15%) was significant, F(1, 9) = 6.044, ρ < .05, which indicates that the mean difference in VRI between the 10% and 15% noise levels was not the same across the two noise-removal algorithms (EkFill and AlgB). The enhanced algorithm performed better at the higher noise level than its original counterpart (Fig. 7). EkFill suffered a mean difference of (.539 − .501 = .038) when the noise level was increased from 10% to 15%, whereas our algorithm (AlgB) suffered a mean difference of only (.531 − .527 = .004) under the same condition.
The eighth contrast (10% vs. 15% noise) was significant, F(1, 9) = 68.143, ρ < .001, which indicates that the mean difference in VRI between the 10% and 15% noise levels was not the same across the two levels of noise-removal algorithms (AlgB and AD). AlgB performed better with a higher level of noise than AD did (Fig. 7). AlgB suffered a mean difference of only (.531 − .527 = .004) when the noise level was increased from 10% to 15%, whereas AD suffered a mean difference of (.487 − .402 = .085) under the same condition.
The tenth contrast (10% vs. 15%) was significant, F(1, 9) = 6.546, ρ < .05, which indicates that the mean difference in VRI between the 10% and 15% noise levels was not the same across the two levels of noise-removal algorithms (AD and AlgC). AlgC performed better with a higher level of noise than AD did (Fig. 7). AlgC suffered a mean difference of only (.535 − .489 = .046) when noise level was increased from 10% to 15%, whereas AD suffered a mean difference of (.487 − .402 = .085) under the same condition.
Conclusions
Many of the reviewed studies treat the raster-to-vector conversion process as the sole major factor when performing empirical performance evaluation, or they do not study other factors (such as noise removal and noise level) rigorously. In this paper, we have proposed a methodology to study many factors that may play a role in affecting line detection in the raster-to-vector conversion process. The proposed methodology, which can study two or more factors simultaneously, is coupled with a robust statistical analysis method. It is not the aim of the proposed methodology to directly improve existing raster-to-vector method(s). However, the methodology provides a means to utilize the large number of existing methods, whether for noise removal or raster-to-vector conversion. For example, it could be used to find algorithms from noise removal and from raster-to-vector conversion that fit each other in a way that allows the production of high-quality vector data. An experiment was performed to study three independent factors: noise-removal algorithm, noise level, and vectorization software. The interpretation of the output of the statistical analysis shows that the three studied factors affected line detection and that there are interactions between them.
In the vectorization factor, the three studied methods showed significant differences among themselves. The best performer was VP, followed by Vec; S2CAD was the lowest performer. Concerning the noise-removal factor, AD showed a significant difference in performance compared to the other five methods, which performed comparably among themselves; AD was the lowest performer. Concerning the noise level factor, 15% noise showed a significant difference compared to the lower noise levels (5% and 10%); the VRI index dropped significantly at 15%.
For the two-way interactions, the vectorization-cleaning interaction was significant, which shows that the quality of line detection for raster-to-vector methods is related to the noise-removal method used in image enhancement. VP was the most stable and showed little sensitivity when used with any of the six noise-removal methods; it performed best when used with EkFill. Vec had low performance when used with AD and AlgC but good performance when used with EkFill. S2CAD was more sensitive to the noise-removal methods and showed poor performance when used with AD but good performance when used with AlgB/AlgC.
The vectorization and noise level interaction was also significant. The VP method showed stable performance, and its VRI index had a moderate drop when noise was increased. VP performance at 15% noise was better than the performance of Vec and S2CAD at the three noise levels. The Vec and S2CAD methods had sharper drops in performance when the noise was increased.
The significance of the interaction between noise level and noise-removal method is intuitive. All algorithms showed no significant drop in performance when noise level was increased from 5% to 10%. However, increasing the noise from 10% to 15% affected some algorithms as follows: AlgB showed significantly better performance compared to its original counterpart (EkFill). AlgB also performed significantly better than AD. AlgC performed significantly better than its original counterpart (AD).
The proposed methodology is not limited to studying the factors affecting raster-to-vector conversion. It can also be used in other areas of computer vision, such as OCR. The stages in Fig. 1 could be replaced by other stages of the computer vision process, such as OCR stages. The key point in using the methodology is to run the experiment systematically and in alignment with the requirements of the statistical analysis method.
The future direction of this research includes studying the scanning resolution factor.
"year": 2011,
"sha1": "56c08c328e74146e94cdc48d12245685f60993ba",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/transinf/E94.D/6/E94.D_6_1278/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "4a37e4d0b7b8ad53b74e3df82c5174904183691f",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Dual centrifugation as a novel and efficient method for the preparation of lipodisks
Polyethylene glycol (PEG)-stabilized lipodisks have emerged as innovative, promising nanocarriers for several classes of drugs. Prior research underscores the important role of lipid composition and preparation method in determining lipodisk size, uniformity, and drug loading capacity. In this study, we investigate dual centrifugation (DC) as a novel technique for the production of PEG-stabilized lipodisks. Moreover, we explore the potential use of DC for the encapsulation of two model drugs, curcumin and doxorubicin, within the disks. Our results show that, by a considerate choice of experimental conditions, DC can be used as a fast and straightforward means to produce small and homogeneous lipodisks with a hydrodynamic diameter of 20–30 nm. Noteworthy, the technique works well for the production of both cholesterol-free and cholesterol-containing disks and does not require pre-mixing of the lipids in organic solvent. Furthermore, our investigations confirm the efficacy of DC in formulating curcumin and doxorubicin within these lipodisks. For doxorubicin, careful control and optimization of the experimental conditions resulted in formulations displaying an encouraging encapsulation efficiency of 84 % and a favourable drug-to-lipid ratio of 0.13 in the disks.
Introduction
Due to their biomimetic properties and non-toxic, biocompatible nature, nanoparticles built from lipid bilayers have found frequent use as both model membranes and vehicles for drug delivery. Uni- and multilamellar liposomes, i.e., spherical particles consisting of an aqueous core surrounded by one or multiple self-closed lipid bilayers, constitute the to date most widely used class of lipid-based nanoparticles. Nevertheless, depending on the application, alternative types of nanoparticles may offer better opportunities. As one example, the closed, hollow structure of liposomes can prove disadvantageous in experiments probing analyte-membrane interactions, since a large proportion of the surface area is initially hidden from interaction with the surrounding aqueous environment (Johansson et al., 2005, 2007). Further, the spherical shape, rather large size, and comparably high rigidity may render liposomes suboptimal as nanocarriers for certain drugs, such as tumour-targeted anticancer agents. In line with this, accumulating evidence suggests that non-spherical shape, deformable morphology, and small size are all features that enhance the ability of nanoparticles to target and penetrate deep into tumour tissue (Ding et al., 2019; Niora et al., 2020; Zhu et al., 2019). Hence, unless the anticancer drugs belong to the category which requires or benefits from encapsulation in the aqueous core of liposomes, there are good reasons to consider alternative nanocarriers.
Polyethylene glycol (PEG)-stabilized lipodisks constitute a class of nanoparticles that are closely related to liposomes. Thus, they consist of self-assembled lipid bilayers and are built from the same lipid components that are typically used to construct PEGylated liposomes. In lipodisks the bilayer is not self-closed, however, but stabilized into a discoidal shape by the edge-active PEG-lipids (Fig. 1).
Formation of the flat, circular lipodisks is observed when PEG-lipids are mixed with lipids in the gel or liquid ordered phase state and the concentration of the former exceeds the bilayer saturation limit (Edwards et al., 1997; Johnsson and Edwards, 2003; Sandström et al., 2007; Viitala et al., 2019). The disks are surrounded by a curved rim composed predominantly of PEG-lipids (Lundquist et al., 2008), and their size can be changed and adapted by varying the PEG-lipid concentration (Johansson et al., 2005; Johnsson and Edwards, 2003) or PEG molecular weight (Morin Zetterberg et al., 2016). Due to their high PEG-lipid content, lipodisks display excellent structural stability and typically remain unchanged upon storage for several months at 4 °C (Johansson et al., 2005; Morin Zetterberg et al., 2016). Numerous studies have reported on the applicability of PEG-stabilized lipodisks as versatile model membranes for, e.g., drug partition studies and investigations involving amphiphilic peptides and membrane proteins (see Morin Zetterberg et al. (2016) and references therein). During the last decade, lipodisks have received increasing attention also as vehicles for drug delivery. Thus, lipodisks have been employed for single and dual delivery of several different classes of anticancer drugs, including conventional chemotherapeutic drugs and anticancer peptides (Ahlgren et al., 2017a, 2017b; Feng et al., 2019; Gao et al., 2016; Lundsten et al., 2020; H. Wang et al., 2019a; Wang et al., 2018; L. Wang et al., 2019b; Zhang et al., 2014, 2018). Noteworthy, two recently published studies involving head-to-head comparisons of lipodisks and PEGylated liposomes conclude that lipodisks display greater tumour accumulation and considerably more efficient tumour penetration (Dane et al., 2022; Huang et al., 2022). Data reported by Dane et al. moreover verify a significantly extended circulation half-life for lipodisks, as compared to liposomes (Dane et al., 2022). A number of recent studies furthermore point towards lipodisks provoking a weaker anti-PEG IgM response and being less susceptible to accelerated blood clearance (ABC) phenomena than PEGylated liposomes (Grad et al., 2020; Wang et al., 2022).
A variety of different preparation protocols, including methods based on, e.g., simple hydration of thin lipid films (Johansson et al., 2005; Sandström et al., 2008), probe sonication (Ahlgren et al., 2017a; Johansson et al., 2007; Sandström et al., 2008; Wang et al., 2018), detergent depletion (Ahlgren et al., 2017a, 2017b; Lundsten et al., 2020; Sandström et al., 2008), ethanol injection (Dane et al., 2022), and microfluidic technology (Levy et al., 2021), have been used to produce lipodisks. The selection of preparation protocol, as well as the choice of lipid composition, has been shown to affect the size and size heterogeneity of the lipodisks. Protocols built on detergent depletion using size exclusion chromatography tend, for instance, to generate particularly small disks (Ahlgren et al., 2017a; Sandström et al., 2008), and the use of cholesterol-supplemented lipid mixtures typically results in disks that are comparatively large and rather heterogeneous in size (Johansson et al., 2005, 2007; Sandström et al., 2008). Apart from preferences regarding nanoparticle size or homogeneity, other circumstances may be decisive for the choice of preparation method or lipid composition. Thus, protocols that in some step include dissolution of the lipid components in organic solvents may not be compatible with the incorporation of proteins, or other sensitive molecules, in the disks. Also, the intended application may require a lipodisk concentration that is higher than what is possible to achieve with conventional preparation protocols.
Dual centrifugation (DC) has emerged as a novel, fast, aseptic and convenient technique for the preparation of liposomes (Koehler et al., 2021; J. K. Koehler et al., 2023b; Massing et al., 2008). When using this method, a highly concentrated aqueous dispersion of the lipids is homogenized to produce a vesicular phospholipid gel (VPG), which can then either be stored or converted to a conventional liposome dispersion by simple dilution. The VPG production is performed "in-vial" using a specially equipped centrifuge that allows not only high-speed centrifugation but also forces the sample vial to turn around a second rotation axis (Massing et al., 2017). The resulting fast movement of the viscous sample back and forth between the bottom and the top of the vial, together with the presence of added ceramic beads, ensures efficient homogenization of the dispersion. Noteworthy, although the sample is exposed to a large number of homogenization events, each of these involves only moderately strong shear forces. Hence, DC homogenization is gentler than high-pressure homogenization and thus less likely to induce, e.g., lipid degradation (Massing et al., 2008). Further, in contrast to most other liposome-production methods, liposome preparation by DC does not require premixing of the lipid components in an organic solvent. Thus, even when working with complex lipid compositions including cholesterol, the VPGs can be produced from dry lipids after hydration with a small volume of aqueous buffer (J. Koehler et al., 2023a).
Previous studies have confirmed that PEGylated liposomes can be produced by DC (J. Koehler et al., 2023a; Koehler et al., 2021). Given the compositional similarities between PEGylated liposomes and PEG-stabilized lipodisks, it is reasonable to assume that DC could be used as a fast and convenient means to produce also the latter. There are some noteworthy differences, however, between the two types of nanoparticles. Compared to PEGylated liposomes, lipodisks contain a considerably higher proportion of PEG-lipids with high water-binding capacity. At the same time, lipodisks are smaller and have, in contrast to liposomes, no water-filled core. These dissimilarities can be foreseen to affect the lipid-to-water ratios and forces needed to successfully produce the nanocarriers by DC homogenization. Hence, it is not obvious that the experimental conditions found optimal for the production of liposomes (J. Koehler et al., 2023a) apply to lipodisks.
In the current study, we have explored the potential of DC as a novel production method for lipodisks. To this end, we have carried out systematic studies to investigate how different parameters, such as lipid composition, lipid concentration, and the duration of the homogenization process, affect the particle size and morphology in the preparations. Further, using curcumin and doxorubicin as model compounds, we have made initial investigations regarding the possibilities of exploiting DC homogenization as an efficient method for the formulation of anticancer drugs in lipodisks.
Preparation of lipodisks by DC
The lipodisk formulations were prepared using a dual centrifuge (DC, ZentriMix 380R, Andreas Hettich GmbH & Co KG, Tuttlingen, Germany), as described previously (Koehler et al., 2021). Depending on the formulation, samples were prepared either from a dry lipid mixture or using the in-vial lipid film method. The two formulations prepared and analysed were HEPC:Cholesterol:DSPE-PEG 2000 with a molar ratio of 35:45:20 and HEPC:DSPE-PEG 2000 with a molar ratio of 80:20. The formulations are hereafter denoted HEPC:Chol:DSPE-PEG 2000 and HEPC:DSPE-PEG 2000.
Preparation of lipodisks using dry lipid mixture

Briefly, accurately weighed amounts of lipids, giving a final lipid concentration of 10-60 % w/v in the formulations, were placed in 2 mL conical screw cap vials (Sarstedt AG & Co. KG, Nümbrecht, Germany). 600 mg zirconium oxide beads with a diameter of 1.5 mm (SiLibeads® Type ZY-P Pharma, Sigmund Lindner GmbH, Germany) were added to each sample, and the calculated amount of HBS buffer (20 mM HEPES, 150 mM NaCl; pH 7.4) was then added to the vials. Depending on the lipid content used, a phospholipid gel with a final batch size of 50 mg was prepared by mixing 5-30 mg of lipid mixture and 45-20 µL of aqueous buffer (i.e., HBS). Because the densities of the buffer and the phospholipids are close to 1 g⋅mL−1, we calculated that a dispersion of 1 mg lipid per 100 µL corresponds to 1 % lipid concentration (J. Koehler et al., 2023a). The phospholipid gels were then produced by homogenizing the lipids at 2350 rpm for 5-60 min.
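The relation between lipid percentage, lipid mass, and buffer volume quoted above can be captured in a small helper; a minimal sketch, assuming as in the text that 1 mg of lipid per 100 µL of dispersion corresponds to 1 % and that the buffer density is ~1 g/mL:

```python
def gel_recipe(lipid_pct: float, batch_mg: float = 50.0):
    """Per-vial amounts for a DC phospholipid gel of the given lipid %.

    Assumes densities close to 1 g/mL, so 1 mg of buffer occupies ~1 uL
    and the batch mass splits directly into lipid mass + buffer volume.
    """
    lipid_mg = batch_mg * lipid_pct / 100.0
    buffer_ul = batch_mg - lipid_mg
    return lipid_mg, buffer_ul

for pct in (10, 20, 40, 50, 60):
    lipid_mg, buffer_ul = gel_recipe(pct)
    print(f"{pct:>2} % lipid: {lipid_mg:.0f} mg lipid + {buffer_ul:.0f} uL HBS")
# 10 % -> 5 mg + 45 uL ... 60 % -> 30 mg + 20 uL, matching the ranges above.
```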
In the case of cholesterol-containing lipid mixtures, the centrifuge was pre-cooled to 4 °C before sample preparation. When using mixtures without cholesterol, precautions were taken to avoid exposing the sample to temperatures below the gel-to-liquid crystalline transition temperature (Tm); thus, the vials and DC plates were pre-heated. The formed lipid gels were diluted with HBS buffer by low-speed DC or by gently vortexing the samples. In order to avoid foaming and potential loss of PEG-lipid, care was taken to completely fill the vials, i.e., to add buffer up to 2 mL, during the dilution step.
Preparation of lipodisks using the in-vial lipid film method
The lipodisks were also prepared by a protocol involving pre-mixing of the lipid components in an organic solvent. Briefly, the different lipids were carefully weighed in 2 mL conical screw cap vials (Sarstedt AG & Co. KG, Nümbrecht, Germany). The lipids were then dissolved in an organic solvent mixture (chloroform:ethanol 2:1; v/v) and thoroughly vortexed. The organic solvent was thereafter evaporated under a gentle stream of N2 gas to obtain a dry lipid film. The samples were further dried overnight in a vacuum oven (3608-1ce, Lab-Line Instruments Inc, Melrose Park, Illinois, USA) to remove any traces of organic solvent. The lipodisks were then prepared using DC as described previously.
Formulation of curcumin and doxorubicin in lipodisks
Stock solutions of curcumin and doxorubicin were prepared in ethanol:chloroform (4:1 mixture) and methanol, respectively. The required volume of the stock solution was transferred to the DC vials (Sarstedt AG & Co. KG, Nümbrecht, Germany). The organic solvents were then evaporated, and the lipodisks were prepared using the dry lipid mixture or in-vial lipid film method as described earlier.
Preparation of drug-loaded lipodisks by probe sonication
Lipids in the desired molar ratios were weighed directly into clean glass vials and then dissolved in chloroform. Based on the required drug-to-lipid ratio, calculated quantities of the stock solutions of doxorubicin or curcumin were pipetted into the lipid mixture. The organic solvents were evaporated, first under a gentle stream of N2 gas and then in a vacuum oven (3608-1ce, Lab-Line Instruments Inc, Melrose Park, Illinois, USA) overnight to remove residual solvents. The dried lipid film was hydrated with 2 mL HBS (pH 7.4) at 65 °C for approximately 45 min with intermittent vortexing. The lipid dispersion was then sonicated for 5 and 10 min for cholesterol-free and cholesterol-containing formulations, respectively, using a probe sonicator (Soniprep 150, MSE Scientific Instruments, London, UK). During the sonication, the cholesterol-based formulations were maintained in ice-cold water, while the cholesterol-free formulations were kept above the transition temperature of the dominant lipid (i.e., HEPC). The formulations were centrifuged at 380 g for 2 min to remove any metal remnants produced from the sonicator tip. The prepared lipodisks were stored at 4 °C until further analysis.
Physicochemical characterizations

Dynamic light scattering (DLS)
The number-weighted average hydrodynamic diameter (Dh) of the lipodisks was measured by frequency power spectrum (FPS) analysis combined with laser-amplified detection using a Nanotrac Wave (Microtrac MRB GmbH, Germany). The instrument is equipped with a 3 mW solid-state laser diode with a wavelength of 780 nm and backscattered light detection at 180°. The particle diameter and polydispersity index (PDI) were measured after diluting the sample to a final lipid concentration of 1 mM with HBS (viscosity: 0.89 mPa·s; refractive index: 1.33). The diameter obtained from the measurements corresponds to the "equivalent sphere" diameter and is therefore not identical to the actual disk diameter.
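The conversion from the measured diffusion to the reported "equivalent sphere" diameter follows the Stokes-Einstein relation; the sketch below illustrates it with the HBS viscosity used in the measurements (the example diffusion coefficient is an arbitrary assumed value):

```python
import math

def hydrodynamic_diameter_nm(diffusion_m2_per_s: float,
                             temperature_k: float = 298.15,
                             viscosity_pa_s: float = 0.89e-3) -> float:
    """Stokes-Einstein: D_h = k_B * T / (3 * pi * eta * D).

    DLS extracts the translational diffusion coefficient D from the
    scattered-light fluctuations and reports the diameter of the sphere
    that would diffuse equally fast - hence 'equivalent sphere' diameter,
    not the true disk diameter."""
    k_b = 1.380649e-23  # Boltzmann constant, J/K
    d_h_m = k_b * temperature_k / (3.0 * math.pi * viscosity_pa_s
                                   * diffusion_m2_per_s)
    return d_h_m * 1e9

# A particle diffusing at ~1.6e-11 m^2/s in HBS reports as ~30 nm:
print(round(hydrodynamic_diameter_nm(1.63e-11), 1))
```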
Cryogenic transmission electron microscopy.
Cryo-EM samples were prepared at a controlled temperature (25 °C) and high humidity in a custom-built environmental chamber. A minute volume of the specimen was deposited on a copper grid (300 mesh, Agar Scientific, Essex, UK) coated with a custom-made holey polymer film. After blotting away excess liquid with a filter paper, the sample was rapidly vitrified by plunging the grid into liquid ethane maintained just above its freezing point. The prepared grid was then placed in a Gatan CT3500 holder and transferred to a Zeiss TEM Libra 120 transmission electron microscope (Carl Zeiss AG, Oberkochen, Germany) for examination. Throughout the transfer and viewing procedures, the sample was consistently maintained at a temperature below −160 °C. The microscope, operating at 80 kV in zero-loss bright-field mode, captured digital images under low-dose conditions using a BioVision Pro-SM CCD camera from Proscan elektronische Systeme GmbH (Scheurig, Germany) (Almgren et al., 2000; Morin Zetterberg et al., 2016).
Quantification of lipid contents.
The total amount of lipid in the lipodisk formulations was determined by analyzing the phosphorus content of the samples, similar to that described previously (Paraskova et al., 2013). Briefly, calculated amounts of the lipodisk samples were pipetted into phosphorus-free clean glass vials. The vials were then placed in a laboratory ashing furnace (CSF 1200, Carbolite Gero Ltd, Sheffield, UK) at 550 °C for at least 5 h to calcinate the samples. Post digestion, the vials were allowed to cool down and the obtained dry ashes were dissolved in 2 mL Milli-Q water. Afterwards, 0.5 mL of a freshly prepared solution consisting of seven parts reagent A (2.74 mg⋅mL−1 of K(SbO)C4H4O6⋅0.5H2O, 40 mg⋅mL−1 of (NH4)6Mo7O24⋅4H2O and 2.5 M H2SO4; 1:3:10) and three parts reagent B (0.1 M ascorbic acid, C6H8O6) was added to the vials. The vials were then carefully vortexed and allowed to settle for at least 15 min. The phosphorus content was then quantified at λmax = 882 nm using a UV-VIS spectrophotometer (HP 8453, Agilent Technologies, Santa Clara, USA). The standard curve for phosphorus was constructed using a phosphorus standard solution (KH2PO4, 0.65 mM; Sigma-Aldrich, St. Louis, MO, USA). It is important to note that cholesterol, which is the phosphorus-free component, was assumed to contribute in line with the original molar ratios of the formulation when calculating the final lipid content (Grad et al., 2020).
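The arithmetic of the assay — converting the A882 reading into total lipid while crediting cholesterol at its nominal molar fraction — can be sketched as below; the standard-curve slope and dilution factor are assumed inputs from the user's own calibration:

```python
def total_lipid_mM(a882: float, std_slope_per_mM_P: float,
                   dilution_factor: float,
                   phospholipid_fraction: float = 0.55) -> float:
    """Total lipid concentration from the phosphorus assay.

    a882                  absorbance of the digested, diluted sample
    std_slope_per_mM_P    slope of the KH2PO4 standard curve (abs per mM P)
    dilution_factor       overall dilution from formulation to cuvette
    phospholipid_fraction molar fraction of P-containing lipids; 0.55 for
                          HEPC:Chol:DSPE-PEG2000 35:45:20, since cholesterol
                          carries no phosphorus but is assumed present at
                          its nominal ratio, as in the text.
    """
    phosphorus_mM = (a882 / std_slope_per_mM_P) * dilution_factor
    # one phosphorus atom per phospholipid => [P] = [phospholipid]
    return phosphorus_mM / phospholipid_fraction

# Example with assumed values: A882 = 0.30, slope = 0.40 abs/mM, 10x dilution:
print(total_lipid_mM(0.30, 0.40, 10.0))  # ~13.6 mM total lipid
```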
Encapsulation efficiency (EE %).
The encapsulation efficiency (EE %) of the curcumin-containing formulations was determined using a method based on solvent extraction. Briefly, 1 mL of freshly prepared lipodisk formulation was centrifuged at 11,500 g for 15 min using a Biofuge pico centrifuge (Heraeus Instruments, Osterode, Germany) to separate the disks from the unloaded drug. Post centrifugation, the supernatant (containing drug-loaded lipodisks) and the pellet (containing unloaded drug) were separated. Thereafter, 0.15 mL of supernatant was diluted to 3 mL using DMSO supplemented with 20 % v/v HBS buffer. The amount of curcumin encapsulated was quantified by measuring the absorbance at λmax = 435 nm using a UV-VIS spectrophotometer (HP 8453, Agilent Technologies, Santa Clara, USA). The separated pellet was dissolved in 1 mL of DMSO, and 0.15 mL of the dissolved pellet was diluted to 3 mL with DMSO containing 20 % v/v HBS buffer. The amount of curcumin was then quantified as described previously. DMSO supplemented with 20 % v/v HBS buffer (20 mM HEPES, 150 mM NaCl, pH 7.4) was used as blank. The standard curve for curcumin was constructed in the same solvent system and the molar extinction coefficient (ε) was determined. Measurements of a saturated solution confirmed that the solubility of curcumin in the HBS buffer was very low, i.e., in the range of 1.0–1.4 µM. Phosphorus analysis moreover verified that the lipid content in the pellet was negligible. The EE % was then determined using the formula (Ali et al., 2021):

EE % = ([drug]encapsulated / [drug]total) × 100 [1]

The EE % of the doxorubicin-containing formulations was, due to doxorubicin's comparatively high aqueous solubility, determined using a slightly modified version of the solvent extraction method. Briefly, 1 mL of lipodisk dispersion was centrifuged at 11,500 g for 15 min. The resulting pellet (containing precipitated unloaded drug) was dissolved in 1 mL of 30 mM Triton™ X-100 solution. 0.5 mL of supernatant (containing drug-loaded lipodisks and some dissolved unloaded drug) was added to Amicon® Ultra centrifugal filters (Ultracel®, MWCO 3 K; Merck Millipore, Carrigtwohill, Ireland) and centrifuged again at 16,089 g for 25 min. After the filtrate (containing dissolved unloaded drug) was separated, the filter was inverted into a new Eppendorf tube and centrifuged again at 6093 g for 2 min to obtain the lipodisk concentrate (containing the drug-loaded lipodisks). 30 µL of the concentrate was then diluted to 0.5 mL with HBS buffer (20 mM HEPES, 150 mM NaCl, pH 7.4). Subsequently, 125 µL of each of the filtrate, the lipodisk concentrate, the pellet, and the supernatant from the original formulation was diluted separately to 2.5 mL using 30 mM Triton™ X-100 solution. The absorbance was then recorded at λmax = 480 nm using the UV-VIS spectrophotometer (HP 8453, Agilent Technologies, Santa Clara, USA). The doxorubicin standard curve was generated in 30 mM Triton™ X-100 solution and the molar extinction coefficient (ε) was then calculated. The same solvent system was used as blank. The EE % for doxorubicin was determined using Eq. [1] while taking into consideration the possibility of potential loss of some of the drug and lipid to the filters. Here, [Doxorubicin]total represents the total amount of doxorubicin present in the formulation (encapsulated + free), whereas [Doxorubicin]encapsulated represents the doxorubicin that is encapsulated in the lipodisks. The latter is calculated from what is left in the concentrate fraction, corrected by the free drug found in the filtrate. Since the Amicon® filtration procedure concentrates the lipodisk formulation, Eq. [2] uses the [drug]/[lipid] ratio in order to translate it to Eq. [1]:

EE % = (([Doxorubicin]/[lipid])concentrate / ([Doxorubicin]/[lipid])total) × 100 [2]
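A minimal sketch of the bookkeeping behind Eqs. [1] and [2] is given below. It assumes that drug and lipid amounts have already been converted to moles from the absorbance readings; the numbers in the example mirror the best doxorubicin case reported later (EE 84 % at an initial drug-to-lipid fraction of 0.15).

```python
def ee_percent(drug_encapsulated_umol: float, drug_total_umol: float) -> float:
    """Eq. [1]: EE % = 100 * encapsulated drug / total drug."""
    return 100.0 * drug_encapsulated_umol / drug_total_umol

def drug_to_lipid(drug_umol: float, lipid_umol: float) -> float:
    """Molar drug-to-lipid ratio; working with this ratio (Eq. [2])
    makes the result insensitive to the concentration step introduced
    by the Amicon filtration."""
    return drug_umol / lipid_umol

total_drug, total_lipid = 15.0, 100.0       # initial fraction 0.15 (umol)
encapsulated = 0.84 * total_drug            # what ends up in the disks
print(ee_percent(encapsulated, total_drug))      # 84.0 %
print(drug_to_lipid(encapsulated, total_lipid))  # ~0.13 in the disks
```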
Preparations based on HEPC:Chol:DSPE-PEG 2000
Previous studies have shown that the morphology, size, and size polydispersity of DC-produced liposomes can be tailored and adapted through deliberate adjustments of the conditions used during the homogenization process (J. Koehler et al., 2023a; Koehler et al., 2021). We deemed it plausible that the characteristics of lipodisk preparations could be modified and optimized in a similar fashion by a considerate choice of the experimental conditions. Thus, we began our investigations by exploring how changes in some key parameters affected the size and structure of the lipid self-assemblies found in the formulations.
Effect of homogenization time.
As a first step, we analysed how changes in the duration of the initial homogenization step affected the morphology and homogeneity of the lipid particles present in the final preparations. To this end, a series of samples with 40 % lipid concentration were first homogenized by DC for different periods of time. The resulting lipodisk gels were then dispersed in buffer and thereafter analysed by cryo-EM. The images shown in Fig. 2 illustrate the differences observed between the preparations. Samples homogenized for 5 min were dominated by lipodisks but also contained some irregular bilayer patches, as well as a small population of unilamellar liposomes with diameters from about 50 to 150 nm (Fig. 2A). The latter often appeared to encapsulate some lipodisks in their aqueous core. Noteworthy, the lipodisks were rather heterogeneous in size. As judged by the cryo-EM images, which do not reveal the lipodisk PEG corona, the bilayer part of the disks displayed diameters from about 150 nm down to around 10 nm. Given that the core of a DSPE-PEG 2000 micelle can be estimated to be about 6.4 nm (Johnsson et al., 2001), it cannot be excluded that the smallest structures observed represent particles that are more or less exclusively composed of PEG-lipid. Increasing the homogenization time to 15 min had a clearly noticeable effect on the particle homogeneity. A few liposomes were still detected in the cryo-EM analysis, but no irregular bilayer patches were observed. Also, the lipodisks assumed a narrower size distribution, and disks with diameters around 10 nm, or above 100 nm, were now only rarely observed (Fig. 2B). Increasing the homogenization time to 30 min did not lead to any visible changes in the size or homogeneity of the lipodisks (Fig. 2C). The images suggested, however, a slightly larger population of unilamellar liposomes than what was observed after 15 min homogenization. Further extension of the homogenization time to 60 min led to some major structural changes in the samples. Thus, in addition to lipodisks, the images revealed a significant number of large liposomes and irregular bilayer patches. The cryo-EM analysis also disclosed the presence of some large, contrast-rich, amorphous assemblies (Fig. 2D). The size of these structures was often found to be in the micrometer range.
In order to complement the cryo-EM analysis, the samples described above were also measured by DLS. In the case of samples homogenized for 5, 15, and 30 min, the number-weighted size distributions displayed a single narrow peak corresponding to particles with a mean hydrodynamic diameter in the range between 25 and 30 nm (Fig. 3A). As expected from the cryo-EM analysis, the DLS measurements returned inconclusive and irreproducible results for the samples that were homogenized for 60 min (results not shown).
The results reported above confirm that cholesterol-containing lipodisks can be efficiently produced by DC. Our findings show, however, that the duration of the homogenization step is of crucial importance for the homogeneity of the preparations. Homogenization times that are too short risk leading to insufficient mixing of the lipid components and, as shown by the cryo-EM analysis, result in lipodisks with a broad size distribution. Excessively long homogenization times may, on the other hand, result in samples that, apart from lipodisks, contain significant amounts of non-desired structures, such as liposomes and the large amorphous assemblies shown in the inset of Fig. 2D. It can be speculated that the appearance of these unwanted structures is linked to partial lipid degradation taking place during the prolonged homogenization. The latter may lead to compositional changes, including loss of PEG-lipid, that, together with the high energy input, help explain the shift from lipodisks towards liposomes and large assemblies of, as yet, uncharacterized components. It can be concluded that, in order to minimize the risk of lipid degradation while at the same time ensuring sufficient lipid mixing, a homogenization time in the interval between 15 and 30 min is optimal for the production of lipodisks based on HEPC and cholesterol.
Influence of lipid concentration.
We next set out to explore the influence of lipid concentration. Based on the results reported in the previous section, the time for homogenization was kept constant at 15 min, while the lipid concentration was varied between 10 and 60 %. Cryo-EM analyses confirmed that lipodisks were the dominating structures in all cases but also disclosed some important lipid concentration-dependent differences between the samples. Thus, some unusually large liposomes, as well as aggregates of apparently non-dispersed lipid material, were observed in samples prepared with 10 % lipid (Fig. 4A). The presence of these structures is corroborated by the considerably larger mean hydrodynamic diameter and wider particle size distribution returned by DLS for the 10 % sample, as compared to the 40 % sample (Fig. 3B). Preparations based on the use of 20 % lipid contained, apart from lipodisks and some small liposomes, also a population of irregular bilayer patches/fragments (see inset Fig. 4B) that were not observed in samples prepared with 40 % lipid (Fig. 2B). In the case of samples prepared with 50 % lipid, the cryo-EM analyses revealed lipodisks with a broad size distribution. Thus, similar to what was observed with 40 % lipid, lipodisks with sizes in the range from 20 to 80 nm were the dominating structures, but disks with diameters around 200 nm were also frequently observed in the micrographs (Fig. 4C). Also noteworthy, in contrast to the observations made for the 40 % samples, no liposomes were detected in the preparations based on 50 % lipid. A comparison of the DLS data retrieved for the corresponding samples indicates that the use of the higher lipid concentration indeed results in particles with a somewhat larger mean hydrodynamic diameter and wider size distribution (Fig. 3B). In the case of samples prepared with 60 % lipid concentration, the cryo-EM images revealed a few small lipodisks, but most of the material was found in the form of large discoidal or irregular bilayer structures (Fig. 4D). In line with this, DLS suggested an increase in particle mean hydrodynamic diameter from 30 to 95 nm as the lipid concentration was changed from 50 to 60 % (Fig. 3B).
Based on the results of the cryo-EM and DLS investigations, it can be concluded that the most homogeneous preparations in terms of particle size and structure were obtained when the lipid concentration was set to 40 % during the homogenization step. The size and structural heterogeneity observed for preparations based on 20 and, in particular, 10 % lipid is likely a consequence of the sample viscosity being too low to ensure efficient homogenization. The poor results noted for preparations based on 50 and, especially, 60 % lipid concentration are, on the other hand, most likely due to a shortage of water. As previously discussed in connection with the use of DC for liposome preparation (J. Koehler et al., 2023a), there exists a maximum lipid concentration above which there is simply not enough water available to hydrate the lipids and allow them to assemble into the desired nanostructures. It is in this context worth noting that while lipodisks, in contrast to liposomes, do not have a water-filled core, the disks contain a high content (20 mol%) of PEG-lipids with high water-binding capacity. More specifically, each PEG 2000 molecule is expected to bind up to 210 water molecules when fully hydrated (Tirosh et al., 1998). In addition to this, about 12 molecules of water are needed to hydrate a phospholipid headgroup (Binder et al., 1999; Jendrasiak et al., 1996; Tu et al., 1996). A rough estimation hence indicates that more than 80 % of the available water is consumed to hydrate the HEPC and PEG-lipid components when the homogenization step is performed with a lipid concentration corresponding to 50 %. In the case of preparations based on 60 % lipid, the water content is clearly not sufficient to fully hydrate the phospholipid headgroups and PEG-polymer chains. We will come back to the issue of limited water content in the next section.
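The ">80 %" figure can be checked with a few lines of arithmetic. The molecular weights below are approximate assumptions (HEPC ~784, cholesterol ~387, DSPE-PEG 2000 ~2806 g/mol); the hydration numbers are those quoted above.

```python
# Water bound vs. water available in a 50 % HEPC:Chol:DSPE-PEG2000
# (35:45:20) gel, e.g. 25 mg lipid + 25 uL buffer in a 50 mg batch.
mw = {"HEPC": 784.0, "Chol": 387.0, "PEG": 2806.0}   # g/mol, approximate
x = {"HEPC": 0.35, "Chol": 0.45, "PEG": 0.20}        # molar fractions

avg_mw = sum(x[k] * mw[k] for k in mw)               # ~1010 g/mol
lipid_umol = 25.0 / avg_mw * 1000.0                  # ~24.8 umol total lipid
water_umol = 25.0 / 18.0 * 1000.0                    # ~1389 umol water

bound = (lipid_umol * (x["HEPC"] + x["PEG"]) * 12    # 12 waters per PC head
         + lipid_umol * x["PEG"] * 210)              # 210 waters per PEG2000
print(f"bound water: {bound / water_umol:.0%}")      # ~87 %, i.e. > 80 %
```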
Preparations based on HEPC:DSPE-PEG 2000
In the same manner as for the cholesterol-containing lipodisks, we performed systematic studies to optimize the conditions for the production of cholesterol-free lipodisks consisting of HEPC and DSPE-PEG 2000 alone. The cryo-EM pictures displayed in Fig. 5 show the structures observed in samples with 40 % lipid concentration homogenized for 5, 15, and 30 min. Irrespective of the homogenization time, all samples contained lipodisks. Use of the shortest homogenization time resulted, however, in samples that in addition to lipodisks also displayed some larger, irregular bilayer structures (Fig. 5A).
The analyses revealed no major differences between samples homogenized for 15 and 30 min (Fig. 5B and 5C). In both cases, the micrographs suggested a homogeneous population of lipodisks with diameters between 20 and 40 nm (excluding the PEG corona). The observations of similar lipodisk sizes and size distributions were supported by DLS measurements (Fig. 6A). Due to the previously mentioned potential risk of lipid degradation, homogenization times longer than 30 min were not investigated. Noteworthy, in contrast to what was observed in the presence of cholesterol (Fig. 2), all investigated samples appeared completely devoid of liposomes. DLS measurements moreover confirmed a generally smaller mean particle size in the absence, as compared to the presence, of cholesterol (compare Figs. 6A and 3A).
In order to complement the above-described investigations of the cholesterol-free preparations, we explored the effects brought about by varying the lipid concentration used in the homogenization step. For these experiments, the duration of the homogenization process was set to 15 min. As revealed by the micrographs shown in Fig. 7, preparations free of liposomes, and completely dominated by lipodisks with sizes in the range of 30-60 nm, were obtained when the lipid concentration was either decreased to 10 and 20 % (Fig. 7A and 7B, respectively) or increased to 50 % (Fig. 7C). The particle size and morphology remained essentially unchanged also when the lipid concentration was increased to 60 % (Fig. 7D). Thus, although the micrographs occasionally revealed the presence of some irregular or discoidal bilayer patches, small, homogeneous lipodisks were the clearly dominating structures in all the investigated samples. In line with this, DLS measurements returned data suggesting only a modest increase in mean particle size with increasing lipid concentration (Fig. 6B).
It may appear somewhat surprising that the use of lipid concentrations as high as 60 % resulted in samples dominated by small, homogeneous lipodisks. As in the case of the cholesterol-containing preparations, simple calculations confirm that the amount of water present in the 60 % samples during the homogenization process is below what is required for full hydration of the lipid and PEG-lipid components. It is therefore unlikely that a gel consisting of small, well-formed lipodisks can be produced at this lipid concentration. It is in this context worth noting, however, that small lipodisks of uniform size have been shown to form upon simple hydration of cholesterol-free lipid films composed of PEG-lipids and long-chained saturated phosphatidylcholine lipids (Sandström et al., 2008). This indicates that the lipodisks found in the cholesterol-free samples may represent equilibrium structures. Hence, even if the low water content prevents conversion of the lipid material into small, well-defined lipodisks during the homogenization step, it is reasonable to expect that lipodisks of small and uniform size will form upon dilution of the samples. In the case of samples supplemented with cholesterol, the situation is obviously different, and simple hydration is not enough to transform the lipid material in the gels into small lipodisks.
Drug-loaded lipodisks
Having mapped and optimized the experimental conditions for lipodisk production, we continued by exploring the utility of DC for the formulation of two different drugs, curcumin and doxorubicin, in the disks. Both drugs have previously been shown suitable for incorporation in lipodisks (Ahlgren et al., 2017a; Feng et al., 2019; Zhang et al., 2014).
Disks loaded with curcumin
For the experiments involving curcumin, we followed a protocol in which the lipids (40 %) and curcumin were co-homogenized by DC for 15 min, whereafter the lipodisk gels were diluted with buffer and dispersed by low-speed DC. The resulting lipodisk dispersions were investigated by cryo-EM, measured by DLS, and analysed in terms of lipid and curcumin content (see Section 2.2.4 for details).
The choice of lipid composition was found to have an effect on both the particle homogeneity and the lipodisk encapsulation efficiency (EE). As noted previously for the drug-free preparations, formulations devoid of cholesterol were completely free of liposomes and contained lipodisks of more uniform size than did the cholesterol-supplemented formulations (Fig. 8A and 8B). The presence of cholesterol did, on the other hand, lead to a somewhat higher EE (Table 1). Thus, for samples prepared with an initial drug-to-lipid molar fraction of 0.2, the encapsulation efficiency increased from 21 to 31 % upon inclusion of cholesterol in the preparations.
A further increase of the EE to 66 % was achieved by decreasing the initial drug-to-lipid fraction in the cholesterol-supplemented preparations to 0.06 (Table 1). This improvement suggests that the presence of high quantities of curcumin during the homogenization step either decreases the disks' capacity to accommodate the drug or, more likely, leads to a lower proportion of the curcumin being available for incorporation into the disks. An explanation for the latter could be that the use of high quantities of curcumin leads to less efficient, and perhaps incomplete, disintegration of the crystalline drug material. This hypothesis is supported by the fact that DC-prepared formulations analysed directly after dilution of the gels, i.e., without prior separation of unencapsulated drug by conventional centrifugation, occasionally displayed structures that likely represent crystalline drug (inset of Fig. 8B).
In an attempt to further increase the EE, we co-dissolved the lipid and curcumin components in a mixture of chloroform and ethanol, and subsequently formed a dry "lipid film" before proceeding as before with the DC homogenization. As shown in Table 1, for preparations with an initial drug-to-lipid fraction of 0.06 the EE increased to 72 % as a result of the pre-mixing in organic solvent. Increasing the homogenization time from 15 to 30 min did, on the other hand, have the general effect of decreasing the EE. Thus, the EE dropped to 56 and 55 % for samples prepared with and without pre-mixing in organic solvent, respectively.
For comparison, curcumin was also formulated in cholesterol-supplemented lipodisks by use of a preparation protocol based on probe sonication (see Materials and Methods for details). When employing this method, the lipid components were first co-dissolved with curcumin in the organic solvent. Probe sonication of samples with a total lipid concentration of 9.5 mM, and an initial drug-to-lipid molar fraction corresponding to 0.2, resulted in an EE of 16 % (Table 1). The EE was thus lower than what was noted for formulations with the same initial drug-to-lipid molar fraction prepared by DC. Upon decreasing the lipid concentration to 3.9 mM the EE increased to 32 %, i.e., close to what was obtained with DC. A further increase of the EE to 70 % was achieved by lowering the lipid concentration to 1.0 mM. These findings indicate that the lipodisks have a high loading capacity for curcumin (maximum drug-to-lipid molar fraction in disks ≥ 0.13), but that the drug is less efficiently encapsulated in the disks when the sonication is performed with a higher, as compared to lower, drug and lipid content in the samples. As judged by cryo-EM and DLS, neither the lipodisk size homogeneity nor the proportion of liposomes present in the preparations varied notably depending on the lipid concentration. Thus, in all cases the lipodisks displayed a similar and rather broad size distribution, and the proportion of liposomes was comparable (Table 1, Fig. 8C, Fig. S1). It hence appears that the lower EE noted for formulations prepared with higher lipid concentrations can be traced back to the higher initial drug content of the samples.
From the data collected for the cholesterol-supplemented curcumin formulations it can be concluded that preparation based on sonication enabled a drug-to-lipid fraction in the disks that was about two times higher than what could be achieved using DC (0.13 as compared to 0.06). The use of DC, on the other hand, enabled the production of formulations that were clearly superior to those produced by sonication in terms of particle size and structural homogeneity. It should in this context be mentioned that the use of lipid concentrations lower than 40 % was found to be suboptimal for the production of cholesterol-containing lipodisks by means of DC (see Section 3.1.1.2).
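The EE values and in-disk drug-to-lipid fractions quoted above are related by simple arithmetic: since essentially all lipid is recovered in the disks, the in-disk drug-to-lipid molar fraction is the product of the EE and the initial drug-to-lipid molar fraction. The short Python sketch below illustrates this conversion with the figures quoted in the text; it is an illustrative calculation, not part of the original analysis pipeline.

```python
# Minimal sketch relating encapsulation efficiency (EE) to the drug-to-lipid
# molar fraction in the disks, assuming essentially all lipid ends up in the
# disks.  All numbers below are the illustrative figures quoted in the text.

def drug_to_lipid_in_disks(ee: float, initial_ratio: float) -> float:
    """EE is the fraction of added drug that becomes disk-associated, so the
    in-disk drug-to-lipid molar fraction is EE * initial molar fraction."""
    return ee * initial_ratio

# Curcumin, sonication at 1.0 mM lipid: EE = 70 %, initial ratio 0.2
print(drug_to_lipid_in_disks(0.70, 0.2))   # ~0.14, i.e. the ~0.13 quoted
# Curcumin, DC with cholesterol: EE = 31 %, initial ratio 0.2
print(drug_to_lipid_in_disks(0.31, 0.2))   # ~0.06
# Doxorubicin, DC at 10 % lipid: EE = 84 %, initial ratio 0.15
print(drug_to_lipid_in_disks(0.84, 0.15))  # ~0.13
```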
Disks loaded with doxorubicin
A protocol similar to that used for curcumin was employed to prepare doxorubicin-loaded lipodisks by DC. Thus, doxorubicin and lipids (40 %) in varying molar fractions were co-homogenized by DC for 15 min, and the resulting gels were thereafter diluted and dispersed in buffer. Since initial cryo-EM analyses indicated the presence of some aggregated drug/lipid material in formulations based on HEPC:Chol:DSPE-PEG 2000 (Fig. S2A), we limited the investigations to formulations based on HEPC:DSPE-PEG 2000, for which no such aggregated material was observed.
Cryo-EM and DLS investigations confirmed that the doxorubicin formulations prepared by use of HEPC:DSPE-PEG 2000 were free of liposomes and contained lipodisks of small and uniform size (Fig. 9, Table 2). As noted for the curcumin-containing formulations, the initial drug-to-lipid molar fraction had a significant effect on the EE. Hence, encapsulation efficiencies corresponding to 64 and 77 % were obtained for formulations with initial drug-to-lipid molar fractions of 0.13 and 0.06, respectively (Table 2). Thus, also in the case of doxorubicin it appears that the proportion of drug available for incorporation in the disks tends to decrease when the DC homogenization is performed in the presence of high drug quantities. In order to explore this matter further, we investigated the effects brought about by reducing the lipid concentration used in the homogenization step from 40 to 10 %. As shown in Table 2, this modification of the protocol resulted in an EE of 84 % for formulations prepared with an initial drug-to-lipid molar ratio of 0.15. Since this EE translates to a drug-to-lipid fraction in the disks of 0.13, it is highly unlikely that the inferior EE obtained when using 40 % lipid, and an initial drug-to-lipid fraction of 0.13, is due to saturation of the disks with doxorubicin. The reason for the negative correlation between EE and lipid concentration must thus be sought elsewhere. As discussed in connection with the curcumin formulations, a likely explanation for the observed trend is that ineffective, or incomplete, disintegration of the crystalline drug material during the homogenization step leads to a suboptimal EE.
For comparative reasons, we also prepared some doxorubicin-containing formulations by means of the method based on probe sonication. Cryo-EM and DLS analysis verified that the formulations were completely dominated by small, homogeneous lipodisks (Fig. S2B, Table 2). As shown in Table 2, an EE corresponding to 99 % was obtained when using a lipid concentration of 3.9 mM and an initial drug-to-lipid molar fraction corresponding to 0.06. This can be compared to the EE of 77 % determined for formulations prepared by DC using the same initial drug-to-lipid molar fraction and a 40 % lipid concentration. The superior EE obtained when using sonication again suggests that the high drug quantities employed in the DC preparation resulted in disks with less than maximal doxorubicin content.
Some important conclusions can be drawn from the results presented in section 3.2. First, lipodisks are capable of encapsulating substantial amounts of both curcumin and doxorubicin. The data collected in Tables 1 and 2 indicate a maximum drug-to-lipid molar fraction in the disks corresponding to at least 0.13 for both drugs. Second, DC can be used as a straightforward means to produce lipodisk-based formulations of the two drugs. Our findings indicate, however, that a conscious choice of lipid composition and drug-to-lipid molar ratio in the starting material is important to ensure high encapsulation efficiencies. An important observation is that the use of high lipid concentrations during the DC homogenization may limit the possibilities to fully utilize the drug loading capacity of the lipodisks. For formulations based on cholesterol-supplemented lipodisks, this limitation appears difficult to circumvent. The situation is different, however, when cholesterol is omitted from the formulations. In this case, the DC homogenization can be performed at a lipid concentration low enough to ensure that both the EE and the drug-to-lipid molar fraction in the lipodisks are kept high.
Concluding remarks
The present study represents the first report on the successful use of dual centrifugation to produce small and homogeneous PEG-stabilized lipodisks. The preparation of lipodisks by DC, which in principle can be done under aseptic conditions, turned out to be fast and straightforward. As shown for curcumin and doxorubicin, the technique allowed for the production of lipodisk-based drug formulations with high encapsulation efficiencies. The results and insights gained from this initial systematic study open up the screening for optimal conditions for efficient and robust entrapment of curcumin, doxorubicin, and other drugs in small and uniform lipodisks. Such screening can include variations in lipodisk lipid composition and initial drug-to-lipid ratios with the purpose of identifying conditions that lead to a satisfactory balance between the loading efficiency and the amount of drug encapsulated in the lipodisks. Investigations of the impact of the aqueous phase may also be warranted as part of the optimization procedure. Here, different pH values, buffer capacities, and buffers can be explored and compared in terms of their effect on the drug-lipodisk interactions. Also, the effect of the addition of other ions or substances can be tested in the search for conditions leading to an optimal lipodisk formulation. As one example, potential issues connected to recrystallization of the drug might be counteracted by the addition of alcohol. The presence of alcohol may, on the other hand, alter the structure and micro-mechanical properties of the bilayer and, as a consequence, have an undesirable effect on the drug release behavior or drug loading capacity of the disks. In conclusion, DC enables fast and simple preparation of drug-loaded lipodisks and has the potential to become a valuable tool in the continued development of lipodisks as versatile platforms for efficient and safe drug delivery.
The funders had no role in the design of the study, in the collection, analysis or interpretation of data, in the writing of the manuscript, or in the decision to publish the results.
Fig. 8.
Fig. 8. Cryo-EM images of curcumin formulations prepared with an initial drug-to-lipid molar fraction corresponding to 0.2. (A) HEPC:Chol:DSPE-PEG 2000 prepared by DC using a lipid concentration of 40 %, (B) HEPC:DSPE-PEG 2000 prepared by DC using a lipid concentration of 40 %, and (C) HEPC:Chol:DSPE-PEG 2000 prepared by sonication of a sample with a lipid concentration of 9.5 mM. The arrow in (A) indicates a unilamellar liposome filled with lipodisks. The inset in (B) shows non-lipid particles (presumed to represent crystalline drug) present in the samples before separation of unencapsulated drug. The lipid gels were dispersed in HBS (pH 7.4) before visualization. The scale bar represents 100 nm.
Table 1
Physicochemical evaluation of curcumin-containing formulations.
a Lipid concentration used for DC homogenization. b Lipid concentration in the final formulation (mean ± SD). c Initial drug-to-lipid molar fraction (mean ± SD). d Hydrodynamic diameter (mean ± SD) determined by DLS. e Polydispersity index calculated from the hydrodynamic diameter as (SD/mean)². f Formulation homogenized for 15 min. g Formulation homogenized for 30 min. | 2024-02-13T16:07:39.394Z | 2024-02-01T00:00:00.000 | {
"year": 2024,
"sha1": "54681faa5a75e74d655093080fdb646b8c58f020",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.ijpharm.2024.123894",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "530e03b8de8c2544f86f8ccecd8629ca238f41e8",
"s2fieldsofstudy": [
"Medicine",
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
59603742 | pes2o/s2orc | v3-fos-license | Air-stable redox-active nanomagnets with lanthanide spins radical-bridged by a metal–metal bond
Engineering intramolecular exchange interactions between magnetic metal atoms is a ubiquitous strategy for designing molecular magnets. For lanthanides, the localized nature of 4f electrons usually results in weak exchange coupling. Mediating magnetic interactions between lanthanide ions via radical bridges is a fruitful strategy towards stronger coupling. In this work we explore the limiting case when the role of a radical bridge is played by a single unpaired electron. We synthesize an array of air-stable Ln2@C80(CH2Ph) dimetallofullerenes (Ln2 = Y2, Gd2, Tb2, Dy2, Ho2, Er2, TbY, TbGd) featuring a covalent lanthanide-lanthanide bond. The lanthanide spins are glued together by very strong exchange interactions between 4f moments and a single electron residing on the metal–metal bonding orbital. Tb2@C80(CH2Ph) shows a gigantic coercivity of 8.2 Tesla at 5 K and a high 100-s blocking temperature of magnetization of 25.2 K. The Ln-Ln bonding orbital in Ln2@C80(CH2Ph) is redox active, enabling electrochemical tuning of the magnetism.
Supplementary Figure captions (MALDI TOF mass spectra of Gd2@C80(CH2Ph) and Y2@C80(CH2Ph)): resolution in the linear mode is not high enough for analysis of the isotopic distribution. In the reflector mode, strong fragmentation does not allow detection of the molecular peak, but the spectral resolution is sufficient to prove the correct isotopic distribution of the Gd2@C80− fragment. In the negative ion mode, strong fragmentation does not allow detection of the molecular peak, but the spectral resolution is sufficient to prove the correct isotopic distribution of the Y2@C80− fragment.
Supplementary Figure 10. Matrix-assisted laser desorption/ionization time-of-flight (MALDI TOF) mass spectra of TbY@C80(CH2Ph). Linear negative (top) and positive (bottom) ionization modes. Resolution in the positive mode is not high enough for analysis of the isotopic distribution. In the negative ion mode, strong fragmentation does not allow detection of the molecular peak, but the spectral resolution is sufficient to prove the correct isotopic distribution of the TbY@C80− fragment.

Supplementary Figure 17. Matrix-assisted laser desorption/ionization time-of-flight (MALDI TOF) mass spectra of Ho2@C80(CH2Ph). Linear negative (top) and positive (bottom) ionization modes. Resolution in the positive mode is not high enough for analysis of the isotopic distribution. In the negative ion mode, strong fragmentation does not allow detection of the molecular peak, but the spectral resolution is sufficient to prove the correct isotopic distribution of the Ho2@C80− fragment.
Supplementary Figure 18. Separation of Er2@C80(CH2Ph). Three HPLC separation steps were required to obtain the pure compound. (I) HPLC profile of the mixture of benzyl-derivatized Er-EMFs. The highlighted fraction, containing Er2@C80(CH2Ph), was collected for further separation. HPLC conditions: linear combination of two 4.6 × 250 mm Buckyprep columns; flow rate 1.6 mL/min; injection volume 800 µL; toluene as eluent; 40 °C. (II) Recycling HPLC separation of the fraction collected in the first step. The highlighted fraction was collected for further separation (10 × 250 mm Buckyprep column; flow rate 1.5 mL/min; injection volume 4.5 mL; toluene as eluent). (III) HPLC separation of the fraction collected in the second step. Pure Er2@C80(CH2Ph) was obtained as the highlighted fraction (10 × 250 mm Buckyprep-D column; flow rate 1.0 mL/min; injection volume 4.5 mL; toluene as eluent).
Supplementary Figure 19. Matrix-assisted laser desorption/ionization time-of-flight (MALDI TOF) mass spectra of Er2@C80(CH2Ph). Linear negative (top) and positive (bottom) ionization modes. Resolution in the positive mode is not high enough for analysis of the isotopic distribution. In the negative ion mode, strong fragmentation does not allow detection of the molecular peak, but the spectral resolution is sufficient to prove the correct isotopic distribution of the Er2@C80− fragment.
Supplementary Figure 20. HPLC profiles of the purified M2@C80(CH2Ph). HPLC conditions: linear combination of two 4.6 × 250 mm Buckyprep columns; flow rate 1.6 mL/min; injection volume 800 µL; toluene as eluent; 40 °C. The identical retention times of the TbxGd2-x@C80(CH2Ph) compounds indicate that their separation by HPLC is not feasible.
Supplementary Figure 21.
Air stability: HPLC trace of freshly synthesized {Tb2} and of the same sample 8 months after the synthesis (during this period the sample was studied in air by SQUID magnetometry and NMR spectroscopy and stored in solution).

Supplementary Figure 22. Molecular structure of Dy2@C80(CH2Ph) determined by single-crystal X-ray diffraction at temperatures from 100 K to 290 K. Thermal ellipsoids are shown at the 50 % level. The hexagon belt of the C80 fullerene cage where the Dy2 unit is located is highlighted with yellow bonds. Color code: grey for carbon, green for Dy; the deeper the green color, the higher the occupancy of the Dy site.

Supplementary Figure 23. Molecular structure of Dy2@C80(CH2Ph) determined by single-crystal X-ray diffraction at temperatures from 100 K to 290 K. The Dy sites are shown as spheres whose radii scale in proportion to the site occupancy (the bigger the sphere, the higher the occupancy).
Supplementary Table 1 (continued). Crystal data and data collection parameters
b Electrochemical gap is defined as the difference between the first oxidation and the first reduction potentials.
Supplementary Note 2. Axiality of the ligand field in {Ln2} molecules
As follows from ab initio calculations (Supplementary Tables 8-11), the single-ion magnetic anisotropy in {Ln2} is rather high. For instance, the total LF splitting for the Tb ions is ca. 1000 cm−1, and the energy of the first excited KD state is ca. 260 cm−1. The ligand field is indeed highly axial, resulting in high-spin ground states of Ising type.
The axiality of the ligand field in {Ln2} may have several origins. First, the metal atoms transfer their valence electrons to the fullerene cage, resulting in an accumulation of negative charge on the carbon atoms coordinated to the endohedral lanthanide ion. Note that this interaction also has a considerable covalent contribution via overlap of the π-electron density of the fullerene with vacant d-orbitals of the lanthanide. Next, the covalent Ln-Ln bonding results in a concentration of electron density between the two Ln ions. In Ref. 1 we used a simple point-charge model to show that even a relatively small negative charge located between two lanthanide ions may induce a rather high axial magnetic anisotropy. Thus, the metal-metal bond is important not only for the exchange interactions, but also for supporting the axial ligand field. Finally, lanthanide ions in EMFs have no "equatorial" ligands - a situation which also facilitates the imposition of an axial ligand field. The ligand-field Hamiltonian of the Ln3+ ion at site i can be written in the Stevens formalism as

$$\hat{H}_{LF}^{(i)} = \sum_{l=2,4,6} \sum_{m=-l}^{l} \theta_l \langle r^l \rangle A_l^m(i)\, \hat{O}_l^m = \sum_{l,m} B(l,m)\, \hat{O}_l^m$$

Here, the Stevens factors θ2 = αJ, θ4 = βJ and θ6 = γJ are rational numbers depending on L, S, and J and describe the angular shape of the 4f charge distribution, ⟨r^l⟩ are the expectation values of r^l taken with the radial 4f wave function, the A_l^m(i) are the CF coefficients describing the charge distribution around the Ln3+ ion at site i, and the Ô_l^m are the standard Stevens operators expressed in polynomials of the angular momentum operators (J, Ĵz, Ĵ+, Ĵ−). 14 For the axial parameters (m = 0), the values B(l,0) are given in Supplementary Table 8. Supplementary Table 8 also shows that states with different Jz hardly mix for the Tb ions (and similar was observed for Dy in Ref. 1). Thus, the LF Hamiltonian is almost diagonal for the Tb and the Dy ions, i.e., only the axial LF parameters are relevant. We evaluated the LF levels of the Tb and Dy ions using only the axial LF parameters and assuming the non-axial parameters to be zero. The resulting levels (curves denoted as axial 2, 4, 6) are compared with the levels obtained by our ab-initio calculations (crosses) in Supplementary Fig. 43. As expected for the case of small mixing, the diagonal LF terms describe the ab-initio levels very well.
Supplementary Note 3. Magnetic anisotropy and ligand-field states of different lanthanide ions in similar ligand-field environments
In order to estimate the relative importance of the individual axial terms (l = 2, 4, 6), we repeated the calculation with the fourth-order term set to zero (curves denoted as axial 2, 6) and finally with both the fourth- and sixth-order terms set to zero (curves denoted as axial 2). For the Tb ions we find that the quantum chemical data are very well described by considering only the second-order axial LF term; the fourth order provides a minor correction and the sixth order has no influence that would be visible at the scale of the plot. For the Dy ions as well, the second order alone describes the levels reasonably well. However, a minor correction is provided by the fourth-order term and a somewhat larger correction by the sixth order.
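The quality of this second-order-only ("axial 2") description can be illustrated with a few lines of code. The sketch below diagonalizes an axial Stevens Hamiltonian for a J = 6 (Tb-like) multiplet, keeping only the B(2,0)·O20 term; the B(2,0) value is illustrative, chosen to reproduce a total splitting of roughly 1000 cm−1, and is not a value from the Supplementary Tables.

```python
import numpy as np

# "axial 2" approximation: only B(2,0)*O20 with O20 = 3*Jz^2 - J(J+1).
# With m = 0 terms only, the Hamiltonian is diagonal in the |J, Jz> basis.
J = 6.0
Jz = np.arange(-J, J + 1)              # Jz = -6 ... +6
B20 = -9.3                             # cm^-1, illustrative value
E = B20 * (3 * Jz**2 - J * (J + 1))    # diagonal ligand-field energies
E -= E.min()                           # energies relative to the ground doublet

for m, e in sorted(zip(Jz, E), key=lambda t: t[1]):
    print(f"Jz = {m:+3.0f}: {e:7.1f} cm^-1")
# A negative B20 puts |Jz| = J lowest (uniaxial, Ising-type ground doublet);
# the total splitting is 108*|B20|, about 1000 cm^-1 for this choice.
```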
The relatively larger influence of the sixth-order term in Dy as compared to Tb can be understood from a combination of two factors. First, the sixth-order term gains a larger importance in Dy due to the larger J-value of its ground-state multiplet, J = 15/2, compared to J = 6 for Tb. This value enters the expectation values of the Stevens operators in the power of l, providing an enhancement by a factor of 2.4 for the importance of order six vs. order two. Second, a factor of modulus 1.5 in favor of the Dy sixth-order contributions is gained by the ratio of the Stevens factors γJ/αJ, see Supplementary Table 12. In a very similar manner one can understand that the fourth-order term has more or less the same influence on the Tb and the Dy ions. Here, a factor of 1.6 due to the different J-values is partly compensated by a factor of 0.8 due to the ratio of the Stevens factors βJ/αJ, see Supplementary Table 12. Summarizing the discussion for the Tb and the Dy ions, the whole LF spectrum and the LF states of Tb3+ in {Tb2} can be described by a single parameter B(2,0), and the same holds, with a grain of salt, for Dy3+ in {Dy2}. The sign of this parameter determines whether the magnetic anisotropy is of uniaxial type, i.e., the ground state is |Jz| = J, or of easy-plane type with a ground state |Jz| = 0 or 1/2. For the two discussed ions, B(2,0) is negative, implying a uniaxial ground state. The negative sign of B(2,0) in both cases comes about through the same positive sign of A20 for both systems, combined with the same negative sign of the related second-order Stevens factors.
Turning to the case of the Ho3+ ion, it can be noted that the second-order Stevens factor αJ is negative, as in Tb3+ and Dy3+. Thus, given the same sign of A20 as in the other cases, B(2,0) is also negative and Ho has a high-spin ground state like the previous cases. However, the effect of the sixth-order axial crystal-field term is much stronger than in Tb or Dy, since the ratio γJ/αJ provides a factor of 5.3 (as compared with Tb) and the larger J = 8 provides a factor of 3.2. Calculation of the LF levels from the axial terms alone (not shown) shows that the high-spin ground state is (just) not spoilt by the higher-order terms. However, by virtue of the larger pre-factors, the non-axial sixth-order LF parameters now produce a strong mixing of different Jz states, see Supplementary Table 9. This mixing is present even in the ground state. It is responsible for an important contribution to transitions between (quasi-)degenerate states. The importance of the fourth-order terms is slightly enhanced compared to Tb and Dy, but their impact is much smaller than that of the other terms.
Finally, the second-order Stevens factor of Er3+ is positive, opposite to all the other cases. Thus, an easy-plane ground state is realized in Er, which is not compatible with SMM behavior. The influence of the sixth-order contributions to the LF Hamiltonian of Er3+ in {Er2} is similar to the case of Ho, since the ratio of the Stevens factors γJ/αJ is somewhat larger in Er3+ than in Ho3+, while the value of J = 15/2 is smaller than in Ho. The fourth-order terms have the same, minor importance as in the other systems.
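These enhancement factors can be checked directly from the standard tabulated Stevens factors (the rational αJ, βJ, γJ values of Abragam and Bleaney, which should match those of Supplementary Table 12). The short sketch below reproduces the factors quoted above.

```python
# Numerical check of the factors discussed above, using the standard
# tabulated Stevens factors alpha_J, beta_J, gamma_J for the trivalent ions.
factors = {        # ion: (J, alpha_J, beta_J, gamma_J)
    "Tb": (6.0, -1/99,    2/16335,  -1/891891),
    "Dy": (7.5, -2/315,  -8/135135,  4/3864861),
    "Ho": (8.0, -1/450,  -1/30030,  -5/3864861),
    "Er": (7.5,  4/1575,  2/45045,   8/3864861),
}

J0, a0, b0, g0 = factors["Tb"]                    # Tb as the reference ion
for ion, (J, a, b, g) in factors.items():
    jl6 = (J / J0) ** 4                           # J^l weighting, order 6 vs 2
    jl4 = (J / J0) ** 2                           # J^l weighting, order 4 vs 2
    r6 = abs(g / a) / abs(g0 / a0)                # |gamma/alpha| relative to Tb
    r4 = abs(b / a) / abs(b0 / a0)                # |beta/alpha| relative to Tb
    print(f"{ion}: J^4 = {jl6:.1f}, J^2 = {jl4:.1f}, "
          f"g/a vs Tb = {r6:.1f}, b/a vs Tb = {r4:.1f}")
# Dy: 2.4, 1.6, ~1.5 and ~0.8; Ho: 3.2 and ~5.2 -- matching the factors
# (2.4, 1.6, 1.5, 0.8, ~5.3, 3.2) quoted in the text, up to rounding.
```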
Summarizing all the considered cases, the LF ground states for {Ln2} are solely determined by the sign of the related second-order Stevens factors, encoding the different shapes of the Ln-4f shells. Moreover, it is possible, in a decent approximation, to describe all LF spectra by one single parameter A20, which is determined by the quadrupolar potential at the Ln sites, and appropriately scaled by the radial expectation value of the specific Ln-4f wave function. A refined description, which is needed to account for mixing of the pure Jz states in the cases of Ho and Er, however, has to include all sixth-order terms.
The dominance of the axial second-order LF term can be attributed to the large, almost axially distributed charge on the Ln dimers, including the single-electron σ-bond. The less important, though still significant for the Ho and Er systems, sixth-order contributions (axial and non-axial) can be attributed to the charge distribution on the carbon cage. This means that a transfer of the Ln dimers to other kinds of carbon cages will influence mainly the less important sixth-order terms and keep the present results qualitatively unaffected.
Supplementary Note 4. Spectra of Model Hamiltonians
Although the simple Ising-type Hamiltonian (Supplementary Equation 1) gives insights mostly into the ground state, a formal analysis of the spectrum of the complete Hamiltonian (Supplementary Equation 1) can be made, and the transition probability between different states i and f can be estimated from the averaged squared matrix elements of the magnetic moment (see the code PHI 10 for details):

$$|M_{i \to f}|^2 = \tfrac{1}{3}\left( |\langle i|\hat{\mu}_x|f\rangle|^2 + |\langle i|\hat{\mu}_y|f\rangle|^2 + |\langle i|\hat{\mu}_z|f\rangle|^2 \right)$$

For the collinear system, with the Keff value (Supplementary Equation 1) of 55 cm−1, one can estimate the barrier for the effective Orbach process, through the semi-degenerate exchange-excited doublets, to be 555.8 and 558.5 cm−1 (~800 K). For the non-collinear system, the Hamiltonian S1 is not expected to be fully adequate, but still, the formal spectrum and transition probabilities can be computed and the effective Orbach process barrier can be estimated. For the Keff value of 40 cm−1, there is a state with an energy of 259.6 cm−1 (374 K) which has an almost perpendicular orientation of the main anisotropy axis with respect to the ground state. Experimentally, an Orbach barrier of ~330 K is found for {Ho2}.
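A minimal numerical sketch of this kind of analysis is given below for a single J-multiplet: it builds the angular momentum matrices, diagonalizes an axial model Hamiltonian, and evaluates the averaged squared magnetic-moment matrix elements between eigenstates. The parameter values are illustrative (Tb-like), not those of the paper's Hamiltonians.

```python
import numpy as np

def angular_momentum(J):
    """Jx, Jy, Jz matrices in the |J, Jz> basis (Jz = J ... -J)."""
    dim = int(round(2 * J + 1))
    m = J - np.arange(dim)
    jp = np.sqrt(J * (J + 1) - m[1:] * (m[1:] + 1))
    Jp = np.diag(jp, k=1)                       # raising operator J+
    Jx = (Jp + Jp.T) / 2
    Jy = (Jp - Jp.T) / 2j
    return Jx, Jy, np.diag(m)

J, gJ, B20 = 6.0, 1.5, -9.3                     # illustrative Tb-like values
Jx, Jy, Jz = angular_momentum(J)
H = B20 * (3 * Jz @ Jz - J * (J + 1) * np.eye(int(2 * J + 1)))
E, V = np.linalg.eigh(H)

def transition_strength(i, f):
    """Average of |<i|mu_a|f>|^2 over a = x, y, z (mu in units of mu_B)."""
    return sum(abs(V[:, i].conj() @ (gJ * Op) @ V[:, f]) ** 2
               for Op in (Jx, Jy, Jz)) / 3

print(E[:4] - E[0])                             # lowest levels (two doublets)
print(transition_strength(0, 1))                # ~0: the direct transition
                                                # within the ground doublet is
                                                # forbidden in this axial model
```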
Magnetic properties
Magnetic susceptibility in the following discussion is defined as χ = M/H, both in experiment and in theory. Note that at high field the M/H ratio differs significantly from the differential susceptibility defined as the derivative of the magnetization with respect to the external field.
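The distinction can be illustrated with a generic paramagnetic J-multiplet described by the Brillouin function; the sketch below compares M/H with a numerical dM/dH. This is a textbook single-ion illustration with arbitrary g and J values, not the {Ln2} spin Hamiltonian.

```python
import numpy as np

# Compare chi = M/H with the differential susceptibility dM/dH at high
# field for a Brillouin-function paramagnet (illustrative J and g values).

MU_B_OVER_KB = 0.67171                   # mu_B / k_B in Kelvin per Tesla

def magnetization(B, T, J=6.0, g=1.5):
    x = g * MU_B_OVER_KB * J * B / T
    a = (2 * J + 1) / (2 * J)
    brillouin = a / np.tanh(a * x) - 1.0 / (2 * J * np.tanh(x / (2 * J)))
    return g * J * brillouin             # magnetization in Bohr magnetons

T = 2.0
B = np.array([0.01, 1.0, 7.0])           # Tesla
M = magnetization(B, T)
dMdB = (magnetization(B + 1e-4, T) - magnetization(B - 1e-4, T)) / 2e-4
print(M / B)    # M/H stays sizable once M saturates ...
print(dMdB)     # ... while dM/dH tends to zero at high field
```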
Simulations of the magnetic properties are based on the spin Hamiltonian (all calculated curves are powder-averaged):

$$\hat{H}_{spin}(\{Ln_2\}) = \hat{H}_{LF}(Ln_1) + \hat{H}_{LF}(Ln_2) - 2K_{eff}\, \hat{s} \cdot (\hat{J}_{Ln_1} + \hat{J}_{Ln_2})$$

where the ligand-field parameters in the single-ion terms $\hat{H}_{LF}$ are obtained from ab initio calculations, $\hat{s}$ is the spin of the unpaired electron on the Ln-Ln bonding orbital, and the exchange constant Keff is the only unknown parameter. The value of Keff is varied to find the best fit to the experimental data.
For {TbY}, the spin Hamiltonian reduces to the following form:

$$\hat{H}_{spin}(\{TbY\}) = \hat{H}_{LF}(Tb) - 2K_{eff}\, \hat{s} \cdot \hat{J}_{Tb}$$

where the ligand-field parameters in $\hat{H}_{LF}(Tb)$ are obtained from ab initio calculations. In fitting the magnetic data we used the following strategy: the χT curve measured for a given {Ln2} compound was fitted at only one value of the magnetic field, 1 T, to determine Keff. This Keff value was then used to calculate the χT curves for the other values of the magnetic field (3, 5, and 7 T), as well as the magnetization curves at different temperatures. The agreement between model and experiment is considered good when a single Keff value gives a good agreement for the whole set of experimental data.
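The sketch below illustrates this fitting strategy in its simplest, collinear Ising limit: each Ln moment is restricted to Jz = ±J, the bonding electron to ms = ±1/2, and the field is applied along the common easy axis. This is only a schematic stand-in for the actual procedure, which uses the full ab initio ligand-field states and powder averaging; all parameter values are illustrative.

```python
import numpy as np

# Minimal sketch of the K_eff fitting loop for a collinear {Ln2} system in
# the Ising limit.  Illustrates "scan K_eff against chi*T = (M/H)*T" only.

MU_B = 0.46686   # Bohr magneton, cm^-1 per Tesla
K_B = 0.69504    # Boltzmann constant, cm^-1 per Kelvin

def chiT(K_eff, T, B=1.0, J=6.0, gJ=1.5, gs=2.0):
    """chi*T (in mu_B*K per molecule) on a temperature grid T (array, K)."""
    states = [(m1, m2, ms) for m1 in (-J, J) for m2 in (-J, J)
              for ms in (-0.5, 0.5)]
    mu = np.array([gJ * (m1 + m2) + gs * ms for m1, m2, ms in states])
    E = np.array([-2.0 * K_eff * ms * (m1 + m2) for m1, m2, ms in states])
    E = E - mu * MU_B * B                            # Zeeman contribution
    w = np.exp(-(E[:, None] - E.min()) / (K_B * T[None, :]))
    M = (w * mu[:, None]).sum(axis=0) / w.sum(axis=0)  # Boltzmann-averaged M
    return M * T / B                                 # chi defined as M/H

T = np.linspace(2.0, 300.0, 150)
target = chiT(55.0, T)                               # stand-in "experiment"
best = min(np.arange(10.0, 100.0, 1.0),
           key=lambda K: np.sum((chiT(K, T) - target) ** 2))
print(best)                                          # recovers 55 cm^-1
```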
Due to the small mass of the samples, reliable estimations of the absolute magnetization values are possible only for {Dy2} (ref. 1), {Er2} and {Ho2}, which show the highest yield in the synthesis. For the other samples, the uncertainties in the mass determination introduced during sample preparation are too large to allow an accurate determination of the absolute magnetization values. We therefore use arbitrary units for these {Ln2} molecules.
DFT calculations
We used the VASP code, version 5.0, to optimize the molecular structures at the PBE-D level using PAW pseudopotentials with standard cutoffs as recommended. 3,4,5,6 The 4f shells of the lanthanide elements do not contribute to chemical bonding. Thus, we included the 4f shell in the core potential, i.e., we used the so-called open-core approximation (here, implemented as an unpolarized potential). The SCF calculations accounted for spin polarization of the valence states. This procedure is expected to provide realistic results for structures involving Ln ions. 15 The pseudopotential configuration 5p⁶6s²5d¹ was used for all Ln atoms. All molecular structures were optimized such that the residual forces on all atoms were below 10⁻⁴ eV/Å. | 2019-02-05T15:29:47.900Z | 2019-02-04T00:00:00.000 | {
"year": 2019,
"sha1": "795e692a2c90cf1c1bfe93d1a75d20dfca3f19ab",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41467-019-08513-6.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "795e692a2c90cf1c1bfe93d1a75d20dfca3f19ab",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
20360713 | pes2o/s2orc | v3-fos-license | Photo release of nitrous oxide from the hyponitrite ion studied by infrared spectroscopy. Evidence for the generation of a cobalt-N2O complex. Experimental and DFT calculations
a CEQUINOR, Facultad de Ciencias Exactas, Universidad Nacional de La Plata, CONICET (CCT La Plata), Boulevard 120 N° 1465, 1900 La Plata, Argentina b Comisión de Investigaciones Científicas CICPBA, Provincia de Buenos Aires, Argentina c Departamento de Ciencias Básicas, Universidad Nacional de Luján, Luján, Argentina d Departamento de Ciencias Básicas, Facultad de Ingeniería, Universidad Nacional de La Plata, La Plata, Argentina
Introduction
Photochemistry of compounds related to NO and other biochemically and environmentally relevant nitrogen-oxygen species is of considerable current interest [1,2].
The hyponitrite ion, structurally related to NO, also plays a relevant biological role as an intermediate species in the denitrification process, in which nitrous oxide is a product [3]. Specifically, the reduction of nitric oxide to nitrous oxide by NO reductase is thought to proceed through a hyponitrite intermediate in which the N2O2 2− is bonded between two ferric iron centers [4,5]. The chemistry of the hyponitrite ion has been reviewed by Bonner et al. [6]. The hyponitrite ion and its conjugate acids are in equilibrium in aqueous solution; depending on the pH, all these species undergo decomposition [7]. In the photolysis of an aqueous suspension of silver hyponitrite, NO was generated as a consequence of light irradiation [8].
The hyponitrite ion may bond to a transition metal centre in a number of ways. cis-Hyponitrite may coordinate a metal (Ni, Pt) through oxygen, acting as a bidentate ligand in a pseudo-square configuration [12]. In a binuclear heme Fe-Cu complex the trans-hyponitrite ion bridges two metal centers, coordinated through the nitrogen atom [13].
Nan Xu et al. [14] reported a binuclear iron porphyrin bridged by a hyponitrite ion bonded through oxygen. In the binuclear cation studied in this work the hyponitrite bridge is coordinated in an asymmetric way (-N(O)NO-), with one cobalt centre bonded through oxygen and the other through nitrogen (see Scheme 1c) [15,16].
In the present work the sodium, silver and thallium hyponitrite salts and a binuclear complex of cobalt bridged by hyponitrite (μ-hyponitrite bis[pentaamminecobalt(III)] 4+ ) were studied by irradiation with light of different wavelengths. The compounds were photolyzed in the solid state to avoid the hyponitrite decomposition, possible side reactions and complex equilibria in aqueous solution. Quantum chemical calculations were also carried out to support experimental results.
Spectroscopic Measurements
The IR spectra of the substances as KBr pellets or Nujol mulls were recorded in the 4000 to 400 cm −1 range on a Bruker Equinox 55 FTIR spectrometer with 4 cm −1 resolution.
The UV-Visible spectra of the solutions were registered using Chrom Tech CT-5700 series spectrophotometer in the range of 190-1100 nm, with 2.0 nm resolution. Quartz cuvettes: 1 cm path length.
The initial (reference) infrared spectra of the samples were recorded before the irradiation process. After an irradiation period, the light source was turned off and the final infrared spectra of the samples, which were used to monitor the progress of the photolysis, were recorded.
In monitoring the photolysis, two target regions of the infrared spectrum were used: the region 1200-900 cm−1, which contains the hyponitrite bands, to follow the disappearance of the reagent, and the region around 2200 cm−1, which contains the band due to the ν(NN)N2O vibration, to follow the generation of products. It should be noted here that the thallium and silver hyponitrite samples darkened after irradiation, presumably through formation of the respective oxides (M2O, M = Tl, Ag, Na) (see Results and Discussion section).
Kinetic Measurements
Kinetic curves for the binuclear cobalt complex were calculated from the band areas of infrared spectra scanned at different irradiation times (irradiation wavelength 253.7 nm). Experimental details of the sample preparation and spectroscopic setup were described in Sections 2.2 and 2.3.
For the decay kinetics, the hyponitrite band at 1033 cm−1 (δ(ONNO)) was chosen for the area calculation because it sits on a horizontal baseline and does not overlap with other bands; consequently, no extra band fitting was required. For the product rise, the ν(NN)N2O band at 2226 cm−1 was used because it is the most intense and does not overlap. We used the Opus software (version 4.2) of the Bruker infrared spectrometer for the band area calculation in absorbance mode. The kinetic curves were then fitted by the standard decay and rise functions discussed in Section 3.3.
Computational Details
Computational methods were used to investigate the electronic structure of the hyponitrite ion and the binuclear cobalt complex. Geometry optimizations of both species were performed. The bulk solvation effect was considered through the conductor-like polarizable continuum model (CPCM) [18]. Absorption spectral properties were calculated by non-equilibrium time-dependent density functional theory (TDDFT) [19]. Simulated UV-visible spectra, based on geometries optimized with Truhlar's functionals, were calculated for the lowest 80 singlet-singlet electronic transitions. In these calculations, the Dunning-Huzinaga-style augmented correlation-consistent polarized valence triple-zeta basis set (aug-cc-pVTZ) was used. For the hyponitrite anion [20,21], the calculations were performed with the M06-2X [24] hybrid DFT method [22]. For the geometry optimization and frequency calculations performed for the binuclear complex, the M06-L density functional [23] was preferred. The stabilities of the six predicted mononuclear cobalt complexes were tested by DFT-based calculations in order to support the photolysis reaction proposed here for the binuclear complex. The optimized geometries of the [(NH3)5CoNNO]+n and [(NH3)5CoONN]+n ion complexes with n = 2 and n = 3 were calculated at the M06-L/aug-cc-pVTZ level of theory using the Gaussian09 software [24]. The vibrational frequencies of these structures were also calculated at the same level of theory to confirm that they correspond to local minima on the energy surfaces [25].
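To compare such TD-DFT results with experiment, the discrete transitions (stick spectra) are commonly convoluted with Gaussian line shapes. The sketch below illustrates this step; the prefactor is the widely used conversion to a molar absorption coefficient for Gaussian broadening with the width given in cm−1, and the transition data are placeholders, not the values computed in this work.

```python
import numpy as np

# Gaussian broadening of a TD-DFT stick spectrum into a simulated UV-vis
# curve.  The prefactor 1.3062974e8 converts oscillator strengths into a
# molar absorption coefficient (M^-1 cm^-1) for Gaussian line shapes with
# sigma in cm^-1.  The two transitions below are placeholders.

def broaden(wavelengths_nm, osc_strengths, grid_nm, sigma=3000.0):
    nu = 1e7 / np.asarray(grid_nm, dtype=float)      # spectral grid, cm^-1
    eps = np.zeros_like(nu)
    for lam, f in zip(wavelengths_nm, osc_strengths):
        nu_i = 1e7 / lam                             # transition energy, cm^-1
        eps += 1.3062974e8 * f / sigma * np.exp(-(((nu - nu_i) / sigma) ** 2))
    return eps

grid = np.linspace(190.0, 500.0, 600)
eps = broaden([272.0, 233.0], [0.05, 0.20], grid)    # placeholder sticks
print(grid[np.argmax(eps)])                          # band maximum position, nm
```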
Electronic Spectra
The electronic spectrum of a silver hyponitrite suspension reported by Kumkely et al. [8] shows absorption bands at 207 (sh), 255 (sh) and 419 nm (broad). The last band was assigned to an LMCT (ligand-to-metal charge transfer) transition.
Our electronic spectrum of solid thallium hyponitrite in KBr pellets shows absorption bands at 262 nm, in the region 294-401 nm, and at 454 nm (sh). In agreement with the value reported by Poskrebyshev et al. (ε(248 nm) = 6550 M−1 cm−1) [7], the electronic spectrum of sodium hyponitrite (see Fig. 1a) includes an absorption band at 246 nm, which is blue-shifted with respect to those of the silver and thallium hyponitrites.
The calculations performed with hybrid DFT methods provided a reliable interpretation of the observed electronic spectra of the hyponitrite ion and of its photolysis behavior.
The experimental and calculated spectra of the hyponitrite ion are compared in Fig. 1a. Theoretical calculations predict an absorption band at 238 nm (the envelope of the two main electronic transitions), in good agreement with the experimental data (cf. ref. 6). The assignments of the electronic transitions predicted by quantum chemical calculations are collected in Table 1. The major contributions were found for the transitions at 272 and 233 nm. The first transition involves the HOMO and LUMO+3. The HOMO can be characterized as a π bonding molecular orbital mainly between the two nitrogen atoms, and LUMO+3 as a π antibonding molecular orbital mostly localized on the N-O bonds (HOMO and LUMO+3 are represented in Fig. 2). The transition at 233 nm, which involves HOMO-1 and LUMO+2, was understood as a transition from a bonding orbital delocalized over all the atoms to a non-bonding orbital on the oxygen atoms.
The transition at 435 nm involves the H-1 and L orbitals. H-1 can be described as a π non-bonding orbital mainly located on the hyponitrite moiety, and L as an orbital mainly centered on the O-Co-N bonding axis.
The 241 nm transition involves the H (O-N non-bonding and N-N π bonding) and L+4 (π antibonding hyponitrite) orbitals. Diagrams of the HOMO and LUMO+4 molecular orbitals of the binuclear cobalt complex are depicted in Fig. 3.
The experimental and calculated spectra for the [(NH3)5CoN(O)NOCo(NH3)5]4+ ion are compared in Fig. 1b, where the irradiation zones, which cover all the absorption regions, are represented by grey shaded areas (cf. Tables 1 and 2).
Hyponitrite Salts Photolysis
In order to monitor the photolysis progress, infrared spectra of the pellets (samples dispersed in a KBr matrix) were compared before and after irradiation, and the generated products were identified. The electronic spectrum shown in Fig. 1S (Supplementary material) substantiates that KBr is transparent to all the radiation used in this work. It is thus concluded that KBr can be used as a matrix component because it does not absorb any component of the employed radiation. The radiation sources were chosen so as to cover all the absorption regions in the electronic spectra of the samples.
The photolysis of the hyponitrite salts is wavelength dependent, as inferred from the electronic spectra described above and from the irradiation results described below. While sodium hyponitrite undergoes photolysis only with the 253.7 nm line, the thallium and silver salts are photolyzed at all the wavelengths used in our irradiation setup. Moreover, darkening after irradiation was observed only in the silver and thallium hyponitrite pellets (see product identification below).
The electronic transitions theoretically predicted for the hyponitrite ion explain the observed electronic spectra of Na2N2O2 (see Fig. 1a). It was assumed that both hyponitrite electronic transitions, H-1 → L+2 (π → πnb) and H → L+3 (π → π*), predicted at 238 nm, may be induced by excitation with light of 253.7 nm, very close to the experimental absorption band at 246 nm. However, this transition fails to reproduce the photolysis and the spectral shifts observed in the thallium and silver hyponitrites. We therefore hypothesize that the (reversible) charge-transfer transition from hyponitrite to the metal proposed in reference [8], which was not taken into account in the theoretical approach, is also possible. On the other hand, the energy difference between the hyponitrite HOMO and the LUMO configurations of the Na+, Tl+ and Ag+ ions in aqueous solution is expected to follow the sequence ΔENa > ΔETl > ΔEAg, in agreement with the experimental electronic spectral shifts. Fig. 4 compares the infrared spectra of sodium, silver and thallium hyponitrites before and after irradiation with the 253.7 nm line. The characteristic ν(NO) bands (around 1000 cm−1) decrease drastically with irradiation time and, at the same time, a new feature around 2230 cm−1 appears after irradiation for each salt (Fig. 4). Since these new doublets at around 2230 cm−1 are very close to, and show a very similar profile to, that reported for ν(NN) of N2O in the gaseous state [26] (see also Fig. 2S), we propose the formation of N2O gas, which remains trapped in the solid, as a product of the photolysis of the hyponitrite salts. This was also observed for the binuclear complex, as a singlet band at 2226 cm−1 (see following section).
Other bands grow during the irradiation of the sodium hyponitrite salt, at ~1400, 850, and 650 cm−1. They are assigned to the asymmetric stretching (ν3), out-of-plane bending π(CO3) (ν2) and δ(OCO) (ν4) modes, respectively, of a carbonate ion generated during the photolysis [27]. Fig. 5 shows the overall picture of the sodium hyponitrite infrared spectra before and after irradiation, together with a sodium carbonate reference. The ν(CO) band was also found in the infrared spectra of the irradiated binuclear complex, as a broadening of the 1391 cm−1 band (ν(NN)N2O2).
To establish whether the samples undergo photolysis without atmospheric CO2, the hyponitrite salts were irradiated under vacuum. For this purpose a quartz tube connected to a gas cell fitted with KBr windows was assembled (see Fig. 3S, Supplementary material). Pure samples of sodium, thallium and silver hyponitrites (without KBr dilution) were irradiated in the quartz tube under vacuum with UV light (253.7 nm). The infrared spectrum of the gas cell recorded after irradiation of sodium hyponitrite is shown in Fig. 2S. This spectrum is compared in the same figure with that of N2O from a cylinder, measured in the same instrument under similar conditions. Since both spectra are identical, it is concluded that N2O is also generated without the presence of atmospheric CO2, confirming that the reaction proceeds even without potassium bromide. We then suggest that atmospheric CO2 freely penetrates the pellet samples; but why does the N2O not escape from the solid? At first sight, the suggestions that CO2 freely penetrates the samples while N2O is unable to escape from them appear mutually exclusive. This apparent contradiction, and the darkening observed in the silver and thallium hyponitrite samples, require further explanation. CO2 penetration and N2O entrapment can be reconciled under the assumption that the N2O is trapped within the hyponitrite crystal packing, while the CO2 penetrates the KBr pellets and reacts with the reaction products on the microcrystal surface. Due to photolysis, the formation of M2O (M = Tl, Na, Ag) inside the microcrystals arises as a necessary product. This is also compatible with the darkening observed for silver and thallium hyponitrites after photolysis.
To check whether silver remains in oxidation state I after Ag2N2O2 photolysis, the chemical reaction proposed in [8] was tested. The black solid dissolved immediately at room temperature when contacted with ammonia solution (50%). Since elemental silver does not dissolve in aqueous ammonia, it was inferred that silver remains in oxidation state I (Ag2O) after photolysis.
The photolysis products identified in this work do not agree with those reported in [8] for the irradiation of silver hyponitrite. In that work the reported irradiation products were elemental silver and nitric oxide, a conclusion inferred indirectly from subsequent reactions of nitric oxide with oxygen and water to produce nitrous acid, which was detected in the electronic spectra [8]. The difference between those results and the present ones is probably due to Kumkely et al. carrying out their measurements in an aqueous suspension. It seems, therefore, that infrared spectroscopy in the solid state is a better analytical tool for determining photolysis products than indirect methods.
Binuclear Complex Photolysis
To analyze the changes during irradiation, we first briefly review the known infrared spectra of the binuclear complex. The infrared spectra of the binuclear cobalt hyponitrite complex show a band pattern distinct from that of the sodium, thallium and silver hyponitrites, as expected from the bond description shown in Scheme 1 (Table 3).
Mercer et al. [28] and Miki and Ishimori [29] reported the infrared spectrum of the binuclear complex cation. For a better assignment, these two studies included infrared spectra after 15N substitution of the bridge (50%). More recently, a structural study and a reinterpretation of the vibrational spectra of the binuclear complex were published [15]. This included the first Raman spectra, allowing new assignments for the bridged -ONN(O)- moiety. The information provided was also supported by quantum chemical calculations. The ν(NN) band was individualized when the nitrate counterions were replaced by bromide ions, avoiding the overlap of the nitrate antisymmetric stretching mode (ν3(NO)) with the ν(NN) hyponitrite band at 1391 cm−1. Infrared wavenumbers for the hyponitrite ions in the cobalt binuclear complex (14N, 15N) [15,28,29] and in the sodium [27,30,31], silver and thallium hyponitrite salts (this work) are summarized in Table 3.
Wavenumbers for the binuclear complex were also calculated in this work using a more precise quantum method than that reported in Ref. [15]. The values obtained for the hyponitrite moiety are collected in Table 1S (Supplementary material) and compared with the experimental and theoretical values reported in [15]. It can be seen from this table that the frequencies of the hyponitrite modes are confirmed and that the values in the present work are closer to experiment than those reported in [15].
The KBr-binuclear complex pellets were sequentially irradiated with the 488.0 nm (Ar+ laser), 340-460 nm (filtered high-pressure mercury lamp) and 253.7 nm (low-pressure mercury lamp) lines. As mentioned above for the hyponitrite salts, the photolysis progress was monitored by scanning infrared spectra before and after each irradiation. The light selected for irradiation covers all absorption regions of the studied compound. We observed that the binuclear complex photolysis only progressed when the wavelength of the exciting radiation was 253.7 nm. This wavelength is close to the absorption band at 264 nm and to the transition predicted by quantum chemical calculations at 241 nm. This was understood as a transition from O-N non-bonding and N-N π bonding orbitals to π antibonding hyponitrite orbitals (intraligand charge transfer). We hypothesize that the bond-order reduction produced in the hyponitrite moiety by this electronic transition may induce the chemical reaction observed experimentally.
Interestingly, the electronic transition that generates photolysis in sodium hyponitrite has the same description as that predicted for the binuclear cobalt complex, although the molecular geometry and hyponitrite bonds are different. Moreover, photolysis in both compounds is caused by the same irradiation wavelength and shows a similar magnitude of the molar extinction coefficient (ε). A better comparison should consider lattice effects.
The characteristic infrared bands of the bridged hyponitrite in the Co binuclear complex are located in the spectral region 1400-900 cm−1. Fig. 6 shows a selected infrared spectral region of the binuclear complex (KBr pellet) before and after irradiation with light of 253.7 nm. This figure also shows the spectra of a sample containing the 15N-labelled bridge. It can be seen that the hyponitrite bands decrease with increasing irradiation time, whereas a new band grows at 2226 cm−1 (2157 cm−1 in the 15N-substituted sample). Besides, a new shoulder at 1282 cm−1 and a broadening of the band at 588 cm−1 are detected. These features were confirmed by difference spectra. For the 15N-labelled sample these bands are shifted to 1261 and 580 cm−1, respectively. The wavenumbers and relative infrared intensities found in this study for the two sets of bands (14N and 15N) are in agreement with those previously reported for N2O gas [26,30,31]. Accordingly, we propose to assign the infrared bands observed at 2226, 1282 and 588 cm−1, as well as those observed at 2157, 1261 and 580 cm−1, to the ν(NN), ν(NO) and δ(NNO) vibrations of N2O for the 14N and 15N-substituted samples [31], respectively.
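As a rough consistency check on these assignments, the 15N shift of the ν(NN) band can be estimated from a harmonic diatomic (N-N) approximation, in which the frequency scales as the inverse square root of the reduced mass. N2O is of course a linear triatomic, so this is only an order-of-magnitude sanity check; the script below is illustrative.

```python
import math

# Crude check of the 15N shift of the nu(NN) band: in a harmonic diatomic
# (N-N) approximation the frequency scales as 1/sqrt(mu), with mu the
# reduced mass of the oscillator.

def reduced_mass(m1, m2):
    return m1 * m2 / (m1 + m2)

nu_14 = 2226.0                                   # cm^-1, observed 14N band
ratio = math.sqrt(reduced_mass(14, 14) / reduced_mass(15, 15))
print(nu_14 * ratio)                             # ~2151 cm^-1 vs 2157 observed
```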
The ν(NN) band at around 1391 cm−1 (14N normal samples) of the hyponitrite bridge becomes broader after irradiation. For the 15N-substituted sample (Fig. 6d) it is shifted to 1348 cm−1, as a shoulder of a very strong band at 1321 cm−1 (δ(NH3)) [15]. In spite of this shift, in the isotopic 15N sample the band at 1399 cm−1 still grows after irradiation. It is concluded, therefore, that this new band is not related to the bridge decomposition, because it does not shift with 15N isotopic substitution. We reason that it may be due to carbonate ion formation in the reaction between the photolysis products and atmospheric CO2.
The photolysis process was not complete after 6 h of irradiation, as shown by the features associated with the hyponitrite bridge, which did not vanish entirely. To verify whether the carbonate bands come from atmospheric CO2, two additional photolyses were carried out: one with the sample under vacuum (Oxford cryostat (OX8ITL) with KBr optical windows) and another with KBr pellets soaked with halocarbon oil. We found that in both setups the photolysis progressed in the same way as for the hyponitrite salts under vacuum, but the carbonate bands (1400, 850, and 650 cm−1) were not observed after photolysis. We therefore concluded that the measured carbonate bands arise from atmospheric CO2.
In Table 4 we review the N2O infrared bands under different conditions, in order to explain why, after photolysis, the ν(NN)N2O band is observed as a doublet in the hyponitrite salts but as a singlet in the binuclear complex. While some of these data were taken from our spectra, others, associated with different compounds containing the N2O moiety, were taken from the current literature.
This includes the unusual ruthenium and osmium N2O complexes [32,33] and low-temperature matrices of N2O diluted in argon, xenon [34] and nitrogen [31]. Accordingly, a singlet ν(NN) would be expected either if the N2O molecule bonds to a transition metal, forming a complex, or if it is trapped in a matrix at low temperature. In both cases no vibro-rotational contour would be observed because molecular rotations are inhibited (a typical doublet is observed for a free linear molecule). Therefore, the singlet band formed at 2226 cm−1 during the irradiation of the binuclear complex should be attributed to an N2O moiety coordinated to one cobalt center.
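The doublet contour of freely rotating N2O can be rationalized with a minimal rotational-band simulation: the unresolved P and R branches of a linear molecule produce two intensity maxima on either side of the band origin. The sketch below uses the known rotational constant of N2O and an approximate band origin; it is illustrative, not a fit to our spectra.

```python
import numpy as np

# Why free N2O gives a "doublet": the unresolved rotational structure forms
# P and R branches on either side of the band origin (no Q branch for this
# sigma-type stretch), producing two intensity maxima.  Bound or rigidly
# trapped N2O cannot rotate freely, so a single band remains.

B = 0.419                                       # cm^-1, N2O rotational constant
nu0 = 2224.0                                    # cm^-1, approximate band origin
kT = 0.69504 * 295.0                            # kT at room temperature, cm^-1

J = np.arange(0, 60)
weight = (2 * J + 1) * np.exp(-B * J * (J + 1) / kT)   # rotational populations
J_max = J[np.argmax(weight)]                    # most populated rotational level
print("R-branch maximum near", nu0 + 2 * B * (J_max + 1), "cm^-1")
print("P-branch maximum near", nu0 - 2 * B * max(J_max, 1), "cm^-1")
# The two maxima are separated by roughly sqrt(8*B*kT) ~ 26 cm^-1, i.e. the
# doublet contour observed for the trapped, freely rotating N2O.
```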
The assignment of the singlet band to coordinated N2O is further supported by the fact that the Ag+, Tl+ and Na+ cations do not form any complex with N2O, and the observed doublet does not shift significantly from the N2O(g) band positions, suggesting only a weak interaction (see arrow in Fig. 6). Hussain et al. reported a single band at 2237 cm−1 when N2O is adsorbed on ZnO at room temperature [35]. This singlet band suggests a strong interaction (or a chemical bond) between N2O and Zn2+, comparable to that determined for the binuclear complex studied in this work.
The single band observed for the ν(NN)N2O mode after the photolysis of the cobalt binuclear complex thus suggests a bond between Co and N2O.
The stability of the possible [(NH3)5CoN2O]+n complex was tested by DFT calculations in order to support our conclusions on the reaction products proposed for the binuclear complex. The optimizations of the six possible configurations of the [(NH3)5CoNNO]+n and [(NH3)5CoONN]+n ions, with n = 2 and n = 3, were carried out with the M06-L density functional and TZVP basis sets. For n = 2, low- and high-spin configurations were considered. Only two of the six likely complex structures converged to a minimum on the potential energy surface; among them is the [(NH3)5Co(N2O)]3+ ion discussed in the Conclusions. Based on the above results, the following reaction is proposed for the photolysis of the hyponitrite salts inside the microcrystals:

M2N2O2 → M2O + N2O (hν; M = Na, Tl, Ag)

The photo-reactions are not reversible, and the N2O(g) trapped in the corresponding microcrystals could be detected in the IR spectra.
The reaction with carbon dioxide is proposed to occur at the microcrystal surface. The N2O(g) generated on the surface would not be trapped in the KBr matrix, and the oxide formed there reacts with atmospheric carbon dioxide:

M2O + CO2 → M2CO3 (M = Na, Tl, Ag)

A similar result is observed for the binuclear complex, although in this case the N2O formed possibly remains bound to Co. As in the case of the saline hyponitrites, carbonate bands were also detected when the photolysis of the binuclear complex proceeded under atmospheric CO2. Based on the results discussed above, the [(NH3)5Co(N2O)]3+ ion is proposed as the product formed inside the microcrystals, and the reaction products may subsequently react with atmospheric CO2 to give the carbonate detected in the spectra. The degree of interaction of N2O with transition metals may be discussed by comparing band positions in different complexes. It is found that the ν(NN) bands move to higher frequencies in the complexes of Co, Ru and Os, while ν(NO) shifts to lower ones (Table 2), whereas no correlation for the δ(NNO) bands could be determined from the available data. These results suggest that the N2O-metal interaction increases in the order Co < Ru < Os.
The kinetics of the binuclear complex photolysis was also followed by infrared spectroscopy. Fig. 7 shows the decrease of the δ(ONNO) band area at 1033 cm−1 (red triangles) and the simultaneous increase of the ν(NN)N2O band area at 2226 cm−1 (blue circles). The band areas for both modes are given in Table 3S (Supplementary materials). For the first mode, the decay of the band area was fitted using the exponential decay function y = y0 + a·e−kt, with a = 1.93 ± 0.08 cm−1 and k = 0.022 ± 0.003 min−1, which shows that the decay of the bridge band area follows a first-order law.
The growth of the ν(NN)N2O band area was fitted using the function y = a(1 − e−kt), with a = 3.44 ± 0.02 and k = 0.0237 ± 0.0006 min−1. Since both fits yield the same k value within error, we conclude that N2O is generated from the decay of the bridge. Fig. 7 shows good agreement between the experimental kinetic values (symbols) and the fitting functions (black lines). Details of the fits to the decay and rise functions are given in the Supplementary Material.
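The fitting itself is a routine nonlinear least-squares problem. The sketch below shows how such decay and rise fits can be performed with SciPy; the time and band-area arrays are placeholders standing in for the values of Table 3S, synthesized here from the reported best-fit parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the kinetic fits: exponential decay of the delta(ONNO) band area
# and first-order rise of the nu(NN)_N2O band area.  The (t, area) arrays
# are placeholder data generated from the reported best-fit parameters.

def decay(t, y0, a, k):
    return y0 + a * np.exp(-k * t)

def rise(t, a, k):
    return a * (1.0 - np.exp(-k * t))

t = np.array([0.0, 10.0, 20.0, 40.0, 60.0, 90.0, 120.0, 180.0])  # minutes
area_bridge = decay(t, 0.30, 1.93, 0.022)        # placeholder "data"
area_n2o = rise(t, 3.44, 0.0237)                 # placeholder "data"

p_decay, _ = curve_fit(decay, t, area_bridge, p0=(0.3, 2.0, 0.02))
p_rise, _ = curve_fit(rise, t, area_n2o, p0=(3.0, 0.02))
print("decay k =", p_decay[2], "min^-1; rise k =", p_rise[1], "min^-1")
# Near-identical rate constants support that the N2O originates from the
# decomposition of the hyponitrite bridge.
```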
Conclusions
In this work the photolysis behavior of the sodium, silver and thallium salts of the hyponitrite anion, as well as of a binuclear cobalt complex bridged by the hyponitrite anion, was investigated. In all cases the samples were irradiated in the solid state with different wavelengths, coinciding with the absorption regions of the respective electronic spectra.
It was found that the photolysis of each compound depends selectively on the irradiation wavelength. Irradiation with 340-460 nm light and with the 488 nm laser line generates photolysis only in silver and thallium hyponitrite salts, while 253.7 nm light photolyzed all the studied compounds.
DFT calculations explained the electronic spectra of the free hyponitrite ion and of the binuclear cobalt complex in aqueous solution, since the predicted transitions are close to the absorption band maxima. The successful photolysis, achieved in all compounds with light of 253.7 nm, may also be assigned with the help of the DFT calculations. A charge-transfer transition inside the hyponitrite moiety, from π bonding to π antibonding molecular orbitals of N2O2 2−, in both the free hyponitrite ion and the binuclear complex, may induce the reaction observed in all samples. However, the calculations are unable to fully account for the photolysis of the sodium, thallium and silver hyponitrite salts. We assume that an additional reversible charge-transfer transition from hyponitrite to the metal might induce the photolysis processes in silver and thallium hyponitrites, producing the darkening of the pellets as a consequence of M2O formation.
The photolysis products were detected by infrared spectroscopy. N2O bands are observed for all the compounds after photolysis. Carbonate bands were also detected after photolysis when the samples were exposed to air. Photolysis in the solid state favored the identification of products by infrared spectroscopy, which cannot be carried out in solution.
The generation of nitrous oxide by photolysis of our compounds in the solid state is unambiguously established. However, the spectral profile of the nitrous oxide obtained from the hyponitrite salts differs from that observed for the binuclear complex. In the first case, the observed bands are due to free N2O molecules trapped in the microcrystals, while in the second, the observed single band is due to bonded N2O groups, similar to those measured for the nitrous oxide complexes reported in the literature [32,33].
Therefore, we conclude that this constitutes evidence for the formation of a cobalt complex with nitrous oxide, resulting from irradiation of the binuclear complex.
The proposed formation and stability of a [(NH3)5Co(N2O)]3+ complex are supported by DFT calculations.
It should be remarked here that different structures of hyponitrite ions were irradiated in this work (compare Scheme 1b (bottom) with 1c); however, the common product generated in all the photolysis processes is the N2O molecule.
Kinetic measurements of the binuclear complex photolysis can be explained by a first-order law, both for the intensity decay of the hyponitrite IR bands and for the intensity increase due to N2O generation. The kinetic parameters also indicate that the N2O generation originates in the decomposition of the hyponitrite bridge. | 2018-04-03T05:43:51.177Z | 2017-04-05T00:00:00.000 | {
"year": 2017,
"sha1": "981705048c5e6b73282f03ffeed410eb64cf922e",
"oa_license": "CCBYNCSA",
"oa_url": "http://sedici.unlp.edu.ar/bitstream/handle/10915/104767/Documento_completo.pdf?sequence=1",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "bec6c0f0d223b6d86424730158b1e3872c231c77",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
264436385 | pes2o/s2orc | v3-fos-license | Evaluating the Knowledge Base Completion Potential of GPT
Structured knowledge bases (KBs) are an asset for search engines and other applications, but are inevitably incomplete. Language models (LMs) have been proposed for unsupervised knowledge base completion (KBC), yet their ability to do this at scale and with high accuracy remains an open question. Prior experimental studies mostly fall short because they only evaluate on popular subjects, or sample already existing facts from KBs. In this work, we perform a careful evaluation of GPT's potential to complete the largest public KB: Wikidata. We find that, despite their size and capabilities, models like GPT-3, ChatGPT and GPT-4 do not achieve fully convincing results on this task. Nonetheless, they provide solid improvements over earlier approaches with smaller LMs. In particular, we show that, with proper thresholding, GPT-3 makes it possible to extend Wikidata by 27M facts at 90% precision.
Recently, LMs have been purported as a promising source of structured knowledge. Starting from the seminal LAMA paper (Petroni et al., 2019), a trove of works have explored how to better probe, train, or fine-tune these LMs (Liu et al., 2022).
Nonetheless, we observe a certain divide between these late-breaking investigations and practical KB completion. While recent LM-based approaches often focus on simple methodologies that produce fast results, practical KBC so far is a highly precision-oriented, extremely laborious process, involving a very high degree of manual labour, either for manually creating statements (Vrandečić and Krötzsch, 2014), or for building comprehensive scraping, cleaning, validation, and normalization pipelines (Auer et al., 2007; Suchanek et al., 2007). For example, part of Yago's success stems from its validated >95% accuracy, and according to (Weikum et al., 2021), the Google Knowledge Vault was not deployed into production partly because it did not achieve 99% accuracy. Yet, many previous LM analyses balance precision and recall or report precision/hits@k values, implicitly tuning systems towards balanced recall scores and thus towards impractical precision. It is also important to keep in mind the scale of KBs: Wikidata currently contains around 100 million entities and 1.2B statements. The cost of producing such KBs is massive. An estimate from 2018 sets the cost per statement at $2 for a manually curated statement, and 1 ct for automatically extracted ones (Paulheim, 2018). Thus, even small additions in relative terms might correspond to massive gains in absolute numbers. For example, even by the lower estimate of 1 ct/statement, adding one statement to just 1% of Wikidata humans would come at a cost of $100,000.
In this paper, we conduct a systematic analysis of the KB completion potential of GPT, with a focus on high precision. We evaluate by employing (i) a recent KB completion benchmark, WD-KNOWN (Veseli et al., 2023), which randomly samples facts from Wikidata, and (ii) a manual evaluation of subject-relation pairs without object values. Our main results are: 1. For the long-tail entities of WD-KNOWN, GPT models perform considerably worse than what less demanding benchmarks like LAMA (Petroni et al., 2019) have indicated. Nonetheless, we can achieve solid results for language-related, socio-demographic relations (e.g., nativeLanguage).
2. Despite their fame and size, out of the box, the GPT models, including GPT-4, do not produce statements of high enough accuracy, as is typically required for KB completion.
3. With simple thresholding, for the first time, we obtain a method that can extend the Wikidata KB at extremely high quality (>90% precision), at the scale of millions of statements.
Based on our analysis of 41 common relations, we would be able to add a total of 27M high-accuracy statements.
2 Background and Related Work

KB construction. KB construction has a considerable history. One prominent approach is human curation, as done, e.g., in the seminal Cyc project (Lenat, 1995); this is also the backbone of today's most prominent public KB, Wikidata (Vrandečić and Krötzsch, 2014). Another popular paradigm is extraction from semi-structured resources, as pursued in Yago and DBpedia (Suchanek et al., 2007; Auer et al., 2007). Extraction from free text has also been explored (e.g., NELL (Carlson et al., 2010)). A further popular paradigm has been embedding-based link prediction, e.g., via tensor factorization like Rescal (Nickel et al., 2011) and KG embeddings like TransE (Bordes et al., 2013). An inherent design decision in KBC is the P/R trade-off: academic projects are often open to trading these freely (e.g., via F-1 scores), yet production environments are often critically concerned with precision; e.g., Wikidata generally discourages statistical inferences, and industrial players likely rely to a considerable degree on human editing and verification (Weikum et al., 2021).
For example, in all of Rescal, TransE, and LAMA, the main results focus on metrics like hits@k, MRR, or AUC, which provide no bounds on precision.
LMs for KB construction. Knowledge extraction from LMs provides fresh hope for the synergy of automated approaches and high-precision curated KBs. It provides remarkably straightforward access to very large text corpora: the basic idea of (Petroni et al., 2019) is to define just one template per relation, then query the LM with subject-instantiated versions, and retain its top prediction(s). A range of follow-up works appeared, focusing, e.g., on investigating entities, improving updates, exploring storage limits, incorporating unique entity identifiers, and others (Shin et al., 2020; Poerner et al., 2020; Cao et al., 2021; Roberts et al., 2020; Heinzerling and Inui, 2021; Petroni et al., 2020; Elazar et al., 2021; Razniewski et al., 2021; Cohen et al., 2023; Sun et al., 2023). Nonetheless, we observe the same gaps as above: the high-precision regime, and the completion of already existing resources, are not well investigated.
Several works have analyzed the potential of larger LMs, specifically GPT-3 and GPT-4. They investigate few-shot prompting for extracting factual knowledge for KBC (Alivanistos et al., 2023) or for making the factual knowledge in an LM more explicit (Cohen et al., 2023). These models can aid in building a knowledge base on Wikidata or in improving the interpretability of LMs. Despite the variance in the precision of facts extracted from GPT-3, it can peak at over 90% for some relations.
Recently, GPT-4's capabilities for KBC and reasoning were examined (Zhu et al., 2023). This research compared GPT-3, ChatGPT, and GPT-4 on information extraction tasks, KBC, and KG-based question answering. However, these studies focus on popular statements from existing KBs, neglecting the challenge of introducing genuinely new knowledge in the long tail.
In (Veseli et al., 2023), we analyzed to which degree BERT can complete the Wikidata KB, i.e., provide novel statements. Together with the focus on high precision, this is also the main difference of the present work from the works cited above, which evaluate on knowledge already existing in the KB and do not estimate how much they could add.
Analysis Method
Dataset. We consider the 41 relations from the LAMA paper (Petroni et al., 2019). For automated evaluation and threshold finding, we employ the WD-KNOWN dataset (Veseli et al., 2023), which targets long-tail entities by randomly sampling from Wikidata a total of 4 million statements for 3 million subjects in the 41 relations (Petroni et al., 2019). Besides this dataset for automated evaluation, for the main results, we use manual evaluation on Wikidata entities that do not yet have the relations of interest. For this purpose, for each relation, we manually define a set of relevant subject types (e.g., software for developedBy), which allows us to query for subjects that miss a property.
Evaluation protocol. In the automated setting, we first use a retain-all setting, where we evaluate the most prominent GPT models (GPT-3 text-davinci-003, GPT-4, and ChatGPT gpt-3.5-turbo) by precision, recall, and F1. Table 1 shows that none of the GPT models could achieve a precision of >90%. In a second step, the precision-thresholding setting, we therefore sort predictions by confidence and evaluate by recall at precision 95% and 90% (R@P95 and R@P90). To do so, we sort the predictions for all subjects in a relation by the model's probability on the first generated token, then compute the precision at each point of this list, and return the maximal fraction of the list covered while maintaining precision greater than the desired value. We threshold only GPT-3, because only GPT-3's token probabilities are directly accessible in the API, and because the chat-aligned models do not outperform it in the retain-all setting.
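The thresholding procedure just described can be made concrete with a short sketch; the helper name and the toy (confidence, correctness) pairs below are ours, not from the paper's code.

```python
# Recall-at-precision over a ranked prediction list, as described above:
# sort by confidence, scan the list, and keep the largest covered fraction
# whose running precision still meets the target.
from typing import List, Tuple

def recall_at_precision(preds: List[Tuple[float, bool]], target: float) -> float:
    """preds: (confidence, is_correct) pairs; returns max coverage at >= target precision."""
    ranked = sorted(preds, key=lambda p: p[0], reverse=True)
    best, correct = 0.0, 0
    for i, (_, ok) in enumerate(ranked, start=1):
        correct += ok
        if correct / i >= target:
            best = i / len(ranked)
    return best

# Toy usage: coverage achievable at 90% precision over five scored predictions.
scored = [(0.99, True), (0.97, True), (0.90, False), (0.85, True), (0.60, False)]
print(recall_at_precision(scored, 0.90))   # -> 0.4
```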
Approaches to estimate probabilities post-hoc can be found in (Xiong et al., 2023). Since automated evaluations are only possible for statements already in the KB, in a second step, we let human annotators evaluate the correctness of 800 samples of novel (out-of-KB) high-accuracy predictions. We hereby use a relation-specific threshold determined from the automated 75%-95% precision range. MTurk annotators could use Web search to verify the correctness of our predictions on a 5-point Likert scale (correct/likely/unknown/implausible/false). We counted predictions rated as correct or likely as true predictions, and all others as false.
Prompting setup. To query the GPT models, we utilize instruction-free prompts listed in the appendix. Specifically for GPT-3, we follow the prompt setup of (Cohen et al., 2023), which is based on an instruction-free prompt entirely consisting of 8 randomly sampled and manually checked examples. In the default setting, all example subjects have at least one object. Since none of the GPT models achieved precision >90% and we can only threshold GPT-3 for high precision, we focus on the largest GPT-3 model (text-davinci-003) in the following. We experimented with three variations for prompting this model: 1. Examples w/o answer: following (Cohen et al., 2023), in this variant we manually selected 50% few-shot examples where GPT-3 did not know the correct answer, to teach the model to output "Don't know". This is supposed to make the model more conservative in cases of uncertainty. 2. Textual context: additionally providing retrieved textual context alongside the query (evaluated under "Does textual context help?" in the Results). 3. Number of few-shot examples: varying the number of in-context examples (evaluated under "How many few-shot examples should one use?").
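For concreteness, here is a minimal sketch of how such an instruction-free few-shot prompt could be assembled, inferred from the "subject # relation" query format shown in Appendix A. The example facts, the relation name, and the "Don't know" marker below are illustrative assumptions rather than the paper's exact prompts.

```python
# Assemble an instruction-free few-shot prompt: k worked examples, then the query.
def build_prompt(examples, subject, relation):
    lines = [f"Q: {s} # {relation}\nA: {o}" for s, o in examples]
    lines.append(f"Q: {subject} # {relation}\nA:")
    return "\n".join(lines)

native_language_examples = [
    ("Victor Hugo", "French"),
    ("Franz Kafka", "German"),
    ("Haruki Murakami", "Japanese"),
    ("Unknown Person X", "Don't know"),   # the "examples w/o answer" variant
]
print(build_prompt(native_language_examples, "Gabriel García Márquez", "native language"))
```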
Results and Discussion
Can GPT models complete Wikidata at precision AND scale? In Table 1 we already showed that, without thresholding, none of the GPT models can achieve sufficient precision. Table 2 shows our main results when using precision-oriented thresholding, on the 16 best-performing relations. The fourth column shows the percentage of subjects for which we obtained high-confidence predictions, the fifth how these translate into absolute statement numbers, and the sixth the percentages that were manually verified as correct (sampled). In the last column, we show how this number relates to the current size of the relation. We find that manual precision surpasses 90% for 5 relations, and 80% for 11. Notably, the best-performing relations are mostly related to socio-demographic properties (languages, citizenship).
In absolute terms, we find a massive number of high-accuracy statements that could be added to the writtenIn relation (18M), followed by spokenLanguage and nativeLanguage (4M each). In relative terms, the additions could increase the existing relations by up to 1200%, though there is a surprising divergence (4 relations over 100%, 11 relations below 20%).
Does GPT provide a quantum leap? Generating millions of novel high-precision facts is a significant achievement, though the manually verified precision is still below what industrial KBs aim for. The wide variance in relative gains also shows that GPT only shines in selected areas. In line with previous results (Veseli et al., 2023), we find that GPT can do well on relations that exhibit high surface correlations (person names often give away their nationality); otherwise the task remains hard.
In Table 3 we report the automated evaluation of precision-oriented thresholding. We find that on many relations, GPT-3 can reproduce existing statements at over 95% precision, and there are significant gains over the smaller BERT-large model. At the same time, it should be noted that (Sun et al., 2023) observed that for large enough models, parameter scaling does not improve performance further, so it is well possible that these scores represent a ceiling w.r.t. model size.
Is this cost-effective? Previous works have estimated the cost of KB statement construction at 1 ct (highly automated infobox scraping) to $2 (manual curation) (Paulheim, 2018). Based on our prompt size (avg. 174 tokens), the cost of one query is about 0.35 ct, with filtering increasing the cost per retained statement to about 0.7 ct. So LM prompting is monetarily competitive with previous infobox scraping works, though with much higher recall potential.
In absolute terms, prompting GPT-3 for all 48M incomplete subject-relation pairs reported in Table 2 would amount to an expense of $168,000, and yield approximately 27M novel statements.
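As a sanity check, the arithmetic behind these figures can be reproduced directly from the numbers given in the text (48M queries at roughly 0.35 ct each, 27M retained statements):

```python
# Back-of-the-envelope reproduction of the cost estimate above.
incomplete_pairs = 48_000_000          # subject-relation pairs to query
cost_per_query_usd = 0.0035            # ~0.35 cents per prompt (avg. 174 tokens)
retained_statements = 27_000_000       # high-confidence predictions kept

total_cost = incomplete_pairs * cost_per_query_usd
print(f"total prompting cost: ${total_cost:,.0f}")                        # ~$168,000
print(f"cost per retained statement: {100 * total_cost / retained_statements:.2f} ct")
```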
Does "Don't know" prompting help?In Table 4 (middle) we show the impact of using examples without an answer.The result is unsystematic, with notable gains in several relations, but some losses in others.Further research on calibrating model confidences seems important (Jiang et al., 2021;Singhania et al., 2023).
[Table 3 (header): per-relation R@P95 and R@P90 values for GPT-3 text-davinci-003, GPT-3 text-curie-001, and BERT-large; the numeric entries are not recoverable here.]

Does textual context help? Table 4 (right) shows the results for prompting with context. Surprisingly, this consistently made performance worse, with hardly any recall beyond 90% precision. This is contrary to earlier findings like (Petroni et al., 2020) (for BERT) or (Mallen et al., 2023) (for QA), who found that context helps, especially in the long tail. Our analysis indicates that, in the high-precision bracket, misleading contexts cause more damage (leading to high confidence in incorrect answers) than helpful contexts do good (boosting correct answers).
How many few-shot examples should one use?
Few-shot learning for KBC works with remarkably few examples. While our default experiments, following (Cohen et al., 2023), used 8 examples, we actually found no substantial difference down to example counts as low as 4.
Conclusion
We provided the first analysis of the real KB completion potential of GPT. Our findings indicate that GPT-3 could add novel knowledge to Wikidata at unprecedented scale and quality (27M statements at 90% precision). Compared with other approaches, the estimated cost of $168,000 is surprisingly cheap, and well within the reach of industrial players. We also find that, in the high-precision bracket, GPT-3 distills web content to a degree that context augmentation does not easily improve upon.
Open issues remain, in particular around identifying high-confidence predictions within an LM's generations (Jiang et al., 2021; Singhania et al., 2023; Xiong et al., 2023), and around the choice of examples.
Limitations
Using LMs for automated knowledge generation comes with the standard risk of exacerbating demographic biases. For example, many of the best-performing relations are language-related, where the model presumably often estimates a person's native language entirely from their name.
In terms of reproducibility, it should be noted that our results are tied to a closed-source commercial API. Although GPT-3/4/ChatGPT are widely used in research and industry, and OpenAI has announced plans to keep stable model versions online, long-term reproducibility is not ensured, and the internal workings of GPT are not publicly known.
Although statement generation is at the core of KB completion, a complete KBC pipeline still requires further critical components. This concerns in particular entity disambiguation, which is essential for relations with more than a few hundred possible object values. Similarly, Wikidata and other KBs give critical importance to scrutable referencing of statements, which is not easily possible with LMs.
A Predictions
In the following we show some predictions from our different experimental setups. Q marks the query, P the prediction by GPT-3, and A the ground-truth object. Please note that to prompt GPT-3, relation-specific examples are used at the beginning of each query. The prompts with examples can be seen in Appendix B.
A.1 Standard
This section shows predictions from our standard setup with 8 few-shot examples. The prompts can be seen in Appendix B.1.
A.1.1 GPT-3
Q: As It Is in Heaven # original language
P: Swedish
A: Swedish
Q: John Gruber # employer
P: Daring Fireball
A: The Pittsburg Times, Pittsburgh Post, Pittsburgh Gazette, The Gazette Times, Pittsburgh Pirates
Q: Hillary Canyon # continent
P: Antarctica
A: North America
Q: The Incredible Hulk # developer
P: Edge of Reality
A: Edge of Reality, Amaze Entertainment
Q: Architecture and secondary structure of an entire HIV-1 RNA genome # language of work
P: English
A: English

A.1.2 GPT-4
Q: Who are the developers of The Incredible Hulk? Give me a list with no additional text.
P: Edge of Reality
A: Radical Entertainment, Edge of Reality, Amaze Entertainment, Pocket Studios, Universal Interactive
Q: In which languages is Architecture and secondary structure of an entire HIV-1 RNA genome available? Give me a list with no additional text.
P: English
A: I'm sorry, but I couldn't find specific information about the languages in which the mentioned scientific study or information is available.
A.1.3 ChatGPT
Q: Who are the developers of The Incredible Hulk? Give me a list with no additional text.
P: Edge of Reality
A: Edge of Reality, Amaze Entertainment, Beenox, Foundation 9 Entertainment, Griptonite Games, Fizz Factor, Nucore
Q: In which languages is Architecture and secondary structure of an entire HIV-1 RNA genome available? Give me a list with no additional text.
P: English
A: English
A.2 Textual context
In this section we show some predictions from our standard setup with textual contexts. The examples used for prompting in this setup can be seen in Section B.2.
Table 4: Effect of variations to the standard prompting setting.
Q: St. Lawrence River # continent
C: The St. Lawrence River is a large river in the middle latitudes of North America. Its headwaters begin flowing from Lake Ontario in a roughly northeasterly... St. Lawrence River, hydrographic system of east-central North America. It starts at the outflow of Lake Ontario and leads into the Atlantic Ocean in the...
A: North America
Q: Cerro El Charabón # continent
C: 65, Estancia El Charabón. 49. 66, Área costero-marina Cerro Verde e Islas de la Coronilla-Área General. 48. 67, Area protegida Laguna de Castillos - Tramo... Casa del Sol Boutique Hotel. A cozy stay awaits you in Machu Picchu. ... Altiplánico San Pedro de Atacama ... Welcome to El Charabon. El Charabon.
A: Americas
Q: Hinterer Seekopf # continent
C: Following the breakup of Pangea during the Mesozoic era, the continents of ... of the best day hikes in Kalkalpen National Park is the Hoher Nock - Seekopf. Dec 5, 2016 ... Hinterer Steinbach. Inhaltsverzeichnis aufklappen ... Inhaltsverzeichnis einklappen ... Charakteristik. Hinweise; Subjektive Bewertung...
A: Europe
Q: Šembera # continent
C: Rephrasing Heidegger: A Companion to Heidegger's Being and Time [Sembera, ... Being and Time (Suny Series in Contemporary Continental Philosophy). Feb 26, 2016 ... Coming from Uganda, UNV PO Flavia Sembera was familiar with diversity. ... shared across the continent while experiencing Zambia's beautiful...
A: Europe | 2023-10-24T18:42:31.346Z | 2023-10-23T00:00:00.000 | {
"year": 2023,
"sha1": "6778fbc38179f3f79afd36f7ac091bcb3ad111f6",
"oa_license": "CCBY",
"oa_url": "https://aclanthology.org/2023.findings-emnlp.426.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "6d7fe2887731c5acdf571ae5f1e7fdd25d571bcf",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
239462898 | pes2o/s2orc | v3-fos-license | COVID-19, CITIZEN’S PURCHASING BEHAVIOUR AND SMART CITY GOALS. EVIDENCE FROM BARCELONA
Even if we are unable to prevent harmful viruses from emerging, a Smart City (SC) can be seen as a means to lessen the effects of future pandemics on society. The COVID-19 outbreak has had not only economic consequences, but also social and environmental ones. This paper investigates how citizens' purchasing behaviors have drastically changed because of the pandemic in the region of Barcelona. The results gathered from an online questionnaire distributed to Barcelona's citizens were interpreted and analyzed, leading to the conclusion that COVID-19 is acting as an accelerator for the implementation of more smart city initiatives and therefore helps achieve, to a certain extent, smart city goals in Barcelona.
INTRODUCTION
The concept of a SC was developed to address the significant challenge of urban development faced by cities in the 21st century and fueled by globalization. As the world becomes more interconnected, there has been "an increase in the number of residents and a concentration in urban areas" (Dirks, Gurdgiev & Keeling 2010, p. 199). This is reflected in the United Nations (2018) reports, which show how the population in cities is projected to increase from 55% to 68% by 2050. This scenario calls for cities to find ways to manage new challenges; some have started to develop high-quality and more efficient public transport, smart lighting, waste management (Derqui, Grimaldi, & Fernandez, 2020), and even to move towards renewable energies (Grimaldi & Fernandez, 2015).
Regardless of these efforts, cities are still exposed and vulnerable to unforeseeable events, such as infectious disease outbreaks and pandemics (Costa & Peixoto 2020). To be able to prepare for and overcome them, cities need to become more efficient and sustainable (Ramirez, Palominos, Camargo, & Grimaldi, 2021), that is, to become "smarter" (IBM 2010, p. 1). Smart cities (SC hereafter) can be said to represent part of the solution. A SC nowadays is one that uses ICT and other means to improve quality of life and economic competitiveness while achieving sustainable development to meet the needs of present and future generations (International Telecommunication Union 2014). This paper explores how citizens' purchasing behaviors have drastically changed because of the pandemic, and whether these changes have helped in the achievement of SC goals. The research objectives are therefore twofold:
• To explore how Covid-19 has evolved smart cities
• To identify what changes citizens have experienced in their purchasing behavior as a result of Covid-19
In this study, the SC concept is explored to respond to the following research question:
• Have changes in citizens' purchasing behavior due to Covid-19 helped in the achievement of SC goals?
Case study research is considered fruitful for studying incipient phenomena. As an analytical tool, case studies are increasingly common in business research. They have been used to analyze very complex phenomena (Liedtka & Liedtka, 2014; Yin & Yin, 2013). They are particularly appropriate in the early phases of theory development, when key variables and their relationships are to be explored (Gibbert, Ruigrok, & Wicki, 2008). To examine our research question, we use the case study of Barcelona (Spain).
Identifying smart city goals
Like any other public or private organization, the metrics for the assessment of a SC should consider social, economic, and environmental aspects (Oliveira, Oliver & Ramalhinho 2020). These three spheres are closely related to the theory of the Triple Bottom Line (TBL) (Elkington 1994), a well-known framework that all entities -including cities- should acknowledge before making decisions. Altogether, the literature agrees on the fundamental objective of a SC: to protect and ensure citizens' safety (Dameri 2017). Indeed, technology is a vital part of a SC, but citizens need to embrace it in their daily lives. Eggers and Skowron (2018) agree with Dameri (2017) and highlight how the focus of any SC should be on its people. On the other hand, while the international literature states that the final aim of a SC is to improve the quality of life in a city (Dameri 2017), Oliveira, Oliver and Ramalhinho (2020) also comment on the challenges that arise when engaging citizens in social decisions. More research is needed in this area, as few authors address this equally important subject.
How has COVID-19 evolved the SC existing framework?
Chourabi et al. (2012, p. 2290) developed a framework to illustrate the relationship between eight factors in a SC (shown in Figure 1) and how they can be influenced by, or influence, other factors. The factors in the inner circle (organization, technology, policy) are the only variables that can directly reach and affect SC initiatives. On the other hand, the outer circle factors (governance, people/communities, economy, built infrastructure, and natural environment) are highly likely to be altered more than the inner circle, as they are more exposed and vulnerable to outer conditions. Consequently, as the outer circle is more susceptible to the macroenvironment, these factors could be affected, among other things, by the rapid and uncontrolled propagation of the Coronavirus disease (Covid-19). Covid-19 is a highly transmissible infectious disease that emerged in Wuhan, China, and rapidly spread around the world in 2020. Lockdowns and mandatory quarantines in cities -where 95% of the cases came from (UN Habitat 2020)- were some of the actions taken by governments to limit the movement of their citizens, intended to control and minimize the devastating effects of the pandemic.
PEOPLE AND COMMUNITIES
By closing their borders, limiting citizens' movement, and even making them stay at home for months, governments took away an essential right from their citizens, that is, their freedom of movement (Donthu & Gustafsson 2020). Covid-19 has directly attacked an essential social factor in the SC framework, people and communities, by putting at risk both their physical and mental health. They have been deprived of the good "quality of life" that a SC offers, as well as of the opportunity to engage with the SC initiatives framework that stimulates "more informed, educated, and participatory citizens" (Chourabi et al. 2012, p. 2293).
ECONOMY
SC initiatives intend to create economic outcomes, such as job creation, business creation, and improvements in productivity (Chourabi et al. 2012). However, the economic variable of the framework has been attacked by Covid-19. Businesses have been forced to shut down, likely causing bankruptcy for many local brands (Tucker 2020) and leaving many people feeling economically pressured by losing their jobs and stability. For example, the travel industry has seen 80% of hotel rooms stand empty and airlines cut their workforce by 90% (Asmelash & Cooper 2020).
NATURAL ENVIRONMENT
Before the pandemic, cities consumed over two-thirds of the world's energy and produced over 70% of carbon dioxide emissions. However, since Covid-19 emerged, there has been a noticeable drop in pollution and greenhouse gas emissions worldwide. For instance, China's emissions of harmful gases have declined by 25%, increasing air quality by 11.4% and saving up to 50,000 lives (Khan, Shah & Shah 2020). These have been the positive results of the industrial shutdowns -both in the manufacturing and economic sectors- experienced by the country of China alone.
Key trends of citizen's purchasing behavior
With lockdowns, staying at home, and social distancing, Covid-19 has generated significant disruptions in purchasing behavior.
E-COMMERCE
Online retail volume is expected to grow by up to 15% through 2023 (Zierlein et al. 2020), as online channels have become a new trend for purchasing goods and services because of closed stores and contact bans. The biggest change has been in the online purchase of groceries and medicines, even though stores for this category remained open (Zierlein et al. 2020). Certainly, market preferences are now based on the most fundamental needs -food, health, and hygiene- while non-essential categories are falling. This rise in cautious and cost-conscious purchasing behavior is the economic result of Covid-19 on citizens (Wright & Blackburn 2020, p. 12). As many local businesses have been forced to shut down and people have been left without jobs, keeping financial stability is one of the main concerns people have.
HOME CONSUMPTION
Spending on out-of-home activities has ceased as a result of closed restaurants, bars, entertainment attractions, and recreational facilities. As a result, there has been a drastic increase in the usage of the internet and social media (Nowland, Necka & Cacioppo 2018), which not only keep up human interactions but also maintain education and remote work (Donthu & Gustafsson 2020). This means citizens' behavior has changed to embrace digital technologies and to spend more on internet-based tools to stay connected.
TRANSPORTATION
The global shared mobility market experienced a boom before Covid-19 emerged, being valued at USD 104.95 billion in 2017. Despite its advantages, Covid-19 has shifted consumers' behavior back to old habits: private ownership. To avoid infections in shared spaces, and prioritizing their health above convenience, a large number of consumers are demanding fewer services from the sharing economy, even after the Covid-19 crisis (Zierlein et al. 2020). It seems that the acceleration of future mobility has suddenly come to a halt.
Impact of the change in purchasing behavior on the SC framework
Proximity commerce is one area severely affected by Covid-19. If this new, safer, and more convenient trend of purchasing online continues, it could negatively impact SCs economically, socially, and environmentally.
ECONOMIC IMPACT
The economic-financial crisis that took place worldwide in 2008 caused the closing of countless local shops and unemployment. Citizens suffered from financial instability; the same scenario has been experienced as a result of Covid-19. Jorda, Sing & Taylor (2020) state that, following the patterns of other historical pandemics, there is a tendency to save our capital and discourage investments, resulting in a decrease in economic growth. Similarly, if online buying from multinationals keeps being the preferred choice of citizens, we will find ourselves in the same economic situation, that is, experiencing "a desertification of the urban economic environment" (Grimaldi, Fernandez & Carrasco 2018, p. 250), making cities unattractive and without means to invest in SC initiatives (such as those shown in Figure 1). All the mentioned factors could dangerously put at risk the economic goal of a SC: "creating economic competitiveness by attracting industry and talent" (Eggers & Skowron, 2018, p. 3).
SOCIAL IMPACT
In parallel, this desertification does not only damage a city's economy but also creates "a social problem of security and quality of urban life" (Grimaldi, Fernandez & Carrasco 2018, p. 249). Local shops in the city cover cultural, leisure, artistic, entertainment, sports, and gardening aspects that bring citizens together in the streets (SenStadt 2007). The closing of local retailing leaves large proportions of unoccupied space in the city, leading to people not walking the streets and, therefore, decreasing their security (Grimaldi & Fernandez, 2017). According to Carley et al. (2000), derelict shopping areas build up neighbors' feeling of "dead frontage", and Grimaldi, Fernandez, and Carrasco (2018) believe that this adds to parents not allowing their children to walk to school (due to their increased impression of the city's dangerousness). Of course, this domino effect would be avoided if local shops opened and citizens stopped purchasing online, giving security back to the streets. It is part of the SC goals to ensure citizens' safety as well as economic stability.
ENVIRONMENTAL IMPACT
Finally, a decline in local retailing can also cause severe environmental issues. As previously mentioned, the lack of people on the street due to the impression of insecurity forces people to take other forms of transportation to move around, increasing congestion and contamination (Grimaldi, Fernandez & Carrasco 2018). In addition, large online companies, such as Amazon, have become major pollutants as people's purchasing behavior shifted online. In 2019, before the drastic increase in their sales because of Covid-19, Amazon's carbon footprint rose by 15% (Pisani 2020). Despite Barcelona's efforts to campaign against online consumption, urging citizens to "think twice" before buying, more initiatives need to be taken to regulate this growing issue, which goes against environmental SC goals.
On the other hand, as new forms of transportation such as micromobility rise, cities have acted accordingly to adapt to this new trend. Cities have started to redefine car lanes to create more space for bikes and e-scooters as people begin to avoid public transportation (Heineke et al. 2020). For instance, Barcelona's mayor stated how "with paint we have recovered 30,000 square meters to be able to do bike lanes, walk with more space and have more space for bars and restaurants terraces for physical distance" (Smart City Expo World Congress, 2020). This has created an opportunity to promote greener transportation alternatives and to contribute to the environmental goal of a SC: "an environmentally conscious focus on sustainability".
METHODOLOGY
As noted above, case studies are increasingly common as an analytical tool in business research, have been used to analyze very complex phenomena (Liedtka & Liedtka, 2014; Yin & Yin, 2013), and are particularly appropriate in the early phases of theory development, when key variables and their relationships are to be explored (Gibbert et al., 2008). To examine our research question, we employ a two-step approach using the case study of Barcelona (Spain).
As a first step, before running the survey, a theoretical model was created (shown in Appendix I) to define the three main constructs of the questionnaire: COVID-19 impact on society, citizens' change in purchasing behavior, and impact on cities' attractiveness. Then, variables for each construct were identified to help define the questions in the questionnaire. For example, to further interpret the construct of the impact COVID-19 has had on society, the number of deaths, the number of people in hospital, and the number of news items are statistics that assist in describing the pandemic's repercussions on society. The major challenge in the design of this theoretical model was the wide range of variables that could be taken into account when defining each construct; a significant number had to be discarded to ensure the questionnaire did not end up with forty questions.
The second step was to build a survey based on an online questionnaire (shown in Appendix III) to explore and test the relationships between constructs. The survey was created using a cloud-based data management tool, Google Forms (Raju & Harinarayana 2016). The survey required a consistent structure, for which we opted for a Likert scale, as the first testing of the questionnaire exposed unclear results and confusion. In addition, to distribute it among Barcelona's citizens, social media platforms such as WhatsApp, LinkedIn, Instagram, and Facebook were used, enabling a rapid spread of responses. Social media was an advantage in reaching Barcelona's citizens, but responses also extended outside this region, which was not our initial focus. This raised the number of samples obtained, yet these out-of-region results had to be discarded, leaving a lower sample size than desired. Appendix IV illustrates a more detailed analysis of the methodology conducted, with a workflow chart.
Finally, along with the model and the survey, three hypotheses were developed (shown in Appendix II):
Hypothesis H1: The more COVID-19 impact, the less attractive cities are.
Hypothesis H2: The more COVID-19 impact, the more citizens change their purchasing behaviors.
Hypothesis H3: The more people change their purchasing habits, the less attractive cities are.
Demographic results
The online questionnaire gathered a total of 243 responses through various forms of digital distribution. While the main age group of respondents was 18 to 24 years old (34.7%), there were responses across all 7 age groups, ranging from under 18 to over 65 years old. With respect to employment status, 49.8% reported being currently employed; other respondents reported both studying and working, only studying, or being retired. Respondents were also asked where they currently resided, to which the majority (66.7%) answered inside the province of Barcelona. This was done to verify the target respondents, as this paper is a case study of the region of Barcelona only. The remaining responses were discarded due to the scope of this study.
Impact on cities attractiveness results
To determine the economic impact of COVID-19 on businesses, people were asked whether their revenues had decreased compared to 2019. While the most popular response (29.2%) was neither agreeing nor disagreeing with this statement, an extremely high 67.4% strongly agreed or agreed when asked if their expenses had decreased (also compared to 2019). 77% of the respondents whose revenues had not been affected by Covid-19 and whose expenses had decreased still considered themselves pessimistic regarding Barcelona's economic future. On the other hand, of the 50 respondents who declared themselves optimistic, 78% affirmed to be currently working.
Likewise, respondents were asked if, after quarantine, they had purchased more local products -as local shops started to reopen- in order to evaluate the current economic situation in Barcelona from the citizens' point of view. 49.8% agreed that they purchased more locally than before Covid-19; the most prominent argument, at 86%, was to support local shops. Although the majority of respondents now purchase locally, 20.6% still prefer purchasing in large supermarkets. The dominant reasoning was convenience (36.7%), along with competitive prices (24.5%) and security concerns in terms of Covid-19 exposure (24.5%).
Respondents were further asked about their use of public or shared transport, to enable a comparison of usage before and after quarantine restrictions. 55.6% agreed that, before Covid-19, their frequency of use of public or shared transportation was higher. Then, the following statement was posed: "now, you have significantly reduced public or shared forms of transport", to which over half (52.7%) of the targeted population strongly agreed or agreed.
Finally, to understand the social effect Covid-19 has had on purchasing behaviors, two variables were examined: the use of online services and the sense of security of Barcelona's citizens. Since Covid-19 emerged, 66.2% of respondents admitted to using more online services, for example, social media. Similarly, to conclude the questionnaire, respondents were asked if they now feel less safe when leaving home. The leading answer was full agreement, with 140 responses. A remarkably low 19.3% disagreed with feeling unsafe; 37 of these 47 responses (79%) said they preferred going out and purchasing locally instead of online, explaining their lack of fear towards Covid-19.
All these results gathered from the survey questions helped measure the variables of the model regarding the impact on cities' attractiveness and are presented in the following table (table 2).
Citizen's change in purchasing behavior results
The online questionnaire distinguished between purchasing habits before, during, and after the Covid-19 quarantines. First, to gain insight into citizens' usual habits before Covid-19, they were asked if they had previously bought certain categories -essential and non-essential goods- online. Secondly, the numbers showed an increase in online purchasing during Covid-19, with 52.3% strongly agreeing or agreeing. Nevertheless, considering there were heavy restrictions during the Covid-19 quarantine, a considerably high number of 56 respondents (23%) neither agreed nor disagreed with the statement. Similarly, just over half of the respondents (51.1%) confirmed that, even after Covid-19 restrictions were eased, they continued purchasing online; the main argument extracted from the questionnaire was that they felt safer doing so for Covid-19 reasons (50%). Nonetheless, among the remaining respondents who did not agree (25.9%), results show that they kept the same purchasing habits as before Covid-19 emerged. In other words, they kept purchasing in person despite Covid-19 risks, mainly to support local stores (58.1%) and because they confessed they do not like buying online (38.7%), with 22.6% preferring the personal service of going to the stores.
All these results gathered from the survey questions helped measure the variables of the model regarding changes in purchasing habits and are presented in the following table (table 3). Table 4. Variables of citizens' change in purchasing behavior.
DISCUSSION & CONCLUSIONS
When exploring the impact COVID-19 has had on each construct shown in Appendix I, the results indicate that the pandemic is acting as a moderator.
Hypothesis H1
As illustrated in table 2, 84.8% of respondents agreed that their actions -going to bars, restaurants, and malls, and traveling- were restricted by the news they watched or read. To a certain extent, this variable confirms the existing impact of COVID-19 on society. In consequence, as the pandemic grows, it influences the economic, social, and environmental spheres of the TBL, impacting Barcelona's attractiveness as a city.
Economic impact
In agreement with Jorda, Sing and Taylor (2020), cities are becoming less attractive as we follow the patterns of the economic-financial crisis of 2008, where capital is saved and investments are discouraged, decreasing the economic growth of cities. Building upon this literature and the survey conducted, Barcelona's citizens also perceive the economic damage, as 48.2% consider themselves pessimistic regarding the future economic growth of Barcelona. Altogether, there is a correlation between the increasing effects of COVID-19 (e.g., the closing of local shops and unemployment) and the essential factors for making a city attractive, as shown in table 1.
Social impact
Simultaneously, the more COVID-19 impact, the more social problems arise and impact the city's attractiveness. For example, table 3 shows the survey results where 66.2% of respondents increased their usage of online services (mainly social media) as a result of the quarantines and lockdowns caused by the pandemic. Even though Donthu & Gustafsson (2020) state this has enabled us to keep up human interactions and also maintain education and remote work, Nowland, Necka, and Cacioppo (2018) remind us that social media can also bring out the worst in people through the sharing of fake news or trolling. It is part of the SC goals to ensure citizens' safety both physically and mentally, and with the survey concluding that 57.6% perceive their health to be at risk when leaving home (shown in table 3), more COVID-19 is making Barcelona less safe and therefore less attractive.
Environmental impact
Concerns about health become a priority as COVID-19 cases increase. 52.7% of survey respondents confessed to drastically decreasing their use of public or shared transportation to move around the region of Barcelona. People shift to other forms of transportation, which increases congestion and contamination; more specifically, Barcelona is experiencing an increase in CO2 from car emissions. This has happened after lockdown measures were eased: as venues and offices reopen, people move around the city more, having been locked down for months. Nevertheless, during the lockdown itself, carbon emissions did decline (Khan, Shah & Shah 2020), even if only over a very short period.
However, it is no longer only a matter of sharing a vehicle and the fear of getting infected; as Grimaldi, Fernandez, and Carrasco (2018) argue, the lack of people on the street due to the impression of insecurity as local shops close forces people to take other forms of transportation to move around. Once again, this damages the environment of the region, through both air and noise pollution, making the city of Barcelona less attractive.
Overall, it can be concluded that, in accordance with H1, the more the variables in table 2 increase, the greater the negative influence on the city's attractiveness. H1 is therefore confirmed.
Hypothesis H2
E-commerce has been the major change in citizens' purchasing habits as a result of COVID-19. Temporary quarantines and lockdowns forced people to move from local shopping to online shopping. Table 3 displays the increase in online purchasing, with 51.1% of citizens making it their preferred option against the pandemic even after government restrictions were eased. The main argument extracted from the survey was "feeling safer" purchasing online, with no possibility of infection. Similarly, the biggest change in online purchasing concerned essential products (food and medicines), the main market preference. Wright and Blackburn (2020, p. 12) agree that there has been a rise in cautious and cost-conscious buying as a result of COVID-19, even if 50.2% of respondents had never purchased this category online before (table 3). On the whole, it can be argued that COVID-19 is one cause of the rise in online purchasing, especially for what is considered an essential good.
Furthermore, as more news comes out in relation to COVID-19, the more citizens fear out-of-home activities and opt for home consumption. Table 3 reveals that 67.4% reported having reduced their spending during and after quarantine compared to 2019. Closed restaurants, bars, entertainment attractions, and recreational facilities have forced citizens to cut their spending and, unconsciously, encouraged saving. As previously mentioned, saving capital and discouraging investments decreases the overall growth of cities, making them once again unattractive. H2 is therefore confirmed by at least two changes in citizens' purchasing behavior as a result of more COVID-19.
Hypothesis H3
Clearly, as changes in purchasing habits increase, they directly alter Barcelona's attractiveness in a social, economic, and environmental way. For example, as shown by the literature and the survey conducted, there has been an increase in online retailing. This new trend in citizens' purchasing behavior has disrupted local buying, which has experienced a considerable decline in sales. This means local shops are forced to close, leaving people unemployed and no movement of capital in the city. Multinationals such as Amazon are the ones gaining clients due to their online services, becoming major pollutants in transporting goods worldwide; in addition, the capital spent by citizens leaves the country. This negatively affects the city's attractiveness from both an economic and an environmental point of view. H3 is therefore confirmed too, taking into account that it is a domino effect, as a change in behavior impacts the city's attractiveness.
On the whole, it can be concluded that COVID-19 is a major cause of the alterations in the variables illustrated in Appendix I. The pandemic has had significant effects on citizens' purchasing behaviors and consumption and has consequently affected the goals of a smart city. Nevertheless, it has been discussed how cities can lead the way towards economically, socially, and environmentally sustainable societies that guarantee an improvement in quality of life for their residents. As emphasized, despite the negative after-effects of COVID-19, it has indeed accelerated the vision of a smart city in the region of Barcelona, enabling governments to adapt to changing trends and implement smart city initiatives for those that might come.
Our research, however, has some limitations. First of all, the market research is naturally limited by the small population to whom the survey was distributed, and it was, unfortunately, not possible to obtain an overall picture of all citizens' opinions in the region of Barcelona. Future research should therefore extend the scope of the present study, and our results should be confirmed by performing similar research in other cities to verify their scalability. The different cases could then support the validation of our hypotheses. Besides, future research could investigate the benefits of social media tools such as Twitter as an alternative to the common Google-based survey, as other studies have already demonstrated in other domains of application (Grimaldi, Diaz, & Arboleda, 2020) | 2021-10-22T15:26:22.341Z | 2021-09-03T00:00:00.000 | {
"year": 2021,
"sha1": "027252fbe8ce49cfeaca617842c437c81d0f5201",
"oa_license": "CCBY",
"oa_url": "https://www.isprs-ann-photogramm-remote-sens-spatial-inf-sci.net/VIII-4-W1-2021/97/2021/isprs-annals-VIII-4-W1-2021-97-2021.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9caaf3ea6dba6d3778f0758ae87cc9ad5f67313d",
"s2fieldsofstudy": [
"History"
],
"extfieldsofstudy": [
"Business"
]
} |
258033757 | pes2o/s2orc | v3-fos-license | Correlation between nutritional awareness and food consumption behavior of the elderly in Samut Prakarn Province, Thailand
This study was conducted to investigate the correlation between personal nutritional awareness and the food consumption behavior of the elderly in Samutprakarn Province, Thailand. Members of the elderly society of Samutprakarn Province were interviewed with a rating-scale questionnaire covering their personal data, nutritional awareness, and the factors that influence their food consumption behaviors. 400 elderly people participated in the study. Their ages ranged between 60 and 84 years (±0.79), with an average age of 67.20 years. There were 156 males (38.90%) and 244 females (61.1%). Factors that influence nutritional awareness showed reliability statistics between 0.25 and 0.46, and food consumption behavior 0.34. Moreover, the results show that emotional awareness (r=0.43, p-value < 0.05), accurate self-assessment (r=0.13, p-value < 0.05), and self-confidence (r=0.16, p-value < 0.05) were positively and significantly correlated with food consumption behaviors. It can be concluded that the 3 personal awareness factors were positively correlated with food consumption behaviors; with a high rating, personal awareness can promote proper food consumption behaviors.
INTRODUCTION
Our global population is getting older (U.S. Census Bureau, n.d.). In 2017, there was a total population of 7.4 billion people and more than 962 million were 60 years old and above, accounting for 13% of the global population. The growth rate of the elderly has increased by 3%, resulting in an inevitable change in our population structures. European countries are the first to experience the change, with the lowest share (16%) of the population being children and 21% being over 65 years of age. Meanwhile, African countries have the highest population growth rate, as children account for 42% of the population and only 5% are elders. Thailand and other countries in Asia, like Singapore and Japan, have been rapidly becoming aged societies since 2005, meaning that the population over 60 years of age accounts for more than 10% of the total population. Thailand's National Statistical Office announced that 10.7% of the nation's population is in the elderly stage. More than half of this number (58.8%) is early-aged, 31.7% are mid-aged, and 9.5% are more than 80 years of age (National Statistical Office of Thailand, 2007).
A suboptimal diet is an important preventable risk factor for non-communicable diseases (NCDs); however, its impact on the burden of NCDs has not been systematically evaluated. One study aimed to evaluate the consumption of major foods and nutrients across 195 countries and to quantify the impact of their suboptimal intake on NCD mortality and morbidity (GBD 2017 Diet Collaborators, 2019). There have been studies in Ethiopia (Abate et al., 2020), South Africa (Naidoo et al., 2015), Sri Lanka (Damayanthi et al., 2018) and Brazil (Boscatto et al., 2013) stating that a high proportion of older adults are malnourished and that socio-economic characteristics and depression are significantly associated with malnutrition.
Moreover, Benjamas (2008)'s study reveals that common elderly meals in Thailand are negatively related to a number of health problems, such as heart and coronary artery diseases. Most seniors will eventually experience changes physically, mentally, and even socially. The common diseases found in aged people are diabetes, blood pressure abnormality, heart and coronary artery diseases, gout, and cancers. Because the important factor in these diseases is food consumption behavior, the elderly and their caregivers must be able to determine or provide appropriate nutrition. Moreover, with the organ deterioration that comes with age, there is a higher risk of nutritional imbalance, which leads to numerous chronic diseases: obesity, diabetes, and high blood pressure. Mazengo et al. (1996) revealed the relationship between diet and dental caries by assessing 24 h food intake in 273 subjects aged 12, 35-44, and 65-74; their study found that the mean number of decayed teeth (DT) increased significantly with age. Wang et al. (2014) reported that more than 31,588 people above 50 years old have died due to coronary artery disease and diabetes. Gunsam and Murden (2016) also reported factors influencing food choices in the elderly, which are reflected in their eating behaviors and health.
MATERIALS AND METHODS
The purpose of this cross-sectional study was to analyze how personal factors (age, gender, education, income, and family existence) affect elderly people's food consumption behaviors. Our subjects were people above 60 years old, both male and female, who were fully aware and could fully communicate. The study subjects were members of Samutprakarn's Senior Club. The main tools used in this study were interviews and questionnaires. The acquired data were analyzed to study the relationship between the consumption behavior of the elderly and their nutritional awareness.
Research population
There are 17,920 people aged over 60 years in Samutprakarn who live with family and/or a caregiver. The sample size was calculated using Taro Yamane's formula:

n = N / (1 + N·e²)

where n = sample size, N = population under study, and e = acceptable sampling error (= 0.05).
From the total of 221,543 senior citizens in Samutprakarn, based on the Samutprakarn Statistical Office's data, and the acceptable sampling error of 0.05, the sample size was calculated to be 400 people.
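As a quick check, Yamane's formula with the population figure above indeed yields the reported sample size; this is a minimal sketch, not code from the study:

```python
# Taro Yamane's sample-size formula: n = N / (1 + N * e^2), rounded up.
import math

def yamane(N: int, e: float = 0.05) -> int:
    return math.ceil(N / (1 + N * e ** 2))

print(yamane(221_543))   # -> 400, matching the study's sample size
```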
Simple random sampling was performed on 400 people from 3 districts in Samutprakarn Province. The questions in the interviews and questionnaires were adapted from Goleman (1995)'s self-awareness theory and the 9 nutrition guidelines for the elderly. The questions were tested and reviewed in a small group of people aged between 50 and 59 prior to the actual study.
Inclusion criteria
(2) Residents who had been living in Samutprakarn for at least 6 months. (3) People who were fully conscious, with no sign of dementia, and could communicate well without hearing problems. (4) People who were able to eat without any health-related limitations.
(5) People in no need of assistance, physically or mentally. (6) People having four natural posterior teeth (two opposing teeth) or a functional denture.
Exclusion criteria
(1) People who could not freely choose their own meals, e.g., members of senior housing.
(3) People having no teeth or not wearing a denture.

Data were collected by interviewing the sample group. The interviews were done with the help of village health volunteers and dental nurses in the area. Inter-examiner calibration was done. Each subject was interviewed directly and individually by the researchers for approximately 10 min. The questions covered personal information, nutrition awareness, and consumption behavior, as follows: (1) Personal information: sex, age, highest education, income, family and living conditions. (2) Nutrition awareness: questions in this section were based on Goleman's (1995) Mixed Model of Emotional Intelligence Theory and the Healthy Diet for the Elderly: Guideline for Food Consumption (Healthy Diet for the Elderly, 2017), and were adapted from Sakoolnamarka et al.'s (2021) research questions. The main topics were emotional awareness, accurate self-assessment, and self-confidence ratings.
(3) Consumption behavior: here, the interviewees answered questions regarding their consumption behavior. Questions were adapted from Kitkamolsawet's (2017) "Food consumption behavior and its risk/benefits to the health of the elderly in Nakorn Nayok". The questions were mainly about the frequency of the elderly's food consumption over a certain period of time.
The data obtained were then analyzed and interpreted by scoring with the following criteria. Emotional awareness: for positive behavior, 5 = strongly agree, 4 = agree, 3 = neither agree nor disagree, 2 = disagree, 1 = strongly disagree. Reverse scoring was applied for negative behavior, so that 5 = strongly disagree and 1 = strongly agree. Consumption behavior: the scoring was based on the frequency of healthy/unhealthy food consumption. For healthy consumption/behavior, 5 = every day, 4 = almost every day, 3 = every other day, 2 = once a week, 1 = rarely, and 0 = not once. Reverse scoring was applied for unhealthy behavior, with 5 for not once and 0 for every day.
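The scoring and reverse-scoring rules above can be captured in a few lines; the function names below are hypothetical and serve only to illustrate the mapping described in the text:

```python
def score_awareness(response: int, negative_item: bool = False) -> int:
    """Map a 1-5 Likert response; reverse-score negatively worded items (6 - x)."""
    return 6 - response if negative_item else response

def score_behavior(frequency: int, unhealthy_item: bool = False) -> int:
    """Map a 0-5 frequency response; reverse-score unhealthy behaviors (5 - x)."""
    return 5 - frequency if unhealthy_item else frequency

print(score_awareness(5, negative_item=True))   # "strongly agree" -> 1
print(score_behavior(0, unhealthy_item=True))   # "not once" -> 5
```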
Then, the average scores were calculated for each topic to represent the behavior of the sample group. A reliability test was done using Cronbach's alpha method. The calculated data were then analyzed to find the factors that influence older adults' consumption behaviors using the independent t-test, one-way ANOVA, and the Pearson correlation coefficient. In these tests, the confidence level was set at 0.95. Normality tests were done to determine whether the obtained data, including nutrition awareness and consumption behavior, were well modeled by a normal distribution. The results from the skewness and kurtosis tests were between -1.00 and +1.00, which is in the acceptable range; therefore, the data were considered normally distributed.
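A sketch of how this test battery might look in Python with scipy; the DataFrame and its column names ('awareness', 'behavior', 'sex', 'education') are assumptions for illustration, not the study's actual data:

```python
import pandas as pd
from scipy import stats

def analyze(df: pd.DataFrame) -> None:
    # Normality screen: accept if skewness and kurtosis fall within +/-1.00
    print(stats.skew(df["behavior"]), stats.kurtosis(df["behavior"]))

    # Independent t-test for a two-level factor (e.g., sex)
    male = df.loc[df["sex"] == "M", "behavior"]
    female = df.loc[df["sex"] == "F", "behavior"]
    print(stats.ttest_ind(male, female))

    # One-way ANOVA across education levels
    groups = [g["behavior"].values for _, g in df.groupby("education")]
    print(stats.f_oneway(*groups))

    # Pearson correlation between awareness and behavior
    print(stats.pearsonr(df["awareness"], df["behavior"]))
```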
The questionnaire was reviewed by three professionals to determine: (1) the content validity and appropriateness of the language used. The Index of Item-Objective Congruence (IOC) values all came back above 0.5.
(2) Reliability: the reliability test was done by interviewing a similar sample group with the exact same questions before the actual interview. The Cronbach's alpha reliability coefficient was between 0.78 and 0.83.
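For reference, Cronbach's alpha can be computed directly from an item-score matrix; this is a generic textbook implementation, not the authors' code:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_subjects x k_items) array of scored responses."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)
```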
RESULTS
Of the 400 older adults, 156 (39.00%) were males and 244 (61.00%) were females. Their ages ranged from 60 to 84 years, with an average age of 67.20 (±0.79) years. For 227 subjects (56.75%), primary school was the highest level of education. Thirty-four percent of the subjects had an income of less than 6,000 Baht per month. Most of the subjects (93.20%) stayed with their family. Table 1 shows the demographic profiles of the population.
The demographic data were analyzed together with the consumption behavioral data using the independent t-test and one-way ANOVA. The overall test showed that education level and family income had a statistically significant effect on the consumption behavior of the elderly (p=0.00).
Individual nutritional awareness
The data obtained were averaged to find the sample group's representative value for individual nutritional awareness. The calculated scores were then interpreted as follows: 0.00-2.50 = very low, 2.51-3.00 = low, 3.01-3.50 = fair, 3.51-4.00 = high, and 4.01 and above = very high.
Emotional awareness
The results show that the subjects are highly aware that taking soft drinks makes them feel satisfied when they are thirsty or hungry (average of 3.55). The subjects do not feel guilty about eating junk food (average of 3.45). They feel good when they get to eat as much as they like (average of 3.18). It is alright for them to eat unhealthy food that they like (average of 3.16). They love to eat with their family (average of 3.13), and they are comfortable not eating their meals on time. However, they tend to feel good when they drink after a meal, do not feel guilty for not finishing a meal, and do not feel guilty about having a leftover meal (averages of 2.99, 2.92, and 2.90, respectively). They do not feel guilty after consuming restricted food (average of 1.5).
The results are shown in Table 2.
Accurate self-assessment
The results from the accurate self-assessment test are as follows. The subjects were able to assess themselves accurately and highly on these questions: "Can you chew hard food?" (3.83); "Do you know when you are full?" and "Can you consume spicy food?" (3.79); "Are you tempted to eat desserts even when you know that it is not good for your health?" (3.58).
For the following questions, the subjects assessed themselves at the intermediate level (3.29, 3.28, and 3.04, respectively): "Do you continue to eat until you feel full even though you already have a lot?" and "Can you indicate healthy and unhealthy food portions in your daily consumption?" However, the subjects assessed themselves poorly on the following questions (2.48 and 2.01): "Are you mindful of your consciousness after consuming alcohol?" and "Can you assess your calorie intake after a meal?" The results are shown in Table 3.
Self-confidence
The results from the self-confidence test show that the subjects were highly confident that their meals were healthy and that they consume enough fruits and vegetables on a daily basis (averages of 3.84 and 3.68, respectively). However, the results reveal that the subjects were only moderately confident of their food choices and their ability to make healthy food choices for themselves and their families (averages of 3.46 and 3.39). Similarly, their confidence in their own ability to control food portions and to guide others on the risks and benefits of their meals was moderate (averages of 3.28 and 3.20). Lastly, their confidence level was lowest (2.36) regarding trusting the taste of food cooked by others. The results are depicted in Table 4.
Consumption behavior
The consumption behavior data show the highest scores for eating properly washed and cooked food (3.54), drinking water and beverages from trustworthy sources (3.51), and, on the reverse-scored items, avoiding alcoholic beverages (3.48) and high-sodium food (4.22) (Table 5).
Afterwards, the personal data gathered were analyzed together with the average awareness scores, including emotional awareness, accurate self-assessment, self-confidence, and consumption behavior, using the independent t-test and one-way ANOVA. The results show that education and family income had the highest effect on all three awareness topics, as shown in Table 6.
There is a statistically significant, positive relationship between all three personal awareness factors and the elderly's eating behavior, with significance levels at 0.00 (Table 7).
After the educational background and family income data were tested and confirmed to be significant in the analysis of variance (F), Scheffé's method was applied to compare the mean differences between the two factors and the sample's food consumption behavior. The test results revealed that educational level and family income both had significant influences on how people consume food and on their awareness. The average data are shown in Tables 7 and 8.
When comparing the elderly's emotional awareness, those whose highest education level was below elementary had a mean value significantly lower, by 0.18, than those at the elementary level. People with a Bachelor's degree or higher tended to have the highest ability to accurately assess themselves, with significant mean differences over the other groups (below primary school, primary school, secondary school, and high school or equivalent) of 0.50, 0.37, 0.43, and 0.35, respectively. Similarly, people with a Bachelor's degree or higher had higher self-confidence levels (significant mean differences of 0.59, 0.37, and 0.49, respectively) than the others (below primary school, primary school, secondary school). Likewise, food consumption behavior scores were significantly higher in the Bachelor's degree or higher group than in the rest (by 0.36, 0.25, 0.37, and 0.29) (Table 8).
Elders with an average family income in the range of 10,001 to 25,000 Baht per month had higher emotional awareness (significant mean differences of 0.23 and 0.22, respectively) than those with incomes below 6,000 Baht and 6,000-10,000 Baht per month.
Regarding accurate self-assessment, the elderly with an average family income in the range of 25,001 to 50,000 Baht per month had significantly higher mean scores (by 0.23 and 0.24, respectively) than those with incomes lower than 6,000 Baht and 6,000-10,000 Baht per month.
Similarly, the group with the lowest monthly income (less than 6,000 Baht per month) had the lowest self-confidence compared to those with an average family income in the range of 10,001 to 25,000 Baht and 25,001 to 50,000 Baht per month. The mean differences were significant, at 0.33 and 0.35, respectively.
However, when comparing the elderly with an average family income in the range of 6,000-10,000 Baht to the other groups, their food consumption behaviors were scored the lowest. The significant mean differences in comparison with the less than 6,000, 10,001-25,000, 25,001-50,000, and over 50,000 Baht per month groups were 0.15, 0.18, 0.26, and 0.26, respectively (Table 9).
DISCUSSION
An interesting finding of this research is that elders' personal factors are associated with their food consumption behaviors. This creates a further understanding of how internal and external factors could influence the elderly's dietary behaviors. The results from this research show that personal factors, including the highest educational level and average family income, have a direct influence on one's consumption behavior.
Likewise, elders' emotional awareness, as well as their self-assessment and confidence, also has a positive relationship with their food consumption behavior. The demographic data have an effect on the elders' eating behaviors in this research, conforming to Sakoolnamarka et al. (2021), who studied the correlation between nutritional awareness and food consumption behaviors of the elderly in Nakhon Nayok Province in all three aspects. Sakoolnamarka et al. (2021) concluded that the highest educational level and the elderly's nutritional awareness are directly related. Also, family income influenced how elders accurately assess themselves.
The result regarding the positive relationship between the elderly's family income and their diets corresponds to the work of Myres and Kroetsch (1978) on "The Influence of Family Income on Food Consumption Patterns and Nutrient Intake in Canada". They stated that "mean intake of nutrients increased with increase in income in all physiological groups". Moreover, Ren et al. (2019), in studying "Family income and nutrition-related health: Evidence from food consumption in China", had similar results: income-BMI gradients tend to increase along income percentiles, and income has a significantly positive impact on BMI and overweight for the male sample, but no significant impact on the female sample. Worsley et al. (2004) discovered in their study "The relationship between education and food consumption in the 1995 Australian National Nutrition Survey" that higher education is associated with the regular consumption of a wider variety of foods. Our study yields a matching result: a person with higher education tends to do better with their diet.
Emotional awareness, along with accurate self-assessment and self-confidence, is also positively related to the elders' consumption behavior. The results corresponded with the Sakoolnamarka et al. (2021) study on "Correlation Between Nutritional Awareness and Food Consumption Behaviors of the Elderly in Nakhon Nayok Province", which stated that "the emotional awareness, accurate self-assessment, and self-confidence were positively correlated with food consumption behaviors." In addition, the study of Rabiei et al. (2013) on "Understanding the relationship between nutritional knowledge, self-efficacy, and self-concept of high-school students suffering from overweight" also revealed a direct relationship between nutritional knowledge, self-concept, and self-efficacy.
Focusing on the relationship between emotional awareness and consumption behavior, Magnus (2016) reported the influence of emotional awareness on behaviors and abilities, stating that a person's emotional awareness has a direct impact on the ability to improve one's behavior in every aspect. Likewise, Shouse and Nilsson (2011), in their study on "Self-Silencing, Emotional Awareness, and Eating Behaviors in College Women", discovered that emotional awareness moderated the relationships between self-silencing and disordered eating and intuitive eating.
Many studies have shown similar results for the accurate self-assessment factor. Schroder et al. (2013) reported "Habitual self-control: A brief measure of persistent goal pursuit"; Junger and van Kampen (2010) studied "Cognitive ability and self-control in relation to dietary habits, physical activity and bodyweight in adolescents"; and Kennett and Nisbet (1998) studied "The influence of body mass index and learned resourcefulness skills on body image and lifestyle practices". These studies have shown that self-regulation of eating is likely to interact with biologically mediated variation in appetite; as a consequence, general self-regulation questionnaires show only modest associations with healthy eating behaviors and weight control.
The self-confidence factor also corresponds to previous studies. Berman (2006) studied the relationship between eating self-efficacy and eating disorder symptoms in a non-clinical sample, concluding that eating self-efficacy is related to eating behaviors and weight loss in clinical samples. Sawdon et al. (2007) studied "The relationship between self-discrepancies, eating disorder and depressive symptoms in women"; they disclosed that eating disorder and depressive symptoms were correlated with a number of self-discrepancies. Neha (2017) studied "The Role of Body Image, Dieting, Self-Esteem and Binge Eating in Health Behaviors" and found that the relationship between body image and dieting was mediated by self-esteem. Furthermore, the relationship between dieting and self-esteem was mediated by binge eating.
In this study, the demographic factors examined, namely educational level and family income, together with awareness, are statistically and significantly related to elders' eating behavior.
The finding that elders who graduated with a Bachelor's degree or higher had the best consumption behavior indicates that educational level is highly involved in a person's diet. Therefore, promoting adequate education in communities would improve the population's dietary behavior. Today's young adults would then become better-educated elders with better health and knowledge.
However, the financial factor came out unexpectedly in this study: elders with higher incomes did not show significantly higher scores in either the awareness or the eating behavior aspects. There have been numerous studies regarding the influence of money on people's lifestyles. Even though people with no financial restrictions are able to determine and weigh the advantages and disadvantages of their diets, their decisions can still be poor. The spread of the novel COVID-19 virus is one obvious example: the number of infected people remained continuously high in areas with high-income people who did not agree that the vaccines were safe (Wellcome Trust, 2018). Now that we have learned how personal awareness affects dietary behavior, we should encourage society to help the elderly create more nutritional awareness, not only by educating them with nutritional knowledge, but also by arranging entertaining activities that build self-awareness, accurate self-assessment, and self-confidence.
The results of this study help us understand the relevant correlations for the elderly in a small city.
Conclusion
(1) There is a significant correlation between awareness factors (emotional awareness, accurate self-assessment and self-confidence) and food consumption behavior (p=0.00).
(2) Personal factors (highest education level and family income) have a significant positive influence on food consumption behavior (p=0.00).
Table 1 .
Demographic findings in the population.
Source: Authors' study.
Table 2 .
Emotional awareness of the population.
Source: Authors' study.
Table 3 .
Accurate self-assessment in the population.
Table 4 .
Self-confidence scores in the population.
Source: Authors' study.
Table 5 .
Consumption behavior in the population.
Source: Authors' study.
Table 6 .
Demographic analysis of factors influencing self-awareness and food consumption.
Source: Authors' study.
Table 7 .
Correlation between the three personal awareness factors and the elderly's consumption behavior.
Table 8 .
The mean differences in people's awareness and behaviors between each level of the population's highest education.
Table 9 .
The mean difference between different family income ranges and food consumption behaviors of the population.
Source: Authors' study. | 2023-04-09T15:14:42.191Z | 2023-04-30T00:00:00.000 | {
"year": 2023,
"sha1": "2d8f995fa95614c39cb32335031215d06ae3546c",
"oa_license": "CCBY",
"oa_url": "https://academicjournals.org/journal/JPHE/article-full-text-pdf/3B852CC70561.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3f12a2bd84280645dba80f0734eb8cb1c1c5fbdf",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Medicine"
],
"extfieldsofstudy": []
} |
250167713 | pes2o/s2orc | v3-fos-license | THE INTERNATIONAL JOURNAL OF HUMANITIES & SOCIAL STUDIES Determinants of Senior High School Students’ Performance in the Physical Sciences in the Twifo – Hemang Lower Denkyira District: The Interplay of Ambition and Effort as Mediating Variables
chemistry, and elective mathematics. In 2012/2013, out of the 357 physical science students who sat for the WASSCE examination, only
Abstract: The study aims to find out whether the family, peers, teachers, gender, students' perception, students' ambition, and the academic effort of students influence students' performance in the physical sciences in the Twifo-Hemang Lower Denkyira District. There has been wide speculation as to why the performance of students in the physical sciences continues to fall and what could be done to remedy this poor trend. On the one hand, performance has been attributed to environmental factors. In Ghana, research on factors influencing students' academic performance, especially in the physical sciences at the Senior High level, appears to be relatively limited. Specifically, such research in the Twifo-Hemang Lower Denkyira District is virtually nonexistent. The research designs for this study were descriptive and correlational designs with quantitative methods of data collection. The population of the study was three hundred and eighty (380) physical science students across all the senior high schools in the Twifo-Hemang Lower Denkyira District. A purposive sampling technique was used in selecting a sample of 194: 114 (58.8%) male students and 80 (41.2%) female students. A questionnaire and test items were used in collecting data for this study. Analysis of the data was done with SPSS version 22. The study revealed that there is a strong and positive relationship between family influence, teachers' influence, peer influence, gender, academic effort, and students' performance in Physics, Chemistry, and Elective Mathematics.
Methodology
A research design is an arrangement demonstrating how the issue under scrutiny can be understood (Orodho, 2003). The research designs for this study were descriptive and correlational designs with quantitative methods of data collection. The researchers studied the connection between the predictor variables in the study and students' performance in the physical sciences. The study explicitly looked to see if there was a strong, moderate, or weak relationship between the predictor variables and the criterion variable. This informed the researchers' decision to include a correlational research design, owing to its ability to establish relationships. The researchers used multiple regression analysis for testing the hypothesis. Fraenkel and Wallen (2000) likewise hold that correlational research portrays an existing connection between variables. The population of the study was three hundred and eighty (380) physical science students in the Twifo-Hemang Lower Denkyira District. The target population for the study involved all the senior high schools in the Twifo-Hemang Lower Denkyira District. A purposive sampling technique was used in selecting a sample for the study of 194 students, thus 114 (58.8%) male students and 80 (41.2%) female students.
Instrument
Two instruments were used in gathering information for this study: questionnaires and test items based on the Physics, Chemistry, and Elective Mathematics SHS two syllabi. The questionnaire was created based on the research questions derived from the related literature. The questions used Likert-type scales. The researchers also used questionnaires because the entire population was literate (Fraenkel and Wallen, 2000). The validity of the questionnaire was established by presenting it to the researchers' principal and co-supervisors, who helped correct unclear, biased, and inadequate items and assessed the appropriateness of items in the various sections. Cronbach's alpha was used to quantify the internal consistency and to determine the reliability of the questionnaire. This statistic gives an idea of the average correlation among all of the items that make up the scale of the instrument. The Cronbach's alpha coefficient was 0.812. The reliability of the test items was obtained by having three raters score a respondent's script. Analysis of the data was done using SPSS version 22. Data gathered from the respondents were analyzed using descriptive and inferential statistics. Data for the research questions were analyzed using the independent samples t-test and Pearson bivariate correlation.
Descriptive Statistics
An area of equal importance was family influence on students' performance in the physical sciences. This variable was measured in terms of the amount of attention family members or parents devote to students in their studies. In all, four individual items were used to gather this information using a 6-point Likert scale. The means of the various items, as presented in Table 1, imply that family members matter a lot in the education of their children; the magnitude of the standard deviations further confirms this. The influence of peers was also of interest to this study. Five different items, also measured on a 6-point Likert scale, were used to elicit the respondents' views on the subject matter. Table 2 below gives the distribution of the respondents regarding the extent to which their peers influence them in their studies.
The data in Table 2 indicate that the means of the various items suggest that peer influence was quite high. This is further supported by the standard deviation figures, which do not show much variation in the students' observations, suggesting that the respondents' peer connectedness and influence were significant. The influence of teachers was also examined in the study. A set of six items, all measured on a 6-point Likert scale, was designed to gather information on the influence of teachers. The outcome is presented in Table 3 below.
The means of the various items in Table 3 indicate that the influence of teachers was relatively high. This is affirmed by the standard deviation figures, which show little variation in the respondents' perceptions. It also implies that the influence of teachers is very crucial in students' learning. The effort students devote to their studies, especially after normal classes and during weekends, was also of particular interest. As a result, two items, measured on a 6-point Likert scale, were designed for the respondents to give their views on the amount of time they spent studying after normal classes and during weekends. Table 4 gives a detailed summary of the effort respondents devote to their studies.
The results in Table 4 show that the mean hours students spend studying after normal classes and during weekends are above average. The perceptions students hold about physical science were also of interest to the researchers. A set of five items, measured on a 6-point Likert scale, was carefully designed to gather information on this variable. The outcome is displayed below in Table 5.
The means of the various items in Table 5 show that the respondents hold a positive view of the study of physical science. This is supported by the standard deviation figures, which show that, irrespective of the views students form about physical science, they still strive to perform well. Students' academic ambition was one of the mediating variables considered in this study. In this regard, three questions measured on a 6-point Likert scale were used to evaluate the respondents' academic ambition. The distribution of the respondents by their academic ambition is shown in Table 6.
The outcomes in Table 6 demonstrate that the respondents had very high academic ambitions for their studies. As can be seen, all the means are far above average, confirming that respondents had relatively high academic ambitions. Concerning the standard deviation figures, one can say that only a small proportion of the respondents is less academically ambitious. Measuring students' academic performance was also paramount to the study. Students' performance was estimated by scores on Physics, Chemistry, and Elective Mathematics tests designed by the researchers with the assistance of subject instructors. The essential fundamental of assessment is to clearly state what it is you are attempting to measure (Terenzini & Reason, 2005). The items of the tests were therefore selected from an approved GES syllabus for SHS two students. The test instruments were made to cover topics that were believed to have already been taught. In all, three separate tests were designed in Physics, Chemistry, and Elective Mathematics, each containing 25 multiple-choice items. The scores from the test items were expressed on a 6-point scale. All the outcomes are presented below in Table 7.
Table 7 provides a summary of students' performance in the Physics, Chemistry, and Elective Mathematics tests. The results indicate that the majority of the participants performed above average, and most performed very well, in all three subjects.
Inferential Statistics
To what extent do male and female students differ in academic performance in Physics, Chemistry, and Elective Mathematics? In Table 8, Levene's test was conducted to investigate whether there is equality of variances. The results showed that there was no significant difference in Physics performance for males (M=4.737, SD=1.022) and females (M=4.575, SD=1.134; t(192)=1.037, p=0.301). This finding implies that the performance of males in Physics was not considerably different from that of their female counterparts. Levene's test was conducted to examine whether there is equality of variances. The results showed that there was no statistically significant difference in Chemistry performance for males (M=4.798, SD=0.952) and females (M=4.862, SD=0.990; t(192)=-1.455, p=0.649). The results confirm that the performance of males in Chemistry was not statistically different from that of their female counterparts. Levene's test was conducted to investigate whether there is equality of variances. The results showed that there was no statistically significant difference in Elective Mathematics performance for males (M=4.781, SD=0.938) and females (M=4.563, SD=1.168; t(192)=1.440, p=0.151). From the results above, we find that the Pearson bivariate correlation coefficients obtained on Physics, Chemistry, and Elective Mathematics were r = 0.639**, r = 0.602**, and r = 0.650**, respectively. They are all positive with p-value = 0.000, which is less than the alpha value of 0.01, implying that family influence was significantly related to students' performance in Physics, Chemistry, and Elective Mathematics. The findings suggest that the higher the influence of family, the more likely that the performance of students in Physics, Chemistry, and Elective Mathematics will be high. The degree to which peer influence relates to students' performance in Physics, Chemistry, and Elective Mathematics, as presented in Table 10, shows that the Pearson bivariate correlation coefficients obtained for Physics, Chemistry, and Elective Mathematics are r = 0.520**, r = 0.629**, and r = 0.402**, respectively. The coefficients are positive with p-value = 0.000, which is less than the alpha value of 0.01. The implication from the findings remains that peer influence was significantly related to students' performance in Physics, Chemistry, and Elective Mathematics. Concerning the outcome in Table 11, the Pearson bivariate correlation coefficients for Physics, Chemistry, and Elective Mathematics are r = 0.384**, r = 0.333**, and r = 0.339, respectively. The coefficients are mostly positive with p-value = 0.000, which is less than alpha = 0.01, meaning that teachers' influence was significantly related to students' performance in Physics, Chemistry, and Elective Mathematics. Although the coefficients tend to be low, it remains that teachers' influence was related to the performance of students. From Table 12 we find that the Pearson correlation coefficients obtained in Physics, Chemistry, and Elective Mathematics are r=0.558**, r=0.571**, and r=0.658**, respectively. The coefficients are all positive with p-value=0.000, which is less than alpha=0.01. This suggests there is a positive and strong relationship between students' academic effort and their performance in Physics, Chemistry, and Elective Mathematics.
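A sketch of the Levene's test plus independent samples t-test procedure reported above; the scores are simulated stand-ins (only the group sizes, 114 and 80, match the study):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
male_scores = rng.normal(4.74, 1.02, size=114)    # illustrative, not the real data
female_scores = rng.normal(4.58, 1.13, size=80)

# Levene's test for equality of variances
lev_stat, lev_p = stats.levene(male_scores, female_scores)

# Independent samples t-test; pool variances only if Levene's p > 0.05
t_stat, t_p = stats.ttest_ind(male_scores, female_scores, equal_var=(lev_p > 0.05))
df = len(male_scores) + len(female_scores) - 2
print(f"Levene p={lev_p:.3f}, t({df})={t_stat:.3f}, p={t_p:.3f}")
```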
Table 13: Students' Ambition and Academic Effort. Pearson correlation between academic ambition and academic effort = 0.501**, Sig. (2-tailed) = 0.000, N = 194. (**Correlation is significant at the 0.01 level, 2-tailed.)
The outcome in Table 13 shows that the zero-order correlation coefficient obtained between students' ambition and academic effort is r=0.501**, with p-value=0.000, which is less than alpha=0.01. This implies that students' ambition in science is significantly related to their academic effort. This finding gives a clear signal that academically ambitious children are quicker to commit a sizeable proportion of their time to their studies. Most students are of the view that their high expectations and efforts are the main reasons for high school success (Ashby & Schoon, 2010).
Academic Effort and Students' Perception: Pearson correlation = -0.542, Sig. (2-tailed) = 0.000, N = 194.
From the results above, we find that the Pearson bivariate correlation coefficient is r = -0.542. The coefficient is negative, with a significance (p-value) of 0.000, which is less than the alpha of 0.01. This implies that there is a negative and strong relationship between the perceptions students have about physical science and their academic efforts.
Table 15 shows the results of the multiple regression analysis. Models 1, 2, and 3 give the coefficients of the predictor variables, the standard error, the level of significance, the correlation (R), the R-square (R2), and the adjusted R2 (AR2). In Model 1, when the Physics test score was regressed on the independent variables, two predictor variables, namely teacher influence and the gender of the students, were found not to be significant predictors of performance in the physical sciences. In Models 2 and 3, when academic ambition and effort were respectively introduced, teacher influence and gender were still found not to be significant predictors of performance in Physics. In particular, in Model 3, when the mediating variable academic effort was introduced, students' perception was also found not to be a significant predictor of performance in Physics, in addition to teacher influence and gender from the previous models. This indicates that the independent variables share their predictive power with the intervening variables; the independent variables determine performance in Physics only when the intervening variable is present. In a nutshell, the independent variables make an impact only when they pass through the mediating variables. Though family influence and peer influence consistently remained significant predictors, their coefficients reduced when the intervening variables were introduced in Models 2 and 3. For example, when academic ambition was brought into Model 2, all but one of the consistently significant independent variables shrank: family influence, peer influence, and students' perception shrank by 10%, 13%, and 14% respectively, while the influence of teachers appreciated by 13%. The values lost by the shrinkages constitute the contribution of the intervening variables to the independent variables. Lastly, when academic effort was injected into Model 3, family influence and peer influence still shrank, by 21% and 36%. The findings reveal that academic ambition stimulates hard work; that is, academic effort will eventually raise performance in Physics. Table 16 illustrates the results of the regression analysis for Chemistry, again across three models giving the coefficients of the predictor variables, the standard error, the level of significance, the correlation (R), the R-square (R2), and the adjusted R2 (AR2). In Model 1, when the Chemistry test score was regressed on the predictor variables (family influence, peer influence, teacher influence, the gender of the child, and students' perception), three predictor variables (teacher influence, gender, and students' perception) were found not to be significant predictors of performance in Chemistry. The findings suggest that most of the respondents indicated that the influence of teachers is very crucial in their performance in Chemistry.
Apart from that, irrespective of the views students form about Physical Science, they still strive to perform well in Chemistry; hence there were no variations, and for that matter the non-significance of the coefficients in Table 16. In Model 2, when the Chemistry test score was regressed on the same independent variables and one intervening variable, teacher influence, gender, and students' perception were still found not to be significant predictors of performance in Chemistry. This suggests that these independent variables share their predictive power with the intervening variable. The implication is that the independent variables did not determine students' performance unless the intervening variable was there; that is, the independent variables made an effect when they passed through the mediating variable. Lastly, when academic effort was introduced in Model 3, the same independent variables were still not significant predictors. This confirms that the independent variables did not directly determine performance in Chemistry; they did so through the intervening variables. Moreover, in Table 16, when the intervening variables, academic ambition and academic effort, were introduced in Models 2 and 3, the coefficients of the majority of the independent variables shrank while others appreciated. For example, when academic ambition was introduced into Model 2, teacher influence and gender shrank by 29% and 3% respectively and lost their significance too. Family and peer influence, on the other hand, appreciated by 3% and 1% respectively. Again, with the introduction of academic effort in Model 3, family and peer influence shrank by 14% and 10% respectively, yet they were still significant. This suggests that the values lost by the shrinkages constitute the contribution of the intervening variable to the independent variables themselves.
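The hierarchical Model 1-3 structure described above can be reproduced with ordinary least squares, entering the mediators stepwise; the data and column names below are hypothetical stand-ins for the study's variables:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 194  # sample size reported in the study
df = pd.DataFrame({
    "family": rng.normal(4.5, 1, n), "peer": rng.normal(4.3, 1, n),
    "teacher": rng.normal(4.4, 1, n), "gender": rng.integers(0, 2, n),
    "perception": rng.normal(4.0, 1, n), "ambition": rng.normal(4.6, 1, n),
    "effort": rng.normal(4.2, 1, n),
})
df["physics"] = 0.3 * df["family"] + 0.2 * df["peer"] + 0.3 * df["effort"] + rng.normal(0, 1, n)

# Model 1: independent variables only; Models 2 and 3 add the mediators stepwise.
m1 = smf.ols("physics ~ family + peer + teacher + gender + perception", df).fit()
m2 = smf.ols("physics ~ family + peer + teacher + gender + perception + ambition", df).fit()
m3 = smf.ols("physics ~ family + peer + teacher + gender + perception + ambition + effort", df).fit()

# Watch how each predictor's coefficient shrinks (or grows) per model.
for name, m in [("Model 1", m1), ("Model 2", m2), ("Model 3", m3)]:
    print(name, round(m.rsquared, 3), m.params.round(3).to_dict())
```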
Testing the Hypothesis
These revelations imply that most of the predictor variables, for example family influence and peer influence, though significant predictors in Model 1, appreciated when the mediating variables were introduced. The other predictor variables, that is, teacher influence and gender, were still not significant when the intervening variables were introduced into the equation. This suggests that they shared their predictive power with the intervening variables; hence, they cannot be major predictors of students' performance in Chemistry. This implies that the introduction of the intervening variables indicates the inadequacy of the predictor variables to determine performance in the Physical Sciences. The results from Table 16 reveal that family influence and peer influence were consistent predictors of performance in Chemistry, even though they both shrank in Model 3. In this regard, family influence and peer influence were the major independent predictors of performance in Chemistry. The researchers, therefore, theorize that when family members are involved in different roles throughout the school, the performance of all children in the school tends to improve, not just the children of those who are actively involved, because children whose families may not be active see parents of their peers just like their own (Kahle, 2002). Table 17 illustrates the results of the regression analysis for Elective Mathematics. The analysis was run in models; Models 1, 2, and 3 give the coefficients of the predictor variables, the standard error, the level of significance, the correlation (R), the R-square (R2), and the adjusted R2 (AR2). In Model 1, when the Elective Mathematics test score was regressed on the independent variables (family influence, peer influence, teacher influence, the gender of the child, and perception of students), two predictor variables, peer influence and teacher influence, were found not to be significant predictors of performance in Elective Mathematics. In these tables, there was almost no variation in the magnitude of peer influence and teacher influence. Most of the respondents indicated that they spend most of their time in class with their peers discussing academic-related issues, and that they are free to consult teachers anytime they have problems with the subject. In Models 2 and 3, when academic ambition and effort were respectively introduced, peer influence and teacher influence were still found not to be significant predictors of performance in Elective Mathematics. This suggests that the independent variables share their predictive power with the mediating variables; the independent variables do not determine performance in Elective Mathematics unless the intervening variables are present. In a nutshell, the independent variables make an effect only when they pass through the intervening variables. For instance, when academic ambition was introduced in Model 2, family and peer influence shrank by 8% and 39% respectively. The gender of the students, on the other hand, shrank by 9%, yet it was still significant. Perception of students also shrank by 14%. Again, with the introduction of academic effort in Model 3, family and teacher influence, as well as students' perception, shrank by 20%, 40%, and 87% respectively. Surprisingly, peer influence appreciated by 11%, though it was still not significant.
The findings indicate that the values lost by the shrinkages constitute the contribution of the mediating variables to the independent variables.
The results from Table 17 eventually reveal that family influence and the sex of the child were consistent predictors of performance in Elective Mathematics, even though they both shrank in Model 2, whilst the sex of the child appreciated in Model 3. Given this, the researchers are of the view that when family members encourage learning and voice high expectations for the future, they are promoting attitudes in their children that are keys to achievement. The evidence from the analysis of the data shows that the independent variables cannot directly predict performance in Physics, Chemistry, and Elective Mathematics. In reality, the intervening variables, academic ambition and academic effort, share their predictive power with the independent variables. The researchers failed to reject the null hypothesis, which states that "The independent variables will not directly determine students' performance in the Physical Sciences".
Findings
The study found no significant difference in performance between males and females in Physics, Chemistry, and Elective Mathematics. The study revealed that there is a strong and positive relationship between family influence and students' performance in Physics, Chemistry, and Elective Mathematics (r=0.639, r=0.602, and r=0.650, p<.01). The study established that there is a strong and positive relationship between peer influence and students' performance in Physics, Chemistry, and Elective Mathematics (r=0.520, r=0.629, and r=0.402, p<.01). The study showed that there is a positive relationship between teachers' influence and students' performance in Physics, Chemistry, and Elective Mathematics (r=0.384, r=0.333, and r=0.339, p<.01). The findings from the study showed that there is a strong and positive relationship between academic effort and the performance of students in Physics, Chemistry, and Elective Mathematics (r=0.674, r=0.571, and r=0.658, p<.01). The study found that there is a positive relationship between students' academic ambition and academic effort (r=0.501, p<.01). This study discovered that there is a negative relationship between the perceptions students have about the physical sciences and their academic effort (r=-0.542, p<.01). The researchers failed to reject the null hypothesis because it was found that the independent variables by themselves did not predict students' performance in Physics, Chemistry, and Elective Mathematics.
Conclusions
The family or parents provide a congenial environment for learning, motivate their children to study hard, emphasize high expectations for their children, and stress the need to earn an honorable place in society. They also encourage their children to be academically ambitious, and when the children are, this emboldens them to study hard to succeed in their ambition. No matter how much family members and parents motivate their children to study, become actively involved in school-related activities, and talk to them about the benefits of education, students will not excel academically unless they develop positive attitudes toward learning and translate their educational ambitions into effort. The influence of high school physics preparation and affective factors. Science Education, 91(6), 847-877. xi. Head, P. (1990). Performance indicators and quality assurance. London: Council for National Academic Awards. xii. Kahle, A. (2002 | 2020-07-02T10:17:52.980Z | 2019-01-31T00:00:00.000 | {
"year": 2019,
"sha1": "96fb283401b0f1ce134fd568c1f5ba345ccfb330",
"oa_license": null,
"oa_url": "http://www.internationaljournalcorner.com/index.php/theijhss/article/download/150983/105331",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "46dae8484f4572664f1417526edf3afe03c25793",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
253018909 | pes2o/s2orc | v3-fos-license | A Low-mass, Pre-main-sequence Eclipsing Binary in the 40 Myr Columba Association -- Fundamental Stellar Parameters and Modeling the Effect of Star Spots
Young eclipsing binaries (EBs) are powerful probes of early stellar evolution. Current models are unable to simultaneously reproduce the measured and derived properties that are accessible for EB systems (e.g., mass, radius, temperature, luminosity). In this study we add a benchmark EB to the pre-main-sequence population with our characterization of TOI 450 (TIC 77951245). Using Gaia astrometry to identify its comoving, coeval companions, we confirm TOI 450 is a member of the $\sim$40 Myr Columba association. This eccentric ($e=0.2969$), equal-mass ($q=1.000$) system provides only one grazing eclipse. Despite this, our analysis achieves the precision of a double-eclipsing system by leveraging information in our high-resolution spectra to place priors on the surface-brightness and radius ratios. We also introduce a framework to include the effect of star spots on the observed eclipse depths. Multicolor eclipse light curves play a critical role in breaking degeneracies between the effects of star spots and limb-darkening. Including star spots reduces the derived radii by $\sim$2\% from an unspotted model ($>2\sigma$) and inflates the formal uncertainty in accordance with our lack of knowledge regarding the star spot orientation. We derive masses of 0.1768($\pm$0.0004) and 0.1767($\pm$0.0003) $M_\odot$, and radii of 0.345($\pm$0.006) and 0.346($\pm$0.006) $R_\odot$ for the primary and secondary, respectively. We compare these measurements to multiple stellar evolution isochrones, finding good agreement with the association age. The MESA MIST and SPOTS ($f_{\rm s}=0.17$) isochrones perform the best across our comparisons, but detailed agreement depends heavily on the quantities being compared.
INTRODUCTION
Research on the formation and evolution of low-mass stars and planets relies on fundamental stellar parameters derived from stellar evolution models. As with many subfields of astronomy, … (Bastian et al. 2010). With radius, we can derive the radii of transiting planets (Gaidos et al. 2012), which is particularly exciting at young ages where planets are expected to evolve through some combination of thermal contraction (Fortney et al. 2011), photoevaporation (Owen & Jackson 2012; Owen & Wu 2013), and core-powered mass loss (Ginzburg et al. 2018).
Despite their far-reaching application, there exist few direct tests of the accuracy of fundamental parameters predicted by models, especially at young ages. This has led to the development of (semi)empirical relations (e.g., Torres et al. 2010; Mann et al. 2015a, 2019; Kesseli et al. 2019) to avoid the systematic uncertainties that accompany model-dependent values. Empirical relations are widespread for main sequence (MS) stars but are sparse at young ages (Herczeg & Hillenbrand 2014; David et al. 2019). Benchmarking stellar evolution models at young ages is an important step in developing accurate models, including identifying the physical processes that are missing.
Detached eclipsing binaries (EBs) are one pathway for benchmarking stellar models. The fortuitous orientation in which we view these systems allows for the measurement of their masses and radii at statistical uncertainties that routinely reach better than 1% precision. This precision far surpasses what is possible for single stars and, critically, EB measurements rely on few model-dependent assumptions, making them less susceptible to the typical inherited systematic uncertainties. When an EB is a member of a young association or cluster, additional high-precision measurements are afforded by the coeval ensemble (e.g., age, metallicity).
EBs have a long history of testing stellar evolution theory (e.g., Andersen 1991, and references therein). A primary finding is that models consistently underestimate MS stellar radii by ∼5% (López-Morales 2007; Torres et al. 2010). The most common hypothesis for the discrepancy is the effect of magnetic activity. Short-period EBs are expected, and observed, to have high activity levels due to rapid rotation from tidal spin-up by their binary companions (Kraus et al. 2011). However, a similar level of discrepancy exists for long-period systems (Irwin et al. 2011). Magnetic fields have been implemented in stellar models through their ability to inhibit convective flows (Feiden & Chaboyer 2012) and to alter standard radiative transfer via star spots (Somers & Pinsonneault 2015; Somers et al. 2020).
While the inclusion of magnetic field prescriptions appears to ease the tension for MS stars, discrepancies exist on larger scales for pre-MS stars, particularly at low masses. In the study of nine EBs in the 5-7 Myr Upper Sco association, David et al. (2019) found there is good relative agreement among most models between 0.3 and 1 M⊙, but that they overpredict the radii for young stars below 0.3 M⊙. This is the opposite of the MS radius discrepancy, highlighting that, although magnetic fields are likely altering these young systems in similar ways to MS stars, larger-scale uncertainties exist in our understanding of pre-MS evolution.
Beyond the shortcomings of current models, which are likely due, in part, to the absence of magnetic phenomena, the observational characterization of EBs typically also ignores their effects. EB analyses rely on few model assumptions, but one common assumption is that stellar photospheres can be described as a uniform, limb-darkened disk. This assumption is false for any young system where star spots are not only present, but likely have large covering fractions (Gully-Santiago et al. 2017; Fang et al. 2018a; Cao & Pinsonneault 2022). The specific orientation of spots or spot complexes alters the detailed surface-brightness distributions, and can significantly impact the measured eclipse depths (Morales et al. 2010; Rackham et al. 2018). The direction and magnitude of this effect depend on the specific spot geometries with respect to the eclipse geometry, and are unlikely to result in a consistent systematic offset common to all EB radius measurements. Still, given that spot geometries are rarely known and their effects are rarely addressed in eclipse light-curve modeling, quoted radii uncertainties (often 1%) are likely underestimated for spotted systems. This underestimation of the error may be a contributing factor to the significant discrepancies found in the derived radii between different groups modeling the same EB systems (e.g., see Morales et al. 2009 and Windmiller et al. 2010; Kraus et al. 2017 and Gillen et al. 2017).
As part of an effort to increase the population of young, benchmark EBs, we present the characterization of TOI 450 (TIC 77951245). Initial follow-up of the nominal planet host was undertaken by the THYME collaboration (TESS Hunt For Young and Maturing Exoplanets; Newton et al. 2019) and the TESS Follow-up Observing Program (TFOP) community, where it was identified as a double-lined spectroscopic binary (SB2) (Battley et al. 2020). In this study, we confirm TOI 450's membership in the ∼40 Myr Columba association using the kinematic selection methodology presented in Tofflemire et al. (2021), now updated for Gaia DR3 (Gaia Collaboration et al. 2016). We then perform a joint radial-velocity (RV) and eclipse light-curve fit to derive the fundamental parameters of the system, confirming its components are on the pre-MS. Our analysis includes two key additions to standard EB modeling. First, we place a joint prior on the surface-brightness ratio and radius ratio informed by our spectroscopic decomposition. This prior enables a fit to this single-eclipsing system that reaches a formal precision on par with double-eclipsing systems. Second, we develop and implement a framework to include the effect star spots have on eclipse depths. Our ability to constrain the impact of spots relies heavily on multicolor eclipse observations. The combination of TESS to find EBs and Gaia to confirm their association memberships, and therefore ages, makes this a pivotal time in our ability to find benchmark EBs and improve our understanding of early stellar evolution.

TOI 450 was observed by TESS (Ricker et al. 2015) with 2 minute cadence during Sectors 5 and 6 in Cycle 1 of the primary mission (UT Nov 15, 2018 - Jan 6, 2019), and during Sector 32 of the extended mission (UT Nov 20 - Dec 16, 2020). In all observations, TOI 450 fell on Camera 3. Two-minute cadence data are processed by the SPOC pipeline (Jenkins 2015; Jenkins et al. 2016). Our analysis makes use of the Presearch Data Conditioning Simple Aperture Photometry (PDCSAP; Smith et al. 2012; Stumpe et al. 2012, 2014) light curve. Figure 1 presents the TESS light curves, where two clear eclipse events can be seen in each sector. The light curve also shows stellar flares, seen most clearly at the beginning of Sector 5, and spot modulation. The eclipse events were detected by the SPOC Transiting Planet Search pipeline (TPS; Jenkins 2002; Jenkins et al. 2010) with a period of 10.71 days, and the system was alerted as a TESS Object of Interest (TOI), TOI 450, in May 2019 (Guerrero et al. 2021).
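For readers who want to retrieve the same photometry, a minimal sketch using the lightkurve package (not part of this paper's pipeline; the processing steps are illustrative):

```python
import lightkurve as lk

# SPOC 2-minute light curves of TIC 77951245; PDCSAP is the default flux
# column for SPOC products in lightkurve.
search = lk.search_lightcurve("TIC 77951245", author="SPOC", exptime=120)
lcs = search.download_all()

# stitch() normalizes each sector before combining; drop NaN cadences.
lc = lcs.stitch().remove_nans()
lc.scatter()
```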
One full eclipse was successfully monitored on 2019 Feb 25 UTC. These observations were completed with two 1 m telescopes at the Cerro Tololo Inter-American Observatory (CTIO) in the Sloan r and I filters. The observations were 224 and 208 minutes in duration, centered on the eclipse, with effective cadences of 188 and 60 seconds, respectively. Differential photometry was computed using eight and five non-varying field stars, respectively. The final differential light curves include airmass detrending.
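A schematic of ensemble differential photometry with linear airmass detrending, as described above; the array layout and function name are our own illustration, not the observatory pipeline:

```python
import numpy as np

def differential_lc(target: np.ndarray, comps: np.ndarray,
                    airmass: np.ndarray, oot: np.ndarray) -> np.ndarray:
    """comps: (n_stars, n_points) comparison fluxes; oot: boolean mask of
    out-of-eclipse points used to fit the airmass trend."""
    rel = target / comps.sum(axis=0)          # ensemble-normalized flux
    rel /= np.median(rel[oot])                # scale to out-of-eclipse level
    coeff = np.polyfit(airmass[oot], rel[oot], 1)
    return rel / np.polyval(coeff, airmass)   # divide out linear airmass trend
```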
SALT-HRS
During the fall of 2019, 11 epochs of high-resolution optical spectra were obtained with the High Resolution Spectrograph (HRS; Crause et al. 2014) on the Southern African Large Telescope (SALT; Buckley et al. 2006), located at the South African Astronomical Observatory. HRS is a cross-dispersed echelle spectrograph with separate blue and red arms that cover 3700-8900 Å. Our observations were made in the high-resolution mode, which delivers an effective resolution of R ∼ 46,000. Data reduction, flat-field correction, and wavelength calibration are performed with the facility's MIDAS pipeline (Kniazev et al. 2016, 2017). For each epoch, three spectra were taken back-to-back and reduced individually. Table 2 presents the mean BJD of each epoch and our RV measurements (see Section 3.1).
ESO 3.6m-HARPS
TOI 450 was observed three times in the fall of 2019 with the HARPS spectrograph (Mayor et al. 2003) on the ESO 3.6 m telescope in the high-efficiency mode (EGGS) as part of the follow-up efforts for NGTS planet candidates (NOI-104351; Wheatley et al. 2018). These spectra cover a wavelength range of 3782-6913 Å at a spectral resolution of R ∼ 80,000. Monitoring was stopped after the target was identified as an SB2. We derive RV measurements from these spectra and provide their relevant information in Table 2.

2.3. Speckle Imaging: SOAR-HRCam

Speckle imaging of TOI 450 was obtained to assess the presence of unresolved companions, which can alter the color and depth of eclipses. Our observations were made on 2019 Mar 17 (UTC) with the High-Resolution Camera (HRCam) on the 4.1 m Southern Astrophysical Research (SOAR) telescope. Observations were made in the I band (λeff ∼ 8790 Å). Details on HRCam observations and data reduction, as well as the SOAR TESS survey, are described in Ziegler et al. (2020). Figure 2 presents the 5σ contrast curve, where no sources are detected within 3″. Adopting the τ = 40 Myr isochrones of Baraffe et al. (2015), the corresponding limits in companion mass and physical projected separation are M < 85 M_Jup at ρ = 5.3 AU, M < 55 M_Jup at ρ = 8.0 AU, M < 40 M_Jup at ρ = 10.6 AU, and M < 35 M_Jup at ρ ≥ 16 AU.
Limits on Companions from Gaia EDR3
The presence of nearby companions can inflate the astrometric errors in Gaia observations, resulting in a value of the Renormalized Unit Weight Error (RUWE; Lindegren et al. 2018) above the expected value of RUWE = 1.0 for a star with a well-behaved astrometric solution. This inflation can result from genuine photocenter orbital motion that is not yet being modeled (e.g., Belokurov et al. 2020) or from the influence of spatially resolved companions that bias the centroid measurements (Rizzuto et al. 2018; Wood et al. 2021; A. L. Kraus et al., in prep). The Gaia documentation recommends a threshold of RUWE = 1.4 for assessing whether the astrometry is being inflated, but the RUWE distribution of old field stars suggests that RUWE = 1.2 provides a robust discriminator for field stars (Bryson et al. 2020; A. L. Kraus et al., in prep). However, the distribution is biased to higher values of RUWE for known single stars in young stellar populations (∼10 Myr; Fitton et al. 2022). RUWE might be inflated in protoplanetary disk hosts due to scattered light (with a 95% threshold of RUWE = 2.5), but also in young disk-free stars, perhaps due to second-order effects in astrometric correction terms caused by brightness or color variations (with a 95% threshold of RUWE = 1.6).
In Gaia EDR3, TOI 450 seems to have mildly inflated astrometric scatter (RUWE = 1.324) with respect to the estimated uncertainties. This value would represent an excess with respect to well-behaved field stars, but does not exceed the threshold generically suggested for all sources by the Gaia team, nor the threshold seen for young disk-free stars by Fitton et al. (2022). There is no evidence of additional companions from speckle imaging (Section 2.3) or follow-up spectroscopy (Section 3.2), so the mild RUWE excess should not be regarded as strong evidence of any additional companions in the system.
Finally, the Gaia EDR3 catalog also provides deep limits on additional companions within the system. The membership of this system in Columba implies that there will be very wide comoving neighbors, but there are no comoving and codistant sources in the Gaia EDR3 catalog within our search limits (with companion masses estimated from the τ = 40 Myr isochrones of Baraffe et al. 2015). We therefore conclude that there are no wide stellar or brown dwarf companions to TOI 450.
Literature Photometry & Astrometry
We compile broadband photometry and astrometry from various surveys in our characterization of the TOI 450 system (Sections 3.8 and 5.1) and our assessment of its membership to the Columba moving group (Section 4). Table 1 compiles these measurements and other relevant quantities we derive from them.
ANALYSIS
In this section we describe the analysis of our primary data sets. These measurements serve as inputs to our joint RV and eclipse fit in Section 5 and provide important priors that enable a precise analysis of this grazing EB system.
Radial Velocities
Stellar RVs are measured from our high-resolution optical spectra by computing spectral line broadening functions (BFs; Rucinski 1992) using the saphires python package. The BF is the result of a linear inversion of an observed spectrum with a narrow-lined template, and represents a reconstruction of the average stellar absorption-line profile. When the observed spectrum contains the light from two stars, as it does in an SB2 system like TOI 450, the BF provides the velocity profile of each star. Figure 3 displays the BFs from two epochs. The BF is similar to the commonly used cross-correlation function (CCF), but offers a higher fidelity result (Rucinski 1999) whose profiles more directly map to physical properties (e.g., vsini, flux ratio). The higher fidelity, in particular, is critical when decomposing the blended stellar profiles common in SB2 observations. Synthetic spectra generally make poor narrow-lined templates, especially in the case of low-mass stars, where the detailed match with observations at high resolution is still limited. Empirical templates produce BFs with much lower noise due to their improved match. The trade-off is that empirical-template BFs no longer reproduce the average absorption-line profile, but rather the profile that will reproduce the observed spectrum when convolved with the template. The result is a narrower BF profile, which aids in RV precision. As this is the goal of the current analysis, we create empirical spectral templates for spectral types M0.0 through M5.0 in steps of 0.5 using the CARMENES spectral library. Only slowly rotating stars (unresolved line profiles; vsini < 2 km s−1) are included. Using a uniform cubic basis spline (B-spline) regression following the SERVAL package's implementation, we create a spectral template for each order, oversampling the spline to match the native number of resolution elements in the order. We find consistent RVs across the spectral templates, but find the M4.5 template produces the consistently highest signal-to-noise BFs from order to order. As such, we adopt it as our narrow-lined template.
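To make the linear inversion concrete, the following is a minimal sketch of a BF computation with a truncated-SVD least-squares solve. It is not the saphires implementation; the grid handling, function name, and SVD cutoff are illustrative assumptions.

```python
import numpy as np

def broadening_function(spec, template, n_vel, dv_kms):
    """Minimal BF solve: find b such that the template, convolved with b
    over n_vel velocity bins, reproduces the observed spectrum.
    spec and template must share a log-wavelength grid with a constant
    velocity step of dv_kms per pixel (sign convention depends on the
    grid orientation)."""
    n = len(spec) - n_vel + 1
    # Design matrix: column j is the template shifted by j pixels.
    D = np.column_stack([template[j:j + n] for j in range(n_vel)])
    # Target is the central portion of the observed spectrum.
    y = spec[n_vel // 2 : n_vel // 2 + n]
    # Truncated SVD keeps only well-conditioned modes.
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    keep = s > 1e-3 * s[0]
    b = Vt[keep].T @ ((U[:, keep].T @ y) / s[keep])
    vel = (np.arange(n_vel) - n_vel // 2) * dv_kms
    return vel, b[::-1]
```

With an empirical narrow-lined template, the recovered b is the narrow profile described above rather than the true average absorption-line profile.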
With our M4.5 narrow-lined template, we compute the BF for individual SALT-HRS orders with high signal to noise and low telluric contamination. In practice, this includes 34 orders from ∼ 5200−8800Å. For the HARPS spectra, we break the 1D spectrum (default data product) into 26 sections of ∼ 100Å in length, covering ∼ 5200−6700Å. Individual orders are then combined into a high signal-to-noise BF, weighted by the noise at high velocities where no stellar contributions are present. For 10 of our 14 spectra, the stellar components do not overlap in velocity space (e.g., top panel of Figure 3). Each component is fit with a Gaussian profile to measure the stellar RV. Uncertainty on the RV measurement is assessed with a bootstrap approach in which 10^5 BFs are combined and fit from a random sampling, with replacement, of the contributing orders. The standard deviation of the resulting RV distribution is adopted as the uncertainty. For the four epochs where the stellar profiles are blended (e.g., bottom panel of Figure 3), we impose bounds on the relative strength of the two fit components, informed by the 3σ bounds of the values measured in well-separated epochs. This bound prevents nonphysical flux-ratio values (see Section 3.6) that can skew the RV values. For the SALT-HRS epochs, we adopt the weighted mean and standard deviation of the three individual spectra as our value. Observed RVs are corrected to the barycentric frame using the barycorrpy package (Kanodia & Wright 2018). Our barycentric RVs and their relative uncertainties are presented in Table 2. The absolute precision of the RV measurements is on the order of 0.5 km s−1, based on the offset we measure between the SALT-HRS and HARPS velocity zero-points (Section 5.2).
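A hedged sketch of the order-resampling bootstrap described above, for a single well-separated component; the noise-window threshold, default draw count, and function names are illustrative assumptions, and a real analysis would fit both components simultaneously.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(v, amp, v0, sigma):
    return amp * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

def bootstrap_rv(vel, order_bfs, n_boot=10_000, rng=None):
    """order_bfs: (n_orders, n_vel) per-order BFs on a common velocity
    grid. Resamples orders with replacement, combines them with
    inverse-variance weights, fits a Gaussian, and returns the mean and
    std of the fitted centroids. (The text uses 10^5 draws.)"""
    rng = np.random.default_rng(rng)
    n_orders = order_bfs.shape[0]
    # Weights from the BF baseline at |v| > 150 km/s, where no stellar
    # signal is present (the threshold is illustrative).
    noise = order_bfs[:, np.abs(vel) > 150].std(axis=1)
    w = 1.0 / noise**2
    rvs = np.empty(n_boot)
    for i in range(n_boot):
        pick = rng.integers(0, n_orders, n_orders)
        comb = np.average(order_bfs[pick], axis=0, weights=w[pick])
        p0 = (comb.max(), vel[np.argmax(comb)], 5.0)
        popt, _ = curve_fit(gaussian, vel, comb, p0=p0)
        rvs[i] = popt[1]
    return rvs.mean(), rvs.std()
```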
Spectroscopic Components
We clearly detect two stellar components in the combined BF (Figure 3), as expected for a high-mass-ratio EB. The absence of other features in the BF provides an independent limit on the presence of additional companions, bound or otherwise. Computing a quantitative limit on the detection threshold of an additional companion is not straightforward, given that our sensitivity to companions depends on their spectral features (i.e., spectral type or T_eff) and rotational velocity. Still, we easily detect the binary components using empirical templates ∼4 spectral subtypes away from the optimal value, and similarly, Tofflemire et al. (2019) showed sensitivity to component detection with a synthetic-template mismatch of 500 K. Furthermore, a luminous component in the spectrum with different spectral features (i.e., a much earlier spectral type) would introduce structure and noise in the high-velocity BF baseline, which is not present in our BFs for TOI 450. With this information, we can conservatively rule out the presence of slowly rotating companions (vsini < 10 km s−1) with M spectral types and flux ratios of 10% (2.5 mag), which would be visually obvious in the BF, within the 2.2′′ SALT-HRS fiber.
Rotation Periods
The TESS light curve contains sinusoidal modulation that results from variations in the combined, projected spot-covering fraction as each star rotates. We compute a Lomb-Scargle periodogram (Scargle 1982) for each TESS Sector (masking out the eclipse events), finding only one strong, consistent peak near 5.7 days. Smaller, yet formally significant, peaks in the periodogram likely arise from spectral leakage due to the modulation not being strictly sinusoidal. These features vary in location and strength from sector to sector and are not present in an autocorrelation function. From this analysis, we determine that only one astrophysical period can be extracted from the TESS light curves, which we interpret as both stars having the same rotation period. This result is expected given the near-equal stellar radii (Section 5) and vsini values of the two components.
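A minimal sketch of the periodogram search, assuming eclipses and flares have already been masked; the period search range and grid density are illustrative choices.

```python
import numpy as np
from astropy.timeseries import LombScargle

def rotation_period(time_d, flux, pmin=0.5, pmax=15.0, n_freq=10_000):
    """Return the period (days) of the highest Lomb-Scargle peak."""
    freq = np.linspace(1.0 / pmax, 1.0 / pmin, n_freq)
    power = LombScargle(time_d, flux).power(freq)
    return 1.0 / freq[np.argmax(power)]
```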
To measure the rotation period in the presence of evolving spot configurations, we model the light curve with a celerite Gaussian process. The covariance kernel consists of a damped, driven, simple harmonic oscillator at the stellar rotation period and another at half the rotation period. In addition to the period, the kernel is described by the primary amplitude, A; the damping timescale (or quality factor) of the primary period, Q1; the ratio of the secondary to primary amplitude (A2/A1), Mix; and the damping timescale of the secondary period (P/2), Q2. After masking 2 hr windows centered on each eclipse and removing flares, we fit the parameters above in natural-logarithmic space using emcee (Foreman-Mackey et al. 2013). Our fit employs 50 walkers. Fit convergence is established once the chain autocorrelation timescale (τ) reaches a fractional change of less than 5% and the chain length exceeds 100τ. Our posteriors discard the first five autocorrelation times as burn-in.
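A kernel of this form could be expressed as follows with the celerite2 package (the analysis above used the original celerite; the parameter names and initial guesses here are illustrative assumptions).

```python
import numpy as np
import celerite2
from celerite2 import terms
import emcee

def build_gp(theta, t, yerr):
    """Two damped SHO terms at P and P/2, mirroring the kernel above."""
    ln_A, ln_P, ln_Q1, ln_mix, ln_Q2 = theta
    A, P = np.exp(ln_A), np.exp(ln_P)
    kernel = (terms.SHOTerm(sigma=A, rho=P, Q=np.exp(ln_Q1))
              + terms.SHOTerm(sigma=A * np.exp(ln_mix), rho=P / 2.0,
                              Q=np.exp(ln_Q2)))
    gp = celerite2.GaussianProcess(kernel, mean=1.0)
    gp.compute(t, yerr=yerr)
    return gp

def log_prob(theta, t, y, yerr):
    try:
        return build_gp(theta, t, yerr).log_likelihood(y)
    except Exception:
        return -np.inf

# 50 walkers, as in the text; the starting point is a placeholder:
# p0 = np.log([0.01, 5.7, 10.0, 0.5, 10.0]) + 1e-4 * np.random.randn(50, 5)
# sampler = emcee.EnsembleSampler(50, 5, log_prob, args=(t, flux, ferr))
```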
Fits are made to each TESS Sector, returning periods of 5.8±0.2 d, 5.7±0.3 d, and 5.6±0.2 d for Sectors 5, 6, and 32, respectively. We adopt the error-weighted mean and standard deviation, 5.7 ± 0.1 d, as the rotation period for each star. (We repeated this analysis with the SAP light-curve reduction, as opposed to the PDCSAP reduction used elsewhere, finding consistent results with larger uncertainties.) Figure 4 presents the rotational-phase-folded light curve from all three TESS Sectors with the variability model over-plotted. Very little evolution in the spot modulation is observed between TESS Sectors 5 and 32.
We note that the synchronized stellar rotation period is shorter (i.e., more rapidly rotating) than the Hut (1981) pseudo-synchronization prediction for TOI 450's orbital eccentricity (P_ps ∼ 7 days). Sub- and super-pseudosynchronous binaries have been observed in other young clusters (e.g., Meibom et al. 2006), making our finding unsurprising. As a young association member with a benchmark age, TOI 450 may be a useful probe of tidal evolution theory.
Projected Rotational Velocities
To measure the projected rotational velocity (vsini) of each component, we compute a separate set of BFs using a 3100 K, log(g) = 4.5 synthetic template from the Husser et al. (2013) PHOENIX model suite. Although this template is a worse match to the observed spectra, its absorption lines have no rotational or instrumental broadening and therefore produce a BF whose width reflects the broadening components intrinsic to the observed stars. We fit the combined BF (following Section 3.1) with an absorption-line profile (Gray 2008) that includes instrumental, rotational, and macroturbulent broadening (the synthetic template includes microturbulent velocity broadening). From the eight SALT-HRS epochs with large component velocity separations, we fit the vsini and v_mac for each component, finding average values and standard deviations of: vsini_1 = 3.2 ± 0.3 km s−1, v_mac,1 = 2.0 ± 0.3 km s−1, vsini_2 = 3.2 ± 0.5 km s−1, and v_mac,2 = 2.1 ± 0.4 km s−1.
Figure 5. Wavelength-dependent flux ratios from eight SALT-HRS epochs. Decreased measurement precision at short wavelengths is due to decreasing signal-to-noise. Scatter in orders near ∼8000Å probes temperature-sensitive TiO features, where spot variability has the largest impact. Filter curves from the photometric filters used to observe eclipses are provided in the bottom panel.
Stellar Rotation Inclination
With measurements of the vsini, rotation period, and stellar radius (Section 5), we can infer the inclination of the stellar rotation axes. The inclination probability distribution functions, computed following Masuda & Winn (2020), peak at 90°, but are broad, with 95% confidence lower limits of 59° and 48° for the primary and secondary, respectively. This result is consistent with alignment between the stellar and orbital angular momentum vectors.
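The core of this inference is the comparison of vsini with the equatorial velocity implied by the rotation period and radius. A minimal sketch using rounded values from this work (ignoring the correlated-error treatment of Masuda & Winn 2020):

```python
import numpy as np

R_SUN_KM = 695_700.0
DAY_S = 86_400.0

def sin_i(prot_d, vsini_kms, r_rsun):
    """sin i = vsini / v_eq, with v_eq = 2 pi R / P_rot."""
    v_eq = 2.0 * np.pi * r_rsun * R_SUN_KM / (prot_d * DAY_S)
    return vsini_kms / v_eq

# Primary: P_rot = 5.7 d, vsini = 3.2 km/s, R = 0.35 Rsun
print(sin_i(5.7, 3.2, 0.35))  # ~1.03: formally >1, i.e., consistent
                              # with sin i = 1 (i ~ 90 deg) within errors
```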
Spectroscopic Flux Ratio
In SB2 systems, the ratio of the area of the BF components encodes the flux ratio of the two stars over the wavelength range considered. For the eight SALT-HRS epochs with well-separated BF components, we measure the flux ratio for 28 orders between ∼ 5200 and 8700Å. Each epoch consists of three spectra, which are analyzed independently and then combined to compute the mean flux ratio and standard deviation for each order. For an order to be included for a given epoch, we demand that each of the three spectra produces a BF peak that is 5σ above the baseline noise. This constraint removes low signal-to-noise ratio (S/N) epochs and orders.
In Figure 5 we over-plot the wavelength-dependent spectroscopic flux ratios for each epoch. There is a maximum of eight epochs plotted for each order, presented at the order's central wavelength. Lines connect a given epoch. The r′, TESS, and I filter curves are also included for comparison. All values hover around unity, with increasing uncertainty at short wavelengths as S/N decreases. The increased scatter from ∼7500-8500Å marks orders containing temperature-sensitive TiO absorption features, which are likely influenced by the relative presence of cool spots and their variability as the stars rotate (Gully-Santiago et al. 2017).
These data capture a representative sampling of the projected surface-brightness variability over the time baseline observed. Figure 4 presents the location of our flux-ratio measurements (vertical dashed lines) as a function of the stellar rotational phase (see Section 3.3). The TESS light curve (blue) and stellar variability model (orange) are included to provide context for the range of flux-ratio values, caused by variable projected spot-covering fractions, that our measurements probe. The spectroscopy epochs are not contemporaneous, but fall between TESS Sectors 6 and 32.
For TOI 450, where the system orientation provides only a single, grazing eclipse, these measurements allow critical priors to be placed on the stellar radii and surface-brightness ratios (see Section 5.1). The average flux-ratio value across all orders and epochs is F2/F1 = 1.0 with a standard deviation of 0.1. Our designation of the primary star in this system is somewhat arbitrary; we ultimately take it to be the more massive component in our definitive fit, although both masses are the same within our uncertainty.
Spectral Features
In this subsection we highlight the characteristics of two spectral features that trace stellar youth.
Hα: Chromospheric emission traces magnetic activity (e.g., Skumanich 1972), which declines as stars age and spin down via magnetic braking (Weber & Davis 1967). The spread in late-M dwarf chromospheric activity, as probed by Hα in young clusters, is too large to determine a precise age (Douglas et al. 2014; Kraus et al. 2014; Fang et al. 2018b). The timescale over which M dwarf activity evolves is on the order of Gyr (Newton et al. 2016, 2017). The presence of a close binary companion will also complicate a star's rotational evolution. Still, the presence of strong emission in this system, which is not particularly rapidly rotating, is consistent with youth. Figure 6 presents four Hα epochs. The orbital phase is provided to the right of each curve, and the primary and secondary velocities are shown with the blue and red vertical dashes, respectively. The Hα line profile for each star is double peaked, characteristic of self-absorbed chromospheric emission (e.g., Houdebine et al. 2012). The strength of each component is variable, as highlighted by the comparison of the top and bottom epochs, the former of which may have been observed during a flaring event on the primary star. There are only three epochs where the Hα line profiles are fully separated. From these we compute average equivalent widths through numerical integration, finding −2.2 ± 0.3 and −2.1 ± 0.4Å for the primary and secondary, respectively, where the uncertainty is the standard deviation of the three measurements. These values are corrected for the diluting effect of the two continuum sources; for an average flux ratio of unity, this amounts to a factor of 2 increase.
Li: The presence of Li in a stellar atmosphere can provide a powerful probe of stellar age, as the element is rapidly burned at the base of the convective zone. For M4.5 stars like those in TOI 450, lithium supplies are exhausted between 20 and 45 Myr (Mentuch et al. 2008, using the Baraffe et al. 1998 models; and empirically, e.g., Kraus et al. 2014). We do not detect the Li I 6708Å absorption line, consistent with our expectation for an M4.5 dwarf in the Columba association.
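The equivalent-width measurement above reduces to a simple numerical integral plus the dilution correction; a minimal sketch, assuming a continuum-normalized line region (the function name is ours):

```python
import numpy as np

def diluted_ew(wave, norm_flux, flux_ratio=1.0):
    """Equivalent width (Angstroms) by trapezoidal integration of a
    continuum-normalized region, corrected for dilution by the
    companion's continuum: a factor of 1 + F2/F1 (2 for F2/F1 = 1)."""
    ew = np.trapz(1.0 - norm_flux, wave)   # negative for emission lines
    return ew * (1.0 + flux_ratio)
```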
Quantities Derived from Unresolved Photometry
We fit the unresolved photometry assuming a single star, following the method outlined in Mann et al. (2015b). To briefly summarize, we compare the unresolved photometry to a grid of optical and near-IR (NIR) spectral templates from Rayner et al. (2009) and Gaidos et al. (2014). We use BT-SETTL models to fill in gaps in the spectra (e.g., past 2.4 µm). The free parameters are the template selection, the model selection, and three nuisance parameters that handle systematic errors in the flux calibration and the scaling between the spectra and photometry. We generate synthetic photometry from the templates using the appropriate filter profiles. For our comparison, we use photometry from Gaia EDR3, the AAVSO Photometric All Sky Survey (Henden et al. 2015), the SkyMapper survey (Wolf et al. 2018), the Two-Micron All-sky Survey (2MASS; Skrutskie et al. 2006), and the Wide-field Infrared Survey Explorer (ALLWISE; Cutri et al. 2013). We integrate the full spectrum to determine the bolometric flux (F_bol).
The fit yields an F_bol of 0.024±0.002×10−8 erg cm−2 s−1 and a T_eff of 3150±80 K (determined from the assigned templates). The best-fit template spectra are all M4V-M5V, in good agreement with the CARMENES empirical-template match to our high-resolution spectra (Section 3.1). The final uncertainties account for both measurement errors and systematics in filter zero-points. We show an example fit in Figure 7.
COLUMBA MEMBERSHIP
TOI 450 was first proposed to be a candidate member of the Columba association by Gagné & Faherty (2018), who used Banyan-Σ to evaluate the five-dimensional kinematics of all stars within D < 100 pc and check for agreement with the pre-defined six-dimensional loci of the major known moving groups. (Banyan-Σ predicts a 99.9% Columba membership probability.) Canto Martins et al. (2020) subsequently measured a photometric rotation period of P_rot = 5 days, which is on the upper envelope of the rotational sequence at τ ≲ 100 Myr (e.g., Rebull et al. 2016) and on the short-period end of typical mid-M field stars (Newton et al. 2016). Our photometric analysis now shows that the stars are substantially inflated over the MS (Table 3), implying that they are indeed young and still contracting to the MS. However, a precise age would substantially increase the value of TOI 450 in testing stellar evolutionary models, and the nature and age of Columba has remained unclear.
The Columba association was first identified as a subgroup within the notional "Great Austral Young Association" (Torres et al. 2001), a conglomeration of the Tuc-Hor, Carina, and Columba associations (Zuckerman et al. 2001; Torres et al. 2003, 2006). However, Columba was recognized to be more diffuse than many other associations (Torres et al. 2008), which led to lower membership probabilities and a broader scope for incorporating additional members. This led to the addition of such far-flung systems as HD 984, HR 8799, and Kappa Andromedae to its census (e.g., Zuckerman et al. 2011), further loosening its definition and raising the probability that field contaminants and even other young associations were incorporated into its definition. With this in mind, a sample of 50 Columba members was used to fit an isochronal age of 42+6−4 Myr (Bell et al. 2015). The Gaia era now offers a new opportunity to revisit the definition of the Columba association, especially in providing a contextual age for TOI 450.
To identify candidate comoving neighbors (hereafter "friends") to TOI 450, we have used the software routine FriendFinder (Tofflemire et al. 2021) that is distributed in the Comove package. FriendFinder is a quick-look utility that adopts the Gaia astrometry and a user-defined RV (Table 2; v_rad = 23.74 km/s) for a given science target, computes the corresponding XYZ space position and UVW space velocity, and then screens every Gaia source within a user-defined 3D radius (R = 25 pc) to determine whether its sky-plane tangential velocity matches the (re-projected) value expected for comovement, within a user-defined threshold (∆v_tan ≤ 5 km/s). Plots are then generated for the friends' sky-plane positions, UVW velocities, and RV distribution (using Gaia RVs and any others that we manually add). Finally, additional catalogs are also queried to produce plots of the friends' GALEX UV photometric sequence (Bianchi et al. 2017), normalized by their 2MASS J-band flux, and their WISE infrared photometric color sequence.
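The core of the tangential-velocity screen can be sketched as follows. This is a simplified stand-in for the Comove/FriendFinder logic (it compares tangential speeds rather than full tangent-plane velocity vectors), and the array names and units are illustrative assumptions.

```python
import numpy as np

# v_tan [km/s] = 4.74e-3 * mu [mas/yr] * d [pc]
KMS_PER_MASYR_PC = 4.740470e-3

def comoving_friends(v_xyz, xyz_nb, pmra, pmdec, dist_pc, tol=5.0):
    """Flag neighbors whose tangential speed matches the value expected
    if they shared the target's 3D space velocity.

    v_xyz   : target space velocity (e.g., UVW), shape (3,), km/s
    xyz_nb  : heliocentric Cartesian neighbor positions in the same
              frame as v_xyz, shape (N, 3), pc
    pmra, pmdec : neighbor proper motions, mas/yr
    dist_pc : neighbor distances, pc
    """
    r_hat = xyz_nb / np.linalg.norm(xyz_nb, axis=1, keepdims=True)
    # Re-project the target velocity at each neighbor's sky position:
    v_rad_exp = r_hat @ v_xyz                 # expected radial component
    v_tan_exp = np.sqrt(v_xyz @ v_xyz - v_rad_exp**2)
    # Observed tangential speed from proper motion and distance:
    v_tan_obs = KMS_PER_MASYR_PC * np.hypot(pmra, pmdec) * dist_pc
    return np.abs(v_tan_obs - v_tan_exp) <= tol
```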
In Figure 8 (left), we plot a sky map of the 467 Gaia sources that were selected as friends. Each source's offset in v_tan is shown with its shading, from dark (∆v_tan = 0 km/s) to light (∆v_tan = 5 km/s), and its 3D distance is shown with its size. Sources with RUWE > 1.2 (denoting potential binarity) are shown with squares, while others are shown with circles. If a source also has a known RV, then the point is outlined in blue if the RV also agrees with comovement to within ∆v_rad < 5 km/s, whereas objects with discrepant RVs are replaced with crosses. Visual inspection shows that there is an overdensity of large, dark points surrounding TOI 450, elongated into an ovoid that is aligned roughly N-S. Many of these sources are also comoving in RV, and hence in their full three-dimensional velocity vector, so we conclude that there is likely a coherent comoving population around TOI 450.
In Figure 8 (right) we also show the XYZ spatial distribution of the full sample of friends. The locus of large dark points (denoting the apparently young, comoving stellar population) is concentrated in the center around TOI 450, with an approximate full extent of ±30 pc in X, ±15 pc in Y, and ±10 pc in Z. We note that there does appear to be potentially coherent structure among stars that are not as clearly comoving, especially for the pink points (∆v_tan ∼ 3 km/s) that fall at +Y and −Z from the central locus. Those near-comoving and nearly cospatial sources include stars that have been identified as potential Tuc-Hor members, further hinting at the existence of a kinematic link (but not an identical nature) between Columba and Tuc-Hor.
In Figure 9, we plot the corresponding distribution of ∆v_rad for all friends that have known RV measurements in Gaia or in other catalogs. There is again a notable excess of sources that are comoving with TOI 450 to within ∆v_rad < 5 km/s; the velocity distribution of the thin disk is much larger (σ_vrad ∼ 30 km/s), so an overdensity on a scale of 5 km/s further emphasizes the likely existence of a coherent comoving stellar population.
Finally, in Figure 10, we show the (M_G, Bp − Rp) color-magnitude diagram (CMD) for all friends that have valid photometry in all bands. The CMD further demonstrates that TOI 450 is not merely surrounded by a comoving population, but that it is relatively young; the large dark points form a notable pre-MS that approximately traces a reference sequence for Tuc-Hor (Kraus et al. 2014). The presence of numerous sources along the field MS indicates that the friend population is substantially contaminated with field interlopers, and hence cannot simply be adopted for further demographic studies. However, there is an apparent pre-MS turn-on at M_G ∼ 8 mag; most sources above this limit have Gaia RVs that can be used to reject field interlopers, while the sources fainter than this limit can be screened by requiring them to fall above the visually obvious divide separating the pre-MS and MS sequences. The existence of a coherent pre-MS population demonstrates that the coherent comoving stellar population is likely young, agreeing with the apparent young age of TOI 450.
Figure 8. FriendFinder results for TOI 450, recovering the Columba association. In each panel TOI 450 is labeled with a red ×. "Friends" are plotted as circles if their Gaia RUWE is less than 1.2 (presumed single) and as squares if their Gaia RUWE is greater than 1.2 (presumed binary). The size of the point encodes its 3D distance from TOI 450 (larger is closer). The color encodes the tangential velocity difference from TOI 450, as shown in the color bars. Left: sky map of TOI 450 friends. Right: XYZ spatial distributions of TOI 450 friends.
Figure 10. Gaia color-magnitude diagram (CMD) of TOI 450 friends. The CMD distribution is broadly consistent with the Tuc-Hor empirical sequence at ∼40 Myr. The shape, size, and color coding scheme are described in Figure 8.
The FriendFinder also outputs plots of the GALEX NUV flux normalized by the 2MASS J-band flux and of the WISE W1-W3 color, both as a function of Gaia color. We exclude these plots for brevity, but both behave as expected for a ∼40 Myr population. The GALEX plot shows a sequence sitting above the older and less active Hyades, while the WISE plot shows no evidence for infrared excesses (i.e., candidate members are not disk bearing).
In summary, the evidence strongly indicates that TOI 450 is embedded in a comoving and cospatial young stellar population that we recover as the Columba association. A full analysis of its age and demographics is beyond the scope of the current effort, and the age of TOI 450 could be further clarified with dedicated studies of lithium depletion and rotational spindown in the population. Because of Columba's complicated membership history, we do not adopt a previously published age. However, given the broad consistency between this population's CMD sequence (Figure 10) and the isochronal sequence of Tuc-Hor, it seems broadly warranted to adopt a similar age of τ ∼ 40 Myr (e.g., Kraus et al. 2014) for TOI 450 and its host population.
ECLIPSING BINARY FIT
To derive the fundamental parameters of the TOI 450 binary system, we jointly fit the RV measurements and eclipse light curves with a modified version of the misttborn code (Mann et al. 2016; Johnson et al. 2018). The RVs are described by a Keplerian orbit, and eclipses are modeled with the analytic transit code batman (Kreidberg 2015), diluted by the companion's light, assuming a quadratic limb-darkening law (Diaz-Cordoves & Gimenez 1992). Both data sets are fit within a Markov Chain Monte Carlo (MCMC) framework using emcee (Foreman-Mackey et al. 2013). The model has 23 parameters: the time of periastron passage (T0), orbital period (P), semi-major axis divided by the sum of the stellar radii (a/(R1 + R2)), the ratio of the stellar radii (R2/R1), cosine of the orbital inclination (cos i), mass ratio (q), sum of the velocity semi-amplitudes (K1+K2), center-of-mass velocity (γ), and a zero-point offset between SALT-HRS and HARPS RVs (µ). The orbital eccentricity (e) and the longitude of periastron (ω) are fit with the combined parameterization of √e sin ω and √e cos ω, which is computationally efficient and avoids biases at low and high eccentricities inherent in other approaches (e.g., Eastman et al. 2013). Finally, for each eclipse light-curve filter (r′, TESS, I) there are four parameters: a central surface-brightness ratio (J2/J1), two quadratic limb-darkening coefficients (LDCs; q1, q2), and a photometric jitter term (σ_LC). The q1 and q2 LDCs are the Kipping (2013) triangular sampling parameterization of the standard quadratic LDCs u1 and u2, where q1 = (u1 + u2)^2 and q2 = u1/(2(u1 + u2)). Given their similarity, we assume the primary and secondary have the same LDCs. With the exception of the photometric jitter terms, which are explored in logarithmic space, all parameters are explored in linear space.
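For reference, the mapping between the sampled (q1, q2) and the physical quadratic LDCs, together with its inverse (a small sketch; the function names are ours):

```python
import numpy as np

def q_to_u(q1, q2):
    """Kipping (2013) triangular sampling -> quadratic LDCs (u1, u2)."""
    sq1 = np.sqrt(q1)
    return 2.0 * sq1 * q2, sq1 * (1.0 - 2.0 * q2)

def u_to_q(u1, u2):
    """Inverse map, as given in the text."""
    return (u1 + u2) ** 2, u1 / (2.0 * (u1 + u2))
```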
We fit detrended light curves in this approach. For TESS, we use the Gaussian process model of Section 3.3 to remove stellar variability. To reduce computation time, we only fit the TESS light curve in 1.1 day windows centered on the superior and inferior conjunctions (determined from initial e and ω values from an orbit fit to the RV measurements). For the LCO r′- and I-band light curves, we fit a line to the out-of-eclipse regions, which is appropriate for the timescale of variability we observe in the TESS light curve, and normalize the light curve with that fit.
Certain choices in the measurements that are fit are made to reduce the effect of systematic and/or correlated measurement errors. Similarly, choices in the fit parameters themselves are made to reduce covariance between fit parameters. For the stellar RVs, we fit the primary RV (RV1) and the difference between the primary and secondary RV (RV1−RV2) in order to reduce the effect of correlated RV errors due to epoch-dependent shifts in the wavelength calibration (i.e., correlated shifts in RV1 and RV2). Fitting the RV difference also reduces the fit's dependence on the zero-point difference between the SALT-HRS and HARPS instruments. For the fit parameters, we elect to fit the sum of the velocity semi-amplitudes (K1 + K2) and the mass ratio (q), as opposed to K1 and K2, to reduce the covariance between these parameters and the center-of-mass velocity.
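For context, the component masses follow from this parameterization through the standard SB2 relations. A minimal sketch (the function name is ours, and the K1+K2 value below is illustrative, chosen to reproduce the masses reported in Section 5.2 rather than quoted from the fit):

```python
import numpy as np

# Standard SB2 constant: masses in Msun for K in km/s and P in days
SB2_CONST = 1.036149e-7

def component_masses(K_sum, q, P_d, e, i_deg):
    """M1, M2 from the (K1+K2, q) parameterization; q = M2/M1 = K1/K2."""
    K2 = K_sum / (1.0 + q)
    K1 = K_sum - K2
    f = SB2_CONST * (1.0 - e**2) ** 1.5 * K_sum**2 * P_d
    sin3i = np.sin(np.radians(i_deg)) ** 3
    return f * K2 / sin3i, f * K1 / sin3i

# Near-equal-mass, near-edge-on orbit with P and e from this work:
print(component_masses(71.5, 1.0, 10.7148, 0.2969, 89.0))  # ~ (0.18, 0.18)
```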
Our analysis assumes that gravitational darkening, ellipsoidal variations, reflected light, and light-travel-time corrections are all negligible. We confirm this by creating a model with our best-fit values using the eb (Irwin et al. 2011) package (a C and python implementation of the well-established Nelson-Davis-Etzel binary model used in the EBOP code and its variants; Etzel 1981; Popper & Etzel 1981), finding the deviations from our simplified model are a factor of ∼30 smaller than the uncertainty of our highest-precision photometric data set (r′), and a factor of ∼40 smaller than our radial velocity precision. The most significant astrophysical ingredient missing from our model is the effect of star spots, which we address in Section 6.2. Table 3 lists our model's fit parameters and their associated priors. In general, the bounds provided by our uniform priors (U) do not influence the parameter exploration but are listed for transparency. The only exceptions are the uniform priors on q1 and q2, which bound the physical parameter space of the LDCs. Although it is common practice to subject the exploration of q1 and q2 to Gaussian priors on the true quadratic LDCs (u1, u2) based on predictions of their filter-specific values, theoretical LDC predictions can carry significant systematic offsets for cool stars (see Section 6.1). For this reason, we do not place priors on the derived u1 and u2 values. The remaining priors on the radius ratio and central surface-brightness ratios are described in the following section.
Priors Informed by Spectroscopic Analysis
In a traditional, double-eclipsing EB system, the combination of the primary and secondary eclipses is sufficient to constrain the central surface-brightness ratio (J2/J1) and radius ratio (R2/R1), such that an informed prior on either is not strictly required. Even so, constraints from spectroscopy have been used in many previous analyses (e.g., Stassun et al. 2006). Recent work has shown that these parameters can be independently constrained to a greater degree with measurements of the wavelength-dependent stellar flux ratio from high-resolution spectra and/or joint spectral energy distribution (SED) fitting (e.g., Kraus et al. 2017; Torres et al. 2019; Gillen et al. 2020). The impact of spectroscopic constraints is far-reaching, as they directly affect other fit parameters (a/(R1 + R2), cos i) and derived quantities (M1, M2, a, R1, R2). In the case of TOI 450, its single grazing eclipse necessitates an informed prior in order to perform a meaningful fit to the system. In practice, the single eclipse alone limits the inclination such that measurements of the stellar radii can be made with ∼30% precision. However, this is insufficient to rigorously test stellar evolution models, and ignores valuable information contained in our spectra. In this section we describe the construction of a joint surface-brightness-ratio-radius-ratio prior.
With the wavelength-dependent optical flux ratios measured from the SALT-HRS spectra (Section 3.6) and the compiled broadband optical and NIR photometry (Section 3.8), we fit the combination of two synthetic stellar templates from the BT-SETTL atmospheric models (Allard et al. 2013) within an MCMC framework using emcee. We restrict our comparison to solar metallicity models and a surface gravity log(g) of 5. We test other surface gravities and find the effect is negligible. Thus, T eff uniquely determines the model selection.
The six free parameters are the primary T_eff (T_P), the companion T_eff (T_C), a scale factor for each star (S1 and S2), and two parameters that describe underestimated uncertainties in the unresolved photometry (s1, in mags) and in the spectroscopic flux ratios (s2, fractional). The scale factors describe the ratio of the measured flux to that of the model. For each step in the MCMC, we scale and combine the two model spectra to form an unresolved spectrum. We convolve this spectrum with the relevant filter profiles (e.g., Cohen et al. 2003; Mann & von Braun 2015), which we compare directly with the observed SED photometry (10 photometric bands). We also compute the spectroscopic flux ratio in optical bands matching the output from Section 3.6 (30 orders). Constraints from the SED and flux ratios are weighted equally in the likelihood function, assuming Gaussian errors after adding the s parameters in quadrature with the measurement errors.
The MCMC explores the scale factors using log-uniform priors, and all other parameters using linear-uniform priors. We run the fit with 20 walkers for 10,000 steps following a burn-in of 2000 steps. This is more than sufficient for convergence based on the autocorrelation time.
The atmospheric models likely have systematic errors due to missing opacities (Mann et al. 2013). However, the effect is almost identical for both stellar components due to a common model grid and similar temperatures. We also mitigate this effect by shifting our posteriors into parameter ratios. Specifically, we convert the posteriors on T_P and T_C into the corresponding surface-brightness ratios in the r′, TESS, and I bandpasses using the same BT-SETTL models. For the radius, we use the scale factors, which are proportional to R^2/D^2. The two component stars are at the same distance, which makes it trivial to convert the ratio of the scale factors to the radius ratio.
We perform our fit for each of the eight SALT-HRS epochs where the stellar velocity separation is large enough for robust flux-ratio determinations. This is preferable to fitting the average of the eight, because the epochs span a range of rotational phases (Figure 4), capturing the range of flux ratios presented by the system. Joining the posteriors of the derived parameters, J2/J1 and R2/R1, we create a Gaussian kernel-density estimate (KDE) for each filter (r′, TESS, and I), which serves as the prior for our eclipse model. Figure 11 presents the 68% contours of the TESS-specific posteriors for individual epochs (top panel) and a contour plot of the combined posterior from which we compute the Gaussian KDE (bottom panel). The LCO r′- and I-band versions follow the same basic shape, centering at a radius ratio and surface-brightness ratio of 1.
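Constructing such a prior is straightforward with a Gaussian KDE over the joined posterior draws. A minimal sketch, assuming samples is a (2, N) array of (J2/J1, R2/R1) draws (the names are illustrative):

```python
import numpy as np
from scipy.stats import gaussian_kde

def make_kde_logprior(samples):
    """Build a log-prior callable from joined posterior samples,
    shape (2, N) for (J2/J1, R2/R1)."""
    kde = gaussian_kde(samples)
    def log_prior(j_ratio, r_ratio):
        p = kde([j_ratio, r_ratio])[0]
        return np.log(p) if p > 0 else -np.inf
    return log_prior
```

In an MCMC fit, this log-prior term would simply be added to the light-curve and RV log-likelihoods at each step.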
Results
We perform our joint RV and light-curve fit for each photometric data set (r′, TESS, I) independently, which we call the individual fits, and a final fit that combines all of the eclipse light curves, which we call the combined fit. Each fit employs 115 walkers, and convergence is assessed following the scheme outlined in Section 3.3. In Table 3 we provide the results for each fit parameter as well as some derived quantities. Values and their uncertainties are the posterior's median and central 68% interval, respectively. We note that, in order to more directly compare the results from the individual fits with single eclipses (r′, I) to the TESS and combined fits, we place a strict Gaussian prior on the period for these two fits, informed by the period posterior from the individual TESS fit. Figure 12 presents the RV orbital solution from our combined fit in the RV1 and RV2 (top panels) and RV1 − RV2 (bottom panels) spaces, along with their residuals in km s−1 and in units of the measurement error (σ). RVs are presented as a function of the orbital phase, where φ = 0 corresponds to periastron passage. In the first O − C panel of the RV1, RV2 panel set (top panels), specific SALT-HRS epochs show correlated errors where both the primary and secondary velocities are offset in the same direction from the best-fit model. Specifically, the measurements at orbital phases φ = 0.04, 0.11, and 0.50 highlight our motivation for fitting RV1 and RV1 − RV2, as opposed to RV1 and RV2.
Figure 13. Phase-folded eclipse light curves in the r′ (blue diamonds), TESS (orange circles), and I (red squares) bandpasses. The best-fitting eclipse model from our combined fit is over-plotted for each filter in its associated color. Horizontal lines at the right of the plot highlight the eclipse depth in each filter. A strong color dependence is observed due to wavelength-dependent limb-darkening.
Figure 13 presents the r′, TESS, and I eclipse light curves with the combined fit model overlaid. Horizontal lines to the right show the eclipse depth in each filter. The residuals for each filter are also provided in the subsequent panels. Here the wavelength dependence of the eclipse depth is clear, where the shortest wavelength (r′) has the shallowest depth. This behavior is expected for a grazing eclipse, due to the wavelength dependence of limb-darkening. The same behavior could, in principle, result from specific spot patterns, which we discuss further in Section 6.2. Here, given our limited prior knowledge of the LDCs, we find they are sufficiently flexible to describe the system's wavelength-dependent limb-darkening and any other chromatic effects that may be at play due to spots.
We find good agreement between the fit variations. The largest differences exist in the LDCs, whose values shift and become more constrained in the combined fit. The corresponding radial brightness profiles are presented in Figure 14 and discussed further in Section 6.1. The r′ LDCs show the largest change between the individual and combined fits (∼1σ). The difference has a negligible impact on the derived properties, in part because the LDCs are poorly constrained in both fits. Most of the remaining variation occurs between the orbital inclination (i) and the normalized orbital separation (a/(R1+R2)), which are covariant while producing the same derived radii between the fits.
In light of the agreement between the fit variations, we adopt the combined fit as fiducial. The result is a stellar twin system consisting of two 0.177 M⊙ stars with radii of 0.35 R⊙ on a short-period (10.714762 d), eccentric (e = 0.2969) orbit. Formal uncertainties on the masses are ∼0.2%. Formal radius uncertainties are ∼1%, but we address potential sources of systematic uncertainty in Section 6.2. The radii are larger than the MS prediction, consistent with our expectation that ∼40 Myr stars of this mass should still reside on the pre-MS.
We note that the individual radii returned by our two-component, synthetic-template fit in Section 5.1, 0.358+0.008−0.011 and 0.361+0.010−0.008 R⊙, are systematically larger, but in fair agreement (just over 1σ). From our empirical, single-component fit in Section 3.8, we assume both stars have the same T_eff and luminosity (reasonable given our results in this section). Using the Gaia distance to compute the bolometric luminosity, we compute radii of 0.348 ± 0.023 R⊙, in better agreement, albeit with a larger uncertainty.
ASSUMPTIONS IN ECLIPSE FITTING
The largest assumptions made in our modeling of the binary eclipses are that the stellar surfaces are a single temperature (spot free) and that their radial brightness profiles can be described with a quadratic limb-darkening law. The former we know to be false, given the rotational modulation seen in the TESS light curve (Figure 1) and the flux-ratio variability we observed in our SALT-HRS spectra (Figure 5). The latter may not be categorically false, but it has been shown that even if a star's radial brightness profile can be described by a quadratic law, the theoretical predictions have large systematic offsets for cool stars (Patel & Espinoza 2022). In the following subsections, we attempt to determine the impact of these assumptions, particularly with respect to the derived stellar radii.
Figure 14. Filter-dependent radial brightness profiles from our individual-filter (left) and combined (right) fits. Top panels present the surface brightness with respect to µ. The bottom panels are with respect to the normalized radius coordinate. The median profile for each filter is shown with a thick line, and 50 random draws from the limb-darkening-coefficient posteriors are shown as thin lines of the associated color. In the right panels, dashed lines with black backgrounds represent the theoretical predictions. The vertical gray lines mark the maximal radius occulted by the eclipse (solid) and its uncertainty (dashed), and arrows indicate the portion of the plot that corresponds to the eclipsed area.
Limb Darkening
In Figure 14 we present the best-fit, filter-dependent radial surface-brightness profiles for the individual filter fits (left) and the combined fit (right). The top panels present the profile with respect to µ (= √(1 − (r/R)²)); the bottom panels are plotted as a function of the normalized radius coordinate (r/R). The vertical lines mark the innermost radius occulted during the eclipse, which bounds where our data are able to apply constraints. In the individual fits, we find that the LDCs are largely unconstrained, as shown by the wide range of faint profiles, which are random draws from the LDC posteriors.
Demanding that all filter light curves correspond to the same orbital and stellar parameters, as we do in the combined fit, we find that the LDCs are much more constrained and that the r′ profile falls off more steeply. This difference affects the interplay between the orbital inclination and the normalized orbital separation (a/(R1 + R2)), but as discussed above (Section 5.2), these do not have a significant effect on the derived radii. The combined fit highlights the value of a simultaneous multicolor fit in determining accurate LDCs.
In the right panels of Figure 14 we also present the theoretical predictions for each filter as dashed black-and-colored lines. Values are the mean of predictions from Claret & Bloemen (2011), Claret (2017), and the Exoplanet Characterization Toolkit (Bourque et al. 2021). The I-band predictions are the only ones that agree with our fit values within 1σ; however, the I and TESS curves generally trace the range of profiles allowed by our data. The largest discrepancy exists for r′, where theory predicts a shallower falloff than our best-fit values.
To determine the effect of simply assuming the theoretical values, or of applying a narrow prior on the LDCs, as is sometimes done in transit and eclipse fitting, we perform a combined fit fixing the LDCs to the predicted values (u1,r′ = 0.59, u2,r′ = 0.25, u1,TESS = 0.17, u2,TESS = 0.55, u1,I = 0.29, u2,I = 0.45). From this fit, the derived radii (R1 = 0.352 ± 0.003 R⊙, R2 = 0.348 ± 0.005 R⊙) agree with the fiducial combined fit within the 68% confidence intervals. The Bayesian information criterion (BIC) values for these two models are equivalent (the fixed-LDC model is 0.05% lower), indicating that our data are just at the point where they are able to provide meaningful constraints on the LDCs. This may be because the grazing eclipse only probes a small fraction of the stellar radius, or because our photometry does not have the precision to capture the subtle variations in the eclipse shape due to limb-darkening. With these findings, we conclude that our assumption of a quadratic law, and whether the LDCs are adopted from theory or fit, do not affect our fiducial fit results.
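For reference, the comparison uses the standard BIC definition; a one-line sketch:

```python
import numpy as np

def bic(max_log_likelihood, n_params, n_data):
    """Bayesian information criterion: k ln(n) - 2 ln(L_max)."""
    return n_params * np.log(n_data) - 2.0 * max_log_likelihood
```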
This discussion has not included the contribution from star spots, which is discussed in more detail in the following section, but an important caveat is worth including here. Briefly, because this system's eclipse is grazing, limb-darkening has a large effect on the wavelength-dependent eclipse depth (shallower eclipses at shorter wavelengths). The same behavior could be expected from occulting a heavily spotted area. Because these two effects are degenerate, and the spot orientation is unknown, the LDCs fit here should not be taken as empirical truth for pre-MS M4.5 stars, but rather as the values that best account for the combined contribution of limb-darkening and this system's specific spot properties. This does not necessarily mean that the theoretical LDCs are correct; Patel & Espinoza (2022) found systematic offsets even for inactive solar-type stars, but some of the larger offsets seen for low-mass stars may be inflated by the unaccounted presence of spots, which we discuss below.
Star Spots
Star spots can alter the depths and detailed shapes of eclipse light curves. These effects are typically ignored, but can produce biases in derived radii that are significantly larger than the typical ∼1% formal uncertainties of an unspotted fit (e.g., Section 5.2). Star-spot crossing events produce the most obvious effect by introducing structure into the eclipse light curve (see Han et al. 2019, for examples of spotted EBs from Kepler). Less obvious and more problematic are the effects of uneclipsed spots, or the eclipse of large spot complexes, which can bias radius measurements (e.g., Rackham et al. 2018). Here we assume that spots are the dominant surface features, and that faculae and plages can be ignored. Young, active solar-type stars are found to be spot-dominated (Montet et al. 2017), which we assume extends to the active M stars in TOI 450.
The key parameter defining the direction and magnitude of the effect (deeper vs. shallower eclipses) is the ratio of the average, projected spot-covering fraction, f_s, to the spot-covering fraction of the eclipsed area, f_s,ecl. This ratio encodes the relative flux that each region carries (eclipsed vs. uneclipsed), which determines the eclipse depth. For instance:
1. If the ratio is unity (f_s = f_s,ecl), independent of the specific f_s value or the presence of discrete spot-crossing events, the average eclipse depth will be the same as for an unspotted system.
2. If the ratio is greater than one (f_s > f_s,ecl), i.e., a less-spotted eclipsed area, the eclipse depth will increase compared to an unspotted model, because the eclipsed region carries a larger relative share of the total flux.
3. If the ratio is less than one (f_s < f_s,ecl), i.e., a more-spotted eclipsed area, the eclipse depth will decrease compared to an unspotted model, because the eclipsed region carries a smaller relative share of the total flux.
In transiting exoplanet systems, this is known as the transit light source effect (Rackham et al. 2018, 2019), and it has straightforward impacts on the derived planetary radii: transiting less-spotted areas biases radii to larger values; transiting more-spotted areas biases radii to smaller values. In EBs, predicting the effect that spots have on derived radii is less straightforward. Combinations of the radius ratio, surface-brightness ratio, inclination, and orbital separation can conspire to produce counterintuitive results that require detailed modeling. This further emphasizes the value of priors informed by spectroscopy to limit areas of parameter space (see Section 5.1). Our ability to assess the impact of spots is also bolstered in this case by access to multicolor eclipse light curves. The change in the eclipse depth has a strong wavelength dependence, where any effect is more pronounced at shorter wavelengths, where the spot contrast is larger.
Measuring f_s or f_s,ecl is challenging in the best-case scenarios and is often not feasible. Light-curve variability amplitudes are only sensitive to the longitudinally asymmetric components of spots and generally underestimate the spot-covering fraction (Rackham et al. 2018; Guo et al. 2018; Luger et al. 2021). Multicolor time-series photometry can diagnose the spot properties through wavelength-dependent modulation amplitudes, but with typical ground-based precision this approach is only feasible for the most extreme spotted systems (e.g., T Tauri and RS CVn stars). NIR spectra can probe the projected spot-covering fraction through two-temperature spectral decomposition (e.g., Gully-Santiago et al. 2017; Gosnell et al. 2022; Cao & Pinsonneault 2022), but do not provide information on the spot orientation. Doppler imaging can map the distribution of hot and cold regions (Vogt et al. 1999; Strassmeier 2002), but requires bright, rapidly rotating stars. Finally, NIR interferometry can reconstruct stellar surfaces, but it is limited to the closest stars with large angular sizes (Roettenbacher et al. 2016). All of these approaches are made more difficult in the presence of a binary companion.
Without the data or means to constrain f_s or f_s,ecl directly, we begin by searching for temporal variability in the eclipse light curves caused by star spots. Visually, we do not find any coherent structures in the light-curve residuals, and we measure χ²_red values near unity for each of the eclipses (see Figure 13). The exception is the I-band eclipse (χ²_red = 1.8), which has deviations that are likely not astrophysical (e.g., variable cloud and/or water-vapor opacity). They occur both in and out of eclipse and are not present in the contemporaneous r′ eclipse, where the signature of spots should be enhanced (shorter wavelength). For some of the individual TESS eclipses, the best fit appears systematically above or below the data (while still within the errors). This behavior could result from a variable spot-covering fraction between eclipses, which is plausible given the difference between the stellar rotation and orbital periods (Figure 4). We perform a joint RV and eclipse light-curve fit for each individual TESS eclipse and compare the eclipse depth to our GP stellar variability model. Under simplified spot orientations, namely those where f_s and f_s,ecl are correlated with stellar rotation, the eclipse depth will correlate with the total flux. We do not find any significant trend between the two, or any statistically significant variability in the TESS eclipse depth. From this analysis, at the precision of our data, we do not find evidence for spot-induced temporal variability in the eclipse events.
To address how time-averaged spot properties may be biasing our derived radii, we perform additional fits to the combined data set (r′, TESS, I, RVs), making various assumptions about the spot properties. In this approach, we scale the eclipse model by the ratio of the eclipse depth in a spotted scenario (δ_spot) to the eclipse depth without spots (δ_0). Ignoring limb-darkening, which, to first order, will be the same for a spot-free and a spotted star, the spot-free primary eclipse depth is:

\delta_0 = \frac{F_{\rm out} - F_{\rm ecl}}{F_{\rm out}} = \frac{\Omega_{\rm ecl}\, J_1}{\Omega_2 J_2 + \Omega_1 J_1},   (1)

where F_out and F_ecl are the fluxes out of eclipse and in eclipse, respectively. These are rewritten in terms of the projected surface areas of the stars (Ω_1, Ω_2), the area of the eclipsed region (Ω_ecl), and the stellar surface-brightness ratio (J_2/J_1). For a spotted system, the in- and out-of-eclipse fluxes now contain contributions from the spotted and ambient regions. In this case, the primary eclipse depth can be written as:

\delta_{\rm spot} = \frac{\Omega_{\rm ecl,A}\, J_{1,A} + \Omega_{\rm ecl,S}\, J_{1,S}}{\Omega_{2,A} J_{2,A} + \Omega_{2,S} J_{2,S} + \Omega_{1,A} J_{1,A} + \Omega_{1,S} J_{1,S}},   (2)

where the same notation holds, but is now subscripted by an "S" or "A" to indicate the spotted and ambient surfaces, respectively. To arrive at the desired quantity, we can divide these two equations, resulting in:

\frac{\delta_{\rm spot}}{\delta_0} = \frac{\left(1 + f_{s,{\rm ecl}}(C_1 - 1)\right)\left[\left(\frac{R_2}{R_1}\right)^2 \frac{J_2}{J_1} + 1\right]}{\left(\frac{R_2}{R_1}\right)^2 \frac{J_2}{J_1}\left(1 + f_{s,2}(C_2 - 1)\right) + 1 + f_{s,1}(C_1 - 1)},   (3)

where we have simplified some variables to align with our eclipse-fitting parameters. We replace Ω_2/Ω_1 with (R_2/R_1)^2, and define f_s,ecl as the spot-covering fraction of the area eclipsed on the primary star (Ω_ecl,S/Ω_ecl), C_1 and C_2 as the ratios of the spotted to ambient surface brightness on the primary and secondary, respectively (e.g., J_1,S/J_1,A), and f_s,1 and f_s,2 as the spot-covering fractions of the primary and secondary, respectively (e.g., Ω_1,S/Ω_1). This ratio is filter specific, as J_2/J_1, C_1, and C_2 are wavelength dependent.
In its full form above, the spot prescription introduces five additional fit parameters (f_s,ecl, f_s,1, f_s,2, C_1, C_2) that scale the eclipse depth and that are largely degenerate with one another; fitting all of them is unlikely to be supported by the present data. We can, however, make simplifying assumptions, given our prior knowledge of the system, that allow us to probe different extremes of the parameter space (Sections 6.2.1, 6.2.2) and to perform a fit of the spot properties under certain assumptions (Section 6.2.3).
In each of the exercises, we leverage our knowledge of the TOI 450 stars and their similarity by pre-computing C, assuming it is the same for each star. We do so by combining model spectra from the BTSettl-CIFIST suite (Baraffe et al. 2015) and convolving them with each filter profile.
We set an ambient photospheric temperature of 3100 K and a spot-photosphere temperature ratio of 0.92 (Berdyugina 2005;Afram & Berdyugina 2015;Fang et al. 2018a;Rackham et al. 2019). For the r , TESS, and I filters, we compute spot to ambient surface-brightness ratios of 0.29, 0.53, and 0.63, respectively.
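A minimal sketch of this depth-ratio scaling, implementing Equation 3 as reconstructed above with the filter-specific C values quoted in the text (the function and dictionary names are ours):

```python
# Spot-to-ambient surface-brightness contrasts C from the text
C_FILTER = {"rp": 0.29, "TESS": 0.53, "I": 0.63}

def depth_ratio(fs_ecl, fs1, fs2, C1, C2, rr, jr):
    """delta_spot / delta_0 (Equation 3 as reconstructed above).
    rr = R2/R1, jr = J2/J1 (central surface-brightness ratio)."""
    num = (1.0 + fs_ecl * (C1 - 1.0)) * (rr**2 * jr + 1.0)
    den = rr**2 * jr * (1.0 + fs2 * (C2 - 1.0)) + 1.0 + fs1 * (C1 - 1.0)
    return num / den

# Sanity checks against the limiting cases discussed below:
C = C_FILTER["TESS"]
print(depth_ratio(0.0, 0.3, 0.3, C, C, 1.0, 1.0))  # > 1 (deeper eclipse)
print(depth_ratio(1.0, 0.3, 0.0, C, C, 1.0, 1.0))  # < 1 (shallower eclipse)
```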
Eclipsing Ambient Photosphere
In this scenario we assume that the eclipse only passes over the ambient photosphere (f_s,ecl = 0), but that there exists some average spot filling factor. Here we assume that both stars have the same f_s. Under these assumptions, Equation 3 simplifies to:

\frac{\delta_{\rm spot}}{\delta_0} = \frac{1}{1 + f_s(C - 1)}.   (4)
Using Equation 4, we select four spot-covering fractions (f_s = 0.1, 0.2, 0.3, 0.4), scale the eclipse model by δ_spot/δ_0, which is always > 1, deepening the eclipse, and perform a combined fit. Table 4 presents a subset of the results of these fits for parameters of interest. Here we find that the derived radii increase with the spot-covering fraction, while the inclination decreases (larger impact parameter) to maintain the same eclipse duration. At f_s = 0.2, the radii differ from the fiducial fit by more than 1σ. At f_s = 0.4, the radii have increased by more than 5%. In all cases, the radius ratio is consistent with unity. Figure 15 provides a graphical "toy-model" representation of each model at first contact, showing the corresponding eclipse light-curve model with and without the effect of spots. The radial brightness profiles for each model are provided in the bottom row. The comparison of these eclipse curves highlights the impact that spots have on eclipse depths, and the variety of spot properties, orbital orientations, and derived radii that produce equivalent light curves. For the "Eclipsing Ambient" models specifically, we see that increasing f_s deepens the eclipse (i.e., the eclipse is shallower in the "without Spots" row) and increases the difference between the eclipse depths in the different filters. The latter effect requires more exaggerated differences in the filter-dependent radial brightness profiles to match the observed eclipse depths. Despite their ability to reproduce the observed eclipse depths, there is circumstantial evidence to disfavor high f_s values in this scenario. For instance, even in this grazing orientation, the primary eclipse covers roughly 12% of the projected stellar surface, which makes high f_s values contrived, as spots must be excluded from the eclipsed region. Also, higher f_s values require increasingly steep radial brightness profiles to match the observed eclipse depth. Even with significant systematic uncertainties in the theoretical LDCs, the high-f_s radial brightness profiles are likely unphysical.
Low f s models remain plausible. Including these possibilities requires roughly doubling the uncertainty in the derived radius from the fiducial fit.
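As an illustration of this depth-scaling step, a hedged sketch using the batman transit modeler (mentioned later in this section in connection with limb-darkening laws); every numerical value below is a placeholder rather than a fitted TOI 450 parameter, and flux dilution by the luminous secondary is ignored for brevity:

```python
import numpy as np
import batman

params = batman.TransitParams()
params.t0 = 0.0                 # time of eclipse center [d]
params.per = 10.71              # orbital period [d] -- placeholder
params.rp = 0.99                # R2/R1 for a near-twin pair -- placeholder
params.a = 30.0                 # a/R1 -- placeholder
params.inc = 89.0               # grazing geometry -- placeholder
params.ecc = 0.0                # eccentricity neglected here for simplicity
params.w = 90.0
params.u = [0.4, 0.2]           # quadratic LDCs -- placeholder
params.limb_dark = "quadratic"

t = np.linspace(-0.15, 0.15, 1000)
model = batman.TransitModel(params, t)
flux_unspotted = model.light_curve(params)

# Scale the eclipse depth by delta/delta_0 (>1 deepens, <1 shallows it)
depth_ratio = 1.08              # e.g. from Eq. 4 for some assumed f_s, C1, C2
flux_spotted = 1.0 - depth_ratio * (1.0 - flux_unspotted)
```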
Eclipsing Spots
In the opposite extreme, we assume that all latitudes on the primary star below the highest extent of the grazing eclipse are spotted (i.e., f s,ecl = 1), while the rest of the primary and the secondary are spot free (f s,2 = 0). While this might represent a pathological spot orientation, polar spots are often observed in Doppler imaging studies (Strassmeier 2009). The transit depth ratio for this case is:

$$\frac{\delta}{\delta_0} = \frac{C_1\left[1 + (R_2/R_1)^2\,(J_2/J_1)\right]}{1 + f_{s,1}\,(C_1 - 1) + (R_2/R_1)^2\,(J_2/J_1)}. \tag{5}$$

We perform a combined fit scaling the eclipse model by δ/δ 0 , which is always < 1, producing shallower eclipses for a given set of parameters. For this exercise, f s is dependent on the orbital and stellar parameters and is computed on-the-fly for each model. The large effect that occulting a spotted region has on the eclipse depth sends the fit to extreme regions of the allowed radius-ratio and surface-brightness-ratio parameter space. Select parameters from the fit are presented in Table 4, with a graphical representation in Figure 15. To balance the reduced eclipse depth, this fit reduces the surface-brightness ratio (less flux dilution from the secondary) and increases the relative occulted area by decreasing the primary radius by ∼20% and decreasing the impact parameter (higher inclination; less grazing). The secondary appears fully covered in spots in Figure 15, but this is instead the realization of the extreme stellar surface-brightness ratio this fit prefers. The corresponding primary f s is ∼0.32. This fit resides in a much less likely area of the radius-ratio-surface-brightness-ratio prior (Section 5.1) compared to the other fits above, but it is the multicolor eclipse information that allows us to completely rule out this scenario. Not only is the fit unable to reproduce the relative eclipse depths in r′, TESS, and I (Figure 15), the corresponding unspotted model predicts steeper limb-darkening at redder wavelengths, which is the opposite of theoretical predictions and empirical findings (e.g., Müller et al. 2013).
Fit Spots
In this last scenario, we assume the spot-covering fractions of both stars are the same (f s = f s,1 = f s,2 ) and allow both f s and f s,ecl to be fit as free parameters. With this setup, the eclipse depth ratio becomes

$$\frac{\delta}{\delta_0} = \frac{\left[1 + f_{s,\rm ecl}\,(C_1 - 1)\right]\left[1 + (R_2/R_1)^2\,(J_2/J_1)\right]}{1 + f_s\,(C_1 - 1) + (R_2/R_1)^2\,(J_2/J_1)\left[1 + f_s\,(C_2 - 1)\right]}, \tag{6}$$

which we use to scale the eclipse depth. Although f s is unknown for TOI 450, we place a prior on its value based on the spot-covering fractions measured from SDSS-APOGEE spectra (Majewski et al. 2017) of young cluster members (Cottaar et al. 2014; Donor et al. 2018; Cao & Pinsonneault 2022; Cao, L., private communication). The decreasing trend with age predicts f s ∼ 0.3 for an age of 40 Myr, which we adopt as the center of our normal-distribution prior with a width of 0.15 (N(0.30, 0.15)), allowing support for f s = 0 models. In addition to this prior, f s and f s,ecl are limited to values between zero and one (f s,ecl does not have an informed prior). This approach is similar to that developed by Irwin et al. (2011) in its effect on eclipse depths, but it does not attempt to match the out-of-eclipse variability or absolute flux values between filters, as we are working with normalized fluxes.

(Figure 15 caption: Comparison of fits including spots. The top row presents a possible realization of the stellar surfaces at first contact of the primary eclipse, with the derived radii labeled for each star. The associated spot-covering fraction (f s) and spot-covering fraction of the eclipsed area (f s,ecl) are provided below. The second and third rows present the eclipse models without and with the effect of spots included for each filter, respectively. Horizontal dotted lines in these rows mark the best-fit eclipse depth from the fiducial fit (i.e., the observed eclipse depth; Figure 13). The last row presents the limb-darkening profiles for each filter. The leftmost column is the fiducial fit (Section 5.2). The middle two columns are a subset of the "eclipsing ambient" models (Section 6.2.1). The fourth column is the "eclipsing spots" model (Section 6.2.2), which can be ruled out based on its poor match to the filter-dependent eclipse depths and its reversal of the expected filter-dependent limb-darkening trend; its secondary appears completely spotted but instead has a lower surface brightness than the primary, which is favored in that fit. The rightmost column is the fit-spots model (Section 6.2.3), from which we adopt our definitive measurements. Diverse spot configurations and derived radii can reproduce the observed eclipse depths.)
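As a concrete illustration of this prior setup, a minimal sketch of a truncated-normal f s prior using scipy (the numerical values come from the text above; how this plugs into the sampler is our assumption):

```python
from scipy.stats import truncnorm

# f_s prior: N(0.30, 0.15) truncated to the physical range [0, 1]
mu, sigma = 0.30, 0.15
a, b = (0.0 - mu) / sigma, (1.0 - mu) / sigma   # bounds in standard units
fs_prior = truncnorm(a, b, loc=mu, scale=sigma)

log_prior = fs_prior.logpdf(0.25)                # log-density at a trial f_s
draws = fs_prior.rvs(size=1000, random_state=0)  # samples for prior checks
```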
We perform a combined fit for this scenario and present its results in Table 4 and Figure 15. The eclipses themselves do not constrain f s , and as such, our fit returns the input prior.
The fit does constrain f s,ecl , however, returning a value of 0.39 ± 0.15. The spot-parameter posteriors are positively correlated and correspond to an f s,ecl /f s value of 1.4 +0.7 −1.1 . The corresponding change in the eclipse depth in the TESS bandpass is a scaling δ/δ 0 = 0.94 ± 0.05. The stellar radii for this model are smaller than the fiducial values, but still consistent, owing to the larger uncertainty in this spotted model (∼2% precision). The BIC for this model is marginally higher than that of the fiducial fit (by 0.02%), but not different enough to rule out its use.
In this scenario, the fit favors models in which spots act to make the eclipse shallower and reduce the difference in eclipse depth across the three filters. The LDCs of this fit are in better agreement with the theoretical predictions: both the r′ and I values are in agreement, and while the TESS values are not in strict agreement, they generally trace the same radial brightness profile. This result may signify that the sharp radial brightness profiles required to match the eclipse depths in the absence of spots provide a worse match to the eclipse shape, and that reducing the eclipse depth with spots allows the LDCs to more easily describe the eclipse shape. This distinction is largely possible because we are able to jointly constrain the LDCs of three filters simultaneously.
To test the impact of the assumed f s prior, we perform an additional fit with a narrower, lower f s prior, N(0.1, 0.1). This fit returns stellar and orbital parameters consistent with the previous fit. As before, the f s posterior returns the prior. The fitted f s,ecl , 0.26 ± 0.13, is again higher than f s in this fit, but the f s and f s,ecl pair results in the same transit depth ratio. This exercise reveals that our approach does not constrain the spot properties themselves, only whether the fit favors an eclipsed area that is more or less spotted than the global average.
We perform two additional tests to assess the impact of our choice of limb-darkening prescription and of the spot-to-ambient temperature contrast. In the first, we implement a square-root limb-darkening law (Klinglesmith & Sobieski 1970), which has been shown to provide a better approximation of the NIR stellar intensity profile of late-type stars (van Hamme 1993). We did not select this limb-darkening law in the fits above because it does not have an analytical implementation in batman and is too computationally expensive for the variety of fits we have explored. With the N(0.30, 0.15) f s prior, we derive radii of 0.344 +0.004 −0.005 and 0.343 +0.006 −0.005 R⊙ for the primary and secondary, respectively, in good agreement with the quadratic limb-darkening result above (Table 4). Lastly, we perform two quadratic limb-darkening fits (P(f s ) = N(0.30, 0.15)), setting the spot-to-ambient temperature contrast to 0.89 and 0.95, as opposed to the 0.92 used above. These result in the following: R 1 = 0.344 ± 0.007 and R 2 = 0.345 +0.007 −0.006 R⊙ for T spot /T amb = 0.89, and R 1 = 0.347 +0.006 −0.005 and R 2 = 0.347 +0.006 −0.005 R⊙ for T spot /T amb = 0.95. In each of these fit variations, the derived radii are consistent with the initial fit in this section within 1σ.
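For reference, the square-root law mentioned above has the standard form I(μ)/I(1) = 1 − c1(1 − μ) − c2(1 − √μ); a one-line sketch (the coefficients below are placeholders, not fitted values):

```python
import numpy as np

def sqrt_limb_darkening(mu, c1, c2):
    """Square-root law: I(mu)/I(1) = 1 - c1*(1 - mu) - c2*(1 - sqrt(mu))."""
    return 1.0 - c1 * (1.0 - mu) - c2 * (1.0 - np.sqrt(mu))

mu = np.linspace(0.0, 1.0, 200)              # mu = cos(theta) across the disk
profile = sqrt_limb_darkening(mu, 0.2, 0.6)  # placeholder coefficients
```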
Adopted Stellar Radii & Spot Summary
We adopt the results of the spotted fit in Section 6.2.3 (f s prior N(0.30, 0.15)) as our definitive measurement, which returns radii of 0.345 and 0.346 R⊙ for the primary and secondary, respectively, with a formal uncertainty of 0.006 R⊙ (∼2% precision). These values are robust to our choice of limb-darkening profile and spot properties. (Other fit and derived values that differ significantly from the fiducial fit are included in Table 4.) We note that the stellar masses are independent of any plausible spot model explored, owing to their weak sin³ i dependence at high inclinations. This approach includes the effect of spots under minimal added model complexity (two additional parameters) and modest assumptions: spots exist on the stellar surfaces, the spot properties of the primary and secondary are the same, and the spot-covering fraction of the eclipsed area can differ from the average, projected value. Regardless of the specific f s value, the fact that this grazing eclipse favors f s,ecl > f s points to a distribution of spots that favors high latitudes (i.e., more polar than equatorial configurations).
Throughout Section 6.2 we have shown that spots can produce significant changes in derived radii. The effects spots have on eclipse depth are largely degenerate with limb-darkening. Allowing the LDCs to vary can mask the effects of spots, producing a wide variety of spot orientations that are consistent with observations. Multicolor eclipse light curves provide important additional constraints that can narrow the range of allowed spot properties. Our analysis finds that including a flexible spot prescription results in best-fit LDC values that are in good agreement with theoretical predictions. This result suggests that rigorous tests of the limb-darkening models require multicolor observations that include the effect of spots.

(Table 4 notes: (a) models disfavored based on their stellar radial brightness profiles and contrived spot geometries; (b) model completely ruled out by the multi-wavelength eclipse light curves; (c) adopted definitive fit.)
In the case of TOI 450, our spot prescription results in a reduction of the stellar radius on the order of ∼2% from the unspotted, fiducial model. This would seem to ease the tension between model radii and observations. However, it should not be assumed that this result will apply to all EB systems; the direction in which spots bias derived radii will be unique to each system. TOI 450's grazing orientation likely makes it more susceptible to this effect. Systems with lower impact parameters, where the area eclipsed is a larger fraction of the total projected area, are less likely to yield f s,ecl values that differ significantly from the global spot-covering fraction. While including the effect of spots is important for obtaining accurate radii and realistic uncertainties, we do not suggest that the tension between observed and model radii can be resolved with spots. This finding is also supported by the spotted EB analysis in Irwin et al. (2011).
COMPARISON TO STELLAR EVOLUTION MODELS
To further constrain the age of the Columba association and to test models of pre-MS evolution, we compare our measurements of TOI 450 to various model isochrones. We select three standard stellar evolution models: BHAC 2015 (Baraffe et al. 2015), the Dartmouth Stellar Evolution Program (DSEP; Dotter et al. 2008), and the MESA Isochrones and Stellar Tracks (MIST; Dotter 2016; Choi et al. 2016). We also select three additional model suites that attempt to correct for the shortcomings of standard models. The PAdova and TRieste Stellar Evolution Code (PARSEC; Bressan et al. 2012) version 1.2S (Chen et al. 2014) introduces an ad hoc relation between T eff and the Rosseland mean optical depth to improve agreement with the mass-radius relation for dwarf stars. The DSEP magnetic models (Feiden & Chaboyer 2012; Feiden 2016) include a prescription for magnetically inhibited convection, which slows pre-MS contraction; we test the version that applies a magnetic field strength in equipartition with the thermal energy. Lastly, the Stellar Parameters Of Tracks with Starspots models (SPOTS; Somers et al. 2020) include a starspot prescription that impedes energy transport near the surface, inflating the stellar radii; we explore the f s = 0, 0.17, and 0.34 versions. In all models, we assume solar metallicity. Figure 16 presents the mass-radius (MR) diagram comparing the TOI 450 components to the models described above. With the exception of PARSEC v1.2S, all of the models predict ages between 30 and 50 Myr, in good agreement with our expectation for a Columba member. The standard models (BHAC 15, MIST, DSEP, SPOTS (f s = 0)) predict ages at or slightly below 40 Myr, while the DSEP Magnetic and spotted SPOTS models (f s = 0.17, f s = 0.34) suggest older ages, between 40 and 50 Myr. The poor performance of the PARSEC v1.2S models may be the result of model alterations that are tailored to improve CMD agreement at field ages and do not carry over to young ages in MR space.
In Figure 17 we make the same comparison, now in the Hertzsprung-Russell (HR; T eff -luminosity) diagram. We compute the T eff and luminosity using two approaches. In the first, we adopt the bolometric flux from the empirical-template SED fit in Section 3.8 and compute the bolometric luminosity assuming the Gaia distance. We then assume both stars have the same luminosity (i.e., divide by two) and compute the T eff using the Stefan-Boltzmann law and the derived radii from our EB fit (Section 6.3). This results in the single black point in Figure 17, whose error bar encompasses the positions of both stars. In the second approach, we adopt the T eff values from the two-component fit of synthetic model spectra to the SED and spectroscopic flux ratios (Section 5.1). The bolometric luminosity is then computed via the Stefan-Boltzmann law using the radii from the EB fit. The formal uncertainties are small enough that we include the measurements for the primary and secondary separately as the blue and red points, respectively. We favor the former, empirical approach as it is less model dependent, but include both, as the latter, synthetic approach is common in the literature (e.g., David et al. 2019). The HR diagram results follow the same trends seen in the MR diagram, but with a larger spread. Standard models (BHAC 15, MIST, DSEP, SPOTS (f s = 0)) predict ages from 20 to 40 Myr. The empirical measurement approach (black circles) provides better agreement with the mass tracks shown with gray lines. The SPOTS (f s = 0.17, f s = 0.34) and DSEP Magnetic models span ages of 40 to 90 Myr and produce better mass agreement with the cooler, synthetic measurement approach (red and blue circles). The PARSEC v1.2S models produce large offsets in this plane as well, predicting ages >100 Myr and masses >0.25 M⊙.
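A minimal sketch of the empirical approach described above (constants rounded; the bolometric flux and distance inputs are placeholders for the SED- and Gaia-derived values, which the excerpt does not quote):

```python
import numpy as np

SIGMA_SB = 5.670e-8   # Stefan-Boltzmann constant [W m^-2 K^-4]
R_SUN = 6.957e8       # solar radius [m]
PC = 3.0857e16        # parsec [m]

def bolometric_luminosity(fbol_w_m2, distance_pc):
    """L = 4*pi*d^2 * F_bol from the SED-derived bolometric flux."""
    d = distance_pc * PC
    return 4.0 * np.pi * d**2 * fbol_w_m2

def teff_from_luminosity(l_w, radius_rsun):
    """Invert the Stefan-Boltzmann law: T_eff = (L / (4*pi*R^2*sigma))**0.25."""
    r = radius_rsun * R_SUN
    return (l_w / (4.0 * np.pi * r**2 * SIGMA_SB)) ** 0.25

# Placeholder inputs: split a hypothetical system flux equally between twins
l_total = bolometric_luminosity(1.0e-13, 80.0)   # hypothetical F_bol, distance
teff = teff_from_luminosity(l_total / 2.0, 0.345)  # ~3100 K for these inputs
```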
To make a quantitative comparison with the models, we perform a two- or one-dimensional interpolation of the models, depending on the comparison in question. To determine the model ages and masses, we use scipy.interpolate.griddata to linearly interpolate the MR and HR diagram model planes. For model radii and T eff at our measured dynamical mass, we use scipy's interp1d for the one-dimensional interpolation. In each case, we pass normal distributions to the functions representing each measurement's value and uncertainty, taking the returned distribution's median and standard deviation as the model value and error.
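A hedged sketch of this Monte Carlo interpolation; the model grid below is a toy pre-MS contraction relation, tuned only so that a 0.177 M⊙ star has R ≈ 0.345 R⊙ near 40 Myr, and is not a real isochrone suite:

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(42)

# Toy isochrone grid: (mass [Msun], radius [Rsun]) nodes with known ages [Myr]
mass_nodes = np.repeat(np.linspace(0.10, 0.30, 21), 16)
age_nodes = np.tile(np.linspace(10.0, 160.0, 16), 21)
radius_nodes = 4.7 * mass_nodes**0.8 * age_nodes**(-1.0 / 3.0)  # toy law

# Propagate measurement uncertainty by pushing normal draws through the grid
n = 10_000
mass_draws = rng.normal(0.177, 0.002, n)    # dynamical mass (placeholder error)
radius_draws = rng.normal(0.345, 0.006, n)  # adopted radius and uncertainty

age_draws = griddata(
    np.column_stack([mass_nodes, radius_nodes]), age_nodes,
    np.column_stack([mass_draws, radius_draws]), method="linear",
)
model_age = np.nanmedian(age_draws)   # model value
model_age_err = np.nanstd(age_draws)  # model error
```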
In Figure 18 we present this quantitative comparison for parameters of interest. The top-left panel presents the model ages for different approaches (i.e., MR diagram, HR diagram empirical and synthetic) compared to the Columba literature age (∼40 Myr). As discussed above, the MR diagram age values are largely consistent with each other and a Columba age, while the HR diagram values show a larger scatter, still centered around ∼ 40 Myr. The top-right panel presents the model mass based on the HR diagram, finding our empirical measurement approach performs the best across different models with typical fractional uncertainties of ∼20%.
In the bottom two panels, we leverage the high precision of our dynamical mass measurement to test the accuracy of models in predicting radii and effective temperatures. The bottom-left panel displays the fractional radius difference for various models at three discrete ages. At 40 Myr, most models show good agreement, predicting the radius to within ±5%. The DSEP Magnetic and f s = 0.34 SPOTS models predict larger radii than we measure (>5%), but would provide better agreement if the Columba age were closer to 50 Myr. In the bottom-right panel, we present the absolute T eff difference for the dynamical mass, again at three discrete ages. The standard models predict T eff values slightly hotter than we observe (≲100 K), while spotted and magnetic models predict cooler temperatures, still within ∼100 K agreement.

(Figure 17 caption excerpt: Gray lines mark mass sequences, labeled in solar masses at the 20 Myr isochrone. The blue and red points are the T eff and luminosity of the primary and secondary, respectively, from the fit of synthetic templates described in Section 5.1. The black point is a fit using empirical stellar templates (Section 3.8), where we assume both stars have the same T eff and radius.)
Despite the high-precision measurements we have obtained for TOI 450, our test of stellar evolution models is hampered by the lack of a precise, independent age and the narrow area of parameter space we are probing with a twin system. This fact is made clear by the comprehensive analysis of coeval Upper Sco EBs by David et al. (2019), whose components span masses from 0.1 to 5 M⊙ at 5-7 Myr. Their analysis highlights that model agreement is mass dependent: most models perform best at intermediate masses (0.3 M⊙ < M < 1 M⊙) and begin to diverge at lower masses. Critically, many of the models diverge in the same way, namely predicting older ages, or equivalently larger radii at a given age, for low-mass stars (the opposite direction of the standard radius-inflation problem). It is unknown whether this behavior continues at ∼40 Myr, so comparing models at TOI 450's masses (0.18 M⊙) alone does not probe the existence of systematic offsets that may be over-predicting the ages of stars at this mass. It is worth noting that David et al. (2019) found the SPOTS models (in their initial implementation; Somers & Pinsonneault 2015) produce the most consistent ages across the mass range explored. The fact that their values do not differ greatly from the other models we compare to may signal that the effect is smaller or absent at ∼40 Myr. A precise Columba age (e.g., a Li depletion boundary measurement) or additional EBs coeval with TOI 450 will be required to test this behavior.
Lastly, we note that model agreement depends on the quantities being compared. All models perform best in the MR plane and diverge in other comparisons that rely on detailed radiative-transfer physics that differ between codes. For instance, the MIST models appear to perform better than the DSEP models in our tests, but Mann et al. (2019) found the opposite in a mass-luminosity (M K ) comparison for field M dwarfs. Additionally, the PARSEC v1.2S and DSEP Magnetic models appear to perform equally well in a CMD comparison of the ∼11 Myr Musca association (Mann et al. 2022), but diverge significantly here. In this light, our analysis emphasizes that pre-MS stellar evolution hosts additional challenges beyond the standard MS radius-inflation problem, particularly at low masses.
CONCLUSIONS
In this work we have characterized TOI 450, a young eclipsing binary in the ∼40 Myr Columba association. Our analysis makes use of multicolor eclipse light curves, allowing us to include the effect of star spots in our eclipse modeling, producing accurate stellar radii with realistic uncertainties. We compare our results to various stellar evolution models to assess their accuracy and refine the age of the Columba association. The conclusions of our study are as follows:

1. TOI 450 is a low-mass eclipsing binary. From our follow-up observations of the nominal exoplanet candidate, consisting of high-angular-resolution imaging and time-series, high-resolution spectroscopy, we find that TOI 450 is an eccentric, near-equal-mass binary whose on-sky orientation produces only a single grazing eclipse. We do not find evidence for additional stellar sources in the system, bound or otherwise. The stars have M4.5 spectral types and effective temperatures of ∼3100 K.
2. TOI 450 is a member of the ∼40 Myr Columba association. We confirm the BANYAN Σ membership of TOI 450 in the Columba association using FriendFinder, a tailored search for coeval, comoving companions motivated by Columba's diffuse on-sky clustering, which can lead to high contamination. Our search recovers many bona fide members of the Columba association, whose CMD distribution is consistent with the 40 Myr Tuc-Hor sequence.
3. Priors from high-resolution spectra enable a precise eclipsing-binary fit, despite the single eclipse. Wavelength-dependent flux ratios across the orders of our SALT-HRS spectra, combined with the unresolved SED, are fit with a two-component model to jointly constrain the surface-brightness and radius ratios for the system. This fit is used to construct a joint surface-brightness-ratio-radius-ratio prior using a Gaussian kernel-density estimate. These parameters would normally be adequately constrained by both a primary and a secondary eclipse. With this approach, we achieve measurement precisions in this singly-eclipsing system that are on par with doubly-eclipsing systems.
4. TOI 450 is a twin system on the pre-MS. From our fiducial fit to the system, both stars are indistinguishable at our precision. We derive masses of 0.177 M⊙ with radii of 0.35 R⊙, placing these stars well above the MS expectation.
5. Including the effect of star spots in our eclipse model results in a 2% reduction in the stellar radii. We include a parameterization of the effect of spots in our eclipse model that functionally acts to scale the eclipse depth. The direction and magnitude of this scaling depend on whether the spot-covering fraction of the eclipsed area is higher or lower than the global value, resulting in a shallower or deeper eclipse, respectively. Degeneracies between star spots and the stellar limb-darkening profile are reduced with multicolor eclipse light curves. We find that the eclipses favor a model in which the grazing eclipse occults a more heavily spotted area than the average, projected value. Without constraining the total spot-covering fraction, this result suggests that spots on the primary star are preferentially at high (absolute) latitudes. The derived radii are ∼2% below the fiducial value, consistent within the (larger) uncertainties of the spotted model. This result is not representative of spotted EBs generally, and is not a solution to the so-called radius inflation problem.
6. Model comparisons. Standard stellar evolution models (BHAC 15, MIST, DSEP, SPOTS (f s = 0)) perform well in describing the properties of TOI 450, assuming an age of 40 Myr. Masses measured from the HR diagram are systematically low but within error for our empirically derived luminosity and T eff values. Predicted radii at our dynamical mass are consistent within 5%, and predicted T eff values agree within 100 K. The f s = 0.17 SPOTS model performs equally well. Higher-f s SPOTS models, the DSEP Magnetic models, and especially the PARSEC v1.2S models predict older ages, higher masses, and cooler temperatures than we observe, and are generally disfavored by our measurements. For this stellar mass and age, we find the MIST and SPOTS f s = 0.17 models provide the most consistent results across the tests we perform. We note that this result is only valid for this mass and age and is not necessarily expected to extend to other mass and age regimes, or to agreement in the CMD.
In this study we lay out a flexible framework for including the effect of spots when modeling EB eclipse light curves. The method benefits significantly from multicolor eclipse light curves that help to break degeneracies with limb-darkening. Our approach is complementary to others addressing the effect of spots (e.g., Windmiller et al. 2010; eb, Irwin et al. 2011; starry, Luger et al. 2019) but does not require long-baseline light curves or the assumption that the detailed spot pattern is unchanging. By allowing for various spot orientations in this modeling, we probe the potential for systematic offsets in derived radii and produce more conservative formal uncertainties that should ease the tension that exists between different groups modeling the same systems. Our analysis suggests spots introduce a ∼2% precision floor in derived radii when multicolor eclipse light curves are available, a floor that is likely higher when fitting spotted systems with a single band. We hope this approach will provide more robust empirical measurements to test models, but ultimately, a larger population of benchmark EBs across age and mass is required to identify specific shortcomings and improve the next generation of stellar evolution models. | 2022-10-21T01:16:01.005Z | 2022-10-19T00:00:00.000 | {
"year": 2022,
"sha1": "27cd8431d154bbe60dfb715e0af7e9bcd87e729f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "27cd8431d154bbe60dfb715e0af7e9bcd87e729f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
252546191 | pes2o/s2orc | v3-fos-license | Anti-SARS-CoV-2 immunoadhesin remains effective against Omicron and other emerging variants of concern
Summary Blocking the interaction of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) with its angiotensin-converting enzyme 2 (ACE2) receptor was proved to be an effective therapeutic option. Various protein binders as well as monoclonal antibodies that effectively target the receptor-binding domain (RBD) of SARS-CoV-2 to prevent interaction with ACE2 were developed. The emergence of SARS-CoV-2 variants that accumulate alterations in the RBD can severely affect the efficacy of such immunotherapeutic agents, as is indeed the case with Omicron that resists many of the previously isolated monoclonal antibodies. Here, we evaluate an ACE2-based immunoadhesin that we have developed early in the pandemic against some of the recent variants of concern (VoCs), including the Delta and the Omicron variants. We show that our ACE2-immunoadhesin remains effective in neutralizing these variants, suggesting that immunoadhesin-based immunotherapy is less prone to escape by the virus and has a potential to remain effective against future VoCs.
INTRODUCTION
Coronavirus disease 2019 (COVID-19), caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), is an ongoing, devastating pandemic that has led to a substantial global death toll and unprecedented economic loss. Since its emergence, SARS-CoV-2 has accumulated changes that lead to the appearance of different variants (Harvey et al., 2021). Currently, the B.1.1.529 (Omicron) variant, as well as its BA.4 and BA.5 sub-variants, are rapidly spreading worldwide, causing significant morbidity and concern. Anti-SARS-CoV-2 vaccines (Baden et al., 2021; Polack et al., 2020; Voysey et al., 2021) revolutionized the way we combat the pandemic, but many people, including both unvaccinated individuals and vaccinated people who lose protection over time, still contract the virus and may develop a serious, life-threatening disease. This problem seems to be especially pronounced with the Omicron-related variants. Therefore, there is still a pressing need for a diversified set of good therapeutic options to alleviate the severity of the disease in hospitalized patients and to reduce the likelihood that patients in high-risk groups develop a serious illness.
Immunoadhesins constitute another class of immunotherapeutic agents. These are antibody-like molecules that consist of an engineered binding domain fused to the Fc portion of an antibody (Capon et al., 1989). The viral cellular receptor can serve as the binding domain for constructing such immunoadhesins. Due to natural adaptation, however, zoonotic viruses may bind their animal-derived ortholog cellular receptors with higher affinities than the human cell-surface receptors (Shimon et al., 2017). Thus, immunoadhesins that make use of host-ortholog receptors can provide superior antiviral therapeutics. We recently demonstrated this approach by constructing Arenacept, a powerful immunoadhesin that targets viruses from the Arenaviridae family (Cohen-Dvashi et al., 2020a). Early in the pandemic, we (Cohen-Dvashi et al., 2020b) and others (Chan et al., 2020; Glasgow et al., 2020; Lei et al., 2020; Monteil et al., 2020; Mou et al., 2021; Tada et al., 2020) used angiotensin-converting enzyme 2 (ACE2), the cellular receptor of SARS-CoV-2 (Walls et al., 2020; Yan et al., 2020), to construct anti-SARS-CoV-2 immunoadhesins that neutralize the virus and mediate Fc-effector functions (Chen et al., 2021b). Here, we describe the construction of this engineered ACE2 immunoadhesin and show that it retains efficacy toward Omicron as well as other VoCs.
RESULTS
The binding of SARS-CoV-2 to its ACE2 receptor is mediated by the receptor-binding domain (RBD), which is part of its spike complex (Lan et al., 2020; Shang et al., 2020; Walls et al., 2020; Wrapp et al., 2020). ACE2 has a long helical segment at its N-terminus, which forms most of the RBD-recognition site on ACE2 (Figure 1A). A multiple sequence alignment of over 200 ACE2 sequences derived from mammals indicates that many of the ACE2 positions that comprise the SARS-CoV-2 recognition site are not conserved (Figure 1B). This notion indicates an enormous putative sequence space that ACE2 can adopt.

(Figure 1 caption: (A) The side chains of residues that make the recognition site for SARS-CoV-2 are shown as sticks. The N-terminal helix of ACE2 that makes the central part of the binding site is highlighted in orange. (B) The sequence diversity of the first N-terminal helix of ACE2 in mammals is presented using a WebLogo display (Crooks et al., 2004). The abundance of the amino acid types in each position is represented by the height of their single-letter code. The residues that interact with the RBD of SARS-CoV-2 are indicated by red arrows. (C) Interface properties of the various RBD/ACE2-ortholog models. Each dot represents a single model. From left to right, the five panels show the calculated total Rosetta energy (in Rosetta energy units), the binding energy (ΔΔG for binding), the buried surface area, the packing statistics, and the shape complementarity of the interface. All panels are arranged such that values at the top represent better results. The RBD/human-ACE2 model is indicated with a green dot. The modified ACE2 is indicated with a red dot.)
To identify advantageous alterations of ACE2 that may enhance the binding to SARS-CoV-2, we selected 68 orthologous ACE2 genes with sequence identity to the human-ACE2 greater than 80% (Table S1). We used Rosetta atomistic modeling calculations (Methods S1) to assess the stability, binding energy (ΔΔG bind), interface packing, and shape complementarity for the RBD (starting from PDB entry 6VW1; Figure 1C) (Shang et al., 2020). The computed binding energies correlate (Figure S1) with experimentally measured binding affinities. We visually inspected the top 20 models according to ΔΔG bind and identified mutations that the calculations indicated would improve contacts with the SARS-CoV-2 RBD relative to human-ACE2. Due to the high sequence diversity in ACE2, many design options were available. We rejected mutations to Trp, due to their tendency to form undesired promiscuous interactions, and furthermore consulted a deep mutational scanning dataset on ACE2 mutations and their impact on binding to the RBD (Procko, 2020). The vast majority of the suggested mutations were enriched in this mutational scanning, but a few potential mutations were not highly enriched and hence we eliminated them.
Three of the mutations that we decided to incorporate are located at the first N-terminal helix of human-ACE2: a T27L mutation that improves packing with hydrophobic residues of the SARS-CoV-2 RBD (Figure 2A), a D30E mutation (Figure 2B), and a Q42R mutation that may form a salt bridge with Asp38 of ACE2 and stabilize it in a configuration that favors the formation of a hydrogen bond with Tyr449 of the SARS-CoV-2 RBD (Figure 2C). Alternatively, the new arginine may assume a different rotamer that makes favorable electrostatic interactions with the main-chain carboxylic oxygen of Gly447 of the SARS-CoV-2 RBD (Figure 2C). Besides these three mutations at the N-terminal helix of ACE2, we identified two additional sites in the surrounding regions of ACE2. At the first site, we identified a putative change of Glu75 and Leu79 to arginine and tyrosine, respectively, that may interact favorably with Phe486 of the SARS-CoV-2 RBD (Figure 2D). At the second site outside the first helix of ACE2, N330F may improve packing against the aliphatic portion of Thr500 of the SARS-CoV-2 RBD (Figure 2E). We used Rosetta to model the combination of these six mutations and their impact on binding to the SARS-CoV-2 RBD. Our design showed a remarkable improvement in ΔΔG of binding as well as in the buried surface area (Figure 1C).
We decided to incorporate additional modifications at other sites, on top of modifying ACE2 residues that directly interact with the SARS-CoV-2 RBD. Human-ACE2 has a putative glycosylation site at Asn90 that was shown to bear a glycan in the SARS-CoV-2 RBD/human-ACE2 cryo-EM structure (Yan et al., 2020). This N-linked glycan projects toward the SARS-CoV-2 RBD and presumably imposes steric constraints on the binding of the SARS-CoV-2 RBD. The aforementioned deep mutational scanning dataset (Procko, 2020) is highly enriched with mutations in this N-linked glycosylation site, further supporting this notion. To eliminate this glycosylation site, we mutated Thr92 of the N-X-T glycosylation motif to an arginine that can make polar interactions with a nearby glutamine (Figure 2F). Besides serving as a cellular receptor for SARS-CoV-2, ACE2 is an enzyme with a critical biological function in regulating blood pressure by hydrolyzing angiotensin II (Keidar et al., 2007). While the enzymatic activity of ACE2 may protect from lung and cardiovascular damage (Imai et al., 2005; Kuba et al., 2006; Zoufaly et al., 2020), this activity may complicate the use of such an ACE2-based reagent to fight viremia, due to potentially harmful effects of over-conversion of angiotensin when high doses are administered. Hence, we decided to eliminate the catalytic activity of ACE2 by mutating its key catalytic residue, Glu375, to leucine. Overall, we designed a variant that has a unique set of eight mutations (Table S2).
To test our design, we produced two chimeric proteins that included amino acids 19-615 of the human-ACE2 ectodomain (omitting the original signal peptide) fused to the Fc portion of human IgG1, with or without the eight above-mentioned mutations (i.e., T27L, D30E, Q42R, E75R, L79Y, N330F, T92R and E375L). Both the WT construct (ACE2-Fc) and our designed ACE2 construct (ACE2 mod -Fc) readily expressed as secreted proteins in suspension HEK293F cells and were easily purified to near homogeneity using protein-A affinity chromatography (Figure 3A). Testing the enzymatic activity of both ACE2-Fc and ACE2 mod -Fc verified that the latter is indeed catalytically dead (Figure S2). We then immobilized the two immunoadhesins on a surface plasmon resonance sensor chip and used purified SARS-CoV-2 RBD as an analyte to determine their binding affinities; note that this configuration does not allow avidity. A simple 1:1 binding model gave a good fit to the binding data of ACE2 mod -Fc with the SARS-CoV-2 RBD, but the binding of ACE2-Fc to the SARS-CoV-2 RBD could not be fitted using this model, and we therefore used a more complex heterogeneous-ligand model that assumes some heterogeneity of the ACE2-Fc (Figure 3B). Such heterogeneity could presumably originate from partial glycosylation at Asn90 of ACE2. Remarkably, the binding affinity of ACE2 mod -Fc for the SARS-CoV-2 RBD is more than two orders of magnitude stronger than that of ACE2-Fc (Figure 3B). Moreover, although ACE2 mod was designed to bind SARS-CoV-2, we further tested its ability to bind the original SARS virus and found that it not only binds the SARS-RBD but also binds it with significantly higher affinity than ACE2-Fc (Figure 3C).
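For readers unfamiliar with the fitting models named above, a hedged sketch of the simple 1:1 (Langmuir) interaction, which integrates dR/dt = ka·C·(Rmax − R) − kd·R for the association phase; all rate constants below are placeholders, not the measured affinities:

```python
from scipy.integrate import solve_ivp

def binding_1to1(t, r, ka, kd, conc, rmax):
    """1:1 Langmuir model: dR/dt = ka*C*(Rmax - R) - kd*R."""
    return ka * conc * (rmax - r) - kd * r

ka, kd = 1.0e5, 1.0e-3      # placeholder rates [M^-1 s^-1], [s^-1]
conc, rmax = 50e-9, 100.0   # analyte concentration [M], saturation [RU]

association = solve_ivp(binding_1to1, (0.0, 300.0), [0.0],
                        args=(ka, kd, conc, rmax), dense_output=True)
k_d_equilibrium = kd / ka   # K_D = kd/ka = 10 nM for these placeholders
```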
To test whether the enhanced affinity of ACE2 mod -Fc could translate into improved biological function, we conducted a pseudovirus neutralization assay using the spike complex of the original Wuhan-Hu-1 strain. The neutralization profile of ACE2 mod -Fc is clearly better than that of ACE2-Fc (Figure 4A), with more than a 10-fold improvement in both IC 50 and IC 80 values when comparing the two reagents. An anti-SARS-CoV-2 immunoadhesin that binds to cell-surface-displayed spike complexes might recruit beneficial immune effector functions via its Fc portion. We used flow cytometry to monitor the ability of ACE2-Fc and ACE2 mod -Fc to stain HEK293 cells transiently expressing the SARS-CoV-2 spike complex (Figure 4B). ACE2 mod -Fc has an apparently higher capacity to recognize the spike complex than ACE2-Fc. Achieving improved recognition of the SARS-CoV-2 spike complex prompted us to test the ability of ACE2 mod -Fc to directly neutralize live authentic viruses. For that, we performed a plaque reduction neutralization test in a BSL-3 facility using the Wuhan-Hu-1 SARS-CoV-2 strain (Figure 4C). Both ACE2-Fc and ACE2 mod -Fc displayed better neutralization of the live viruses than in the pseudoviral system (Figures 4A and 4C), and the potency of ACE2 mod -Fc was significantly higher than that of ACE2-Fc, achieving a sterilizing effect well below 1 mg/mL. Hence, we created a superior ACE2-based immunoadhesin with an improved capacity to target SARS-CoV-2.
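As a sketch of how IC 50 and IC 80 values like those above can be extracted from neutralization curves, a Hill-type fit on synthetic data (the paper does not state its curve-fitting procedure, so this specific form is an assumption):

```python
import numpy as np
from scipy.optimize import curve_fit

def hill_neutralization(conc, ic50, h):
    """Fraction neutralized vs. inhibitor concentration (Hill equation)."""
    return conc**h / (ic50**h + conc**h)

conc = np.array([0.01, 0.04, 0.16, 0.63, 2.5, 10.0])   # synthetic series
frac = np.array([0.02, 0.10, 0.30, 0.62, 0.88, 0.97])  # synthetic responses

(ic50, h), _ = curve_fit(hill_neutralization, conc, frac, p0=[0.5, 1.0])
ic80 = ic50 * (0.8 / 0.2) ** (1.0 / h)  # concentration giving 80% neutralization
```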
Since its emergence as a human pathogen, SARS-CoV-2 has been constantly changing by accumulating mutations.
The changes that occur on the spike of the virus, and specifically on its RBD, have the potential to render anti-SARS-CoV-2 immunotherapeutic reagents like ACE2 mod -Fc ineffective. To explore this possibility, we generated pseudoviruses that contain the RBD mutations of the Alpha, Beta, Gamma, Delta, and Omicron VoCs (Table 1). Compared to the original Wuhan-Hu-1 strain, ACE2 mod -Fc effectively neutralizes the Alpha, Beta, Gamma, and Delta VoCs (Figure 5A), which contain up to three alterations in their RBDs (Table 1). Unlike the other VoCs, Omicron substantially differs from Wuhan-Hu-1, having 11 relevant alterations in its RBD (Table 1). Nevertheless, ACE2 mod -Fc neutralizes Omicron as efficiently as it neutralizes the Wuhan-Hu-1 strain (Figure 5B). Thus, the VoCs that have emerged so far do not reduce the neutralization capacity of ACE2 mod -Fc.
Avidity is a critical aspect that contributes to the potency of antibodies and antibody-like molecules (Cohen-Dvashi et al., 2020a; Klein and Bjorkman, 2010). Since the ACE2 receptor has a substantial size (Lan et al., 2020), we were concerned that the IgG1-derived hinge that links ACE2 mod to the Fc portion is not sufficiently long to enable both ACE2 mod arms to bind simultaneously. We therefore extended this hinge by three consecutive Gly-Ser-Gly-Gly repeats (ACE2 mod -GS3 -Fc) and tested the effect of this extension on the capacity to neutralize pseudotyped viruses. Extending the hinge significantly increased the neutralization potency (Figures 6A and 6B). The IC 50 values of ACE2 mod -GS3 -Fc against both the Wuhan-Hu-1 and the Omicron strains are below 0.5 mg/mL (Figures 6A and 6B). Hence, the longer hinge is better suited to allowing the two ACE2 arms of the immunoadhesin to bind simultaneously and achieve avidity.
DISCUSSION
Immunotherapy is an effective therapeutic tool for mitigating disease severity and reducing the overall risk from SARS-CoV-2 infections in target populations. Although monoclonal antibodies reach exceptional potencies, they readily lose their efficacy due to the changes that SARS-CoV-2 accumulates. Ideally, we want to advance into clinical development reagents that will stay effective for extended periods of time, against currently circulating VoCs as well as ones that will emerge in the future. As we demonstrate here, it is possible to construct a potent ACE2-based immunoadhesin that remains efficacious against emerging VoCs. Generally, while any modification of ACE2 opens a door for potential escape by the virus, the probability of such an event is lower than the probability of escape from monoclonal antibodies. This is because ACE2 mod -Fc interacts solely with residues that are part of the ACE2-binding site, whereas monoclonal antibodies inevitably make contacts with residues outside this site. While the virus can accumulate changes in all of its residues equally, changes in residues that make up the ACE2-binding site, which might allow escape from ACE2 mod -Fc, have a significantly higher probability of bearing a fitness cost for the virus than changes in residues outside the binding site. Therefore, the probability of the virus escaping monoclonal antibodies is higher than for ACE2-based immunoadhesins.
Along these lines, when this manuscript was submitted for publication, Omicron BA.1 was the most prevalent VoC. While in revision, the BA.4 and BA.5 sub-variants became the dominant strains (Hachmann et al., 2022). These sub-variants have several changes in their RBDs compared to Omicron BA.1, including L452R and F486V mutations and a reversion of R493 (a BA.1 mutation) back to glutamine (the Wuhan-Hu-1 sequence). These changes reduce the serum neutralization capacities of both vaccinated individuals and people who were infected with the Omicron BA.1 strain (Hachmann et al., 2022). While we do not have experimental data, structural analysis suggests that these changes will not affect the neutralization capacity of ACE2 mod -Fc. Specifically, L452R occurs at a remote site that is not part of the ACE2-binding interface and is thus unlikely to affect ACE2 binding. The reversion of Arg493 back to glutamine, as in the Wuhan-Hu-1 sequence, makes it an ACE2 mod -Fc-compatible residue (Figure 4A). The F486V mutation replaces a large hydrophobic amino acid with a smaller hydrophobic one (see Figure 2D), which might slightly reduce the overall affinity of the virus for ACE2 but should not interfere with the binding of ACE2 mod -Fc. Based on this, we predict that ACE2 mod -Fc will remain effective against the currently circulating VoCs.
Several ACE2-based immunoadhesins (Lei et al., 2020; Li et al., 2020; Tada et al., 2020), as well as phylogenetically-, library-, or structure-guided enhanced ACE2 variants (Chan et al., 2020; Glasgow et al., 2020; Mou et al., 2021), were previously described. Compared with these reported immunoadhesins, the ACE2 mod-GS3 -Fc that we present here is a highly potent reagent with a different set of mutations (Table S2). Hence, the other reported immunoadhesins have orthogonal designs, which can allow us to diversify the immunotherapeutic toolkit against SARS-CoV-2. Such diversification could provide a safety net against losing immunotherapeutic options altogether in the possible scenario of the emergence of a resistant strain in the future.
Limitations of the study
The modified ACE2-based immunoadhesin that we present here was evaluated in vitro. Many promising reagents ultimately fail to demonstrate sufficient efficacy in vivo for various, often unexpected, reasons. Additional experiments will be needed before a reagent like ACE2 mod-GS3 -Fc can be considered for clinical use.
STAR+METHODS
Detailed methods are provided in the online version of this paper and include the following:
ACKNOWLEDGMENTS
We are grateful to Julia Adler and Yosef Shaul for providing plasmids for the lentivirus system.
DECLARATION OF INTERESTS
The Weizmann Institute has filed for a patent for the ACE2 mod -Fc immunoadhesin.
Lead contact
Further information and requests for resources and reagents should be directed to and will be fulfilled by the lead contact, Ron Diskin (ron.diskin@weizmann.ac.il).
Materials availability
All unique plasmids generated in this study are available from the lead contact with a completed Materials Transfer Agreement (MTA).
Data and code availability
The Rosetta script that was used in this study is included in the supplemental information.
A list of the ACE2 ortholog accession codes that were used for Rosetta modeling is available as Table S1.
Any additional information required to reanalyze the data reported in this paper is available from the lead contact upon request.
Cell culture
Adherent cell lines: HEK293T and Vero E6 cells are from the global bioresource center ATCC. hACE2-FLAG-overexpressing HEK293T cells were purchased from GenScript (Cat. No. SC1394). HEK293T cells were cultured in high-glucose Dulbecco's modified Eagle medium (DMEM) supplemented with 10% fetal bovine serum (FBS; Gibco), MEM non-essential amino acids, 2 mM L-glutamine, 100 U/mL penicillin sodium and 100 μg/mL streptomycin (Biological Industries, Israel). Vero E6 cells were cultured in MEM with the above-mentioned supplements plus 12.5 units/mL nystatin (Biological Industries, Israel). hACE2-HEK293T cells were cultured in DMEM supplemented with 10% FCS and 1 μg/mL puromycin (Sigma-Aldrich). All adherent cells were grown at 37 °C under an atmosphere of 5% CO2. Suspension cell line: FreeStyle 293-F cells were purchased from ThermoFisher and were cultured in FreeStyle media (Gibco) under agitation at 37 °C and an atmosphere of 8% CO2.
Atomistic modeling
Orthologous sequences of ACE2 were collected by using a protein BLAST (Altschul et al., 1990) search of the human-ACE2 sequence at GenBank and filtering the results to mammalian origin and to sequences with greater than 80% identity to human-ACE2. Sequences were aligned using MUSCLE (Edgar, 2004).

(Key resources table fragment: … Krammer F. Lab; Full-length human ACE2 pCDNA3.1, Hyeryun Choe Lab, Addgene plasmid #1786; pLVX-EF1alpha-SARS1-Spike-…)
Lentiviral particle production and neutralization
Lentiviruses expressing S-Covid19 spikes or mutated spikes were produced by transfecting HEK293T cells with the Luciferase-pLenti6, D19 S_covid-pCMV3 and DR89 J vectors at a 1:1:1 ratio, using Lipofectamine 2000 (Thermo Fisher). Media containing lentiviruses was collected at 48 h post-transfection, centrifuged at 600 × g for 5 min to clear it of cells, and aliquots were frozen at −80 °C.
For neutralization assays, hACE2-overexpressing HEK293T cells (GenScript) were seeded on poly-L-lysine pre-coated white, chimney 96-well plates (Greiner Bio-One). Cells were left to adhere for 4 h, followed by the addition of S-covid19 lentivirions that had been pre-incubated with a 4-fold descending concentration series of either ACE2-Fc or ACE2 mod -Fc. Luminescence from luciferase activity was measured 48 h post-infection using a TECAN Infinite M200 Pro plate reader after applying Bright-Glo reagent (Promega). | 2022-09-28T13:15:50.708Z | 2022-09-01T00:00:00.000 | {
"year": 2022,
"sha1": "dbb61ea7fdf8d5b26ac46fa2fa37395b7b471dcc",
"oa_license": "CCBYNCND",
"oa_url": "http://www.cell.com/article/S2589004222014651/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "828b1a19af44ad09f725a3c2a053debad6a4bf08",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
21383261 | pes2o/s2orc | v3-fos-license | Species and Genotype Effects of Bioenergy Crops on Root Production, Carbon and Nitrogen in Temperate Agricultural Soil
Bioenergy crops have a secondary benefit if they increase soil organic C (SOC) stocks through capture and allocation below-ground. The effects of four genotypes of short-rotation coppice willow (Salix spp., ‘Terra Nova’ and ‘Tora’) and Miscanthus (M. × giganteus (‘Giganteus’) and M. sinensis (‘Sinensis’)) on roots, SOC and total nitrogen (TN) were quantified to test whether below-ground biomass controls SOC and TN dynamics. Soil cores were collected under (‘plant’) and between plants (‘gap’) in a field experiment on a temperate agricultural silty clay loam after 4 and 6 years’ management. Root density was greater under Miscanthus for plant (up to 15.5 kg m−3) compared with gap (up to 2.7 kg m−3), whereas willow had lower densities (up to 3.7 kg m−3). Over 2 years, SOC increased below 0.2 m depth from 7.1 to 8.5 kg m−3 and was greatest under Sinensis at 0–0.1 m depth (24.8 kg m−3). Miscanthus-derived SOC, based on stable isotope analysis, was greater under plant (11.6 kg m−3) than gap (3.1 kg m−3) for Sinensis. Estimated SOC stock change rates over the 2-year period to 1-m depth were 6.4 for Terra Nova, 7.4 for Tora, 3.1 for Giganteus and 8.8 Mg ha−1 year−1 for Sinensis. Rates of change of TN were much less. That SOC matched root mass down the profile, particularly under Miscanthus, indicated that perennial root systems are an important contributor. Willow and Miscanthus offer both biomass production and C sequestration when planted in arable soil.
Introduction
There has been an increase in the use of dedicated biomass crops to exploit photosynthesis for bioenergy production over recent decades to address two pressing global concerns: C emission reduction and energy security [1,2]. Two dedicated low-input bioenergy crops frequently planted in temperate regions, such as the UK, are willow (Salix spp.) in short rotation coppice (SRC) systems and species of the perennial grass genus Miscanthus [3]. Commercial willow plantations typically produce 9-12 Mg ha −1 year −1 of biomass in 2-4-year SRC harvest rotations [1,4-6].
Miscanthus is an annually-harvested perennial rhizomatous grass originating from Asia which has C 4 physiology and can produce biomass yields of 12-15 Mg ha −1 year −1 in the UK [1,2,6,7]. In England in 2015, there were 2885 ha under SRC (willow and Populus), yielding 17-35 Gg of dry biomass, of which 15 Gg was used in power stations, and 6905 ha under Miscanthus, yielding 69-104 Gg of dry biomass, of which 33 Gg was used in power stations [6].
In addition to production of biomass, perennial bioenergy crops may have a secondary benefit if they increase stocks of soil organic C (SOC), and total N (TN) by association, through capture of atmospheric C and allocation to below-ground plant biomass [8]. Realising the potential for C sequestration over the lifetime of the stand, which is typically more than 15 years and up to 30 years for willow [9,10], is likely, as perennial bioenergy crops require no cultivation during their lifetime aside from at planting, and hence disturbance to the soil and the root system is minimised. The roots may persist longer than under annual crops, which is important knowing that SOC is primarily derived from roots [8,11]. The roots do need to turn over, however, to be incorporated into SOC, operationally defined as the < 2 mm soil size fraction, in order to contribute to sequestration. Perennial bioenergy crops allocate nutrients and C below ground to the root system during senescence in readiness for the following growing season [1,12], and some crops extend their root systems deep into the subsoil, especially Miscanthus [8,13,14]. In their comprehensive review, Agostini et al. [15] found that root C inputs were greater for willow (1.0 Mg ha −1 year −1 ) than Miscanthus (0.5 Mg ha −1 year −1 ), but that the latter had a greater mean residence time (1.8 years) than the former (1.3 years), consistent with the finer nature of willow roots and their faster turnover. The importance of root inputs to SOC is likely dependent on their physical (diameter, association with soil structure) and chemical (greater or lesser 'labile' compounds) characteristics.
The potential for C sequestration is site-specific in part, being dependent on local environmental and management factors, including previous land use which largely controls initial SOC and TN contents [2,8,[16][17][18]. Whilst it may be desirable that bioenergy crops are concentrated on less-productive 'marginal' land [8,16], land used for food production has also been planted with bioenergy crops, causing a conflict between food and bioenergy production on higher-quality soils [2]. This is important because whether such crops are established on degraded or fertile soils may control the potential for C sequestration [8]. Also important is the age of the stand, as others have reported an establishment phase as crop yields increase [19] where resident SOC turns over before full replacement by new SOC deriving from the bioenergy crop [16], particularly where the former land use was under perennial grass [18,20,21]. Therefore, a full assessment of the potential for bioenergy crops to sequester significant amounts of C is still far from certain [8,15,22,23] and is reliant on monitoring the dynamics of SOC in well-designed field experiments. It has been estimated that sequestration under bioenergy crops needs to be at least 0.25 Mg SOC ha −1 year −1 for the system to be truly C-neutral [15,24].
We sought to assess the effect of different species and genotypes of bioenergy crops on root production, SOC and TN in a temperate agricultural soil. With detailed prior knowledge of species and genotypic differences in the above-ground traits of such bioenergy crops, the main hypothesis was that differences would also be reflected in the below-ground biomass and that these, in turn, affect SOC and TN dynamics. An existing field experiment in the UK with established stands of different willow varieties (genotypes) and Miscanthus genotypes, the principal crops grown solely for bioenergy production in the UK, was used to test the hypothesis. Below-ground biomass, SOC and TN were quantified and compared in the underlying soil on two successive occasions, when the bioenergy crop stand was 4 and 6 years old, to assess stocks and changes. In addition to bulk SOC, natural-abundance 13C isotope labelling of Miscanthus-derived SOC was used to estimate new C input during the establishment phase (inside 10 years).
Field Experiments and Treatments under Study
We focused on a field experiment established in 2009 at Rothamsted Research (Hertfordshire, UK) as part of the UK Biotechnology and Biological Sciences Research Council, Sustainable Bioenergy Centre [5]. Four genotypes each of willow and Miscanthus were planted in plots in a randomised block design with four replicates on a field previously under annual arable crops for at least 50 years [25]. Planting followed conventional commercial best practice [9,10]: willows were planted as cuttings using the typical twin-row design at a planting density of 16,667 plants ha −1 and Miscanthus was planted in single rows at a planting density of 20,000 plants ha −1 . Willows were cut back at the end of the establishment year in early 2010 and then subjected to a 2-year SRC regime thereafter, whereas Miscanthus was harvested annually. Harvesting and coppicing were done in January. Miscanthus plots received 100 kg N ha −1 every year in May and willow plots received 60 kg N ha −1 after each 2-year harvest (May 2010, 2012 and 2014), both as Nitram® (NH 4 NO 3 ; 34.5% N), following typical guidelines during the establishment phase [9,10]. Canopy traits were used to select two of the four willow varieties for investigation. 'Terra Nova' has short, ovate leaves, with a plant leaf area of 1.4 m 2 per plant [5] and an average leaf area index of 1.93 [4], whereas 'Tora' has long, lanceolate leaves with a plant leaf area of 0.8 m 2 per plant [5] and an average leaf area index of 1.26 [4]. Similarly, we chose two standard Miscanthus genotypes: M. × giganteus Greef et Deu ex Hodkinson & Renvoize is a nontufted tall-growing genotype, and M. sinensis Andersson is a tufted genotype with a shorter stature and which produces many more, thinner stems. Details of the site, soils and experiment [26][27][28][29] are given in Table 1, and full details of characteristics of the genotypes are given elsewhere (Supplementary Material; [5,23,30]). For brevity, the genotypes are termed Terra Nova, Tora, Giganteus and Sinensis hereafter.
Sample Collection and Preparation
We took intact soil cores (0.07 m diameter × 1 m length) using a steel corer containing an inner sleeve, which was driven into the soil with a hydraulic hammer and extracted using a tripod ratchet. As the crops were planted in rows (Table 1), two cores were collected from each plot of the four genotypes: one in the twin row as close to the plant as possible (willow) or directly over the plant (Miscanthus), and another in the gap equidistant between rows (or twin rows for willow) of plants. This sampling regime was adopted to capture the spatial pattern associated with plants located at regular distances; the two positions are termed 'plant' and 'gap' locations hereafter. The cores were collected in early summer when the stand age was 4 (June 2013) and 6 years (June 2015). In all, therefore, we collected 64 cores (4 genotypes × 2 locations (plant and gap) × 4 replicate plots (blocks) × 2 years). All cores were wrapped in polythene to keep them intact and stored at −18 °C immediately after collection. In the laboratory, each core was brought to room temperature and divided into five depth-interval samples: 0-0.1, 0.1-0.2, 0.2-0.3, 0.3-0.5 and 0.5-1.0 m. Each depth-interval sample was then further divided in half longitudinally and weighed fresh. For one of the half-interval samples, we washed the soil away gently to leave the > 2 mm fraction (stones, litter and rhizome) for mass (105 °C (stone) or 80 °C (plant) for 48 h) and volume adjustment, and the washed roots, which were stored in water at 4 °C prior to analysis. The other half-interval sample was air-dried and crumbled to yield the same > 2 mm fraction for mass and volume adjustment, and the soil fraction (< 2 mm). A soil subsample was milled to < 350 μm with a Retsch PM 400 planetary ball mill (Retsch GmbH, Haan, Germany) for analysis, and a separate subsample was oven-dried at 105 °C for 48 h to calculate water contents (air-dried and field-moist) and dry mass. We estimated the linear compaction introduced to the soil core during sampling by comparing the length of the soil core to the depth of the hole from which it had been taken, to fully 'reconstruct' the core in its original compaction-free field state. We encountered negligible compaction (mean = 1.1%) as the cores were taken when the soil was not in a plastic state.
To provide a baseline from which to compare, we made use of an existing data set collected before the field experiment was established. In July 2009, 16 soil samples were collected to a depth of 0.9 m using a 0.02-m-diameter hydraulic Norsk Hydro Soil Sampler (Norsk Hydro ASA, Oslo, Norway) from locations covering the area of the field experiment. The samples were divided into three depths (0-0.23, 0.23-0.60 and 0.60-0.90 m) to measure various soil properties, including SOC and TN concentration (see method below). In October 2009, small intact soil cores (0.077 m diameter × 0.05 m length) were collected from ten depths in the top 1 m at four random locations in the field to measure soil dry bulk density. These data were adjusted to the same three depths as above pro rata. The baseline data set is given in Table 1.
Root Analysis
We subjected washed roots to image analysis using the WinRHIZO 2008a program (Regent Instruments Inc., Québec, Canada) connected to an EPSON Expression 1600 3.4 scanner (Epson America Inc., Long Beach, CA, USA). Roots were spread onto an A4 scanner bed, covered with a layer of water and scanned. The resulting binary image was analysed to determine the mean root diameter and the root length density (RLD; length per volume of soil). The roots were dried at 80°C for 48 h to calculate their gravimetric concentration (per mass of soil) and their volumetric density (per volume of soil) from the soil bulk density:

root density = root concentration × ρ_b (Eq. 1)

where root density is in kg m⁻³, root concentration is in g kg⁻¹ and ρ_b is the soil dry bulk density (< 2 mm soil mass, total volume; Mg m⁻³).
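Because the units already cancel (g kg⁻¹ × Mg m⁻³ = kg m⁻³), Eq. 1 needs no numeric conversion factor. A minimal sketch with illustrative values, not study data:

def volumetric_density(concentration_g_per_kg, bulk_density_mg_per_m3):
    # Eq. 1: gravimetric concentration (g per kg of < 2 mm soil) times dry
    # bulk density (Mg m-3) gives a volumetric density in kg m-3, because
    # (1e-3 kg/kg) * (1e3 kg/m3) = kg/m3.
    return concentration_g_per_kg * bulk_density_mg_per_m3

# Illustrative values: 2.5 g of roots per kg of soil at a dry bulk density
# of 1.4 Mg m-3 corresponds to 3.5 kg of roots per m3 of soil.
print(round(volumetric_density(2.5, 1.4), 2))  # -> 3.5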
Soil Analysis
Soil was analysed for total C and TN concentration using a Leco TruMac Combustion Analyser (LECO Corp., St Joseph, MI, USA). Inorganic C was analysed using a Skalar Primacs AIC Analyser (Skalar Analytical BV, Breda, the Netherlands) and the difference from total C yielded the SOC concentration. Inorganic C was only a very minor component of total C (mean 0.13 g kg⁻¹, n = 320) at the experimental site. Volumetric densities of SOC and TN were then calculated using the same approach as above (Eq. 1) with the same units (replacing 'root' with 'SOC' or 'TN'). Following others [18,31], we used an 'equivalent soil mass' approach to adjust the measured SOC and TN density of the initial baseline data to that based on the bulk density measured after 4 and 6 years, which did not differ significantly. For only the soils collected from Miscanthus plots, the ¹³C/¹²C stable isotope ratios were quantified on prepared carbonate-free soil [23] with an IsoPrime 100 Stable Isotope Ratio Mass Spectrometer (Isoprime Ltd., Cheadle Hulme, UK) coupled with a Vario MICRO Cube Elemental Analyser (Elementar Analysensysteme GmbH, Langenselbold, Germany). By convention, the ¹³C/¹²C ratio was expressed as a δ value (‰) relative to the international Vienna Pee Dee Belemnite (VPDB) standard. As Miscanthus has C₄ physiology, we used the bulk soil δ¹³C value to estimate the SOC density derived from Miscanthus compared to older SOC deriving from the previous land use (all arable crops with C₃ physiology):

SOC density-M = SOC density × (δ¹³C_soil − δ¹³C_C3) / (δ¹³C_M − δ¹³C_C3) (Eq. 2)

where SOC density-M indicates the Miscanthus-derived SOC density (kg m⁻³), δ¹³C_soil is the measured δ¹³C value of the bulk soil, δ¹³C_M is the reference δ¹³C value representative of Miscanthus shoot and root plant material (−11.70‰), and δ¹³C_C3 is the reference δ¹³C value representative of the soil at the study site under the previous C₃ (cereal crops) plants (−28.16‰) [23].
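Eq. 2 is the standard two-pool isotope mixing model. A short sketch using the two end-member values quoted above; the bulk-soil δ¹³C value below is illustrative, not a measurement from the study:

def miscanthus_fraction(d13c_soil, d13c_m=-11.70, d13c_c3=-28.16):
    # Two-pool mixing model (Eq. 2): fraction of SOC derived from the C4
    # Miscanthus crop, given a bulk-soil d13C value (permil, VPDB) and the
    # two reference end-members from [23].
    return (d13c_soil - d13c_c3) / (d13c_m - d13c_c3)

def miscanthus_soc_density(total_soc_kg_m3, d13c_soil):
    # Miscanthus-derived SOC density = total SOC density times that fraction.
    return total_soc_kg_m3 * miscanthus_fraction(d13c_soil)

# Illustrative value: a soil at -26.5 permil holds roughly 10% C4-derived C.
print(round(miscanthus_fraction(-26.5), 3))           # -> 0.101
print(round(miscanthus_soc_density(12.0, -26.5), 2))  # -> 1.21 (kg m-3)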
We estimated area-based SOC and TN stocks at the larger scale (field or plantation) by a three-stage calculation. Firstly, we calculated a revised density using Eq. 1, but with a mean bulk density for each genotype × location × depth treatment (more appropriate when upscaling than individual replicate bulk densities), and then multiplied this by the depth interval of interest (m) to give an area-based stock (kg m⁻²). Secondly, we combined intervals to make larger 0-0.3 m ('topsoil') and 0.3-1.0 m ('subsoil') depth stocks. Finally, we estimated the proportion of land most influenced by the plant, for which the plant stock is most representative, with the remainder represented by the gap stock. For Terra Nova, Tora and Giganteus, stocks were similar for both plant and gap (see later), but nevertheless we used the minimum distance in the field between adjacent plants (0.50 m for willow in a row and 0.65 m for Giganteus between rows; Table 1) and assumed that each plant exerted a circular influence on the soil from its centre to the distance halfway to its nearest neighbour. Thus, we assumed the circular plant radius was 0.25 m for both willows and 0.33 m for Giganteus. For Sinensis, which had a well-defined tuft, the tuft circumference was measured on four representative plants on all four plots in August 2014 and October 2015 (representative of the stand at 4 and 6 years), and the mean radii were 0.17 m in 2014 and 0.18 m in 2015. Using these radii with the planting density of each genotype (Table 1), we divided the total area into plant and gap and adjusted the stock estimates pro rata. These stocks, and the estimated initial baseline stock, were analysed together with the rate of change between 0, 4 and 6 years, expressed in conventional units (Mg ha⁻¹).
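The final pro-rata step amounts to weighting the plant and gap stocks by the areal fraction each represents. A sketch of that calculation using the radii and planting densities given above (function and variable names are ours):

import math

def field_scale_stock(plant_stock, gap_stock, radius_m, plants_per_ha):
    # Weight the 'plant' and 'gap' stocks by the areal fraction each
    # represents, assuming each plant influences a circle of the given
    # radius; 10,000 m2 per ha converts the planting density.
    plant_fraction = min(math.pi * radius_m ** 2 * plants_per_ha / 10_000, 1.0)
    return plant_fraction * plant_stock + (1.0 - plant_fraction) * gap_stock

# Areal plant fractions implied by the radii and densities given above.
for name, radius, density in [("willow", 0.25, 16_667),
                              ("Giganteus", 0.33, 20_000),
                              ("Sinensis, 4 years", 0.17, 20_000)]:
    fraction = math.pi * radius ** 2 * density / 10_000
    print(f"{name}: plant fraction = {fraction:.2f}")
# -> willow 0.33, Giganteus 0.68, Sinensis 0.18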
Statistical Analysis
All statistical analysis was performed using the GenStat (18th edition) program (VSN International Ltd., Hemel Hempstead, UK). We first transformed the soil and root variates by log₁₀ to normalise the distribution of the residuals. We then analysed the variates using residual maximum likelihood (REML). The fixed model comprised all cross-products (×) of 'genotype', 'location', 'depth' and 'age', where 'genotype', 'depth' and 'age' were self-explanatory and 'location' allocated data to either plant or gap; the random model reflected the nesting (/) of depth samples within cores within replicate plots and blocks. We introduced this variance structure for all analyses because, in the strictest sense, there was no independence of samples at different depths from each treatment: a single soil core was taken and subdivided into depth samples. We may expect correlations between pairs of depths for any property to vary rather than be constant, depending on the distance between them. Therefore, we introduced an autoregressive variance structure into our REML models to incorporate this feature. For the SOC and TN stocks and their rate of change, neither data transformation nor the variance structure were required, 'period' (i.e. 0-4 and 4-6 years) replaced 'age' for the rates of change, and 'location' or 'depth' were removed when the data were composited into larger-scale spatial and full 1 m depth scales. We present the statistical analysis of all transformed data (where required) in a summary table but present the original data elsewhere for ease of displaying measured quantities. The Wald statistic produced during REML analysis assesses the contributions of individual terms in the fixed model and corresponds to the treatment sum of squares divided by the stratum mean square.
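For readers who want to reproduce the spirit of this analysis outside GenStat, the sketch below fits a simplified linear mixed model in Python with a random intercept per block; it deliberately omits the autoregressive depth covariance used in the GenStat REML analysis, and the file and column names are hypothetical:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per depth sample; the file and column names are placeholders.
df = pd.read_csv("soil_cores.csv")
df["log_soc"] = np.log10(df["soc_density"])  # log10 transform, as in the text

# Fixed effects: all cross-products of the four factors; random intercept
# for each replicate block. This simplification drops the autoregressive
# covariance across depths within a core described above.
model = smf.mixedlm("log_soc ~ C(genotype) * C(location) * C(depth) * C(age)",
                    data=df, groups=df["block"])
result = model.fit()
print(result.summary())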
Root Density
The full genotype × location × depth × age interaction was significant (P = 0.014) for root density (Table 2; Fig. 1). For Sinensis, root density was significantly greater (P < 0.05) for plant (6.6 ± 1.5 after 4 years and 15.5 ± 3.1 kg m⁻³ after 6 years; mean ± standard error) than for gap, with few exceptions, and the difference was generally greatest in the 0-0.1 m depth. The RLD was significantly greater (P < 0.05) for plant (3.9 ± 0.3 cm cm⁻³) compared to gap (3.1 ± 0.3 cm cm⁻³) averaged over all other factors and decreased significantly with depth (Table 2; Fig. 3). Giganteus had the smallest (0.2-9.9 cm cm⁻³) and Sinensis the greatest RLD (0.4-11.4 cm cm⁻³) at most depths. Only for Sinensis was there a significant increase (P < 0.05) in RLD between 4 and 6 years (2.7 ± 0.4 to 4.9 ± 0.6 cm cm⁻³).
SOC and TN Density
The SOC density increased significantly (P < 0.05) between 4 and 6 years at all depths below 0.2 m (from 3.6 ± 0.1 to 12.2 ± 0.7 kg m⁻³ after 4 years to 4.6 ± 0.1 to 14.3 ± 0.6 kg m⁻³ after 6 years, averaged over all other factors) (Table 2; Fig. 4). Genotype and location effects were not significant. The SOC density after 4 and 6 years was similar to the initial baseline (0 years) in the upper 0.5 m, though it was much greater under Sinensis and Tora after 6 years in the upper 0.2 m, but less than the initial density below 0.5 m (Fig. 4). Miscanthus-derived SOC density was significantly greater (P < 0.05) under Sinensis plant (Fig. 5), and was also greater for plant than gap for Sinensis at most depths (P < 0.05), whereas differences between plant and gap were not significant for Giganteus. Averaged over both locations, Miscanthus-derived SOC density increased significantly (P < 0.05) between 4 and 6 years at all depths under both genotypes (by 0.4-3.2 kg m⁻³), with few exceptions. The TN density after 4 years was significantly greater (P < 0.05) for gap (2.2 ± 0.1 kg m⁻³) than plant (1.7 ± 0.1 kg m⁻³) at 0-0.2 m depths, but significantly less (P < 0.05) for gap (0.6 ± 0.0 kg m⁻³) than plant (0.8 ± 0.1 kg m⁻³) at 0.5-1.0 m depth (Table 2; Fig. 6). Averaged over all other factors, there was a significantly greater (P < 0.05) TN density associated with Sinensis and Terra Nova (1.4 ± 0.1 kg m⁻³) compared to Giganteus (1.2 ± 0.1 kg m⁻³). Compared to the initial baseline, only for Terra Nova and Sinensis was there a greater TN density recorded after 4 years at certain depths. There were no significant changes between 4 and 6 years. The SOC/TN ratio increased significantly (P < 0.05) from 4 to 6 years under all genotypes (by 1.2-1.6) except Giganteus, and at all depths (by 0.7-1.6) except 0-0.1 m (Fig. 7). Only at 0-0.2 m depths was there a significantly greater SOC/TN ratio for plant compared to gap (by 0.9-2.1). The SOC and TN densities and their ratio decreased significantly with depth (P < 0.05). The SOC/TN ratio after 4 and 6 years under bioenergy crops was greater than that in the baseline in the upper soil layers. The original gravimetric SOC and TN concentrations (g kg⁻¹) are given elsewhere (Supplementary Material).
SOC and TN Stock
Stocks of SOC increased significantly (P < 0.05) in the 0.3-1.0 m depth between 4 and 6 years, from 29 ± 2 to 38 ± 2 Mg ha⁻¹ averaged over all crops, whereas TN stocks were unaffected by any factor (Table 3). Stocks of SOC in the full 1 m profile changed from 79 ± 4 to 81 ± 5 Mg ha⁻¹ after 4 years to 87 ± 5 to 96 ± 4 Mg ha⁻¹ after 6 years, and TN stocks in the full 1 m profile changed from 9.2 ± 0.9 to 11.0 ± 0.5 Mg ha⁻¹ after 4 years to 9.4 ± 0.5 to 10.8 ± 0.4 Mg ha⁻¹ after 6 years. Genotype-specific differences were not significant, however. Miscanthus-derived SOC increased significantly (P < 0.05) from 3.4 ± 0.5 to 5.2 ± 0.6 Mg ha⁻¹ after 4 years to 11.4 ± 0.8 to 14.2 ± 1.7 Mg ha⁻¹ after 6 years. For both SOC and TN, stocks in the 0-0.3 m depth were greater and stocks in the 0.3-1.0 m depth were lower after 4 and 6 years compared to those estimated for the baseline. Between years 4 and 6, SOC increased to 1 m depth at 6.4 ± 1.9 for Terra Nova, 7.4 ± 2.5 for Tora, 3…
Bioenergy Crop Root Characteristics
Root mass was greater under Miscanthus than under willow, although such differences were restricted to the upper 0.3 m. Root mass below 0.3 m was very small under all genotypes, due in part to the high clay content of the subsoil (> 26%), where root penetration was likely restricted to existing structures such as shrinkage cracks [32]. Only for Miscanthus was a spatial pattern in root mass identified with respect to location and canopy structure, whereby distinct plant and gap zones developed down to 0.2 m depth, particularly under Sinensis. Sinensis is a tuft-forming genotype, especially during establishment, and its areal coverage in the gap does not increase as much as that of Giganteus with age [7,23]. Only for Tora in the upper 0.1 m was there a suggestion of an effect of location on willow root mass. We were not able to sample directly over the centre of the willow plants; therefore, our plant sample under willow differs slightly from that under Miscanthus, which may partly explain the apparent lack of a location effect, although RLD was greater for plant than gap for both bioenergy crop species. Observed growth of other species (weeds) between willow plants may have provided a confounding source of roots that could not be distinguished using bulk measurements. Nevertheless, the results indicate that willow roots spread laterally as the stand matures [33].
Root mass and RLD increased substantially over the 2-year period between measurements in the upper 0.2 m under Giganteus and, especially, Sinensis. Continual growth of roots under Miscanthus was expected as the stand was still maturing during this period [34], with each new shoot developing its own root system. Root biomass may also increase for Miscanthus during periods of water stress [1]. Although both 2014 and 2015 recorded greater-than-average (1981-2010) annual temperature and rainfall (+1.5°C and +191 mm in 2014; +0.7°C and +48 mm in 2015), rainfall in March and April prior to sampling in 2015 had been up to 50% lower than average [29]. The root system of willows, particularly under the 2-year SRC regime, may have developed to its full extent prior to the first measurement after 4 years, such that subsequent root growth was balanced by turnover, as observed by others [35]. Total above-ground yield for both willow varieties … Sinensis (18 to 20 Mg ha⁻¹) over the same period. This would appear to support the differences in root biomass described above. Willow roots were finer than Miscanthus roots, which may increase their turnover rate [15], although mean root diameter under all genotypes increased over time, and RLD did not vary much over the measurement period. Despite measurable differences in some above-ground traits of willows [5], these did not appear to be manifest in differences in the root traits measured in this study. Our estimates of root biomass are comparable with other studies. Ferchaud et al. [22] reported root biomass of 4.1 Mg ha⁻¹, assuming a C content of 43% [15], under a 5-year stand of Giganteus in northern France under similar climatic conditions, similar to our measurement (2.9-7.1 Mg ha⁻¹) when root density is expressed as a stock. The rhizome biomass of 18.1 Mg ha⁻¹ given by Ferchaud et al. [22] is within the wide range of that calculated for the 4-year-old Giganteus herein (6.4 in gap and 109.3 Mg ha⁻¹ in plant). The rhizome itself may accumulate more than 1 Mg C ha⁻¹ year⁻¹ under established Miscanthus stands, providing a larger store of C than the root system (< 1 Mg C ha⁻¹ year⁻¹), according to Agostini et al. [15].
SOC and TN under Bioenergy Crops
We found that SOC increased over time under all genotypes below, though not above, 0.2 m depth. Differences in topsoil SOC gravimetric concentration (g kg⁻¹; Supplementary Material) were not manifest in differences in volumetric density because of bulk density (which was significantly lower (P < 0.05) under plant at 0-0.1 m, averaged over all genotypes), as observed previously [23]. Concentrations of SOC matched patterns in root concentration and density, where regressions explained 0.58 (concentration) and 0.55 (density) of the variance (Supplementary Material). This was most apparent for Miscanthus, where there was strong evidence linking Miscanthus-derived SOC to root mass, with regressions explaining up to 0.79 of the variance (Supplementary Material), though not obviously to RLD. This supports the hypothesis that roots were the main source of SOC under bioenergy crops [8], assumed to be facilitated by the reallocation of photosynthate to the root and rhizome at senescence. Similar mechanisms occur with respect to the stools of willow [1,33].
Whilst there may be some inputs to the soil from leaf litter under willow, this may be recycled back into the plant rapidly, as established stands effectively provide their own nutrient needs through internal cycling [1]. Rubino et al. [36] quantified significant incorporation of ¹³C-labelled poplar litter into the topsoil horizon in Italy. Significant input from Miscanthus leaves is unlikely, however [18], despite a potential input of up to 7 Mg C ha⁻¹ year⁻¹ [37][38][39]. Even for late-harvest Miscanthus, leaf fall remains largely undecomposed [22], partly because of its reduced quality arising from the translocation of N from senescing leaves to the rhizome at the end of the growing season [40]. This supports below-ground biomass as the primary source of SOC.
Above-ground yields over the 2012-2016 period were greatest for Giganteus (15 Mg ha⁻¹ year⁻¹) and least for Sinensis (9 Mg ha⁻¹ year⁻¹), with the willow varieties being intermediate (12 Mg ha⁻¹ year⁻¹), yet this was not reflected in root biomass or SOC. As TN did not change significantly over time, the SOC/TN ratio increased, particularly in the top 0.2 m under Sinensis. This may reflect the high C/N ratio of the source material: the C/N ratio of Miscanthus roots can exceed 40 (data not presented), suggesting that microbial N mining may control the decomposition of organic C, thereby increasing potential C sequestration rates and maintaining the low TN content of this N-limited system [41].
Inputs of C to the soil may be important when willow is coppiced, as increased root turnover may follow harvesting of above-ground biomass [2]. Comparable SOC sequestration rates of 3… have been reported [46], whilst others have reported losses of SOC under willow [47], particularly on former grassland [18,48]. On average, a lower mean annual SOC sequestration rate under willow of 0.6 Mg ha⁻¹ year⁻¹ has been reported [15].
Determining the stable ¹³C isotope signature of SOC from the Miscanthus stands confirms that fresh Miscanthus-derived C was progressively added to the soil after planting in 2009. Giganteus initially had a more extensive effect on increasing SOC stocks, but Sinensis subsequently became the more effective genotype. Equivalent Miscanthus-derived SOC accumulation rates in the 0-0.3 m depth … [15,51].
Data on SOC changes below 0.3 m are limited, but our estimated Miscanthus-derived SOC rates of up to 0.2 Mg ha⁻¹ year⁻¹ over the initial 4 years for both genotypes are similar to those reported in a global review of Giganteus planted following arable cropping (0.1 Mg ha⁻¹ year⁻¹; [51]) and in studies in Italy (0.5 Mg ha⁻¹ year⁻¹; [44]) and the UK (0.1 Mg ha⁻¹ year⁻¹; [23]). For the following 2 years, we report considerably greater Miscanthus-derived SOC accumulation rates of 1.5 (Giganteus) and 3.3 Mg ha⁻¹ year⁻¹ (Sinensis) in the 0-0.3 m depth and 1.6 (Giganteus) and 2.1 Mg ha⁻¹ year⁻¹ (Sinensis) in the 0.3-1.0 m depth. These rates are still comparable, however, to others reported in the literature. Rates of up to 6.8 Mg ha⁻¹ year⁻¹ for a 4-5-year-old stand and 8.8 Mg ha⁻¹ year⁻¹ for an 8-9-year-old stand were recorded in the upper 0.25 m under Giganteus in Germany [52], and rates of up to 4 Mg ha⁻¹ year⁻¹ in the topsoil have been reported for grass bioenergy crops in general [8].
Increases in SOC stock observed under Miscanthus were not wholly explained by Miscanthus-derived inputs, suggesting an input of C to the system from another (non-C₄) source. On occasion, we observed other plants to be present on the plots, despite weed control, though we could not quantify their areal extent. Other factors include trends in atmospheric CO₂ [53,54] and variations in plant input signatures relating to physiological stress [55,56], which all affect the δ¹³C determinations used to estimate Miscanthus-derived inputs. The former is unlikely to have been significant given the short period between measurements, but the latter may have been important. In the 0-0.3 m depth of gap, however, we estimated a greater accumulation rate of Miscanthus-derived SOC than of total SOC, which would indicate turnover of existing C₃ plant-derived SOC. Preferential processing of inherent SOC, leading to persistence of Miscanthus-derived SOC and an overall balance in SOC, has been observed previously [22,57], and has led to the caveat that effective sequestration may be marginal [18,23].
Temporal changes in soil TN were largely non-significant, but interesting differences were observed between genotypes. There were marginal increases in TN in the topsoil for plant (up to 0.13 Mg ha⁻¹ year⁻¹) and marginal decreases in gap (down to −0.31 Mg ha⁻¹ year⁻¹), but the opposite in the 0.3-1.0 m depth (down to −0.38 for plant and up to 0.41 Mg TN ha⁻¹ year⁻¹ for gap). We suggest that processes such as uptake of N by lateral roots in the gap, loss through leaching down the profile into the subsoil (where not directly mediated by the plant) and loss through denitrification (waterlogging has been observed at the field site in very wet periods) may have been responsible. Increased TN for plant probably derived from the same residues associated with SOC increases, albeit at much lower rates due to the relatively large C/N ratio of the plant inputs.
We estimate that by year 6, SOC was sequestered at a rate of 3 to 9 Mg SOC ha⁻¹ year⁻¹ under all crops in the upper 1 m, following an apparent loss of SOC in the first 4 years specific to the 0.3-1.0 m depth. That SOC may be lost in the establishment phase of bioenergy crop stands is normal, but we must be cautious with the baseline data. Stocks in 2009 prior to establishment were calculated from separate samples collected on separate occasions for SOC concentration and bulk density. It was likely that measured bulk density was greater in 2009 in part because samples were collected in smaller cores, presumably with less chance of including > 2 mm stones that are otherwise accounted for and removed. Additionally, the depth intervals in 2009 did not correspond to those in 2013 and 2015, which required adjustment pro rata. Nevertheless, we report an increase in SOC accumulation rates under all bioenergy crops with time. After 6 years, approximately 14% of SOC (11 to 14 Mg ha⁻¹) had derived from Miscanthus. Soil TN was a different matter: we estimated losses of up to −0.15 Mg TN ha⁻¹ year⁻¹ under Terra Nova but marginal increases of up to 1.0 Mg TN ha⁻¹ year⁻¹ under the other crops. We may presume that the high N requirements of willows [58] were met in part by soil TN reserves in addition to fertiliser.
Previous land use is important for a full evaluation of sequestration potential [8,16]. In their review of global datasets, Qin et al. [51] reported greater accumulation rates of SOC where willow and Miscanthus were planted on former arable land compared to grassland, where some net losses were recorded. This has been supported by paired-site studies in the UK [18]. Former grassland soils have been observed to recover such lost SOC in the medium term, however [12,42]. Undoubtedly, some SOC was lost following initial cultivation of the field site, but after the first few SRC cycles of willow, or the first few harvests of Miscanthus, we may expect the SOC stock to increase such that the initial losses are regained, particularly as the site was previously under arable cropping. We therefore expect C to have been sequestered in the soils under bioenergy crops, compared to the previous arable land use, over the lifetime of the plantation [8].
We observed soil SOC accumulation rates far exceeding a proposed C-neutrality threshold of 0.25 Mg ha⁻¹ year⁻¹ [15,24]. Indeed, if our rates of 3.09-8.84 Mg C ha⁻¹ year⁻¹ are converted firstly to CO₂ equivalents (11.3-32.4 Mg ha⁻¹ year⁻¹), we may then use the 100-year global warming potential factor of 298 to calculate the equivalent N₂O emission threshold that should not be exceeded to maintain overall greenhouse gas mitigation. This estimate of 38-109 kg N₂O ha⁻¹ year⁻¹ is far greater than current estimates of emissions under both arable (0.9-11.0 kg N₂O ha⁻¹ year⁻¹) and grassland agriculture (1.6-22.0 kg N₂O ha⁻¹ year⁻¹) in the UK [59]. Sequestration in the subsoil may be especially important as such SOC may become protected against further processing [60].
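The conversion chain behind these numbers is simple arithmetic: multiply the C rate by 44/12 to obtain CO₂ equivalents, then divide by the N₂O global warming potential of 298. A sketch reproducing the figures quoted above:

CO2_PER_C = 44.0 / 12.0  # mass ratio converting C to CO2
GWP100_N2O = 298         # 100-year global warming potential of N2O

for soc_rate in (3.09, 8.84):          # Mg C ha-1 year-1
    co2e = soc_rate * CO2_PER_C        # Mg CO2-eq ha-1 year-1
    n2o_kg = co2e / GWP100_N2O * 1000  # kg N2O ha-1 year-1
    print(f"{soc_rate} Mg C -> {co2e:.1f} Mg CO2e -> {n2o_kg:.0f} kg N2O")
# -> 3.09 Mg C -> 11.3 Mg CO2e -> 38 kg N2O
# -> 8.84 Mg C -> 32.4 Mg CO2e -> 109 kg N2O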
Although our SOC accumulation rates are greater than others (e.g. [17,48]), they are not unprecedentedly so (e.g. [43,52]), and the rates of change relative to initial SOC stocks are comparable to those reported elsewhere. Poeplau et al. [61] calculated accumulations of SOC of 10 to 50% within 10 years for arable-to-grass and arable-to-woodland land use conversions, which covers the kinds of proportional increases we found here (up to 12% in the 0-0.3 m depth and up to 43% in the 0.3-1.0 m depth from 4 to 6 years). The subsoil may also be affected by land use change [61,62] and accumulation rates may be greater where the subsoil SOC stock was initially low [17]. Berhongaray et al. [43] also calculated greater sequestration rates under SRC willow and poplar at 0.3-0.6 m depth compared to 0-0.3 m depth. Not all studies sample to the same depths as the present study and hence may miss SOC changes at depth. We ascribe our greater accumulation rates under both bioenergy crops to the younger age of the stand and the low initial SOC status of the soil, which had been under long-term arable management prior to planting.
Conclusions
Genotypes of willow and Miscanthus sequestered SOC in an underlying temperate agricultural silty clay loam soil, with the root system being the most likely source. Soil TN was little affected and hence the addition of fresh residues to the soil increased the SOC/TN ratio under all genotypes. This may lead to persistence of the new residues, although isotopic evidence for turnover of the inherent SOC in preference to fresh Miscanthus-derived SOC was limited, perhaps due to inputs from undergrowth species. Both bioenergy crops offer the double benefit of biomass production and C sequestration when planted in arable soil initially low in SOC. Future studies will continue to monitor changes in SOC and TN to assess the dynamics reported here and will exploit isotopic and biochemical approaches to focus on turnover rates of above- and below-ground plant constituents over the full life cycle of bioenergy crop stands.
"year": 2018,
"sha1": "0b92193dafc78a1c5d73fa253f0141f9a35618f7",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12155-018-9903-6.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "2872ad7a50eef1d68378296ffa616fa0925cabc9",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
White Matter Correlates of Domain-Specific Working Memory
Prior evidence suggests domain-specific working memory (WM) buffers for maintaining phonological (i.e., speech sound) and semantic (i.e., meaning) information. The phonological WM buffer's proposed location is in the left supramarginal gyrus (SMG), whereas semantic WM has been related to the left inferior frontal gyrus (IFG), the middle frontal gyrus (MFG), and the angular gyrus (AG). However, less is known about the white matter correlates of phonological and semantic WM. We tested 45 individuals with left hemisphere brain damage on single word processing, phonological WM, and semantic WM tasks and obtained T1-weighted and diffusion-weighted neuroimaging. Virtual dissections were performed for each participant's arcuate fasciculus (AF), inferior fronto-occipital fasciculus (IFOF), inferior longitudinal fasciculus (ILF), middle longitudinal fasciculus (MLF), and uncinate fasciculus (UF), which connect the proposed domain-specific WM buffers with perceptual or processing regions. The results showed that the left IFOF and the posterior segment of the AF were related to semantic WM performance. Phonological WM was related to both the left ILF and the whole AF. This work informs our understanding of the white matter correlates of WM, especially semantic WM, which has not previously been investigated. In addition, this work helps to adjudicate between theories of verbal WM, providing some evidence for separate pathways supporting phonological and semantic WM.
Introduction
Working memory (WM) is the cognitive system that allows us to maintain and manipulate information over short time periods [1]. WM supports many other cognitive processes, including understanding [2] and producing [3] language. Here, we report on the neural basis of domain-specific WM, specifically WM for phonological and semantic representations, which are critical for language processing [4]. Previous work has focused on elucidating the gray matter cortical regions supporting WM [5][6][7][8]. Investigating the role of white matter (the myelinated tracts that connect gray matter regions) in supporting WM has been a more recent endeavor. For a complete picture of the neural basis of WM, it is necessary to investigate the relation between WM performance and not only gray matter cortical regions but white matter tracts as well. That is, WM cannot be localized to any one brain area. Rather, WM is supported by networks of gray matter regions connected by white matter tracts. Impairments in WM could therefore occur due to the disruption of certain white matter tracts, even if the gray matter regions that they connect are spared. Consequently, relating deficits in domain-specific WM to the integrity of white matter tracts in individuals with brain damage should inform our understanding of how different brain regions communicate with each other to support phonological and semantic WM.
Existing work on white matter correlates of WM has primarily focused on the role of left hemisphere tracts and often does not control for single word processing or gray matter damage when these factors also potentially affect WM performance [9][10][11]. Additionally, the work has focused on measures tapping phonological WM and, thus far, no work has investigated the white matter correlates of WM for semantic representations. The present work addresses these gaps in the literature.
The Domain-Specific Model of WM
Understanding the neural network that supports WM can also help adjudicate between theories of WM. There are many different proposals regarding the structure of WM, including embedded processes models, where WM is the activated portion of long-term memory [12], and buffer models, where WM is supported by a buffer capacity that is separate from long-term knowledge [13,14]. One influential buffer model of WM is the multicomponent model of WM (Figure 1; Baddeley et al., 2021 [13]). The multicomponent model of WM contains one buffer capacity, the phonological loop, for all verbal representations, and another buffer capacity, the visuospatial sketchpad, for visual-spatial representations. More recently, an additional component has been added to the model: the episodic buffer has been proposed to bind together representations from the visuospatial sketchpad, the phonological loop, and episodic long-term memory (LTM) into a cohesive episodic representation [15].
Understanding the neural network that supports WM can also help adjudicate between theories of WM.There are many different proposals regarding the structure of WM, including embedded processes models-where WM is the activated portion of long-term memory [12]-and buffer models-where WM is supported by a buffer capacity that is separate from long-term knowledge [13,14].One influential buffer model of WM is the multicomponent model of WM (Figure 1; Baddeley et al., 2021).The multicomponent model of WM contains one buffer capacity, the phonological loop, for all verbal representations, and another buffer capacity, the visuospatial sketchpad, for visual-spatial representations.More recently, an additional component has been added to the model: the episodic buffer has been proposed to bind together representations from the visuospatial sketchpad, the phonological loop, and episodic long-term memory (LTM) into a cohesive episodic representation [15].In contrast, the domain-specific model of verbal WM includes separate buffers for phonological (i.e., speech sound), semantic (i.e., meaning), and orthographic (i.e., written) representations (Figure 2; Martin et al., 2021).The buffers are separate from each other as well as separate from long-term knowledge in their respective domains.The buffer capacities can also be damaged separately from each other.People with brain damage sometimes demonstrate striking double dissociations in their abilities to maintain phonological and semantic information.That is, some people have difficulty maintaining speech sounds but a better ability to retain word meanings after a left hemisphere stroke, suggesting selective damage to the phonological WM buffer.In contrast, others have difficulty maintaining word meanings but a better ability to maintain speech sounds after a left hemisphere stroke, suggesting selective damage to the semantic WM buffer [16][17][18].The development of the domain-specific model has been heavily influenced by neuropsychological investigations of patients with phonological and semantic WM buffer deficits [14,19,20].In contrast, the domain-specific model of verbal WM includes separate buffers for phonological (i.e., speech sound), semantic (i.e., meaning), and orthographic (i.e., written) representations (Figure 2; Martin et al., 2021 [14]).The buffers are separate from each other as well as separate from long-term knowledge in their respective domains.The buffer capacities can also be damaged separately from each other.People with brain damage sometimes demonstrate striking double dissociations in their abilities to maintain phonological and semantic information.That is, some people have difficulty maintaining speech sounds but a better ability to retain word meanings after a left hemisphere stroke, suggesting selective damage to the phonological WM buffer.In contrast, others have difficulty maintaining word meanings but a better ability to maintain speech sounds after a left hemisphere stroke, suggesting selective damage to the semantic WM buffer [16][17][18].The development of the domain-specific model has been heavily influenced by neuropsychological investigations of patients with phonological and semantic WM buffer deficits [14,19,20].Double dissociations between phonological and semantic WM have been reported.Patients with a prominent deficit for maintaining phonological representations are classified as having a phonological WM deficit, while those who have more trouble maintaining semantic representations are classified as having a 
semantic WM deficit.Patients with phonological WM deficits do not show standard phonological effects on WM, such as the word length effect and phonological similarity effect.They also perform better with the visual versus auditory presentation of list items, the opposite of the typical performance pattern.Phonological WM is often assessed using a task such as digit matching, where participants hear two lists of digits and must indicate whether the two lists are the same or different.Performance on digit matching relies on maintaining phonological information associated with the list items.Although some role of semantic representations is evident in the digit list recall [21], such tasks primarily tap into phonological WM.Patients with phonological WM deficits perform more poorly on such tasks than on others that focus on semantic information, such as the category probe task [17].In the category probe task, participants hear a list of words followed by a probe word.Then, they must draw on their short-term retention of the semantic representations associated with the trial items, to indicate whether the probe word is in the same category as any of the list items.In addition to performing worse on the category probe task in comparison to the digit matching task, people with semantic WM deficits do not show an advantage for memory of words over nonwords, though they do show standard phonological effects on WM.In summary, neuropsychological investigations of WM provide evidence for separable phonological and semantic WM buffers.This claim is corroborated by neuroimaging evidence concerning gray matter correlates of domain-specific WM [6].
Gray Matter Correlates of Domain-Specific WM
Most investigations of the gray matter correlates of WM suggest that there are distinct gray matter regions supporting WM capacity in distinct domains. Specifically, the supramarginal gyrus (SMG) has been implicated in phonological WM [6,8,22-24] while the inferior and middle frontal gyri (IFG/MFG) and the angular gyrus (AG) have been related to semantic WM [5-7,25]. Early work by Paulesu et al. (1993) [23] found that phonological WM was related to activation in the left inferior parietal region, specifically the SMG. They also observed activation in frontal regions, including the IFG, that they interpreted as supporting articulatory rehearsal. The first study to contrast phonological versus semantic maintenance in a functional MRI study was Martin et al. (2003) [22], which reported significantly greater activation in the left SMG for a phonological compared to a semantic maintenance task and marginally greater activation for semantic compared to phonological maintenance in the left IFG/MFG. Earlier functional MRI findings have been corroborated by more recent work using both univariate and multivariate approaches to functional MRI analysis. Yue and colleagues (2019) [8] observed sustained activation and a load effect in the SMG during the maintenance phase of a phonological WM task. Yue and Martin (2021) [24] offered further evidence for the SMG's role in phonological WM by using representational similarity analysis to demonstrate that observed patterns of neural activity in the SMG were related to memory items' phonological similarity, as represented by a theoretical phonological similarity matrix. Neuropsychological work, including lesion-symptom mapping with patients after a tumor resection and neural stimulation during awake neurosurgery, has also supported the SMG's role in phonological WM [26,27].
Less work has been carried out investigating the gray matter correlates of semantic WM, but the functional MRI work that has been reported with healthy young adults implicates frontal regions, including the inferior and middle frontal gyri. Shivde and Thompson-Schill (2004) [7] found that the short-term maintenance of semantic information was associated with activation in the IFG and MFG, and the maintenance of phonological information was associated with the SMG. Additionally, healthy young adults showed greater activation in the IFG and MFG when they had to maintain a greater number of semantic representations during a language comprehension task [5].
One neuropsychological investigation into the gray matter correlates of WM that is of particular importance to the present work is that of Martin, Ding, Hamilton, and Schnur (2021) [6], which specifically examined the differences between the neural damage affecting phonological versus semantic WM using a lesion-symptom mapping approach. The study included 94 patients at the acute stage of a left hemisphere stroke, ruling out the possibility of reorganization of function, and related brain damage to semantic or phonological WM while controlling for single word processing and the other WM component. Decrements in phonological WM were related to damage in the SMG, as well as to cortical regions in the frontal lobe and subcortical regions, which others have posited are involved in motor aspects of articulation and rehearsal. Decrements in semantic WM were related to damage to the angular gyrus and the inferior frontal gyrus. This recent lesion-symptom mapping study corroborates previous work finding the SMG's relation to phonological WM and the IFG, MFG, and AG's relation to semantic WM. Studies of the gray matter correlates of domain-specific WM therefore support the hypothesis of distinct gray matter correlates for phonological and semantic domains of WM.
White Matter Correlates of Domain-Specific WM
Compared to studies of the gray matter correlates of domain-specific WM, less is known about the white matter fiber tracts that support WM. Past work has focused on relating white matter tract integrity to individual differences in visuospatial or verbal WM and implicates tracts connecting widespread gray matter cortical regions across all four lobes of the brain [28]. The consensus is that WM relies on communication between disparate gray matter regions that are connected by white matter tracts. In buffer models of WM, such as the multicomponent model [13] and the domain-specific model [14], information must be transferred into the storage buffer (e.g., a phonological buffer region) from processing regions (e.g., speech perception regions). The better connected the regions involved in processing or storage are, the more information can be transmitted between cortical regions, resulting in increased efficiency of WM processing [29]. It may also be the case that larger or denser axons allow for a greater range of neuronal oscillation frequencies, facilitating communication between brain regions [30]. Miller and Buschman (2015) [31] applied this idea in the WM domain by proposing that, if cognitive functions such as WM rely on the synchronous activity of a brain network, then a greater range of possible neuronal oscillation frequencies would facilitate synchronous activity between the regions involved in WM processes.
Work investigating the white matter correlates of verbal WM typically relates white matter integrity to performance on tasks such as letter, digit, word, and nonword span. These tasks depend substantially on the retention of phonological information [32][33][34] and are used to specifically measure the short-term retention of phonological information [17]. While there is some variation across the tasks, the findings on the neural correlates of phonological WM typically implicate frontoparietal tracts, including the superior longitudinal fasciculus and, more specifically, the arcuate fasciculus. This finding has been replicated across many different study populations, including healthy young adults [10,35], older adults [11], children [36], people with a left hemisphere stroke [37], and people with multiple sclerosis [9,38].
For instance, Takeuchi et al. (2011) [35] measured young adults' WM capacity using a letter span task and found that, after controlling for age, WM capacity was related to white matter volume in frontoparietal and temporal regions, including regions corresponding to the path of the AF. Further, Burzynska et al. (2011) [10] reported that the integrity of frontoparietal tracts was related to performance on both high and low load WM tasks, as well as to the level of cortical responsivity (the difference between BOLD activity in a region for low versus high load conditions) in gray matter regions supporting WM. Furthermore, in a study on middle aged and older adults using tract-based spatial statistics, the left AF was related to performance on WM measures [11]. This finding has been similarly observed in other studies of the white matter correlates of WM in aging [39]. At the opposite end of development, the maturation of frontoparietal white matter tracts, including the AF, has also been related to WM development in children. Ostby and colleagues (2011) [36] measured the radial diffusivity (RD) of the superior longitudinal fasciculus in a group of children and adolescents. RD is a measure of diffusion along the radial plane of the axon and is generally thought to indicate the level of myelination. They found that phonological storage capacity (as measured by forward digit span) was related to the RD of the superior longitudinal fasciculus. The relationship was interpreted as demonstrating the role that white matter myelination, specifically myelination of the superior longitudinal fasciculus, plays in the development of WM.
There is also some evidence that damage to frontoparietal tracts leads to deficits in WM for patients with white matter lesions after a stroke. A patient with selective damage to the superior longitudinal fasciculus and arcuate fasciculus was significantly worse on measures of verbal WM, including forward digit span and word span tasks, compared to control participants [37]. For patients with multiple sclerosis, a degenerative disease affecting the white matter of the brain, microstructural degeneration of a diffuse network of white matter tracts connecting the frontal, parietal, and temporal regions has been observed. This diffuse network included the superior longitudinal fasciculus and the AF, and its degradation was associated with WM deficits [9,38].
Implications for Theoretical Models of Working Memory
Investigations of the gray and white matter regions underlying phonological and semantic WM capacities also inform our theories about the structure of WM, including adjudicating between buffer models of WM, such as the multicomponent model of WM [13] and the domain-specific model of WM [14]. While both the multicomponent model of WM and the domain-specific model of WM include buffer capacities for verbal representations, the domain-specific model is unique in that it contains multiple buffers for different types of verbal representations, including phonological and semantic representations. The difference in the level of specificity for the buffers in the two models leads to different predictions about the neural correlates of WM from each model. The multicomponent model of WM contains a phonological loop, a buffer for maintaining phonological representations, but it is unable to explain cases of preserved semantic WM with impaired phonological WM. In the multicomponent model, semantic WM is supposedly supported by the episodic buffer, but the episodic buffer is conceived as a multimodal buffer for the integration of semantic, phonological, and spatial information [15]. Thus, damage to the neural basis of the episodic buffer should affect the maintenance of both semantic and phonological WM. In contrast, the domain-specific model of WM includes separate WM buffers for lexical-semantic and phonological representations, and it predicts that the neural correlates of semantic and phonological WM should be distinct. Our investigation of the white matter correlates of phonological and semantic WM provides a piece of neural evidence that can be used to differentiate between the two approaches to WM and its relation to language processing.
The Current Study
In this work, we used diffusion tensor imaging (DTI) to analyze MRI data from a large group (n = 45) of people who had a left hemisphere stroke. We related the extent of damage to the left hemisphere white matter tracts of interest (Figure 3), including the arcuate fasciculus (AF), uncinate fasciculus (UF), middle longitudinal fasciculus (MLF), inferior longitudinal fasciculus (ILF), and inferior fronto-occipital fasciculus (IFOF), to decrements in semantic WM, phonological WM, and language processing abilities. These tracts were chosen based on the literature and/or because they terminate in gray matter regions previously found to support phonological or semantic WM (see further discussion below). In this work, there were two primary aims: (1) replicate and extend past work investigating the white matter correlates of phonological WM and (2) investigate the white matter correlates of semantic WM.
Fronto-parietal white matter regions have been associated with phonological WM performance in studies across many different healthy and clinical populations [11,35,39]. As discussed earlier, one white matter tract which is consistently implicated in phonological WM performance is the AF [40]. The AF is a large bundle of fibers that connects gray matter regions in the frontal, temporal, and parietal lobes. It consists of three subsegments: anterior (or parietal), posterior (or temporal), and direct segments (see Figure 4).
The anterior and posterior segments of the AF are together referred to as the indirect pathway of the AF.The direct segment lies medial to the indirect pathway.The AF connects the SMG, the proposed location of the phonological WM buffer [6,8], to regions in the frontal lobe that support articulatory rehearsal [41,42] and executive function [43] and regions in the temporal lobe that support speech perception [44].Thus, we propose that the left posterior segment of the AF may support phonological WM by transferring speech that is perceived in temporal regions to the SMG for maintenance, and then, the left anterior segment passes that information to frontal regions for rehearsal.Our prediction is that the integrity of the posterior and anterior segments of the AF (together, the indirect pathway of the AF) will predict phonological WM performance when controlling for semantic WM, single word processing, and gray matter damage.While past work has implicated the left AF in verbal WM, little of this work has been carried out with people who have experienced a left hemisphere stroke (but rather with healthy children, younger or older adults), and none has controlled for single word processing or semantic WM to understand the relationship between the AF and specifically the maintenance of phonological information.Further, this work is unique in its investigation into the specific roles that the subsections of the AF may play in supporting phonological WM.Past work on the role of frontoparietal white matter in verbal WM has focused on the integrity of the AF as a unitary structure, or even less specifically, the entire superior longitudinal fasciculus (a large white matter bundle that contains the AF, as well as other frontoparietal tracts) [45].Here we chose to analyze both the AF as a unitary structure, in order to better compare with past work, as well as the individual subsegments of the AF.
To our knowledge, there have been no previous studies investigating the white matter correlates of semantic WM. However, there is evidence that the IFG, a gray matter region in the frontal lobe, is involved in semantic WM [5,6]. Therefore, we predict that the left direct segment of the AF, IFOF, and UF, white matter tracts that connect the IFG with semantic processing regions (Figure 3) [46], will support semantic WM. A recent VLSM study found that, in addition to the IFG, AG damage was also related to semantic WM impairments [6]. Additionally, an RSA investigation into the neural basis of semantic WM reported that during a delay period in a semantic WM task, semantic representations could be decoded from the AG [24]. Thus, we also predict that the left MLF and ILF, which include projections to the AG and connect it to temporal regions supporting semantic knowledge, will also support semantic WM (Figure 3) [47]. Overall, our prediction is that the left UF, MLF, ILF, IFOF, and direct segment of the AF will predict semantic WM after controlling for phonological WM performance, single word processing, and gray matter damage.
Methodology

Participants
Participants included 45 people with left hemisphere brain damage. Behavioral and imaging data were collected from 24 participants recruited through Rice University, and 21 through Baylor College of Medicine. Participants recruited through Rice University were enrolled in studies in the laboratory between 2005 and 2020. The participants recruited through Baylor College of Medicine were initially recruited as part of a longitudinal study of the effects of a left hemisphere stroke on language, memory, and executive control, from the acute stage to one year post-stroke. All participants had brain damage due to left hemisphere stroke(s) and were at least one year post-stroke at the time of testing. The mean participant age was 60.2 years (SD = 10.9), and the mean education level was 15.3 years (SD = 2.6). Seventeen participants identified as female. The participants recruited through Rice University were tested in accordance with Rice University's Institutional Review Board. Those recruited through Baylor College of Medicine were tested in accordance with the Institutional Review Board for Baylor College of Medicine.
Neuroimaging Acquisition
Neuroimaging data were collected over many years, and three different scanners were used (Table 1). The acquisition parameters for the diffusion weighted and the T1 weighted scans associated with each scanner are presented below. While multi-institutional diffusion imaging studies are still uncommon, there is some evidence for the feasibility of combining diffusion weighted data collected across different magnets [48].

Philips Intera 3T acquisition parameters. The acquisition parameters for the participants scanned in a Philips Intera 3T scanner were as follows:
Tractography and Tract Segmentation
All pre-processing, tractography, and tract segmentation was completed using ExploreDTI [50]. The preprocessing protocol for all diffusion weighted scans included signal drift correction, Gibbs ringing correction, and correction for Eddy currents. The diffusion weighted scans were registered to each participant's T1 scan to correct for motion and perform the EPI correction.
Whole brain tractography was performed on the processed and registered diffusion weighted data. We used a deterministic tractography approach with the following parameters: (1) FA threshold = 0.2; (2) step length = 1; (3) angle threshold = 30. Whole brain tractography was followed by the virtual dissections of the tracts of interest. The tracts were dissected using hand-drawn regions of interest in each patient's native space. The AF and its anterior, direct, and posterior subsections were dissected manually, based on the methods described by Catani and colleagues (2005) [51]. The IFOF, ILF, and UF were segmented based on the methods discussed in [51]. The MLF was segmented as described in [52]. Fractional anisotropy (FA) values were extracted to quantify the integrity of each tract after segmentation.
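For readers unfamiliar with deterministic tractography, the following is a minimal NumPy sketch of the propagation rule implied by the three parameters above: stop when FA drops below 0.2, advance in steps of 1 (assumed here to be millimeters on a 1 mm isotropic grid), and stop at turns sharper than 30 degrees. This is an illustration of the general technique only, not the ExploreDTI implementation; the array names and single-direction seeding are simplifying assumptions.

```python
import numpy as np

def track_streamline(seed, peak_dirs, fa, fa_thresh=0.2,
                     step_mm=1.0, max_angle_deg=30.0, max_steps=500):
    """Follow the principal diffusion direction from a seed point.

    seed      : (3,) start position in voxel coordinates (1 mm isotropic assumed)
    peak_dirs : (X, Y, Z, 3) unit principal eigenvectors from the tensor fit
    fa        : (X, Y, Z) fractional anisotropy map
    Stops when FA falls below fa_thresh, the turn exceeds max_angle_deg,
    or the streamline leaves the volume.
    """
    cos_limit = np.cos(np.deg2rad(max_angle_deg))
    pos = np.asarray(seed, dtype=float)
    prev_dir = None
    points = [pos.copy()]
    for _ in range(max_steps):
        ijk = tuple(np.round(pos).astype(int))
        if any(i < 0 for i in ijk) or any(i >= s for i, s in zip(ijk, fa.shape)):
            break                        # left the imaging volume
        if fa[ijk] < fa_thresh:
            break                        # anisotropy too low to keep tracking
        step_dir = peak_dirs[ijk]
        if prev_dir is not None:
            if np.dot(step_dir, prev_dir) < 0:
                step_dir = -step_dir     # eigenvectors are sign-ambiguous
            if np.dot(step_dir, prev_dir) < cos_limit:
                break                    # turn sharper than the angle threshold
        pos = pos + step_mm * step_dir
        points.append(pos.copy())
        prev_dir = step_dir
    return np.array(points)
```

A full implementation would also propagate from each seed in the opposite direction and join the two halves; whole-brain tractography simply repeats this procedure from seeds placed throughout the white matter.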
Phonological Working Memory (Digit Matching Span)
In the digit matching span task, participants heard two lists of digits presented one after the other. They were asked to respond "yes" if the lists were the same and "no" if the lists were different. The list items were presented at approximately one word per second. The list lengths varied from two to six items. There were six lists for the two-item trials; eight for the three-item trials; six for the four-item trials; eight for the five-item trials; and ten for the six-item trials. Within each set of trials, half of the lists were matching, and half of the lists were non-matching. In the lists that were non-matching, two of the digits presented in the second list were transposed. The position of the transposition was approximately equal across the list positions. The task was discontinued when the participant's performance dropped below 75 percent correct on a given list length. Linear interpolation between the two list lengths that spanned 75 percent correct was used to calculate the estimated span length. If a participant did not score below 75 percent correct on the longest list length, their span was calculated using linear interpolation, assuming they would have scored 50 percent correct with lists of seven digits.
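The span calculation reduces to a single linear interpolation around the 75 percent criterion. Below is a small Python sketch of that rule; the function and variable names are ours, not the authors':

```python
def interpolated_span(accuracy, criterion=0.75, assumed_next=(7, 0.50)):
    """Estimate a span from per-list-length accuracy.

    accuracy: proportion correct at each tested list length,
              e.g. {2: 1.0, 3: 1.0, 4: 0.875, 5: 0.625}
    assumed_next: (length, accuracy) assumed when performance never
              drops below the criterion at the longest tested length
              (seven digits at 50 percent, per the text).
    """
    acc = dict(accuracy)
    acc.setdefault(assumed_next[0], assumed_next[1])
    lengths = sorted(acc)
    for shorter, longer in zip(lengths, lengths[1:]):
        if acc[shorter] >= criterion > acc[longer]:
            frac = (acc[shorter] - criterion) / (acc[shorter] - acc[longer])
            return shorter + frac * (longer - shorter)
    return float(lengths[0])  # below criterion even at the shortest tested length

print(interpolated_span({2: 1.0, 3: 1.0, 4: 0.875, 5: 0.625}))  # -> 4.5
```

The category probe span described below appears to follow the same rule, with `assumed_next=(5, 0.50)` for its shorter lists.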
Semantic Working Memory (Category Probe)
In the category probe task, participants heard a list of words followed by a probe word. They were asked to indicate whether the probe word was in the same category as any of the words from the list. The categories represented in the list items were animals, body parts, clothing, fruit, and kitchen equipment. If the probe word was in the same category as any of the list items, participants responded "yes," and they responded "no" if the probe word was not in the same category as any of the list items. The responses could be verbal or nonverbal (e.g., pointing or nodding/shaking the head). Prior to administration, the participants were familiarized with the five categories from which the list items were sampled. The list length began with one item and increased up to four items. The items were presented at the speed of approximately one word per second. The category probe span was calculated using linear interpolation as in the digit matching span task described above. If a participant did not fall below 75 percent correct on the longest list length, linear interpolation was calculated assuming that they would have scored 50 percent correct on the five-item list length.
Single Word Processing (Picture-Word Matching)
In the picture-word matching task, participants saw a black and white line drawing (e.g., a picture of a crown) and were asked a question about the name of the picture [6,53,54]. The name provided could be the target name (e.g., Is this a crown?), a phonological distractor (e.g., Is this a clown?), a semantic distractor (e.g., Is this a hat?), or an unrelated foil (e.g., Is this a knife?). There were a total of 68 trials divided evenly into four presentation sets of 17 items. The participants responded "yes" if the question matched the picture and "no" otherwise. The responses could be either verbal or nonverbal (e.g., pointing or nodding/shaking the head). The dependent measures from this task were d' phonological and d' semantic values that indexed a participant's ability to discriminate between the target word and either the phonological or semantic distractor, respectively.
Analysis Plan
We used multiple regression to analyze the relationship between tract integrity and WM performance. In our models, integrity of each of our tracts of interest (quantified using FA) was regressed on both phonological and semantic WM, single word semantic and phonological processing, and gray matter damage to the regions where the tract of interest terminates. We chose a multiple regression approach to test our hypotheses because it allowed us to observe the relation between white matter tract integrity and each of our predictor variables independently of the other predictors included in the model. Controlling for gray matter damage in specific WM regions allowed us to test the prediction that white matter tract integrity predicts WM performance beyond damage to gray matter regions that are thought to support WM [55]. For example, the UF connects the IFG, a proposed semantic WM region [5][6][7], with the anterior temporal lobe, a region proposed to represent semantic knowledge [56,57]. Testing the relationship between UF integrity and semantic WM while also controlling for damage specifically to the IFG and anterior temporal lobe would be a strong test of the role that the UF plays in semantic WM. Because phonological and semantic WM are generally correlated with each other as well as single word processing, we made the decision to use white matter tract integrity as our dependent variable in our multiple regression models, regressing it simultaneously on the measures of phonological and semantic WM, single word phonological and semantic processing, and gray matter damage. This allowed us to determine the relation of tract integrity to each of the WM measures while controlling for all other measures.
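As a concrete sketch of one such model (here for the UF), using Python's statsmodels formula interface; the data frame and column names are hypothetical stand-ins for the measures described above, not the study's actual variable names:

```python
import statsmodels.formula.api as smf

# df: one row per participant, with tract FA, the two WM spans, the two
# single-word d' scores, and percent gray matter damage to the tract's
# termination regions (cube-rooted, as described in the Results).
df["gm_damage_cbrt"] = df["gm_damage_pct"] ** (1 / 3)

uf_model = smf.ols(
    "fa_uf ~ phon_wm_span + sem_wm_span + phon_dprime + sem_dprime + gm_damage_cbrt",
    data=df,
).fit()
print(uf_model.summary())  # the WM-span weights test the tract's WM hypotheses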
Because of the extensive brain damage for some participants, some tracts could not be identified, suggesting that in those instances the tract no longer existed. Using an FA value of 0 for such tracts resulted in an approximately bimodal distribution of FA for some tracts (see Appendix A), which resulted in distributions that violated the assumptions of multiple regression, specifically, assumptions of normality of residuals and/or equal variances around the regression line. Thus, we elected not to include the 0 values in the multiple regressions [58]. However, because this resulted in a substantial reduction in sample size for some tracts, we adopted a logistic regression approach for tracts where 10 cases or more were not reconstructed [59]. Specifically, the left hemisphere tracts and the number out of 45 participants that could not be tracked included the AF (18), the anterior segment of the AF (21), the posterior segment of the AF (27), the direct segment of the AF (23), the IFOF (13), and the UF (11). Logistic regression was used to predict the involvement or lack thereof of a tract with both phonological and semantic WM, phonological and semantic single word processing, and gray matter damage to the regions where the tract of interest terminated. For the left AF and its subsections, IFOF, and UF, both logistic and continuous regressions were performed.
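For tracts that frequently could not be reconstructed, the corresponding logistic model predicts presence versus absence instead of FA. A sketch under the same hypothetical column names as above:

```python
import statsmodels.formula.api as smf

# 1 if the tract could be reconstructed, 0 if it could not (FA recorded as missing)
df["uf_present"] = df["fa_uf"].notna().astype(int)

uf_logit = smf.logit(
    "uf_present ~ phon_wm_span + sem_wm_span + phon_dprime + sem_dprime + gm_damage_cbrt",
    data=df,
).fit()
print(uf_logit.summary())
```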
In all regression models, we screened for outliers using both studentized residuals and Cook's d. An observation was considered an outlier if it had a studentized residual of 2.5 or higher or greater than three times the mean Cook's d value. Outliers were excluded from the multiple regression models predicting the FA values for the left posterior AF and MLF, and no more than two outliers were ever identified and excluded from any model.
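The two outlier criteria map directly onto standard regression diagnostics. A sketch continuing from the hypothetical OLS model above; we assume the 2.5 cutoff applies to the absolute studentized residual:

```python
influence = uf_model.get_influence()
studentized = influence.resid_studentized_external
cooks_d = influence.cooks_distance[0]  # first element holds the distances

# Flag observations meeting either exclusion criterion from the text.
is_outlier = (abs(studentized) >= 2.5) | (cooks_d > 3 * cooks_d.mean())
print(f"{is_outlier.sum()} outlier(s) to exclude before refitting")
```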
Descriptive Results
The histograms and box plots of the distributions for all white matter tract FA values (both with and without zero values for the untraceable tracts) are presented in Appendix A. The histograms and box plots for the distributions for WM and single word processing measures are presented in Appendix B. The descriptive statistics for all white matter tract FA values are presented in Table 2, and the descriptive statistics for all WM and single word processing measures are presented in Table 3.
Tract Integrity and WM
In all continuous multiple regression models reported here, tract FA was regressed on phonological WM (digit matching), semantic WM (category probe), phonological single-word processing (phonological d'), semantic single-word processing (semantic d'), and the cube root of gray matter damage to the tracts' termination regions. We transformed the measures of percent damage to gray matter regions by taking the cube root because the distribution of gray matter damage was highly negatively skewed. The predicted relationships between left hemisphere tracts and phonological or semantic WM are outlined in Table 4, in terms of their independent contribution in the multiple regression. The pairwise correlations between the left hemisphere white matter tract FA values and the behavioral measures are presented in Table 5. Using the FDR correction for multiple comparisons (Benjamini and Hochberg, 1995) [60], separately, for the pairwise relations to semantic and phonological WM, phonological WM was related to the whole AF, the direct segment of the AF, the posterior segment of the AF, and the ILF. Semantic WM was related to the direct segment of the AF, the posterior segment of the AF, the IFOF, and the ILF. However, although the pairwise results suggested several relations between tract FA and both phonological and semantic WM, it is important to factor in single-word processing and gray matter damage to terminations because, for example, variations in phonological processing may have reduced pairwise correlations to phonological WM, whereas variations in semantic processing may have contributed to positive correlations. The results of the continuous multiple regression analyses that tested the hypothesized relations between left hemisphere tracts and WM, while including all the control variables, are presented in Table 6. As shown there, two tracts showed significant weights for semantic WM and two for phonological WM. Phonological WM was related to the integrity of the whole AF and the ILF. Semantic WM, on the other hand, was related to the posterior portion of the AF and the IFOF. In regard to correcting for multiple comparisons, the FDR correction cannot be directly applied to the results from several multiple regression analyses. We note, however, that if we treated the 16 total weights for semantic and phonological WM as independent observations, one might have expected that less than one weight would have been significant by chance alone (0.05 × 16 = 0.8) for alpha = 0.05. Thus, the fact that four weights were significant suggests that these findings are unlikely to reflect chance alone.

Inferior Fronto-Occipital Fasciculus (IFOF)

As predicted, the weight for semantic WM but not phonological WM was significant in the continuous multiple regression model predicting the left IFOF (Table 6). The logistic regression results mirrored the results of the continuous regression in that semantic but not phonological WM predicted the presence of the left IFOF (Table 8).
Inferior Longitudinal Fasciculus (ILF)
When we predicted left ILF FA values, the weight for phonological but not semantic WM was significant (Table 6).
Middle Longitudinal Fasciculus (MLF)
In the model predicting the left MLF FA, neither the weight for phonological nor semantic WM was significant (Table 6).

Uncinate Fasciculus (UF)

We did not observe a significant weight for either WM measure in the multiple regression models predicting the FA for the left UF (Table 6). Because there were many instances where the left UF could not be tracked, we also tested the relation between left UF integrity and WM using logistic regression. We predicted the presence of the UF with both WM measures, single-word processing, and the cube root of damage to UF terminations. Neither semantic nor phonological WM were significant predictors of the UF's presence (Table 9).
Discussion
Here, we have reported the relationships between white matter tract integrity and domain-specific WM in a large (N = 45) group of people with left hemisphere brain damage. We predicted that phonological WM would be related to the integrity of the left AF's anterior and posterior segments. Additionally, we predicted that semantic WM would be related to the integrity of the left direct segment of the AF, IFOF, ILF, MLF, and UF. Our predictions regarding the white matter correlates of phonological and semantic WM were based on the terminations of these tracts. Thus, we predicted that a tract would be involved in phonological WM if it terminated in the SMG and semantic WM if it terminated in the IFG or AG. A summary of the predicted and observed relations between left hemisphere tracts and WM performance is presented in Table 4.
Our predictions for the white matter correlates of phonological WM were partially supported. We reported a relation between the integrity of the whole AF and phonological WM, replicating past work reporting relationships between measures of frontoparietal tract integrity and phonological WM performance [9][10][11]. In addition to the relation between the AF and phonological WM, we also observed an unpredicted relationship between phonological WM and the left ILF. While the ILF connects the temporal lobe with the inferior parietal and occipital lobes, we do not have a detailed understanding of where exactly this tract terminates. While there are certainly distinct patterns, there is also an amount of observed heterogeneity, particularly in brains that have been altered because of brain damage. While we assume the ILF is more often associated with the AG, a semantic WM buffer, it is possible that it has some terminations in the nearby SMG, the proposed phonological WM buffer, as well.
Our predictions for the white matter correlates of semantic WM were also partially supported. The relationship between semantic WM and the IFOF came out as expected. The left IFOF has terminations in frontal regions including the left IFG, which is a proposed semantic WM buffer region [5,6]. We propose that the left IFOF connects gray matter regions in the temporal lobes supporting semantic processing with the IFG, allowing for information in perceptual and semantic processing regions to be transferred to the IFG for semantic maintenance. There is also evidence that, in some people, the IFOF includes terminations in the precuneus region, which includes the AG [61]. Thus, another explanation for the IFOF's relation to semantic WM could be that it connects two semantic WM regions, the IFG and the AG, as part of a larger network supporting semantic WM. Unexpectedly, we also observed a relationship between the posterior segment of the AF and semantic WM. As with the ILF, we observed heterogeneity in where exactly the posterior segment of the AF terminated in the parietal lobe. Considering the proximity of the SMG (proposed phonological WM buffer) and the AG (proposed semantic WM buffer), it is entirely possible that our method of segmenting each individual patient's tract in their native space meant that the posterior AF was, in at least a subset of our patients, connecting the left AG with the temporal lobe.
We did not observe support for the relationships we predicted between semantic WM and the left direct AF, ILF, MLF, or UF in the multiple regression analyses. The direct segment of the AF also has terminations in perceptual processing regions in the temporal lobe that we predicted would allow it to transfer semantic information from processing regions to the IFG for storage. Similarly, the left ILF and MLF were predicted to support semantic WM because they have terminations in the occipital and inferior parietal lobe, which includes the AG region, as well as the anterior temporal region. In both cases, we predicted that the white matter tracts allow for the semantic knowledge stored in anterior temporal regions to pass to the AG, a semantic WM buffer [56]. What are some possible explanations for why many of the semantic WM predictions were not supported? For the direct AF, ILF, and MLF, it may be that the region of the temporal lobe that these tracts terminate in is not critical for semantic processing across modalities. The hub-and-spoke model of semantic processing proposes a modality-invariant hub coordinating semantic information across the distributed semantic processing regions [62].
Originally, it was proposed that this hub was located in the anterior temporal lobe [62]. However, more recent evidence has suggested there are gradations within the ATL where modality-invariant semantic processing is related to more middle and inferior portions of the temporal lobe, including the anterior fusiform gyrus [63]. Thus, it may be that while the ILF, MLF, and direct AF all have terminations in the temporal lobe, these terminations may not be in regions supporting modality-invariant processing, which would be most critical for semantic WM. Finally, while we predicted that the UF would be related to semantic WM because it provides a direct connection between the IFG and the anterior temporal lobe, we did not observe a relationship between the UF and WM after accounting for the contribution of other effects using our multiple regression approach. However, while the UF terminates in orbital frontal regions that include an area implicated in aspects of semantic processing (i.e., Brodmann's area 47; Poldrack et al., 1999) [64], prior studies specific to WM for semantic information have revealed more posterior IFG regions (e.g., Hamilton et al., 2009 [5]).
Our findings about the neural correlates of WM also contribute to our theoretical understanding of WM. Specifically, understanding the neural basis of WM helps adjudicate between two buffer models of WM: the multicomponent model of WM and the domain-specific model of WM. The multicomponent model of WM includes a phonological loop, which maintains phonological information, and an episodic buffer, which integrates (and supports the maintenance of) phonological, semantic, and visuospatial information. In contrast, the domain-specific model of WM contains separable buffers for phonological and semantic WM. While the domain-specific model predicts distinct white matter correlates of phonological and semantic WM, the multicomponent model of WM does not. We did not observe any overlap between the tracts supporting phonological versus semantic WM in our multiple regression analyses. The multicomponent model cannot account for tracts that are only related to semantic WM performance after controlling for phonological WM performance and vice versa. While the domain-specific model of WM contains a buffer specific to semantic WM, the multicomponent model of WM does not. The episodic buffer in the multicomponent model is conceptualized as a capacity for combining phonological, semantic, and visual representations into a cohesive episodic memory. We would expect that if the tracts related to only semantic WM were the neural basis of the episodic buffer, then they should have an independent relation to phonological WM performance as well. Thus, the evidence of neural correlates distinct to semantic WM or phonological WM is most closely aligned with the domain-specific model of WM.
While this work does address many of the limitations of past work on the white matter correlates of WM, it does have its own unique set of limitations that should be addressed in future work. First, a strength of this work was its large sample size, especially for a neuropsychological investigation, but the sample size was achieved by (1) combining neuroimaging and behavioral data from participants recruited from one institution over the course of 15 years and several updates in scanning technology and protocol and (2) adding to that data collected at a different institution and scanning facility. While some past work has suggested that it is feasible to combine diffusion-weighted data collected across multiple institutions in the analyses [48], more recent work has called that claim into question and suggested ways to mitigate the effects of including data collected via different scanners and/or with different scanning protocols [59]. We would note, however, that when a scanner site is simply included as a covariate in the continuous regression models tested here, all of the previously reported significant effects remain significant (p = 0.0081-0.035).
Conclusions
Here, we have reported the white matter correlates of both phonological and semantic WM in a group of participants with left hemisphere brain damage resulting from a stroke. This is the first report of the white matter correlates associated with semantic WM. Our experimental approach controlled for several factors that have been previously unaccounted for in investigations of the neural correlates of WM, including gray matter damage to tract terminations.

The authors state that the scientific conclusions are unaffected. This correction was approved by the Academic Editor. The original publication has also been updated.
Text Correction
There were errors in the original publication: the dataset that was the basis of the analyses for this paper contained errors for eight out of forty-five participants. Highlighted text indicates an update from the original.
A correction has been made to Results, Sections 3.2-3.7.
Tract Integrity and WM
In all continuous multiple regression models reported here, tract FA was regressed on phonological WM (digit matching), semantic WM (category probe), phonological single-word processing (phonological d'), semantic single-word processing (semantic d'), and the cube root of gray matter damage to the tracts' termination regions. We transformed the measures of percent damage to gray matter regions by taking the cube root because the distribution of gray matter damage was highly negatively skewed. The predicted relationships between left hemisphere tracts and phonological or semantic WM are outlined in Table 4, in terms of their independent contribution in the multiple regression. The pairwise correlations between the left hemisphere white matter tract FA values and the behavioral measures are presented in Table 5. Using the FDR correction for multiple comparisons (Benjamini and Hochberg, 1995), separately, for the pairwise relations to semantic and phonological WM, phonological WM was related to the whole AF, the direct segment of the AF, the posterior segment of the AF, and the ILF. Semantic WM was related
Figure 1. The multicomponent model of WM.
Figure 2. The domain-specific model of WM (the orthographic buffer included in the full version of the model is not pictured here).
Figure 3. White matter tracts of interest overlaid on three-dimensional renderings of the gray matter cortical regions they connect.
Figure A2. Distributions for all WM and single-word processing measures.
Table 1. Number of participants' scans acquired by scanner.
Table 2. Descriptive statistics for the white matter tract FA values.
Table 3. Descriptive statistics for WM and the single word processing measures.
Table 6. Results of the continuous multiple regressions predicting the left hemisphere tract FA.
Table 7. Logistic regression models predicting the left AF and its subsections.
Table 8. Logistic regression model predicting the presence of the left IFOF.
Table 9. Logistic regression model predicting the presence of the left UF. | 2022-12-24T16:10:02.354Z | 2022-12-22T00:00:00.000 | {
"year": 2022,
"sha1": "cd8f845f1649a0f551ac59e6894549d86ef96997",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3425/13/1/19/pdf?version=1671694718",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5d0a200a3e42907e1e2b803344155b989655412f",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
254780813 | pes2o/s2orc | v3-fos-license | Lessons Learned From a Case of Behcet’s Disease Presenting With Fever and Life-Threatening Venous Thromboembolism
Infection mimics pose a challenge in the world of infectious diseases. Fever of unknown origin (FUO) requires careful consideration for a broad range of diagnoses. The answer often lies in a careful history and dedicated clinical examination. A delay in diagnosis can result in greater morbidity for the patient. We present the diagnostic challenges in a patient with an infection mimic, Behcet’s disease (BD), who presented with recurrent venous thromboembolism (VTE) and fever of unknown origin (FUO). We present the case of a 53-year-old male of Irish Caucasian ethnicity who presented with a history of fevers and recurrent VTE at a university hospital in Dublin, Ireland. Past medical history includes schistosomiasis, which was treated following a trip to sub-Saharan Africa. Our patient was previously diagnosed with a provoked deep vein thrombosis (DVT). He went on to experience four subsequent episodes of VTE, including DVT, pulmonary embolism (PE), and cerebral venous sinus thrombosis (CVST) while on different forms of anticoagulation. On each of these occasions, there was a concern for sepsis due to fevers > 38°C and a C-reactive protein (CRP) > 200 mg/L. The infection workup included routine laboratory tests, blood and urine cultures, CT of the abdomen and pelvis (CTAP), echocardiogram, and PET-CT, all of which were unrevealing. However, a focused clinical examination revealed evidence of subtle scrotal and oral ulceration, pustulation, and erythema at several sites in his upper limb following venesection and cannulation. In this context, a diagnosis of Behcet’s disease was considered. A diagnosis of Behcet’s disease can only be confidently made after the exclusion of other potential etiologies. In this case, we had to consider a broad range of infectious (malaria, schistosomiasis, rickettsial disease, and endocarditis) and noninfectious (malignancy, antiphospholipid syndrome (APS), myeloproliferative disorders, and paroxysmal nocturnal hemoglobinuria (PNH)) diseases. A delay in diagnosis comes at the cost of increased morbidity and mortality for the patient. A detailed history and clinical examination are key, in addition to a high index of suspicion. Following the induction of high-dose steroid, our patient is doing very well on maintenance adalimumab. From an anticoagulation perspective, he is warfarinized and has not had any further episodes of VTE.
Introduction
Autoinflammatory diseases can often be difficult to diagnose and rely upon the exclusion of other potential pathologies. A delay in diagnosis can result in greater morbidity for the patient. We present the diagnostic challenges in the case of Behcet's disease (BD) who presented with recurrent venous thromboembolism (VTE) and fever of unknown origin (FUO).
Behcet's disease was first documented by Hippocrates in the fifth century BC [1] and was described in the literature by the Turkish dermatologist Dr. Hulusi Behçet in the 1930s [2]. Since then, we have operated off the diagnostic criteria outlined by the "International Study Group for Behcet's Disease," published in the Lancet in 1990 [2]. However, the incidence of BD in the Caucasian, Western European population is low. The incidence of BD in the UK, whose population shares a similar genetic heritage with Ireland, is 0.64 per 100,000. This compares to countries along the "Silk Road" with a higher incidence, such as Turkey, where the incidence is 421 per 100,000 [3]. As a result, a high index of suspicion is required to make a diagnosis. In addition to the low incidence of BD in the Irish population, it is a very heterogeneous disease with a wide spectrum of presentation [3]. The classical "triad" of genital ulcers, oral ulcers, and ocular disease is not always present. The BD spectrum is broad. It can include arthritis; skin manifestations such as erythema nodosum, purpura, and livedo reticularis; vascular manifestations including venous or arterial thrombosis or aneurysms; and neurological manifestations including cranial neuropathy, sensory neuropathy, and visual disturbance. Fever is a rare feature in the presentation of BD; however, when present, it can point toward BD with vascular involvement [4].
Anticoagulation is the standard of care for systemic VTE. However, VTE in BD is less well-defined [8]. The duration and choice of anticoagulant vary among countries and may be influenced by the prevalence of the disease. This case highlights the use of anticoagulation for a case of BD with severe thrombosis at presentation.
Case Presentation
We present the case of a 53-year-old Irish Caucasian male who presented, in May 2022, with a history of fevers and recurrent VTE to a university hospital in Dublin, Ireland. Past medical history includes schistosomiasis, which was treated following a trip to sub-Saharan Africa. Our patient was previously diagnosed with a deep vein thrombosis (DVT) of the left calf and popliteal vein in March 2020. This was preceded by a long-distance flight and was treated for six months with rivaroxaban, from March to August 2020. He went on to experience four subsequent episodes of unprovoked VTE, which we will describe individually (Table 1).

In September 2020, having discontinued anticoagulation one month prior, our patient experienced left lower limb pain and edema with associated shortness of breath. On this occasion, our patient presented to a local private hospital, and an ultrasound (US) Doppler demonstrated extension of the previously diagnosed DVT into the left femoral vein, and a CT pulmonary angiogram (CTPA) also confirmed a pulmonary embolism (PE).
Following this unprovoked second VTE, he was commenced on enoxaparin at a dose of 1 mg/kg twice daily and was referred to a hemostasis and thrombosis consultant in our hospital. He was subsequently transitioned to rivaroxaban 20 mg for six months, which was dose-reduced to 10 mg with an intended indefinite duration.
In April 2022, our patient presented to an external hospital with a painful swollen right lower limb following a short-duration flight (less than four hours) four weeks prior. A US Doppler confirmed a new right common femoral vein DVT despite reported 100% compliance with anticoagulation. Given a new DVT on 10 mg of rivaroxaban, his anticoagulation was escalated to 175 U/kg tinzaparin. Following further discussion with the hemostasis and thrombosis service, it was decided to recommence rivaroxaban at a dose of 20 mg. The option of transitioning to warfarin was discussed; however, our patient expressed a preference to remain on rivaroxaban. At this stage, the patient mentioned that he suffers from headaches and fevers with each VTE. He underwent testing for antiphospholipid syndrome while on a reduced dose of rivaroxaban. Beta-2-glycoprotein antibodies were within the normal range at 5.1 U/mL (normal range: 0-6.99 U/mL), IgG anticardiolipin antibodies were reported as low positive, and the dilute Russell viper venom test (DRVVT) was high at a ratio of 1.41 (normal: 0-1.26). Anti-Xa while on low-molecular-weight heparin (LMWH) was 0.05 IU/mL. A paroxysmal nocturnal hemoglobinuria (PNH) screen was negative, and the JAK2 V617F mutation was not detected.
In May 2022, the patient presented to our hospital with fevers, shortness of breath, and a swollen left lower limb. A US Doppler demonstrated a new left popliteal DVT, and a subsequent CTPA demonstrated segmental and subsegmental PEs in the right lower lobe ( Figure 1).
Anticoagulation was changed to enoxaparin 1 mg/kg twice daily. In addition, there was concern over cellulitis around the left calf, and intravenous (IV) flucloxacillin was commenced. Ultimately, the plan was to transition to warfarin for indefinite anticoagulation with a target international normalized ratio (INR) of 2.5-3.5 following the completion of antibiotics.
Unfortunately, while taking LMWH at a dose of 1 mg/kg twice daily, he presented for a fourth time, before initiating warfarin, complaining of fever and headache. On this occasion, a CT venogram demonstrated a nonocclusive central venous sinus thrombosis (CVST) (Figure 2).
FIGURE 2: CT venogram demonstrating nonocclusive central venous sinus thrombosis (orange arrow)
CT: computed tomography

Thus far, we have a previously healthy Caucasian Irish male who has presented on several occasions with VTE at different sites including the lower limbs, lungs, and brain. In addition, he is febrile and has persistently elevated inflammatory markers. He has been investigated by the hematology service for secondary causes of VTE without a positive result. The infectious disease and rheumatology services were consulted and drew up a broad differential, which we will discuss below.
However, on focused history, our patient revealed that over the last five years, he had been experiencing recurrent ulceration of his scrotum and oral mucosa. In addition, he had begun to develop erythema and pustulation at the sites of venepuncture and cannulation. This led us to consider a diagnosis of BD of the vascular phenotype.
Treatment
The diagnosis of BD is often delayed due to the challenge of ruling out other potential pathologies. In addition to this, missing an infection can have catastrophic consequences if immunosuppression is initiated.
Following a 10-day course of IV vancomycin and ceftriaxone, corticosteroids and colchicine were initiated to manage the hyperinflammatory state. We commenced prednisolone 60 mg once daily and colchicine 500 µg twice daily. Within 48 hours, our patient's fevers resolved, and his inflammatory markers began to normalize.
Steroids were tapered over 12 weeks, and adalimumab (an anti-tumor necrosis factor (TNF)) was introduced.
The choice of an anti-TNF was based on recent data supporting anti-TNF over more traditional agents such as interferon-based regimes [9].
Outcome and follow-up
Following discharge, our patient was weaned off corticosteroids and transitioned to adalimumab. He is anticoagulated with warfarin with a target INR of 2.5-3.5.
He has not had any further episodes of VTE or fevers and is being followed closely in the rheumatology and hematology outpatient department.
Discussion
There are some important considerations from this case. The first is the differential for recurrent thrombotic episodes, especially those occurring on anticoagulation, and how it changes when the patient presents with a fever. The most common causes of recurrent VTE include malignancy [5], myeloproliferative neoplasms (MPN) [6], antiphospholipid syndrome (APS), and PNH [7].
Other potential causes include antithrombin (AT) deficiency leading to heparin resistance, vasculitides, noncompliance, inappropriate drug choice, and poor absorption.
Considering the choice and doses of anticoagulation received by our patient, compliance was reported at 100% at all times, there was no concern regarding absorption, nor was he receiving any medication that might alter drug metabolism to a clinically significant degree. Only one event occurred while receiving LMWH, and at that time, the patient had a normal AT III level of 1.01 IU/mL (reference range: 0.82-1.18 IU/mL) with an adequate peak anti-Xa (0.41 IU/mL). The activated partial thromboplastin time (aPTT) was slightly prolonged at 37.1 (reference range: 25-36.5) (Table 2). The patient underwent extensive radiological investigation including a PET-CT, which failed to identify any features suggestive of malignancy. A PNH screen was negative, and the JAK2 V617F mutation was not detected. This, in combination with a normal hematocrit and an essentially normal platelet count, meant that a diagnosis of an MPN was felt to be unlikely. Careful consideration was required when deciding on the appropriate anticoagulation regime. We note that our patient initially had recurrent VTE on rivaroxaban and LMWH. Our routine reference range for monitoring of patients receiving 12-hourly LMWH at 1 mg/kg is 0.5-1.0 IU/mL. We think that it is important to highlight, however, that the use of anti-Xa to monitor LMWH has well-known issues [10]. Firstly, there is significant variability in results with different lots of enoxaparin [11]. Secondly, the clinical relevance of these ranges is not well proven, and additionally, different LMWHs accumulate at different rates with differing pharmacokinetics. Our patient had a peak reading of 0.41 IU/mL. This falls below our standard reference range; however, given the clinical history, demonstration of recurrent thrombosis despite at least an above-prophylactic level, and limitations of the test itself, we felt that this provided sufficient evidence of heparin failure and the requirement for an alternative anticoagulant strategy.
As mentioned, our patient did have a single positive lupus anticoagulant (LA) reading with a DRVVT of 1.46 (upper limit of normal (ULN): 1.26). This was taken while receiving LMWH (enoxaparin). While this usually only causes false-positive readings at supra-therapeutic levels, a false positive is still possible in this instance [12]. Patients with BD have also been seen to have positive LA readings, making this a nonspecific test in this case. More importantly, the patient did not have strong positive results for anticardiolipin and beta-2-glycoprotein antibodies. Recurrent thrombosis in patients on anticoagulation is common in APS, however, mostly confined to those with triple-positive disease (positivity to both antibodies and a persistently positive LA result also) [13]. To diagnose APS, a repeat DRVVT is required 12 weeks after the initial result; however, the patient has not been in a position to stop anticoagulation to facilitate this. Given the aggressively pro-thrombotic phenotype displayed by this patient, alongside features consistent with BD, but not APS, we felt, on balance, a diagnosis of BD was most plausible.
While we investigated a possible hematologic cause for recurrent VTE, we worked with our ID and rheumatology colleagues to further investigate a possible cause. From the perspective of potential infective etiologies, we had a broad differential. Given our patient's extensive travel history, both tropical and nontropical causes were included. Routine septic screens including chest X-ray, urine analysis, and blood cultures did not reveal an obvious source.
Tropical investigations took some time to return results but ultimately were unrevealing. These tests included malaria (ovale or vivax), schistosomiasis, and rickettsial disease (Table 3). In addition to this, the CT of the abdomen and pelvis (CTAP) identified potential abscesses of the spleen and kidney, which raised concern over infective endocarditis (IE). However, a PET-CT did not identify any evidence of septic emboli in these organs, and it also excluded IE based on the absence of fluorodeoxyglucose (FDG) avidity in the heart (Figure 3 and Figure 4). The PET-CT also excluded any solid organ malignancy.
At this point, our suspicion was firm that a malignant or infective etiology was not driving this process. Our rheumatology colleagues suggested an autoinflammatory process. This fits with the recurrent fevers and the elevated C-reactive protein (CRP), erythrocyte sedimentation rate (ESR), and ferritin, despite the patient being clinically and hemodynamically stable throughout (Table 4). On focused direct history, our patient revealed that over the preceding decades, he had been experiencing recurrent oral and scrotal ulceration. In addition to this, there was evidence of pustulation and erythema at several sites in his upper limb following venesection and cannulation, which represented a positive pathergy test. Combined with the extensive evidence of VTE, these findings meet the International Study Group (ISG) diagnostic criteria for BD [2].
Behcet's disease is a rare disease among the Caucasian population in Ireland [3]. It requires a high index of suspicion and astute clinical skills to make the diagnosis. Ultimately, the diagnosis, in this case, comes after ruling out other infectious and inflammatory causes. This case had added complexity in having to consider atypical tropical diseases. Only after ruling these out can an effective management strategy be implemented. Thankfully, following the initiation of prednisolone, adalimumab, and warfarin, our patient has not experienced any further morbidity.
Within the differential for the vascular phenotype of BD is Hughes-Stovin syndrome (HSS). Both present with recurrent thrombosis; however, the characteristic features of HSS, pulmonary artery aneurysms, were not present in our case.
The second consideration is the decision to continue anticoagulation. This was made in conjunction with the rheumatology and hematology services. There is a lack of evidence to support this decision, and expert opinion is divided [9]. However, on balance, due to the severity of thrombosis at presentation at multiple sites, it was decided to continue warfarin lifelong with a target INR of 2.5-3.5. However, as the evidence evolves, this is kept under review.
Conclusions
This case highlights the challenges faced in making a diagnosis of BD, especially in the low-incidence setting. Recurrent VTE, accompanied by fever, may point toward an autoinflammatory cause, in this case, BD. However, ultimately, a diagnosis comes after ruling out defects of the coagulation system, malignancy, and infection.
Finally, the evidence for continuing anticoagulation in BD is sparse, but in the case of severe multifocal thrombosis, in an otherwise healthy individual, it may be appropriate.
Additional Information Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2022-12-17T16:16:19.417Z | 2022-12-01T00:00:00.000 | {
"year": 2022,
"sha1": "8dc65e8b8adf7bd8ad387f5b0a6ddfec2f88fdf4",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/126905-lessons-learned-from-a-case-of-behcets-disease-presenting-with-fever-and-life-threatening-venous-thromboembolism.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "877d5b14ba45248bbd4afd37183e2a983c9b7082",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
209385083 | pes2o/s2orc | v3-fos-license | Nontraumatic spontaneous intracerebral hemorrhage: Baseline characteristics and early outcomes
Abstract Background and Purpose Hemorrhagic stroke, particularly nontraumatic spontaneous intracerebral hemorrhage (SICH), is a cerebrovascular condition with unfavorable outcomes. The aims of the present study were to evaluate patients who suffered from SICH and investigate the early outcomes in a single-center study. Methods During a study period of 6 years (2008-2014), 613 consecutive patients (mean age, 72 ± 12.7 years; 51.1% female), who suffered from nontraumatic SICH and were treated at the Department of Neurology at the University Hospital of Schleswig-Holstein, Campus Lübeck, Germany, were included and prospectively analyzed. Results During a mean hospitalization time of 12 days, 148 patients (24.1%) died, 47% of those within the first 2 days and 79% within the first week. The patients who died stayed at the hospital for a shorter time (3 days) than those who survived (p < .001). In the multivariate logistic regression, the following parameters were found to be associated with in-hospital mortality: female sex (OR, 2.0; 95%-CI, 1.2-3.4; p = .009), an NIHSS score > 10 (OR, 10.5; 95%-CI, 5.6-19.5; p < .001), history of hypertension (OR, 0.35; 95%-CI, 0.19-0.64; p = .001), previous oral anticoagulation (OR, 2; 95%-CI, 1.0-3.8; p = .032), and intraventricular extension of hemorrhage (OR, 2.8; 95%-CI, 1.7-4.7; p = .001). At discharge, 192 patients (41.2%) showed favorable outcomes (mRS ≤ 2), whereas the median mRS of patients who survived was 3 (IQR 2-4). A good functional outcome at discharge from the acute hospital was decreased by an age > 70 years (OR, 0.56; 95%-CI, 0.35-0.9; p = .017), an NIHSS score > 10 at admission (OR, 0.07; 95%-CI, 0.04-0.13; p < .001), and the development of pneumonia during hospitalization (OR, 0.35; 95%-CI, 0.2-0.6; p < .001). Conclusion The present study showed that SICH is a serious disease causing high mortality and disability, particularly in the early period after the event.
| INTRODUC TI ON
Hemorrhagic stroke, specifically nontraumatic spontaneous intracerebral hemorrhage (SICH), is a heterogeneous cerebrovascular condition that leads to disability and rapid death (Al-Khaled, Eggers, & Qug, 2014; Broderick et al., 2007; Weimar et al., 2003). It represents the second most common form of stroke, with as many as 4 million patients affected worldwide each year and a median one-month case fatality of 40% (Asch et al., 2010; Ferro, 2006; Kolominsky-Rabas et al., 1998; Sacco et al., 1998; Sadamasa, Yoshida, Narumi, & Chin, 2011). In this way, SICH is characterized as the most lethal disease in the early phase after onset (Hemphill, Farrant, & Neill, 2009). Many survivors remain severely disabled, with only one in four patients having a good outcome (Feigin, Lawes, Bennett, & Anderson, 2003).
In SICH, the bleeding arises from cerebral blood vessels, most commonly as a result of newly diagnosed or preexisting arterial hypertension, as well as amyloid angiopathy (Butcher and Laidlaw, 2003). The basal ganglia (34%), followed by the lobar regions (25%), are the most frequently affected sites in SICH (Hemphill, Bonovich, Besmertis, Manley, and Johnston, 2001).
SICH due to hypertension typically occurs in the basal region (Figure 1), whereas it may atypically occur in other brain regions (Figure 2). Cerebral amyloid angiopathy causes SICH as well as asymptomatic microbleeds in the brain, which can be detected by brain MRI (Figure 3).
Whereas an absolute benefit from surgery has recently been demonstrated in patients with traumatic intracerebral hemorrhage (Gregson et al., 2015), the potential benefit of surgical compared to conservative treatment of SICH still remains controversial; a small but significant advantage was found in cases of SICH without intraventricular extension (Kim, Lee, Koh, & Choi, 2015; Mendelow et al., 2013).
| Study design
Between 2008 and 2014, 613 patients (mean age, 72 ± 12.7 years; female, 51.1%), who were suffering from SICH and were treated at the University Hospital of Schleswig-Holstein, Campus Lübeck, Germany, were included in the present prospective study. We used standardized radiological analysis in characterizing SICH.
The diagnosis of SICH was made by using cranial computed tomography (CCT) and magnetic resonance imaging of brain (cMRI).
Baseline parameters, diagnostic procedures, and therapeutic procedures were retrieved from the patients' medical files (Table 1). The assessment of outcome was performed using the modified Rankin Scale (mRS). All deaths were assumed to represent neurological mortality. Favorable outcomes were defined as an mRS ≤ 2. The patients were evaluated from the time of admission to discharge from the hospital.
All included patients were admitted to the semi-intensive care unit (stroke unit), or to the intensive care unit in cases of altered consciousness and/or when respiratory assistance was required. All patients were treated by physicians with stroke experience as well as by stroke neurologists.
The present study was part of the Stroke Registry at the Department of Neurology. The study (Stroke Registry) was approved by the local ethics committee.
FIGURE 1 Brain CT scan showing a SICH in a 45-year-old patient with newly diagnosed hypertension, who presented with left-sided weakness and a hypertensive crisis.
FIGURE 2 Brain CT scan showing an atypically localized SICH with ventricular extension in a 64-year-old woman who presented with rapidly progressive altered consciousness.
The patients, who were entered in this study, were exclu-
Briefly, the management of SICH was conservative and included control of blood pressure and other vital parameters; if necessary, an intra-arterial measurement was used. The monitoring of vital parameters is essential in the care of hemorrhage patients (Qureshi et al., 2010). Furthermore, patients with respiratory insufficiency due to altered consciousness were intubated and ventilated for airway protection; these patients were treated initially in the intensive care unit and, after weaning, treatment was continued on the stroke unit.
| Statistical analysis
The SPSS software (version 23; IBM) was used to analyze the data.
Values of the continuous variables are reported as means with standard deviations, and scores as medians with interquartile ranges (IQR). Absolute numbers and percentages were used to describe the categorical and nominal variables. A chi-square test was used to assess associations between categorical variables, a Mann-Whitney test was used to compare scores, and a t test was used to compare continuous variables. A multivariate analysis was performed to estimate odds ratios (ORs); all parameters with a significant association in the univariate analysis were entered into the logistic regression. A p-value below .05 was considered significant.
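The authors performed this analysis in SPSS. As a minimal sketch of the same multivariate step in Python with statsmodels, hedged as an illustration only: the file name and binary indicator columns (e.g., female, nihss_gt_10) are hypothetical and not taken from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical registry export with 0/1 indicator columns per predictor.
df = pd.read_csv("sich_registry.csv")

predictors = ["female", "nihss_gt_10", "hypertension_history",
              "oral_anticoagulation", "intraventricular_extension"]
X = sm.add_constant(df[predictors].astype(float))
y = df["in_hospital_death"].astype(int)

model = sm.Logit(y, X).fit(disp=False)

# Exponentiate coefficients and CI bounds to obtain odds ratios with 95% CIs,
# the form in which results are reported in the abstract.
ci = model.conf_int()  # columns 0 (lower) and 1 (upper), on the log-odds scale
or_table = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_low": np.exp(ci[0]),
    "CI_high": np.exp(ci[1]),
    "p": model.pvalues,
})
print(or_table.round(3))
```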
| RESULTS
All patients underwent at least one CCT scan at admission, as well as a control CT scan and/or an MRI scan during hospitalization.
During hospitalization, with a mean overall duration of hospital stay of 12 ± 8 days, 148 patients (24.1%) died; 47% of those died within the first 2 days and 79% within the first week. The mean hospitalization of patients who did not survive was markedly shorter (3 days) than that of those who survived (p < .001).
The baseline parameters and a comparison between patients who survived and those who did not are shown in Table 1. The rate of early death was higher among elderly patients, women, patients with a history of hypertension, and patients with premedication with oral anticoagulants.
Furthermore, a significant association between mortality and localization as well as etiology of bleeding has been established (Table 1).
At discharge, 192 patients (41.2%) showed favorable outcomes (mRS ≤ 2), whereas the median mRS of patients who survived was 3 (IQR 2–4). Patients who showed favorable outcomes tended to be younger and less affected at admission, to have hypercholesterolemia, and to have suffered the bleeding in a lobar region (Table 2).
In the multivariate logistic regression analysis, a good functional outcome at discharge from the acute hospital was made less likely by an age > 70 years (OR, 0.56; 95% CI, 0.35–0.9; p = .017), an NIHSS score > 10 at admission (OR, 0.07; 95% CI, 0.04–0.13; p < .001), and the development of pneumonia during hospitalization (OR, 0.35; 95% CI, 0.2–0.6; p < .001).
| DISCUSSION
Intracerebral hemorrhage is a cerebrovascular disease with poor outcomes. It is associated with a considerably high rate of disability and mortality, especially in the short period following the event, regardless of conservative management or interventional care (Kim et al., 2015; Mendelow et al., 2013; Qureshi et al., 2010; Qureshi & Qureshi, 2018; Trabert & Steiner, 2012).
Comparable to previous study results (Hemphill et al., 2001; Mayer & Rincon, 2005), intracerebral hemorrhage appears to be very lethal in the early period after onset. In the group that did not survive the hemorrhage, we noted that four out of five patients died within the first week. However, it is important to note that patients who died in the hospital were older and much more severely neurologically impaired at the time of admission than those who survived. In addition, mortality was associated with gender, consciousness level, history of hypertension, and anticoagulation, as well as with the localization of bleeding. Similar findings were seen for survival with good outcomes.
FIGURE 3 MRI scan showing cerebral amyloid angiopathy, with several microbleeds in the infratentorial as well as extratentorial regions, in a 67-year-old patient who presented with increasing cognitive impairment over the last year.
Conservative treatment focuses on regulating and controlling blood pressure and on the treatment of hypertension. A reduction in systolic blood pressure to values between 130 and 150 mm Hg, particularly in the early phase after symptom onset, could play a role in reducing the expansion of intracerebral bleeding (Qureshi et al., 2010; Qureshi & Qureshi, 2018).
Conversely, in the logistic regression analysis we found that previous hypertension was associated with survival of the bleeding. This could be related to the effectiveness of the preexisting antihypertensive drugs in preventing blood pressure above 180 mm Hg and bleeding expansion.
TABLE 1 Baseline characteristics and comparison between survival and death. Abbreviations: ASA, acetylsalicylic acid; SD, standard deviation. a Significant after Bonferroni correction.
A recent study (Bernardo, Rebordao, Machado, Salgado, & Pinto, 2019) showed that among patients aged between 18 and 65 years, in-hospital mortality was remarkably lower (14.9%) than that in our study, with mean ages of 70 and 77 years among survivors and patients who died, respectively.
However, age is a non-modifiable predictor of prognosis, in contrast to other factors such as pneumonia, which leads to a worsening of the prognosis (Lindner et al., 2019).
In our study, a good functional outcome was significantly less likely in patients who were older (>70 years) or severely affected at admission (NIHSS score > 10), as well as in cases where pneumonia developed during hospitalization.
Interestingly, hypercholesterolemia was found to be positively associated with the prognosis, in particular with favorable functional outcomes, whereas preexisting statin use showed no association with mortality or disability in patients with SICH.
Similar findings were reported in a large cohort study, which showed that a low cholesterol level was associated with poor functional outcome and death at 3 months after the event (Chen et al., 2017).
A limitation of our study is that it does not include long-term follow-up. Despite all efforts, the prognosis of SICH is still devastating, and it is associated with poorer outcomes than ischemic stroke.
In summary, SICH is a lethal disease regardless of management and maximal care. Survival is related to factors that are, in whole or in part, uncontrollable.
DISCLOSURE
The data were presented at the European Stroke Organization Conference 2018. The abstract was published in the European Stroke Journal in 2018.
CONFLICT OF INTEREST
On behalf of all authors, the corresponding author declares that no potential conflict of interest with respect to the research, authorship, and/or publication of this article is present.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request. | 2019-12-17T20:10:12.088Z | 2019-12-15T00:00:00.000 | {
"year": 2019,
"sha1": "9ebdc13ea7515c15819a9e227983187a83c9ce05",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/brb3.1512",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "45e0495edc344c4a9823bc160ffbcbe606bb124a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
245185864 | pes2o/s2orc | v3-fos-license | Gene Expression Profile in Primary Tumor Is Associated with Brain-Tropism of Metastasis from Lung Adenocarcinoma
Lung adenocarcinoma has a strong propensity to metastasize to the brain. The brain metastases are difficult to treat and can cause significant morbidity and mortality. Identifying patients with increased risk of developing brain metastasis can assist medical decision-making, facilitating a closer surveillance or justifying a preventive treatment. We analyzed 27 lung adenocarcinoma patients who received a primary lung tumor resection and developed metastases within 5 years after the surgery. Among these patients, 16 developed brain metastases and 11 developed non-brain metastases only. We performed targeted DNA sequencing, RNA sequencing and immunohistochemistry to characterize the difference between the primary tumors. We also compared our findings to the published data of brain-tropic and non-brain-tropic lung adenocarcinoma cell lines. The results demonstrated that the targeted tumor DNA sequencing did not reveal a significant difference between the groups, but the RNA sequencing identified 390 differentially expressed genes. A gene expression signature including CDKN2A could identify 100% of brain-metastasizing tumors with a 91% specificity. However, when compared to the differentially expressed genes between brain-tropic and non-brain-tropic lung cancer cell lines, a different set of genes was shared between the patient data and the cell line data, which include many genes implicated in the cancer-glia/neuron interaction. Our findings indicate that it is possible to identify lung adenocarcinoma patients at the highest risk for brain metastasis by analyzing the primary tumor. Further investigation is required to elucidate the mechanism behind these associations and to identify potential treatment targets.
Introduction
Lung cancer is the world-leading cause of cancer-related death [1], and lung adenocarcinoma has recently surpassed squamous cell carcinoma as the most common histology type [2]. Despite efforts in prevention, screening and treatment, many lung cancer patients still die of the disease, mostly because of distant metastasis. Among the metastatic sites, metastasis to the central nervous system, mainly the brain, is a major problem in patient care. Lung cancer, especially lung adenocarcinoma, has a strong propensity to metastasize to the brain. About 15% of patients already have brain metastasis at the time of the initial diagnosis [3]; more than 20% of all lung adenocarcinoma patients develop brain metastasis along their disease courses [4]. Of all cancer metastases to the brain, lung adenocarcinoma is the most common primary tumor, constituting 37% of all the cases [3]. The brain metastases can cause neurological deficits and increased intracranial pressure, resulting in significant morbidity and mortality. However, the current clinical practice has limited tools for the early detection and treatment of brain metastasis [5]. Because of the cost and radiation exposure related to brain imaging modalities, lung adenocarcinoma patients often do not receive regular brain imaging examinations until they develop symptoms and signs suspicious of brain metastasis. By this time point, multiple brain metastasis foci may have already developed, sometimes to a significant size, and surgical resection or stereotactic radiosurgery may not be feasible. Whole-brain irradiation and systemic therapy may be the patient's only choices, but the irradiation may cause a significant cognitive function decline, and the chemotherapeutic agents and targeted therapies for driver mutations (such as tyrosine kinase inhibitors) invariably encounter the problem of tumor resistance. These treatments can control the brain metastasis temporarily at best, and most patients eventually die of disease progression.
One possible way to improve the management of lung adenocarcinoma-derived brain metastasis is to identify patients who are at the highest risk of developing brain metastasis. If such patients can be identified, implementing a regular brain imaging schedule may be justified, and the metastatic disease may be detected at an earlier time point to allow for a more effective treatment. A preventive treatment, either with irradiation or pharmaceutical agents, may also be considered for this selected group. To achieve this goal, several possible approaches may be taken. Many studies attempted to investigate the mechanism of lung adenocarcinoma brain metastasis by comparing the same patient's primary lung tumor and a tumor from the brain metastatic site [6][7][8][9]. The rationale behind such an approach is that the "brain-tropic" clone of cells may be a minor clone in the primary tumor, which should be enriched in the brain site, and this phenomenon may allow us to identify genes and pathways important for this process. Indeed, studies by this method showed that MYC, YAP1, MMP13 and other genes may contribute to the development of brain metastasis, and these may be potential treatment targets [6]. However, the information gained from this approach may not be useful for a risk stratification of patients before brain metastasis occurs, since detecting the minor clone in the primary tumor may be difficult. Another possibility is that some lung adenocarcinomas may have an inherently higher likelihood of metastasizing to the brain, either because of specific driver oncogenes or because of the tumor-host interaction. In this situation, the genotype or phenotype associated with the brain tropism should be present in both the entire primary tumor and the metastatic site, and a prediction of the brain metastasis by analyzing the primary tumor may be more feasible in this kind of situation. Indeed, studies have found genes that are altered in this manner [6], indicating that at least some brain metastases develop in this fashion. It is this group of patients that is the focus of our current study. We further hypothesized that, instead of comparing patients with brain metastasis to lung adenocarcinoma patients in general, comparing patients with brain metastasis to patients with non-brain metastasis may help us identify features specifically related to brain-tropism. Since both groups of patients have metastatic diseases, any difference remaining may be more likely related to the brain-metastasizing mechanisms.
In order to address the unmet clinical need and to test our hypothesis, we retrospectively analyzed lung adenocarcinoma patients who received a surgical primary tumor resection and later developed brain or non-brain metastasis within 5 years in a single medical center. We first performed a targeted next-generation sequencing of the tumors to investigate their genetic composition. We also performed a comprehensive transcriptome analysis of the primary tumor tissue by RNA sequencing (RNA-seq) to identify differentially expressed genes (DE genes) between the two groups. Based on the difference between the groups, we proposed algorithms to segregate lung adenocarcinoma patients into the high risk/low risk categories for brain metastasis. We further compared our patient study results with the difference found in the study of brain-tropic and non-brain-tropic lung adenocarcinoma cell lines in animal models to look for common mechanisms between the two systems.
Results
Basic Clinical and Pathological Characteristics
The basic characteristics of the patients are summarized in Table 1. A total of 16 patients who developed brain metastasis within 5 years after a surgical resection of the primary lung adenocarcinoma were identified, while 11 patients developed only non-brain metastasis in the same time window. These two groups of patients had a similar age, primary tumor size and experience of adjuvant chemotherapy. Of note, a larger proportion of patients with brain metastasis were female (male to female ratio = 6:10), while more patients with non-brain metastasis were male than female (male to female ratio = 8:3). On the contrary, fewer patients with brain metastasis had a smoking history compared to those with non-brain metastasis (43.8% vs. 63.6%). Regarding the pathological features of their diseases, the predominant growth pattern in the primary tumors was mostly acinar in both groups. Regarding the growth patterns traditionally considered of high risk for metastasis (micropapillary and solid), 50% of the brain-metastasizing tumors contained predominantly either one of these two patterns, compared to 45.5% of the non-brain-metastasizing tumors, although micropapillary predominance was more common in the brain-metastasizing group. The distribution of the T stage and the N stage at the time of surgery was similar between the two groups, except that the brain-metastasizing group had more N2 cases (31.3% vs. 27.3%). Overall, some difference was observed in sex ratio, smoking history, frequency of histological micropapillary predominance and N2 stage, but none of these differences was of sufficient magnitude to allow for its use as clinical guidance for brain metastasis risk stratification, and the differences were all statistically non-significant (p > 0.05). The actual timeline of the brain/non-brain metastasis occurrence and the follow-up length for each individual case are shown in Figure S1.
No Significant Genomic Difference Was Identified between Brain-Metastasizing and Non-Brain-Metastasizing Lung Adenocarcinomas by Targeted Next-Generation Sequencing
We compared the genomic composition of the primary lung tumors of the two groups of patients with the FoundationOne CDx targeted DNA sequencing panel (Foundation Medicine, Cambridge, MA, USA) (Figure 1). In our patient population, we found that an EGFR gene alteration was present in 68.75% of the patients with brain metastasis and 54.55% of those with non-brain metastasis. Among those with an EGFR alteration, the two most common alterations were equally found in both groups (five cases each for the L858R mutation and the exon 19 deletion in the brain metastasis group; two cases each in the non-brain metastasis group). The other, less common EGFR alterations were observed in single patients. In summary, there was no significant correlation between EGFR gene alteration and brain metastasis (Fisher's exact test, p = 0.49). Chromosome rearrangements involving ALK and ROS1 were found in only one patient in the brain metastasis group (ALK-EML4) and one in the non-brain metastasis group (CD74-ROS1). Variants of K-RAS and BRAF mutations also occurred in single patients in each group. We did not find any other single genomic alteration that was significantly different between the two groups; other than the EGFR alterations mentioned above, no other genetic alteration was found in more than three cases (Table S1). None of the sequenced cases showed microsatellite instability (MSI). As for the tumor mutation burden, the average mutations per megabase were 4.59 in the brain-metastasizing group and 5.30 in the non-brain-metastasizing group; the difference was not significant using the Wilcoxon rank sum test (p = 0.7221).
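As a minimal illustration of that last comparison, the two-sided Wilcoxon rank sum test for independent samples (implemented in SciPy as the Mann-Whitney U test) can be run as below; the TMB values are placeholders, since the paper reports only the group means (4.59 and 5.30 mutations/Mb) and the p value (0.7221).

```python
from scipy.stats import mannwhitneyu

# Illustrative tumor-mutation-burden values (mutations per megabase), not the
# study's actual per-patient data.
tmb_brain = [2.5, 3.8, 4.6, 5.0, 6.1, 5.5]
tmb_non_brain = [3.0, 4.2, 5.3, 6.4, 7.1]

stat, p = mannwhitneyu(tmb_brain, tmb_non_brain, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
```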
The mRNA Expression Profile, including CDKN2A, Is Significantly Different between Brain-Metastasizing and Non-Brain-Metastasizing Lung Adenocarcinomas
We next compared the transcriptomes of the two groups of primary tumors via RNA-seq of fresh-frozen tumor tissue (Figure 2). A volcano plot (Figure 2a) showed the differentially expressed genes (DE genes) with an at least two-fold expression difference and a p value less than 0.05, as determined by the DESeq2 program. A total of 390 DE genes were identified. A Gene Ontology (GO) enrichment analysis (Figure 2b, Table S2) showed multiple biological processes varying between the two groups of tumors, notably including "extracellular matrix organization", which may be related to their metastasis behavior. Interestingly, biological processes related to the nervous system, such as synaptic transmission and assembly, are also highlighted by the analysis, while a Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analysis also showed that neuroactive ligand-receptor interaction-related genes are differentially expressed between the groups (Figure 2c). The Gene Set Enrichment Analysis (GSEA) based on GO (Figure 2d, Table S2) and KEGG (Figure 2e, Table S2) also pointed out that genes related to cell adhesion and the extracellular matrix were differentially expressed. When Receiver Operating Characteristic (ROC) curves were used to analyze the ability of individual genes to correctly segregate cases into brain-metastasizing and non-brain-metastasizing, the top-performing gene was CDKN2A, with an area under the curve (AUC) of 0.86. Using the expression of this single gene in the primary tumor could correctly segregate cases into brain-metastasizing and non-brain-metastasizing with a sensitivity of 93.8%, a specificity of 81.8%, a positive predictive value (PPV) of 88.2% and a negative predictive value (NPV) of 90% (Table S3). A dot plot (Figure 2f) showed that the brain-metastasizing tumors demonstrated a range of CDKN2A expression, while most of the non-brain-metastasizing tumors showed a low CDKN2A expression. The difference was statistically significant (p = 0.002). Based on the gene list ranked by AUC, a stepwise method was used to build a 17-gene brain-metastasizing signature (Figure 2g, Table S3). With the optimal threshold of −1.89 determined by the ROC curve (Figure 2h), the brain-metastasizing signature was shown to identify 100% of brain-metastasizing tumors with a 91% specificity (Figure 2i). A leave-one-out cross validation was further applied, demonstrating that the signature had a 60% precision and a 75% recall. In addition, the expression of ARL9 was significantly lower in brain-metastasizing tumors than in non-brain-metastasizing tumors (Figure 2j). The significance of this gene will be explained later in the article.
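The per-gene ROC screening described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the expression file, its layout (genes × samples, log2 CPM) and the sample ordering are assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

# Hypothetical genes x samples matrix of log2 CPM values for the 390 DE genes.
expr = pd.read_csv("de_genes_log2cpm.csv", index_col=0)
# Assumed ordering: 16 brain-metastasizing cases first, then 11 non-brain.
labels = np.array([1] * 16 + [0] * 11)

auc = {}
for gene, values in expr.iterrows():
    a = roc_auc_score(labels, values.values)
    auc[gene] = max(a, 1 - a)  # direction-agnostic separating power

ranked = pd.Series(auc).sort_values(ascending=False)
print(ranked.head())  # CDKN2A reached AUC = 0.86 in the paper
```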
To assess the RNA expression difference at the protein level, we performed immunohistochemistry (IHC) for p16, the protein product of the CDKN2A gene, on a tissue microarray constructed from the patients' archived formalin-fixed, paraffin-embedded (FFPE) lung tumor tissue (Figure 3a,b). We specifically chose this target because, among the protein products of the genes in our list of high-AUC candidates, p16 immunohistochemistry is the most widely performed in pathology laboratories. However, the correlation between the tumor CDKN2A mRNA expression level, the p16-positive cell percentage and the p16 immunohistochemistry H-score was only moderate (Figure 3c,d). The Pearson correlation coefficient between the p16-positive cell percentage and the CDKN2A expression was 0.47 (p = 0.014), while the correlation coefficient between the p16 immunohistochemistry H-score and the CDKN2A expression was 0.32 (p = 0.099). We noticed a few cases with very diffuse (100%) and strong p16 immunostaining but low mRNA expression (CPM < 12 in RNA-seq). These include two cases in the brain-metastasizing group and two cases in the non-brain-metastasizing group. Other than these cases, we found that the rest of the brain-metastasizing tumors were more frequently positive for p16 staining, with variable positive percentages and intensity, while the non-brain-metastasizing tumors showed limited or no p16 staining. Nevertheless, the overall p16 staining was not significantly different between the two groups, either looking at the p16-positive cell percentage or the H-score (p = 0.21 and 0.26, respectively).
Figure 2. Comparing the gene expression profile of brain-metastasizing and non-brain-metastasizing lung adenocarcinomas using RNA-seq. The volcano plot (panel (a)) showed differentially expressed genes (DE genes) with at least a two-fold expression difference and p < 0.05 between the two groups by DESeq2; a total of 390 genes were identified. The GO enrichment analysis (panel (b)) and the KEGG pathway enrichment analysis (panel (c)) of the DE genes highlighted multiple groups of genes and pathways, notably the cellular interaction with the extracellular matrix. The visualization of enriched GO terms or KEGG pathways was presented with clusterProfiler [10], and only the top 10 enriched GO terms are shown. The GSEA with GO (panel (d)) and KEGG (panel (e)) also found an enrichment of several similar gene sets, which were visualized by EnrichmentMap [11]. However, when the ability of the individual DE genes to segregate the two groups of tumors was analyzed, the top gene with the greatest AUC value in the ROC analysis was CDKN2A. The dot plot (panel (f)) of CDKN2A expression showed that while brain-metastasizing tumors have a range of expression levels, most non-brain-metastasizing tumors express very little of this gene (p = 0.0020, Mann-Whitney U test). A 17-gene brain-metastasizing signature (panel (g)) was identified for classification. The optimal threshold was determined as −1.89, as indicated in the ROC curve (panel (h)). The dot plot (panel (i)) showed that the brain-metastasizing signature was significantly higher in the brain-metastasizing group (p = 2.6 × 10−5, Mann-Whitney U test); the red line indicates the optimal threshold for classification. The dot plot (panel (j)) of ARL9 expression showed that the expression was significantly lower in brain-metastasizing tumors (p = 0.0055, Mann-Whitney U test). B: brain-metastasizing, NB: non-brain-metastasizing.
Figure 3. p16 immunohistochemistry on patient tumor tissue microarrays (panels (a,b)). The correlation with CDKN2A mRNA expression is significant for the p16-positive percentage (panel (c)), but the correlation is not significant for the p16 staining H-score (panel (d)). Note that 4 cases deviating from the correlation form a group and share the features of low CDKN2A RNA expression and a high p16-positive percentage and score (red circle). Of these cases, 2 belong to the brain-metastasizing group and 2 belong to the non-brain-metastasizing group. Box plots of p16-positive percentage (panel (e)) and p16 H-score (panel (f)) show that the brain-metastasizing cases tend to have variable staining of p16, some reaching high levels, while non-brain-metastasizing cases tend to have low p16 staining. However, the difference was not clear-cut nor statistically significant (p = 0.21 for the percentage and 0.26 for the H-score, Mann-Whitney U test). Scale bar: 100 micrometers. B: brain-metastasizing, NB: non-brain-metastasizing.
Comparing the Gene Expression Pattern between Brain-Metastasizing Patient Tumors and Brain-Tropic Lung Adenocarcinoma Cell Lines Showed a Small Set of Shared Differentially Expressed Genes
We hypothesized that lung adenocarcinoma cell lines with a higher propensity to metastasize to the brain may share common gene expression features with the lung adenocarcinoma patients' primary tumors that produced brain metastases. We examined the recently published MetMap [12] database to look for lung adenocarcinoma cell lines with different metastasis tropism. In this database, various cell lines were genetically barcoded and injected intracardially into immunodeficient mice, then traced in different organs using single-cell sequencing technology. Among the tested cell lines, 11 were derived from primary lung adenocarcinoma tumors with metastasis potential, and five of them were determined to have a higher brain metastasis potential (Figure 4a). We retrieved the gene expression profiles of these 11 cell lines from the Cancer Cell Line Encyclopedia (CCLE) database [13] and compared those with a higher brain metastasis potential to those with a lower potential. We found 1079 genes differentially expressed between the two groups (Figure 4b). The GO enrichment and KEGG pathway enrichment analysis results are shown in Figure 4c,d. Interestingly, multiple highlighted biological processes overlapped with those found in our patient cohort analysis. In the GO enrichment analysis, "signal release", "modulation of chemical synaptic transmission", "regulation of trans-synaptic signaling", "extracellular matrix organization", "extracellular structure organization" and "extracellular encapsulating structure organization" were also enriched in our patient cohort analysis and appear to be related to the nervous system or cell adhesion. The overlapping results in the KEGG pathway analysis include "complement and coagulation cascades" and "Staphylococcus aureus infection", which may also contribute to brain metastasis (see Discussion below). We further compared individual genes on the cell line DE gene list with the DE gene list derived from our patient cohort. We found 28 genes that were differentially expressed both between the brain-tropic/non-brain-tropic cell lines and between the brain-metastasizing/non-brain-metastasizing patient tumors, with the difference in the same direction (e.g., higher in the brain-tropic cell lines and higher in the brain-metastasizing patient tumors) (Figure 4e). Noticeably, only one gene in the patient cohort-derived brain-metastasizing signature, ARL9, was included in this 28-gene set (Figure 2j). In fact, the expression of classical immune-related genes, such as CD3 (hallmark of T lymphocytes) and CD20 (hallmark of B lymphocytes), was detected in our patient cohort (average CPM: CD20 14.99, CD3D 35.59, CD3E 30.58, CD3G 15.40) but not in the cell line experiment (average CPM: CD20 0.16, CD3D 0.04, CD3E 0.07, CD3G 0.01), highlighting the absence of a role of the immune system in the cell line experiment. This reflects the fundamental difference between patient tumors and cancer cell line behavior in animal models; yet, the 28 differentially expressed genes shared between these two very different systems may warrant further study because they may be related to fundamental principles of lung cancer brain metastasis.
Figure 4. (a) Among the cell lines tested in the MetMap project, 11 lung adenocarcinoma lines derived from primary tumors were found to have substantial metastatic potential. Five of these 11 were found to have a higher brain metastasis potential, while 6 were considered to have a low brain metastasis potential. (b) Analysis of cell line RNA-seq data from the CCLE database showed that the brain-tropic and non-brain-tropic cell lines have 1079 differentially expressed genes with an at least 2-fold expression difference and a p value lower than 0.05. The GO enrichment analysis (c) and the KEGG pathway enrichment analysis (d) showed multiple differences between the two groups of cell lines; the representative GO terms or KEGG pathways that were also identified in our patient cohort analysis are highlighted in red. (e) Twenty-eight genes were found to be differentially expressed in the same direction in both the cell line analysis and the patient cohort analysis.
Discussion
We proposed an algorithm to stratify lung adenocarcinoma patients into those with high risk for brain metastasis development and those with low risk, potentially useful for guiding the clinical management of patients receiving curative primary lung tumor resection. If the algorithm can be verified in a larger, statistically powered cohort in a prospective study, at the detection of the first metastasis, if not in the brain, the patient's primary tumor may be analyzed according to our algorithm, and the patient's brain metastasis risk assessed. If the risk is high, then the patient may begin to receive regular brain imaging even without neurological signs and symptoms, for the purpose of early detection. Preventive treatment may also be considered, although the risk and benefit of such treatments may require further studies to confirm. For neurologically asymptomatic patients who received brain imaging either during re-staging, because of a non-brain metastasis, or for surveillance only, sometimes small, equivocal lesions will be detected. Our algorithm may also provide the clinician and patient with more risk-stratification information in terms of how to manage such image findings. In a broader sense, any lung adenocarcinoma patient with distant metastasis may be analyzed for their risk of brain metastasis. However, whether our findings still hold true in this population may require further confirmation, and it is of interest to know if needle biopsies of the primary tumor or even a non-brain metastatic site can be used for this purpose.
Among the genes included in our prediction model, CDKN2A is the most well-known for its role in tumor development. However, unlike a previous report showing that CDKN2A mutation was associated with brain metastasis [6], we found that its over-expression is. Although many previous studies have characterized the phenomenon of CDKN2A/p16 loss in lung adenocarcinoma and its relationship with a poor prognosis [14][15][16], many studies also reported that CDKN2A/p16 expression is not related to the prognosis [17][18][19], or even that an over-expression is related to a poor prognosis [20]. Indeed, the role of CDKN2A/p16 in the formation of brain metastasis by lung adenocarcinoma has rarely been specifically studied. One report showed that metastatic adenocarcinoma cells from the brain site express more p16 than the primary lung tumor [21]. To our knowledge, our study is the first to demonstrate a relationship between CDKN2A expression and the brain tropism of metastasis. The difference between our findings and the previous report [6] may be attributed to the different patient populations studied; in our cohort, a high proportion (63%) of patients have EGFR gene alterations, which is common in East Asian lung adenocarcinoma patients in general but uncommon in Western countries. As for the mechanism whereby CDKN2A expression contributes to brain metastasis, it is conspicuous that the traditional genes and pathways related to CDKN2A function, i.e., cell-cycle-related genes and pathways, are not significantly differentially expressed between brain-metastasizing and non-brain-metastasizing tumors in this study. A possible explanation is that the CDKN2A expression difference may indicate a compensatory mechanism to various cell cycle dysregulations (e.g., responding to RB loss or CDK4/CDK6 gene amplification), and that its function in brain metastasis lies in non-cell-cycle regulatory roles.
One study on head and neck squamous cell carcinoma showed that p16 expression can stimulate lymphangiogenesis but inhibit angiogenesis, which may correlate with the strong tendency of p16-positive head and neck squamous cell carcinoma to spread through the lymphatic system [22]. However, such a mechanism cannot explain the brain metastasis behavior of lung adenocarcinoma, which most likely occurs via the hematogenous route. In a mouse non-small-cell lung-cancer model, the inhibition of CDK4/6, the downstream target of p16, resulted in increased CD4 and CD8 T cell infiltration in the tumor [23]. It is now known that adaptive immune cells influence tumor angiogenesis and metastasis behavior [24]. Inflammation-associated angiogenesis may contribute to the establishment of metastasis specifically in the brain's microenvironment, which is reported to be the most inefficient and therefore crucial step in brain metastasis establishment [25]. Further studies are required to elucidate the mechanism behind the association we discovered.
The regulation of CDKN2A/p16 expression in cancer cells is complex [26]. Its loss is often ascribed to the deletion of the gene or the methylation of its promoter, but its over-expression is less understood. The cellular response to stress or other oncogenic environmental factors may drive its expression, and its normal function of inhibiting cell proliferation is negated by other mechanisms. In lung cancer, smoking has been linked to p16 over-expression [27]. Some studies reported the detection of human papilloma virus, a known cause of p16 over-expression, in lung cancer [28][29][30], while others did not [31,32]. In addition, we also noted in our study a group of patients with a low CDKN2A RNA level but high p16 immunohistochemistry staining. The post-translational regulation of p16 is not very well understood. The protein is generally considered short-lived and rapidly degraded by the proteasome in minutes to hours [26]. The interaction between p16 and proteasome activator REGγ has been shown to be required for its degradation [33]. Whether such interactions were disrupted in our cases with discrepant CDKN2A RNA-p16 protein levels requires further investigation. Another pathway of p16 degradation is through autophagy [34]. We found that in three of the four cases with low CDKN2A mRNA expression but strong p16 protein staining, the tumor harbors either PIK3CA mutation, PIK3CB amplification or loss of PTEN gene (Table S2, case B6, NB7, NB8). These genomic alterations can potentially increase the activity of the PI3K signal transduction pathway, which is known to be able to suppress autophagy [35]. PIK3CA, PIK3CB or PTEN alteration was not observed in cases without the CDKN2A/p16 discrepancy. The correlation between the PI3K pathway, autophagy and p16 requires further study to clarify.
The analysis of brain-tropic vs. non-brain-tropic lung adenocarcinoma cell lines based on their behavior in immunodeficient mice demonstrated a different gene expression pattern between the two groups, yet not many of these differentially expressed genes were found in our analysis of patient tumors. We think this is because the patient tumors and the cell line/mouse model systems have many important differences, notably the absence of immune surveillance in the cell line/mouse model. A significant limitation of our study is the relatively small number of patients studied, and a lack of a testing cohort to verify the brain metastasis-related gene expression signature we identified, a role that the comparison with the cancer cell line data can only partially fill. However, despite these differences, we still identified 28 genes that were differentially expressed in the same manner in both systems, many of which were related to neurological processes. The GO enrichment analysis also found that genes related to synaptic transmission and signaling were enriched among the differentially expressed genes in both the patient cohort data and the cell line data. It is known that cancer cells can interact with cells in the central nervous system, such as neurons and glia cells, to facilitate the establishment of brain metastasis [29]. One gene, DSCAM, is more highly expressed in both the brain-metastasizing patient tumors and brain-tropic cell lines in our analyses. This gene encodes a cell adhesion molecule involved in glutamate synapse formation [36]. It has been reported in breast cancer that cancer cells can mimic the reciprocal relationship between astrocytes and neurons, metabolize glutamate to GABA and promote tumor cell proliferation [37]. On the contrary, our analysis found that both the brain-metastasizing patient tumors and brain-tropic cell lines express less mRNA of PLAT than their non-brain-metastasizing/tropic counterparts. PLAT encodes a tissue-type plasminogen activator, and it has been shown that its activation target, plasmin, can inhibit brain metastasis by releasing FasL from astrocytes to promote cancer cell death, as well as by inactivating the adhesion molecule L1CAM, which is important for cancer spreading [38]. These findings demonstrate that the cancer-glia/neuron interaction may play a fundamental role in lung cancer brain metastasis development, which transcends different species such as mouse and man.
In summary, it is possible to identify lung adenocarcinoma patients with a high risk of brain metastasis by analyzing the primary tumor. Our current study is limited by its relatively small sample size and its retrospective nature. Our RNA analysis was performed with fresh-frozen tissue obtained during primary tumor surgery; whether archived tissue can generate similar results is not known. A prospective study with larger patient numbers using FFPE tissue is required to validate these findings and to prove their clinical utility. An animal experiment comparing brain-tropic and non-brain-tropic metastatic lung adenocarcinoma in an immune-competent environment using genetically engineered models [39] is also required to validate our findings and further dissect the biological mechanisms. Therapies targeting the p16/CDK/Rb pathway may be evaluated for their role in the prevention or treatment of brain metastasis.
Materials and Methods
Patient Selection
We retrospectively enrolled patients who were at least 20 years old and received surgery for lung adenocarcinoma at Taipei Veterans General Hospital from 2007 to 2012. The inclusion and exclusion criteria are: (1) The patient received a primary lung tumor resection during this period, either by lobectomy or wedge resection. During surgery, the tumor was judged by the surgeon to be of sufficient size to allow the direct freezing of a portion of tumor specimen in liquid nitrogen. (2) The pathological diagnosis of the primary lung tumor was a pure adenocarcinoma of lung origin, with no squamous component, small cell component, mucinous phenotype or other special histology types. (3) The patient did not have another malignancy diagnosed from 5 years before to 5 years after the lung tumor resection date. (4) The patient did not receive neoadjuvant therapy before surgery (adjuvant therapy was allowed). (5) The patient had clinically or pathologically documented distant metastasis detected within 5 years after the surgery. Patients with only lung-to-lung metastasis were excluded because of the possible confounding factor of multiple primary lung carcinoma. Similarly, patients with multiple lung tumors at the time of surgery, in whom the primary tumor cannot be clearly determined by a clinical or pathological examination, were also excluded. Patients with only pleural metastasis were also excluded, considering the possible route difference (direct seeding versus hematogenous spreading) between pleural metastasis and other distal organ metastasis. (6) Follow up period: patients who developed brain metastasis within 5 years were all included, regardless of whether they had metastasis to another organ. Those who developed only non-brain metastasis were included only if the patients had at least 2.5 years of clinical follow-up after the surgery, or if the patient died within 5 years. This study was approved by the Institutional Review Board (IRB) of Taipei Veterans General Hospital (ID No. 2016-09-031AC) in accordance with the Declaration of Helsinki. The informed consent requirement was waived.
Targeted DNA Next-Generation Sequencing to Detect Genomic Alterations
Formalin-fixed, paraffin-embedded primary lung tumor sections from the patients were sent to Foundation Medicine (MA, USA) for targeted DNA sequencing using the FoundationOne CDx panel, which includes 324 known cancer-related genes for substitution, insertion/deletion, copy number variations and rearrangements. Microsatellite stability and tumor mutation burden were also assessed. The sample preparation and analysis process were performed according to the Foundation Medicine protocol.
Transcriptome Analysis and Identification of Differentially Expressed Genes
Total RNA was extracted from lung tumor tissue fragments (approximately 0.5 × 0.5 × 0.5 cm) preserved in liquid nitrogen at the time of surgery. The extraction was performed using a QIAGEN RNeasy Mini Kit (QIAGEN, Germantown, MD, USA). The cDNA library was built from the RNA with the Illumina TruSeq RNA Exome Kit (Illumina, San Diego, CA, USA). 150-bp paired-end sequencing, with 50 million reads per sample, was performed on the Illumina HiSeq 4000 platform.
The raw sequencing data were aligned to the reference human genome (GRCh38) using the STAR software (version 2.7.2a) [40]. The reads mapped to each gene were enumerated using HTSeq (version 0.11.1) [41]. After low-count filtering by edgeR [42], the read counts of protein-coding genes were fed into DESeq2 [43] to determine the differentially expressed genes between brain-metastasizing and non-brain-metastasizing tumors. Meanwhile, the CPM (counts per million) or log2 CPM value was calculated for each DE gene. The list of DE genes was subjected to a GO (Gene Ontology) enrichment analysis and a KEGG (Kyoto Encyclopedia of Genes and Genomes) pathway enrichment analysis by clusterProfiler [10]. A Gene Set Enrichment Analysis (GSEA) [44] was also applied to investigate the enriched functions/pathways. The receiver operating characteristic (ROC) curve of each individual DE gene for its ability to segregate cases into brain-metastasizing and non-brain-metastasizing was plotted, and the DE genes with the top area under the curve (AUC) values were identified. Additionally, a stepwise selection method based on a principal component analysis (PCA) was proposed to identify the optimal gene set for classifying brain-metastasizing samples. Specifically, following the gene list ranked by AUC value, the gene with the next-highest AUC value was added to the gene set in each round. Then, PCA was applied using the expression profiles of the gene set. Consequently, the value of the first principal component for each sample was used for classification, and the corresponding AUC was calculated for the specific gene set. This gene-adding process continued until the AUC could not be increased in the next five rounds. In this way, the gene set with the highest AUC was defined as the brain-metastasizing signature for classification. A leave-one-out cross validation was further employed to test the classification performance.
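A minimal re-implementation sketch of this stepwise search is given below, assuming a samples × genes matrix of log2 CPM values and an AUC-ranked gene list. It follows the description above but is not the authors' code, and all names are illustrative.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.metrics import roc_auc_score

def stepwise_signature(expr, labels, ranked_genes, patience=5):
    """expr: samples x genes DataFrame of log2 CPM values; ranked_genes:
    genes sorted by descending single-gene AUC; labels: 1 = brain-metastasizing."""
    best_auc, best_set, stall, current = 0.0, [], 0, []
    for gene in ranked_genes:
        current.append(gene)  # add the next-best gene in each round
        # Score every sample by its first principal component on the gene set.
        pc1 = PCA(n_components=1).fit_transform(expr[current].values)
        auc = roc_auc_score(labels, pc1.ravel())
        auc = max(auc, 1.0 - auc)  # the sign of PC1 is arbitrary
        if auc > best_auc:
            best_auc, best_set, stall = auc, list(current), 0
        else:
            stall += 1
            if stall >= patience:  # no improvement for 5 rounds: stop
                break
    return best_set, best_auc
```

On the cohort described above, this kind of search reportedly settled on a 17-gene set, with the classification threshold on the first principal component (−1.89) then read off the ROC curve.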
Immunohistochemistry
We examined the differentially expressed genes and identified genes of particular interest, i.e., genes with a top AUC value in the ROC plots, and for which there are antibodies commercially available against their protein products. We chose CDKN2A (p16, clone E6H4, Ventana Medical Systems, Oro Valley, AZ, USA) as our target. Immunohistochemistry was performed to corroborate the RNA expression differences on tissue microarrays.
Tissue microarrays were constructed from archived formalin-fixed, paraffin-embedded (FFPE) lung tumor tissue from the patients. All specimens were fixed for 6-72 h before embedding in paraffin. Two cores, each with a diameter of 2 mm, were taken from representative tumor areas of each patient. Four micrometer-thick sections were cut from the arrays and attached onto slides. One section was stained with hematoxylin and eosin for morphology evaluation. The other section was stained with the primary antibody on the Leica Bond-Max (Leica Biosystems, Mount Waverley, VIC, Australia) automated staining platform. The slides were stained with a primary antibody at room temperature for 15 min and then treated with the Bond Polymer Refine Detection Kit (Leica Microsystems, Milton Keynes, UK). The sections were counter-stained with hematoxylin. The percentage of tumor cells positive for p16 was recorded, and the immunohistochemistry H-score was calculated.
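The paper does not spell out its H-score formula; assuming the conventional definition (staining intensity 0–3 weighted by the percentage of tumor cells at each intensity, giving a 0–300 range), the calculation is simply:

```python
def h_score(pct_by_intensity):
    """pct_by_intensity: {intensity 0-3: % of tumor cells}; values sum to 100.
    Returns the conventional IHC H-score on a 0-300 scale."""
    return sum(i * p for i, p in pct_by_intensity.items())

# Example: 20% negative, 30% weak, 40% moderate, 10% strong staining.
print(h_score({0: 20, 1: 30, 2: 40, 3: 10}))  # -> 140
```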
Comparison of Gene Expression Profile between Brain-Tropic Lung Adenocarcinoma Cell Lines and Patients with Brain-Metastasizing Lung Adenocarcinoma
A recently published database (MetMap) [12] described the metastasis organ tropism of various human cancer cell lines in an immunodeficient mouse model based on single-cell sequencing technology. In this database, 11 human lung adenocarcinoma cell lines derived from primary tumors with metastatic potential were identified. These cell lines were separated into brain-tropic versus non-brain-tropic based on their brain metastasis potential as determined by the MetMap project. A potential greater than −2 (on a log10 scale) is considered brain-tropic, and a value less than −2 is considered non-brain-tropic. The RNA-seq-based gene expression profiles of these cell lines were retrieved from the Cancer Cell Line Encyclopedia (CCLE) database [13]. Differentially expressed genes between the brain-tropic and non-brain-tropic cell lines were determined with DESeq2, similarly to the analysis performed on our lung cancer patient specimen RNA-seq data. A GO enrichment analysis and a KEGG pathway enrichment analysis were also performed for the identified DE genes. We compared the DE genes from the MetMap/CCLE cell line data to those from our patient tumor data and identified the overlapping DE genes with the same direction of difference (e.g., higher in both the brain-tropic cell lines and the brain-metastasizing patient tumors).
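That final intersection step can be sketched as below, assuming two hypothetical DESeq2-style result tables (one per system), each with a gene column and a log2FoldChange column; file and column names are illustrative.

```python
import numpy as np
import pandas as pd

# Hypothetical DESeq2-style exports: 390 patient DE genes, 1079 cell line DE genes.
patient = pd.read_csv("patient_de_genes.csv", index_col="gene")
cell_line = pd.read_csv("cellline_de_genes.csv", index_col="gene")

# Genes differentially expressed in both systems.
shared = patient.index.intersection(cell_line.index)

# Keep only genes whose fold change points the same way in both systems.
same_dir_mask = (
    np.sign(patient.loc[shared, "log2FoldChange"]).to_numpy()
    == np.sign(cell_line.loc[shared, "log2FoldChange"]).to_numpy()
)
same_direction = shared[same_dir_mask]
print(len(same_direction), "genes")  # the paper reports 28 such genes, incl. ARL9
```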
Statistical Analysis
In general, a Student's t test was performed for the continuous variables, and a Chi-squared test or Fisher's exact test was performed for the categorical variables to determine whether there was a significant difference between the brain-metastasizing and non-brain-metastasizing groups. A Mann-Whitney U test was performed to compare the CDKN2A RNA expression level and p16 immunohistochemistry staining between the two groups. A Pearson correlation coefficient was calculated to demonstrate the correlation between the CDKN2A mRNA expression and p16 immunohistochemistry results. A Wilcoxon rank sum test was performed to compare the tumor mutation burden between the two groups. A p value less than 0.05 was considered significant. | 2021-12-16T16:10:14.863Z | 2021-12-01T00:00:00.000 | {
"year": 2021,
"sha1": "2663eac07a11975840a16974c656072dcba3a8cd",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/22/24/13374/pdf?version=1639389556",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8988c2426da7bfbbe287d93cfbf4d7f9fb48405c",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
238741654 | pes2o/s2orc | v3-fos-license | Understanding the Renal Fibrotic Process in Leptospirosis
Leptospirosis is a neglected infectious disease caused by pathogenic species of the genus Leptospira. The acute disease is well-described, and, although it resembles other tropical diseases, it can be diagnosed through the use of serological and molecular methods. While the chronic renal disease, carrier state, and kidney fibrosis due to Leptospira infection in humans have been the subject of discussion by researchers, the mechanisms involved in these processes are still overlooked, and relatively little is known about the establishment and maintenance of the chronic status underlying this infectious disease. In this review, we highlight recent findings regarding the cellular communication pathways involved in the renal fibrotic process, as well as the relationship between renal fibrosis due to leptospirosis and CKD/CKDu.
Introduction
Leptospirosis is an infectious and zoonotic disease caused by pathogenic bacteria of the genus Leptospira. These highly motile spirochetes, characterized by a helicoidal and thin shape, have two endoflagella that never cover the entire bacterial length, which is one of the most important and peculiar characteristics of these bacteria [1][2][3][4][5]. Worldwide, it has been estimated that 1.03 million cases and 58,000 deaths from leptospirosis are reported annually [6][7][8][9]. The disease has become an issue of concern in countries in Europe and other developed countries, either as an emerging or re-emerging condition. In developing countries, such as those in East Asia, South and Central America, and Sub-Saharan Africa, the disease has been considered to be neglected [6,[10][11][12][13].
Natural disasters, rapid urbanization, and a lack of basic sanitation (e.g., water and sewage treatment or final garbage disposal) are considered risk factors for the occurrence of the disease in both high- and low-income countries, but mostly the latter [8,14,15]. Professional occupation, gender, and age are also considered risk factors and are good predictors of leptospirosis. For example, living in a favela in Brazil or working in a rice plantation in an Asian country are high-risk factors for developing the disease and, as has been shown recently, chronic kidney disease [13,15,16].
Leptospirosis-related fibrosis, as a sequela of acute disease or due to persistent chronic infection and maintenance of bacteria in the proximal convoluted renal tubules, remains overlooked and may be more strongly associated with chronic kidney disease (CKD) than previously considered [13,16]. Kidney fibrosis is defined as an accumulation of extracellular matrix (ECM) proteins in the interstitium and/or in the tubular basement membrane [17] and is the final outcome of many diseases (e.g., chronic kidney disease), characterized by the loss of architectural and functional roles. CKD (and, consequently, kidney fibrosis) affects around 10% of the global population. Unsurprisingly, countries with a high leptospirosis prevalence also have a high prevalence of CKD of unknown etiology [16,18,19].
With that in mind, in this review, we describe the most important pathways involved in tissue fibrosis and summarize recent findings regarding leptospirosis-related renal fibrosis. Furthermore, we highlight the incidental concurrence of this infectious disease and the outbreak of CKDu globally.
Kidney Fibrosis
As an overly complex process, kidney fibrosis in leptospirosis is approached in a stepwise manner for didactic purposes. First, we describe the pathways that are involved in fibrosis, the downstream effectors, and the outcomes of their activation, such as the epithelial-mesenchymal transition (EMT) and morphological and molecular patterns during this process and leptospiral infection. Then, we relate the pathways with the histological and gross alterations seen in kidney fibrosis during chronic leptospirosis. Finally, new approaches towards studying the relationship between leptospirosis and chronic kidney disease are discussed, as this subject has been a growing concern for nephrologists and Leptospira researchers.
TGF-β1 Signaling Pathway
As reviewed elsewhere [41,42], TGF-β1 signaling has been exhaustively studied and implicated as one of the most important pathways involved in fibrosis in multiple organs, including the kidneys. This pathway is generically constituted by a TGF receptor ligand (cytokines, growth, and differentiation factors), type-II or -I TGF receptors, and downstream effector proteins (named Smads). TGF-β signal transduction plays a crucial role in both physiological and pathological conditions.
In addition to post-translational modification inhibitory mechanisms, this pathway has inhibitory Smads that are activated downstream and interact with other Smads, abolishing or diminishing the response to certain stimuli [58][59][60]. Smad7 was first described in 1997 by Nakao et al. [61], as an intrinsic regulatory protein that inhibits the phosphorylation of mainly Smad2, but also Smad3, as well as preventing over-activation of the pathway.
It also targets the TGF-β type-I receptor for ubiquitylation, leading to the receptor's destruction by the proteasome, thus decreasing the number of receptors available in the plasma membrane and controlling the pathway's activation [62]. Overexpression of Smad7 in rat tubular epithelial cells prevents Smad2 phosphorylation and the production of ECM proteins, such as collagens I, III, and IV [60]. Furthermore, Smad7-disrupted mice show more evident kidney fibrosis, induced by unilateral ureteral obstruction (UUO), compared to the WT group. Collagen I and III deposition and α-SMA production in Smad7-deficient mice confirm the role of Smad7 in controlling TGF-β1 activation and the development of fibrosis [63].
Other important TGF-β signaling pathway repressors are Sloan-Kettering Institute (Ski) and Ski novel (SnoN). Both of these have structural similarities, including a SAND domain (named after Sp100, AIRE-1, NucP41/75, and DEAF-1), which interacts with Smad4 and represses the pathway after its activation [64]. In a study using a mouse model and tubular epithelial cells to evaluate the participation of TGF-β1 in kidney fibrosis, the authors demonstrated that there was a reduction in SnoN and Ski during in vivo fibrosis and that downregulating them in tubular epithelial cells amplified the response to TGF-β1 activation.
In line with a repressive role of these proteins on the TGF-β1 pathway, the ectopic expression of both in the cellular model rendered cells resistant to EMT and abolished the production of fibrosis markers, such as α-SMA [65]. Still, after stimulating HK-2 human renal proximal tubule cells with a hyperglycemic medium, the exogenous overexpression of SnoN and downregulation of Arkadia (a negative regulator of SnoN) can block EMT [66].
Although the pathway has its own regulators and repressors, there are other cytokines and growth factors that may enhance, be activated by (e.g., Wnt/β-catenin) [23,29], or antagonize (e.g., hepatocyte growth factor; HGF) the TGF-β1 signaling pathway. The crosstalk between TGF-β1 and Wnt/β-catenin during EMT processes has been well-documented: activation of these pathways leads to the transition from an epithelial phenotype to a mesenchymal one. Other pathways, such as PI3K-Akt-mTOR and MAPK, and the phosphorylation of downstream proteins (e.g., MEK1/2 and ERK1/2) also respond to a TGF-β1 stimulus (Figure 1), albeit to a lower degree than when activated by their own receptors' ligands. Even so, these pathways play an important role in the occurrence of EMT, as their blockade abolishes or diminishes EMT in epithelial cells [42,67,68]. HGF is a pleiotropic growth factor mainly secreted by mesenchymal cells after an injury or an inflammatory process [37,[69][70][71][72][73][74]. Its action on cells is mediated by the MET receptor, which triggers the downstream phosphorylation and activation of effector proteins (e.g., ERK), leading to the expression of target genes resulting in the production of metalloproteases 2 and 9 [75] and of SnoN and Ski [64][65][66], and inducing myofibroblast apoptosis [75].
The exogenous administration of HGF in different animal models attenuates or abolishes organ fibrosis in chronic kidney disease [34][35][36], unilateral ureteral obstruction [38,76], and diabetic nephropathy [40]. Beyond its evident role in antagonizing TGF-β1 and its antifibrotic activity, the anti-inflammatory potential of HGF may also contribute to its final effects in vivo and in vitro, as renal inflammation directly contributes to the progression of chronic kidney disease [71,[77][78][79][80]. This is further discussed in another section.
Wnt/β-Catenin Signaling Pathway
Another apparently simple yet complex pathway involved in renal fibrosis is the Wnt/β-catenin pathway. It is simple in that it has few actors from activation to outcome: a Wnt-protein ligand, a Frizzled receptor (and its co-receptors), and dephosphorylated β-catenin (see Figure 2). On the other hand, it is complex in that it may activate, and be controlled by, many other pathways, including signaling by TGF-β1 [27,28,30,[81][82][83]. Until the past decade, studies on the Wnt signaling pathway focused mainly on cancer and its pharmacological treatment, and on embryonic development. Colorectal cancer has been directly associated with mutations in important positive and negative Wnt regulators, and mutations in the main actors of the pathway cause abnormalities in embryonic development, such as a lack of wings in Drosophila melanogaster and a lack of anterior cerebellum in mice [84]. Deregulation of this pathway has recently been associated with fibrosis, as a cause or consequence of tissue damage and pro-fibrotic stimuli [26,28,30,[85][86][87].
In total, 19 Wnt ligands, glycoproteins that bind to Frizzled receptors to trigger pathway activation, have been described [26,88]. The pathway comprises scaffold and adaptor proteins that mediate the interaction between the receptors and effector proteins. After a Wnt ligand binds to the Frizzled receptor, low-density lipoprotein receptor-related proteins 5 and 6 (LRP5/6) are phosphorylated and recruit Dishevelled proteins, which are responsible for inhibiting the destruction complex (glycogen synthase kinase 3β, GSK-3β; casein kinase Iα, CKIα; Axin; and adenomatosis polyposis coli, APC).
Then, stabilized dephosphorylated β-catenin is released from the complex, enters the nucleus, and interacts with lymphoid enhancer factor (LEF) and T-cell factor (TCF), leading to the expression of the pathway's target genes [84]. As with other signaling pathways, the Wnt pathway has its own regulators, thus avoiding over-activation and pathological outcomes. One of the most important Wnt modulators is the Dickkopf (DKK) family of proteins [29,86,89], which bind to the LRP5/6 co-receptors and prevent the binding of Wnt ligands to the Frizzled receptor, thus blocking the pathway [90].
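The activation sequence just outlined (Wnt engages Frizzled and LRP5/6, Dishevelled inactivates the destruction complex, and stabilized β-catenin enters the nucleus) reduces to a compact decision rule, including the DKK block of LRP5/6. The sketch below is a deliberately binary rendering of that logic; real signaling is graded, and the function names are ours.

```python
# Boolean sketch of canonical Wnt/beta-catenin activation. Real signaling
# is quantitative; this only encodes the qualitative logic from the text.

def beta_catenin_signal(wnt_ligand: bool, dkk: bool) -> bool:
    # DKK binds LRP5/6 and prevents productive receptor engagement.
    lrp_engaged = wnt_ligand and not dkk
    # Dishevelled recruitment inhibits the destruction complex
    # (GSK-3beta / CKIalpha / Axin / APC).
    destruction_complex_active = not lrp_engaged
    # Without the destruction complex, beta-catenin is stabilized, enters
    # the nucleus, and partners with LEF/TCF on target genes.
    return not destruction_complex_active

print(beta_catenin_signal(wnt_ligand=True, dkk=False))   # True: pathway on
print(beta_catenin_signal(wnt_ligand=True, dkk=True))    # False: DKK block
print(beta_catenin_signal(wnt_ligand=False, dkk=False))  # False: no ligand
```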
In a mouse model of kidney fibrosis caused by UUO, only three Wnt genes (Wnt5b, Wnt8b, and Wnt9b) are not upregulated, showing expression similar to that of sham-operated mice. The other 16 Wnt genes are upregulated, following different patterns throughout the course of the experiment: some are overexpressed within the first week, reach a peak, and then decline over the next seven days; others are upregulated within the first week and sustained throughout the 14-day experiment; and, in others, expression increases progressively during the entire experiment. There is also an increase in cytoplasmic and nuclear β-catenin during the course of the assay, further reinforcing that the canonical pathway is truly activated, with a final response of ECM production and kidney fibrosis evident at day 14 after UUO [85].
Klotho is a membrane-bound protein that has been implicated as a negative regulator of the Wnt/β-catenin pathway. In HK-2 cells, Klotho inhibition by siRNA leads to a more pronounced EMT after TGF-β1 stimulation, showing crosstalk between the two pathways. In an in vivo model of chronic allograft dysfunction, Klotho is downregulated by 24 weeks after transplantation [91]. Reduced Klotho expression was also reported in the folic acid model of fibrosis [92]. Non-canonical activation of this pathway occurs independently of Klotho activity, since MMP-7 activation of β-catenin is abrogated by an inhibitor of target-gene transcription, but not by Klotho [93].
C3a/C3aR Signaling Pathway
Complement system proteins play important roles in renal fibrosis. As part of the bloodstream's innate immune system, the effectors and regulators of the classical, alternative, and lectin pathways of the complement system are mostly produced and secreted by the liver directly into the circulation, acting against pathogens and other threats in the bloodstream [94][95][96][97][98][99][100][101][102]. As other cells and organs have been described as sources of complement system proteins as well, additional functions have been ascribed to this system. Here, we highlight the importance of intracellular and secreted complement activation in disease conditions, with an emphasis on its consequences for the development of kidney fibrosis [92,103,104].
Classical pathway activation is mediated by the binding of the C1 complex (C1q/C1r/C1s) to immunoglobulins bound to target cell membranes; the lectin pathway is activated through the recognition of specific carbohydrates on the pathogen surface; and the alternative pathway is constantly activated at low levels, allowing for the generation of C3b. All three pathways converge on a common terminal pathway, leading to the assembly of the membrane attack complex (MAC), a membrane pore responsible for killing pathogens and/or target cells [95][96][97]99,100]. Cleavage of the central complement protein C3 generates the small fragments C3a and C3b; C3b becomes part of the C5 convertases that cleave C5 into C5a and C5b. Complement C6 and C7 bound to C5b form a complex that inserts into the plasma membrane and acts as a scaffold for the assembly of C8 and multiple C9 molecules, culminating in MAC formation [101].
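Because the three activation routes converge on C3 and then on a shared terminal sequence, the cascade can be summarized as a short data-flow sketch. The listing below is purely schematic bookkeeping of the steps named in this paragraph, not a biochemical model.

```python
# Schematic of complement activation converging on the membrane attack
# complex (MAC); illustrative bookkeeping of the steps described in the text.

TRIGGERS = {
    "classical":   "C1 complex (C1q/C1r/C1s) binds membrane-bound antibodies",
    "lectin":      "recognition of pathogen-surface carbohydrates",
    "alternative": "constitutive low-level activation generating C3b",
}

TERMINAL_PATHWAY = [
    "C3 cleaved into C3a (anaphylatoxin) + C3b",
    "C3b joins the C5 convertase; C5 cleaved into C5a + C5b",
    "C5b + C6 + C7 insert into the plasma membrane",
    "C8 and multiple C9 molecules assemble on the scaffold",
    "MAC pore formed -> lysis of pathogen/target cell",
]

for route, trigger in TRIGGERS.items():
    print(f"{route:>12}: {trigger}")
print("\ncommon terminal pathway:")
for step in TERMINAL_PATHWAY:
    print(" ->", step)
```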
In a C57Bl/6 mouse model of kidney fibrosis induced by UUO, mice knocked out for the C1qA and C3 genes show an obvious increase in the production and deposition of ECM proteins, augmented production of α-SMA, reduced E-cadherin, and increased production and secretion of TGF-β1 from ex vivo pericytes, compared to sham-operated mice. In C1q-depleted mice (C1q−/−), an increase in the C1s/C1r proteases and diminished levels of C3 were observed during kidney fibrosis. The depletion of C1r in another mouse model causes an evident reduction in C3 production and activation. These findings indicate a relationship between C1r/C1s and C3 production and activation in the fibrotic kidney [92,104].
During complement activation, fragments derived from C3 and C5, known as C3a and C5a or anaphylatoxins, are generated. They bind to their respective receptors, leading to pro-inflammatory responses, such as NLRP3 activation by C5a/C5aR2 and the production of IL-1β and IFN-γ by CD4+ T cells [105]. C3aR is expressed in renal epithelial cells, and its activation is related to the production of the pro-inflammatory cytokines IL-1β, IL-6, IL-8, and TNF-α by immune cells; these cytokines may exert deleterious functions during renal damage, leading to kidney fibrosis [106].
There is growing evidence for the participation of C3a and C5a in the occurrence and progression of kidney fibrosis. HK-2 cells express markers of EMT and overproduce ECM molecules when cultured for 72 hours with C3a or C5a; after incubation with these complement anaphylatoxins, immunocytochemistry shows strong staining for α-SMA and loss of positive staining for E-cadherin [31].
TGF-β1 versus HGF Balance
There is a close relationship and a fine balance between TGF-β1 and the multifunctional cytokine HGF (Figure 3); the crosstalk between these pathways is discussed below.
HGF is a pleiotropic growth factor that promotes diverse cell responses, such as mitosis, motility, and morphogenesis, as well as wound healing, tissue regeneration, tumorigenesis, and invasion. The primary structure of the protein, first studied as a hepatocyte mitogen in cell cultures, was deduced more than 30 years ago [108,109]. The pathway has a single ligand, the HGF molecule, and a single receptor, MET.
Although some bacteria are able to subvert the receptor and bind it to enter the cell, there are no endogenous ligands for MET other than HGF itself [74,110]. HGF is composed of an alpha and a beta chain and must be cleaved to its active form by serine proteases, mostly by the HGF-activator. HGF is produced and secreted by mesenchymal cells [37,69,70,74,111]. MET is a tyrosine kinase receptor (TKR) that becomes phosphorylated on multiple tyrosine residues after binding HGF and then recruits proteins from different pathways (e.g., PI3K and Akt), which exert their roles in the activated cells (i.e., suppression of cell death by expression of anti-apoptotic Bcl-xL, inhibition of Fas-FasL binding, and inhibition of caspase-3-mediated apoptosis) [112][113][114][115]. Other roles of the HGF-MET pathway are dictated by the activation of RAS/ERK pathways, which regulate cell proliferation and motility [111]. A fine-tuned relationship between HGF and TGF-β1 has been reported both in vivo and in vitro, in different cell types and animal models. EMT and fibrosis are consequences of an imbalance in this relationship, leading to the production and secretion of hallmark proteins [39,40,111,[116][117][118][119]. As EMT is induced by the exogenous administration of TGF-β1 in HK-2 cells, Wei et al. [116] evaluated the ability of HGF from mesenchymal stem cells (MSC) to counteract the effects of TGF-β1. After stimulation with the growth factor and co-culture of HK-2 cells with MSC, signs of EMT were reduced, and E-cadherin and α-SMA returned to basal levels. To prove that HGF was responsible for the blockade of EMT, MSC were transfected with siRNA against HGF; EMT was no longer blocked, showing that HGF was necessary for these results [116].
In the renal fibrotic process, this balance is deregulated and leans towards TGF-β1, which is one of the causes of kidney fibrosis. In a mouse model of congenital nephrosis, local kidney expression and secretion of TGF-β1 are augmented while HGF levels are diminished, showing that fine-tuned regulation of these two growth factors is required to maintain homeostasis and avoid kidney fibrosis [37]. HGF counteracts TGF-β1 by inducing the expression of SnoN, one of the co-repressors of the TGF-β1 pathway.
In a study using a cellular model, EMT and ECM production were induced by stimulating cells with purified TGF-β1. Cells were treated with TGF-β1 in the presence or absence of HGF, and whole-cell lysates were analyzed by western blot. When treated with both TGF-β1 and HGF, there was an increase in SnoN production, but no reduction in Smad2 phosphorylation. Co-immunoprecipitation showed that SnoN and p-Smad2 are associated, indicating that SnoN physically binds p-Smad2 and impedes the activation of target genes [120].
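A noteworthy detail of that experiment is that HGF raised SnoN without reducing Smad2 phosphorylation: the blockade works by sequestration rather than dephosphorylation. A minimal mass-action sketch (arbitrary units and dissociation constant; our own illustration) makes this concrete, showing the free, transcriptionally competent p-Smad2 falling as SnoN rises while total p-Smad2 stays fixed.

```python
import math

# Mass-action sketch: SnoN binds p-Smad2 1:1 and sequesters it from
# target-gene activation. Units and Kd are arbitrary (illustrative only).

def free_psmad2(total_psmad2: float, snon: float, kd: float = 0.1) -> float:
    """Free p-Smad2 after 1:1 equilibrium binding with SnoN."""
    p, s = total_psmad2, snon
    # Complex concentration from the standard quadratic binding solution.
    c = 0.5 * ((p + s + kd) - math.sqrt((p + s + kd) ** 2 - 4.0 * p * s))
    return p - c

total = 1.0  # total phosphorylated Smad2 is unchanged by HGF
for snon in (0.0, 0.5, 1.0, 2.0):
    print(f"SnoN = {snon:.1f}  ->  free p-Smad2 = {free_psmad2(total, snon):.3f}")
# Free p-Smad2 falls toward zero as SnoN rises while 'total' stays constant:
# target genes are blocked even though Smad2 phosphorylation is unchanged.
```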
Renin-Angiotensin-Aldosterone System and Its Relation with Fibrosis
Despite its well-known role in maintaining blood volume, blood pressure, and Na+ levels within normal values, the renin-angiotensin-aldosterone system (RAAS) has been implicated in organ fibrosis, mainly through its capacity to activate the TGF-β1 signaling pathway [121][122][123].
Angiotensin II (Ang II) directly binds to cellular receptors and activates important pathways (e.g., NF-κB and ERK). One of the described targets for Ang II is TLR4, known for its affinity for bacterial LPS. Renal dysfunction is characterized by augmented serum creatinine, albuminuria, and blood urea nitrogen, and kidney fibrosis hallmarks include increased collagen I/IV deposition and elevated levels of TGF-β1 and MMP9. Knocking down MD2 (a TLR4 accessory protein that mediates binding of the ligand to its receptor) or blocking it with a small-molecule inhibitor decreases the renal dysfunction and kidney fibrosis seen in wild-type mice after the subcutaneous injection of Ang II. It also affects cytokine and chemokine production mediated by the NF-κB and ERK pathways. Of note, all the effects observed upon Ang II administration are MD2-dependent, indicating that Ang II signals through the TLR4/MD2 complex [123].
The binding of Ang II to its type-1 receptor (AT1R) is apparently involved in the EMT induced in HK-2 cells by high-glucose medium, mediated by the effector proteins mTOR and p70S6K. Silencing the receptor rescues the cells from EMT, as characterized by overexpression of E-cadherin, diminished levels of α-SMA, and reduced expression of EMT core transcription factors [124]. Conversely, activation of the pathway through AT2R has the opposite effect and protects cells from TGF-β1 activation by inhibiting TGF-βRII, consequently preventing EMT [125]. Ang II is capable of inducing the secretion, and activating the latent form, of TGF-β1, thus having a pro-fibrotic role [126]. Ang II activates Smad2 and Smad3, which enter the nucleus and display the same effects already described upon activation by TGF-β1 binding to its receptor, such as enhanced production of the TGF-β1 molecule itself, generating positive feedback [122].
Although this pathway is well described and associated with renal fibrosis in different pathological contexts [123][124][125][126][127], the involvement of RAAS in kidney fibrosis due to chronic infection by pathogenic Leptospira has, to the best of our knowledge, not been addressed, and may be a worthwhile subject of investigation regarding the sequelae of leptospiral infection.
EMT, a Hallmark of Kidney Fibrosis
As research involving EMT has grown over the past 20 years, it has become almost mandatory to elaborate guidelines to help researchers formulate projects and to develop concepts that could help eliminate controversies regarding terms associated with EMT. In June 2020, an extensive committee, on behalf of The EMT International Association, published the Guidelines and definitions for research on epithelial-mesenchymal transition [128].
The epithelial to mesenchymal transition is a commonly described phenomenon that takes place during three major events: embryogenesis, cancer, and fibrosis. It is described as a process that occurs after specific stimuli in epithelial cells, leading to the acquisition of mesenchymal phenotype markers and the loss of epithelial markers. Although it may seem a static, end-point event, there is a spectrum of epithelial-to-mesenchymal phenotype markers, and, in fibrosis, cells mostly undergo an incomplete EMT, retaining characteristics of both epithelial and mesenchymal phenotypes [128].
Although still controversial, there is evidence that partial EMT of tubular cells participates in renal fibrosis indirectly, as EMT leads to increased production of cytokines and chemokines, such as TGF-β1, which stimulate interstitial fibroblasts to produce more ECM proteins and, in turn, attract more immune cells and enhance local inflammation [129][130][131][132].
In vitro and in vivo studies have also described the occurrence of EMT or partial EMT in tubular epithelial cells. The expression of mesenchymal markers (e.g., α-SMA), loss of epithelial markers (e.g., E-cadherin), increased production of ECM proteins (e.g., fibronectin and collagens I, III, and IV), and over- and downregulation of specific genes (e.g., Zeb1 and Zeb2, Snai1, and Snai2) are indicative of this process, pointing to the active participation of epithelial cells in the occurrence and progression of kidney fibrosis [23,116,[132][133][134].
Although necessary during renal embryogenesis, Snai1 and Snai2 are completely inactive during adulthood and are only reactivated during fibrosis. Tubular epithelial cells from mice subjected to UUO show increased expression of Snai1, together with an evident and important increase in interstitial fibrosis and a concomitant reduction in the expression of epithelial markers, such as cadherin 1. In Snai1fl/fl (conditional knockout) mice, fibrosis and mesenchymal markers are clearly less evident, and epithelial markers are still detected in tubular epithelial cells, demonstrating that Snai1 is necessary for partial EMT in those cells.
The contribution of tubular epithelial cells to the generation of myofibroblasts (ECM-producing cells) in kidney fibrosis is a controversial issue among EMT researchers. Using a mouse model expressing the tdTomato fluorescent protein and UUO to induce fibrosis, it became clear that tubular cells undergo partial EMT but do not delaminate and detach from the basement membrane to invade the interstitium, as less than 1% of interstitial cells were tdTomato+ 7 days after obstruction [130].
Mice lacking the Twist1 or Snail1 genes have 48.2% and 64.3% fewer tubular epithelial cells, respectively, undergoing EMT after UUO compared to wild-type mice. In addition, those mice display less collagen deposition and better renal function. No signs of tubular epithelial cell detachment are seen in either wild-type or knockout mice, indicating that these cells undergo partial EMT and that they are needed to induce α-SMA+ myofibroblasts. EMT in kidney fibrosis is associated with lower expression of important solute transporters, such as aquaporin 1 and Na+/K+-ATPase, and suppression of the core EMT transcription factors may explain the better renal function in Twist1 or Snail1 KO mice [135].
EMT is under strict control in adult humans and animals [128,130,131,136]. Among the regulators that keep EMT under control are microRNAs (miRNAs), especially the miR-200 family, which downregulates the expression of Zeb1 and Zeb2. Both transcription factors are overexpressed during fibrosis and EMT and downregulate the expression of E-cadherin [137][138][139]. In a rat tubular epithelial cell line model, EMT is induced by TGF-β1, while miR-200a is downregulated in a Smad2-dependent manner. When miR-200a is overexpressed, cells are protected from TGF-β1-induced EMT, indicating that Smad2 is necessary for, and controls, the expression of miR-200a [138]. With this in mind, the use of miR-200 family precursors as a possible treatment for kidney fibrosis and/or for retarding its development has been tested, with promising results [140].
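The miR-200/Zeb circuit just described is a mutual-repression loop: Zeb1/2 repress E-cadherin and miR-200, while miR-200 downregulates Zeb transcripts. The steady-state sketch below, with arbitrary Hill-type repression terms of our own choosing, illustrates why supplying exogenous miR-200a can protect E-cadherin from a TGF-β1-driven rise in Zeb; it is a qualitative toy, not a fit to data.

```python
# Toy steady-state sketch of the miR-200 --| Zeb --| E-cadherin circuit.
# Repression terms and constants are arbitrary; illustrative only.

def steady_state(tgf: float, mir200_overexpression: float = 0.0,
                 iters: int = 2000):
    zeb, mir = 0.1, 0.5
    for _ in range(iters):
        # TGF-beta1 drives Zeb; miR-200 degrades Zeb transcripts.
        zeb = tgf / (1.0 + 4.0 * mir)
        # Zeb represses miR-200; an exogenous precursor adds a fixed supply.
        mir = 1.0 / (1.0 + 4.0 * zeb) + mir200_overexpression
    ecad = 1.0 / (1.0 + 4.0 * zeb)   # Zeb represses E-cadherin
    return zeb, ecad

for label, boost in (("TGF-b1 alone", 0.0), ("TGF-b1 + miR-200a", 2.0)):
    zeb, ecad = steady_state(tgf=3.0, mir200_overexpression=boost)
    print(f"{label:>18}: Zeb = {zeb:.2f}, E-cadherin = {ecad:.2f}")
# With the exogenous miR-200a supply, Zeb stays low and E-cadherin is
# preserved despite the same TGF-beta1 input.
```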
Acute Leptospirosis and the Development of Kidney Fibrosis
During infection, pathogenic Leptospira disseminate through the bloodstream and reach target organs, especially the kidneys, liver, and lungs [141]. The kidneys are then colonized by the bacteria, eliciting an immunological response initially based on the recruitment of neutrophils and subsequently mediated by macrophages. Meanwhile, leptospires are still found in the blood during the first three days after infection, where neutrophils attempt to eliminate them by NETosis and other mechanisms.
Neutrophil-depleted mice have a higher leptospiral burden in the kidneys 15 days post-infection, demonstrating that neutrophils help eliminate Leptospira from the blood within the first few days [142,143]. Although both cell types are phagocytic and play important roles in the innate immune response to invading pathogens, Leptospira has the ability to subvert and escape phagocytosis, invade the renal convoluted tubules, and persist in this niche [144][145][146][147][148].
The transition from acute kidney injury (AKI) caused by leptospiral infection to chronic infection and, consequently, kidney fibrosis is still not completely understood, and many research gaps remain. The same applies to the transition from AKI to CKD of non-infectious cause [149]. Continuous stimuli, the activation of inflammatory pathways, and the participation of innate immune cells in the development of chronic leptospirosis and kidney fibrosis are the main underlying causes considered so far [25,142,143,[149][150][151].
The sustained stimulus caused by the presence of Leptospira in the convoluted tubules is responsible for the overactivation of inflammatory pathways, triggering pro-fibrotic signals. One month after infection, cellular infiltration in the kidneys is characterized by CD3-positive T cells and CD11b-positive macrophages/monocytes, with no neutrophils observed [152]. In a rat model of gentamicin-induced acute kidney injury, the most prevalent cells infiltrating the kidneys within the first day after injury were pro-inflammatory M1 macrophages, whereas, at day 30 after injury, M2 macrophages accounted for 45% of the total cell infiltrate [149].
Total healing of the kidney, with no signs of tubular necrosis or glomerular sclerosis, can also be observed. Activation of the NF-κB and NLRP3/IL-1β pathways is implicated in the recrudescence of signs in rat kidneys 180 days after injury, with low-grade inflammation occurring together with activation of angiotensin II, as well as collagen and fibronectin deposition in the interstitium. These findings indicate that a sustained inflammatory stimulus may lead to kidney fibrosis after acute injury [149].
As described above, the complement system is another important system activated during leptospirosis. Fluid-phase or membrane-associated negative complement regulatory proteins prevent overactivation of this system and, consequently, tissue damage [98,101,105]. One of the membrane-associated complement regulators is decay-accelerating factor 1 (Daf1), responsible for inhibiting the assembly, and accelerating the disassembly, of the C3 and C5 convertases. In a mouse model of leptospirosis, Daf1−/− mice have higher bacterial loads, greater susceptibility to infection, acute renal lesions, and more evident kidney fibrosis 90 days post-infection, with more tubulointerstitial collagen deposition than their wild-type littermates. These findings point to a role of Daf1 in controlling the bacterial burden and inflammation during the acute phase, thus helping to reduce chronic lesions and fibrosis [150].
Cytotoxicity mediated by nitric oxide (NO) is one of the mechanisms used by macrophages to control leptospiral infection. The TLR2/NOD2 agonist CL429 increases NO production by mouse peritoneal and bone marrow-derived macrophages exposed to L. interrogans serovars Manilae str. L495, Copenhageni str. Fiocruz L1-130, and Icterohaemorrhagiae str. Verdun [151]. The NO increase correlates with a lower number of live Leptospira in cell culture and, as such, is associated with bacterial killing. Inducible nitric oxide synthase (iNOS) expression, and consequently NO production, is also associated with kidney fibrosis as the disease transitions from an acute to a chronic state in C57BL/6J mice [152].
Of note, the initial lesion and the type of cellular infiltrate play roles in disease progression and, thus, contribute to the evolution towards chronic and fibrotic Leptospira-related kidney disease [142,150,152]. The roles of macrophages and galectin-3 in the survival rate and clinical course of the disease, acute interstitial nephritis, and the development of chronic infection and kidney fibrosis have been investigated in C57BL/6 mice infected with L. interrogans serovar Copenhageni str. L1-130 [142]. Although galectin-3 plays a crucial role in controlling bacterial burden during the acute phase, fibrosis and chronic disease correlate only with the initial bacterial burden, which is directly related to the development and extent of kidney fibrosis and leads to the activation and enrichment of fibrosis-related pathway genes [25,142,152].
In brief, acute leptospirosis is characterized mainly by cell-mediated inflammatory and immune responses.
Kidney Fibrosis and Chronic Leptospirosis
Chronic leptospiral infection leads to kidney fibrosis, but the underlying mechanisms and pathways involved remain poorly elucidated. In the first part of this review, the potential pathways related to renal fibrosis were described. In this section, we address the current knowledge of renal fibrosis caused by Leptospira infection.
Hamsters and guinea pigs are considered good animal models of acute and severe leptospirosis, as both die within the first 5-10 days after Leptospira inoculation [141,153,154]. On the other hand, rats and mice are suitable models for studying carrier status and chronic disease. Although mice do not present signs of the disease and their lesions are considered mild to moderate, good models of chronic leptospirosis have been developed in these animals, and some of the underlying mechanisms involved in the persistence of Leptospira and the induction of fibrosis have begun to be elucidated [142,150,152,155].
Using wild-type and different knockout mouse lineages to understand which pathways may be activated during chronic leptospiral infection, Fanton d'Andon et al. [152] demonstrated that renal fibrosis in chronically infected mice can be partially attributed to nitric oxide production, in a TLR- and NLR-independent manner. Furthermore, acute inflammation and T-cell infiltration do not contribute directly to the extent of renal fibrosis.
Chronic leptospiral infection has been associated with fibrosis in many different mouse infection models [142,150,152,156]. Both wild-type and decay-accelerating factor 1-deficient (DAFKO) mice developed fibrosis at 90 days post-infection, with collagen deposition observed away from lymphocyte infiltration. In contrast to what had been previously described [152], the authors found a relationship between inflammation and interstitial fibrosis in both wild-type and DAFKO mice, but with more evident collagen deposition in the interstitium of DAFKO mice compared to the wild-type [150].
Leptospira outer membrane proteins induce the accumulation of ECM proteins in renal epithelial cells through activation of the TGF-β1/Smad3 pathway, thus contributing to the evolution of fibrosis associated with chronic infection [132]. Therefore, TGF-β1 exerts a profibrotic action and is involved in the mechanisms of renal fibrosis during chronic leptospirosis. According to Tian et al. [157], outer membrane proteins from L. santarosai serovar Shermani enhance the secretion of collagen types I and IV by HK-2 cells, a process mediated by the TGF-β1 pathway.
Bone marrow-derived macrophages can transdifferentiate into myofibroblasts, cells that express α-SMA and produce ECM. This transition is coordinated by the TGF-β1/Smad3 pathway, in a process known as the macrophage-myofibroblast transition (MMT). In chimeric mice whose bone marrow was depleted by radiation and reconstituted with exogenous GFP-expressing C57BL/6 bone marrow, GFP-positive myofibroblasts were observed in the kidneys after UUO.
To understand which pathways are involved in the chronic infection and kidney fibrosis caused by L. interrogans serovar Copenhageni str. L1-130, and the role of leptospiral infection in the progression of CKD, Chou et al. [25] performed a mouse kidney transcriptomic analysis and detected increased gene expression of TGF-β1, Wnt, and integrin-β, crucial players in important fibrosis-related pathways. In addition, when mice are subjected to a nephrotoxic diet containing 0.1 or 0.2% adenine, those pathways are further enriched.
Comparing orthologous genes from mice with leptospiral infection plus nephrotoxic stimulus and humans with CKDu, there is an overlap of enriched genes. These findings support the hypothesis that Leptospira infection is associated with CKD progression and may be an underlying cause of CKDu. Furthermore, as both the Wnt and TGF-β1 pathways are enhanced in this model, it is suggested that they contribute to the progression of kidney fibrosis [25].
Further evidence of a fibrotic process triggered by infection with leptospires comes from recent in vitro and in vivo findings from our group. HK-2 cells infected with L. interrogans serovar Manilae str. L495 produced greater amounts of fibronectin and collagen type IV compared to non-infected cells. Morphological alterations, such as spindle shape, loss of cell-cell contact, cell grouping, and abundant ECM production, were also evident in infected HK-2 cells. The cellular and molecular mechanisms underlying this EMT process are under investigation.
Collagen deposition was also seen by Periodic acid-Schiff staining (Figure 5) at 15 days after infection in the kidneys of mice infected with serovar Manilae str. L495. After 30 days of infection, mice infected with a high inoculum (2 × 10^8) of Leptospira showed more areas of interstitial and peritubular fibrosis (blue arrowheads in the figure) than those infected with a lower inoculum (1 × 10^6; black arrowheads). These findings show the importance of this serovar in causing acute disease as well as chronic and profibrotic infection.
Kidney Fibrosis and the CKDu Outbreak-The Relationship with Leptospirosis
As CKD has become one of the most prevalent kidney diseases, largely due to systemic hypertension and diabetes, it is worth highlighting two major points: (i) impaired kidney function and fibrosis are present in almost every case of CKD, and (ii) the outcome of CKD is renal failure, requiring kidney transplantation. Highlighting these aspects is of utmost importance, as CKD patients demand constant care, such as dialysis, hospital care, and transplantation, thus elevating healthcare expenditures [18,19].
According to a recent systematic analysis, 1.2 million people died from CKD and 697.5 million CKD cases were recorded globally in 2017, for a global prevalence of 9.1% [159]. The burden is higher in countries located in Oceania, Sub-Saharan Africa, and Latin America, following a pattern already described in other studies [16,19]. Of note, those regions account for the highest rates of leptospirosis cases worldwide [2,7]. Even without further analysis, it is noticeable that the regions with higher incidences of CKD and of leptospirosis overlap. In the past few years, evidence that humans may remain chronic carriers of Leptospira has given rise to a discussion about how this could be related to the endemic increase in CKD and CKDu [160][161][162].
Another recent systematic review of observational studies disclosed a correlation between leptospirosis and CKD. Poor renal function, characterized by a low estimated glomerular filtration rate (eGFR), is more prevalent in anti-Leptospira-positive individuals than in negative ones, and those with higher titers of antibodies against Leptospira have poorer renal function (lower eGFR) [163].
In a study of a Peruvian community, 189 of 314 asymptomatic participants had high titers of anti-Leptospira antibodies, and 13 were positive for leptospiral 16S rRNA by PCR of urine samples. Despite these individuals being asymptomatic, the molecular detection of Leptospira evidences that the chronic carrier state is a reality. Although the likelihood of renal damage and kidney fibrosis was not assessed in that work, the presence of Leptospira in the kidneys may be a risk factor for the development of CKD [160]. In a study of canine leptospirosis, a relevant and close correlation between CKD and leptospirosis was reported; furthermore, dogs with both CKD and leptospirosis were more frequently associated with Leptospira shedding [164].
Mesoamerican nephropathy, a type of CKDu, has brought to light evidence that chronic leptospirosis could be a cause of CKD. This condition is highly prevalent among male sugarcane workers who are in close contact with flooded areas during at least part of the year. It has been proposed that leptospirosis may be one cause (though not the only one) of Mesoamerican nephropathy, and that exposure to other potentially nephrotoxic conditions or substances may influence the occurrence of the disease [165]. Recently, a study conducted in a mouse model of chronic leptospirosis followed by toxic exposure to dietary adenine demonstrated that renal lesions and fibrosis are more severe and prevalent in mice primarily infected by Leptospira, showing that chronic leptospirosis may be a risk factor for the development of CKD [25].
Gaps and Perspectives
CKD/CKDu are important health problems that directly impact quality of life and health systems all over the world. Despite recent advances in nephrology and awareness campaigns, rates of seeking medical assistance and treatment remain insufficient. Assuming that leptospirosis may play a role in the development of CKD/CKDu, and considering the overlap of both conditions in Asia, Central and South America, and Sub-Saharan Africa, studies of the associated pathophysiology, cellular communication pathways, and treatment strategies to avoid or control the fibrotic process are needed.
As discussed throughout the text, kidney fibrosis and, consequently, CKD as a sequela of Leptospira infection is a well-documented manifestation of the disease; however, studies on the mechanisms associated with the pathophysiology of leptospiral renal fibrosis are still scarce and deserve further investigation. Deciphering the cellular communication pathways in Leptospira-induced kidney fibrosis may shed light on the specific actors involved in this process.
Funding: We thank the Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) for funding, process number 2021/01122-9. This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior-Brasil (CAPES)-Finance Code 001.
Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Ethics Committee of the Butantan Institute (protocol number 9421030620, approved on 3 June 2020).
"year": 2021,
"sha1": "733bb52711c766f9cfa81d73d027c39ca8502cbc",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/22/19/10779/pdf?version=1633769295",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "36709b7a97d13b8e513ef2e1f1ae2ebbdd017bc2",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Sensitivity and Specificity of the World Health Organization Dengue Classification Schemes for Severe Dengue Assessment in Children in Rio de Janeiro
Background The clinical definition of severe dengue fever remains a challenge for researchers in hyperendemic areas like Brazil. The ability of the traditional (1997) as well as the revised (2009) World Health Organization (WHO) dengue case classification schemes to detect severe dengue cases was evaluated in 267 children admitted to hospital with laboratory-confirmed dengue. Principal Findings Using the traditional scheme, 28.5% of patients could not be assigned to any category, while the revised scheme categorized all patients. Intensive therapeutic interventions were used as the reference standard to evaluate the ability of both the traditional and revised schemes to detect severe dengue cases. Analyses of the classified cases (n = 183) demonstrated that the revised scheme had better sensitivity (86.8%, P<0.001), while the traditional scheme had better specificity (93.4%, P<0.001) for the detection of severe forms of dengue. Conclusions/Significance This improved sensitivity of the revised scheme allows for better case capture and increased ICU admission, which may aid pediatricians in avoiding deaths due to severe dengue among children, but, in turn, it may also result in the misclassification of the patients' condition as severe, reflected in the observed lower positive predictive value (61.6%, P<0.001) when compared with the traditional scheme (82.6%, P<0.001). The inclusion of unusual dengue manifestations in the revised scheme has not shifted the emphasis from the most important aspects of dengue disease and the major factors contributing to fatality in this study: shock with consequent organ dysfunction.
Introduction
Dengue is the most widely distributed viral hemorrhagic fever in the tropical world, annually infecting approximately 100 million people in Southeast Asia, the Pacific region, and the Americas and often causing epidemics in urban and peri-urban areas [1]. In 2013, 2,351,703 cases were reported in the Americas. Brazil was responsible for approximately 61% of these cases (1,451,432 cases), and all 4 serotypes of the dengue virus have been isolated in almost all Brazilian states [2].
Severe forms of dengue disease were first recognized in the 1950s during dengue epidemics in the Philippines and Thailand. Today, severe dengue affects most Asian and Latin American countries and has become a leading cause of hospitalization and death among children in these regions [3]. An estimated 500,000 people with severe dengue require hospitalization each year, a large proportion of whom are children; approximately 2.5% of those affected die. In Brazil, the increase in hospitalizations and deaths among children has become a problem in recent years [4].
Although dengue is a single disease entity, it has various clinical presentations and often an unpredictable clinical course and outcome [5]. Patients with dengue can present with a range of clinical symptoms that varies according to severity (asymptomatic, mild, or severe) and the age group affected (children or adults). To describe and categorize the common manifestations of dengue, the World Health Organization (WHO) developed a classification system that evolved from pioneering studies in Thailand in the 1950s and 1960s. This guideline for the control, diagnosis, clinical classification, and treatment of dengue was first proposed in 1975 and revised in 1997, on the basis of a clinical study of 123 Thai children in 1966 [6,7]. It grouped the clinical presentations of dengue disease as dengue fever (DF), dengue hemorrhagic fever (DHF), and dengue shock syndrome (DSS). DHF is divided into 4 grades: grades I and II are classified as DHF, and grades III and IV are considered DSS.
Nevertheless, some studies have shown that applying this classification system is challenging in dengue-endemic areas. The appearance of different manifestations, such as dengue with hemorrhage but without plasma leakage, or dengue with shock but without fulfilling all 4 DHF criteria (fever lasting 2-7 days, a tendency for hemorrhage shown by a positive tourniquet test or spontaneous bleeding, thrombocytopenia of ≤100,000 platelets/mm³, and evidence of plasma leakage), poses difficulties to clinicians in applying the current case classification scheme. The major problems identified were the rigidity of the definitions, low sensitivity, and the difficulty experienced by some clinicians in differentiating DHF from DF, since the clinical and basic laboratory parameters overlap in some cases [8][9][10][11]. To address these difficulties, the WHO Dengue Scientific Working Group designed a multicenter study, DENCO (Dengue Control), to evaluate the perceived limitations of the WHO 1997 dengue classification scheme in all age groups from Southeast Asia and Latin America [12]. Based on the findings of this working group, a new classification scheme was proposed in 2009, which divides cases into dengue without warning signs, dengue with warning signs, and severe dengue [5].
The recent dengue epidemics among children in Rio de Janeiro provide an opportunity to assess the ability of these WHO dengue classifications to effectively identify severe dengue cases. The aim of this study was to evaluate the sensitivity and specificity of the WHO 1997 dengue classification compared with the WHO 2009 dengue classification for assessing severe dengue among children admitted to pediatric reference hospitals in Rio de Janeiro during the epidemics of 2007/2008 and 2010/2011, using the need for intensive care as the reference standard of severity.
Data set and data management
A hospital-based study was performed in 3 tertiary care centers for children during the dengue epidemics of November 2007 through May 2008 and November 2010 through May 2011, in Rio de Janeiro, Brazil. These hospitals were part of the dengue network study whose regional reference center was the Laboratório de Doenças Febris Agudas at Instituto Evandro Chagas (IPEC/FIOCRUZ).
The sources of data were the computerized medical records collected from databases from 3 centers, all of which utilized a standardized protocol with demographic, clinical, and laboratory assessments, including daily hematocrit and platelet counts, serological findings, and therapeutic information. All cases were retrospectively reviewed by specialists to ensure data consistency and classify the cases according to the traditional (1997) and current (2009) WHO schemes [5,7].
Eligibility criteria
Inclusion criteria were children between 0 and 18 years of age who were admitted during the dengue epidemics of 2007/2008 and 2010/2011 to 1 of the 3 pediatric reference hospitals in Rio de Janeiro. Exclusion criteria were children without complete protocol data or without laboratory confirmation of dengue.
Case classification
WHO 1997 scheme. According to the traditional scheme, cases were classified as dengue fever (DF), dengue hemorrhagic fever (DHF), and dengue shock syndrome (DSS).
DF was defined as laboratory-confirmed cases with high fever without evidence of plasma leakage, with or without hemorrhagic manifestations. DHF grades I and II were characterized by evidence of plasma leakage associated with the presence of hemorrhagic manifestations (petechiae, ecchymosis, purpura, or bleeding from the mucosa of the gastrointestinal or urinary tract, injection sites, or other locations) and thrombocytopenia (≤100,000 platelets/mm³) without shock. DSS was characterized by signs of circulatory failure: cold clammy skin, cyanosis, rapid pulse, pulse pressure <20 mmHg, or hypotension, in the presence of a hemorrhagic manifestation [7].
Children with laboratory-confirmed dengue who had evidence of plasma leakage but did not comply with the criteria for DHF or DSS were defined as unclassified cases.
WHO 2009 scheme. According to the newly proposed scheme, cases were grouped into dengue without warning signs, dengue with warning signs, and severe dengue [5].
Warning signs included: abdominal pain or tenderness (not intermittent); persistent vomiting (more than 5 times in 6 hours or more than 3 times in 1 hour); clinical fluid accumulation, including pleural effusion and ascites identified as a reduction of vesicular murmur or of thoracic-vocal fremitus, abdominal distension, or dullness on percussion in decubitus, confirmed by abnormal imaging findings (chest radiography, thoracic and abdominal ultrasound, or computed tomography for pleural effusion and ascites, or gallbladder wall thickening); mucosal hemorrhage (gastrointestinal hemorrhage and/or metrorrhagia); lethargy (alteration of consciousness and/or Glasgow score <15) or irritability; and liver enlargement (>2 cm below the costal margin). Laboratory warning signs were defined as follows: thrombocytopenia (platelet count <50,000/mm³) and a hematocrit change of 20%, either raised or decreased by 20% from the baseline value during the convalescent period.
Severe dengue was defined by the following characteristics: (i) plasma leakage resulting in shock or in fluid accumulation with respiratory distress (defined as respiratory discomfort, dyspnea, respiratory failure, or an increased respiratory rate of >60 breaths/min for ages <2 months; >50 breaths/min for ages 2 months to 1 year; >40 breaths/min for ages 1 to 5 years; >30 breaths/min for ages 5 to 8 years; and >20 breaths/min for those older than 8 years). Shock was defined as the presence of at least 2 clinical signs of hypoperfusion (e.g., slow capillary filling, cold clammy skin, and rapid and weak pulse), with or without an associated narrow pulse pressure (≤20 mmHg) or hypotension for the specified age (a decrease in systolic arterial blood pressure to below the 5th percentile for age [<PAS5], calculated as age [years] × 2 + 70) [13]; or (ii) severe bleeding (in this study, defined as persistent and/or severe overt bleeding in the presence of unstable hemodynamic status, regardless of the hematocrit level or the need for transfusion of blood products); or (iii) severe organ involvement, e.g., severe hepatitis (aspartate aminotransferase/alanine aminotransferase levels ≥1000 IU/L), encephalitis (central nervous system involvement with impaired consciousness), myocarditis (heart dysfunction, characterized by cardiac failure confirmed by echocardiography), or renal impairment (serum creatinine levels ≥2 times the upper limit of normal or a 2-fold increase from baseline creatinine levels). Multiple-organ dysfunction syndrome was considered when dysfunction involved ≥2 organs [14].
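The age-dependent cutoffs above (the tachypnea bands and the 5th-percentile systolic pressure formula, age in years × 2 + 70) are mechanical to apply. The helper below transcribes them directly from the definitions in this section; it is an illustrative sketch with function names of our choosing, not validated clinical software, and the behavior exactly at band boundaries is our own convention.

```python
# Age-dependent severity cutoffs transcribed from the WHO 2009-based
# definitions used in this study. Illustrative sketch, not clinical software.

def tachypnea_threshold(age_years: float) -> int:
    """Respiratory rate (breaths/min) above which distress is flagged."""
    if age_years < 2 / 12:
        return 60
    if age_years < 1:
        return 50
    if age_years < 5:
        return 40
    if age_years < 8:
        return 30
    return 20

def pas5(age_years: float) -> float:
    """5th-percentile systolic arterial pressure: age (years) x 2 + 70."""
    return age_years * 2 + 70

age, rr, sbp = 6, 34, 78
print(f"tachypnea: {rr > tachypnea_threshold(age)}")   # 34 > 30 -> True
print(f"hypotension for age: {sbp < pas5(age)}")       # 78 < 82 -> True
```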
Cases were considered severe when classified as DSS by the traditional scheme and as severe dengue by the revised classification, respectively.
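Taken together, the revised scheme amounts to a three-way triage: check the severe-dengue criteria first, then the warning signs, and otherwise classify as dengue without warning signs. The sketch below encodes that ordering with boolean inputs; the inputs abbreviate the full criteria defined above, and the names are ours.

```python
# Minimal triage sketch of the WHO 2009 three-way classification.
# Boolean inputs abbreviate the full criteria defined above; names are ours.

def who2009_class(shock: bool, resp_distress_with_leakage: bool,
                  severe_bleeding: bool, severe_organ_involvement: bool,
                  warning_signs: int) -> str:
    if (shock or resp_distress_with_leakage or severe_bleeding
            or severe_organ_involvement):
        return "severe dengue"
    if warning_signs >= 1:
        return "dengue with warning signs"
    return "dengue without warning signs"

print(who2009_class(False, False, False, False, warning_signs=2))
# -> 'dengue with warning signs'
print(who2009_class(True, False, False, False, warning_signs=0))
# -> 'severe dengue'
```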
Laboratory confirmation
Children who were admitted to 1 of the 3 hospitals during a dengue epidemic had at least 1 specific laboratory test performed. Cases were considered laboratory-confirmed dengue if dengue virus RNA was detected by reverse transcriptase polymerase chain reaction (RT-PCR), IgM anti-dengue antibodies were detected from the third day after the onset of fever, or the non-structural protein-1 (NS1) antigen test was positive. The dataset consisted of patients with laboratory-confirmed dengue. Other laboratory data included a minimum of 2 complete blood count analyses (hematocrit and platelet count) from separate days, blood chemistry values, and imaging (radiography, ultrasonography, computed tomography, and echocardiography). Complete blood counts were performed daily, and imaging studies were carried out according to clinical demand to investigate the presence of fluid accumulation or clinical improvement.
Reference standard
Deaths and intensive care unit (ICU) interventions were used as the reference standard to identify severe cases and, consequently, to compare both WHO classifications. Patients who required colloids, vasoactive amines, inotropic drugs, or transfusion of blood products; who underwent any kind of dialysis; or who required either invasive or non-invasive ventilatory support were classified as having received ICU intervention.
Statistical analysis
The traditional and revised schemes were compared according to their positive and negative predictive values, sensitivity, and specificity. Sensitivity is the probability that the diagnostic instrument (here, the WHO classification schemes) indicates a positive result for individuals with severe disease, and specificity is the probability of a negative result for those who do not have severe disease. Individuals for whom the WHO classification was contrary to the class they belonged to were counted as false negatives or false positives [15]. The positive predictive value is the probability that a person has the severe form of the disease given that the test is positive; the negative predictive value is the probability that a person does not have severe disease when the test is negative.
To compare the differences between the sensitivities and specificities of the 2 classification schemes, a binomial test was applied, and 95% confidence intervals were obtained [16]. For the positive and negative predictive values, the relative values were calculated and compared according to the method of Moskowitz and Pepe [17]. Patients who could not be classified according to either the traditional or the revised scheme were not included in the analyses. All statistical analyses were performed using the statistical software R 3.0.1 [18].
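For readers who wish to reproduce this type of comparison, the four metrics reduce to ratios over a 2×2 table of scheme result versus the ICU-intervention reference standard. The sketch below computes them with normal-approximation 95% confidence intervals; the counts in the example are placeholders, not the study's data, and the study itself used a binomial test and the Moskowitz-Pepe method rather than this simple approximation.

```python
import math

# Sensitivity/specificity/PPV/NPV from a 2x2 table of scheme result versus
# the ICU-intervention reference standard, with normal-approximation 95%
# confidence intervals. The counts below are placeholders, NOT study data.

def prop_ci(successes: int, n: int, z: float = 1.96):
    p = successes / n
    half = z * math.sqrt(p * (1.0 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

def metrics(tp: int, fp: int, fn: int, tn: int):
    return {
        "sensitivity": prop_ci(tp, tp + fn),  # severe cases flagged severe
        "specificity": prop_ci(tn, tn + fp),  # non-severe flagged non-severe
        "PPV": prop_ci(tp, tp + fp),          # flagged severe, truly severe
        "NPV": prop_ci(tn, tn + fn),          # flagged non-severe, truly so
    }

for name, (value, lo, hi) in metrics(tp=40, fp=15, fn=10, tn=85).items():
    print(f"{name:>11}: {value:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```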
Results
During the epidemics of 2007/2008 and 2010/2011, of 604 admissions to the reference hospitals, 450 children had a complete set of clinical and laboratory data and 267 (59.3%) had laboratory-confirmed dengue. Of these 267 cases, 28 were RT-PCR positive, 267 were IgM positive, and 20 were NS1 positive, as shown in Figure 1. According to the revised scheme, 18 (6.7%) of the children with laboratory-confirmed dengue were classified as having dengue without warning signs, 142 (53.2%) as having dengue with warning signs, and 107 (40.1%) as having severe dengue. According to the traditional scheme, 26 (9.7%) of the 267 children were classified as DF, 119 (44.6%) as DHF, 46 (17.2%) as DSS, and 76 (28.5%) could not be classified (Table 1). The ages of the children ranged from 0 to 18 years, with a median of 8 years (interquartile range: 6-11) and a slightly higher proportion of girls (52.4%, 140/267) than boys (47.6%, 127/267). Eight cases were fatal (3%), all of which progressed to severe dengue according to the revised scheme. The traditional scheme classified 6 of the 8 fatal cases (75%) as DSS and 1 case (12.5%) as DHF due to hemorrhagic complications; 1 fatal case with shock but without hemorrhagic manifestations could not be classified into any specific category (Table 1). The median duration of hospitalization was 4 days (interquartile range: 2-6 days), with a maximum of 21 days. All dengue-related deaths occurred within the first 6 days of disease (Table 1). In terms of days after fever onset, patients were hospitalized on the fifth day, corresponding to the defervescence period.
Of the 267 studied cases, 76 (28.4%) received ICU interventions. Fifty-eight of these 76 cases (76.3%) were also among the 191 children who required continuous monitoring due to hemodynamic instability despite previous fluid management with crystalloids. The recommendations of the International Guidelines for Management of Severe Sepsis [19] were the criteria used by pediatricians to decide on ICU interventions.
Seventy-six (28.5%) patients could not be classified according to the traditional scheme. They did not fulfill all of the criteria for DHF or DSS because they did not present evidence of plasma leakage associated with bleeding manifestations or platelet reduction (Table 2). Nevertheless, 18.4% (14/76) of these patients had shock and 44.7% (34/76) had clinical evidence of fluid accumulation.
Although these 76 cases could not be classified into any of the categories of the traditional scheme, all were classified by the revised scheme. These patients had the highest frequency of warning signs, including a decrease in platelet count (88.1%), abdominal pain (60.5%), a 20% increase in hematocrit (57.9%), clinical fluid accumulation (44.7%), liver enlargement (25%), and persistent vomiting (19%). In total, 65.8% (50/76) of the cases that could not be classified with the traditional scheme were categorized as dengue with warning signs, 25% (19/76) as severe dengue, and 9.2% (7/76) as dengue without warning signs by the revised scheme (data not shown).
The cases that could not be categorized by the traditional scheme were excluded from the sensitivity and specificity analyses assessing the ability of the schemes to classify severe cases (severe dengue and DSS, respectively; n = 183). The revised scheme demonstrated a sensitivity of 86.8% and a specificity of 73.0%, while the traditional scheme had a sensitivity and specificity of 62.3% and 93.4%, respectively. The differences between the sensitivities and specificities of the schemes were statistically significant (P<0.05), as were the differences between the positive and negative predictive values (Table 3).
Discussion
This study demonstrated the superiority of the revised classification (WHO 2009) for the detection of severe cases among hospitalized children with laboratory-confirmed dengue. Similar to the findings of other studies [20,21], the revised scheme had significantly better sensitivity (86.8%; P<0.001) than the traditional scheme (62.3%) (Table 3). This improved sensitivity allows for better case capture and increased ICU admission, which may aid pediatricians in avoiding deaths due to severe dengue among children. However, it may also result in the misclassification of patients' condition as severe, as indicated by the lower positive predictive value (61.6%) compared with the traditional scheme (82.6%). The lower sensitivity of the traditional scheme (62.3%) is reflected in the 76 unclassified cases (28.5%) that did not present all the criteria required for classification as DHF or DSS, possibly because of the rigidity of the definitions, the lack of accounting for early intravenous fluid therapy, or the failure to maintain a good record of bleeding. Pediatricians do not uniformly accept the tourniquet test because it is time-consuming in an epidemic scenario, and it is not well established whether it provides additional information for improving the management of severely ill children or whether it is specific to dengue or to severity [22]. Furthermore, the test may be negative during shock; hence, it would not distinguish DSS from cases of shock without hemorrhage [7].
Although the revised scheme, proposed in 2009, has been described as more effective than the traditional scheme, some controversies remain regarding its specificity, which was lower (73.0%) than that of the traditional scheme (93.4%). This difference is perhaps due to the more restrictive definition of severity in cases classified as DSS (Table 2). However, some aspects of the new classification also contribute to this lower specificity, such as the absence of dengue-specific definitions of organ dysfunction, especially for children. Excessive fluid treatment may exacerbate the accumulation of fluid in the chest cavities, leading to respiratory distress, which qualifies these cases as severe dengue and may explain the proportion of respiratory distress in the absence of shock. The definition of hepatic dysfunction, based only on high levels of transaminases (without an increase in bilirubin or alteration of coagulation), could also overestimate severity. The only case of severe hepatic involvement observed in this study probably occurred because of ischemia, as it was associated with shock [23].
Severe bleeding and respiratory distress resulted from disseminated intravascular coagulation and acute respiratory distress syndrome, respectively, which were themselves consequences of shock. Therefore, the hallmark of severe dengue was plasma leakage. Shock followed by organ dysfunction was associated with almost all deaths (7/8) among the studied children. Theoretically, shock could be prevented if plasma leakage and hypovolemia were detected early and managed properly. Nonetheless, the fatality rate was still high (3%) in this group of children, even though many received early fluid management before hospital admission (191/267; Table 1). This study did not assess whether an excess of fluids or inadequate monitoring of clinical progression prior to hospitalization explains the poorer prognosis of some children. The duration of hospitalization was similar across groups and slightly longer for cases of severe dengue (Table 1), probably as a result of complications such as prolonged shock or hypoperfusion for >72 hours (Table 2).
The high observed frequency of warning signs (53.2%) could be related to the identification and use of these signs by pediatricians in Rio de Janeiro as criteria for hospitalization (Table 2) [24]. Warning signs such as a reduction in platelet count, followed by abdominal pain and hemoconcentration, across all classification groups indicate that these were the most widely applied criteria for hospitalization (Table 2). However, which specific warning signs, if any, may be useful for predicting severity and death in these cases is beyond the scope of this study and would be better assessed with a prospective cohort design.
The major limitation of this study is its retrospective design, as signs and symptoms could have been incompletely recorded, especially among less severe cases, generating a possible classification bias. Nevertheless, the standardized clinical care and rigorous data management protocols used in this study are likely to have mitigated inaccuracies in data collection. Although this study was performed at 3 pediatric reference centers for dengue in the city of Rio de Janeiro, it was a descriptive study, and the lack of a representative sample limits the generalizability of the results.
In spite of these limitations, the study showed the utility of the WHO 2009 dengue classification scheme in detecting cases that are not classified by the WHO 1997 scheme. In addition, the authors identified issues regarding the specificity of the revised scheme that could be refined in light of standard definitions, such as those of the Consensus of Definitions for Organ Dysfunction in Pediatrics [13].
In conclusion, this study demonstrates the better sensitivity of the revised scheme to assess severe cases, which may allow for closer monitoring and management of children and potentially avoid deaths. It also shows that the traditional scheme, although not easy to apply, remains superior at distinguishing the truly severe cases, which, in turn, could reduce the workload of the health team.
Finally, the pattern of severity among children also permitted us to conclude that the inclusion of unusual dengue manifestations in the revised scheme has not shifted the emphasis from shock with consequent organ dysfunction, the major factor contributing to fatality in this study.
"year": 2014,
"sha1": "5dff1c8ae6d303453f7956dd4a4c3ece32f56159",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0096314&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5dff1c8ae6d303453f7956dd4a4c3ece32f56159",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Surgical Robotics for Intracerebral Hemorrhage Treatment: State of the Art and Future Directions
Intracerebral hemorrhage (ICH) is a stroke subtype with high mortality and disability, and there are no proven medical treatments that can improve the functional outcome of ICH patients. Robot-assisted neurosurgery is a significant advancement in the development of minimally invasive surgery for ICH. This review encompasses the latest advances and future directions of surgical robots for ICH. First, three robotic systems for neurosurgery applied to ICH are illustrated. Second, the key technologies of robot-assisted surgery for ICH are introduced in aspects of stereotactic technique and navigation, the puncture instrument, and hematoma evacuation. Finally, the limitations of current surgical robots are summarized, and the possible development direction is discussed, which is named “multisensor fusion and intelligent aspiration control of minimally invasive surgical robot for ICH”. It is expected that the new generation of surgical robots for ICH will facilitate quantitative, precise, individualized, standardized treatment strategies for ICH.
Introduction
Intracerebral hemorrhage (ICH) refers to nontraumatic brain parenchymal hemorrhage, which is the second most detrimental subtype in stroke patients, with a mortality as high as 35%−52% [1]. Only approximately 20% of patients are able to achieve functional independence within 6 months after clinical treatment [2].
Acute medical management plays a crucial role in treating ICH by preventing its deterioration, which is achieved through measures such as controlling blood pressure to prevent hematoma expansion and reversing anticoagulant effects if necessary. Additionally, it involves preventing and managing secondary brain injuries that may arise from seizures, elevated intracranial pressure, hyperglycemia, and fever [3]. Although various medical treatments have been explored, there is currently no specific treatment that has been proven to improve outcomes for ICH patients. Intuitively, hematoma removal could be an effective therapy for ICH. The space-occupying effect and toxic content release are the main mechanisms by which hematoma leads to brain cell death. It has been suggested that the longer the hematoma is present, the worse the prognosis is [4]. Thus, the key to treatment is to remove the hematoma as early as possible and prevent rebleeding [5]. However, studies have yet to demonstrate a significant functional outcome benefit from surgical intervention. The results of the STICH I and STICH II trials indicated that traditional craniotomy failed to reduce mortality or improve prognosis in ICH patients compared with conservative drug therapy [6,7]. The limited efficiency of traditional craniotomy for ICH could be attributed to the inevitable injury to normal brain tissue during surgical manipulation counterbalancing the benefits of hematoma removal, which makes minimally invasive surgery (MIS) the most promising surgical strategy. MIS can reduce mechanical damage to the surrounding normal tissues and shorten the duration of surgery; it includes neuroendoscopic surgery and stereotactic hematoma puncture and drainage followed by thrombolysis [8,9]. The MISTIE III trial showed that although MIS reduced all-cause mortality, it did not provide any functional outcome advantages in patients one year after ICH [10]. Robot-assisted surgery is a challenging and evolving technique, the application of which has recently been extended to certain surgical cases for treating ICH. Compared with traditional procedures, it has the advantages of highly accurate positioning, short operative time, and considerable anti-interference, which could reduce the error of manual operation and, thus, enable a higher level of safety. The meta-analysis by Xiong et al. suggested that robot-assisted MIS showed an overall superiority over conventional surgery or conservative management for ICH in terms of rebleeding rate, neurological function improvement and intracranial infection rate [11]. The era of precision medicine heralds much promise in developing more efficacious and personalized therapies to combat this disease. Therefore, robot-assisted surgery is considered a significant direction for the development of future minimally invasive strategies for ICH. This review encompasses the latest advances and future directions of surgical robots for ICH.
Current Status of Surgical Robots for ICH
Neurosurgical robots can increase the accuracy and precision of targeting lesions, provide stable surgical platforms, and make telemedicine a reality [12]; they are widely used for neurosurgical procedures such as stereo-electroencephalography, deep brain stimulation and stereotactic biopsy [13]. Currently, several representative surgical robots (Fig. 1) (Table 1), including ROSA® (Zimmer Biomet, Warsaw, Indiana, USA), Remebot® (Remebot Technology Co., Beijing, China), and the CAS-R-2 frameless stereotactic system (HOZ Medical Co., Beijing, China), have undergone preliminary validation through small-scale clinical trials for the treatment of ICH. These are all commercially available robotic systems, which employ robotic arms with multiple degrees of freedom to achieve accurate positioning and integrate the functions of surgical path planning, navigation and control software [14]. Related clinical studies have shown that robot-assisted surgery for ICH is safe and efficient [15][16][17][18][19][20].
In the general surgical procedure, a preoperative three-dimensional (3D) image is necessary for delineating the relevant anatomy, which can be achieved via a CT scan, MRI, or a combination of these images fused together. Then the surgeon determines a desired trajectory based on the central point of the maximum plane and the long axis of the hematoma. After completing registration, the robotic arm automatically moves to the target area along the preplanned trajectory. A burr hole is then created as the operation channel, and the surgeon inserts the surgical instruments and completes the operation with the robot's assistance. After the operation, postoperative imaging is usually used to evaluate the surgical result.
Key Technologies of Robot-Assisted Surgery for ICH
The integration of robotic systems into the treatment of ICH presents both opportunities and challenges to the surgeon.
The following is an overview of several key technologies of robot-assisted surgery for ICH, including stereotactic technique and navigation, the puncture instrument, and hematoma evacuation.
Stereotactic Technique and Navigation
Robotic devices were first applied to stereotaxy in 1985, when Kwoh combined a PUMA robot and a framed stereo locator for intracranial needle biopsy [21]. Subsequently, ROSA®, Remebot® and a series of neurosurgical robotic systems were developed. Registration is one of the core steps of the entire operation, which aligns the preoperative 3D imaging and planned trajectory with the actual patient position in the operating room. There are a variety of ways to achieve registration, including surface registration using facial features, frame-based registration, and bone fiducial registration. The use of frame-based or bone fiducial-based registration is considered to be more accurate for aligning preoperative imaging and intraoperative position, but it may also increase the suffering of patients [22]. The technique entails perforating the patient's skull and affixing a metal frame to the bone, both of which can elicit intense discomfort and pain, thus accounting for the heightened suffering experienced by patients. Currently, computer-aided surgical navigation is widely used in neurosurgery and has achieved good outcomes in the treatment of ICH, with a registration accuracy of 2.12-2.51 mm [23]. However, the current 3D spatial reconstruction and localization methods for ICH are mostly limited to the segmentation and reconstruction of hematoma and brain tissue, and the estimation of possible risk factors (functional areas and vascular distribution) during puncture is insufficient. Traditional surgical navigation systems can realize the localization of the lesion, surgical planning and surgical tool tracking but cannot reflect changes in brain tissue and hematoma in real time.
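To make the registration step concrete, the minimal sketch below computes the least-squares rigid transform that maps preoperative image-space fiducials onto their intraoperative patient-space positions, using the standard SVD-based (Kabsch) solution, and reports the mean fiducial registration error. It illustrates the general technique only, not any particular commercial system; function and variable names are illustrative.

```python
import numpy as np

def rigid_register(image_pts, patient_pts):
    """Least-squares rigid transform (R, t) mapping image-space fiducials
    (n x 3) onto patient-space fiducials, via the Kabsch/SVD solution."""
    ci, cp = image_pts.mean(axis=0), patient_pts.mean(axis=0)
    H = (image_pts - ci).T @ (patient_pts - cp)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = cp - R @ ci
    fre = np.linalg.norm((image_pts @ R.T + t) - patient_pts, axis=1).mean()
    return R, t, fre   # fre: mean fiducial registration error, e.g. in mm
```

In practice, such residual errors (on the order of the 2.12-2.51 mm accuracy cited above) serve as a sanity check before the planned trajectory is executed.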
Intraoperative imaging is one of the solutions, and mainly includes intraoperative CT and MRI. Intraoperative CT can quickly and accurately determine the location, volume and morphology of the hematoma [24], which can improve the neurological function of patients [25]. However, there are several problems related to intraoperative CT, such as radiation exposure, operation interruption during the scan, and metal artifact interference, and it cannot reflect the relative positional relationship between the hematoma and the needle in real time [26]. Intraoperative MRI can accurately determine the location, volume and shape of the lesions without radiation. However, there are several disadvantages in using intraoperative MRI for ICH surgery, such as the long imaging time, narrow operating space, the special material requirements for the surgical instruments to adapt to the strong magnetic field, and the contradiction between the magnetic field strength of MRI equipment and imaging quality [27].
Endoscopic technology enables doctors to observe the relative positional relationship between the hematoma and the surrounding brain tissue directly from the inside by neuroendoscopy [28]. The advantages are the wide view of the operative field and the relatively small damage to the brain tissue surrounding the hematoma. Compared with traditional treatment, it can shorten the time of external ventricular drainage and reduce surgical complications [29]. However, the commonly used neuroendoscope lacks a stereo sense, the volume of the lens is too large, and there are problems such as lens fogging and contamination during the operation [30].
Furthermore, augmented reality and mixed reality technologies have been applied to surgical navigation, enabling users to engage with digital content and interact with holograms in the real world. Zhou et al. developed a multimodal mixed reality navigation system for ICH surgery, which can provide surgeons with direct observations for preoperative surgical planning and intraoperative tracking of surgical tools. The registration errors were 1.03 and 1.94 mm in phantom and clinical experiments, respectively, demonstrating the accuracy and effectiveness of the system and of mixed reality technology for clinical application [23].
Compared to functional neurosurgery, MIS for ICH does not demand perfect stereotactic accuracy [31]. However, imprecise registration during MIS for ICH can lead to a deviated trajectory and targeting error, resulting in a higher risk of iatrogenic injury to functional regions and blood vessels, as well as rebleeding [32]. Although the current registration accuracy is adequate for MIS for ICH, ensuring precise registration is crucial to minimize these risks. Therefore, there is still a need to improve registration accuracy for minimally invasive evacuation of ICH. Robotic systems are well-equipped to handle spatial information and directives, and can provide multiple options for registration, which mutually correct each other to enhance accuracy [33]. Additionally, to enhance registration accuracy in minimally invasive evacuation of ICH, robotic systems can utilize multiple sensors and detectors to capture diverse information, high-precision image processing technology to obtain precise surgical scene information, machine learning and artificial intelligence algorithms to assist the robot in recognizing elements in the surgical scene, and real-time feedback and correction mechanisms to monitor and adjust position and posture in real time.
Puncture Instrument
During MIS for ICH, the puncture needle needs to pass through three types of tissues of different stiffnesses: the skull (solid, rigid), brain tissue (semisolid, flexible) and hematoma (liquid, soft), whose stiffnesses can differ by orders of magnitude [34][35][36]. Frameless stereotactic robots insert puncture instruments into specific targets through straight-line trajectories within the brain, while the mainstream brain puncture needles in clinical practice are straight and rigid, posing a challenge to the avoidance of critical anatomical structures and to the efficacy of hematoma evacuation, especially when the hematoma is deep or irregular in shape [14]. Continuum robots, particularly concentric tube robots, are capable of nonlinear motion, which provides a promising alternative for MIS [37].
In 2013, Burgner et al. presented a simple two-tube, 3-degree-of-freedom (DoF) concentric tube robot for image-guided evacuation of ICH, which was composed of a straight, stiff outer needle and a precurved, superelastic aspiration cannula. The authors completed the structural design, the image guidance and the optimization method for selecting the precurvatures of the aspiration cannulas; in vitro experiments demonstrated that the system can evacuate 83-92% of the hemorrhage volume [38,39]. In 2015, Godage et al. integrated intraoperative image feedback and a concentric tube robot in the evacuation of hematoma in a phantom model, which achieved 85% removal of hematoma without appreciable damage to the surrounding brain tissue [40]. Furthermore, Chen et al. fabricated an MR-conditional steerable needle robot for ICH treatment, which achieved hematoma aspiration with a prebent internal tube guided by MRI [41]. Recently, Yan et al. demonstrated a continuum robot design consisting of a precurved cannula and 2-DoF flexible tips for minimally invasive aspiration of hematoma in ICH, which could achieve follow-the-leader (FTL) motion within 2.5-mm shape deviation and control performance within submillimeter errors [42]. In addition to the above steerable needle robots, Sheng et al. introduced a mesoscale medical robot for neurosurgical intracerebral hemorrhage evacuation (NICHE), which employs shape memory alloy actuators to actuate individual degrees of freedom [43].
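The reachable workspace of such two-tube designs can be illustrated with a constant-curvature kinematics sketch: a straight outer needle followed by the exposed length of the precurved inner cannula, modeled as a circular arc that can be rotated axially. This is a minimal model under strong assumptions (rigid outer tube, no torsional windup or tube interaction), not the kinematics of any of the published designs; all parameter values are illustrative.

```python
import numpy as np

def tip_position(l_straight, l_curved, kappa, theta):
    """Tip position of a simplified two-tube concentric tube robot:
    a straight segment of length l_straight along z, then the exposed
    precurved cannula as a circular arc of curvature kappa and arc
    length l_curved, with the bending plane rotated by theta about z."""
    if kappa == 0.0:                      # degenerate case: straight cannula
        local = np.array([0.0, 0.0, l_curved])
    else:
        phi = kappa * l_curved            # swept arc angle
        local = np.array([(1 - np.cos(phi)) / kappa, 0.0, np.sin(phi) / kappa])
    c, s = np.cos(theta), np.sin(theta)   # axial rotation selects bending plane
    rot_z = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return np.array([0.0, 0.0, l_straight]) + rot_z @ local

# Example: 60 mm straight insertion, 20 mm of cannula with a 25 mm bend radius
print(tip_position(60.0, 20.0, 1 / 25.0, np.deg2rad(30)))
```

Sweeping the insertion lengths and the axial angle traces out the curved, non-line-of-sight regions that a rigid straight needle cannot reach.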
Compared with a rigid puncture instrument, a steerable needle robot with higher degrees of freedom can reach the bleeding area more dexterously and maximize the evacuation rate of the hematoma. However, all reported steerable needle robotic devices are still at the stage of in vitro experiments, with no relevant animal experiments or clinical applications. Some of the reasons for this situation are limitations in controllability, accuracy and functionality. Specifically, the flexibility at the needle tip relative to the driver input, together with the nonlinear mapping between them, reduces controllability, and the dynamic interaction between the flexible piercing needle and the variable-stiffness environment imposes higher requirements on motion accuracy [44]. The existing flexible puncture needles only consider basic motor performance and lack the multifunctional perception required for ICH treatment [37].
Hematoma Evacuation
It is generally believed by neurologists that removing the hematoma can reduce the damage to brain tissue. The results of the MISTIE III trial showed that only patients who reduced the hematoma by at least 70% or reduced the residual volume of the hematoma to less than 15 mL achieved neurological improvement within 1 year. However, 58% of patients could not achieve this goal during the study [10]. Currently, the manual hematoma aspiration and drainage used in clinical practice cannot precisely control operative parameters such as the speed of aspiration and drainage and the target volume of hematoma clearance. Removing the hematoma too quickly can lead to a rapid drop in intracranial pressure and negative pressure in the hematoma cavity, resulting in "decompression injury" and increasing the risk of rebleeding. In addition, the reduction in the compression effect of the hematoma could also increase the risk of rebleeding, particularly in patients with poor cerebrovascular conditions. The use of thrombolytic agents (e.g., rt-PA, urokinase) is an important aspect of MIS for ICH. A series of studies have shown that thrombolytic agents can improve patient outcomes, but clinical and animal studies have also shown that urokinase and rt-PA may have dose-dependent neurotoxic effects and increase the risk of rebleeding [45,46]. Therefore, the dosage of the liquefaction agent needs to be controlled precisely to achieve the goal of dissolving the hematoma while reducing toxicity and side effects. Currently, the dosage selection of thrombolytic agents is mainly based on medical images and the clinical experience of the surgeon.
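As a small illustration of the MISTIE III surgical goal cited above, the hypothetical helper below checks whether an evacuation meets either criterion (at least 70% volume reduction, or residual volume below 15 mL); it is a sketch for exposition only, not a clinical tool.

```python
def meets_mistie_goal(initial_ml: float, residual_ml: float) -> bool:
    """True if the evacuation satisfies the MISTIE III goal cited above:
    >= 70% hematoma reduction OR residual volume < 15 mL.
    Hypothetical helper for illustration, not a clinical tool."""
    assert initial_ml > 0, "initial hematoma volume must be positive"
    reduction = (initial_ml - residual_ml) / initial_ml
    return reduction >= 0.70 or residual_ml < 15.0

print(meets_mistie_goal(50.0, 12.0))  # True: residual below 15 mL
print(meets_mistie_goal(50.0, 20.0))  # False: only a 60% reduction
```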
Above all, there is a lack of detailed and quantitative indicators to guide MIS for ICH, which is highly dependent on the experience of doctors in clinical practice. More high-quality clinical research and basic research are needed to promote the standardization and individualization of treatment. Additionally, big data and deep learning make it possible to realize intelligent decisions in the diagnosis and treatment of ICH.
Discussion and Future Perspectives
Patients with ICH often suffer a very poor prognosis, and no definitive treatments have been developed to improve functional outcome after ICH. Despite the therapeutic potential of surgical hematoma removal, there is currently insufficient evidence in the literature to indicate that its efficacy is significantly better than that of conservative treatment. Irreversible injury induced by ICH, mechanical trauma to normal brain tissue caused by surgical maneuvers, low visibility during surgery, deviation of the catheter, and a larger residual hematoma are important factors in the recovery of postoperative nervous function, which are in turn restricted by current MIS techniques and equipment [10,47,48]. Robot-assisted surgery is one of the cutting-edge developments in the field of MIS and has become preferable to traditional minimally invasive modalities, which is crucial for advancing MIS for ICH patients.
However, there are still disadvantages in current surgical robots, which are reflected in the following three aspects. First, craniopuncture needles used for hematoma evacuation have a single function and, thus, may be difficult to apply to a variety of brain tissues of different stiffness, which poses a risk of iatrogenic damage during surgery. Second, robot-assisted surgery facilitates intraoperative neuronavigation positioning, but the puncture itself is still performed manually. Moreover, the robots fail to achieve dynamic real-time monitoring of the puncture process. Therefore, it is difficult to ensure the safety and accuracy of the puncture. Finally, there is a lack of effective means to perceive the intracranial environment, which makes it difficult to remove as much of the hematoma as possible. Furthermore, there is a paucity of data to quantitatively describe the relationship between the parameters of the aspiration procedure and postoperative rehabilitation efficacy, so it is difficult to set the optimal parameters based on individual factors and lesion characteristics. Collectively, a relatively smooth reduction in intracranial pressure (ICP) remains elusive.
The concept of the Tri-Co Robot (the Coexisting-Cooperative-Cognitive Robot), proposed by Chinese scientists, has emerged as an innovative idea to solve the abovementioned problems. A Tri-Co Robot refers to a robot that is able to adapt to dynamic and complex environments autonomously and naturally interact with working environments, humans and other robots [49]. According to this theory, modifying existing surgical robots in aspects such as ontology structure, sensor fusion, and intellectualization is predicted to be capable of facilitating mutual perception between robots and the intracranial environment and finally achieving collaboration between robots and ICH patients as well as doctors. Based on this line of thought, we put forward the following scenario, named "Multisensor fusion and intelligent aspiration control of minimally invasive surgical robot for ICH", the salient points of which are provided below [50,51]: (1) Multichannel puncture needle of variable stiffness design: surgical robots with multichannel configurations have been preliminarily explored. Hendrick and colleagues developed a multichannel robotic system for transurethral procedures by adapting a rigid endoscope, which includes two manipulators, two fiber optic bundles, and an endoscope lens, all integrated into an 8.35 mm inner diameter sheath [52]. Furthermore, Yu et al. introduced a concentric tube robotic system featuring three channels, comprising a pair of manipulation channels and a vision channel, all encased within a 10 mm active sheath. The practicality of utilizing the multichannel system for transnasal nasopharyngeal carcinoma procedures was investigated through a range of simulations and experiments [53]. As for surgical robots for ICH, the puncture needle should be available in a multichannel design that covers functions such as visualization, perception, and aspiration, and is also highly integrated to reduce iatrogenic injury. Moreover, the development of a rigid-flexible-soft puncture needle depends on a cannula design that accommodates changes in tissue stiffness (skull, brain tissue and hematoma) while showing adequate motion performance (Fig. 2A, B). (2) Real-time visualization of the puncture process and autoregulatory control: multi-modal visualization technology, including CT, MRI, ultrasound, and other imaging modalities, has already been implemented in robotic procedures to facilitate intraoperative decision-making. In addition, emerging visualization technologies, such as photodynamic capture and enhanced microscopy, will allow real-time tissue imaging during surgical interventions [54]. In the scenario for surgical robots for ICH, real-time registration of endoscopy images and preoperative imaging profiles (CT and MRI) provides intraoperative visualization of the size, morphology and location of the hematoma and helps to avoid damage to blood vessels. Moreover, during manual surgery, surgeons usually rely on visual or force feedback to adjust their movements, which may not always result in accurate outcomes. However, the development and implementation of adaptive control techniques have enabled the suppression of tremors and improvement of targeting accuracy during robot-assisted surgery. For example, Ebrahimi et al. demonstrated that the use of adaptive control techniques can significantly enhance the safety of sclera force during robot-assisted eye surgery [55].
Therefore, it is recommended to employ adaptive control methods in robot-assisted surgeries for ICH to compensate for real-time puncture errors and achieve autoregulatory control of the puncture process. (3) Multisensor perception of the intracranial environment and an intelligent decision-making model for hematoma aspiration: constructing a big data-based innovative diagnosis and treatment decision-making support model leveraging intracranial multimode dynamic perception information, including ICP, detection of microbleeding, and endoscopy images, assisting personalized hematoma evacuation strategies and enabling a smooth decrease in ICP (Fig. 2C); a minimal sketch of such sensor fusion is given below.
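As a minimal sketch of the multisensor perception idea in point (3), the snippet below fuses redundant ICP readings by inverse-variance weighting, the simplest statistically grounded fusion rule. The sensor noise levels and readings are illustrative assumptions; a real system would use far richer models and additional modalities.

```python
import numpy as np

def fuse_icp(readings_mmHg, variances):
    """Inverse-variance fusion of redundant ICP readings: each sensor is
    weighted by 1/variance, giving the minimum-variance linear estimate."""
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(w * np.asarray(readings_mmHg, dtype=float)) / np.sum(w)
    return fused, 1.0 / np.sum(w)   # fused estimate and its variance

# Hypothetical: a precise intraparenchymal probe plus a noisier indirect estimate
print(fuse_icp([18.2, 21.0], [0.5, 4.0]))
```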
Above all, this scenario is characterized by intraoperative visualization, personalized hematoma aspiration and precise treatment. "Perception" is, of course, a centrally important technique for realizing such a scenario. Actually, the perceived competence of surgical robots can be structured into 4 different tiers: simple perception, local dynamic perception, global dynamic perception, and multimode dynamic perception, which correspond to robot assistance of the surgery, task-level autonomy, procedure-level autonomy and automated surgery (Fig. 3). With the development of robotic and artificial intelligence technologies, we envisage that "human-robot shared perception" is expected to become a higher level of perceived mode. This concept highlights the transformation of abstract clinical experiences into regular patterns through deep learning methods on large datasets, which are then combined with multimode dynamic perception to achieve the precise medical treatment of ICH.
In summary, the new generation of surgical robots for ICH should be subject to big data-oriented decision-making and could participate in the whole procedure of hematoma puncture and aspiration, finally revolutionizing MIS and facilitating quantitative, precise, individualized, standardized treatment strategies for ICH.
"year": 2023,
"sha1": "642862185877694d7e549a14f460e8d6e6f73cae",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10439-023-03295-x.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "ac9f13fd890f7a242d94fbe1bb7c152a391d9a27",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Effect of Blowby on the Leakage of the Three-Piece Oil Control Ring and Subsequent Oil Transport in Upper Ring-Pack Regions in Internal Combustion Engines
The lubricating oil consumption (LOC) in internal combustion engines contributes to emissions and deteriorates the performance of the aftertreatment system. In this work, an optical engine with a 2D laser-induced fluorescence (2D-LIF) system was used to study operating conditions critical to real driving oil emissions. Additionally, numerical models were used to analyze the ring dynamics, oil flow and gas flow. It was found that the intake pressure that results in zero blowby is the separation line between two drastically different oil flow patterns in the ring pack. With intake pressure lower than the separation line, the oil accumulation in the three-piece oil control ring (TPOCR) groove begins to increase, followed by a drastic increase of the oil accumulation in the third land and second land, and finally visible oil leaking through the top ring gap, given enough time. The time required for the oil to leak through different rings was investigated using both measurements and modeling. The effects of drain holes and rail gaps, as well as their relative rotation, on oil accumulation and leakage from the TPOCR groove were analyzed. These findings contribute to improving ring pack designs and engine calibration in spark ignition (SI), gas, and hydrogen engines equipped with a TPOCR to minimize the negative impacts of LOC.
Introduction
With the rising demand for environmentally friendly energy in the transportation industry, gas emissions from internal combustion engines have become a critical factor for engine development. Lubricating oil consumption (LOC) in the combustion chamber can generate oil emission (OE), including harmful gases such as HC and NOx and particulates, with the potential to damage emission aftertreatment systems. In addition, worldwide emission regulations, such as EURO VI, are leading a trend to shift from chassis dynamometer tests in a controlled laboratory environment to real-world driving tests satisfying Real Driving Emissions (RDE) requirements [1]. Moreover, engine operation in hybrid-electric vehicles (HEVs) involves frequent starting and shutting off, in turn introducing more complex control strategies regarding speed and load changes. Under constantly changing engine speed and load, huge spikes of OE were observed occasionally, especially during the transient period when changing from low load to high load [2,3]. Thus, understanding the oil transport characteristics in continuously changing engine working conditions is critical to designing the piston and ring pack in order to reduce real-world driving OE.
Previous studies conducted by Thirouard [4,5], which examined ramp changes of engine load in a 2D laser-induced fluorescence (2D-LIF) engine, showed that during low engine load conditions more oil is transported to the upper piston regions due to the lack of strong blowby gas flow that can carry oil to the crankcase. This indicates that downward blowby is a critical factor in reducing LOC and is therefore a desirable design objective.
Engine Setup
The test engine (Table 1) is the same engine used in previous work [4,5,[9][10][11][12][16][17][18]: a single-cylinder research engine with a custom-made optical liner (Figure 1), allowing a high-speed camera to record the lubrication oil movement as the piston moves. A cylinder head based on a PSA in-line four-cylinder production engine, with the three unused cylinders deactivated, was used. The piston is a prototype piston with the dark graphite coating removed to reflect the laser light for stronger fluorescence signals. The test engine was equipped with a modern TPOCR design with the ends of the expander parallel to each other and making contact. The second ring has a Napier hook chamfer design on the outer surface to store oil (Figure 1). The top ring has a barrel shape with a positive twist. Additionally, in order to have a better view of the last path of oil before reaching the combustion chamber, the top ring was pinned with the top ring gap in the optical window's area. The engine control system is based on a field-programmable gate array (FPGA) system including National Instruments hardware and software. Fuel injection and ignition are all controlled by the FPGA system, and data collection is performed by a Windows computer and cDAQ system. The trigger of the high-speed camera is through the FPGA, which allows the critical operation times to be synchronized. A detailed description of the engine test bench and control setup can be found in [19].
Optical Setup
The custom-made optical liner consists of a transparent sapphire window, 12 mm wide and 98.5 mm long along the piston moving direction, on the thrust side. This allows a high-speed camera (Table 2) to capture the oil movement in the cylinder. The camera has two defined views: • full view, using 128 × 1024 resolution to record the whole optical window; • magnified view, with 1024 × 1024 resolution focused on a 12 × 12 mm square area at a set position.
Furthermore, both high-speed and slow-speed camera control were applied to serve different purposes. The high-speed mode can record as fast as 12,500 FPS with a 1/16,000 s shutter speed, around 1 frame per crank angle at 2000 RPM, to capture the full path as the piston and rings move over several complete engine cycles. The slow-speed mode can capture one frame per cycle at a set crank angle (CA) position, to capture the oil accumulation evolution over longer time scales. The oil was mixed with a specific dye which can be induced to fluoresce by the laser. The detailed theory and setup of the laser-induced fluorescence system were described by Zanghi [16,17] previously.
Test Procedure
The engine speeds were chosen at 1200, 2000, and 3000 RPM, and the engine load was changed by setting different absolute intake pressures (Figure 2) ranging from 120 mbar (closed throttle) to 1 bar (wide open throttle, referred to as WOT hereafter). The lowest intake pressures achieved were 140, 120, and 110 mbar at 1200, 2000, and 3000 RPM, respectively. The oil temperature was controlled at 50 ± 1 °C and the coolant temperature was set at 80 ± 1 °C.
The main mechanism we examined was the effect of the gas flow rate. As the motored condition provides more consistent control of in-cylinder pressure and temperature, the transient experiments were conducted between motored conditions. Between each set of tests, a fired condition at 700 mbar for 5 min was run to create a condition with maximum blowby for the engine to clean the ring pack area. The load was modified using a step transient change, which can be completed in 0.05 s using the FPGA control system. The camera could be triggered either at the same time that the load change occurred, to record the change of oil accumulation during and after the transient, or 1 s before the transient happened, to compare the difference between before and after the transient.
Results and Discussion
Time for Oil to Climb up
When the engine is operated at throttled conditions, especially during engine braking, a relatively low absolute pressure in the cylinder before compression starts can be generated. The reduced cylinder pressure can induce a substantial reverse blowby flow. When the intake pressure is low enough, the blowby reaches zero. Further reducing the intake manifold pressure results in a decrease of the crankcase pressure to maintain zero blowby.
As the engine changes from high load to low load, the force that helps oil climb up along the piston may become stronger due to the decrease of downward blowby gas flow and the increase of upward flow. The most extreme case is the transient from WOT to the closed throttle condition at the lowest intake pressure available. Figure 3 shows an example of oil gradually climbing up the third land, second land and finally the crown land as time goes by after the transient happened. The recording takes one sample at 90 degrees after top dead center (ATDC) of the intake stroke for a total of 1500 engine cycles, limited by the camera's memory. This is 90 s in the 120 mbar intake pressure, 2000 RPM case of Figure 4. Even though the window accounts for only 12 mm of the full bore width, with the inertia force flattening the oil distribution away from the gap, the observed phenomena can be extended to represent the oil accumulation throughout the bore. Because of ring rotation, the gap locations changed during the recording and consequently changed the oil distribution locally. However, the oil distribution in the window area can reach a steady level in all the piston lands when the gaps are far away from the optical window. When a steady oil accumulation at each piston land was achieved and would remain for the rest of the test, it was defined as having reached equilibrium.
As shown in Figure 4, there is a hook chamfer at the bottom of the second ring and a chamfer at the upper edge of the third land. The oil being scraped down can accumulate in both chamfers. At 560 cycles, both of the chamfers were almost full (shown in the green dashed area). Subsequently, at cycle 596 after closing the throttle, a large amount of oil droplets flowed through the top ring gap. These droplets continued all the way to 1500 cycles, indicating that equilibrium may be reached in each region after the droplets appeared.
A side note is that the LOC of the closed throttle conditions was so high that it could be roughly estimated in the following manner. In the experiment, all the lubrication oil was stored in an external tank and circulated in a closed system by two pumps feeding and extracting oil from the engine. With this massive upward oil transport, the LOC rate was estimated at 100 g/hour by measuring the change of the oil level inside the oil tank before and after the experiment, which is far from the usually acceptable LOC of several grams per hour [20] and should be prevented.
Each experiment was performed three times to verify repeatability, as shown in Figure 5. The scattered points are the separate measurements at each condition, and the plot shows their average time. At each engine speed, once the engine load increased beyond a certain point, the oil would never reach the crown land. In general, a higher engine load results in a longer time for oil to reach equilibrium in all three regions. However, the speed dependency is not clear.
Blowby Separation Line
The amount of oil droplets transported through the top ring gap during the low load (under 150 mbar) period had no significant difference at 2000 RPM. However, a longer time for oil to reach the top ring gap, as well as the second and third lands, was observed with higher intake pressure. A sudden change in the overall trend only happened when reaching 150 mbar, as no oil droplets through the top ring gap were observed (Figure 6). To verify this, the engine stayed at 150 mbar for over 10 min and still no oil droplets were observed. The same verification was also done for all the intake pressures over 150 mbar. The intake pressure of 150 mbar was the blowby separation line at 2000 RPM. When the intake pressure was higher than 150 mbar, the measured blowby is a positive number, indicating that the overall gas flow direction is from the combustion chamber to the crankcase. With the limitation of the measurement system, negative blowby numbers cannot be measured. However, the measured crankcase pressure showed a drop when running under this separation line. Under the condition of the crankcase pressure being atmospheric, the blowby should be negative and the overall gas flow reversed its direction (Figure 7). In general, the blowby separation line is the intake pressure above which the overall blowby starts to become positive.
In addition, the 2D ring dynamics and gas flow model developed by Tian [21] was used for calculation. This model uses the cylinder pressure obtained from experiments as the pressure input. Engine geometry such as the piston design, ring profiles and thermal deformation was considered. Ring dynamics and gas flow in each piston land and ring groove can be calculated at each crank angle. The simulation results (Figure 8) show that the blowby becomes negative as the engine load drops below 150 mbar if the crankcase pressure is assumed to be atmospheric. Additionally, the crankcase pressure needed to maintain zero blowby drops below 1 bar as well. Both results verified an absolute intake pressure of 150 mbar as the blowby separation line, above which the engine achieves positive blowby and vice versa. At an engine speed different from 2000 RPM, the blowby can change at the same intake pressure, as shown in Figure 9, calculated from the 2D model [21]. From the experimental side, the measurement device has a lowest detectable limit of 10 mbar. It did not detect a difference in the blowby separation line at the chosen engine speeds of 1200, 2000 and 3000 RPM.
Oil Accumulation in TPOCR
The oil control ring is the first barrier to control the vast amount of o it is critical to understand how oil leaks through the OCR, and, particul flows into and out of the OCR groove.
In order to get the best view of oil accumulation inside the OCR gro fication view was applied with a 1024 × 1024 resolution focused on a 12 window area. The camera position was chosen at 76CA of the intake str ment the inertia force is changing direction from upwards to downward to the piston. Therefore, the oil accumulation reflects the maximum effec inertia force in a cycle, dwelling on the upper part of the groove. Since th able to see a shallow depth behind the optical window (Figure 10a), with the optical view is able to represent the whole volume inside the OCR gro the ability for quantified measurement. Recording started at the same tim blow-by(SLPM) Thus, the blowby separation line is the controlling factor of whether or not oil droplets will appear through top ring gap. Running under this separation for a long enough time can result in huge LOC and should be eliminated in engine working condition. The source of oil going up at low load will be examined in 3.2.
Oil Accumulation in TPOCR
The oil control ring is the first barrier to control the vast amount of oil below it. Thus, it is critical to understand how oil leaks through the OCR, and, particularly, how the oil flows into and out of the OCR groove.
In order to get the best view of oil accumulation inside the OCR groove, the magnification view was applied with a 1024 × 1024 resolution focused on a 12 × 12 mm optical window area. The camera position was chosen at 76CA of the intake stroke. At this moment the inertia force is changing direction from upwards to downwards with reference to the piston. Therefore, the oil accumulation reflects the maximum effect of the upward inertia force in a cycle, dwelling on the upper part of the groove. Since the camera is only able to see a shallow depth behind the optical window (Figure 10a), with the oil level flat, the optical view is able to represent the whole volume inside the OCR groove and provide the ability for quantified measurement. Recording started at the same time when transient happened. Slow speed camera control was applied to capture one frame per engine cycle. Computer vision in Python was applied to the recorded video to quantify the oil accumulation inside the OCR groove. A program to trace the upper and lower rail of the OCR was used to identify and separate the region inside the OCR. Figure 10b shows the result of tracing OCR rails during the recording, as the OCR moves up and down. When the oil is leveled on all the pitches, the pitches with an expander had less oil, as shown in Figure 11. With a measure of oil level harder to identify, it is easier to implement the total brightness measurement and it qualitatively correlates with the amount of oil. Figure 12a shows the brightness distribution below the OCR upper rail. It is clear the peaks and valleys match the position of pitches in the expander. Overall, the center of picture has the highest brightness because the laser pointing on the window is a Gaussian distribution [19], which has the highest intensity in the center. Furthermore, the highest brightness among all the pixels inside the OCR is always around 2200 regardless of load, referring to the oil at the center inside the OCR groove under upper rail. This number represents the saturated signal at the center of the laser with this optical setup and temperature. Thus, averaging the brightness in both circumferential and axial direction can represent the oil level's height inside the OCR groove. Computer vision in Python was applied to the recorded video to quantify the oil accumulation inside the OCR groove. A program to trace the upper and lower rail of the OCR was used to identify and separate the region inside the OCR. Figure 10b shows the result of tracing OCR rails during the recording, as the OCR moves up and down. When the oil is leveled on all the pitches, the pitches with an expander had less oil, as shown in Figure 11. With a measure of oil level harder to identify, it is easier to implement the total brightness measurement and it qualitatively correlates with the amount of oil. Computer vision in Python was applied to the recorded video to quantify t cumulation inside the OCR groove. A program to trace the upper and lower r OCR was used to identify and separate the region inside the OCR. Figure 10b s result of tracing OCR rails during the recording, as the OCR moves up and dow the oil is leveled on all the pitches, the pitches with an expander had less oil, as s Figure 11. With a measure of oil level harder to identify, it is easier to implement brightness measurement and it qualitatively correlates with the amount of oil. Figure 12a shows the brightness distribution below the OCR upper rail. 
It is peaks and valleys match the position of pitches in the expander. Overall, the picture has the highest brightness because the laser pointing on the window is a G distribution [19], which has the highest intensity in the center. Furthermore, th brightness among all the pixels inside the OCR is always around 2200 regardles referring to the oil at the center inside the OCR groove under upper rail. This represents the saturated signal at the center of the laser with this optical setup perature. Thus, averaging the brightness in both circumferential and axial dire represent the oil level's height inside the OCR groove. pixels Figure 11. Oil leveled on all the pitches. Figure 12a shows the brightness distribution below the OCR upper rail. It is clear the peaks and valleys match the position of pitches in the expander. Overall, the center of picture has the highest brightness because the laser pointing on the window is a Gaussian distribution [19], which has the highest intensity in the center. Furthermore, the highest brightness among all the pixels inside the OCR is always around 2200 regardless of load, referring to the oil at the center inside the OCR groove under upper rail. This number represents the saturated signal at the center of the laser with this optical setup and temperature. Thus, averaging the brightness in both circumferential and axial direction can represent the oil level's height inside the OCR groove. Figure 12b is the oil accumulation's change after the transient happened from WOT to 120 mbar at 2000 RPM. The y axis's unit is the absolute brightness averaged inside the OCR groove with a greater value representing more oil accumulation. At the first 20 cycles, the oil accumulation grows fast. Then, the oil amount inside the groove reaches a steady pattern with regular small fluctuations interrupted by large spikes, called dynamic equilibrium here. The small fluctuation represents the change in the pitch distribution in the window. The large spikes are the result of the lower rail gap being around the window as the lower rail gap provides an oil supply path into the groove.
Dynamic Equilibrium Level
It is clear both from the video (Figure 13) and the computer vision plot (Figure 14) that, at the same 2000 RPM, less oil is accumulated inside the OCR groove at equilibrium as the intake pressure increases. When the intake pressure was at the lowest value of 120 mbar, almost half of the OCR groove could be filled with oil after reaching equilibrium. As the intake pressure increases to a medium level at 500 mbar, only a thin layer of oil can be seen below the upper rail region. The reference is the WOT motored working condition before changing to low load, where the OCR groove is almost clean.
Oil Supply to the OCR Groove Drain Holes
There are four drain holes inside the OCR groove, two on each side of the piston (Figure 15). They are designed to allow downward blowby gas to go through and carry the oil inside the OCR groove back to the crankcase. However, the drain holes can also serve as oil supply holes. During an engine cycle when the cylinder pressure is low, due to the lack of blowby gas, oil at the bottom of the piston can be transported into the OCR groove through the drain holes. This oil can come from the piston cooling jet or be splashed from the crankshaft. Additionally, during down strokes, the scraped oil can flow directly into the groove through the drain holes. Therefore, the drainage should be understood as a net draining effect, namely the oil flowing out through the drain holes minus the oil flowing in, as sketched below. If more oil is transported through the drain holes into the groove, even at the same blowby condition, there will be more accumulated oil.
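The net draining effect can be captured in a toy per-cycle mass balance, sketched below: the groove's oil content changes by the supply through the liner and inertia-driven paths minus the outflow through the drain holes. All rates are hypothetical placeholders for illustration, not measured values.

```python
def groove_oil_update(oil, inflow_scrape, inflow_inertia, drain_out):
    """One-cycle bookkeeping of oil mass in the OCR groove, illustrating
    the net draining effect: accumulation changes by the supply through
    the liner and inertia paths minus the drain-hole outflow. Quantities
    are hypothetical, e.g. in mg per cycle."""
    net = (inflow_scrape + inflow_inertia) - drain_out
    return max(0.0, oil + net)   # oil mass cannot go negative

# Below the separation line, the drain term is weak, so oil builds up:
oil = 0.0
for cycle in range(100):
    oil = groove_oil_update(oil, inflow_scrape=0.4, inflow_inertia=0.3, drain_out=0.2)
```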
Below the blowby separation line, even though the drain holes can still drain oil due to inertia, the blowby, as an average effect, cannot release oil. The oil supply comes from the inertia force and the reverse flow. As the load increases, the observed oil level reduction indicates a reduced reverse flow. When the intake pressure increases above the blowby separation line, a similar trend is observed, as the increased positive blowby gas carries oil back to the crankcase through the drain holes.
Lower Rail Gaps
As discussed earlier, the large spikes in Figure 14 appear when the lower rail gap is in the window area. Figure 16 shows the contrast between the oil accumulation inside the OCR groove with and without the presence of the lower rail gap at different intake manifold pressures. Additionally, one can see the decrease of the oil accumulation with the increase of intake pressure regardless of the presence of the lower rail gap.
The oil can enter the OCR groove through both the liner and the piston, as shown in Figure 17. The first oil flow path is present during the entire down stroke, when the oil on the liner is scraped and spread into the groove [22]. Thus, the oil residing on the liner below the oil control ring is a determining parameter of the rate of oil supply from this path. The second path is present when the piston travels in the upper part of the liner and the inertia force due to piston acceleration points upwards. The amount of oil stored in the chamfer area between the oil control ring and the skirt is critical to this second path, as the oil accumulated in the skirt chamfer can be driven up by the inertia force. Although it is possible that the skirt allows less oil to pass with higher intake pressure, leaving less oil in the piston skirt chamfer area so that the inflow rate to the OCR groove is lower, these mechanisms were not directly observed. To understand the two trends observed in Figure 14, the oil inflow, outflow, and redistribution in the window area between the two drain holes must be examined. Further quantitative analysis is left for future publications; in Section 3.3, a brief qualitative analysis is presented.
Upper Rail Leaking
There are three main oil leaking sources from the TPOCR groove, namely direct leakage from the upper rail gap, oil leaked from the upper flank, and up-scraping by the upper rail OD face. This work is focused on the upper rail gap leakage, as the other two sources are not easy to identify. Figure 18 shows the oil leakage jet from the upper rail gap. One can see that with less intake manifold pressure, more oil is present inside the groove and a larger amount of oil in the jet on the piston third land comes out of the upper rail gap. With intake manifold pressure below 150 mbar, which is the blowby separation line, the oil jet can reach the top of the third land or the inside of the hook of the second ring, with possible further lateral spreading. Above 150 mbar, when the blowby becomes positive, the oil jet only reaches halfway up the third land and will most likely be returned to the OCR groove when the inertia force shifts downwards. Therefore, it can be concluded that a net oil leakage through the upper rail gap does not exist when the blowby is positive for this engine.
The oil flow through the upper rail gap is determined by the oil accumulation inside the groove and the driving forces, which include the pressure difference and the inertia force from piston acceleration. It is thus not surprising that lower intake manifold pressure results in more oil flow through the upper rail gap. Figure 19 shows that lower intake pressure can lead to lower gas pressure in the third land as well. What is more interesting is that the presence of the upper rail gap may help suck oil from the drain holes when the gas flow runs from the drain holes to the upper rail gap during the intake and early part of the compression stroke.
Importance of Rail Gap Location
While the TPOCR rotates as a whole at all speeds and loads, it is observed that a relative rotation exists between the upper rail and lower rail at a speed less than the overall rotation (Figure 20). When the gaps are close enough, at the same engine working condition, the oil leakage jet is stronger compared to the situation where the gaps are far away from each other. The obvious reason is that the oil below the TPOCR can find its way past the upper rail gap. As such, one of the main advantages of the TPOCR, namely mis-aligning the rail gaps to avoid a direct oil flow from below to above the OCR, is temporarily muted. On the other hand, thanks to the parallel contact at the expander gap, the rail gaps are never trapped together with the expander gap forming a permanently aligned gap. Yet, the upward oil flow is enhanced during the period when the two rail gaps are close, especially during low load conditions, contributing to unsteadiness of the LOC.
Now, combining the findings from Sections 3.2.2 and 3.2.3, the main contributors to the increase of the oil accumulation inside the TPOCR groove in the window area when the intake manifold pressure is decreased may be explained as follows:
• When the upper rail gap is rotated to the window area, it helps suck more oil into the part of the TPOCR groove between the two drain holes when the intake manifold pressure is below the blowby separation line. On the contrary, the upper rail gap can help release more oil through the drain holes when the blowby is positive;
• The ring rotation is fast enough that the effect of the upper rail gap remains before the gap comes back to the window area again.
Effect of Engine Speed
A higher engine speed introduces a higher inertia force. This can result in faster oil spreading along both the axial and circumferential directions, and thus greater rates of oil release and leakage from each region. Therefore, less oil accumulation in each region can be observed at higher engine speed [23]. The same effect was also observed in this study, as shown in Figure 21. At the same engine load of 150 mbar, a higher engine speed results in less oil accumulated inside the OCR groove when reaching equilibrium.
Overall, as we are mostly concerned about oil leakage, a stronger leakage jet can be observed with the reduction of engine speed because of the greater amount of oil accumulated inside the OCR. Thus, a transient from low-speed, low-load to high-speed, high-load operation moves the system from the highest oil accumulation to the lowest. An engine operated through this condition may incur a large LOC if the release is not managed well. The verification of these implications will be conducted in future studies.
After Passing OCR: Pumping Effect
In general, the low load and low speed condition can result in a higher oil accumulation in the OCR groove and more oil leakage to the upper region. After passing the OCR, the oil can be pumped upwards by the top two rings due to the change of ring lift and pressure. In order to quantify the oil pumping flow rate and its direction, the pumping model developed by Liu [14] was applied. This pumping model uses the pressure at the piston lands, the pressure inside the ring grooves, and the ring lift to calculate the pumping flow rate at each crank angle. The pumping interface was decided according to the piston and ring geometry. Shear stress between air and oil was neglected. The needed inputs, such as the ring lift characteristics and the pressures inside the ring grooves and at the piston lands, all come from the 2D ring dynamics and gas flow model developed by Tian [21].
Hook Chamfer Design Results in Fully Flooded Pumping
As shown in Figure 22, the combination of the hook and chamfer of the Napier second ring is able to prevent oil from blocking the entrance of the lower flank clearance when the oil cannot fill the entire region. However, if the oil fills the entire region, it becomes available to be pumped into the second ring groove through the second ring dynamics and the surrounding-pressure variation. As discussed earlier, the filling of the hook and chamfer becomes visible when the intake manifold pressure is below the blowby separation line.

When using the pumping model, the oil reservoir's size at the inner edge (ID) and outer edge (OD) of the ring-groove interface needs to be set manually (Figure 23). Figure 24 shows the sensitivity of the oil flow rate across the lower flank clearance of the second ring and groove to the assumed oil puddle size. It can be seen that when the oil puddle size approaches the ring/groove clearance, the flow rate reaches an asymptotic value. This asymptotic value may represent the maximum flow rate across the ring/groove clearance and is used in this paper to evaluate the oil flow direction and flow rate across the ring/groove clearance. Below the blowby separation line, the estimates can be considered close to reality. For all the other conditions and interfaces, the estimates should be considered as the maximal potential.
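A minimal sketch of the puddle-size sensitivity sweep behind Figure 24 is given below. The flow-rate function is a hypothetical stand-in for the pumping model of Liu [14]; only the saturating shape matters here, to show how the asymptotic (fully flooded) value is extracted from the sweep.

```python
import numpy as np

# Sweep the assumed oil-puddle size and record the resulting flow rate, as
# in Figure 24. flow_rate() is a hypothetical stand-in for the pumping model
# of Liu [14]; only the saturating shape matters for the illustration.
def flow_rate(puddle, clearance, q_max=1.0):
    return q_max * min(puddle / clearance, 1.0)

clearance = 10e-6                      # assumed ring/groove clearance, m
puddles = np.linspace(1e-6, 20e-6, 40)
rates = np.array([flow_rate(p, clearance) for p in puddles])

# The plateau reached once puddle >= clearance is the asymptotic
# (fully flooded) value used to bound the oil transport estimates.
q_asymptotic = rates[-1]
```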
Pumping Rate of the Second Ring
Based on fully flooded boundary conditions, regardless of engine speed and load, the pumping direction at the second ring lower flank is always into the groove. The general trend is for the pumping flow rate to increase with the reduction of engine load. A higher engine speed can result in a higher pumping rate by running more cycles in the same time. Calculated with the size of the second ring groove, the pumping rate can be converted to the time for the pumped oil to fully fill the groove, as Figure 25 shows.

After oil enters the second ring groove and approaches the upper flank, the pumping effect will also occur there. The flow direction at the upper flank points out of the groove, which means oil can pass the second ring to the upper regions. In addition, oil leakage from the second ring gap was observed as well, providing another path for oil to climb up. Figure 26 shows the pumping rate at 2000 RPM at both flanks, where a positive number means pumping into the groove. The pumping rate also increases with the reduction of engine load.
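Converting a pumping rate into a groove fill time is a simple volume balance; the sketch below illustrates the arithmetic with placeholder numbers (the groove volume and per-cycle pumping rate are assumptions, not values from Figure 25).

```python
# Volume balance behind Figure 25: groove volume divided by the per-cycle
# pumping rate gives the number of cycles to fill the groove; engine speed
# converts that into time. Numbers below are placeholders.
groove_volume_mm3 = 50.0     # assumed free volume of the second ring groove
pump_per_cycle_mm3 = 0.01    # assumed oil pumped in per engine cycle
rpm = 2000

cycles_to_fill = groove_volume_mm3 / pump_per_cycle_mm3
cycles_per_second = rpm / 60 / 2          # four-stroke: one cycle per two revs
print(f"{cycles_to_fill:.0f} cycles ≈ {cycles_to_fill / cycles_per_second:.0f} s")
```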
Pumping Rate of the Top Ring
The top ring is designed mainly to seal the gas and has a barrel-shaped running face. There is no chamfer on the outside, so the interface easily becomes fully flooded. The oil available for the top ring to pump comes from the leakage past the second ring, mainly by pumping and gap leakage. Since at high load there is not enough oil leakage from the OCR and the hook chamfer on the second ring can prevent oil from being pumped up, the top ring calculation was only conducted at the low load condition. In addition, the leakage from the second ring gap is also reduced at high load, resulting in less oil on the third land.
The overall character of the top ring's pumping effect (Figure 27) is similar to that of the second ring. A higher load can reduce the pumping effect and a higher RPM can result in a higher flow rate. Looking back at the full view's transient data, running under the blowby separation line for typically 30-50 s can cause oil droplets to be seen through the top ring gap. This matches the time for oil pumping to pass the top two rings. Note that the oil droplets observed through the top ring gap do not necessarily mean the ring grooves are full. The direct path from the gaps and the gas flow in non-contacting regions [24] can also contribute to upward oil transport past the top two rings.
Summary
For the ring pack design studied here, it can be concluded that the condition with zero blowby separates two drastically different oil transport patterns across the piston ring pack. When the intake manifold pressure is below the one resulting in zero blowby (the blowby separation line), the oil can first leak through the upper rail gap of the TPOCR before flooding the hook/chamfer region of the Napier second ring, and then finally flood the top ring gap area and pass the top ring going upwards. The drain holes and the lower rail gap can both be supply and release routes for the oil accumulation inside the TPOCR groove. In addition to their gaps, the second ring and top ring can pump oil through the ring/groove clearance when the clearance boundaries become full, which can occur when running below the blowby separation line. Of course, one obvious remedy in practice is to always run the engine with a positive blowby. However, this is not often the case. How to design the drain holes to minimize the oil accumulation inside the OCR groove for both high- and no-load conditions then becomes critical. To do that, an adequate understanding of the oil transport inside the OCR groove needs to be established.
Conclusions
The following conclusions can be drawn from this work:
1. When the engine is running under the blowby separation line for a long enough time, such as with engine braking while driving, the overall reverse flow will gradually drive oil upwards, and oil droplets can even be seen through the top ring gap. This will result in massive LOC and should be eliminated in engine operation;
2. The oil inside the OCR groove can leak out from the upper rail gap. Low engine load and low engine speed can both introduce a higher oil accumulation level, which can result in more oil leakage. Alignment of the two OCR rail gaps can result in more leakage than when the gaps are far away from each other;
3. The drain holes inside the OCR groove can act as oil supply holes. When running at zero blowby without a draining effect, the oil leakage jet from the upper rail gap can hit the second ring. This acts as the starting point of massive upward oil pumping by filling the hook chamfer;
4. The top two rings can pump the oil upwards through the ring/groove clearances at all the load conditions tested, provided there is sufficient oil supply to the boundaries of the ring/groove clearance. Therefore, the limiting factor for the ring/groove clearance to become an oil-leaking path is the oil supply;
5. It needs to be emphasized that different engines reach zero blowby at different levels of intake manifold pressure. Zero blowby, rather than the magnitude of intake pressure, as the threshold for a drastic change of oil control bears more general implications. Furthermore, the findings in this work are applicable not only to different SI engines but also to gas and hydrogen engines equipped with TPOCR. | 2022-10-13T15:21:11.626Z | 2022-10-10T00:00:00.000 | {
"year": 2022,
"sha1": "5e72dc60062795d6996c743ed31d4ec9ff7f700c",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-4442/10/10/250/pdf?version=1665401325",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "72866200372cd20e3ac8888490e093b000f3a9df",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
237279232 | pes2o/s2orc | v3-fos-license | Profiling single-cell chromatin accessibility in plants
Summary Coupling assay for transposase-accessible chromatin sequencing (ATAC-seq) with microfluidic separation and cellular barcoding has emerged as a powerful approach to investigate chromatin accessibility of individual cells. Here, we define a protocol for constructing single-cell ATAC-seq libraries from maize seedling nuclei and the preliminary computational steps for assessing data quality. This protocol can be readily adapted to other plant species or tissues with minor changes to reveal chromatin accessibility variation among individual cells. For complete details on the use and execution of this protocol, please refer to Marand et al. (2021).
BEFORE YOU BEGIN
Experimental design 1. In the following protocol, we describe the development of a single-cell ATAC-seq (scATAC-seq) library using freshly harvested Zea mays (maize) seedlings as an example. The steps described below can be applied to any plant species or tissue that is capable of producing intact nuclei suspensions (Ricci et al., 2019). In addition to 7-day old seedlings, this protocol has been successfully employed in axillary buds, pistillate inflorescence, staminate inflorescence, crown roots and embryonic roots of Zea mays, as well as the primary root of Arabidopsis thaliana. We detail nuclei isolation using fluorescence-activated nuclei sorting (FANS), but note that traditional wash and centrifugation steps may be taken by laboratories without access to cytometric instruments, with an increased likelihood of organellar contamination (Lu et al., 2017; Zhong et al., 2021). 2. For experiments aiming to compare samples, such as different genotypes, case/control treatments, et cetera, it is imperative to construct all scATAC-seq libraries simultaneously to avoid batch effects (Lafzi et al., 2018). Currently, commercial microfluidic chips (10× Genomics) allow for up to eight samples to be processed in parallel. It is also important to consider that at least one biological replicate per sample is highly recommended for all experimental designs. For experimental designs with more than four samples, biological replicates should be constructed in a separate session to remediate technical effects. We stress that biological replicates are critical for establishing reproducibility of the tested conditions. Lysis and Collection buffers can be stored at 4°C for several months without Spermine and 2-ME, which should be added the day of the experiment. Spermine stock should be stored at −20°C while all the remaining stocks can be stored at 2°C. For optimal results, prepare the Lysis and Collection Buffers fresh the same day as nuclei isolation experiments.
CRITICAL: 2-ME is a potential toxin and Spermine is a skin corrosive/irritant. Both should be handled under a fume hood for safety.
STEP-BY-STEP METHOD DETAILS

Sorting intact plant nuclei
Timing: [0.5-1 h]

Profiling chromatin accessibility requires isolating nuclei from heterogeneous tissues (Minnoye et al., 2021). Tn5 can tagment organellar genomes with high efficiency due to a lack of nucleosomes on chloroplast and mitochondrial DNA (Lu et al., 2017). The following steps are designed to remove cellular and organellar debris and enrich for single nuclei. The gating strategy illustrated here has been optimized for maize nuclei; users may need to adjust the gating strategy for their specific organism. Growth and sampling conditions for published scATAC-seq data from maize and Arabidopsis thaliana tissues can be found in the STAR Methods section of Marand et al. (2021).
1. Approximately 3-4 B73 maize V1 seedlings (7-10 day old) were placed on a petri dish on ice and saturated with 500 µL of chilled Lysis buffer (Figures 1A and 1B). 2. Using a sterilized single-edge razor blade, seedlings were chopped for 2 min to break the cell wall and release nuclei into solution (Figure 1C). 3. The aqueous nuclei slurry was then filtered through a 40 µm cell strainer (Figure 1D).
Optional: Two layers of sterilized miracloth can be used in place of a 40 µm cell strainer.
4. 1 µL of DAPI (1 mg/mL) was added for every 200 µL of lysis buffer recovered (e.g., 2 µL DAPI for 400 µL of lysis buffer post filtering). 5. Sorting was performed on a MoFlo Astrios EQ running sterile preservative-free flow cytometry solution. a. The instrument was set up with a 70 µm tip, sheath pressure at 60 psi and sample pressure at 60.1 psi. b. Nuclei were run between 200 and 1000 events/second depending on their concentration. c. Nuclei were first gated on log scaled forward scatter (FSC) versus log scaled side scatter (SSC) to eliminate debris (Figure 2A). d. All subsequent data were collected in linear mode.
e. DAPI width versus DAPI area gating was used to eliminate doublets (Figure 2B). f. The next gate was on a graph depicting green fluorescence vs. DAPI area to eliminate any green fluorescing material (Figure 2C). g. The final gate was on DAPI area, selecting the first strong peak and all subsequent peaks depending on the downstream experiment (Figure 2D). h. Sorting was performed in Purity 1 drop mode and triggered on DAPI fluorescence. 6. A total of 120,000 nuclei were collected across four 1.5 mL microcentrifuge catch tubes (30,000 nuclei in 200 µL of collection buffer, per tube) and immediately placed on ice.
CRITICAL: Refrain from chopping for longer than 2 min to reduce rupturing the nuclear envelope. Keep nuclei on ice whenever possible.
Timing: [30 min]
This step describes the evaluation of nuclei quality and concentration prior to generating scATAC-seq libraries.
7. Four tubes containing nuclear suspensions (~230 µL in each tube) were centrifuged at 500 RCF for 5 min at 4°C in a swinging bucket centrifuge. 8. The supernatant was discarded, leaving approximately 10 µL in the bottom of each tube.
CRITICAL: Be careful not to disrupt the nuclei pellet at the bottom of the tube when collecting supernatant. The pellet may or may not be visible after centrifugation.
9. Nuclei pellets were resuspended in the remaining 10 µL of collection buffer by pipetting (10 times) and pooled in a single 1.5 mL centrifuge tube (total volume ~40 µL).
CRITICAL: The nuclei resuspension should be done by gently pipetting with no visible debris or clumps in the tubes.
10. Nuclei concentration and quality were evaluated on a hemacytometer under a fluorescent microscope (Figure 3). CRITICAL: Samples with excessive nuclear aggregates (clumping, Troubleshooting 1) or a low frequency of intact nuclei (Troubleshooting 2) should not be used for scATAC-seq library preparation.
11. Input nuclei for the 10× Genomics scATAC-seq solution are required to be suspended in diluted nuclei buffer. To achieve this, nuclei were once again pelleted in a swinging bucket centrifuge at 500 RCF for 5 min at 4°C. 12. The supernatant was discarded with a P200 filter pipette tip, leaving approximately 10 µL of collection buffer. 13. The final 10 µL of collection buffer was discarded using a P10 filter pipette tip.
CRITICAL: Be careful not to disrupt the nuclei pellet at the bottom of the tube. The pellet may or may not be visible after centrifugation.
14. Nuclei were resuspended in 10 µL of diluted nuclei buffer (10× Genomics) by pipetting 10 times with wide-bore P20 filtered tips. CRITICAL: The nuclei resuspension should be performed by gently pipetting, with no visible debris or clumps in the tubes.
Optional: An additional check of nuclei concentration and quality can be performed using 1-2 µL of DAPI-stained nuclei with a hemacytometer under a fluorescent microscope. A loss of ~50% of total nuclei is common during centrifugation.
15. The concentration of nuclei was then adjusted to 3,200 nuclei per µL using diluted nuclei buffer (10× Genomics). Keep in mind that if the sample number is less than eight it is necessary to load the unused chip wells with 50% glycerol. An easy solution is to load unused wells while the tagmentation reaction is being performed (5-10 min before the reaction finishes). c. For optimum loading, wait at least 30 s before adding glycerol in each sequential row. 18. Optimal droplet formation on the Chromium microfluidics chip H requires a predictable flow rate to generate gel beads in emulsion (GEMs). a. Any condition that impairs GEM formation, such as bubbles in the sample/master mix or gel beads well, should be avoided. b. Slowly adding reagents and samples to the bottom of each well helps prevent bubble formation. 19. During double-sided size selection, the volumetric ratio of SPRIselect to the sample library specified in the manual (74 µL SPRIselect to 130 µL of library) ensures proper selection of amplified fragments by removing primer dimers and excessively large fragments. Collecting more sample volume than the recommended 130 µL in the manual may negatively affect library quality and recovery.
EXPECTED OUTCOMES
The quality of the scATAC-seq library preparation can be evaluated by assessing the library concentration via Qubit or qPCR and the distribution of fragment lengths using a fragment analyzer (Figure 4A) (Buenrostro et al., 2013, 2015). In general, good quality scATAC-seq libraries will have concentrations greater than 5 ng/µL and exhibit a high proportion of nucleosome-free fragments (~220 bp) and periodic enrichment of fragments corresponding to mono-nucleosomes (~330 bp) and di-nucleosomes (~500 bp). Libraries lacking nucleosome-protected fragments or with low library concentration yields may indicate lysis of the nuclear envelope prior to tagmentation or a low recovery of nuclei (Figure 4B) (Troubleshooting 3).
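One quick way to check this fragment-size periodicity from sequencing data is to histogram fragment lengths; the sketch below assumes a tab-separated fragments file with start/end coordinates in columns 2-3 (the file name and layout are assumptions, not part of the protocol).

```python
import numpy as np

# Histogram fragment lengths to look for the nucleosome-free (~220 bp) and
# mono-/di-nucleosome (~330/~500 bp) periodicity. The fragments file name
# and layout (start/end in columns 2-3) are assumptions.
sizes = []
with open("fragments.tsv") as fh:
    for line in fh:
        fields = line.rstrip("\n").split("\t")
        sizes.append(int(fields[2]) - int(fields[1]))

counts, edges = np.histogram(sizes, bins=np.arange(0, 1000, 10))
modal = edges[counts.argmax()]
print(f"modal fragment size ≈ {modal:.0f}-{modal + 10:.0f} bp")
```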
Following library sequencing and computational quality control/data processing, a successful scATAC-seq experiment will generally result in 40-60% of input nuclei passing quality control thresholds (example: 15,000 input nuclei yielding 6,000-9,000 high-quality nuclei). Several quality metrics, such as the number of unique Tn5 insertions per nucleus, the percentage of reads mapping near transcription start sites (TSSs), and the Fraction of Reads in Peaks (FRiP), should be assessed, keeping in mind that the distributions of these metrics are highly dependent on the species and organ or tissue type. As a loose rule, nuclei should have at least 1,000 unique Tn5 insertions, a fraction of reads near TSSs (within 1 or 2 kb, depending on genome size and/or number of genes) greater than 0.2, and a FRiP greater than 0.1. To establish reproducibility, biological replicates should be constructed and processed independently, and joined following identification of barcodes representing nuclei. Reproducible chromatin accessibility signals can be determined by assessing nuclei similarity according to the biological replicate of origin on the UMAP embedding. Louvain clusters containing approximately equal proportions of two or more biological replicates are generally regarded as reproducible. Additional details and an example of quality control and analytical steps can be found below.
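A minimal sketch of applying these loose rules of thumb to a per-barcode table is shown below; the data frame and its column names are assumptions, and the thresholds should be tuned per species and tissue as discussed in Troubleshooting 4.

```python
import pandas as pd

# Filter a hypothetical per-barcode QC table on the three metrics above.
# Column names are assumptions; thresholds follow the loose rules of thumb
# and should be adjusted per species/tissue.
qc = pd.read_csv("per_barcode_qc.tsv", sep="\t")

nuclei = qc[(qc["unique_tn5"] >= 1000)   # unique Tn5 insertions per nucleus
            & (qc["frac_tss"] > 0.2)     # fraction of reads near TSSs
            & (qc["frip"] > 0.1)]        # fraction of reads in peaks
print(f"{len(nuclei)} of {len(qc)} barcodes pass QC")
```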
QUANTIFICATION AND STATISTICAL ANALYSIS
The following steps outline the initial processing and quality control for the Zea mays seedling scATAC-seq data generated by Marand et al. (Figure 5). For tutorials and example code for evaluating data quality and clustering with Socrates, see the following link: https://github.com/plantformatics/Socrates/ 1. FASTQ files were generated using cellranger-atac mkfastq (v3.0) with default parameters. 2. FASTQ files were then trimmed and aligned to the maize B73 AGP v4 reference genome using cellranger-atac count (v1.2) to obtain an unfiltered BAM file containing all sequenced reads with corrected barcodes appended to the "CB" tag. 3. Uniquely and properly paired mapped reads were extracted using samtools with the "-q 10 -f 3" parameters and by removing reads with more than one alignment in the "XA" tag added by BWA mem (a part of the cellranger-atac count pipeline).
Optional: Researchers may wish to adjust the "-q" parameter depending on the species. We opt to use a mapping quality of 10 as a threshold to avoid removing reads derived from regions associated with the whole-genome duplication event that occurred ~5-12 Mya in maize (Schnable et al., 2011).
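The samtools filtering in step 3 can equivalently be expressed with pysam, as in the hedged sketch below; the file names are placeholders, and this is an illustration of the filter logic rather than the exact command used in the protocol.

```python
import pysam

# Keep uniquely mapped, properly paired reads (equivalent to
# `samtools view -q 10 -f 3`) and drop BWA multi-mappers that carry an
# XA tag. Input/output file names are placeholders.
with pysam.AlignmentFile("possorted.bam", "rb") as bam, \
     pysam.AlignmentFile("filtered.bam", "wb", template=bam) as out:
    for read in bam:
        if read.mapping_quality < 10:        # -q 10
            continue
        if not (read.is_paired and read.is_proper_pair):  # -f 3
            continue
        if read.has_tag("XA"):               # alternative hits from BWA mem
            continue
        out.write(read)
```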
4. PCR and optical duplicates were removed using picardtools MarkDuplicates with the following parameters: "REMOVE_DUPLICATES=true BARCODE_TAG=CB". 5. The filtered BAM file was converted to BED (Quinlan and Hall, 2010) format representing base-resolution Tn5 integration sites by adjusting the 5′ coordinates of forward and reverse aligned reads by +4/−5, respectively (Column 1 = chromosome, Column 2 = start position, Column 3 = end position, Column 4 = barcode, Column 5 = strand); see the sketch after this list. 6. Unique Tn5 insertions were retained for each barcode using the UNIX tool uniq. 7. Barcodes passing quality thresholds (unique Tn5 insertions per nucleus, percent Tn5 insertions within 2 kb of TSSs, and FRiP) were identified using the R package Socrates v0.0.0.9 (example code can be found here: https://github.com/plantformatics/plant_scATAC_STAR_Protocol) and are hereafter referred to as nuclei (Troubleshooting 4). a. Socrates uses MACS2 (Zhang et al., 2008) to identify Accessible Chromatin Regions (ACRs), which depends on species-specific information. Researchers should take care to ensure that the parameters selected are appropriate for their species of interest. b. The effective genome size, which represents the proportion of mappable sequence, is generally 60-80% of the total genome size. For maize, the effective genome size was set to 1.6e9. c. Spline-fitting identified nuclei as barcodes with more than 2,348 unique Tn5 insertions (Figure 5A). d. Nuclei with greater than 20% of Tn5 insertions within 2 kb of TSSs and 30% of Tn5 insertions within ACRs were retained as intact nuclei for further analysis (Figure 5A). e. A sparse binary matrix composed of ACRs (rows) by nuclei (columns) was generated by Socrates using the "convertSparseData" function for further processing. 8. Following data quality evaluation and processing, nuclei with similar genome-wide chromatin accessibility profiles were clustered into groups by first filtering low-frequency features and nuclei with fewer than 1,000 accessible features (Figure 5B). 9. Finally, Louvain clusters of nuclei representing putative cell types were identified using the succession of the "regModel", "reduceDims", and "callClusters" functions, part of Socrates (Figure 5C) (Troubleshooting 5).
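Steps 5-6 (the +4/−5 Tn5 coordinate shift and per-barcode de-duplication) can be sketched as follows. In the protocol these steps are done with BED files and UNIX uniq; this in-memory pysam version is only illustrative, and the file names are placeholders.

```python
import pysam

# Illustrative in-memory version of steps 5-6: shift 5' coordinates by
# +4 (forward strand) / -5 (reverse strand) to obtain base-resolution Tn5
# insertion sites, then keep one record per (position, strand, barcode).
seen = set()
with pysam.AlignmentFile("filtered.bam", "rb") as bam, \
     open("tn5_insertions.bed", "w") as bed:
    for read in bam:
        if read.is_unmapped or not read.has_tag("CB"):
            continue
        barcode = read.get_tag("CB")
        if read.is_reverse:
            pos, strand = read.reference_end - 5, "-"
        else:
            pos, strand = read.reference_start + 4, "+"
        key = (read.reference_name, pos, strand, barcode)
        if key in seen:                      # mirrors `sort | uniq`
            continue
        seen.add(key)
        bed.write(f"{read.reference_name}\t{pos}\t{pos + 1}\t{barcode}\t{strand}\n")
```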
LIMITATIONS
This protocol has not yet been assessed with samples with low nuclei counts (less than 60,000 starting nuclei). Researchers working with small tissues (such as embryos) may need to pool multiple samples or optimize the nuclei extraction procedure to yield sufficient nuclei counts for scATAC-seq library construction. In addition, some samples and species may require adjusting the concentration and/or the type of detergent used, particularly if nuclei appear over-lysed with very few intact nuclei.
Potential solution
Nuclear aggregation after sorting can occur when the collection buffer volume is low or nuclei suspensions are highly concentrated. Increasing the collection volume and adding between 1%-5% Bovine Serum Albumin can help lower aggregation in nuclei suspensions. Researchers should aim to produce nuclei suspensions at a concentration below 4,000 nuclei per µL for species with nuclei similar in size to maize (<10 µm).
Potential solution
The loss of periodic fragment sizes corresponding to nucleosome-free and nucleosome-protected fragments can arise from multiple scenarios. The most common explanation is a lack of intact nuclei prior to tagmentation; lysed nuclei will result in chromatin accessibility profiles that resemble gDNA when sequenced. If nuclei preparations are largely intact, another possibility is excessive PCR cycles (over-amplification of small fragments) or insufficient extension times during library amplification.
Problem 4
Few nuclei passing quality control thresholds (Quantification and Statistical Analysis).
Potential solution
The selection of quality control thresholds is highly dependent on the species, the quality of the reference genome, and the tissue/organ used in the experiment. As such, the percent TSS and FRiP score thresholds presented above should be taken as guidelines. In the case where few nuclei pass a minimum Tn5 insertion threshold, the knee plot, such as in Figure 5A, should be heavily scrutinized. A reasonable threshold can be set based on the distribution, rather than a predetermined heuristic cutoff. For example, in Figure 5A, a possible cut-off for Tn5 insertions per nucleus can be set at the top of the "knee", at approximately 630 Tn5 insertions per nucleus (10^2.8). It is advisable to ensure a cut-off that results in the same or fewer nuclei than were loaded onto the device. For low percent TSS and FRiP scores, researchers may again set thresholds based on the observed distributions, such as removing nuclei below 1 standard deviation. Ultimately, the quality of nuclei can be established by visualizing accessibility of known marker genes. Nuclei with low read depth, percent TSS and FRiP scores may still exhibit highly discriminative chromatin accessibility profiles representing distinct cell identities. Thus, cell-type-specific marker genes should always be evaluated following data processing regardless of the observed data quality metrics.
Problem 5
Lack of cluster separation following dimensionality reduction (Quantification and Statistical Analysis).
Potential solution
The absence of underlying structure has several possible explanations. The first is an absence of heterogeneity in the studied cell population. This possibility can be ruled out by visualizing chromatin accessibility profiles of known marker genes of expected cell types to determine if heterogeneity among nuclei is observed on the embedding. Such a possibility can be confirmed by performing clustering to force substructure followed by differential chromatin accessibility analysis. An absence of marked chromatin differences is evidence for a lack of heterogeneity among nuclei within the sample. A second possibility is an overabundance of doublets due to overloading the microfluidics chip. While it may be tempting to load more nuclei than the manufacturer's recommendation, an abundance of barcodes containing multiple nuclei adds substantial noise to the data set. In particular, barcodes containing multiple nuclei with different chromatin and cell states typically fall within an intervening space between distinct cell-type clusters on the manifold. If researchers have loaded more than the recommended ~16,000 nuclei, extra care should be taken to remove as many predicted doublets as possible using pre-existing computational solutions, such as DoubletFinder, Scrublet, or ArchR (Granja et al., 2021; McGinnis et al., 2019; Wolock et al., 2019). A third possibility is an unsuccessful isolation of intact nuclei. Heavily degraded and broken nuclei lose the chromatin structure demarcating cell identity. Analysis of scATAC-seq as a pseudobulk ATAC-seq data set (by aggregating reads from all nuclei) is highly informative for evaluating the data quality from a traditional viewpoint. An absence of clear regions of enrichment at TSSs and coverage plots resembling whole-genome shotgun sequencing are indications that the majority of nuclei were broken prior to loading the chip or that the tagmentation step failed.
RESOURCE AVAILABILITY
Lead contact Further information and requests for resources and reagents should be directed to and will be fulfilled by the lead contact, Robert J. Schmitz (schmitz@uga.edu).
Materials availability
This study did not generate new unique materials.
Data and code availability Data can be found online at https://www.sciencedirect.com/science/article/abs/pii/S0092867421004931 Code example can be found at https://github.com/plantformatics/plant_scATAC_STAR_Protocol Tutorials and documentation for Socrates can be found at https://github.com/plantformatics/Socrates/ | 2021-08-25T05:24:50.342Z | 2021-08-12T00:00:00.000 | {
"year": 2021,
"sha1": "469de2aed1bd09d9ff1e52fd9d75a47a45ea36bf",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.xpro.2021.100737",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "469de2aed1bd09d9ff1e52fd9d75a47a45ea36bf",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
35783227 | pes2o/s2orc | v3-fos-license | Intestinal Evisceration and Gangrene: A Sinister Complication of Abdominal Wall Abscess
Sir, An abscess is a walled-off collection of pus in a tissue. Very small abscesses can be managed by simple antibiotics, but incision and drainage has to be performed for large abscesses. If the abscess is left untreated or is inappropriately managed, complications can arise. Such complications range from minor ones to the life threatening. The reported complications include spontaneous rupture to the surface or adjacent natural cavities, extension to adjacent tissues, metastasis to distant tissues, septicemia, prolonged pyrexia, and so on.[1,2] We report a case of untreated abdominal wall abscess, where the patient presented with evisceration and gangrene of the small intestine.
A 20-day-old female neonate presented to the nursery emergency department of our institution with visible intestinal loops at the anterior abdominal wall. There was no history of any previous operation. The mother told us that she had noticed a boil in the left upper quadrant of the abdomen 5 days earlier, which had been associated with fever, excessive crying, and reluctance to feed. The boil increased in size over the course of 3 days. The parents took the child to a "peer", who gave them some holy water for drinking as a cure. A few hours before arrival at our institution the boil had ruptured spontaneously and a long loop of intestine eviscerated through it. There was no history of bleeding per rectum, bilious vomiting, or abdominal distension prior to this incident.
We immediately covered the intestine with warm saline-soaked gauze with the aim of preventing dehydration, contamination, and dryness. Intravenous (IV) access was established. The patient was resuscitated with IV fluids, antibiotics, and temperature maintenance. On arrival her temperature was 97°F, the heart rate was 150/min, and the respiratory rate was 45/min. On inspection, there was an approximately 1 ft long loop of small intestine of doubtful viability eviscerating from a wound in the left upper quadrant of the abdomen. The patient was shifted to the operating theater after the parents had been counseled about the danger of intestinal gangrene if immediate intervention was not pursued.
At operation, an attempt was made to reduce the intestine through the wound from which it had eviscerated, but this was not successful. An exploratory laparotomy was then performed. A right supraumbilical transverse incision was made and the eviscerated intestine was reduced back into the peritoneal cavity. About 20 cm of ileum was gangrenous. There were no signs of any necrotizing enterocolitis (NEC) on inspection of the intestine. The gangrenous portion of the small bowel was resected and an ileostomy was made. The total operative time was 20 minutes. The patient had an uneventful recovery and was scheduled for stoma reversal.
Abscesses can form as a result of infections in the body. The etiology can vary widely. Abscesses can be the sequel of a local infection or they may occur in immunodeficient states, which provide opportunities for various organisms to form abscesses. Important causes include primary local tissue infection, seeding from infection of some other organ, iatrogenic causes (e.g., the use of unsterilized syringes), immunodeficient states, infection of a posttraumatic hematoma, and so on. [3] In our case, the abscess had started as a small boil in the left upper quadrant of the abdomen.
Pus can collect anywhere in the body. When it collects in natural cavities it is called empyema; for example, pus in the pleural cavity is called empyema thoracis. When pus collects in tissues, the inflammatory response of the body tends to wall it off in order to prevent its spread. Abscesses can be found in the skin, abdomen, liver, spleen, brain, spinal cord, bones, joints, and so on. [3] In our case, the abscess was in the anterior abdominal wall.
The management options for abscesses include incision and drainage for established and walled-off abscesses; aspiration with a wide-bore needle, with or without ultrasound/CT guidance (depending upon the site, e.g., liver abscess); and conservative management with antibiotics. In the past, and in some parts of the world even today, solutions were applied over the abscess to facilitate its spontaneous bursting at the surface. [3][4][5][6] If left untreated, abscesses may rupture spontaneously either at the surface or into a body cavity. In our case, the abscess had started as a boil in the skin of the abdomen; it then spread to involve the deeper layers of the abdominal wall to such an extent that there was ultimately a defect through which a loop of intestine eviscerated. After evisceration, the pressure over the mesentery caused strangulation of the intestinal loop. This seems to be the most plausible explanation for the findings in our patient. Our initial diagnosis on arrival was an NEC-related intestinal leak that had eventually ruptured through the anterior abdominal wall with evisceration of the small bowel; however, this theory was refuted by the absence of any history of prematurity, bleeding per rectum, or abdominal distension, and by operative findings not suggestive of NEC. To the best of our knowledge, such a complication of an untreated anterior abdominal wall abscess has not been reported before.
Bilal Mirza, Lubna Ijaz, Muhammad Saleem, Afzal Sheikh
Department of Pediatric Surgery, The Children's Hospital and The Institute of Child Health, Lahore, Pakistan | 2018-04-03T02:58:48.763Z | 2011-01-01T00:00:00.000 | {
"year": 2011,
"sha1": "4c061db9923191af9f6691d120a4be190bcbf1a9",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/0974-777x.77306",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "a357227e9c59e7c74f08e83a1439296dee88492d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
102656879 | pes2o/s2orc | v3-fos-license | Response Surfaces of Linoleic Acid of Swietenia Mahagoni in Supercritical Carbon Dioxide
The process variables pressure, temperature, and particle size were studied to optimize linoleic acid yield by response surface methodology following a Box-Behnken design of experiments. The results indicated that the extraction conditions affected the linoleic acid content of the SC-CO2 extracts differently. However, analysis of variance of the data indicated that there was no statistically significant difference between the samples; although there was greater variation within the samples, there was still no statistically significant effect of temperature and pressure on the extraction. The optimum conditions for linoleic acid yield from Swietenia mahagoni seed within the experimental range were found to be a pressure of 29.02 MPa, a temperature of 67.88°C, and a particle size of 0.75 mm, with a predicted linoleic acid content of 34.91%.
Introduction
Swietenia mahagoni seeds have been used in folk medicine for the treatment of hypertension, malaria, and diabetes [1]. There have also been reports of S. mahagoni seeds having anti-inflammatory, antimutagenic, antitumor [2], antioxidant, and antimicrobial activities [3]. The therapeutic effects associated with the seeds are mainly caused by the biologically active ingredients: fatty acids and tetranortriterpenoids [4].
Supercritical carbon dioxide extraction (SC-CO2) is an alternative technique to the conventional extraction of lipids with organic solvents. Moreover, carbon dioxide as a solvent possesses many advantages (it is nontoxic, nonflammable, inexpensive, and yields high-purity oil), which can be successfully exploited in food and pharmaceutical applications [5,6,7]. SC-CO2 has been successfully used in the extraction of edible oils from a wide range of seeds, including hiprose [8], cuphea [9], flax [10], amaranth [11], sunflower and rape [12], and Swietenia mahagoni [13]. Previous studies on SC-CO2 extraction of S. mahagoni seeds were mainly focused on the determination of total oil contents in ground seeds [13].
Response surface methodology (RSM) is a statistical technique, which is used to evaluate the effect of multiple factors and their interaction on one or more response variables. Recently, RSM has been successfully applied to optimize SC-CO 2 extraction of oils from Swietenia mahagoni seed [13], Salvia mirzayanii [14], Passiflora seed [15], Silkworm pupae [16], wheat germ [17], cotton seed [18], Curcuma longa [19], rosehip seed [20], Cyperus rotundus [21], and amaranth seed [22]. In the present The experimental design chosen for this study was that of Box and Behnken (BBD). BBD was applied to determine optimum extraction pressu extraction of S mahagoni seed. The pressure (A), temperature (B), and particle size (C) were independent variables studied to optimize the linoleic acid (Y) from rate was constant.
S. mahagoni seed oil was extracted with supercritical carbon dioxide; a schematic flow diagram of the supercritical fluid extraction (SFE) unit is shown in Figure 1. A ground sample of 5 g was placed in the extractor vessel. The extracts were collected in a glass vial placed in the separator at ambient temperature, and the CO2 flow rate was constant at 2 mL/min. The investigated values of pressure, temperature, and particle size were varied from 20 to 30 MPa, 40 to 60ºC, and 0.25 to 0.75 mm, respectively. After each extraction, the obtained extract was placed into glass vials, sealed, and stored at 4ºC.

The experimental design chosen for this study was the Box-Behnken design (BBD). The BBD was applied to determine the optimum extraction pressure, temperature, and particle size for supercritical CO2 extraction of S. mahagoni seed. Pressure (A), temperature (B), and particle size (C) were the independent variables studied to optimize the linoleic acid yield (Y) from S. mahagoni seed; the CO2 flow rate was held constant. The Box-Behnken design requires a number of experiments (N) given by the following equation:

N = 2k(k − 1) + cp

where k is the number of factors and cp is the number of replicates of the central point. The three levels (low, medium, and high, denoted −1, 0, and +1, respectively) of the variables chosen for the experiments are given in Table 1. Analysis was performed using the commercial software Design-Expert® v.6.0.4. Analysis of variance (ANOVA) was used to evaluate the quality of the fitted model; the test of statistical difference was based on the total error criterion with a confidence level of 95%.
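To make the design concrete, the following sketch (an illustration, not the authors' code) enumerates the coded Box-Behnken points for k = 3 factors with cp = 5 centre replicates, reproducing the run count N = 2k(k − 1) + cp = 17 used in this study.

```python
from itertools import combinations

def box_behnken(k: int, cp: int):
    """Enumerate coded Box-Behnken design points for k factors.

    Each pair of factors contributes the four (+/-1, +/-1) combinations
    while the remaining factors sit at the centre (0); cp replicates of
    the all-zero centre point are appended, so N = 2k(k-1) + cp.
    """
    runs = []
    for i, j in combinations(range(k), 2):        # every factor pair
        for a in (-1, 1):
            for b in (-1, 1):
                point = [0] * k
                point[i], point[j] = a, b
                runs.append(point)
    runs.extend([[0] * k for _ in range(cp)])     # centre replicates
    assert len(runs) == 2 * k * (k - 1) + cp
    return runs

# k = 3 factors (pressure A, temperature B, particle size C), cp = 5
design = box_behnken(3, 5)
print(len(design))  # 17 experimental sets, as in Table 2
```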
The active constituents of the extracted compounds were determined using gas chromatography-mass spectrometry (GC-MS) as described by Kandhro et al. [23] with slight modification. To evaluate the quality of the extracted compounds, all samples were analyzed by GC-MS. The GC-MS analysis of fatty acid methyl esters (FAMEs) was performed on an Agilent system, and an HP-5MS capillary column (5% phenyl, Agilent 19091S-433) was used for the separation of the fatty acid methyl esters. The initial temperature of 150ºC was maintained for 2 min, raised to 230ºC at a rate of 4ºC/min, and held at 230ºC for 5 min. The split ratio was 1:50, and helium was used as the carrier gas at a flow rate of 0.8 ml/min. The injector and detector temperatures were 240 and 260ºC, respectively. The mass spectrometer was operated in electron impact mode at 70 eV over a scan range of 50-550 m/z.
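As a small worked example of the oven program above (an illustrative calculation only), the hold and ramp segments imply a total GC run time of 27 min:

```python
# GC oven program from the text: hold 150 C for 2 min,
# ramp to 230 C at 4 C/min, then hold 230 C for 5 min.
segments = [
    ("initial hold", 2.0),                   # min
    ("ramp 150->230 C", (230 - 150) / 4.0),  # 20 min at 4 C/min
    ("final hold", 5.0),
]
total = sum(t for _, t in segments)
for name, t in segments:
    print(f"{name}: {t:.0f} min")
print(f"total run time: {total:.0f} min")    # 27 min
```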
Results and Discussion
Since various parameters potentially affect the extraction process, the optimization of the experimental conditions represents a critical step in the application of the SFE method. The experimental design was adopted on the basis of the coded levels of the three variables (Table 1), resulting in seventeen experimental sets (Table 2) with five replicates of the central point. The selected factors were extraction pressure (in MPa), temperature (in ºC), and particle size (in mm), chosen because these factors are important in the extraction process. The second-order polynomial model used to express the extracted linoleic acid (LA) of S. mahagoni as a function of the independent variables (in terms of coded values) is shown below:

LA = 33.58 + 1.01A + 1.44B + 2.04C − 3.18A² − 4.68B² − 0.21C² + 3.88AB − 1.14AC + 1.26BC

Assessment of the extracts and linoleic acid from S. mahagoni seed under supercritical carbon dioxide extraction was carried out at pressures of 20, 25, and 30 MPa, temperatures of 40, 50, and 60ºC, and particle sizes of 0.25, 0.50, and 0.75 mm. The study found that the optimum linoleic acid content of the S. mahagoni seed extract was 34.91% at a pressure of 29.02 MPa, a temperature of 67.88ºC, and a particle size of 0.75 mm, with a desirability of 0.92 for this condition. The accuracy of the predicted value can be judged from the desirability value: the closer the desirability is to 1, the higher the precision of the optimization [24]. Validation of this extraction condition gave an experimental extract yield of 20.07% and a linoleic acid yield of 34.26%, in agreement with the values predicted by the Design-Expert software.
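To show how such a fitted model is used, here is a minimal sketch (not the authors' Design-Expert workflow) that evaluates the second-order polynomial in coded units and grid-searches the coded cube [−1, 1]³ for the highest predicted yield. The coding transform is an assumption: the usual (actual − centre)/half-range, with centres of 25 MPa, 50ºC, and 0.50 mm.

```python
import numpy as np

def la_model(A, B, C):
    """Fitted second-order model for linoleic acid yield (coded units)."""
    return (33.58 + 1.01*A + 1.44*B + 2.04*C
            - 3.18*A**2 - 4.68*B**2 - 0.21*C**2
            + 3.88*A*B - 1.14*A*C + 1.26*B*C)

# Assumed coding: A = (P - 25)/5, B = (T - 50)/10, C = (d - 0.50)/0.25
grid = np.linspace(-1, 1, 81)
A, B, C = np.meshgrid(grid, grid, grid, indexing="ij")
pred = la_model(A, B, C)
i = np.unravel_index(pred.argmax(), pred.shape)
a, b, c = grid[i[0]], grid[i[1]], grid[i[2]]
print(f"max predicted LA within the coded cube: {pred[i]:.2f}%")
print(f"at P = {25 + 5*a:.1f} MPa, T = {50 + 10*b:.1f} C, d = {0.50 + 0.25*c:.2f} mm")
```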
The effect of extraction pressure and temperature on linoleic acid yield at constant particle size, the effect of extraction pressure and particle size on linoleic acid yield at constant temperature, and the effect of extraction temperature and particle size on linoleic acid yield at constant pressure are illustrated in Figure 2 (surface plot of linoleic acid yield as a function of pressure and temperature at a constant particle size of 0.50 mm), Figure 3 (as a function of pressure and particle size at a constant temperature of 50ºC), and Figure 4 (as a function of temperature and particle size at a constant pressure of 25 MPa), respectively. As shown in Figure 2, linoleic acid increased with increasing pressure from 20 MPa to 25 MPa and increasing temperature from 40ºC to 50ºC at constant particle size (0.5 mm); however, further increases in pressure from 25 MPa to 30 MPa and in temperature from 50ºC to 60ºC resulted in decreasing linoleic acid. Figure 3 shows that at constant temperature, linoleic acid increased with increasing pressure from 20 MPa to 30 MPa and increasing particle size from 0.25 mm to 0.75 mm. Figure 4 shows that linoleic acid increased with increasing temperature from 40ºC to 50ºC and started to decrease with a further increase from 50ºC to 60ºC at constant pressure (25 MPa). The study shows that the effect of the SC-CO2 parameters on the linoleic acid did not follow the same trends as the effect of the SC-CO2 parameters on the extracted oil. The extraction conditions produced different linoleic acid contents in the SC-CO2 extracts, but analysis of variance of the data indicated that there was no statistically significant (p > 0.05) difference between the samples. Although there was variation within the samples, there was still no statistically significant effect of temperature and pressure on the extraction. The variation within the extracts produced by SC-CO2 extraction was due to changes in the solubility of the linoleic acid with the changing extraction conditions.
The solubility of the oil in SC-CO2 is mainly determined by the SC-CO2 density and the volatility of the oil components. In general, SC-CO2 density increases with pressure at constant temperature and decreases with temperature at constant pressure, and the density decrease becomes smaller at higher pressures. On the other hand, the volatility of the oil components increases with temperature. These two opposing effects of temperature on density and volatility lead to the well-established crossover behavior of solubility isotherms. A temperature increase may also cause breakdown of the cell structure and increase the diffusion rate of the oil in the particles, therefore accelerating the extraction process [25].
The fatty acids of S. mahagoni seed extracted by SC-CO2 were analyzed by GC-MS (Table 2). There is one significant peak in the GC spectra of the samples (Figure 5). Fatty acids play important roles in immune and inflammatory responses [26]. Monounsaturated and polyunsaturated fatty acids are present in plasma membranes and are capable of stimulating cellular proliferation and angiogenesis, thus exerting an important role in the healing process. The topical administration of the essential linolenic (n-3) and linoleic (n-6) fatty acids and the nonessential oleic (n-9) fatty acid modulates the closure of surgically induced skin wounds [27]. The use of n-6 fatty acids may increase proinflammatory cytokine production in wound sites, stimulating the cutaneous wound healing process [28].

Figure 5. GC spectra of the fatty acids from S. mahagoni seed oil.
Conclusion
The extraction of linoleic acid in supercritical carbon dioxide was measured as a function of pressure, temperature, and particle size. The extraction conditions produced different linoleic acid contents in the extracts, but analysis of variance of the data indicated that there was no statistically significant difference between the samples. Although there was variation within the samples, there was still no statistically significant effect of temperature and pressure on the extraction. The optimum conditions for linoleic acid yield from S. mahagoni seed within the experimental range were found to be a pressure of 29.02 MPa, a temperature of 67.88ºC, and a particle size of 0.75 mm, and the predicted linoleic acid yield was 34.91%. Under these optimum conditions, the experimental values were in agreement with the predicted values.

The authors acknowledge the financial support of the Ministry of Research and Higher Education, Indonesia; acknowledgement is also extended to Universitas Negeri Makassar and Universiti Teknologi Malaysia for the use of laboratory instruments. | 2019-04-09T13:11:07.753Z | 2018-06-01T00:00:00.000 | {
"year": 2018,
"sha1": "2d5606727317eb3671ee1f6c19f72fe850991965",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1028/1/012011",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "4ed56236d88cf4fc840a45b830250659e8b66dcb",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
214787747 | pes2o/s2orc | v3-fos-license | The Protective Effect of Adiponectin-Transfected Endothelial Progenitor Cells on Cognitive Function in D-Galactose-Induced Aging Rats
Aging is a multifactorial process involving the cumulative effects of inflammation, oxidative stress, and altered mitochondrial dynamics, which can produce complex structural and biochemical alterations in the nervous system and lead to dysfunction of the microcirculation, the blood-brain barrier (BBB), and other systems in the brain. Long-term injection of D-galactose (D-gal) can induce chronic inflammation and oxidative stress, accelerating aging. The model of accelerated aging induced by long-term administration of D-gal has been widely used in anti-aging studies, because the increase in chronic inflammation and the decline in cognition resemble natural aging in animals. However, despite extensive research on D-gal-induced aging rats, studies of their microvasculature remain limited. Endothelial progenitor cells (EPCs), which are precursors of endothelial cells (ECs), play a significant role in endogenous blood vessel repair and regeneration, and adiponectin (APN), an adipocyte-derived protein, has vascular endothelium-protective and anti-inflammatory effects. Recently, many studies have shown that APN can promote improvements in cognitive function. Under these circumstances, we investigated the neuroprotective effect of APN-transfected EPC (APN-EPC) treatment in rats after administration of D-gal and explored the likely underlying mechanisms. Compared with the model group given D-gal, the APN-EPC treatment group showed significantly better cognitive function and denser microvessels, indicating that APN-EPC treatment in aging rats can improve cognitive dysfunction and microvessel density. The levels of the proinflammatory cytokines IL-1β, IL-6, and TNF-α, the number of activated astrocytes, and the apoptosis rate were significantly reduced in the APN-EPC group compared with the model group, showing that APN-EPCs alleviated neuroinflammation in aging rats. In addition, APN-EPC treatment inhibited the decrease of the BBB-related proteins claudin-5, occludin, and Zo-1 in aging rats and significantly attenuated BBB dysfunction. These results indicate that APN-EPC treatment in D-gal-induced aging rats has a positive effect on improving cognitive and BBB dysfunction, increasing angiogenesis, and reducing neuroinflammation and apoptosis. This research suggests that cell therapy via gene modification may provide a safe and effective approach for the treatment of age-related neurodegenerative diseases.
Introduction
With the increase in the number of aging countries and in the proportion of the population that is elderly, there is growing concern about aging. Aging is a normal physiological process and the leading cause of cognitive decline, principally in learning, memory, sensibility, and perception. The mechanism may be related to chronic inflammation, oxidative stress, mitochondrial dysfunction, and other factors [1,2]. Aging is accompanied by changes in the structure and function of the microcirculation, resulting in vascular endothelial dysfunction and a decrease in microvessels [2][3][4], thus leading to age-related neurodegenerative diseases such as vascular cognitive dysfunction and Alzheimer's disease (AD).
D-galactose (D-gal) is a reducing sugar found in many foods, such as honey, yogurt, milk, and kiwi fruit. Excessive long-term D-gal intake can lead to the overproduction of advanced glycosylation end-products (AGEs) and reactive oxygen species (ROS) [5], which may contribute to chronic inflammation and oxidative stress [6]. Many studies have shown that chronic inflammation and oxidative stress result in mitochondrial and neurological damage and even cognitive decline [7][8][9][10][11][12]. The model of accelerated aging induced by long-term injection of D-gal has been widely used in anti-aging studies, because the increase in chronic inflammation, the decline in cognition, and the biochemical indexes are similar to those of natural aging in animals [5][6][7][8][9][10][11][12][13]. However, studies on the microvasculature in D-gal-induced aging rats remain limited.
The neurovascular unit (NVU), constituted by neurons, astrocytes, microglia, vascular endothelial cells, perivascular cells, the basement membrane, and the extracellular matrix, plays a very important role in maintaining the structure and function of the brain [14,15]. Tight junctions (TJ) between the endothelial cells of the capillaries that perfuse the brain parenchyma, assisted by astrocytic end-feet and pericytes, help maintain the stability of the blood-brain barrier (BBB) [15,16]. The production of ROS, considered a key factor in the aging process [17], can induce neurovascular inflammation. Under the stimulation of chronic inflammation, the structure of the NVU is injured, which may result in activation of astrocytes, impairment of endothelial cell function, and cell apoptosis, eventually leading to decreased microvascular density, increased permeability of the BBB, and impairment of nervous system function [16,[18][19][20].
EPCs, the precursor cells of endothelial cells derived from bone marrow, play a significant part in endogenous blood vessel repair and regeneration [21]. EPCs can produce a variety of cytokines through paracrine signaling to promote the proliferation and differentiation of endothelial cells (ECs) at sites of vascular injury, and circulating EPCs repair injured vessels under conditions of chronic inflammation, ischemia, and hypoxia [22][23][24][25][26]. In recent years, EPC transplantation has played an active role in the treatment of ischemic cerebrovascular diseases [25][26][27]. A previous study in our laboratory also found that treating middle cerebral artery occlusion (MCAO) rats with bone marrow-derived EPCs promotes angiogenesis, improves behavioral function, and reduces the infarction area and apoptosis rate [27]. Nevertheless, the number of EPCs decreases with aging [28], and the extraction of EPCs is difficult as well. When transplanted alone, only a few EPCs survive, which weakens the effect [29].
Adiponectin (APN), a protein synthesized by adipocytes, plays an important role in the regulation of glucose and lipid metabolism. APN has also been proposed to have essential functions, such as vascular endothelial protection and anti-inflammatory, anti-atherosclerotic, and vasodilatory properties, which may influence central nervous system disorders. The level of APN is positively correlated with the level of EPCs [30,31], and APN can also be involved in regulating the function and promoting the role of EPCs [32][33][34][35][36]. In recent years, a growing number of studies have shown that APN can promote improvements in cognitive function [37,38]. Therefore, we transfected the APN gene into EPCs with a lentivirus, using genetic engineering technology, to make the EPCs overexpress APN; the resulting APN-EPCs were expected to play a better role in endothelial repair and anti-inflammation and to have a positive effect on cognitive function in D-gal-induced aging rats.
Materials and Methods
2.1. Preparation of EPCs. 4-week-old male Sprague-Dawley (SD) rats provided by the Centers for Disease Control of Hubei Province were anesthetized with 3% pentobarbital sodium at 30 mg/kg and then killed; the tibia and femur of each rat were isolated aseptically, and the bone marrow cavities were rinsed repeatedly with PBS buffer to collect fresh bone marrow. Mononuclear cells were isolated by density gradient centrifugation using the Ficoll-Paque solution (d = 1.077, Sigma, USA). They were then inoculated in a T25 culture flask with endothelial cell basal medium-2 (EBM-2, Lonza, Switzerland) and placed in a cell incubator with a 5% CO2 atmosphere at 37°C. The medium was changed every 72 hours, and the growth state of the cells was observed every day. In our previous experiments [27], we identified these cells by their uptake of acetylated LDL (ac-LDL) and binding of Ulex europaeus agglutinin-1 (UEA-1), and cells cultured by this method have been identified as EPCs.
Gene Transfection.
EPCs were collected after 9 days of culture and transfected with lentiviral vectors encoding the human APN gene (LV-APN/EGFP, Shanghai Genechem Company, China) at a multiplicity of infection of 100. The vector is a relaxed shuttle plasmid, which can be amplified without restriction, and the sequence of its vector elements is Ubi-MCS-3FLAG-SV40-EGFP-IRES-puromycin. After 12 hours, the cells were washed with PBS buffer and cultured with fresh medium. 72 h after transfection, the cells were placed under an inverted fluorescence microscope to observe the expression of enhanced green fluorescent protein (EGFP) and evaluate the transfection results, and the transfection rate was calculated according to the following formula: transfection rate = positive cells/total cells per field × 100%. The expression of APN in APN-EPCs and EPCs was detected by Western blot analysis [27].
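The transfection-rate formula above is straightforward to apply per microscope field; a minimal sketch with made-up field counts (the actual per-field counts are not given in the text):

```python
# Hypothetical per-field counts: (EGFP-positive cells, total cells)
fields = [(142, 195), (128, 186), (151, 210), (117, 170)]

rates = [100.0 * pos / total for pos, total in fields]       # % per field
mean = sum(rates) / len(rates)
sd = (sum((r - mean) ** 2 for r in rates) / (len(rates) - 1)) ** 0.5
print(f"transfection rate: {mean:.1f} +/- {sd:.1f} %")
```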
Animals and Drug
Administration. 48 male SD rats, weighing 200-220 g, provided by Beijing Vital River Laboratory Animal Technology Company, were used in our experiments. The animal study proposal was approved by the Institutional Animal Care and Use Committee (IACUC) of Wuhan University (Hubei, China) under permit number IACUC 2019060. All operations and handling of the experimental animals complied strictly with the requirements of animal ethics.
The rats were randomly separated into four groups: (1) control group (n = 12), (2) model group (n = 12), (3) EPC treatment group (n = 12), and (4) APN-EPC treatment group (n = 12). Rats in the model group and the treatment groups were given an intraperitoneal injection of 100 mg/kg D-gal daily for 6 weeks, while the control group was given an intraperitoneal injection of normal saline at the same dose every day for 6 weeks.
Cell Transplantation.
The EPCs cultured for 14 days and the APN-EPCs cultured for 5 days were resuspended in PBS buffer and collected, respectively. After the 6 weeks of intraperitoneal injection, 0.5 ml of EPC or APN-EPC suspension containing 2 × 10⁶ cells was injected into the EPC treatment group and the APN-EPC treatment group, respectively, through the tail vein, while 0.5 ml of PBS buffer was injected into the control group and the model group.
Morris Water Maze
Test. All rats underwent spatial memory assessment using the Morris water maze (MWM), which mainly comprises a 1.5 m diameter pool filled with opaque water and a 10 cm round platform placed 1 cm below the surface of the water at the center of one quadrant. The whole experiment lasted 6 days. The first 5 days were the place navigation test, which was used to assess the spatial learning and memory functions of the rats. During the first 5 days, each rat was gently released into the water at four different locations on the side opposite the platform, for four swimming trials per day. Rats that failed to find the platform within 60 s were manually guided to the platform and allowed to stay for 15 s. The probe trial proceeded on the 6th day and tested spatial memory. On the 6th day, the platform was removed, the side contralateral to the platform was selected as the entry point, and the rats were allowed to swim freely in the tank for 60 s. The swimming trace, latency to the platform, time spent in each quadrant, and number of platform crossings were recorded by a computerized video imaging analysis system (AVTAS Animal Video Tracking Analysis System, Wuhan YiHong Sci. & Tech. Co., Ltd).
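To illustrate how the probe-trial quadrant times derive from tracking data, here is a minimal sketch with a hypothetical data layout; the actual AVTAS export format, sampling rate, and quadrant assignment are all assumptions:

```python
# Hypothetical tracking samples: (x, y) in metres relative to pool centre,
# recorded at a fixed sampling interval.
dt = 0.04  # s per sample (assumed 25 Hz camera)
track = [(0.31, 0.22), (0.28, 0.25), (-0.40, 0.10), (-0.35, -0.30)]  # ...

def quadrant(x, y):
    # Assumed layout: the platform sits in the (+x, +y) quadrant.
    if x >= 0 and y >= 0: return "target"
    if x < 0 and y >= 0:  return "adjacent-1"
    if x < 0 and y < 0:   return "opposite"
    return "adjacent-2"

time_in = {}
for x, y in track:
    q = quadrant(x, y)
    time_in[q] = time_in.get(q, 0.0) + dt   # accumulate dwell time per quadrant
print({q: round(t, 2) for q, t in time_in.items()})
```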
Tissue Preparation.
After behavioral testing, the rats were anesthetized with 3% pentobarbital sodium. Six rats were randomly selected from each group and transcardially perfused with 4% paraformaldehyde solution. The brains were postfixed overnight in 4% paraformaldehyde and embedded in paraffin after full dehydration. The brains were then cut into 4 μm thick coronal sections from -3.84 mm to -5.04 mm relative to the anterior fontanelle, and the sections were dewaxed before use. The brains of the remaining rats were taken after decapitation; the bilateral hippocampal tissues of these brains were then rapidly isolated on ice and immediately transferred to a -80°C freezer for storage.
Western
Blotting. The rat hippocampal tissues were homogenized in lysis buffer with phenylmethanesulfonyl fluoride (PMSF) and phosphatase inhibitor single-use cocktail (Beyotime, Shanghai, China), followed by 30 min on ice.
The lysates were centrifuged at 12,000 rpm for 5 min at 4°C, and the supernatants were collected. The total protein concentration of each sample was determined using a BCA protein assay kit (Beyotime, Shanghai, China). The lysates were then separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to nitrocellulose membranes. The membranes were incubated in 5% nonfat milk in PBS with 0.1% Tween-20 at room temperature (RT) for 1 h, probed with primary antibodies against claudin-5 (Biorbyt, Wuhan, China), IL-6 and IL-1β (Bioss, Beijing, China), and glyceraldehyde 3-phosphate dehydrogenase (GAPDH), occludin, Zo-1, and TNF-α (Abcam, UK), then incubated with horseradish peroxidase-conjugated goat anti-rabbit IgG and detected using a chemiluminescence substrate. The AlphaEaseFC software was used for analysis, and the optical density values of the target proteins in each experimental group were normalized to the corresponding optical density values of GAPDH.
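The normalization described above reduces to dividing each target band's optical density by the GAPDH density from the same lane; a minimal sketch with hypothetical densitometry values:

```python
# Hypothetical optical densities from AlphaEaseFC (arbitrary units)
lanes = {
    "control":  {"IL-6": 1.10, "GAPDH": 2.40},
    "model":    {"IL-6": 2.35, "GAPDH": 2.38},
    "EPC":      {"IL-6": 1.80, "GAPDH": 2.42},
    "APN-EPC":  {"IL-6": 1.35, "GAPDH": 2.41},
}

for group, od in lanes.items():
    rel = od["IL-6"] / od["GAPDH"]   # target normalized to loading control
    print(f"{group:8s} IL-6/GAPDH = {rel:.2f}")
```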
2.8. Immunohistochemical Analysis. The brain sections were treated with 0.1% Triton X-100 for 10 min and then washed with 0.1 M PBS. Endogenous peroxidase activity in the brain sections was blocked with 3% hydrogen peroxide, and nonspecific binding sites were then blocked with 3% normal goat serum at RT for 30 min. After blocking, the sections were incubated overnight with rabbit anti-caspase-3 and anti-glial fibrillary acidic protein (GFAP) antibodies (Proteintech, Wuhan, China), then incubated with horseradish peroxidase-conjugated goat anti-rabbit IgG. The sections were washed with PBS and incubated in 3,3′-diaminobenzidine tetrahydrochloride (DAB); the sections were rinsed with running water to stop color development, and brown or tan staining on the cell membrane or in the cytoplasm represented positive staining. The density of positive staining represented the expression levels of the caspase-3 and GFAP proteins, and the results were measured with the ImageJ software.
2.9. Immunofluorescence Analysis. The brain sections were washed three times with 0.1 M PBS for 3 min each after treatment with 0.1% Triton X-100 for 10 min. Nonspecific binding sites were blocked with 3% normal goat serum at RT for 30 min, and the sections were then incubated with the primary antibody (CD31, 1:100, Abcam) overnight at 4°C. After washing with PBS buffer, the sections were incubated with fluorescein isothiocyanate-(FITC-) labeled goat anti-rabbit IgG secondary antibodies (1:100, Boster, China) for 1 h. The ImageJ software was used for quantitative immunohistological analysis.
2.10. TUNEL Staining. The brain sections were washed three times for 10 min in proteinase K solution (20 μg/ml) diluted with PBS buffer. Apoptotic cells in the sections were detected with a terminal deoxynucleotidyl transferase-mediated dUTP-biotin nick end labeling (TUNEL) apoptosis detection kit (Yeasen, Shanghai, China), followed by incubation with DAPI in the dark for 5 min. The number of apoptotic cells was observed and counted under a fluorescence microscope, and the apoptosis rate was calculated according to the following formula: apoptosis rate = positive cells/total cells per field × 100%.
2.11. Statistical Analysis. The GraphPad Prism 8.0 software was used for statistical analysis. The data are presented as the mean ± standard error of the mean (SEM). Two-way analysis of variance (ANOVA) was used to analyze the escape latency in the MWM training task, and the other data were analyzed by one-way ANOVA. p < 0.05 was considered statistically significant.
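For the group comparisons, a one-way ANOVA of the kind described can be sketched as below (placeholder data; the authors used GraphPad Prism 8.0, not Python):

```python
from scipy import stats

# Hypothetical per-group outcome values (e.g., platform crossings)
control = [5, 6, 4, 7, 5, 6]
model   = [2, 3, 2, 1, 3, 2]
epc     = [3, 4, 4, 3, 5, 4]
apn_epc = [5, 5, 6, 4, 6, 5]

f, p = stats.f_oneway(control, model, epc, apn_epc)
print(f"one-way ANOVA: F = {f:.2f}, p = {p:.4f}")  # p < 0.05 => significant
```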
EPC Morphology and Transfection Result.
We observed under the microscope that the newly extracted mononuclear cells were small, round, and suspended in the medium. The cells were arranged like pebbles after 14 days of culture (Figure 1). After 72 hours of gene transfection, the cells were placed under a fluorescence microscope (×100), and a large number of EPCs with green fluorescence were observed, indicating that the transfection was successful (Figure 2). The ImageJ software was used for cell counting and analysis, and the transfection rate was 71.3 ± 8.8%.
The APN-EPCs Prevent D-Gal-Induced Cognitive
Impairment. The results of the study showed that the APN-EPCs significantly ameliorated the memory deficits of D-gal-induced aging rats (Figure 3). In the place navigation test, the escape latency of the model group was significantly increased compared with the control group (p < 0.05), the treatment groups had significantly shorter latencies than the model group (p < 0.05), and there was a significant difference among the four groups (F(12, 220) = 19.42, p < 0.01). Compared with the EPC treatment group, the escape latency of the APN-EPC treatment group was significantly shortened, and the difference was statistically significant on day 4 and day 5 (p < 0.05, Figure 3(a)). In the probe trial, the time the rats in the model group spent in the platform quadrant was significantly shorter than that of the control group (p < 0.05), and the time the rats in the APN-EPC treatment group spent in the platform quadrant was significantly longer than that of the EPC treatment group and the model group (p < 0.05, Figure 3(b)). An increase in platform crossings was observed in the APN-EPC and EPC treatment groups compared with the D-gal-treated group (Figure 3(c), p < 0.05). These results indicated that APN-EPC treatment prevented D-gal-induced cognitive impairment.
3.3.
The APN-EPCs Attenuate D-Gal-Induced Neuroinflammation. The Western blotting results showed significantly increased protein levels of IL-1β, IL-6, and TNF-α in the aging rat hippocampus compared with the control group (p < 0.05, Figure 4). In the immunohistochemical analysis, the expression of GFAP, a specific marker of activated astrocytes, increased significantly in the model group compared with the control group (p < 0.05, Figure 5). The APN-EPC treatment prevented the increase in inflammatory cytokines and GFAP seen in the model group, and the effect was better than that of the EPC group (p < 0.05). These results indicated that APN-EPC treatment prevented D-gal-induced neuroinflammation.
The APN-EPCs Attenuate D-Gal-Induced BBB
Dysfunction. Claudin-5, occludin, and Zo-1 are proteins related to the tight junctions (TJ) of the BBB; together they play an important role in the size-selective permeability of the BBB. The Western blotting results showed that the expression of claudin-5, occludin, and Zo-1 in the aging rat hippocampus was significantly decreased compared with the control group (p < 0.05). The APN-EPC group showed increased expression of these proteins relative to the model group, and the effect was better than that of the EPC group (p < 0.05, Figure 6).
The APN-EPCs
Can Improve the Microvessel Density. Immunofluorescence microscopy showed that the microvessel density in the hippocampus of the aging rats treated with APN-EPCs was higher than that in the EPC treatment group and the model group (p < 0.05, Figure 7).
The APN-EPCs Attenuate the D-Gal-Induced Cell
Apoptosis. The TUNEL staining results showed that the apoptosis rate in the model group was significantly increased compared with the control group (p < 0.05), while that of the APN-EPC treatment group was decreased compared with the EPC treatment group and the model group (p < 0.05, Figure 8).
Discussion
In this research, we induced subacute aging in rats by excessive, long-term intraperitoneal injection of D-gal, which resulted in cognitive dysfunction, decreased brain microvascular density, an increased apoptosis rate and release of proinflammatory cytokines, activation of astrocytes, and decreased BBB-related proteins. After injection of APN-EPCs through the tail vein, we observed that cognitive function was improved, brain microvascular density was increased, BBB function was improved, and the apoptosis rate and the level of proinflammatory cytokines were decreased in the aging rats; the effect was better than that of EPCs.
In the D-gal-induced aging rat model, excessive, long-term injection of D-gal leads to the overproduction of AGEs and ROS, which may result in strong oxidative stress and chronic inflammation that accelerate aging in rats [9][10][11][12][13]. Chronic, aseptic, low-grade inflammation is considered a hallmark of the aging process [2]. We therefore focused on the inflammatory cytokines in the aging rats. The interleukin-1 (IL-1) family of cytokines are key mediators of the inflammatory response, and tumor necrosis factor-alpha (TNF-α) is a proinflammatory cytokine generally produced by microglia and astrocytes in the brain [6]. In our study, we showed the increase of the proinflammatory cytokines IL-1β, IL-6, and TNF-α and the activation of astrocytes in the D-gal-induced aging rats. These results are consistent with previous studies [6,8,9].
Moreover, the neuroinflammatory responses induced by ROS can lead to structural injury of the NVU, such as activation of astrocytes, impairment of endothelial cell function, and apoptosis of cells, which has been thought to result in BBB dysfunction [39,40]. Endothelial TJ proteins, such as claudin-5, together with integral membrane proteins and cytoplasmic proteins, such as occludin and Zo-1, are involved in maintaining the permeability of the BBB. Matrix metalloproteinase (MMP) is overproduced as a result of the oxidative stress and chronic inflammation induced by age-related vascular changes; it can facilitate the degradation of TJ proteins and the basement membrane [15], thus leading to increased permeability of the BBB. Impaired BBB function has been associated with neurodegeneration and cognitive decline [18,19]. Our study likewise showed cognitive dysfunction and decreased levels of the BBB-related proteins claudin-5, occludin, and Zo-1 in the subacute aging model. EPCs, derived from bone marrow, play a significant part in endogenous blood vessel repair and regeneration. However, the number of EPCs decreases with aging [28], and obtaining EPCs remains difficult. As is known, APN is an adipocytokine with vascular endothelium-protective, anti-inflammatory, and anti-atherosclerotic functions [41,42]. It can also be involved in regulating the function and promoting the role of EPCs. In the study of Lavoie et al. [33], APN regulated the ability of EPCs to repair blood vessels by reducing EPC apoptosis and enhancing EPC function. Wang et al. [34] and Dong et al. [35] found that APN can enhance the function of EPCs through the mTOR-STAT3, AMPK/eNOS, and other signaling pathways. In this study, we transfected the APN gene into EPCs through genetic engineering technology and found that APN-EPCs reduce proinflammatory markers and improve the cognitive function, microvascular density, and structural damage of aging rats, with an effect significantly better than that of EPCs transplanted alone.
Our study demonstrated that APN-EPC treatment inhibited proinflammatory gene expression and prevented neuroinflammation in aging rats, which may be related to the anti-inflammatory effect of APN. Under the anti-inflammatory and vascular endothelium-protective effects of APN, the levels of BBB-related proteins gradually recovered and the injury to endothelial cells decreased. APN-EPCs also increased microvascular density in aging rats, which may be due to the endogenous blood vessel repair and regeneration mediated by EPCs. With the recovery of microvascular density and the increase in BBB-related proteins, BBB integrity was restored, which eventually led to the improvement of cognitive function.
Conclusion
Our data demonstrated that APN-EPC treatment had a positive effect on nervous structure damage and cognitive dysfunction by repairing injured blood vessels and reducing neuroinflammation. The results suggest that gene-modified cell therapy may provide a safe and effective approach for the treatment of age-related neurodegenerative diseases.
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that there is no conflict of interest. | 2020-03-26T10:34:29.575Z | 2020-03-23T00:00:00.000 | {
"year": 2020,
"sha1": "9d317a0ef7d6a50158ee50af87f7b1e4e185431e",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1155/2020/1273198",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9e9709d549b332ff218029fa7cbccee642ce7c15",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
18098330 | pes2o/s2orc | v3-fos-license | Office workers' objectively assessed total and prolonged sitting time: Individual-level correlates and worksite variations
Sedentary behavior is highly prevalent in office-based workplaces; however, few studies have assessed the attributes associated with this health risk factor in the workplace setting. This study aimed to identify the correlates of office workers' objectively-assessed total and prolonged (≥ 30 min bouts) workplace sitting time. Participants were 231 Australian office workers recruited from 14 sites of a single government employer in 2012–13. Potential socio-demographic, work-related, health-related and cognitive-social correlates were measured through a self-administered survey and anthropometric measurements. Associations with total and prolonged workplace sitting time (measured with the activPAL3) were tested using linear mixed models. Worksites varied significantly in total workplace sitting time (overall mean [SD]: 79% [10%] of work hours) and prolonged workplace sitting time (42% [19%]), after adjusting for socio-demographic and work-related characteristics. Organisational tenure of 3–5 years (compared to tenure > 5 years) was associated with more time spent in total and prolonged workplace sitting time, while having a BMI categorised as obese (compared to a healthy BMI) was associated with less time spent in total and prolonged workplace sitting time. Significant variations in sitting time were observed across different worksites of the same employer and the variation remained after adjusting for individual-level factors. Only BMI and organisational tenure were identified as correlates of total and prolonged workplace sitting time. Additional studies are needed to confirm the present findings across diverse organisations and occupations.
Introduction
Exposure to high levels of workplace sedentary (sitting) time has become common, particularly in office environments.
Office-based workers have been reported to spend between two-thirds and three-quarters of their working hours sitting (Parry and Straker, 2013;Clemes et al., 2014;Ryan et al., 2011), with a high proportion accrued in prolonged, unbroken bouts of 30 min or more (Parry and Straker, 2013;Ryan et al., 2011). Consistent evidence has linked high levels of sitting with chronic diseases and premature mortality (Biswas et al., 2015;de Rezende et al., 2014) and prolonged sitting with cardio-metabolic risk (Healy et al., 2008). Thus, exposure to excessive workplace sitting is an emerging workplace health and safety issue (Straker et al., 2014).
Despite a growing interest in workplace interventions (Neuhaus et al., 2014a), relatively little is known about factors influencing workplace sitting time; knowledge which could improve targeting of strategies. While factors relating to work have been identified as potential correlates (Hadgraft et al., 2015;Mummery et al., 2005;Wallmann-Sperlich et al., 2014;De Cocker et al., 2014), only two studies (Wallmann-Sperlich et al., 2014;De Cocker et al., 2014) have assessed cognitive-social factors that may influence sitting time. Both studies noted the need for confirmatory and additional research (Wallmann-Sperlich et al., 2014;De Cocker et al., 2014). Also, no previous studies have analysed potential correlates of prolonged sitting time (i.e. unbroken bouts) to assess whether these attributes differ from those associated with total workplace sitting time.
Existing studies have also used self-report questionnaires to measure sitting time (Hadgraft et al., 2015;Wallmann-Sperlich et al., 2014;De Cocker et al., 2014). Relative to self-report, objective measurement devices, such as inclinometers, can determine the volumes and accumulation patterns of sitting time with better validity and accuracy (Clark et al., 2011). The use of objective measures of workplace sitting in studies assessing correlates reduces the potential for measurement error.
The factors influencing workplace sitting are likely to operate at multiple levels, including individual, cognitive-social, environmental, and policy levels (Owen et al., 2011). The extent to which workplace sitting is influenced by factors acting at the individual level, compared with the organisational level, is of interest when considering how interventions should be designed and targeted. This may include whether strategies should be individually driven and targeted at "high risk" groups and/or aimed at influencing the organisational level through policy and cultural change. Assessing the variation in sitting time between worksites, before and after accounting for individual-level factors, provides the opportunity to explore such issues.
The aim of this study was to examine the worksite-level variation, and the socio-demographic, health-related, work-related, and cognitive-social correlates of objectively-assessed total and prolonged workplace sitting time in Australian office-based workers. Given limited evidence relating to the correlates of workplace sitting time, including prolonged workplace sitting, this study employed an exploratory, data-driven approach.
Study design and participants
Participants were recruited for a cluster randomized controlled trial of a multi-component workplace intervention aimed at reducing workplace sitting (the Stand Up Victoria [SUV] trial). They were informed that the study aimed to "investigate the effectiveness of an intervention to increase overall physical activity levels at the workplace". The intervention, detailed elsewhere (Dunstan et al., 2013;Neuhaus et al., 2014b;Healy et al., 2016), comprised organisational-, environmental- (sit-stand workstation), and individual-level strategies. Here, we report findings derived from baseline measurements. In brief, recruitment and randomization occurred at the worksite level. Fourteen geographically separate worksites were recruited from a single government department (Victoria, Australia). At each site, a work team (i.e., a distinct group with dedicated team leader(s) and regular group meetings) was selected (if team size was < 10, two teams were combined). Eligibility criteria included: aged 18-65 years, English-speaking, worked ≥ 0.6 full-time equivalent (FTE) and had designated access to a telephone, internet, and desk within the workplace. Participants did not have height-adjustable desks at baseline. Participants' roles mostly involved telephone-based and clerical/administrative tasks.
Of the 278 who originally expressed interest, 33 were ineligible and 14 were no longer eligible and/or willing to participate at the intervention commencement, leaving 231 participants. Ethics approval was granted by Alfred Health Human Ethics Committee (Melbourne, Australia). The SUV trial was prospectively registered with the Australian New Zealand Clinical Trials Registry (ACTRN12611000742976).
Data collection
At baseline, trained staff conducted onsite assessments to collect anthropometric measurements, provide participants with activity monitors and logbooks, and give instructions on activity monitor use (see below). Thereafter, participants completed a self-administered online questionnaire (LimeService), containing questions relating to sociodemographic, work, health-related and cognitive-social characteristics.
Measures
2.3.1. Objectively measured sitting time and moderate-vigorous physical activity (MVPA). Sitting time was measured objectively using the activPAL3 activity monitor (PAL Technologies Limited, Glasgow, UK), which provides highly accurate measures of sitting time and sitting accumulation (Lyden et al., 2012). Participants were asked to wear the activPAL for seven consecutive days (24 h/day) following the onsite assessment. The monitor was waterproofed and secured to the anterior mid-line of the right thigh, about one third down from the hip, using hypoallergenic adhesive material. During waking hours (apart from water-based activities) participants also wore the tri-axial Actigraph GT3X+ activity monitor (ActiGraph, Pensacola, Florida) on an elastic belt over their right hip. Participants were asked to record sleep and waking times, work hours and any device removals > 15 min in a logbook.
Activity monitor data were processed in SAS 9.3 (SAS Institute Inc., Cary NC), with reference to participant logbooks. Quality controls were conducted before (e.g. diary entry errors) and after processing (visual checking). For activPAL data, events were coded as: awake, nonwear, or at work when they were mostly (≥50%) within these periods. Non-wear time and sleep were excluded. Workplace time was taken as all work hours for this employer from any location. Days were considered valid for workplace time when the device was worn for ≥ 80% of work hours (see Edwardson et al., 2016 for details of compliance). Times spent sitting, sitting for ≥ 30 min continuously (prolonged sitting), standing and stepping during work hours were averaged from the totals for valid days and standardised to an 8-h day. Time, rather than the number of prolonged bouts, was used as the outcome as it provides a more informative measure of the extent or duration of exposure to this potential health risk.
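To make the outcome definitions concrete, the sketch below (a simplified illustration; the authors' actual processing was done in SAS and is more involved) takes the durations of sitting bouts within work hours and computes total and prolonged (≥30 min) sitting time standardized to an 8-h workday:

```python
def workday_sitting(bouts_min, work_hours):
    """bouts_min: durations (min) of individual sitting bouts at work.
    Returns (total, prolonged) sitting minutes per standardized 8-h day."""
    total = sum(bouts_min)
    prolonged = sum(b for b in bouts_min if b >= 30)  # unbroken >= 30 min
    scale = 8.0 / work_hours                           # standardize to 8 h
    return total * scale, prolonged * scale

# Hypothetical day: 7.6 worked hours, bout durations in minutes
bouts = [35, 12, 55, 8, 18, 20, 40, 15, 22, 25, 30, 10, 45, 28]
total, prolonged = workday_sitting(bouts, 7.6)
print(f"total: {total:.0f} min/8-h day; prolonged: {prolonged:.0f} min/8-h day")
```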
The GT3X+ data (extracted as 60-s epochs) were used to identify MVPA (Harrington et al., 2011) based on all minutes with ≥1952 vertical acceleration counts (Freedson et al., 1998) on valid days (≥10 h waking wear time). The activPAL estimation of MVPA, using a cadence-based equation, does not have high agreement with referent methods (Harrington et al., 2011). Non-wear time (≥60 min of 0 counts, allowing for up to 2 min with 1-49 counts) was excluded, as was sleep (McVeigh et al., 2015). Non-work time excluded work for any employer, and days the participant reported working but did not indicate work times. Non-work MVPA (min/day) was calculated using a weighted daily average (average non-work day MVPA × 2/7 + non-work time MVPA on work days × 5/7) to account for differences in non-work time on such days and the number of work and non-work days during the monitoring period.
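The weighting for non-work MVPA can be written directly from the formula in the text (the values shown are illustrative):

```python
def weighted_nonwork_mvpa(nonwork_day_mvpa, nonwork_time_workday_mvpa):
    """Weighted daily average of non-work MVPA (min/day), assuming a
    5 work-day / 2 non-work-day week as in the text."""
    return nonwork_day_mvpa * 2 / 7 + nonwork_time_workday_mvpa * 5 / 7

# Hypothetical averages: 40 min MVPA on non-work days,
# 15 min MVPA during non-work time on work days.
print(f"{weighted_nonwork_mvpa(40, 15):.1f} min/day")  # 22.1
```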
Socio-demographic and health-related variables
Participants reported their age, gender, ethnicity (Caucasian; Asian; other), marital status (married/de facto; separated/divorced/widowed; never married), educational attainment (high school or lower; trade/vocational; university level) and smoking status at work (yes; no). Non-work MVPA was calculated as above. Body mass index (BMI) was calculated from height, measured using a portable stadiometer (average of two measures; a third if the difference was ≥ 0.5 cm), and mass, measured to the nearest 0.1 kg using bioelectrical impedance analysis scales. BMI was categorised as underweight (BMI < 18.5 kg/m²), healthy (18.5-<25 kg/m²), overweight (25-<30 kg/m²) and obese (≥30 kg/m²). Given only one underweight participant, the underweight and healthy weight categories were combined.
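A direct encoding of the BMI categorisation used here, with the underweight and healthy categories merged as in the analysis:

```python
def bmi_category(height_m: float, mass_kg: float) -> str:
    """Categorise BMI (kg/m^2); underweight (< 18.5) merged into healthy."""
    bmi = mass_kg / height_m ** 2
    if bmi < 25:
        return "healthy"      # includes underweight (< 18.5)
    if bmi < 30:
        return "overweight"
    return "obese"

print(bmi_category(1.70, 68.0))   # healthy (BMI ~ 23.5)
print(bmi_category(1.70, 90.0))   # obese (BMI ~ 31.1)
```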
Work-related variables
Individual-level work-related variables included: a measure of working hours - 1.0 FTE (yes; no), tenure at the current workplace (<3 years; 3-5 years; >5 years), and occupational skill level (managers; professionals/associate professionals; clerical/sales/services workers).
Cognitive-social variables
Six cognitive-social constructs were assessed: workspace satisfaction (average of four items); knowledge (five items); barrier self-efficacy (nine items); perceived behavioral control (five items); perceived organisational social norms (eight items); and frequency of use of self-regulation strategies (10 items). These were adapted from the physical activity literature or developed for the trial to be specific to workplace sitting (Dunstan et al., 2013); for example, barrier self-efficacy related to barriers to reducing workplace sitting, and perceived organisational social norms related to norms about workplace sitting/standing. Items were measured on 1-5 Likert scales (strongly disagree-strongly agree; not at all confident-very confident; never-very often). Item questions and construct internal consistency are provided in Supplementary Table 1. Cronbach's alpha coefficients ranged from 0.50 (knowledge) to 0.92 (barrier self-efficacy). Two items from the Health and Work Questionnaire (Shikiar et al., 2003) assessed job control (How much control did you feel you had over how you did your job this week?) and overall stress (Overall, how stressed have you felt this week?) on 10-point scales (1 = no control, 10 = total control; 1 = not stressed at all, 10 = very stressed). Participants also self-reported their desired proportion of the day spent sitting at work (categorised as < 50%; ≥50%).
Statistical analyses
Descriptive statistics were calculated for the whole sample and by worksite. To assess the correlates of total and prolonged sitting time (min/8-h workday), linear mixed models were used, with worksite cluster specified as a random effect. Models were limited to participants with complete data for outcomes and covariates (n = 214). Potential correlates were entered in three blocks: (i) socio-demographic and health-related variables; (ii) work-related variables; and (iii) cognitive-social variables. As this study was exploratory in nature, the final adjusted models were obtained using backwards elimination. All potential correlates were forced into the model and the variables with the highest p-values were removed one by one until only those with p < 0.20 remained (Faraway, 2002). Age and gender were retained in all models. Likelihood ratio tests were used to assess goodness of fit after variable removal, and Akaike's Information Criterion and Bayesian Information Criterion were calculated to compare models. Retained variables from previous blocks were included in successive blocks. Variance Inflation Factors (VIFs) were < 2.5 in all models. The minimum difference of interest for total and prolonged sitting time was 45 min (Dunstan et al., 2013).
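A linear mixed model with a worksite random intercept, of the kind described, can be sketched with statsmodels; the dataframe columns and file name are hypothetical, and the authors' actual analysis was run in Stata 12.1:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataframe: one row per participant, with outcome
# 'sit_min' (min/8-h workday), fixed-effect covariates, and 'worksite'.
df = pd.read_csv("suv_baseline.csv")   # placeholder file name

model = smf.mixedlm(
    "sit_min ~ age + C(gender) + C(bmi_cat) + C(tenure)",
    data=df,
    groups=df["worksite"],             # random intercept per worksite
)
fit = model.fit(reml=True)
print(fit.summary())
```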
To assess worksite variation in the outcome variables, the random intercept for worksite was tested by likelihood ratio test. The difference between each worksite-specific mean and the overall mean was estimated using Best Linear Unbiased Predictions. Worksite variation was considered both unadjusted and after correcting for compositional effects (i.e., individual attributes not pertaining to work).
Data were analysed in Stata 12.1 (StataCorp LP, College Station, TX); p < 0.05 was considered statistically significant.
Participant characteristics
Participant characteristics are presented in Table 1. The majority (69%) were women and 67% were aged 35-55 years, which was broadly typical of all departmental employees (72% women; 59% aged 35-<55 years) (Department of Human Services, 2014). Most were Caucasian, worked in clerical/administrative roles and had tenures >5 years. The sites were varied in their composition; for example, the proportion university qualified ranged from 14% (site G) to 75% (site D).
On average, approximately four-fifths of working hours were spent sitting, with 42% spent in prolonged sitting bouts. Comparatively less time was spent standing and stepping (Table 2). Sitting time was proportionately higher on work days than non-work days.
Correlates of total workplace sitting time
In terms of socio-demographic and health-related variables (Block 1), marital status and BMI category were significant correlates of total workplace sitting time, while work smoking status, ethnicity, non-work MVPA and education dropped out of the model (see Table 3). Of the work-related variables (Block 2), only tenure was significantly associated with total workplace sitting. No cognitive-social variables (Block 3) were significantly correlated, with all factors other than knowledge and use of self-regulation strategies dropping out. Adjustment for cognitive-social variables did not significantly alter effect sizes, although the overall test for marital status became non-significant (p = 0.07). In the fully adjusted model, participants with an obese BMI averaged 21 min (per 8-h workday) less workplace sitting time (ref: healthy BMI). Tenure of 3-5 years was associated with an average 23 min of additional workplace sitting time (ref: >5 years). Participants who were separated, divorced or widowed spent on average 15 min less time sitting (ref: married/de facto). Neither age nor gender was a significant correlate.
The significant variation between sites remained evident across each model. In the final model, the ICC was 0.144 (95% CI: 0.042, 0.388), indicating that 14% of workplace sitting variation was explained by worksite differences (although the margin of error was wide). Fig. 1 shows the worksite variation in total workplace sitting time. Unadjusted, the site average was 378 min/8-h workday (95% CI: 368, 389 min). Worksites varied from 21 min below (worksite A) to 22 min above average (worksite N). After adjusting for socio-demographic and health-related variables, worksites varied from 21 min below (worksite B) through to 27 min above (worksite N) the average (388 min/8-h workday, 95% CI: 357, 418 min).
Correlates of workplace sitting time accumulated in prolonged bouts
Table 4 shows the correlates of workplace sitting time accumulated in prolonged bouts. BMI category was the only significant Block 1 variable. In Block 2, the only significant correlate was tenure, although occupational category remained in the model. None of the cognitive-social variables (Block 3) were significantly associated with prolonged sitting, although perceived behavioral control and perceived organisational norms remained in the model. The addition of these cognitive-social variables did not attenuate the associations of BMI and tenure with prolonged sitting time. Participants who were overweight or obese averaged 50 and 40 min/8-h workday less prolonged sitting time, respectively (ref: healthy BMI). Tenure of 3-5 years was associated with an average 50 min/8-h workday additional prolonged workplace sitting (ref: >5 years). The non-significant variables remaining in the model were estimated with a wide margin of error but indicated potentially large differences in prolonged sitting time (e.g. nearly 1 h difference between professionals/associate professionals and managers). Worksites varied significantly in average prolonged workplace sitting time, even in the fully adjusted model. Fig. 2 depicts the worksite variation in prolonged sitting time, unadjusted and after adjustment for socio-demographic and health characteristics. Around a mean of 197 (95% CI: 173, 220) min/8-h workday of prolonged sitting time, sites varied from 44 min below (Site B) to 57 min above average (Site N). After adjustment, sites varied from 49 min below (Site B) to 62 min above (Site N) the overall mean (200 min/8-h workday; 95% CI: 135, 265).
Discussion
To our knowledge, this study is the first to examine correlates of workplace sitting time (total and in prolonged bouts) using high-quality objective measurement. Shorter occupational tenures were associated with higher levels of total and prolonged workplace sitting, while excess BMI was associated with lower levels of total and prolonged workplace sitting.
This sample of office-based workers engaged in high amounts of workplace sitting on average, with wide variation between individuals and worksites. On average, 79% of working hours were spent sitting, more than half of which was prolonged sitting (≥30 min bouts). These findings are consistent with other studies within office environments (Parry and Straker, 2013; Clemes et al., 2014; Healy et al., 2013) and highlight the need for interventions in these settings.
None of the socio-demographic factors emerged as significant correlates of workplace sitting. Previous studies with population-based samples have reported other socio-demographic factors, such as younger age (De Cocker et al., 2014; Bennie et al., 2015) and higher educational attainment (Wallmann-Sperlich et al., 2014; De Cocker et al., 2014), to be associated with higher self-reported workplace sitting. The homogeneity of our sample, involving a single employer and industry, may have limited the ability to test these associations.
BMI emerged as a significant inverse correlate of total and prolonged workplace sitting, contrary to previous studies (De Cocker et al., 2014; Chau et al., 2012; Levine et al., 2005). Higher BMIs have been associated with increased prevalence of work-related musculoskeletal disorders (da Costa and Vieira, 2010; Schmier et al., 2006). Participants with greater adiposity may possibly experience more physical discomfort in traditional seated arrangements, which could be alleviated by more frequent breaks (Thorp et al., 2014). However, we cannot rule out possible bias and measurement error. The knowledge of having activity monitored could have altered behavior differentially in our sample. Another possible explanation concerns the validity of the activPAL. While the activPAL appears to perform similarly for obese and healthy weight participants when assessing walking (Ryan et al., 2006), this has not been established for sitting and standing (delineated by estimated monitor angles, assumed to indicate thigh angle). Differential measurement error could arise if overweight/obesity affects thigh shape in a way relevant to device function, or how participants sit. Perching forward, in particular, can register as standing (Steeves et al., 2015).
[Table 2 (fragment): standing and stepping percentages; data are mean ± standard deviation with linearized variance estimation; percentages are calculated as a proportion of waking monitor wear time.]
[Table 3: Linear mixed models examining correlates of total workplace sitting time (min/8-h workday).]
Of the work-related factors, tenure greater than five years was associated with less total and prolonged workplace sitting time. Previous research has found tenures of at least five years to be associated with higher self-reported sitting (Vandelanotte et al., 2013). It is possible that tenure acts indirectly through other factors such as seniority; workers with longer tenure may have responsibilities requiring greater movement around the office. However, only 7% reported their occupation as managerial. The underlying mechanisms behind this finding should be explored further. The effect sizes for BMI and tenure for prolonged sitting time were large, meeting the minimum difference of interest set for the broader intervention trial (45 min of total/prolonged sitting) (Dunstan et al., 2013). Effect sizes for total sitting time were more modest-approximately 15-30 min-although these differences were seen in the absence of any workplace intervention.
None of the cognitive-social constructs emerged as significant correlates. Similar cognitive-social constructs assessed previously (Wallmann-Sperlich et al., 2014; De Cocker et al., 2014) were also not found to be strong influences on workplace sitting. Nonetheless, given the observed margins of error, our study did not provide evidence to rule out the importance of these factors. There were indications of a potential positive association between prolonged sitting time and perceived organisational norms, and a potential negative association between prolonged sitting time and perceived behavioral control; the latter finding is in line with some previous studies (De Cocker et al., 2014; Prapavessis et al., 2015).
We observed large and significant differences between worksites in total and prolonged workplace sitting time, in both unadjusted and adjusted models. Anecdotally, the level of task variation differed between sites: the teams with lower than average sitting time (sites A-D) were not predominately telephone-based, unlike others (e.g. H and L) that had higher sitting levels. More detailed assessment of job tasks or content (i.e. beyond assessing occupation) should be considered in future studies. Further exploration is needed to identify potential worksite-level factors influencing sitting that were not measured in our study.
An ecological model of sedentary behavior (Owen et al., 2011) suggests that there are multiple levels of influence on behavior. A significant limitation is that the variables assessed as potential correlates-and thus, our findings-reflect a data-driven approach. Not all of these potential influences were captured and others that were not assessed may also be of importance. In addition, while the cognitive-social constructs had theoretical relevance to the logic of the intervention, we did not aim to comprehensively test a single theory. The newly developed measures may also be affected by measurement error. This could account for the large proportion of unexplained variance in workplace sitting. Future studies should assess the potential influence of variables such as physical environments, organisational and social factors on total and prolonged sitting as these may be amenable to workplace environmental and policy changes.
Participants were government employees with mostly administrative and telephone-based customer service roles and were not randomly sampled. Our findings may not be generalizable to all office-based workers or organisations. However, we found limited evidence to suggest that participants were atypical, with high participation rates within most teams, and participants broadly similar to the departmental gender and age profile. While the broader intervention trial was powered to assess changes in the primary outcome, wide estimates of error suggest this study was underpowered and meaningful associations were possibly not detected. Studies that investigate the correlates of objectively measured sitting across larger, more diverse groups of workers are required to address these issues.
Conclusions
In this sample of office-based workers, shorter tenure and lower BMI levels were associated with higher levels of total and prolonged workplace sitting time, while significant variation in sitting time was observed across worksites. This suggests that identifying and assessing potential workplace-level correlates, such as physical environment and social-cultural factors, may be a useful next step in the research agenda for understanding and influencing workplace sitting. Overall, while these findings contribute to the existing limited evidence base on correlates of workplace sitting, replication and confirmation of our findings are needed. | 2018-04-03T02:32:03.387Z | 2016-06-15T00:00:00.000 | {
"year": 2016,
"sha1": "742c57c2d75f8ba448dbfd3d524c3552c7a8a582",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.pmedr.2016.06.011",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a74195d5b84d00be818eb2825290d4b399d1230e",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
149117758 | pes2o/s2orc | v3-fos-license | Problems and strategies of teaching and translating English idioms in Albanian schools-Theoretical and practical implications
The aim of this paper is to present the main strategies, problems and difficulties that English language teachers encounter while teaching idioms in Albanian schools. Idiomatic language and expressions play an important part in English phraseology, reflecting the nation's mentality and its vision of the world. Translating idioms requires good competence in the target language, which makes the issue a difficult and challenging task for teachers and translators alike. The scope of this study is to show theoretical and practical implications of idiom teaching and translation in primary and secondary schools of the Korça region. It outlines the main problems, methods and ways which have been investigated through a questionnaire administered to 41 English teachers, 7 of whom teach in villages and 34 in the city. A quantitative analysis is carried out to give a general view of idiom teaching and the place translation acquires in the target language. The most effective strategies and methods are outlined to illustrate the results of this study. As teaching idioms is considered to be a difficult process, this study aims at presenting the data collected, showing Albanian teachers' experience and the role of the target language in explaining idiom meaning.
Introduction
Teaching idioms is a very difficult and challenging process. It is well known that English is rich in idioms, collocations, set phrases, proverbs and quotations. According to E. M. Kirkpatrick and C. M. Schwartz (1996: cover 4), "Idioms are those modes of expression peculiar to a language which frequently defy logical and grammatical rules, but without which both speech and writing would lose much of their vitality and color." The field of idiomaticity is considered by many researchers an important area of linguistics. Mastering idioms is believed by many scholars to be a sign of proficiency for students as EFL learners. It is obvious that further study should be done to enrich this field. Figurative language is an area often neglected in the teaching of vocabulary (Lazar, 1996) but of crucial importance to be considered.
The aim of this article is to identify teachers' problems in processing and explaining idioms in class and also to outline which strategies teachers use and find more effective. Idioms are part of the vocabulary in students' course books and no one can neglect teaching them, since they constitute an important part. Idiom teaching and learning are often considered a hard task in L2 learning. The difficulty lies in the fact that the meaning of an idiom is not the sum of the individual literal meanings of its constituents. An idiom is made up of simple and random words, but when it comes to translating or teaching them, there stands the difficulty.
A good cultural knowledge should be acquired by teachers in order to achieve a good grammatical and semantic translation. According to Culler (1976), languages contain concepts which differ radically from one another, since each language organizes the world differently.
The disparity among languages results in different phrases even where both nations have identified similar social rules. This makes the process of translation a difficult task for teachers, who must transfer the meaning from one language to another, and a source of difficulty for EFL students to acquire.
Theoretical background
There has been a great contribution by linguists such as Baker, M. (1992). In order to understand translation strategies, we should first know what the translation process is. According to Brislin (1976, p. 1), translation is defined as "the general term referring to the transfer of thoughts and ideas from one language (source) to another (target), whether the languages have established orthographies or do not have such standardization or whether one or both languages is based on signs, as with sign languages of the deaf." According to Lotfipour (1997), the translator's task is to create conditions under which the source language author and the target language reader can interact with one another. Those who are involved in the process of translating, and are called translators, are supposed to be the agents for transferring messages from one language to another, while preserving the underlying cultural and discourse ideas and values (Azabdaftary, 1996). According to Bell (1991: xv), the goal of translation is "the transformation of a text originally in one language into an equivalent text in a different language retaining, as far as possible, the content of the message and the formal features and functional roles of the original text." Newmark (1981: 7) defines translation as "a craft consisting in the attempt to replace a written message and/or statement in one language by the same message and/or statement in another language". According to Wills (1982: 13), translation theory examines the transferability of texts from one language to another language, as well as the "similarity of the effect produced by the source language text (SLT) and that produced by the target language text (TLT)".
What are translation strategies?
Very little has actually been written about translation strategies within the field of translation theory, since some scholars consider it a useless concept in the first place. These strategies are defined by Loescher (1991: 76) as "a potentially conscious procedure for solving a problem faced in translating a text, or any segment of it." Jaaskelainen (1999, p. 71) considers strategy as "a series of competencies, a set of steps or processes that favor the acquisition, storage, and/or utilization of information." He continues that strategies are "heuristic and flexible in nature, and their adoption implies a decision influenced by amendments in the translator's objectives." According to Leppihalme (1997: 24), translation strategies are "means which the translator, within the confines of his/her existing knowledge, considers to be the best in order to reach the goals set by the translation task." Another important concept is the definition of the term 'idiom'. Being part of phraseology, idioms are defined as "a string of words whose meaning is different from the meaning conveyed by the individual words" (Larson, 1984, p. 20). In the Longman dictionary of English idioms (Longman Group Ltd, 1979) idioms are referred to as "a fixed group of words with a special different meaning from the meaning of the separate words". J. Seidl and W. McMordie (1978) give the definition: "The idiom is some quantity of words which, under condition of their joint consideration, mean something absolutely another in comparison with the individual word meanings, forming an idiom." Moon (2006) defined an idiom as a fixed sequence of words which has a meaning beyond that of the constituent parts. Another definition, contributed by Irujo (1986a, p. 2), is that "an idiom is a conventionalized expression whose meaning cannot be determined from the meaning of its parts. Idioms differ from other figurative expressions, such as similes and metaphors, in that they have conventionalized meanings." Jani Thomai (1981) states: "An idiom is a linguistic unit with an autonomous meaning consisting of two or more words, with a stable construction, historically formed for a long time, with the value of one word and is reproduced in speech and functions in language as a ready-made unit." For Amosova (1963), "an idiom is a phraseological unit in which we cannot point out which word comprises the main and basic semantic feature of the unit".
The definitions of the idiom are quite unified; they seem to share almost the same concept, and no major contradictions seem to exist between them.
Taking into consideration the definition of an idiom as a group of words whose meaning is different from the meaning of its constituents, translation is not an easy task to perform. That is why teachers and many other scholars find this process difficult. This is the reason why we have intended to present the strategies and how to overcome difficulties in the teaching and translation of idioms in the Albanian context.
Idiom translation
Idiom translation poses a serious problem for the translator because each culture has its own way of expressing things. Some idioms might be difficult to translate because of the lack of a corresponding idiom in the target language. The translator has to think about an appropriate translation strategy for the phrase. Idiom translation requires a deep insight into the culture, a good understanding and appropriate analysis before an equivalent is given. Idioms are culture-bound and language-specific. Larson (1984: 143) agrees, as he argues that the first crucial step in the translation of idioms is to be absolutely certain of the meaning of the source language idiom. This is why recognizing and being able to use idioms appropriately requires excellent command of the source language.
There are some ways to carry out translation of idioms from one language to another.
Identical pairs is one of them. Here it is easy for the translator to find the equivalent in the target language, such as: someone's blood boils - i zjen gjaku (është i ri), never look a gift horse in the mouth - kali i falur nuk shihet nga dhëmbët, kill two birds with one stone - vras dy zogj me një gur, etc.
Identical messages is another one. This method comprises the idioms which are different in form and meaning but convey the same message, e.g. kick the bucket - ndërroj jetë, carry coals to Newcastle - të bësh një vrimë në ujë, put the cart before the horse - peshku në det tigani në zjarr.
Calque or loan translation covers idioms borrowed and accepted from another language (usually from Greek mythology, the Bible, Latin, French or German), such as: all roads lead to Rome - të gjitha rrugët të çojnë në Romë, be more Catholic than the Pope - të jesh më katolik se Papa, Pandora's box - kutia e Pandorës, a Pyrrhic victory - fitore si e Pirros, the Trojan horse - kali Trojan, etc.
Translation of the message - the idiom in the source language cannot be conveyed in the target language except by translating the message it is supposed to convey, e.g. be in the same boat - të gjendesh në të njëjtën situatë, every cloud has a silver lining - mbas çdo të keqe vjen një e mirë, stick one's neck out - të të hajë kurrizi (të flasësh me kurajo duke marë përsipër rreziqe), spill the beans - nxjerr të palarat në shesh, tie one on - të bëhesh tapë (i dehur), etc.
Uncommon translations. Sometimes an idiom is translated into another language and sounds unfamiliar, but the metaphor behind it is clear enough to be understood. Such translations usually occur when translating novels and dubbing films, and rarely become part of the lexicon of the target language.
There is another way of translating idioms and collocations, called word-for-word translation. This type of translation is similar to primitive machine translation and serves no practical purpose. It is considered to be the worst possible translation strategy. For instance, according to Larson (1984: 116), a literal translation of an idiom will usually result in complete nonsense in the target language. Newmark (1981: 125) also stresses that idioms should never be translated word for word. Ingo (1990: 246) agrees with Larson and Newmark, stating that literal translation of an idiom is rarely successful and should therefore be avoided at all costs.
The most effective strategy is translating by finding an equivalent expression in the target language matching the original source language idiom. It is very important to convey the style and manner of the original source language idiom. If there is no equivalent expression, the translator has to use a "non-idiomatic" expression which conveys the same meaning, as the sketch below illustrates.
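As a toy illustration of this ordering of strategies (a hypothetical Python sketch, not a real translation resource), one might first look up an equivalent Albanian idiom and fall back to a plain paraphrase of the message, never resorting to word-for-word translation:

# Toy dictionaries drawn from the examples above; real coverage would
# require a proper bilingual phraseological dictionary.
EQUIVALENT_IDIOM = {   # identical pairs / identical messages
    "kill two birds with one stone": "vras dy zogj me një gur",
    "kick the bucket": "ndërroj jetë",
}
MESSAGE_PARAPHRASE = {  # translation of the message only
    "be in the same boat": "të gjendesh në të njëjtën situatë",
    "spill the beans": "nxjerr të palarat në shesh",
}

def translate_idiom(idiom: str) -> str:
    # Prefer an equivalent target-language idiom; otherwise paraphrase
    # the message; never translate word for word.
    if idiom in EQUIVALENT_IDIOM:
        return EQUIVALENT_IDIOM[idiom]
    if idiom in MESSAGE_PARAPHRASE:
        return MESSAGE_PARAPHRASE[idiom]
    return "[no equivalent found: paraphrase '" + idiom + "' non-idiomatically]"

print(translate_idiom("kick the bucket"))   # ndërroj jetë
print(translate_idiom("tie one on"))        # falls through to the paraphrase note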
Problems in Teaching L2 Idioms
Idioms are among the most difficult parts of the vocabulary to teach. They are not normally taught at the elementary level. Students may nevertheless be faced with idiomatic expressions like phrasal verbs from the third and fourth grades, such as: switch on the TV, wake up, turn on/off the light, hurry up, stand up, sit down, hands up (in the air), etc. They often learn from their teachers phrases like: carry on, raise your voice, look at the blackboard, etc. Even though these expressions are not called idioms, they carry figurative meaning, being made up of a verb and a particle or adverb.
Idioms are usually not taught in the L2 classroom due to the fact that teachers either do not know their meaning or do not know their origin. Idioms are not treated in L2 classrooms as regularly as they might be, because of time pressures (Mola, 1993). According to Lennon (1998), exercises of a problem-solving nature can help learners to discover the metaphors in idiomatic expressions. Furthermore, Lennon believes that students will become highly motivated to translate their language's metaphors into the target language so as to share with the class their own culture's method of metaphor encoding.
It is very important to engage students in the four skills (reading, listening, writing, speaking), because this integrates idiom knowledge across all of them. Teachers should design various activities for students to use with English idioms so that students can collaborate with peers and utilize idioms in different contexts. Idioms should not be taught directly at all (Mantyla, 2004). She considers the best teaching policy to be a method where the students' attention is focused on the common characteristics of idioms.
When to teach idioms
A number of questions arise for researchers as to when to teach idioms: at what level, at what age, in primary or secondary school, etc. Idioms represent highly idiomatic language: in an idiomatic expression like let the cat out of the bag - nxjerr të palarat në shesh or it is raining cats and dogs - bie shi me gjyma, there is no relation to cats and dogs; the overall meaning is composed of several words whose individual meanings do not seem to contribute to the meaning of the idiom as a whole (reveal a secret; heavy rain). In addition to this apparent incongruity between form and meaning, the scarcity of teaching materials and the lack of a clear methodology make idioms a stumbling block for EFL students (Irene López Rodriguez and Elena Maria Garcia Moreno, n.d.: 241). "Some students, of course, only learn English because it is on the curriculum at primary or secondary level, but for others, studying the language reflects some kind of choice." (J. Harmer, 2007: 11).
Teaching tips and strategies
Learning idioms is one of the fundamental aspects of language learning that is often postponed until learners reach advanced levels (Irujo, 1986a). It is very important to choose the idioms which are necessary, i.e. those which are most frequent in reading texts and conversations. Irujo indicates that comparing and contrasting literal and figurative meanings of idioms will enable students to recognize idiomatic usage and to interpret idioms accordingly. Irujo (1986b) emphasizes that most students are very interested in learning idiomatic expressions, so it would be a wrong decision to postpone learning them until students reach advanced levels. A research study conducted by Wu (2008) showed that English idioms with illustrations could improve college students' understanding of idioms.
According to Nilsen, A.P. and Nilsen, D.L.F. (2003), by knowing the origins of idioms, students can more easily figure out the metaphorical meanings. He continues that discussions focused on the origins of words and phrases help students understand how language transforms over time and thereby enable them to hypothesize in a more meaningful way about the meaning of unfamiliar words or phrases. Zigo, D. (2001) states that "when teachers encourage students' natural inclinations toward narrative forms of meaning making, in conjunction with text-based lessons, the students appear more engaged with textual content and demonstrate less resistance to reading material that might otherwise be challenging or frustrating." He sums up that students respond to texts through narrative approaches, encouraging them to engage in role-playing and to allow memories, images, and stories to surface as they begin to develop interpretations. They are more likely to understand, recall, and care about what a metaphor means after having played with the word through a highly personalized, storied exploration of their own experiences of metaphorical language. According to the website www.teachingenglish.org.uk, the following tips are given for teaching idioms:
- The teacher deals with proverbs and idioms when they crop up in their contexts, such as in reading and listening tasks, or when one is used naturally in class.
- The teacher teaches several 'body idioms' together, e.g. to be head and shoulders above the rest, to be long in the tooth, to shoot yourself in the foot, etc. It will be easier for students to remember some of them if they are in groups.
- The teacher uses visuals and pictures to help learners remember them. For example, draw a bird in the hand and two in the bush.
- The teacher does some matching activities. For example, give students five proverbs that have been cut in half and get them to match them up.
- The teacher asks students if any of the proverbs translate directly into their own language. Most of the time students will know a similar expression in their language, and comparing the differences between English and their language can help them remember.
- The teacher tries to put idioms into context: use situations in which people actually use the expressions and get students to create dialogues or role-plays using a few of the proverbs or idioms to reinforce the meaning.
- The teacher explains to students that it may be more useful for them to be able to understand the expressions when they hear them than to be able to produce them. Ask them how they would react if this type of expression were used in their language. Would they find it a bit strange?
- The teacher does not overload students with too many at a time. Five is probably a good number for one class.
The study
This study has a qualitative nature, complemented by a quantitative analysis of a questionnaire handed to city and village teachers of the Korça district. The study analyses the data taken from each questionnaire, presented in the charts and diagrams below.
Aim and research questions
The overall aim of this research paper is to analyse what strategies and methods teachers use in idiom teaching and translation in primary and secondary schools of the rural and urban zones of the Korça district. This paper intends to show teachers' methodology of idiom translation in their classes and what different strategies they use to teach idioms in primary and secondary schools. This study seeks to answer the following research questions.
1. What are the main translation strategies teachers use in teaching and translating idioms in class?
2. What different strategies do teachers find more effective in teaching idioms in primary and secondary schools?
3. How do teachers cope with idiom translation in class?
The subjects in the study and the research instruments
This study is based on a questionnaire about the methods, strategies, and difficulties teachers face in teaching idioms in class. The participants involved in this study were 41 English teachers of Korça city and some villages (Pirg, Zvirinë, Vashtmi, Libonik, Mborje, Podgorie and Pojan). The questionnaire was distributed to 12 teachers teaching in the above villages and 29 teachers of Korça public schools. Of the 29 city teachers, 10 teach in private schools and English language centers. Four of the teachers teach in both private centers and public schools, of whom 2 in villages and the other 2 in Korça public schools. All of them teach in both primary and secondary schools, from the 3rd to the 9th grade. Primary school in Albania includes grades 3 to 5, while secondary covers grades 6 to 9. The questionnaire was handed out and collected over two weeks. The questionnaire consisted of 16 questions, and the data derived were analyzed by using descriptive statistical methods.
Teachers covered by this research have between 5 and 15 years of experience teaching in primary and secondary schools. Teachers worked with English texts of different levels in different schools; thus different answers are provided depending on idiom frequency in their English texts. Most of them worked with English Zone 1, English Zone 2 and English Zone 3 (Margarita Prieto; Lauren Robbins; McGraw-Hill Companies, Inc., 2006) from the 3rd to the 5th grade, and some with Welcome 1, Welcome 2 and Welcome 3 (Gray E.; Evans V., Express Publishing, 2008). In the secondary grades they use Elevator International 1, Elevator 2, Elevator 3 and Elevator 4 (Lucy Norris; Annie Taylor, Richmond Publishing, 2008) and Access 1, Access 2, Access 3 and Access 4 (Virginia Evans; Neil O'Sulivan, Express Publishing, 2008). One of the public secondary schools in Korça city worked with Off we go (Zana Lita; Suzana Balli, Pegi, 2006).
Methodology
The methodology is based on a quantitative analysis conducted with a questionnaire handed out to, and filled in by, teachers of primary and secondary schools of Korça city and the surrounding villages. The questionnaire was selected as the most appropriate methodological tool for conducting the survey. According to Papanastasiou and Papanastasiou (2005), a questionnaire is an important means of collecting data from many people, on which quantitative and statistical analysis can then be performed. Data results are interpreted through graphs and charts using descriptive statistical methods.
Data Analysis of the questionnaire
When all questionnaires were collected, data were recorded and registered on the computer. Notes were kept for each question and answers were recorded. Of the 16 questions of the questionnaire, 11 included multiple-choice answers while the other 5 required teacher interpretation and comments. Some teachers gave no comment on some questions.
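As a minimal sketch of how such multiple-choice answers can be tabulated (column names and the toy responses below are invented for illustration; the study's own tallies follow below), one might write:

import pandas as pd

# Hypothetical coded responses for two of the multiple-choice questions.
responses = pd.DataFrame({
    "q1_strategy": ["paraphrase", "similar_form_meaning", "paraphrase",
                    "similar_meaning_diff_form", "similar_form_meaning"],
    "q3_translate_into_albanian": ["yes", "sometimes", "yes", "often", "yes"],
})

for col in responses.columns:
    counts = responses[col].value_counts()
    share = (100 * counts / len(responses)).round(1)
    print(col)
    print(pd.DataFrame({"n": counts, "%": share}), "\n")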
According to the results of the questionnaire, the following data were recorded:
► 40% of the teachers chose the translation-by-paraphrase strategy and another 41% translation by using an idiom with similar form and meaning in the target language. 19% had chosen translation by using an idiom of similar meaning but dissimilar form, and 0% of them translation by omission (data are shown in Chart 1).
► In the second question, on which strategies teachers find useful while teaching, the first and the third, and in some cases the second and the third, strategies were pointed out as more useful (see the questionnaire). Most of the teachers who had chosen the first method said students understand idioms better; those who chose the second method said students can understand the meaning and the differences in form from the Albanian phrases, and that it is more appealing to the needs of learners, practical to learn and easy to remember. About 18 chose the third method: students feel comfortable and can understand better, and it makes students think and use their knowledge (vocabulary, structure, phrases). Some teachers chose more than one strategy: 7 teachers chose the first and the third method, 3 the second and the third.
► 24 out of 41 teachers find it necessary to translate idioms into Albanian, 10 answered often, 1 never, and 6 usually (see Chart 2). This happens when there is a low level of pupils in class, and mostly in primary education.
► To the question of when students acquire an idiom's meaning better, 24 of the teachers chose idiom translation into Albanian and 17 of them paraphrasing.
► Almost all the teachers find idiomatic language difficult to teach, because students cannot grasp the idiom meaning unless it is translated into Albanian, sometimes there is no corresponding idiom in the target language, and students do not make use of idioms, so they do not feel encouraged to learn them. 6 of the teachers do not find idiom teaching difficult.
► Students find the idioms in their course books difficult: around 36 teachers reported them difficult, 4 easy and 1 very difficult.
► 26 of the teachers answered 10-20% to the 8th question relating to idiom usage in students' communication activities, 13 put it at 40% and 2 at less than 60-70%.
► 26 of the teachers answered that students are interested in learning English idioms, 12 a little, 1 very interested and 2 answered no.
► Concerning students' interest in knowing idiom origin and background, 13 teachers reported some interest, 18 a little, 5 none and 2 very interested.
► Concerning the 11th question, on the drawbacks and disadvantages of learning idioms through the internet, most teachers answered that students would not get the right meaning and that the presence of the teacher is necessary. A few answered that there is no disadvantage, since examples and explanations are provided.
► The answers =50% and >80% were given in equal proportions to the question of how much students are able to use and remember idioms from the learning unit.
► 26 out of 41 teachers answered that it is not very easy for students to understand the idiom explanations and examples in the learning unit, 6 answered easy, 5 very easy, 3 difficult, and 1 very difficult.
► 35 teachers (85%) answered YES to the question of whether students could make faster progress in English language learning if they possessed idiomatic competence; 6 (15%) answered NO (see Chart 3).
► Regarding the 15th question, on the effects of using translation activities to learn English, most of the teachers shared the opinion that it is a useful activity to enhance students' vocabulary; develop listening and speaking skills; comprehend the meaning of certain linguistic items; remember and produce the foreign language; highlight specific differences between the mother tongue and English; and encourage critical thinking. A few answered that translation activities might not be suitable for all students' levels, that students take ready-made meanings, and that they lose the chance to speak and listen in the foreign language. 3 of the teachers gave no answer to this question.
► Referring to the materials teachers use in explaining idiom meaning, most of them used games, flashcards, and labels. Almost all used games and flashcards because they are more interesting and attractive to learners (especially young learners), and students understand and remember idiom meaning better (long-term memory). Some of the teachers varied the materials depending on the topic, module, etc. 3 teachers did not use such materials because of their lack in school.
Overall Discussions and Conclusions
This study aimed at identifying the strategies and methods teachers use during idiom teaching in the process of L2 learning, based on data analysed from a questionnaire. The research seeks to identify the different strategies teachers use in primary and secondary education, and to analyse the role of the target language (Albanian) in class.
Teaching idioms in class has shown itself to be a difficult task. The study has revealed difficulties in choosing which strategy to teach with; this varied with the level of the class. Teachers in primary education find translation by using an idiom with similar form and meaning in the target language effective, while teachers in secondary education could use translation by paraphrase as an alternative method. They use it because it expands students' vocabulary, develops critical thinking, and helps students understand the idiom meaning better and more clearly. Teachers find it useful when there is no equivalent in Albanian. However, they tend to use translation into the mother tongue in most cases, because they can then be sure that all the students at all levels have understood the idiom properly.
Based on the quantitative data, 59% of the teachers sometimes used Albanian in translating idioms. Almost the same percentage applies to when students acquire idiom meaning better, which suggests that teachers use paraphrasing and idiom translation into Albanian in almost the same proportion.
Idioms proved difficult because students do not use them in situational contexts, which discourages their use. Students find using and remembering idioms difficult and need the teacher's help; they should be given contextual support and many exercises to demonstrate the right usage. The study concludes that teachers find idioms difficult to teach because there is no equivalent idiom in the target language and the language is too idiomatic. Moreover, the lower the grade, the lower the percentage of idiom use. Idiomatic language is used very little due to the lack of necessity, and the differences in form and meaning between English and Albanian make it difficult for students to use idioms.
The questionnaire statistics show that 76% of students have little interest in knowing idiom origin and background. Most of the teachers were against idiom teaching through the internet because of the misunderstandings arising from the meanings students read there, the necessity of the teacher's presence in class, and because students get inappropriate connotations of meaning and do not interact and put idioms into practice. Some teachers acknowledged that they use the internet for entertainment and not for the learning process; a few of them shared the opposite opinion. 85% of the teachers commented that idioms help improve and increase idiomatic language competence; a few gave a negative answer.
The questionnaire results on the effects of using translation activities to learn English support the idea that such activities help students acquire meaning more easily and use idioms in everyday English, making the understanding of idioms more precise. Translation is necessary to comprehend, remember and produce the foreign language, while some of the teachers hold the opposite view, considering the process a barrier to developing communicative skills. They suggest these activities for early beginners and that they should not be used as a common practice.
The organization of idiom teaching in an English class should take into consideration the use of materials to teach idiom meaning. Games, flashcards, role-plays, and pictures have proved to be the most useful: students learn easily and commit idioms to long-term memory. Visual perception is an easier and faster way for students to acquire and understand partly motivated idioms.
Teachers should use a variety of methods and strategies depending on the topic and module. The mother tongue should generally be set aside in favour of teaching idioms through paraphrasing; this develops critical thinking, enlarges vocabulary and makes students think in English. Albanian should be used for translating idioms which are highly non-motivated, when it is hard to find any other explanation, and when half of the class has not acquired the idiom meaning.
This study intends to be helpful for L2 teachers who encounter difficulties and problems with such idiomatic language, suggesting what methods to use and exactly what strategies Albanian teachers find useful in foreign language teaching in primary and secondary schools, thereby addressing teachers' needs and requirements.
Chart 3.
Chart 4. Primary vs. Public Schools
Questionnaire items (extract):
Question 3. Do you find it necessary to translate idioms into Albanian while teaching and practicing idiomatic language in class? a) Yes b) No c) Sometimes d) Often
Question 4. How often do you use your mother tongue (Albanian) in translating idioms? a) Sometimes b) Often c) Never d) Usually
Question 5. When do students acquire an idiom meaning better? Why? a) By paraphrasing b) Idiom translation into Albanian
Question 6. Do you find idiomatic language difficult to teach? If yes, why?
Question 7. How do students find idioms in English course books? a) Difficult to use and practice b) Easy c) Not interesting d) Very difficult
Question 8. Do they use the idiomatic language they have learned in communicative activities? If yes, how often? a) 10-20% b) 40% c) Less than 60-70%
Question 9. Are students interested in learning English idioms? a) No b) Interested c) Extremely interested d) Very interested e) Little
Question 10. When encountering an unfamiliar English idiom, are students interested to know its origin and background? a) No b) A little c) Some interest d) Very interested e) Extremely interested
Question 11. Compared with traditional classroom learning with textbooks, what do you think are the drawbacks and disadvantages of learning idioms online?
Question 12. How much are students able to use and remember these idioms after learning them? <10% <30% =50% >80% >95%
Question 13. Is it easy to understand the explanation and examples of idioms in the learning unit? a) Very easy b) Easy c) Not very easy d) Difficult e) Very difficult | 2018-12-15T07:49:46.309Z | 2014-05-01T00:00:00.000 | {
"year": 2014,
"sha1": "a7f5fd26b5aa6157590782a664b7c6059722abf7",
"oa_license": "CCBYSA",
"oa_url": "https://revistia.com/index.php/ejser/article/view/6202/6055",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "a7f5fd26b5aa6157590782a664b7c6059722abf7",
"s2fieldsofstudy": [
"Linguistics",
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
244621261 | pes2o/s2orc | v3-fos-license | RURAL-URBAN MIGRATION AND UNEMPLOYMENT TENDENCY
The study examined the effect of rural-urban migration on unemployment tendency, while controlling for other variables. We make use of the instrumental variable approach and a probit controlling for endogeneity to determine the relationship between rural-urban migration and unemployment. The Cameroon labour force survey is used to estimate our results. Results show that the likelihood of unemployment decreases among rural-urban migrants compared to their rural counterparts who do not migrate. By the same token, holders of primary, secondary and tertiary levels of education are less likely to be unemployed relative to their counterparts with no education. These findings have a number of policy implications: the government could create an enabling environment for labour markets to work better for the youths seeking employment, and could invest rationally in education to enable the youth to become self-reliant instead of job seekers, through skill development and training.
INTRODUCTION
Over the last decades, identifying the factors accounting for intra-national and international population movements has underlain the growing body of literature on interregional migration. Grounded in the observation that entry into the labour force is the period when geographic mobility is highest, these movements have been explained by employment motives (Zax, 1991). Earlier studies provided the basis for the analysis of the links between migration choices and employment. As far as intra-national migration is concerned, individuals migrate in response to a gap between an expected urban and a de facto rural wage (Harris and Todaro, 1970). Based on the observation that urban wages are high and institutionally determined, migrants expect to secure either jobs or better-paying jobs at the destination. Sjaastad (1962) explained migration decisions as the outcome of human capital investment decisions. This view led to the explanation of labour moves as responses to either interregional wage differentials (Greenwood, 1985) or unemployment differences among local labour markets (Kriaa and Plassard, 1996). The "new economics of migration" added explanatory power to the neo-classical model: it advocated that migration is a collective endeavour enabling rural households to diversify incomes (Stark and Levhari, 1982). In this literature, migrants choose destinations where they are either well connected or have family/community ties (Munshi, 2003). While increasing the probability of migration, these networks are thought to influence the economic returns to migration, although the large empirical literature devoted to the individual labour market outcomes of migration has had mixed results. Both the volumes and patterns of migration have undergone important changes during the last few decades, making migration a critical issue of our times. Since the 1960s, the overall volume of international migrants has doubled. In 2000, the Population Division of the United Nations estimated the total number of international migrants at approximately 175 million. Thus, about 2.9 per cent of the world's population, or one in every 35 persons, is moving across borders (IOM, 2003). Taken together, migrants would make up the fifth most populous "country" in the world (ILO, 2013). These cross-border movements have been accompanied by an increase in the number of urban residents, and for the first time the percentage of urban residents has exceeded that of rural residents.
Africa's population is very young, with more than half aged below 25 years. It is estimated that each year between 2015 and 2035 there will be half a million more 15-year-olds than the year before (ILO, 2013). Estimates by IOM (2003) put Africa's youth population aged between 10 and 24 years at 344.4 million in 2013, representing 31% of the total population and making the continent the youngest region in the world. Employment appears to be the principal challenge that youths are facing in the world today and a cause for concern for the global economy. The future of Cameroon, Africa and the world at large is in the hands of the youths, who are the leaders, engineers, captains of industry and administrators of tomorrow. The future depends heavily on the way the youths are motivated in terms of the job market (employment and unemployment), quality of education, health and migration. The young constitute the majority of Cameroon's population and represent a great labour force: they are characterized by tremendous energy, great hunger for new ideas, discoveries, dynamism, impressive technological savvy and intelligence that can catapult Cameroon into untold prosperity and stability, and into higher levels of second-generation economic activity (agriculture, manufacturing and distribution), as well as serving as drivers and strong engines for the economic development of Cameroon. The youths are the nation's most valuable asset; they represent a tremendous potential competitive advantage in the global economy. If the youths are given due opportunity, they can transform Cameroon into a prosperous and productive country that can compete with the rest of the world. Today, the challenge of Cameroon is to provide the youths with opportunities to fulfil their potential and contribute to the development of their nation, the African continent and the world at large. Without jobs or meaningful livelihood options, young people in Cameroon will naturally seek other ways to release their energies, which can be through violence or migration; this has motivated most youths to migrate to the cities from rural zones.
This study attempts to explore the effect of rural-urban migration on urban youth unemployment in Cameroon. As a result of the rural-urban movement, in 2017 Cameroon found itself with an unemployment rate of 4% and a population of about 23.3 million people. Nonetheless, many young people in Cameroon are unemployed; many have completely given up looking for jobs, while others are working but still living below the poverty line; this category is known as the working poor. The barriers to resolving the unemployment challenge in Cameroon include: (1) the unprecedented economic crisis suffered in the 1990s; (2) the educational system of Cameroon, which focuses mainly on theories and abstract concepts with little or no training in technology and entrepreneurship; (3) low-quality jobs; (4) skills mismatch; (5) inadequate job matching; (6) the work experience trap; (7) lack of access to capital; (8) little or no entrepreneurship and business training; (9) limited youth participation; (10) social discrimination and corruption; and (11) frustration and discouragement, among others.
It is also true that the Cameroon government has become aware of the dangers posed by the growing rate of youth unemployment and has made moves in that regard, as can be seen through the Ministries of Youth Affairs and Civic Education and of Employment and Vocational Training. The programs designed by government via these ministries include the Rural and Urban Youth Support Program (known by its French acronym PAJER-U), the Integrated Project for Manufacturing of Sporting Materials (PIFMAS), the National Employment Fund (NEF) and the Integrated Support Project for Actors of the Informal Sector (PIAASI). All these programs have their success and failure stories, but the bottom line is that, despite all these efforts made by government, a lot more still has to be done. Nowadays, there is a socio-economic and political urgency in responding to the challenge of youth unemployment as a precondition for poverty reduction, sustainable development and lasting peace. It is believed that an essential approach for addressing the challenges of youth unemployment is a national youth policy, an integrated strategy for rural development, as well as job creation.
In Cameroon, the unemployment rate is 30% while underemployment stands at 75% (International Labour Organization 2013 report). It is worth noting that Cameroon has a population of over 20 million inhabitants and most of the people belong to the middle class. The working population of Cameroon is about 12 million, and only a little over 200,000 people work in the public service. With government being the largest employer, the other 11.8 million people who are not government-employed are a cause for concern. Population growth in Cameroon is rapid in most big towns and cities.
According to statistics, about 92 per cent of the population in Yaoundé is below 45 years of age. This rapid urban growth brings about social problems which particularly affect the poor and other vulnerable groups in society, such as the youths. Youth unemployment in Cameroon is compounded by rampant corruption in most employment sectors and young people's inadequate knowledge of the existing job market and opportunities. As a result of all these problems, we are interested in examining the effect of rural-urban migration on unemployment in Cameroon. To address these issues, the objectives are: to explore the effects of rural-urban migration on unemployment in Cameroon; to verify the effect of rural-urban migration on unemployment by gender; to evaluate the effect of levels of education on unemployment in Cameroon; and to derive policy recommendations on the basis of our analysis.
LITERATURE REVIEW
Understanding the relationship between rural-urban migration and unemployment-underemployment requires a clear definition of important elements before reviewing the relevant literature underlying this study. Mcha (2012), in a country-level knowledge network study in Tanzania, noted that unemployment and underemployment have been defined in the literature in different ways. Following the National Employment Policy of 2008, as cited by Mcha (2012), unemployment is the total lack of work of an individual (aged 15 years and above); it includes enforced idleness for people that are able and willing to work but cannot find jobs (ILO, 2013).
Focusing on rural-urban migration, we observe that it is the movement of people (in our case, youths aged 15 to 34 years) from rural communities in search of better jobs. With regard to migration in Cameroon, the proportion of persons not born in the locality/subdivision where they reside is 32.7 percent (NIS, 2011). However, a slight decrease in migration is noticed compared to 2005 (35.4 percent), while men and women migrate in almost similar proportions. According to NSO (2014), overall 27 percent of the employed population in Malawi is underemployed, with little difference between the sexes. In urban areas, the percentage of the employed population available to work additional hours is 24 percent, compared to 27 percent in rural areas (ILO, 2013). In terms of age groups, following the Malawi labour force survey, no major differences are observed in the level of youth underemployment between males and females or between rural and urban areas. However, while there is little variation in the level of underemployment by age group within educational levels, the level of underemployment among youths in the 15-34 age group declines with the level of education (NSO, 2014).
With regard to the importance of rural-urban migration and its consequences for urban centres, Ajaero and Onokala (2013), examining the effects of rural-urban migration on the rural communities of South-eastern Nigeria, show that rural-urban migration contributes significantly towards the development of rural communities through monetary remittances and the involvement of rural-urban migrants in community development projects. Golub and Hayat (2014) documented and analysed the predominance of informal employment in Africa and show that lack of demand for labour, rather than worker characteristics, is the main reason for pervasive underemployment. They concluded from their analysis of informal employment that improvements in the business climate are the key to boosting investment and technology transfer in labour-intensive tradable industries and thus raising labour demand and employment. Gimba and Kumshe (2011), in their study of the causes and effects of rural-urban migration in Borno State, Nigeria, affirm that in recent years the rate of rural-urban migration has become alarming as more people drift into urban centres from rural areas. Their analysis of 150 respondents drawn from the Maiduguri metropolis indicated that the major causes of rural-urban migration are the search for better education, employment and business opportunities, alongside poverty, unemployment, famine and inadequate social amenities in the rural areas. Gimba and Kumshe (2011) also found broad agreement that the effects of rural-urban migration are pressure on urban housing and the environment, a high rate of population growth in urban centres, low quality of life, an increased crime rate and a slower pace of development in rural areas. In this vein, Ankrah (1995) also examined the situation of rural-urban migration in Ghana and suggested that migration has the effect of precipitating major social and behavioural change vis-à-vis committed urbanites, who readily adapt to urban life, and situational urbanites, who experience greater problems in adjusting to the city.
Reviewing policy issues in relation to rural-urban migration and unemployment, Ankrah (1995) emphasized that rural youths need the means to stay in their communities with the opportunity to improve their livelihoods. Since agriculture is one of the most promising sectors for rural youth employment, the Cameroon government should prioritize investments and programs in irrigation, water resource management, and improved agricultural practices in order to expand young rural farmers' capabilities to produce food and conserve the land's natural resources, while providing the young population with the skills and abilities to increase their rural incomes (Ankrah, 1995).
THEORETICAL FRAMEWORK
The economic model of the family as applied by Frijters et al (2008) forms the conceptual basis of our analysis of the consequences of unemployment due to rural-urban migration. The family's objective is assumed to be the maximization of the utility that it derives from consuming the various goods that it produces using inputs of family members' time and market-purchased goods and services; employment services are likewise viewed as a consumption good from which parents derive utility. The family's level of consumption of employment services depends on the availability of work and the quality of the job that a youth who migrates from a rural community to an urban community will obtain (Blau and Grossberg, 1990).
The time spent by youths studying, receiving professional training, moving from the rural to the urban centres, undertaking other activities, and searching for jobs, as well as using social amenities such as preventive and curative medical care, is an important input into the production of employment (or unemployment) in Cameroon. Youths may move from rural centres because of family strife, lack of land to cultivate, land disputes, or health conditions; some move because they want to gain township experience, learn a trade, or seek better opportunities, or because of invitations from friends and family members. However, there are other youths who will not move under any circumstances, for the same reasons operating in the opposite direction, such as holding much land or enjoying social stability; such youths may simply not have occasion to migrate or to make use of public services designed for workers.
Unfortunately, youths who migrate to the cities and fail to find a job may constitute a real problem in the cities. They will likely increase the rate of juvenile delinquency, insecurity through theft and pick-pocketing, environmental congestion, and poverty (NIS, 2011). Notwithstanding, youths' income-generating activities increase the level of household resources, which should improve their well-being. Moreover, there is some evidence that youths are more likely than the old to spend their income in ways that improve their social welfare. What then can we say? The net effect of rural-urban migration on unemployment outcomes is an empirical issue.
METHODOLOGY OF STUDY
Linking rural-urban migration to unemployment, we used the economic model of the family as applied by Frijters et al (2008); this forms the conceptual basis for our analysis of the unemployment consequences of rural-urban migration. Following these authors, the relationship between rural-urban migration and unemployment can be described within the framework of a simple household production model. Thus, our generic model of unemployment for a youth i who migrated from a rural to an urban centre is assumed to be:

U_i = β_0 + β_1 RUM_i + X_i′γ + ε_i   [1]

where U_i is the unemployment status of youth i, X_i is a vector of household/environmental characteristics (sex of household head, household size, geographical place of residence, pipe-borne water, electricity) and migrant characteristics (education, marital status, type of work contract, age group, occupation, employment duration), RUM_i is rural-urban migration, and ε_i is a random error term. The coefficient β_1 is the parameter of primary interest and represents the impact that rural-urban migration has on unemployment. Ordinary least squares (OLS) estimates of equation [1] will be reported in the results section; however, this single-equation estimate may be upward or downward biased, depending on the effect that unemployment has on rural-urban migration and on the correlation between omitted variables and rural-urban migration. For example, if unemployment has a positive impact on rural-urban migration, then we would expect the OLS estimate of β_1 to be biased upward.
In empirical estimation, the principal difficulty is the two-way causality between rural-urban migration and unemployment, which gives rise to the classical endogeneity problem. To avoid the strong likelihood of this endogeneity bias, compounded by the problem of variables that are missing from the data, we use a two-stage least squares (2SLS) estimation approach. The first-stage equation in this approach is:

RUM_i = α_0 + α_1 SA_i + X_i′δ + ν_i   [2]

where SA_i is social amenities (availability of pipe-borne water and electricity; availability of medical centres). The 2SLS model should capture the causal effect of rural-urban migration for those migrated youths whose migration/movement is affected by social amenities. Importantly, even though RUM_i is ordinal, 2SLS estimates of β_1 can be interpreted as estimating the average marginal effect of a unit increase in RUM_i for migrants whose migration/movement is affected by the availability of medical centres, pipe-borne water, and electricity. Ajakaiye and Mwabu (2009) noted that in the presence of endogeneity, a device must be found to vary the 'treatment variable' exogenously without changing other unobserved or unmeasured variables with which it is correlated. Such devices include the instrumental variable (IV) method, natural experiments, and randomization. Implementation of experimental designs is rare in the evaluation of broader health and social programmes (Jones, 2007), either because experiments are too expensive, unethical, or simply impossible; they are therefore beyond the scope of this study. This study proposes to use the IV method and the probit approach controlling for endogeneity, popularly known as the IVPROBIT approach. Endogeneity can arise due to errors-in-variables, omitted variables, and simultaneous causality (Bascle, 2008). Endogeneity and heterogeneity bias can compromise the validity of OLS estimators. The IV approach is intended to exogenize the endogenous regressors using valid, relevant, and strong instruments, and the most commonly used IV estimation method is the single-equation approach of two-stage least squares (2SLS) estimation (Bascle, 2008).
Before presenting the 2SLS estimates, we present a reduced-form analysis of rural-urban migration. Here we would expect rural-urban migrants with social amenities to show less movement/migration, because rural-urban migration is negatively affected by the presence of social amenities in the migrant's initial place of residence, making it highly probable that such individuals do not move. The results section presents the relationship between social amenities and unemployment, given that 2SLS estimation allows us to scale these probit marginal effects into the effects of an increase in our ordinal rural-urban migration measure.
Econometrically, the instrumental variable method is used to estimate causal relationships when controlled experiments are not feasible, in other words when a treatment is not successfully delivered to every unit in a randomized experiment (Imbens and Angrist, 1994). This method allows consistent estimation when the explanatory variables are correlated with the error terms of a regression relationship. This may occur when unemployment causes at least one of the covariates, when relevant explanatory variables are omitted from the model, or when the covariates are subject to measurement error. Considering these issues, the first-stage equation [2] is computed and its fitted values are substituted into equation [1] to obtain the second-stage equation:

U_i = β_0 + β_1 RUM̂_i + X_i′γ + ε_i   [3]

We use the social amenities variable as an instrument to overcome the endogeneity problem between rural-urban migration and unemployment, which cannot be adequately controlled for by observable characteristics. Assuming that social amenities are a valid instrument, we use the IVPROBIT model (a probit model controlling for endogeneity), which better respects the binary nature of unemployment, represented by the following system:

U_i* = β_1 RUM_i + X_i′γ + u_i, with U_i = 1 if U_i* > 0 and U_i = 0 otherwise;
RUM_i = α_1 SA_i + X_i′δ + v_i   [4]

where the error terms (u_i, v_i) follow a bivariate normal distribution with non-zero correlation. The estimates of the IVPROBIT model are presented in the results section. In addition, we can calculate the marginal effect of a variable as the average of the marginal effects over everyone in the sample.
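To make the estimation strategy of equations [1]-[4] concrete, the following is a minimal Python sketch on synthetic data. All variable names and the data-generating process are illustrative, and the two-step control-function probit in step (c) (a Rivers-Vuong procedure) is only an approximation of the maximum-likelihood IVPROBIT typically run in packages such as Stata; the sketch should be read as a schematic, not as the estimation code used for the CEISS data.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from linearmodels.iv import IV2SLS

rng = np.random.default_rng(0)
n = 5000
# Synthetic stand-ins for the CEISS variables (names are illustrative).
sa = rng.binomial(1, 0.5, n).astype(float)          # social amenities instrument SA_i
hh_size = rng.integers(1, 12, n).astype(float)      # one control from X_i
u = rng.normal(size=n)                              # unobservable driving endogeneity
rum = (0.8 * sa + 0.5 * u + rng.normal(size=n) > 0.8).astype(float)  # RUM_i
y = ((0.3 * rum + 0.05 * hh_size + u) > 1.2).astype(float)           # U_i
df = pd.DataFrame({"unemployed": y, "rum": rum,
                   "social_amenities": sa, "hh_size": hh_size})

X = sm.add_constant(df[["hh_size"]])

# (a) OLS benchmark for equation [1] -- potentially biased.
ols = sm.OLS(df["unemployed"], X.join(df["rum"])).fit()

# (b) 2SLS using social amenities as the instrument (equations [2]-[3]).
iv = IV2SLS(df["unemployed"], X, df["rum"], df["social_amenities"]).fit()
print(iv.first_stage)            # instrument relevance/strength diagnostics

# (c) Control-function analogue of IVPROBIT: add the first-stage
# residual to a probit for the binary outcome, then average marginal effects.
first = sm.OLS(df["rum"], X.join(df["social_amenities"])).fit()
probit = sm.Probit(df["unemployed"],
                   X.join(df["rum"]).assign(cf_resid=first.resid)).fit(disp=0)
print(probit.get_margeff().summary())

In this synthetic design the unobservable u enters both the migration and the unemployment equations, so the OLS coefficient on rum is biased upward, while the instrumented estimates recover the causal effect, mirroring the bias argument made for equation [1].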
Treatment Variable for the Endogenous Variable
The strength and success of the instrumental variable strategy lie in the identification of an instrument with sufficient predictive power. The IV method is one of the most powerful tools in econometrics, since it allows consistent parameter estimation in the presence of correlation between explanatory variables and disturbances (Murray, 2006a). The IV technique is the most widely applied approach to identifying causal or treatment effects, and it essentially assumes that some components of non-experimental data are random (Rosenzweig and Wolpin, 2000). The instruments are variables thought to have no direct association with the outcome while being powerful predictors of treatment (Jones, 2007). 2SLS instrumental variable estimation is an effective tool when instruments are valid and strong; otherwise this quality is lost (Murray, 2006a). Stock et al (2002) caution that finding exogenous instruments is hard work.
Finally, Mwabu (2009) mentioned that three properties of an instrument need to be noted at the outset. First, an instrument is relevant if its effect on a potentially endogenous explanatory variable is statistically significant. Second, an instrument is strong if the size of its effect is 'large'. Finally, an instrument is exogenous if it is uncorrelated with the structural error term. An instrumental variable that meets all these requirements is a valid instrument, but is often very difficult to find.
We are interested in using social amenities, that is, the availability of water, electricity, and medical centres in the urban community. The relevance of this variable has been confirmed by authors since the seventies, and many researchers have shown over the years that people, including youths, migrate for economic and social reasons (Harris and Todaro, 1970). For our social amenities variable to overcome the potential endogeneity problem between rural-urban migration and unemployment/underemployment, the instrument must be (i) strongly correlated with rural-urban migration measures and (ii) uncorrelated with unemployment, except through rural-urban migration (Murray, 2006a). On this basis, two main factors can lead to bias in the estimated impact of rural-urban migration on unemployment: firstly, there are likely to be unobservable characteristics of the rural-urban migrants that are correlated with both youth unemployment and underemployment.
Two obvious candidates are family links between the migrant youth and his or her relatives, and the extent to which a family member cares about relatives' well-being relative to leaving them in the village or local community. The second source of potential bias arises from the direct effect of unemployment on rural-urban migration. If unemployment has a negative impact on rural-urban migration, then unemployed and underemployed youths will be less developed, both economically and socially, than non-migrant non-working youths, creating a downward bias on the estimated impact. On the contrary, if unemployment has a positive impact on rural-urban migration, then the estimated impact would be biased upward.
To avoid the individual effect of each migrant youth, we use the cluster mean of each instrument; by so doing, only the community-level effect is captured, thereby increasing the strength of our instrument for the endogenous variable. It is also worth mentioning that our instruments are not directly related to unemployment except through rural-urban migration. Through the Sargan and Cragg-Donald statistics, coupled with the criteria of Mwabu (2009), we gauge the relevance and strength of our instrument; all of this ensures robust results, free from the biases of former studies.
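As a sketch of the cluster-mean construction just described (continuing the synthetic example above, with a hypothetical cluster column identifying each community), the instrument can be averaged within clusters so that only community-level variation in social amenities identifies the effect:

df["cluster"] = rng.integers(0, 200, n)   # hypothetical community identifier
df["sa_cluster_mean"] = (df.groupby("cluster")["social_amenities"]
                           .transform("mean"))
iv_cl = IV2SLS(df["unemployed"], X, df["rum"], df["sa_cluster_mean"]).fit(
    cov_type="clustered", clusters=df["cluster"])   # community-clustered SEs
print(iv_cl.summary)

With more than one instrument the model becomes overidentified, and the fitted results then also expose a Sargan statistic of the kind referred to above.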
Data Presentation
In this study, we use data from the Cameroon Employment and Informal Sector Survey (CEISS) to compute the effects of rural-urban migration on unemployment while decomposing the results by level of education. According to the report of the second edition of the CEISS by the National Institute of Statistics (NIS, 2011), the CEISS 2 was carried out in 2010, after the first edition in 2005, by the Ministry of Labour and Social Security in collaboration with other ministries. The 2010 CEISS is a two-phase national statistical survey, the first phase being a survey on employment and the second a survey on the informal sector. The objectives of the survey are to provide users with a set of indicators on (i) the labour market, the conditions of activity, and incomes, and (ii) the informal sector and its contribution to the economy in terms of employment and added value.
The target population of the survey represents 68.7 percent of the overall population, made up of 51.6 percent women and 48.4 percent men (NIS, 2011). The data set contains about 34,500 observations, with variables on unemployment, underemployment, migration, and other determinants. The data can be used to estimate the number of persons in the labour force (employed, underemployed, and unemployed) and their distribution by sex, major age group, educational level, disability status, geographical and rural/urban spread, as well as the ecological manifestations of these. It can equally be used to estimate the number of child workers (children in employment) aged 5-17 years and their distribution by sex, major age group, educational status, and geographical, ecological, and rural/urban spread.
In summary, our outcome variable is unemployment; the potentially endogenous variable is rural-urban migration, which is captured as the proportion of people who were not born in the locality/division where they live. The instruments for our endogenous variable are social amenities, namely (1) the availability of electricity services in urban centres and (2) the availability of water for good health. The control/exogenous variables are: age group, gender of household head, education, socio-economic status, occupation, type of work contract, geographical place of residence, household size, and employment duration.
Weighted Sample Descriptive Statistics
The preliminary sample descriptive statistics in table one below, obtained from the 2010 labour survey, show that unemployment in Cameroon stood at about 8.3 percent. Unemployment occurs when people who are without work are actively seeking work (International Labour Organization, 2014). According to the International Labour Organization report, more than 200 million people globally, or 6 percent of the world's workforce, were without a job in 2012 (International Labour Organization, 2014). About 53.8 percent of the migrants leaving the rural areas for the urban areas do so on account of electricity, while a slightly smaller percentage, 41.9 percent, migrate on account of water. The statistics also reveal that about 14.8 percent of the rural population move from rural to urban areas.
It is also revealed that 77.6 percent of the rural-urban migrants are male, with ages ranging from 0 to 99 years. Many people who already have established businesses or enterprises of their own will find it very difficult to leave the rural areas for the urban areas, as will a person who holds more than one job. These migrants come from families whose size ranges from 1 to 28 persons and that have very poor housing conditions. It is also revealed that 33.3 percent of those with a primary level of education will migrate from rural to urban towns, 35.7 percent of those who have obtained a secondary level of education will move from rural to urban settlements, while 4.6 percent of the rural population with a tertiary level of education will migrate to the urban areas. Focusing on the main sample result, the respondents gave the following reasons for migrating: to work, to look for a job, for health reasons, for apprenticeship, housing problems, joining family, family problems, and retirement. These variable outcomes are indicated in the figure below, which reveals that most of the people who migrated from rural to urban areas did so for family reasons, accounting for about 50 percent of the total migrants.
Basic Marginal Effect Estimates of Rural-Urban Migration
Table three presents the results of (a) the OLS estimation in column one, which can be biased either upward or downward; (b) the instrumental variable estimation in column two (IV 2SLS); and (c) the probit model controlling for endogeneity in column three (IVPROBIT).
Considering equation [1] above, the result of the linear regression can be biased either upward or downward depending on the direction of the relationship between rural-urban migration and unemployment. This OLS result is therefore not appropriate for inference, which explains why the rural-urban migration coefficient is insignificant: its OLS value is not appropriate for judgment. The 2SLS estimation solves the endogeneity problem arising from the data, whether from missing or omitted variables, whereas the IVPROBIT resolves endogeneity originating both from the data and elsewhere; hence the IVPROBIT estimates are our preferred results. Further, the joint F (p-value) test for H0: coefficients on instruments = 0, with Wald chi2 of 41.88 [14, 19842; 0.0000] for 2SLS and 605.33 [10; 0.0000] for IVPROBIT, confirms that the probit result controlling for endogeneity is preferable. These results are presented in table 2. From column 1, rural-urban migrants have a probability of being employed about 0.13 percent higher than their rural counterparts who do not migrate. In the table, column 1 gives the OLS estimate of unemployment, column 2 the two-stage least squares estimate, and column 3 the consistent IV estimate of the rural-urban migration parameter. We observe that rural-urban migration has no significant effect on unemployment. As household size increases, unemployment also increases, significant at the 1 percent level, while the movement of males from rural to urban areas has no significant effect on unemployment. There is a positive relationship between age and unemployment: as age increases, the rate of unemployment also increases. Meanwhile, people who are self-employed, that is, owners of enterprises (sole proprietors), will hardly leave rural areas for urban areas, thereby reducing the rate of unemployment, an effect significant at the 1 percent level.
The level of education, comprising primary, secondary, and tertiary levels, has an effect on unemployment significant at the 1 percent level. The more persons with a primary level of education leave the rural areas for the urban areas, the more unemployment will increase. In a similar manner, rural-urban migrants with a secondary level of education will only end up increasing the level of unemployment, and at a given point migrants with a higher level of education will also increase unemployment. From our IVPROBIT regression, we find that rural-urban migration has a direct effect on unemployment at the 10 percent level of significance, and that as household size increases, the rate of unemployment also increases, at the 5 percent level of significance. Unemployment also increases as people's age increases. Also, as more people do not own their own enterprises or are not sole proprietors, they are forced to leave the rural areas for the urban city; as they move to these urban areas, the level of unemployment increases, significant at the 1 percent level.
Entrepreneurship Effect by Level of Education of the Migrant
In table 3, we find that both male and female migrants are fuelling unemployment in Cameroon; however, the male migrants appear to contribute more than their female counterparts. Further, fewer males who left the rural areas for the urban areas because of electricity will be unemployed, significant at the 1 percent level. A greater number of females who left the rural areas for the urban areas because of housing conditions will be unemployed, significant at the 1 percent level, than males who left for the same reason. Many women from large families who migrated from rural to urban areas faced unemployment, significant at the 10 percent level, compared with men who left the rural areas for the same purpose, significant at the 1 percent level. More females who left rural communities for urban communities will face unemployment than males of the same age bracket, significant at the 1 percent level. Our statistics also show, at the 1 percent level of significance, that among males who were self-employed and left the rural areas for the urban areas, very few found employment, while among females who were already self-employed in the rural areas and left for the urban areas, more faced unemployment, significant at the 10 percent level. Our results also show that more male migrants from rural to urban areas who have only a primary level of education will be unemployed, at the 5 percent level of significance, as will other rural-urban male migrants, but at the 10 percent level of significance.
CONCLUSION
The main objective of this study was to examine the effect of rural-urban migration on unemployment. The study was conducted in Cameroon, following the increasing number of rural-urban migrants, using data from the 2010 labour survey. From the preliminary sample descriptive statistics obtained from that survey, we observed that unemployment in Cameroon stood at about 8.3 percent.
From the findings above, it can be concluded that rural-urban migration has a significant effect on decreasing unemployment in the rural areas of Cameroon. An important conclusion of the results is that rural-urban migration, household size squared, household age, and work status decrease unemployment in Cameroon, while household size, being male, household age squared, and primary, secondary, higher, and informal education increase unemployment in Cameroon. Focusing on the main sample result, the respondents gave the following reasons for migrating: to work, to look for a job, for health reasons, for apprenticeship, housing problems, joining family, family problems, and retirement. Finally, democracy is a journey, not a destination; for Cameroon, it is a learning process. It may not be a perfect system of government, but it has several advantages over other systems. People, including Cameroonians, must feel the positive impact of democracy in their lives. A situation whereby only a few privileged persons in positions of authority benefit from this system of government at the expense of the impoverished masses portends a great and real danger that may incur the wrath of the unemployed citizens of Cameroon if not addressed urgently. Cameroon's leaders should strive to promote good governance in order to engender rural empowerment, employment, and socio-economic development.
In terms of policy, existing institutions should be strengthened by appointing decent people to head them, respecting their tenure, and appointing successors on merit rather than as political appointees. The government should invest heavily in education, particularly vocational training schools, so that skills development and training in the rural areas enable the youth to become self-reliant instead of job seekers. Infrastructure that will provide employment to thousands of people, such as good roads, electricity, and the provision of potable drinking water, should be embarked upon by the government of the day. Labour markets that work better for the youth should be created, and a conducive atmosphere for investment in the rural areas promoted. Future research could be undertaken in the following domain: urban youth unemployment and underemployment in Cameroon and the role of rural-urban migration. | 2021-11-26T00:08:30.097Z | 2020-06-29T00:00:00.000 | {
"year": 2020,
"sha1": "6e008b7f1f4769ccb48d6854f5663ee9f9c17a24",
"oa_license": "CCBY",
"oa_url": "https://journal.afebi.org/index.php/aefr/article/download/318/204",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e69a1e55e0d543d60d8f6ce1b3e42ce236a6c199",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
9907649 | pes2o/s2orc | v3-fos-license | Boundary S matrix of an open XXZ spin chain with nondiagonal boundary terms
Using a recently proposed solution for an open antiferromagnetic spin-1/2 XXZ quantum spin chain with N (even) spins and two arbitrary boundary parameters at roots of unity, we compute the boundary scattering amplitudes for one-hole states. We also deduce the relations between the lattice boundary parameters appearing in the spin-chain Hamiltonian and the IR (infrared) parameters that appear in the boundary sine-Gordon S matrix.
Introduction
The factorizable S matrix is an important object in integrable field theories and integrable quantum spin chains. As in the "bulk" case, where the S matrix is determined in terms of two-particle scattering amplitudes, the "boundary" case can equally well be formulated in terms of an analogous "one-particle boundary-reflection" amplitude. These bulk and boundary amplitudes are required to satisfy the Yang-Baxter [1]- [3] and boundary Yang-Baxter [4,5] equations, respectively. Methods based on Bethe equations have long been used to compute bulk two-particle S matrices [6]- [8]. In [8], Faddeev and Takhtajan studied the scattering of spinons for the periodic XXX chain in both the ferromagnetic and antiferromagnetic cases. The bulk two-particle S matrix for the latter case coincides with the bulk S matrix for the sine-Gordon model [2] in the limit β² → 8π, where β is the sine-Gordon coupling constant. Much work has also been done on the subject for open spin chains [9]- [16] as well as for integrable field theories with boundary [5,9]. In [5], Ghoshal and Zamolodchikov presented a precise formulation of the concept of the boundary S matrix for 1+1 dimensional quantum field theory with boundaries, such as the Ising field theory with a boundary magnetic field and the boundary sine-Gordon model. For the latter model, the authors used a bootstrap approach to compute the boundary S matrix, determining the scalar factor up to a CDD-type ambiguity. The nonlinear integral equation (NLIE) [17,18] approach has also been used to study excitations in integrable quantum field theories such as the sine-Gordon model [19]- [22] and open quantum spin-1/2 XXZ spin chains [12]- [15]. In fact, in [15], the NLIE approach is used to compute the boundary S matrix for the open spin-1/2 XXZ spin chain with nondiagonal boundary terms, where the boundary parameters obey a certain constraint; the bulk anisotropy parameter, however, is arbitrary.
In this paper, we compute the eigenvalues of the boundary S matrix for a special case of an open spin-1/2 XXZ spin chain with nondiagonal boundary terms with two independent boundary parameters (with no constraint) at roots of unity, using the solution obtained recently [23,24]. The motivation for this computation is the fact that the Bethe Ansatz equation for this model is unchanged under sign reversal of the boundary parameters. Hence, the usual trick of obtaining the second eigenvalue of the boundary S matrix of an open spin-1/2 XXZ spin chain by exploiting the change in the Bethe Ansatz equation under such sign reversal [10,11,15,16] does not work here. Consequently, identification of separate one-hole states is necessary. As far as the formalism goes, we follow the approach used earlier for diagonal open spin chains [10,11]. This is a generalization of the method developed by Korepin, Andrei and Destri [6,7] for computing the bulk S matrix. The quantization condition discussed by Fendley and Saleur [9] is a crucial step in the calculation. The solution utilized here was derived for certain values of the bulk anisotropy parameter µ in the repulsive regime (µ = π/(p+1) ∈ (0, π/2]) for odd values of p. Hence, we focus only on the critical and repulsive regime, which corresponds in the sine-Gordon model to β² ∈ [4π, 8π). One-hole excitations for this model occur in the even N sector [25], in contrast to the diagonal open spin-1/2 XXZ spin chain, where such excitations appear in the odd N sector [11].
The outline of the paper is as follows. In Section 2, we briefly review the model and present the previously found Bethe Ansatz solution [23,24], together with the string hypothesis for two one-hole states. In Section 3, we proceed with the computation of the scattering amplitudes. Since the Bethe roots for the model consist of "sea" roots and "extra" roots, we rely on a conjectured relation between the "extra" roots and the hole rapidity, which is confirmed numerically for systems of up to about 60 sites. We find that the eigenvalue derived for the open XXZ spin chain agrees with one of the eigenvalues of Ghoshal-Zamolodchikov's boundary S matrix for the one-boundary sine-Gordon model, provided the lattice boundary parameters that appear in the spin-chain Hamiltonian and the IR parameters that appear in Ghoshal-Zamolodchikov's boundary S matrix [5] obey the same relation as in [15]. The problem of finding the second eigenvalue of the boundary S matrix requires the identification of an independent one-hole state. In contrast to previous studies [10,11,15,16], where such a state was found by reversing the signs of the boundary parameters, a similar strategy does not work here: reversing the signs of the boundary parameters in the present case leaves the Bethe equation unchanged, hence giving the same one-hole state. Interestingly, a separate one-hole state with a 2-string is found [25]. Using a conjectured relation between the "extra" roots, the hole rapidity, and the boundary parameters, which is again confirmed numerically for systems of up to about 60 sites, we derive the remaining eigenvalue, which also agrees with Ghoshal-Zamolodchikov's result. Finally, we conclude the paper with a brief discussion and possible future work on the subject in Section 4.
Bethe Ansatz and string hypothesis
We begin this section by reviewing the recently proposed Bethe Ansatz solution [23,24] for the model of [5,26], whose Hamiltonian consists of the "bulk" XXZ terms together with the nondiagonal boundary terms parametrized by α±. In these expressions, σ^x, σ^y, σ^z are the usual Pauli matrices, η is the bulk anisotropy parameter (taking values η = iπ/(p+1), with p odd), α± are the boundary parameters, and N is the number of spins/sites. Note that this model has only two boundary parameters; the most general integrable boundary terms contain six boundary parameters, and in the present case the four other boundary parameters have been set to zero. We restrict the values of the remaining parameters α± to be pure imaginary to ensure the hermiticity of the Hamiltonian (2.1). The Bethe Ansatz equations are given in terms of

δ(u) = 2^4 (sinh u sinh(u + 2η))^(2N) sinh 2u sinh(2u + 4η) / (sinh(2u + η) sinh(2u + 3η))

and of u^(1)_j and u^(2)_j (the zeros of Q_1(u) and Q_2(u), respectively).
One-hole state
In order to compute the spinon boundary scattering amplitude, we consider a one-hole state. The root distribution for such a state was found in [24]. One-hole excitations for the open XXZ spin chain we study here appear in the even N sector; hence, it is sufficient to review the results for the even N case. The shifted Bethe roots ũ^(a)_j are the zeros of Q_a(u) that form a real sea ("sea" roots), and µλ^(a,2)_k are the real parts of the "extra" roots (also zeros of Q_a(u)) which are not part of the "seas". Hence, there are two "seas" of real roots. We employ the notations used in [13] (2.8). Rewriting the bulk and boundary parameters [13] as η = iµ, α± = iµa±, the Bethe Ansatz equations (2.3) for the "sea" roots take a form expressed through the functions e_n(λ^(l)_j), with j = 1, ..., N/2. These equations can be re-expressed in terms of counting functions h^(l)(λ) (2.11). The string hypothesis (2.7) holds true only for suitable values of a±, within a range determined by ν − 1. In the above equations, q_n(λ) and r_n(λ) are odd functions defined by

q_n(λ) = π + i ln e_n(λ) = 2 tan^(-1)(cot(nµ/2) tanh(µλ)), r_n(λ) = i ln g_n(λ). (2.14)

{J_1, ..., J_(N/2)} is a set of increasing positive integers that parametrize the state. (In principle, there are two such sets of integers, J^(1)_i and J^(2)_i, corresponding to the two counting functions h^(1)(λ) and h^(2)(λ), respectively; but in fact these two sets of integers are identical, hence we choose to drop the superscript l from J_j in (2.11).) For states with no holes, the integers take consecutive values. For a one-hole state, there is a break in the sequence, represented by a missing integer. This missing integer J̃ fixes the value of the hole rapidity λ̃ according to

h^(l)(λ̃) = J̃. (2.15)

If the hole is located to the right of the largest "sea" root, a similar prescription applies; see [25] for more details. For later use, we next define the densities of "sea" roots as

ρ^(l)(λ) = (1/N) dh^(l)(λ)/dλ. (2.16)

The functions (2.14) have the following derivatives, which prove to be essential to the analysis in the following sections:

a_n(λ) ≡ (1/2π) dq_n(λ)/dλ = (µ/π) sin(nµ) / (cosh(2µλ) − cos(nµ)). (2.17)
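As a quick numerical sanity check on the closed form for a_n(λ) reconstructed in (2.17) from the definition of q_n(λ) in (2.14), the following minimal Python sketch (assuming p = 3, i.e. µ = π/4) compares it against a central-difference derivative; the two agree to roughly machine precision:

import numpy as np

p = 3
mu = np.pi / (p + 1)

def q(n, lam):
    # q_n(lambda) = 2 * arctan( cot(n*mu/2) * tanh(mu*lambda) ), cf. (2.14)
    return 2.0 * np.arctan(np.tanh(mu * lam) / np.tan(n * mu / 2))

def a(n, lam):
    # closed form (2.17): (1/2pi) dq_n/dlambda
    return (mu / np.pi) * np.sin(n * mu) / (np.cosh(2 * mu * lam) - np.cos(n * mu))

lam = np.linspace(-4.0, 4.0, 9)
h = 1e-6
numeric = (q(1, lam + h) - q(1, lam - h)) / (2 * h) / (2 * np.pi)
print(np.max(np.abs(numeric - a(1, lam))))   # ~1e-10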
One-hole state with 2-string
In addition to the one-hole state considered in the last section, there is another one-hole state: the only remaining one-hole state, which also contains a 2-string. In this section,
we give some brief information on the state. The shifted Bethe roots ũ^(a)_j are the zeros of Q_a(u) that form a real sea ("sea" roots), and µλ^(a,2)_k are the real parts of the "extra" roots (also zeros of Q_a(u)) which are not part of the "seas". For this state, we also have µλ^(a)_0, the real parts of the additional "extra" roots that form a 2-string.
The counting functions for this state, and the corresponding Bethe Ansatz equations, take forms analogous to those of the previous section, where {J_1, J_2, ..., J_(N/2−1)} is a set of increasing positive integers that parametrize the state. The hole for this state breaks the sequence, represented by a missing integer. As before, the missing integer J̃ enables one to calculate the hole rapidity λ̃ using h^(l)(λ̃) = J̃. If the hole appears to the right of the largest "sea" root, a similar prescription applies.

In this Section, we give the derivation of the boundary scattering amplitudes for the one-hole states reviewed in Section 2.
Further, using (3.3) and (3.4), one gets
where (a_− → a_+) is a shorthand for two additional terms which are the same as the third and fourth terms in the integrand of (3.14), but with a_− replaced by a_+. The integrals involving the "extra" roots λ^(2)_k and λ^(1)_l can be evaluated in closed form, with µ′ = π/(ν − 1). After evaluating the rest of the integrals, (3.14) becomes (3.15). The values of the "extra" roots depend on the hole rapidity λ̃ and the boundary parameters a±. Hence, it is sensible to expect a relation between these "extra" roots, the boundary parameters a±, and the hole rapidity λ̃. Consequently, one needs to express the right-hand side of (3.15) purely in terms of a± and λ̃ to complete the derivation.
To look for this additional relation, we begin with the information contained in the difference of the two densities, ρ^(1)(λ) − ρ^(2)(λ). This leads to an expression in which R_−(λ) enters through its Fourier transform. Analogous to (3.6), one gets a corresponding integral representation. Further, integrating (3.22) with respect to λ, taking the limits of integration from 0 to λ̃ as before, one finds (3.23). Since h^(1)(λ̃) = h^(2)(λ̃) is a positive integer, using the fact that R_−(λ) is an even function of λ and exponentiating (3.23), we get

exp( ∫ from −λ̃ to λ̃ of dλ R_−(λ) ) = g(λ̃, a_+) g(λ̃, a_−). (3.24)

Next, an important observation is the relation (3.25) (as N → ∞), for which we provide numerical support in Table 1. Although the results shown in Table 1 are computed for the case where the hole appears to the right of the largest "sea" root, we find similar results for other hole locations. From (3.24) and (3.25), a closed-form relation for the "extra" roots in terms of λ̃ and a± also follows. Table 1: Numerical support for (3.25) for p = 3 (a_+ = 2.1, a_− = 1.6) and p = 5 (a_+ = 3.3, a_− = 2.7), from numerical solutions based on N = 24, 32, ..., 64.
We stress here that the amplitude is determined only up to a rapidity-independent phase factor. Subsequently, the complete expression for each boundary scattering amplitude is given (up to a rapidity-independent phase factor) by

α(λ̃, a_±) = S_0(λ̃) S_1(λ̃, a_±) g(λ̃, a_±), (3.28)

where + and − again denote the right and left boundaries, respectively.
Eigenvalue for the one-hole state with 2-string
We now consider the one-hole state with a 2-string, reviewed in Section 2.2. The computation of the eigenvalue for this state is identical to the one given above; hence, we skip the details and present the result. Analogous to (3.14), we have an expression (3.29) whose integrand contains, in addition, the terms cosh(2iλ^(1)_0 ω) + cosh(2iλ^(2)_0 ω); after evaluating the integrals, this yields the corresponding closed-form expression. As before, (a_− → a_+) represents two additional terms which are the same as the third and fourth terms in the integrand of (3.29), but with a_− replaced by a_+. We proceed to make the conjecture (3.32) (as N → ∞) to complete the derivation.
Like (3.25), we provide numerical support for (3.32) in Table 2, where we compute the ratio φ ≡ d_1/d_2, with d_1 and d_2 the left-hand side and the right-hand side of (3.32), respectively, for systems of up to 64 sites. We believe this supports the validity of (3.32) as N → ∞. The values of λ^(2)_k, λ^(1)_l, λ^(1)_0, and λ^(2)_0 used in the computations are obtained by solving numerically the Bethe equations (2.19) and (2.20) for the "sea" roots and (2.3) for the "extra" roots. The correctness and validity of these numerical solutions are checked by comparing them with the ones obtained from McCoy's method for smaller numbers of sites, e.g., N = 2, 4, and 6. We stress here that although the results in Table 2 are computed for J̃ = 1, namely the case where the hole appears close to the origin, similar results are found for other hole locations, e.g., J̃ = 2, 3, .... Table 2: φ for p = 3 (a_+ = 2.1, a_− = 1.6) and p = 5 (a_+ = 3.2, a_− = 2.7), from numerical solutions based on N = 24, 32, ..., 64.
Discussion
Based on a recently proposed Bethe Ansatz solution for an open spin-1/2 XXZ spin chain with nondiagonal boundary terms, we have derived the boundary scattering amplitude (equation (3.28)) for a certain one-hole state. We used a conjectured relation between the "extra" roots and the hole rapidity, namely (3.25), which we verified numerically. This result agrees with the corresponding S matrix result for the one-boundary sine-Gordon model derived by Ghoshal and Zamolodchikov [5], provided the lattice and IR parameters are related according to (3.40). We obtained the second eigenvalue (3.34) by considering an independent one-hole state with a 2-string. This scattering amplitude, derived for the one-hole state with a 2-string, also agrees with Ghoshal-Zamolodchikov's result, following the conjecture (3.32), which we verified numerically, and the identification (3.40). It would be interesting to derive (3.25) and (3.32) analytically.
It will also be interesting to study the excitations for the more general case of the open XXZ spin chain, namely with six arbitrary boundary parameters and arbitrary anisotropy parameter, and to derive its corresponding S matrix. Solutions (spectra) have been proposed for the general case, using the representation theory of the q-Onsager algebra [28] and the algebraic-functional method [29]. However, a Bethe Ansatz solution for this general case has not been found so far, although such a solution has been proposed lately for the XXZ spin chain with six boundary parameters at roots of unity [30]. In addition to the bulk excitations, one can equally well look at boundary excitations, although this can be rather challenging even for the simpler case of spin chains with diagonal boundary terms. It is therefore our hope that some of these issues are addressed in future publications. | 2008-03-19T02:52:49.000Z | 2007-11-11T00:00:00.000 | {
"year": 2007,
"sha1": "34c6234621c4d29bd127ce960580ef31df799613",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0711.1631v1.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "34c6234621c4d29bd127ce960580ef31df799613",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
17945589 | pes2o/s2orc | v3-fos-license | Biotensegrity of the Extracellular Matrix: Physiology, Dynamic Mechanical Balance, and Implications in Oncology and Mechanotherapy
Cells have the capacity to convert mechanical stimuli into chemical changes. This process is based on the tensegrity principle, a mechanism of tensional integrity. To date, this principle has been demonstrated to act in physiological processes such as mechanotransduction and mechanosensing at different scales (from cell sensing through integrins to molecular mechanical interventions or even localized massage). The process involves intra- and extracellular components, including the participation of extracellular matrix (ECM) and microtubules that act as compression structures, and actin filaments which act as tension structures. The nucleus itself has its own tensegrity system which is implicated in cell proliferation, differentiation, and apoptosis. Despite present advances, only the tip of the iceberg has so far been uncovered regarding the role of ECM compounds in influencing biotensegrity in pathological processes. Groups of cells, together with the surrounding ground substance, are subject to different and specific forces that certainly influence biological processes. In this paper, we review the current knowledge on the role of ECM elements in determining biotensegrity in malignant processes and describe their implication in therapeutic response, resistance to chemo- and radiotherapy, and subsequent tumor progression. Original data based on the study of neuroblastic tumors will be provided.
INTRODUCTION
The study of spatial and temporal responses to mechanical forces of tissue structures of biological organisms is a growing field in health sciences. Such responses can be modified by mechanotherapeutic interventions, ranging from the molecular level to whole body systems, and involving a broad spectrum of target molecules belonging to the microenvironment. In order to carry out mechanotherapy effectively, we should consider the stabilizing elements of tension and compression, or biotensegrity systems, existing at all structural levels in the body. Tensegrity is an architectural principle put forth by Buckminster Fuller in the 1960s (1,2). According to the tensegrity principle, structures or tensegrity systems are stabilized by continuous tension with discontinuous compression (3). Coming under the term biotensegrity, the tensegrity principle applies to essentially all detectable scales in the body, from the musculoskeletal system to proteins or DNA (4,5).
In this review, we highlight the current challenges and ongoing issues in dissecting the mechanisms of tumor extracellular matrix (ECM) biotensegrity and discuss how these concepts may be translated into the treatment and prognosis of cancer. To illustrate the biotensegrity principle, we present some preliminary results on the mathematical integration of multimodal data, combining imaging and non-imaging tumor tissue data acquired in the context of neuroblastoma (NB) studies, suggesting testable hypotheses related to this principle for making prognostic and therapeutic-response predictions.
CELL AND TISSUE BIOTENSEGRAL PHYSIOLOGY
Several studies have demonstrated that cells can function as independent pre-stressed tensegrity structures through their cytoskeletal architecture. Ingber defined the pre-stressed tensegrity model as a structural support in biological systems. It is constituted by a number of continuous elements of tension and a number of discontinuous elements resistant to compression, providing a stabilized structure (6)(7)(8)(9)(10)(11)(12)(13)(14). As a tensegrity network, a single cell presents such continuous tension (mediated by cytoskeleton elements such as microfilaments and intermediate filaments) and local discontinuous compression (mediated by ECM and other cytoskeleton elements such as microtubules). The individual pre-stressed cells are poised and ready to receive mechanical signals and convert them into biochemical changes (15). Therefore, the cell membrane, nucleus, and all the organelles are hard-wired by the cytoskeletal scaffold. When the mechanical cue is received, sensed mainly by focal adhesion complexes induced by integrins, the signal modifies the cytoskeletal scaffold. Thus, the local mechanical signal is amplified and propagated through a series of force-dependent biochemical reactions, whereby intra-cellular signaling pathways become sequentially activated through mechanotransduction (16). At the molecular level, several elements resist compression, such as all structures containing alpha-helix, beta-sheet, or even DNA backbone structures, while others, such as the attraction and repulsion forces of molecular, atomic, and ionic bonds (such as van der Waals forces, covalent bonds, etc.), resist continuous tension. Many molecules display such structures and are subject to these two forces at different stages along the mechanical intra-cellular signal pathways. Among these abundant and intermingled pathways, many remain unknown. We believe that knowledge of how we could potentially interfere with these signaling pathways or cascades may provide new therapeutic targets. Nevertheless, as many molecules play multiple roles in different pathways, molecular therapy based on mechanotransduction should be carried out on specific targets to avoid adverse effects.
The tensegrity architecture of the cytoskeleton and signal pathways is linked to the tensegrity elements of the outer and inner nuclear membrane through KASH-SUN bridges, where KASH proteins are located in the outer nuclear membrane (Nesprin-1 and Nesprin-2 link nuclei with actin filaments, Nesprin-3 interacts with intermediate filaments, and Nesprin-4 binds to microtubules) (17)(18)(19) and SUN proteins in the inner nuclear membrane (Samp1 and lamin). This connection is critical for intra-cellular force transmission in physiological homeostasis (20)(21)(22) and might ensure that chromatin organization is not perturbed when tissues experience stress, and may be fundamental for normal development (23).
The self-balanced mechanical stability of the cytoskeleton enables the macro-mechanical forces to be converted into molecular changes. Since cells are connected to each other through cell junctions, mediated mainly by cadherins, these changes not only affect the cell that receives the signal, but are also transferred to the neighboring cells. Indeed, recent biophysical studies have revealed that the size of cell-cell contacts can be regulated in response to the mechanical forces exerted on those junctions and that cells are also able to regulate the forces exerted on their junctions (24)(25)(26). It is known that cell-cell junctions are anchored to neighboring cells and focal adhesions to ECM, and all are connected to the intra-cellular cytoskeletal network, therefore the forces that cross these structures fluctuate strongly when tissue is remodeled. It is becoming apparent that these structures do not just transmit forces while maintaining tissue cohesion, but also respond to fluctuations in force by actively influencing cell morphology and behavior (27). The biological significance of this mechanotransduction is to promote coordinated cytoskeletal reorganizations that can define changes in shape across the whole tissue. The basal lamina plays a central role in this process. It provides physical support to epithelial cells, surrounds muscle cells, fat cells, and Schwann cells, and is the environment where cells and ECM bind through focal adhesions and integrins. Accordingly, the shape of tissue cells (round or flattened) and the three-dimensional structure of the tissue patterns that constitute glands, alveoli, ducts, and papillae (among others), depend on the stiffness and flexibility and on the coordinated movement of the basal lamina (28).
ROLE OF ECM IN BIOTENSEGRITY
During the last decade, cell-matrix contacts based on the transmembrane adhesion receptors from the integrin family or focal adhesions have emerged as the major mechanosensitive structural elements that connect, collect, process, and integrate the information of the ECM. Recent proteomic studies have not only found many more components, but also have revealed that many of these elements are recruited to focal adhesions in a force-dependent manner, supporting the view that focal adhesions harbor a network of mechanosensitive processes (29). Integrins are transmembrane αβ heterodimer receptors that function as structural and functional bridges between the cytoskeleton and ECM molecules. Specifically, α8β1 or tensegrin can bind to several ECM molecules and has been shown to be associated with focal adhesion points, where it participates in the regulation of spreading, adhesion, growth, and survival in different neuronal and mesenchymal-derived cell types (30,31).
Various interconnected cells bind to their microenvironment, forming a mechanical tensegral system, which implies the existence of a mechanical balance between compression (ECM) and tension (cell) forces. The ECM is made up of a mixture of ground substance [glycosaminoglycans (GAGs) - mostly hyaluronan, proteoglycans, and glycoproteins] situated in close relationship with a fibrous scaffold [reticulin (Ret F) - elastin and collagen fibers (Col F)], and supplies much of the structural support available to parenchymal cells in tissues by adding tensile strength and flexibility (32). The ECM is a dynamic and multifunctional regulator and has its own biotensegrity, with Ret F and elastin fibers acting as tensional elements, and ground substance and Col F as compression-resistance elements. This tensegral network is considered to be a solid-state regulatory system of all cell functions, responsible for changes in genes and proteins, as well as alterations in cell shape and movement (33)(34)(35). One result of cell-ECM biotensegrity is substrate rigidity, which can control nuclear function and hence cell function (36). Cells can use this substrate rigidity to exert traction forces, thus altering the ECM. Indeed, in a state of reciprocal isometric mechanical tension, a dynamic balance exists between cell traction forces and points of resistance within the ECM. This dynamic biotensegral system with its mechanotransduction phases (Figure 1) enables our cells to mechanosense, modifying their microenvironment, thus promoting ECM remodeling in homeostasis and in tissue disorders (37). Manipulation of this mechanical balance could be used to promote tissue regeneration. In fact, various studies have demonstrated that different elasticities of the ECM drive mesenchymal stem cell differentiation in a very specific way. Neurogenic, myogenic, or osteogenic differentiations are induced under identical matrix serum conditions, with variations in ECM softness, strength, and stiffness (38). Furthermore, ECM stiffness guides cell migration. It has been shown that fibroblasts prefer rigid substrates, and when placed on flexible sheets of polyacrylamide, they migrate from the soft to the stiff areas (39). Under homeostatic conditions, collagen fibrils have a minimal turnover. However, this turnover is accelerated during tissue remodeling and tumor development, as evidenced by the serum levels of its degradation products (40). Studies of the ECM have revealed that the components of the tumor microenvironment are fundamental, not only for the regulation of tumor progression (41,42), but are also essential even before the tumor appears. The stromal cells are able to transform the adjacent cells through an alteration in the homeostatic regulation of the tissue.
CELL AND TISSUE BIOTENSEGRITY IN CANCER
Cancer can be understood as a disease of the developmental processes that govern how cells organize into tissues (48). The tumor microenvironment is comprised of a variety of cell types lying among a network of various ECM fibers merged within the interstitial fluid and gradients of several chemical compounds, which constantly interplay with malignant cells (34). Therefore, we can infer that the previously described biotensegral systems also exist within tumor tissue. In fact, the dynamic mechanical balance achieved through mechanosensors, cytoskeletal tensegrity, molecular biotensegral intra-cellular pathways, ECM with compressive and resistant elements, supportive cells (such as fibroblasts and multiple tumor-associated immune cells), and vascular and lymphatic vessel tensional structures can be as important as the genetic instability of tumor cells in the pathogenesis and evolution of the malignant process (42,49). In this regard, various studies have demonstrated the importance of cell-ECM biotensegrity in cancer (34,48,50). Indeed, a desmoplastic reaction is frequent in many solid tumors, such as breast, prostate, colon, or lung, in which high levels of TGF-b and PDGF are found. These growth factors are produced by the mesenchymal cells of the tumor stroma and induce immunophenotypic changes. These changes are observable by studying actin-alpha, myosin, vimentin, desmin, and the altered production of several ECM proteins, such as collagens, laminin, tenascin, ECM metalloproteinases (MMP), and MMP inhibitors (43). Additionally, ECM stiffness modulates cancer progression; cancer cells promote stiffening of their environment, which in turn feeds back to increase malignant behaviors such as loss of tissue architecture and invasion (51). For instance, the speed of malignant cells in vitro is affected by the geometry of the ECM. Human glioma cells move faster through narrow channels than through wide channels or on non-stretched 2D surfaces. This is thought to be triggered by an increase in the polarity of the traction forces between cell and ECM (52). Indeed, recent publications describe that not only neoplastic ECM stiffness, but also the firmness of tumor cells, plays a significant role in tumor progression. The firmness of tumor cells, especially metastatic cells, has been found to be lower than that of the normal cells of the same sample, and is caused by the loss of actin filaments and/or microtubules and the subsequent lower density of the scaffold (53,54). In this regard, it has been shown that the transformation from a benign proliferative cell into a malignant cell can be produced by a peculiar phenotypic change, known as epithelial-mesenchymal transition (55). This transformation involves breaking contact with sister cells and increasing motility, as a result of a change from the epithelial cytoskeleton, with its corresponding properties, to a pseudomesenchymal phenotype, which enables migration, invasion, and dissemination (56). While normal cells adhere to their environment through integrins, and their body has a proper consistency, tumor cells lose that consistency and tensegrity, becoming easily deformable elements (causing pleomorphism), with high elasticity (enhancing infiltration) and an increased degree of mobility (enabling metastasis) (57). In breast cancer, the genomic profile expressing mainly mesenchymal features is actually found in the most invasive cell lines (58).
Moreover, it has been published that chronic growth stimulation, ECM remodeling, alteration of cell mechanics, and disruption of tissue architecture are nongenetic influences on cancer progression (42,49). These ideas not only agree with basic predictions of cellular tensegrity, but also support the idea that therapy based on the manipulation of biotensegrity cues should be considered as a way to revert the malignant phenotype (59,60).
In cancer research, the hallmark that encompasses the physical aspects of tissue has been less investigated, but it is known that this hallmark underlies some of the most basic mechanisms enhancing tumor proliferation and creating resistance to cancer treatment, among other processes (61). A previous study by our group takes into account some structural elements of the ECM that have the capacity to influence physical conditions, and suggests that Schwannian stroma cells are not the only important factor in the histopathologic analysis of neuroblastic tumors (50). Specifically, multi-parametric analysis of other tumor stroma components (Ret F, Col F, GAG, and immune cells), detected by classic histochemistry (HC) and immunohistochemistry (IHC) techniques and incorporated into a quantitative morphological analysis, would improve the value of the International NB Pathology Classification (62). As we will show later, chemotherapy and radiotherapy are known to act on tumor cells as well as on stromal cells and ECM elements (63,64). As a consequence, injury to the ECM can contribute directly to treatment resistance, creating niches of resistant tumor cells (64,65). Furthermore, damage to DNA induces the production of cytokines and growth factors by stromal cells; this triggers inflammation, cell survival, and tumor progression, and thus the effect of therapy on the ECM may be to promote relapse or chemoresistance (66,67). We hypothesize that studying the different elements of the ECM, as one of the main contributors to biotensegrity, through objective morphometric analysis and the creation of mathematical networks from histologic sections stained with HC and IHC, can shed light on how biotensegrity influences the tumor microenvironment and could provide clues to its mechanism of action.
EVIDENCE OF ECM BIOTENSEGRITY CHANGES IN MALIGNANT TISSUE
It is known that tumor cells alter the mechanical properties of the microenvironment in order to create favorable conditions for their proliferation and/or dissemination (68). In addition, adhesion molecules such as E-cadherin are involved in the processes of tissue differentiation and morphogenesis and play an important role in modulating the invasiveness of tumor cells in breast cancer and other epithelial tumors (69). For instance, the reciprocal communication between the stromal cells and the tissue parenchyma directs gene expression, and in prostate carcinoma and breast carcinoma, the oncogenic potential arises from stroma-associated fibroblasts, the immune response, and alterations of biotensegrity (49,70). Deregulation and disorganization of the composition, structure, and stiffness of the ECM elements progressively increase interstitial fluid pressure, leading to limited penetration and dissemination of therapeutic agents within solid tumors, thus enabling the creation of niches within tissues and organs that offer sanctuary to tumors and activate therapy resistance programs (11-13, 64, 65). Tumor cells are not the only cells that change the mechanical properties of the microenvironment: despite all the efforts of tumor cells to make ECM elements work for their survival and proliferation, tumor stromal cells, specifically immune system cells, try to reverse the pathological condition. Indeed, two lymphoproliferative syndromes (follicular lymphoma and Hodgkin lymphoma) are good examples of the fact that a tumor can be considered functional tissue, connected to and dependent on the microenvironment, which sends and receives signals to and from the tumor tissue itself. In such syndromes, tumor microenvironment stromal cells, including the immune response, determine the morphology, clinical stratification, aggressiveness, prognosis, and response to treatment of the tumor (71).
In the next two sub-sections, we describe the methods developed for the study of biotensegrity in neuroblastic tumors.
MORPHOMETRIC ANALYSIS OF ECM ELEMENTS - AN EXAMPLE IN NB
Accurate quantification of pathology specimens using imaging technology to analyze the variations in tissue structure that arise from interactions between tumor and stroma cells and ECM elements is providing important information. This approach would allow biotensegral patterns to be included in computational formulations for risk stratification systems and would aid in designing better anti-cancer treatment strategies (29). However, the validity of the model depends on the quality of the data, and this quantification depends on the staining, scanning, image analysis, and statistical evaluation. For that purpose, automatically stained sections must be digitized using microscopic preparation scanners such as the Aperio ScanScope XT (Aperio Technologies) or the Panoramic Midi (3DHistech), or with a photomicroscope if a scanner is not available. Different image analysis systems can be used, such as Image-Pro Plus (Media Cybernetics), ImageScope (Aperio Technologies), Panoramic Viewer (3DHistech), free software (ImageJ, from the NIH), or self-designed software, to obtain customized macros or algorithms (informatic protocols) that detect and characterize the quantity (number of objects and area occupied), size (area, width, length), shape (aspect, roundness, perimeter ratio, fractal dimension), and orientation (angle), among other parameters, of the ECM elements of interest. All systems provide mark-up images or masks, which represent the recognized and measured element in white on a black background. The use of tissue microarrays is advised for standardization of background subtraction and color segmentation, given that these techniques tend to be dependent on the intensity of the staining; otherwise, algorithms must be recalibrated with every change in intensity/ground noise/contrast of the staining, thus losing objectivity. Further details regarding the objective quantification of different cell and ECM elements, and a flowchart of the analysis used by our group, are described elsewhere (50).
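As an illustration of the kind of customized macro described above, the following Python sketch (using scikit-image, one of several possible toolkits) segments a stained element by a color threshold and measures quantity, size, shape, and orientation parameters per object, returning a white-on-black mark-up mask. The hue window and minimum object size are hypothetical and, as noted in the text, would need recalibration for each stain, scanner, and background.

```python
import numpy as np
from skimage import color, io, measure, morphology

# Hypothetical segmentation settings; recalibrate per stain and scanner.
HUE_WINDOW = (0.55, 0.75)   # assumed hue range of the stained element
MIN_AREA = 25               # assumed minimum object size in pixels

def quantify_ecm_element(image_path):
    rgb = io.imread(image_path)[..., :3]        # drop alpha channel if present
    hsv = color.rgb2hsv(rgb)
    # Color segmentation: keep pixels whose hue falls in the stain window
    mask = (hsv[..., 0] >= HUE_WINDOW[0]) & (hsv[..., 0] <= HUE_WINDOW[1])
    mask = morphology.remove_small_objects(mask, MIN_AREA)
    labels = measure.label(mask)
    objects = []
    for region in measure.regionprops(labels):
        perimeter = max(region.perimeter, 1e-6)
        objects.append({
            "area": region.area,                                  # quantity/size
            "length": region.major_axis_length,                   # size
            "width": region.minor_axis_length,                    # size
            "aspect": region.major_axis_length
                      / max(region.minor_axis_length, 1e-6),      # shape
            "roundness": 4 * np.pi * region.area / perimeter**2,  # shape
            "orientation_deg": np.degrees(region.orientation),    # angle
        })
    # mask doubles as the white-on-black mark-up image; mask.mean() gives the
    # fraction of the section occupied by the element
    return mask, mask.mean(), objects
```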
Neuroblastic cells are known to be engaged in a complex interaction with the surrounding tumor microenvironment, and we believe that patients with neuroblastic tumors, specifically those still subject to therapeutic failure despite current knowledge, could benefit from novel therapeutic strategies originating from the study of ECM biotensegrity. To investigate such new therapeutic targets, we have objectively quantified Ret F, Col I F, GAGs (Gomori, Masson's trichrome, and Alcian blue pH 2.5 HC, respectively), blood vessels (CD31 IHC, Dako), lymph vessels (D2-40 IHC, Dako), and cell markers, including the leukocyte lineage (CD45/LC IHC, Dako) and NB cells, in primary NB. A first approach to evaluating the role of ECM biotensegrity in neuroblastic tumors is the observation of the mark-up images of tissue microarray cylinders comprising a mixture of tumor and normal tissue in the primary and/or metastatic location. In the particular case presented in Figure 2, included for illustrative purposes, a clear disruption in the organization of the ECM elements can be observed when passing from the normal tissue area to the neoplastic tissue. In the tumor area, Ret F becomes disorganized, Col I F is slightly (although minimally) increased, GAGs almost disappear, CD45-reactive cells accumulate, and blood vessels vary in size and characteristics.
Statistical analysis of the quantitative data on fibers, GAGs, tumor cells, and immune system markers, compared with the current parameters used to predict risk of relapse (stage, age, histopathology, MYCN oncogene status, 11q region status, overall genomic profile, and ploidy) (72-74) and other genetic markers of prognostic interest, in a subset of 78 primary neuroblastic tumors has already been published by our group and highlights the interest of studying the ECM in neuroblastic tumors (50). The fact that ECM elements differ depending on the characteristics of the tumors and, more interestingly, the fact that the characteristics of ECM elements are related to prognosis (relapse or overall survival), support a regulatory role of ECM biotensegrity in tumor progression.
DEVELOPMENT OF MATHEMATICAL TOPOLOGY OF ECM ELEMENTS - AN EXAMPLE IN NB
The combination of multidisciplinary efforts by clinicians, biologists, pathologists, bioengineers, and biostatisticians could elucidate how ECM elements interact with tumor and stromal cells. In this respect, a new and interesting approach is to analyze biopsies by converting the tissue into a mathematical network of cell-to-cell contacts (75-78). Using graph theory concepts, these networks can provide organizational information that seems relevant in embryologic development and disease. For example, this method has been applied to the analysis of neuromuscular diseases, serving as a diagnostic tool able to quantify the severity of the pathology in a muscle biopsy (77). We propose that this technology can be adapted to the analysis of tumor biopsies. It is already possible to compare different mark-up images obtained from the analysis of several markers assessed on serial thin sections with preserved histology. These overlapping images enable several markers to be considered at the same time and allow the co-location and study of the interaction between continuous tensional elements and discontinuous compression elements. In this regard, we have analyzed the relationship between different biopsy components taking the cell nuclei as a reference. The procedure is based on the identification of the cell nuclei and the calculation of their respective centroids. These centroids serve as seeds for a Voronoi diagram of Voronoi cells (79, 80). A new partitioned image is produced in which each nucleus is associated with its corresponding Voronoi cell. In this way, it is possible to construct a network based on the neighboring Voronoi cells. Topological approaches and the use of Voronoi cells need to be able to capture the presence and relative disposition of tissue heterogeneities derived from, for example, the luminal space of glands, blood and lymph vessels, or larger extracellular spaces. These heterogeneities will be reflected through different characteristics and will be taken into account in the study, testing whether they can be part of the relevant features that define a specific condition. The selection of regions of interest in each biopsy will allow us to study only tumor tissue areas, without artifacts that could bias the study and lead to wrong conclusions. The combination of graph-related parameters with the morphometric information will enable a comprehensive analysis of the changes arising from different compression forces in relation to the different types of ECM (stiff/soft, organized/chaotic), in combination with tumor stroma cells such as immune cell infiltrates. We have performed preliminary comparisons based on the genetic features of NB using Ret F and blood vessels (in addition to the nuclei) as the reference features providing the biological clues. This procedure has shown some hints of discrimination regarding the organization and co-location of these elements (Figure 3). We found that some network characteristics were relevant to perform this initial separation. This suggests that diverse backgrounds can respond differently to the pathological process depending on the organization of the tumor. Following the same approach, we will use other mark-up images of the positive cells stained with the different monoclonal antibodies against the different cells of the leukocyte lineage.
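A minimal sketch of the Voronoi-based network construction described above is shown below. It exploits the fact that two Voronoi cells are neighbors exactly when their seed nuclei share a Delaunay edge; the maximum edge length cutoff is a hypothetical device to avoid spurious connections across lumina, vessels, or large extracellular spaces (the tissue heterogeneities mentioned in the text).

```python
import networkx as nx
import numpy as np
from scipy.spatial import Delaunay

def voronoi_adjacency_graph(centroids, max_edge_len=50.0):
    """centroids: (n, 2) array of nuclei centroids (e.g., from a mark-up
    image). max_edge_len (in pixels, an assumed value) prunes edges that
    span large acellular regions."""
    tri = Delaunay(centroids)
    G = nx.Graph()
    G.add_nodes_from(range(len(centroids)))
    for simplex in tri.simplices:               # each simplex is a triangle
        for i, j in ((0, 1), (1, 2), (0, 2)):
            a, b = int(simplex[i]), int(simplex[j])
            if np.linalg.norm(centroids[a] - centroids[b]) <= max_edge_len:
                G.add_edge(a, b)
    # A few graph-theoretical descriptors that can be combined with the
    # morphometric parameters of the same section
    features = {
        "mean_degree": 2 * G.number_of_edges() / max(G.number_of_nodes(), 1),
        "clustering": nx.average_clustering(G),
        "components": nx.number_connected_components(G),
    }
    return G, features
```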
We hope that this combination of mathematical and statistical methods will answer the question of the relationship between ECM biotensegrity and the changes mediated by the cell infiltrate.
EFFECT OF TREATMENT ON TUMOR MICROENVIRONMENT AND CONSEQUENCES
There is much evidence that the lack of total specificity of cancer therapeutic agents (chemotherapy and radiotherapy) causes collateral damage to the mechanical properties of the tumor ECM and to benign stromal cells (which were previously fighting the tumor), creating resistance to therapy and favoring relapse and metastasis. This becomes evident when analyzing post-treatment biopsies, which contain a high degree of fibrosis and calcification. Some studies have shown that cancer therapy can damage tumor DNA and stromal cells, which results in the secretion of a spectrum of proteins, including Wnt family members. For example, in prostate cancer, the expression of these proteins in the tumor microenvironment, regulated by B lymphocytes, attenuates the effects of cytotoxic chemotherapy in vivo, promoting tumor cell survival and disease progression (63,81). It has also been reported that in follicular lymphoma and diffuse large cell lymphoma, treatment with lenalidomide affects the immune synapses of intra-tumoral T lymphocytes (82). In breast cancer, treatment with doxorubicin results in an increase in fibulin-1, an ECM protein, and its binding proteins, fibronectin and laminin-1, which constitute a source of chemoresistance (83), and triggers overexpression of the maspin protein, which induces the accumulation of collagen fibers, thus causing disease progression (84). A novel Toll-like receptor-9-dependent mechanism that initiates tumor regrowth after local radiotherapy has also been reported (85). Monoclonal antibodies against fibulin-1 are able to reverse such chemoresistance, and the inhibition of MMPs seems to have a therapeutic effect (86).
An example of the effect of treatment on NB is shown in Figure 4. When comparing a primary NB with its non-primary, post-treatment sample, we can appreciate that Ret F, GAGs, and Col I F accumulate in the ECM of the post-treatment sample, while the amount of blood macrovasculature is slightly decreased. All these findings describe a stiffer ECM after multimodal treatment.
POTENTIAL OF MECHANOTHERAPY
The changes exerted on the ECM by therapeutic agents and the perspective of the epithelial-mesenchymal transition in epithelial tumors have opened the door to a new line of treatment, one that considers the genetic and epigenetic mechanisms associated with resistance to chemotherapy (59,87). There is a need to personalize therapeutics taking into account not only the features known to have prognostic impact, but also new markers, such as the mechanical stress of the tumor ECM elements. Indeed, because of its importance to the tumor, the ECM represents an "Achilles heel" that can be exploited in designing cancer therapy. The bionetwork between the ECM and tumor cells is dynamic, and for every action, such as exposure to genotoxic stress, there are reactions and consequences throughout the micro- and macrosystem (65). Removal of ECM barriers will either have a direct negative effect on tumor cells or facilitate anti-tumor immune responses and drug treatment through better intra-tumoral penetration and accessibility to target cells. In this regard, a number of experimental approaches aim at the transient degradation or downregulation of ECM proteins using injection of ECM-degrading enzymes into the tumor or their intra-tumoral expression after viral- or stem cell-based gene transfer (65). Other approaches attempt to indirectly decrease tumor-associated ECM by killing the tumor stromal cells that produce ECM proteins (64) or aim at enhancing the host immune response (88). The ability of therapeutic agents to modify the ECM can be turned around in such a way that new chemicals can be applied to modify a given ECM stiffness or composition into one shown to be associated with a better prognosis.
CONCLUDING REMARKS
Biotensegrity is the structural principle underlying mechanotherapy. Cells are linked both to each other and to the ECM, forming a mechanical biotensegral system in homeostasis. Cell-cell junctions are anchored to neighboring cells and focal adhesions to the ECM, allowing forces to cross via intra-cellular cytoskeletal and nuclear networks. These structures fluctuate, and the multiple responses appear to strongly affect tissue remodeling and cell transformation. As described, normal organ tissue, primary NB, and post-treatment NB have a different amount and topography of biotensegral ECM elements. Conventional approaches have traditionally focused on the neoplastic cell. Moreover, an arsenal of mechanotherapeutic approaches to enhance the efficacy of more classical cancer therapeutics and overcome treatment resistance has already been discovered. To achieve more effective personalized strategies, further studies should aim to improve the definition of the interactions between tumor and stromal cells and the ECM elements and vascular constituents of the tumor, as well as their influence on treatment. We propose that integrating the tumor topo-functional networks of the ECM elements with the clinical, histopathology, and genetic information could provide new information about the impact of biotensegrity on patient care. Understanding the mechanical properties of tumor ECM components, related to variations in quantity, degree of interference, and types of organization, is key to defining new potential mechanotherapeutic targets and agents.
| 2016-06-17T21:20:38.951Z | 2014-01-20T00:00:00.000 | {
"year": 2014,
"sha1": "2e1b707806d25051244f6476cd36ece5d7956ba0",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fonc.2014.00039/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2e1b707806d25051244f6476cd36ece5d7956ba0",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
224926429 | pes2o/s2orc | v3-fos-license | Investigation of the 2016 Eurasia heat wave as an event of the recent warming
This study investigates the physical mechanisms that contributed to the 2016 Eurasian heat wave during boreal summer season (July–August, JA), characterized by much higher than normal temperatures over eastern Europe, East Asia, and the Kamchatka Peninsula. It is found that the 2016 JA mean surface air temperature, upper-tropospheric height, and soil moisture anomalies are characterized by a tri-pole pattern over the Eurasia continent and a wave train-like structure not dissimilar to recent (1980–2016) trends in those quantities. A series of forecast experiments designed to isolate the impacts of the land, ocean, and sea ice conditions on the development of the heat wave is carried out with the Global Seasonal Forecast System version 5. The results suggest that the tri-pole blocking pattern over Eurasia, which appears to be instrumental in the development of the 2016 summer heat wave, can be viewed as an expression of the recent trends, amplified by record-breaking oceanic warming and internal land-atmosphere interactions.
Introduction
The unusually severe heat wave events that have occurred around the world in recent decades have had a profound negative impact on human health, ecosystems, and socioeconomic systems (WMO 2011, Coumou and Rahmstorf 2012). Observations are consistent in showing that, since the middle of the 20th century, most global areas have experienced significant warming in daily maximum and minimum temperatures (Caesar et al 2006, Donat et al 2013a, 2013b). One of the impacts of the warming trend has been to increase the frequency, intensity and/or duration of heat wave events across much of North America, Eurasia and Australia: an impact that appears to be due not only to an increasing mean temperature, but also to an increase in its variability (Choi et al 2009, Perkins et al 2012, Stocker et al 2013, Sun et al 2014). Over Eurasia, the prominent summer temperature trend exhibits an inhomogeneous pattern with accelerated warming centered on eastern Europe, East Asia, and the Kamchatka Peninsula (Cohen et al 2012, Horton et al 2015). This is associated with a wave train-like atmospheric circulation trend pattern that appears to have been instrumental in the development of a number of the most extreme heat waves, including the 2003 European and 2010 Russian heat wave events (Dole et al 2011, Schubert et al 2011), and in the general increase in heat wave occurrence over East Asia (Ito et al 2013, Erdenebat and Sato 2016).
A number of previous studies have examined the role of anomalous boundary conditions (e.g. sea surface temperature (SST), soil moisture, and sea ice) in the development of major heat waves. For example, dry land surface conditions appear to have played a role in the development of recent Eurasian summer heat waves (Beniston 2004, Ferranti and Viterbo 2006, Fischer et al 2007a, 2007b, Seo et al 2019). Here, deficits in soil moisture, driven by larger evapotranspiration rates or by precipitation deficits, contribute to the development of persistent atmospheric high-pressure systems which, in turn, act to amplify the surface dryness and warming. Anomalous SSTs can also act to drive extremely hot weather by forcing large-scale atmospheric teleconnections, including the development of persistent ridges (Kenyon and Hegerl 2008, Alexander et al 2009). Examples of heat waves where anomalous SSTs appear to have played a role in their development include those that developed over Russia in 2010, over southern Australia in 2009 (Pezza et al 2012), over the central and southern United States in 2011 (Hoerling et al 2013), and over East Asia in 2013 (Jing-Bei 2014). In the case of the 2010 Russian heat wave, the wave train-like spatial pattern of the surface temperature anomalies over Eurasia was to a large extent determined by an internally forced atmospheric Rossby wave that appears to have been amplified by the recent trends in SSTs resembling a linear combination of the cold phase of a Pacific decadal mode, the warm phase of an Atlantic multidecadal oscillation (AMO)-like mode, and the long-term trend pattern (Schubert et al 2014). This includes record high SSTs in the tropical Atlantic that produced strong local convection and altered monsoon circulations, which helped to produce anticyclonic conditions over Russia (Trenberth and Fasullo 2012). Furthermore, such a wave train-like pattern is related to the steady warming derived from external forcing and the heterogeneous warming induced by internal dynamic processes of land-atmosphere interactions that manifest quasistationary waves (Sato and Nakamura 2019). In the case of East Asia, the increasing occurrence of summer extreme heat events also appears to be linked to changes in both summer Arctic sea ice and high-latitude snow cover over land (Tang et al 2014).
During the summer of 2016, Eurasian countries were engulfed by an extreme heat wave. On a global basis, summer temperatures were the hottest on record since 1880. In this study, we carry out a series of well-controlled numerical hindcast experiments designed to isolate the separate influences of the large-scale oceanic, land, and sea ice conditions on the development and maintenance of the 2016 heat wave. Furthermore, we examine the extent to which the development is a manifestation of the wave-like spatial structure of the recent warming trend over Eurasia.
Observational data
The high-resolution (0.5° horizontal resolution) gridded CRU Time-Series (TS) version 4.01 data produced by the University of East Anglia were used as one of the observational land surface air temperature (SAT) products (Harris et al 2014). These monthly data cover the period 1901-2016 and are used in the continental-scale long-term trend analysis. A gridded land-only Global Historical Climatology Network (GHCN) daily maximum temperature anomaly dataset produced by the U.S. National Climatic Data Center was used in the analysis of recent changes in climate extremes (Menne et al 2012). The observational daily mean SAT was obtained from the 6-hourly Japanese 55-year Reanalysis (JRA-55) (Kobayashi et al 2015), which was also used as the near-surface atmospheric forcing for the offline land surface model (LSM) simulation. The observation-based geopotential height (GPH) data were taken from the Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2) for 1980-2016 (Gelaro et al 2017). The SSTs used were from the Advanced Very-High-Resolution Radiometer-based Optimal Interpolation Sea Surface Temperature for 1982-2016 (Reynolds et al 2007), and sea ice concentrations were obtained from the National Snow and Ice Data Center (NSIDC, http://nsidc.org/) for 1987-2016. The validation of the soil moisture produced by the offline LSM simulation used the satellite-based European Space Agency Climate Change Initiative (ESA CCI) volumetric soil moisture data covering 1980-2016, which represent soil moisture in the top few centimeters of soil (~5 cm) (Dorigo et al 2017). The research period for this study is the 37 years 1980-2016; we used as much of the available observations in this period as possible.
LSM offline simulation
This study first integrated the stand-alone LSM of the Global Seasonal Forecast System version 5 (GloSea5) to obtain a realistic land surface reanalysis. The Joint UK Land Environment Simulator (JULES) LSM was driven by observed atmospheric surface conditions, including 2 m air temperature and humidity, precipitation, 10 m wind speed, radiative fluxes, and surface pressure. These historical observations were obtained from the 6-hourly JRA-55 Reanalysis. Precipitation was corrected with monthly mean values from the Climate Prediction Center Merged Analysis of Precipitation dataset (Xie and Arkin 1997). Our land reanalysis was carried out at a resolution of 0.5° latitude × 0.5° longitude across the land areas of the globe. Among the four vertical sub-surface layers, the top level, representing volumetric soil moisture at a depth of approximately 10 cm, reliably captures the geographical distribution of the JA-mean soil moisture climatology and the anomalous soil moisture conditions in 2016 in comparison with the ESA CCI volumetric soil moisture (supplementary figure 1 (available online at https://stacks.iop.org/ERL/15/114018/mmedia)), even though the range of volumetric soil moisture in the two datasets is quite different due to differences in the representative soil depth and other factors (Koster et al 2009). The interannual variation of the soil moisture anomalies over East Asia over the 37 years also exhibits a significantly high temporal correlation (r = 0.64) with the CCI data. Our land reanalysis provided the observational surface soil moisture examined in this study; it was also used to initialize the multi-layer soil moisture conditions in the fully coupled ensemble forecasts.
Experiment design
This study used the UK Met Office GloSea5-GC2.0 coupled atmosphere-land-ocean-sea ice model (Maclachlan et al 2015) (see the Supplementary for the specific model description) to perform a set of ensemble seasonal forecasts that address the impact of initial conditions on the simulation of the 2016 Eurasian heat wave. We conducted four sets of ensemble forecast experiments (table 1). In Exp1 to Exp3, one set of initial conditions was replaced by the model climatology: soil moisture (Exp1), ocean (Exp2), and sea ice (Exp3), respectively. Exp4 was initialized with all anomalous observed states in July 2016. Note that the operational GloSea5 seasonal forecast in the Korea Meteorological Administration has the same configuration as Exp1. Comparison of Exp4 with Exp1, Exp2, and Exp3, respectively, isolates the impact of the soil moisture, ocean, and sea ice components on the extreme event. All experiments consist of 50 ensemble members. Soil moisture initialization follows the standard normal deviate scaled method (Seo et al 2019), wherein the observed anomalies from the offline reanalysis (see above) were added to the coupled model's soil moisture climatology after scaling the observed variance to the coupled model's variance. The coupled model's soil moisture statistics were obtained from the soil moisture fields for the given date from a long-term GloSea5 integration covering 15 years (1997-2011). The adjusted soil moisture initial state minimizes systematic drift toward the model climatology. Climatological soil moisture initial conditions were obtained from a pre-existing yearly-averaged monthly land surface reanalysis in which JULES was forced offline using the Integrated Project Water and Global Change (WATCH) Forcing Data methodology applied to ERA-Interim data (WFDEI) (Weedon et al 2011). Ocean and sea ice components are initialized by the Forecast Ocean Assimilation Model Ocean Analysis (Blockley et al 2014) using a variational data assimilation system for the NEMO ocean model (NEMOVAR), as in the operational GloSea5 coupled forecasts and hindcasts. Climatological ocean and sea ice initial conditions (both surface and subsurface) were obtained from the averages of the ocean and sea ice initial conditions over the 20-year GloSea5 hindcast period (1991-2010).
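As a rough illustration of the standard normal deviate scaling described above, the Python sketch below rescales an observed soil moisture anomaly by the ratio of model to reanalysis variability before adding it to the model climatology. Variable names and the non-negativity clipping are our own illustrative assumptions, not the operational GloSea5 code.

```python
import numpy as np

def scaled_soil_moisture_ic(sm_obs, obs_clim, obs_std, model_clim, model_std):
    """All inputs are gridded (lat, lon) fields for the initialization date.
    Names are illustrative, not the operational GloSea5 variables."""
    anomaly = sm_obs - obs_clim                          # observed anomaly
    scaled = anomaly * (model_std / np.maximum(obs_std, 1e-6))
    initial_state = model_clim + scaled                  # limits drift toward model climatology
    return np.clip(initial_state, 0.0, None)             # keep volumetric SM non-negative
```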
The atmosphere in all of the runs was initialized with conditions produced by the U.K. Met Office four-dimensional variational (4D-Var) data assimilation system (Rawlins et al 2007). Each 50-member ensemble consists of 10 members started on each day between July 1 and July 5 of 2016. Initial conditions were perturbed on a given day using the Met Office Stochastic Kinetic Energy Backscatter (SKEB2) scheme (Bowler et al 2009). Each run was conducted for 62 d, i.e. through the end of August.
Figure 1 shows the spatial pattern of the recent linear trend of the JA-mean SAT, the 300 hPa GPH, and the surface soil moisture anomalies. The trend map for SAT reveals regions of significant warming in eastern Europe, East Asia, and the Kamchatka Peninsula (figure 1(a)), consistent with a tri-pole pattern of 300 hPa GPH anomalies (figure 1(b)). It is noteworthy that the wave train-like pattern of the temperature trend resembles the leading mode of the combined Eurasian summer SAT and precipitation variability, characterized by a projection (associated time series) that has an increasing trend (Schubert et al 2014). Furthermore, when the leading SST pattern of global warming is prescribed in five atmospheric general circulation models (AGCMs) (Schubert et al 2009), the resulting JA-mean surface temperature response (supplementary figure 2(a)) is also similar in structure to the recent warming trend over Eurasia, especially with significant warming signals in eastern Europe and East Asia. The linear trend of the soil moisture anomalies exhibits roughly the same spatial pattern as that of the SAT anomalies but with opposite sign, suggesting a substantial land-atmosphere feedback (figure 1(c)). Such a temperature-soil moisture relationship is relatively strong in the transitional regions where the time-mean soil moisture is neither too dry nor too wet (Koster et al 2011, Seo et al 2019), including the regions of soil moisture dryness in the recent period (Schubert et al 2014). However, the spatial pattern of the soil moisture long-term trend is not identical to that of the SAT since the soil moisture is determined not only by the energy balance but also by the water balance. Comparing the 2016 JA anomalies (figures 1(d)-(f)) to the trends (figures 1(a)-(c)), we see considerable similarities, although there is a slight phase shift of the positive anomalies over eastern Europe toward the central Eurasia region, presumably due to the prominent sea ice melting over the Kara Sea during 2016. Based on this aspect, we try to understand the 2016 Eurasian heat wave as a concurrent event of the recent warming and internal variability.
Heat wave days and intensity
The interannual variation of the JA-mean SAT in eastern Europe (35-60°E, 50-70°N; figure 2(a)) and East Asia (95-125°E, 30-50°N; figure 2(b)) highlights the record-breaking heat wave signal in 2016 as well as a secular warming trend in both regions over the 37-year period. The interannual variation of heat wave days and intensity (the definition is described in the Supplementary; a schematic percentile-based computation is sketched at the end of this subsection) in eastern Europe (figure 2(c)) and East Asia (figure 2(d)) shows not only pronounced year-to-year variability but also decadal variability and a trend. For East Asia, the heat wave days and intensity show an increasing trend and a remarkable signal in 2016, which shows up regardless of whether the 90th or the 95th percentile is used as the threshold. For eastern Europe, the heat wave variables also show an increasing trend, and the 2016 summer is ranked as the second hottest summer during that period, exceeded only by the Russian heat wave of 2010. The 2016 Eurasia heat wave can be considered a continental-scale extreme heat event because anomalous warming occurred simultaneously over eastern Europe and East Asia.
Figure 3 (left panels) shows the anomalies of the land, ocean, and sea ice roughly 3 weeks prior to the development of the heat wave (here we show 5 d averages at the beginning of July). The soil moisture anomalies are characterized by dry conditions over eastern Europe, eastern Siberia, and the Kamchatka Peninsula, and wet conditions over southwest Siberia (figure 3(a)). The wet conditions are the result of positive precipitation anomalies that occurred in June of 2016 (not shown). The soil dryness ahead of the 2016 hot summer was likely impacted by the exceptionally low snow cover that extended across the Eurasian continent during the spring, which provided low amounts of meltwater to the land surface (supplementary figure 3).
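The exact heat wave definition is given in the paper's Supplementary; as a schematic stand-in, the sketch below counts days on which the daily SAT exceeds the 90th (or 95th) climatological percentile and averages the exceedance as an intensity. A persistence (consecutive-day) criterion, if part of the original definition, is omitted here.

```python
import numpy as np

def heat_wave_stats(daily_sat, clim_daily_sat, pct=90):
    """daily_sat: (n_days,) regional-mean series for one JA season.
    clim_daily_sat: (n_years, n_days) historical JA series for the threshold."""
    threshold = np.percentile(clim_daily_sat, pct)
    exceed = daily_sat > threshold
    hw_days = int(exceed.sum())
    hw_intensity = float((daily_sat[exceed] - threshold).mean()) if hw_days else 0.0
    return hw_days, hw_intensity
```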
Model hindcasts of the 2016 Eurasian heat wave
The soil moisture anomaly pattern over Eurasia during early July (figure 3(d)) shows some similarities to the trend pattern of the JA-mean soil moisture (cf figure 1(c)), suggesting that the land surface conditions going into the heat wave in early July include an imprint from the recent warming trend. To quantify the agreement between the soil moisture trend and the anomaly pattern in a specific year, we calculate the spatial correlation between them over the Eurasian continent (30-180°E, 40-80°N). Note that for each year, when computing the spatial correlations, the trend is recomputed excluding the year in question. The results show that the evolution of the spatial correlation between the trend pattern and the pattern of the anomalies during any one year (utilizing either the JA-mean or the 1-5 July mean soil moisture) is characterized by generally increasing values, such that the 1-5 July soil moisture anomaly exhibits its strongest agreement (highest positive correlation) with the soil moisture trend in 2016 (figure 3(g)). Turning to the SST, figure 3(e) shows that the initial state (1-5 July 2016) is characterized by weak La Niña-like conditions in the tropical Pacific with, however, predominantly warm anomalies, including those in the North Atlantic and the North Pacific. In fact, focusing on the global mean (figure 3(h)), 2016 is the warmest, though it is likely that it is the spatial variations in the SST (rather than the global mean) that are more relevant for producing the wave-like temperature anomalies over Eurasia. For the sea ice conditions, anomalous melting in the Kara Sea occurred in 2016 (figure 3(f)), when the Arctic sea ice anomalies rank as the second (1-5 July mean) and third (JA-mean) lowest values historically (figure 3(i)). Overall, it is clear that the early July 2016 initial conditions of soil moisture (figure 3(a)), SST (figure 3(b)), and sea ice (figure 3(c)) used to initialize the four sets of numerical experiments with the GloSea5 forecast system contain not only signatures of the recent trends, but also have magnitudes that are substantially above the typical interannual variability. We note that the memory of the initial soil moisture, SST, and sea ice generally extends out to 1 month or more in the GloSea5 forecast system (supplementary figure 4) and directly or indirectly affects the atmosphere. Furthermore, it is important to note that while only SST and sea ice fraction are shown here, the other (e.g. subsurface) ocean and sea ice variables are also initialized to early July 2016 values (or to climatology, if specified) in our experiments.
[Figure 3(g)-(i) caption: interannual variation of the JA (red solid line) and 1-5 July (cross notation) mean values of (g) the agreement between the soil moisture long-term trend and the anomaly pattern in a specific year over the Eurasian continent (30-180°E, 40-80°N), (h) the globally averaged SST, and (i) the Arctic sea ice fraction anomalies, relative to their long-term climatology.]
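The leave-one-out pattern agreement described above can be sketched as follows: for each year, a per-gridpoint linear trend is fitted with that year excluded, and the spatial correlation between the trend map and that year's anomaly map is computed over the Eurasian domain. Array shapes and the least-squares trend fit are illustrative assumptions.

```python
import numpy as np

def trend_anomaly_correlations(field, years):
    """field: (n_years, n_grid) anomaly maps over 30-180E, 40-80N, flattened
    to one spatial axis; years: (n_years,) array of calendar years."""
    corrs = []
    for k in range(len(years)):
        keep = np.arange(len(years)) != k        # leave the target year out
        # Per-gridpoint least-squares linear trend (slope vs. year)
        slope = np.polyfit(years[keep], field[keep], 1)[0]
        anom = field[k]
        valid = np.isfinite(slope) & np.isfinite(anom)
        corrs.append(np.corrcoef(slope[valid], anom[valid])[0, 1])
    return np.asarray(corrs)                     # peaks in 2016 per the text
```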
The key results of the four sets of experiments are presented in figure 4, focusing now on the SAT and the eddy GPH at 300 hPa during the mature period of the heat wave (25 July-25 August). The observations show positive anomalies of SAT and GPH that are coincident over eastern Europe, East Asia, and the Kamchatka Peninsula (figures 4(a) and (b)): as we have already seen, these are essentially the same three regions that have also experienced significant warming as part of the recent trend (cf figures 1(a) and (d)). The in-phase relationship between SAT and upper-level GPH anomalies suggests a barotropic structure that acts to sustain the heat wave. In Exp4 (the control experiment), the SAT over Eurasia is simulated somewhat realistically, including the observed three main hot spots, although the wave-like structure is less clear due to a systematic warm bias of the forecast model (figure 4(c)), and the East Asia heat wave is particularly muted. Exp4 also reproduces the observed tri-pole GPH pattern over Eurasia realistically (figure 4(d)).
[Figure 4 caption: observed anomaly maps of (a) SAT (unit: °C) and (b) 300 hPa eddy height (unit: m) during the 2016 Eurasia heat wave active period (25 July-25 August); anomaly maps of Exp4 (c) SAT and (d) 300 hPa eddy height relative to the 20-year (1991-2010) GloSea5 hindcast; difference maps of these variables for (e, f) Exp1, (g, h) Exp2, and (i, j) Exp3 compared to Exp4. The dotted areas in (e)-(j) represent statistical significance at the 95% level from the Student's t-test.]
A comparison of the results of Exp1 and Exp4 indicates that the soil moisture initial conditions produce significant surface warming over eastern Europe but not over East Asia and the Kamchatka Peninsula (figure 4(e)). When the soil moisture anomaly during the heat wave active period (supplementary figure 5(a)) is compared to its initial condition (figure 3(d)), the two anomaly patterns resemble each other. This feature is also realistically simulated by Exp4 compared to Exp1, except where the soil moisture memory is relatively short (supplementary figure 5(b)). Corresponding to the SAT result, the model simulation captures the observed soil moisture dryness well over eastern Europe but not over East Asia, which is related to the simulation of the surface temperature. The soil moisture initialization does have a considerable impact on the GPH pattern over Eurasia, contributing to the development of a realistic tri-pole blocking structure (figure 4(f)). The large-scale atmospheric circulation anomalies in the upper atmosphere can be driven by the strong coupling between soil moisture and temperature over Eurasia (supplementary figure 6). The observations show a significant signal over East Asia from the pre-development stage until the maturing phase of the heat wave. Moreover, there is strong land-atmosphere interaction over northern Siberia at the pre-development stage and over eastern Europe during the heat wave active period. This feature is also realistically simulated by Exp4 compared to Exp1 throughout the forecast lead time of up to 2 months. While statistically significant, the amplitude of the GPH impact is, however, about one third of the observed. In contrast, comparing Exp2 and Exp4 indicates that realistic ocean initial conditions help the model reproduce the overall pattern of the continental-scale anomalous surface warming over Eurasia at a statistically significant level (figure 4(g)), although with weaker than observed amplitude. The observed (modeled in Exp4) amplitude of the temperature anomaly over eastern Europe and East Asia is 3.7°C (2.0°C) and 2.0°C (0.82°C), respectively, and the oceanic states induce temperature anomalies of 1.3°C and 0.36°C for these regions. The aforementioned initial SST anomalies (cf figure 3(e)) are sustained with a similar pattern during the heat wave period (supplementary figure 5(c)), and the model simulates this pattern realistically (supplementary figure 5(d)). The first and second leading patterns of SST variability (the first leading mode is related to global warming and the second is associated with central Pacific cooling; see figure 1 in Schubert et al (2009)), related to the anomalous SST pattern, show a JA-mean surface temperature warming response simulated by five AGCMs over eastern Europe and East Asia (supplementary figure 2(a)) and across the mid-latitude regions of Eurasia (supplementary figure 2(b)), respectively. Ocean-induced GPH anomalies are weaker than the observed anomalies and show less agreement with the observations compared with those induced by the soil moisture initialization (figure 4(h)). The impact of sea ice initialization is mostly weak and not statistically significant (figures 4(i) and (j)).
Thus, the results suggest that both the early July initial ocean states and initial soil moisture anomalies had some impact on the subsequent development of the 2016 boreal summer heat wave, characterized by much above normal temperatures over eastern Europe, East Asia, and the Kamchatka Peninsula, while there is little evidence that the anomalous sea ice played a significant role. Somewhat surprisingly, the soil moisture's impact was stronger on the GPH than on the SAT, while the reverse was true for the impact of the SST.
Further considerations (and the results from two supplemental experiments) enhance our interpretation of these results. It is critical to note first that the initial atmospheric conditions for all of the simulations performed include the Eurasian blocking tri-pole structure that appears to be a critical component of the heat wave's development. As a result, the experiments as designed effectively evaluate the ability of the evolving soil moisture, SST, and sea ice anomalies to maintain and/or amplify the already existing atmospheric pattern, a pattern that is itself reflected in, and supported by, the recent trends in the surface variables. Again, the comparison of Exp4 and Exp1 indicates that the initial soil moisture conditions do help maintain this wave pattern. Our supplemental experiments, however, suggest that soil moisture's effectiveness in this regard may depend on the background state. In the supplemental experiments (sExp1 and sExp4), which are repeats of Exp1 and Exp4 but using climatological SSTs and sea ice throughout all simulations (see table 1), the resulting inferred soil moisture impact (supplementary figure 7) is essentially absent; the difference pattern obtained from these experiments is generally statistically insignificant. If only one of the ocean or sea ice components is initialized with climatology, the observed atmospheric anomaly is also absent in the simulation (not shown). It would appear that without the background state induced by the concurrent SST and sea ice anomalies, the soil moisture anomalies do not help maintain the wave pattern present in the atmospheric initial conditions.
Conclusions
This study investigates the physical mechanisms underlying the 2016 Eurasia heat wave. The spatial pattern of the heat wave event is similar to the recent warming pattern as manifested in the long-term trends (covering the last 37 years) in July-August mean SAT and soil moisture anomalies over Eurasia; for both the 2016 event and the overall trends, the data show a continental-scale wave structure with strong signals in eastern Europe, East Asia, and the Kamchatka Peninsula. Record-high temperature anomalies over Europe and East Asia were seen in the summer of 2016.
The GloSea5 seasonal forecast system is used to evaluate the external factors contributing to the production of the 2016 Eurasian heat wave. The extreme event appears to be a phenomenon connected with the recent warming, given that the patterns of the soil moisture, ocean, and sea ice anomalies at the onset of the event resemble the patterns associated with the warming trend. This implies that heat wave occurrence will increase in the future if the warming trend persists. The 2016 initial soil moisture conditions are seen to contribute significantly to the observed upper-atmospheric circulation anomalies over Eurasia (cf figures 4(b) and (f)); however, this contribution is not realized in parallel experiments utilizing climatological SST and sea ice distributions, suggesting that this soil moisture impact relies on a background state influenced by realistic SST and sea ice conditions. As for the heat waves themselves, while soil moisture conditions contribute only to the eastern Europe heat wave (cf figures 4(a) and (e)), the highly anomalous (particularly warm) 2016 SST initial conditions contribute to warming across Eurasia (cf figures 4(a) and (g)). The sea ice initial conditions appear to have an insignificant impact.
The fact that, in the observations, the upper-level GPH anomalies are in phase with the SAT patterns suggests that the SAT anomalies are reinforced by internal atmospheric dynamical processes. The importance of simulating a proper GPH field for the prediction of heat waves underlines the importance of realistic soil moisture initialization in a forecast system. Based on the results above, we can speculate that the current operational configuration of the GloSea5 system, for example, failed to predict the 2016 Eurasian heat wave because it could not properly predict the upper-level GPH anomalies due to its use of a climatological soil moisture initialization. Overall, the present study contributes to our understanding of the 2016 heat wave in the context of recent warming trends.
| 2020-10-19T18:10:08.828Z | 2020-10-16T00:00:00.000 | {
"year": 2020,
"sha1": "040e6eff999ecff20b47efd47169070bcaa98fbb",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/1748-9326/abbbae",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "44ae2910568978d142738029b97139fd06ebc86f",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science",
"Physics"
]
} |
252210098 | pes2o/s2orc | v3-fos-license | Research review: Effects of parenting programs for children's conduct problems on children's emotional problems – a network meta‐analysis
Background Specific programs are often implemented for specific child mental health problems, while many children suffer from comorbid problems. Ideally, programs reduce a wider range of mental health problems. The present study tested whether parenting programs for children's conduct problems, and which individual and clusters of program elements, have additional effects on children's emotional problems. Methods We updated the search of a previous systematic review in 11 databases (e.g., PsycINFO and MEDLINE) and included studies published until July 2020 with keywords relating to ‘parenting’, ‘program’, and ‘child behavioral problems’. Also, we searched for recent trials in four trial registries and contacted protocol authors. Studies were eligible for inclusion if they used a randomized controlled trial to evaluate the effects of a parenting program for children aged 2–10 years which was based on social learning theory and included a measure of children's emotional problems postintervention. Results We identified 69 eligible trials (159 effect sizes; 6,240 families). Robust variance estimation showed that parenting programs had small significant parent‐reported additional effects on emotional problems immediately postintervention (Cohen's d = −0.14; 95% CI, −0.21, −0.07), but these effects faded over time. Teachers and children did not report significant effects. Additional effects on emotional problems were larger in samples with clinical baseline levels of such problems. No individual program elements predicted larger additional effects. Of the clusters of elements, combining behavior management and relationship enhancement elements was most likely to yield the strongest additional effects. Conclusions The additional effects on emotional problems of parenting programs designed to reduce conduct problems are limited, but some clusters of elements predict larger effects. Our findings may contribute to realistic expectations of the benefits of parenting programs for children's conduct problems and inform the development of programs with wider benefits across mental health problems.
Introduction
The adverse impact of child mental health problems on individuals and society is profound (Caspi & Moffitt, 2018). The past decades have yielded dozens of empirically supported interventions to effectively reduce child mental health problems, but most programs target a single type of mental health problem (e.g., conduct problems or emotional problems), while most children referred for mental health problems suffer from multiple problems. It has been suggested that programs developed for one target problem can have additional effects on other problems, and indeed some programs have shown such effects (Webster-Stratton & Herman, 2008), but systematic empirical evidence is lacking. Besides, the magnitude of additional effects may depend on specific program elements (i.e., specific content, skills, or techniques taught). If we can identify programs, and program elements, that have such additional effects, these can be used to reduce a wide range of child mental health problems (Mulder, Murray, & Rucklidge, 2017).
Comorbid conduct and emotional problems are the norm rather than the exception (Caspi & Moffitt, 2018; Pearce et al., 2018; Wichstrøm et al., 2012). Theoretically, there may be reasons to expect additional effects if an intervention targets one of these problems. If conduct problems spill over into emotional problems through compromised social and academic development (i.e., the Dual Failure Model [Patterson & Capaldi, 1990]), parenting programs that successfully reduce children's conduct problems may have additional effects on children's emotional problems. Similarly, a common psychopathology may underlie both conduct and emotional problems. For instance, evidence has indicated that emotion regulation may be a risk factor for both problems (Caspi & Moffitt, 2018; Lahey et al., 2015), and if parenting programs for children's conduct problems improve children's emotion regulation, they may have beneficial outcomes on both conduct problems and emotional problems.
There may be reasons to assume that parenting programs for children's conduct problems are also beneficial for children's emotional problems: shared mechanisms may include improved parent-child relationship quality (e.g., through enhanced positive involvement and child-led interactions, and reduction of harsh and critical parenting) and parental reinforcement of positive behavior (e.g., prosocial and courageous behavior) rather than negative behavior (e.g., aggression or avoidance) - elements common in most parenting programs for conduct problems (see Leijten et al., 2019) and associated with children's emotional problems (George, Herman, & Ostrander, 2006). This suggests that increasing parent-child relationship quality and parental behavior management skills in a parenting program may reduce children's emotional problems.
But there are also reasons to believe that benefits for children's emotional problems may be limited. For instance, effective cognitive-behavioral treatments for youth anxiety typically consist of cognitive restructuring of anxious thoughts and graduated exposure, with moderate effects immediately after the intervention (g = .68) and small effects at later follow-up (g = .18; Öst & Ollendick, 2017). More specifically, in a recent review, CBT-based parenting programs for reducing anxiety that were considered 'well-established' typically included the element of exposure, as well as elements addressing overprotection and family accommodation (i.e., parents' behaviors to help the child avoid feelings of distress and anxiety), psychoeducation about anxiety, and parent-child relationship quality (Comer, Hong, Poznanski, Silva, & Wilson, 2019). Except for the latter, these elements are not typically found in parenting programs for conduct problems.
Empirically, some trials have found that parenting programs for children's conduct problems also reduce children's emotional problems (Chase & Eyberg, 2008; Kjøbli & Ogden, 2012; Webster-Stratton & Herman, 2008), but findings from several other trials suggest no such effects. Both theoretically and empirically, the literature is thus inconclusive as to whether parenting programs developed to reduce conduct problems can be expected to have additional effects on children's emotional problems, highlighting the need for a systematic synthesis of the available evidence. This is important also because the effects on emotional problems may depend on the specific elements in parenting programs for children's conduct problems. For instance, in the pioneering work by Hanf (1969), parents learned first relationship enhancement skills and, second, behavior management skills. The premise underlying this model is that strengthening the parent-child relationship magnifies the effects of behavior management, and this two-step approach is now the cornerstone of many established parenting programs (see Kaehler, Jacobs, & Jones, 2016, for an overview). Moreover, many empirically supported programs are based on Patterson's social learning model of coercive parent-child interaction cycles (Patterson, 1982). This model posits that the key process maintaining and exacerbating children's conduct problems is parent-child interaction patterns in which parents and children unwittingly reinforce aversive behavior in each other, leading to interaction cycles that become increasingly difficult to manage. Based on these principles, most programs teach parents behavior management techniques, for example, the use of differential attention to break these cycles - consistent reinforcement of positive behavior and avoidance of reinforcing disruptive behavior (Kaehler et al., 2016).
Some programs add elements from other, complementary perspectives on how parenting can contribute to children's conduct problems, such as relational perspectives on the importance of the parent-child relationship, and perspectives focused on parents' emotion regulation and stress. For example, the Incredible Years Parenting Program includes child-led play elements to enhance parent-child relationship quality (Webster-Stratton & Herman, 2008). Similarly, some programs emphasize the importance of parents' emotional regulation and stress in successfully managing difficult child behavior. For example, a version of Parent Management Training Oregon addresses emotion regulation difficulties in parents (Gewirtz & Davis, 2014).
Including specific parenting program elements can predict stronger program effects (Kaminski, Valle, Filene, & Boyle, 2008; Leijten et al., 2019). We therefore tested whether the inclusion of specific elements in parenting programs, most notably relationship enhancement and parental self-regulation and stress elements, yields stronger effects on children's emotional problems. In addition, because program elements may not operate in isolation - the effects of some elements may depend on the inclusion of other elements - we also examined associations between clusters of elements. Elements were derived from a prior meta-analysis of parenting program elements (Leijten et al., 2019) based on their distinct theoretical approach (e.g., behavior management techniques derived from social learning theory principles and relationship-enhancement techniques derived from relational principles); clusters were formed based on their natural co-occurrence in established programs. Because elements to strengthen parent-child relationship quality are also often included in parenting programs for emotional problems (Comer et al., 2019), including these elements in parenting programs for children's conduct problems may result in stronger effects on children's emotional problems.
In the current meta-analysis, we aimed to: (1) examine whether parenting programs for reducing child conduct problems have additional effects on child emotional problems; (2) identify parenting program elements associated with stronger additional effects on child emotional problems; and (3) identify clusters of program elements associated with stronger additional effects on child emotional problems.
Protocol and registration
We published our study protocol and research questions on PROSPERO (#CRD42020145130) prior to finishing data extraction.
Information sources and search
We updated the searches of a previous systematic review (Leijten et al., 2019), searching 11 databases and trial registries and including work published up to July 2020, and included studies both from the initial search and the updated search. In structuring the search, we used the following four conceptual categories: intervention, parenting, child behavioral problems, and emotional problems. We also searched for recent trials that may not yet have been published through searches of four trial registries and by contacting the protocol authors. For the full search strategy, see the protocol of our systematic literature review on PROSPERO under #CRD42019141844.
Eligibility criteria
Studies were eligible for inclusion if (1) they evaluated the effects of a parenting program for children's conduct problems; (2) the parenting program was primarily based on social learning theory principles (to ensure sufficient homogeneity of programs); (3) they included a measure of children's emotional problems postintervention; (4) children were on average between 2 and 10 years old; and (5) they used a randomized controlled or cluster randomized trial. More detailed inclusion and exclusion criteria can be found in the study protocols.
Data items
Studies were coded on basic study characteristics (e.g., country and children's age), type (e.g., anxiety or depression) and level of emotional problems (0 = M and M + 1 SD are below the clinical threshold; 1 = M is below the clinical threshold, but M + 1 SD is above the clinical threshold; and 2 = M and M + 1 SD are above the clinical threshold), and individual and clusters of program elements in the studies' intervention and control conditions. We coded 25 individual elements (see Table S1) derived from an earlier meta-analysis (Leijten et al., 2019), and four clusters empirically derived from an earlier network meta-analysis (Leijten, Weisz, & Gardner, 2021): (1) behavior management (BM): programs with a main emphasis on teaching basic behavior management skills. Although these programs may also include additional content (e.g., teaching parents problem-solving skills or encouraging positive involvement), the majority of their content is on behavior management. (2) Behavior management and relationship enhancement (BM + RE): programs with a strong emphasis on both behavior management and relationship enhancement. BM + RE programs differ from BM programs in their emphasis on actually teaching parents specific skills (e.g., child-led interactions, active listening, sensitivity, and responsiveness to the child's needs) to enhance the parent-child relationship without focusing on improving child behavior. Importantly, general information and advice regarding positive interactions, as well as encouraging parents to be involved or to spend quality time with children, were not coded as teaching relationship-enhancing skills. Programs had to explicitly teach parents how to build a positive parent-child relationship while spending time together. (3) Behavior management, relationship enhancement, and multiple additional components designed to enhance parent and child skills, such as parental anger management and how parents can cultivate children's social skills (BM + RE + other): programs with an emphasis on additional skills, above and beyond behavior management and relationship enhancement. And (4) no/minimal components: control conditions where parents were offered no program or only minimal support (e.g., a website). Because we specifically focused on programs based on social learning theory principles, all active clusters included behavior management techniques. Studies were double-coded with 90% agreement.
Risk of bias
We assessed the risk of bias using the Cochrane Collaboration tool 1.0 on random sequence generation, allocation concealment, blinding of assessors, and blinding of providers and families (Higgins et al., 2011). We did not test for publication bias because a key assumption of standard tests of publication bias (e.g., funnel plots, Egger test, and trim-and-fill analysis) is the independence of effect sizes. It was key to our analysis strategy that we included all relevant effect sizes. Standard tests of publication bias were therefore not applicable.
Statistical analysis
We expressed effect sizes as Cohen's d using postintervention means and standard deviations. We included multiple effect sizes per study if studies measured multiple indicators of children's emotional problems (e.g., multiple informants or multiple time points). To estimate overall additional effects, we used random-effects robust variance estimation, including multiple effect sizes per study and weighting them using an approximate variance-covariance matrix (Tanner-Smith, Tipton, & Polanin, 2016). Because both the time lag between the intervention and the assessment and the type of informant can affect program outcomes, we stratified effect sizes by time lag (i.e., immediate postintervention, up to 6 months later, and ≥12 months later) and informant (i.e., parent, teacher, and child), estimating both conditional estimates (e.g., immediate postintervention with teacher informants) and marginal estimates (e.g., all immediate postintervention effect sizes; all teacher informant effect sizes). We then focused on parent-reported effects at postintervention to estimate associations between program elements and additional effects, as this was by far the largest body of evidence, and narrowing it down would reduce confounding by informant or time lag. We again used robust variance estimation meta-regression with random effects. All analyses assumed an intercorrelation parameter of 0.8 (cf. Tanner-Smith et al., 2016).
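As a minimal illustration of the effect-size computation — not the authors' analysis code; the summary statistics below are invented, and the naive fixed-effect pooling at the end stands in for the random-effects robust variance estimation actually used:

```python
import numpy as np

def cohens_d(m_tx, sd_tx, n_tx, m_ctl, sd_ctl, n_ctl):
    """Standardized mean difference from postintervention summary statistics,
    with its approximate large-sample sampling variance."""
    sd_pooled = np.sqrt(((n_tx - 1) * sd_tx**2 + (n_ctl - 1) * sd_ctl**2)
                        / (n_tx + n_ctl - 2))
    d = (m_tx - m_ctl) / sd_pooled
    var_d = (n_tx + n_ctl) / (n_tx * n_ctl) + d**2 / (2 * (n_tx + n_ctl))
    return d, var_d

# Two hypothetical studies: (mean, sd, n) for intervention and control arms,
# where lower scores mean fewer emotional problems.
d1, v1 = cohens_d(8.2, 4.1, 50, 9.5, 4.3, 48)
d2, v2 = cohens_d(6.9, 3.8, 61, 7.4, 3.9, 59)

ds, vs = np.array([d1, d2]), np.array([v1, v2])
w = 1.0 / vs                                  # inverse-variance weights
pooled = float((w * ds).sum() / w.sum())
se = float(np.sqrt(1.0 / w.sum()))
print(f"pooled d = {pooled:.2f}, "
      f"95% CI [{pooled - 1.96 * se:.2f}, {pooled + 1.96 * se:.2f}]")
```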
To identify which clusters of program elements were associated with the strongest additional effects, we used network meta-analysis in a multivariate meta-analysis framework. Network meta-analyses used a common heterogeneity parameter across contrasts. We used a bootstrapping-based method with 10,000 repetitions to generate probabilistic ranks for each cluster. Inconsistency analyses were not needed given the tree-shaped nature of the network of evidence. We did not prespecify any additional analyses (e.g., sensitivity or subgroup analyses).
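The bootstrapped ranking step can be sketched as follows. This toy version draws each cluster's effect independently from a normal distribution (the real analysis accounts for the correlation between contrasts in the multivariate model), with means and back-calculated standard errors taken from the Results section below:

```python
import numpy as np

rng = np.random.default_rng(0)

# Cluster effects vs. the no/minimal-components cluster: point estimate and a
# standard error back-calculated from the reported 95% CI width.
clusters = {
    "BM":              (-0.18, (0.30 - 0.06) / (2 * 1.96)),
    "BM + RE":         (-0.25, (0.41 - 0.09) / (2 * 1.96)),
    "BM + RE + other": (-0.12, (0.26 + 0.01) / (2 * 1.96)),
}

names = list(clusters)
draws = np.column_stack([rng.normal(mu, se, 10_000)
                         for mu, se in clusters.values()])
best = draws.argmin(axis=1)   # most negative d = strongest additional effect
for i, name in enumerate(names):
    print(f"P({name} ranks first) = {(best == i).mean():.2f}")
```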
Included studies
Our systematic search identified 13,055 unique hits, in addition to the 13,414 unique hits from the original systematic search. Sixty-nine studies met all inclusion criteria, and a further 117 studies met all inclusion criteria except for assessing children's emotional problems postintervention. In other words, of the overall evidence base of parenting programs for children's conduct problems, 37% of the studies included a measure of children's emotional problems. Together, these studies yielded 159 effect sizes based on data from 6,240 families. The PRISMA flow diagram is shown in Figure S1, the PRISMA Checklist is shown in Table S2, and the study characteristics are presented in Table S3 (see also Appendix S1 for a reference list of included studies).
Parenting program effects on children's emotional problems
There was limited evidence for additional parenting program effects (Figure 1). Although overall improvements in children's emotional problems were significant (Cohen's d = −0.16; 95% CI, −0.21 to −0.07), significant effects were reported only by parents immediately after the end of the program (Table 1). Effects faded over time, and teachers and children themselves did not report effects at any of the time points.
Parenting program elements associated with stronger effects

Of the 25 elements tested, none were significantly associated with stronger additional parenting program effects on children's emotional problems (Table 2); one element was significantly associated with weaker effects when included in programs: psychoeducation regarding typical child development (differential d = 0.16; 95% CI, 0.001 to 0.31).
Parenting program clusters yielding the strongest effects
The network meta-analysis with four clusters (BM, BM + RE, BM + RE + other, and no/minimal components) suggests that two active clusters (BM and BM + RE) are superior to the inactive cluster (no/minimal components), with effect sizes of d = −0.18 (95% CI, −0.30 to −0.06, p = .004) and d = −0.25 (95% CI, −0.41 to −0.09, p = .002), respectively. The third active cluster (BM + RE + other) was marginally superior (d = −0.12; 95% CI, −0.26 to 0.01, p = .069). Probability rankings suggest that of the three active clusters, combining behavior management and relationship enhancement was most likely to yield the strongest additional effects on emotional problems compared to the inactive cluster (Table 3).
Post hoc sensitivity analysis
Because baseline severity of children's conduct problems is known to predict parenting program effects (Leijten et al., 2019), we conducted a multivariate meta-analysis to stratify the program clusters by samples' mean baseline levels of children's emotional problems (0 = M + 1 SD fell in the nonclinical range; 1 = M fell in the nonclinical range but M + 1 SD fell in the subclinical/clinical range; and 2 = M fell in the subclinical/clinical range). As expected, additional effects on emotional problems were larger in samples with higher baseline levels of emotional problems (d across clusters = −0.03 to −0.10 for samples where M + 1 SD fell in the nonclinical range; d across clusters = −0.10 to −0.30 for samples where M + 1 SD fell in the subclinical/clinical range; and d across clusters = −0.12 to −0.32 for samples where M fell in the subclinical/clinical range). Importantly, however, baseline emotional problem severity did not interact with the clusters. In other words, findings from the network meta-analysis regarding the relative effects of different clusters of components could not be explained by differences in baseline problem severity.
Post hoc analysis
We tested whether parenting programs are more likely to yield stronger effects on children's emotional problems when they yield stronger effects on children's conduct problems. To this end, we estimated the correlation between effect sizes for emotional and conduct problems, assuming that individual participants' scores for emotional and conduct problems correlate .30 (Goodman, 2001). This resulted in a marginal correlation of 0.22, a weak positive association between the effect sizes on children's emotional and conduct problems.
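A common way to make this concrete — a standard construction for dependent effect sizes, not necessarily the authors' exact computation — is to approximate the sampling covariance between the two effect sizes from the same sample as

\[
\operatorname{Cov}(d_{\mathrm{emo}}, d_{\mathrm{con}}) \;\approx\; \rho \,\sqrt{\operatorname{Var}(d_{\mathrm{emo}})\,\operatorname{Var}(d_{\mathrm{con}})}, \qquad \rho = 0.30,
\]

after which the marginal correlation between the two sets of effect sizes can be estimated across studies.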
Risk of bias
The risk of bias assessment of the included studies was similar to previous evaluations of this field (Leijten et al., 2019). Explanations of random sequence generation and allocation concealment were often not reported. Participant blinding is difficult to achieve in this field because parents know they attend a program. We found little evidence of bias regarding blinding of assessors, addressing incomplete data, and drop-outs. That only 37% of the studies from the original systematic review reported program effects on children's emotional problems raises the question of whether the remaining studies did not include measures of children's emotional problems, or whether they did not report these outcomes because no program effects on these outcomes were found, suggesting selective reporting bias (Dwan et al., 2008). Study design and sample features of these 37% of studies were similar to those of the other studies (e.g., N average = 97 vs. 90; active control 23% vs. 18%; and established, branded intervention 63% vs. 61%), except that samples of studies including emotional problems outcomes might have had on average somewhat more severe conduct problems (69% vs. 58% indicated prevention/treatment settings, as opposed to universal/selective prevention). Incomplete outcome data and selective reporting were difficult to assess because most trials were not preregistered.
Discussion
This study tested whether parenting programs for reducing children's conduct problems have additional effects on children's emotional problems. We found that such effects were limited. There was a small significant parent-reported additional effect immediately after the end of the program, but this effect faded at later follow-up. Teachers and children did not report any additional effects. Importantly though, additional effects were larger in samples with clinical baseline levels of emotional problems. None of the individual program elements tested was associated with larger additional effects. Comparing three clusters of elements against their control conditions, we found that programs combining behavior management and relationship enhancement elements were most likely to yield the strongest additional effects.

That additional effects are limited might suggest that parenting programs for children's conduct problems do not completely reduce the risk factors that may underlie both conduct and emotional problems (e.g., emotion regulation skills). Instead, these programs may specifically reduce factors that maintain children's conduct problems (e.g., coercive parent-child interaction cycles). In line with the Dual Failure Model (Patterson & Capaldi, 1990), however, one might still expect a successful reduction in conduct problems to spill over to reduce children's emotional problems. However, we found little evidence that this was the case. To further explore this issue, future studies should examine whether there is a covariance between effects on conduct problems and emotional problems, as that was not possible with our data. However, combining our current finding that larger effects on emotional problems were found in samples where baseline levels of emotional problems were more severe, with previous findings that larger effects on conduct problems are found in samples where baseline levels of conduct problems are more severe, might suggest joint benefits on conduct and emotional problems for the most impaired children.

Our exploratory tests of which individual program elements predicted stronger additional effects did not suggest any robust patterns. When testing which clusters of program elements predicted stronger additional effects, our findings indicated that programs are most likely to yield additional effects when they teach parents both relationship enhancement and behavior management. This aligns with findings that this combination of elements also yields stronger effects on reducing severe conduct problems (Leijten, Melendez-Torres, et al., 2018; Leijten, Melendez-Torres & Gardner, 2022). A high-quality parent-child relationship and appropriate behavioral control thus seem to curtail the development of both conduct problems and emotional problems. This finding suggests that Hanf's (1969) premise that both relationship enhancement skills and behavior management skills are necessary for reducing conduct problems may also in part pertain to emotional problems. Practically, this may suggest that parenting programs aimed at ameliorating both conduct problems and emotional problems should include elements addressing both relationship enhancement skills and behavior management skills.
Another possible explanation for our findings is that programs that include this combination of elements might share other characteristics that drive this effect (e.g., more rigorous therapist training and higher program fidelity), something that meta-analyses of associations between program elements and program effects cannot rule out (Leijten et al., 2021).
As noted, additional effects were larger in samples where baseline levels of emotional problems were more severe. This may suggest that elements of parenting programs target common underlying processes for conduct problems and emotional problems. Given that the effect sizes ranged from −0.12 to −0.32 even for the samples with more severe baseline emotional problems, this provides only limited support for the use of these programs when a key treatment goal is to reduce children's emotional problems. That interventions are often developed for single mental health problems mismatches with the reality that most children suffer from multiple mental health problems (Marchette & Weisz, 2017). To overcome this mismatch, recent interventions that include elements for both conduct and emotional problems have been developed (Jeppesen et al., 2021; Weisz et al., 2012). One such program is the Modular Approach to Therapy for Children with Anxiety, Depression, Trauma and Conduct Problems (MATCH; Weisz et al., 2012). This program brings together 33 separate elements of evidence-based treatments for emotional and conduct problems. MATCH has been found to outperform both usual care and standard evidence-based programs for single mental health problems (Weisz et al., 2012). Although evidence is emerging that such wider interventions hold great potential, more systematic research is needed to investigate the precise benefits of such interventions. We hope that our findings regarding which combination of elements is most likely to yield effects on a wider range of mental health problems can help guide the development and use of such interventions.
Strengths and limitations
To our knowledge, this is the first systematic overview to examine the additional effects of psychosocial interventions on children's psychiatric problems. Strengths include the stratification of effects by assessment point (i.e., time after the end of the intervention) and informant, showing that additional effects are limited to immediate effects reported by parents, and by individual and clusters of program elements, showing that some program content may enhance additional benefits. However, the findings should be interpreted with the following limitations in mind: (1) We tested associations between program elements and program effects. With associations, we can never rule out the possibility that other program elements that are confounded with the clusters of elements predict larger effects. Also, a true test of what changes in parent-child dynamics cause symptom reduction in children requires mediation analysis, which was not possible with our data (i.e., mediation analysis in meta-analysis requires correlations between parenting and child measures for all included studies; Jak, 2015).
(2) Only 37% of trials tested intervention effects on children's emotional problems. It might be that authors are more likely to report outcomes regarding children's emotional problems if the intervention yielded favorable effects on these outcomes, possibly yielding too large additional effects, but we were unable to test the magnitude of this potential bias. However, the likelihood is that this bias would work in the direction of overestimating the effects of interventions on emotional problems, if anything, adding to confidence in the conclusions.
(3) On the other hand, as our study included samples with low baseline levels of emotional problems, effect sizes may have been underestimated, as suggested by the post hoc analysis showing larger effects when baseline levels of emotional problems were more severe. (4) Most studies were not designed to estimate additional effects; their primary outcome measure was children's conduct problems. Because of this, the outcome measures used in the primary studies may not have been optimal to detect subtle but meaningful changes in children's symptoms. Specifically, 82% of studies used the Strengths and Difficulties Questionnaire or Child Behavior Checklist, whose 3-point Likert scales may not pick up subtle changes in symptoms.
Conclusions
Evidence for the additional benefits for children's emotional problems of parenting programs for children's conduct problems is limited, but combining behavior management and relationship enhancement elements might be most likely to yield additional effects. These findings stress the need for identifying the conditions under which 'single-diagnostic programs' might yield additional effects and for a more thorough understanding of the merit of programs specifically designed to produce wider effects.
Supporting information
Additional supporting information may be found online in the Supporting Information section at the end of the article:
Table S1. Definition of program elements.
Table S2. PRISMA Checklist.
Table S3. Studies included in the meta-analysis.
Figure S1. PRISMA Flow Diagram.
Appendix S1. Reference list of included studies.
This meta-analysis of 69 trials suggests that parenting programs originally developed to target children's conduct problems have limited additional effects on emotional problems.
Some clusters of elements predict larger effects: programs combining behavior management and relationship enhancement elements were most likely to yield the strongest additional effects.
Additional effects of parenting programs for children's conduct problems are limited. This provides only limited support for the use of these programs when a key treatment goal is to reduce children's emotional problems.
"year": 2022,
"sha1": "fbd1be9df491f3cfbca958592a739f61445bc703",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jcpp.13697",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "a8243e2859740f6c1295f784f5c70e4dda8efa03",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Atypical presentations of eosinophilic fasciitis
Eosinophilic fasciitis is an uncommon connective tissue disease that may mimic and overlap with other sclerosing disorders such as morphea and lichen sclerosus. Herein, we report four patients (two men and two women, aged 16-64 years) with eosinophilic fasciitis. There was overlap with both morphea and lichen sclerosus in two patients and with morphea alone in one patient. Magnetic resonance imaging (MRI) was used for diagnosis in three patients and for assessing treatment response in one patient. Eosinophilic fasciitis may co-exist with morphea and lichen sclerosus. In view of the overlapping clinical and histopathological features of these disorders, MRI may be helpful in delineating the conditions by detecting involvement of the fascia.
INTRODUCTION
Eosinophilic fasciitis is a rare sclerosing disease with a wide clinical spectrum, varying from a limited disease with a benign course, where cure is possible, to a more generalized disease with organ involvement and poor response to treatment. [1][2] We present four cases of eosinophilic fasciitis with unusual features to highlight a possible overlap with morphea and lichen sclerosus, and also to highlight the importance of magnetic resonance imaging, which can serve as a diagnostic tool.
Case 1
A 64-year-old woman with generalized stiffness of the skin, weight loss, fatigue, and abdominal distention was referred to dermatology from the hematology clinic, where she had been diagnosed with hypereosinophilic syndrome and treated with systemic steroids for 13 months, followed by hydroxyurea, without any response.
Physical examination revealed diffuse sclerosis of skin more pronounced on lower legs, abdomen and forearms sparing the face, hands and feet. She also had ivory-colored sclerotic plaques on chest, forearms and proximal legs, some of them surrounded by a violaceous halo. Puckering of skin, peau d'orange appearance as well as furrowing were striking on the thighs [ Figure 1]. Raynaud's phenomenon was absent, and nailfold capillaroscopy was normal.
Relevant laboratory findings were antinuclear antibody positivity (1:80), peripheral blood eosinophilia (21% of white blood cells), high erythrocyte sedimentation rate and hypergammaglobulinemia [ Table 1]. Extractable nuclear antigens, rheumatoid factor, and Lyme antibodies were negative. Bone marrow biopsy had revealed marked eosinophilia. An extensive search for malignancy failed to show any underlying malignancy.
A punch biopsy obtained from the lesion clinically suggestive of morphea revealed dermal fibrosis whereas biopsy from an area with diffuse sclerosis of the lower leg revealed fascial fibrosis extending to dermis with a mixed inflammatory infiltrate containing eosinophils [ Table 1]. Magnetic resonance imaging of the lower extremity revealed thickening and increased signal intensity within the fascia and fascial enhancement after contrast administration. There was an edema-like signal within the muscle fibers adjacent to fascia and overlying subcutaneous tissue [ Figure 2a-c].
Accordingly, she was diagnosed with eosinophilic fasciitis with generalized morphea, and treatment with methylprednisolone 0.8 mg/kg/day was re-initiated. Due to the development of diabetes mellitus and lack of response, steroids were tapered, and treatment with hydroxychloroquine and psoralen plus ultraviolet A (PUVA) therapy was initiated. At the end of 60 PUVA sessions, the plaques of morphea had resolved and the skin stiffness had improved partially. Post-treatment magnetic resonance imaging revealed minimal improvement. Treatment was subsequently stopped, and she had no clinical deterioration over a 5-year period of follow-up. A recent work-up revealed no autoimmune disorder or malignancy.
Case 2
A 16-year-old otherwise healthy woman presented with a 2-month history of difficulty in opening her hands. She was practicing violin and had not exercised vigorously. Physical examination revealed stiffness of both arms and neck with prayer sign in both hands. Fingers were not affected and Raynaud's phenomenon was absent.
Laboratory tests revealed a high C-reactive protein and hypergammaglobulinemia. Her eosinophil count was 400/mm 3 . Antinuclear antibody was positive in low titer (1:100) but extractable nuclear antigens were negative [ Table 1].
Histopathological findings are shown in Table 1.
Magnetic resonance imaging of upper extremity revealed marked thickening and increased signal intensity within the fascia and prominent fascial enhancement after contrast administration. An edema-like signal was seen within the muscle fibers adjacent to the fascia. No signal abnormality was seen within the overlying subcutaneous tissue [ Figure 3a and b].
She was diagnosed as eosinophilic fasciitis and treated with methylprednisolone 0.8 mg/kg/day and methotrexate 7.5 mg/week. At the end of 15 months of treatment, skin findings resolved completely and she remained disease free over 12 months of follow-up.
Case 3
A 62-year-old man presented with a 6-week history of wood-like stiffness of skin on the extremities and trunk, myalgias and muscle weakness. His medications included bisoprolol, valsartan, hydrochlorothiazide, trimetazidine, isosorbide mononitrate, and acetylsalicylic acid which were being used for coronary artery disease and hypertension.
Physical examination revealed woody induration and edema of forearms and legs and purple-gray patches with occasional sclerotic, hypopigmented centers on the lateral aspects of trunk, shoulders, and inguinal region [ Figure 4]. Range of motion of the elbows and knees were limited. Hands and feet were spared. He had peripheral eosinophilia (20%, 2700/mm 3 ), elevated erythrocyte sedimentation rate of 48 mm/h, hypergammaglobulinemia, elevated serum creatinine and antinuclear antibody positivity (1:80). Extractable nuclear antigens profile was negative [ Table 1].
Histopathological examination of the purple-gray, sclerotic patches revealed dermal fibrosis [ Table 1]. Magnetic resonance imaging examination of lower extremity revealed prominent thickening and increased signal intensity within the fascia. An edema-like signal was seen within both the muscle fibers adjacent to the fascia and the overlying subcutaneous tissue [ Figure 5a and b]. Eosinophilia-myalgia syndrome was excluded through the absence of muscle cramps, pulmonary symptoms, skin rash, neurological symptoms as well as histopathological and magnetic resonance imaging findings.
Based on these findings, he was diagnosed as eosinophilic fasciitis and morphea-lichen sclerosus overlap. Treatment with methylprednisolone, 48 mg/day and hydroxychloroquine, 400 mg/day was commenced. After 2 weeks of treatment, stiffness of the skin improved slightly and peripheral blood eosinophilia returned to normal (100/mm 3 ). After 2 months, hydroxychloroquine treatment was stopped and methotrexate 5 mg weekly was added as adjuvant treatment. Methylprednisolone treatment was tapered and stopped at the end of 12 months and the patient is still on treatment with a moderate response.
Case 4
A 46-year-old man presented with an 8-year history of stiffness and swelling of the legs and forearms worsening over the last few months. He had been diagnosed as scleredema and eosinophilic fasciitis in another center. His prior treatments included psoralen and ultraviolet A (PUVA) phototherapy and systemic prednisolone which had led to remission of his complaints. Physical examination revealed stiffness, sclerosis of the forearms and legs sparing the digits and feet as well as sclerotic, centrally ivory-colored patches on the flexor aspects of forearms, legs and ill-defined purple-gray patches on the antero-lateral aspects of trunk [ Figure 6]. Laboratory tests including complete blood count, peripheral blood smear, basic biochemical tests, C-reactive protein, erythrocyte sedimentation rate, antinuclear antibody, and extractable nuclear antigens profile revealed no abnormalities [ Table 1].
Histopathological examination revealed fibrotic septal thickening in the subcutaneous fat and fibrosis in the fascia as well as a mixed infiltrate, rich in eosinophils [ Figure 7]. Based on the clinical and histopathological findings, a diagnosis of eosinophilic fasciitis, morphea and lichen sclerosus overlap was made. He was treated with methotrexate, 7.5 mg/week. After 6 weeks of treatment, stiffness and extent of lesions were improved. He has stable disease after 12 months of methotrexate treatment.
DISCUSSION
We report four cases of eosinophilic fasciitis, two having overlap with both morphea and lichen sclerosus and one with morphea. Magnetic resonance imaging has been used both as a diagnostic tool and in follow-up. Eosinophilic fasciitis is a rare autoimmune disease mimicking scleroderma. Its characteristic findings are sudden-onset erythema, edema in the early phase and symmetrical woody induration of distal extremities later. [1,2] It is triggered by strenuous exercise and trauma in at least 66% of cases which are hypothesized to induce the antigenicity of the fascia and subcutis. [3,4] Arthropod bites, borreliosis, Mycoplasma arginini infection and drugs such as simvastatin, atorvastatin, ramipril, and phenytoin are among other triggering factors. [5,6] Other than our second case having hobby-related overuse of hands, none of our patients had noted any triggering factors.
While eosinophilic fasciitis may be associated with several hematological and autoimmune diseases, none of our patients developed such disorders during the follow-up period ranging from 1 to 5 years. [7] Nearly, a third of patients with eosinophilic fasciitis are reported to have an association with morphea, either preceding or following the onset of the latter. [3,8,9] Coexistence of lichen sclerosus and localized scleroderma have also been reported. [10,11] Despite the possibility of two fibrosing disorders occurring in the same patient, we were unable to find any published reports in English that described coexistent eosinophilic fasciitis, morphea, and lichen sclerosus as was noted in our last two cases. Although not reported previously, the coexistence of these three disorders is not surprising as all are fibrosing in nature and the same stimulus may trigger these different disorders through similar inflammatory pathways.
Our second case is an adolescent, and eosinophilic fasciitis is extremely rare during childhood. [12] Pediatric eosinophilic fasciitis shows a female predominance, a higher incidence of hand involvement, lower incidence of an associated arthritis or hematological disorder and a less favorable course with higher risk of residual fibrosis. [13] Our patient was a female with hand and arm involvement with no evidence of hematological disease, consistent with the literature. However, her disease had a favorable course, completely responding to systemic steroids and methotrexate.
The differentiation of eosinophilic fasciitis from deep morphea or overlapping cases with morphea or scleroderma may be problematic. Overlapping features of eosinophilic fasciitis and deep morphea are involvement of subcutaneous tissue, fascia, muscles, eosinophilia and positive antinuclear antibody and association with systemic sclerosis. [14] The features favoring the diagnosis of eosinophilic fasciitis are venous furrowing, prayer sign, the absence of sclerodactyly and Raynaud's phenomenon, prominent eosinophilia, hypergammaglobulinemia and the absence of nailfold capillary changes. Although eosinophilic fasciitis, morphea and lichen sclerosus are considered different entities both clinically and histopathologically, in view of their common features, one cannot exclude the possibility of these being various manifestations of the same disease. [15,16] As the histopathological findings of these disorders may overlap to some extent, depending on histopathological examination alone for differential diagnosis may cause difficulties. Consequently, findings of clinical, histopathological and imaging studies should all be evaluated together for precise diagnosis.
The standard diagnostic test for eosinophilic fasciitis is histopathological examination, which shows thickening of the fascia with sclerosis and occasionally an inflammatory polymorphic infiltrate with varying numbers of eosinophils, which can spread to the muscle fibers. [17] Considering the drawbacks of histopathological examination and the validity of magnetic resonance imaging, histopathological examination for confirming the diagnosis may be unnecessary in pediatric cases and also in patients who decline a biopsy. Although magnetic resonance imaging is more expensive, it has several advantages in being non-invasive, rapid, sensitive, and devoid of surgical complications. Furthermore, it may provide dermatologists with an opportunity to assess improvement objectively. In eosinophilic fasciitis, magnetic resonance images show thickened deep fascia on T1-weighted sequences and relatively increased signal intensity, greater than that of muscle, on fat-suppressed or fat-saturated T2-weighted sequences. [18][19][20] Although magnetic resonance imaging findings cannot differentiate eosinophilic fasciitis from other causes of fasciitis, in cases with characteristic clinical, laboratory, and magnetic resonance imaging findings, all of the disorders in the differential diagnosis can be excluded. In addition, magnetic resonance imaging can also be used for evaluation of treatment response, as imaging findings are consistent with clinical improvement. [18][19][20] We used magnetic resonance imaging in three of our cases for diagnosis and in the first case for assessing therapeutic response.
Although high-dose corticosteroids, the gold standard of eosinophilic fasciitis treatment, are reported to be effective in up to 70% of cases, treatment should be individualized. Three of our patients showed a partial/complete response to corticosteroid treatment and all required adjuvant modalities including hydroxychloroquine, methotrexate, and psoralen and ultraviolet A (PUVA).
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
"year": 2016,
"sha1": "60ec66cdb45686c1d842dd9b071828b0ba7275e2",
"oa_license": null,
"oa_url": "https://doi.org/10.4103/0378-6323.171010",
"oa_status": "GOLD",
"pdf_src": "WoltersKluwer",
"pdf_hash": "694991e9b5226d22a144a95badee85e51089d50d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Employment and disability status in patients with functional (psychogenic nonepileptic) seizures
Abstract Purpose We investigated the rate of employment in patients with functional seizures (FS) in a follow-up study. We also investigated the rate of receiving disability benefits in these patients. Finally, we investigated factors that are potentially associated with their employment status. Methods In this long-term study, all patients with FS who were diagnosed at Shiraz Comprehensive Epilepsy Center, Iran, from 2008 to 2018 were investigated. In a phone interview with the patients in February 2020, we tried to obtain the following information: seizure outcome, employment status, receipt of disability benefits, and their current drug regimen, if any. The first call was made in the evening and after working hours. In case of no response, we made two more attempts in the following weeks to contact the patients during different time periods of the day. Results Eighty-four patients participated. Thirty-one patients (37%) were employed, and 53 (63%) were not; at the first visit, the rate of employment was 23%. Female sex (Odds Ratio [OR]: 12.18; 95% Confidence Interval [CI]: 3.51-42.18; p = .0001), taking psychiatric drugs (OR: 4.93; 95% CI: 1.17-20.73; p = .02), and being employed previously (OR: 0.19; 95% CI: 0.04-0.77; p = .02) were independently and significantly associated with the current employment status. Three patients (4%) reported receiving disability social benefits, two women and one man. Conclusion This study highlights that unemployment is a serious issue in patients with FS and that psychiatric comorbidities play a significant role in the employment status of these patients.
| INTRODUCTION
Functional seizures (FS) or psychogenic nonepileptic seizures (PNES) are characterized by paroxysmal and self-limited events that semiologically may resemble epileptic seizures, but occur without ictal epileptiform discharges; they are considered psychological problems. While there is no universally accepted terminology for this common condition, "functional seizures" meets more of the criteria proposed for an acceptable label than other popular terms in the field (e.g., PNES). Hence, we have adopted the "functional seizures" terminology in the current paper (Asadi-Pooya, Brigo, et al., 2020).
Many patients with FS experience loss of responsiveness with their seizures (Asadi-Pooya & Bahrami, 2019). Hence, it is plausible to assume that people with FS might be at increased risk of experiencing job-related difficulties. A few previous studies (all from Western and developed countries) have shown that patients with FS have low rates of employment and may be receiving some form of social financial support (Jennum et al., 2019; McKenzie et al., 2016; Walczak et al., 1995). In a study of 120 patients with FS from the UK, only 30% were employed 5-10 years postdiagnosis (McKenzie et al., 2016).
In the current endeavor, we investigated the rate of employment in patients with FS living in a developing nation from the Middle-East, in a follow-up study. We also investigated the rate of receiving disability benefits in these patients. We hypothesized that unemployment is common in these patients and some of these patients are receiving disability benefits. Also, we investigated factors that are potentially associated with the employment status in patients with FS living in a developing nation. This data may add to the literature and improve our understanding of the social status of patients with FS cross-culturally.
| MATERIALS AND METHODS
In this long-term follow-up study, we investigated all adult patients with FS admitted to the epilepsy monitoring unit at Shiraz Comprehensive Epilepsy Center from December 2008 through September 2018. All included patients were 20 years of age or older at the time of their first visit. Patients had a confirmed diagnosis of FS, determined by clinical assessment and video-EEG monitoring with ictal recording of their seizures. We excluded patients with comorbid epilepsy, abnormal electroencephalography (EEG), or insufficient data. We extracted all the relevant clinical and demographic data at the time of diagnosis from our database (age, sex, seizure semiology [e.g., aura, loss of responsiveness with functional seizures, generalized motor seizures], medical comorbidities [non-neurological and nonpsychiatric medical comorbidities such as diabetes, heart disease, etc.], family history of seizures, employment, marriage, taking antiseizure medications, and taking psychiatric medications). In a phone call interview to the patients in February 2020 (i.e., at least 18 months after their first visit), we tried to obtain the following information if patients agreed to participate and answer the questions (consecutively, we asked the following questions): seizure outcome (seizure-free during the past 12 months or not), employment status, receiving disability benefits, and their current drug regimen, if any.
The first call was made in the evening and after working hours (i.e., 6-9 p.m.). In case of no response, we made two more attempts in the following weeks to contact the patients during different time periods of the day (i.e., 11 a.m.-1 p.m. and 4-6 p.m.).
We first studied factors associated with their current employment status using Pearson chi-square and Fisher's exact tests.
Variables that were significant in univariate tests were assessed in a logistic regression model. Odds ratio and 95% confidence interval (CI) were calculated. p values < .05 were considered significant. The study design was reviewed and approved by Shiraz University of Medical Sciences Institutional Review Board and ethical committee.
Oral informed consent of the participants was obtained after the nature of the procedures had been fully explained.
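A sketch of the kind of logistic regression described above, using simulated stand-in data for the 84 participants (variable names and effect directions are illustrative only, not the study data); odds ratios and 95% CIs are obtained by exponentiating the coefficients:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 84
df = pd.DataFrame({
    "female":          rng.integers(0, 2, n),
    "psych_drugs":     rng.integers(0, 2, n),
    "employed_before": rng.integers(0, 2, n),
})
# Simulate the binary outcome from an arbitrary logistic model.
lin = -1.0 + 1.5 * df["female"] + 1.2 * df["psych_drugs"] \
      + 1.4 * df["employed_before"]
df["employed"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

X = sm.add_constant(df[["female", "psych_drugs", "employed_before"]])
res = sm.Logit(df["employed"], X).fit(disp=0)

report = pd.concat(
    [np.exp(res.params).rename("OR"),
     np.exp(res.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})],
    axis=1,
)
print(report.round(2))
```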
| RESULTS
During the study period, 198 patients with FS only had available phone numbers, met the other inclusion criteria, and were approached.
One hundred and eight people did not answer our call, three declined to participate, and three had died. Eighty-four patients participated in this study. They included 52 female and 32 male patients. The mean age of the participants (±standard deviation) at the time of diagnosis was 30 (±9) years (range: 20-53 years). In the follow-up call, 40 patients (48%) were seizure-free and 44 patients (52%) were still suffering from functional seizures. Thirty-one patients (37%) were employed, and 53 (63%) were not; at the first visit, the rate of employment had been 23% (19 out of 84 patients). Among the whole cohort of patients with FS in the follow-up, three patients (4%) reported receiving disability social benefits, two women and one man. These three people were unemployed at the time of their first visit, and none of them was seizure-free at follow-up.
| DISCUSSION
Functional seizures affect many aspects of a person's life. This condition may have substantial socioeconomic consequences for patients, their partners, and society (Jennum et al., 2019). In this study, we observed that the majority of patients with FS were unemployed and that this situation did not improve significantly with the passage of time (from 23% at the first visit to 37% at the last follow-up call). In a previous study of patients with FS from the UK, only 30% were employed 5-10 years postdiagnosis (McKenzie et al., 2016).

In the current study, female sex had a significant association with the employment status. This probably has more socio-cultural reasons than biological underpinnings. In Iran, while women are legally permitted to work in almost any job they want (as men are), men are culturally responsible for providing for the family. In addition, there exists social discrimination against women in finding a job (in contradiction with the above legal status). In other words, laws generally do not discriminate against women in the job market, but policies do! Labor force participation rates in the fiscal year 2016-2017 were 64% for men and 15% for women in Iran (https://financialtribune.com/articles/economy-business-and-markets/78671/iran-s-women-labor-force-participation-lowest-worldwide/. Accessed on March 26, 2020); in our study, the employment rates were 15% in women and 72% in men with functional seizures.

We also observed that the employment status at the time of the diagnosis had a significant association with the employment status at follow-up. This finding is pretty much expected.

Finally, we observed that only a small minority of patients with FS (4%) were receiving social support and disability benefits, despite the high rate of unemployment in this study. In a study by the International League Against Epilepsy (ILAE) PNES Task Force, no respondent from low-income countries stated that their patients could receive state disability benefits for FS, compared to 23% in middle-income and 50% in high-income countries (Hingray et al., 2018).
In a previous study, we observed that 75% of the physicians in our area believed that patients with FS who hold specific jobs or professions (e.g., pilots, bus drivers, and firefighters) should be qualified for disability benefits. Furthermore, a major risk factor for early disability pension in patients with epilepsy is psychiatric comorbidity (Specht et al., 2015), and more patients with FS suffer from other psychiatric comorbidities (e.g., depression and anxiety) than those with epilepsy (Walsh et al., 2018). This paradox is likely related to economic status, public politics, and social and cultural issues, particularly in low-income countries. Experts in the field and patient support organizations should negotiate with the authorities to improve the current laws and regulations on the issue of social support eligibility for patients with FS all around the world.
This study has some limitations. The sample size was not large.
Furthermore, the self-report nature of the questionnaire might have influenced the results. In addition, we did not include all the possible variables that might be related to the employment status (e.g., living status). However, this study highlights that unemployment is a serious issue in patients with FS and that psychiatric comorbidities play a significant role in the employment status of these patients.
ACKNOWLEDGMENTS
Shiraz University of Medical Sciences supported this study.
CONFLICT OF INTEREST
Ali A. Asadi-Pooya, M.D.: Honoraria from Cobel Daruo, RaymandRad and Tekaje; Royalty: Oxford University Press (Book publication).
Mehdi Bazrafshan: none. None of the authors listed on the manuscript are employed by a government agency. All are academicians.
None of the authors are submitting this manuscript as an official representative or on behalf of the government.
AUTHOR CONTRIBUTIONS
Ali A. Asadi-Pooya, M.D. involved in design and conceptualized the study, analyzed the data, drafted and revised the manuscript. Mehdi Bazrafshan involved in data collection and revised the manuscript.
PEER REVIEW
The peer review history for this article is available at https://publons.com/publon/10.1002/brb3.2016.
DATA AVAILABILITY STATEMENT
The data used in this study are confidential and will not be shared.
"year": 2020,
"sha1": "4cf2d2c15b03f14b0cb21e06388d06f4ecc40462",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/brb3.2016",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6b972743c1251887f6fedb83fdc933d918d12fd7",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Fear and concerns of women delivering during coronavirus pandemic
INTRODUCTION
The novel coronavirus, namely SARS-CoV-2, has emerged as a potentially life-threatening pathogen with spread almost all over the world. The World Health Organization has declared it a global public health emergency. COVID-19 pneumonia was first detected in Wuhan, China, in December 2019, and since then almost the whole world has been under its infectious attack. 1 A pregnant woman undergoes physiological changes in her cardiorespiratory system, making her part of a vulnerable group prone to developing respiratory infection. The previously identified strains of the coronavirus family, namely severe acute respiratory syndrome coronavirus (SARS-CoV) and Middle East respiratory syndrome coronavirus (MERS-CoV), are both known to cause severe complications during pregnancy, with the need for endotracheal intubation, admission to an intensive care unit (ICU), renal failure, and death. 2,3 With the emergence of this new coronavirus strain and limited data regarding its impact on pregnancy outcomes, there remains uncertainty among women delivering during this phase, in which no proven treatment exists.
Pregnancy is considered one of the most intense and emotional phases in a woman's life. The preparation to welcome the newborn child begins much before delivery. Things have changed drastically owing to the emergence of this highly contagious coronavirus strain. Worldwide, there is a lockdown situation with restricted mobility, resulting in social isolation for the majority. This has resulted in a sense of worry and anxiety among pregnant women. Stress and anxiety during pregnancy are known to be associated with hyperemesis gravidarum, preterm labour, low birth weight, and lower APGAR scores. 4,5 As a preventive measure to reduce infection, the Indian government announced a lockdown, initially for 21 days, starting from 24th March 2020. We hypothesised that this sudden lockdown might have resulted in a sense of uncertainty and fear among those delivering during this period. With this aim, we interviewed postnatal women who had delivered during this initial lockdown phase to identify their fears and concerns.
METHODS

We prospectively conducted a hospital-based qualitative study in the maternity ward of a tertiary centre in Uttarakhand, India, with in-depth interviews of 12 women who delivered during the initial lockdown period. Semi-structured interviews with open-ended questions, mainly focusing on their specific sources of fear and worry, were employed for data collection. Each interview lasted about half an hour and was aimed at letting the informant speak freely and without interruptions. Verbal informed consent was obtained from each participant prior to the interview, maintaining their privacy and confidentiality.
RESULTS

Awareness about disease
All the women were aware of the nature of this dreaded disease and were making the best use of the hand rubs and sanitizers provided to them by the hospital team to reduce the risk of getting infected.
As per our hospital infection prevention and control team's guidance, adequate spacing was kept between hospital beds, and each bed had a hand sanitizer kept on its rack. Each woman was also given a face mask. As a result, everyone was following the concept of social distancing with maintenance of personal hygiene.
Causes of their concern and anxiety

1) Fear and worry about their own health and that of their attendants: this started right after their entry into the hospital premises. Owing to the current pandemic situation, every patient attending the hospital went through a screening stage, answering a checklist enquiring about travel history, contact history, or febrile illness associated with cough and shortness of breath. This added extra stress and made them fearful.
2) Concern and worry that the hospital itself was a potential source of infection; as such, three women presented postdated, citing the reason that they had waited for spontaneous delivery at home.
3) Sharing the maternity room with unknown women (who might be silent carriers of the virus) further added to their stress.
4) Stress of being monitored for signs and symptoms of COVID-19.
5) Frustration and anger that their relatives would avoid them after discharge from the hospital, fearing that they themselves might contract the disease.
6) Not being able to meet their children back home was a major concern.
7) Worry that their child might be affected was a concern for all the mothers.
8) Prolonged neonatal admission of the baby was another reason for anxiety; restricting the number of attendants in the hospital further made them feel self-isolated.
9) All mothers were anxious to know whether breastfeeding was safe or not.
Challenges faced in hospital
None of the women reported any challenges in arranging food and medicines, as the hospital was well prepared and had adequate stocks.
Coping strategies in hospital
Virtual communication through mobile phones was a major support for all, serving as a means of contact with their loved ones. Reading books and religious material was helpful for some women.
DISCUSSION
Prior to addressing the fears and concerns, it is crucial to understand why they arise. Through focused listening sessions, we identified three key concerns of pregnant women: 1) fear of being exposed at the hospital and, in turn, taking the infection home, thereby affecting their near and dear ones; 2) the restricted number of hospital visitors made them feel confined and self-isolated, and acted as a cause of boredom and frustration; and 3) the risk of the baby being affected.
Any woman arriving in the labor ward must be segregated into risk categories based on hospital strategies so that the appropriate infection control precautions can be employed by the attending medical staff. Adequate patient counselling should be done at repeated intervals to minimize her fear of acquiring infection in the hospital setting. Owing to the highly contagious nature of this infection, appropriate arrangements should be made during the intrapartum period. She should be informed that at present there is no convincing evidence of vertical transmission and that breastfeeding is safe. In a retrospective analysis of COVID-19 in pregnancy, none of the women had detectable viral loads of SARS-CoV-2 in breastmilk. 6 A face mask should be worn while breastfeeding to reduce the risk of droplet transmission. COVID-19-related stress and concerns have previously been evaluated by Rashidi et al. 7 A potential limitation of this study is that it interviewed only women who delivered, thereby leaving out the other important subset of pregnant women who come for regular antenatal checkups. This subset might have additional fears and concerns that also need to be addressed for better maternal and fetal wellbeing. The strength of this study is that it focuses on an important issue that needs to be dealt with compassionately.
CONCLUSION
Understanding a pregnant woman's concerns and fears during this pandemic will enable health care workers to counsel her better. Considering the impact of this global public health issue, we believe that addressing the issues brought out in our study would help relieve women's fears and concerns in a significant way.
"year": 2020,
"sha1": "ed4fc8656d7192b8b7ed033dbb7bec0fddfa77f0",
"oa_license": null,
"oa_url": "https://www.ijrcog.org/index.php/ijrcog/article/download/9404/6186",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8128c6d83f92d6b84aa859770803d2b07b88ea5d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Dynamic Programming Principle for Backward Doubly Stochastic Recursive Optimal Control Problem and Sobolev Weak Solution of the Stochastic Hamilton-Jacobi-Bellman Equation
In this paper, we study a backward doubly stochastic recursive optimal control problem, in which the cost function is described by the solution of a backward doubly stochastic differential equation. We give the dynamic programming principle for this kind of optimal control problem and show that the value function is the unique Sobolev weak solution of the corresponding stochastic Hamilton-Jacobi-Bellman equation.
Introduction
Backward stochastic differential equations (BSDEs in short) were introduced by Pardoux and Peng [3]. Independently, Duffie and Epstein [2] introduced BSDEs from an economic background. In [2] they presented a stochastic differential recursive utility, which is an extension of the standard additive utility, with the instantaneous utility depending not only on the instantaneous consumption rate but also on the future utility. The recursive optimal control problem is thus a kind of optimal control problem whose cost functional is described by the solution of a BSDE. In [4] they gave the formulation of recursive utilities and their properties from the BSDE point of view. In 1992, Peng [6] obtained Bellman's dynamic programming principle for this kind of problem and proved that the value function is a viscosity solution of a kind of quasi-linear second-order partial differential equation (PDE in short), well known as the Hamilton-Jacobi-Bellman equation. Later, in 1997, he generalized these results to a much more general situation, under the Markovian and even non-Markovian framework. In this Chinese version, Peng used the backward semigroup property introduced by a BSDE under the Markovian and non-Markovian framework. He also proved that the value function is a viscosity solution of a generalized Hamilton-Jacobi-Bellman equation. In 2007, Wu and Yu [7] gave the dynamic programming principle for a kind of stochastic recursive optimal control problem with an obstacle constraint, where the cost functional is described by the solution of a reflected BSDE, and showed that the value function is the unique viscosity solution of the obstacle problem for the corresponding Hamilton-Jacobi-Bellman equation.
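Schematically — with \(\Phi\) a terminal cost and \(f\) a generator, both placeholders here rather than the notation fixed later in the paper — a recursive cost functional of this type is defined as the initial value of a BSDE driven by the controlled state process \(X\):

\[
Y_t = \Phi(X_T) + \int_t^T f(s, X_s, Y_s, Z_s)\,ds - \int_t^T Z_s\,dW_s, \qquad J(u(\cdot)) := Y_0.
\]

The classical additive cost corresponds to a generator \(f\) that does not depend on \((Y, Z)\).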
In 1994, Pardoux and Peng first studied backward doubly stochastic differential equations (BDSDEs in short). These equations contain stochastic integrals of two different directions, involving two independent standard Brownian motions: a standard (forward) Itô integral dW_t and a backward Itô integral dB_t. They proved an existence and uniqueness result for this equation and established the connection between BDSDEs and classical solutions of stochastic partial differential equations (SPDEs in short) under smoothness assumptions on the coefficients. Later, Bally and Matoussi [1] gave a probabilistic representation, in terms of BDSDEs, of the solutions of semilinear stochastic PDEs in Sobolev spaces. Shi and Gu [16] gave a comparison theorem for BDSDEs. Auguste and Modeste [10] then obtained the existence and uniqueness of solutions of reflected BDSDEs.
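For the reader's convenience, a BDSDE in the sense of Pardoux and Peng takes the form

\[
Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\,ds + \int_t^T g(s, Y_s, Z_s)\,d\overleftarrow{B}_s - \int_t^T Z_s\,dW_s, \qquad 0 \le t \le T,
\]

where the integral with respect to \(B\) is a backward Itô integral and the integral with respect to \(W\) is a standard forward Itô integral; the solution \((Y_t, Z_t)\) is required to be measurable with respect to \(\mathcal{F}_t = \mathcal{F}^W_t \vee \mathcal{F}^B_{t,T}\) for each \(t\).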
In our paper, we study a stochastic recursive optimal control problem where the control system is described by a classical stochastic differential equation, while the cost function is described by the solution of a backward doubly stochastic differential equation. This kind of recursive optimal control problem has practical meaning. For example, in an arbitrage-free incomplete financial market, there may exist so-called informal trading such as "insider trading". An individual who has access to insider information would have an unfair edge over other investors, who do not have the same access, and could potentially make larger 'unfair' profits than their fellow investors. This phenomenon can be described by a BDSDE in financial market models. More specifically, there are two kinds of investors with different levels of information about the future price evolution in a market influenced by an additional source of randomness. The ordinary trader only has the "public information"-market prices of the underlying assets contained in the filtration F^W_t. However, an insider has access to a larger filtration F^W_t ∨ F^B_{t,T}, which includes insider information. For instance, an insider knows the functional law of the price process, or he knows in advance that a significant change has occurred in the business policy or scope of a security issue, or he can estimate whether his portfolio is better than others. We would like to emphasize that BDSDE techniques provide powerful instruments for analyzing the problem of portfolio optimization of an insider trader. For an insider trader, his investment strategy still satisfies the property that locally optimal is equal to globally optimal.
The problem we are most interested in is whether the dynamic programming principle still holds for this recursive optimal control problem. The good news is that it can be established via the properties of the BDSDE. Compared with the Hamilton-Jacobi-Bellman (HJB in short) equations in the papers [6,7], the corresponding HJB equation we obtain is an SPDE in a Markovian framework. In the stochastic case where the diffusion is possibly degenerate, the HJB equation may in general have no classical solution. To overcome this difficulty, Crandall and Lions introduced the so-called viscosity solutions in the early 1980s. Research on viscosity solutions of HJB equations has yielded fruitful results. However, the viscosity solution of the HJB equation cannot give a reasonable probabilistic interpretation of the pair of solutions (Y, Z) of a BSDE, since no relationship is established between the Z part of the solution and the HJB equation. Here, we study a different kind of weak solution of HJB equations in a Sobolev space, in which the part Z is naturally contained in the weak formulation. Wei and Wu [11] proved that the value function is the unique Sobolev weak solution of the related HJB equation by virtue of the nonlinear Doob-Meyer decomposition theorem introduced in the study of BSDEs.
In this paper, we consider the Sobolev weak solution of the HJB equation connected with a BDSDE. Since no Doob-Meyer decomposition theorem is available for BDSDEs, a key point is how to establish equations analogous to Lemmas 4.1 and 4.2 in [11]. Inspired by [10], we bring an increasing process into the equation in order to push the cost functional upward with minimal force.
The paper is organized as follows. Preliminaries and assumptions are introduced in Section 2. In Section 3 we formulate a stochastic recursive optimal control problem where the cost function is described by the solution of a BDSDE. We show that the celebrated dynamic programming principle still holds for this kind of optimization problem. In Section 4 we prove that the value function of this problem is the unique weak solution in a Sobolev space of the corresponding stochastic Hamilton-Jacobi-Bellman equation.
Preliminaries and assumptions
In this section, we give some preliminary results of the BDSDE which are useful for the dynamic programming principle for the recursive optimal control problem.
Let (Ω, F, P) be a probability space, and T > 0 be an arbitrarily fixed constant throughout this paper. Let {W_t; 0 ≤ t ≤ T} and {B_t; 0 ≤ t ≤ T} be two mutually independent standard Brownian motions with values in R^d and R^l respectively, defined on (Ω, F, P). Let N denote the class of P-null sets of F. For each t ∈ [0, T], we define F_t := F^W_{0,t} ∨ F^B_{t,T}, where for any process {η_s}, F^η_{s,t} = σ{η_r − η_s; s ≤ r ≤ t} ∨ N. Note that the collection {F_t; t ∈ [0, T]} is neither increasing nor decreasing, so it does not constitute a filtration. Let us introduce some notations, and consider the following BDSDE (1), with terminal value ξ and coefficients f, g satisfying
(H1) for all (y, z) ∈ R × R^d, f(·, y, z) ∈ M_P(0, T; R^k); g(·, y, z) ∈ M_P(0, T; R^{k×l});
(H2) for some L > 0 and 0 < α < 1, for all y, y′ ∈ R, z, z′ ∈ R^d, a.s.,
|f(t, y, z) − f(t, y′, z′)|² ≤ L(|y − y′|² + |z − z′|²),
|g(t, y, z) − g(t, y′, z′)|² ≤ L|y − y′|² + α|z − z′|².
There exists C such that, for all (t, y, z), a linear growth bound holds. We notice that there are two independent Brownian motions W and B in (1), where the dW integral is a standard (forward) Itô integral and the dB integral is a backward Itô integral. The extra noise B in the equation can be thought of as some extra information that cannot be detected in the market in general, but is available to the particular investor. The problem then is to show how this investor can take advantage of such extra information to optimize the utility, but by taking actions that are completely "legal", in the sense that the investor has to choose the optimal strategy in the usual class of admissible portfolios.
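The display for equation (1) did not survive extraction. For the reader's convenience, the standard BDSDE of Pardoux and Peng, which the surrounding assumptions (H1)-(H2) and the discussion above describe, reads as follows; this is our reconstruction, not a verbatim quote of the paper:

```latex
Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\,ds
          + \int_t^T g(s, Y_s, Z_s)\,dB_s
          - \int_t^T Z_s\,dW_s, \qquad 0 \le t \le T. \tag{1}
```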
Then, from Theorem 1.1 in [5], there exists a unique solution. Now we give two more precise estimates of the solutions. They are very important and necessary for the dynamic programming principle of our optimal control problem, and play an important role in the continuity properties of the value function u(t, x) in t and x. The proof is complicated and technical; some techniques derive from [1].
Proposition 2.1 Let (Y, Z) be the solution of the above BDSDE; then for some p > 2, ξ ∈ L^p(Ω, F_T, P; R^k), the corresponding estimate holds.
Proposition 2.2 Let (ξ, f, g) and (ξ′, f′, g′) be two triplets satisfying the above assumptions. Suppose (Y, Z) is the solution of the BDSDE with data (ξ, f, g) and (Y′, Z′) is the solution of the BDSDE with data (ξ′, f′, g′). Then there exists a constant C such that the corresponding stability estimate holds.

3 Formulation of the problem and the Dynamic Programming Principle

In this section, we first formulate a backward doubly stochastic recursive optimal control problem, and then we prove that the dynamic programming principle still holds for this kind of optimization problem.
We introduce the admissible control set U, defined below. An element of U is called an admissible control. Here U is a compact subset of R^k; this restriction is often satisfied in practical applications.
For a given admissible control, we consider the following control system (4), where t ≥ 0 is regarded as the initial time and ζ ∈ L^p(Ω, F_t, P; R^n) as the initial state. The mappings satisfy the following conditions. Obviously, under the above assumptions, for any v(·) ∈ U, the control system (4) has a unique strong solution {X^{t,ζ;v}_s ∈ H^p(0, T; R^k), 0 ≤ t ≤ s ≤ T}, and we also have the following estimates,
where the constant C_p also depends on L,
where the constant C also depends on x and L.
Now, for any given admissible control v(·), we consider the following BDSDE, whose coefficients satisfy the following conditions: (H3.3) f and h are continuous in t.
Then there exists a unique solution (Y^{t,ζ;v}, Z^{t,ζ;v}) ∈ S^p(0, T; R^k) × H^p(0, T; R^{k×d}). Moreover, we get the following estimates for the solution from Propositions 2.1 and 2.2.
The proof is complicated and technical; we defer it to the Appendix. Given a control process v(·) ∈ U, we introduce the associated cost functional, and we define the value function of the stochastic optimal control problem. Now we continue our study of the control problem (12) and prove that the celebrated dynamic programming principle still holds for this optimization problem. Some proof ideas come from the proof of the dynamic programming principle for the recursive problem given by Peng in the Chinese-language work [6], and by Wu and Yu in [7]. Now we introduce the following subspace of U. Firstly we will prove that:
Proof. First we can prove one of the two inequalities. We then need to consider the inverse inequality. For any v(·), v̄(·) ∈ U, by Proposition 3.4 we know that there exists a subsequence, which, without loss of generality, we denote in the same way. By the arbitrariness of v(·) and the definition of the essential supremum, we get the claim, and then we obtain (3.11). Second, we want to prove the converse statement. In order to get the inverse inequality, we need the following Lemma: the relevant quantity is F^B_{t,T} measurable. Next we will discuss the continuity of the value function u(t, x) with respect to x and t. We have the following estimates:
Proof. We use the estimates above. On the other hand, for each ε > 0, there exist v(·), v′(·) ∈ U such that the corresponding inequalities hold. From the estimate (10) we can get the bound, and from the arbitrariness of ε we can obtain (ii). Similarly, we can obtain (i).
For the value function of our recursive optimal control problem, we have:
Proof. We first study a simple case: ζ is of the following form. From the definition of the cost functional, we deduce the desired identity. Therefore, for simple functions, we get the desired result.
Given a general ζ ∈ L^p(Ω, F_t, P; R^n), we can choose a sequence of simple functions {ζ_i} which converges to ζ in L^p(Ω, F_t, P; R^n). Consequently, passing to the limit and using Y^{t,ζ;v}_t = J(t, ζ; v(·)), the proof is completed.
For the value function of our recursive optimal control problem, we have Lemma 3.9: Fix t ∈ [0, T) and ζ ∈ L^p(Ω, F_t, P; R^n); for each v(·) ∈ U, the stated inequality holds. On the other hand, for each ε > 0, there exists an admissible control v(·) ∈ U such that the reverse inequality holds up to ε. Now we start to discuss the (generalized) dynamic programming principle for our recursive optimal control problem.
Firstly we introduce a family of (backward) semigroups, which originates from Peng's idea in [6].
Given the initial condition (t, x), an admissible control v(·) ∈ U, a positive number δ ≤ T − t and a real-valued random variable η ∈ L^p(Ω, F_{t+δ}, P; R), we denote the backward semigroup as below, where (Y_s, Z_s) is the solution of the following BDSDE with horizon t + δ. Then our (generalized) dynamic programming principle holds.
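Written out, Peng's backward semigroup takes the following form; the reconstruction below is ours, and in particular whether g depends on the control is our assumption:

```latex
G^{t,x;v}_{s,\,t+\delta}[\eta] := Y_s, \quad t \le s \le t+\delta, \quad\text{where}\quad
Y_s = \eta + \int_s^{t+\delta} f(r, X^{t,x;v}_r, Y_r, Z_r, v_r)\,dr
           + \int_s^{t+\delta} g(r, X^{t,x;v}_r, Y_r, Z_r)\,dB_r
           - \int_s^{t+\delta} Z_r\,dW_r .
```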
Theorem 3.11 Under the assumptions (H3.1)-(H3.4), the value function u(t, x) obeys the following dynamic programming principle: for each 0 < δ ≤ T − t, the stated identity holds.
Proof. From Lemma 3.10 and the comparison theorem for BDSDEs, one inequality follows. On the other hand, from Lemma 3.10, for every ε > 0, we can find an admissible control ṽ(·) ∈ U such that the ε-optimality inequality holds. For each v(·) ∈ U, we denote v̄(s) = I_{s≤t+δ} v(s) + I_{s>t+δ} ṽ(s). From the above inequality and the comparison theorem, we get the intermediate bound. By Proposition 2.2, there exists a positive constant C_0 such that the error term is controlled. Therefore, letting ε ↓ 0, we obtain the reverse inequality. Because v̄(·) acts only on [t, t + δ] in G^{t,x;v̄}_{t,t+δ}, from the definition of v̄(·) and the arbitrariness of v(·), we know that the above inequality can be written in the desired form, which is our conclusion.
4 Sobolev weak solutions for the HJB equations corresponding to the stochastic recursive control problem

In this section we consider the Sobolev weak solution of the SHJB equation related to the stochastic recursive optimal control problem.
We give some preliminary results on the BDSDE which are useful for the Sobolev weak solutions of the recursive optimal control problem. To facilitate understanding and narration, we divide the material into several parts.
Part I. Consider the control system defined by (4) satisfying the following condition:
(H4.1) The coefficient b is twice continuously differentiable in x and all its partial derivatives are uniformly bounded; σ is three times continuously differentiable in x and all its partial derivatives are uniformly bounded, with linear growth bounds on |b(t, ·)| and |σ(t, ·)|.
The cost functional is defined by the following BDSDE, satisfying the same conditions as those stated in Section 3. Obviously, under the above assumptions (H3.4), (H3.5), (H3.7) and (H4.1), for a given control v(·) ∈ U, there exists a unique solution (Y^{t,ζ;v}, Z^{t,ζ;v}) ∈ S²(0, T; R) × H²(0, T; R^d). We introduce the associated cost functional and define the value function of the stochastic optimal control problem. According to the conclusions of the previous section, the celebrated dynamic programming principle still holds for this recursive stochastic optimal control problem. We therefore deduce the following HJB equation (17), where L is a family of second-order linear partial differential operators.
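The display defining L was lost in extraction; under (H4.1) the standard second-order operator associated with the controlled diffusion (4) would be the following (our reconstruction, not a verbatim quote):

```latex
L^{v}_{t}\varphi(x) = \frac{1}{2}\sum_{i,j=1}^{n}\big(\sigma\sigma^{\top}\big)_{ij}(t,x,v)\,
\frac{\partial^2 \varphi}{\partial x_i\,\partial x_j}(x)
+ \sum_{i=1}^{n} b_i(t,x,v)\,\frac{\partial \varphi}{\partial x_i}(x).
```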
Part II
We define the weight function ρ to be continuous and positive on R^d, satisfying ∫_{R^d} ρ(x) dx = 1. Denote by L²(R^d, ρ(x)dx) the weighted L²-space with weight function ρ, endowed with the corresponding norm. We set D to be the space of u ∈ L²(R^d, ρ(x)dx) whose derivatives ∂u/∂x_i with respect to x exist in the weak sense. Note that D, equipped with its natural norm, is a Hilbert space, which is a classical Dirichlet space. Moreover, D is a subset of the Sobolev space H¹(R^d).
We set H accordingly. Following Kunita [14], we can define the composition of u ∈ L²(R^d) with the stochastic flow by (u ∘ x^{t,·,v}_s, ϕ) = (u, ϕ_t(s, ·)); indeed, this follows by a change of variable. In [1], V. Bally and A. Matoussi proved that ϕ_t(s, x) is a semimartingale and established the following Lemmas 4.1 and 4.2.
where L*_t is the adjoint operator of L_t. The next lemma, known as the norm equivalence result and proved in [1], plays an important role in the proof of the main result.
Lemma 4.2. Assume that (H1) holds. Then for any v ∈ U there exist two constants c > 0 and C > 0 such that for every t ≤ s ≤ T and ϕ ∈ L¹(R^d; ρ(x)dx) the norm equivalence holds. The constants c and C depend on T, on ρ and on the bounds of the derivatives of b and σ. The proof is similar to the proof of Proposition 5.1 in [1]; hence we omit it. Now we give the definition of a Sobolev solution of the SHJB equation (17).
Definition 4.1 We say that V is a weak solution of equation (17) if the stated integral identity holds and, for any small ε > 0, there exists a control v ∈ U such that the ε-optimality condition holds.
Let (Y, Z) be a solution of the BDSDE with parameters (ξ, f, g) and (Y′, Z′) a solution of the BDSDE with parameters (ξ′, f′, g′). Then the comparison result holds. The proof is similar to the proof in [16].
and for any small ε > 0, there exists v ∈ U such that the stated inequality holds.
Proof. According to the dynamic programming principle established above, we set the quantities in (27). Then it is not hard to finish the proof.
Proof. The proof is similar to the proof in [10]. Since Y^n_t ≥ Y⁰_t, we can replace V_t by V_t ∨ Y⁰_t, so assume that E sup_{t≤T} V²_t < ∞. We first want to compare a.s. Y_t and S_t for all t ∈ [0, T], while we do not yet know that Y is a.s. continuous. From the comparison theorem for BDSDEs, we have that a.s. Y^n_t ≥ Ỹ^n_t, 0 ≤ t ≤ T, n ∈ N, where (Ỹ^n_t, Z̃^n_t; 0 ≤ t ≤ T) is the unique solution of the corresponding BDSDE. Let ν be a stopping time such that 0 ≤ ν ≤ T. Then it is easily seen that the convergence holds a.s. and in L², and the conditional expectations also converge in L². Consequently, Ỹ^n_ν → ξ1_{ν=T} + S_ν 1_{ν<T} in mean square, and Y_ν ≥ V_ν a.s. From this and the section theorem in Dellacherie and Meyer [15], it follows that the inequality holds a.s.
Hence (Y^n_t − V_t)^− → 0, 0 ≤ t ≤ T, and from Dini's theorem the convergence is uniform in t.
by the dominated convergence theorem.
Before Lemma 4.6, we now introduce the BDSDE with an increasing process. The solution of the equation is a triple (Y, Z, K) of F_t-measurable processes taking values in (R, R^d, R^+) and satisfying
(H4.2) Z ∈ H².
(H4.3) Y ∈ S², and K_T ∈ L².
(H4.4) K_t is a continuous and increasing process, in the sense of Definition 4.1.
Proof.
Since the solution of the BDSDE is no longer a super-martingale, the method of proof of Lemma 4.1 in [11] fails in our situation. The idea of the proof comes from the properties of BDSDEs and limit theory, using the penalization method and the comparison theorem. For each n ∈ N, we denote by (Y^n, Z^n) the unique pair of F_t-measurable processes with values in R × R^d solving the penalized equation. First we prove that (Y, Z) is the limit of (Y^n, Z^n). We know f_n(t, y, z) ≤ f_{n+1}(t, y, z), so by the comparison theorem, and according to the result from [10], E sup is bounded. It follows from Fatou's lemma that E sup_{0≤t≤T} |Y_t|² ≤ c; then the claim follows by dominated convergence. Next, we wish to prove E ∫_0^T (Z_t − Z^n_t)² dt → 0 as n → ∞, applying Itô's formula to the process |Y^n_t − Y^p_t|².
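The penalized equation itself is not visible after extraction. The usual penalization scheme for an obstacle V_t, consistent with the monotonicity f_n ≤ f_{n+1} invoked above and with the increasing processes K^n used below, is (our reconstruction):

```latex
Y^n_t = \xi + \int_t^T f(s, Y^n_s, Z^n_s)\,ds + n\int_t^T (Y^n_s - V_s)^-\,ds
        + \int_t^T g(s, Y^n_s, Z^n_s)\,dB_s - \int_t^T Z^n_s\,dW_s,
\qquad K^n_t = n\int_0^t (Y^n_s - V_s)^-\,ds .
```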
According to Lemma 4.5, we prove the required convergence. Now we begin to prove that Y is continuous.
From the Burkholder-Davis-Gundy inequality, we get E sup_{0≤t≤T} |Y^n_t − Y^p_t|² → 0 as n, p → ∞, so Y^n converges uniformly in t to Y, a.s.;
hence Y is continuous.
In addition, we have noted that K^n_t is an increasing process with E(K^n_T)² ≤ C; it is then clear that K_T < ∞, a.s.
From the Lipschitz conditions and the Burkholder-Davis-Gundy inequality, we obtain the required convergence. It remains to check that ∫_0^T (Y_t − V_t) dK_t = 0; according to (36) and (38), this holds. Finally, we take the limit on both sides of equation (33), and we obtain equation (31). The proof of uniqueness is derived from the proof of Proposition 1.6 in [12].
If there exists another solution K^{t*,x,v}_r and Z^{t*,x,v} satisfying equation (33), then we apply Itô's formula to (y_t − y′_t)² ≡ 0 on [0, T] and take expectations. Then we can define a C⁴(t, T, x, v) satisfying the stated relations; it follows that one inequality holds and, on the other hand, for any small ε > 0, there exists a control v ∈ U such that the ε-optimality holds.
Proof. Existence: In the stochastic recursive optimal control problem, the value function V(t, x) defined by (16) satisfies Bellman's dynamic programming principle. By Lemma 4.6 and Lemma 4.7, we know that, for any v ∈ U, there is a unique increasing process A^{t,x,v}_s such that V(s, x^{t,x,v}_s) satisfies the following BDSDE. Finally, combining (67) and (73), we know that V(t, y) is also the value of sup_{v∈U} J(t, y, v); from the uniqueness of the solution of the cost functional and the uniqueness of the supremum, we get the uniqueness of the weak solution of PDE (17), i.e., V(t, x) = V′(t, x).
"year": 2020,
"sha1": "b28527258f18e31b348c5e12df5dcff8ddae7360",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "b28527258f18e31b348c5e12df5dcff8ddae7360",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Facilitating Numerical Solutions of Inhomogeneous Continuous Time Markov Chains Using Ergodicity Bounds Obtained with Logarithmic Norm Method
The problem considered is the computation of the (limiting) time-dependent performance characteristics of one-dimensional continuous-time Markov chains with discrete state space and time-varying intensities. Numerical solution techniques can benefit from methods providing ergodicity bounds because the latter can indicate how to choose the position and the length of the "distant time interval" (in the periodic case) on which the solution has to be computed. They can also be helpful whenever state space truncation is required. In this paper one such analytic method, the logarithmic norm method, is reviewed. Its applicability is shown within the queueing theory context with three examples: the classical time-varying M/M/2 queue; the time-varying single-server Markovian system with bulk arrivals, queue skipping policy and catastrophes; and the time-varying Markovian bulk-arrival and bulk-service system with state-dependent control. In each case it is shown whether and how the bounds on the rate of convergence can be obtained. Numerical examples are provided.
Introduction
The topic of this paper is the analysis of (one-dimensional) inhomogeneous continuous-time Markov chains (CTMC) with discrete state space. The inhomogeneity property implies that (some or all) transition intensities are non-random functions of time and may or may not depend on the state of the chain. For such mathematical models many operations research applications are known (see, for example, [1][2][3][4] and [Section 5] in [5]), but the motivation of this paper is queueing. Thus all the examples considered in this paper are devoted to time-varying queues. Substantial literature on the problem exists, in which various aspects (like existence of processes, numerical algorithms, asymptotics, approximations and others) are analyzed. An attempt to give a systematic classification of the available approaches (based on the papers published up to 2016) is made in [5]; an up-to-date point of view is given in [Sections 1 and 1.2] of [4] (see also [6]).
The specific question, being the topic of this paper, is the computation of the long-run (see, for example, [Introduction] of [7]) (limiting) time-dependent performance characteristics of a CTMC with time-varying intensities. This question can be considered from different points of view: computation time, accuracy, complexity, storage use etc. As a result, various solution techniques have been developed, but none of them is the ubiquitous tool. One of the ways to improve the efficiency of a solution technique is to supply it with a method for limiting regime detection (or, in other words, a method providing ergodicity bounds): once the limiting regime is reached, there is no need to continue the computation indefinitely. The main contribution of this paper is the review of one such method (see Section 2) and the presentation of its applicability in two new use-cases, not considered before in the literature (see Sections 4 and 5). It is worth noting that methods which provide ergodicity bounds can also be helpful whenever a truncation of the countable state space of the chain is required. The method presented in Section 2, whenever applicable, is helpful in this aspect as well (see also [8,9]).
The end of this section is devoted to a review (by no means exhaustive) of the popular solution techniques for the analysis of Markov chains in time-varying queueing models. Attention is drawn to the ability of a technique to yield limiting time-dependent performance characteristics of a Markov chain with time-varying intensities. For each technique mentioned (computer simulation methods and numerical transform inversion algorithms are not discussed here), it is highlighted whether any benefit can be gained when the technique is used along with a method providing ergodicity bounds.
In many applied settings the performance analysis is based on the procedure known as point-wise stationary approximation [10] and its ramifications. According to it, the time-dependent probability vector x(t) at time t is approximated by the steady-state probability vector y(t), obtained by solving y(t)H(t) = 0 and y(t)1 = 1, where H(t) is the time-dependent intensity matrix (throughout the paper the vectors denoted by bold letters are regarded as column vectors, e_k denotes the kth unit basis vector, and 1^T is the row vector of 1's, with T denoting the matrix transpose). In its initial version, the approximation breaks down if the instantaneous system load is allowed to exceed 1. In general its quality depends on the values of the transition rates, and for some models (like time-dependent birth-and-death processes) the approach is proved to be correct asymptotically in the limit (as transition intensities increase). Another fruitful set of techniques, which help one understand the performance of complex queueing systems, is the (conventional and many-server) heavy-traffic approximations (another approximation technique worth mentioning here, especially because of its applicability to non-Markov time-varying queues, is robust optimization; see [4], Section 2). Since scaling is important in heavy-traffic limits, the technique is usually more justified whenever the state space of a chain is, in some intuitive sense, close to continuous (see e.g., [11,12] and no doubt others), and less (or even not at all) justified if the state space is essentially discrete (for example, when formed by the number of customers in the system M_t/M_t/1/N (for fixed N) at time t). Due to the nature of both classes of techniques mentioned above, they do not benefit from methods providing ergodicity bounds.
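For a finite-state chain the point-wise stationary approximation amounts to one small linear solve per time point. The following sketch is our illustration, not code from any of the cited papers; the truncation level of the generator is an assumption, and the generator is that of the time-varying M/M/2 queue used later in Section 3:

```python
import numpy as np

def psa(Q):
    """Point-wise stationary approximation: solve y Q = 0, sum(y) = 1
    for a frozen generator Q (row sums zero), by replacing one balance
    equation with the normalisation condition."""
    n = Q.shape[0]
    A = np.vstack([Q.T[:-1], np.ones(n)])
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

def Q_mm2(t, lam, mu, n_states=30):
    """Frozen generator of a truncated M_t/M_t/2 queue at time t
    (the truncation level 30 is our choice for the illustration)."""
    Q = np.zeros((n_states, n_states))
    for k in range(n_states - 1):
        Q[k, k + 1] = lam(t)              # arrival k -> k+1
    for k in range(1, n_states):
        Q[k, k - 1] = min(k, 2) * mu(t)   # service k -> k-1
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

lam = lambda t: 9 * (1 + np.sin(2 * np.pi * t))
mu = lambda t: 8 * (1 + np.cos(2 * np.pi * t))
# PSA estimate of P(X(t) = 0) over one period
p0_psa = [psa(Q_mm2(t, lam, mu))[0] for t in np.linspace(0.0, 1.0, 11)]
```

Note that nothing in this computation detects how far the true x(t) is from y(t); that is exactly where ergodicity bounds become useful.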
A very popular set of techniques for calculating performance measures, which stands apart from the two mentioned above, comprises numerical methods for systems of ordinary differential equations (ODEs), the Kolmogorov forward equations (for an illustration the reader can refer to, for example, [13]). Due to increasing computer power such methods keep gaining popularity. By introducing approximations these methods can be made more efficient. For example, when only moments of the Markov chain are of interest one can use closure approximations (since the moment dynamics are (when available) close to the true dynamics of the original process, the benefits from the methods providing ergodicity bounds, when used alongside, are clear); see e.g., [14][15][16]. Another method for the computation of transient distributions of Markov chains is uniformization (see [17]). It is numerically stable and, as reported, usually outperforms known differential equation solvers (see [Section 6] in [18]).
The methods based on uniformization suffer from slow convergence of a Markov chain: whenever it is slow, computations involve a large number of matrix-vector products. An ODE technique yields the numerical values of performance measures, but it is complicated by a number of facts, among which we highlight only those related to the topic of this paper. Firstly, there can be infinitely many ODEs in the system of equations. Traditionally this is circumvented by truncating the system, i.e., making the number of equations finite. But there is no general "rule of thumb" for choosing the truncation threshold. Secondly, (time-dependent) limiting characteristics of a CTMC are usually considered to be identical to the solution of the system on some distant time interval (see, for example, [17][18][19][20][21][22][23]). This procedure yields limiting characteristics with any desired accuracy, whenever the CTMC is ergodic. Yet, in general, it is not suitable for Markov chains with countable (or finite but large) state space. Moreover it is not clear (convergence tests are usually required, which result in additional computations) how to choose the position and the length of the "distant time interval" on which the solution of the system must be found. Thus in practice, without an a priori understanding of when the limiting regime is reached, significant computational efforts are required to make sure that the obtained solution is the one required and, for example, that the steady state is not detected prematurely (see [24]). The authors in [20] propose a solution technique equipped with steady-state detection. As is shown, it allows significant computational savings and simultaneously ensures strict error bounding. Yet the technique is only applicable when the stationary solution of a Markov chain can be efficiently calculated in advance.
The approaches mentioned in the previous paragraph benefit straightforwardly from methods providing a priori determination of the point of convergence. Although generally this task is not feasible, certain techniques exist which provide ergodicity bounds for some classes of Markov chains. In the next section we review one such technique, developed by the authors, which is based on the logarithmic norm of linear operators and special transformations of the intensity matrix governing the behaviour of a CTMC. In Sections 3-5 it is applied to three use-cases. Section 6 concludes the paper.
In what follows, by ‖·‖ we denote the l1-norm, i.e., if x is an (l + 1)-dimensional column vector then ‖x‖ = Σ_{k=0}^{l} |x_k|. If x is a probability vector, then ‖x‖ = 1. The operator norm will be the one induced by the l1-norm on column vectors, i.e., ‖A‖ = sup_{0≤j≤l} Σ_{i=0}^{l} |a_ij| for a linear operator (matrix) A.
Logarithmic Norm Method
Ergodic properties of Markov chains have been the subject of many research papers (see e.g., [25,26]). Yet obtaining practically useful general ergodicity bounds is difficult and remains, to a large extent, an open problem. Below we describe one method, called the "logarithmic norm" method, which is applicable in situations when the discrete state space of the Markov chain cannot be replaced by a continuous one and the transition intensities are such that the chain is either null or weakly ergodic. The method is based on the notion of the logarithmic norm (see e.g., [27,28]) and utilizes the properties of linear systems of differential equations. Consider an ODE system

d/dt y(t) = H(t)y(t), t ≥ s ≥ 0, (2)

where the entries of the matrix H(t) = (h_ij(t))_{i,j=0}^{∞} are locally integrable on [0, ∞) and H(t) is bounded in the sense that ‖H(t)‖ is finite for any fixed t. Then the solutions of (2) satisfy d/dt ‖y(t)‖ ≤ −β(t)‖y(t)‖, where −β(t) is the logarithmic norm of H(t), i.e.,

−β(t) = lim_{h→+0} h^{−1}(‖I + hH(t)‖ − 1) = sup_j ( h_jj(t) + Σ_{i≠j} |h_ij(t)| ). (3)

Thus the following upper bound holds:

‖y(t)‖ ≤ e^{−∫_s^t β(u) du} ‖y(s)‖. (4)

If H(t) has non-negative non-diagonal elements (and arbitrary elements on the diagonal; such a matrix is sometimes called essentially nonnegative in the literature) and all of its column sums are identical, then there exists y(0) such that equality holds in (4).
The logarithmic norm method is put into application in four consecutive steps. Firstly, one has to determine whether the given Markov chain (further always denoted by X(t)) is null-ergodic or weakly ergodic (a Markov chain is called null-ergodic if all its state probabilities p_i(t) → 0 as t → ∞ for any initial condition; a Markov chain is called weakly ergodic if ‖p*(t) − p**(t)‖ → 0 as t → ∞ for any initial conditions p*(0), p**(0), where the vector p(t) contains the state probabilities). Secondly, one excludes one "border state" from the Kolmogorov forward equations and thus obtains a new system whose matrix, in general, may have negative off-diagonal terms. The third step is to perform (if possible) the similarity transformation (see (11) and (24)), i.e., to transform the new matrix in such a way that its off-diagonal terms are nonnegative and the column sums differ as little as possible. At the final, fourth step, one uses the logarithmic norm to estimate the convergence rate, as sketched in the code below. The key step is the third one. The transformation is made using a sequence of positive numbers (see the sequences {δ_n, n ≥ 0} below), which usually has to be guessed, does not have any probabilistic meaning and can be considered as an analogue of Lyapunov functions.
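The fourth step admits a direct numerical check for finite truncations. Below is a minimal sketch (ours, not from the paper) computing the l1 logarithmic norm via formula (3) as the largest "signed column sum", and the resulting bound (4):

```python
import numpy as np

def log_norm_l1(H):
    """l1 logarithmic norm of H: max_j (h_jj + sum_{i != j} |h_ij|);
    for an essentially nonnegative matrix this is the largest column sum."""
    cols = np.diag(H) + (np.abs(H).sum(axis=0) - np.abs(np.diag(H)))
    return cols.max()

def decay_bound(H_of_t, s, t, n_grid=1000):
    """Upper bound exp(-int_s^t beta(u) du) on ||y(t)|| / ||y(s)||,
    cf. (4); the integral is evaluated by the trapezoid rule."""
    u = np.linspace(s, t, n_grid)
    g = np.array([log_norm_l1(H_of_t(v)) for v in u])  # g = -beta
    integral = np.sum((g[:-1] + g[1:]) * np.diff(u)) / 2.0
    return np.exp(integral)
```

A negative value of `log_norm_l1` on the transformed matrix B**(t) is exactly what makes the bound (4) contractive.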
Time-Varying M/M/2 System
We start with the well-known time-varying M/M/2/∞ system with two servers and an infinite-capacity queue, in which customers arrive one by one with intensity λ(t). The service intensity of each server does not depend on the total number of customers in the queue and is equal to µ(t). The functions λ(t) and µ(t) are assumed to be nonrandom, nonnegative, continuous functions, locally integrable on [0, ∞). Let the integer-valued time-dependent random variable X(t) denote the total number of customers in the system at time t ≥ 0. Then X(t) is a CTMC with state space {0, 1, 2, . . . }. Its transposed time-dependent intensity matrix (generator) A(t) = (a_ij(t))_{i,j=0}^{∞} has the standard birth-death form, with birth intensities λ(t) and death intensities µ(t) in state 1 and 2µ(t) in states k ≥ 2. For all t ≥ 0 we represent the distribution of X(t) as a probability vector p(t), where p(t) = Σ_{k=0}^{∞} P(X(t) = k)e_k (as above, e_k denotes the kth unit basis vector). Given any proper initial condition p(0), the Kolmogorov forward equations for the distribution of X(t) can be written as

d/dt p(t) = A(t)p(t). (5)

Assume that X(t) is null ergodic. The condition on the intensities λ(t) and µ(t) which guarantees null ergodicity will be derived shortly below (clearly, if the intensities are constants, i.e., λ(t) = λ and µ(t) = µ, then the condition is simply λ > 2µ; if both are periodic and the smallest common multiple of the periods is T, then the condition involves the averages over one period). Fix a positive number d > 1 and define the sequence {δ_n, n ≥ 0} by δ_n = d^{−n}. This is a decreasing sequence of positive numbers. By multiplying (5) from the right with Λ = diag(δ_0, δ_1, . . . ), we get the transformed system (6). Denote by −α_k(t) the sum of all elements in the kth column of Ã(t). By direct inspection it can be checked that the upper bound (7) follows from (4) applied to (6). If d is chosen such that d > 1 and ∫_0^∞ (λ(t) − 2dµ(t)) dt = +∞, then from (7) it follows that p_k(t) → 0 as t → ∞ for each k ≥ 0, and thus X(t) is null ergodic. In such a case it is possible to extract more information from (7). Note that for any fixed n ≥ 0 a corresponding bound holds. Thus, if X(0) = N, i.e., p_N(0) = 1, then for any n ≥ 0 the following upper bound for the conditional probability P(X(t) ≤ n|X(0) = N), N ≥ 0, holds. Now assume that X(t) is weakly ergodic (the corresponding condition on the intensities λ(t) and µ(t) will be derived shortly below). Using the normalization condition p_0(t) = 1 − Σ_{i≥1} p_i(t), it can be checked that the system (5) can be rewritten as (9), where the matrix B(t) with elements b_ij(t) = a_ij(t) − a_i0(t) has no probabilistic meaning, and the vectors f(t) and z(t) are as displayed. Let z*(t) and z**(t) be two solutions of (9) corresponding to two different initial conditions z*(0) and z**(0). Then for the vector with arbitrary elements we have the system (10). The matrix B(t) in (10) may have negative off-diagonal elements. But it is straightforward to see that the similarity transformation TB(t)T^{−1} = B*(t), where T is the upper triangular matrix of the form (11), gives a matrix B*(t) whose off-diagonal elements are always nonnegative. Let u(t) denote the transformed difference; then, by multiplying both parts of (10) from the left by T, we get (12). Fix a positive number d > 1 and define the increasing sequence of positive numbers {δ_n, n ≥ 0} by δ_n = d^{n−1}. Let D = diag(δ_1, δ_2, . . . ). By putting w(t) = Du(t) in (12), we obtain the system of equations (13), where the matrix B**(t) = DB*(t)D^{−1} has nonnegative off-diagonal elements. Denote by −α_k(t) the sum of all elements in the kth column of B**(t).
Sometimes it is also possible to obtain bounds similar to (15) for other characteristics of X(t). For example, denote by E(t, k) the conditional mean number of customers in the system at time t, given that initially there were k customers in the system, i.e., E(t, k) = Σ_{n≥1} nP(X(t) = n|X(0) = k). Then, using [Equation (22)] of [29], it can be shown that a corresponding bound holds. The results obtained above for both the null ergodic and weakly ergodic cases can be put together in a single theorem.
Whenever the intensities λ(t) and µ(t) are constants or periodic functions, stronger results can be obtained.
Corollary 1.
If in Theorem 1 the intensities λ(t) and µ(t) are constants or 1-periodic (i.e., λ(t) and µ(t) are periodic functions and the length of their periods is equal to one), then X(t) is exponentially null (weakly) ergodic if d < 1 (d > 1) and there exist R > 0 and a > 0 such that the exponential bound holds. We now consider a numerical example. Let λ(t) = 9(1 + sin 2πt) and µ(t) = 8(1 + cos 2πt). It is straightforward to check from Theorem 1 that if d = 4/3 then X(t) is weakly ergodic. The ergodicity bounds then follow from (15) and (16). Figure 1 shows the graph of the probability p_0(t) as t increases. It can be seen that for any initial condition p(0) there exists one periodic function of t, say π_0(t) (i.e., π_0(t) = π_0(t + T), where T = 1 is the smallest common multiple of the periods of λ(t) and µ(t)), such that lim_{t→∞} (p_0(t) − π_0(t)) = 0. Figure 2 shows the detailed behaviour of π_0(t). Now consider (17). If t ≥ 37 then the right part of (17) does not exceed 10^{−3}, i.e., starting from the instant t = 37 = t* the system "forgets" its initial state, and the distribution of X(t) for t > t* can be regarded as limiting. The error (in l1-norm) which is thus made is not greater than 10^{−3}. Moreover, since the limiting distribution of X(t) is periodic, it is sufficient to solve numerically the system of ODEs only on the interval [0, t* + T]. The distribution of X(t) on the interval [t*, t* + T] is the limiting probability distribution of X(t) (with error not greater than 10^{−3} in l1-norm). Note that the system of ODEs contains an infinite number of equations; thus, in order to solve it numerically one has to truncate it. This truncation was performed according to the method in [30]. The upper bound on the rate of convergence of the conditional mean E(t, k) is given in (18). If t ≥ t* then the right part does not exceed 10^{−2}, i.e., starting from t = t* the system "forgets" its initial state, and the value of E(t, k) can be regarded as the limiting value of the conditional mean number of customers, with error not greater than 10^{−2}. The rate of convergence of E(t, k) and the behaviour of its limiting value are shown in Figures 3 and 4. Note that the obtained upper bounds are not tight: the system enters the periodic limiting regime before the instant t = t*.
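The numerical experiment above can be reproduced along the following lines; the truncation level N = 60 and the solver tolerances are our assumptions (the paper chooses the truncation via the method of [30]):

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 60  # truncation level; our choice for the illustration
lam = lambda t: 9 * (1 + np.sin(2 * np.pi * t))
mu = lambda t: 8 * (1 + np.cos(2 * np.pi * t))
death = np.minimum(np.arange(1, N), 2.0)  # service rate factors from states 1..N-1

def rhs(t, p):
    # forward Kolmogorov equations d/dt p = A(t) p for the truncated chain
    birth = lam(t) * p[:-1]         # probability flow k -> k+1
    dying = mu(t) * death * p[1:]   # probability flow k -> k-1
    dp = np.zeros_like(p)
    dp[:-1] += dying - birth
    dp[1:] += birth - dying
    return dp

p0 = np.zeros(N); p0[0] = 1.0           # start from the empty system
t_star = 37.0                           # instant given by the bound (17)
sol = solve_ivp(rhs, (0.0, t_star + 1.0), p0, rtol=1e-8, atol=1e-10,
                dense_output=True)
ts = np.linspace(t_star, t_star + 1.0, 101)
pi0 = sol.sol(ts)[0]                    # limiting (periodic) P(X(t) = 0)
```

The slice of the solution on [t*, t* + 1] then approximates the limiting periodic probability π_0(t) with the error controlled by (17).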
Time-Varying Single-Server Markovian System with Bulk Arrivals, Queue Skipping Policy and Catastrophes
Consider the time-varying M/M/1 system with intensities that are periodic functions of time and with the queue skipping policy as in [31] (see also [32]). Customers arrive at the system in batches according to an inhomogeneous Poisson process with intensity λ(t). The size of an arriving batch becomes known upon its arrival to the system and is a random variable with a given probability distribution {b_n, n ≥ 1}, having finite mean b̄ = Σ_{k=1}^{∞} B_k, where B_k = Σ_{n=k}^{∞} b_n. The implemented queue skipping policy implies that whenever a batch arrives at the system, its size, say B̂, is compared with the remaining total number of customers in the system, say B. If B̂ > B, then all customers that are currently in the system are instantly removed from it, the whole batch B̂ is placed in the queue, and one customer from it enters the server. If B̂ ≤ B, the new batch leaves the system without having any effect on it. Whenever the server becomes free, the first customer from the queue (if there is any) enters the server and is served according to the exponential distribution with intensity µ(t). Finally, an additional inhomogeneous Poisson flow of negative customers with intensity γ(t) arrives at the system. Each negative arrival results in the removal of all customers present in the system at the time of arrival; the negative customer itself leaves the system. Since γ(t) depends on t, it can happen that the effect of negative arrivals fades away too fast as t → ∞ (for example, if γ(t) = (1 + t)^{−n}, n > 1). Such cases are excluded from consideration.
Let X(t) be the total number of customers in the system at time t. From the system description it follows that X(t) is a CTMC with state space {0, 1, 2, . . . , b*}, where b* is the maximum possible batch size, i.e., b* = max{n ≥ 1 : b_n > 0}. Thus, if the batch size distribution has infinite support then the state space is countable; otherwise it is finite.
It is straightforward to see that the transposed time-dependent generator A(t) = (a_ij(t))_{i,j=0}^{∞} of X(t) has the displayed form. We represent the distribution of X(t) as a probability vector p(t), where p(t) = Σ_{k=0}^{b*} P(X(t) = k)e_k for all t ≥ 0. Given a proper p(0), the probabilistic dynamics of X(t) is described by the Kolmogorov forward equations d/dt p(t) = A(t)p(t), which can be rewritten in the form (19), where g(t) = (γ(t), 0, 0, . . . )^T and A*(t) is the matrix with entries a*_ij(t) as displayed. Due to the restrictions imposed on γ(t), we have that ∫_0^∞ γ(t) dt = ∞. Thus X(t) cannot be null ergodic, irrespective of the values of λ(t) and µ(t).
Theorem 2. Assume that the catastrophe intensity γ(t) is such that ∫_0^∞ γ(t) dt = +∞. Then the Markov chain X(t) is weakly ergodic, and for any two initial conditions p*(0) and p**(0) the bound (21) holds.
Proof. It is straightforward to check that the logarithmic norm (see (3)) of the operator A*(t) is equal to −γ(t). Denote now by U*(t, s) the Cauchy operator of Equation (19). Then the statement of the theorem follows from the inequality ‖U*(t, s)‖ ≤ e^{−∫_s^t γ(u) du}.
Even though (21) is a valid ergodicity bound for X(t), it is of little help whenever the state space of X(t) is countable and one needs to perform the numerical solution of (5). This is due to the fact that the bound (21) is in the uniform operator topology, which does not allow one to use the analytic frameworks (for example, [29]) for finding proper truncations of an infinite ODE system. For the latter task, ergodicity bounds for X(t) in stronger (than l1), weighted norms are required. It can be said that with such bounds we have a weight assigned to each initial state, and thus a truncation procedure becomes sensitive to the number of states. Below (in Theorem 3) we obtain such a bound under the additional assumption (for the definition used see [33]; an appropriate test for monotone functions can be found in [Proposition 1] of [34]; although Theorem 2 above holds for any distribution {b_n, n ≥ 1}, this assumption is essential for Theorem 3, since for distributions with tails heavier than the geometric distribution we were unable to find conditions which guarantee the existence of the limiting regime of the queue-size process even for periodic intensities) that the batch size distribution {b_n, n ≥ 1} is harmonic new better than used in expectation, i.e., Σ_{j=k}^{∞} B_{j+1} ≤ b̄(1 − b̄^{−1})^k for all k ≥ 0. Using the normalization condition p_0(t) = 1 − Σ_{i≥1} p_i(t), the forward Kolmogorov system d/dt p(t) = A(t)p(t) can be rewritten accordingly. Fix d ∈ (1, 1 + (b̄ − 1)^{−1}] and define the increasing sequence of positive numbers {δ_n, n ≥ 0} by δ_n = d^{n−1}. Then, instead of the matrix B**(t) in (13), we have the matrix Ã(t) = (ã_ij(t))_{i,j=0}^{∞} with the following structure. Since the logarithmic norm (see (3)) of Ã(t) is bounded by the corresponding column sums, from (4) we get the required estimate. Arguments similar to those used to establish Theorem 1 lead to the ergodicity bounds (26) for ‖p*(t) − p**(t)‖ and (27) for the conditional mean E(t, k). These results can be put together in a single theorem.
Theorem 3. Assume that the distribution {b_n, n ≥ 1} with finite mean b̄ is harmonic new better than used in expectation. Then, under the corresponding condition on the intensities, the Markov chain X(t) is weakly ergodic and the ergodicity bound (26) holds.
We close this section with an example showing the dependence on t of the same two quantities, p_0(t) and E(t, k), considered in Section 3. Assume here that b_k = (1/3)(2/3)^{k−1}, λ(t) = 9(1 + sin 2πt), µ(t) = 8(1 + cos 2πt) and γ(t) = 1, i.e., the catastrophe intensity is constant and the mean size b̄ of an arriving batch is equal to 3. It can be checked that d = 3/2 satisfies the conditions of Theorem 3. Then from (26) and (27) we get the upper bounds (28) and (29). In Figure 5 it is depicted how p_0(t) behaves as t increases, and Figure 6 shows its limiting value. If t ≥ 60 then the right part of (28) does not exceed 3 · 10^{−2}, i.e., starting from the instant t = 60 = t* the system "forgets" its initial state, and the distribution of X(t) for t > t* can be regarded as limiting. Moreover, since the limiting distribution of X(t) is periodic, it is sufficient to solve numerically (it must be noticed that, since b_k > 0 for all k, the system of ODEs contains an infinite number of equations; thus, in order to solve it numerically one has to truncate it, and we perform this truncation according to the method in [30]) the system of ODEs only on the interval [0, t* + T], where T is the smallest common multiple of the periods of λ(t) and µ(t), i.e., T = 1. The probability distribution of X(t) on the interval [t*, t* + T] is the estimate (with error not greater than 3 · 10^{−2} in l1-norm) of the limiting probability distribution of X(t). The upper bound on the rate of convergence of the conditional mean number of customers in the system E(t, k) is given in (29). If t ≥ t* then the right part does not exceed 0.3, i.e., starting from the instant t = t* the system "forgets" its initial state, and the value of E(t, k) can be regarded as the limiting value of the mean number of customers, with error not greater than 0.3. The rate of convergence of E(t, k) and the behaviour of its limiting value can be seen in Figures 7 and 8. As in the previous numerical example, the obtained upper bounds are not tight: the system enters the periodic limiting regime before the instant t = t*.
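For comparison, the crude uniform bound of Theorem 2 can also be used to pick an instant after which the distribution is treated as limiting. The sketch below reflects our reading of that bound (the factor 2 comes from ‖p*(0) − p**(0)‖ ≤ 2); note that it is not the weighted bound (26) behind the paper's t* = 60, and for constant γ it gives a much earlier instant:

```python
import numpy as np
from scipy.integrate import quad

gamma = lambda t: 1.0  # catastrophe intensity from the example

def weak_ergodicity_bound(t):
    # ||p*(t) - p**(t)|| <= 2 * exp(-int_0^t gamma(u) du), cf. Theorem 2
    integral, _ = quad(gamma, 0.0, t)
    return 2.0 * np.exp(-integral)

eps = 3e-2          # target accuracy in l1-norm
t = 0.0
while weak_ergodicity_bound(t) > eps:
    t += 0.1
print(t)            # about 4.2 with gamma = 1
```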
Time-Varying Markovian Bulk-Arrival and Bulk-Service System with State-Dependent Control
In the recent paper [35] the authors considered the Markovian bulk-arrival and bulk-service system with general state-dependent control (see also [35][36][37][38][39]). The total number X(t) of customers at time t in that system constitutes a CTMC with state space {0, 1, 2, . . . }. Its generator Q(t) = (q_ij(t))_{i,j=0}^{∞} has quite a specific structure, where k ≥ 1 is a fixed integer. For further explanations and the motivation behind such a structure of Q(t) we refer the reader to [Section 1] in [35]. The purpose of this section is to show that for at least one particular case of this system, even when the intensities are time-dependent, one can obtain upper bounds for the rate of convergence using the method based on the logarithmic norm. Specifically, we take the example from Section 7 of [35] (in which the entries of the intensity matrix Q(t) are specified explicitly), with the exception that all the transition intensities are time-dependent, i.e., b_i = λ(t) and a_i = µ(t), both nonnegative and locally integrable on [0, ∞). Then the transposed generator A(t) = (a_ij(t))_{i,j=0}^{∞} = Q^T(t) of X(t) has the displayed form. Denote the distribution of X(t) by p(t), i.e., p(t) = (p_0(t), p_1(t), . . . )^T = Σ_{k=0}^{∞} P(X(t) = k)e_k (as above, e_k denotes the kth unit basis vector). The ergodicity bound for X(t) in the null ergodic case is given below in Theorem 4.
The ergodicity bound in the weakly ergodic case, stated below in Theorem 5, is obtained by analogy with Theorem 1. Define an increasing sequence of positive numbers {δ_n, n ≥ 0}. Then the matrix B**(t), built from the matrix A(t) in the same way as in Section 3, has the displayed form. Denote by −α_k(t) the sum of all elements in the kth column of B**(t). Since the logarithmic norm of B**(t) is equal to −β(t) = −min(min_{1≤k≤3} α_k(t), inf_{k≥4} α_k(t)), we can apply (4) to (13) and (15) with δ_{k+1} = σδ_k, k ≥ 5.
Conclusions
As can be seen from the last three sections, in order to obtain the ergodicity bounds the values of λ(t) and µ(t) for each t may not be needed. Instead, it may be sufficient to know only the time-average intensities λ̄ = lim_{t→∞} (1/t) ∫_0^t λ(u) du and µ̄ = lim_{t→∞} (1/t) ∫_0^t µ(u) du. For periodic intensities with smallest common multiple of the periods T, the values λ̄ and µ̄ are exactly the average arrival and service intensities over one period.
The classes of CTMC to which the logarithmic norm method is applicable and gives meaningful results are not limited to those considered in this paper (necessary and sufficient conditions for a CTMC "to fit" the logarithmic norm method are not known). For example, the same reasoning which has led to Theorem 1 can be used to obtain upper bounds for the rate of convergence of the M_t/M_t/S/∞ system with any (finite) number of servers. Moreover, whenever X(t) is weakly ergodic, the analysis can be carried on beyond what is stated in Theorem 1. For example, one can obtain perturbation bounds (see e.g., [40]) and study different state space truncation options: one-sided or two-sided (see e.g., [29,41,42]).
Author Contributions: Investigation, A.Z., R.R., Y.S., I.K. and V.K. All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.
"year": 2020,
"sha1": "e8509943a29818332bd5e59a8139ca0c219709db",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-7390/9/1/42/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "0b25749657fea9eef1a4607f87339ab571ea6377",
"s2fieldsofstudy": [
"Mathematics",
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Global, regional, and national burden of female cancers in women of child-bearing age, 1990–2021: analysis of data from the global burden of disease study 2021
Summary Background The global status of women's health is underestimated, particularly the burden on women of child-bearing age (WCBA). We aim to investigate the pattern and trend of female cancers among WCBA from 1990 to 2021. Methods We retrieved data from the Global Burden of Disease Study (GBD) 2021 on the incidence and disability-adjusted life-years (DALYs) of four major female cancers (breast, cervical, uterine, and ovarian cancer) among WCBA (15-49 years) in 204 countries and territories from 1990 to 2021. Estimated annual percentage changes (EAPC) in the age-standardised incidence and DALY rates of female cancers, by age and socio-demographic index (SDI), were calculated to quantify the temporal trends. Spearman correlation analysis was used to examine the correlation between age-standardised rates and SDI. Findings In 2021, an estimated 1,013,475 new cases of overall female cancers were reported globally, with a significant increase in the age-standardised incidence rate (EAPC 0.16%) and a decrease in the age-standardised DALY rate (−0.73%) from 1990 to 2021. Annual increasing trends in the age-standardised incidence rate were observed for all cancers, except cervical cancer. In contrast, the age-standardised DALY rate decreased for all cancers. Breast and cervical cancers were prevalent among WCBA worldwide, followed by ovarian and uterine cancers, with regional disparities in the burden of the four female cancers. In addition, the age-standardised incidence rates of breast, ovarian, and uterine cancers basically showed a consistent upward trend with increasing SDI, while both the age-standardised incidence and DALY rates of cervical cancer exhibited downward trends with SDI. Age-specific rates of female cancers increased with age in 2021, with the most significant changes observed in younger age groups, except for uterine cancer. Interpretation The rising global incidence of female cancers, coupled with regional variations in DALYs, underscores the urgent need for innovative prevention and healthcare strategies to mitigate the burden among WCBA worldwide. Funding This study was supported by the Science Foundation for Young Scholars of Sichuan Provincial People's Hospital (NO. 2022QN44 and NO. 2022QN18); the Key R&D Projects of the Sichuan Provincial Department of Science and Technology (NO. 2023YFS0196); and the National Natural Science Foundation of China (No. 82303701).
Introduction
Breast cancer, cervical cancer, ovarian cancer, and uterine cancer represent significant health issues for women worldwide. 1 With the global increase in the female population and rapid social development, the burden of these cancers is steadily rising, accompanied by a discernible trend towards women of child-bearing age (WCBA). 2 Breast cancer is one of the most common cancers among women across the world, while cervical cancer remains a leading cause of cancer-related deaths in some developing countries. 3,4 Although ovarian cancer and uterine cancer have relatively lower incidence rates, they are nonetheless significant types of malignancies affecting the female reproductive system. 5 There are striking disparities across regions and countries in female cancers. 6 Some regions may suffer from inadequate medical resources and insufficient healthcare services, leading to delayed cancer screening and treatment, thereby increasing the incidence and mortality rates of these diseases. An International Agency for Research on Cancer study of two million women from 81 countries found that nearly one-third of breast cancer cases were late-stage in sub-Saharan Africa, while only one-tenth were in Europe and North America, 7 highlighting a correlation between lower socioeconomic status and late-stage breast cancer diagnosis. According to the World Health Organization (WHO), cervical cancer is the fourth most common cancer among women globally, with approximately 94% of the 350,000 deaths occurring in low- and middle-income countries. 8 There is still inadequate coverage of cervical cancer screening and human papillomavirus (HPV) vaccination in certain less developed countries, contributing to persistently high incidence rates of cervical cancer. 9 Unhealthy lifestyles and environmental pollution due to rapid development in developing countries may also contribute to an increased risk of female cancers. The latest Global Cancer Statistics indicate that the mortality rates for breast and cervical cancers among women in developing countries are significantly higher than in developed countries (15.3 and 12.4 cases per 100,000 people, respectively, compared to 11.3 and 4.8 cases per 100,000 people). 10 Therefore, comprehensive research and analysis of the burden of female cancers among WCBA in different regions and countries are essential for developing more targeted prevention and control strategies.
The Global Burden of Diseases, Injuries, and Risk Factors Study provided a systematic approach to assess the burden of female cancers in 204 countries and territories, offering a unique opportunity to understand the underlying burden trends across the past three decades. 11 Considering that WCBA represents a crucial demographic group for reproductive health and family planning, effective interventions for female cancers during this age period can contribute to improving
Research in context
Evidence before this study
We used the keywords "breast cancer", "gynecologic cancer", "cervical cancer", "uterine cancer", "ovarian cancer", "global burden", and "women of child-bearing age" to search PubMed and Web of Science from database inception to May 24th, 2024. Several recent studies have indicated that rapid socioeconomic development has contributed to an increase in the incidence of female cancers and a trend towards younger ages at diagnosis, while persistent issues such as regional disparities and gender inequality have led to inequalities in the survival rates of women with cancer. To date, there has been no analysis of the global burden and trends in the four major female cancers among women of child-bearing age (WCBA). The United Nations General Assembly calls for action towards the Sustainable Development Goals related to health, poverty, and gender, and aims to eliminate cervical cancer as a public health issue by 2030. One of the key actions is to provide tailored comprehensive healthcare and primary health services for WCBA. However, there is a scarcity of existing global- and regional-level surveillance data on female cancer among WCBA, and the quality varies greatly.
Added value of this study
This study is the first to analyse the global trends in the incidence and disability-adjusted life-years (DALYs) of four major female cancers (breast cancer, cervical cancer, uterine cancer, ovarian cancer) among WCBA (15-49 years) in 204 countries and territories from 1990 to 2021, considering age and socio-demographic index. The findings of this study provide valuable insights for the development of evidence-based healthcare strategies and the allocation of resources aimed at mitigating the burden of female cancers among WCBA. This underscores the necessity for a comprehensive approach to prevention, screening, and care tailored specifically to this demographic group.
Implications of all the available evidence
Female cancers among WCBA pose a global public health challenge. The age-standardised incidence rate of female cancers increased worldwide from 1990 to 2021, mainly attributable to breast, ovarian, and uterine cancers. Although the age-standardised DALY rate of female cancers decreased worldwide from 1990 to 2021, there were also regional and demographic disparities for each cancer. Health-care providers should be aware of the effects of gender inequality and other societal factors on the risk of female cancers in WCBA and should develop region- and age-appropriate primary intervention, secondary intervention and health care. global women's health and population fertility issues. In this study, we focus on the four major female cancers (breast cancer, cervical cancer, ovarian cancer, and uterine cancer) that affect the female reproductive system and aim to estimate the patterns and trends of their incidence and disability-adjusted life-years (DALYs) among WCBA, to provide insights for tailored policies and strategies concerning prevention, screening, and treatment, ultimately benefiting reproductive and population fertility health.
Study population and data collection
In this study, we analysed data on female cancers from the Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2021. Despite the diverse occurrence of cancers in women, such as breast, cervical, ovarian, uterine, vulvar, and vaginal cancers, GBD 2021 only provided estimates of the burden of four major female cancers (breast cancer, cervical cancer, ovarian cancer, and uterine cancer) (https://ghdx.healthdata.org/record/ihme-data/gbd-2021-cause-icd-code-mappings). In addition, according to the GBD definition, fallopian tube cancer is not included under ovarian cancer. The International Classification of Diseases codes of these four cancers in GBD 2021 are defined in the supplementary appendix (Appendix 1). Moreover, according to the definition from WHO, women of child-bearing age (WCBA) were defined as 15-49 years. 12 The GBD 2021, supported by over 11,500 collaborators from 164 countries, systematically assesses global health status and disease burden through extensive data provision, review, and analysis. The data sources can be found through the GBD 2021 Data Input Sources Tool (https://ghdx.healthdata.org/gbd-2021/sources) from the Institute for Health Metrics and Evaluation website. An overview of GBD data collection, modeling/analysis, and dissemination is provided in the supplementary appendix (Appendix 2). Details of the disease models of the four female cancers are presented in the GBD 2021 methods appendices (https://www.healthdata.org/gbd/methods-appendices-2021/cancers). In this study, we extracted numbers and rates of the incidence and DALYs of the four major female cancers within the age range of 15-49 years from GBD 2021 through the GBD Results Tool (https://vizhub.healthdata.org/gbdresults/).
The socio-demographic index (SDI) is estimated to represent a comprehensive development status that exhibits a robust correlation with health outcomes. It is derived from the geometric mean of 0-1 indices of the fertility rate among females under the age of 25, the average years of education for individuals aged 15 and above, and lag-distributed income per capita. For GBD 2021, final SDI values were multiplied by 100 for reporting. An SDI of 0 signifies the theoretical minimum level of development relevant to health, while an SDI of 100 represents the theoretical maximum level. A recent GBD 2021 capstone paper described how the SDI is assembled and categorized the 204 countries into five quintiles (low, low-middle, middle, high-middle, and high) based on their country-level SDI estimates for the year 2021. 13
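As a concrete illustration of this geometric-mean construction, the following is a minimal Python sketch, not the official GBD computation; the three component index values shown are hypothetical.

from math import prod

def sdi(components):
    # Geometric mean of the three 0-1 component indices (fertility under 25,
    # mean education at ages 15+, lag-distributed income per capita),
    # scaled to 0-100 as reported in GBD 2021.
    assert len(components) == 3 and all(0.0 <= c <= 1.0 for c in components)
    return 100.0 * prod(components) ** (1.0 / 3.0)

# Hypothetical country with component indices 0.6, 0.7, and 0.5:
print(round(sdi([0.6, 0.7, 0.5]), 1))  # -> 59.4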
Statistical analysis
We calculated age-standardised rates (ASRs) per 100,000 population of WCBA aged 15 to 49 years, according to the following formula. 11

ASR = ( Σ_{i=1}^{N} a_i w_i / Σ_{i=1}^{N} w_i ) × 100,000 (1)

In the equation, a_i denotes the age-specific rate in the ith age group, w_i signifies the count of individuals within the same age group as per the GBD 2021 standard population, and N is the total number of age groups. The 95% confidence interval (CI) was determined by the "ageadjust.direct" function of the package "epitools" within R software. 14 We calculated the estimated annual percentage change (EAPC) in ASR to evaluate the average changing trend over a specified time interval. 15 We assumed the natural logarithm of the ASR fits the linear regression model y = α + βx + ε (2), where y refers to ln(ASR) and x is the calendar year. Therefore:

EAPC with 95% CI = 100 × (e^β − 1)

We identified an ASR as indicative of an increasing or decreasing trend over time if both the EAPC and its 95% CI were above or below zero, respectively. In instances where the 95% CI encompassed zero, we deemed the change in ASR statistically insignificant.
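To make these two steps concrete, below is a minimal Python sketch, not the analysis code of this study (which used R, as noted above), reproducing direct age standardisation with standard-population weights and the log-linear regression behind the EAPC. All input values are hypothetical.

import numpy as np
from scipy import stats

def age_standardised_rate(age_rates, std_pop):
    # Direct standardisation: weighted mean of age-specific rates
    # (per 100,000), weighted by the standard population counts.
    a = np.asarray(age_rates, dtype=float)
    w = np.asarray(std_pop, dtype=float)
    return float(np.sum(a * w) / np.sum(w))

def eapc(years, asr):
    # Fit ln(ASR) = alpha + beta * year, then
    # EAPC = 100 * (exp(beta) - 1), with a 95% CI from the slope SE.
    fit = stats.linregress(np.asarray(years, float), np.log(np.asarray(asr, float)))
    t = stats.t.ppf(0.975, df=len(years) - 2)
    lo, hi = fit.slope - t * fit.stderr, fit.slope + t * fit.stderr
    return tuple(100.0 * (np.exp(b) - 1.0) for b in (fit.slope, lo, hi))

# Hypothetical series drifting up by 0.5% per year, 1990-2021:
years = np.arange(1990, 2022)
asr = 50.0 * 1.005 ** (years - 1990)
print(eapc(years, asr))  # ~ (0.5, 0.5, 0.5): an increasing trend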
We employed local regression smoothing models (loess), using the "geom_smooth" function of the package "ggplot2", to fit the correlation between the burdens of female cancers among WCBA and SDI across the 21 regions and the 204 countries and territories. Additionally, we used Spearman correlation analysis to compute the r indices and p values for the relationship between the burdens and SDI in 2021. In addition, considering that the distribution of SDI across countries changed considerably from 1990 to 2021, we calculated the EAPC of SDI for the 204 countries and used Spearman correlation analysis to assess the relationship between the EAPCs of SDI and the ASRs. We regarded p < 0.05 as statistically significant. All statistical analyses and graphical representations were conducted using R software (version 4.2.2).
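The smoothing and correlation steps can be sketched the same way; the following is an illustrative Python analogue of the R workflow described above (loess smoothing via "geom_smooth" and Spearman correlation), run on simulated data rather than the GBD estimates.

import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
sdi = rng.uniform(20, 90, 204)                   # hypothetical SDI for 204 countries
burden = 30 + 0.4 * sdi + rng.normal(0, 5, 204)  # hypothetical country-level ASRs

# Spearman rank correlation between burden and SDI (r index and p value)
rho, p = stats.spearmanr(sdi, burden)

# Local regression (LOWESS) curve, analogous to ggplot2::geom_smooth(method = "loess")
curve = sm.nonparametric.lowess(burden, sdi, frac=0.6)  # array of (x, fitted y) pairs

print(round(rho, 2), p < 0.05)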
Ethics statement
For GBD studies, the Institutional Review Board of the University of Washington reviewed and approved a waiver of informed consent (https://www.healthdata.org/research-analysis/gbd).
Role of the funding source
The funders of the study had no role in study design, data collection, data analysis, data interpretation, or writing of the report. All authors had full access to all the data in the study and accepted responsibility for the decision to submit for publication.
Global, regional, and national burden of overall female cancers
In 2021, the global incidence of overall female cancers was approximately 1,013,475 cases, with an age-standardised incidence rate of 50.7 per 100,000 population. The global DALYs were approximately 12,512,451, with an age-standardised DALY rate of 626.8 per 100,000 population (Table 1). Across the 21 GBD regions and 204 countries, the highest age-standardised incidence rates were found in Central Latin America (78.0) and Monaco (133.3), and the highest age-standardised DALY rates were recorded in Southern Sub-Saharan Africa (1346.4) and Kiribati (2129.6), respectively (Table 1, Table S1, Fig. 1A, Fig. 2A and B).
From 1990 to 2021, the global age-standardised incidence rate increased with an EAPC of 0.16, while the age-standardised DALY rate decreased with an EAPC of −0.73 (Table 1). Across the regional and national levels, the most rapid increases in the age-standardised incidence rate were observed in North Africa and Middle East (EAPC = 2.45) and Lesotho (4.49) (Table 1, Table S1, Fig. 1B, Fig. 2C). The age-standardised DALY rate decreased significantly in most regions, with three exceptions: Southern Sub-Saharan Africa (EAPC = 1.53) and North Africa and Middle East (0.56) increased significantly, and Central Sub-Saharan Africa remained stable. The most rapid increase in the age-standardised DALY rate across countries was in Lesotho (EAPC = 4.38) (Table 1, Table S1, Fig. 1B, Fig. 2D).
At the national level, the highest age-standardised incidence rates of breast, cervical, ovarian, and uterine cancers were observed in the Bahamas, Kiribati, Seychelles, and Cuba, while the highest age-standardised DALY rates were recorded in Nauru, Kiribati, the Bahamas, and Guyana, respectively. The most significant increases in the age-standardised rates of incidence and DALYs were both observed in Türkiye (EAPC = 7.42 and 4.62, respectively) for breast cancer, Lesotho (5.00, 4.92) for cervical cancer, and Ecuador (6.83, 6.32) for ovarian cancer. For uterine cancer, the fastest-increasing trends in the age-standardised rates of incidence and DALYs were in Italy (4.67) and Zimbabwe (4.13), respectively (Tables S6-S9, Figures S1-S4).
Age-group disparities in the burden of the four female cancers

Among WCBA, the age distributions of the numbers and rates of incidence and DALYs were largely consistent across the four individual cancers and overall female cancers globally. In detail, in 2021, the incidence and DALY numbers and rates of the four female cancers increased with age and peaked at 45-49 years (Fig. 4A and B). Additionally, in each age group, the absolute numbers and rates of incidence and DALYs were highest for breast cancer, followed by cervical cancer, ovarian cancer, and uterine cancer. The percentage changes in the burdens of breast, cervical, ovarian, and overall cancers between 1990 and 2021 exhibited a declining trend with age. On the contrary, the percentage change for uterine cancer showed a fluctuating upward trend with increasing age. In addition, the most significant increases in the incidence and DALY rates of breast, cervical, and ovarian cancers were observed among those aged 15-19 years and 20-24 years (Fig. 4C and D). These patterns suggest that the burdens of these three female cancers are increasingly affecting younger women.
The association between ASR, EAPC, and SDI
From 1990 to 2021, across the 21 regions, the overall age-standardised incidence rate of female cancers increased with rising SDI. The age-standardised incidence rates for breast and uterine cancers also increased, while that for cervical cancer decreased. The age-standardised incidence rate for ovarian cancer initially increased and then declined at an SDI of 75. The overall age-standardised DALY rate of female cancers decreased with increasing SDI, with cervical cancer showing a similar trend. However, the age-standardised DALY rates for breast, ovarian, and uterine cancers initially increased with rising SDI and then declined around an SDI of 70 (Figure S5, Fig. 5).
The SDI in 2021 acts as a surrogate for the level and availability of healthcare across different countries. Across the 204 countries and territories in 2021, the overall age-standardised incidence rate of female cancers increased with rising SDI but declined after the SDI reached 75. Ovarian and uterine cancers showed similar trends. By contrast, the age-standardised incidence rate for breast cancer increased with rising SDI, while that for cervical cancer decreased. The overall age-standardised DALY rate of female cancers decreased with rising SDI, consistent with cervical cancer. However, the age-standardised DALY rates for breast, ovarian, and uterine cancers initially increased with rising SDI and then declined around an SDI of 70 (Figures S6-S10).
From 1990 to 2021, countries and territories with low-middle and middle SDI saw faster increases in the age-standardised incidence and DALY rates of overall, breast, ovarian, and uterine cancers, while those with low, low-middle, and middle SDI experienced slower decreases in the rates of cervical cancer (Figures S6-S10). In addition, positive associations were observed between the EAPCs of the age-standardised rates and SDI for breast cancer and ovarian cancer from 1990 to 2021 (Figure S11).
Discussion
This study provides a comprehensive estimation of the incidence and DALYs of female cancers among WCBA and investigates their temporal trends worldwide for the first time. The primary findings are as follows. First, the global age-standardised incidence rate of overall female cancers among WCBA increased, while the age-standardised DALY rate decreased, from 1990 to 2021. Second, annual increasing trends in the age-standardised incidence rate were observed for all cancers except cervical cancer. By contrast, the age-standardised DALY rate decreased for all cancers. Third, breast and cervical cancers were the most prevalent among WCBA worldwide, followed by uterine and ovarian cancers, with regional disparities in the burden of the four female cancers. Fourth, the age-standardised incidence rates of breast, ovarian, and uterine cancers showed a broadly consistent upward trend with increasing SDI, while both the age-standardised incidence and DALY rates of cervical cancer exhibited downward trends with SDI. Lastly, among WCBA, the burden of female cancers increased with age in 2021; however, the greatest changes from 1990 to 2021 were observed in the younger age groups, excluding uterine cancer.
Our results indicated a global rise in the age-standardised incidence rate of female cancers among WCBA from 1990 to 2021, mainly attributable to increases in breast, ovarian, and uterine cancers. This finding aligns with previous studies, 1,6,10 indicating the need for heightened attention to the cancer burden in this population. The global rise of obesity among WCBA, which has been demonstrated to alter inflammatory, metabolic, and hormonal pathways, is likely contributing to this rise. 16,17 Besides, researchers have revealed that a number of adverse lifestyle behaviors arising from rapid social development are associated with increased female cancers through diverse mechanisms. 21,22 Given the heterogeneity and limited statistical power of these studies, additional research is warranted to gain a deeper understanding of how risk factors, particularly modifiable ones, impact female cancers, 26 to achieve targeted early prevention of these cancers. In addition, socioeconomic development has led to enhanced breast cancer screening, routine gynecologic exams, and improved cancer registration, facilitating earlier detection and higher recorded incidence rates of female cancers. 27 The mentioned factors are increasingly common among women in middle SDI regions, correlating with high age-standardised incidence rates and significant increases in female cancers. A community-based study explained this phenomenon: cancer types were significantly associated with different socioeconomic and lifestyle factors. 28 Rapid societal development to a middle socio-development level often leads to adverse lifestyles and improved screening, contributing to rising cancer incidence rates. Conversely, in regions that advance to a high SDI level, healthier lifestyles and effective preventive measures may reduce female cancer incidence. 29 For instance, Central Latin America exhibited the highest age-standardised incidence rate, while North Africa and Middle East increased the most in our study. In both regions, breast cancer had the highest age-standardised incidence rate, which may be associated with a greater burden of risk factors. 23 These findings substantiate the insufficiency of primary prevention efforts for female cancers in middle SDI regions, especially for breast cancer, and emphasize the necessity for policymakers to devise and enforce easily implementable policies. This may include initiatives like promoting physical activity and healthy weight through public health centers, fostering positive lifestyle habits, and enhancing community-based public health education on female cancers. 30,31

The age-standardised DALY rates of all female cancers among WCBA decreased significantly worldwide from 1990 to 2021; however, ovarian cancer showed comparatively steady increases in many regions relative to the other female cancers. According to the most recent findings of the Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute (NCI), ovarian cancer ranked as the fifth leading cause of cancer-related death among women and is considered the deadliest of the gynecologic cancers. 32 This is primarily because early-stage ovarian cancer rarely presents symptoms, leading to delayed detection until it has spread and formed a tumor, making treatment more challenging and often resulting in a fatal outcome. 33 In women deemed at high risk, the established approach continues to be risk-reducing salpingo-oophorectomy.
34 Considering young women's reproductive potential, secondary prevention and healthcare measures should be actively explored to reduce the risk of ovarian dysfunction and hormonal fluctuations, thereby improving cure and survival rates. 35 Meanwhile, there is currently insufficient evidence to endorse screening for the general population. The most extensive ovarian cancer screening trial conducted to date, the UK Collaborative Trial of Ovarian Cancer Screening, found that neither annual multimodal screening nor annual transvaginal ultrasound screening demonstrated a definitive reduction in ovarian or tubal cancer mortality when compared to no screening. 36 Another 20-year screening trial from the NCI also found no improvement in survival. 37 Reassuringly, a clinical pilot study indicated that a panel of 11 methylated DNA markers for ovarian cancer identified all five early-stage, high-grade serous ovarian cancers, 38 which underscores that policies should promote larger biomarker studies in more diverse patient populations to improve early detection of ovarian cancer.
Furthermore, in the context of the global decline in age-standardised DALY rates of female cancers among WCBA from 1990 to 2021, significant negative correlations were observed between SDI and both the age-standardised DALY rate and its EAPC. A systematic review summarized that delays and barriers to cancer care are common in low- and middle-income regions. 39 Additionally, socioeconomic disparities exist in these regions, and financial resources are proportional to health status. 40 This is underscored by our study, which identified regions with middle SDI levels, such as Southern Sub-Saharan Africa and North Africa and Middle East, as demonstrating significant increases in the age-standardised DALY rate among WCBA. These increases were driven by both ovarian and breast cancers. Besides, these two cancers exhibited growth in most low, low-middle, and middle SDI regions, aligning with other epidemiologic studies. 44,45 A review found significant disparities in ovarian cancer outcomes influenced by factors such as race and ethnicity, insurance coverage, socioeconomic status, and geographic location. 46 Similarly, a report on breast cancer screening from the Centers for Disease Control and Prevention (CDC) noted that women with low income and education were less likely to have had a mammogram. 47 Researchers also observed that, compared to women with high socioeconomic status (SES), those with low SES are less adherent to screening and have a two-fold risk of late-stage breast cancer. 48,49 Reassuringly, the establishment of the Africa CDC in 2017 signifies an important stride in enhancing capacity and preparedness across the continent. However, to effectively tackle regional disparities in female cancers, urgent policy reforms are needed to reduce poverty and inequality through improving governance quality, economic growth, revenue distribution, and health education. 50 Furthermore, given the limited and fragmented nature of female cancer research in these regions, strengthening population-based registry systems for monitoring is pressing.
Leaving aside regional differences, of the female cancers included in this study, breast cancer was the most prevalent among WCBA worldwide in both incidence and DALYs, followed by cervical cancer. With the giant strides made by the WHO, our study showed that the global age-standardised DALY rate of breast cancer had declined significantly over the past 30 years, consistent with another study. 51 However, there is room for substantial improvement. In 2023, the WHO introduced the Global Breast Cancer Initiative Framework, aimed at reducing breast cancer mortality by 2.5% annually and preventing 2.5 million breast cancer deaths worldwide by 2040. 52 Early detection, timely diagnosis, and complete treatment are the fundamental strategies. Nevertheless, the persistently increasing global and regional age-standardised incidence rate of breast cancer indicates that current disease prevention efforts are significantly inadequate. The Pan American Health Organization has published a Breast Health Global Initiative, suggesting lifestyle modifications (diet, exercise, alcohol), chemoprevention drugs (tamoxifen) for moderate- to high-risk women, and prophylactic surgery (mastectomy or oophorectomy) for high-risk women with appropriate testing and counseling. 53 Updates to this guideline incorporating new research should guide clinical practice and benefit a broader global population, particularly in low- and middle-income countries.
Regarding cervical cancer, while it remains a significant health concern among younger women, there is reassurance in the effectiveness of scaled-up strategies such as prophylactic vaccination against HPV, along with timely screening and treatment of precancerous lesions. 9 Cervical cancer has emerged as one of the most successfully prevented and treated female cancers worldwide. In our study, too, the age-standardised rates of incidence and DALYs of cervical cancer both declined significantly from 1990 to 2021 globally. Nonetheless, the illness proves fatal for women living in lower-income nations, as they frequently present at advanced and incurable stages due to resource constraints. 4 Recent research has unveiled a surge in the incidence rate of distant-stage cervical cancer among White women in low-income counties, escalating at an annual rate of 4.4% since 2007. 54 These findings align with our study results, indicating that higher incidence and DALY rates of cervical cancer occurred in regions with middle and low SDI levels. Overall survival at five years in women with late-stage cervical cancer was below 19%. 55 Hence, it is imperative for all nations, especially those with lower SDI, to support the resolution adopted by the World Health Assembly in 2020, which advocates for the "Elimination of Cervical Cancer" by 2030, achieved through attaining targets of immunizing 90% of girls by age 15 with the HPV vaccine, screening 70% of women at ages 35-45 using high-performance tests, and treating 90% of precancerous lesions and managing 90% of invasive cancer cases. 56 Notably, despite a high false-positivity rate, visual inspection with acetic acid (VIA) could be considered as an alternative screening tool in primary care in low-income countries, 57 with attention given to the specificity of VIA.
Overall, the age-standardised incidence rates of breast, ovarian, and uterine cancers among WCBA generally increased with SDI, and these cancers had the highest age-standardised DALY rates in middle and high-middle SDI regions. Although breast, ovarian, and uterine cancers originate from different tissues, they share common epidemiological and hormonal risk factors, such as current age, age at menarche, and parity. A population-based cohort study developed models to predict the absolute risk of these cancers, aiding informed decisions on prevention and treatment. 58 However, broader studies are needed to refine the risk factor model to support prevention and treatment strategies for WCBA. Furthermore, the global age-standardised incidence rate of overall female cancers showed a positive correlation with SDI, whereas the age-standardised DALY rate demonstrated a negative correlation, with the most rapid increases in lower SDI regions. This suggests that regions and countries should adapt their prevention and treatment strategies based on their specific incidence and DALY rates. In particular, lower SDI countries should focus on primary prevention as a cost-effective strategy for long-term cancer control, as evidenced by the decline in global cervical cancer rates.
Our study also revealed that the age-standardised rates of incidence and DALYs of female cancers among WCBA increased with age in 2021.
However, the greatest changes from 1990 to 2021 were observed in the younger age groups, excluding uterine cancer. This disproportionately affects women in their prime years, who are the primary caregivers for children, managing household responsibilities while simultaneously engaging in critical professional or agricultural activities. Moreover, findings from clinical cancer trials indicated that women experience more severe symptomatic and hematologic adverse events across various treatment modalities, including immunotherapy, targeted therapy, and chemotherapy, 59 suggesting that significant sex differences exist. Governments worldwide must prioritize cancer control among WCBA within their development agendas and integrate gender considerations into personalized cancer medicine. Adequate resources should be allocated to investigate gender disparities and develop less toxic, women-specific therapies. Notably, a qualitative analysis found that 66.7% of younger women with gynecologic cancers reported unmet care needs, and 28% cited organizational difficulties within healthcare systems. 60 Additionally, survivors often face psychological and social-sexual issues, leading to negative life changes. 61 These experiences underscore the importance of high-quality women's care through psychological and pharmacological interventions.
Our study has several limitations. First, the estimation of the burden of female cancers relies heavily on the availability and quality of data from the GBD 2021. There may be a lack of access to the raw/original data for some countries, particularly those with low and middle incomes, which can hinder GBD researchers from producing their estimates. Second, our study exclusively focuses on describing the burden of four common female cancers: breast, cervical, uterine, and ovarian cancers, excluding other types of female cancers. Third, variations in the diagnosis and detection protocols for these female cancers across countries and over time may potentially impact the comparability of results. Given the uncertainties associated with the raw data, caution is warranted in interpreting the trends in the burden of female cancers among WCBA identified in this study. Fourth, a narrow focus on significance testing may overlook the clinical relevance of the findings. To mitigate this limitation, we advocate for the development and implementation of diverse analytical approaches to broaden and validate the results of this study.
In conclusion, female cancers among WCBA pose a global public health challenge. From 1990 to 2021, the incidence of female cancers worldwide continued to rise. Despite a downward trend in the global DALY rate for female cancers over the same period, regional disparities persist. Healthcare providers should recognise that social factors associated with globalization may contribute to an increasing number of WCBA being at risk for female cancers. Moreover, tailored primary prevention, secondary prevention, and healthcare strategies should be optimized to address the needs of WCBA based on age, region, and disease type, particularly in aging societies.
Contributors ZWW conceived the study. PS designed the protocol. PS, CY, LMY, and YC analysed the GBD data. CY, ZCS, TTZ, PS, KHZ, XQY, JYC, and YPL contributed to the statistical analysis and interpretation of data. PS and ZWW drafted the manuscript, and the other authors critically revised the manuscript. PS and ZWW accessed and verified the underlying data. All authors have read and approved the final version of the manuscript.
Fig. 1: Age-standardised incidence and DALY rates in 2021, and their estimated annual percentage changes from 1990 to 2021, for female cancers, globally and by 21 GBD regions. Age-standardised rates of incidence and DALYs (A), and estimated annual percentage changes of age-standardised rates of incidence and DALYs (B). Female cancers include breast, cervical, ovarian, and uterine cancers. DALY, disability-adjusted life-years; EAPC, estimated annual percentage change; ASR, age-standardised rate.
Fig. 2: National age-standardised incidence and DALY rates in 2021, and their estimated annual percentage changes from 1990 to 2021, for overall female cancers. Age-standardised rates of incidence (A) and DALYs (B). Estimated annual percentage changes of age-standardised incidence rate (C) and DALY rate (D). Female cancers include breast, cervical, ovarian, and uterine cancers. DALY, disability-adjusted life-years.
Fig. 3: Numbers and proportions of incident cases and DALYs contributed by 21 GBD regions, for female cancers, in 2021. Numbers of incident cases (A) and DALYs (B) of each cancer. Proportions of incident cases (C) and DALYs (D) accounted for by each cancer. Female cancers include breast, cervical, ovarian, and uterine cancers. DALY, disability-adjusted life-years.
Fig. 4: The cross-sectional (2021) and longitudinal (1990-2021) trends of incidence rate and DALY rate of female cancers across women of child-bearing age. Numbers and rates of incident cases (A) and DALYs (B) of female cancers. Percentage changes of incidence rate (C) and DALY rate (D) of female cancers. Female cancers include breast, cervical, ovarian, and uterine cancers. DALY, disability-adjusted life-years.
Fig. 5: Age-standardised rates of incidence and DALYs of each female cancer, globally and for 21 GBD regions, by SDI (2021), from 1990 to 2021. Age-standardised incidence rates of breast cancer (A), cervical cancer (B), ovarian cancer (C), and uterine cancer (D), by SDI. Age-standardised DALY rates of breast cancer (E), cervical cancer (F), ovarian cancer (G), and uterine cancer (H), by SDI. Expected values with 95% CI, based on SDI and disease rates in all locations, are shown as a solid line and shaded area; 32 points are plotted for each region and show the observed age-standardised incidence or DALY rates for each year from 1990 to 2021. Points above the solid line represent a higher-than-expected burden, and those below the line show a lower-than-expected burden. Female cancers include breast, cervical, ovarian, and uterine cancers. DALY, disability-adjusted life-years; GBD, Global Burden of Diseases, Injuries, and Risk Factors Study; SDI, socio-demographic index. | 2024-07-04T15:01:51.529Z | 2024-07-02T00:00:00.000 | {
"year": 2024,
"sha1": "39d74bc7be15a0a5d843edc195be1ee2ed85f71f",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "PubMedCentral",
"pdf_hash": "78b42748cab313c6c6c2ec05acfaf3ba5e87689f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
247820316 | pes2o/s2orc | v3-fos-license | Ecological Education Development Design for Middle School Students in Maintaining the Sustainability of Ujungpangkah Mangrove Essential Ecosystem Area (KEE MUP)
The Ujungpangkah mangrove essential ecosystem (KEE MUP) is a protection and limited-utilization area located on the coast in the north of Gresik Regency. This area has high biodiversity, especially numerous mangrove species and waterbirds that arrive from different parts of the world through migration. To protect this area, the sensitivity of students, as the young generation, must be enhanced so that they react appropriately to keep the environment sustainable. In this case, environment-based education (Ecological Education) is used as a sustainable approach to answering this challenge. Various community service activities are structured through a strategic plan for implementing Ecological Education. Through this Ecological Education, students can deepen their sense of love for the environment so that the KEE MUP conservation zone is inherited sustainably.
INTRODUCTION
Environmental sensitivity is the capacity to react quickly and precisely in protecting and loving the environment (Pitoewas et al., 2020). This sensitivity needs to be cultivated in the younger generation so that they understand the importance of preserving nature, can analyze and criticize the environmental destruction that occurs, and can take strategic steps for ecological restoration and conservation to keep the environment sustainable (Prasetyo, Santoso & Prasetyo, 2017). The younger generation has a major role in the development of the environment because they will continue the milestones of leadership in the future, so a high sensitivity to the environment must be embedded in today's young generation, namely students in elementary to middle schools.
To answer this challenge, Environmental Education (Ecological Education) is needed as a sustainable approach to increase the sensitivity of students, as young people, to the environment. Topics that can be learned in Ecological Education include environmental education, knowledge of ecosystems, the existence of flora and fauna, and environmental sustainability according to ecological functions. The Ecological Education curriculum is expected to give students a positive view of ecosystems that need to be maintained. Currently, the northern coastal area of Gresik Regency, comprising Banyuurip Village, Pangkahkulon Village, and Pangkahwetan Village in Ujungpangkah District, has been designated as the Ujungpangkah Mangrove Essential Ecosystem Area (KEE MUP), referring to East Java Governor's Decree No. 188/309/KPTS/013/2020 of July 13, 2020. A KEE is an essential ecosystem area designated as a protected area and managed under conservation principles like those applied to conservation forest areas. The background to designating this area is the preservation of biodiversity, one of the goals of sustainable development, and a shift in the KEHATI (biodiversity) management paradigm to encourage local governments and related parties to be involved in protecting biodiversity.
The Ujungpangkah Mangrove KEE covers 1,554.27 hectares, comprising 1,143.71 hectares of mangrove vegetation and 410 hectares of water. It spans three administrative villages: Pangkahwetan (1,029.16 ha), Pangkahkulon (397.5 ha), and Banyuurip (127.61 ha). The mangrove forest is a significant ecosystem representative of wetlands.
Various protected bird species are found here, including the Sunda coucal, Milky stork, Lesser adjutant, Javan plover, Osprey, Far Eastern curlew, Eurasian whimbrel, Malaysian pied fantail, and Crested honey buzzard, which come from various parts of the world (Australia, Madagascar, and Mongolia). In addition, 16 species of mangroves thrive here, maintaining the biological balance in marine waters and serving as a feeding ground for aquatic biota. The KEE MUP area is therefore a natural resource area that needs to be protected and preserved.
KEE MUP has the vision of realizing sustainable management of the Ujungpangkah Mangrove Essential Ecosystem (KEE) area by aligning the utilization of socio-cultural potential to support community welfare (Ariyanto & Ferdiansyah, 2020). Thus, synergy is needed from the University of Muhammadiyah Gresik, as an academic institution engaged with the management of the KEE, to develop the KEE through an ecological education development design. This is intended to foster a sense of love for the environment in the younger generation, especially in primary and secondary education, through the introduction of an environment-based curriculum.
APPROACH METHOD
The target group for this community service activity is students in primary and secondary education, reached through FGD activities with teachers to establish two-way interaction and a shared understanding of the Ecological Education curriculum in the KEE MUP area. The selection of targets was based on the results of the situation analysis and the specific expertise of the team.
This study uses a CBR (Community-Based Research) approach, whose paradigm rests on the active participation of the community. A participatory approach is used so that the target group plays an active role in voicing concerns and providing solutions that can be implemented effectively and efficiently.
In general, the stages of the approach implemented are described as follows:
1. Laying the foundation (needs analysis): getting to know the general picture of the community as a whole and the condition of the young generation in the KEE MUP area, including problems, constraints, and development opportunities, especially in the study of Ecological Education for the younger generation.
2. Research planning: formulating problems, grouping them based on priority scales, and translating them into an ecological education development design.
3. Gathering and analyzing information: involving elementary (SD-SMP) and secondary (SMA) schools; in its implementation, in-depth interviews, observation, documentation, and FGD techniques are used.
4. Acting on findings: producing a framework for a long-term, sustainable vision of environment-based ecosystem management by the younger generation of KEE MUP.
FINDINGS AND DISCUSSION
Based on the objectives of the service program, the community service activities were carried out in several stages. The first stage was an exploration activity carried out in several schools in the KEE MUP area. This was the initial coordination stage for developing the KEE MUP Ecological Education design. It took the form of semi-structured, in-depth interviews to collect data that the service team could use in preparing plans for implementing Ecological Education for the young generation of KEE MUP. It emerged that Ecological Education was not included as a local-content subject but was instead dispersed across material in other subjects. The schools also strongly supported the socialization of Ecological Education by introducing students to the Ujungpangkah Essential Ecosystem Area where they live as a conservation area that needs to be protected. Besides visiting schools, observations were also made through river-crossing activities to see firsthand the condition of the KEE Mangrove in Ujungpangkah, Gresik.
Furthermore, problem formulation activities were carried out to determine the design for developing Ecological Education for the younger generation in the Ujungpangkah KEE Mangrove area. In this activity, a roadmap was made for planning the development of Ecological Education in the coming years. This is not limited to its application as a local subject; it also includes simulations and direct practice in area protection activities. The Ecological Education Development Design for the Sustainability of the Ujungpangkah Essential Mangrove Ecosystem Area (KEE MUP) is shown in Figure 1.
The design for the development of Ecological Education for students in the KEE MUP area is laid out over five consecutive years, each with a different outcome.
In 2021, the results of the situation analysis and exploration studies showed that there were still people in the KEE MUP area who were unaware that the area where they live had been designated as an essential area for the protection of mangrove ecosystems and of migratory birds originating from various parts of the world. In addition, field exploration and interviews with the community revealed that mangrove forest had been converted into fishpond areas, so that local and migratory birds were declining in both number and species in the mangrove area. Referring to this fact, the solution offered is that the development of Ecological Education be prioritized to introduce the Banyuurip, Pangkahkulon, and Pangkahwetan Village areas as a single conservation-based area called the Ujungpangkah Essential Mangrove Ecosystem Area (KEE MUP), with students in the KEE area as the target. Its existence needs to be maintained and preserved by all parties, not only by the government as policymaker but also by the private sector, academics, and the KEE MUP community.
Figure 1. Ecological Education Development Design for Middle School Students in the KEE MUP Area

The socialization activity was carried out to support the ecological education action plan in schools and to introduce the Ujungpangkah Mangrove Essential Ecosystem Area (KEE MUP) to elementary and middle school students through their teachers (Suryani, Aje, & Tute, 2019). The activity began with an introduction from the KEE Ecological Education community service team, followed by several essential points of Ecological Education, including the introduction of essential ecosystem areas (KEE), the flora and fauna in the KEE area, and education from an ecological perspective. Aside from its role as an essential wetland ecosystem and habitat for animals and flora in coastal areas, the main speaker in this socialization activity also explained the critical value of the KEE as a coast guard, windbreak, barrier against abrasion and seawater intrusion, supplier of organic matter, and carbon sink (Putra, Akbar & Habiburrahman, 2020).
KEE MUP is expected to become a center for environmental conservation amidst the onslaught of industrialization in areas such as Gresik Regency, having been designated an Important Bird Area (IBA) by Birdlife International (2018). Although it occupies an area prone to conflict between humans and wildlife, the KEE is a limited-use zone that still accommodates the wheels of the human economy, so synergy is needed in its management to achieve a balanced condition.
During this socialization activity, it was also emphasized that environmentally friendly education in the KEE MUP primary and secondary schools is essential. Ecological Education is considered an indispensable means of socializing conservation and preventing ecological damage within an ecosystem. Schools are therefore expected to contribute significantly to increasing students' sensitivity to environmental conditions, thus supporting the success of Ecological Education. Furthermore, Ecological Education can apply an ecological character approach to improve people's ecologically minded attitudes, considering that the ecological crisis that has occurred so far is caused by maladaptive human attitudes in interacting with the environment (Lundquist, Carter, Hailes & Bulmer, 2017).
Based on the socialization carried out through lectures, discussions, questions and answers, and FGDs, the activity ran in two directions, with no barriers between the service team and the participants, and produced positive feedback from the participants. Some notable points from these discussions are as follows.
In Ecological Education, it is hoped that the Ecological Character Building program will become one of the main approaches to stimulating an individual's ecologically minded attitude; this program contains various activities designed to touch the psychological side of students' relationship with nature. Furthermore, this ecological behavior can be carried out by engaging directly with the community to solve existing ecological problems and understand the importance of preserving the environment. In detail, the activities outlined in action are as follows:
1. Clean and healthy living behavior
2. Mangrove planting
3. Cleaning and waste processing
4. Distributing pamphlets and running an ecology movement in schools
5. Ecotourism

Furthermore, Ecological Education is expected to socialize the principles of environmental ethics in the sustainable use of the aquatic environment (Anggraini, Marpaung & Hartuti, 2017).
The second point discussed in the socialization was the gradual environmental degradation of the KEE area due to overlapping interests between the economic, social, and ecological sectors; for example, the number of seabirds, the area's exotic biota, has decreased drastically, and mangrove areas have been opened into fishponds.
Population growth raises human demands for access to livelihoods. Beyond these social factors, residents also need practical ways to meet family needs. The KEE MUP area is very exotic and rich in natural resources, hence the need for synergy in managing the ecological system, because this area is needed not only by humans but also by various types of animals and plants. The linkage between these three systems requires controlling each factor to achieve a sustainable ecosystem that can be enjoyed economically by present and future generations (Nizar, Siswati, & Zargustin, 2019). Several things have been done to realize this balance: the government has legally established a policy designating this location as an essential mangrove ecosystem area that needs to be protected; extensive mangrove nursery and planting activities have been promoted to protect the sustainability of the ecosystem, which functions as a home for various biota of high economic value; a mangrove ecotourism site serves as a location for tourism and for education about nature conservation; and fishing activities use environmentally friendly fishing gear (Taqiyudin & Santoso, 2019).
To make the interdependence of these three systems more synergistic, Ecological Education is the right solution, especially for the younger generation who will carry the milestones of leadership in the future. With Ecological Education for elementary and secondary school students in the KEE MUP area, it is hoped that students will increasingly realize the importance of the KEE area in their lives, conduct analyses, educate others, and take the initiative in strategic steps to protect ecosystems sustainably.
Figure 2. Series of Community Service Activities in 2021 Regarding Ecological Education Socialization in the KEE MUP Area

In the second and third years, activities focused on action plans for ecological education in the KEE area. The first activity is the socialization of Ecological Education, which is expected to shape students into environmentally ethical human beings who can appreciate both biotic and abiotic ecological systems according to ecological reality. This is implemented through direct teaching activities in nature, providing as many opportunities as possible for students to be active, with direct demonstrations to attract students' interest and to give them solid, non-verbalistic intellectual and emotional apperception (Fahrudin & Santoso, 2019). The next activity is legal socialization related to the KEE, providing information on the various regulations and collaborations carried out by the government as policymaker and by associated stakeholders, including UMG as academics, to protect the Ujungpangkah essential mangrove ecosystem area.
Other activities include training in the handling and processing of natural materials, as a follow-up to the evaluation of the KEE introduction and socialization activities. Here, students are invited to participate actively in groups, using simulation methods and hands-on practice in processing waste and turning natural materials into products of commercial value (Santoso & Sutopo, 2019). In the fourth year, ecological education activities focus on increasing literacy in Ecological Education by expanding the stock of reading books on natural ecosystem studies and of slogans about protecting the surrounding environment for the sake of ecosystem sustainability for future generations. Other activities include training on mangrove nurseries and planting for three consecutive years, to help restore coastal areas that host economically important biota in the KEE MUP area.
In the last year, research was conducted to determine changes in student behavior in response to Ecological Education. This activity was carried out through FGDs, two-way discussions, in-depth interviews, and the distribution of questionnaires, which were then matched against indicators of the achievement of the environmental ethics principles applied in everyday life.
Some of the principles emphasized in environmental ethics, according to Keraf (2005) in Setyowati (2014), are as follows:
1. Respect for nature: human life depends on nature, so nature needs to be nurtured, cared for, guarded, protected, and preserved so that it remains sustainable for future generations.
2. The principle of responsibility (moral responsibility for nature), meaning the urge to protect nature individually and in groups.
3. Cosmic solidarity, which moves humans to save the environment and all life in nature.
4. The principle of compassion and caring for nature.
5. The principle of not harming nature.
6. A simple attitude to life, in harmony with nature.
7. The principle of justice, which synergizes the fulfillment of economic and social needs with existing ecosystem conditions so that sustainability is achieved across generations.
With the implementation of this Ecological Education development design, it is hoped that a sense of love for the environment will be fostered in students so that the conservation of the KEE MUP zone can be inherited sustainably.
CONCLUSION
The community service activity regarding the socialization of the ecological education action plan for students in the KEE MUP area ran smoothly and was met with high enthusiasm. With the implementation of the Ecological Education development design, students, as the target of the activities, are expected to develop insight as environmentally ethical human beings who can apply the principles of environmental ethics in everyday life. | 2022-03-31T15:51:41.410Z | 2022-02-10T00:00:00.000 | {
"year": 2022,
"sha1": "9743e2210b52c5cdcfee21250c1b35453d2e4743",
"oa_license": "CCBYSA",
"oa_url": "https://journal-center.litpam.com/index.php/Sasambo_Abdimas/article/download/626/375",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1a1def0748630950fef93a9e4999f42691bd37f7",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
202854342 | pes2o/s2orc | v3-fos-license | Prevalence of Wuchereria bancrofti Infection in Mosquitoes from Pangani District, Northeastern Tanzania
ABSTRACT Background: Wuchereria bancrofti is the most widely distributed of the 3 nematodes known to cause lymphatic filariasis, the other 2 being Brugia malayi and Brugia timori. Anopheles gambiae and Anopheles funestus are the main vectors. However, the relative contributions of mosquito vectors to disease burden and infectivity are becoming increasingly important in coastal East Africa, and this is particularly true in the urban and semiurban areas of Pangani District, Tanzania. Methods: Mosquitoes were sampled from 5 randomly selected villages of Pangani District, namely, Bweni, Madanga, Meka, Msaraza, and Pangani West. Sampling of mosquitoes was done using standard Centers for Disease Control light traps with incandescent light bulbs. The presence of W. bancrofti in mosquitoes was determined via polymerase chain reaction (PCR) assays using NV1 and NV2 primers, and PoolScreen 2 software was used to determine the estimated rate of W. bancrofti infection in mosquitoes. Results: A total of 951 mosquitoes were collected, of which 99.36% were Culex quinquefasciatus, 0.32% were Anopheles gambiae, and 0.32% were other Culex species. The estimated rate of W. bancrofti infection among these mosquitoes was 3.3%. Conclusion: This was the first study employing PoolScreen PCR to detect W. bancrofti circulating in mosquito vectors in Pangani District, northeastern Tanzania. The presence of W. bancrofti infection suggests the possibility of infected humans in the area. The high abundance of Cx. quinquefasciatus calls for integrated mosquito control interventions to minimise the risk of W. bancrofti transmission to humans. Further research is required to gain an in-depth understanding of the W. bancrofti larval stages in mosquitoes, their drug sensitivity and susceptibility profiles, and their fecundity.
INTRODUCTION
Wuchereria bancrofti is a filarial nematode that has a thread-like appearance in its adult stage. 1 The female nematodes are about 10 cm long and 0.2 mm wide, while the males are only about 4 cm long. 2 The adults reside and mate in the lymphatic system, where they can produce up to 50,000 microfilariae per day. 1 The microfilariae are 250 to 300 µm long and 8 µm wide, and they circulate in the peripheral blood. They can live in the host as microfilariae for up to 12 months. Adult worms take 6 to 12 months to develop from the larval stage and can live between 4 and 6 years. 3 The parasites are transmitted to humans when infected mosquito vectors deposit infective larvae onto the human skin. 4 The larvae penetrate the skin, migrate to the lymphatic vessels, and develop into male and female adult worms over a period of months. Microfilariae ingested by a vector during a blood meal develop into infective larvae in about 10 to 14 days. These migrate to the mosquito's proboscis and may then be transmitted to a new human host during a subsequent blood meal. Mosquitoes thus play an essential role in maintaining the lifecycle of W. bancrofti and disseminating the infection. 5 A blood smear is a simple and accurate diagnostic tool, provided the blood sample is taken at night, when the microfilariae are in the peripheral circulation. 6 A polymerase chain reaction (PCR) test can be performed to detect a minute fraction, as little as 1 pg, of filarial DNA. 7 Some infected people do not have microfilariae in their blood; as a result, tests aimed at detecting antigens from adult worms are used. Ultrasonography can also be used to detect the movements and noises caused by the movement of adult worms. 8 Wuchereria bancrofti causes lymphatic filariasis, which is a disfiguring and disabling disease associated with severe suffering and socioeconomic burden in endemic communities. 4 Current estimates suggest that more than 1 billion people live in endemic areas and are at risk of the infection, and that more than one-third of these at-risk individuals are in sub-Saharan Africa. 9 In Tanzania, about 34 million people are at risk, while 6 million people are already affected by lymphatic filariasis. Lymphatic filariasis is widespread in Tanzania; particularly high endemicity is seen along the coast of the Indian Ocean and in areas adjacent to the Great Lakes. 10 In the Tanga Region of Tanzania, recent reports after mass drug administration (MDA) estimate circulating filarial antigen (CFA) and microfilaraemia rates of 15.5% and 3.5%, respectively, down by 75.5% and 89.6% from baseline, with a CFA rate of 2.3% in school children. 13 Ongoing vector control measures against W. bancrofti in Tanzania consist of indoor residual spraying, long-lasting insecticidal nets, larval source management, mosquito repellents and coils, and house modifications.
It has previously been shown in Tanzania that MDA treatment regimens drastically reduce the W. bancrofti microfilarial load. 11 Other studies have revealed a decrease in the transmission of lymphatic filariasis associated with a decline in anopheline mosquitoes. 17 Although a decline in anopheline mosquitoes has been documented in Tanga, information on vector burden and the vector infection rate with W. bancrofti is still lacking. Therefore, this study assessed the vector burden and the vector infection rate with W. bancrofti.
Study Setting
This study was carried out in 5 rural villages of Pangani District, which has an area of 1,830 km², making it the smallest district in Tanga Region. It is located in the southern part of Tanga, extending from 5°15.5' to 6° S and from 38°35' to 39° E. It is bordered by Handeni District to the west, the Indian Ocean to the east, Pwani Region to the south, and Muheza District to the north. Altitude ranges from 0 to 186 m above sea level. Pangani District is administratively divided into 13 wards and 23 villages.
Study Design
This was an 8-month cross-sectional study, which involved the trapping of mosquitoes for laboratory examination of W. bancrofti. The 8 months were divided into 2 rounds, and 5 villages were randomly selected. Houses for mosquito collection were randomly selected from each village. The mosquitoes were sampled using standard Centers for Disease Control light traps with incandescent light bulbs (Model 512, John W. Hock Company, Gainesville, FL, USA).
Traps were hung beside beds occupied by at least 1 person sleeping under an unimpregnated bed net. 14 Briefly, the shield of each trap was left to touch the side of the net, with 150 cm clearance above the floor. The light traps were set at 20:00 hours and retrieved the following morning at 06:00 hours.
Mosquito Storage and Identification
The mosquitoes collected at each village were held separately and transported to the National Institute for Medical Research's Tanga Centre for identification based on morphological identification keys. 15,16 Female mosquitoes were organised into pools of 20, stored in cryogenic vials with silica gel, and transported to Sokoine University of Agriculture in Morogoro for screening of W. bancrofti.
DNA Extraction from Mosquitoes
DNA from the pools of 20 mosquitoes was extracted using a modified version of the Qiagen DNeasy kit protocol (Qiagen, Hilden, Germany). Briefly, mosquitoes were crushed in phosphate-buffered saline and lysed, and proteins were precipitated out using ethanol. The supernatant was passed through a silica column, followed by washing of the bound DNA. Afterwards, the silica was dried and the DNA eluted into RNase-free Eppendorf tubes. DNA was stored at -20°C until PCR was performed.
Detection of W. bancrofti Using PCR
PCR assays to detect W. bancrofti were performed using NV1 and NV2 primers. 17,18 The target sequence for these primers is the Ssp I repeat, a gene present at ~500 copies per haploid genome. Amplification with these primers yields a 188 bp fragment. Each 20 µl PCR reaction contained 1× Qiagen Taq buffer; 50 mM MgCl2; 50 mM each of dATP, dCTP, dGTP, and dTTP; 10 pmol/µl each of the NV1 and NV2 primers; 1.25 U HotStar Taq DNA polymerase; and 2 µl genomic DNA. PCR reactions were run on a Veriti 96-Well Thermal Cycler (Applied Biosystems, Jurong, Singapore), and reaction conditions consisted of a single step of 95°C for 10 minutes, followed by cycles of 94°C for 30 seconds, 54°C for 45 seconds, and 72°C for 45 seconds. The final step was a 10-minute extension at 72°C. PCR products were size-fractionated on a 1.5% agarose gel stained with GelRed (Biotium, Hayward, CA, USA). Agarose gels were run at 100 V for 40 minutes and visualised under ultraviolet light using a gel documentation system (EZ Gel Imager, Bio-Rad Laboratories, Hercules, CA, USA). A positive control mosquito pool known to be infected with W. bancrofti (a kind donation from the National Institute for Medical Research, Amani Tanga Centre) was used, along with negative controls, which were run concurrently with the samples to ensure that the PCR amplification was not contaminated. This helped prevent false positive results and ensured that all the reagents were working properly.
Determination of Estimated Rate of W. bancrofti Infection in Mosquito Vectors
The calculation of vector infection rates from pool screening was addressed via an application of the binomial distribution. 19 A maximum likelihood estimation algorithm was used to estimate the rate of W. bancrofti infection in mosquitoes at the 95% confidence level; the total number of pools screened, the number of positive pools, and the pool sizes were entered into PoolScreen 2 software to obtain the infection rate. PoolScreen 2 software was obtained from the Department of Biostatistics and Division of Geographic Medicine, University of Alabama at Birmingham, USA. The programme relies on the fact that the PCR assay is sensitive enough to detect a single infected insect in a pool containing a large number of uninfected insects.
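The underlying estimator can be sketched in a few lines of Python. This is not PoolScreen 2 itself, only the binomial likelihood it is built on; with equal pools of 20, it returns roughly 3.5% for this study's counts, slightly above the published 3.3%, which also accounted for the actual (unequal) pool sizes and provided confidence limits.

import numpy as np
from scipy.optimize import minimize_scalar

def pool_mle(pool_sizes, positives):
    # MLE of the per-mosquito infection probability p from pooled PCR.
    # A pool of size m is PCR-negative with probability (1 - p)^m, so the
    # log-likelihood sums log(1 - (1 - p)^m) over positive pools and
    # m * log(1 - p) over negative pools.
    sizes = np.asarray(pool_sizes, dtype=float)
    pos = np.asarray(positives, dtype=bool)

    def neg_log_lik(p):
        q = (1.0 - p) ** sizes  # probability each pool is negative
        return -np.sum(np.where(pos, np.log(1.0 - q), np.log(q)))

    return minimize_scalar(neg_log_lik, bounds=(1e-6, 0.5), method="bounded").x

# This study: 47 pools of ~20 mosquitoes, 24 of them PCR-positive.
sizes = [20] * 47
positives = [True] * 24 + [False] * 23
print(round(100 * pool_mle(sizes, positives), 1))  # ~3.5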
Ethical Considerations
Ethical approval for this study was obtained from the Medical Research Coordination Committee (MRCC), based at the National Institute for Medical Research, Dar es Salaam, Tanzania (Ref: NIMR/HQ/R.8a/Vol.IX/1834). Permission to conduct the study was also obtained from regional, district, and respective village authorities. Moreover, written informed consent was sought from the heads of the households where mosquito collection was carried out.
Presence of W. bancrofti in Mosquitoes
All Cx. quinquefasciatus mosquitoes collected in Pangani District were screened for W. bancrofti infection. Of the 47 mosquito pools screened for W. bancrofti, 24 (51%) tested positive and 23 (48%) tested negative. Positive pools produced a PCR product of approximately 188 bp, the expected size after Ssp I amplification using the NV1 and NV2 primers.17 Figure 2 shows an example of the agarose gel after performing PCR for the detection of W. bancrofti in mosquito pools.
Estimated Rate of Infection of W. bancrofti in Mosquito Vectors
A total of 951 female mosquitoes were screened for infection with W. bancrofti, and infection rates were estimated using PoolScreen 2 software, which applies maximum likelihood estimation at the 95% confidence level.19 Msaraza village had the highest estimated rate of infection, at 5.34%, and Bweni village had the lowest, at 2.9% (Table 2).
DISCUSSION
Monitoring infection rates among humans and vectors is an essential component of any lymphatic filariasis control programme. Such monitoring informs decision making, for example, deciding when to stop MDA and when to certify the elimination of the disease. Monitoring transmission or infection in vectors is ideal, as mosquitoes may offer a real-time estimate of transmission,20,21 even though the manifestation of microfilaria may be marginally quicker in humans. Low-level microfilaraemia may also not be easy to detect in human populations.
The results obtained from the present study indicate that Cx. quinquefasciatus was the most abundant vector species caught during the study. These observations concur with a study carried out in Dar es Salaam,22 which reported that out of 12,096 vector mosquitoes caught using light traps, the great majority (99.0%) were Cx. quinquefasciatus, followed by a few Anopheles gambiae (0.9%) and Anopheles funestus (0.1%).
The higher relative abundance of Cx. quinquefasciatus in the present study might be because mosquitoes were collected during the dry season, during which the overall mosquito population is normally relatively low. The observed mosquito abundance has important implications for the transmission of both malaria and lymphatic filariasis, but the low anopheline abundance observed in the present study has greater implications for malaria transmission.
Wuchereria bancrofti infection in mosquitoes was found in all 5 villages, with an overall infection rate of 3.3%. Derua et al.12 reported that the overall rate of W. bancrofti infection among 3 sibling species (An. gambiae, Anopheles merus, and Anopheles arabiensis) in their study area in northern Tanzania was 3.6%, which is similar to our calculated rate. It should be noted that these infection rates are based on all vector-borne stages of the parasite, since the PCR testing method used cannot distinguish between the different larval stages. There is a need to determine the presence of the infective stages of W. bancrofti to estimate the risk of lymphatic filariasis transmission by these mosquitoes.23

The detection of infection in mosquito vectors is an indication that there may be infected humans in the area, and a high rate of W. bancrofti in the vectors might reflect a high prevalence of microfilaraemia in the human population. A previous study reported an overall prevalence of 24.5% for W. bancrofti microfilaria among people over the age of 1 year.24 In a similar study, the prevalence of W. bancrofti-specific circulating antigen was 53.3%.24

While annual MDA remains the standard intervention for interrupting the transmission of lymphatic filariasis, vector control to reduce the number of potential mosquito vectors is increasingly recognised as a complementary strategy in some contexts.25 A combination of more than 1 vector control method would probably enhance the impact on vector populations and on reducing lymphatic filariasis transmission, particularly if the methods address different stages of the mosquito lifecycle or have different modes of action.

To further explore the findings and implications of this study, we recommend that further research, with much larger sample sizes and encompassing parasites from different geo-climatic regions, be conducted to enhance our understanding of W. bancrofti vector infection status. Additionally, further research comparing the prevalence of W. bancrofti in the human population with that among mosquito vectors in the study area and other endemic areas is of paramount importance, to draw clear conclusions regarding W. bancrofti infection prevalence in Tanzania.
CONCLUSION
A high W. bancrofti vector infection rate of 3.3% was found in the present study, indicating a high likelihood of human infection in the area. Most mosquitoes collected were Cx. quinquefasciatus, which calls for integrated mosquito control interventions to lower the risk of W. bancrofti transmission from mosquitoes to humans. Additional research is needed to gain an in-depth understanding of the W. bancrofti larval stages in mosquitoes, their drug sensitivity and susceptibility profiles, and fecundity. Such information would inform treatment strategies and decision making related to, for example, how long to run MDA programmes and the optimal size of the human population treatment unit.
TABLE 1. Proportion of Mosquito Species Collected for the Detection of Wuchereria bancrofti in Selected Villages of Pangani District, Northeastern Tanzania
TABLE 2. Infection Rates of Mosquitoes with Wuchereria bancrofti, as Determined by Polymerase Chain Reaction Pool Screening. Abbreviations: CI, confidence interval; ERI, estimated rate of infection | 2020-01-10T13:03:11.777Z | 2019-03-25T00:00:00.000 | {
"year": 2019,
"sha1": "e37f0563695e12b2b45a1feb59529c93ee68772e",
"oa_license": "CCBY",
"oa_url": "http://easci.eahealth.org/index.php/easci/article/download/16/16",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "e37f0563695e12b2b45a1feb59529c93ee68772e",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
139883277 | pes2o/s2orc | v3-fos-license | The Realization of Redistribution Layers for FOWLP by Inkjet Printing †
The implementation of additive manufacturing technology (e.g., digital printing) in the electronic packaging segment has recently received increasing attention. In almost all types of fan-out wafer level packaging (FOWLP), redistribution layers (RDLs) are formed by a combination of photolithography, sputtering and plating processes. Alternatively, in this study, inkjet-printed RDLs were introduced for FOWLP. In contrast to a subtractive method (e.g., photolithography), additive manufacturing techniques allow depositing the material only where it is desired. In the current study, RDL structures for different embedded modules were realized by inkjet printing and further characterized by electrical examinations. It is proposed that a digital printing process can be a more efficient and lower-cost solution, especially for rapid prototyping of RDLs, since several production steps are skipped, less material is wasted and the supply chain is shortened.
Introduction
FOWLP is a rapidly growing high-density packaging technology, which has many advantages such as short interconnections, low thermal resistance, high RF performance, and small package outline dimensions. Many activities dealing with FOWLP are under way worldwide. It can be used for multi-chip packages for System in Package (SiP) and heterogeneous integration. However, there are still several technical challenges facing FOWLP that need to be overcome, such as the die shifting issue, warpage due to the mismatch of the materials' coefficients of thermal expansion (CTE), and the fabrication cost and complexity of the redistribution layers (RDLs). For FOWLP, two basic process flows are encountered: the "mold-first" or the "RDL-first" approach [1-3]. The mold-first process flow is depicted in Figure 1, beginning with the placement of known good dies onto a temporary carrier with thermal release tape laminated on top. After overmolding and subsequent release of the temporary adhesive tape, the redistribution layers (RDLs) are fabricated (step 5). RDLs are typically metal interconnection schemes or metal traces that route the electrical signals from one part of the package to another. This is usually achieved by combining photolithographic processes with sputtering and plating. In the case of a high-density FOWLP, multiple layers of RDL are required to support the necessary routing. The metal traces together with dielectric isolation layers form multi-layer RDLs.
In the present study, however, an alternative approach is investigated, namely inkjet printing of metallic routes, thereby allowing for fast prototyping, which typically is not possible with RDL formation via multiple process steps. In another article, screen-printed Ag paste was suggested as a means to realize RDL structures and fill the vias [4].
Methodology
A wafer-level SiP concept was pursued in this project, in which capacitive MEMS microphones together with their respective low-power ASICs are molded simultaneously in a single package. As schematically shown in Figure 2, RDL structures are required in order to connect the microphone pads to the ASICs as well as to fan out the signals via solder balls. The main benefit of using inkjet printing instead of photolithography for fabricating RDLs is the shortened supply chain, which makes rapid prototyping of such customized SiPs much easier. Another benefit of inkjet printing comes from its digital, on-demand deposition capability [5]. The whole procedure of photoresist coating, baking, exposure, development, seed layer sputtering, metal plating and etching can be replaced by inkjet printing of single or multiple layers of metallic and dielectric inks followed by sintering. In our previous works, inkjet-printed Ag traces were introduced as a versatile means for rapid prototyping of electronic packages [6,7].
Materials
In this study, 8″ (200 mm) wafers were fabricated using an epoxy molding compound with spherical SiO2 filler particles with a maximum size of 75 µm. Inkjet printing of Ag tracks was carried out with an advanced R&D inkjet printer with a 50 µm nozzle diameter and a 50 pl calibrated drop size. A commercial nanoparticle Ag ink with 50 wt% metal loading and a particle size of 110 nm (d90) was deposited at an operational jetting voltage of 100 V, a printing frequency of 500 Hz and a carefully adjusted jetting pulse duration profile. The printing was performed at room temperature, while the substrate was heated to 50 °C. The most efficient sintering methodology for the printed Ag RDLs (e.g., pulsed light sintering, thermal sintering, plasma sintering) was subsequently determined by means of four-point probe measurements [8]. In fact, since a SiP combines different materials with divergent surface and physical properties, all of which are exposed to the light or plasma source concurrently, uniform treatment of the inkjet-printed Ag nanoparticles was not achieved with those methods. By contrast, upon thermal curing at 150 °C for 1 h, the Ag lines were sintered uniformly regardless of the material beneath. Considering the print head configuration used in this study and the surface properties of Ag on EMC, the minimum width and pitch of the Ag tracks were defined to be 80 µm and 160 µm, respectively. The pads on the chip were 100 µm in diameter.
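The sheet-resistance arithmetic behind such four-point probe measurements is compact enough to sketch; the function below uses the standard geometry factor for a collinear probe on a thin, laterally large film. The voltage and current readings in the usage example are purely hypothetical, since the paper does not report raw probe values.

```python
import math

def sheet_resistance(voltage_v, current_a):
    """Sheet resistance (ohm/sq) of a thin film from a collinear
    four-point probe: R_s = (pi / ln 2) * V / I."""
    return (math.pi / math.log(2.0)) * voltage_v / current_a

def trace_resistance(r_sheet, length_um, width_um):
    """End-to-end resistance of a printed line, approximated as
    R = R_s * (L / W) for a film of uniform thickness."""
    return r_sheet * (length_um / width_um)

# Hypothetical probe reading, not a value from this study.
rs = sheet_resistance(voltage_v=2.1e-3, current_a=10e-3)
print(f"R_s = {rs:.3f} ohm/sq")
print(f"10 mm long, 80 um wide trace: {trace_resistance(rs, 10_000, 80):.1f} ohm")
```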
The morphology of the printed RDLs was characterized using scanning electron microscopy (SEM, Quanta 200 FEGSEM, FEI) and a focused ion beam workstation (FIB, Quanta 200 3D, FEI). A thin Pt-based layer was deposited onto the surface in order to protect the printed traces from incurring FIB-induced damage.
Results and Discussion
In Figure 3, one realization of the inkjet-printed RDLs for FOWLP is demonstrated. The respective cross-sectional images are shown in Figure 3c-f. As seen, 4 Ag lines resembling a single-layer RDL structure were generated to connect the Au pads of the chip to other components. As inferred from these figures, the Ag lines managed to cross over four levels: chip pad, substrate, frame and mold. The corresponding interfaces, spotlighted in Figure 3, are Au pad to SiO2 substrate (3c), SiO2 substrate to the chip frame (3d), and the frame to EMC mold (3e). Sharp steps can be considered the bottleneck for the printed lines. While the molded components should in principle stay flat at the wafer level, depending on the material selection and process conditions they might exhibit an out-of-plane offset of a few micrometers. Indeed, it was observed that the molded components were 3-6 µm higher than the mold wafer's surface. The functionality of the Ag traces, in terms of electrical conductivity, was subsequently assessed, as given in Table 1. The configurations of the inkjet-printed Ag RDLs were compared to those of conventional RDLs fabricated by photolithography and plating. As perceived from this list, some fundamental differences in the final properties of the RDLs can be recognized. Suffice it to say that inkjet-printed RDLs need to be adjusted with respect to the respective substrate materials. Due to the temperature restrictions of the EMC, thermal sintering of the Ag lines was applicable only up to 150 °C. It was found that thermal sintering at higher temperatures (>150 °C) led to an increase in wafer warpage, most likely caused by the continuing cure reaction of the molding compound and the resulting cure shrinkage. The thickness of the inkjetted RDLs can simply be increased by multi-pass printing, whereas multilayered RDLs are feasible by successive printing of alternating layers of Ag and dielectric structures.
Conclusions
In this study, the feasibility of employing inkjet printing in FOWLP was investigated. It was shown that inkjet-printed Ag lines can be a promising alternative to conventional RDLs produced by photolithography and electroplating. Different sintering techniques were assessed, and thermal sintering appeared to be the most suitable methodology. Though thermal curing can cause increased mold wafer warpage, this can nevertheless be handled during the process flow until package singulation. It was revealed that small out-of-plane offsets of the molded components can be tolerated and functional RDL structures can be generated.
Figure 3. An example of inkjet-printed RDLs in FOWLP: a microphone chip with 4 pads before molding (a) and a molded one with Ag RDLs (b); the corresponding FIB-cut cross-sectional view (c) and the three critical interfaces between pad and substrate (d), substrate and frame (e), and frame and mold (f).
Table 1. Material properties of the inkjet-printed RDLs in comparison to the conventional RDLs. * All the chip pads in this study were 100 µm in diameter. ** Corresponds to a single-pass inkjet printing. | 2019-02-11T13:53:36.061Z | 2018-12-13T00:00:00.000 | {
"year": 2018,
"sha1": "768851c060ed5d3b7ad790d0b153ebb621e2846e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2504-3900/2/13/703/pdf?version=1544664961",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "768851c060ed5d3b7ad790d0b153ebb621e2846e",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
267785062 | pes2o/s2orc | v3-fos-license | ADVERTISING IN THE URBAN LANDSCAPE AS A THEME FOR EDUCATIONAL ACTIVITIES BASED ON THE EXAMPLE OF A FIELD PRACTICE IN PABIANICE
The article describes possible practical educational activities aimed at respecting the landscape, in particular the cultural landscape, in which various types of advertisements appear in an uncontrolled form. The presented considerations are based on a field practice conducted in 2018 with students of spatial management, during which advertisements on the main street of Pabianice were inventoried. The collected material, conclusions, and recommendations serve as a starting point for a broader discussion on the advertising chaos in the city.
Introduction
In my daily work with students of spatial management, I notice considerable deficits from the earlier stages of education regarding awareness of co-responsibility for the surrounding space and the need to participate in the processes of its transformation. As a parent, I notice that during preschool and school education there are no activities that stimulate curiosity and encourage learning about and consciously analysing elements of the immediate space around us.
Currently, much space in various policies is dedicated to public participation. Increasingly, local governments are being mobilised through legislation, e.g. the Revitalisation Act (Ustawa ..., 2015a), to involve stakeholders in the ongoing processes of spatial change. We expect adults to be socially active and to participate consciously in various activities undertaken by the municipality, without having prepared them in advance, in the process of education, for such attitudes and without equipping them with the necessary knowledge to do so.
Landscape protection is a subject that should absolutely be discussed at various stages of education and one that would help users to consciously perceive the surrounding space. This issue could cover many interesting and current themes, including the one related to the appearance of advertisements in various spaces, often strongly interfering with the cultural landscape, i.e. landscape already transformed by man as a result of civilisational development (e.g. Kalinowski, 2023; Myga-Piątek, Nita, 2006; Pawłowska, Swaryczewska, 2002). It is possible, as in this article, to evaluate the degree and intensity of the distribution of messages in different spaces, or to analyse in detail the content they communicate to us and the reactions they generate in the audience (Aparecida Campos et al., 2021; Hoewe, Ahern, 2017; Hansen, Machin, 2013; Cronin, 2013).
It also remains incontestable that assessments of quality of life should take the aesthetics of the landscape into account, despite its subjective perception (Kerebel et al., 2019). Also of interest are studies involving creators of outdoor advertising, who transform space by segmenting and valuing specific areas of cities and the routes to and around them (Cronin, 2006). From H. Lefebvre's (1991) perspective, the advertising industry has become an important actor in spatial design, along with architects and urban planners.
An educator can create an opportunity to talk to young people about topics related to the above research and generate discussion about the ubiquitous advertising chaos that prevails in Poland. This could allow them to make their own observations and to analyse and explore this subject matter during organised activities.
This article presents one such activity aimed at students. The presented considerations are based on a specialised field practice conducted in 2018 in Pabianice for students of the master's degree in spatial management. They can inspire various educational activities covering the issue of advertising in the surrounding space. The publication also focuses attention on the possible practical dimension of conducting such activities, which definitely increases participants' motivation and involvement.
In the case under discussion, the choice of topic and research area was made together with the Pabianice City Hall, and the results became the subject of a report prepared for the authorities (Masierek, Kurzyk, 2018). It was supposed to serve as a starting point for a broader discourse with the local community (especially entrepreneurs) on the advertising chaos occurring on the city's most important street and for planning the effective use of its potential, both economic and visual. The stocktaking and the conclusions may serve as an example for other cities, especially those dealing with this important issue from the point of view of taking care of the aesthetics of space and creating effective instruments for its protection.
The Polish Landscape Act (Ustawa …, 2015b) is of an optional nature. It indicates the preferred direction of the necessary changes with respect to the control and protection of space from advertising chaos and the possible instruments to be prepared. However, it does not require municipalities to prepare local advertising codes (Masierek, Pielesiak, 2018). Additionally, taking into account the procedural problems accumulated during the adoption of resolutions and their varied public perception, this topic is neither easy nor "politically attractive" for local governments.
Theoretical background
Space is a mix of natural and cultural components. We see it every day in the form of landscape. Where the work of nature predominates, we speak of the natural landscape; where the work of man predominates, of the cultural landscape (Pawłowska, Swaryczewska, 2002). The cultural landscape of a selected space is a sign of the values created by successive, overlapping generations (Moore, Whelan, 2007). This contemporary understanding of the concept is connected to international discourse and the concept of the so-called historic urban landscape (Bandarin, van Oers, 2012). The processes that take place within a city and the interpretation of its heritage result from the friction and interplay of the activities of many actors (Domański, Murzyn-Kupisz, 2021). It is essential that all stakeholders engage in dialogue and cooperation to protect their common heritage and its potential.
The analysis of the landscape of any area can be conducted from a structural, functional, and material point of view, where the diversity of its components is assessed (Richling, Solon, 1994), or from a physiognomic point of view (Bogdanowski, 1972), where spatial relationships resulting from the perception of a given space and its composition are largely analysed (Pawłowska, 1994; Wojciechowski, 1994). Both of these approaches cross-fertilise and complement each other; therefore, it is worth taking lessons from their achievements while doing research.
Whether they want to be or not, people in the space of a modern city are surrounded by information. The forms of its transmission are rich and varied. From the user's point of view, it seems important to be able to control (limit) the information stream and to create the best possible quality of that stream (Gamdzyk, 2017). It is difficult to imagine today's space completely devoid of advertising, and the very phenomenon of visual information seems genuinely necessary. According to R. Venturi et al. (1977), advertisements can reinforce or authenticate the message that is directed at us in space. However, this should be done in orderly and unobtrusive ways. Mature democracies have coped with advertising chaos (Gamdzyk, 2017). They have certainly been helped by a more informed civil society, as well as by the spatial planning and cultural landscape protection tools in place. These tools do not solve all dilemmas, however, such as the legitimacy of placing advertisements on urban infrastructure facilities or in public spaces under public-private transactions (Iveson, 2012).
In Poland, there is still a tendency towards excess in the amount and intensity of advertisements appearing in space. We lack respect for our surroundings and landscape. There is a need for continuous education about space as a common and universally respected good. The legal treatment of the location of advertisements and billboards, first in the Act on Spatial Planning and Development and then in the Landscape Act, is supposed to help create rules for the use of space, but it does not ultimately solve the problem (Nowak, 2017; Fogel, 2016).
Even the biggest sanction for advertising screens on the facades of historic buildings will not make the advertising industry realise that this is not the way it should go... Along with the legal changes, we need to seriously consider how to teach about space and how to make our approach to it change from one full of ignorance to a sincere concern (Głogowski, 2017, p. 24).
Therefore, undertaking educational activities with diverse audiences, helping them to consciously observe space and react to anomalies that appear in it, seems crucial. In particular, sensitising young people to the issues of the space surrounding them and creating changes together with them gives hope for better-quality space in the future.
Study area and methods
The field practice was prepared for first-year students of the Spatial Management Master's Programme. Its aim was to diagnose the intensity of advertising in the city's main street, and then to critically assess it and formulate conclusions and recommendations based on the assessment. As part of the preparation for the class, a lecture on the spatial structure and the most important conditions of Pabianice was given to the participants by a representative of the city hall, so that the students could become better acquainted with the site of the field research. A research tool was then developed in the form of an inventory card which listed all the advertisements present in the study area, considering their location, type, volume and technical condition. In addition, space was left to mark where there was the highest concentration of advertisements and where they clearly did not suit the surroundings. Diverse types of advertising were distinguished, i.e. signboards, window films, banners, pieces of paper, billboards, poster-advertising glass cases, trestles/posters, citylights, LED advertising, megaboards and totems. Forms of advertising were distinguished as single-sided and double-sided, and four categories of advertising size were used:
• small, i.e. small billboards (up to 1 m² in surface);
• medium, mainly advertising banners hung on fences (1-9 m²);
• large, usually 6x3 system carriers (9-18 m²);
• very large, large-surface advertising media usually placed on facades of buildings.
The technical condition of the advertisement (good, average, bad) and its aesthetics (rated on a scale of 1 to 5) were also important elements taken into account. Additionally, an attempt was made to connect each analysed advertisement with the industry it represents. With this in mind, sections of the Polish Classification of Activities, introduced by the Ordinance of the Council of Ministers of 2007 (Regulation..., 2007), were used, with close reference to the European Classification of Activities.
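A minimal sketch of how one row of such an inventory card could be modelled is shown below; the field names and category values are illustrative reconstructions of the attributes described above, not the actual card used during the practice.

```python
from dataclasses import dataclass
from enum import Enum

class Size(Enum):                  # the four size categories listed above
    SMALL = "up to 1 m2"
    MEDIUM = "1-9 m2"
    LARGE = "9-18 m2"
    VERY_LARGE = "large-surface / facade-scale"

@dataclass
class AdRecord:
    """One inventoried advertisement (field names are illustrative)."""
    location: str                  # street section / building address
    ad_type: str                   # signboard, window film, banner, ...
    size: Size
    double_sided: bool
    technical_condition: str       # good / average / bad
    aesthetics: int                # 1 (worst) to 5 (best)
    pkd_section: str               # Polish Classification of Activities section
    contrasts_with_surroundings: bool = False

record = AdRecord("Zamkowa 12", "banner", Size.MEDIUM, False, "average", 2, "G")
```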
The field trip itself was followed by a meeting at the Pabianice City Hall with representatives of the office and a detailed discussion of the specifics of the study area and the planned use of the analysis results. Archival photographs of the study area were reviewed. The fieldwork took place in pre-established groups, whose collected data were finally aggregated into a single, coherent database.
The fieldwork was conducted in June 2018 in Pabianice, a town of approximately 58,600 inhabitants (Raport..., 2022) belonging to the Łódź agglomeration. The research area involved a fragment of the city's main thoroughfare, known as Trakt Kapituły Krakowskiej, with a length of approximately 2.3 km (a sequence of Zamkowa Street, Stary Rynek, and Warszawska Street), which was selected together with representatives of the Pabianice City Hall. It represents a characteristic axis of the urban composition of Pabianice with an east-west layout, concentrating objects and functions related to the development of the city (Fig. 1). For the purposes of the field practice, the area was divided into 5 parts, each examined by a different group of students. The selected area is a showcase for the city and refers to its industrial traditions. It brings together objects and functions related to the city's development, including the Old Market Square, which, after the creation of the New Town, has definitely lost its importance, as is clearly visible both in the spatial structure of the city and in the inhabitants' perception of this place.
Advertising as a theme of educational activity -collaboration with practice and diagnosis
In 2015, the Landscape Act was introduced in Poland; it was supposed to unify the terms related to advertising and define the role and scope of influence of local authorities over where and how advertising is distributed in the municipality. Some local authorities have themselves undertaken to put advertising space in order, especially in city centre areas, while others prefer to observe the effects of these measures and wonder how to translate this rather difficult topic into reality in their municipalities. Unquestionably, however, in order to start thinking about any changes in this area, it is necessary to conduct a thorough baseline diagnosis, a real inventory of advertisements in the space, and to identify the most problematic locations in this respect. In addition to looking for deficits, it is also worth pointing out good examples of advertisements occurring in the municipality that fulfil their role while integrating harmoniously into their surroundings (Masierek, Pielesiak, 2018).
However, local authorities often do not have the time or the human and financial resources for such work. In this case, it is useful to think about utilising students' potential and cooperating with universities. The benefit is usually mutual, as students are much more willing to do tasks whose results can be used in practice. Therefore, an important element in teaching (using different forms of classes) is to create opportunities to continuously link theory and practice and to «confront» students with the real challenges of planning, organising and managing space. This is possible, among other things, through continuous contacts with various local authorities; one such contact was the field practice in Pabianice (Masierek, Kurzyk, 2018), which is the subject of this article. The purpose of the latter was to lay the basis for a discussion about the city's advertising chaos and to begin corrective steps in this area. To start such a discourse, a good diagnosis is needed as a starting point for planning change. An effort to establish such a diagnosis was carried out by full-time master's students in spatial management. Their work consisted, among other things, of conducting an inventory of advertisements in a designated area. The students took stock of the types, volume, location, technical condition and aesthetics of the advertising media. The study also assessed the degree of intensity of the advertisements and how they inscribed themselves into the surroundings. Photographic documentation was made. The strengths and weaknesses of the area were analysed in terms of the advertisements present. In addition, a comparison was made between archival photographs and the state at the time of the inventory.
The results of the survey confirm that advertisements are present in the selected area in high concentration and, moreover, they often clearly contrast with the surroundings (Fig. 2) or cover up monuments (Fig. 3).
A total of 1,609 different cases of advertising were inventoried. Signboards and film on glass are the largest in number, accounting for 30% and 22% of all advertisements, respectively. So-called "pieces of paper" (8.1%), small advertisements often pasted on trees or lampposts, were also found in relative abundance. The open category "other" included advertisements on vehicles or on umbrellas in beer gardens and outdoor restaurants (Tab. 1).
93% of the inventoried advertisements related to the activities carried out at the location, and 88% of the media were on buildings. Most were small advertisements with a surface of less than 1 m² (Fig. 4). Overall, the technical condition of the advertisements in the study area was assessed as good. However, one-third of the media received the lowest score in terms of aesthetics.
According to the classification of economic activities adopted in the study, the most frequently advertised industry is retail and wholesale trade and repair of motor vehicles excluding motorbikes. Apart from these, there are numerous advertising media reporting on financial, insurance, and catering activities. A high proportion of advertising was also attributed to other service activities (e.g. repair of electronic equipment, watches, jewellery or shoes, and hairdressing or beauty services). This shows the clearly service-oriented nature of Pabianice's main street (Tab. 2).
Table 2 (fragment): Other service activities: 481; Households with employees or producing goods and services for own use: 2; Extra-territorial organisations and teams: 11.
Source: Own study.
As part of the summary of the work, the strengths and weaknesses of the study area were identified in the context of the observed advertisements (Tab. 3). Among others, the presence of good examples (Fig. 5), the relatively low proportion of large-size advertisements, and the considerable service potential of the analysed area were noted. Unfortunately, the list of weaknesses prevails, including too many advertisements covering building facades, their aggressive colours, and repeated advertising information.
After the field practice, a report on the conducted research and analyses was prepared and submitted, along with recommendations, to the Pabianice Municipality. Unfortunately, in the end, the decision to discuss the issue more widely in the city was not taken. Perhaps the clearly stated diagnosis did not particularly suit the political image of the authorities, and the previously mentioned negative experiences of other local authorities did not encourage them to take up the topic.
Conclusions
Undoubtedly, in every city it is beneficial to undertake educational actions about space that reach different groups (Fogel, 2016; Głogowski, 2017), including actions drawing attention to the advertising chaos around us. It would be worth supporting and promoting good examples of the use of outdoor advertising (Masierek, Pielesiak, 2018).
The results of the research presented in this article may inspire a broader discussion on the subject. A good diagnosis of the existing situation should always be a starting point for any initiated actions and plans. The conducted analyses show that, as a rule, such actions should at least:
• eliminate poor quality advertising,
• minimise the content and form of advertisements and avoid repetition,
• establish the proportions of advertisements in relation to the objects on which they are placed,
• match colours to the surroundings (Szmygin, 2015; Masierek, Kurzyk (eds.), 2018).
However, each space has its own individual endogenous conditions, so it is worth getting to know it very well (its strengths, weaknesses, and opportunities) so that the recovery programme can be adapted individually to it and to the needs of its users.
Students participating in the described activities, in addition to practising the skills associated with doing their own inventory work and analysing it, also had an opportunity to meet representatives of the Pabianice City Hall and participate in a project directly responding to the needs of the placement host. This was certainly additional motivation to complete the task reliably and to present themselves at their best to a potential employer or client. The fact that the final study was not used by the office is, despite appearances, another interesting issue to discuss with the students in order to explain an often non-obvious reality. Thanks to this exercise, the participants learned about the cultural landscape and the legal conditions for its protection, and they became familiar with the study area and its conditions. When making inventories of specific advertisements, the students actually had to ask themselves many questions related to them and then make their own critical analysis and evaluation. Thanks to this educational exercise, they have become sensitive to the issue of advertising chaos, and their perception of space has probably been influenced. Hopefully, they will put the knowledge and experience they have obtained to use in their current work or life in various spaces and, as citizens, will not remain indifferent to the ruination of the landscape.
One such example is the cooperation with the city of Pabianice undertaken by the Author of the publication in 2016, which resulted in various educational activities, including:
• completed M.A. theses addressing important topics for the city, e.g. Influence of social infrastructure on the quality of life of residents of the Piaski housing estate in Pabianice (2018), Revitalisation of the Old Market area in Pabianice (2019) or Aspect of the Green in urban revitalisation on the example of Juliusz Słowacki Park in Pabianice (2020);
• specialised field studies, as part of which a research project was conducted with residents entitled «How to change the Old Market Square in Pabianice?» (Jak zmienić …, 2016);
• the specialty field practice which included an inventory of advertisements in the space of the city's main street (Masierek, Kurzyk, 2018).
Table 3. Strengths and weaknesses of the study area in the context of the inventoried advertisements.
Strengths:
• High proportion of advertising relating to economic activity in the study area (90%)
• Low proportion of very large (1%) and large (5%) advertisements
• Good technical condition of most advertisements (60%)
• Examples of good practices in the use of advertising (19%)
• High service potential of the Krakowska Kapitula Route
Weaknesses:
• Too many advertisements on building facades and their negative impact on the landscape
• Repeated signs with the same information on facades of some buildings
• Aggressive colours of advertisements (bright colours, high contrast between colours)
• Lack of coherence between the colours of advertisements and the colour of building facades
• Lack of correspondence between the size of advertisements and the size of buildings
• Inappropriate location of advertisements (on balconies, fences)
• Advertisements covering entire shop windows or windows of buildings, limiting the flow of daylight into interiors
• Poor quality of advertisements
• Prevalent average (27%) and poor (22%) aesthetics | 2024-02-23T16:04:50.472Z | 2023-12-29T00:00:00.000 | {
"year": 2023,
"sha1": "7e3c88434dde43a7881c0b04836f439c7bdab4e6",
"oa_license": "CCBY",
"oa_url": "https://czasopisma.bg.ug.edu.pl/index.php/JGPS/article/download/10436/9410",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "444ede8428d91ae4c64a71d851789b924e2ca8a5",
"s2fieldsofstudy": [
"Geography",
"Political Science",
"Education"
],
"extfieldsofstudy": []
} |
871238 | pes2o/s2orc | v3-fos-license | Identification of Patients with Congestive Heart Failure using a binary classifier: a case study.
This paper addresses a very specific problem that happens to be common in health science research. We present a machine learning based method for identifying patients diagnosed with congestive heart failure and other related conditions by automatically classifying clinical notes. This method relies on a Perceptron neural network classifier trained on comparable amounts of positive and negative samples of clinical notes previously categorized by human experts. The documents are represented as feature vectors where features are a mix of single words and concept mappings to MeSH and HICDA ontologies. The method is designed and implemented to support a particular epidemiological study but has broader implications for clinical research. In this paper, we describe the method and present experimental classification results based on classification accuracy and positive predictive value.
Introduction
Epidemiological research frequently has to deal with collecting a comprehensive set of human subjects that are deemed relevant for a particular study. For example, research focused on patients with congestive heart failure needs to identify all possible candidates for the study so that the candidates can be asked to participate. One of the requirements of a study like that is the completeness of the subject pool. In many cases, such as disease incidence or prevalence studies, it is not acceptable for the investigator to miss any of the candidates. The identification of the candidates relies on a large number of sources, some of which do not exist in an electronic format, but it may start with the clinical notes dictated by the treating physician.
Another aspect of candidate identification is prospective patient recruitment. Prospective recruitment is based on inclusion or exclusion criteria and is of great interest to physicians for enabling just-in-time treatment, clinical trial enrollment, or research study options for patients. At Mayo Clinic, most clinical documents are transcribed within 24 hours of patient consultation. This electronic narration serves as a resource for enabling prospective recruitment based on criteria present in clinical documents.
Probably the most basic approach to identification of candidates for recruitment is to develop a set of terms whose presence in the note may be indicative of the diagnoses of interest. This term set can be used as a filtering mechanism by either searching on an indexed collection of clinical notes or simply by doing term spotting if the size of the collection would allow it. For example, in case of congestive heart failure, one could define the following set of search terms: "CHF", "heart failure", "cardiomyopathy", "volume overload", "fluid overload", "pulmonary edema", etc. The number of possible variants is virtually unlimited, which is the inherent problem with this approach. It would be hard to guarantee the completeness of this set to begin with, which is further complicated by morphological and spelling variants. This problem is serious because it affects the recall, which is especially important in epidemiological studies.
Another problem is that such a term spotting or indexing approach would have to be intelligent enough to identify the search terms in negated and other contexts that would render documents containing these terms irrelevant. A note containing "no evidence of heart failure" should not be retrieved, for example. Identifying negation reliably and, more importantly, its scope is far from trivial and is in fact a notoriously difficult problem in Linguistics [1]. This problem is slightly less serious than the completeness problem since it only affects precision, which is less important in the given context than recall.
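To make both failure modes concrete, the sketch below implements the naive approach: a deliberately incomplete term list and a crude fixed-window negation heuristic. Both the list and the window size are assumptions for illustration; they are exactly the parts that break down in practice, as argued above.

```python
import re

# Illustrative subset of the CHF term set; a real list would also need
# morphological and spelling variants, which is the completeness problem.
CHF_TERMS = ["chf", "heart failure", "cardiomyopathy", "volume overload",
             "fluid overload", "pulmonary edema"]
NEGATION_CUES = ["no evidence of", "no ", "denies", "without", "negative for"]

def spot(note, window=40):
    """Return CHF terms found outside an (approximate) negation scope.

    The heuristic (a negation cue within `window` characters before the
    term) is a crude stand-in for proper scope detection, which is the
    hard problem discussed above.
    """
    text = note.lower()
    hits = []
    for term in CHF_TERMS:
        for m in re.finditer(re.escape(term), text):
            context = text[max(0, m.start() - window):m.start()]
            if not any(cue in context for cue in NEGATION_CUES):
                hits.append(term)
    return hits

print(spot("No evidence of heart failure; history of pulmonary edema."))
# -> ['pulmonary edema']
```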
In order to be able to correctly identify whether a given patient note contains evidence that the patient is relevant to a congestive heart failure study, one has to "understand" the note. Currently, there are no systems capable of human-like "understanding" of natural language; however, there are methods that allow at least partial solutions to the language understanding problem once the problem is constrained in very specific ways. One such constraint is to treat language understanding as a classification problem and to use available machine learning approaches to automatic classification to solve the problem. Clearly, this is a very limited view of language understanding but we hypothesize that it is sufficient for the purposes referred to in this paper.
Previous work
The classification problems that have been investigated in the past are just as varied as the machine learning algorithms that have been used to solve them. Linear Least Squares Fit [2], Support Vector Machines, decision trees, Bayesian learning [3], symbolic rule induction [4], maximum entropy [5], and expert networks [6] are just a few that have been applied to classifying e-mail, Web pages, newswire articles, and medical reports, among other documents.
Aronow et al. [7] have investigated a problem very similar to the one described in this paper. They developed an ad hoc classifier based on a variation of the relevance feedback technique for mammogram reports, where the reports were classified into three "bins": relevant, irrelevant and unsure. One of the features of the text processing system they used was the ability to detect and take into account negated elements of the reports.
Wilcox et al. [8] have experimented with a number of classification algorithms for identifying clinical conditions such as congestive heart failure, chronic obstructive pulmonary disease, etc. in radiograph reports. They found that using an NLP system such as MedLEE (Medical Language Extraction and Encoding System) and domain knowledge sources such as the UMLS [9] for feature extraction can significantly improve classification accuracy over the baseline where single words are used to represent training samples.
Jain and Friedman [10] have demonstrated the feasibility of using MedLEE for classifying mammogram reports. Unlike Wilcox [8], this work does not use an automatic classifier, instead, it uses the NLP system to identify findings that are considered suspicious for breast cancer.
Naïve Bayes vs. Perceptron
We experimented with two widely used machine learning algorithms, Perceptron and Naïve Bayes, in order to train models capable of distinguishing clinical notes that contain sufficient evidence of the patient having the diagnosis of congestive heart failure (positive examples) from notes that do not contain such evidence (negative examples). The choice of the problem was dictated by a specific grant aimed at studying patients with congestive heart failure.
The choice of the algorithms was largely dictated by efficiency considerations. Both Perceptron and Naïve Bayes belong to the family of linear classifiers, which tend to be computationally more manageable than other algorithms on large feature sets like the one we are addressing. Damerau et al. [11] show on the Reuters corpus that sparse feature implementations of linear algorithms are capable of handling large feature sets. We used a sparse feature implementation of these two algorithms available in the SNoW (Sparse Networks of Winnows) Version 2.1.2 package [12].

Perceptron and Naïve Bayes classifiers
Perceptron is a simple iterative learning algorithm that represents, in its simplest form, a two-layer (input/output) neural network where each node in the input layer is connected to each node in the output layer. A detailed description can be found in [13] and [14]. There are several well known limitations of this algorithm. The most significant is that the simple Perceptron is unable to learn non-linearly separable problems. In order for this algorithm to work, one should be able to draw a hyperplane in the training data feature space that linearly separates positive examples from negative ones. With large multidimensional feature spaces, it is hard to know a priori whether the space is linearly separable; however, a good indication can be gleaned from classification accuracy testing on several folds of training/testing data. If the accuracy results show large fluctuations between folds, that is a good indication that the space is not linearly separable. On the other hand, if the standard deviation on such a cross-validation task is relatively small, one can be reasonably certain that Perceptron is a usable technique for the problem.
The other, less serious, limitation is that there is a chance that the algorithm will falsely conclude convergence in a local minimum on the error function curve without reaching the global minimum, which could also account for low or inconsistent accuracy results. This limitation is less serious because it can be controlled to some extent with the learning rate parameter, which sets the amount by which the weights are adjusted each time Perceptron makes a classification error during training [14].
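For concreteness, a minimal sketch of the classical perceptron update follows. It is a generic textbook implementation, not the SNoW package used in this study; the default learning rate and error threshold simply mirror the settings reported in the Training section.

```python
import numpy as np

def train_perceptron(X, y, lr=0.0001, epochs=1000, error_threshold=15):
    """Single-output perceptron on dense feature vectors; y in {0, 1}.

    Weights move only when a training example is misclassified, by an
    amount controlled by the learning rate lr.
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            if pred != target:
                update = lr * (target - pred)   # +lr or -lr
                w += update * xi
                b += update
                errors += 1
        if errors <= error_threshold:           # "good enough" convergence
            break
    return w, b
```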
Naïve Bayes does not have the limitations of Perceptron, but does have limitations of its own. The Bayes decision rule chooses the class that maximizes the conditional probability of the class given the context in which it occurs:

C' = argmax_{C_k ∈ C} P(C_k | V_j)    (1)

Here, C' is the chosen category, C is the set of all categories and V_j is the context. The Naïve Bayes decision algorithm makes the simplifying assumption that the words in V_j are independent of each other given the class. A particular implementation of the Naïve Bayes decision rule that applies the independence assumption to text categorization and word sense disambiguation problems is also known as the "bag of words" approach [13]. This approach does not attempt to take into account any possible dependency between the individual words in a given context; in fact, it assumes that the word "heart" and the word "failure", for example, occur completely independently of each other. Theoretically, such an assumption makes Naïve Bayes classifiers very unappealing for text categorization problems, but in practice they have been shown to perform well on a much greater range of domains than the theory would support.
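A minimal "bag of words" sketch of this decision rule is given below, using textbook Laplace smoothing in log space. It is not the SNoW implementation, and its smoothing parameter alpha is not the same quantity as the SNoW smoothing parameter of 15 mentioned later.

```python
import math
from collections import Counter, defaultdict

def train_naive_bayes(docs, labels, alpha=1.0):
    """docs: list of token lists; labels: parallel list of class names.
    Returns a classify(tokens) function implementing
    argmax_C [ log P(C) + sum_i log P(v_i | C) ] with Laplace smoothing."""
    class_counts = Counter(labels)
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, c in zip(docs, labels):
        word_counts[c].update(tokens)
        vocab.update(tokens)
    n_docs, n_vocab = len(docs), len(vocab)
    totals = {c: sum(wc.values()) for c, wc in word_counts.items()}

    def classify(tokens):
        best, best_lp = None, -math.inf
        for c in class_counts:
            lp = math.log(class_counts[c] / n_docs)        # log prior
            for t in tokens:                               # log likelihoods
                lp += math.log((word_counts[c][t] + alpha)
                               / (totals[c] + alpha * n_vocab))
            if lp > best_lp:
                best, best_lp = c, lp
        return best

    return classify
```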
The common feature between the two techniques is that both are linear classifiers and are relatively efficient, which makes them attractive for learning from large feature sets with lots of training samples.
CHF pilot study
As part of preliminary grant work to investigate and evaluate incidence, outcome, and etiology trends of heart failure, a pilot study of prospective recruitment using term spotting techniques was conducted. Prospective recruitment was needed for rapid case identification of newly diagnosed heart failure patients within 24 hours.
Within Mayo Clinic, approximately 75% of clinical dictations are electronically transcribed on the date of diagnosis, allowing them to be processed using natural language techniques. Using the terms "cardiomyopathy, heart failure, congestive heart failure, pulmonary edema, decompensated heart failure, volume overload, and fluid overload", all electronic outpatient, emergency department, and hospital dismissal notes were processed. These results were reviewed by trained nurse abstractors to determine if this technique could provide identification of patients with clinically active heart failure. Using the term spotting technique, no cases were omitted as compared to standard human diagnostic coding methods of final diagnosis. This pilot provided a valid basis for using term spotting for prospective recruitment; however, the nurse abstractors reported filtering out a large number of documents that were irrelevant to the query, indicating that there was room for improvement, especially in precision. These were not quantified at the time. The results derived from the test sets used for the study described in this paper display similar tendencies.
Human Expert Agreement
For testing a classifier, it is important to have a test bed that contains positive as well as negative examples that have been annotated by human experts. It is also important to establish some measure of agreement between annotators. For this study, we used a test bed created for a separate pilot study of agreement between annotators (de Groen et al., p. c.), with a specific focus on the diagnosis regarding the patient described within the medical document.
One of the topics selected for this test bed creation study was congestive heart failure. For each topic, 90 documents were selected for evaluation. Seventy of the 90 documents were chosen from documents with a high likelihood of containing diagnostic information regarding the topic of inquiry. Specifically, thirty-five documents were randomly selected from a pool of documents based on a coded final diagnosis, and thirty-five documents were randomly selected from a pool of documents based on a textual retrieval of lexical surface forms (term spotting). The final twenty documents were randomly selected from the remaining documents, not originally included in the coded or text-identified collections. A group of Emeritus physicians acted as the human experts for this annotation task. The experts were instructed to determine whether the information contained in the clinical note could support inclusion of the patient in a clinical/research investigation, if such an investigation was centered on patients having, at the time the note was created, the topic of inquiry.
Each document was judged by three physicians on the following scale: confirmed, probable, indeterminate, probably not, definitely not. For the purposes of our study, we collapsed the "confirmed" and "probable" categories into one "positive" category. We also collapsed "probably not" and "definitely not" into a "negative" category. The "indeterminate" category happened to include such artifacts as differential diagnoses as well as uncertain judgements and was therefore ignored for our purposes. The agreement on this particular topic happened to be low: only 31% of the instances were agreed upon by all three experts; therefore, we decided to use only the agreed-upon subset of the notes for testing our approach. The low level of agreement was partly attributable to the breadth of the topic and partly to how the instructions were interpreted by the experts. Despite the low level of agreement, we were able to select a subset of 26 documents on which all three annotators agreed. These were the documents where all three annotators assigned either the "positive" or the "negative" category. Seven documents were judged "positive" and 19 were judged "negative" by all three experts.
Feature extraction
Arguably, the most important part of training any text document classifier is extracting relevant features from the training data. The resulting data set is a set of feature vectors, where each vector should represent all the relevant information encoded in the document and as little as possible of the irrelevant information. To capture the relevant information and give it more weight, we used two classification schemes: MeSH (Medical Subject Headings) [15] and HICDA (Hospital International Classification of Diseases Adaptation) [16]. The MeSH classification is available as part of the UMLS (Unified Medical Language System) compiled and distributed by the National Library of Medicine (NLM) [9]. HICDA is a hierarchical classification with 19 root nodes and 4,334 leaf nodes. Since 1975, it has been loosely expanded to comprise 35,676 rubrics or leaf nodes. It is an adaptation of ICD-8, which is the 8th edition of the International Classification of Diseases. HICDA contains primarily diagnostic statements, whereas MeSH is not limited to diagnostic statements, and therefore the two complement each other. It should also be noted that, for mapping the text of clinical notes to these two ontologies, in addition to the text phrases present in HICDA and MeSH, some lexical and syntactic variants found empirically in medical texts were also added. For MeSH, these variants were derived from MEDLINE articles by UMLS developers, and for HICDA, the variants came from coded diagnoses. Having these lexical and syntactic variants in conjunction with text lemmatization made the job of mapping relatively easy. Text lemmatization was done using the Lexical Variant Generator's (lvg) 'norm' function, also developed at NLM.
For the purposes of this experiment, we represented each document as a mixed set of features of the following types: MeSH code mappings, HICDA code mappings, single word tokens, and demographic data. First, MeSH and HICDA mappings were identified by stemming and lowercasing all words in the notes and finding their matches in the two ontologies. Next, stop words were deleted from the text that remained unmapped. The remaining words were treated as single word token features. In addition to these lexical features, we used a set of demographic features such as age, gender, service code (the type of specialty provider where the patient was seen, e.g. 'cardiology') and a death indicator (whether the patient was alive at the time the note was created). Since age is a continuous feature, we had to discretize it by introducing ranges A-N arbitrarily distributed across 5-year intervals from 0 to over 70 years old. For this experiment, features that occurred fewer than 2 times were ignored. The extracted feature "vocabulary" consists of 11,118 unique features. Table 1 shows the breakdown of the feature vocabulary by type.
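A simplified sketch of this extraction pipeline is shown below. The `norm` lemmatizer and the two phrase-to-code dictionaries stand in for lvg and the MeSH/HICDA lookup tables, and the greedy longest-phrase matching is an assumption; the paper does not describe the mapping logic at that level of detail.

```python
STOP_WORDS = {"the", "of", "and", "a", "in", "with"}   # abbreviated list
AGE_BINS = "ABCDEFGHIJKLMN"                            # 5-year ranges, 0 to 70+

def age_range(age):
    """Discretize age into the A-N ranges described above."""
    return "age=" + AGE_BINS[min(age // 5, len(AGE_BINS) - 1)]

def extract_features(text, demographics, mesh_index, hicda_index, norm):
    """Mixed feature vector: ontology code mappings first, then the
    leftover non-stop words as single word token features."""
    tokens = [norm(w) for w in text.lower().split()]
    features, i = [], 0
    while i < len(tokens):
        for length in (3, 2, 1):               # greedy longest-phrase match
            if i + length > len(tokens):
                continue
            phrase = " ".join(tokens[i:i + length])
            code = mesh_index.get(phrase) or hicda_index.get(phrase)
            if code:
                features.append(code)
                i += length
                break
        else:
            if tokens[i] not in STOP_WORDS:
                features.append(tokens[i])     # single word token feature
            i += 1
    features.append(age_range(demographics["age"]))
    features += [f"{k}={v}" for k, v in demographics.items() if k != "age"]
    return features
```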
Experimental Setup
Both Naïve Bayes and Perceptron were trained on the same data and tested using a 10-fold cross-validation technique as well as the held-out test set of 26 notes mentioned in section 4.
Data
Two types of annotated testing/training data were used in this study. The first type (Type I) is the data generated by medical coders for the purpose of conceptual indexing of the clinical notes. The second type (Type II) is the data annotated by Emeritus physicians (experts).
For Type I data, a set of clinical notes covering 6 months of 2001 was collected, resulting in a corpus of 1,117,284 notes. Most of these notes contain a set of final diagnoses established by the physician and coded using the HICDA classification by specially trained staff. The coding makes it easy to extract a set of notes whose final diagnoses suggest that the patient has congestive heart failure or a closely related condition or symptom such as pulmonary edema. Once this positive set was extracted (2,945 notes), the remainder was randomized and a similar set of negative samples was extracted (4,675 notes). The total size of the corpus is 7,620 notes. Each note was then run through feature extraction, and the resulting set was split into 10 train/test folds by randomly selecting 20% of the 7,620 notes to set aside for testing for each fold.
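A minimal sketch of this splitting step is given below. Note that, as described, each "fold" is an independent random 80/20 split (repeated random subsampling) rather than a partition into 10 disjoint folds; the function name and seed are illustrative.

    import random

    def make_folds(notes, n_folds=10, test_frac=0.2, seed=0):
        # For each fold, shuffle the corpus and set aside 20% for testing.
        rng = random.Random(seed)
        folds = []
        for _ in range(n_folds):
            shuffled = list(notes)
            rng.shuffle(shuffled)
            cut = int(len(shuffled) * test_frac)
            folds.append((shuffled[cut:], shuffled[:cut]))  # (train, test)
        return folds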
The Type II data set was split into two subsets: a complete agreement set (Type II-CA) and a partial agreement set (Type II-PA). The complete agreement set was created by taking the 26 notes that were reliably categorized by the experts with respect to congestive heart failure specifically. These 26 notes represent a set where all three annotators agreed at least to a large extent on the categorization. "A large extent" here means that all three annotators labeled the positive samples as either "confirmed" or "probable" and the negative samples as either "probably not" or "definitely not". The set contains 7 positive and 19 negative samples. The partial agreement set was created by labeling as "positive" all samples for which at least one expert made a positive judgement and no expert made a negative judgement, and labeling as "negative" all samples for which at least one expert made a negative judgement and no expert made a positive judgement. This procedure reduced the initial set of 90 samples to 74, of which 21 were positive and 53 were negative for congestive heart failure. This partial agreement set is obviously weaker in its reliability, but it does provide substantially more data to test on and enables us to judge, at the very least, the consistency of the automatic classifiers being tested.
Training
The following parameters were used for training the classifiers. Naïve Bayes was used with the default smoothing parameter of 15. For Perceptron, the optimal combination of parameters was a learning rate of 0.0001 (very small increments in weights) and an error threshold of 15. The algorithm with these settings was run for 1000 iterations.
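To make the training procedure concrete, below is a minimal Python sketch of a perceptron with these settings. It is an illustration rather than the authors' implementation; in particular, the paper does not state how the error threshold of 15 was applied, so here it is interpreted (an assumption) as an early-stopping criterion on the number of misclassifications per pass.

    def train_perceptron(samples, vocab, lr=0.0001, error_threshold=15,
                         n_iter=1000):
        # samples: list of (feature_set, label) pairs with label in {0, 1}.
        w = {f: 0.0 for f in vocab}
        bias = 0.0
        for _ in range(n_iter):
            errors = 0
            for feats, label in samples:
                score = bias + sum(w.get(f, 0.0) for f in feats)
                pred = 1 if score > 0 else 0
                if pred != label:
                    errors += 1
                    delta = lr * (label - pred)   # +lr or -lr
                    bias += delta
                    for f in feats:
                        w[f] = w.get(f, 0.0) + delta
            if errors <= error_threshold:         # assumed reading of the threshold
                break
        return w, bias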
Results
Standard classifier accuracy computation [13] for binary classifiers was used:

Accuracy = (TP + TN) / (TP + TN + FP + FN) (2)

where TP represents the number of times the classifier guessed a correct positive value (true positives), TN is the number of times the classifier correctly guessed a negative value (true negatives), FP is the number of times the classifier predicted a positive value but the correct value was negative (false positives) and FN (false negatives) is the inverse of FP.
In addition to standard accuracy, positive predictive value was also used. It is defined as:

PPV = TP / (TP + FP) (3)

where TP + FP constitute all samples predicted positive on the test data set. We are interested in positive predictive value because of the strong preference towards perfect recall in document retrieval for epidemiological studies, even if it comes at the expense of precision. The rule is that it is better to identify irrelevant data that can be discarded upon review than to miss any of the relevant patients.
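Restated as code, the two measures reduce to a pair of one-line functions; this trivial Python sketch simply mirrors equations (2) and (3) above:

    def accuracy(tp, tn, fp, fn):
        # Equation (2): fraction of all predictions that are correct.
        return (tp + tn) / (tp + tn + fp + fn)

    def ppv(tp, fp):
        # Equation (3): fraction of predicted positives that are truly positive.
        return tp / (tp + fp)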
First, we established a baseline by running a very simple term spotter that looked for the CHF-related terms mentioned in Section 2 (and their normalized variants) in the collection of normalized documents from the Type II data set. The accuracy of the term spotter is 56% on the Type II-CA set and 54% on the Type II-PA set. The positive predictive value is 85% on the Type II-CA set and 71% on the Type II-PA set. The positive predictive value on the Type II-CA set reflects the spotter missing only 1 document out of the 7 identified as positive by the experts. The results are summarized in Tables 3 and 4.
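Such a baseline fits in a few lines of Python; the sketch below is illustrative, and the term set shown is a stand-in for the actual CHF-related list given in Section 2:

    CHF_TERMS = {"congestive heart failure", "chf",
                 "pulmonary edema"}   # illustrative stand-ins for the Section 2 list

    def term_spotter(normalized_text, terms=CHF_TERMS):
        # Label a normalized document positive if any target term occurs in it.
        return any(term in normalized_text for term in terms)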
The results of testing the two classifiers are presented in Table 2. The Naïve Bayes algorithm achieves 82.2% accuracy, whereas Perceptron gets 86.5%. The standard deviation of the Perceptron classifier results appears to be relatively small, which leads us to believe that this particular classification problem is linearly separable. The difference of 4.3% happens to be statistically significant as evidenced by a t-test at the 0.… confidence level. The difference in the positive predictive value is also significant; however, it is inversely related to the difference in accuracy: Perceptron models perform on average 11 absolute percentage points worse than Naïve Bayes models.

Table 2. Classification test results illustrating the differences between Perceptron and Naïve Bayes.

These results represent the accuracy of the classifiers on classifying the Type I test data that has been generated by medical coders. Clearly, Type I data is not generated in exactly the same way as Type II. Although Type I data is captured reliably and is highly accurate, Type II data is classified specifically with respect to congestive heart failure only, by expert physicians, and, we believe, reflects the nature of the task at hand a little better.
In order to test the classifiers on Type II data, we re-trained them on the full set of 7,620 notes of Type I data using the same parameters as were used for the 10-fold cross-validation test. The results of testing the classifiers on the Type II-CA data (complete agreement) are presented in Table 3. These results are consistent with the ones displayed in Table 2 in that Perceptron tends to be more accurate overall but less so in predicting positive samples. Table 4 summarizes the same results for the Type II-PA test set, and the results appear to be oriented in the same general direction as the ones reported in Tables 2 and 3.

Table 4. Test results for Type II-PA data (annotated by retired physicians with partial agreement).

Classifier     PPV (%)   Acc (%)
NaiveBayes     95        57
Perceptron     86        65
TermSpotter    71        54
From a practical standpoint, the results presented here are interesting in that they suggest that the most accurate classifier may not be the most useful for a given task. In our case, if we were to use these classifiers for routing a stream of electronic clinical notes, the gains in precision attained with the more accurate classifier would most likely be wiped out by the losses in recall, since recall is more important for our particular task than precision. However, for a different task more focused on precision, Perceptron would obviously be the better choice.
Finally, both Perceptron and Naïve Bayes performance appears to be superior to the baseline performance of the term spotter. Clearly, such a comparison is only indicative, because the term spotter is very simple. It is possible that a more sophisticated term spotting algorithm may be able to infer semantic relations between various terms, compensate for misspellings and carry out other functions, possibly resulting in better performance. However, even the most sophisticated term spotter will only be as good as the initial list of terms supplied to it. The advantage of automatic classification lies in the fact that classifiers encode the terminological information implicitly, which alleviates the need to manage lists of terms and the risk of such lists being incomplete. The disadvantage of automatic classification is that the classifier's performance is heavily data dependent, which raises the need for sufficient amounts of annotated training data and limits this methodology to environments where such data is available.
The error analysis of the misclassified notes shows that a more intelligent feature selection process is required, one that takes into account discourse characteristics and the semantics of negation in the clinical notes. For example, one of the misclassified notes contained "no evidence of CHF" in the History of Present Illness (HPI) section. Clearly, the presence of a particular concept in a clinical note is not always relevant. For example, various terms and concepts may appear in the Review of Systems (ROS) section of the note; however, the ROS section is often used as a preset template and may have little to do with the present condition. The same is true for other sections such as Family History, Surgical History, etc. It is not clear at this point which sections are to be included in the feature selection process. The choice will most likely be task specific.
The current study did not use any negation identification, which we think accounted for some of the errors. As one of the future steps, we are planning to implement a negation detector such as the NegExpander used by Aronow et al. [7].
Conclusion
In this paper, we have presented a methodology for generating on-demand binary classifiers for filtering clinical patient notes with respect to a particular condition of interest to a clinical investigator. Implementation of this approach is feasible in environments where some quantity of coded clinical notes can be used as training data. We have experimented with HICDA codes; however, other coding schemes may be equally usable or even better suited.
We do not claim that either Naïve Bayes or the Perceptron is the best possible classifier for the task of identifying patients with certain conditions. All we show is that either of these two classifiers is reasonably suitable for the task and has the benefits of computational efficiency and simplicity. The results of the experiments with the classifiers suggest that although Perceptron has higher accuracy than the Naïve Bayes classifier overall, its positive predictive value is significantly lower. The latter result makes it less usable for a practical binary classification task focused on identifying patient records that have evidence of congestive heart failure. It may be worthwhile pursuing an approach that uses the two classifiers in tandem: the classifier with the highest PPV would be used to make the first cut to maximize recall, and the more accurate classifier would be used to rank the output for subsequent review. | 2014-07-01T00:00:00.000Z | 2003-07-11T00:00:00.000 | {
"year": 2003,
"sha1": "41722a7c6dc9e9f264da4e3d87883b688cef0087",
"oa_license": null,
"oa_url": "https://dl.acm.org/doi/pdf/10.3115/1118958.1118970",
"oa_status": "BRONZE",
"pdf_src": "ACL",
"pdf_hash": "6e482ca036c71fcbf9683ecbf60fb0f9b280c650",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
195583466 | pes2o/s2orc | v3-fos-license | Effectiveness and acceptability of methods of communicating the results of clinical research to lay and professional audiences: protocol for a systematic review
Background: Phase III randomised controlled trials aim not just to increase the sum of human knowledge, but also to improve treatment, care or prevention for future patients through changing policy and practice. To achieve this, the results need to be communicated effectively to several audiences. It is unclear how best to do this while not wasting scarce resources or causing avoidable distress or confusion. The aim of this systematic review is to examine the effectiveness, acceptability and resource implications of different methods of communication of clinical research results to lay or professional audiences, to inform practice.

Methods: We will systematically review the published literature from 2000 to 2018 for reports of approaches for communicating clinical study results to lay audiences (patients, participants, carers and the wider public) or professional audiences (clinicians, policymakers, guideline developers, other medical professionals). We will search Embase, MEDLINE, PsycINFO, ASSIA, the Cochrane Database of Systematic Reviews and grey literature sources. One reviewer will screen titles and abstracts for potential eligibility, discarding only those that are clearly irrelevant. Potentially relevant full texts will then be assessed for inclusion by two reviewers. Data extraction will be carried out by one reviewer using EPPI-Reviewer. Risk of bias will be assessed using the relevant Cochrane Risk of Bias 2.0 tool, ROBINS-I, AXIS Appraisal Tool or Critical Appraisal Skills Programme Qualitative Checklist, depending on study design. We will decide whether to meta-analyse data based on whether the included trials are similar enough in terms of participants, settings, intervention, comparison and outcome measures to allow meaningful conclusions from a statistically pooled result. We will present the data in tables and narratively summarise the results. We will use thematic synthesis for qualitative studies.

Discussion: Developing the search strategy for this review has been challenging as many of the concepts (patients, clinicians, clinical studies, and communication) are widely used in literature that is not relevant for inclusion in our review. We expect there will be limited comparative evidence, spread over a wide range of approaches, comparators and populations and, therefore, do not anticipate being able to carry out meta-analysis.

Systematic review registration: International Prospective Register of Systematic Reviews PROSPERO (CRD42019137364).

Electronic supplementary material: The online version of this article (10.1186/s13643-019-1065-x) contains supplementary material, which is available to authorized users.
Background
Phase III randomised controlled trials are often costly and can take years to carry out [1]. They may involve hundreds or thousands of participants, cared for by clinicians, nurses and other medical professionals at many sites. They aim not just to increase the sum of human knowledge, but also to improve treatment, care or prevention for future patients. To achieve this, the results need to be communicated effectively to a variety of audiences [2].
The evidence base on how best to communicate trial results to different audiences is sparse [2]. These gaps in evidence mean that scarce time and resources may be wasted in carrying out ineffective communications activities, while approaches that do work may not be widely used. There is also a risk that some communication approaches may be harmful, for example causing avoidable distress or confusion [3,4]. This systematic review will draw together what evidence there is, in order to inform the practice of clinical trials units and others who are interested in effectively communicating trial results.
The aim of this study is to examine the effectiveness, acceptability and resource implications of different methods of communication of clinical research results to lay (study participants, patients, carers, communities, populations at risk of the condition, and the wider public) and professional audiences (medical professionals, policymakers, clinical guideline developers and healthcare commissioners). We are primarily interested in evidence on communicating the overall results of phase III academic clinical trials but will also look at the literature on communicating the results of other clinical research study designs that are likely to generate evidence with direct implications for policy and practice (for example, systematic reviews and cohort studies) in order to learn lessons that may also be applicable to phase III academic clinical trials. We will try to encompass all approaches to communication that have been evaluated in the literature.
This systematic review will build on two prior reviews of communicating the results of clinical research, identified through scoping searches of the PubMed database during the initial planning phase of this review. The first of these was a systematic review by the Agency for Healthcare Research and Quality (AHRQ) [2], which looked at several questions around the communication and dissemination strategies to facilitate the use of health-related evidence. Of most relevance to this review is the comparative effectiveness of the dissemination strategies to promote the use of health care evidence and how this varies by patients and clinicians. They focused on strategies to increase reach of information, motivation to use and apply evidence, ability to use and apply evidence, or used a multicomponent approach (combining two or more approaches to increase the reach of the evidence, the motivation of the audience to apply the information, and/or the ability to use the evidence). They found that evidence was poor, inconsistent or not statistically significant for most of the comparisons they looked at. The most successful strategy identified in the review was the use of a multicomponent dissemination approach for clinicians when trying to change their behaviours. This review will complement the AHRQ review by including qualitative and nonexperimental studies alongside trials of communication approaches, to provide a broader understanding of approaches that are being used to communicate health research, as well as look at comparative evidence that has been generated since the AHRQ review. The second literature review looked at communicating the results of clinical research to participants [5] and found that research participants wanted to be informed of the results of the studies they had taken part in and that investigators seemed to support the communication of aggregate results to participants. Our scoping search of the PROSPERO database did not reveal any other relevant systematic reviews to consider.
Our review will also complement a study currently being carried out to look at communicating results to trial participants (the REporting Clinical trial results Appropriately to Participants (RECAP) study) [6]. In addition to study participants, we will also look at communicating results to wider lay audiences (including patients who are not participating in the trial, and the wider public) and professional audiences including policymakers, guideline developers and clinicians.
This review will cover a broad range of approaches of communicating study results to lay and professional audiences. As there are very different resource implications for the different methods of communicating results, this review will compare the effectiveness and acceptability of different methods and will also comment on resource implications. It will look at a variety of outcomes, covering all dimensions of the International Association for the Measurement and Evaluation of Communication Framework (outputs, outtakes (what the audience take from the communication, e.g. awareness or understanding), outcomes and impact) [7]. Where there is sufficient evidence, we will seek to make recommendations on which approaches are likely to be the most effective, cost-effective and/or acceptable for communicating with different audiences, in order to guide future practice and avoid effort and resources being wasted on ineffective approaches.
Objectives
Our overarching research question is what are the best ways or combinations of ways of communicating the results of clinical research that has implications for health policy or practice to lay and professional audiences? Within this, we will look at three key questions:
Methods
This protocol will be reported according to the PRISMA-P statement. Our PRISMA-P checklist is included as Additional file 1.
Eligibility criteria
Studies will be selected according to the criteria outlined below.
Study designs
We will include any reports of studies with any qualitative or quantitative study design, as well as theoretical papers, health economic papers, reviews, reports and guidelines.
Population
Eligible participants will include lay audiences and professional audiences, as defined below.
Lay audiences
- Clinical study participants and their carers
- The wider patient community, including individual patients, carers, and patient groups
- Communities in which the studies have taken place (geographic or other demographic communities)
- Populations at risk of the condition
- The wider public
Professional audiences
- Medical professionals (beyond those involved in conducting the trial), including individual practitioners, organisations (e.g. hospitals or medical schools) and professional associations/societies
- Policymakers
- Clinical guideline developers
- Healthcare commissioners

We will not restrict the population by age, sex, location or other demographic factors.
Approaches
For the purposes of this review, we are interested in any approaches for communicating study results to any of the populations specified above. We are interested in communicating the results of clinical studies carried out on humans that have implications for health policy or practice. By clinical studies, we mean observational or interventional medical research relating to treatment, diagnosis or disease prevention among actual patients/people, rather than laboratory or modelling studies. We anticipate that this will include approaches to communicating the results of phase III academic randomised controlled trials, including cluster randomised trials; meta-analyses; epidemiological studies that look at treatment, prevention or diagnosis approaches; industry-sponsored phase III randomised controlled trials; and possibly phase II randomised controlled trials if their results have implications for practice and/or policy. As we will include studies without a comparator group in this systematic review, we are not restricting the search to studies with a particular comparator.
Outcomes
For question 1 (effectiveness), we are interested in a broad range of outcomes (Table 1), so the types of outcomes reported will not be an inclusion or exclusion criterion.
Timing
The time point of enrolment and the duration of interventions will not be considered as eligibility criteria. There will be no restrictions by length of follow-up of outcomes.
Setting
We will not restrict the search to specific settings but will collect data on the settings in which studies have been carried out and seek to assess whether there is heterogeneity of effect based on setting.
Minimum sample size
We will not restrict studies by sample size, as this would be inappropriate when including qualitative research, or studies where the intended audience may be small (e.g. policymakers).
Language
We will include articles reported in English. We will exclude articles written in other languages, due to resource constraints.
Types of publication
The types of publications we will include are:
- Complete articles
- Conference abstracts
- Reports (grey literature)
- Theses
We will exclude commentaries, editorials, guidelines, letters and protocols.
Information sources
Literature search strategies will be developed using medical subject headings (MeSH) and text words. We will search the following databases: Embase, MEDLINE, PsycINFO, ASSIA and the Cochrane Database of Systematic Reviews. We will also search grey literature sources for relevant items. We will scan the reference lists of key included studies and relevant reviews and guidelines identified through the search, to identify papers that our search has missed. We will contact experts in the field to ask what studies they are aware of that we should make sure we include.
Search strategy
No study design limits will be imposed on the search. The search strategy was developed by AS, who is a PhD student, under the guidance of CV, who is a specialist in systematic reviews, with advice from an information specialist. AS, CV and JB contributed to the development of search terms and reviewed the final strategy. The strategy was developed and pilot tested in the EMBASE database, using the Ovid interface, using an iterative approach until a strategy that was sufficiently sensitive but did not result in an unmanageable number of records was achieved. The strategy will be adapted to the syntax and subject headings of other databases, including MEDLINE, PsycINFO and ASSIA as necessary.
Some of the grey literature sources have less sophisticated search functions. For these, we will use broad search terms, and hand search the results to identify those that are potentially relevant for this review. The searches will be updated until the study completion date, which is anticipated to be around early 2022.
Two searches will be run: one to identify articles relevant to communicating results to lay audiences and the other to identify articles relevant to communicating results to professional audiences. This is because different terminology is needed to identify communication approaches for professional audiences compared to lay audiences. Figure 1 illustrates the concepts of our search. Each search will combine results from searches focusing on terms for the audience with results from searches with terms for communication and clinical research. Additional file 2 shows the list of search terms used. These will be combined with adjacency rules applied to limit results to articles where the audience terms, communication terms and clinical research terms are within a certain number of words of each other. Where databases allow us to use this adjacency approach, we will not use MeSH terms in our searches, as we are confident that articles that are relevant to our review will be identified through use of terms relating to audience, communications and clinical research in their titles, abstracts and keywords, and it is unclear how keywords about patients or clinical research are applied, given how common these are in the medical literature. Where the adjacency approach cannot be used, we will use MeSH terms for communication to narrow down results.
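For illustration only, a single Ovid-style line combining the three concepts with an adjacency operator might look like the hypothetical fragment below; the actual terms and adjacency limits used are those listed in Additional file 2.

    (patient* or participant* or carer*) adj6 ((communicat* or disseminat* or inform*) adj6 (trial* or random*))

Here, adjN requires the joined expressions to occur within N words of each other, in either order.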
Study records

Data management
Search results will be downloaded into Endnote. The citations will then be imported into Covidence (www.covidence.org) for abstract and title screening. Covidence will be used to record eligibility decisions. Selected records will then be imported into EPPI-Reviewer Version 4.7.0.0 [8] for data extraction.
Selection process
AS will assess the eligibility of titles and abstracts identified from electronic searches against the eligibility criteria of the review, discarding only those that are duplicates or clearly irrelevant. She will retrieve full-text copies of all articles judged to be potentially eligible. AS and CV will then independently assess all retrieved articles for inclusion. AS and CV will meet regularly during the screening process to discuss any issues arising and to ensure the eligibility criteria are being applied consistently. In cases where the two reviewers disagree on whether an item is eligible, this will be resolved by discussion between AS, CV and a third author where necessary. Where there is insufficient information in the publication to assess its eligibility, we will seek additional information from the study authors. All potentially relevant papers excluded from the review at this stage will be listed as excluded studies, with reasons provided in the characteristics of excluded studies table. The reviewers will not be blind to the journal titles or to the study authors or institutions. We will not use quality criteria as an inclusion criterion, but will assess the quality of included studies and report on that in our synthesis. If we find many records, we may restrict the synthesis to those judged to be at low risk of bias. See the "Assessment of risk of bias in included studies" section for how we will assess the quality of studies.
Data collection process
AS will carry out data extraction using standard electronic forms to tabulate the required information in EPPI-Reviewer. This will include information about the study design, setting, results to be communicated, communication approach(es) used, target audience and outcomes. The form will be based on the Cochrane Consumers and Communication Group's data extraction template, adapted to fit the needs of this review. There will be different versions of the form for quantitative and qualitative studies, to allow different risk of bias assessment tools to be applied and different types of data to be recorded. Draft data collection forms are included as Additional file 3. The forms will be piloted on the first 5 studies of each type and adapted as needed. If the piloting leads to changes to our plans, we will be transparent about this when we report the review. Where there are multiple reports from a single study, the most recently published data on each available outcome will be recorded. If important data about outcomes or approaches are missing, we will contact authors to request this information.
Data items
We will collect information about:
- The article type, publication year, citation and contact details for authors
- The study whose results are being communicated (design, population, setting, disease, time period and funding)
- The study methods for this report
- The participants included
- The approaches to communications studied
- The outcome measures
- The results/findings and interpretation
- Risk of bias
Outcomes and prioritisation
We will classify outcomes in line with the International Association for Measurement and Evaluation of Communication (AMEC) framework [7], which splits measurements and insights into outputs, outtakes (the response and reactions of the audience to the activity), outcomes (the effect of the communication on the target audience) and impact (changes the audience make as a result of the communication). Table 1 outlines the outcomes that we expect to find information on (although they will not all be relevant for every study). We may find additional outcomes that fit within this framework. Our primary outcomes are shown in italics. For participants, the co-primary outcomes are understanding and satisfaction with how the results are communicated. For other patients, communities and the public, we have chosen understanding as our primary outcome. For clinicians and policymakers, our co-primary outcomes are changes in the recommendations made in clinical policy documents and clinical guidelines published by professional bodies or government agencies, and changes in clinical practice. We are also interested in the costs of the approaches tested. We will not set time points for measuring primary or secondary outcomes but use those reported in eligible studies.
The qualitative synthesis will focus on research question 2 (factors that influence the effectiveness of different communication approaches) and 3 (the views and experiences of audiences and communicators with regard to the communication of research results).
The AHRQ review reported a lack of consensus regarding definitions of key terms, particularly those describing different dissemination strategies. As there are no widely agreed standardised approaches to measuring the outcomes listed in Table 1, we will accept different definitions and approaches to the measurement of outcomes, as reported in retrieved studies, but will note how they have been defined by the authors.
Assessment of risk of bias in included studies
We will apply the risk of bias tool according to the pertinent study design. If any RCTs of communication approaches are found, we will use the Cochrane 'RoB 2.0' tool [9]. Any cluster randomised trials will be assessed by the Cochrane 'RoB 2.0 for cluster randomised trials when interest is in the effect of assignment to intervention' template [9]. Cohort studies and case-control studies will be assessed using the ROBINS-I tool [10]. Cross-sectional studies will be assessed using the AXIS tool [11]. Qualitative papers selected for inclusion will be assessed for methodological quality using the CASP Qualitative Checklist [12], which is one of the tools recommended by the Cochrane Qualitative and Implementation Methods Group for assessing the quality of qualitative studies with a range of methods [13].
For other types of studies, the most appropriate tool for assessing risk of bias will be applied.
Data synthesis

Narrative synthesis
We will group the data based on the category that best explores the heterogeneity of studies and makes the most sense to the reader (i.e. by interventions, populations or outcomes). Within each category, we will present the data in tables and narratively summarise the results. Information will be presented in the text and tables to summarise the characteristics and findings of the included studies. The narrative synthesis will explore the relationship and findings both within and between the included studies. We will use thematic synthesis for qualitative studies.
Results will be presented in order of key question, and, within the section on key question 1, we will present the primary outcomes first, followed by the other outcomes in the order of outputs, outtakes, outcomes and impact.
If we find many studies relating to a communication approach to similar audiences, we may exclude from the synthesis those at high risk of bias. If we do not find large numbers of comparable studies, we will retain studies of any level of risk of bias in our analyses, but ensure that the narrative considers the impact that the level of risk of bias has on the certainty of any conclusions drawn. If there is sufficient information available, we will seek to explore whether the effectiveness of communication approaches varies by the target audience (e.g. lay vs professional audiences, participants vs other patients, patients vs general population), disease or geographical location, or other factors identified from the qualitative synthesis.
For qualitative studies, we will accompany the narrative synthesis with a summary of the assessment of quality for the included studies and how methodological limitations may affect our confidence in the synthesised findings, as recommended by the Cochrane Qualitative and Implementation Methods Group [13].
Following the GRADE guidelines, we will grade the overall quality of the synthesised quantitative evidence for each outcome separately as high, moderate, low or very low, taking into account the risk of bias, effect size, consistency of results, directness of evidence, precision and risk of publication bias [14].
Meta-analysis
We do not think that formal meta-analyses will be possible because of the anticipated variability in the populations, interventions and outcomes of the included studies. However, we will conduct such quantitative syntheses in the case of (1) low risk of bias in the included studies, (2) consistent outcomes between studies, (3) low publication bias, (4) a high number of included studies and (5) low heterogeneity [15,16]. We will apply the random effects model when undertaking a meta-analysis. If we are unable to pool the data statistically using meta-analysis, we will provide clear reasons for this decision.
Meta-biases
We will carry out extensive literature searches that include the grey literature as well as published studies in order to limit the impact of publication bias on our review. We will also make careful assessments of potential multiple publications from a single study, to ensure we are not double counting results. We will also ask key stakeholders what they think are the key studies in this area, to ensure our search has identified them.
Where protocols for studies have been published, we will check for differences between the protocol and the final study, to assess whether there has been selective outcome reporting. Where insufficient information is available from the published report, we will contact the authors for further information.
Discussion
Through conducting this review, we hope to bring together the existing evidence on how best to communicate the results of clinical research to lay and professional audiences, in terms of effectiveness, acceptability and resource requirements, in order to inform practice.
Developing the search strategy for this study has been challenging as many publications that are unrelated to our topic use combinations of the search terms and concepts used (e.g. participants, clinicians, clinical studies, report, disseminate), resulting in very large numbers of records if our three concepts (lay or professional, communication and clinical research) are combined with AND. The adjacency approach has been developed to improve the specificity of results from the searches, with the number of words within which the three concepts have to appear being determined based on comparing results from different limits (e.g. adjacent within 5 words compared to adjacent within 6 words), to see at what point adding extra words identifies very few extra relevant records.
Previous reviews, looking at subsets of the audiences we are interested in, have found very heterogeneous studies in terms of interventions and outcome measures, making meta-analysis challenging. As our review is looking at a wide range of audiences, interventions and outcomes, we expect that we may not be able to carry out meta-analyses. Once we have carried out the screening and data extraction, we will make a decision on whether meta-analysis is appropriate.
A limitation of this review is that title and abstract screening will be carried out by only one reviewer, which could result in potentially relevant records being excluded. In order to reduce this risk, the single reviewer will include any records where there is doubt about their eligibility at this stage of the review. | 2019-06-26T14:14:14.542Z | 2019-06-01T00:00:00.000 | {
"year": 2019,
"sha1": "760ea78c0297c4dff9d27c8343359db0bd49c2c7",
"oa_license": "CCBY",
"oa_url": "https://systematicreviewsjournal.biomedcentral.com/track/pdf/10.1186/s13643-019-1065-x.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "760ea78c0297c4dff9d27c8343359db0bd49c2c7",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
71677353 | pes2o/s2orc | v3-fos-license | Thymoma with immunodeficiency with multiple recurrent oral herpetic infections
Thymomas with immunodeficiency (formerly Good's syndrome) are a rare acquired disease of combined T- and B-cell immunodeficiency accompanying a thymoma. Recurrent opportunistic infections associated with disorders of both humoral and cell-mediated immunity frequently accompany this rare primary, adult-onset immunodeficiency. This is a report of a case of a thymoma with immunodeficiency in a 65-year-old male patient who developed recurrent oral herpetic infections. He consulted us about recurrent vesiculo-ulcerative lesions on his tongue, lower lip, and buccal mucosa. Results of laboratory examinations indicated hypogammaglobulinemia accompanied by diminished B cells in the peripheral blood, which is consistent with the characteristic features of a thymoma with immunodeficiency. After a diagnosis confirming herpes simplex virus infection, systemic antiviral therapy was administered, which was effective for his vesiculo-ulcerative lesions at follow-up. When an intractable infection accompanied by a thymoma is encountered, increased awareness about the clinical and immunological profiles of this primary immunodeficiency may help in its early diagnosis, thereby preventing mortality.
Introduction
Although thymomas are the most frequently encountered primary neoplasm of the anterior mediastinum in adult patients, they are nevertheless rare malignant neoplasms. 1 Immunodeficiency syndrome associated with a thymoma was first reported by Good and colleagues in 1954 and was commonly referred to as Good's syndrome. 2 However, the current classification (2007) of the International Union of Immunological Societies replaced this old eponym with "thymoma with immunodeficiency" and lists it as a primary immunodeficiency. 2 This syndrome is a rare acquired disease of combined T- and B-cell immunodeficiency accompanying a thymoma and has an incidence rate of approximately 6-11% in thymoma cases. 3 The affected patients are most commonly between 40 and 70 years of age and have a thymoma, fewer or no B cells in the peripheral blood, hypogammaglobulinemia, inversion of the CD4-to-CD8 ratio, and defects in cell-mediated immunity. 1,4 Patients with this primary immunodeficiency are at increased risk of developing severe opportunistic infections, including herpes simplex virus (HSV)-related infections. 2 HSV infections represent one of the most widespread infections of the orofacial region. HSV type 1 and type 2 (HSV-1 and HSV-2) are two members of the herpesvirus family, the Herpesviridae, which infect humans. These two viruses can infect the mouth or genitals; generally, HSV-1 is considered to infect the regions "above the waist", while HSV-2 is considered to infect the region "below the waist". Although most primary orofacial HSV infections are caused by HSV-1, infection by HSV-2 is increasingly common. 5 A primary HSV-1 infection in oral and perioral sites usually manifests as gingivostomatitis, whereas reactivation of the virus in the trigeminal sensory ganglion gives rise to mild cutaneous and mucocutaneous disease, often termed recurrent herpes labialis. However, recurrent HSV-1 infection in the mouth is less common than herpes labialis and is unusual in otherwise healthy persons. 6 In immunocompromised patients, recurrent oral HSV infections are described as "atypical", and the lesions are more extensive and aggressive, slow- or non-healing, and extremely painful. 7 Recurrent herpetic infections of the tongue are exceptional and are only encountered in patients with immunodeficiency. 8 Herein, we report a case of a thymoma with immunodeficiency in a patient who developed extensive recurrent oral herpetic infections.
Case presentation
In October 2009, a 65-year-old man consulted us because he had been suffering from recurrent vesiculo-ulcerative lesions of his tongue, lower lip, and buccal mucosa for approximately 4 years; he had been treated under clinical diagnoses of lichen planus and fungal infection. He had a history of a thymoma, first identified by a computed tomography scan after a traffic accident in 2003, and of an extended thymectomy for thymoma removal in March 2007. Histopathological analysis had revealed a "type AB medullary thymoma" (according to the Masaoka staging system, the tumor was at stage II due to microscopic invasion into the capsule). His medical history also showed that he had received adjuvant (preventive) radiation therapy in view of the microscopic invasion [5,000 cGy in 25 sessions (1 session = 200 cGy)] in June 2007 and had been treated for a prolonged fever of unknown origin with antibiotics and antipyretics in August and September 2007. In addition, he reported chronic diarrhea, which had begun about a year earlier and was ongoing.
The patient was admitted to our hospital in October 2009. Upon admission, he had a discrete erosive lesion covered by a yellowish-white fibrinous exudate with surrounding mucosal erythema on the lower lip ( Fig. 1A) and buccal mucosa (Fig. 1B), and the well-demarcated, indurated, and thickened yellowish-white plaques and nodules on the dorsal tongue (Fig. 1C). Regional lymphadenopathy was present. Written consent was obtained from the patient, and owing to the atypical clinical presentation, a biopsy was performed under local anesthesia. An oral biopsy specimen was taken from the tongue.
A histological analysis revealed the following marked hyperplastic changes in the mucosal epithelium: mainly irregular acanthosis, parakeratosis, edema, and lymphocyte exocytosis ( Fig. 2A). In addition, there were ulcerated vesicles. Near the ulcerated areas, characteristic virusinfected keratinocytes were observed along the basal layer of the epithelium. They were large cells with homogenous eosinophilic cytoplasm and mummified chromatin with a thick nuclear membrane. Multinucleation and nuclear molding were frequent (Fig. 2B).
Blood samples were collected on the same day. Results of a laboratory examination revealed the following: white blood cell count, 4,600/mm3; hemoglobin, 12.2 g/dL; platelet count, 152,000/mm3; C-reactive protein, 10.6 mg/L; total protein, 6.21 g/L; and albumin, 4.48 g/dL. Laboratory data included the following: low immunoglobulin G (IgG), 5.91 g/L (normal: 800-1600 mg/dL); IgA, 0.25 g/L (normal: 80-400 mg/dL); and IgM, 0.17 g/L (normal: 50-180 mg/dL). A lymphocyte subset analysis of peripheral blood revealed diminished B cells, consistent with a thymoma with immunodeficiency.

Figure 1. (A) Discrete erosive lesion covered by a yellowish-white fibrinous exudate with surrounding mucosal erythema on the lower lip labial mucosa; (B) appearance of the right buccal mucosa before treatment; and (C) well-demarcated, indurated, and thickened yellowish-white plaques and nodules on the dorsal tongue.
The final diagnosis was that of HSV-1 infection. Upon diagnosis, oral systemic antiviral therapy (200 mg acyclovir every 4 hours, 5 times daily) was administered for 10 days. After 2 weeks, complete healing of the lesions on the lower lip (Fig. 3A) and buccal mucosa (Fig. 3B) was observed, while there was a partial healing of the lesion on the tongue (Fig. 3C). Meanwhile, the patient consulted with the Department of Immunology. Immunodeficiency due to a thymoma with immunodeficiency explained why the recurrent oral HSV-1 infection had been intractable for several years. Thus, prophylactic intravenous immunoglobulin (IVIG) therapy and routine control of infections in the patient were initiated by the Department of Immunology.
Discussion
In general, symptoms of Good's syndrome caused by humoral and cellular immunodeficiency are occasionally complicated by leukopenia. Infections are one of the main characteristics of Good's syndrome. 9 The most commonly documented infectious complications in patients with a thymoma with immunodeficiency are recurrent upper and lower respiratory tract infections with encapsulated organisms. 2,4 Patients with a thymoma with immunodeficiency also have increased susceptibility to bacterial, fungal, viral, and opportunistic infections related to both humoral and cell-mediated immune deficiencies. 4 These infections may be severe or even fatal in patients with a thymoma with immunodeficiency. 2

An extended thymectomy, radiation therapy, or chemotherapy are the available options for treating the thymoma, in combination with Ig replacement therapy to maintain adequate IgG values and prevent opportunistic infections. 3 Unfortunately, there are no case reports of a thymectomy resolving the immunodeficient state associated with a thymoma with immunodeficiency. 3 Our patient had suffered from recurrent oral HSV infections for 2 years before the extended thymectomy. His thymoma with immunodeficiency had been diagnosed at least 4 years before the operation, and the thymectomy might not have resolved the immunodeficiency, as previously reported. 3

The prognosis of patients with a thymoma with immunodeficiency is thought to be worse than for those with other immunodeficiencies. 4 In contrast to a thymoma with immunodeficiency, common variable immunodeficiency, which is also characterized by hypogammaglobulinemia, is associated with less frequent and less severe cellular immunodeficiency and fewer opportunistic infections, including HSV infection. In addition, the clinical presentation of HSV infection is usually atypical with a thymoma with immunodeficiency. 2 Immunological investigations, including quantitative Ig levels, B cells, and T-cell subsets, should be considered part of the routine diagnostic evaluation in patients with a thymoma and intractable infections. If immunologic test results are normal, testing should be repeated periodically if the clinical suspicion of a thymoma with immunodeficiency persists, because there can be an interval between the diagnosis of immunodeficiency and/or a thymoma and the development of infection. 4

In immunocompromised patients, atypical clinical manifestations of HSV, presenting as tumor-like nodules or condylomatous or hypertrophic lesions rather than a classic ulcer, may occur. Such unusual presentations increase the risk of a misdiagnosis and delays in appropriate treatment. 10 In this case report, we describe an immunocompromised patient with an unusual tumoral presentation of recurrent herpetic infections of the tongue. The predominant histopathological finding was marked hyperplastic changes with irregular acanthosis, parakeratosis, edema, and lymphocyte exocytosis. It is crucial to be aware of these unusual presentations to provide an early, correct diagnosis and effective treatment for HSV.
In immunocompromised groups, oral acyclovir seems to be the drug of choice for recurrent HSV-1 infections, but recently famciclovir was found to be effective and has the convenience of less frequent dosing than acyclovir. 6 Topical treatment for these patients is usually of little clinical benefit. 6 In our case, after the HSV infection was confirmed by histopathological, clinical, and laboratory examinations, considerable improvement with acyclovir therapy was seen in the intractable lesions. Furthermore, it was reported that IVIG substitution is indicated in all patients with a thymoma with immunodeficiency, as it may be associated with improved control of infections, decreased use of antibiotics, reduced hospitalizations, and perhaps improved survival in patients with a thymoma with immunodeficiency and other hypogammaglobulinemic conditions. 2 For this reason, the patient was directed to the Department of Immunology for IVIG therapy and initiation of routine controls.
In conclusion, when an intractable infection accompanied by a thymoma is encountered, increased awareness about clinical and immunological profiles of this primary immunodeficiency may increase early diagnosis of this condition and prevent mortality. | 2019-03-08T14:02:51.002Z | 2012-12-17T00:00:00.000 | {
"year": 2012,
"sha1": "e1266da4ffede6c69430197d38b0b8688c9dc05e",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.jds.2012.10.005",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e1266da4ffede6c69430197d38b0b8688c9dc05e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
91915666 | pes2o/s2orc | v3-fos-license | Salvia officinalis induces antidepressant-like effect, anxiolytic activity and learning improvement in hippocampal lesioned and intact adult rats
The anxiolytic and antidepressant-like effects of Salvia officinalis extract (50, 100 and 200 mg/kg) were evaluated using marble burying, forced swimming and open-field tests in intact and hippocampal lesioned rats. Additionally, S. officinalis was evaluated on rats' memory using a conditioned learning test, and we screened the methanolic extract for anti-oxidant activity and performed phytochemical and high-performance liquid chromatography analyses. The administration of sage extract showed a significant reduction of immobility time in lesioned and intact animals during the forced swim test and an anxiolytic effect in the marble burying test. In the conditioned learning paradigm, memory enhancement was observed in the sage-treated group, which indicates a cognition improvement. These activities seem to be related to the anti-oxidant capacity and the phytochemicals (phenolics, flavonoids, and tannins) detected in the extract of S. officinalis. The findings show that the methanolic extract of sage possesses antidepressant-like effects, anxiolytic activity and may also contain bioactive compounds that stimulate learning in rats.
Introduction
Several synthetic and chemical drugs have been introduced to treat patients with various neurological disorders, but their therapeutic effects have modest efficacy and most of them are associated with several adverse effects, indicating a need for better-tolerated and more efficacious treatments (Kim and Oh, 2012). Medicinal plants have been a major source of natural active compounds that can provide important options for developing drugs for the treatment of different mood and neurological disorders. S. officinalis has been reported to possess several biological activities, including anti-inflammatory and anti-nociceptive activities (Rodrigues et al., 2012). Systematic and mechanistic studies of the effects of S. officinalis extracts have revealed multiple activities potentially relevant to brain function, and some reports showed that ethanolic extract of S. officinalis potentiates memory retention and interacts with the muscarinic and nicotinic cholinergic systems involved in the memory retention process (Eidi et al., 2006). The efficacy of S. officinalis leaf extract in the management of Alzheimer's disease, the treatment of depression, and memory disorders has also been reported (Akhondzadeh et al., 2003; Howes et al., 2003; Perry et al., 2002; Savelev et al., 2004).
With regard to the possible effects of S. officinalis on behavior, this work aimed to investigate the effect of methanolic extract of S. officinalis on depression, anxiety and learning in hippocampal lesioned and normal adult rats.
Plant collection and identification
S. officinalis leaves were collected from the Ourika region, Morocco, in March 2013, and the plant's identity was confirmed by Prof. Ouhammou at the Faculty of Sciences Semlalia, University Cadi Ayyad, Marrakech, Morocco. A voucher specimen of the plant has been deposited at the herbarium of the Faculty under the number MARK-10004.
The plant was cleaned and shade-dried at 40°C. The dried leaves were triturated in order to obtain a powder.
Preparation of methanolic extract
Dried and ground leaves (110 g) were extracted with methanol (600 mL) in a Soxhlet apparatus for 72 hours. After extraction, the solvent was evaporated using a rotary evaporator (Stuart® RE300, Bibby Scientific, UK). The final weight of the extract was 34.45 g.
Determination of total polyphenols content
The total phenolic content of the extract was determined using the Folin-Ciocalteu method as described previously (Singleton and Rossi, 1965). Briefly, 100 μL of the properly diluted extract was mixed with 3.9 mL of distilled water, followed by 100 μL of Folin-Ciocalteu reagent. After 3 min, 1 mL of 20% sodium carbonate (Na2CO3) was added. The mixture was shaken and incubated for 1 hour at room temperature in the dark. The absorbance was measured at 725 nm against a blank using a UV-vis spectrophotometer. The total phenolic content is expressed as mg gallic acid equivalents per gram of dry matter (DM). All assays were carried out in triplicate.
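The conversion from absorbance to mg gallic acid equivalents follows from the linear gallic acid standard curve. A minimal Python sketch is shown below; the parameter names are placeholders, since the study's actual curve parameters are not reported here.

    def total_phenolics_gae(a725, slope, intercept, dilution, volume_ml, mass_g):
        # a725: absorbance of the diluted extract at 725 nm
        # slope, intercept: gallic acid standard curve
        #                   (A725 = slope * C + intercept, with C in mg/mL)
        # dilution: dilution factor applied before the assay
        # volume_ml: total volume of extract solution (mL)
        # mass_g: mass of dry plant material extracted (g)
        c_mg_per_ml = (a725 - intercept) / slope
        return c_mg_per_ml * dilution * volume_ml / mass_g  # mg GAE per g DM

The same calculation applies to the flavonoid and tannin assays below, with catechin standard curves read at 510 nm and 500 nm, respectively.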
Determination of total flavonoids
The flavonoid content was determined by the aluminum trichloride method using catechin as the reference compound (Zhishen et al., 1999). A volume of 200 μL of extract was mixed with 800 μL of distilled water, and 60 μL of 5% NaNO2 solution was added. The mixture was allowed to stand for 6 min, then 40 μL of aluminum trichloride (10%) was added and incubated for 5 min, followed by the addition of 400 μL of 1 M NaOH and 500 μL of distilled water. The mixture was immediately agitated to homogenize the contents. After 15 min of incubation, the absorbance of the pinkish mixture was measured at 510 nm. The total flavonoid content of the extract is expressed as mg catechin equivalents per gram of DM.
Total condensed tannin contents
The quantity of condensed tannins was estimated using the vanillin method in acidic medium (Xu and Chang, 2007). A volume of 200 μL of the extract was introduced into two separate test tubes (one for the sample and the other for the blank). Then, 2 mL of a vanillin solution (4% in methanol) and 1 mL of concentrated hydrochloric acid were added. After 15 min of incubation of the resulting mixture, the absorbance was read at 500 nm using a UV-vis spectrophotometer. The results are expressed as mg catechin equivalents per g of dry matter.
DPPH Free radical-scavenging activity
The DPPH assay measures the free radical scavenging capacity of the S. officinalis extract. An aliquot of 100 μL of each sample or standard concentration (butylated hydroxytoluene, quercetin, ascorbic acid and α-tocopherol) was added to 2.9 mL of freshly prepared methanolic DPPH solution (0.004%). In parallel, the negative control was prepared by mixing 100 μL of methanol with 2.9 mL of methanolic DPPH at the same concentration. The mixture was shaken and kept in the dark at room temperature for 30 min, and the absorbance was recorded at 517 nm against a blank containing all reagents except the test sample. Assays were carried out in triplicate. The following equation was used to determine the percentage of radical scavenging activity: Inhibition (%) = [(control absorbance − sample absorbance)/control absorbance] × 100
β-Carotene/linoleic acid bleaching assay
The anti-oxidant activity of the samples was also determined using the β-carotene/linoleic acid test, carried out as described previously (Miraliakbari and Shahidi, 2008). A stock solution of β-carotene and linoleic acid was prepared by mixing 0.5 mg β-carotene in 1 mL chloroform, 25 μL linoleic acid and 200 mg Tween 20. The chloroform was removed using a rotary evaporator, and distilled water (50 mL) was subsequently added to the residue slowly, with vigorous agitation, to form an emulsion. 350 μL of S. officinalis extract solution or reference anti-oxidant (butylated hydroxytoluene) at various concentrations was added to 2.5 mL of the above emulsion. The test and control (containing water in place of sample) tubes were capped and incubated at 50°C for 2 hours. The absorbance of the emulsion was determined at 470 nm. All determinations were performed in triplicate.
The percentage inhibition of β-carotene bleaching was calculated as: Inhibition (%) = [(Asample 2h − Ablank 2h)/(Ablank 0 − Ablank 2h)] × 100, where Asample 2h and Ablank 2h are the absorbances of the test compound and the control, respectively, after the 2-hour assay, and Ablank 0 is the absorbance of the control at the beginning of the experiment. The sample concentration providing 50% inhibition (IC50) was obtained by plotting inhibition percentages against the sample concentrations.
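To make the IC50 read-off concrete, the sketch below interpolates the concentration at 50% inhibition from a dose-response series; the concentrations and inhibition values are hypothetical.

import numpy as np

# Hypothetical dose-response data: concentration (mg/mL) vs. % inhibition
conc = np.array([0.05, 0.1, 0.2, 0.4, 0.8])
inhib = np.array([12.0, 24.0, 41.0, 63.0, 82.0])

# np.interp requires the x-grid (here, inhibition) to be increasing, which it is;
# IC50 is the concentration at which the curve crosses 50% inhibition.
ic50 = np.interp(50.0, inhib, conc)
print(f"IC50 = {ic50:.3f} mg/mL")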
Reducing power assay
The ability of S. officinalis extracts to reduce FeCl3 to FeCl2 was investigated using the method of Oyaizu (1986). One milliliter of the tested sample (S. officinalis extract or standard: butylated hydroxytoluene, quercetin, α-tocopherol or ascorbic acid) was mixed with phosphate buffer (2.5 mL, 0.2 M, pH 6.6) and potassium ferricyanide (2.5 mL, 1%). The mixture was then incubated at 50°C for 20 min. Subsequently, 2.5 mL of trichloroacetic acid (10%) was added, and the mixture was centrifuged for 10 min at 3,000 rpm. Finally, the upper layer of the solution (2.5 mL) was mixed with distilled water (2.5 mL) and FeCl3 (0.5 mL, 0.1%), and the absorbance was measured at 700 nm in a spectrophotometer.
HPLC analysis of S. officinalis extract
Phenolic monomers were identified using a high-performance liquid chromatograph (Knauer) equipped with a K-1001 pump and a PDA detector (200-700 nm, UV-vis) operating at 280 nm. The column was a Eurospher II 100-5 (4.6 × 250 mm), and the temperature was set at 25°C. The flow rate was 1 mL/min and the injected sample volume was 2 mL. A mixture of acidified water (A) and acetonitrile (B) was chosen as the optimal mobile phase, with a total run time of 60 min. Phenolic compounds were identified by comparing retention times and UV-vis spectra with those of standards.
Experimental animals
Male Sprague-Dawley rats weighing 300-350 g, from the animal facility of the Faculty of Sciences Semlalia, Marrakech, Morocco, were used in this study. The animals, housed in individual plastic cages, were kept at constant room temperature (21 ± 2°C) and relative humidity (60%) on a 12-hour light/dark cycle (dark from 7 p.m.) and had free access to water and food.
Acute toxicity test
The acute toxicity test was performed on mice weighing about 17 to 25 g, divided equally into groups (n=6) and fasted overnight with water provided ad libitum. On the test day, the methanolic extract was given by gavage at doses of 1, 2, 3, 4 and 5 g/kg (10 mL/kg). To detect signs of toxicity and death, the mice were observed at 2, 4, 6, 8, 12 and 24 hours after extract administration, and then daily for up to 14 days to detect any possible delayed deaths.
Drugs and treatments
The drugs administered to the rats were imipramine (Novartis, 30 mg/kg), diazepam (F. Hoffmann-La Roche, Switzerland, 1 mg/kg), and S. officinalis methanolic extract (50, 100 and 200 mg/kg). Imipramine was selected as the standard drug (positive control) for depression in the forced swim test, diazepam as the reference drug (positive control) for anxiolytic activity, and saline (0.9% NaCl) as the vehicle control. All drugs and vehicle were injected intraperitoneally in a volume of 0.1 mL/100 g body weight.
Thirty min after intraperitoneal treatment, the animals were submitted to behavioral tests (forced swim test, marble burying test and conditioned learning test).
Hippocampal lesion
After acclimatization to the laboratory conditions, the rats were divided into groups of six: a saline control group, a positive control group and a sage-treated group, none of which underwent any surgical manipulation; a lesioned group and a lesioned-plus-sage group; and a sham group, whose animals underwent the same surgical procedure at the same coordinates as the lesioned groups but without the electric current.
For the hippocampal lesion, after thiopental anesthesia (60 mg/kg body weight), the rats were mounted on a stereotaxic frame (Horsley-Clarke). The hippocampus was lesioned at a single site, with coordinates taken from the rat brain atlas (Paxinos and Watson, 1998) and referenced from bregma: anteroposterior -5.2 mm, mediolateral +4.8 mm, dorsoventral +4.8 mm. The unilateral electrolytic lesion was made with a lesion-generating device (GRASS D.C. LM5A, USA) by passing a 2 mA DC cathodal current for 15 sec (Ramos, 2008) through a monopolar stainless steel electrode insulated with INSL-X except for the 0.5 mm tip. The rats in the sham group underwent the same surgical procedure at the same coordinates without the electric current.
Antidepressant-like activity of S. officinalis extract
The forced swim test is one of the most widely used assays of antidepressant-like activity in rodents. It was performed according to the method described previously (Porsolt et al., 1977). A vertical Plexiglas cylinder (40 cm high, 20 cm in diameter) was filled with water at 25°C to a depth of 20 cm. The rats were first subjected to a pre-swim by placing each of them in the cylinder for 15 min. On the test day, 24 hours after the pre-swim, each rat was forced to swim in the cylinder for 6 min, and the duration of floating (i.e., the time during which the rat made only the small movements necessary to keep its head above water) was scored. The soiled water was changed between tests.
Effect of S. officinalis extract on locomotor activity
To evaluate the possible effects of S. officinalis on locomotor activity, the tested rats were evaluated in the open-field test as previously described (Katz et al., 1981). Rats were individually placed in a wooden box (80 x 80 x 40 cm) divided into 25 squares of equal area. During the 5-min test period, the animal's movements in the field were quantified by counting the number of crossings (at least three paws entering a square) and the number of rearings, defined as the animal standing upright on its hind legs (wall rears and free rears). The number of crossings was taken as an index of locomotor activity. The floor of the open-field apparatus was cleaned with 10% ethanol at the end of each test to remove any olfactory cues.
Effect of S. officinalis extract on learning
In this experiment, testing was performed in a box with two equally sized compartments (right and left, 30 x 25 x 50 cm each) connected through an opaque wooden door (with one hole). The box was covered by a hinged roof of clear Plexiglas with numerous holes to allow ventilation, and the floor was made of stainless steel rods (2.5 mm in diameter) spaced 1 cm apart. The experimental room was illuminated and quiet.
All rats were first habituated to the box for 10 min, immediately followed by a series of 10 trials (one sequence). Each trial consisted of 3 sec of sound stimulation (conditioned stimulus), an 80 dB tone presented through a bell mounted centrally about 20 cm above the box. The conditioned stimulus was followed by an electrical foot-shock (1.2 mA, up to 30 sec) as the unconditioned stimulus, delivered through the grid floor. The unconditioned stimulus was terminated when the animal crossed to the other compartment. If the rat crossed to the other compartment during the conditioned stimulus, before onset of the unconditioned stimulus, successful acquisition of the learning response was recorded.
Each animal was tested for 10 sequences, beginning 30 min after the methanolic extract injection.
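A compact sketch of how individual trials in this task can be scored; the 3-sec tone duration comes from the protocol above, while the latency values and the helper name score_trial are hypothetical.

def score_trial(cross_latency_s, cs_duration_s=3.0):
    # A crossing during the 3-sec tone (CS) counts as a learned avoidance;
    # a crossing only after shock onset (US) counts as an escape.
    return "avoidance" if cross_latency_s <= cs_duration_s else "escape"

latencies = [5.1, 4.0, 2.8, 6.3, 2.1, 1.9, 2.5, 2.2, 1.7, 1.5]  # seconds
n_avoid = sum(score_trial(lat) == "avoidance" for lat in latencies)
print(f"avoidance responses in this sequence: {n_avoid}/10")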
Marble burying test
Principle
Lesions in the hippocampus and septum reduce digging behavior.
Uses
This test is a useful model of anxiety, obsessive-compulsive disorder and neophobia, and it has predictive validity for the screening of novel anxiolytics or antidepressants.
Preparation of the cages
1. The polycarbonate rat cage (26 x 48 x 20 cm) with a fitted filter-top cover was filled with wood-chip bedding (about 5 cm deep), lightly tamped down to make a flat, even surface.
2. Glass toy marbles were placed on the surface in a regular pattern, evenly spaced about 4 cm apart.
3. A stopwatch was used to time each test (30 min per test).
Test
1. Each rat was carefully placed into a corner of a cage, on the bedding, and left for 30 min.
2. The number of marbles buried to at least 2/3 of their depth within the bedding was counted.
Advantages
1. Readily available to most laboratories and inexpensive.
2. Simple to apply.
Limitation
The neuronal circuitry underlying this behavior has not been clearly elucidated.
Histological verification
At the end of all behavioral experiments, the rats were anesthetized with thiopental (60 mg/kg i.p.) and perfused transcardially with 50 mL of 0.9% saline followed by 3.2% paraformaldehyde. After perfusion, the brains were removed, post-fixed for 24 hours in 3.2% paraformaldehyde, and then stored in 30% sucrose.
Coronal sections (50 μm) were cut on a cryostat (Leica Microsystems, Germany), stained with 0.5% toluidine blue, mounted on glass microscope slides and examined under a microscope (Leica Microsystems, Germany) to verify the lesions and probe placement. Any subjects with a misplaced cannula or significant damage around the injection site were excluded from the subsequent statistical analyses of the behavioral data.
Statistical analysis
All experimental results are given as the mean ± S.E.M.
For the behavioral measures in the forced swim, marble burying and conditioned learning tests, the time spent immobile, the number of buried marbles and the learning sequences were analyzed for all groups using a one-way analysis of variance (ANOVA). If the ANOVA revealed a significant main effect, a Tukey post-hoc analysis was performed to compare the specific groups. Statistical significance was set at p<0.05. SigmaPlot 12.5 software was used for the statistical analysis.
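The analysis described here (one-way ANOVA followed by Tukey's post-hoc test) is straightforward to reproduce; the sketch below uses SciPy and statsmodels rather than SigmaPlot, and all immobility times are hypothetical placeholder data.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical immobility times (s) in the forced swim test for three groups
control = np.array([182, 175, 190, 168, 177, 185])
lesion = np.array([221, 230, 215, 240, 225, 233])
sage100 = np.array([150, 142, 160, 138, 155, 147])

f_stat, p_val = stats.f_oneway(control, lesion, sage100)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

if p_val < 0.05:  # only run the post-hoc test after a significant main effect
    values = np.concatenate([control, lesion, sage100])
    groups = ["control"] * 6 + ["lesion"] * 6 + ["sage100"] * 6
    print(pairwise_tukeyhsd(values, groups, alpha=0.05))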
Total polyphenols, flavonoids, condensed tannin contents
The results (Figure 1) showed that the S. officinalis extract had a high total phenolic content (118.2 ± 0.7 mg gallic acid Eq/g dry matter), a flavonoid concentration of 84.7 ± 7.0 mg catechin Eq/g dry matter, and a condensed tannin content estimated at 19.1 ± 0.7 mg catechin Eq/g dry matter.
DPPH Free radical scavenging activity
The effects of the methanolic extract of S. officinalis in the DPPH, reducing power and β-carotene/linoleic acid bleaching assays are shown in Figure 2, compared with the standard antioxidants butylated hydroxytoluene, quercetin, and vitamins C and E. Since better antioxidant activity is reflected by lower IC50 values, the results showed that the methanolic extract exhibited high antioxidant activity. The lowest IC50 was obtained with the reducing power assay (0.334 ± 0.007 mg/mL), followed by the β-carotene/linoleic acid bleaching assay.
HPLC analysis of S. officinalis extract
According to the HPLC data shown in Figure 3, the extract of S. officinalis contained flavonoids and phenolic acid derivatives in different proportions. Eleven phenolic compounds were identified, among which rosmarinic and caffeic acids were the most abundant.
Acute toxicity
Oral administration of the S. officinalis extract at doses up to 5 g/kg did not produce any mortality or symptoms of toxicity in mice during the 14-day study period.
Four rats from the lesioned and sage-plus-lesioned groups were excluded from the statistical analysis of the forced swimming test, whereas two rats from each of the lesioned and sage-plus-lesioned groups were excluded from the statistical analysis of the conditioned learning test.
Antidepressant-like activity of S. officinalis extract
In the forced swim test, we investigated the antidepressant-like effect of the sage extract in both intact and hippocampal-lesioned animals.
The results presented in Figure 5A show that the immobility duration of the groups treated with sage at different doses (50, 100 and 200 mg/kg) or imipramine (30 mg/kg, intraperitoneally) was shorter than that of the control group [H(10.6) = 29.7; p < 0.001], and treatment with the sage extract produced roughly the same result as observed in the positive control treated with imipramine (30 mg/kg, intraperitoneally).
Post hoc comparisons revealed a significantly higher total immobility time in the lesioned group compared with the normal control group (p < 0.05), whereas in lesioned animals treated with sage extract (50, 100 and 200 mg/kg) or imipramine (30 mg/kg, intraperitoneally), a reduction of the same parameter was observed compared with the lesioned group (p < 0.001) (Figure 5B).
Effects on the number of crossing and rearing in the open-field test
The effect of the extract on locomotor activity was evaluated in the open-field test. In this test, our results show that treatment with sage or diazepam had no effect on locomotor activity compared with the lesioned animals. Post hoc analysis indicated a significant reduction in locomotor activity in both the lesioned group and the lesioned groups treated with sage or diazepam, compared with the normal control group [F(10,55) = 12.20; p < 0.001] (Figure 6A,B). In the vertical activity (number of rearings), the lesioned rats and the lesioned rats treated with diazepam (1 mg/kg, intraperitoneally) showed a significant decrease in the number of rearings compared with the normal control group [H(10,55) = 29.04; p < 0.05]. Sage treatment did not show any significant effect on these behavioral measurements (Figure 6C,D).
Anxiolytic activity of S. officinalis extract
Treatment of normal rats with sage extract (50, 100 and 200 mg/kg) or diazepam (1 mg/kg) significantly reduced marble-burying behavior compared with the control group [F(10,55) = 78.06, p < 0.001] (Figure 7A). On the other hand, comparisons using Tukey's test revealed that lesioned rats buried a significantly higher number of marbles than the normal control group (p < 0.001). However, when the lesioned animals were treated with diazepam or sage extract, a dose-dependent decrease in marble-burying behavior was observed (p < 0.001) (Figure 7B).
Effect of S. officinalis extract on learning
In this experiment, the animals were tested using 10 sequences of classical conditioning.
The conditioned learning performance presented in Table I indicates that administration of the sage extract (50, 100 and 200 mg/kg) produced a significant increase in the number of avoidance responses compared with the normal group, showing that the methanolic extract enhanced acquisition of the task from the seventh sequence onward [sequence 7; F(10,55) = 30.10; p = 0.039]. Statistical analysis also revealed that the learning performance of the lesioned animals was poorer than that of the control group [sequence 3; F(10,55) = 32.67; p < 0.001], reflected in a decreased number of conditioned reactions across all tested sequences. However, treatment of the lesioned animals with the three doses of sage extract (50, 100 and 200 mg/kg) produced a highly significant, gradual increase in the number of conditioned reactions compared with the untreated lesioned animals [sequence 4; F(10,55) = 37.27; p < 0.001]. No significant differences were found between the sham group and the other tested groups (p > 0.05).
Discussion
Major depression, memory loss and atrophy of the hippocampus are among the main deteriorations observed in Alzheimer's disease (Byers and Yaffe, 2011; Schweitzer et al., 2002), and none of the current treatments can successfully cure the disease, even at an early stage.
On the other hand, many herbs, such as S. officinalis, have pharmacological activities relevant to dementia therapy (Howes et al., 2003). Hence the need to evaluate the antidepressant and anxiolytic-like activities of S. officinalis and its effect on learning in the presence or absence of a hippocampal lesion.
Our present study has shown, for the first time, an antidepressant-like effect of the methanolic extract of S. officinalis leaves in the forced swim test in rats. The i.p. injection of the plant extract produced a marked reduction in immobility time, comparable to that of the reference antidepressant drug imipramine. These findings demonstrate the antidepressant-like effect of this plant at the different tested doses (50, 100 and 200 mg/kg), which could be due to its richness in polyphenolic compounds, particularly rosmarinic and caffeic acids, as confirmed by our HPLC analysis and supported by a previous study (Farhat et al., 2013). Other results have shown that rosmarinic and caffeic acids, the major phenolic compounds of sage, exert antidepressant-like effects in an animal model of depression and reduce the duration of immobility in the forced swimming test (Kondo et al., 2015; Takeda et al., 2002). Furthermore, our data showed that the methanolic extract possesses an antidepressant-like effect even in the presence of an electrolytic lesion of the hippocampus. This could be explained by the neuroprotective effect of rosmarinic acid against corticosterone (Sasaki et al., 2013); in earlier studies, rosmarinic acid showed antidepressant activity via regulation of Mkp-1, whose hippocampal expression can be increased by the high glucocorticoid levels found in depression (Kim et al., 2005), and via modulation of dopamine and corticosterone synthesis (Kondo et al., 2015). On the other hand, the extract had no significant effect on motor activity as assessed by the open-field test. These data demonstrate that the antidepressant-like effect of sage is specific and not due to a psychostimulant effect, which would be considered a false positive in the forced swimming test (Borsini et al., 1988).
Using the marble burying test, treatment with sage extract showed anxiolytic effects in hippocampal-lesioned and intact rats, reflected by a reduced number of buried marbles. Defensive burying in the marble burying test represents an active coping strategy in response to a discrete threat and has predictive validity for anxiety (Gorton et al., 2010). There is some evidence for possible effects of S. officinalis on anxiety: the results of a study on 30 healthy participants showed improved mood and reduced anxiety following administration of single doses of S. officinalis, attributed to the cholinesterase-inhibiting properties of this plant (Kennedy et al., 2006). Studies of other Salvia species have shown that S. leriifolia, S. reuterana Boiss and S. elegans induce anxiolytic effects in rats in the elevated plus maze (EPM) model (Herrera-Ruiz et al., 2006; Hosseinzadeh et al., 2008; Mora et al., 2006; Rabbani et al., 2005).
In addition, the present study indicates that the methanolic extract of S. officinalis enhances and improves learning in rats and offsets the negative effect of an electrolytic lesion in the hippocampus. These data are in agreement with other studies reporting that an ethanolic extract of S. officinalis leaves has a mnemonic effect in adult male rats (Eidi et al., 2006; Tildesley et al., 2005).
In our study, these effects were revealed by an increase in the number of conditioned responses during learning, suggesting that S. officinalis exhibits activity at central nervous system acetylcholine receptors, including nicotinic and muscarinic binding properties (Wake et al., 2000). In addition, several clinical and experimental studies (Eidi et al., 2003; Hasselmo, 2006; Herholz et al., 2005; Muir, 1997; Tang et al., 1997; Winters and Bussey, 2005) have demonstrated the important role of the cholinergic system in learning, memory and attention.
Degeneration of the cholinergic system is responsible for reduced acetylcholinesterase and choline acetyltransferase levels and an altered distribution of cholinoceptors in the brains of patients with Alzheimer's disease (Mufson et al., 2008).
Moreover, the bioactive compounds of sage possess anticholinesterase activity (Perry et al., 1996), and the interaction of the anticholinesterase, neuroprotective and antioxidant properties of rosmarinic acid (the antioxidant activity confirmed here through the DPPH, β-carotene and reducing power methods) may be responsible for the observed effects of S. officinalis (Hasanein et al., 2016; Hasanein and Mahtaj, 2015).
These data provide further evidence that the S. officinalis extract may be appropriate for the treatment of cognitive disorders (Houghton and Howes, 2005; Sallam et al., 2016) and might potentially provide a natural treatment for Alzheimer's disease (Akhondzadeh et al., 2003).
Conclusion
The methanolic extract of S. officinalis leaves shows beneficial effects on depression, anxiety and learning in rats, in the presence or absence of a hippocampal electrolytic lesion.
Ethical Issue
All experiments were carried out in accordance with the European Community Guidelines (EEC Directive of 24 November 1986; 86/609/EEC).All efforts were made to minimize animal suffering and to reduce the number of animals used.
Figure 1: Total phenols, total flavonoids and condensed tannins in the S. officinalis methanolic extract. The data represent the mean ± SD.
Figure 4: (A) Photomicrographs of coronal Nissl-stained sections from a representative hippocampus-lesioned rat. (B) Serial histological reconstructions of the representative extent (black) of electrolytic lesions in the rat hippocampus. Numbers indicate the distance posterior to bregma according to Paxinos and Watson, 1998.
Figure 6: Open-field activity of rats. Intact and hippocampal-lesioned rats were treated with S. officinalis (50, 100 and 200 mg/kg i.p.) or diazepam (1 mg/kg i.p.). (A,B) Number of crossings during the 5-min session. (C,D) Number of rearings during the 5-min session. Values are the means ± S.E.M. (n = 6). a p < 0.001, b p < 0.05 as compared to the normal control group.
Figure 7: Effects of S. officinalis methanolic extract (50, 100 and 200 mg/kg i.p.) or diazepam (1 mg/kg i.p.), in the presence or absence of a hippocampal lesion, on marble-burying behaviour in rats during 30 min. Results are expressed as means ± S.E.M. (n = 6 per group) | 2019-04-03T13:09:05.095Z | 2018-12-21T00:00:00.000 | {
"year": 2018,
"sha1": "a6a96ac89eb1fb469fb09e898493ca24ec47e8c7",
"oa_license": "CCBY",
"oa_url": "https://banglajol.info/index.php/BJP/article/download/38375/29551",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "a6a96ac89eb1fb469fb09e898493ca24ec47e8c7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
255457534 | pes2o/s2orc | v3-fos-license | Upregulation of Neuroinflammatory Protein Biomarkers in Acute Rhegmatogenous Retinal Detachments
The purpose of this study is to characterize the inflammatory cytokine profile in rhegmatogenous retinal detachments (RRDs) compared to surgical controls. Vitreous humor was collected from patients undergoing vitrectomy for RRD and noninflammatory vitreoretinal diseases. A quantitative immunoassay was used to measure the levels of 36 cytokine markers. Linear regression analysis with the duration of detachment as the predictor and log-transformed cytokine levels as the outcome was conducted for normally distributed cytokines as determined by the Shapiro–Wilk test. The analysis was adjusted for age, sex, and race. The Kruskal–Wallis test was used for cytokines not normally distributed. Twenty-seven RRD cases and thirteen control cases were studied. Between all RRDs and controls, fibroblast growth factor 2 (FGF2) (p = 0.0029), inducible protein-10 (IP-10) (p = 0.0021), monocyte chemoattractant protein-1 (MCP-1) (p = 0.0040), interleukin (IL)-16 (p = 0.018), IL-8 (p = 0.0148), IL-6 (p = 0.0071), eotaxin (p = 0.0323), macrophage inflammatory protein (MIP)-1 alpha (p = 0.0149), MIP-1 beta (p = 0.0032), and the thymus and activation regulated cytokine (TARC) (p = 0.0121) were elevated in RRD cases. Between acute RRDs (n = 16) and controls, FGF2 (p = 0.0001), IP10 (p = 0.0027), MCP-1 (p = 0.0015), MIP-1β (p = 0.0004), IL-8 (p = 0.0146), and IL-6 (p = 0.0031) were elevated. Determining alterations in inflammatory cytokine profiles may aid in understanding their impact on RRD development, clinical course, and complications before and after surgical repair.
Introduction
Rhegmatogenous retinal detachment (RRD) is a vision-threatening condition in which mechanical forces at the vitreoretinal interface lead to a break in the retina, allowing for the passage of fluid into the subretinal space, separating the retina from the retinal pigment epithelium (RPE) and choroid [1]. This condition affects 1 in 10,000 people per year and occurs more frequently in males [2,3]. RRD is the most common type of retinal detachment, and patients can present with symptoms including flashes of light, visual floaters, and loss of vision [2]. Risk factors for RRD include lattice degeneration, myopia, prior cataract surgery, prior retinal detachment, and trauma [3][4][5]. In adults, RRDs are typically an isolated eye condition and are not associated with any systemic diseases, in contrast to other types of retinal detachments such as tractional and serous retinal detachments, which can occur from advanced diabetic retinopathy and inflammatory eye conditions, respectively. In pediatric patients, over half of RRDs are associated with Stickler Syndrome, a systemic collagenopathy [6].
The goal of treatment of RRDs is to reattach the retina to the RPE with surgical techniques such as pneumatic cryopexy, scleral buckle, or pars plana vitrectomy [7]. In general, single-surgery success rates are high in these procedures, but each approach has its own indications and complications profile. After surgical repair, some patients have minimal to no visual improvement despite successful anatomic reattachment of the retina. There are numerous pre-operative predictive factors for the outcomes of RRDs. Studies have shown that worse pre-operative visual acuity (VA), macula-off status, longer duration of detachment, the presence of proliferative vitreoretinopathy (PVR), older age, male sex, and non-White race portend poorer visual outcomes after treatment [3,[8][9][10][11]. The etiology of poor visual outcomes is thought to be due to photoreceptor death from increased intraocular inflammation caused by prolonged detachment, especially within the macula or recurrent detachments requiring multiple repairs that can damage photoreceptors beyond recovery after reattachment [12][13][14]. Inflammation contributes to the formation of PVR, which is the most common cause of failure after RRD repair and leads to an increased risk of recurrent RRD. PVR is characterized by the development of contractile fibrocellular epiretinal or subretinal membranes and, at times, intrinsic intraretinal fibrosis [15][16][17]. The exact inflammatory response that contributes to the development of PVR after RRD repair is not fully understood.
The aim of this study is to further characterize the nature of inflammation in patients with RRD by evaluating their vitreous cytokine profile compared to controls, with results stratified based on the duration of detachment.
a. Inclusion Criteria
This study was approved by the Institutional Review Board of Boston Medical Center (BMC) and Boston University Medical Campus Institutional Review Board and adheres to the tenets of the Declaration of Helsinki. Patients enrolled in this prospective, cross-sectional study were 18 years or older with a primary language of English or Spanish and scheduled for a pars plana vitrectomy in at least one eye. For patients included in this study, the following demographic variables were collected from their medical charts: age, sex, and race. Self-reported racial categories included White, Black or African American, American Indian or Alaska Native, Asian, and Native Hawaiian or Other Pacific Islander according to the U.S. Census Bureau guidelines [34]. The study group included patients with RRD, and the control group included patients with non-inflammatory eye conditions including visually significant floaters, vitreomacular traction (VMT), macular hole (MH), epiretinal membrane (ERM), and subluxed crystalline lens with an intact capsule and no eye inflammation. All patients in both groups underwent vitrectomy alone with the exception of one combined case that included phacoemulsification, and in that patient, the vitreous specimen was retrieved prior to the phacoemulsification. Subjects enrolled in this study were part of a larger cohort of 95 participants that included patients requiring surgery for several indications, and cases that did not include a diagnosis of RRD were not included in this study [35].
Cases were grouped based on the duration of detachment, defined from the onset of symptoms such as flashes, floaters, decreased vision, and peripheral visual field loss, as either less than 2 weeks or greater than 2 weeks. Symptoms were used as a proxy for the duration of detachment since this was the only way to clinically estimate RRD duration. We stratified the RRD cases at 2 weeks because a prior study defined RRD as chronic beyond this point [36]: (1) less than or equal to two weeks (hereafter described as "acute RRD") and (2) greater than two weeks (hereafter described as "chronic RRD").
b. Biospecimen Collection and Analysis
Prior to starting the infusion during pars plana vitrectomy, 0.5-1.0 mL of undiluted vitreous fluid was aspirated by the vitrector into an attached 3 mL syringe [37][38][39][40]. Using a sterile technique, the syringe with the patient's vitreous fluid was capped and labeled with a study number. The samples were stored on ice during transportation, centrifuged for 5 min at 12,000 rpm, and aliquoted and stored at −80 C until analysis. At the time of analysis, 200 µL samples were prepared with 100 µL of vitreous fluid diluted 1:1 with 1% Blocker A (MSD #R3BA 4) in wash buffer. The Meso Scale Discovery MULTI-SPOT Assay System Neuroinflammation Panel 1 was used to complete a quantitative immunoassay for 36 neuroinflammatory factors. Duplicate samples were quantified, signal detection was conducted using a sulfo-tag conjugated secondary antibody, and analyte levels (pg/mL) were measured with an MSD SECTOR S 600 Imager. The samples were analyzed for the following proteins:
c. Statistical Analysis
Statistical analysis was conducted using SAS v 9.4. The Shapiro-Wilk test was used to determine the normality of log-transformed cytokine levels. If the cytokine levels were normally distributed, we used a linear regression model controlling for age, race, and sex to compare mean cytokine levels between groups. We did not adjust for lens status since it has been found that lens status does not significantly impact cytokine levels [19]. If the cytokine levels were not normally distributed, the nonparametric Kruskal-Wallis test was used to compare mean cytokine levels between groups. Concentrations of vitreous cytokines were log-transformed after adding 1 to achieve a normal distribution [41][42][43], given that the linear regression analysis requires normal distribution of the outcome variable (cytokine levels). Log transformation is commonly used in biomedical research [44][45][46][47] and allows the use of a normal distribution to model continuous outcomes in skewed data. Since the classic bell-shaped normal distribution does not always describe observed data in real life, log transformation converts the skewed data into a more normally distributed dataset compared to the data prior to log transformation. As a result, parametric tests such as linear regression models can be used to analyze the data.
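To illustrate this modeling step, the sketch below applies the log(x + 1) transform and fits a covariate-adjusted linear model with Python's statsmodels rather than the study's SAS workflow; the data frame is simulated placeholder data, and the variable names (mcp1, group, age, sex, race) are hypothetical labels, not the study's actual dataset.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated vitreous data: one row per eye; 27 RRD cases and 13 controls
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "mcp1": rng.lognormal(6, 1, 40),  # pg/mL, right-skewed like raw cytokines
    "group": ["RRD"] * 27 + ["control"] * 13,
    "age": rng.integers(40, 85, 40),
    "sex": rng.choice(["F", "M"], 40),
    "race": rng.choice(["White", "Black", "Other"], 40),
})

# log(x + 1) transform, then a linear model adjusted for age, sex and race
df["log_mcp1"] = np.log(df["mcp1"] + 1)
model = smf.ols("log_mcp1 ~ group + age + sex + race", data=df).fit()
print(model.summary().tables[1])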
In the primary analysis, we compared mean cytokine levels between RRD cases and controls. In the secondary analysis, we compared mean cytokine levels between acute RRD cases (≤2 weeks duration) and controls. We did not complete a subgroup analysis comparing chronic RRD cases against controls or acute cases due to the large variation in duration (3 weeks to 8 months), the smaller sample size, and the greater likelihood of recall bias by patients as the length of time from symptom onset and the chronicity of the retinal detachment increased. To correct for multiple comparisons, the p-values of cytokines were adjusted to account for potential type 1 errors using the false discovery rate (FDR), and we focused on the cytokines that were statistically significant (p ≤ 0.05) and within an FDR of 10%. We provided effect size and standard error for normally distributed cytokines analyzed with a linear regression model. We were not able to report effect sizes for cytokines analyzed with the Kruskal-Wallis test because SAS does not report effect sizes for this nonparametric test.
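The FDR adjustment described here corresponds to the Benjamini-Hochberg procedure; a minimal sketch follows, with a hypothetical list of raw p-values standing in for the cytokine panel.

from statsmodels.stats.multitest import multipletests

# Hypothetical raw p-values for a panel of cytokines
pvals = [0.0001, 0.0029, 0.0040, 0.0180, 0.2100, 0.4700, 0.6300]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.10, method="fdr_bh")
for p, q, r in zip(pvals, p_adj, reject):
    print(f"raw p = {p:.4f}  BH-adjusted = {q:.4f}  significant at 10% FDR: {r}")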
Fold changes represent ratios of mean log-transformed cytokine levels and were calculated by dividing mean log-transformed cytokine levels between cases and controls [48]. The following ratios (using log-transformed cytokine concentrations) were used to calculate fold changes: Acute RRD/controls and all RRD cases/controls.
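A small sketch of the fold-change definition just given, i.e. the ratio of mean log-transformed levels between groups; the cytokine values are hypothetical.

import numpy as np

def fold_change(case_levels, control_levels):
    # Ratio of mean log(x + 1) levels, cases over controls, as defined above
    case_mean = np.mean(np.log(np.asarray(case_levels) + 1))
    ctrl_mean = np.mean(np.log(np.asarray(control_levels) + 1))
    return case_mean / ctrl_mean

# Hypothetical FGF2 levels (pg/mL) in acute RRD cases vs. controls
print(round(fold_change([55, 70, 62, 80], [12, 9, 15, 11]), 2))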
Results
Of the 95 subjects enrolled in the study, 80 samples were collected. Fifteen subjects' samples were dropped due to the following reasons: inability to obtain the sample or insufficient sample collection (n = 5), loss to follow-up (n = 3), surgery cancellation (n = 4), mislabeled specimens (n = 2), and withdrawal of consent (n = 1). An additional forty subjects (out of eighty) were excluded as their surgical indication was not RRD or did not meet the criteria for the control group. In total, 40 patients were included in this study, comprising 27 subjects with RRD (11 subjects with chronic RRD and 16 with acute RRD) and 13 controls without RRD (Table 1). Demographic information of our subjects is listed in Table 2. The mean age of the controls was higher than that of the cases, but the difference was not statistically significant (p = 0.0926). White patients comprised just under 50% of cases and controls. RRD cases and controls were similar with respect to gender distribution (p = 0.1862). Four of the eleven chronic cases had preoperative PVR. Table 3 shows the mean cytokine levels before and after log transformation. The results of the primary and secondary analysis are shown in Table 4. Of the ten cytokines that significantly increased from controls to all RRD cases, six were found to be significantly upregulated in acute RRD cases. Fold changes in Table 4 were calculated using the mean log-transformed cytokine levels from Table 3.
Table 3. Mean and standard deviation (SD) of cytokine levels (pg/mL) before and after log-transformation.
Some of the cytokines analyzed in this study have been implicated in the inflammatory causes of photoreceptor death. MCP-1, released by Muller cells, induces resident microglia migration and subsequent microglia activation and secretion of cytotoxic factors [49]. Activated microglia and dead photoreceptors promote a further increase in MCP-1 levels in a pro-inflammatory positive feedback loop [50]. The presence of other cytokines from this study, including IL-1β, IL-6, IL-7, and TNF-α, was found to increase MCP-1 levels, further contributing to photoreceptor death [12]. Of the aforementioned cytokines, MCP-1 is also upregulated in the acute phase of RRD. By measuring cytokine levels at various time points after detachment, it may be possible to determine which cytokines are involved early in the feedback loop and guide further studies in preventing photoreceptor death.
As previously mentioned, inflammation contributes to the formation of PVR, the most common cause of surgical treatment failure. While this study identifies a subset of cytokines upregulated within the first two weeks of detachment, correlating cytokine levels within this subset with those involved in the later development of PVR may provide insight into the biochemical pathways associated with PVR. Understanding the cytokines that trigger the cascade of fibrosis in some postoperative eyes but not others will help further work in preventing PVR in order to achieve better surgical and visual outcomes.
Among the interleukins tested, IL-8 and IL-6 were found to be upregulated between RRD cases and controls. IL-6 is known to stimulate B-cells, hematopoiesis, and the production of acute-phase proteins [51,52]. IL-6 receptor blockers reduced subretinal fibrosis and prevented PVR by reactivating the platelet-derived growth factor. IL-8 is produced by phagocytes and mesenchymal cells and activates, recruits, and promotes extravasation and the respiratory burst of neutrophils. Several studies suggest that chemoattraction and neutrophils are involved in the retinal detachment and PVR disease processes [20,22]. Furthermore, Takahashi et al. hypothesized that IL-8 levels could reflect the level of photoreceptor damage given that IL-8 was found to be positively correlated with the extent of detachment, and photoreceptor cell damage indirectly increases IL-8 expression [23].
Non-interleukin growth factor cytokines significantly upregulated in acute RRD include FGF2. FGF2 stimulates endothelial cells, promotes angiogenesis and wound healing, and leads to the proliferation of Müller cells, retinal astrocytes, and retinal pigment epithelial cells [53,54]. Multiple studies have demonstrated that FGF2 levels are elevated in patients with PVR but not in patients with primary RDs without PVR (Table A2) [26,27,55,56]. In this study, FGF2 was elevated in acute RRD, but no cases developed PVR after surgical repair, possibly because the timing of surgical intervention in the acute stage prevented its development. Future studies may consider stratifying FGF2 levels by the duration of detachment and measuring FGF2 levels after surgical repair in PVR to further study the involvement of FGF2 in RRD development and progression.
IP10, MCP-1, and MIP-1β are non-interleukin cytokines involved in monocyte chemotaxis and activation and were upregulated in this study. Yang et al. found that MCP-1 can activate monocytes that induce RPE apoptosis and increase levels of intracellular calcium and reactive oxygen species [57]. MCP-1 may lead to photoreceptor death and poor visual outcomes after successful anatomic repair of RRDs. Similarly, MIP-1β promotes the migration and adhesion of macrophages and microglia [58,59]. Additionally, IP-10 is a pro-inflammatory chemoattractant for monocytes and macrophages and functions as an anti-angiogenic and antifibrotic agent (Table A2) [20][21][22]. Therefore, while IP-10 may attract leukocytes to the inflamed area of retinal detachment, it may also counteract the fibrotic actions of MCP-1 and MIP-1β, as proposed by Takahashi et al. [23].
The strength of this study lies in isolating acute RRD cases by the duration of detachment and analyzing cytokine profiles while adjusting for demographic variables, particularly patient race. Because most previous studies use nonparametric tests that cannot adjust for demographic covariates, adjusting for demographic variables, such as race and sex, has rarely been performed in most prior studies (Table A2) [18,[22][23][24]30]. Furthermore, by applying an a priori false discovery rate of 10% to correct for multiple comparisons, we were conservative in our approach by focusing on cytokines that met this threshold. Our study also had limitations. The most significant limitation is our sample size of 27 RRD cases and 13 controls; however, our sample sizes are comparable to prior studies of vitreous cytokine profiles in RRD (Table A2) [18,[20][21][22][23][24][25][26]28,32]. There is a 3:1 ratio of males to females among RRD cases due to the low sample size, and because more males than females consented to the original study [40]. Subgroup analysis of chronic RRD (greater than 2 weeks duration) was not performed in this study because we were limited by the heterogeneity of the chronicity (range of duration of detachment: 3 weeks to 8 months) and small sample size (n = 11). Additionally, patient-reported durations of detachment for chronic cases were very likely subject to greater recall bias as the length of time from symptom onset and the chronicity of the retinal detachment increased. Another potential limitation is the use of symptoms as a marker of the duration of detachment, which relies on subjective patient reporting. However, in clinical practice, symptom duration is the only proxy available for approximating the duration of RRD and informs decision making for the type of surgical intervention and timing of surgery. Lastly, the error in the cytokine level exceeds the mean cytokine level in some cases (Table 3). However, log transformation addressed this issue by converting the original dataset into one that is more normally distributed. Future studies may be able to investigate the utility of the vitreous cytokine profile in RRD to ascertain the duration of detachment without relying on the patient-reported onset of symptoms.
In conclusion, we corroborated the findings of elevated cytokines in RRD and identified a subset of inflammatory markers that may be early markers of RRD. Our findings may be foundational for future studies aiming to elucidate the effects of these cytokines on visual and anatomic outcomes after surgical repair, understand the pathogenesis of long-term consequences, such as PVR, and identify potential targets for the prevention of those complications as well as therapeutic interventions.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data presented in this study are available upon request from the corresponding author. The data are not publicly available due to their containing information that could compromise the privacy of research participants.
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Appendix A Table A1. Abbreviations and the corresponding full form of the biomarkers examined in the present work or in the literature cited in Table A2. | 2023-01-06T16:02:44.637Z | 2022-12-31T00:00:00.000 | {
"year": 2022,
"sha1": "824850d120c72a46961f1d377d21dbad32db5d45",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-1729/13/1/118/pdf?version=1672479076",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "398bb2158208e60e8654db07d25d7e26a94c7a05",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
238709552 | pes2o/s2orc | v3-fos-license | Suplementi u prehrani zdravstvenih djelatnika tijekom pandemije COVID-19 Food supplements in healthcare professionals’ diet during COVID-19 pandemic
Aim: The study aims to investigate the impact of the COVID-19 pandemic on the frequency of healthcare professionals' food supplement consumption. Subjects and Methods: The study was conducted in December 2020 and January 2021 in the City of Zagreb and comprised a total of 279 healthcare professionals (physicians, nurses/technicians, pharmacists) affiliated with the HC "Center". Data were collected via a questionnaire adapted to the study's purposes. Differences between the group which changed its food supplementation consumption during the COVID-19 pandemic and the group that did not change it were tested using the χ2 test. P-values beneath 0.05 were deemed statistically significant. Results: The results reveal the consumption of some food supplements to be a fairly strong habit among healthcare professionals. The COVID-19 pandemic urged one third of them to start taking food supplements (11.5%) or to increase the amount and frequency of their use (21.9%). As for vitamins, during the pandemic healthcare professionals have taken more C (P=0.001), D (P=0.001), and B complex vitamins (P=0.048). The major increase was seen with the D vitamin, whose daily consumption rose by 3.63. Significant differences in the consumption of minerals, proteins, and amino acids, noticeable between the group that changed its food supplementation habits and the one that did not change, arose primarily due to the changes in magnesium and zinc intake (P<0.001). On top of that, a significant rise in beta-glucan (P<0.001), ginkgo biloba (P=0.012), collagen (P=0.038), and homeopathic preparations' intake was documented (P=0.006). Conclusion: The COVID-19 pandemic significantly impacts food supplements' use among healthcare professionals. Based on the current knowledge and dietary recommendations, during the pandemic, the focus should be shifted to healthy diet principles. Daily vitamin, mineral, protein, and antioxidant needs should be satisfied through a variety of foods. In case of an increased risk of COVID-19 disease or deficiency of certain nutrients, food supplements should be introduced, too.
Introduction
Food supplements are preparations produced from concentrated nutrient sources or other substances with a nutritional or physiological effect, whose purpose is to enrich the usual diet with the aim of maintaining health [1].
In 2020, the global food supplement market was estimated at 140.3 billion dollars, with a projected annual increase of 8.6% within the 2021-2028 timeframe. The most substantial proportion of these supplements is represented by vitamins (31.4%), followed by herbal preparations, minerals, proteins, amino acids, and creatine. Some of the factors behind such a large market and the increase in use witnessed over the years are increased awareness of the importance of health, especially in the context of the modern "rush through life", an increasing proportion of the elderly, increasing interest in a healthier diet, particularly among the labour force for the purpose of disease prevention, and the increasing popularity of gyms and various fitness centres. The onset of the COVID-19 pandemic has increased the interest in food supplements even more, in the hope of preventing and curing SARS-CoV-2 infection [2][3][4], which is also to be attributed to the influence of the media and social networks [5].
An optimal diet can improve health and reduce the risk of disease and COVID-19-associated morbidity. Dietary vitamins and minerals may contribute to the reinforcement of the immune system, which is most important in this time of the pandemic. Current recommendations highlight the importance of a sufficient intake of vitamins C, B6, and B12, folates, vitamins A and D, zinc, iron, and selenium. It is, therefore, recommended to stick to a well-balanced diet that includes fresh and unprocessed foodstuffs and avoids excess sugar, fat, and salt intake, combined with appropriate physical activity, plenty of sleep, and stress avoidance [6][7][8].
Healthcare professionals, engaged in several fields dealing with health improvement, disease prevention, therapy, and rehabilitation and occupying different work posts, are often overstrained and deprived of well-balanced meals. The emergence of the COVID-19 pandemic represents a novel challenge healthcare professionals have to cope with. Under these circumstances, it is important to take care of immune status, given that the pandemic began to spread widely in the winter period, when certain micronutrient deficiencies are more striking and more common.
Accordingly, this study aims to investigate the impact of the COVID-19 pandemic on the frequency of supplement use in healthcare professionals' diet.
Subjects and Methods
The study was carried out in December 2020 and January 2021 in the City of Zagreb and comprised a total of 279 healthcare professionals affiliated with the HC "Centar". The data were collected using an anonymous questionnaire [9] adapted for the needs of this research. The first part of the questionnaire contained sociodemographic features of the respondents and posed questions about self-appraisal of dietary habits, physical fitness, and physical activity frequency. The frequency of food supplements consumption was defined as "daily", "4-6 times a week", "once a week" or "none".
Prevalence rates of individual food supplements were expressed as absolute figures and matching percentage shares. Differences between the group that changed its food supplement intake pattern during the COVID-19 pandemic and the group that did not were tested using the χ2 test. P-values below 0.05 were considered significant. IBM SPSS Statistics for Windows, version 25.0, was used for the analysis and graphical presentation.
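The group comparison described here is a standard chi-square test of independence; a minimal sketch is given below using SciPy rather than SPSS, where the 2x2 contingency counts are hypothetical and only illustrate the mechanics of the test.

from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = changed supplementation (yes/no),
# columns = two categories of a respondent characteristic
table = [[60, 33],   # changed intake
         [95, 91]]   # no change
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")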
Results
The study comprised a total of 279 healthcare professionals, of whom 215 (77.1%) were female and 58 (20.8%) were under the age of 30. A total of 113 (40.5%) were vocational school graduates, while the number of nurses/technicians equalled 148 (53.0%). Aside from nurses/technicians, the most represented examinee group were physicians (64; 22.9%). Healthy dietary habits were self-reported by 125 (44.8%) examinees, while the majority of subjects, 173 (62.0%), fell within the normal body mass index range (18.5-24.9 kg/m2). Forty-three subjects (15.4%) were in poor physical shape, while 47 examinees (16.8%) claimed to be frequently and intensely active (daily or >90 min several times a week).
The distribution of answers to the question about the impact of the COVID-19 pandemic on food supplement intake in healthcare professionals is illustrated in Figure 1.
One third of the healthcare professionals, 93 of them (33.3%), claimed that COVID-19 affected their daily food supplement intake. Of these, one third (32) currently take food supplements which they had never taken before, while 61 have increased their consumption in terms of quantity and/or frequency compared to the pre-pandemic era. Table 1 shows the sociodemographic characteristics of the respondents and the differences between the group that changed its food supplement intake pattern during the COVID-19 pandemic and the group that failed to do so. Older respondents mostly fall within the group that changed its food supplementation pattern during the pandemic (P=0.001). The same applies to respondents in poorer physical shape (P=0.001) and those who do not exercise intensely (P=0.017). As for vitamin intake, during the COVID-19 pandemic healthcare professionals have increased their vitamin C (P=0.001), vitamin D (P=0.001), and B complex vitamin intake (P=0.048). The most substantial increase is noticeable with the daily vitamin D intake, which rose 3.63 times (Table 2).
Significant differences in mineral, protein, and amino acids' intake between the group that changed its food supplementation pattern during the pandemic and the group that failed to do so, were attributable to changes in magnesium and zinc intake (P<0.001) ( Table 3). As for other food supplements, a significant intake rise was seen with beta-glucan (P<0.001), ginkgo biloba (P=0.012), collagen (P=0.038), and homeopathic preparations (P=0.006) ( Table 4).
Discussion
The global popularity of complementary and alternative therapies is increasing. The onset of the COVID-19 pandemic gave rise to interest in food supplements, functional foods, and herbal preparations, which are increasingly sought after in the hope that they might be effective in the prevention and treatment of the disease [10].
This was confirmed by our study carried out among healthcare professionals, who have a marked habit of taking food supplements; in fact, this was the case in more than three quarters of our examinees (77%). One third of them started taking food supplements under the influence of the COVID-19 pandemic (11.5%) or increased their intake for the very same reason (21.9%).
The analysis of the sociodemographic features of our respondents and the differences between those who changed their food supplementation habits during the COVID-19 pandemic and those who did not revealed that major changes in food supplementation habits occurred among older respondents, respondents in poorer physical shape and respondents not keen on physical activity. During the pandemic, healthcare professionals have significantly increased their consumption of vitamins C, D and B complex (over three times in the case of vitamin D) and their consumption of magnesium and zinc. Other supplements (beta-glucan, ginkgo biloba, collagen, and homeopathic preparations) have also been taken in larger quantities.
Studies on dietary habits of healthcare professionals, food supplementation included, are scarce and mostly conducted in the US. Dickinson et al. [11] reported that 72% of physicians and 89% of nurses consume food supplements on a regular, time-to-time, or seasonal basis. A study conducted among American cardiologists, dermatologists, and orthopaedics prior to the COVID-19 pandemic showed that they started to take food supplements, but also tend to increasingly recommend them to their patients [12].
The COVID-19 pandemic has prompted research on how to overcome this emergency and strengthen immunity. Some of this research has suggested that food supplementation with vitamins A, B, C, and D and minerals such as selenium, zinc, and iron, together with omega-3 fatty acids and melatonin, could play a role in COVID-19 treatment and, even more importantly, in the prevention of upper respiratory tract infections [6,13,14]. The most recent study, involving nearly 450,000 participants, has demonstrated a significant relationship between the consumption of food supplements (multivitamins, vitamin D, probiotics, omega-3 fatty acids) and COVID-19 risk reduction, but only in women, not in men [15].
Similar to our study, a study carried out among healthcare professionals (nutritionists) in Turkey revealed an increase in the number of participants who started taking food supplements during the pandemic (14%) [10].
Pérez-Rodrigo et al. [16] highlight the significant supplement (vitamin and mineral) intake during the COVID-19 pandemic and the Spanish lockdown, in particular among women and persons aged 35-54. As in our study, the most frequently consumed food supplements were multivitamins, vitamin D, vitamin C, and vitamin B12, while the most often taken minerals were zinc, iron, and selenium. As for other food supplements, the consumption of brewer's yeast, omega-3 polyunsaturated fatty acids and collagen has been reported, too. At the same time, a Chinese study conducted during the pandemic suggests that over one third of respondents (37.7%) consume supplements such as vitamin C, probiotics and so forth [17].
Dietary habits are influenced by the level of education and socioeconomic factors. Less educated and underpaid persons have poorer dietary habits, noticeable in decreased fruit and vegetable consumption and increased intake of foodstuffs rich in sugar, fats, and salt [18,19]. One of the rare studies carried out in Croatia in a group of nurses showed that only 7% of them prefer the Mediterranean diet as a role model of a healthy diet [20], while another research showed a decreased dietary intake of leafed vegetables [21].
The examinees in our study are healthcare professionals varying in their level of education (vocational school graduates, college graduates, higher school graduates) and income. Because of their education and their continuous following of scientific and professional literature, but also given their previously reported poor dietary habits [20,21], it is understandable that many healthcare professionals started or increased their food supplement intake under the influence of the COVID-19 pandemic.
Conclusion
The COVID-19 pandemic has prompted one third of our subjects to start taking food supplements or to take more of them. Based on the latest scientific knowledge and dietary recommendations in times of pandemic, more attention should be focused on healthy diet principles, and daily vitamin, mineral, protein, and antioxidant needs should be met through the consumption of a variety of foods. In case of an increased COVID-19 risk or nutrient deficiencies, targeted food supplementation would be in order.
The authors declare no conflicts of interest. | 2021-08-27T17:22:54.548Z | 2021-08-19T00:00:00.000 | {
"year": 2021,
"sha1": "76e29d186833ede3a7710f9d31c5f1ebfcdb3c1f",
"oa_license": null,
"oa_url": "https://hrcak.srce.hr/file/380298",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "ffec4a92a78f52bd7a64856261b1e46cae7824cc",
"s2fieldsofstudy": [],
"extfieldsofstudy": []
} |
55079583 | pes2o/s2orc | v3-fos-license | Fermentation kinetics of carbohydrate fractions of maize grains as determined by in vitro gas production curve subtraction technique *
The objective of this study was to characterize the fermentation kinetics of different carbohydrate fractions of maize grains of Chinese varieties using the in vitro gas production curve subtraction technique. Ten maize grain samples were extracted with either 80% ethanol or neutral detergent to obtain ethanol-insoluble residue (EIR) and isolated neutral detergent fibre (NDF). Unfractionated maize grain (UCG), EIR and NDF were then fermented in vitro and the gas production was recorded. Because the fermentation characteristics of fraction A (sugars and organic acids) and B1 (starch and soluble fibre) were not directly measurable, a curve subtraction technique was used to evaluate the gas production and fermentation of these soluble fractions. The results showed: 1. the proportions of the different carbohydrate fractions averaged 10.3±1.1, 78.3±1.2, 11.8±0.6 and 0.6±0.2% on a DM basis for the A, B1, B2 (digestible fibre) and C (indigestible residue) fractions, respectively; 2. there were high correlation coefficients among the gas production of UCG and the A, B1 and B2 fractions; 3. there was no significant difference between the observed kinetics parameters and those predicted by the Gompertz function for the A and/or B1 fractions; and 4. the A fraction had the significantly highest acetate to propionate ratio among the fractions studied while the B2 fraction had the lowest. In conclusion, the in vitro gas production curve subtraction technique provides a suitable tool to evaluate the fermentation kinetics of the soluble carbohydrate fractions of maize grains.
* Supported by the Project of 'MATS-Beef Cattle System' and the China National Supporting Projects, No. 2006BAD12B02-06 and 2008BADA7B04
INTRODUCTION
With the widespread application of the Cornell Net Carbohydrate and Protein System (CNCPS) model, there is increasing interest in CNCPS use in China. In the CNCPS model, carbohydrate is divided into four fractions: A - sugars and organic acids, B1 - starch and soluble fibre, B2 - digestible fibre and C - indigestible residue (Sniffen et al., 1992; Chen et al., 1999). Experimentally, the carbohydrate of maize grains can be fractionated into pools of different size, including ethanol-insoluble residue (EIR) and NDF, and the digestion kinetics of the individual fractions can then be determined by an in vitro gas production method (Menke and Steingass, 1988).
Maize is the second cereal crop in Chinese agriculture and provides a major energy feed ingredient for ruminant feeding. The digestion kinetics of feed or dietary DM and NDF may be determined by standard in vitro or in situ methods (Ørskov and McDonald, 1979). However, such methods generally cannot be directly utilized to evaluate the fermentation kinetics of the soluble fraction. Therefore, an indirect method - the gas curve subtraction method - has been developed (Schofield and Pell, 1995; Calabrò et al., 2001). This method is very important in determining fermentation kinetics for the application of the CNCPS. However, little has been studied on the fermentation kinetics of the different carbohydrate fractions of cereal concentrates, and limited information is available on the VFA profiles after fermentation of the various carbohydrate fractions of maize grains used as a ruminant feed. In China, although some research has been conducted on the determination of the fermentation characteristics of forage carbohydrate fractions (Zhao et al., 1994; Liu et al., 2002), there has been less research on the determination of the fermentation kinetics of cereal grains for CNCPS application in ruminant feeding.
The objectives of the present paper were to partition the carbohydrate fractions of maize grains of Chinese varieties and to determine the fermentation kinetics of these fractions by the in vitro gas production curve subtraction technique.
Sample preparation and carbohydrate fractionation schemes
Samples of ten different maize varieties (harvested at the maturity stage), collected from different regions of China, were used in this study. The sample set was selected to provide samples that varied in fibre content according to a laboratory-scale wet-milling procedure (Eckhoff et al., 1996). The mean fibre content obtained using this wet-milling method was 11.4%, with a range from 9.5 to 13.3%. All of these samples were ground to pass a 1-mm screen and saved for analysis. For each sample, the DM concentration of maize grain was determined by oven-drying at 105°C (AOAC, 1990). The content of the A fraction was determined as the difference between unfractionated maize grain (UCG) and EIR, and the B1 fraction as the difference between EIR and NDF (NDF being the sum of the B2 and C fractions). Analogously, to obtain the gas production and fermentation kinetics of the A fraction, the gas production from the EIR fermentation was subtracted from that of UCG at each scheduled time point. With regard to the B1 fraction, a similar approach was used by subtracting the isolated NDF gas production from the corresponding EIR gas production. The gas produced from the NDF residue represents that from the B2 fraction, assuming that the C fraction cannot be fermented or utilized by microorganisms.
Ethanol-insoluble residue analysis and preparation
About 0.5 g of maize grain sample of each variety was loaded into a tared fibre bag and sealed. The bags were then dipped in 80% ethanol (v/v) at room temperature with continuous stirring at about one rotation per second for 4 h. After that, the bags were rinsed three times with 80% ethanol and once with acetone. The EIR was dried in a 100°C oven, and its percentage of the UCG was calculated. The UCG and the EIR were analysed for crude protein (CP), ether extract and ash according to Chen et al. (1999).
The ethanol extraction method described above was used to prepare EIR samples for fermentation, except that the bags were dried at 60°C. The residues were saved for the fermentation kinetics study (Chen et al., 1999).
Neutral detergent fibre analysis and preparation
To avoid starch contamination of the fibre residue, a modified NDF procedure (Van Soest et al., 1991) was used in this study. For each of the ten varieties, about 0.5 g of maize grain sample was loaded into a tared fibre bag and sealed. The bags were then dipped in 100 ml neutral detergent (ND) plus 0.1 ml of heat-stable α-amylase and 0.5 g of sodium sulphite. After boiling the samples for 1 h, they were washed with hot distilled water (several times, until no foam was observed), ethanol (three times) and finally acetone, dried in a 100°C oven, and the NDF percentage of the maize grain was calculated (Pell and Schofield, 1993; Calabrò et al., 2005).
To obtain the NDF residue for fermentation, the same procedure was used except that the bags were dried in a 60°C oven overnight and the NDF residues were saved for subsequent fermentation (Pell and Schofield, 1993; Chen et al., 1999).
In vitro gas production
All treatments involving animals were conducted under approval of the China Agricultural University Institutional Animal Care and Use Committee.
The in vitro incubation for gas production followed the procedure described by Menke and Steingass (1988). Three cannulated mature Simmental × Luxi yellow cattle (27 months of age, 600 ± 40 kg average body weight) were fed ad libitum a total mixed ration of concentrate (40% of DM), maize silage (35% of DM) and lucerne hay (25% of DM) twice daily for 8 d before rumen fluid collection. Rumen fluid, collected 2 h after the morning feeding, was quickly filtered through four layers of gauze into a bottle. All laboratory handling of rumen fluid was carried out under a continuous flow of CO2.
UCG, EIR and NDF of each maize grain sample were accurately weighed (150 ± 10 mg) into 100 ml glass syringes fitted with plungers. In vitro incubations were conducted in two consecutive runs, each involving triplicate samples. Syringes were filled with 30 ml of medium consisting of 10 ml rumen fluid and 20 ml buffer solution as described by Menke and Steingass (1988). Three blanks containing 30 ml of medium only were included in each assay as controls. The syringes were then incubated in a thermostat incubator (39°C) and the gas production was recorded at 1, 2, 3, 4, 5, 6, 8, 10, 12, 16, 20, 24, 28, 32, 36, 40, 44 and 48 h of incubation. Gas production was not recorded after 48 h because of complete fermentation of the diet according to Wallace et al. (2001) and González-García et al. (2008). At the end of fermentation (48 h), the culture fluid (30 ml) of each syringe was carefully removed into centrifuge tubes (20 ml) and pH values were immediately measured. The remaining fluid (1 ml) was acidified with 25% metaphosphoric acid (200 µl) containing 2-ethylbutyric acid as an internal standard, and then frozen at -20°C. Upon thawing, the fluid sample was centrifuged at 10 000 g for 15 min to remove any particulate matter, and the supernatant was used for VFA analysis.
Volatile fatty acid analysis
The supernatant samples (0.6 µl portions) were analysed for acetic, propionic, isobutyric, butyric, isovaleric and valeric acids according to the method of Li and Meng (2006) by gas chromatography (Agilent 6890N) with a 30-m HP-INNOWax 19091N-213 (Agilent) capillary column (0.32 mm i.d. and 0.50 µm film thickness) in split mode (ratio 1:100). Nitrogen was used as the carrier gas at a flow rate of 2.0 ml/min and 2-ethylbutyric acid was used as an internal standard. The chromatograph oven was programmed as follows: 120°C for 3 min, then a 10°C/min increment to 180°C, held for 1 min. The injector port and FID detector were maintained at 220 and 250°C, respectively.
Curve subtraction and calculation of kinetics parameters
The curve subtraction technique was utilized to obtain the kinetics parameters of all the carbohydrate fractions fermented in vitro. The gas production of UCG and of its respective EIR and NDF fractions was adjusted to represent the amount of each fraction present in the initial UCG sample (Calabrò et al., 2005).
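For illustration, the subtraction step can be sketched in Python; all gas volumes below are placeholders, not data from this study:

import numpy as np

# Incubation times (h) at which gas volumes were recorded.
times = np.array([1, 2, 3, 4, 6, 12, 24, 48])

# Hypothetical cumulative gas volumes, already normalized to
# ml per 100 mg DM of the original UCG sample.
gas_ucg = np.array([3.0, 6.0, 9.0, 12.0, 17.0, 27.0, 35.0, 39.0])
gas_eir = np.array([1.0, 3.0, 5.5, 8.0, 12.5, 22.0, 29.5, 32.0])
gas_ndf = np.array([0.1, 0.3, 0.6, 0.9, 1.4, 2.3, 3.1, 3.6])

# Curve subtraction: gas attributed to each fraction.
gas_a = gas_ucg - gas_eir    # A fraction  = UCG - EIR
gas_b1 = gas_eir - gas_ndf   # B1 fraction = EIR - NDF
gas_b2 = gas_ndf             # B2 fraction = NDF (C assumed unfermentable)

for t, a, b1, b2 in zip(times, gas_a, gas_b1, gas_b2):
    print(f"{t:>2} h  A: {a:5.2f}  B1: {b1:5.2f}  B2: {b2:5.2f} ml/100 mg DM")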
To describe the dynamics of gas production over time, the Gompertz function (Schofield et al., 1994) was chosen to calculate the fermentation parameters. In the present study, some different letters were used according to Liu et al. (2002), so the equation is as follows:

Y = B × exp{−exp[1 + (C × e/B) × (LAG − t)]}

where: Y - cumulative gas volume of the incubated substrate (adjusted to the respective fraction of 100 mg UCG), B - the theoretical maximum of gas production (at t = ∞), C - the maximum rate of gas production (ml/h) that occurs at the point of inflection of the curve, LAG - the lag time of fermentation (h), t - incubation time (h), and e - the base of the natural logarithm.
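Assuming the Gompertz parameterization given above (reconstructed from the parameter definitions in the text), a minimal curve-fitting sketch in Python with scipy, using placeholder gas data rather than values from this study, is:

import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, B, C, LAG):
    # B = asymptotic gas volume, C = maximum rate at the inflection
    # point, LAG = lag time, as defined in the text above.
    return B * np.exp(-np.exp(1.0 + (C * np.e / B) * (LAG - t)))

t = np.array([1, 2, 3, 4, 6, 12, 24, 48], dtype=float)
y = np.array([1.0, 3.0, 5.5, 8.0, 12.5, 22.0, 29.5, 32.0])  # placeholder

# Initial guesses: asymptote near the final observation, rate near
# the largest observed increment, lag of about 1 h.
p0 = [y[-1], np.max(np.diff(y) / np.diff(t)), 1.0]
(B, C, LAG), _ = curve_fit(gompertz, t, y, p0=p0, maxfev=10000)
print(f"B = {B:.2f} ml, C = {C:.3f} ml/h, LAG = {LAG:.2f} h")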
Curve fitting and statistical analysis
The criterion used to judge the fitness of a given model was the fit statistic (F-value, mean square of regression/mean square error) (Schofield et al., 1994; Stefanon et al., 1996). A larger F-value indicates a better fit of the gas data to the mathematical model (Chen et al., 1999). The observed parameters of gas production were obtained by the subtraction approach mentioned above. The corresponding average parameters of the A fraction were obtained as the values on the curve of UCG gas production minus those on the curve of EIR gas production. A similar approach was used to obtain the average parameters of the B1 fraction, calculated as the values on the curve of EIR gas production minus those on the curve of NDF gas production.
The observed and average kinetics parameters of the A and B1 fractions were compared using a paired t-test. The INSIGHT module of SAS Version 8.0 was used to analyse the correlation coefficients of the gas produced from the different fractions. The Gompertz function was used to obtain the dynamic fermentation parameters and the NON-LINEAR procedure of SAS V 8.0 was used to estimate these parameters. The one-way ANOVA procedure was used to analyse the acetate to propionate (A:P) ratio and the pH value of the incubation end-products.
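The original analyses were run in SAS; an equivalent sketch of the paired t-test and correlation steps in Python (scipy, with placeholder parameter values for ten varieties) might look like:

import numpy as np
from scipy import stats

# Placeholder B1-fraction gas volumes (ml/100 mg DM) for ten maize
# varieties: "observed" vs. "average" curve-subtraction estimates.
observed = np.array([27.1, 28.0, 26.5, 27.8, 27.2, 28.3, 26.9, 27.5, 28.1, 27.0])
average = np.array([27.0, 28.2, 26.4, 27.9, 27.1, 28.1, 27.0, 27.4, 28.2, 26.9])

# Paired t-test comparing observed and average parameters
# (the model-verification step described above).
t_stat, p_val = stats.ttest_rel(observed, average)
print(f"paired t = {t_stat:.3f}, p = {p_val:.3f}")

# Correlation between gas production values of two fractions.
r, p_r = stats.pearsonr(observed, average)
print(f"r = {r:.3f}, p = {p_r:.3f}")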
RESULTS AND DISCUSSION
Carbohydrate fractions and gas production. The amounts of the various fractions of maize grain are presented in Table 1. The percentage of the A fraction (% of DM) was calculated as the difference between UCG and EIR. The content of the A fraction averaged 10.3 ± 1.1%, which differed from the 7.1% reported by Chen et al. (1999). That might be due to the different varieties of maize grain used in the experiments. However, both values overestimate the A fraction content, since ethanol extraction also removes ether extract, some crude protein and a trace amount of ash along with the A fraction. Owing to this chemical heterogeneity and solubility, it was impossible to measure the gas production of the A fraction directly. Therefore, as proposed by several authors (Pell et al., 1997; Chen et al., 1999; Calabrò et al., 2005), the fermentation of the 80% ethanol-soluble fraction corresponds to that of the A fraction and may be estimated approximately by subtracting the gas production of the EIR from that of the UCG.
As mentioned above, the gas production subtraction technique was used to obtain the gas production data of the A and B1 fractions (A = UCG - EIR; B1 = EIR - NDF). The gas production curves of the UCG, A, B1 and B2 fractions (NDF gas production), normalized to the amount of the studied fraction contained in 100 mg DM, are shown in Figure 1. The accumulated gas volumes of UCG and the A, B1 and B2 fractions after 48 h of fermentation were 38.95, 7.20, 27.86 and 3.56 ml/100 mg DM, respectively, in agreement with previous research (Opatpatanakit et al., 1994; Chen et al., 1999). The difference in gas production between UCG and EIR changed little after 6 h of incubation (from 5.99 to 6.03 ml/100 mg DM), indicating that the fermentation of the A fraction had ended by 6 h. [Table 1 notes: 1 NDFD - neutral detergent fibre digestibility, calculated from the NDF disappearance during a 48 h fermentation of isolated NDF, with the residue assayed for NDF after fermentation; 2 the A fraction - sugars and organic acids, equal to 100 - ethanol-insoluble residue (EIR); 3 the B1 fraction - starch and soluble fibre, equal to EIR - NDF; 4 the B2 fraction - digestible fibre, equal to NDF × NDFD/100; 5 the C fraction - indigestible fibre, equal to NDF × (100 - NDFD)/100.] The content of the B1 fraction was 78.3 ± 1.2%. As expected, the B1 fraction, which consists mainly of starch with a little soluble fibre, accounted for the largest proportion of the carbohydrate of maize grain, and so contributed the most to the total gas production (Figure 1). The NDF fraction, comprising the B2 and C fractions, was 11.4 ± 0.6%. Odle and Schaefer (1987) and Herrera-Saldana et al. (1990) reported NDF contents of maize grain of 10.8 and 9.3%, respectively; the NRC (1996) value is 10.8 ± 3.6%. The slight discrepancies in NDF content among these studies might result from different varieties, environmental factors or incomplete removal of the soluble fractions during NDF analysis. We incubated the NDF to obtain the gas production of the B2 fraction, as Chen et al. (1999) reported, because the C fraction was very small and was assumed not to be fermented or utilized by microorganisms (Pozdíšek and Vaculová, 2008). The gas production curve of the B2 fraction came directly from the NDF gas data and made the smallest contribution to the total gas production (Figure 1).
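The fraction arithmetic in the table notes above can be expressed as a short Python sketch; the input percentages are illustrative, chosen only to roughly reproduce the reported means:

def carbohydrate_fractions(eir_pct, ndf_pct, ndfd_pct):
    """Partition maize grain DM into CNCPS carbohydrate fractions
    (percentages of DM), following the scheme in the Table 1 notes."""
    a = 100.0 - eir_pct                        # sugars and organic acids
    b1 = eir_pct - ndf_pct                     # starch and soluble fibre
    b2 = ndf_pct * ndfd_pct / 100.0            # digestible fibre
    c = ndf_pct * (100.0 - ndfd_pct) / 100.0   # indigestible fibre
    return a, b1, b2, c

a, b1, b2, c = carbohydrate_fractions(eir_pct=89.7, ndf_pct=11.4, ndfd_pct=95.0)
print(f"A = {a:.1f}%  B1 = {b1:.1f}%  B2 = {b2:.1f}%  C = {c:.1f}%")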
Correlation analysis of gas produced from carbohydrate fractions. Little is known about the correlation of gas produced from the different carbohydrate fractions of maize grain fermented in vivo or in vitro. Schofield and Pell (1995) reported that the gas produced by digestion of NDF is linearly related to the mass of fibre digested. Awati et al. (2006) reported that the cumulative gas produced at different time points showed a positive correlation with incubation time using the faecal inoculum of suckling piglets. In this study, the correlation coefficients among the gas production values of the different carbohydrate fractions were very high (Table 2). It should be noted that, as the fermentation of the A fraction finished rapidly (within 6 h), the correlation coefficients between the A fraction and the other parameters were computed at the 6 h point, while the correlation coefficients among the other parameters were computed at the 48 h point. [Table 2 notes: UCG - unfractionated maize grain; A fraction - sugars and organic acids, equal to 100 - ethanol-insoluble residue (EIR); B1 fraction - starch and soluble fibre, equal to EIR - neutral detergent fibre (NDF); B2 fraction - the digestible part of NDF; a - correlation coefficients between the A fraction and the other parameters are gas production values within the initial 6 h of fermentation, while those among the others are values at 48 h of fermentation.]

Digestion kinetics parameters. The average and observed gas production data were fitted to the Gompertz function to obtain the kinetics parameters. The observed and average kinetics parameters of the A and B1 fractions were compared for model verification. There was no difference between the parameters obtained using the observed and average gas production curves of the A or B1 fraction (Table 3). This demonstrated that the exponential model (Gompertz function) was suitable for this study. Subsequently, the kinetics parameters of maize grain and its carbohydrate fractions were estimated (Table 4).
Approximately, the gas production of UCG was equal to the sum of the gas produced by the A, B1 and B2 fractions, which showed that the division of maize grain into the three fractions was reasonable. The highest rate of gas production (0.33/h) and the shortest lag time (0.17 h), found for the A fraction, suggested that it could be fermented quickly, in agreement with Calabrò et al. (2005). However, Doane et al. (1998) argued that the fermentation rate of the A fraction was lower than that of the B1 fraction. The reason might lie in two aspects: one is the different experimental conditions; the other might be related to the different mathematical models used for gas production analysis. The fermentation rate of the B2 fraction was lower than that of the A or B1 fractions. The order of the fermentation rates of the three fractions is in accordance with the previous report of CNCPS version 5.0 (0.05, 1.5 and 0.18/h for the NDF, A and B1 fractions, respectively; Fox et al., 2003), although the values differ considerably. The discrepancy might be attributed to different varieties or maturity stages of the maize grain, or to the different chemical extraction methods employed in the studies. [Table 3 notes: 1 the observed gas production was computed by subtracting the actual gas production of the EIR fraction from that of the UCG; 2 the average gas production was estimated by subtracting the average gas production of the EIR fraction from that of the UCG; 3 SE - standard error; 4 the observed gas production was computed by subtracting the actual gas production of the NDF fraction from that of the EIR; 5 the average gas production was estimated by subtracting the average gas production of the NDF fraction from that of the EIR; B - the theoretical maximum of gas production, ml/100 mg maize grain DM; C - the rate of gas production, /h; LAG - the lag time, h.]

Volatile fatty acid production and pH. The VFA profiles and pH of the fermentation end-products were analysed at the end of fermentation (Table 5). The fermentation balance equation of Wolin (1960) may be used to compare the production of gas and VFA from glucose:

57.5 glucose → 100 VFA + 60 CO2 + 35 CH4

where VFA represents a mixture of acetic, propionic and butyric acids (Schofield and Pell, 1995). [Table 5 notes: A fraction - sugars and organic acids; B1 fraction - starch and soluble fibre; B2 fraction - digestible fibre; 1 the VFA profiles of UCG and EIR were measured at the end of the 48 h fermentation; 2 the VFA profiles of the A and B1 fractions were the differences between UCG and EIR, and between EIR and NDF, respectively; 3 the VFA profile of the B2 fraction was the measured data for NDF; 4 mean ± SD; A:P - acetate to propionate ratio; different letters in the same row differ significantly at P<0.05; e - the pH value of the B2 fraction was the pH of the NDF fermentation end-product.] Acetate, propionate and butyrate were the three main components of total VFA in all of the fractions studied. Because of the phosphate-bicarbonate buffering in vitro, 1 mol of VFA produces approximately 0.8 mol of gas (Beuvink and Spoelstra, 1992). In our study, a similar value of 0.9 mol of gas per mol of VFA was obtained (38.66/42.95; Tables 4 and 5).
The A fraction had the significantly highest acetate to propionate (A:P) ratio among the fractions studied and the B2 fraction the lowest. Generally, the A:P ratio is used to evaluate substrate-related fermentation differences (Schofield and Pell, 1995). Higher propionate is associated with lower gas production, because the extra carbon atom in propionate would otherwise have appeared as CO2, while acetate contributes to gas production (1 mol of acetate = 1 mol of CO2; Wolin, 1960; Calabrò et al., 2005). Many studies have examined differences in VFA profiles and A:P ratios in ruminal fluid from animals fed different feeds (Schofield et al., 1994). However, few studies have addressed the effect of a single fraction.
The results of this study provide, for the first time, alternative information on the VFA characteristics of single carbohydrate fractions after 48 h of fermentation. This could be useful for the analysis of the fermentation characteristics of carbohydrate fractions.
There was no significant difference among the pH values of the products at the end of fermentation. Generally, the pH of the in vitro fermentation is checked to detect whether its buffer capacity has been surpassed (González-García et al., 2008). The pH in the syringes in this study was maintained in the range of 6.4 to 6.7 for the different fractions. As a result, no effects were expected as a direct consequence of changes in buffer capacity.
As mentioned above, gas production mostly comes from the fermentation of carbohydrate fractions, because no ash fermentation occurs, no gas is available from fat fermentation, and little gas is produced from protein fermentation (Pozdíšek and Vaculová, 2008). Gas production techniques have a good potential to predict rumen OM degradation, in particular by providing kinetic information, and could be widely used to evaluate the rumen fermentation kinetics of feeds (Umucalilar et al., 2002; Pozdíšek and Vaculová, 2008). Combined with the CNCPS carbohydrate fractionation scheme (Sniffen et al., 1992; Chen et al., 1999), in vitro gas production technology can provide useful information on the fermentation kinetics of the different carbohydrate fractions of Chinese maize grains.
CONCLUSIONS
The in vitro gas production curve subtraction technique provides a suitable tool to evaluate the fermentation kinetics of the soluble carbohydrate fractions of maize grains. By using this approach, information on the contents of the carbohydrate fractions of Chinese maize grains, the cumulative gas production, the fermentation rates and the contributions of the different fractions to volatile fatty acids can be provided for application of the Cornell Net Carbohydrate and Protein System model in ruminant feeding in China.
Table 1 .
Chemical analysis and carbohydrate fractions of maize grain
Table 2 .
Correlation coefficients among gas productions of UCG and its carbohydrate fractions
Table 3 .
Comparison of fermentation parameters for the A and B1 fractions using observed or average data | 2018-12-07T21:48:59.152Z | 2010-11-30T00:00:00.000 | {
"year": 2010,
"sha1": "ed5440053e1eedad46b69f98e4498c7fef6cb38f",
"oa_license": "CCBY",
"oa_url": "http://www.jafs.com.pl/pdf-66337-5777?filename=Fermentation%20kinetics%20of.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "ed5440053e1eedad46b69f98e4498c7fef6cb38f",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
54920509 | pes2o/s2orc | v3-fos-license | Quantification of the quality problems in the construction machinery production
The paper presents the results of a quality problems analysis conducted in a chosen enterprise producing machinery for the construction industry. The aim of the study was to identify the causes and consequences of quality problems in production using selected quality management methods and tools, as well as to identify opportunities to minimize the existing risks in the production process. The results show that approximately 77% of production losses are caused by incompatibilities resulting from two quality problems: employee errors and the low quality of the supplied screw taps. The results of the study indicate corrective measures that may be taken in the company in order to eliminate or reduce the identified production problems, which contribute to the production of machines that do not meet building standards.
Introduction
The construction machinery industry in Poland experienced its biggest boom in the 1970s and 1980s; a collapse followed in the early 1990s, when a significant decline in demand was noted around the world for all kinds of cranes, excavators, bulldozers and more specialized equipment [1]. The crisis in the Polish construction industry was halted in 1994, when statistics show construction machine production increasing by more than 30% annually. The Polish construction industry has undergone a structural transformation, and some manufacturers have started to certify their products to ISO 9000 (PN-EN-29000).
The fulfilment of European standards is important for harmonizing the technical-operational parameters of construction machinery. Currently, users of construction machines pay special attention to factors such as safety, quality, environmental operating conditions and the need to meet ergonomic requirements (i.e. human factors). The first two requirements are governed by the relevant European EN and ISO standards. The environmental performance of machine drive engines has been regulated by the strict Euro II Directive since January 1996. Regulations on human factors (HF), which include, among others, noise, vibration and the ergonomics of the machine, are more complex. A cabin cushioned with materials absorbing part of the vibration energy, and seats matched to the shape of the human body, are only some of the means already used on a large scale.
The Polish construction industry includes almost 140 plants of various sizes, employing some 50,000 workers. The sale of construction machinery, including road construction machinery, now accounts for about 10% of total sales in the Polish engineering industry. The turnover of this sector has grown steadily since the early 1990s and is estimated to exceed 500 million USD. The domestic construction machinery stock consists of about 60,000 machines and pieces of equipment, including about 10,000 mobile cranes, 11,000 excavators, 10,000 crawler machines and 3,000 wheel loaders. About half of these devices were manufactured before 1997, 40% in 1997-2000 and 30% in 2001-2005. Imported machinery and equipment constitute about 70% of the total number of construction machines. In the opinion of foreign observers, Polish construction machines and equipment are competitive in terms of quality and price [2,3].
The growing number of road construction projects and the boom in the construction industry involve more heavy construction machines and equipment. Much smaller machinery and equipment, such as plate compactors, is also involved in completing road construction investments. Plate compactors are particularly suitable for soil consolidation in rather wide pits, but also for surface work. They are also applied successfully for compacting asphalt and paving cobblestones [2].
The aim of the paper is the analysis of quality problems identified in the manufacturing of plate compactor machines. The research findings obtained have been analysed with quality management tools and a quality management method (Ishikawa diagram, Pareto-Lorenz diagram and FMEA method) applied to identify the most significant problems in the manufacturing process, in order to improve the final quality of plate compactors used in road construction investments.
Characteristics of the research object
The research on plate compactor manufacturing was conducted in a chosen Polish construction machine manufacturing enterprise. The plant is engaged in designing and manufacturing small machines and equipment for the construction industry and for the repair and maintenance of roads. Currently the company offers 14 different products, including 9 types of soil plate compactors, 2 types of surface cutters, a slurry pump with a combustion engine, a generator and a concrete profile cutter. The study focused on the petrol plate compactor ZGS-12/500, which is used for vibratory compaction of, e.g., sand, gravel, asphalt or paving stones.
The quality issues in the analysed enterprise manufacturing profile
The analysed enterprise is an innovative technical Polish enterprise established in 1987. Since the beginning of its activity it has been engaged in designing and manufacturing small and medium-sized machinery and equipment for the construction, repair and maintenance of roads. The subject of the research conducted in the analysed enterprise is the petrol plate soil compactor type ZGS-12/500, designed by the enterprise. The machine consists of a number of assemblies, subassemblies and components, which are made of different grades of steel, epoxy resins and elastomeric components. The compactor type ZGS-12/500 is used in road construction and road repair, the laying of pavements, car parks, squares and sports fields, and the compaction of narrow trenches in industrial and hydraulic engineering. Due to a large decrease in efficiency and effectiveness, it is pointless to use it where the share of the clay fraction exceeds 10% or the dust fraction exceeds 30%. Through the use of special elastomer plates, available as an accessory, it is possible to level pavements and concrete blocks. The compactor provides immediate stabilization of the soil and helps to obtain a durable surface. The petrol plate compactor analysed in the paper is presented in Figure 1. Responsibility for the quality of the product and its components in the manufacturing process lies with the Quality Controller and the Representative of the Enterprise Quality System. The control of the production process is described in the Technical Conditions. There are also specific control methods at the various stages of the process. The whole quality control process and its components are strictly inscribed in the Quality Assurance System, as described in the corresponding Procedures. The materials used for the production of the individual parts of the plate compactor should comply with the technical documentation and material standards (an important quality feature of the plate compactor is that castings made of grey cast iron shall comply with the BN74/19046-04 standard, and dimensions of parts, assemblies and complete compactors for which the design drawings do not specify dimensional tolerances should be executed in accuracy class 13 according to the PN-79/M-02139 standard). Another important quality feature of the plate compactor is its screw connections, which should be tightened by hand or mechanically with the correct key. The allowable tightening torques should comply with the PN-81/M-82056 standard with a maximum tolerance of 10%. Bolted connections should be secured against self-loosening in accordance with the technical documentation. Welded joints should comply with the requirements of the BN-73/1610-1603 standard. Every compactor should be tested (the uptime during the test shall be 15 minutes) to check the correct operation of the machine and of all cooperating assemblies and components. The following are tested: correct functioning of the speed control and the engine start, the engagement speed of the centrifugal clutch, the rated speed of the engine (DTR = 3600 rev/min), the time for the cold vibrator to come up to speed (tdop = 10 s), the operating temperature of the vibrator (which at the end of the test should not exceed 95°C), any grease leaks, any metallic rattles and creaks during operation, and the CO content in the exhaust gas (permissible content <4.5%). A critical quality feature of the plate compactor relates to protection against corrosion. All connecting elements (screws, bolts, nuts, washers, pins and steel components) specified in the design documentation must be protected against corrosion by galvanizing. The surfaces to be coated should be prepared to cleanliness degree B2a in accordance with the PN-70/H-97050 standard. Lacquered surfaces should first be covered with a layer of primer. The paint coating shall meet workmanship conditions of at least class 3 according to the PN-79/H-97070 standard. All painted surfaces should be covered with a double layer of varnish [4,5].
Research findings and discussion
Recently, the main problem occurring in the enterprise has been an increase in production losses. In order to identify and systematize the causes of the manufacturing problem, an Ishikawa diagram was created using the brainstorming method. Five cause areas were identified: 'materials', 'machine', 'method', 'man' and 'organization' (Fig. 2). Analysing the data presented in the Ishikawa diagram (Fig. 2), it can be stated that the 'man' and 'method' areas generated the majority of the manufacturing problems. Improper personnel policy was identified as the main cause linked with the quality issues observed in the analysed enterprise.
The research findings presented in the Ishikawa diagram (Fig. 2) also allow the increase in manufacturing losses to be linked with the following causes: -the company, seeking to reduce production costs, decided to change the supplier of screw taps to one offering a lower price (this change, despite reducing the costs mentioned above, is the main reason for stripped threads and the resulting losses); -a very important factor is the partly worn machinery; -persons employed directly in the production process do not comply with the quality standards; -the lack of a smooth flow of information between the different levels of the manufacturing process; -poor application of technology, particularly in the manufacturing of new products.
In order to identify the significant factors affecting the final quality problems occurring in the construction machine manufacturing process, the Pareto-Lorenz diagram was applied to establish the hierarchy of the manufacturing incompatibilities in plate compactor manufacturing. Pareto-Lorenz analysis is carried out to identify and rank the incompatibilities identified in the manufacturing process. The results are used to elaborate corrective and preventive actions for the production process with regard to the quality issues of the analysed product [6-8].
In order to rank the causes of production losses by their frequency, a Pareto-Lorenz analysis was carried out. The following incompatibilities were identified as the causes of manufacturing process losses: N1 - poorly bent shaft; N2 - unevenly welded worktop; N3 - broken threads in the vibrator mounting; N4 - damaged paint cover of the engine plates; N5 - improper angle of the shaft.
The frequency of occurrence of the incompatibilities in the analysed enterprise in 2014 and the cumulative percentages of the identified incompatibilities are presented in Table 1 and Figure 3.
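For illustration, the Pareto-Lorenz cumulative percentages can be computed with a short Python sketch; the occurrence counts below are hypothetical, chosen only to reproduce the reported roughly 77% share of N4 and N3 (the actual frequencies are in Table 1):

# Hypothetical occurrence counts for the five incompatibilities.
counts = {"N4": 95, "N3": 60, "N5": 25, "N2": 15, "N1": 5}

total = sum(counts.values())
ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)

cumulative = 0.0
for name, n in ranked:
    share = 100.0 * n / total
    cumulative += share
    print(f"{name}: {n:>3} occurrences ({share:5.1f}%), cumulative {cumulative:5.1f}%")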
Analysing the data presented in Figure 3, it can be stated that over 77% of the incompatibilities are caused by two causes (N4 and N3). A reduction of production losses, and thereby an increase of the enterprise's profits, can be achieved by eliminating the failures in engine plate painting and by finding a supplier of higher-quality taps. The acquisition of low-quality sheets and pipes happens only occasionally, which is certainly a result of long-term relationships with reliable suppliers. A cause such as a poorly bent shaft, due to the rarity of its occurrence, has little effect on the resulting waste. The Failure Modes and Effects Analysis (FMEA) method was applied in order to analyse the possible failures, their causes and the effects of the incompatibilities identified in the petrol plate compactor manufacturing process. This method requires defining the cause-effect-defect relationship by assigning scores on a scale of 1-10 according to three criteria: the risk of defect appearance (R), the probability of detecting the defect (W), and the importance of the defect's appearance (Z). The product of these three values is called the priority number P, which serves as a measure of the importance of the individual failures and provides the basis for their ranking. The Risk Priority Number (LPR) is calculated according to the formula: LPR = LPW × LPZ × LPO (1) where: LPW - Priority Number of Appearing, LPZ - Priority Number of Meaning (for the customer), LPO - Priority Number of Detectability [6,9].
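A minimal Python sketch of the LPR calculation in equation (1) follows; the failure modes and scores are hypothetical examples, not the values from Table 3:

from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    lpw: int  # Priority Number of Appearing (occurrence), 1-10
    lpz: int  # Priority Number of Meaning (severity for the customer), 1-10
    lpo: int  # Priority Number of Detectability, 1-10

    @property
    def lpr(self) -> int:
        # LPR = LPW x LPZ x LPO, as in equation (1).
        return self.lpw * self.lpz * self.lpo

modes = [
    FailureMode("damaged paint cover of engine plate", 7, 6, 4),
    FailureMode("broken threads in vibrator mounting", 6, 8, 3),
]
for m in sorted(modes, key=lambda m: m.lpr, reverse=True):
    print(f"{m.name}: LPR = {m.lpr}")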
Table 2, presenting the grading system used in the analysis of the research findings, was applied in order to elaborate the results for Table 3. Identifying the quality problems in the manufacturing process (Table 3) allows preventive measures to be used, corrective actions to be implemented and their effectiveness to be tested. Quality problems which are the source of the greatest threats should be solved first. Table 3 describes the preventive measures for the occurrence of the failures. The change of technology is the key factor in this step, since improper technology increases production losses. Staff training is also suggested, which is why the company's management should focus on elaborating a training system for employees.
An important factor in the production process is also the machinery used, which is why the company should pay attention to more frequent inspections of the equipment, as well as consider a partial replacement of the machine park. For this company the majority of the faults and their causes are relatively easy to detect, and therefore all of them should be identified already in the design phase of production and actions taken to ensure that the probability of failures, and thus the risk number, is minimized. This is in line with the motto of the FMEA method, which reads: 'prevent the emergence of errors rather than detect them'. As Table 3 shows, the introduction of the corrective measures led to a significant reduction in the risk number for the individual failures. The analysis of incompatibilities carried out using the Pareto-Lorenz diagram shows that over 77% of the production loss is caused by two incompatibilities (damaged paint cover of the engine plate and broken threads in the vibrator mounting). The first is caused most commonly by workers' failures, while the second by the poor quality of the supplied taps.
Conclusion
The analysis of the research findings, and of the corrective and preventive actions obtained as the result of the applied quality management tools and method, allows the following suggestions to be summarized for the analysed enterprise's manufacturing process: -a greater focus on 'improving' employees, and the introduction of a worker training system and skills upgrading, which will reduce the number of mistakes made by the crew; -consideration of the possibility of changing the tap supplier; -improvement of the production technology; -improvement of the information flow, both between employees and their superiors and between different departments in the company;
Fig. 3 .
Fig. 3. Pareto-Lorenz diagram for the incompatibilities identified in the petrol plate compactor manufacturing process in the analysed enterprise in 2014.
Fig. 2.
The Ishikawa diagram for the factors affecting the manufacturing losses increase in the analysed enterprise.
Table 1.
Incompatibilities identified in the petrol plate compactor manufacturing process in the analysed enterprise in 2014.
Table 2 .
Scale of grading R, W, Z in FMEA method.
Table 3 .
FMEA method for defects identified in the petrol plate compactor manufacturing process in the analysed enterprise in 2014. | 2018-12-06T12:53:10.927Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "ee2d36a5f130e99d9d902a4bf0404622dfe3831d",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2017/08/matecconf_cosme2017_04011.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "ee2d36a5f130e99d9d902a4bf0404622dfe3831d",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Engineering"
]
} |
258844632 | pes2o/s2orc | v3-fos-license | The psychological basis of reductions in food desire during satiety
Satiety—the reduced desire to eat, drink or have sex in their respective aftermath—is particularly important for feeding, where it assists energy balance. During satiety, the anticipated pleasure of eating is far less than the actual pleasure of eating. Here we examine two accounts of this effect: (i) satiety signals inhibit retrieval of pleasant food memories that form desirable images, allowing unpleasant memories into mind; (ii) feelings of fullness reflect what eating would be like now, negating the need for imagery. To test these accounts, participants undertook two tasks pre- and post-lunch: (i) judging desire for palatable foods either with or without imagery impairing manipulations; (ii) explicitly recollecting food memories. Impairing imagery reduced desire equally, when hungry and sated. Food-memory recollections became more negative/less positive when sated, with this correlating with changes in desire. These findings support the first account and suggest imagery is used when hungry and when sated to simulate eating, and that the content of these memory-based simulations changes with state. The nature of this process and its implications for satiety more generally are discussed.
Introduction
Humans and animals are less likely to eat immediately after a meal, less likely to drink immediately after quenching a thirst, less likely to have sex immediately following intercourse, and less likely to use a particular narcotic immediately after using it (e.g. [1][2][3][4]). All are examples of satiety, where desire for more of a particular thing is dulled in its immediate aftermath. In this manuscript, we examine some of the psychological processes that occur during satiety, and we focus in particular on those involving food. This example is highly consequential, as eating when sated is a potentially important contributor to excess weight gain (e.g. [5]), with obesity remaining a significant global health problem (e.g. [6]). simulation. To this end, participants undertook four types of trials while they evaluated wanting. Two involved chemosensory stimulation-one using taste/texture and the other olfaction. The other two formed the control. One involved listening to a sound while evaluating wanting. Audition was selected as it is not normally involved in food imagery (e.g. [35]). The other just required wanting judgements with no concurrent sensory stimulation. The key question was whether chemosensory interference would reduce wanting equally across state (i.e. hungry = sated) or differentially (i.e. hungry > sated).
The second test employed the food recollection task, which involves deliberative memory retrieval of what it would be like to eat a particular food now (e.g. describe what it would be like to eat a steak). The memory content account suggests that food memories should become less affectively positive and more affectively negative across state (i.e. hungry to full). This is due to the inhibition of pleasant food-related memories during satiety, which allows negative memories to dominate. By contrast, the interoceptive account would suggest: (i) fewer affect-laden food memories entering consciousness in the sated state as the interoceptive state of fullness is 'in mind'; (ii) those memories that are reported should include more frequent reference to how filling the food is, as the interoceptive state of fullness is 'in mind' and should 'contaminate' recollections; and (iii) both (i) and (ii) should relate to how full the person feels (i.e. current interoceptive state). Finally, the main effects on memory predicted for the food recollection task (i.e. increase in negative and reduction in positive food-related memories across state versus reduction in the number of affect laden food memories across state) should correlate with the 'additional satiety' effect on the wanting and liking task, providing a further test of the memory content and interoceptive accounts.
Overview
The study used a wholly within-participant design. There were two main tasks, the food recollection task and the wanting and liking task. Both were undertaken before and after lunch. In the food recollection task, hungry participants wrote descriptions of what it would be like to eat two foods and touch one object. They then undertook the wanting and liking task. This involved two parts. First, judging desire for palatable snack foods while looking at them, either while concurrently receiving chemosensory interference (to impair chemosensory imagery) or control manipulations (figure 1). Second, eating a morsel of each snack food used in the first part, and rating it for liking (figure 1). Participants then ate a filling meal and repeated both tasks.
Participants
Forty-eight first year undergraduates (14 men) participated for either course credit or a small cash payment. Mean age was 19.6 years (s.d. = 2.8), with an average body mass index of 22.0 (s.d. = 3.0). The study was approved by the Macquarie University Human Research Ethics Committee and written consent was provided by each participant. The study was described as exploring the impact of hunger and fullness on 'various psychological states'. The specific rationale for the study was explained at the end.
Stimuli
There were eight snack foods used in the wanting and liking task and these were organized into four pairs. Each pair was composed of one sweet and one savoury food. The four pairs were: one Cheetos cheese and bacon ball and one mini Oreo cookie; one Pringles salt and vinegar chip and one Tiny Teddy mini chocolate chip cookie; one Pringles BBQ chip and one Cadbury's mini chocolate finger; and one cube of cheddar cheese (0.5 cm 3 ) and one Malteser. These snack food pairs were fully counterbalanced across the four types of trial used in the wanting part of the wanting and liking task (figure 1). The concurrent stimuli used in the trials of the wanting part of the wanting and liking task (figure 1) were composed of an odorant, a tastant and a sound. The odorant was 25 mg of lemon oil on a cotton wool ball presented in an opaque 250 ml plastic squeezy bottle (i.e. a weak lemon odour). The tastant was 10 ml of 3.1 mM citric acid (i.e. a weak sour taste). The auditory stimulus was the sound of falling rain obtained from the international affective sound series and presented via a smartphone (i.e. a soft pattering of rain (50 dB)).
Lunch consisted of either 'On the menu' brand beef (260 g; 1570 kJ) or vegetable lasagne (260 g; 1140 kJ)-according to preference-and 100 g of vanilla ice-cream (Bulla; 718 kJ). Water was available ad libitum during lunch.
Procedure
A set of 'general rating scales' were completed first. Participants rated their hunger, fullness, thirst, happiness, sadness, relaxedness and alertness, all using 120 mm visual analogue scales (anchors: 'Not at all' and 'Very'). Only hunger and fullness ratings are reported as the remainder served as distractors.
The first food recollection task followed. This started with a practice phase with the following instructions: 'Please describe what it would be like to eat a meat pie and sauce right now. Please use as much detail (e.g. taste, flavour, smell, texture, attractiveness, etc.) as needed to describe what this experience would be like at this moment now'. Participants were told they would have 1 min and following this minute they were given a further 20 sec to complete their response. After this practice phase, participants were asked to provide descriptions of what it would be like to experience three objects-two foods (i.e. eating them) and one non-food (touching it). There were two sets of the three objects. This was so a different set could be used before and after lunch for each participant, with this being fully counterbalanced. One set was composed of: (i) a prime steak, salad and fries; (ii) a bowl of boiled rice and soy sauce; and (iii) a bean bag. The other set was composed of: (i) a rack of ribs, fries and onion rings; (ii) a bowl of noodles and soy sauce; and (iii) a suitcase. Objects within each set were presented in a fixed random order.
Participants then undertook the wanting and liking test. This had two parts (figure 1). The first involved eight trials. On some of these trials another stimulus—a smell, a taste or a sound—was experienced just before, and during the whole period in which they were both shown the snack food and rating their desire to eat it (How much would you like to eat this food? 120 mm VAS, anchors: Not at all, A lot). Only once this rating was complete did the concurrent stimulation cease. Importantly, the snack food was never eaten on any of these trials—it was just looked at. There were four trial types—olfaction, taste, audition and no concurrent stimulation—with each being undertaken twice, once for each food of the pair assigned to that imagery condition (i.e. one sweet and one savoury snack food). Before starting, participants undertook a practice trial for both the smelling and tasting procedures, using chocolate as the target food. The control and auditory tasks were just described to participants. The procedure for each trial type is illustrated diagrammatically in figure 1.
A taste trial involved pouring all of a 10 ml citric acid stimulus into the mouth and gently swilling it around. The target snack food was then brought into view, and the participant was asked to complete their wanting rating. Once this rating was completed, the food item was removed from sight and participants expectorated, rinsed with water, and then the second taste trial commenced. For an olfactory trial, the same basic procedure ensued, but this time participants continuously sniffed a lemon odorant (puffed at them by the experimenter) while they looked at and then rated the target snack food. This too was followed by a water rinse and then the second olfactory trial. For an auditory trial, the rain sound was looped so that it played across the whole period in which the participant looked at and rated the snack food. Again, this concluded with a water rinse and then the second auditory trial. For a no concurrent stimulation trial, participants just viewed and rated the target snack food, followed by both a water rinse and then their second no concurrent stimulation trial.
In the second component of the wanting and liking test, participants now ate a single morsel of each of the eight foods used in the first component ( figure 1). Each trial was composed of eating all of the sample, rating how much they liked the food (120 mm VAS, anchors: Not at all, A lot) and how much more of it they would like to eat (120 mm VAS, anchors: None, A lot). This 'want more' rating was included for continuity with our previous uses of the wanting and liking task and we did not expect any task-relevant outcomes for this measure. A water rinse again separated each trial. Food stimuli were presented in the same order as they were on the first component of this test, to ensure a similar judgemental context.
Following a second set of general scales (i.e. hunger, fullness, etc.), participants were served their lunch. They were encouraged to eat as much of the food as possible, so as to be comfortably full by the end of the meal. Prior to eating they were instructed: 'This is the main meal. Please eat as much of this food as you can. All the food that is uneaten will be thrown away. Please feel free to read the magazines while you eat-if you like. I will step outside while you eat'. Participants were then left for 5 min, with the experimenter then returning to remove any uneaten lasagne for later weighing. The ice-cream was then presented in the same manner (i.e. with 5 min for eating), with uneaten ice-cream weighed at the end of the meal. Participant then completed a further set of general rating scales.
After lunch, participants completed the second wanting and liking task (this being separated from the first task by about 20 min), with this being undertaken in an identical manner to the first. The second food recollection task followed, with presentation as described above, but using the set of stimuli not employed in the pre-lunch test. Biographical data were then obtained, alongside frequency of consumption of the test foods used in the wanting and liking task, and ratings of liking for the concurrent stimuli used in the wanting part of the wanting and liking task (120 mm VAS, anchors: dislike, indifferent, like). Participants were also asked to rate how well they could form visual, flavour and textural imagery (120 mm VAS, anchors: Not at all, Good). Participants were then weighed and had their height measured and were provided with a verbal debriefing of the study.
Analysis
Only the food recollection data are reported (the non-food recollection data were included to avoid a sole focus on food). These data were coded for the presence of positive affect (a score of 1 for each mention), negative affect (a score of 1 for each mention), and for any report of how filling the food would be to eat (with a score of 1 for each mention). The primary coder coded every response for every participant. The reliability coder recoded 12 randomly selected participants' data from the full set of 48 cases. Both coders were blind to participant state and to the study aims. To assess reliability, scores were first collapsed across both foods and state, so as to increase variability. Reliability was then examined using the intra-class correlation coefficient (ICC) between the primary and reliability coder. The ICC for food positive hedonics was 0.93, for food negative hedonics 0.83 and for food fillingness 0.89, indicating good to excellent reliability.
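The paper does not state which ICC form was used; a minimal Python sketch of one common choice, the two-way consistency ICC(3,1) for single raters, applied to placeholder coder scores, is:

import numpy as np

def icc_consistency(scores):
    """Two-way mixed, single-rater consistency ICC, i.e. ICC(3,1).
    `scores` is an (n_targets, k_raters) array."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * np.sum((scores.mean(axis=1) - grand) ** 2)
    ss_cols = n * np.sum((scores.mean(axis=0) - grand) ** 2)
    ss_total = np.sum((scores - grand) ** 2)
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Placeholder positive-hedonics totals for 12 participants,
# scored by the primary coder and the reliability coder.
primary = [3, 5, 2, 4, 6, 1, 3, 5, 4, 2, 6, 3]
reliability = [3, 4, 2, 4, 6, 1, 2, 5, 4, 2, 5, 3]
print(f"ICC = {icc_consistency(np.column_stack([primary, reliability])):.2f}")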
On the wanting and liking task, we conducted preliminary analyses on the wanting data to check if there were any differences between the two types of chemosensory trial (taste versus smell), between the two types of control trial (sound versus no concurrent stimulation) and then between the chemosensory trials and the sound trials. As planned, and also because there were no significant differences, we combined the taste and smell trials to form the chemosensory interference condition, and the sound and no concurrent stimulation trials to form the control condition.
All data were suitable for parametric testing, excepting those from the food recollection task, where non-parametric tests were employed.
Manipulation of state
Participants consumed a mean of 221 g of lasagne (s.d. = 31 g) and 79 g of ice-cream (s.d. = 11 g). Hunger and fullness ratings were obtained at the start of the experiment, and before and after lunch. These ratings were analysed using a two-way repeated measures ANOVA with Time and Rating type as factors. There were main effects of Time (p < 0.001) and Rating type (p < 0.01), and an interaction between these variables (F2,94 = 199.15, m.s.e. = 657.24, partial η² = 0.81). Hunger ratings changed little (M = 79/120 to M = 77/120) prior to lunch, and fell markedly after lunch (M = 22/120). Fullness ratings increased prior to lunch (M = 15/120 to M = 32/120), and increased further after lunch (M = 94/120). The manipulation of state was thus successful.
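As an illustration only, a two-way repeated measures ANOVA of this kind can be run in Python with statsmodels' AnovaRM; the data below are simulated around the reported cell means and are not the study data:

import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
# Simulated 0-120 VAS ratings for each Time x Rating-type cell,
# loosely patterned on the means reported in the text.
means = {("start", "hunger"): 79, ("prelunch", "hunger"): 77,
         ("postlunch", "hunger"): 22, ("start", "fullness"): 15,
         ("prelunch", "fullness"): 32, ("postlunch", "fullness"): 94}
rows = [{"pid": pid, "time": time, "rtype": rtype,
         "rating": float(np.clip(m + rng.normal(0, 15), 0, 120))}
        for pid in range(48) for (time, rtype), m in means.items()]
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="rating", subject="pid",
              within=["time", "rtype"]).fit()
print(res)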
Wanting and liking task
We started by analysing the wanting, liking and want more ratings individually, using in each case a two-way repeated measures ANOVA with State (hungry versus full) and Interference type (chemosensory versus control) as factors. The results for each ANOVA are presented in table 1, and the data are presented in table 2. As expected, there was a substantial effect of State for each rating type, with all responses declining when sated.
For wanting ratings, there was a main effect of Interference type. Post hoc tests revealed that chemosensory interference significantly reduced wanting ratings both in the hungry (t47 = 3.28, p = 0.002) and sated (t47 = 2.65, p = 0.011) states. There was no evidence that this interference effect differed by state (i.e. a non-significant interaction). This pattern of findings is most consistent with the idea that imagery/simulation occurs equally in both the hungry and sated states - that is, with the memory content account.
For liking ratings, we had no expectation that the interference manipulation would affect responding, because no interference manipulation was undertaken while these ratings were made. Consistent with this, there was no main effect of Interference type. However, and unexpectedly, the interaction of State and Interference type approached significance, with a smaller change in liking between the hungry and sated states in the chemosensory interference condition. Finally, want more ratings revealed no effect of Interference type. To test for the 'additional satiety' effect, and to check whether it differed between the chemosensory interference and control conditions, we conducted two sets of contrasts. First, we tested whether the change in wanting across state exceeded the change in liking, in the chemosensory interference condition and then in the control condition (figure 2). There was a significant 'additional satiety' effect in both the chemosensory interference condition (t(47) = 2.46, p = 0.018) and the control condition (t(47) = 2.77, p = 0.008). In other words, and as evident in figure 2, wanting declined more across state (i.e. 'additional satiety') than liking did. Second, we examined whether these two effects differed; they did not (p = 0.77).
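The 'additional satiety' contrast amounts to a paired comparison of two within-participant change scores. A minimal sketch of that computation follows; it is not the authors' code, and the values are synthetic stand-ins on the 0-120 VAS scale.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(2)
n = 48
# Synthetic per-participant mean ratings in each state.
wanting_hungry, wanting_sated = rng.normal(90, 10, n), rng.normal(40, 10, n)
liking_hungry, liking_sated = rng.normal(85, 10, n), rng.normal(60, 10, n)

# Change across state for each rating type.
d_wanting = wanting_hungry - wanting_sated
d_liking = liking_hungry - liking_sated

# 'Additional satiety': does wanting decline more across state than liking?
t, p = ttest_rel(d_wanting, d_liking)
print(f"t({n - 1}) = {t:.2f}, p = {p:.4f}")
```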
Food recollection task
Food recollections were coded for inclusion of hedonic terms (positive, negative and their total) and for any mention of how filling the food would be to eat (table 3). When sated, food recollections contained significantly fewer positive hedonic comments and significantly more negative hedonic comments than when hungry, consistent with the memory content account. However, while there was no reduction in the total number of affect-laden reports between states, reports of how filling the food would be increased significantly from the hungry to the sated state, providing some support for the interoceptive account.
We then tested the relationships identified in the Introduction. First, we examined two predictions pertinent to the interoceptive account. There was a significant association between changes in fullness across the study and changes in total affective responses (ρ = 0.33, p = 0.024), but this was not in the expected direction. That is, participants with the largest changes in fullness described more affectively laden memories during satiety (relative to when hungry). There was no association between changes in fillingness and changes in fullness (ρ = 0.06, p = 0.71).
We then examined relationships between the food recollection task and the wanting and liking task. There was a significant relationship between changes in the affect of retrieved food memories on the food description task (i.e. less affectively positive and more affectively negative across state) and the 'additional satiety' effect (i.e. the greater decline in wanting relative to liking) from the wanting and liking task (ρ = 0.32, p = 0.027), providing support for the memory content account. There was no significant relationship between changes in total affective responses on the food description task and the 'additional satiety' effect from the wanting and liking task (ρ = −0.02, p = 0.92), contrary to expectations for the interoceptive account.
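These individual-difference tests are rank correlations between per-participant change scores. A short SciPy sketch follows; it is illustrative only, with synthetic stand-ins for the real change scores.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
# Stand-ins for the real per-participant change scores (n = 48).
affect_change = rng.normal(size=48)       # shift in recollection affect across state
additional_satiety = rng.normal(size=48)  # decline in wanting minus decline in liking

rho, p = spearmanr(affect_change, additional_satiety)
print(f"rho = {rho:.2f}, p = {p:.3f}")
```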
Other variables
We examined participants' hedonic evaluations of the concurrent task stimuli used in the wanting phase of the wanting and liking task. There was no significant difference between the auditory (M = 70/120) and olfactory (M = 64/120) stimuli (t < 1), but the tastant (M = 48/120) was judged as less pleasant than these other two (ps < 0.014). Participants were also asked about their capacity for mental imagery. Visual imagery was reportedly better (M = 89/120) than both textural (M = 78/120; p = 0.011) and flavour (M = 73/120; p = 0.001) imagery, which did not differ from one another (t = 1.3). Finally, participants reported moderate consumption of each of the snacks used in the wanting and liking test (monthly to weekly).
Discussion
Participants were asked to judge both their desire (wanting) for palatable snack foods, and how much they liked their taste, before and after a filling lunch. Half of their wanting ratings were undertaken with concurrent chemosensory interference, the remaining half without. Chemosensory interference reduced participants' desire (wanting) for palatable snack foods. The key test was to see if this reduction was of an equivalent magnitude when they were hungry and when they were sated. The analysis revealed that chemosensory interference reduced desire by a similar degree across state. We then contrasted wanting and liking ratings, finding that the 'additional satiety' effect was present in both the chemosensory interference and control conditions, and to a similar degree. Participants were also asked to recollect what it would be like to consume particular foods, undertaking this description task before and after lunch. When participants were sated, their recollective reports contained more negative and fewer positive comments about the foods, and more statements about the food's fillingness, relative to when they were hungry. The change in positive/negative recollections across state on the food recollection task was significantly correlated with the 'additional satiety' effect from the wanting and liking task.
The primary aim of this experiment was to test two accounts of how memory inhibition might produce the observed 'additional satiety' effect, whereby desire for palatable snack foods when full is far lower than the actual pleasure reported when they are tasted. Both accounts draw upon the same mechanism when hungry participants see palatable snacks: memories of these foods are retrieved and used to construct mental images enabling simulations of eating. However, when sated, the interoceptive account suggests that fullness sensations serve as a simulation of what eating would be like now, negating the need to retrieve food memories and engage imagery. Crucially, on this account, food memory retrieval and imagery are necessary in the hungry state but not in the sated state. By contrast, on the memory content account, satiety cues act to inhibit retrieval of pleasant food-related memories, resulting in retrieval of affectively negative memories. Thus, the hypothesized process during satiety requires mental imagery and simulation in the same way as it does when hungry.

The basic findings from the current experiment are most consistent with the memory content account. This is because, in the analysis of the wanting data, the chemosensory interference manipulation reduced desire significantly and equally in both the hungry and sated states, consistent with the operation of food memory retrieval and mental imagery/simulation in both states. An important consideration in respect of this conclusion concerns the nature of the interference manipulation in the wanting part of the wanting and liking task (figure 1). There were two basic assumptions in its design. First, that in retrieving memories of these foods and then using these to form mental images or simulations of eating them, participants would use chemosensory perceptual systems, namely drawing upon taste and smell (e.g. [35]). Second, irrespective of how conscious and explicit these simulations were, data suggest that many of the same brain areas used during perception are activated during mental imagery and simulation (e.g. [29][30][31]). We reasoned on this basis, as have others before (e.g. [32][33][34]), that asking participants to undertake a concurrent task that draws upon a sensory modality likely to be involved in mental imagery/simulation would disrupt it. By contrast, concurrent tasks using sensory systems not used in imagery/simulation should have little effect. Consistent with this, we found no evidence that listening to rain impacted wanting ratings, while both chemosensory tasks (taste/texture and smell) did. The similarity of the two chemosensory conditions, and their difference from the auditory condition, is important for a further reason. Participants rated their liking for the concurrent stimuli, with no difference found between smell and sound (both mildly positive), while taste was judged as significantly less pleasant. That these hedonic differences do not align with the impacts on wanting ratings suggests that the central aspect of our interference manipulation was its use of a particular sensory modality, rather than a reflection of differences in affect.
The study also included a food recollection task, where participants were asked to describe what it would be like to eat certain foods and to touch certain objects. Participants made more positive and fewer negative comments in their food recollections when they were hungry than when sated. This change is also consistent with the memory content account. We suggested in the Introduction that recollective differences between the two states, especially in regard to affect, are a key feature of the memory content account. This conclusion is given further support by the finding that changes across state in the affect of participants' food recollections on the food description task correlated with the 'additional satiety' effect. This implies that a common mechanism is responsible for both effects, and we suggest that this is memory inhibition generated by satiety cues and operating on the retrieval of food memories (e.g. [21,23,24]).
As we outlined above, the interoceptive account was also tested here. Two findings from the current study suggest it would be premature to dismiss this account. The first concerns the wanting ratings. The interaction of Interference type and State (table 1) had a medium effect size, and if tested with a directional paired-sample t-test the p-value would have been 0.0545. The point here is that even though the chemosensory interference task unequivocally reduced desire both when participants were hungry and when sated, these data still suggest some modest reduction in the interference effect under conditions of satiety. The second relevant finding comes from the food recollection task, where participants reported more comments pertaining to the filling nature of the food when they were sated than when they were hungry. This effect has been obtained before using this type of task [36]. In that prior report, we also found that measures of gastric sensation (stomach bloating, stomach emptiness and nausea, but not fullness) were correlated with changes in reports of fillingness between the hungry and sated states. Indeed, it was these findings which led us to suggest that one way memory inhibition might operate is for the interoceptive cues associated with fullness to dwell in consciousness to the extent that they negate the need to simulate eating a food, arguably because the interoceptive state effectively functions as a simulation of what eating will feel like now. There is of course no reason why the two accounts need be mutually exclusive, and it may be that the type of interoceptive model suggested by reports of fullness is operative, but dependent upon the extent to which stomach-related sensations are evident after a meal. This in turn would depend both on individual differences in interoceptive sensitivity (e.g. [37]) and on the quantity of food consumed, such that the stronger these stomach-related signals become, the more unnecessary any eating-related simulation would be.
For liking ratings, there were some, albeit non-significant (p = 0.051), indications that the foods used in the chemosensory interference condition changed less in liking across state than those used in the control condition (see data in table 2). This was unexpected, because the interference manipulation was not undertaken while participants made liking judgements (figure 1). As to why this may have occurred, one possibility is that the lower levels of desire generated by the chemosensory interference condition may have created a positive contrast effect when these foods were actually tasted. This may not have been as apparent when participants were hungry, perhaps because of ceiling effects for the liking ratings.
Finally, it is worth reflecting on the broader implications of these findings. Currently, we do not know whether the 'additional satiety' effect is operative for other motivational systems such as thirst, sex and drugs. If it is, and this would seem plausible, then it would have some interesting implications. The principal one comes from the finding that memory inhibition, however it operates, is very sensitive to subtle hippocampal impairment even in otherwise healthy people [9,12]. As impairments in memory inhibition (that is, a reduction or loss of 'additional satiety') can be induced by one week of a diet rich in saturated fat and added sugar in people who normally eat a fairly healthy diet, the suspicion would be that thirst, sex and drug use might be similarly affected. This would have the effect of making pleasant-tasting beverages, attractive mates and previously used narcotics more appealing during satiety than they otherwise would be.
In conclusion, the experiment reported here examined two possible models of how memory inhibition may operate to drive the 'additional satiety' effect. The findings are most consistent with the memory content account, in which memory retrieval, imagery and simulation of eating occur both when hungry and when sated participants view food, the difference between the two states being the nature of the recollected material, i.e. memory content. This type of recollective difference was illustrated in the food recollection task, with participants reporting more positive and fewer negative hedonic comments in their food descriptions when hungry relative to when sated. Finally, the affective changes observed on the food recollection task were found to correlate with the changes in wanting relative to liking (i.e. 'additional satiety') on the wanting and liking task.
Ethics. The study was approved by the Macquarie University Human Research Ethics Committee (ref. no. 520221036736852) and written consent was provided by each participant.
Data accessibility. The data are provided in the electronic supplementary material [38]. | 2023-05-24T13:04:17.552Z | 2023-05-01T00:00:00.000 | {
"year": 2023,
"sha1": "aecf6940fe91315882f2201c4e93932d62c48644",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "RoyalSociety",
"pdf_hash": "aecf6940fe91315882f2201c4e93932d62c48644",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
12321068 | pes2o/s2orc | v3-fos-license | Bone fracture risk in patients with rheumatoid arthritis
Abstract Background: Patients with rheumatoid arthritis (RA) are predisposed to osteoporotic fracture. The present study aims to determine the association between RA and bone fracture risk, overall and in relation to gender and site-specific fractures. Methods: Studies related to bone fracture in patients with RA were searched in databases including PubMed, EMBASE, and OVID from inception through April 2016. The quality of the studies was evaluated using the Newcastle-Ottawa Scale. Meta-analysis was performed with Stata 13.1 software. The results were reported as risk ratios (RR) with 95% confidence intervals (95% CI) using a random-effects model. Results: The meta-analysis of 13 studies showed a significantly higher risk of bone fracture in patients with RA than in patients without RA (RR = 2.25, 95% CI [1.76–2.87]). Subgroup analyses showed that both female and male patients with RA had an increased risk of fracture when compared with female and male patients without RA (female: RR = 1.99, 95% CI [1.58–2.50]; male: RR = 1.87, 95% CI [1.48–2.37]). Another subgroup analysis of site-specific fracture also showed that RA is positively correlated with the incidence of vertebral fracture (RR = 2.93, 95% CI [2.25–3.83]) and hip fracture (RR = 2.41, 95% CI [1.83–3.17]). Conclusion: RA is a risk factor for bone fracture in both men and women, with comparable risks of fracture at the vertebrae and hip.
Introduction
Rheumatoid arthritis (RA), a systemic autoimmune disorder that primarily affects the synovial tissues, is one of the most debilitating types of arthritis, affecting approximately 1-2% of the world population. RA causes inflammation, pain, stiffness, swelling, and disability of the joint, thus limiting mobility in the affected joints and curtailing the ability of individuals with RA to perform basic daily tasks. Onset is typically during middle age, although reports have also described the development of RA at a younger age, [1] and the incidence of RA is 2 to 3 times higher in women than in men [2,3].
Patients with RA are at risk of osteoporosis and osteoporotic fractures. [4][5][6] Clinical studies have shown that the incidence of osteoporosis among RA patients is 1.9 times higher than among non-RA patients. [7] Bone loss in RA has been associated with many factors, including chronic inflammation, use of glucocorticoids, and physical inactivity. The release of pro-inflammatory cytokines such as interleukin-1 (IL-1), IL-6, and tumor necrosis factor-α (TNF-α) may cause the abnormal production of osteoclasts, thus disrupting the equilibrium between bone resorption and bone formation. [8][9][10] Secretion of receptor activator of nuclear factor kappa B ligand (RANKL) by activated T lymphocytes has also been observed to induce the differentiation of synovial macrophages into osteoclasts, leading to bone loss. [11,12] Oral glucocorticoids, clinical drugs commonly used to suppress RA-induced inflammation, can ironically promote the loss of bone mass by inhibiting the differentiation and activity of osteoblasts through blockage of bone morphogenetic protein 2 (BMP-2) [13] or the Wnt/β-catenin pathway. [14,15] Meanwhile, immobility resulting from RA-induced muscle pain, weakness, and swelling may increase the risk of falling to a certain extent, [16,17] thereby raising the rate of bone fracture. The mortality rate from osteoporotic fractures is higher than that from cervical cancer, uterine cancer, or breast cancer. [18] Therefore, the study of osteoporosis and osteoporotic fracture in RA patients is important for early intervention and prevention of bone fracture.
Over the years, numerous observational studies have associated RA with an increased risk of osteoporotic fracture, mainly involving the hip or vertebrae. [19][20][21] However, most clinical studies performed are either limited in sample size, restricted to certain subpopulations, or fracture-site specific. The risk of bone fracture in RA patients has not been summarized, and little is known about whether the risk of fracture is site-specific. To the best of our knowledge, no meta-analysis has been performed to assess bone fracture risk in RA patients. Therefore, the present study aims to evaluate the overall risk of bone fracture associated with RA.
Materials and methods
This study was conducted in accordance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) [22] guidelines. As a meta-analysis study based on previous studies, ethical approval and informed consent were, therefore, not required.
Participants
Subjects were eligible for inclusion if they were diagnosed with RA based on the diagnostic criteria published by the American Rheumatism Association (ARA) [23] or the American College of Rheumatology 1987 (ACR). [24] Eligibility of subjects was not restricted by race or sex. Subjects without RA or any other conditions known to affect bone mass were defined as the control group.
Studies outcomes
The primary outcome of interest is the incidence of bone fracture. The secondary outcome of interest is the incidence of hip fracture or vertebral fracture (also known as the spine fracture).
Types of studies
Only retrospective or prospective studies published in English or Chinese were included.
Exclusion criteria
The exclusion criteria were as follows: (1) studies on subjects without a clearly defined diagnosis or without defined inclusion and exclusion criteria; (2) studies that reported only mortality as the outcome, that is, the standardized mortality ratio (SMR); (3) studies with inaccurate or incomplete data that could not provide an outcome measure; and (4) studies published repeatedly.
Search strategy
We conducted a systematic search in PubMed, EMBASE, and OVID databases using the MeSH terms and free key words "rheumatoid arthritis" combined with "Fracture," to identify relevant studies published from inception through April 1, 2016. Language restrictions were not employed. We also searched the reference lists for full-text papers and all relevant publications were reviewed to identify any omitted studies.
Literature selection
Retrieved records were imported into EndNote software to check for completeness of volume, issue, and abstract information. Important information was copied and edited, and the records that met the criteria were retained. For manuscripts that did not clearly fulfill the inclusion criteria, the original documents were read to determine eligibility; records were marked as "include," "pending," or "exclude" (with reasons). For articles marked "pending," full-text articles were retrieved from the references and further reviewed to determine eligibility.
Quality assessment
The Newcastle-Ottawa Scale (NOS) was used to evaluate the quality of the studies included. Specifically, the studies were evaluated on 8 items, categorized into 3 aspects: the selection of the study groups, the comparability of the groups, and the ascertainment of the outcome of interest. NOS employed the star system to provide a semi-quantitative appraisal for the overall quality of each cohort study. The highest quality studies were awarded up to 9 stars.
Data extraction
A self-designed data abstraction form was used to record the following information: first author and publication year, type of study, country where the study was conducted, inclusion criteria of participants, cases of RA, incidences of fractures in RA and non-RA participants, outcome measurement, confounders adjusted for, and matching baseline factors.
Data selection, evaluation, and extraction were performed by 2 independent investigators. Discrepancies were solved by discussion to consensus or by the assistance of a third investigator.
Outcome measurement
The primary outcome of interest for our study is the association between RA and bone fracture, expressed as risk ratio (RR), odds ratio (OR), or hazard ratio (HR) with 95% confidence interval (CI).
Statistical analysis
Statistical analysis was conducted using Stata 13.1 software. All ratios (risk ratio (RR), odds ratio (OR), and hazard ratio (HR)) were combined to obtain an accurate and comprehensive statistical analysis. [25] The pooled RR and its 95% confidence interval (CI) were calculated. A chi-squared test (χ²) was used to test the included studies for statistical evidence of heterogeneity, and the degree of heterogeneity among studies was assessed with the I² statistic. When no significant heterogeneity was observed (P > .1, I² ≤ 50%), data were analyzed using the fixed-effects model. When heterogeneity was observed (P ≤ .1, I² > 50%), the studies were analyzed with the random-effects model. The sources of heterogeneity were evaluated by subgroup analyses (i.e., sex and site-specific fractures).
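For illustration, the random-effects pooling used here can be reproduced with a DerSimonian-Laird estimator, sketched below in Python. This is not the authors' Stata code; the study effect sizes entered at the end are made-up values, not the 13 real studies.

```python
import numpy as np

def dersimonian_laird(log_rr, se):
    """Pool log risk ratios with a DerSimonian-Laird random-effects model."""
    log_rr, se = np.asarray(log_rr), np.asarray(se)
    w = 1.0 / se**2                                  # fixed-effect weights
    mu_fe = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - mu_fe)**2)              # Cochran's Q
    df = len(log_rr) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-study variance
    w_re = 1.0 / (se**2 + tau2)                      # random-effects weights
    mu = np.sum(w_re * log_rr) / np.sum(w_re)
    se_mu = np.sqrt(1.0 / np.sum(w_re))
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return np.exp(mu), np.exp(mu - 1.96 * se_mu), np.exp(mu + 1.96 * se_mu), i2

# Illustrative (made-up) study estimates.
rr, lo, hi, i2 = dersimonian_laird(np.log([2.1, 1.8, 2.9, 2.4]),
                                   [0.15, 0.20, 0.25, 0.10])
print(f"pooled RR = {rr:.2f} [{lo:.2f}, {hi:.2f}], I^2 = {i2:.1f}%")
```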
A sensitivity analysis was performed to assess the robustness of the overall effect size. The included studies were omitted one at a time and the pooled RRs were recalculated to determine whether there was any change in the overall estimates. Publication bias was assessed using a funnel plot. Asymmetry in the plot was further evaluated using Egger's test, with P < .05 considered to indicate significant bias.
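Egger's test is a regression of the standardized effect on precision, with a non-zero intercept indicating small-study asymmetry. The sketch below (illustrative, not the authors' code, with made-up inputs) implements it with statsmodels.

```python
import numpy as np
import statsmodels.api as sm

def egger_test(log_rr, se):
    """Egger regression: standardized effect vs. precision; the intercept tests asymmetry."""
    z = np.asarray(log_rr) / np.asarray(se)     # standardized effects
    X = sm.add_constant(1.0 / np.asarray(se))   # column of ones + precision
    fit = sm.OLS(z, X).fit()
    return fit.params[0], fit.pvalues[0]        # intercept and its p-value

# Illustrative values only; p >= .05 would indicate no significant asymmetry.
intercept, p = egger_test(np.log([2.1, 1.8, 2.9, 2.4]), [0.15, 0.20, 0.25, 0.10])
print(f"intercept = {intercept:.2f}, p = {p:.3f}")
```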
Table 1 (excerpt of study characteristics). Author, year: Cooper et al., 1995 [27]; Type of study: case-control; Country: United Kingdom; Participants: the case group comprised 300 patients (60 men and 240 women) aged 50 years and over who were admitted sequentially to an orthopedic unit over an 18-month period with fracture of the proximal femur, and who were able to pass an abbreviated mental test; patients in the study group were compared with 600 community controls (120 men and 480 women), residing in the same district, who were selected from the register of the Hampshire Family Practitioner Committee.
Characteristics of included studies
A total of 13 studies that reported RR, OR, or HR were included in the meta-analysis to assess the association between RA and bone fracture. The studies were conducted in countries including the United States, United Kingdom, Sweden, Norway, Finland, Australia, and China. Various matching factors were considered when selecting controls, including age, sex, age at or years since menopause, height, weight, body mass index (BMI), residential area, and smoking habits. Six studies [4,20,21,[28][29][30] reported adjusted risks of fracture in RA patients to reduce potential confounding by age, sex, BMI, smoking habits, previous history of fracture or fall, joint or hormone replacement therapy, and calcium, vitamin D, or other medication intake. The characteristics of each study are listed in Table 2.
Association of RA with bone fracture risk
The risk of bone fracture was compared between RA and non-RA patients. Meta-analysis showed strong heterogeneity (P < .0001, I² = 96.5%) among the studies; thus, a random-effects model was employed to analyze the data. Our results show that patients with RA have a significantly higher risk of bone fracture than patients without RA (RR = 2.25, 95% CI [1.76-2.87]) (Fig. 2).
Studies have also suggested that RA affects more women than men. Therefore, we also performed a subgroup analysis based on sex. Our results showed that the risks of bone fracture are significantly higher in both women and men with RA than in women and men without RA (women: RR = 1.99, 95% CI [1.58-2.50]; men: RR = 1.87, 95% CI [1.48-2.37]) (Fig. 3A).
Sensitivity analysis
Sensitivity analysis was performed to explore the heterogeneity among studies and to determine whether these factors would have an impact on the overall pooled estimates. Our sensitivity analysis showed that no individual studies significantly affected the pooled RRs (Fig. 4).
Publication bias
The funnel plot showed asymmetry, indicating the presence of potential publication bias (Fig. 5). Further analysis with Egger's test showed no evidence of publication bias (P = .554).
Discussion
RA is a common chronic inflammatory joint disease in adults. Progression of RA leads to local and systemic bone loss, and patients eventually develop osteoporosis. [4][5][6] Osteoporosis is a condition in which bone decreases in strength and becomes vulnerable to fracture; it manifests as a loss of bone mass and damage to the fine structure of bone tissue, which increases bone fragility. Our study, together with other studies, [19][20][21]35] demonstrates that patients with RA are at higher risk of osteoporotic fractures than patients without RA.
Postmenopausal women are more prone to osteoporosis, and it is estimated that osteoporotic fracture occurs at least once in approximately 50% of postmenopausal women and in over 20% of men over 50 years of age. [36,37] However, our results show a similar increase in fracture risk in men and women with RA relative to those without RA, further suggesting that RA is an independent risk factor for fracture. Although patients with osteoporosis are prone to fractures mainly of the vertebrae, hip, and forearm, [38,39] several studies have argued for an increased risk of hip [21,27,29] or vertebral [20,21] fractures in RA patients. Our results show comparable risks of fracture at the vertebrae and hip in RA patients, suggesting no site specificity of fracture risk. As fracture often reduces quality of life, fracture prevention is crucial for patients with RA. First, fracture risk should be carefully evaluated in RA patients. Although RA is an independent risk factor for fracture itself, chronic inflammation and glucocorticoid use may promote the development of osteoporosis. [40][41][42] Therefore, regular bone mineral density (BMD) measurement and fracture risk assessment using tools such as the FRAX (Fracture Risk Assessment) algorithm should be performed for early detection of osteoporosis in RA patients. [43,44] Other skeletal or non-skeletal fracture risk factors, as well as other conditions that may lead to reduced BMD, such as age, gender, body mass index, cigarette smoking, high alcohol intake, inadequate physical activity, and family history of osteoporosis, should be considered in the evaluation of fracture risk in RA patients. For patients with a high fracture risk, and particularly those taking glucocorticoids, prescription of calcium and vitamin D supplements and of treatments to control BMD loss, such as bisphosphonates, denosumab, and parathyroid hormone analogs, should be considered. [44] Second, chronic inflammation in RA should be controlled. For decades, prednisone, a corticosteroid drug, has been widely used to suppress inflammation, but the treatment itself can also enhance BMD loss. [45] Disease-modifying antirheumatic drugs such as methotrexate (MTX) are able to control RA disease activity and could be considered a treatment option, as current clinical studies have not shown an increased risk of osteoporosis or osteoporotic fracture in RA patients treated with MTX. [46] Newer inflammation-fighting drugs, such as the TNF inhibitors etanercept and adalimumab, have also been reported to control inflammation without disrupting bone remodeling. [47,48] However, further investigations are warranted, as no data are available to determine whether TNF inhibitors can minimize fracture risk.
Third, patients with RA should be assessed regularly for fall risk. Falls are the leading cause of fracture, [44] and more than 95% of hip fractures result from falls. [49] Immobility resulting from pain, swelling, and lack of motor coordination in RA patients greatly increases their risk of falling, and thus of fracture. Taking certain preventive measures may help to reduce fall risk. Tai Chi [50] and regular weight-bearing exercises [51] such as walking and running may strengthen bone and decrease BMD loss. Home safety assessment [50] and hip protectors [52] may reduce the risk of falling and fracture.
There are a few limitations to our meta-analysis. Heterogeneity was present among the 13 studies. Confounding factors such as age, sex, BMI, and postmenopausal status in the RA and non-RA groups were not controlled at the same level, and the confounders adjusted for differed between studies. These differences contribute a certain degree of bias when the studies are combined for the estimation of the pooled RR. Moreover, the duration and severity of RA were not considered when selecting subjects. This limitation could lead to over- or underestimation of the associated indicator, as the risk of bone fracture generally increases with the duration and severity of RA. We also did not include BMD as a primary outcome of interest, owing to the limited studies available; the association among RA, osteoporosis, and bone fracture is thus not directly displayed. In addition, the treatment given to RA patients was not taken into account in this study; doses and duration of glucocorticoids might contribute to differences in outcome measurement. The selection of participants, type of treatments given, confounders adjusted for, and matching factors between RA and non-RA patients are all possible sources of the heterogeneity present among studies.
Conclusion
Our study concludes that RA is a risk factor for bone fracture in men and women, with comparable risks of fracture at the hip and vertebrae. Patients with RA should be monitored more closely to control bone loss and prevent fracture. | 2018-04-03T00:30:42.084Z | 2017-09-01T00:00:00.000 | {
"year": 2017,
"sha1": "6eecee2ae734d3f60ec52ca44912d0ba40d851bd",
"oa_license": "CCBYND",
"oa_url": "https://doi.org/10.1097/md.0000000000006983",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6eecee2ae734d3f60ec52ca44912d0ba40d851bd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14976763 | pes2o/s2orc | v3-fos-license | Influence of lengths of millimeter-scale single-walled carbon nanotube on electrical and mechanical properties of buckypaper
The electrical conductivity and mechanical strength of carbon nanotube (CNT) buckypaper composed of millimeter-scale single-walled CNTs (SWCNTs) were markedly improved by the use of longer SWCNTs. In a series of buckypapers fabricated from SWCNT forests of varying heights (350, 700, 1,500 μm), both the electrical conductivity (19 to 45 S/cm) and tensile strength (27 to 52 MPa) doubled. These improvements were due to improved transfer of electrons and load through a reduced number of junctions for longer SWCNTs. Interestingly, no effect of forest height on the thermal diffusivity of SWCNT buckypapers was observed. Further, these findings provide evidence that the actual SWCNT length in forests is similar to the forest height.
Background
The effectiveness of phonon, electron, and load transfer within CNT agglomerates is known to increase with longer carbon nanotubes (CNTs). For example, in percolation theory, electron transfer is expected to be achieved with fewer CNTs when longer CNTs are used, in accordance with the relation N_c = 5.71/L_s², where N_c and L_s are the percolation threshold and CNT length, respectively [1][2][3][4]. For example, higher electrical conductivity was observed for transparent conductive films using network thin films of longer CNTs [5,6]. In addition, Miyata et al. reported a field effect transistor (FET) with high mobility using long single-walled CNTs (SWCNTs) [7]. Further, in CNT/polymer composites, the beneficial effect of CNT length on the efficiency of phonon/electron transport and interfacial load transfer has been reported [8][9][10][11]. Such superiority in properties from long CNTs originates from the smaller number of CNT junctions, which interrupt phonon, electron, and load transfer, in the network structure of CNTs required to span the material.
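To make the quadratic length dependence of the percolation threshold concrete, the short sketch below evaluates N_c = 5.71/L_s² for a few lengths. The absolute units of N_c depend on the normalization used in the cited relation, so only the relative scaling should be read from these numbers.

```python
# Percolation threshold N_c = 5.71 / L_s**2: doubling the CNT length
# cuts the number of CNTs needed for a percolating network fourfold.
for L_s in (1.0, 10.0, 100.0, 1000.0):   # illustrative lengths (e.g. in um)
    n_c = 5.71 / L_s**2
    print(f"L_s = {L_s:7.1f}  ->  N_c = {n_c:.2e}")
```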
Although these reports suggest the advantages of long CNTs for the electrical, thermal, and mechanical properties of a CNT assembly, this point has not been explicitly demonstrated experimentally. In other words, almost all the above experiments employed only short CNTs, on the order of micrometers, with one exceptional report by Zhu et al., who reported on the properties of a composite of bismaleimide (BMI) and multiwalled CNTs with large diameters (approximately 40 to 70 nm) [8]. In particular, there has been no report on the effect of length on the properties of SWCNTs exceeding 1 mm.
There are three reasons why research on the CNT length dependence of the various properties of CNT assemblies has been difficult. First, the synthesis of long CNTs with uniform length in large quantities is difficult. For example, Wang et al. reported the synthesis of long single-wall CNTs with a maximum length of 18.5 cm, but there were substantial variations in CNT length [12]. Cao et al. reported an interesting approach for length-tunable CNT growth, but the length did not reach the millimeter scale [13]. Furthermore, several groups reported methods for classifying long/short CNTs, but these were not applied to CNTs longer than 10 μm [14][15][16][17]. Second, owing to the tight entanglement among CNTs, dispersing CNTs without CNT scission is difficult. Ultrasonic agitation, typically employed as a dispersion method, is known to shorten CNTs as it disentangles them [18]. Finally, there is no available method to measure the lengths of individual CNTs longer than 100 μm. CNTs with lengths of several micrometers have been evaluated by atomic force microscopy (AFM) [8][9][10][11][14][15][16][17], but this method encounters extreme difficulty when obtaining statistically significant data for long CNTs.
Using water-assisted chemical vapor deposition (CVD), we reported the synthesis of a vertically aligned SWCNT array (SWCNT forest) with height exceeding a millimeter [19]. The SWCNT forests possessed several excellent structural properties, such as long length, high purity, and high specific surface area. This development opened up the potential for various new applications of CNTs, such as high-performance super-capacitors [20][21][22][23] and highly durable conductive rubbers [24,25]. Subsequently, many groups reported the growth of long SWCNTs. For example, Zhong et al. reported the growth of SWCNT forests reaching 0.5 cm in length [26]. Hasegawa et al. reported growth of SWCNT forests of several millimeters in length without an etching agent (water) [27]. Numerous studies have also reported the synthesis of multiwalled CNT forests [28][29][30]. However, the following points remain unclear at present: the correlations between forest height and (1) the actual CNT length and (2) the electrical, thermal, and mechanical properties after formation of CNT assemblies.
In this research, we report the effect of the length of long CNTs on electrical, thermal, and mechanical properties. Our results demonstrated a strong dependence of the SWCNT aggregate properties on length. Specifically, buckypaper produced from 1,500 μm SWCNT forests exhibited approximately twice the electrical conductivity (45 vs. 19 S/cm) and twice the tensile strength (52 vs. 27 MPa) of buckypaper produced using 350 μm SWCNT forests. The use of an automated synthetic system equipped with height monitoring and the dispersion strategy recently reported by Kobashi et al. [31] allowed overcoming the first two of the aforementioned issues, namely the required large quantity of long CNTs and a CNT dispersion method that preserves length.
Fabrication of uniform buckypaper from SWCNTs of varying length
We selected the buckypaper, a randomly oriented sheet of CNT, as the form of CNT assembly to study the effect of SWCNT length. The buckypaper is particularly suitable for the present study because it is comprised solely of CNTs (i.e., no binder or other foreign material), and the fabrication is relatively simple, merely requiring filtration of a SWCNT dispersion. We fabricated a series of buckypapers from SWCNT forests of different heights, which are schematically illustrated in Figure 1a. The fabrication process comprises three main steps: (1) synthesis of SWCNT forests of determined length; (2) dispersion of the SWCNTs; and (3) fabrication of the buckypaper.
SWCNT forests of various lengths were synthesized in a fully automated CVD synthetic system equipped with a telecentric height measurement system, using the water-assisted CVD process. A Fe/Al2O3 catalyst-sputtered silicon substrate was inserted into the 1-in. diameter quartz tube reactor (1 atm, 750°C). First, the substrate was exposed to a carrier gas (He, total flow of 1,000 sccm) containing hydrogen (40%) to form catalytic nanoparticles, and then SWCNTs were synthesized using a C2H4 (100 sccm) carbon feedstock and precisely regulated water vapor (100 to 150 ppm). The SWCNT forest height was controlled by using the height as feedback to the control software, which automatically stopped growth when the target height was achieved [32]. In this way, SWCNT forests with precisely regulated heights (350, 700, 1,500 μm) could be synthesized in mass quantities. The uniformity of SWCNT forest heights was verified by scanning electron microscopy (SEM; Figure 1b,c,d) and digital photography (see Additional file 1: Figure S1).
Next, dispersions of the series of SWCNT forests of differing heights were prepared. Although conventional dispersion strategies aim to completely disentangle the CNTs into isolated particles, they also cause scission. Our strategy minimizes scission by suspending the SWCNT agglomerates in a solvent while retaining the entanglement (Yoon et al.: Controlling the balance between exfoliation and damage during dispersion long SWCNTs for advanced composites, unpublished). We selected jet milling as the dispersion method because it has been shown to preserve the SWCNT length with minimal scission, and because the resulting materials have been shown to be suitable for fabricating SWCNT/polymer composite materials of high electrical conductivity (Yoon et al.: Controlling the balance between exfoliation and damage during dispersion long SWCNTs for advanced composites, unpublished) [24,25,33]. This benefit stems from the turbulent flow mechanism used in jet milling, which exfoliates CNTs with minimal damage, in contrast to the cavitation mechanism used in conventional ultrasonic dispersion, which is known to damage CNTs [33]. Mixtures of SWCNT forest samples of specific length in methyl isobutyl ketone (MIBK) were introduced into a high-pressure jet-milling homogenizer (Nano Jet Pal, JN10, Jokoh), and suspensions (0.03 wt.%) were made by high-pressure ejection through a nozzle (20 to 120 MPa, single pass).
Finally, a series of buckypapers with precisely controlled mass densities were prepared by the filtration and compression processes described below. The suspensions were carefully filtered using metal mesh (500 mesh, wire diameter 16 μm). The as-dried buckypapers (diameter 47 mm) were removed from the filters and dried under vacuum at 60°C for 1 day under the pressure of a 1-kg weight. Some papers were further pressed to a higher density in order to eliminate the effects of mass density on buckypaper properties. Although the mass densities of the as-dried buckypapers varied significantly among the samples (0.25 to 0.44 g/cm³, Table 1), buckypapers with uniform density, regardless of forest height, were obtained by pressing at 20 and 100 MPa to raise the density to approximately 0.50 g/cm³ (0.48 to 0.50 g/cm³) and 0.63 g/cm³ (0.61 to 0.65 g/cm³), respectively (Table 1). In addition, the buckypaper samples were uniform in that the thicknesses at the periphery and at the middle were nearly identical.
Results and discussions
High electrical conductivity in buckypaper fabricated from high SWCNT forests

We found that buckypaper fabricated from tall SWCNT forests exhibited excellent electrical conductivity and mechanical strength. In terms of electrical properties, the electrical conductivity (σ) of each buckypaper sample was calculated as σ = 1/(t·R_s), where t is the average buckypaper thickness and R_s is the sheet resistance measured using a commercially available four-probe resistance measuring apparatus (Loresta-GP, Mitsubishi Chemical Analytech Co., Ltd., Yokohama, Japan). The electrical conductivity of buckypapers made from forests of the same height exhibited a linear dependence on density (Figure 2a). For example, the electrical conductivity rose from 21 to 54 S/cm with a density increase from 0.25 to 0.65 g/cm³. Significantly, we observed that the taller the forest used in the buckypaper fabrication, the higher the electrical conductivity. Comparing buckypapers of almost the same density, the buckypaper obtained from forests with heights of 1,500 μm exhibited approximately twice the electrical conductivity of buckypaper made from 350-μm forests (i.e., 45 vs. 19 S/cm at 0.50 g/cm³, and 27 vs. 16 S/cm at around 0.35 g/cm³).
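As a worked example of the σ = 1/(t·R_s) conversion, the sketch below turns a thickness and a sheet resistance into a conductivity. The input numbers are hypothetical, chosen only to land near the reported order of magnitude, not measured values from the paper.

```python
def conductivity_s_per_cm(thickness_um: float, sheet_resistance_ohm_sq: float) -> float:
    """sigma = 1 / (t * R_s), with t converted from micrometers to cm."""
    t_cm = thickness_um * 1e-4
    return 1.0 / (t_cm * sheet_resistance_ohm_sq)

# Hypothetical buckypaper: 50 um thick with a sheet resistance of 4.4 ohm/sq
# gives roughly 45 S/cm, the order of magnitude reported for tall forests.
print(conductivity_s_per_cm(50.0, 4.4))   # ~45.5 S/cm
```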
In order to verify that this apparent height-dependent variation in buckypaper conductivity was not due to differences in CNT quality, which has been shown to be essential for the various properties of buckypaper in previous works [34], Raman spectroscopy and electrical resistivity measurements of the as-grown SWCNT forests were carried out. The intensity ratios of the G-band For each height of SWCNT forest, two as-dried buckypapers, one paper after compression at 20 MPa, and one paper after compression at 100 MPa have been prepared. The thickness of the buckypaper was measured by the stylus method instrument. The average thickness of five measurements was obtained from both of the center and the edge of buckypapers.
In order to verify that this apparent height-dependent variation in buckypaper conductivity was not due to differences in CNT quality, which has been shown to be essential for the various properties of buckypaper in previous works [34], Raman spectroscopy and electrical resistivity measurements of the as-grown SWCNT forests were carried out. The intensity ratios of the G-band (1,600 cm⁻¹) and the D-band (1,350 cm⁻¹) in the Raman spectra (see Additional file 1: Figure S2), an indicator of CNT quality, were very similar (approximately 7). Peak positions and intensities in the radial breathing modes (RBM; 100 to 300 cm⁻¹) were also nearly identical for all SWCNT forest heights. As the RBM peak position w (cm⁻¹) is reported to be inversely proportional to the SWCNT diameter d (nm), i.e., w = 248/d [35], these findings indicate that the effect of forest height on the SWCNT diameter distribution was small. Furthermore, the electrical conductivity of the raw-material forests was evaluated by applying a micro four-probe to the sides of the SWCNT forests. Since the distance between the probes (50 μm) was sufficiently short compared with the forest height, CNT length had almost no influence on the resistance values observed in this measurement. The measured resistance was nearly identical (206 to 220 Ω/sq) regardless of forest height (Figure 2b), indicating that the quality of the SWCNTs did not degrade when growing forests up to 1,500 μm in height, in accordance with the results of Raman spectroscopy. Taking into consideration the fact that forest height did not influence CNT quality, we conclude that the increase in buckypaper conductivity with forest height was a result of the increased length of the individual SWCNTs. In other words, improved electron transfer, and thus higher conductivity, became possible because longer individual SWCNTs form fewer junctions. Furthermore, the 50% drop in buckypaper resistance for an approximately fourfold increase in SWCNT length (350 to 1,500 μm in forest height) indicates the strong effect of CNT-CNT junctions on the electrical resistance of SWCNT assemblies.
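As a quick check on what the RBM positions imply, the relation w = 248/d can be inverted numerically. The peak positions below are illustrative values within the reported 100 to 300 cm⁻¹ window, not the measured spectra.

```python
# Estimate SWCNT diameter d (nm) from RBM peak position w (cm^-1) via w = 248 / d.
for w in (120.0, 180.0, 248.0, 300.0):   # illustrative RBM positions
    print(f"w = {w:5.1f} cm^-1  ->  d = {248.0 / w:.2f} nm")
```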
High tensile strength in buckypaper fabricated from high SWCNT forests
Another advantage of buckypaper made from tall SWCNT forests, shown by the present study for the first time, is the improved mechanical properties, i.e., high tensile strength and breaking strain. Tensile test samples were cut into a dog-bone shape from the sheet with dimensions of 40 mm (length) × 2 mm (width). The extension rate and the gauge length were 1.0 mm/min and 20 mm, respectively. The tests were performed using a Micro Autograph MST-I (Shimadzu Co., Kyoto, Japan) with a 100-N load cell. As reported in previous papers [34], tensile strength increased linearly with mass density (Figure 3a); therefore, we compared the mechanical properties of buckypapers of similar mass densities of approximately 0.63 g/cm³. Importantly, for an increase in forest height from 350 to 1,500 μm, both tensile strength and breaking strain increased by about 100% (27 to 52 MPa and 1.5% to 2.9%, respectively). In other words, the use of taller forests resulted in buckypapers that could withstand larger loads and strains. There were no major differences in Young's modulus (i.e., stress/strain) regardless of forest height, indicating similar interfacial contact between CNTs, as shown in Figure 3b. The mechanism by which mechanical strength improved through the use of tall forests can be interpreted in a manner analogous to that for the improvement in electrical conductivity; in other words, the longer the CNT, the fewer the junctions acting as weak points for load transfer.
Relationship between forest height and SWCNT length
The improvement in the electrical and mechanical properties of buckypaper made from tall forests provides additional insight into the actual length of the SWCNTs in a forest. Thus far, no direct evidence has been shown regarding this point. Our results indicate that the length of the SWCNTs within the forest is equal to the forest height.
Furthermore, we quantitatively discuss the effect of individual SWCNT length on electrical conductance and load transfer. Assuming that the electrical and mechanical properties are governed by the number of junctions, the approximately double electrical conductivity and tensile strength exhibited by forests with heights of 1,500 μm compared with those of 350 μm indicates that half the number of SWCNT junctions are traversed for electron/load transfer. We estimated that the SWCNTs from a 1,500-μm forest were, in fact, about four times longer than those in a 350-μm forest by constructing a simple model describing the effective area of a SWCNT of a certain length as it spreads in a buckypaper. To make this model solvable, we assumed that the SWCNTs fell into a circular island with a uniform areal mass (i.e., SWCNT mass per unit area) within the buckypaper plane. The uniform areal mass assumption is justified by the overall macroscopic homogeneity of the buckypaper. With this consideration, the diameter of the effective area is proportional to the square root of the SWCNT length, and the effective area, where a SWCNT can make contact with another effective area, is proportional to the length of the SWCNT. Therefore, the approximately fourfold difference in forest height (1,500:350) matches the fourfold difference in effective areas, which would result in a twofold difference in junctions along a conduction path, and thus explains the difference in electrical conductivity and mechanical strength. Importantly, we can also conclude that the length of a SWCNT within a forest, at least to a large extent, spans the height of the forest from the substrate to the forest top.
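The scaling argument can be checked with a few lines of arithmetic, sketched below under the stated island model (island diameter proportional to √L, junctions along a spanning path proportional to the inverse of that diameter).

```python
import math

ratio_length = 1500.0 / 350.0              # forest-height (SWCNT length) ratio
ratio_diameter = math.sqrt(ratio_length)   # island diameter scales as sqrt(L)
ratio_junctions = 1.0 / ratio_diameter     # junctions per spanning path

print(f"length ratio   : {ratio_length:.2f}")    # ~4.29
print(f"diameter ratio : {ratio_diameter:.2f}")  # ~2.07
print(f"junction ratio : {ratio_junctions:.2f}") # ~0.48, i.e. about half
```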
Relationship between buckypaper thermal conductivity and high SWCNT forest height

Furthermore, we investigated the in-plane thermal diffusivities of buckypaper fabricated from SWCNT forests of various heights. Thermal diffusivities of the buckypaper in the horizontal direction were measured with a Thermowave Analyzer (Bethel Co., Ibaraki, Japan) at room temperature. In contrast to the electrical conductivity, a clear dependence of thermal diffusivity on SWCNT forest height was not observed (Figure 4). In particular, the tallest forests (1,500 μm) did not exhibit the highest thermal diffusivity (15 cm²/s), while forests with a medium height of 700 μm showed a slightly higher thermal diffusivity (18 cm²/s). These findings can be explained by theoretical prediction [33] and by our recent experimental results showing that the thermal diffusivity of SWCNT forests is strongly dependent on crystallinity (the G-band/D-band ratio) [36]; in other words, while junctions between SWCNTs are the rate-limiting factor for electrical conductivity, phonon scattering at defects within individual SWCNTs appears dominant for thermal diffusivity, with the number of junctions exerting only a small influence. This fact indicates that high crystallinity, not length, is most important for creating CNT networks with superior thermal conductivity. | 2016-05-12T22:15:10.714Z | 2013-12-27T00:00:00.000 | {
"year": 2013,
"sha1": "b36955a26640eaae6d49bd4bbd8a7d4a4ed12bed",
"oa_license": "CCBY",
"oa_url": "https://nanoscalereslett.springeropen.com/track/pdf/10.1186/1556-276X-8-546",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "65f1f1392f43aaf7b1f60a03408616c2bc5a1d5b",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
3531485 | pes2o/s2orc | v3-fos-license | BEAM web server: a tool for structural RNA motif discovery
Abstract Motivation RNA structural motif finding is a relevant problem that becomes computationally hard when working on high-throughput data (e.g. eCLIP, PAR-CLIP), often comprising thousands of RNA molecules. Currently, the BEAM server is the only web tool capable of handling tens of thousands of RNAs as input, with a motif discovery procedure that is limited only by current secondary structure prediction accuracies. Results The recently developed method BEAM (BEAr Motifs finder) can analyze tens of thousands of RNA molecules and identify RNA secondary structure motifs associated with a measure of their statistical significance. BEAM is extremely fast thanks to the BEAR encoding, which transforms each RNA secondary structure into a string of characters. BEAM also exploits the evolutionary knowledge contained in a substitution matrix of secondary structure elements, extracted from the RFAM database of families of homologous RNAs. The BEAM web server has been designed to streamline data pre-processing by automatically handling folding and encoding of RNA sequences, giving users a choice of their preferred folding program. The server provides an intuitive and informative results page with the list of secondary structure motifs identified, the logo of each motif, its significance, a graphic representation, and information about its position in the RNA molecules sharing it. Availability and implementation The web server is freely available at http://beam.uniroma2.it/ and it is implemented in NodeJS and Python with all major browsers supported. Supplementary information Supplementary data are available at Bioinformatics online.
Introduction
Structural motif finding in RNA is a growing branch of computational biology, especially given the rise of new experimental techniques capable of probing structural contexts at single-nucleotide resolution (Lu and Chang, 2016; Wan et al., 2014), which in turn allow for more accurate secondary structure predictions (Lorenz et al., 2016). The question usually addressed by looking for structural motifs revolves around finding structural determinants associated with specific functions (e.g. protein interaction specificity, or being the main actor in certain interactions or behaviours), for example the determinant of Staufen-RNA specificity (LeGendre et al., 2013). In this sense, data coming from high-throughput in vivo experiments such as HITS-CLIP, PAR-CLIP, iCLIP or eCLIP provide a perfect playground, for they are often composed of a large number of molecules (up to 50k RNAs, or even more) with a shared binding ability. Current structural motif finders cannot operate above modest input sizes (i.e. 1000 molecules is a hard limit for most) and, to our knowledge, only our method BEAM (Pietrosanto et al., 2016) and the more recent SMARTIV (Polishchuk et al., 2017) can tackle these large inputs. Ours is, however, the only web server that can both discover motifs in large windows (e.g. downstream or upstream of a binding site, or along a 500 nt RNA) and handle tens of thousands of molecules.
Materials and methods
We extended BEAM, for which the user previously had to provide pre-computed RNA secondary structures converted into BEAR notation through a separate encoding software (Mattei et al., 2014), by letting users upload a standard FASTA file containing only the RNA sequences. In this case, users can choose one of two structural prediction methods: RNAfold from the Vienna Package (Gruber et al., 2008) or MaxExpect from RNAstructure (Reuter and Mathews, 2010). Then, from the dot-bracket notation, the RNA structures are automatically converted into the BEAR encoding. Users can also directly upload a FASTA file in which each RNA sequence is accompanied by its secondary structure prediction in dot-bracket or in BEAR notation; the same data can also be pasted into a text area. Users can additionally upload a background dataset for computing motif significance; alternatively, the server provides automatic background generation using RNA sequences from Rfam seed data, with a filter that guarantees similar length and amount of structural content with respect to the input (Mattei et al., 2015). Another available feature is the possibility to upload a BED file, which is the most common output format for CLIP-Seq analysis tools: the web server will manage all the needed pre-processing steps (namely extension of the intervals, intersection with a feature file to extract only specific genomic regions, sequence retrieval, secondary structure prediction and motif discovery).
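The BED pre-processing the server performs can be approximated locally with pybedtools (a wrapper around BEDTools, which the server itself uses for interval handling). The sketch below is not the server's actual pipeline; the file names and the 50 nt slop width are hypothetical.

```python
import pybedtools

# Hypothetical inputs: CLIP peaks, an annotation of regions of interest,
# and a genome FASTA for sequence retrieval.
peaks = pybedtools.BedTool("clip_peaks.bed")

# Extend each interval by 50 nt on both sides (needs a chrom-sizes file).
extended = peaks.slop(b=50, g="genome.chrom.sizes")

# Keep only intervals overlapping the chosen genomic features (e.g. 3' UTRs).
filtered = extended.intersect("three_prime_utrs.bed", u=True)

# Retrieve the underlying sequences; the FASTA can then be folded
# (e.g. with RNAfold) and encoded in BEAR for motif discovery.
filtered = filtered.sequence(fi="genome.fa")
print(open(filtered.seqfn).read()[:200])
```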
In the output page, a table lists all the RNA structure motifs identified. For each motif, the table shows a WebLogo (Crooks et al., 2004) picture in the qBEAR alphabet (Pietrosanto et al., 2016), statistics (P-value, coverage, BEAM score, etc.), a histogram of the motif position distribution with respect to the 5′ end of each RNA, and a picture of the motif model structure obtained using VARNA (Darty et al., 2009). It is also possible to expand the motif results by listing all sequences with a graphic illustration of the motif position relative to the sequence length, along with the dot-bracket and sequence alignments. This representation of a structural motif provides researchers with an overview of how sub-structures could be involved in the function shared by all, or a subset, of the input RNAs, such as protein-RNA or RNA-RNA interactions.
Results
The application was tested on about a hundred unique datasets, including CLIP-Seq data for SLBP (stem-loop binding protein)-interacting RNAs (GSE62154; Zhang et al., 2012), LIN28A targets (Cho et al., 2012; Zeng et al., 2016) and the other DoRiNA (Blin et al., 2015) datasets, for which all the significant motifs retrieved were presented in the original work (Pietrosanto et al., 2016). In some datasets, we analysed more than 35K RNA sequences in a single run. In particular, SLBP is known to interact with dsRNA (Brooks et al., 2015; Li et al., 2010; Zhang et al., 2012), and the accurate structural context (Fig. 1) can be retrieved with little effort. The server has been tested on datasets of up to 100k RNAs and up to 5 motifs per dataset, and the computational time is similar to that of the BEAM standalone version, as the post-analyses take negligible time to compute. The only consistent added time is that taken by secondary structure prediction and, when used, the genomic interval pre-processing by means of BEDtools (Quinlan, 2014). Current limitations and associated graphs are reported in the Supplementary Material and in the online documentation.
Conclusion
The BEAM web server is a web application that allows the analysis of RNA datasets in search of secondary structure motifs. It can work with tens of thousands of molecules (see Supplementary Material for more information) with lengths of up to 2000 nt (if folding predictors are used, different limits apply; see Supplementary Material).
It is therefore the only tool that can tackle structural motif discovery in large datasets (such as CLIP-Seq) along the full length of the molecules.
Moreover, our framework enables researchers to access the tool without additional scripting, thanks to the automation provided by the web server. For advanced users, this resource is a fast test ground for BEAM and a precious time saver for downstream analysis. Fig. 1. SLBP putative interaction motif. On the left, a logo describing the identified structural motif is shown in qBEAR notation, in which A stands for a medium-sized hairpin stem and X for a short terminal loop. On the right, an instance of the motif secondary structure is shown | 2018-02-25T02:24:10.055Z | 2017-10-31T00:00:00.000 | {
"year": 2017,
"sha1": "c05a0317af50e8faf337bb5ac6d8b9a2e30e8e63",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/bioinformatics/article-pdf/34/6/1058/25119221/btx704.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "c05a0317af50e8faf337bb5ac6d8b9a2e30e8e63",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
256971112 | pes2o/s2orc | v3-fos-license | Investigation of Phenolic Compounds and Antioxidant Activity of Sorbaria pallasii (Rosaceae) Microshoots Grown In Vitro
Sorbaria pallasii is an endemic species of the Far East and Siberia and grows along the Goltsy altitudinal belt. Data on micropropagation and phytochemical characteristics of this plant are not available, probably because of the inaccessibility of the plant material. Morphogenesis initiation from flower buds of S. pallasii in vitro and micropropagation were performed here in the Murashige and Skoog medium supplemented with 5.0 µM 6-benzylaminopurine and 0.0–1.0 µM α-naphthylacetic acid; elongation was implemented in the same medium without the hormones. A well-growing sterile culture of S. pallasii was obtained; the number of microshoots per explant reached 5.7 ± 1.2. Phytochemical analyses of in vitro propagated S. pallasii detected 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical scavenging activity in a water-ethanol extract from its microshoots and revealed phenolic compounds in it. The phenolic compounds that likely contribute to its biological activity are tannins (74.9 mg/g), phenolcarboxylic acids (30.8 mg/g), and catechins (13.3 mg/g). In the microshoot extract, high-performance liquid chromatography identified three catechins. Microshoots showed the highest concentration of (±)-catechin (3.03 mg/(g of absolutely dry mass; ADM)). Concentrations of epigallocatechin gallate (0.38 mg/(g of ADM)) and (−)-epicatechin (0.55 mg/(g of ADM)) were significantly lower. This study paves the way for further biotechnological and phytochemical research on S. pallasii.
Introduction
Biologically active substances of plants, including secondary metabolites, are a valuable resource for drug development and formulations [1]. Research into active ingredients of plants is underway for both direct use of secondary metabolites as drugs and for their use as precursors for subsequent chemical modifications [2]. The demand for biologically active substances of plant origin grows globally every year [3]; this is especially true for the search for effective therapeutics against COVID-19 [4,5].
Conventional medicine currently takes advantage of only ~5% of all the species of higher plants that have been studied [6]. Nonetheless, poorly studied or unexplored plant species are a great potential source of new biologically active compounds [7]. One of these species is Sorbaria pallasii (G.Don) Pojark. [= S. grandiflora (Sweet) Maxim.]. It is a semi-shrub up to 50 cm high that belongs to the family Rosaceae Juss. (Figure 1). Young shoots are brownish, glabrous, or finely pubescent with yellowish branched hairs, and older shoots have peeling bark. Leaves are imparipinnate, up to 15 cm long, with 9-15 pairs of leaflets, dark green, glabrous, or often pubescent. Flowers are borne in small obovate panicles and are white, pinkish, or creamy white, up to 1.5 cm in diameter. Fruits are pubescent follicles containing small seeds. The species grows along the alpine mountain belt, on bald mountains, stony slopes, and placers. The literature data on concentrations of biologically active substances in S. pallasii are fragmentary. High-performance liquid chromatography (HPLC) has identified at least 22 phenolic compounds (PCs) in water-ethanol extracts from the leaves of S. pallasii and not less than 28 PCs in water-ethanol extracts from its inflorescences [12]. These include two acids (chlorogenic and p-hydroxybenzoic) and five flavonols (hyperoside, isoquercitrin, quercitrin, kaempferol, and astragalin) [12]. Data on the biological activity of S. pallasii are not yet available. On the contrary, another representative of the genus Sorbaria (Ser.) A. Braun, namely S. sorbifolia (L.) A. Braun, which grows in Russia, Mongolia, Japan, and China, has been studied sufficiently well. It has been found to improve blood flow, eliminate stasis, reduce swelling, and relieve pain; it is used to treat fractures, bruises, and rheumatoid arthritis in traditional Chinese medicine [13]. A decoction of S.
sorbifolia is employed to treat diarrhea and rheumatoid arthritis in Amur Oblast and Transbaikalia. The plant is used to cure skin tuberculosis and other skin diseases [14]. An ethyl acetate extract from the above-ground part of S. sorbifolia possesses immunomodulatory properties and enhances the antitumor effect of chemotherapeutic drugs against inoculated carcinoma 180 [15]. A dry extract from S. sorbifolia inflorescences has been prepared that has antiviral and antioxidant activity [16]. Scientists across the globe have revealed antitumor, anti-inflammatory, antimicrobial, antimelanogenic, and antioxidant properties of extracts from Sorbaria species [17][18][19][20][21][22]. In their phytochemical studies on Sorbaria, they have isolated triterpenoids, such as cucurbitacin [23], flavonoids, phenolcarboxylic acids, cyanoglycosides, and chromone derivatives [18,24,25], and triterpene acids of the ursane and oleanane series [26]. Data on S. pallasii are scarce, probably because it grows in hard-to-reach locations, e.g., on mountain slopes above the tree line. Another species, S. sorbifolia, has been extensively introduced throughout Russia and abroad; it grows in easy-to-reach places, and its habitat does not extend high into the mountains.
The pharmaceutical industry is highly dependent on medicinal plants, and as a consequence of excessive exploitation, many wild plant species are endangered or extinct [27]. An alternative way to obtain secondary metabolites of plant origin is the culturing of plant cells and tissues (in vitro). The in vitro cultivation technique may help not only to preserve wild plants in nature but also to provide standard plant material from hard-to-propagate or poorly accessible plants for the pharmaceutical industry.
The aims of this study were to create a well-growing sterile culture of S. pallasii microshoots under in vitro conditions and to perform its phytochemical analyses.
Induction of Clonal Micropropagation and Preparation of the Extract
The material for introduction into the in vitro culture was collected in late September 2021. S. pallasii plants grew in the experimental plot of the Laboratory of Phytochemistry, the Central Siberian Botanical Garden (CSBG) SB RAS. They had been introduced in 2019 from the Zeya State Nature Reserve (Amur Oblast, Zeya District), in the upper reaches of the Big Erakingra River in the subalpine zone of the Tukuringra Ridge (54°07′05.7″ N, 126°56′02.4″ E, 1094 m above sea level). The plants grew on rocky debris overgrown with moss. The explants were flower buds of a generative plant. The shoots were cut into segments (one bud per segment) and sterilized in a 0.2% HgCl2 solution for 15 min and then washed three times with sterile distilled water for 10 min. Next, the outer scales and some rudimentary leaves were aseptically removed from the buds by means of tweezers and a scalpel; the buds were separated from the branches and placed onto the Murashige and Skoog (MS) agar medium [28] supplemented with 5.0 µM 6-benzylaminopurine (BA; Sigma-Aldrich, St. Louis, MO, USA) to initiate morphogenesis. Each bud was cultured separately in a penicillin vial for 30 days to reduce contamination. The resulting primary microshoots were cut into single-node segments, and the leaves and unexpanded buds were also separated and transferred to the micropropagation medium. The medium was supplemented with the following plant growth regulators (PGRs): 5.0 µM BA in combination with 0.0-1.0 µM α-naphthylacetic acid (α-NAA; Sigma-Aldrich, St. Louis, MO, USA); the passage lasted 30 days. For microshoot elongation, the plant material was cultured in the hormone-free medium of the same mineral composition. The passage lasted 30 days. Elongated microshoots were cut into single-node segments and cultivated either in a solid or a liquid medium (no agar) of the same mineral and hormonal composition and in the corresponding PGR-free media.
All the media were supplemented with 3% sucrose (Shostka Chemical Reagents Plant, Shostka, Ukraine), and the pH was adjusted to 5.8 before autoclaving (sterilization). The solid media contained 0.6% agar (PanReac, Barcelona, Spain). The media were autoclaved at 121 °C for 20 min. PGRs were added aseptically after sterilization and cooling of the media to 40 °C.
The culturing was performed under a 16 h photoperiod at 40 µmol m⁻² s⁻¹ light intensity provided by cool white fluorescent lamps at 23 ± 2 °C. In the liquid media, the culturing was performed in 100 mL Erlenmeyer flasks using a shaker (Elmi, S-3-02 L, Latvia) at 100 rpm; the volume of the medium in each flask was 20 mL.
Phytochemical analyses of the microshoots were performed after their cultivation in the hormone-free MS agar medium for three passages.
Identification and Quantitation of Phenolic Compounds (PCs) in Water-Ethanol Extracts Prepared from the Microshoots
Extract Preparation
To assess the levels of PCs and the antioxidant activity of S. pallasii microshoots, the plant material was air-dried completely at room temperature in the shade and weighed. Then, the dry material was shredded into 2-3 mm pieces and blended, and representative samples were chosen. PCs were identified and quantified in an extract obtained by incubation with 70% ethanol in a water bath with reflux at 70 °C. The extract was prepared at a 1:500 ratio of the raw material to the solvent.
Quantification of PCs
The total level of PCs was determined using the Folin-Ciocalteu reagent [29]. Briefly, 0.5 mL of the extract, 2.5 mL of the Folin-Ciocalteu reagent (diluted 1:10 with distilled water), and 2 mL of a 7.5% aqueous sodium carbonate solution were combined and shaken thoroughly. The mixture was kept in a water bath for 15 min at 45 °C. Absorption was measured at a wavelength of 765 nm using an SF-56 spectrophotometer (Lomo, St. Petersburg, Russia). A blank sample consisting of distilled water and the reagents served as a control. Gallic acid was used as a reference compound.
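As a rough illustration of the arithmetic behind such an assay, the sketch below fits a gallic acid calibration curve and converts a sample absorbance into gallic acid equivalents; all concentrations and absorbance readings in it are invented for illustration and are not values from this study.

```python
# Sketch of the calibration-curve arithmetic behind the Folin-Ciocalteu
# assay: fit a line to gallic acid standards, then convert a sample's
# absorbance at 765 nm into gallic acid equivalents.

import numpy as np

# Hypothetical gallic acid standards (µg/mL) and their A765 readings
conc = np.array([10.0, 25.0, 50.0, 100.0, 200.0])
a765 = np.array([0.08, 0.19, 0.37, 0.74, 1.46])

slope, intercept = np.polyfit(conc, a765, 1)  # A = slope * C + intercept


def phenolics_gae(absorbance, dilution_factor=1.0):
    """Total phenolics as gallic acid equivalents (µg/mL of extract)."""
    return (absorbance - intercept) / slope * dilution_factor


print(f"Sample A765 = 0.52 -> {phenolics_gae(0.52):.1f} µg/mL GAE")
```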
The Total Flavonoid Content
Flavonoids were quantified by the spectrophotometric aluminum chloride technique [30]. Briefly, 0.5 mL of 2% aluminum chloride (AlCl3) in ethanol was mixed with the same volume of the plant extract. After 1 h, absorbance readings at 415 nm against a blank (ethanol) were acquired. The optical density of the mixture was measured on the SF-56 spectrophotometer (Lomo, St. Petersburg, Russia). The flavonoid concentration was calculated with the help of a calibration curve plotted for rutin (Sigma-Aldrich).
Quantitation of Total Phenolic Acids
The total level of phenolic acids was determined by means of Arnow's reagent [31,32]. Briefly, 1 mL of the extract, 5 mL of distilled water, 1 mL of hydrochloric acid (0.1 mol/L), 1 mL of Arnow's reagent (10.0 g of sodium molybdate and 10.0 g of sodium nitrite in 100.0 mL of water), and 1 mL of sodium hydroxide (1 mol/L) were mixed, the volume was adjusted to 10 mL with distilled water, and the optical density was measured immediately at 490 nm on the SF-56 spectrophotometer (Lomo, St. Petersburg, Russia). A blank sample consisting of distilled water and the reagents was used as a control, and caffeic acid served as a reference.
Quantification of Tannins
This assay of tannins (hydrolyzable tannins) was performed using the method proposed by L.M. Fedoseeva [33]. Briefly, 10 mL of the extract was transferred into a 100 mL volumetric flask, and 10 mL of a 2% aqueous solution of ammonium molybdate was introduced. The flask's content was brought to the nominal volume with purified water and incubated for 15 min. The intensity of the resulting color was measured using the SF-56 spectrophotometer (Lomo, St. Petersburg, Russia) at 420 nm in a 1-cm light-path cuvette. A government standard sample of tannin (Sigma, St. Louis, MO, USA) was utilized.
Quantification of Catechins
The concentration of catechins was determined spectrophotometrically by the method based on the ability of catechins to produce a crimson color in a solution of vanillin in concentrated hydrochloric acid [34,35]. Briefly, 0.8 mL of the extract was placed into two test tubes. Next, 4 mL of a 1% solution of vanillin in concentrated hydrochloric acid was poured into one of them, and the volumes were adjusted to 5 mL in both tubes with concentrated hydrochloric acid. The tube without vanillin served as a control. In the presence of catechins, the sample turned pink, raspberry, or orange-red. After 5 min, the intensity of the colors was measured through the use of the SF-56 spectrophotometer (Lomo, St. Petersburg, Russia) at 504 nm in a cuvette with a light path of 1 cm. The standard curve was constructed on the basis of (±)-catechin (Sigma, St. Louis, MO, USA).
Quantitation of Individual PCs via HPLC
This analysis was performed using an Agilent 1200 HPLC system, which included a Zorbax SB-C18 column (5 µm, 4.6 × 150 mm) and was equipped with a diode array detector and a ChemStation system for the collection and processing of chromatographic data (Agilent Technology, Santa Clara, CA, USA). The separation was conducted under the following conditions: in the mobile phase, the concentration of methanol in the 0.1% phosphoric acid solution was changed from 22% to 100% over 36 min. The eluent flow rate was 1 mL/min, the column temperature was 26 °C, and the sample volume was 10 µL; detection was conducted at wavelengths of 254, 210, 230, 280, 315, 340, and 360 nm. Quantification of individual compounds in the plant extract samples was conducted by the external standard method. For detection of catechins in the plant extract, standard samples of (±)-catechin (Sigma-Aldrich, Taufkirchen, Germany), (−)-epicatechin (Serva, Heidelberg, Germany), and epigallocatechin gallate (Teavigo, Kaiseraugst, Switzerland) were employed, from which standard solutions were prepared (10 µg/mL).
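The external standard method reduces to a simple proportionality between peak area and concentration; the sketch below illustrates it with made-up peak areas, and only the 10 µg/mL standard concentration reflects the text above.

```python
# Sketch of external-standard quantification for HPLC: the response
# factor of each standard (peak area per unit concentration) converts
# sample peak areas into concentrations. All peak areas are invented.

standards = {  # compound -> (standard concentration, µg/mL; its peak area)
    "(+/-)-catechin": (10.0, 152_300.0),
    "epigallocatechin gallate": (10.0, 98_750.0),
    "(-)-epicatechin": (10.0, 120_400.0),
}

sample_areas = {  # hypothetical peak areas measured in the extract
    "(+/-)-catechin": 460_000.0,
    "epigallocatechin gallate": 37_500.0,
    "(-)-epicatechin": 66_200.0,
}

for compound, (std_conc, std_area) in standards.items():
    response_factor = std_area / std_conc            # area per µg/mL
    conc = sample_areas[compound] / response_factor  # µg/mL in injected sample
    print(f"{compound}: {conc:.2f} µg/mL")
```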
Assessment of an Antiradical Activity
Free radical scavenging capacity of the samples was determined by the 2,2-diphenyl-1-picrylhydrazyl (DPPH) method [36,37] with modifications. Briefly, 2 mL of the extract (diluted with 70% ethanol to concentrations in the range of 50-800 µg/mL) was mixed with 3 mL of a DPPH solution (62 mg/mL in ethanol). After 30 min of incubation in the dark at room temperature, optical density (A) was measured at 517 nm against a blank sample. The free radical scavenging activity was calculated as the percentage inhibition using the following formula: Inhibition (%) = ((A_blank − A_sample)/A_blank) × 100, where A_blank is the optical density of the control solution (containing all reagents except the tested extract), and A_sample is the optical density of the sample.
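The same calculation, together with a simple interpolation-based IC50 estimate, can be expressed in a few lines of Python; the absorbance readings below are hypothetical, and linear interpolation is only one of several ways an IC50 can be derived from a dose-response series.

```python
# Sketch of the DPPH calculation: percentage inhibition from the
# formula above, and IC50 estimated by linear interpolation between
# the two concentrations that bracket 50% inhibition.

import numpy as np


def inhibition(a_blank, a_sample):
    """Percent DPPH radical scavenging for one sample."""
    return (a_blank - a_sample) / a_blank * 100.0


a_blank = 0.820
concentrations = np.array([50, 100, 200, 400, 800])   # µg/mL
a_samples = np.array([0.79, 0.74, 0.66, 0.52, 0.31])  # hypothetical A517

inh = inhibition(a_blank, a_samples)
ic50 = np.interp(50.0, inh, concentrations)  # inh must be increasing
print(f"Inhibition (%): {np.round(inh, 1)}")
print(f"Estimated IC50: {ic50:.0f} µg/mL")
```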
Chemicals
All chemicals were of HPLC or analytical grade.
Statistical Analysis
The data were statistically processed by conventional methods using STATISTICA 6.0 and GraphPad Prism v.6.01 software (GraphPad Software, Boston, MA, USA). All the phytochemical experiments were set up with two biological replicates and three technical replicates per treatment. The data are presented as the mean ± standard deviation. The results were expressed in mg/(g of absolutely dry mass [ADM]).
Induction of Morphogenesis and Development of Microshoots
Surface sterilization of the plant material performed using HgCl2 and subsequent culturing of each explant (Figure 2A) resulted in a high yield (reaching 70%) of aseptic explants. Because S. pallasii was introduced into in vitro culture for the first time, the MS medium, the most common culture medium for this purpose, was used as a mineral base. BA and α-NAA (the PGRs most commonly used in clonal micropropagation experiments) were utilized for the induction of morphogenesis [38][39][40][41][42]. After 2 weeks of culturing, initiation of morphogenesis was observed, and a month later, a single 20-mm-high microshoot with flower buds and leaves developed from the explants (generative buds; Figure 2B). The frequency of regeneration of noncontaminated material reached 94%. Primary microshoots were shredded into pieces, including buds, leaves, and single-node stem segments. They were cultured for a month in the micropropagation medium: either MS plus 5.0 µM BA or MS plus 5.0 µM BA in combination with 1.0 µM α-NAA. During culturing of leaves and buds, no further morphogenesis was registered. Regeneration was observed only in single-node segments in culture media of both hormonal compositions (Figure 2C). In the medium with cytokinin (5.0 µM BA), the number of microshoots per explant was 5.7 ± 1.2, and in the medium with the cytokinin and auxin (5.0 µM BA and 1.0 µM α-NAA), it was slightly higher: 5.8 ± 1.4. The height of the microshoots on the medium containing BA was 9.3 ± 3.9 mm, and the number of leaves per microshoot was 4 ± 1. On the medium containing the auxin, the height of the microshoots was greater (10.8 ± 3.3 mm), but the number of leaves was slightly lower (3.9 ± 0.9). Nevertheless, chlorosis was detected in the experiment with the medium containing the auxin and cytokinin. It should be noted that the choice of the PGR concentration, combination, and ratio for the preparation of the medium is not only species-specific but also depends on the genotype of the specimen under study.
For example, a combination of 5.0 µM BA and 2.5 µM α-NAA has been found to be appropriate for the most successful microshooting in cultures of the rare species Rhodiola rosea [40]. In addition, a combination of 15.0 µM BA and 5.0 µM α-NAA has turned out to be optimal for clonal micropropagation of Banana cv. Karpuravalli [39]. At the same time, a similar combination of BA and α-NAA used in in vitro culture of tree peony causes mass formation of somatic embryos [41]. In the present study, we chose the PGR combination and ratio based on our previous experiments on in vitro micropropagation of Spiraea betulifolia ssp. aemiliana [42]; this PGR ratio was optimal for Spiraea betulifolia but quite unsuitable for S. pallasii; therefore, S. pallasii was further cultured in the medium containing only the cytokinin (BA).
In vitro culturing in a liquid medium is worthwhile because it is more economically advantageous and does not require expensive gelling agents. According to some data, culturing in liquid media increases the productivity and growth rates of plantlets in vitro [38,43]. Judging by other evidence, not all plant species can be cultured successfully under these conditions, which often cause vitrification of the plant material [44]. It is also reported that culturing in a liquid medium can be successful if the plant material floats on the surface of the culture medium [45]. During our micropropagation of S. pallasii in the liquid medium, either with or without the PGR(s), the plant material was immersed to the bottom of the culture vessel. This led to microshoot thickening, chlorosis, and local necrosis; therefore, S. pallasii was further incubated in the solid (agar-containing) medium.
Phytochemical Characterization of the S. pallasii Microshoots
PCs are secondary metabolites synthesized in response to stressful conditions in order to perform some physiological functions in plants. They protect from ultraviolet (UV) radiation and predators and participate in plant growth and plant responses to environmental stressors, including an injury, a pathogen invasion, mineral deficiency, and temperature stress [46][47][48]. PCs are among the main compounds that impart antioxidant properties to plants [49]. Excessive production of free radicals in the human body can cause such illnesses as cancer, atherosclerosis, Alzheimer's disease, Parkinson's disease, and cerebrovascular and cardiovascular diseases. A number of studies have revealed a preventive effect of PCs against these diseases [49][50][51]. In vitro systems of cultivation of various plant species can be an alternative source of polyphenolic compounds having a strong antioxidant effect, even stronger than that of intact plants [52,53]. Here, we investigated concentrations of PCs in the water-ethanol extract of S. pallasii microshoots (Figure 3). The extract was found to contain phenolcarboxylic acids at 30.8 mg/(g of ADM), whereas the content of flavonols (4.3 mg/[g of ADM]) was modest: seven-fold lower than that of phenolcarboxylic acids. The highest concentration of tannins (74.9 mg/(g of ADM)) and a relatively high level of catechins (13.3 mg/(g of ADM)) were detected in microshoots. The total phenolic content determined by the Folin-Ciocalteu assay was 30.2 mg/(g of ADM).
Catechins, or flavan-3-ols, belong to the flavonoid class. The catechin molecule contains two asymmetric carbon atoms on the pyran ring (C2 and C3), and hence each catechin can have four isomers and two racemates. A characteristic feature of catechins is the ability to accept a gallic acid residue with the formation of esters: gallates. Camellia sinensis and its in vitro cultures are a rich source of nongallated and gallated catechins, i.e., catechin gallates [46,54,55]. Individual catechins have been identified in representatives of the genus Sorbaria previously. For example, (±)-catechin and (−)-epicatechin have been detected in the above-ground part of S. sorbifolia var. stellipila Maxim. [56] In our current work, based on the UV spectra and a comparison of the retention times of peaks in the chromatograms of the analyzed samples with those of the standard samples, three catechins were identified in S. pallasii microshoots by HPLC: (±)-catechin (λmax = 280 nm; tR = 4.8 min), epigallocatechin gallate (λmax = 280 nm; tR = 6.5 min), and (−)-epicatechin (λmax = 280 nm; tR = 7.6 min; Figure 4). Quantification of the identified substances showed that the level of (±)-catechin (3.03 mg/(g of ADM)) was the highest in microshoots. Concentrations of epigallocatechin gallate (0.38 mg/(g of ADM)) and (−)-epicatechin (0.55 mg/(g of ADM)) were 8- and 6-fold lower, respectively, as compared to the content of (±)-catechin.
Widely used synthetic antioxidants can cause health problems [57]. Therefore, the search for natural antioxidants in the form of essential oils or plant extracts is of increasing relevance [58,59]. The antioxidant activity of Sorbaria plants has been reported repeatedly [60][61][62]. Pakistani scientists Izhar et al. [19] have detected high antioxidant potential in an extract from S. tomentosa (Lindl.) Rehder; the extract can be employed to stabilize sunflower oil.
DPPH is a stable radical widely used to assess the antioxidant potential of extracts [63]. Our S. pallasii microshoot extract showed some DPPH radical scavenging activity (IC50 = 589.66 ± 13.68 µg/mL). The antioxidant activity of the standard substances, trolox and ascorbic acid, proved to be higher: IC50 = 7.74 and 8.69 µg/mL, respectively. The antiradical activity of the microshoot extract is probably due to the relatively high content of tannins, phenolcarboxylic acids, and catechins. Catechins are generally recognized antioxidants owing to the large number of phenolic hydroxyl groups in their chemical structure. The presence of a hydroxyl group in the gallate moiety makes epigallocatechin gallate a highly effective free radical scavenger as compared to many other standard antioxidants such as ascorbic acid, tocopherol, and trolox [64,65]. In addition to the antioxidant activity, catechins possess antimicrobial, antiallergenic, anti-inflammatory, and other effects [66][67][68]. The sufficiently high concentration of catechins in the extract from S. pallasii microshoots indicates good prospects for its further biotechnological and phytochemical studies.
Conclusions
This study describes for the first time the successful introduction of S. pallasii, an endemic species of the Far East and Siberia, into in vitro culture. An aseptic culture was established by surface sterilization of flower buds with a 0.2% HgCl2 solution and subsequent cultivation of each explant in an individual vial. The best quality and quantity (5.7 ± 1.2 per explant) of microshoots were observed when the plant material was cultivated on MS agar-solidified medium supplemented with only a cytokinin (5.0 µM BA) as a PGR. Nonetheless, further investigation and optimization of the in vitro culture conditions are needed for the species under study. S. pallasii can become a promising source | 2023-02-18T16:14:29.373Z | 2023-02-01T00:00:00.000 | {
"year": 2023,
"sha1": "8edd58dd4fe4851aa7ea5a0bf2f5bac34140122d",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "140d968c95331f7f48e6088a3d2ea7c4510bd1dd",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
190907038 | pes2o/s2orc | v3-fos-license | Laryngeal and Vocal Characterization of Asymptomatic Adults With Sulcus Vocalis
Introduction Sulcus vocalis is defined as a longitudinal depression on the vocal cord, parallel to its free border. Its most marked vocal characteristic is breathiness, caused by incomplete glottal closure, in addition to roughness, due to the decrease in mucosal wave amplitude of the vocal cords. Vocal acoustic parameters, such as fundamental voice frequency, jitter, and shimmer, may also be altered in individuals with this type of laryngeal disorder. Studies assessing the voice of individuals with sulcus vocalis generally include samples of subjects with vocal symptoms, excluding asymptomatic persons. To better characterize the vocal features of individuals with sulcus vocalis, their asymptomatic counterparts must also be included. Objective Characterize the larynx and voice of asymptomatic adults with sulcus vocalis. Method A total of 26 adults, 13 with sulcus vocalis (experimental group) and 13 without (control group), were assessed. All the participants were submitted to suspension microlaryngoscopy, voice self-assessment, and auditory perception and acoustic evaluation of the voice. Results Among the individuals with sulcus vocalis, 78% of the sulci were type I and 22% type II. Auditory perception assessment yielded significantly lower scores in individuals with sulcus vocalis than in the control group, with a mild alteration in the overall degree of hoarseness and in roughness. No statistically significant intergroup differences were found in voice self-assessment or acoustic evaluation. Conclusion Type I was the predominant sulcus vocalis observed in individuals without voice complaints, who may also exhibit slight changes in overall vocal quality and roughness.
Introduction
conformation seems not to be found in the entire population. 3 Small anatomical alterations in the larynx, such as the sulcus vocalis, can change its functional result, predisposing individuals to dysphonia (hoarseness) or vocal fatigue. 1,2,4 The sulcus vocalis is defined as a longitudinal depression in a vocal cord parallel to its free border, which can vary in extension and depth and can be unilateral or present in both vocal cords. Histologically, the sulcus is located on the surface layer of the lamina propria and is lined with stratified epithelium, contiguous to the epithelium with a normal mucosal lining. 2 Sulcus changes are classified according to their morphological characteristics and the degree to which the vocal cord structures are compromised. Ford et al (1996) 5 divided sulcus disorders of the vocal folds into 3 groups: in type I, epithelial invagination is limited to the lamina propria; in type II, epithelial invagination extends along the vocal fold length; type III is the true sulcus vocalis (pocket type) and represents an epithelial invagination that may penetrate into the vocal ligament and/or vocalis muscle layers. Pontes et al (1994) 2 propose the following categories: sulcus stria minor, an epithelial invagination whose upper and lower lips usually touch each other; sulcus stria major, a spindle-shaped mucosal depression with a stiffer consistency, adhering to deeper structures such as the vocal ligament and muscle; and pouch-shaped sulcus, a lesion that emerges as an invagination whereby its lips touch each other and the opening leads to a dilated pouch-shaped subepithelial space.
The real incidence of sulcus vocalis is unknown due to three factors: lack of knowledge of this laryngeal alteration, diagnostic error, or the absence of diagnosis when vocal symptoms are not serious enough to cause vocal complaints. 6 Currently, examinations such as videolaryngoscopy, videolaryngostroboscopy or suspension microlaryngoscopy are used to investigate morphological and structural changes in the vocal cords, although it is important to consider the data related to the clinical history of vocal alterations. 1,[6][7][8][9] It is important to underscore that the sulcus vocalis is not always evident in videolaryngoscopy and often causes only slight structural alterations, although the vocal repercussions can be considerable. 2,6-8 Videolaryngostroboscopy can help assess a larynx with sulcus vocalis, showing a decline or absence of mucosal wave vibration. 1,9 However, under some circumstances, an accurate diagnosis of the sulcus vocalis can only be obtained by suspension microlaryngoscopy, the gold standard for diagnosing minimal structural changes. It is applied exceptionally because of its invasive nature and the fact that the procedure is performed under general anesthesia. 8,9 Suspension microlaryngoscopy makes it possible to assess vocal cord details under binocular microscopy at depth and with good lighting, enabling the use of instruments for palpating vocal cord alterations and providing an important contribution to sulcus vocalis diagnosis. 8,9 With respect to characterizing visual laryngeal, auditory perception, and acoustic attributes, studies performed with symptomatic individuals show that most vocal sulci are bilateral, with types II and III being the most common. 5,10-12 The most marked vocal characteristic of this lesion is breathiness, which results from incomplete glottal closure. Another altered vocal parameter is roughness, due to the decline in mucosal wave vibration of the vocal cords. 1,2,7,12 In regard to the acoustic characteristics of voice, parameters such as fundamental voice frequency, jitter, and shimmer have been reported to be altered. 1,12 However, it is important to emphasize that voice assessment studies in individuals with sulcus vocalis 5,[9][10][11][12][13][14][15] generally select a symptomatic population, excluding possible subjects with sulcus who do not display voice symptoms. As such, the aim of the present study was to characterize the larynx and voice of asymptomatic adults with sulcus vocalis from the standpoint of laryngeal, auditory perception, and acoustic assessment, in addition to voice self-evaluation.
Method
This is a cross-sectional observational study, conducted in the otolaryngology department of a public hospital in Pernambuco state, Brazil.
After approval was obtained from the institutional research committee, under protocol number 973.637, and the subjects gave their informed consent, data collection occurred between January and December 2014.
The initial sample, selected consecutively by convenience, consisted of 77 adults with no vocal complaints who had been submitted to general anesthesia for extralaryngeal surgery unrelated to the study. The subjects were submitted to the following surgeries: tonsillectomies, septoplasties, turbinectomies and/or sinusectomies. After the exclusion criteria were applied, the number of patients declined to 71. Suspension microlaryngoscopy was conducted, revealing 13 individuals with sulcus vocalis (group 1). Among the remaining subjects, 13 gender-matched controls with no laryngeal alterations were selected consecutively (group 2), totaling 26 study participants. Each group consisted of nine women and four men.
Excluded from the study were patients submitted to surgery with high anesthetic risk (above ASA III); endotracheal intubation or previous laryngeal surgery; a history of cervical trauma; extrinsic laryngeal aggression factors, including the prolonged use of inhaled corticosteroids, smoking, and occupational respiratory diseases; contraindication for suspension laryngoscopy; trauma from orotracheal intubation; presence of phonotraumatic lesions identified during the examination; and incomplete laryngeal exposure during the procedure. Suspension microlaryngoscopy was conducted by an otolaryngologist (larynx specialist) using a Zeiss microscope equipped with a 12.5 ocular lens and a 400 mm objective lens with 25X magnification, without causing stress on the vocal cords. The procedure involves placing the microlaryngoscope between the upper and lower teeth, over the tongue and down the throat to allow a good view of the larynx and vocal cords. The microlaryngoscope is a hollow metal tube with a fiber optic light. There was no surgical intervention aimed at altering the larynx, irrespective of vocal cord examination findings. All laryngeal examinations were video recorded for later reassessment. The sulci were described according to Ford et al (1996). 5 All the participants underwent the following voice evaluations: the voice symptoms scale (VSS), to ensure that all the participants were asymptomatic; auditory perception evaluation of the voice, with vocal assessment using the grade, roughness, breathiness, asthenia, strain, instability (GRBASI) scale; and acoustic evaluation of the voice applying the VOXMETRIA program (CTS Informática, Pato Branco, Paraná, Brazil). A sample characterization questionnaire was administered before the vocal assessment. All the procedures were performed a minimum of 15 days after surgery.
The VSS is an instrument adapted and validated for Brazil. 16 It provides a self-assessment of voice and vocal symptoms via 30 questions divided into three domains, collecting information on the functionality (15 questions), emotional impact (8 questions), and physical symptoms (7 questions) that a voice disorder can entail. Subjects responded individually to the scale questions, and each answer was scored from 0 to 4 according to the reported frequency: (0) never, (1) rarely, (2) sometimes, (3) almost always, (4) always. The scores were used to determine the participants' level of vocal alteration.
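A minimal sketch of how such a questionnaire is tallied is given below; the assignment of items to domains is assumed to be contiguous for illustration, whereas the validated instrument defines the exact item-to-domain mapping.

```python
# Sketch of VSS tallying: 30 items scored 0-4, grouped into functional
# (15), emotional (8), and physical (7) domains. The contiguous
# item-to-domain mapping below is an illustrative assumption.

def vss_scores(answers):
    """answers: list of 30 integers in 0..4, in questionnaire order."""
    assert len(answers) == 30 and all(0 <= a <= 4 for a in answers)
    functional = sum(answers[0:15])   # 15 items, max 60
    emotional = sum(answers[15:23])   # 8 items, max 32
    physical = sum(answers[23:30])    # 7 items, max 28
    total = functional + emotional + physical  # max 120
    return {"functional": functional, "emotional": emotional,
            "physical": physical, "total": total}


# Example: a respondent answering mostly "never" or "rarely"
example = [0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0,
           0, 0, 1, 0, 0, 0, 0, 0,
           1, 0, 0, 0, 1, 0, 0]
print(vss_scores(example))  # total well below 16, i.e. asymptomatic here
```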
For auditory perception and vocal acoustics, samples of the subjects' voices were recorded. All the tasks were executed with the patients comfortably seated in a quiet room, where voices were recorded individually. The vocal data for auditory perception and acoustic analysis were recorded using the Fonoview (CTS Informática) and Voxmetria (CTS Informática) software, respectively, on an HP Intel Core i5 2.5 GHz 4096 MB laptop. The voices were captured with a Karsect HT-9 microphone placed four centimeters from the speaker's mouth at a 45° angle. In addition, an Andrea PureAudio USB adaptor was connected to the laptop to reduce background noise.
For auditory perception data collection, the tasks selected were sustained emission of the vowels /a/ and /i/ and counting from one to ten. To rate the vocal parameters, the GRBASI scale, proposed by Hirano (1981) and complemented with Dejonckere's "I" parameter (1996), was applied. This scale analyzes the following aspects of vocal quality: roughness (R), breathiness (B), asthenia (A), strain (S), and instability (I), which, taken together, determine the overall degree of hoarseness (G). Each of these aspects can be classified on a severity scale from 0 to 3, where 0 represents no change, 1 a slight change, 2 a moderate change, and 3 a significant change.
Auditory perception of voices was evaluated by two speech therapists specialized in voice assessment, with more than 15 years' experience. To determine inter-rater agreement, 30% of the voices were randomly repeated for a total of 34 voices. The results obtained from the evaluator with the highest index of reliability were selected for analysis.
For acoustic recording of the participants' voices using the Voxmetria software (CTS Informática), subjects were instructed to emit the vowel /e/ and count from 1 to 10. The following parameters were investigated: a) fundamental frequency; b) vocal intensity; c) irregularity; d) jitter; e) shimmer; and f) glottal-to-noise excitation ratio (GNE). The acoustic data were supplied by the program itself.
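Although Voxmetria computes these measures internally, the classical local jitter and shimmer definitions are straightforward; the sketch below applies them to a hypothetical period and amplitude track and is not a description of Voxmetria's proprietary algorithms.

```python
# Sketch of the classical local jitter and shimmer definitions: mean
# absolute difference between consecutive glottal periods (or cycle
# peak amplitudes) relative to the overall mean, as a percentage.

import numpy as np


def local_jitter(periods):
    """Jitter (%) from a sequence of consecutive glottal periods (s)."""
    periods = np.asarray(periods, dtype=float)
    return np.mean(np.abs(np.diff(periods))) / np.mean(periods) * 100.0


def local_shimmer(amplitudes):
    """Shimmer (%) from consecutive cycle peak amplitudes."""
    amps = np.asarray(amplitudes, dtype=float)
    return np.mean(np.abs(np.diff(amps))) / np.mean(amps) * 100.0


# Hypothetical track: ~200 Hz voice with slight cycle-to-cycle variation
rng = np.random.default_rng(0)
periods = 0.005 + rng.normal(0, 2e-5, 100)  # seconds
amps = 1.0 + rng.normal(0, 0.02, 100)       # arbitrary units

print(f"f0 ~ {1 / np.mean(periods):.1f} Hz")
print(f"jitter  = {local_jitter(periods):.2f} %")
print(f"shimmer = {local_shimmer(amps):.2f} %")
```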
Vocal self-assessment, auditory perception and the acoustic data of groups 1 and 2 were submitted to descriptive analysis using absolute and percentage frequencies for the categorical variables and means and standard deviation for the numerical variables.
The Pearson chi-squared or Fisher exact test was applied to determine whether there was an intergroup difference in the categorical variables, and the Student t-test or Mann-Whitney test was used to compare the numerical variables. The Fisher exact test rather than the Pearson chi-squared test was used when the conditions to apply the latter were not met. The Student t-test was selected when the hypothesis of data normality was confirmed in both groups, while the Mann-Whitney test was used when the data were not normally distributed. Data normality was determined by the Shapiro-Wilk test and equality of variances by the Levene test. Cohen's weighted kappa was used to assess inter-rater agreement.
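The test-selection logic described above can be summarized in a short routine; the sketch below follows it using SciPy with placeholder data, and the fallback to Welch's t-test under unequal variances is an assumption added for completeness rather than a procedure stated in the text.

```python
# Sketch of the decision flow for the numerical variables: Shapiro-Wilk
# for normality in each group, Levene for equality of variances, then
# Student's t-test or Mann-Whitney U. Data below are placeholders.

from scipy import stats


def compare_groups(g1, g2, alpha=0.05):
    normal = (stats.shapiro(g1).pvalue > alpha and
              stats.shapiro(g2).pvalue > alpha)
    if normal:
        equal_var = stats.levene(g1, g2).pvalue > alpha
        test = stats.ttest_ind(g1, g2, equal_var=equal_var)
        name = "Student t" if equal_var else "Welch t"
    else:
        test = stats.mannwhitneyu(g1, g2, alternative="two-sided")
        name = "Mann-Whitney U"
    return name, test.pvalue


group1 = [201.3, 195.8, 210.4, 188.9, 205.1, 199.2]  # e.g. female f0, Hz
group2 = [207.6, 214.2, 198.3, 220.5, 211.0, 203.7]
name, p = compare_groups(group1, group2)
print(f"{name}: p = {p:.3f}")
```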
Results
The age of the participants varied between 24 and 66 years, with an average of 41.88 years. Half were aged between 24 and 40 years and the other half between 41 and 66 years; a majority (69.2%) were women. Only two individuals (7%) were voice professionals, one each from group 1 and 2.
►Table 1 shows the mean VSS values and standard deviations. There was no statistically significant intergroup difference (p > 0.05). The values observed for each individual were below 16 points when all the domains were added. This assessment ensured that none of the participants displayed vocal symptoms.
►Table 2 shows the auditory perception results obtained, considering each parameter on the GRBASI scale, according to the group analyzed. In group 1, 69.2% of the individuals displayed mild overall vocal alteration (G), and 61.5% mild roughness (R). In group 2, 15.4% showed mild overall vocal alteration (G) and roughness (R). Intergroup comparison revealed a significant difference (p < 0.05) in relation to the overall degree of vocal alteration (G) and roughness (R). The weighted kappa index values of inter-rater agreement analysis on the GRBASI auditory perception scale were 0.81 (group I) and 0.82 (group II) for evaluator 1, and 0.79 (group I) and 0.80 (group II) for evaluator 2. In this analysis, almost perfect agreement was obtained for evaluator 1 and substantial for evaluator 2, according to the Landis & Koch classification. As such, the former's analysis was considered.
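For illustration, a linearly weighted kappa of the kind reported here can be computed as sketched below; the two rating vectors are hypothetical, and the verbal bands follow the Landis & Koch classification mentioned above.

```python
# Sketch of the inter-rater agreement analysis: a linearly weighted
# kappa over 0-3 GRBASI severity ratings of the repeated voices, with
# the Landis & Koch verbal interpretation.

from sklearn.metrics import cohen_kappa_score


def landis_koch(kappa):
    bands = [(0.20, "slight"), (0.40, "fair"), (0.60, "moderate"),
             (0.80, "substantial"), (1.00, "almost perfect")]
    if kappa < 0:
        return "poor"
    return next(label for upper, label in bands if kappa <= upper)


# Hypothetical 0-3 ratings of the same voices on two occasions
first_pass = [0, 1, 1, 2, 0, 1, 0, 2, 1, 1, 0, 3, 1, 0, 2, 1, 0]
second_pass = [0, 1, 1, 2, 0, 1, 1, 2, 1, 0, 0, 3, 1, 0, 2, 1, 0]

kappa = cohen_kappa_score(first_pass, second_pass, weights="linear")
print(f"weighted kappa = {kappa:.2f} ({landis_koch(kappa)})")
```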
►Table 3 shows the absolute number of group 1 and 2 individuals with normal and altered values in the acoustic assessment parameters. Most of the individuals in both groups obtained normal acoustic values.
The average acoustic parameter values in ►Table 4 show that the means were higher in group 1 in vocal intensity, irregularity, shimmer and GNE ratio. Mean jitter was higher in group 2. Average fundamental frequency was higher in group 1 men, while the opposite was found for women. Furthermore, mean fundamental frequency was higher in women. However, when compared statistically, there was no significant intergroup difference (p > 0.05), considering all the acoustic parameters analyzed.
Discussion
Of the 13 individuals (26 vocal cords) with sulcus vocalis, the vast majority of sulci were type I, with type II being far less frequent. With respect to the lack of voice-related symptoms in group 1 individuals, since most were not voice professionals and did not strain their voice, they were less likely to overload their vocal apparatus. Moreover, the degree of required vocal quality, as well as self-perceived vocal disadvantage, tends to be lower in the general population than in voice professionals. 17,18 Given that voice professionals face significant vocal demands and risks, and that even slight hoarseness can limit good performance, this population would be expected to report sulcus vocalis-related problems as early as possible, in contrast to the group under study.
In relation to the auditory perception characteristics of group 1, eight of the 13 individuals with sulcus vocalis exhibited mild roughness. Only one individual displayed mild breathiness (►Table 2). This result contradicts literature findings, 5-7,10-12 in which breathiness is the primary auditory perception alteration in patients with sulcus vocalis. Hirano et al (1990) 11 assessed 126 patients with sulcus vocalis, and most of the individuals exhibited mild breathiness and hoarseness. The results indicated that voice quality was more correlated with glottic incompetence than with vocal cord stiffness. Since it was a retrospective study of individuals with vocal complaints examined by videolaryngoscopy, the sulci detected would likely be more pronounced, with a more evident vocal impact, which would explain the higher incidence of breathiness.
Other studies of symptomatic patients also showed a predominance of breathiness in subjects with sulcus vocalis, in addition to the presence of roughness in some cases. The authors also agree that breathiness is related to glottic incompetence and stiffness in the lamina propria of the vocal cords. 1,19 Bouchayer et al (1988) 20 reported that although sulcus vocalis is a benign condition, it has a dramatic impact on the voice, and the resulting vocal quality can be considered typical: not only breathiness, but especially a rough, veiled, monotone voice with reduced loudness, limited harmonics, and a lack of projection imposed by the restricted muco-undulatory movement of the vocal cords.
Given these literature findings, which contradict those obtained here, in which individuals with sulcus vocalis exhibited mild roughness, it is important to underscore that 78% of group 1 sulci were type I, which are generally shallow, exerting no significant impact on glottic closure or the voice. In addition, sulcus vocalis affects a part of the population with no vocal complaints, whereby the impact on the voice depends on the type and magnitude of the sulcus vocalis as well as on the vocal demand the individual is submitted to. 1,2,8 Minimal structural alterations of the larynx present since birth can manifest themselves in the first sounds an infant makes or in adulthood, depending on vocal demands, irritative factors, and laryngeal development itself. 1,2,21 Intergroup comparison in terms of auditory perception assessment revealed that individuals with sulcus vocalis obtained lower scores, with a statistically significant difference in relation to group 2 in overall hoarseness (G) and roughness (R), both at a mild level. The presence of roughness in group 1 individuals is related to vibratory irregularity in the mucosal wave of the vocal cords. Some authors 1,6,11 report that sulcus vocalis may exhibit different levels of vibratory irregularity in the mucosal wave when different layers of the lamina propria of the vocal cords are affected. 4,7 Ushijima et al (1986) 22 considered that the sulcus vocalis is related to persistent hoarseness due to insufficient glottic closure during phonation. Other authors 2,4,6 found that the impact is minimal in type I sulcus vocalis, because only the surface layer of the lamina propria is affected, and it is often not perceived by the individuals, who exhibit mild roughness. 1 With respect to acoustic assessment, in both groups 1 and 2, all the individuals showed normal voice frequencies, with no significant intergroup differences (►Table 3). The normal distribution range for male voices is between 80 and 150 Hz, while for females it varies from 150 to 250 Hz. 1,23,24 Several studies found alterations in fundamental frequency in individuals with sulcus vocalis, often leading to higher (more acute) frequencies. 6,10,25 However, the samples of these studies consisted of subjects with type II or type III sulci, likely with vocal cords that exhibited important structural alterations. In the present study, most of the sulci were type I, with minimal structural alteration of the mucosa and no increase in stiffness or impact on the fundamental frequency. These results corroborate those reported by Lim et al (2009), 6 who assessed individuals with type I sulcus vocalis and found normal frequency values.
Table 4. Mean values and standard deviation of fundamental frequency, intensity, irregularity, shimmer, jitter, and glottal-to-noise excitation ratio per group.
Intensity is directly related to the subglottic pressure of the air column, which depends on factors such as the amplitude of vibration and the tension of the vocal cords, more specifically glottic resistance. 1,26 With respect to this parameter, normal mean values were observed for both groups, with no significant differences between them (►Table 4). Considering that the individuals with sulcus vocalis in the present study exhibited minimal structural alterations of their vocal cords with no impact on glottic closure, no change in subglottic air pressure or voice intensity is expected.
The irregularity parameter is related to glottic coaptation and quantifies the irregularity of vocal cord vibratory cycles. 1,27 In the present study, mean values were normal for both groups, with no significant differences between them.
In relation to shimmer, the individuals with sulcus vocalis displayed normal values, and no significant intergroup differences were observed. Shimmer indicates the variability in sound-wave amplitude, that is, irregular alterations in the amplitude of glottic cycles from one cycle to the next. 1,5,11 According to the literature, 1,28 alterations in shimmer values occur when there is a decline in glottic resistance or a mass lesion; since the individuals studied here likely showed neither, this explains the fact that most of the values were normal.
In regard to jitter, the means obtained in groups 1 and 2 were normal. Jitter expresses the short-term variability of fundamental frequency, that is, how much each glottic period differs from its predecessor or immediate successor. Alterations occur due to a lack of control of vocal cord vibration, and jitter is often correlated with roughness. 1,5,11 The results of the present study demonstrated that the alterations found did not increase the aperiodicity of vocal cord vibration, which would be reflected in higher jitter values. 1 The type of sulcus found does not result in significant damage to the vocal cord mucosa.
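As a concrete illustration of these two perturbation measures, the sketch below computes local jitter and shimmer from a sequence of extracted glottic periods and peak amplitudes. It assumes the periods and amplitudes have already been extracted by a pitch tracker (assumed preprocessing) and uses the common "local" definitions (mean absolute difference between consecutive cycles, normalized by the mean), which may differ from the exact formulas of the analysis software used in the study.

```python
import numpy as np

def local_jitter(periods_s: np.ndarray) -> float:
    """Local jitter (%): mean absolute difference between consecutive
    glottic periods, divided by the mean period."""
    diffs = np.abs(np.diff(periods_s))
    return 100.0 * diffs.mean() / periods_s.mean()

def local_shimmer(amplitudes: np.ndarray) -> float:
    """Local shimmer (%): mean absolute difference between consecutive
    peak amplitudes, divided by the mean amplitude."""
    diffs = np.abs(np.diff(amplitudes))
    return 100.0 * diffs.mean() / amplitudes.mean()

# Toy example: a nearly periodic voice with tiny cycle-to-cycle variation.
periods = np.array([0.00500, 0.00502, 0.00499, 0.00501, 0.00500])  # ~200 Hz
amps = np.array([1.00, 0.98, 1.01, 0.99, 1.00])
print(local_jitter(periods), local_shimmer(amps))
```

In a voice with preserved mucosal-wave regularity, as in the type I sulci predominant here, both values stay small, which is consistent with the normal means reported above.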
With respect to the jitter and shimmer data obtained in this study, it is important to underscore the findings of Yilmaz, 15 who studied the acoustic data of 44 patients with sulcus vocalis. In his study, patients exhibited changes in several acoustic parameters (jitter, shimmer, and fundamental frequency), without characterization of the type of sulcus vocalis. It is important to emphasize, however, that since that study assessed a sulcus excision technique associated with vocal cord medialization, the population analyzed displayed intense vocal complaints and sulci with greater structural vocal cord alterations. 29-31

The glottal-to-noise excitation (GNE) ratio is an acoustic measure that quantifies the noise accompanying vocal cord oscillation, indicating whether the vocal signal originates in vocal cord vibration or in the turbulent airflow produced in the vocal tract. 32 The mean GNE values obtained in groups 1 and 2 were normal, with no significant intergroup differences. Since the GNE is related to breathiness, and roughness was the only parameter that stood out in the individuals assessed (predominant in group 1), the normal mean values can be explained. According to Madazio et al (2009), 33 strained and adapted voices may exhibit normal mean GNE values.
In regard to the difference between auditory-perceptual and acoustic data, assessment of the former considers both source- and filter-related information, which could, in some situations, change the overall impression of the voice. It is known that correlations between auditory-perceptual and acoustic data do not always exist. 1 According to Pontes, 2 a larynx with sulcus vocalis or other minimal structural alterations (MSA) can remain balanced and adapted to the vocal demand of the speaker without being compromised for the rest of the speaker's life. The slight vocal alterations observed in individuals with sulcus vocalis reinforce the hypothesis that its vocal impact may be minimal or nonexistent, although these slight vocal deviations can still be detected by voice specialists. Therefore, the range of vocal-quality alterations in individuals with sulcus vocalis is far broader than that traditionally described in the literature: in addition to symptomatic patients with pronounced sulci and severe hoarseness, in whom roughness and breathiness predominate, there is also a much larger population of asymptomatic individuals with minor stria sulci that may display minimal alterations in voice quality. 34 This sample clearly demonstrates the possibility of classifying one of the most frequent MSAs (sulcus vocalis) either as an anatomical variation (a morphological alteration in which organ function is not compromised) or as a representative entity of the set of laryngeal abnormalities, since in certain situations this lesion promotes phonatory deviation.
Conclusion
Type I sulcus vocalis is predominant in individuals with no voice complaints, who may exhibit slight changes in vocal quality, characterized by roughness, or no voice alterations whatsoever. | 2019-06-14T13:46:38.920Z | 2019-02-28T00:00:00.000 | {
"year": 2019,
"sha1": "d2f70215d5c4a0868595c7e95a6bf47ea24d313a",
"oa_license": "CCBYNCND",
"oa_url": "http://www.thieme-connect.de/products/ejournals/pdf/10.1055/s-0039-1688457.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d2f70215d5c4a0868595c7e95a6bf47ea24d313a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259608850 | pes2o/s2orc | v3-fos-license | Association between digestive diseases and sarcopenia among Chinese middle-aged and older adults: a prospective cohort study based on nationally representative survey
Objectives Patients with digestive diseases frequently suffer from dyspepsia and malabsorption, which may lead to muscle loss due to malnutrition. However, it is not clear whether digestive diseases are associated with sarcopenia. This study aims to explore the longitudinal association between digestive diseases and sarcopenia in middle-aged and older adults based on a nationally representative survey from China. Methods We conducted a prospective cohort study including 7,025 middle-aged and older adults aged ≥45 years from the 2011 to 2015 waves of the China Health and Retirement Longitudinal Study (CHARLS). Digestive diseases were identified using self-report. The assessment of sarcopenia was based on the Asian Working Group for Sarcopenia 2019 Consensus and included three components: muscle strength, physical performance, and muscle mass. Cox proportional hazards regression was used to examine the association between digestive diseases and sarcopenia. Results The prevalence of digestive diseases and the incidence of sarcopenia in middle-aged and older adults were 22.6% (95% CI = 21.6–23.6%) and 8.5% (95% CI = 7.8–9.1%), respectively. After adjusting for 15 covariates composed of three sets (demographic characteristics, lifestyles, and health status), digestive diseases were associated with a higher risk of sarcopenia (HR = 1.241, 95% CI = 1.034–1.490, P < 0.05). The associations were more pronounced among men, older adults aged 60–79 years, rural residents, and married people. In addition, the association between digestive diseases and sarcopenia was robust in the sensitivity analysis. Conclusion Digestive diseases were associated with an increased risk of sarcopenia in middle-aged and older adults aged ≥45 years. Early intervention for digestive diseases may help to reduce the incidence of sarcopenia in middle-aged and older adults.
Introduction
Sarcopenia refers to a generalized and progressive skeletal muscle disorder involving the loss of muscle strength and mass (1). A recent systematic review and meta-analysis indicated that the global prevalence of sarcopenia ranged from 10 to 27%, with the prevalence of severe sarcopenia varying between 2 and 9% (2). Sarcopenia not only decreases the quality of life of individuals but also imposes serious economic and medical burdens on society and families. Numerous studies have shown that sarcopenia is associated with increased risks of adverse health outcomes such as falls, fractures (3), and mortality (4). In addition, it has been identified as a predictor of increased health service use and healthcare costs according to evidence from different countries (5,6). Therefore, exploring its risk factors to achieve precise screening and early prevention has become a focus of interest in the fields of clinical practice, geriatrics, and public health.
Previous studies mainly investigated factors associated with sarcopenia from the perspective of sociodemographic characteristics (7) and nutritional status (8). Beyond these factors, studies have found that sarcopenia may also be secondary to, or co-exist with, certain long-term diseases and pathological conditions (1,9). Previous studies suggested an increased prevalence of sarcopenia in those with bone and joint diseases, cancer, chronic heart failure, chronic obstructive pulmonary disease, and diabetes (9). The long-term presence of these diseases damages physiological function, leads to chronic inflammation and metabolic disturbances, and may thereby induce sarcopenia (10). Based on these identified factors, many studies have explored intervention measures for sarcopenia, mainly involving exercise and nutritional supplements (11). A systematic review showed that exercise interventions can improve muscle strength and physical function, while the results of nutrition interventions were ambiguous (12), which may be related to the different kinds of nutrients taken. A recent systematic review and meta-analysis of nutrition supplement interventions found that different types of nutrients had different effects on the muscle mass and strength of older adults (13). In addition to the type of nutrients, the effect of nutrition supplements also depends on whether the individual can effectively absorb them, which is directly related to the individual's digestive and absorptive function.
Patients with digestive diseases often have impaired digestion, which may lead to muscle loss due to poor absorption of nutrients. However, few studies have explored whether digestive diseases are associated with an increased risk of sarcopenia. A previous cross-sectional study that enrolled 303 patients with digestive diseases in Japan reported a sarcopenia prevalence of 32.0% (14), indicating that patients with digestive diseases often suffer from sarcopenia. However, it was impossible to explore the relationship between digestive diseases and sarcopenia in that study, because it selected only patients with digestive diseases as its sample. Therefore, it is necessary to further examine the association between digestive diseases and sarcopenia to provide scientific evidence for future interventions. Malnutrition has been identified as a highly prevalent complication in patients with digestive diseases such as gastroparesis (15), inflammatory bowel disease (16), pancreatic disease (17), cirrhosis (18), and non-alcoholic fatty liver disease (19). A Belgian population-based cohort study involving 534 community-dwelling older people showed that malnutrition was a strong predictor of the onset of sarcopenia (20). Recently, researchers proposed the concept of the gut-muscle axis (21), in which the gastrointestinal tract and skeletal muscle interact with each other through hormones, gut microbes, and metabolites (22). This further implies a possible relationship between digestive diseases and sarcopenia. Therefore, given the existing research gaps, we used a prospective cohort study to explore the association between digestive diseases and sarcopenia. This will provide population-level evidence from Chinese communities and contribute to more precisely targeting high-risk populations in community screening and intervention for sarcopenia.
Participants
We used the 2011 to 2015 waves of the China Health and Retirement Longitudinal Study (CHARLS) to conduct a prospective cohort study. CHARLS is a national survey of middle-aged and older adults in China that adopts a multi-stage stratified probability sampling method and has good national representativeness (23). CHARLS conducted its baseline survey in 2011, with follow-up waves approximately every 2 years. To date, data from the four waves of 2011, 2013, 2015, and 2018 have been released. Since the 2018 wave released only questionnaire data without biomarker data, we used the 2011-2015 data to explore the association between digestive diseases and sarcopenia. In 2011, CHARLS recruited 17,708 participants. After excluding samples under the age of 45 and those with missing data, 10,400 participants were included at baseline. Subsequently, we excluded participants with sarcopenia in 2011 and those lost to follow-up or missing sarcopenia data in the 2013 and 2015 surveys; 7,025 participants entered the final analysis. The sample selection process is shown in Figure 1. CHARLS was approved by the Ethics Committee of Peking University (No. IRB00001052-11015), and all participants provided informed consent.
Digestive disease
Digestive diseases were identified using self-report. Participants were asked, "Have you been diagnosed with stomach or other digestive diseases (except for tumor or cancer) by a doctor?" Participants who answered "yes" were classified as having digestive diseases. In this study, we defined digestive diseases as a binary variable (no vs. yes).
Figure 1. The selection process of participants.
Sarcopenia
The assessment of sarcopenia was based on the Asian Working Group for Sarcopenia 2019 Consensus and included three components: muscle strength, physical performance, and muscle mass (24). Muscle strength was measured using a standardized handgrip dynamometer (Yuejian WL-1000, China); handgrip strength below 28 kg in men and below 18 kg in women was considered low muscle strength (24). The five-time chair stand test was used to evaluate physical performance, with a time ≥12 s considered low physical performance (24). Muscle mass was evaluated by the appendicular skeletal muscle mass (ASM). Since CHARLS does not measure ASM with instruments, we used a previously validated anthropometric equation (ASM = 0.193 × body weight + 0.107 × height − 4.157 × sex (men = 1, women = 2) − 0.037 × age − 2.631) to estimate ASM (25). To reduce the potential impact of height on ASM, we adjusted ASM for height by dividing it by the square of height (ASM/Ht²). Following a previous study (26), we defined low muscle mass as ASM/Ht² below the sex-specific 20th percentile. Low muscle mass combined with low muscle strength or low physical performance was considered sarcopenia.
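To make the operational definition concrete, the sketch below applies the anthropometric equation and the cutoffs quoted above. The units (weight in kg, height in cm, converted to meters for the height adjustment) follow the original validation study and are an assumption here, and the ASM/Ht² cutoff is not a fixed constant: it must be computed from the study sample as the sex-specific 20th percentile.

```python
def asm_wen(weight_kg: float, height_cm: float, sex: int, age: float) -> float:
    """Estimate appendicular skeletal muscle mass (kg) with the anthropometric
    equation cited in the text (sex: men = 1, women = 2). Units of kg and cm
    follow the original validation study; this is an assumption here."""
    return (0.193 * weight_kg + 0.107 * height_cm
            - 4.157 * sex - 0.037 * age - 2.631)

def is_sarcopenic(weight_kg: float, height_cm: float, sex: int, age: float,
                  grip_kg: float, chair_stand_s: float,
                  asm_ht2_cutoff: float) -> bool:
    """AWGS-2019-style rule used in the study: low muscle mass plus either
    low strength or low physical performance. asm_ht2_cutoff is the
    sex-specific 20th percentile of ASM/Ht2 computed from the sample."""
    height_m = height_cm / 100.0
    asm_ht2 = asm_wen(weight_kg, height_cm, sex, age) / height_m ** 2
    low_mass = asm_ht2 < asm_ht2_cutoff
    low_strength = grip_kg < (28.0 if sex == 1 else 18.0)
    low_performance = chair_stand_s >= 12.0
    return low_mass and (low_strength or low_performance)

# Example: a 70-year-old man, 60 kg, 165 cm, weak grip, slow chair stands,
# with a hypothetical cutoff of 7.0 kg/m2 for men.
print(is_sarcopenic(60, 165, sex=1, age=70,
                    grip_kg=24, chair_stand_s=14, asm_ht2_cutoff=7.0))
```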
Covariates
Covariates in this study included three sets: demographic characteristics, lifestyles, and health status. Demographic characteristics included age, sex, residence (rural or urban), marital status (married or unmarried), and education level (illiterate, primary school, or middle school and above). Lifestyles included smoking (current non-smoker or current smoker), drinking (current non-drinker or current drinker), exercise (hardly or regularly), social activity participation (no or yes), and meal frequency (<3 or ≥3 meals per day). Health status included chronic diseases (no or yes), cognitive function (cognitively normal or mild cognitive impairment (MCI)), functional limitations (no or yes), visual impairment (no or yes), and hearing impairment (no or yes). More details on the measurement of covariates are provided in the Supplementary material.
Statistical methods
Stata version 17.0 (StataCorp, College Station, TX, USA) was used to perform all statistical analyses. We used the Chi-square test to compare differences in demographic characteristics, lifestyles, and health status between the groups with and without digestive diseases, with Cramer's V used to estimate effect size. Cox proportional hazards regression was used to examine the association between digestive diseases and sarcopenia, with the hazard ratio (HR) and 95% confidence interval (CI) used to estimate the strength and statistical significance of the association, respectively. We estimated four models: model 1 was unadjusted; model 2 adjusted for age, sex, residence, marital status, and education level; model 3 additionally adjusted for smoking, drinking, exercise, social activity participation, and meal frequency; and model 4 further adjusted for chronic diseases, cognitive function, functional limitations, hearing impairment, and visual impairment. To test the heterogeneity of the results, we conducted subgroup analyses by sex, age, residence, and marital status, and we examined the interactions of digestive diseases with these variables for associations with sarcopenia. In addition, we excluded individuals with MCI in a sensitivity analysis to test the robustness of the association between digestive diseases and sarcopenia. We set P < 0.05 as statistically significant.
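The analyses were run in Stata, but the nested-model strategy translates directly to any survival library. The sketch below, a minimal illustration using Python's lifelines package, reproduces the four-model structure on synthetic data; all column names are placeholders rather than actual CHARLS variable names, and categorical covariates are assumed to be already numerically coded.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500  # synthetic stand-in for the 7,025 CHARLS participants

demo   = ["age", "sex", "residence", "married", "education"]
life   = ["smoking", "drinking", "exercise", "social_activity", "meal_freq"]
health = ["chronic_disease", "mci", "functional_limit", "hearing_imp", "visual_imp"]

# Fabricated 0/1 covariates plus exposure and outcome, for illustration only.
cols = demo + life + health + ["digestive_disease", "incident_sarcopenia"]
df = pd.DataFrame({c: rng.integers(0, 2, n) for c in cols})
df["age"] = rng.integers(45, 90, n)
df["followup_years"] = rng.uniform(0.5, 4.0, n)

models = {"model1": [], "model2": demo,
          "model3": demo + life, "model4": demo + life + health}
for name, covars in models.items():
    cph = CoxPHFitter()
    cph.fit(df[["followup_years", "incident_sarcopenia",
                "digestive_disease"] + covars],
            duration_col="followup_years", event_col="incident_sarcopenia")
    print(name, "HR for digestive disease =",
          round(cph.hazard_ratios_["digestive_disease"], 3))
```

Subgroup analyses follow the same recipe, refitting the fully adjusted model on strata of sex, age, residence, or marital status, while interaction tests add a product term between the exposure and the stratifying variable.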
Descriptive statistics
The mean age of the 7,025 participants was 57.3 years (SD: 9.4). The prevalence of digestive diseases and the incidence of sarcopenia in middle-aged and older adults were 22.6% (95% CI = 21.6-23.6%) and 8.5% (95% CI = 7.8-9.1%), respectively. More detailed information regarding the characteristics of participants according to digestive diseases is shown in Table 1.

Table 2 displays the results of the association between digestive diseases and sarcopenia in the whole sample. Model 1 showed that digestive diseases were significantly associated with a higher risk of sarcopenia (HR = 1.304, 95% CI = 1.089-1.562, P < 0.01). After adjusting for age, sex, residence, marital status, and educational level in model 2, the association weakened but remained significant (HR = 1.270, 95% CI = 1.061-1.522, P < 0.01). After further adjusting for smoking, drinking, exercise, social activity participation, and meal frequency in model 3, digestive diseases were still associated with a higher risk of sarcopenia, and the association remained significant in the fully adjusted model 4 (HR = 1.241, 95% CI = 1.034-1.490, P < 0.05).
Subgroup analysis
The results of the subgroup analysis showed that the association between digestive diseases and sarcopenia was heterogeneous across sex, age, residence, and marital status. Specifically, the association was more pronounced among males, older adults aged 60-79 years, rural residents, and married people. However, we found no significant interaction effects. Details of the subgroup analysis are shown in Figure 2.
Sensitivity analysis
After excluding participants with MCI, digestive diseases were still associated with sarcopenia (HR = 1.387, 95% CI = 1.126-1.707, P = 0.002). This result suggested that the association of digestive diseases with sarcopenia was robust.

Figure 2. Subgroup analyses of associations between digestive diseases and sarcopenia.
Discussion
Based on a nationally representative survey, this study explored the longitudinal association between digestive diseases and sarcopenia among middle-aged and older adults in China. To our knowledge, this is the first study to explore this association in a Chinese population. We found that digestive diseases were associated with a higher risk of sarcopenia among Chinese middle-aged and older adults, which further enriches the known correlates of sarcopenia and provides a new modifiable factor for future prevention and intervention.
Although previous studies have not directly explored this association, some have investigated the association between gastrointestinal diseases and the components of sarcopenia. A longitudinal study based on the UK Biobank showed that gastrointestinal diseases predicted a decline in grip strength after 9 years of follow-up (27). This further supports our findings and provides observational evidence for the gut-muscle axis theory. It should be noted that our study excluded digestive system cancers from digestive diseases, because cachexia and sarcopenia are very common complications in cancer patients (28) and could have introduced bias. This study also found that the association between digestive diseases and sarcopenia differed by sex, age, residence, and marital status, applying only to males, older adults aged 60-79 years, rural residents, and married people. First, the significant associations observed in males, rural residents, and married people may be related to lower health service utilization. A previous national survey of older adults in China found that males, rural residents, and married people were less likely to use outpatient and inpatient services than females, urban residents, and the unmarried (29), which may prevent timely treatment when they fall ill, thus increasing the harm of digestive diseases to health. Second, the association between digestive diseases and sarcopenia was found only in participants aged 60-79 years, which may be due to the lower incidence of sarcopenia in middle-aged people aged 45-59 years and the small number of the oldest-old over 80 years; a low incidence rate and small sample size can lead to insufficient statistical power. However, since no significant interactions were found, the differences in these subgroup analyses may simply reflect the reduction in sample size after grouping. In the future, studies with larger samples are needed to analyze the results for different subgroups.
The mechanisms linking digestive diseases to an increased risk of sarcopenia may involve an imbalance of the gut microbiota, malnutrition, and long-term inflammation. First, most digestive diseases involve changes in the gut microbiota, which may play a role in the pathophysiological process of muscle mass decline. The gut microbiota of patients with chronic liver disease shows decreased abundance of Firmicutes (Ruminococcaceae and Lachnospiraceae) and Prevotellaceae and increased abundance of Enterobacteriaceae and Proteobacteria (30). The gut microbiota of patients with inflammatory bowel disease (IBD) shows decreased abundance of Firmicutes (Eubacterium, Christensenellaceae, and Faecalibacterium prausnitzii) and increased abundance of Actinomyces and Escherichia coli (31). Moreover, other digestive conditions, including Helicobacter pylori infection, are also associated with alterations in the gut microbiome (32,33). Researchers analyzing the gut microbiome of subjects with sarcopenia or low skeletal muscle mass found decreased abundance of Firmicutes (Eubacterium, Ruminococcaceae, Lachnospiraceae, F. prausnitzii) (34,35). These changes are, to a certain extent, consistent with the characteristics of the gut microbiota in patients with liver and gastrointestinal diseases. The microbes depleted in digestive diseases produce metabolites such as short-chain fatty acids (SCFAs), secondary bile acids (BAs), and certain amino acids, which promote skeletal muscle health by activating G protein-coupled receptors (GPR41/43) and inhibiting HDACs, thereby regulating signaling pathways related to insulin resistance, inflammatory response, and oxidative stress (36). Second, patients with digestive diseases often face the risk of malnutrition: 32.0-61.5% of patients with various digestive diseases were reported to suffer from malnutrition (17,37,38). Anorexia and abdominal discomfort, the main symptoms of digestive diseases, may reduce dietary intake and thus deprive muscle protein synthesis of its raw materials. In addition, various pathological changes associated with digestive diseases, such as digestive enzyme deficiency, decreased synthesis or secretion of conjugated BAs, reduced intestinal absorptive area, and dysbacteriosis, can cause maldigestion and malabsorption, leaving nutrients unavailable for skeletal muscle metabolism (39). The association between digestive diseases and sarcopenia may also be related to reduced nutrient absorption caused by the medications these patients take. Proton pump inhibitors (PPIs) have become the first choice for the treatment of acid-related digestive diseases, and their use has been increasing in recent decades (40,41). A large number of studies have found that PPI use is related to reduced absorption of micronutrients such as vitamins C and D and magnesium (42,43). Meanwhile, a growing body of evidence shows that low micronutrient intake is associated with an increased risk of sarcopenia (44).
Third, chronic low-grade inflammation is an important pathological mechanism in the development of sarcopenia. Gastritis, pancreatitis, chronic liver disease, inflammatory bowel disease, irritable bowel syndrome, and other digestive diseases are often accompanied by inflammation and elevated levels of proinflammatory cytokines such as tumor necrosis factor-alpha (TNF-α), interleukin 6 (IL-6), and myostatin (22,45,46). TNF-α, IL-6, and myostatin may activate the expression of the muscle-specific E3 ligases muscle RING-finger protein-1 (MuRF-1) and muscle atrophy F-box (MAFbx) by regulating the p38 mitogen-activated protein kinase (MAPK) pathway and the nuclear factor-kappa B (NF-κB) pathway (47). MuRF-1 and MAFbx can further activate the ubiquitin-proteasome system (UPS) to promote protein degradation, or proteolysis, of skeletal muscle (47). Abnormal autophagy regulated through the phosphoinositide 3-kinase (PI3K)/protein kinase B (AKT)/mammalian target of rapamycin (mTOR) pathway is also associated with inflammation-mediated muscle atrophy (48). These mechanisms provide a possible explanation for the relationship between digestive diseases and sarcopenia and suggest that future research should include gut flora or inflammatory factors to explore whether these biomarkers mediate the relationship between digestive diseases and sarcopenia.
It is worth noting that our study has several strengths. First, based on nationally representative data, we explored the association between digestive diseases and sarcopenia in a prospective cohort for the first time, which provides good generalizability and a higher level of evidence. Second, we controlled for a set of covariates covering sociodemographic characteristics, health conditions, and lifestyles to make the results more robust. In particular, we controlled for the prevalence of chronic diseases, and digestive diseases still significantly increased the risk of sarcopenia, indicating that digestive diseases are associated with sarcopenia independently of other chronic diseases. In addition, we conducted a subgroup analysis, which will help identify vulnerable groups in the early prevention of sarcopenia in the future. Finally, we conducted a sensitivity analysis, and the results suggested that the association between digestive diseases and sarcopenia is robust. These results provide new ideas for early screening and prevention of sarcopenia among middle-aged and older people in community settings. Community health workers could flag individuals at elevated risk of sarcopenia 4 years later simply by asking middle-aged and older people whether they have digestive diseases. Given how readily self-reported digestive diseases can be ascertained, this study has significant public health implications for reducing the burden of sarcopenia.
Our study also has some limitations. First, a cohort study cannot establish a causal relationship between digestive diseases and sarcopenia; in the future, causal inference methods are needed to strengthen the causal interpretation of these results. Second, we used self-reported digestive diseases rather than clinical diagnoses, which may introduce misclassification bias. Although the survey emphasized that participants should report diseases based on previous doctors' diagnoses, some bias is undeniable. Owing to the data limitations of CHARLS, previous studies (49,50) have used the same assessment of digestive diseases and have likewise emphasized the limitations of self-report and measurement bias. On the other hand, given the scarcity of primary medical resources, if the 4-year risk of sarcopenia in middle-aged and older adults can be estimated from self-reported digestive diseases, this may be of great value for early intervention and prevention, as self-reported digestive diseases are very easy to obtain and do not require diagnostic medical equipment. Third, we did not collect subtypes of digestive diseases, such as gastritis, gastric ulcer, and inflammatory bowel disease; the relationship between specific digestive diseases and sarcopenia should be explored in the future. Moreover, sarcopenia may be closely related to participants' diet; however, since CHARLS did not involve specific dietary assessments, we included only daily meal frequency as a covariate, which may result in residual confounding. Finally, we excluded some missing data in the process of sample selection, which may introduce selection bias, although given the large sample size of our study, its impact is likely to be small.
Conclusion
Digestive diseases were associated with an increased risk of sarcopenia in middle-aged and older adults aged ≥45 years. Early intervention for digestive diseases may help to reduce the incidence of sarcopenia in middle-aged and older adults.
Data availability statement
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: the data used in this study can be obtained from the CHARLS official website (https://charls.pku.edu.cn/en/).
Ethics statement
The studies involving human participants were reviewed and approved by the Ethics Committee of Peking University. The patients/participants provided their written informed consent to participate in this study.
Author contributions
GC, SL, and XZ: conceptualization. GC and SL: methodology, validation, and formal analysis. GC: software and writing - original draft preparation. HY, GC, SL, and XZ: writing - review and editing. XZ: supervision. YY, YF, YC, XJ, and ML: project administration. All authors read and agreed to the published version of the manuscript.
Funding
The study was supported by the Qi-Huang Scholar Chief Scientist Program of National Administration of Traditional Chinese Medicine Leading Talents Support Program (2021). | 2023-07-11T16:10:16.496Z | 2023-07-05T00:00:00.000 | {
"year": 2023,
"sha1": "b8cf4606c32fc6c0c56f1c2ac409b01431a0d4fa",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnut.2023.1097860/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a1ba5db295d08facb88f133816e5370addea822d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
255870787 | pes2o/s2orc | v3-fos-license | The molecular mechanism of METTL3 promoting the malignant progression of lung cancer
Lung cancer remains one of the major causes of cancer-related death globally. Recent studies have shown that aberrant m6A levels caused by METTL3 are involved in the malignant progression of various tumors, including lung cancer. The m6A modification, the most abundant RNA chemical modification, regulates RNA stabilization, splicing, translation, decay, and nuclear export. The methyltransferase complex plays a key role in the occurrence and development of many tumors by installing the m6A modification. In this complex, METTL3 is the first identified methyltransferase and also the major catalytic enzyme. Recent findings have revealed that METTL3 is strongly associated with different aspects of lung cancer progression and influences patient prognosis. In this review, we focus on the underlying mechanisms of METTL3 in lung cancer and consider future work and the potential clinical application of targeting METTL3 for lung cancer therapy.
Introduction
Lung cancer is one of the most common malignant tumors and has the highest mortality rate worldwide [1][2][3]. According to histological appearance, lung cancer is classified into small cell lung cancer (SCLC) and non-small cell lung cancer (NSCLC) [4,5]. Unfortunately, due to the lack of effective means of early diagnosis, most patients with lung cancer are diagnosed at an advanced stage and have a poor prognosis [6]. Even with the development of multidisciplinary comprehensive management, the overall survival (OS) rate of patients with lung cancer remains very low, at about 15% [7].
As is well known, tumorigenesis is an extremely complex biological process involving both genomics and epigenetics [8,9]. There is strong evidence that epigenetic modification has a profound impact on the occurrence and development of tumors without changes in the DNA sequence [10]. Epigenetic modifications include DNA methylation, histone modification, RNA modification, and noncoding RNA regulation. RNA m6A modifications account for about 0.1-0.4% of adenosines in RNA isolated from eukaryotic cells [16]. Studies have shown that m6A-modified sites are highly conserved and tend to occur within the consensus sequence Pu[G > A]m6AC[U > A > C] (Pu, purine) [18]. Of note, m6A sites are mainly enriched near the 3' untranslated region (UTR), stop codons, and long internal exons [18,19], and they play an important role in regulating RNA metabolism. In mammals, m6A modification is dynamically reversible, installed by the methyltransferase complex ("writers") and removed by demethylases ("erasers") [20,21]. At least eight methyltransferases are found in the methyltransferase complex, including METTL3, METTL14, WTAP, KIAA1429 (VIRMA), RBM15, HAKAI, ZC3H13 (KIAA0853), and METTL16. Within this complex, METTL3 is the first identified and the sole catalytic subunit, catalyzing the transfer of a methyl group from S-adenosylmethionine (SAM) to the N6 position of adenosine [13]. In addition, RNA-binding proteins, also called "readers", can selectively recognize and bind to the m6A sites of target RNAs, thereby participating in RNA metabolism, including RNA splicing, maturation, decay, translation, stabilization, pri-mRNA processing, and nuclear export [22,23] (Fig. 1). As research deepens, more novel m6A-related regulators will likely be identified.
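As a small illustration of what this consensus sequence means in practice, the sketch below scans an RNA string for the RRACH-like pattern described above (purine, G/A, the methylated A, C, then U/A/C). This is only a sequence-motif search under the stated consensus, not an m6A caller; actual site identification requires experimental data such as MeRIP-seq.

```python
import re

# Consensus from the text: Pu [G>A] A C [U>A>C]. The lookahead lets
# overlapping occurrences be found; group 1 captures the 5-mer.
CONSENSUS = re.compile(r"(?=([AG][AG]AC[UAC]))")

def consensus_sites(rna: str) -> list[int]:
    """Return 0-based positions of the putatively methylated adenosine
    (third base of each consensus 5-mer) in an RNA sequence."""
    return [m.start() + 2 for m in CONSENSUS.finditer(rna.upper())]

print(consensus_sites("GGACUAAGGACA"))  # -> [2, 9]
```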
Recent findings have revealed that METTL3 is closely related to the progression of a variety of tumors, including lung cancer. In the present review, we discuss the underlying molecular mechanisms of METTL3 in the occurrence and development of lung cancer and consider future research directions, as well as the potential clinical application of targeting METTL3 in lung cancer treatment.
Effects of methyltransferase METTL3 on lung cancer
Accumulating evidence has shown that abnormal m6A levels mediated by METTL3 are involved in the malignant progression of lung cancer, including cell proliferation, invasion, metastasis, angiogenesis, drug resistance, glycolysis, cancer stem cells, and the tumor environment [24][25][26]. Therefore, we summarize the recent findings on METTL3 in the tumorigenesis of lung cancer (Table 1).

Figure 1. m6A-binding proteins (YTHDF1, YTHDF2, YTHDF3, YTHDC1, YTHDC2, IGF2BP1/2/3, HNRNPC, HNRNPA2B1). METTL3 influences the expression of key oncogenes by regulating RNA metabolism, including pri-mRNA processing, RNA splicing, maturation, stability, translation, decay, and nuclear export.
Effects of METTL3 on the proliferation of lung cancer
Lung cancer is a malignant tumor originating from bronchial mucosal epithelial cells and is closely associated with excessive cell division, cell-cycle disturbance, and apoptosis dysregulation [27][28][29]. It has been reported that the long noncoding RNA (lncRNA) ABHD11-AS1 is highly expressed in NSCLC and closely related to poor prognosis. Mechanistically, METTL3 increases the transcript stability of ABHD11-AS1 and reduces the expression of its downstream gene KIF4, promoting cell proliferation [30]. Conversely, circRNA pumilio RNA binding family member 1 (circPUM1), a functional circRNA, promotes NSCLC cell growth by activating the miR-590-5p/METTL3 axis [31]. Thus, METTL3 participates in the initiation and development of lung cancer by modifying noncoding RNAs. Interestingly, miR-33a can inhibit NSCLC cell proliferation by directly targeting the 3'UTR of METTL3 mRNA, thereby downregulating METTL3 and its downstream genes, such as epidermal growth factor receptor (EGFR) and transcriptional coactivator with PDZ-binding motif (TAZ) [32]. In addition, METTL3 can also act as a tumor suppressor in lung cancer. Using MeRIP-qPCR analysis, Wu et al. found numerous m6A methylation sites on FBXW7 mRNA in lung adenocarcinoma (LUAD). Further study showed that METTL3 upregulates FBXW7 expression, thereby regulating proliferation- and apoptosis-related genes such as Bax, c-Myc, and Mcl-1, in an m6A-dependent manner [33]. Similarly, METTL3 represses LUAD tumorigenesis by regulating SLC7A11 mRNA expression. Mechanistically, the reader YTHDC2 preferentially binds m6A-modified SLC7A11 mRNA, promoting its degradation and thereby limiting cystine uptake and antioxidant function [34].
Effects of METTL3 on the invasion, migration and metastasis of lung cancer
Accumulating studies have shown that METTL3 expression in lung cancer tissues is significantly higher than in normal tissues and is strongly associated with tumor invasion, migration, and metastasis [35][36][37]. In NSCLC, METTL3-mediated Yes-associated protein (YAP) overexpression leads to tumor metastasis. Analyzing the upstream regulatory mechanism of YAP, researchers found that METTL3 not only upregulates the m6A level of the YAP transcript to enhance its translation but also activates the MALAT1-miR-1914-3p-YAP axis to increase YAP transcript stability [38]. In TGF-β-mediated lung cancer epithelial-mesenchymal transition (EMT) models, METTL3 markedly accelerates the EMT process [35]. Mechanistically, METTL3 significantly enhances JUNB mRNA stability in an m6A-dependent manner, whereas E-cadherin expression is upregulated upon METTL3 knockdown.
Moreover, the CTNNB1 gene encodes the β-catenin protein. The m6A level of CTNNB1 mRNA is abnormally increased and closely related to the EMT process in the HeLa cell line. Mechanistically, METTL3 modifies the 5'UTR of CTNNB1 mRNA and negatively regulates CTNNB1 mRNA stability and translation [39]. In addition, METTL3 upregulates the expression of the transcription factor E2F1, thereby indirectly downregulating the β-catenin protein. Intriguingly, this study also revealed that YTHDF1 selectively binds eIF4E1 or eIF4E3 to regulate β-catenin expression in a noncanonical pathway, depending on the cellular METTL3 level. For example, in METTL3-knockdown cells, YTHDF1 tends to bind the oncoprotein eIF4E1 to upregulate β-catenin. Finally, METTL3 can also decrease c-Met kinase expression to repress the membrane localization of β-catenin, inhibiting cell migration.

m6A levels have been reported to be affected by environmental factors, including particulate matter (PM2.5) [40] and cigarette smoke [41]. Recent studies have shown that METTL3 overexpression induced by smoke exposure downregulates E-cadherin and accelerates the malignant transformation of normal lung tissue in mice by activating the ZBTB4/EZH2/H3K27me3 axis [41]. It has also been reported that METTL3 participates in m6A-modified mRNA translation independently of its catalytic activity or m6A readers in lung cancer lines [42]. In METTL3 mutation assays, its N-terminal domain (amino acids 1-200) did not exhibit methyltransferase catalytic activity but was sufficient to increase the translation efficiency of target mRNAs. Mechanistically, METTL3 interacts with eIF3h to form an mRNA loop that markedly enhances polyribosome translation efficiency, increasing the expression of key oncogenes such as EGFR and TAZ [43]. However, METTL3 selectively binds only about 22% of RNAs containing m6A sites [44], so exploring how METTL3 selects its target mRNAs is vital to elucidating its biological function.
Effects of METTL3 on drug resistance of lung cancer
Drug resistance is the major reason for the failure of most solid tumor treatments [8,45,46]. Its mechanisms are quite complicated, involving cancer stemness, the ABC transporter family, noncoding RNA regulation, the tumor microenvironment, hypoxia, autophagy, DNA damage and repair, and epigenetic modification. There is considerable evidence that METTL3 is linked to drug resistance in many different types of tumors, including lung cancer [27].
In a murine cisplatin-resistant lung cancer model, METTL3 upregulates YAP expression to promote drug resistance and metastasis via YTHDF1/3, eIF3b, and the MALAT1-miR-1914-3p-YAP axis [38]. Conversely, METTL3 knockdown increased the sensitivity of lung cancer cells to cisplatin by downregulating YAP expression. c-Met overexpression in NSCLC is positively correlated with crizotinib resistance [47,48]. NSCLC cells treated with chidamide showed increased sensitivity to crizotinib in vitro and in vivo; mechanistically, chidamide reduced the m6A level and expression of c-Met by downregulating METTL3 and WTAP [49]. Liu et al. found that METTL3-mediated autophagy is involved in NSCLC resistance to gefitinib, and further mechanistic studies revealed that METTL3 regulates the expression of autophagy-related genes such as ATG5, ATG7, LC3B, and SQSTM1 in an m6A-dependent manner to promote cell survival [50]. Similarly, METTL3 overexpression was positively correlated with MET in gefitinib-resistant LUAD; METTL3 regulates MET expression, thereby synergistically activating the downstream PI3K/AKT pathway and reducing the sensitivity of LUAD to gefitinib [51]. Conversely, METTL3 knockdown has also been reported to increase NSCLC resistance to cisplatin [52].
Effects of METTL3 on the glycolysis of lung cancer
Abnormal glucose metabolism, one of the key features of tumor energy metabolism, has been reported to facilitate malignant tumor initiation and development [8,53]. Recent findings have revealed that circPUM1 upregulates the expression of glucose transporter 1 (GLUT1) and hexokinase-2 (HK2) to promote glycolysis in lung cancer by upregulating METTL3 [31].
Unlike normal cells, which rely on mitochondrial oxidative phosphorylation, tumor cells mainly rely on aerobic glycolysis for their energy supply, a phenomenon called the "Warburg effect" [54,55], which provides a beneficial environment for tumor cell growth. One study showed that METTL3 can activate the ABHD11-AS1/EZH2/KLF4 axis to downregulate the expression of the transcription factor Kruppel-like factor 4 (KLF4), enhancing the Warburg effect [30]. METTL3 might also be involved in other metabolic pathways, such as lipid and amino acid metabolism, which remain to be explored.
Effects of METTL3 on angiogenesis of lung cancer
Angiogenesis, a typical feature of the malignant progression of tumor cells, is a complex biological process by which new capillaries grow from preexisting vessels, providing oxygen and nutrients for tumor progression [8,56,57]. METTL3 was reported to regulate the expression of let-7e-5p and miR-18a-5p, significantly improving the biological functions of endothelial cells (ECs) that facilitate neovascularization in mouse models of limb ischemia and myocardial infarction [58]. Thus, METTL3 can serve as an important regulator of angiogenesis. Similarly, METTL3 also regulates the miR-143-3p/VASH1 axis in an m6A-dependent manner to enhance the angiogenic capacity of lung cancer cells [59].
Effects of METTL3 on the tumor microenvironment of lung cancer
The tumor microenvironment (TME) is composed of cancer cells, cancer stem cells, endothelial cells, pericytes, cancer-associated fibroblasts, and immune and inflammatory cells, as well as extracellular components such as vascular endothelial growth factor (VEGF) and epidermal growth factor (EGF), and it participates in tumor growth, invasion, metastasis, and drug resistance [8,27].
METTL3 depletion in macrophages reshaped the TME by increasing M1- and M2-like tumor-associated macrophage (TAM) and regulatory T (Treg) cell infiltration in vivo, resulting in tumor growth, metastasis, and drug resistance. Mechanistically, ablation of METTL3 in macrophages inhibits YTHDF1-mediated SPRED2 translation, thereby enhancing the activation of ERK and of NF-κB and STAT3 signaling [60]. Thus, METTL3 plays a key role in the TME. However, the molecular mechanisms at work in the TME are quite complex, and METTL3-mediated remodeling of the TME reveals only the tip of the iceberg of how m6A methylation participates in TME formation (Fig. 2).
Effects of METTL3 on the prognosis of lung cancer
More and more studies have demonstrated that METTL3 is strongly linked to the prognosis of lung cancer. However, whether METTL3 can precisely reflect clinical outcomes remains controversial. A retrospective study by Liu et al. evaluated the relationship between METTL3 and the prognosis of lung cancer through meta-analysis and bioinformatic analysis [61]. The results showed that METTL3 overexpression is closely related to the prognosis of various tumors and could serve as a potential tumor biomarker [61].
By analyzing 22 immune cell types, Xu et al. identified a T follicular helper cell signature (risk score) that could serve as an independent prognostic factor in patients with lung squamous cell carcinoma (LUSC) [62]. LUSC patients were separated into low-risk and high-risk groups based on this risk score. Interestingly, the low-risk group, in which the expression of METTL3, HNRNPC, ALKBH5, and KIAA1429 was upregulated, exhibited worse OS, while the high-risk group, in which these m6A-related regulators were downregulated, responded better to chemotherapy and immunotherapy, suggesting that METTL3 overexpression may predict a poor prognosis in LUSC patients [62]. The study by Zhang et al. showed that METTL3 overexpression in LUAD was strongly related to better OS and progression-free survival (PFS) [63]. Hence, METTL3 could serve as a protective gene in LUAD. However, another study reached the opposite conclusion, that METTL3 overexpression was negatively associated with LUAD prognosis [64].
Construction of a risk score model and lung cancer prognosis
METTL3, together with other m6A-related regulators, participates synergistically in m6A methylation. Estimating the prognostic value of METTL3 alone in lung cancer is therefore less reliable, and risk score models based on multiple-gene signatures have been constructed to reflect the prognosis of patients with lung cancer more reasonably. Zhu et al. found that none of six m6A-related regulators alone was a prognostic risk factor for lung cancer [65]; however, a six-gene risk score model built through bioinformatics analysis was significantly associated with clinicopathological features and survival outcomes, serving as an independent predictor of prognosis in LUAD [65]. Similarly, based on eight m6A regulators, Liu et al. constructed an optimal prognostic gene risk score model that could serve as an independent prognostic factor in LUAD [66]. Additionally, a risk score model constructed from three risk genes (METTL3, YTHDC1, and HNRNPC) also performed well in predicting the prognosis of LUSC patients [66]. Moreover, Zhuang et al. constructed a risk score model using ten m6A regulators that was strongly related to clinicopathological characteristics and could serve as an independent prognostic factor in LUAD [67]. Unfortunately, this model was not applicable to LUSC.
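Although the cited studies differ in their details, a typical m6A-regulator risk score follows the same recipe: fit a Cox model on regulator expression, weight each gene's expression by its coefficient, and split patients at the median score. The sketch below illustrates this on synthetic data with the three-gene LUSC signature named above; the data layout and the use of a plain multivariable Cox fit (rather than, say, LASSO-Cox) are assumptions, not the published pipelines.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
genes = ["METTL3", "YTHDC1", "HNRNPC"]          # the LUSC signature named above
expr = pd.DataFrame(rng.normal(size=(300, 3)), columns=genes)  # stand-in data
df = expr.assign(time=rng.uniform(1, 60, 300),   # follow-up months (synthetic)
                 event=rng.integers(0, 2, 300))  # death indicator (synthetic)

# 1) A multivariable Cox fit gives one coefficient per regulator.
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")

# 2) Risk score = sum of coefficient x expression; median split -> risk groups.
df["risk"] = expr.mul(cph.params_[genes], axis=1).sum(axis=1)
high = df["risk"] > df["risk"].median()

# 3) A log-rank test compares survival between high- and low-risk patients.
res = logrank_test(df.loc[high, "time"], df.loc[~high, "time"],
                   event_observed_A=df.loc[high, "event"],
                   event_observed_B=df.loc[~high, "event"])
print("log-rank p =", res.p_value)
```

In practice such a score is validated on an independent cohort and tested against clinicopathological variables in a multivariable model before being called an independent prognostic factor.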
Gene alternative splicing (GAS) refers to the production of multiple mRNA isoforms from a single gene; it regulates gene expression at the posttranscriptional level and plays a crucial role in the development of diverse diseases, including cancer [68][69][70][71]. Zhao et al. constructed a risk signature of m6A-associated GAS events that could serve as an independent prognostic risk factor in LUAD and LUSC [72]. Of note, METTL3, HNRNPC, and RBM15, acting as splicing factors, can also be directly involved in GAS events in NSCLC [72].
Relationship between METTL3 and noncoding RNA in lung cancer
Noncoding RNAs are functional RNAs that are not translated into proteins but can regulate gene expression [73,74]. According to length, noncoding RNAs can be divided into short noncoding RNAs (siRNA, miRNA, piRNA) and long noncoding RNAs (lncRNAs) [75,76].
In mammals, miRNAs can regulate the expression of target genes at the posttranscriptional level through incomplete complementary pairing with the mRNA 3'UTR. Interestingly, the 3'UTR is also where m6A modification is enriched [75]. Studies have shown that approximately 67% of transcripts with m6A modification in the 3'UTR contain at least one miRNA binding site [18]. Hence, METTL3-mediated m6A methylation is closely linked to miRNA regulation; functionally, both can regulate the expression of critical oncogenes and influence tumor progression. Additionally, nine m6A-mediated miRNAs were identified in human bronchial epithelial cells (HBEs) treated with arsenite using Venn diagram and KEGG analyses [77]. These miRNAs might regulate crucial pathways related to cell proliferation and apoptosis, including the P53, mTOR, and MAPK pathways, suggesting that miRNAs could serve as the pivotal bridge by which METTL3-mediated m6A contributes to cell proliferation or apoptosis in arsenite-treated HBEs. Table 2 lists the relationships between METTL3 and noncoding RNAs in lung cancer.
In addition, the METTL3-YTHDC1 axis participates in the back-splicing of some circRNAs and affects their biogenesis. For example, YTHDF3/eIF4G2 regulates the translation of circ-ZNF609 by recognizing a specific m6A site on it [78]. It has been reported that circPUM1 knockdown inhibits NSCLC cell growth and glycolysis in vitro and in vivo; further study showed that circPUM1 sponges miR-590-5p, which directly targets METTL3 and downregulates its expression. Liu et al. identified a novel circRNA, circIGF2BP3, that is overexpressed in NSCLC and inhibits CD8+ T cell infiltration [79]. Mechanistically, acting through the circIGF2BP3/PKP3 axis, METTL3 promotes the formation of a PKP3-FXR1 protein-RNA complex that stabilizes OTUB1 mRNA. OTUB1 in turn upregulates PD-L1 by inhibiting its ubiquitination in NSCLC cells, inducing immune escape and resistance to PD-1 inhibitors.
Recent advances in targeting METTL3
Accumulating evidence has shown that METTL3 plays a crucial role in the tumorigenesis of lung cancer, in ways both dependent on and independent of m6A modification. It could therefore act as a potential therapeutic target (Table 3).
METTL3 inhibitors
As the sole catalytic subunit in the methyltransferase complex, METTL3 is involved in different aspects of tumor progression, such as cell proliferation, invasion, migration, metastasis, tumor environment, cancer stem cells, and drug resistance [43,80,81].
Because adenosine can serve as a SAM-competitive inhibitor of METTL3, Bedi et al. identified seven compounds from among 4,000 adenosine analogs and derivatives using high-throughput docking into METTL3, two of which (compounds 2 and 7) showed good ligand efficiency [82]. Additionally, simvastatin has been reported to exert anti-tumor activity in various cancers, including lung cancer. The study by Chen et al. indicated that simvastatin suppresses cell proliferation, migration, invasion, metastasis, and EMT in lung cancer by reducing EZH2 expression via downregulation of METTL3 [83].
Drug combination
In order to overcome multidrug resistance and prolong patient survival, drug combinations are becoming mainstream in lung cancer therapy [84,85]. Recent evidence has shown that combining two different targeted agents can block multiple targets along signal transduction pathways to prevent the malignant progression of lung cancer, thereby achieving better clinical efficacy [86][87][88]. Chidamide is a novel small-molecule inhibitor targeting HDAC1/2/3/10 [89]. Interestingly, histone deacetylase inhibitors (HDACIs) combined with other agents strongly improve antitumor activity [90][91][92][93]. Recently, Ding et al. revealed that chidamide could make NSCLC cells more sensitive to crizotinib in vivo. Mechanistically, chidamide decreases the stability and translational efficiency of METTL3 and WTAP transcripts, thereby downregulating the m6A level and expression of c-Met [49].
Others
METTL3 overexpression is common in NSCLC and dramatically accelerates the translational efficiency of key oncogenes, driving NSCLC malignancy [42,43]. However, METTL3 overexpression has also been associated with a better prognosis in NSCLC patients. Based on bioinformatics analysis, Liu et al. constructed a risk score model with eight m6A methylation regulators, including METTL3, that better reflected the clinical outcomes of LUAD and LUSC patients; METTL3 acted as a protective gene in this model and was commonly enriched in the low-risk group [66]. Similarly, Zhang et al. found that METTL3 overexpression was positively correlated with OS in LUAD samples from The Cancer Genome Atlas (TCGA) [63]. Zhu et al. constructed a risk score model based on six m6A methylation regulators that better predicted the clinicopathological characteristics of LUAD patients, with METTL3 acting as a tumor suppressor in this model [65]. It has also been reported that interleukin 37 (IL-37) increases METTL3 expression to upregulate the total m6A level, inhibiting A549 cell proliferation; furthermore, IL-37 exerts anti-tumor activity by targeting the Wnt5a/5b pathway [94]. Therefore, Selberg et al.
performed a virtual screening based on the crystal structure of the methyltransferase complex. Four small-molecule compounds were found to enhance its methyltransferase activity by specifically binding METTL3, upregulating the total m6A level of cellular RNA [95]. Mechanistically, these compounds increase the affinity of SAM for METTL3 and lower the energy barrier of the m6A methylation reaction by interacting with SAM in the active center of METTL3 [95].
METTL3-METTL14-WTAP complex in lung cancer
METTL3 is the first identified methyltransferase and the only one with enzymatic activity. METTL3 interacts with METTL14 to form a heterodimer with the highest enzymatic activity and a stronger preference for substrate RNAs [44]. WTAP then binds to the METTL3-METTL14 heterodimer to form a METTL3-METTL14-WTAP complex localized at nuclear speckles, thereby affecting the total m6A level [96]. Interestingly, METTL3, as the only catalytically active subunit of the methyltransferase complex, promotes the development of lung cancer both dependently and independently of its catalytic activity. On the one hand, METTL3 affects the m6A levels of oncogenes or tumor suppressors, thereby regulating their expression to promote tumor progression [39,81]. On the other hand, METTL3 interacts with eIF3h to form an mRNA loop independently of its catalytic activity, which greatly accelerates polyribosome translation efficiency and upregulates the expression of key oncogenes such as EGFR and TAZ [42,43] (Fig. 3).
It has been reported that METTL14 depletion significantly downregulates m6A levels in tumor stromal cells, thereby decreasing CD8+ T cell infiltration and increasing dysfunctional T cells, leading to tumor growth [79]. Further study revealed that the METTL14-YTHDF2 axis maintains the balance between cytotoxic CD8+ T cells and dysfunctional T cells, and that METTL14-depleted TAMs overexpressing Epstein-Barr virus-induced protein 3 (Ebi3) transcripts inhibit the anti-tumor activity of CD8+ T cells, resulting in the conversion of CD8+ T cells to dysfunctional T cells. Furthermore, METTL14 is overexpressed in NSCLC cell lines and induces the EMT process [97]; mechanistically, METTL14 knockdown downregulates Twist expression to inhibit the AKT pathway and upregulate E-cadherin, thereby inhibiting NSCLC cell migration. Of note, based on the TCGA and GEO databases, Li et al. found that METTL14 was downregulated in lung cancer tissues compared to normal tissues, and METTL14 overexpression inhibited lung cancer growth and metastasis in vivo and in vitro through the miR-30c-1-3p/MARCKSL1 axis [98].
In 2000, Little et al. first identified WTAP using a yeast two-hybrid assay [99]. WTAP, a widespread nuclear protein, can specifically bind to the WT1 protein and colocalize with it at nuclear speckles [99]. In addition, WTAP can act as a splicing factor and participate in the alternative splicing of a specific subset of m6A-modified RNAs. It has been reported that the lncRNA PCGEM1 is highly expressed in NSCLC and promotes cell growth [100]. Mechanistically, lncRNA PCGEM1 sponges miR-433-3p to upregulate WTAP expression and accelerate NSCLC progression. Intriguingly, METTL3 levels are critical for WTAP homeostasis, as either METTL3 knockdown or overexpression upregulates WTAP expression [101]. Mechanistically, METTL3 overexpression increases WTAP mRNA translation and protein stability independently of its catalytic activity, whereas METTL3 knockdown increases WTAP mRNA levels and thereby eventually upregulates WTAP expression. Notably, WTAP overexpression is insufficient to promote tumor progression when functional METTL3 is absent, implying that WTAP depends on the METTL3-METTL14 complex to exert its oncogenic activity [101].
Conclusions
Accumulating studies have shown that METTL3 plays a crucial role in the occurrence and development of lung cancer. METTL3 participates in cell proliferation, invasion, migration, metastasis, angiogenesis, glycolysis, drug resistance, and shaping of the tumor microenvironment, in ways both dependent on and independent of its catalytic activity. Thus, METTL3 could act as a potential therapeutic target for lung cancer. Notably, m6A regulators include both methyltransferases and demethylases, so METTL3, although the sole catalytically active methyltransferase subunit, does not by itself determine the global m6A level, suggesting that m6A levels may be regulated by specific patterns that require further investigation.
On the one hand, METTL3-mediated m6A methylation regulates the expression of oncogenes or tumor suppressors, influencing the malignant progression of lung cancer. On the other hand, METTL3 interacts with eIF3h to form mRNA loops, significantly increasing the translation efficiency of key oncogenes. The combined effect of the two may determine the development of lung cancer. Intriguingly, either METTL3 knockdown or overexpression can regulate WTAP expression, suggesting that METTL3 and WTAP interact and act as upstream regulators of each other. Therefore, the regulatory network with METTL3 at its core in lung cancer is extremely complex and involves many molecular mechanisms.
A large number of studies have shown that METTL3 interacts with noncoding RNAs to regulate the expression of downstream genes, thereby affecting the progression of lung cancer. Among noncoding RNAs, circRNAs are of particular interest: the METTL3-YTHDC1 axis participates in circRNA biogenesis and regulates the translation of specific m6A-modified circRNAs.
Notably, there is solid evidence that METTL3 can act as an oncogene in lung cancer cell lines but as a tumor suppressor in tumor stromal cells. For example, METTL3 deletion in macrophages can promote tumor growth, metastasis, and drug resistance by increasing M1- and M2-TAM and Treg infiltration in tumors and reshaping the tumor microenvironment. In addition, METTL14-deficient macrophages significantly downregulate the global m6A level, reducing cytotoxic CD8+ T cells and increasing dysfunctional T cells, thereby promoting tumor growth. In tumor stromal cells, especially TAMs, METTL3 or METTL14 can therefore serve as protective genes in lung cancer. To our knowledge, this is the first proposal that METTL3 and METTL14 play opposite roles in lung cancer cells and tumor stromal cells, which precisely explains why high levels of m6A methylation can be predictive of better prognosis in NSCLC patients. | 2023-01-17T14:27:55.728Z | 2022-03-24T00:00:00.000 | {
"year": 2022,
"sha1": "f2f5dd46786c06c129ad7dcf003ec0e0915e68f9",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s12935-022-02539-5",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "f2f5dd46786c06c129ad7dcf003ec0e0915e68f9",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
118938105 | pes2o/s2orc | v3-fos-license | Unified theory for Goos-Hänchen and Imbert-Fedorov effects
A unified theory is advanced to describe both the lateral Goos-Hänchen (GH) effect and the transverse Imbert-Fedorov (IF) effect, through representing the vector angular spectrum of a 3-dimensional light beam in terms of a 2-form angular spectrum consisting of its 2 orthogonal polarized components. From this theory, the quantization characteristics of the GH and IF displacements are obtained, and the Artmann formula for the GH displacement is derived. It is found that the eigenstates of the GH displacement are the 2 orthogonal linear polarizations in this 2-form representation, and the eigenstates of the IF displacement are the 2 orthogonal circular polarizations. The theoretical predictions are found to be in agreement with recent experimental results.
I. INTRODUCTION
In 1947, Goos and Hänchen [1] experimentally demonstrated that a totally reflected light beam at a plane dielectric interface is laterally displaced in the incidence plane from the position predicted by geometrical reflection. Artmann [2] in the next year advanced a formula for this displacement on the basis of a stationary-phase argument. This phenomenon is now referred to as Goos-Hänchen (GH) effect. In 1955, Fedorov [3] expected a transverse displacement of a totally reflected beam from the fact that an elliptical polarization of the incident beam entails a non-vanishing transverse energy flux inside the evanescent wave.
Imbert [4] calculated this displacement using an energy flux argument developed by Renard [5] for the GH effect and measured it experimentally. This phenomenon is usually called the Imbert-Fedorov (IF) effect. The investigation of the GH effect has been extended to the cases of partial reflection and transmission in transmitting configurations [6,7] and to other areas of physics, such as acoustics [8], nonlinear optics [9], plasma physics [10], and quantum mechanics [5,11]. The IF effect has been connected with angular momentum conservation and the Hall effect of light [12,13]. But the comment of Beauregard and Imbert [14] remains valid today: there are, strictly speaking, no completely rigorous calculations of the GH or IF displacement. Though the stationary-phase argument was used to explain [2] the GH displacement and to calculate the IF displacement [15], it was on the basis of the formal properties of the Poynting vector inside the evanescent wave [14] that the quantization characteristics were obtained for both the GH and IF displacements in total reflection. On the other hand, it has been found that the GH displacement in transmitting configurations has nothing to do with the evanescent wave [7].
The purpose of this paper is to advance a unified theory for the GH and IF effects through representing the vector angular spectrum of a 3-dimensional (3D) light beam in terms of a 2-form angular spectrum, consisting of its 2 orthogonal polarizations. From this theory, the quantization characteristics of the GH and IF displacements are obtained, and the Artmann formula [2] for the GH displacement is derived. The amplitude of the 2-form angular spectrum describes the polarization state of a beam in such a way that the eigenstates of the GH displacement are the 2 orthogonal linear polarizations and the eigenstates of the IF displacement are the 2 orthogonal circular polarizations.
II. GENERAL THEORY
Consider a monochromatic 3D light beam in a homogeneous and isotropic medium of refractive index $n$ that intersects the plane $x = 0$. In order to have a beam representation that can describe propagation parallel to the $x$-axis, the vector electric field of the beam is expressed in terms of its vector angular spectrum as [16]
$$\mathbf{E}(x,y,z)=\int\!\!\int \mathbf{A}(k_y,k_z)\,e^{i\mathbf{k}\cdot\mathbf{r}}\,dk_y\,dk_z, \qquad (1)$$
where the time dependence $\exp(-i\omega t)$ is assumed and suppressed, $\mathbf{A} = (A_x\ A_y\ A_z)^T$ is the vector amplitude of the angular spectrum, $\mathbf{k} = (k_x\ k_y\ k_z)^T$ is the wave vector satisfying $k_x^2+k_y^2+k_z^2 = (2\pi n/\lambda)^2$, $\lambda$ is the vacuum wavelength, the superscript $T$ means transpose, and the beam is supposed to be well collimated so that its angular distribution function is sharply peaked around the principal axis $(k_{y0}, k_{z0})$ and the integration limits have been extended to $\pm\infty$ for both variables $k_y$ and $k_z$ [17]. When this beam intersects the plane $x = 0$, the electric field distribution on this plane is thus
$$\Psi(y,z)=\int\!\!\int \mathbf{A}(k_y,k_z)\,e^{i(k_y y + k_z z)}\,dk_y\,dk_z; \qquad (2)$$
hereafter the integration limits will be omitted as such. The position coordinates of the centroid of the beam (1) on the plane $x = 0$ are defined by
$$y = \frac{\int\!\!\int \mathbf{A}^\dagger\, i\,\frac{\partial}{\partial k_y}\, \mathbf{A}\; dk_y\, dk_z}{\int\!\!\int \mathbf{A}^\dagger \mathbf{A}\; dk_y\, dk_z} \qquad (3)$$
and
$$z = \frac{\int\!\!\int \mathbf{A}^\dagger\, i\,\frac{\partial}{\partial k_z}\, \mathbf{A}\; dk_y\, dk_z}{\int\!\!\int \mathbf{A}^\dagger \mathbf{A}\; dk_y\, dk_z}, \qquad (4)$$
where $\frac{\partial}{\partial k_y}$ means the partial derivative with respect to $k_y$ with $k_z$ fixed, $\frac{\partial}{\partial k_z}$ means the partial derivative with respect to $k_z$ with $k_y$ fixed, and the superscript $\dagger$ stands for transpose conjugate.
Since the Fresnel formula for the amplitude reflection coefficient at a dielectric interface depends on whether the incident plane wave is in s or p polarization, it is profitable to represent the vector amplitude of the angular spectrum in terms of its s and p polarized components. To this end, let us first consider one plane-wave element of the angular spectrum whose wave vector is $\mathbf{k}_0 = (k\cos\theta\ \ k\sin\theta\ \ 0)^T$, where $\theta$ is its incidence angle. Its vector amplitude is given by $\mathbf{A}_0 = A_s \mathbf{s}_0 + A_p \mathbf{p}_0$, where $A_s$ and $A_p$ are the complex amplitudes of $\mathbf{A}_{0s}$ and $\mathbf{A}_{0p}$, respectively, $\mathbf{s}_0 = (0\ 0\ 1)^T$ is the unit vector of $\mathbf{A}_{0s}$ and is perpendicular to the plane $xoy$, and $\mathbf{p}_0 = (-\sin\theta\ \ \cos\theta\ \ 0)^T$ is the unit vector of $\mathbf{A}_{0p}$ and is parallel to the plane $xoy$. This means that $\mathbf{A}_0$ can be represented as $\mathbf{A}_0 = P_0\tilde{A}$, where $\tilde{A} = (A_s\ A_p)^T$ is what we introduce in this paper and is referred to as the 2-form amplitude of the angular spectrum. After this element is rotated by angle $\phi$ around the $x$-axis as displayed in Fig. 1, its wave vector becomes $\mathbf{k} = (k\cos\theta\ \ k\sin\theta\cos\phi\ \ k\sin\theta\sin\phi)^T$, and its vector amplitude becomes
$$\mathbf{A} = P\tilde{A}, \qquad (6)$$
where the matrix $P = (\mathbf{s}\ \ \mathbf{p})$, whose columns are the correspondingly rotated unit vectors $\mathbf{s}$ and $\mathbf{p}$, represents the projection of the 2-form amplitude $\tilde{A}$ onto the vector amplitude $\mathbf{A}$ and is thus referred to as the projection matrix. Now we have successfully represented, through the projection matrix, the vector amplitude $\mathbf{A}$ in terms of the 2-form amplitude $\tilde{A}$, that is to say, in terms of the 2 orthogonal linear polarizations $A_s$ and $A_p$. It should be pointed out that in this representation, $\mathbf{s}$ is not necessarily perpendicular to the plane $xoy$, and $\mathbf{p}$ is not necessarily parallel to this plane.
Denoting $\mathbf{k}_r = k_r\mathbf{e}_r = k_y\mathbf{e}_y + k_z\mathbf{e}_z$, where $k_y = k_r\cos\phi$, $k_z = k_r\sin\phi$, $\mathbf{e}_y$ and $\mathbf{e}_z$ are the unit vectors in the $y$ and $z$ directions, respectively, and $\mathbf{e}_r$ is the unit vector in the radial direction, we find that $\mathbf{s}$ is in fact the unit vector in the azimuthal direction, $\mathbf{s} = \mathbf{e}_\phi$. Furthermore, letting $\mathbf{p}_r = p_y\mathbf{e}_y + p_z\mathbf{e}_z$, it is apparent that $\mathbf{p}_r = \frac{k_x}{k}\mathbf{e}_r$. In other words, $\mathbf{p}_r$ is in the radial direction. The directions of $\mathbf{s}$ and $\mathbf{p}_r$ are schematically shown in Fig. 2. The unit vectors $\mathbf{s}$ and $\mathbf{p}$ and the wave vector $\mathbf{k}$ are orthogonal to each other and thus satisfy the relations
$$\mathbf{s}\cdot\mathbf{s} = \mathbf{p}\cdot\mathbf{p} = 1,\qquad \mathbf{s}\cdot\mathbf{p} = 0,\qquad \mathbf{s}\cdot\mathbf{k} = \mathbf{p}\cdot\mathbf{k} = 0. \qquad (7)$$
The first 3 equations guarantee $P^T P = I_2$, the $2\times 2$ unit matrix. From expression (6) for the vector amplitude and with the help of Eq. (7), we obtain Eqs. (8) and (9), the centroid coordinates expressed directly in terms of the 2-form amplitude.
III. DESCRIPTION OF INCIDENT AND REFLECTED BEAMS
Without loss of generality, we consider an arbitrarily polarized incident beam of the 2-form amplitude
$$\tilde{A}_i(k_y,k_z) = A(k_y,k_z)\,\tilde{L}_i, \qquad (11)$$
where $\tilde{L}_i = l_{i1}\tilde{s} + l_{i2}\tilde{p}$ describes the polarization state of the beam and is assumed to satisfy the normalization condition $\tilde{L}_i^\dagger\tilde{L}_i = 1$ (12). The angular distribution function $A(k_y, k_z)$ is assumed to be a positively-definite, sharply-peaked symmetric function around the principal axis $(k_{y0}, k_{z0}) = (k\sin\theta_0, 0)$ and to satisfy the normalization condition $\int\!\!\int |A|^2\, dk_y\, dk_z = 1$ (13), where $\theta_0$ stands for the incidence angle of the beam. Eqs. (12) and (13) guarantee the following normalization condition for the 2-form amplitude (11): $\int\!\!\int \tilde{A}_i^\dagger\tilde{A}_i\, dk_y\, dk_z = 1$ (14). One example of a distribution function that satisfies normalization condition (13) is the Gaussian function of Refs. [16,18] (a reconstruction is sketched below), where $w_y = w_0/\cos\theta_0$, $w_z = w_0$, $w_0$ is half the width of the beam at its waist, and $\Delta\theta = \frac{1}{k w_0}$ is half the divergence angle of the beam.
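For concreteness, a Gaussian of the following form is consistent with the stated widths and with normalization condition (13); since the exact prefactor of Refs. [16,18] is not reproduced in the extracted text, this should be read as a reconstruction under that normalization assumption:
$$A(k_y,k_z)=\left(\frac{w_y w_z}{2\pi}\right)^{1/2}\exp\!\left[-\frac{w_y^{2}(k_y-k_{y0})^{2}}{4}-\frac{w_z^{2}(k_z-k_{z0})^{2}}{4}\right],\qquad \int\!\!\int |A|^{2}\,dk_y\,dk_z=1. \qquad (15)$$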
According to Eq. (6), the vector amplitude of the incident beam is given by $\mathbf{A}_i = P\tilde{A}_i$. For a uniformly polarized beam, such as one obtained from a linearly polarized beam in experiments [19,20,21], the s components of all its plane-wave elements are in the same direction, and likewise for the p components. But in the representation advanced here, the s polarizations of different plane-wave elements are generally in different directions; so are the p polarizations.
Considering Eqs. (6), (11) and (15) together, one concludes that in order to describe a uniformly polarized beam mentioned above, it is essential that the incidence angle $\theta_0$ be much larger than $\Delta\theta$. So we will only consider the case of large $\theta_0$ below. Fortunately, this is just what we have in the case of total reflection. It will be convenient to express $\tilde{L}_i$ in the orthogonal complete set of circular polarizations as
$$\tilde{L}_i = c_{i1}\tilde{r} + c_{i2}\tilde{l}, \qquad (16)$$
where $c_{i1}$ represents the complex amplitude of right circular polarization, $c_{i2}$ represents the complex amplitude of left circular polarization, and $\tilde{r}$ and $\tilde{l}$, obtained from $\tilde{s}$ and $\tilde{p}$ by a unitary transformation $U$, form the orthogonal complete set of circular polarizations. The unitarity of the transformation guarantees $\tilde{L}_i^\dagger\tilde{L}_i = \tilde{C}_i^\dagger\tilde{C}_i$. When the beam is reflected at the plane $x = 0$, the reflected beam has a 2-form amplitude $\tilde{A}_r$ in which the polarization state of the reflected beam is described by $\tilde{L}_r$ (18), where $R_s \equiv |R_s|\exp(i\Phi_s)$ and $R_p \equiv |R_p|\exp(i\Phi_p)$ are the reflection coefficients for s and p polarizations, respectively. It will likewise be convenient to express $\tilde{L}_r$ in the circular-polarization basis, with $c_{r1}$ and $c_{r2}$ the complex amplitudes of right and left circular polarization of the reflected beam. Since $R_s$ and $R_p$ are both even functions of $k_z$, applying Eqs. (2), (8), and (9) to $\tilde{A}_r$ yields the $y$ coordinate $y_r$ of the centroid of the reflected beam on the plane $x = 0$ (20), in which $R$ describes the reflectivity of the 3D beam. The displacement of $y_r$ from $y_i$ is the GH effect and is thus given by Eq. (21). It is obviously quantized, with eigenstates the s and p polarization states; the eigenvalues are the corresponding displacements for $j = s, p$. When the angular distribution function $A(k_y, k_z)$ is so sharp that $\frac{\partial\Phi_s}{\partial k_y}$ and $\frac{\partial\Phi_p}{\partial k_y}$ are approximately constant in the area in which $A$ is appreciable, we arrive at the Artmann formula [2],
$$D_j = -\left.\frac{\partial\Phi_j}{\partial k_y}\right|_{(k_{y0},\,k_{z0})}, \qquad j = s, p. \qquad (22)$$
It is now clear that the quantization description of the GH displacement depends closely on the 2-form representation of the angular spectrum.
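As a numerical illustration of the Artmann formula (22), the sketch below evaluates the s-polarization GH shift in total internal reflection from the standard Fresnel phase $\Phi_s = -2\arctan[\sqrt{n_1^2\sin^2\theta - n_2^2}/(n_1\cos\theta)]$ by finite-difference differentiation with respect to $k_y = n_1 k_0\sin\theta$. The glass-air parameters and the wavelength are our own assumptions, not values from the paper.

```python
import numpy as np

def phi_s(theta, n1=1.5, n2=1.0):
    """Phase of the s-polarization reflection coefficient in total internal
    reflection at a dielectric interface (standard Fresnel result)."""
    kappa = np.sqrt(n1**2 * np.sin(theta)**2 - n2**2)  # ~ evanescent decay constant
    return -2.0 * np.arctan(kappa / (n1 * np.cos(theta)))

def gh_shift_s(theta0, wavelength=632.8e-9, n1=1.5, n2=1.0):
    """Artmann formula D = -dPhi_s/dk_y, with k_y = n1 * k0 * sin(theta)."""
    k0 = 2.0 * np.pi / wavelength
    dtheta = 1e-7
    dphi = (phi_s(theta0 + dtheta, n1, n2) - phi_s(theta0 - dtheta, n1, n2)) / (2 * dtheta)
    dky_dtheta = n1 * k0 * np.cos(theta0)  # chain rule: dk_y/dtheta
    return -dphi / dky_dtheta

theta0 = np.deg2rad(45.0)  # above the critical angle arcsin(1/1.5) ~ 41.8 deg
print(f"GH shift: {gh_shift_s(theta0) * 1e9:.1f} nm")  # of order the wavelength
```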
A. Total reflection
When the beam is totally reflected, the reflection coefficients take the form of Eq. (23), with $|R_s| = |R_p| = 1$ and hence $R = 1$. Substituting Eq. (23) into Eq. (21), we obtain the GH displacement in total reflection. If $\frac{\partial\Phi_s}{\partial k_y}$ and $\frac{\partial\Phi_p}{\partial k_y}$ are approximately constant in the area in which $A$ is appreciable, the reflected beam maintains the shape of the incident beam [22] and the GH displacement takes a form that leads naturally to the Artmann formula (22) for s or p polarization and agrees well with recent experimental results [19,20].
B. Partial reflection and generalized GH displacement
When the beam is partially reflected, the reflected beam is also displaced from $y_i$ to $y_r$ in the $y$-direction. This is the so-called generalized GH displacement [7] and is given by Eq. (21). Such generalized GH displacements may also occur in attenuated total reflection [10], amplified total reflection [23], and in reflections from absorptive [24] and active [25] media.
If $\frac{\partial\Phi_s}{\partial k_y}$ and $\frac{\partial\Phi_p}{\partial k_y}$ are approximately constant in the area in which $A(k_y, k_z)$ is appreciable, Eq. (21) reduces to a form that also leads to the Artmann formula (22) for s or p polarized beams. Turning to the transverse coordinate of the incident beam, Eq. (24) shows that $z_i$ does not vanish and is quantized with eigenstates the 2 circular polarizations. The eigenvalues are the same in magnitude and opposite in direction. For the Gaussian distribution function (15), we have [26] $z_i^c \approx \frac{1}{k\tan\theta_0}$ at large incidence angle, $\theta_0 \gg \Delta\theta$.
The non-vanishing transverse displacement of the incident beam from the plane $z = 0$ is in fact evidence of the so-called translational inertial spin effect of light that was once discussed by Beauregard [27]. Beauregard found that although the transverse wave vector of a 2-dimensional beam is identically zero, the 2 circular polarizations have a non-vanishing transverse Poynting vector, and called this phenomenon the translational inertial spin effect.
The problem is that the electromagnetic field of such a 2-dimensional beam is uniform in the transverse direction, $\frac{\partial}{\partial z} = 0$. In order to observe this effect, it is necessary to have a bound beam that is not transversely uniform, provided that the expectation of the transverse wave vector is zero. The 3D beam that we consider here is such a beam, satisfying $\langle k_z\rangle = \int\!\!\int \mathbf{A}^\dagger k_z \mathbf{A}\, dk_y\, dk_z = \int\!\!\int \tilde{A}^\dagger k_z \tilde{A}\, dk_y\, dk_z = 0$. For example, when $\theta_0 = 10°$, we have $z_i^c \approx 0.9\lambda$. This displacement has been confirmed by numerical calculation of the field intensity distribution $|\Psi(0, z)|^2$ on the $z$-axis, as shown in Fig. 3, where the Gaussian distribution function (15) is used. Applying the centroid definition and Eq. (10) to the reflected 2-form amplitude, with the help of Eq. (18), gives the transverse displacement of the reflected beam from the plane $z = 0$. The so-defined displacement is the IF effect [4,14,19,20] and is given by Eq. (25). This shows that the IF displacement of the reflected beam is quantized with eigenstates the 2 circular polarizations. The eigenvalues are the same in magnitude and opposite in direction. Eqs. (24) and (25) indicate that the quantization description of the IF displacement depends closely on the 2-form representation of the angular spectrum.
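The large-angle estimate $z_i^c \approx 1/(k\tan\theta_0)$ can be checked directly against the value quoted above for $\theta_0 = 10°$; with $k = 2\pi/\lambda$, the short script below reproduces $z_i^c \approx 0.9\lambda$.

```python
import numpy as np

theta0 = np.deg2rad(10.0)
# z_i^c / lambda = 1 / (2*pi*tan(theta0)), from z_i^c ~ 1/(k tan(theta0))
z_over_lambda = 1.0 / (2.0 * np.pi * np.tan(theta0))
print(f"z_i^c ~ {z_over_lambda:.2f} lambda")  # prints ~0.90, matching the text
```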
In order to compare with the recent experimental results [19,20,21], we consider an incident beam that has an elliptical polarization, parameterized by angles $\psi$ and $\phi$ with $0 \le \psi \le \pi/2$, together with the Gaussian distribution function (15). In this case, the IF displacement of the totally reflected beam is given by Eq. (27). Since Eq. (27) holds whether the beam is totally reflected by a single dielectric interface [19] or by a thin dielectric film in a resonance configuration [20], it is no wonder that the observed IF displacement in the resonance configuration [20] is not enhanced in the way that the lateral GH displacement is enhanced.
If the total reflection takes place at a single dielectric interface and the incidence angle is far away from both the critical angle for total reflection and the angle of grazing incidence in comparison with $\Delta\theta$, the first and last factors of the integrand in Eq. (27) can be regarded as constants for a well-collimated beam [22] and thus can be taken out of the integral, with $k_y$, $k_z$, $\Phi_s$, and $\Phi_p$ evaluated at $k_y = k_{y0}$ and $k_z = k_{z0}$, producing Eq. (28). This shows that for given $\theta_0$, the magnitude of $D_{IF}$ is maximum for circularly polarized reflected beams ($\psi = \pi/4$ and $\phi + \Phi_{s0} - \Phi_{p0} = (m + 1/2)\pi$). It also shows that the non-vanishing IF displacement for the case of oblique linear polarization of the incident beam ($\phi = m\pi$) [4] results from the different phase shifts between s and p polarizations in total reflection. The incidence-angle dependence $\sim 1/\tan\theta_0$ is consistent with the recent experimental result [21]. Since $\theta_0$ is larger than the critical angle for total reflection, it is no wonder that the IF displacement is of the order of $\lambda_0/2\pi$ [4,19,20].
VI. CONCLUDING REMARKS
We have advanced a unified theory for the GH and IF effects by representing the vector angular spectrum of a 3D light beam in terms of a 2-form angular spectrum consisting of the s and p polarized components. The 2-form amplitude of the angular spectrum describes the polarization state of a beam in such a way that the GH displacement is quantized with eigenstates the 2 orthogonal linear polarizations, and the IF displacement is quantized with eigenstates the 2 orthogonal circular polarizations. We have also derived the Artmann formula for the GH displacement and found observable evidence of the so-called translational inertial spin effect that was discussed more than 40 years ago [27]. It was shown that the IF displacement is in fact the translational inertial spin effect occurring in the totally reflected beam.
In the 2-form representation of a bound beam presented here, only a large incidence angle $\theta_0$ in the angular distribution function $A(k_y, k_z)$ corresponds to the uniformly polarized beams [19,20,21]. When $\theta_0$ is very small, especially when $\theta_0 = 0$, this representation gives quite different beams with peculiar polarization distributions, which need further investigation. | 2019-04-14T03:15:55.411Z | 2006-11-06T00:00:00.000 | {
"year": 2006,
"sha1": "e81ac40fd6fc74bde691ae3b79965a1239f67043",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/physics/0611047",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "52d84e7447cb2b65c1613fc74d49e43b692a63d3",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
266279162 | pes2o/s2orc | v3-fos-license | Navigating rice seedling cold resilience: QTL mapping in two inbred line populations and the search for genes
Due to global climate change resulting in extreme temperature fluctuations, it becomes increasingly necessary to explore the natural genetic variation in model crops such as rice to facilitate the breeding of climate-resilient cultivars. To uncover genomic regions in rice involved in managing cold stress tolerance responses and to identify associated cold tolerance genes, two inbred line populations developed from crosses between cold-tolerant and cold-sensitive parents were used for quantitative trait locus (QTL) mapping of two traits: degree of membrane damage after 1 week of cold exposure quantified as percent electrolyte leakage (EL) and percent low-temperature seedling survivability (LTSS) after 1 week of recovery growth. This revealed four EL QTL and 12 LTSS QTL, all overlapping with larger QTL regions previously uncovered by genome-wide association study (GWAS) mapping approaches. Within the QTL regions, 25 cold-tolerant candidate genes were identified based on genomic differences between the cold-tolerant and cold-sensitive parents. Of those genes, 20% coded for receptor-like kinases potentially involved in signal transduction of cold tolerance responses; 16% coded for transcription factors or factors potentially involved in regulating cold tolerance response effector genes; and 64% coded for protein chaperons or enzymes potentially serving as cold tolerance effector proteins. Most of the 25 genes were cold temperature regulated and had deleterious nucleotide variants in the cold-sensitive parent, which might contribute to its cold-sensitive phenotype.
Introduction
Rice is one of the most important crops and, due to its subtropical origin, is generally sensitive to even short exposures to cold stress at all stages of development (da Cruz et al., 2013; Li et al., 2022). Because of global climate change, recent weather extremes, such as colder winters affecting early spring rice planting, have become increasingly common (Johnson et al., 2018). Moreover, to either expand rice cultivation into higher latitudes or altitudes where air temperatures can fluctuate widely during the early growing season, it is important to elucidate rice cold stress tolerance response mechanisms that allow young rice seedlings to survive several days of exposure to chilling temperatures and resume growth and development during recovery from stress. Interestingly, because of the way rice was domesticated and spread by humans, this generally cold-sensitive crop has a wide range of temperature adaptation that is structured into subpopulations, and variation for cold tolerance within subpopulations may act through very different mechanisms (Li et al., 2022). These mechanisms can be studied genetically because there is considerable natural variation in cultivated self-pollinating plants such as rice. Previous studies by us and others showed the Japonica varietal group (VG), composed of temperate japonica (TEJ) and tropical japonica (TRJ) accessions, is considerably more cold tolerant than the Indica VG, composed of aus (AUS) and indica (IND) accessions, which allowed us to map various cold tolerance trait QTL using genome-wide association study (GWAS) mapping approaches (Schläppi et al., 2017; Shakiba et al., 2017; Shimoyama et al., 2020; Phan and Schläppi, 2021; reviewed in Li et al., 2022). Since many of these QTL cover relatively large genomic regions containing many genes, a recurrent challenge is to identify cold tolerance candidate genes and ultimately the causal genes and their alleles associated with those QTL.
The purpose of this study was to fine-map cold tolerance trait QTL using bi-parental mapping populations to identify smaller genomic regions that overlap with the larger QTL regions we previously identified through GWAS, thus allowing us to narrow down cold tolerance candidate genes within those smaller regions. Based on our previous results, we selected two robustly cold-tolerant and cold-sensitive parents to generate recombinant inbred lines (RILs) and backcross inbred lines (BILs). The two populations were used to map agricultural traits as "quality control" to assess whether known heading date and plant height QTL and the associated genes could be uncovered. Young RIL and BIL seedlings were then exposed for 1 week to a chilling temperature of 10°C, after which the degree of membrane damage in the leaves of exposed plants was measured, while seedling survival after 1 week of recovery growth was determined. Our results validated the two mapping populations as suitable for QTL mapping, and 16 cold tolerance trait QTL were uncovered, all of which overlapped with at least one of the QTL we found by GWAS mapping. The genomic regions delineated by those QTL contained at least 25 candidate genes with polymorphisms between the cold-tolerant and cold-sensitive parents, and their potential contributions to rice cold tolerance are discussed.
Rice materials and inbred line production
The three accessions used to generate inbred lines are part of the rice mini-core collection (RMC). Seeds were obtained from the Genetic Stocks-Oryza (GSOR) collection located at the USDA-ARS Dale Bumpers National Rice Research Center (DBNRRC; Stuttgart, Arkansas, USA; https://www.ars.usda.gov/GSOR).
To generate recombinant inbred lines (RILs), the cold-tolerant temperate japonica accession Krasnodarskij 3352 (GSOR311787; originally from the Krasnodar region of Western Russia) was crossed as female with the cold-sensitive aus accession Carolino 164 (GSOR311654; originally from the Chad region of Africa) to generate F1 seeds. The F2 seed produced from 22 F1 plants was advanced by single-seed descent to the F8 generation, with 12 RILs only advanced to F7, for a final population of 140 RILs. Leaf tissue was collected from the 140 F7:8 RILs, and F8:9 seed from 134 (or fewer, if limited seed) of these RILs was used to grow the seedlings for phenotyping.
To generate backcross inbred lines (BILs), the cold-tolerant temperate japonica accession WIR 911 (GSOR311685; originally from the Primorsky Krai region of Eastern Russia) was crossed as female with Carolino 164, and the resulting F1 was backcrossed as female with Carolino 164, producing 93 BC1F1 progeny that were advanced by single-seed descent to the BC1F5 generation, with three BILs only advanced to BC1F4. Leaf tissue from these 93 BC1F4:5 BILs was used for genotyping, and BC1F5:6 seed from 92 (or fewer, if limited seed) of these BILs was used to grow the seedlings for phenotyping.
Germination and standard growth conditions
For phenotyping, seeds of the three parents, F8:9 RILs, and BC1F5:6 BILs were germinated in the dark for 2 days at 37°C in deionized water containing 0.1% bleach to prevent bacterial contamination. Germinating seeds were transferred into PCR strips, placed into pipette tip boxes, and grown hydroponically in deionized water for 10 days in a growth chamber under 12-h light (approximately 150 µE photon flux)/12-h dark and 28°C day/25°C night cycles. To provide nutrients, on day 10 the water was replaced with ¼ Murashige-Skoog basal salt liquid medium. Each line was represented by up to eight plants per box in quadruplicate, for up to 32 plants (four boxes of eight plants) per experiment. The four boxes were randomly arranged within the growth chamber. Each box contained 11 strips of RILs or BILs and one strip with the parents as controls. For the RILs, there were four Carolino 164 seedlings, the cold-sensitive control, and four Krasnodarskij 3352 seedlings, the cold-tolerant control. For the BILs, the cold-tolerant parent WIR 911 was used as the control.
Chilling stress treatment
The four boxes containing 2-week-old seedlings at the 2-leaf stage were placed at random positions within a growth chamber set at a constant 10°C ± 1°C and incubated for 7 days (12-h light/12-h dark cycles). Watering was done every other day. At the end of the treatment, approximately 1 cm of the middle segment of the second leaf of three seedlings per line was collected for electrolyte leakage (EL) assays, and the boxes were returned to standard growth conditions for a recovery period of 7 days, after which low-temperature seedling survivability (LTSS) was recorded (details below).
Phenotyping of BIL and RIL populations
Agronomic and cold tolerance trait data for the RIL population are summarized in Supplementary Table S1. Data for the BIL population are summarized in Supplementary Table S2.
Heading date
Seeds of the F8:9 RILs and BC1F5:6 BILs were germinated and grown under standard growth conditions to generate 14-day-old seedlings. During May 2021, three healthy seedlings per line were transplanted in "hills" into 1.22-m × 1.22-m raised bed paddies on a rooftop at Marquette University in Milwaukee, Wisconsin, 100 lines per paddy (10 rows × 10 columns; 10 cm ± 1 cm distance between seedling "hills"). The heading date was recorded as the number of days from germination to panicle emergence.
Plant height
The height of the main shoot of each RIL or BIL was measured in cm from the soil level to the tip of the mature panicle.
Electrolyte leakage
At the end of the 7-day 10°C stress period, approximately 1 cm of the middle section of the second leaf from three individual seedlings per RIL, BIL, or control per box was collected and cut into three equally sized segments. The pieces were washed in deionized water and transferred into three different screw-cap glass tubes filled with 5 mL of deionized water (conductivity ≤ 2 µS/cm), then shaken at 200 rpm for 60 min at room temperature to release cellular electrolytes from low-temperature-damaged tissues. The initial conductivity of the three replicates per box (a total of 12 replicates across the four randomly distributed boxes) was measured by taking 120 µL of the solution and adding it to the well of a hand-held LAQUAtwin B-771 conductivity meter (Horiba Scientific, Japan). Leaf samples were boiled for 20 min after the initial reading to release total cellular electrolytes. Samples were shaken again at 200 rpm for 30 min after cooling to room temperature, and the final conductivity reading was taken. The mean percent EL was determined as [(initial conductivity reading)/(final conductivity reading)] × 100.
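A minimal sketch of the percent EL computation described above (the function, variable names, and example readings are ours, for illustration only):

```python
def percent_el(initial_uS, final_uS):
    """Percent electrolyte leakage from the initial (pre-boil) and final
    (post-boil, total electrolytes) conductivity readings in uS/cm."""
    return (initial_uS / final_uS) * 100.0

# Mean across replicates for one line; readings are made-up example values.
readings = [(35.0, 410.0), (42.0, 395.0), (38.0, 402.0)]
mean_el = sum(percent_el(i, f) for i, f in readings) / len(readings)
print(f"mean percent EL = {mean_el:.1f}")
```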
Low-temperature seedling survivability
At the end of the 7-day 10°C stress period, seedlings were allowed to recover under standard growth conditions for one week, after which seedling survival was determined by visual inspection. Seedlings that were mostly green and stiff were scored as alive, while seedlings that were mostly wilted and/or bleached and soft were scored as dead. The mean percent survivability was calculated as [(number of seedlings scored as alive)/(total number of stressed plants)] × 100.
Statistical analysis of traits
QTL analysis requires a phenotype value for every RIL or BIL line in the mapping population. Line means were calculated for the heading date and plant height traits. For the EL and percent LTSS traits, these values were calculated from a linear mixed model (LMM) to obtain the best linear unbiased predictions (BLUPs) for each line. The LMM used to calculate BLUPs for the EL and LTSS traits assumed an augmented design with a fixed-effect variable (group) indicating whether the line is a control or a RIL (or BIL), a random-effect variable for line nested in group, and random-effect variables for set (growth chamber experiment date) and box. In addition to treating LTSS as a percent value, LTSS was used in a survival analysis with a binomial generalized linear mixed model (GLMM) and logit link function to predict the probability of survival under cold treatment for each line on the log-odds scale, because the percent EL and LTSS data were not normally distributed (Supplementary Figures S1C, D). The binomial GLMM used the same fixed and random variables as the LMM used for EL and percent LTSS.
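As a rough illustration of extracting line-level BLUPs, the sketch below fits a simplified mixed model with statsmodels. It keeps only the random intercept per line; the set and box random effects of the full augmented-design LMM are omitted because statsmodels does not handle crossed random effects as directly as lme4 in R, so this is an approximation of the model described above, and the input file name and column names are assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns: line (RIL/BIL or control id), group ("control"/"progeny"),
# EL (percent electrolyte leakage per replicate). File name is hypothetical.
df = pd.read_csv("el_phenotypes.csv")

# Fixed effect for group, random intercept per line (simplified model).
model = smf.mixedlm("EL ~ group", data=df, groups=df["line"])
fit = model.fit()

# The per-line BLUPs are the predicted random intercepts.
blups = {line: eff["Group"] for line, eff in fit.random_effects.items()}
print(sorted(blups.items())[:5])
```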
Genotyping of recombinant inbred lines and QTL mapping
Leaf tissue was harvested from the parents and an individual representative F7:8 RIL and BC1F4:5 plant and lyophilized. Genotyping was done using the Cornell-IR LD Rice Array (C7AIR) containing 7,098 SNP markers (Morales et al., 2020), commercially available as an Illumina Infinium array. For DNA extraction and genotyping, lyophilized leaf tissue was sent to Eurofins Diagnostics, Inc. (www.eurofinsgenomics.eu/en/genotyping-gene-expression/service-platforms/illuminaarrayplatforms/).
GWAS analysis of the RDP1 and RMC
An earlier GWAS study by Shimoyama et al. (2020) phenotyped 354 accessions included in the Rice Diversity Panel 1 (RDP1) for EL and LTSS. Taking advantage of the increased number of SNPs available through imputation of the RDP1 genotypes (Wang et al., 2018), the phenotypic data from Shimoyama et al. (2020) were rerun with the imputed genotypes using the mixed linear model in Tassel V 5.0 (Bradbury et al., 2007). The traits examined were EL8, EL10, EL12, LTSS8, LTSS10, and LTSS12, with the numbers 8, 10, and 12 representing the temperature at which the plants were grown: 8°C, 10°C, and 12°C, respectively. To more effectively capture the genetic diversity, seven panels based on subpopulation(s) were used. All panels were filtered to exclude heterozygous markers and markers with a minor allele frequency of less than 5%. The first panel, "353," was composed of 353 accessions with 2,721,379 SNPs and contained all five rice subpopulations: indica (IND), aus (AUS), aromatic (ARO), tropical japonica (TRJ), and temperate japonica (TEJ), plus the admixtures. The subpopulation panels included IND (72 accessions) with 1,637,548 SNPs, AUS (51 accessions) with 1,675,068 SNPs, TRJ (89 accessions) with 921,375 SNPs, and TEJ (86 accessions) with 569,119 SNPs. The ARO subpopulation contained only a limited number of accessions and was included with the 353 set but not as its own panel. The Indica VG panel, INDAUS, included both IND and AUS (129 accessions) with 2,389,503 SNPs, and the Japonica VG panel, JAP, included TRJ and TEJ (204 accessions) with 1,090,261 SNPs. GWAS run conditions as well as PCA and kinship generation were the same as summarized in Eizenga et al. (2023). Schläppi et al. (2017) conducted GWAS mapping of LTSS in the RMC utilizing the 157 marker (148 SSR) genotypes that were available at that time. Subsequently, the RMC was resequenced, which significantly improved the marker density (Wang et al., 2016; Huggins et al., 2019). For this study, the LTSS phenotypes from five trials (Schläppi et al., 2017) were rerun utilizing the denser SNP genotypes. Eight panels were used to capture the genetic diversity present in specific subpopulations and groups. All panels were filtered to exclude heterozygous markers and markers with a minor allele frequency of less than 5%. The panel "All Lines" contained all 173 RMC accessions genotyped with 3,224,845 SNPs. For the subpopulation panels, IND had 58 accessions genotyped with 1,883,603 SNPs, AUS had 30 accessions genotyped with 1,753,773 SNPs, TRJ had 28 accessions genotyped with 1,067,722 SNPs, and TEJ had 30 accessions genotyped with 691,001 SNPs. The ARO subpopulation contained only 11 accessions, so it was not an individual panel but was included in the "All Lines" panel and in the JAPARO panel described below. Panels of combined subpopulations were Indica (INDAUS), composed of IND and AUS with 92 accessions genotyped with 2,477,397 SNPs; Japonica (JAP), composed of TRJ and TEJ with 61 accessions genotyped with 1,092,889 SNPs; and JAPARO, composed of JAP and ARO with 77 accessions genotyped with 1,402,000 SNPs. A mixed linear model was used in Tassel V 5.0 (Bradbury et al., 2007) for GWAS. GWAS run conditions, PCA, and the kinship matrix are summarized in Eizenga et al. (2023).
GWAS results were run through Perl scripts to identify associated chromosome regions from individual SNPs or groups of physically linked SNPs (Huggins et al., 2019). Chromosome regions were set to 50 kilobases (kb) in both directions around each individual significant SNP and were extended to include nearby significant SNPs occurring within 200 kb. A Perl script was used to designate a "Peak SNP" for each region, which corresponded to the SNP with the most significant p-value found within the region.
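The region-building step (50-kb flanks around each significant SNP, merging significant SNPs within 200 kb, and reporting the most significant SNP as the region peak) can be expressed compactly. The Python sketch below mirrors our reading of the described Perl logic; it is illustrative, not the original script.

```python
def build_regions(snps, flank=50_000, merge_dist=200_000):
    """snps: list of (chrom, pos, pvalue) for significant SNPs only.
    Returns regions as (chrom, start, end, peak_pos, peak_pvalue)."""
    regions = []
    for chrom in sorted({c for c, _, _ in snps}):
        hits = sorted((p, pv) for c, p, pv in snps if c == chrom)
        cluster = [hits[0]]
        for pos, pv in hits[1:]:
            if pos - cluster[-1][0] <= merge_dist:
                cluster.append((pos, pv))   # extend region with nearby SNP
            else:
                regions.append(_close(chrom, cluster, flank))
                cluster = [(pos, pv)]
        regions.append(_close(chrom, cluster, flank))
    return regions

def _close(chrom, cluster, flank):
    peak_pos, peak_pv = min(cluster, key=lambda t: t[1])  # smallest p-value
    start = max(0, cluster[0][0] - flank)  # 50 kb upstream of first SNP
    end = cluster[-1][0] + flank           # 50 kb downstream of last SNP
    return (chrom, start, end, peak_pos, peak_pv)

# Example: two SNPs 120 kb apart merge into one region; the third is separate.
print(build_regions([("chr3", 1_000_000, 1e-8),
                     ("chr3", 1_120_000, 1e-6),
                     ("chr3", 2_000_000, 1e-7)]))
```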
Description of RIL and BIL populations
Four overall cold-tolerant (CT) TEJ accessions and four overall cold-sensitive (CS) accessions from the Indica VG, including AUS and IND accessions, were selected based on GWAS mapping of the RMC for cold tolerance (Schläppi et al., 2017). Crosses were attempted between four pairs of these CT and CS accessions. Figure 1 shows the phenotypes of a pair of CS and CT parents whose 2-week-old seedlings were exposed for 1 week to a constant chilling temperature of 10°C, after which they were allowed to recover at warm temperatures for 1 week. In the example shown, most CS seedlings did not survive; they were bleached and started to droop during the recovery period. Conversely, most of the CT seedlings survived, remained green, and resumed growth and development during the recovery period.
Due to major flowering time and sterility issues with two of the pairs, only two crosses were selected for further population development. A recombinant inbred line (RIL) mapping population was developed from CT Krasnodarskij 3352 (TEJ) crossed with CS Carolino 164 (AUS), advanced by single-seed descent to the F8:9 generation, for a final population of 140 RILs. To generate a backcross inbred line (BIL) mapping population with reduced sterility issues, WIR 911 (TEJ) was crossed with Carolino 164, and F1 plants were backcrossed as female with Carolino 164. The backcross progeny was advanced by single-seed descent to BC1F4:5, for a final population of 93 BILs. Leaf tissue from single plants of the 140 F7:8 RILs and 93 BC1F4:5 BILs was genotyped, and after curation, more than 2,500 SNPs were retained. For QTL mapping of the RIL population, 2,794 SNPs with an average of 233 SNPs per chromosome (range: 170-346 SNPs) were used (Supplementary Table S3), and for BIL population mapping, 2,590 SNPs with an average of 216 SNPs per chromosome (range: 154-324 SNPs) were used (Supplementary Table S4). For both populations, chromosome (chr.) 9 had the fewest and chr. 1 the most SNPs.
Mapping potential validation of RIL and BIL populations
To perform a "quality control" for the bi-parental mapping potential of the RIL and BIL populations, two-week-old chambergrown seedlings were transplanted into raised bed rooftop paddies at Marquette University (43°02′11.59″N/87°55′51.64″W).Heading date (HD) and plant height (PTHT) data indicated that the parents were at the extremes of each end of the normal HD distributions, with only a few "transgressive" plants, while the normal PTHT distributions showed increased transgressive variation (Supplementary Figures S1A, B; Tables 1, 2).QTL mapping of HD and PTHT QTL revealed eight qHD and 12 qPTHT with logarithm of odds (LOD) scores >3.0 (Table 3).The general locations of the RIL HD QTL, qHD3 and qHD7, and BIL HD QTL, qHD3-1, qHD3-2, qHD3-3, and qHD6 (Figure 2) matched previously obtained legacy HD QTL (Takahashi et al., 2001), which validated the RIL and BIL populations for QTL mapping.
This conclusion was further supported by plant height QTL mapping. Of the 10 PTHT QTL, two contained known genes affecting plant height (Supplementary Table S5). qPTHT1 and qPTHT5-2 uncovered the chitin-inducible gibberellin-responsive gene plant height 1 (ph1) and the gibberellin 2-beta-dioxygenase gene OsGAox4, respectively, and nonsynonymous amino acid substitutions might contribute to plant height differences between the parents (Supplementary Table S6; Kovi et al., 2011; He et al., 2022). Taken together, heading date and plant height QTL mapping using the newly developed RIL and BIL populations described here yielded several QTL with high LOD scores containing known flowering time and plant height-regulating genes, thus validating their usefulness for mapping cold tolerance QTL and narrowing down associated cold tolerance genes.
Cold tolerance trait mapping
EL assays were done on 2-week-old seedlings after 1 week of exposure to 10°C as a measure of plasma membrane integrity immediately after the cold temperature exposure. After 1 week of recovery growth, LTSS was determined as a measure of the ability of cold-stressed plants to maintain photosynthesis and resume growth and development. The chilling temperature and exposure time were chosen because, based on previous studies (Schläppi et al., 2017), the cold-tolerant parents had >90% survival while the cold-sensitive parent had approximately 10% survival, thus avoiding a scenario where most cold-sensitive RILs and BILs would have 0% survival. The RIL mapping yielded nine QTL with LOD scores of 2.7 or higher (three EL and five unique LTSS QTL; Table 4), and the BIL mapping also yielded nine QTL with LOD scores of 3.2 or higher (one EL and seven unique LTSS QTL; Table 4). Of the 16 unique cold tolerance trait QTL, 11 were found in clusters on chr. 3, chr. 6, and chr. 8, while a single QTL was found on chr. 1, chr. 4, and chr. 11 (Figure 2). Strikingly, all 16, in other words 100%, of the QTL (Supplementary Table S7) overlapped with seedling stage cold tolerance GWAS QTL discovered in the RMC or RDP1 (Schläppi et al., 2017; Shimoyama et al., 2020; Phan and Schläppi, 2021). Of note, for this study, GWAS QTL for the RMC were mapped more precisely using the phenotypes reported by Schläppi et al. (2017) and genotypes based on 3,224,845 SNPs (Wang et al., 2016; Huggins et al., 2019). Similarly, the precision of the GWAS mapping in the RDP1 was improved using the seedling stage cold tolerance phenotypes reported by Shimoyama et al. (2020) and genotypes based on 4,829,392 SNPs (Wang et al., 2018). These 16 overlapping QTL were analyzed to identify cold tolerance genes within the QTL regions.
The EL and LTSS QTL identified using the biparental RIL and BIL populations were smaller than those found by GWAS mapping in the RMC and RDP1, thus reducing the number of genes to investigate (Table 5; Schläppi et al., 2017; Shimoyama et al., 2020; Phan and Schläppi, 2021). Therefore, an analysis of genomic variation in non-transposable-element genes between the CT and CS parents using a public database (Zhao et al., 2015), together with literature searches, was done to narrow down cold tolerance candidate genes within the QTL regions plus 100,000 bp to the left and right of the flanking marker SNPs, which uncovered 25 cold tolerance candidate genes in 15 QTL (Table 5). The SNP ID numbers, nucleotide changes, and SNP impacts in alleles of the cold-sensitive parent Carolino 164, such as nonsynonymous amino acid changes, frameshifts, premature stops, loss of start, loss of stop, and splice variants, are summarized in Table 6.
Five of the 25 candidate genes were associated with the EL QTL, among them: the P-type IIB Ca2+ ATPase gene OsACA6 (LOC_Os01g71240), the F-box protein gene OsFBL27 (LOC_Os06g06050), the serine esterase-encoding gene LOC_Os06g06080, the mitogen-activated protein kinase gene OsMPK6 (LOC_Os06g06090), and the pentatricopeptide repeat type restorer-of-fertility gene RF6 (LOC_Os08g01870). OsACA6 was previously shown to enhance cold tolerance in transgenic tobacco (Huda et al., 2014), and due to nucleotide variations in the promoter region, there might be differences in gene expression levels between CT and CS accessions. OsFBL27 activates OsMYB30, a negative regulator of cold tolerance, and OsFBL27 is downregulated by OsmiR528 (Tang and Thompson, 2019). The observed amino acid duplications and added nucleotides in the CS parent might enhance OsMYB30 activation due to increased OsFBL27 activity and/or lack of OsmiR528-mediated downregulation. OsMPK6 was previously shown to be cold inducible (Lee et al., 2008), and the 233 SNP variants between Carolino 164 and Krasnodarskij 3352 might affect kinase activity or regulation. Previously, other Ca2+ ATPase, F-box protein, and mitogen-activated protein kinase genes were shown to be involved in cold stress tolerance response mechanisms leading to the production of compatible osmolytes that might stabilize biological membranes (reviewed in Li et al., 2022). The candidate genes uncovered here might have similar functions, which need to be tested in future studies. RF6 might be involved in nucleotide and nucleic acid metabolic processes in the mitochondrion (Huang et al., 2015), affecting reactive oxygen species (ROS) production, while the involvement of serine esterases in cold tolerance needs to be addressed in future studies.
TABLE 4
The RIL population had 134 progeny genotyped with 2,794 SNP markers and the BIL population had 92 progeny genotyped with 2,590 SNP markers. a Quantitative trait loci (QTL) are declared based on the Inclusive Composite Interval Mapping (ICIM) approach using the IciMapping software (Meng et al., 2015) and 0.5-step, 0.005-pin ICIM_Add settings unless otherwise noted. b A positive additive effect indicates that the Krasnodarskij 3352 or WIR 911 allele increases the phenotype for that trait; a negative additive effect indicates that the Carolino 164 allele increases the trait. c PVE is the percentage of total phenotypic variation explained by an individual QTL, as estimated by R² values from ICIM analysis. d LOD score is close to the 90% confidence value of 2.774 (based on 1,000 permutations for the EL(BLUP) trait). e LOD score is >90% confidence value of 2.858 and <95% value of 3.205 (based on 1,000 permutations for the LTSS(logit) trait). f LOD score is close to the 90% confidence value of 2.876 (based on 1,000 permutations for the LTSS(percent) trait). g LOD score is close to the 95% confidence value of 3.384 (based on 1,000 permutations for the LTSS(percent) trait).
The remaining 20 candidate genes were associated with 11 LTSS QTL, as follows: The two QTL on chr. 3 (qLTSS3-1 and qLTSS3-2) had four ankyrin-tetratricopeptide repeat (ANK-TPR) genes (OsANK; LOC_Os03g47460, LOC_Os03g47686, LOC_Os03g47693, LOC_Os03g47720) and the microsomal glutathione S-transferase gene OsGST3 (LOC_Os03g50130). The single QTL on chr. 4 (qLTSS4) had the two wall-associated kinase genes OsWAK38 (LOC_Os04g29680) and OsWAK40 (LOC_Os04g29790). The four QTL on chr. 6 (qLTSS6-1, qLTSS6-2, qLTSS6-3, qLTSS6-4) had eight candidate genes: the receptor-like kinase gene OsRLK-803 (LOC_Os06g04370); the aminomethyltransferase gene LOC_Os06g04380; the histone chaperone gene OsNAPL2 (LOC_Os06g05660); the transferase domain-containing protein gene LOC_Os06g05790; the two peptidyl-prolyl cis/trans isomerase genes OsFKBP13 (LOC_Os06g45340) and CYP59A (LOC_Os06g45900); and the NAC transcription factor gene OsNAC011 (LOC_Os06g46270) and the glycosyl hydrolase family 31 gene LOC_Os06g46284. The three QTL on chr. 8 (qLTSS8, qLTSS8-1, qLTSS8-2) had the transcription factor gene OsbHLH070 (LOC_Os08g08160), the protease inhibitor/seed storage/LTP family protein precursor gene LTPL130 (LOC_Os08g27674), and the receptor-like cytoplasmic kinase gene OsRLCK253/CRINKLY4 (LOC_Os08g28710). The single QTL on chr. 11 (qLTSS11) had the F-box gene OsFBX398 (LOC_Os11g09360) and the protein disulfide isomerase gene OsPDIL1-1 (LOC_Os11g09280). These 20 genes can be categorized into three general groups: (1) genes with kinase activity potentially involved in cold-mediated signal transduction; (2) genes that potentially regulate gene expression; and (3) genes that code for potential cold tolerance effectors such as chaperones or enzymes. Many of the 20 cold tolerance candidate genes were previously shown to be involved in stress tolerance responses. Specifically, in the kinase category, OsWAK38 and OsWAK40 upregulation was shown to be associated with salt tolerance (Wang et al., 2021), and OsRLCK253 was previously shown to be the only candidate gene associated with QTL_19 (Le et al., 2021) because of its involvement in water-deficit improvement and salinity stress tolerance (Giri et al., 2011). In the gene expression category, OsNAC011 was shown to regulate drought tolerance in rice (Fang et al., 2014), and OsFBX398 was previously identified as a candidate gene selected during the breeding of rice for adaptation to different environments in Vietnam (Higgins et al., 2022). In the potential cold tolerance effector category, chaperone-type ANK genes were shown to have numerous functions in plants (Huang et al., 2009), including roles in responses to abiotic and biotic stresses, and histone chaperone NAP genes were previously shown to have a role in abiotic stress responses (Tripathi et al., 2016). For enzyme-encoding genes, OsGST3 was specifically shown to respond to various stress hormones (Chen et al., 2023), and OsPDIL1-1 overexpression in transgenic rice was shown to enhance abiotic stress tolerance (Xia et al., 2018), possibly by controlling the production of ROS (Zhao et al., 2023). It was also shown that the aminomethyltransferase gene LOC_Os06g04380 was differentially expressed after arsenic treatment (Norton et al., 2008), that peptidyl-prolyl cis/trans isomerase genes have various functions in response to environmental stresses (Ahn et al., 2010), and that LTPL130 was previously selected as a candidate gene located on chr. 8 at 16.86 Mb in a QTL for several root and yield-associated traits under aerobic and irrigated conditions (Padmashree et al., 2023). Besides allelic differences between CT and CS rice accessions, another criterion for cold tolerance candidate genes is that their steady-state mRNA levels are up- or downregulated in response to cold temperatures. Strikingly, a search of publicly available databases determined that of the 24 candidate genes with expression data, 22 (i.e., 91.7%) were regulated by cold temperatures (Table 7). Of those 22, eight (36.4%) were up- and 11 (50.0%) were downregulated, while three (13.6%) were either up- or downregulated in different data sources. Because genetic backgrounds, degree of cold, and lengths of cold exposure differed between data sets (Supplementary Table S8), it will be necessary to reexamine expression patterns of the cold tolerance candidate genes under the conditions we used for QTL mapping. However, it is conceivable that stably up- and downregulated genes might have positive and negative effects, respectively, in cold stress tolerance responses in rice. Such genes might be differentially regulated in CT and CS rice accessions, which, together with coding region SNP variants, would be another reflection of allelic differences between accessions with varying cold tolerance potentials.
Conclusions
In this study, we used two biparental mapping populations made from crosses between cold-tolerant and cold-sensitive parents (Figure 1) to fine-map cold tolerance QTL. We first performed "quality control" QTL analyses to demonstrate that heading date and plant height QTL contained known genes for those traits, such as Ehd4, Ghd7, Hd1, and OsGA3ox4, thus validating that the populations were suitable for cold tolerance trait QTL mapping (Table 3; Supplementary Table S5). All 16 cold tolerance QTL identified here overlapped with QTL we previously obtained through GWAS mapping approaches. This allowed us to narrow down the size of previously mapped QTL and identify 25 cold tolerance candidate genes with mostly high-impact nucleotide variants between the cold-tolerant parents (Krasnodarskij 3352 and WIR 911) and the cold-sensitive parent (Carolino 164). Interestingly, most of those genes could be assigned to modules at the seedling stage that regulate osmolytes, ROS, and growth and development, as described in a recent rice cold tolerance review (Li et al., 2022). Specifically, five genes (20%) encoded receptor-like kinases that might be involved in the signal transduction of cold stress tolerance responses. That is, OsMPK6, OsRLK-803, OsRLCK253, OsWAK38, and OsWAK40 might be part of modules regulating osmolyte and/or ROS production, together with the two (8%) transcription factor genes OsbHLH070 and OsNAC011, and the two (8%) F-box genes OsFBL27 and OsFBX398 that might regulate transcription factor activities. The other 16 (64%) are potential cold stress tolerance effector genes that code for the following proteins: three (12%) chaperones that might help maintain biological macromolecule structure during cold stress; four (16%) ANK-TPR proteins that mediate interactions with protein partners of protein complexes with potential roles in cold stress tolerance responses; and nine (36%) enzymes involved in cellular metabolism, such as ROS production and maintenance of mitochondrial and/or chloroplast integrity, or functioning as ATPases, esterases, isomerases, hydrolases, or transferases. Of those, the two peptidyl-prolyl cis/trans isomerase genes OsFKBP13 and CYP59A might belong to a new module regulating the trade-off between stress response and growth and development, as previously shown for OsCYP20-2 (Ge et al., 2020). These candidate genes can be functionally analyzed in future studies using genomics, molecular genetics, and biochemical approaches, while the QTL regions containing those genes can be used for marker-assisted breeding of cold-tolerant rice cultivars.
FIGURE 1 Phenotypes of two rice varieties with varying cold tolerance potentials. (A) Two-week-old hydroponically grown rice seedlings. The cold-sensitive control Carolino 164 (aus) is shown on the left (red seedlings), and the cold-tolerant check Krasnodarskij 3352 (temperate japonica; green seedlings) is shown on the right. (B) Phenotype after 1 week of 10°C exposure and 1 week of warm-temperature recovery growth.
TABLE 1
Summary statistics (overall mean, SE, range of progeny) of the four traits measured on the 134 Krasnodarskij 3352 × Carolino 164 progeny lines included in the quantitative trait locus analysis (Tables 3, 4) and parents.
a The SE of the mean was calculated as the SD divided by the square root of the number of entries. b Number of days from germination to panicle emergence.
TABLE 2
Summary statistics (overall mean, SE, range of progeny) of the four traits measured on the 92 (WIR 911 × Carolino 164) × Carolino 164 backcrossed progeny lines included in the quantitative trait locus analysis (Tables 3, 4) and parents.
Trait distributions are shown in Supplementary Figure S1B. a The SE of the mean was calculated as the SD divided by the square root of the number of entries. b Number of days from germination to panicle emergence.
TABLE 3 notes: The RIL population had 134 progeny genotyped with 2,794 SNP markers and the BIL population had 92 progeny genotyped with 2,590 SNP markers. a Quantitative trait loci (QTL) are declared based on the Inclusive Composite Interval Mapping (ICIM) approach using the IciMapping software (Meng et al., 2015) and 0.5-step, 0.005-pin ICIM_Add settings unless otherwise noted. b A positive additive effect indicates that the Krasnodarskij 3352 or WIR 911 allele increases the phenotype for that trait; a negative additive effect indicates that the Carolino 164 allele increases the trait. c PVE is the percentage of total phenotypic variation explained by an individual QTL, as estimated by R² values from ICIM analysis. d LOD score is close to the 90% confidence value of 3.01 (based on 1,000 permutations for the PTHT trait).
TABLE 5 Continued
The candidate genes located in the QTL regions are listed. a The Mb position of the flanking markers for the RIL or BIL QTL interval is based on the Mb position of the markers that were closest to the cM position of the QTL region as defined by the QTL analysis and listed in Table 3. b Gene nomenclature followed the standardized nomenclature for rice genes used in Oryzabase (Yamazaki et al., 2010) and the Rice Genome Annotation Project (Kawahara et al., 2013); the Symbolization, Nomenclature, and Linkage gene symbol was used, if available. c The pseudomolecule Mb position of the candidate gene in the Rice Genome Annotation Project Release 7 (Kawahara et al., 2013). d Rice Genome Annotation Project locus identifier for the candidate gene (Kawahara et al., 2013).
TABLE 6
High-impact SNP variants in the cold-sensitive aus line Carolino 164 compared to the cold-tolerant japonica line Krasnodarskij 3352.
a Variant in Krasnodarskij 3352; Carolino 164 same as the Nipponbare reference genome.b Total of 233 variants between Krasnodarskij 3352 and Carolino 164.
TABLE 7
Cold temperature-regulated gene expression of QTL-associated candidate genes. | 2023-12-16T16:20:13.449Z | 2023-12-14T00:00:00.000 | {
"year": 2023,
"sha1": "e8a29a9f7cfb7d28ea3474cad32edd4ca0746d3a",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpls.2023.1303651/pdf?isPublishedV2=False",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c3802ecf8d87f85e5480a55324a69e088a4902f6",
"s2fieldsofstudy": [
"Biology",
"Environmental Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": []
} |
246996496 | pes2o/s2orc | v3-fos-license | A phase field crystal theory of the kinematics of dislocation lines
We introduce a dislocation density tensor and derive its kinematic evolution law from a phase field description of crystal deformations in three dimensions. The phase field crystal (PFC) model is used to define the lattice distortion, including topological singularities, and the associated configurational stresses. We derive an exact expression for the velocity of a dislocation line determined by the phase field evolution, and show that dislocation motion in the PFC is driven by a Peach-Koehler force. As is well known from earlier PFC model studies, the configurational stress is not divergence free for a general field configuration. Therefore, we also present a method (PFC-MEq) to constrain the diffusive dynamics to mechanical equilibrium by adding an independent and integrable distortion so that the total resulting stress is divergence free. In the PFC-MEq model, the far-field stress agrees very well with the predictions from continuum elasticity, while the near-field stress around the dislocation core is regularized by the smooth nature of the phase field. We apply this framework to study the rate of shrinkage of a dislocation loop seeded in its glide plane.
Introduction
Plasticity in crystalline solids primarily refers to permanent deformations resulting from the nucleation, motion, and interaction of extended dislocations. Classical plasticity theories deal with the yielding of materials within continuum solid mechanics [Hill, 1998, Wu, 2004]. Deviations from elastic response are described with additional variables (e.g., the plastic strain), which effectively describe the onset of plasticity (yielding criteria), as well as the mechanical properties of plastically deformed media (e.g., work hardening). A macroscopic description of the collective behavior of dislocation ensembles is thus achieved, usually assuming homogeneous media for large systems. In crystal plasticity, inhomogeneities and anisotropies are accounted for, with the theory having been implemented as a computationally efficient finite element model [Roters et al., 2010, Pokharel et al., 2014]. These theories are largely phenomenological in nature, and rely on constitutive laws and material parameters to be determined by other methods, or extracted from experiments. They can be finely tuned, but sometimes fail in describing mesoscale effects [Rollett et al., 2015]. On the other hand, remarkable mesoscale descriptions have been developed by tracking single dislocations [Kubin et al., 1992, Bulatov et al., 1998, Sills et al., 2016, Koslowski et al., 2002, Rodney et al., 2003]. These approaches typically evolve dislocation lines through Peach-Koehler type forces while incorporating their slip system, mobilities, and dislocation reactions phenomenologically. Stress fields are described within classical elasticity theory [Anderson et al., 2017]. Since linear elasticity predicts a singular elastic field at the dislocation core, theories featuring its regularization are usually exploited. Prominent examples are the non-singular theory obtained by spreading the Burgers vector isotropically about dislocation lines [Cai et al., 2006], and the stress field regularization obtained within a strain gradient elasticity framework [Lazar and Maugin, 2005]. Plastic behavior then emerges when considering systems with many dislocations and proper statistical sampling [Devincre et al., 2008]. Still, the accuracy and predictive power of these approaches depend on how well dislocations are modeled as isolated objects. In this context, mesoscale theories that require a limited set of phenomenological inputs are instrumental in connecting macroscopic plastic behavior to microscopic features of crystalline materials.
The Phase Field Crystal (PFC) model is an alternative framework to describe the nonequilibrium evolution of defected materials at the mesoscale [Elder et al., 2002, Emmerich et al., 2012, Momeni et al., 2018]. Within the phase field crystal description, dislocations can be resolved as line objects in three dimensions down to the defect core. In the case of a point defect, it was shown in Ref. [Salvalaglio et al., 2020] that the stress field at the core agrees with the predictions of the non-singular theory of Ref. [Cai et al., 2006], and with gradient elasticity models [Lazar and Maugin, 2005, Lazar, 2017], indicating that the results obtained here can serve as benchmarks for similar theories in three dimensions. The specific example of a dislocation loop in a bcc lattice is considered, and the far-field stresses given by the ψ field are shown to coincide with predictions by continuum elasticity.

Figure 1: (a) A dislocation line C consisting of points r′ characterized by the tangent vector t′ and the Burgers vector b at that point. The difference r − r′ from a point r to a point r′ on the line can be decomposed into a 2D in-plane vector ∆r_⊥, which is the projection of r − r′ onto the plane N′ normal to t′, and a distance |∆r_∥| from this plane. In this figure, ∆r_∥·t′ = −3.47 a_0 and |∆r_⊥| = 3.42 a_0. (b) The N = 12 primary reciprocal lattice vectors {q^(n)}, n = 1, ..., 12, of length q_0 of a bcc lattice (Eq. (20)). Higher modes (dots in the figure) correspond to higher harmonics {p_n}, n > N, in the expansion of the equilibrium phase-field configuration ψ_eq, Eq. (3).
The rest of the paper is structured as follows. In Sec. 2, we introduce the theoretical method used to define topological defects from a periodic ψ-field. This allows us to define a dislocation density tensor in terms of the phase field (Eq. (12)), and obtain the dislocation line velocity (Eq. (16)). These are key results, which are applied in several examples in Sec. 3. First, we use the PFC model to numerically study the shrinkage of a dislocation loop in a bcc lattice. Then, we show analytically that Eq. (16) captures the motion of dislocations driven by a Peach-Koehler type force, and hence by a local stress. Finally, we introduce the PFC-MEq model, and compare the shrinkage of the dislocation loop under PFC and PFC-MEq dynamics. While the results are qualitatively similar for the case of a shear dislocation loop, the constraint of mechanical equilibrium causes the shrinkage to happen much faster. We finally confirm that the stress field derived from the ψ field in the PFC-MEq model agrees with that which would follow from continuum elasticity theory, with the same singular dislocation density as source, and with no adjustable parameters.
Kinematics of a dislocation line in three dimensions
Dislocations in 3D crystals are line defects, where each point r′ on the line C is characterized by the tangent vector t′ at that point and a Burgers vector b, see Figure 1(a). By introducing a local Cartesian plane N′ normal to t′, the distance of an arbitrary point r to a point r′ on C can be decomposed into an in-plane vector ∆r_⊥ ⊥ t′ and a vector ∆r_∥ ∥ t′, i.e. r − r′ = ∆r_⊥ + ∆r_∥. A deformed state can be described by a displacement field u and, in the presence of a dislocation, u is discontinuous across a surface (branch cut) spanned by the dislocation, given by

b_j = −∮_Γ du_j,    (1)

where the circuit integral picks up the jump between u^+ and u^−, the values of the displacement field at each side of the branch cut, respectively. We use the negative sign convention relating the contour integral with the Burgers vector. Here, Γ is a small circuit enclosing the dislocation line in the N′-plane, directed according to the right-hand rule with respect to t′. The dislocation density tensor associated with the line is [Lazar, 2014]

α_ij(r) = b_j ∮_C dl_i δ^(3)(r − r′) ≡ b_j δ^(2)_i(C),    (2)

where b_j is the j component of the Burgers vector of the line, and dl_i = t′_i dl is the line element in the direction of the line. δ^(2)_i(C) is a short-hand notation for the delta function, with dimension of inverse area, locating the position of the dislocation line for each component i of the dislocation density tensor. It is defined by the line integral over the dislocation line of the full delta function (which scales as inverse volume). The dislocation density tensor is defined so that ∫_N′ d²r_⊥ α_ij t′_i = b_j, where we are using the Einstein summation convention over repeated indices.
In the PFC models, a crystal state is represented by a periodic phase field ψ(r) of a given crystal symmetry. A reference crystalline lattice,

ψ_eq(r) = ψ̄ + Σ_{n=1}^{N} η_n e^{i q^(n)·r} + Σ_{n>N} η_n e^{i p_n·r},    (3)

is defined by a set of N primary (smallest) reciprocal lattice vectors {q^(n)}, n = 1, ..., N, of length q_0, and higher harmonics {p_n}, n > N, also on the reciprocal lattice but with |p_n| > q_0 (see, e.g., {q^(n)} with |q^(n)| = q_0 for a bcc lattice in Fig. 1(b)). The lattice constant of the crystal is then given by a_0 ∼ 2π/q_0. This represents a perfect crystal configuration in the absence of defects and distortion, where the average value ψ̄ and the amplitudes η_n are constants. In the phase-field crystal theory presented in Refs. [Elder et al., 2002, Elder and Grant, 2004], near the solid-liquid transition point, only the terms from the primary reciprocal lattice vectors contribute to ψ_eq, while in general, for more sharply peaked density profiles, there are also contributions from the higher order harmonics {p_n}, n > N. For a distorted crystal lattice, the mode amplitudes η_n become complex scalar fields, henceforth named complex amplitudes η_n(r), such that

ψ(r) = ψ̄ + Σ_{n=1}^{N} η_n(r) e^{i q^(n)·r} + (higher-order harmonics).    (4)

In this section, we provide an accurate description of dislocation lines as topological defects in the phase of the complex amplitudes η_n(r). We generalize the method of tracking topological defects as zeros of a complex order parameter as introduced in Refs. [Halperin, 1981] and [Mazenko, 1997], and apply it to accurately derive the kinematics of dislocation lines. Given a phase field configuration ψ(r), the complex amplitudes can be found by a demodulation as described in Appendix A.1. Decomposing each amplitude η_n(r) = ρ_n(r) e^{iθ_n(r)} into its modulus ρ_n(r) and phase θ_n(r), we have that for a perfect lattice, θ_n^(0) = 0 and ρ_n is constant. Displacing a lattice plane by a slowly varying u transforms the phase as θ_n → θ_n^(0) − q^(n)·u. Thus, the phase provides a direct measure of the displacement field u(r) relative to the reference lattice, i.e.,

θ_n(r) = −q_i^(n) u_i(r),    (5)

where q_i^(n) denotes the i-th Cartesian coordinate of q^(n). It is possible to invert Eq. (5), and solve for the displacement field u as a function of the phases θ_n and reciprocal vectors. We use the following identity, which is valid for lattices with cubic symmetry, where all primary reciprocal lattice vectors have the same length q_0 (see Appendix B),

Σ_{n=1}^{N} q_i^(n) q_j^(n) = (N q_0²/3) δ_ij,    (6)

so that the displacement u is given by

u_i(r) = −(3/(N q_0²)) Σ_{n=1}^{N} q_i^(n) θ_n(r).    (7)

Eq. (7) shows that a dislocation line, which introduces a discontinuity in the displacement field, leads to a discontinuity in the phases θ_n(r). This is the first key insight, which we illustrate in Fig. 2. By using Eq. (5) and the fact that the Burgers vector b is constant along the dislocation line, we relate the Burgers vector to the phase θ_n as

q^(n)·b = ∮_Γ dθ_n = 2π s_n,    (8)

where s_n is the (integer) winding number of the phase θ_n around the dislocation line. That s_n is an integer follows from the fact that while θ_n(r) may have a discontinuity across the branch cut, the complex amplitude η_n(r) is well-defined and continuous everywhere. Therefore the circulation of the phase must be an integer multiple of 2π. By the same reasoning, for an amplitude for which q^(n)·b ≠ 0, the phase θ_n(r) is undefined (singular) at the dislocation line, so the modulus ρ_n(r) must go to zero for η_n(r) to remain continuous. This is the second key insight, which allows us to identify the location of the dislocation line with the zeros of the complex amplitudes η_n(r).
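As a concrete illustration of this inversion, a minimal Python sketch is given below. It assumes the phases have already been demodulated and suitably unwrapped on a grid, and that the vector set has cubic symmetry with equal lengths q_0 so that Eq. (6) applies; it is an illustration, not an excerpt of the authors' implementation.

```python
import numpy as np

def displacement_from_phases(thetas, q_vectors, q0):
    """Invert theta_n = -q^(n) . u (Eq. (5)) via the identity
    sum_n q_i^(n) q_j^(n) = (N q0^2 / 3) delta_ij (Eq. (6)), giving
        u_i = -3/(N q0^2) * sum_n q_i^(n) theta_n   (Eq. (7)).
    thetas: array (N, Nx, Ny, Nz) of phases; q_vectors: array (N, 3)."""
    N = len(q_vectors)
    return -3.0 / (N * q0**2) * np.einsum('ni,n...->i...', q_vectors, thetas)
```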
The complex amplitude η_n(r) is isomorphic to a 2-component vector field Ψ(r) ≡ (Ψ_1(r), Ψ_2(r)) = (Re(η_n(r)), Im(η_n(r))). The study of how to track zeros of a vector field of any number of components, in any number of dimensions, was introduced in Ref. [Halperin, 1981]. The orientation field Ψ(r)/|Ψ(r)| is continuous wherever |Ψ(r)| ≠ 0 and supports 1D topological defects in 3 dimensions, which are located precisely where |Ψ(r)| = 0. The topological line density ρ_i of the line C, which satisfies ∫_N′ d²r_⊥ ρ_i = s_n t′_i, is given by

ρ_i = s_n δ^(2)_i(C).    (9)

Like δ^(2)_i(C), the dimension of ρ_i is that of a two-dimensional vector density. This topological charge density is expressed explicitly in terms of the real-valued positions C = {r′} of the topological defect line. Since these positions coincide with the zero-line of the vector field Ψ(r), it is possible to relate the expression to the delta-function locating the zeros of Ψ(r), through the transformation law s_n δ^(2)_i(C) = D_i(r) δ^(2)(Ψ(r)), with the determinant vector field D_i(r) = ε_ijk (∂_j Ψ_1(r))(∂_k Ψ_2(r)). Comparing this to Eq. (2), using Eq. (8) and re-expressing D_i^(n)(r) (with the added superscript n) in terms of the complex amplitude η_n(r), we end up with the central equation for tracking the evolution of the dislocation density,

s_n δ^(2)_i(C) = D_i^(n) δ^(2)(η_n),    (10)

where δ^(2)(η_n) = δ(Re(η_n)) δ(Im(η_n)) and

D^(n)(r) = ∇Re(η_n(r)) × ∇Im(η_n(r)),  i.e.,  D_i^(n)(r) = ε_ijk (∂_j Re(η_n(r)))(∂_k Im(η_n(r))).    (11)
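Numerically, Eq. (11) amounts to two gradients and a cross product. A minimal sketch, assuming a uniform grid of spacing dx and standard NumPy conventions (our choices, not the paper's code):

```python
import numpy as np

def determinant_field(eta, dx):
    """D^(n) = grad(Re eta_n) x grad(Im eta_n), Eq. (11), for a complex
    amplitude eta sampled on a uniform 3D grid with spacing dx.
    Returns an array of shape (3, Nx, Ny, Nz), directed along the local
    tangent of the zero line of eta."""
    grad_re = np.array(np.gradient(eta.real, dx))   # shape (3, Nx, Ny, Nz)
    grad_im = np.array(np.gradient(eta.imag, dx))
    return np.cross(grad_re, grad_im, axis=0)
```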
In the following, for ease of notation, we suppress the explicit positional dependence of α_ij, D_i^(n) and η_n. The dislocation line is located at η_n = 0, which is the intersection of the surfaces Re(η_n) = 0 and Im(η_n) = 0. As we see from its definition, D^(n) is perpendicular to both these surfaces and is thus directed along the tangent to the line. We can reconstruct the dislocation density tensor from an appropriate summation over the modes with singular phases, namely by multiplying Eq. (10) by q_j^(n), summing over the reciprocal modes and using Eq. (6) to arrive at

α_ij = (6π/(N q_0²)) Σ_n q_j^(n) D_i^(n) δ^(2)(η_n).    (12)

Having a closed form of the dislocation density in terms of the complex amplitudes η_n, we now turn to deriving a closed-form expression for its kinematics in terms of the time evolution of η_n. Taking the time derivative of Eq. (2), we show in Appendix C.1 that, for a dislocation density tensor described by a single loop or string, ∂_t α_ij takes the form of the curl of a current of the form J^(α)_lj = ε_lmn t′_m V_n b_j δ^(2)(∆r_⊥) (Eq. (13)), where V is a vector field defined on the string by the velocity of the line segment perpendicular to the tangent vector.
Taking the time derivative of Eq. (12), we show in Appendix C.2 that ∂_t α_ij can be written in the same curl form, now with a current J_lj built from the amplitude currents ∂_t η_n (Eqs. (14) and (15)). Note that J_lj depends on ∂_t η_n, and hence on the law governing the temporal evolution of the phase field. J^(α)_lj is the well-known expression in terms of the dislocation velocity, and J_lj is what we predict from the evolution of the phase field crystal density ψ. Under the assumption that both currents are equal, we show in the following that we are able to determine the dislocation velocity directly from the evolution of the phase field ψ at the dislocation core. We have checked numerically that the dislocation velocity predicted with this assumption is in excellent agreement with the one computed by tracking the position of the dislocation line at successive time steps.
By contracting Eq. (10) with D_i^(n), we can express the delta-function in terms of the dislocation density tensor, δ^(2)(η_n) = α_ik D_i^(n) q_k^(n)/(2π|D^(n)|²), which we can insert into Eq. (14). Then, by equating J_lj and J^(α)_lj at a point r′ on the dislocation line, where α_ik = t′_i b_k δ^(2)(∆r_⊥), we get, after contracting with b and integrating the delta-functions in N′ (details in Appendix D), the relation of Eq. (16), where v′ is the velocity of the dislocation node at r′. Since t′ ⊥ v′, we can easily invert this relation to find v′. Eqs. (12) and (16) are the key results of this paper. Eq. (12) defines the dislocation density tensor from the demodulated amplitudes η_n of the phase field, while Eq. (16) gives an explicit expression for the dislocation line velocity. Both equations bridge the continuum description of the dislocation density and velocity with the microscopic scale of the phase field.
Dislocation motion in a bcc lattice
We apply here the framework developed in Sec. 2 to a phase field crystal model of dislocation motion in a bcc lattice [Elder et al., 2002, Elder and Grant, 2004, Emmerich et al., 2012]. The free energy F_ψ is a functional of the phase field ψ over the domain Ω (Eq. (17)), built from the operator L = q_0² + ∇², with constant parameters ∆B_0, B_0^x, V, and T. The dissipative relaxation of ψ reads as

∂ψ/∂t = Γ ∇² (δF_ψ/δψ),    (18)
with constant mobility Γ. We will refer to Eq. (18) as the "classical" PFC dynamics. As a characteristic unit of time given these model parameters, we use τ = (Γ B_0^x q_0^6)^{−1}. For appropriate parameter values, the ground state of this energy is a bcc lattice which is well described in the one-mode approximation

ψ(r) = ψ_0 + η_0 Σ_{n=1}^{12} e^{i q^(n)·r},    (19)

where ψ_0 is the average density, η_0 is the equilibrium amplitude found by minimizing the free energy (Eq. (17)) with this ansatz for ψ(r), and {q^(n)} are the N = 12 smallest reciprocal lattice vectors, with q^(n) = −q^(n−6) for n = 7, ..., 12, see Fig. 1(b). Figure 3 shows one bcc unit cell of a phase field initialized in the one-mode approximation. Given the equilibrium configuration, the lattice constant a_0 will be used as the characteristic unit of length, and the shear modulus µ calculated from the phase field will serve as the characteristic unit of stress [Skogvoll et al., 2021a]. As we see, the functional form of the free energy determines the base vectors q^(n), and no further assumptions about slip systems or constitutive laws for dislocation velocity (or plastic strain rates) need to be introduced.
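To make the geometry explicit, the following sketch constructs the twelve <110>-type reciprocal vectors of the bcc lattice (Eq. (20)), evaluates the one-mode field of Eq. (19), and numerically checks the second-moment identity of Eq. (6) used throughout (Appendix B). The normalization and ordering of the vectors are our assumptions.

```python
import numpy as np

def bcc_q_vectors(q0=1.0):
    """The N = 12 shortest reciprocal lattice vectors of a bcc crystal:
    all <110>-type directions, normalized to length q0 (cf. Fig. 1(b))."""
    vecs = []
    for i in range(3):
        for s1 in (1.0, -1.0):
            for s2 in (1.0, -1.0):
                v = np.zeros(3)
                v[i], v[(i + 1) % 3] = s1, s2
                vecs.append(q0 * v / np.sqrt(2.0))
    return np.array(vecs)        # shape (12, 3), comes in +/- pairs

def one_mode_psi(x, y, z, psi0, eta0, q0=1.0):
    """One-mode bcc approximation, Eq. (19): psi0 + eta0 * sum_n exp(i q.r)."""
    psi = np.full(np.shape(x), psi0)
    for q in bcc_q_vectors(q0):
        psi = psi + eta0 * np.cos(q[0]*x + q[1]*y + q[2]*z)
    return psi

# Check the cubic-symmetry identity of Eq. (6): sum_n q_i q_j = (N q0^2/3) I
q = bcc_q_vectors()
assert np.allclose(np.einsum('ni,nj->ij', q, q), len(q) / 3.0 * np.eye(3))
```

Because the set comes in ± pairs, the complex exponentials in Eq. (19) sum to real cosines, which is what the loop exploits.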
The model parameters (∆B_0, B_0^x, T, V, and Γ) and variables (F_ψ, ψ, r, and t) can be rescaled to a dimensionless form in which B_0^x = V = q_0 = Γ = 1, thus leaving only three tunable model parameters: the quenching depth ∆B_0, T, and the average density ψ_0 (due to the conserved nature of Eq. (18)). All simulations are performed in these dimensionless units, as described in Appendix A.3.
Numerical analysis: shrinkage of a dislocation loop
In order to have a lattice containing one dislocation loop as the initial condition, we consider first the demodulation of the ψ field in the one-mode approximation. A dislocation loop is introduced into the perfect lattice by multiplying the equilibrium amplitudes by complex phases, η_0 → η_n(r), with the appropriate charges s_n (see Appendix A.4), and then reconstructing the phase field ψ through Eq. (19). We then integrate Eq. (18) forward in time as detailed in Appendix A.3. A fast relaxation follows from the initial configuration with the loop. This relaxation leads to the regularization of the singularity at the dislocation line (η_n → 0 for s_n ≠ 0), as achieved in PFC approaches [Skaugen et al., 2018a, Salvalaglio et al., 2019]. From then onward, ψ evolves in time leading to the motion of the dislocation line, which may be analyzed by the methods outlined in Sec. 2, using the amplitudes {η_n} extracted from ψ as detailed in Appendix A.1.
Numerically, we approximate the delta function in Eq. (12) as a sharply peaked 2D Gaussian distribution, i.e., δ^(2)(η_n) ≈ exp(−|η_n|²/(2ω²))/(2πω²), with a standard deviation of ω = η_0/10. Near the dislocation line, the dislocation density α_ij thus takes the form of a sharply peaked function, which can be treated numerically. The decomposition of α_ij into its outer product factors t′_i and a Burgers vector density B_j = b_j δ^(2)(∆r_⊥) is done by singular value decomposition (see Appendix A.2), and the Burgers vector of the point is extracted by performing a local surface integral in N′. We prepare a 35 × 35 × 35 unit cell 3D PFC lattice with periodic boundary conditions and a resolution of ∆x = ∆y = ∆z = a_0/7. A dislocation loop is introduced as the initial condition in the slip system given by the plane normal [−1, 0, 1] with slip direction (Burgers vector) (a_0/2)[1, −1, 1]. Figure 4(a) shows the initial dislocation density decomposed as described, where we also have calculated the velocity v′ at each point given by Eq. (16).
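A compact sketch of this construction is shown below; it combines the Gaussian-regularized delta with the determinant fields of Eq. (11), reusing the determinant_field helper sketched earlier. The overall prefactor is our reading of Eq. (12), derived from Eqs. (6) and (8), and should be checked against the original before reuse.

```python
import numpy as np

def dislocation_density(etas, q_vectors, q0, eta0, dx):
    """Assemble alpha_ij (cf. Eq. (12)) with delta^(2)(eta_n) replaced by a
    Gaussian of standard deviation omega = eta0/10, as described in the text.
    etas: list of N complex amplitude fields; q_vectors: (N, 3) array."""
    omega = eta0 / 10.0
    N = len(q_vectors)
    alpha = np.zeros((3, 3) + etas[0].shape)
    for eta, q in zip(etas, q_vectors):
        delta = np.exp(-np.abs(eta)**2 / (2*omega**2)) / (2*np.pi*omega**2)
        D = determinant_field(eta, dx)          # from the earlier sketch
        alpha += D[:, None] * q[None, :, None, None, None] * delta
    return 6*np.pi / (N * q0**2) * alpha
```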
In order to obtain the velocity of the dislocation loop segments, we identify M nodes on the loop and evaluate Eq. (16) by using numerical differentiation of the ψ field to calculate the amplitude currents J_l^(n). To serve as a benchmark, we also calculate the circumference l_C of the dislocation loop C at each time (further details in Appendix A.5), so that we can compare the rate of shrinkage of the effective loop radius, |∂_t R| (solid blue line in Fig. 4(b)), with the mean node speed v̄ = (1/M) Σ_m |v^(m)| (dashed blue line in Fig. 4(b)), where v^(m) is the velocity of the dislocation line at node m, calculated by the velocity formula Eq. (16). |∂_t R| and v̄ should agree in the case of the shrinking of a perfectly circular loop, and the figure shows excellent agreement between the two. Interestingly, we observe that both are sensitive to Peierls-like barriers during their motion, as shown by the oscillations in Fig. 4(b). The maxima are separated by 2πa_0, confirming that the oscillation is related to the motion of a loop segment over one lattice spacing a_0 [Boyer and Viñals, 2002]. This observation confirms that even though Eqs. (12) and (16) are continuum-level descriptions of the system, they still exhibit behavior related to the underlying lattice configuration. The initial fast drop in velocity is due to the fast relaxation of the initial condition. The evolution of the variables under the dynamics of Eq. (18) is shown together with the evolution given by the PFC-MEq model, which will be introduced in Sec. 3.3.
Theoretical analysis: Peach-Koehler law
In this section, we show that the general expression Eq. (16) for the defect velocity agrees with the dissipative motion of a dislocation as given by the classical Peach-Koehler force [Pismen, 1999, Kosevich, 1979]. To calculate an analytical expression for the amplitude currents J_l, we employ the amplitude formulation of the PFC model, which directly expresses the free energy and dynamical equations in terms of the complex amplitudes η_n [Goldenfeld et al., 2005, Athreya et al., 2006, Salvalaglio and Elder, 2022]. For our lattice symmetry, real-valuedness of ψ requires that η_{n+6} = η_n^*, and the dynamical equations need only consider the amplitudes {η_n}, n = 1, ..., 6. By substituting Eq. (19) in F_ψ and integrating over the unit cell, under the assumption of slowly varying amplitudes, one obtains a free energy as a function of the complex amplitudes (Eq. (23)), involving the operators G_n = ∇² + 2i q^(n)·∇ and Φ = 2 Σ_{n=1}^{6} |η_n|². f_s({η_n}, {η_n^*}) is a polynomial in η_n and η_n^* that depends in general on the specific crystalline symmetry under consideration [Goldenfeld et al., 2005, Salvalaglio and Elder, 2022] (here bcc, see Appendix E for its expression). Equation (23) is obtained when considering a set of vectors q of length q_0, while similar forms may be achieved when considering different length scales [Salvalaglio et al., 2021]. The evolution of η_n, which follows from Eq. (18), is given by Eqs. (24)-(25) [Goldenfeld et al., 2005, Salvalaglio and Elder, 2022], where the last term, ∂f_s/∂η_n^*, comes from the nonlinear contributions ψ³ and ψ⁴ in the local free energy density, and depends on the other amplitudes {η_m}, m ≠ n. However, for the amplitudes that go to zero at the defect, it can be shown that ∂f_s/∂η_n^* = 0 at the defect (for more details, see Appendix E). Thus, the evolution of η_n near the defect core is dictated solely by the non-local gradient term (Eq. (26)). Furthermore, this implies that the complex amplitude η_n^(0) of a stationary defect satisfies G_n² η_n^(0) = 0 at the core. We now add an imposed, smooth displacement ũ to the amplitudes as η_n = η_n^(0) e^{−i q^(n)·ũ} to represent the far-field displacement induced by a different line segment, defect, or externally applied loads [Skaugen et al., 2018a]. This displacement is in addition to the discontinuous displacement field u, described in Sec. 2, which is captured by the stationary solution η_n^(0) and defines the Burgers vector of the dislocation line (Fig. 2). Inserting this ansatz for the complex amplitudes into Eq. (11), in the approximation of small distortions, |∇ũ| ≪ 1, we find Eq. (27), where D_i^(n),0 is the determinant vector field calculated from η_n^(0). The corresponding defect density current follows in Eq. (28). Arguably, the simplest solution of Eq. (26) is the isotropic, simple vortex η_n^(0), which is linear in the distance from the core and has s_n = ±1. At a node r′ on the dislocation line, η_n^(0) can be written in terms of the Cartesian coordinates x_⊥, y_⊥ in the plane N′ (Sec. 2), where it takes the form η_n^(0) = κ(x_⊥ + i s_n y_⊥), with κ a proportionality constant. The gradients of η_n^(0) can be evaluated in these coordinates at r′, from which we get the current in terms of the local tangent vector t′. At r′, we also get D_i^(n) = κ² s_n t′_i, which leads to an expression for the dislocation velocity (in which the proportionality constant κ cancels out), given by Eq. (32), where σ̃_lj is the stress tensor for a bcc PFC that has been deformed by ũ [Skogvoll et al., 2021a]. Thus, the velocity of the dislocation line is proportional to the stress on the line.
In vectorial form, this equation reads v′ = M (σ̃·b) × t′, with isotropic mobility M = Γπ/(|b|² η_0²). A stationary dislocation induces a stress field σ^(0)_ij, but only the imposed stress σ̃_ij appears in the equation above. This is analogous to how the stress field of the dislocation itself is not included when the Peach-Koehler force is calculated [Kosevich, 1979]. Thus, if σ^ψ_ij is the configurational stress of the phase field at any given time, the part responsible for dislocation motion is the imposed stress σ̃_ij = σ^ψ_ij − σ^(0)_ij. Note that the stationary solution necessarily satisfies mechanical equilibrium, ∂_j σ^(0)_ij = 0, so that if the configurational PFC stress σ^ψ_ij is in mechanical equilibrium, so is the imposed stress σ̃_ij on the dislocation segment. The imposed stress can be attributed to external load, other dislocations, or other parts of the dislocation loop. The framework predicts a defect mobility which is isotropic and does not discriminate between dislocation climb and glide motion. Numerically, however, we have seen that at deeper quenches ∆B_0, climb motion is prohibited in the PFC model. The result in this section should therefore be interpreted as a first-order approximation, valid at shallow quenches. The apparent equal mobility for glide and climb may result from the employment of the amplitude phase-field model (which is only exact for |∆B_0| → 0) or the assumption of an isotropic defect core in the calculation.
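For a numerical check of this overdamped law, a small helper suffices; the (σ·b) × t form of the Peach-Koehler force assumed below is the standard one, and the variable names are ours.

```python
import numpy as np

def pk_velocity(sigma, b, t_hat, Gamma, eta0):
    """Overdamped segment velocity v = M (sigma . b) x t with the isotropic
    mobility M = Gamma*pi/(|b|^2 eta0^2) quoted in the text.
    sigma: local 3x3 imposed stress; b: Burgers vector; t_hat: unit tangent."""
    M = Gamma * np.pi / (np.dot(b, b) * eta0**2)
    return M * np.cross(sigma @ b, t_hat)
```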
PFC dynamics constrained to mechanical equilibrium (PFC-MEq)
In the previous section, we found that the motion of a dislocation is governed by a configurational stress σ^ψ_ij which derives from the PFC free energy. Since this stress is a functional only of the phase field configuration, it does not satisfy, in general, the condition of mechanical equilibrium. References [Skaugen et al., 2018a, Skogvoll et al., 2021a] give an explicit expression for this stress (Eq. (34)), defined as the variation of the free energy with respect to distortion, where ⟨·⟩ is a spatial average over the scale 1/q_0 in order to eliminate the base periodicity of the phase field (see Appendix A.1).
In this section, we discuss a modification of the PFC in three dimensions and in an anisotropic lattice so as to maintain elastic equilibrium in the medium while ψ evolves according to Eq. (18). Let ψ^(U) be the field that results from the evolution defined by Eq. (18) alone. At each time, we define the corrected field ψ by displacing ψ^(U) with a small continuous displacement u^δ (Eq. (35)), computed so that the configurational stress associated with ψ(r) is divergence free. We now show a method to determine u^δ. Suppose that at some time t the PFC configuration ψ has an associated configurational stress σ^{ψ,U}_ij (from Eq. (34), where ∂_j σ^{ψ,U}_ij ≠ 0). Within linear elasticity, the stress σ^ψ_ij after displacement of the current configuration by u^δ is obtained by adding the elastic contribution C_ijkl e^δ_kl (Eq. (36)), where C_ijkl is the elastic constant tensor and e^δ_ij = (1/2)(∂_i u^δ_j + ∂_j u^δ_i). u^δ is determined by requiring that ∂_j σ^ψ_ij = 0 (Eq. (37)). By using the symmetry i ↔ j of the elastic constant tensor, we can rewrite this equation explicitly in terms of u^δ,

C_ijkl ∂_j ∂_k u^δ_l = −g^ψ_i,    (38)

where g^ψ_i is the body force from the stress [Skogvoll et al., 2021a]. The quantity f is the free energy density from Eq. (17). Given the periodic boundary conditions used, the system of equations (38) is solved by using a Fourier decomposition with the Green's function for elastic displacement in cubic anisotropic materials [Dederichs and Leibfried, 1969]. Once u^δ is obtained, ψ is updated according to Eq. (35), and evolved according to Eq. (18) from its current state ψ(t) to ψ^(U)(t + ∆t). Note that Eqs. (38) can, in general, be solved for any elastic constant tensor, so that the method introduced is not limited to cubic anisotropy. Since the state ψ^(U) can only be updated according to Eq. (35) every ∆t, this effectively sets a time scale of elastic relaxation in the model. We found that the numerical discretization scheme for imposing mechanical equilibrium at every ∆t has a slow convergence with decreasing time resolution. Thus, the rate of loop shrinkage also depends slightly on ∆t. This is further discussed in Appendix A.3.1. Figure 4 contrasts numerical results for the evolution of an initial dislocation loop with and without using the method just described. The computed line velocities are very different, as they are highly sensitive to the local stress experienced by the dislocation loop segments. This stems from the fact that under classical PFC dynamics, the stress is always given by σ^{ψ,U}_ij, and a consequence of the results from Sec. 3.2 is that the velocity of an element of the defect line will be quite different depending on whether the stress acting on it is σ^{ψ,U}_ij or σ^ψ_ij. Figure 5 shows the dislocation loop after its circumference has shrunk to 90% of its initial value, and the resulting xz component of the stress for both models. As expected, the correction provided by the PFC-MEq model is necessary to relax the stress originating from the initial loop. The figure shows a large residual stress far from the dislocation loop that can only decay diffusively in the standard phase field model.

Figure 5: In-plane sections (y = 17.5 a_0) of the configurational stress σ^ψ_xz/µ for the dislocation loop after shrinking to 90% of its initial circumference under (a) PFC dynamics and (b) PFC-MEq dynamics. Because the latter evolves faster, the snapshots are taken at different times, namely t = 389.0τ and t = 34.4τ, respectively. A lot of residual (unrelaxed) stress is visible in the configurational stress for the classical PFC model.
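For intuition, the following sketch solves the analogue of Eq. (38) on a periodic box by inverting the acoustic tensor A_ik(q) = C_ijkl q_j q_l in Fourier space. It is written for isotropic elasticity (Lamé constants λ, µ) purely for compactness; the paper itself uses the Green's function for cubic anisotropy [Dederichs and Leibfried, 1969], so this is an illustrative stand-in rather than the model's actual solver.

```python
import numpy as np

def equilibrium_displacement(g, dx, mu, lam):
    """Solve C_ijkl d_j d_k u_l = -g_i on a periodic grid for isotropic C.
    In Fourier space A(q) u = g with A_ik = mu q^2 delta_ik + (lam+mu) q_i q_k,
    whose closed-form inverse is applied below.
    g: body force, shape (3, Nx, Ny, Nz); returns u of the same shape."""
    shape = g.shape[1:]
    k = np.array(np.meshgrid(
        *[2*np.pi*np.fft.fftfreq(n, dx) for n in shape], indexing='ij'))
    k2 = np.sum(k**2, axis=0)
    k2[0, 0, 0] = 1.0                       # placeholder; zero mode fixed below
    g_hat = np.fft.fftn(g, axes=(1, 2, 3))
    kg = np.einsum('i...,i...->...', k, g_hat)
    u_hat = g_hat/(mu*k2) - (lam + mu)/(mu*(lam + 2*mu)) * kg*k/k2**2
    u_hat[:, 0, 0, 0] = 0.0                 # mean displacement is arbitrary
    return np.real(np.fft.ifftn(u_hat, axes=(1, 2, 3)))
```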
Indeed, we have verified numerically that the configurational stress is divergence free only for the PFC-MEq model. We note that in our setup the loop is seeded in a glide plane, so its shape remains approximately circular for both models, while the shrinkage rate differs. Note that with the addition of this advection step, the model is no longer guaranteed to be fully dissipative.
The problem addressed in this section involves finding the elastic distortion u_kl (which, away from defects, can be written as u_kl = ∂_k u_l for a displacement field u) given the dislocation density tensor α_ij as a state variable [Acharya et al., 2019]. It consists of two conditions: the first (Eq. (40)) expresses the incompatibility of the elastic distortion in terms of α_ij, and the second (Eq. (41)) is the mechanical equilibrium condition on u_kl, where C_ijkl is the tensor of elastic constants and (S) denotes the symmetric part of the tensor. Equation (40) has a non-trivial kernel consisting of gradients of vector fields, ∇u^δ. This vector field is determined by Eq. (41), given appropriate boundary conditions that guarantee uniqueness. A computational method for solving for u_kl and u^δ, using the dislocation density as a state variable, was first given in Ref. [Roy and Acharya, 2005]. The main difference between this reference and the method outlined in this section is that, since the incompatibility of the distortion is captured by the state of the phase field, we only need to solve for the compatible part of the distortion, using the force density g^ψ from the phase field as a source.
While the stress profile shown in Fig. 5(b) can be shown numerically to have vanishing divergence, we would like a direct comparison of the stress with the prediction from continuum elasticity. As the model purports to evolve the phase field at mechanical equilibrium, and we are able to extract the dislocation density from the phase field at any time through Eq. (12), this amounts to the problem of finding the stress tensor for a given dislocation density, under the constraint of mechanical equilibrium and with periodic boundary conditions (zero surface traction). This problem was addressed in Ref. [Brenner et al., 2014], and in Appendix A.6 we show how we solve Equations (40)-(41) to derive the equilibrium stress field from α_ij using spectral methods. Figure 6 shows all the stress components after the dislocation loop has shrunk to 90% of its initial diameter for both dynamical models, as well as the stress σ^(α) computed directly from the dislocation density tensor. Note that the mean value of the components of σ^(α)_ij is not determined by Eqs. (40)-(41), and is set to zero. In this comparison, we have also subtracted from σ^ψ_ij its mean value. As expected, the stresses obtained from the PFC-MEq model agree well with σ^(α)_ij. The small differences observed are due to the fact that the configurational stress determined by ψ is naturally regularized by the lattice spacing and the finite defect core, whereas the stress σ^(α)_ij is for a continuum elastic medium with a singular dislocation source (numerically, the δ-function in Eq. (12) is regularized by an arbitrary width of the Gaussian approximation). Investigating exactly which length scale of core regularization derives from the PFC model is an open and interesting question that we will address in the future.
Conclusions
We have introduced a theoretical method, and the associated numerical implementation, to study topological defect motion in a three dimensional, anisotropic, crystalline PFC lattice. The dislocation density tensor and velocity are directly defined by the spatially periodic phase field, where dislocations are identified with the zeros of its complex amplitudes.
To illustrate the method, we have studied the motion of a shear dislocation loop, and found that it accurately tracks the loop position, circumference, and velocity. As an application, we have shown that under certain simplifying assumptions, the overdamped dislocation velocity follows from the Peach-Koehler force, with the defect mobility determined by equilibrium lattice properties. We have introduced the PFC-MEq model for three dimensional anisotropic media, which constrains the classical PFC model evolution to remain in mechanical equilibrium, and shown that loop motion is much faster with this modification. The PFC-MEq model produces stress profiles that are in agreement, especially far from the defect core, with stress fields directly computed from the instantaneous dislocation density tensor.
In summary, we have presented a comprehensive framework, based on the phase field crystal model, for the analysis of dislocation motion in crystalline phases in three spatial dimensions. Starting from a free energy that has a ground state of the proper symmetry, the model naturally incorporates defects, the associated topological densities, and the resulting defect line kinematic laws that are compatible with topological density conservation. Configurational stresses induced by defects are defined and analyzed, and shown to lead to a Peach-Koehler type force on defects, with an explicit expression for the line segment mobility given.

Appendix A.1. Demodulation of the phase field

Given the PFC configuration of Eq. (A.1), we can find the amplitudes using the principle of resonance under coarse graining. Coarse graining of a field X with respect to a length scale a_0, denoted ⟨X⟩, is introduced as a convolution with a Gaussian filter function. To find η_n(r), we multiply by e^{−i q^(n)·r} and coarse grain to get

⟨ψ(r) e^{−i q^(n)·r}⟩ = ψ̄(r) ⟨e^{−i q^(n)·r}⟩ + Σ_{n′} η_{n′}(r) ⟨e^{i(q^(n′) − q^(n))·r}⟩ = η_n(r),

where we have used the slowly varying nature of the complex amplitudes to pull them out of the coarse graining operation and used the resonance condition ⟨e^{i(q^(n′) − q^(n))·r}⟩ = δ_{nn′} [Skogvoll et al., 2021a].
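In practice this demodulation is one FFT round-trip per mode: multiply by the carrier wave and low-pass filter. The sketch below does this with a Gaussian filter in Fourier space; the filter width of order a_0 is the only free choice and is our assumption here.

```python
import numpy as np

def demodulate(psi, q, coords, a0, dx):
    """eta_n(r) = <psi(r) exp(-i q.r)>, with <.> a Gaussian coarse-graining
    over the lattice scale a0 applied as a low-pass filter in Fourier space.
    coords = (x, y, z) meshgrid arrays matching psi's shape."""
    x, y, z = coords
    carrier = psi * np.exp(-1j * (q[0]*x + q[1]*y + q[2]*z))
    ks = np.meshgrid(*[2*np.pi*np.fft.fftfreq(n, dx) for n in psi.shape],
                     indexing='ij')
    k2 = sum(ki**2 for ki in ks)
    return np.fft.ifftn(np.fft.fftn(carrier) * np.exp(-0.5 * a0**2 * k2))
```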
Appendix A.2. Dislocation density tensor decomposition

A singular value decomposition of α is introduced as α = UΣV^T, where Σ is a diagonal matrix containing the singular values of α, and U and V are unitary matrices containing the normalized eigenvectors of (αα^T) and (α^T α), respectively. We assume that the dislocation density tensor can be written as the outer product of the unit tangent vector t′ and a local spatial Burgers vector density B(r), i.e., α_ij = t′_i B_j. Under this assumption, one finds that Σ has only one nonzero singular value, |B|, and the columns of U and V that correspond to this singular value will be t′ and B/|B|, respectively.
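A pointwise version of this factorization takes a few lines with NumPy. Note that the SVD fixes t′ and B only up to a simultaneous sign flip, which a caller would resolve, e.g., by continuity along the line; the function below is an illustrative sketch.

```python
import numpy as np

def decompose_alpha(alpha_point):
    """Factor a 3x3 dislocation density tensor alpha_ij = t_i B_j at one
    grid point into a unit tangent t and Burgers vector density B via SVD."""
    U, S, Vt = np.linalg.svd(alpha_point)
    t = U[:, 0]              # singular vector of the single dominant value
    B = S[0] * Vt[0, :]      # |B| equals the leading singular value
    return t, B
```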
Appendix A.3. Evolution of the phase field
The dimensionless parameters for the bcc ground state are set to ∆B_0 = −0.3, T = 0 and ψ_0 = −0.325. Lengths have been made dimensionless by choosing |q^(n)| = q_0 = 1, yielding a bcc lattice constant a_0 = 2π√2. In all simulations, the computational domain is given by 35 × 35 × 35 base periods of the undistorted bcc lattice, with grid spacing ∆x = ∆y = ∆z = a_0/7. Periodic boundary conditions are used throughout. Equation (18) is integrated forward in time with an explicit method [Cox and Matthews, 2002], and ∆t = 0.1. A Fourier decomposition of the spatial fields is introduced to compute the spatial derivatives of the fields, while nonlinear terms are computed in real space.

Figure A.7: N′ is the plane normal to the tangent vector t′ upon which we impose a Cartesian coordinate system to determine the angles θ_1, θ_2 that are used to construct the (inset) initial amplitude phase configuration. For more details, see Appendix A.4.
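As an illustration of this class of integrator, the sketch below advances Eq. (18) by one first-order exponential-time-differencing step in Fourier space. The linear symbol c(k) = −Γk²(∆B_0 + B_0^x(q_0² − k²)²) assumed here is the standard PFC linearization and should be checked against Eq. (17); the simulations in the paper use a higher-order scheme from [Cox and Matthews, 2002].

```python
import numpy as np

def etd1_step(psi, dt, k2, Gamma, dB0, Bx0, q0, nonlinear):
    """One ETD1 step for d/dt psi_hat = c(k) psi_hat + N_hat(psi), where
    c(k) = -Gamma k^2 (dB0 + Bx0 (q0^2 - k2)^2) is the assumed linear part
    and 'nonlinear(psi)' returns the remaining local terms of dF/dpsi.
    k2: array of squared wavenumbers matching psi's grid."""
    c = -Gamma * k2 * (dB0 + Bx0 * (q0**2 - k2)**2)
    N_hat = -Gamma * k2 * np.fft.fftn(nonlinear(psi))
    c_safe = np.where(np.abs(c) < 1e-12, 1.0, c)
    phi = np.where(np.abs(c) < 1e-12, dt, np.expm1(c * dt) / c_safe)
    psi_hat = np.exp(c * dt) * np.fft.fftn(psi) + phi * N_hat
    return np.real(np.fft.ifftn(psi_hat))
```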
Appendix A.3.1. Mechanical equilibrium
We implement the correction scheme of Eq. (35) between every timestep ∆t. If u_max = max_{r∈Domain} |u^δ(r)| > 0.1 a_0, we rescale u^δ so that u_max = 0.1 a_0, and repeat the process until elastic equilibrium is achieved. Typically, when initializing the PFC field with a dislocation, around 5 such iterations are needed, after which u_max is on the order of 0.01 a_0 at each correction step.
The dislocation loop shrink velocity is sensitive to the time interval ∆t between successive equilibration corrections. As shown in Fig. 4, imposing this correction at every time interval ∆t = 0.1 accelerates the annihilation process by approximately a factor |v_{PFC-MEq,∆t=0.1}|/|v_{PFC}| ≈ 7.5. A slow convergence in the limit ∆t → 0 is observed, where we have estimated that the shrink velocity increases up to |v_{PFC-MEq,∆t→0}|/|v_{PFC}| ≈ 9.8. However, reaching this numerical convergence is computationally demanding. Indeed, this slow convergence suggests that the time scale of the elastic field relaxation is important for the process of shear dislocation loop shrinkage. For static problems, however, such as obtaining regularized stress profiles for dislocation loops, or defect nucleation under quasi-static loading, this slow convergence is not an issue.
Appendix A.4. Initializing a dislocation loop in the PFC model
In this section, we show how to multiply the initial amplitudes η_0 with complex phases to produce a dislocation loop with Burgers vector b in a slip plane given by normal vector n (see Sec. 3.1). A given point r belongs to a plane N′ perpendicular to t′ for some point r′ on the dislocation loop (see Figure A.7). This plane also intersects the diametrically opposed point r′′ of the dislocation loop. If r_0 is the center of the loop, the distance vector r − r_0 lies in N′. Let (m_1, m_2) be the first and second coordinates in the Cartesian coordinate system defined by the right-handed orthonormal system {(n × t′), n, t′} centered at r_0; both m_1 and m_2 are thus determined by r, r_0 and the normal vector to the loop plane n. θ_1 (θ_2) is the angle between r − r′ (r − r′′) and n × t′ in the plane N′, and both are found numerically by using the four-quadrant inverse tangent atan2(y, x), so that

θ_1 = atan2(m_2, m_1 + R),    (A.6)
θ_2 = atan2(m_2, m_1 − R),    (A.7)

where R is the radius of the loop. For each point r, we determine θ_1(r) and θ_2(r) according to the equations above and initiate the PFC with the phases

η_n = η_0 e^{i s_n (θ_1(r) − θ_2(r))},    (A.8)

where s_n = (1/2π) q^(n)·b is given in Table A.1.

Appendix A.5. Perimeter of the dislocation loop

To calculate numerically the perimeter of a dislocation loop, recall that α_ij(r) = b_j ∮_C dl_i δ^(3)(r − r_(l)), where we have added a subscript (l) onto r to emphasize that it is the point on the loop as indexed by the line element dl. Taking the double dot product with itself, we find α_ij α_ij = |b|² ∮_C ∮_C dl_i dm_i δ^(3)(r − r_(l)) δ^(3)(r − r_(m)). The contributions to this integral come only from points on the loop C and only when r_(l) = r_(m), where dl_i = dm_i, so dl_i dm_i = (dl)² = |dl_i|² = |dl_i||dm_i|. Taking the square root and integrating over all space, we find ∫ d³r √(α_ij α_ij) = |b| L, where L is the perimeter of the dislocation loop.

Appendix A.6. Direct computation of stress fields

The dislocation density tensor is calculated directly from the phase field ψ through Eq. (12). The general method of solving Eqs. (40)-(41) on a periodic medium given α_ij is presented in Ref. [Brenner et al., 2014], where the uniqueness of the elastic fields is also proven given appropriate conditions on the dislocation density α_ij. In the present case, the conditions on α_ij are automatically satisfied as it is calculated from the phase field. In this section, we thus show, for our computational setup, how we compute the Green's function relating the distortion u_ij to the dislocation density tensor α_ij as a source. Since Eqs. (40)-(41) with periodic boundary conditions can be solved uniquely, we Fourier transform both sets of equations and add the condition of mechanical equilibrium (Eq. (41)) to the diagonal equations (i = k) in Eq. (40), which gives Eq. (A.14) in Fourier space, where there is no summation over (i), and we have multiplied the elastic constant tensor by i/µ, where µ is the shear modulus of the cubic lattice and C_ijkl = λ δ_ij δ_kl + µ(δ_ik δ_jl + δ_il δ_jk) + γ δ_ijkl. By defining the 1D vectors Ũ and α̃ as

Ũ^T = (ũ_11, ũ_12, ũ_13, ũ_21, ũ_22, ũ_23, ũ_31, ũ_32, ũ_33),
α̃^T = (α̃_11, α̃_12, α̃_13, α̃_21, α̃_22, α̃_23, α̃_31, α̃_32, α̃_33),

we rewrite Eq. (A.14) more compactly as M(q)Ũ = α̃, where the explicit form of M(q) in the case of cubic anisotropy can be written out component by component. M(q) can be inverted to yield the Fourier transform of the distortion, Ũ = M(q)^{−1} α̃. Once Ũ (denoted by ũ_kl in components) is known, we compute the stress field in mechanical equilibrium. The dislocation density α_ik as obtained from the phase field as in Eq. (12) has a very small divergence due to numerical round-off errors.
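A direct transcription of Eqs. (A.6)-(A.8) reads as follows. The local tangent t′ is taken as an input (in the actual construction it follows from the geometry of r relative to the loop), and the helper names are ours.

```python
import numpy as np

def loop_phases(r, r0, n, t_hat, R):
    """Angles theta_1, theta_2 of Eqs. (A.6)-(A.7) for a point r, given the
    loop center r0, slip-plane normal n, local tangent t_hat and radius R."""
    e1 = np.cross(n, t_hat)                 # first axis of {(n x t), n, t}
    d = r - r0
    m1, m2 = np.dot(d, e1), np.dot(d, n)
    return np.arctan2(m2, m1 + R), np.arctan2(m2, m1 - R)

def seeded_amplitude(eta0, s_n, theta1, theta2):
    """eta_n = eta0 * exp(i s_n (theta1 - theta2)), Eq. (A.8)."""
    return eta0 * np.exp(1j * s_n * (theta1 - theta2))
```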
We impose ∂_i α_ik = 0 explicitly before evaluating σ̃, which improves numerical stability.
Appendix B. Inversion formula for highly symmetric lattice vector sets
In inverting Eq. (5) to obtain the displacement field u in terms of the phases θ_n, we used the result of Eq. (6). This follows from the properties of moment tensors constructed from lattice vector sets Q = {q^(n)}, n = 1, ..., N; the p-th order moment tensor constructed from Q is the sum over the set of the p-fold outer products of its vectors. In two dimensions, for a parity-invariant lattice vector set that has a B-fold symmetry, Ref. [Chen and Orszag, 2011] showed that all p-th order moments vanish for odd p and are isotropic for p < B. Every isotropic rank 2 tensor is proportional to the identity tensor δ_ij, so for a 2D lattice vector set having four-fold symmetry, such as the set of shortest reciprocal lattice vectors {q^(n)}, n = 1, ..., 4, of the square lattice, we have Σ_{n=1}^4 q_i^(n) q_j^(n) ∝ δ_ij (Figs. 3 and 5 in Ref. [Skogvoll et al., 2021a] show the reciprocal lattice vector sets discussed in this appendix). Taking the trace and using that the vectors have the same length |q^(n)| = q_0, we get Σ_{n=1}^4 q_i^(n) q_j^(n) = 2 q_0² δ_ij. In general, for any 2D parity-invariant lattice vector set {q^(n)}, n = 1, ..., N, with a B-fold symmetry where B > 2, we have in 2D: Σ_{n=1}^N q_i^(n) q_j^(n) = (N q_0²/2) δ_ij. As mentioned, this holds for the 2D square lattice, but it also holds for the 2D hexagonal lattice. In fact, the six-fold symmetry of the hexagonal lattice ensures that also every fourth-order moment tensor is isotropic, which results in the elastic properties of the 2D hexagonal PFC model being isotropic [Skogvoll et al., 2021a].
To show this identity for a 3D parity-invariant vector set with cubic symmetry, we generalize the proof in Ref. [Chen and Orszag, 2011] to a particular case of a 3D vector set that is symmetric with respect to 90° rotations around each coordinate axis, such as the set of shortest reciprocal lattice vectors {q^(n)}, n = 1, ..., N, of bcc, fcc or simple cubic structures. Let v be an eigenvector of Q_ij with eigenvalue λ. Since the set is invariant under a 90° rotation about the x-axis, the correspondingly rotated vector is also an eigenvector of Q_ij with the same eigenvalue λ. Repeating for a rotation around the y-axis demonstrates that Q_ij has only one eigenvalue λ, so that it must be proportional to the rank 2 identity tensor, Q_ij ∝ δ_ij. Taking the trace and using that the vectors have the same length |q^(n)| = q_0, we find in 3D: Σ_{n=1}^N q_i^(n) q_j^(n) = (N q_0²/3) δ_ij.

Appendix C. Time derivatives of the dislocation density

Appendix C.1. Delta-function form

Consider a moving dislocation line C = {r′(λ, t)} of points r′(λ, t) parametrized by the time t and a dimensionless λ, which can be taken to go from 0 to 1 without loss of generality. Keeping the labelling fixed through its time evolution, we get

α_ij(r, t) = b_j ∫_{λ=0}^{1} δ^(3)(r − r′(λ, t)) (∂_λ r′_i(λ, t)) dλ.    (C.1)

Suppressing the dependence of r′ on λ and t and taking the time derivative of Eq. (C.1), we arrive at the current form quoted in Eq. (13).
Appendix D. Dislocation velocity

Inserting the expression for the delta-function in terms of the dislocation density tensor, δ^(2)(η_n) = α_ik D_i^(n) q_k^(n)/(2π|D^(n)|²), into Eq. (14), and equating J^(α)_lj and J_lj at a point r′ on the dislocation line, where α_ij = t′_i b_j δ^(2)(∆r_⊥), using b·q^(n) = 2π s_n, we obtain a relation of the form ε_lmn t′_m b_j v′_n δ^(2)(∆r_⊥) = J_lj, which is inverted for the velocity to give Eq. (16).

Appendix E. Amplitude decoupling

The (complex) polynomial f_s (see Eq. (23)) results from the amplitude expansion of the ψ³ and ψ⁴ terms in Eq. (17). It may be computed by substituting Eq. (19) into Eq. (17) and integrating over the unit cell, under the assumption of constant amplitudes [Goldenfeld et al., 2005, Athreya et al., 2006, Salvalaglio and Elder, 2022]. It features terms of the form ∏_{ℓ=1}^{L} η_{n_ℓ}, with L = 3, 4 and indices n_ℓ for which the condition Σ_{ℓ=1}^{L} q^{(n_ℓ)} = 0 is satisfied. By multiplying this condition by b and using Eq. (8), it then follows that

Σ_{ℓ=1}^{L} s_{n_ℓ} = 0.    (E.1)

In the equation for the dislocation velocity, Eq. (16), the only contributing amplitudes are those for which s_n ≠ 0. The condition (E.1) implies that at least one of the other amplitudes, {η_m}, m ≠ n, appearing in terms of f_s containing η_n, also has s_m ≠ 0 and thus vanishes at the corresponding defect. Thus, for a given amplitude η_n with s_n ≠ 0, the terms in ∂f_s/∂η_n^* always contain at least one vanishing amplitude. Eq. (25) then reduces to Eq. (26) at the defect, as η_n = 0 and ∂f_s/∂η_n^* = 0 there. Importantly, a full decoupling of the evolution equation for amplitudes which vanish at the defect is obtained. | 2021-10-08T01:16:24.979Z | 2021-10-07T00:00:00.000 | {
"year": 2021,
"sha1": "a913500978eb19c96bcf675c9ce19739df99e252",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.jmps.2022.104932",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "3281cbab24195e6bed71db131c3086d533d86fd7",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
79498925 | pes2o/s2orc | v3-fos-license | Synthetic vs biologic mesh for the repair and prevention of parastomal hernia
AIM To outline current evidence regarding prevention and treatment of parastomal hernia and to compare use of synthetic and biologic mesh. METHODS Relevant databases were searched for studies reporting hernia recurrence, wound and mesh infection, other complications, surgical techniques and mortality. Weighted pooled proportions (95%CI) were calculated using StatsDirect. Heterogeneity concerning outcome measures was determined using Cochran's Q test and was quantified using I². Random and fixed effects models were used. Meta-analysis was performed with Review Manager software with the statistical significance set at P ≤ 0.05. RESULTS Forty-four studies were included: 5 reporting biologic mesh repairs; 21, synthetic mesh repairs; and 18, prophylactic mesh repairs. Most of the studies were retrospective cohorts of low to moderate quality. The hernia recurrence rate was higher after undergoing biologic compared to synthetic mesh repair (24.0% vs 15.1%, P = 0.01). No significant difference was found concerning wound and mesh infection (5.6% vs 2.8%; 0% vs 3.1%). Open and laparoscopic techniques were comparable regarding recurrences and infections. Prophylactic mesh placement reduced the occurrence of a parastomal hernia (OR = 0.20, P < 0.0006) without increasing wound infection [7.8% vs 8.2% (OR = 1.04, P = 0.91)] and without differences between the mesh types. CONCLUSION There is no superiority of biologic over synthetic mesh for parastomal hernia repair. Prophylactic mesh placement reduces the occurrence of parastomal hernia without increasing the rate of wound infection.
INTRODUCTION
Parastomal hernia is a common complication of stoma formation during colorectal surgery, with incidences up to 50%. The risk of parastomal hernia is highest within the first few years after formation of the stoma, but one may develop as much as 20 years later [1]. Hernias are often asymptomatic and managed with conservative treatment. However, 11% to 70% of patients undergo surgery due to discomfort, pain, obstructive symptoms and cosmetic dissatisfaction [2]. These treatment percentages vary because surgeons are often reluctant to repair a parastomal hernia due to the high recurrence rate, complicated operation and comorbidity of patients. Indeed, a parastomal hernia is regarded as a complex incisional hernia by hernia experts [3]. Hence, many patients suffer but never undergo surgery.
The recurrence rate of parastomal hernia is lowest after mesh repair (0%-33%), whereas primary fascial closure (46%-100%) and relocation of the stoma (0%-76%) result in much higher rates. Although low recurrence rates are reported after synthetic mesh repair, concerns have been raised regarding the safety of synthetic meshes in (potentially) contaminated fields due to the risk of mesh infection and subsequent removal. Other mesh-related complications include chronic infection, bowel stenosis, erosion of the mesh through the bowel and skin, and entero-atmospheric fistulisation. These complications led to the development of biologic mesh, which, due to its biodegradable nature, has the potential to ameliorate these problems in infected and contaminated fields.
The high prevalence of parastomal hernias and the difficulty of repair have led to a shift of focus from repair towards prevention using prophylactic mesh reinforcement at the time of stoma formation. However, prophylactic mesh placement coincides with a risk of the same mesh-related morbidities as hernia repair.
There are no trials comparing biologic and synthetic mesh repair for parastomal hernias. Available studies show a large range in reported parastomal hernia recurrence rates and no difference in mesh type concerning hernia recurrence or infection resistance [4-7].
No clear answer can be given as to whether there is a difference between the outcomes of synthetic and biologic mesh repair. However, given the financial costs of biologic mesh, the evidence for superiority and more beneficial outcomes compared to synthetic mesh is mandatory to support its use.
There are various approaches regarding the anatomic position of the mesh during parastomal hernia repair. Meshes are implanted in an inlay, onlay, sublay or underlay (intraperitoneal) position. Laparoscopic repair involves the intraperitoneal technique, and open repair may involve any of the anatomical planes of the mesh. The inlay technique places the mesh within the fascial defect, sutured to the fascial edges. With onlay repair, the mesh is placed subcutaneously and fixed onto the fascia of the anterior rectus sheath and the aponeurosis of the external oblique abdominal muscle. When using a retromuscular or sublay technique, the prosthesis is placed dorsally to the rectus muscle and anteriorly to the posterior rectus sheath after mobilization of the latter. When performing intraperitoneal repair, the choice can be made between the Sugarbaker and keyhole repair techniques. Regarding the Sugarbaker technique, the hernia defect is closed with intra-abdominal placement of the prosthetic mesh securely sutured or tacked to the abdominal wall. Between the abdominal wall and the prosthesis, the bowel is lateralized, passing from the hernia sac into the peritoneal cavity [8]. During keyhole mesh repair, a 2-3 cm hole is fashioned in the mesh for passage of the stoma, and the rest of the mesh covers the entirety of the hernia orifice, including sufficient overlap (5 cm beyond the edge of the hernia defect is recommended). Both the keyhole and Sugarbaker techniques can be performed open or laparoscopically [9,10]. The primary aim of the current study was to compare biologic and synthetic mesh use for the treatment and prevention of parastomal hernia by systematic review and meta-analysis of available data in the literature. The secondary aim was to evaluate the different anatomical positions and surgical techniques used for parastomal hernia repair. With the absence of rigorous data focused on hernia recurrence in the literature, this review contributes to an increased understanding of parastomal hernias.
Search strategy
Articles for this review were identified by searching the electronic databases PubMed and Medline (January 1946 to present) and by manual cross-reference searches. The last search was performed on 19-4-2016. The search included the following terms: "Parastomal hernia", "Parastomal", "Paracolostomy", "Paraileostomy", "Stoma" and "Colostomy" to represent the population. These terms were combined with terms relevant to the outcomes, such as "Ventral hernia", "Defect", "Mesh", "Synthetic mesh", "Biologic mesh", "Closure", "Reconstruction", "Prosthesis", "Scaffold", "Prevention" and "Prophylactic". The full search strategy is provided in Appendix 1. No limitation on date or language was applied. Randomized and non-randomized studies were included. When multiple studies describing the same population were published, the most complete report was used. The systematic review was performed in accordance with PRISMA [11].
Critical appraisal
All selected papers were evaluated for methodological quality using the Cochrane risk-of-bias tool for randomized controlled trials and the Newcastle-Ottawa Scale (NOS) for all non-randomized and single-group studies [12,13]. Assessment using the Cochrane risk-of-bias tool is based on sequence generation, allocation concealment, blinding of participants, personnel and outcome assessors, incomplete outcome data, selective outcome reporting, and other sources of bias, such as baseline imbalance, early stopping bias, academic bias, and source of funding bias. The NOS is an instrument for assessing methodological quality and potential bias in non-randomized studies. A maximum of nine points were assigned to each study. Studies that scored four for selection, two for comparability, and three for assessment of outcomes were regarded as having a low risk of bias. Studies with two or three stars for selection, one for comparability, and two for outcome were considered as having a medium risk of bias. Any study with a score of one for selection or outcome, or zero for any of the three domains, was deemed as having a high risk of bias. A modification of the NOS was made for single-group studies, which consisted of excluding the points for comparability, with a maximum of six points: three for selection and three for outcome. After screening titles and abstracts, two reviewers (Knaapen L and Slater NJ) independently reviewed full-text articles for eligibility using the critical appraisal approach. Any disagreement was resolved by consensus with a third reviewer (van Goor).
Outcome measures
Studies were identified according to the following inclusion criteria: participants (human adults, minimum of 18 years of age), intervention (parastomal hernia repair with a synthetic or biologic mesh, or prophylactic placement of mesh), and sufficient data available (10 or more patients).
The following criteria were used for exclusion: Stoma relocation, primary suture repair, and unspecified surgical technique. Studies published only as abstracts were excluded because quality assessment could not be performed.
The primary outcome measure was the recurrence rate of parastomal hernia as defined by the respective authors. Secondary outcomes were wound infection, mesh infection, mortality, other complications (medical and surgical), anatomic position of the prosthesis and surgical approach (open or laparoscopic).
Data extraction and statistical analysis
All full-text articles that met the inclusion criteria were thoroughly reviewed, and the data for primary and secondary outcomes were extracted and recorded in a data form. Year of publication, study period, level of evidence, mean age, gender, number of patients included and evaluated, type of stoma, surgical technique (open or laparoscopic, anatomical mesh position, keyhole or Sugarbaker), type of mesh (biologic or synthetic) and duration of follow-up were also noted. Weighted pooled proportions with a 95%CI were determined for recurrence, wound infection, mesh infection, other complications and mortality using StatsDirect statistical software [14]. The heterogeneity concerning the outcome measures was determined with Cochran's Q test and quantified using I². A random-effects model was used unless heterogeneity was 0%, in which case a fixed-effects model was used. Meta-analysis was performed using Review Manager [15], with statistical significance set at p < 0.05.
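As a rough illustration of the pooling described above, the Python sketch below computes a DerSimonian-Laird random-effects pooled proportion on the logit scale, together with Cochran's Q and I². StatsDirect's exact implementation (e.g., its variance-stabilizing transform) may differ, and the event/total counts at the bottom are hypothetical, not data from the included studies.

```python
import numpy as np
from scipy import stats

def pooled_proportion(events, totals, alpha=0.05):
    """DerSimonian-Laird random-effects pooling of proportions on the
    logit scale; a sketch of the weighted pooling described in the text."""
    events = np.asarray(events, float)
    totals = np.asarray(totals, float)
    # continuity correction for zero or complete event counts
    boundary = (events == 0) | (events == totals)
    e = np.where(boundary, events + 0.5, events)
    n = np.where(boundary, totals + 1.0, totals)
    p = e / n
    y = np.log(p / (1 - p))                  # logit-transformed proportions
    v = 1 / (n * p) + 1 / (n * (1 - p))      # approximate within-study variance
    w = 1 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)       # Cochran's Q
    df = len(y) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1 / (v + tau2)                    # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    z = stats.norm.ppf(1 - alpha / 2)
    expit = lambda x: 1 / (1 + np.exp(-x))
    return expit(y_re), (expit(y_re - z * se), expit(y_re + z * se)), i2

# hypothetical event/total counts for five studies
est, ci, i2 = pooled_proportion([5, 4, 6, 3, 5], [20, 15, 18, 14, 17])
print(f"pooled = {est:.1%}, 95%CI = ({ci[0]:.1%}, {ci[1]:.1%}), I2 = {i2:.0f}%")
```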
RESULTS
A flowchart overview of the search including reasons for exclusion of studies is shown in Figure 1. A total of 44 studies were included. Five studies provided information on 84 biologic mesh repairs; 21 studies, on 669 synthetic mesh repairs; and 18 studies, on 500 prophylactic mesh placements.
The Newcastle-Ottawa Scale for quality assessment showed that all 37 non-randomized studies had a low risk of bias for study selection. The five non-randomized two-group studies showed a low risk of bias regarding comparability in 1 study (20%), medium risk in 2 studies (40%), and high risk in 2 studies (40%). The risk of bias for outcome assessment was low in 20 (54%) studies, medium in 15 (41%) studies, and high in two (5%) studies (Figure 3). Use of funding was not reported in 32 studies (73%). Five studies (11%) reported no funding [2,5,8,16,17]. Industry sponsored 4 biologic mesh studies (9%) [4,18-20]. The manufacturer supplied the mesh material in one biologic and one synthetic mesh study (5%) [21,22]. One study was state-funded, with no financial disclosures reported [23]. Fifty-three percent of patients were female, and the mean age was 64.6 years. The indication for stoma placement was reported in 32 studies: benign disease in 9%, malignant disease in 68%, inflammatory bowel disease or diverticulitis in 19% and other causes in 4%. Patient demographics, study characteristics and critical appraisals are described in Table 1.
Biologic mesh repair of parastomal hernias
Biological grafts used in the included studies were Surgisis, AlloDerm, Permacol and PeriGuard (Table 2). Five retrospective studies reported parastomal hernias that were repaired with a biologic mesh and included a combined enrolment of 84 patients. Patient follow-up ranged from 9-50 mo. One case of mortality was reported due to renal failure unrelated to the mesh [4]. Study characteristics and outcomes, including weighted-pooled rates of recurrence and wound-related complications, are shown in Table 3. Five studies reported 23 hernia recurrences, with a weighted-pooled proportion of 24% (95%CI: 8.6-44.1) (Figure 4).
Only three of these studies reported treatment after recurrence. Araujo et al [24] relocated the stoma, and Ellis et al [19] reported a reoperation using a bioprosthetic that was not further specified. Taner et al [25] reported two asymptomatic recurrences that were both treated conservatively. Four wound infections were reported, with a weighted-pooled proportion of 5.6% (95%CI: 1.4-12.1) [4,18,25]. One was conservatively treated, one was treated with systemic antibiotics, and two were treated with local wound care [4,18,25].
Synthetic mesh repair of parastomal hernias
Characteristics of the synthetic mesh used in the included studies are given in Table 2. One of the 21 studies was a prospective trial that recruited 12 patients with synthetic mesh repair and 13 control patients without mesh repair [26]. The other 20 studies had a combined enrolment of 669 patients with synthetic mesh repairs. Patient follow-up ranged from 7 to 51 mo.
One study did not specify mean or median follow-up. The overall mortality was 1.9% (11 patients; weighted-pooled proportion, 95%CI: 0.9-3.2). None of the deaths were related to the mesh. Four postoperative deaths were due to progressive metastatic disease, two deaths were due to aspiration and subsequent cardiopulmonary arrest, and two deaths were due to secondary cardiopulmonary complications [8,27-29]. Wara et al [5] reported one death due to a neglected bowel injury that resulted in multi-organ failure and another death due to uncontrollable bleeding that resulted from portal hypertension that was unknown prior to surgery. One postoperative death was reported by Mizrahi et al [2] following sepsis that was not further specified and was caused by an infected retroperitoneal haematoma, which necessitated a second operation. Study characteristics and outcomes, including weighted-pooled rates of recurrence and wound-related complications, are shown in Table 3. Nineteen studies reported hernia recurrences, with a weighted-pooled proportion of 15.1% (Table 3). Two studies reported 2 patients who required reoperation that involved relocation of their stoma and mesh repairs [27,28]. Van Sprundel et al [33] noted one hernia recurrence due to a wide circle cut in the mesh; in a second operation, the hernia content was removed, and the circle was narrowed with sutures. Ruiter and coworkers reported 5 patients who had the prosthesis definitively removed (not specified), 1 patient who had a smaller-sized prosthesis implanted and 1 patient who had only the hernia sac closed after midline laparotomy. Muysoms et al [27] noted one patient with a recurrence in whom a second laparoscopy was performed because of obstructive symptoms; the recurrence was treated with a modified Sugarbaker technique. Another patient needed a laparotomy for a colonic abscess due to Crohn's disease. After colonic resection and mesh removal, a translocation of the colostomy was performed. Two reoperations for parastomal hernia recurrences were described by Fei et al [34] and Berger et al [35] due to the breakdown of the sutures used for closing and keeping the mesh in place. Berger et al [35] reported three other patients who were treated with the sandwich technique and one with the Sugarbaker technique. All other described hernia recurrences were asymptomatic and treated conservatively. Surgical wound infection was mentioned in eleven studies reporting 17 patients, with a weighted-pooled proportion of 2.8% (95%CI: 1.6-4.4). Four studies reported treatment of wound infection [5,26,29,32]. Two patients were treated by surgical drainage, and five were treated with systemic antibiotics. Pastor et al [26] reported 1 patient with a parastomal abscess and subsequent fistula development repaired by laparotomy, transection of the fistula tract, and resiting of the ileostomy. Sixteen mesh infections were observed, with a weighted-pooled proportion of 3.1% (95%CI: 1.8-4.6), resulting in mesh removal from 14 patients. Other complications [17.8% (95%CI: 12.0-24.4%)] were seroma (31.1%), cardiopulmonary event (8.3%), urinary tract infection (0.8%), cutaneous/fascial dehiscence (0.8%), stoma complications (6.1%), ileus (9.9%), peritonitis (2.3%), postoperative bleeding (3.8%), haematoma (4.5%), bowel stenosis (14.4%), fistula formation (1.5%), renal failure (3%) and other (13.6%).
Five of the 41 seromas were treated by surgical drainage, 12 were conservatively treated, and 24 did not have any reported treatment [8,32,34,35] .
Comparison of biologic mesh repair and synthetic mesh repair: When comparing the prevalence of hernia recurrence and wound infection between biologic and synthetic mesh repairs, synthetic mesh repair showed comparable or even superior results (recurrence 24% vs 15.1%; wound infection 5.6% vs 2.8%, in favour of synthetic mesh).
Anatomic position of the prosthesis
Various mesh positions were applied for biologic mesh repair, including inlay, onlay, sublay and underlay (intraperitoneal) placement of the mesh. Two retrospective series reported on 40 cases that involved onlay mesh repairs. Hernias recurred in 31.3% (weighted-pooled proportion, 95%CI: 0.9-78.8) of patients. Smart et al [4] placed 16 stomas lateral to the rectus sheath, which showed a high recurrence rate (75%) compared to 11 stomas within the rectus sheath (27%) [4,24]. Ellis et al [19] placed the mesh intraperitoneally using the Sugarbaker technique. Two of 20 (10%) patients had a recurrent hernia. In the series of Taner et al [25], two of 13 (15%) patients had a recurrent hernia after a mean follow-up of 10 mo. One other study reported multiple surgical techniques (including inlay and onlay) and did not allow for stratified outcome extraction [18]. Considering the anatomical position for open synthetic mesh repair, 3 retrospective series of onlay synthetic mesh repairs, reporting a total of 119 repairs, were included in this study. Hernias recurred in 21.5% (weighted-pooled proportion, 95%CI: 14.7-29.3) of patients. In three studies, the mesh was placed in the sublay position, and 3 hernia recurrences with a weighted-pooled proportion of 8.1% (95%CI: 2.1-17.4) were reported.
The mesh was placed intraperitoneally by the open approach in three studies reporting 48 repairs (19 Sugarbaker and 29 keyhole technique repairs) [36-38]. The weighted-pooled proportion of recurrence was 8.8% (95%CI: 1.8-20.2). Seven studies described laparoscopic synthetic mesh repair using the Sugarbaker technique, and the weighted-pooled proportion of hernia recurrence was 10.9% (95%CI: 3.7-21.4). The keyhole technique was used in 8 studies, and hernia recurrence was reported in 35.6% (weighted-pooled proportion, 95%CI: 14.6-60.1).
Prophylactic mesh placement
Eighteen studies reported a total of 500 prophylactic mesh placements, which included 13 studies with 382 synthetic mesh placements and 5 studies with 118 biologic mesh placements. Follow-up ranged from 7-65 mo.
The overall mortality was 2.5% (21 deaths; weighted-pooled proportion, 95%CI: 1.3-4.2). None of the deaths were related to the mesh. Two postoperative deaths were due to progressive metastatic disease, one was due to a pulmonary thromboembolism, and two were due to cardiopulmonary complications [22,23,39-41]. Jänes et al [42] reported five deaths due to septic or cardiovascular complications not further specified. Fleshman et al [20] described eleven deaths, none of which were related to the device or treatment; these were not further specified. Study characteristics and outcomes, including weighted-pooled rates of hernia occurrence and wound-related complications, are shown in Table 6. When comparing prophylactic placement of biologic mesh with synthetic mesh, there was no significant difference in hernia occurrence (OR = 0.79, 95%CI: 0.40-1.55; p = 0.49) or wound infection (OR = 0.30, 95%CI: 0.07-1.28; p = 0.10). In the mesh group, 58 hernia occurrences were observed, with a weighted-pooled proportion of 11.5% (95%CI: 7.1-16.8) (Figure 6), and 31 wound infections, with a weighted-pooled proportion of 6.9% (95%CI: 3.6-11.1); no infections of the prosthesis were reported [0% (95%CI: 0-2.0)].
DISCUSSION
The current study evaluated and compared all the evidence regarding the use of biologic and synthetic mesh for repair and prevention of parastomal hernia. Interestingly, the comparison of biologic and synthetic mesh repairs showed a comparable or even superior result regarding parastomal hernia recurrence (24% vs 15.1%) and wound infection (5.6% vs 2.8%) in favour of synthetic mesh repair. Overall, the mesh infection rate was low. Only sixteen mesh infections were reported in 753 repairs (2.1%), which resulted in fourteen mesh removals (all synthetic meshes). However, these observations should be interpreted cautiously because of the low to moderate quality of the studies. Biologic mesh has gained widespread popularity in the context of infection and a contaminated environment because of its proposed advantages, including biocompatibility resulting in rapid vascularization and migration of host (immune) cells. It is thought that biologic prostheses are therefore less susceptible to infection than their synthetic counterparts. The Ventral Hernia Working Group regards parastomal hernia repair as potentially contaminated (grade 3) and therefore recommends biologic mesh repair [47]. Many authors believe that synthetic mesh should not be used in a contaminated environment or in close proximity to the bowel and stoma due to the risk of erosion and fistula formation. However, studies with high-level evidence are lacking, and the exact origins of these concerns are difficult to identify, are mostly anecdotal or reference old reports using inferior materials and techniques [48-50]. Primus and Harris criticized the surgical literature on the use of biologics in contaminated fields, arguing that the cumulative data do not support the claim that biologics are indicated for use in contaminated fields. The primary literature varies widely in terms of sample size, diagnosis of (recurrent) parastomal hernia, methods of mesh placement, follow-up period, reported hernia recurrences and surgical site infection [51]. Rosen et al [52] reported a critical review of the surgical literature on biologic mesh repair, which revealed that the majority of the studies evaluating the outcomes of biologic mesh actually report the repair of clean defects. This finding is very surprising given the high costs of biologic mesh, whereas the position of synthetic mesh in "clean" hernia repair has been proven. Despite the lack of high-grade evidence, biologic meshes are still preferred over synthetic mesh in contaminated fields, as noted by Bondre et al [53], who conducted a multicentre study on practice patterns in contaminated ventral hernia repair. This review shows a comparable to superior result of synthetic mesh over biologic mesh concerning parastomal hernia recurrence. This finding is confirmed by Lee et al [54] in a systematic review on ventral hernia mesh repair in contaminated fields. Mesh removal due to infection is a much-feared complication. The literature suggests that biologic mesh does not prevent infection but can be more easily salvaged when infection arises [55]. This review challenges the concept that contaminated hernias should be repaired with expensive biologic mesh: only sixteen mesh infections were seen in the current review, resulting in mesh removal from 14 patients. Surgeons should therefore carefully balance the risks and costs against the benefits when deciding on the choice of mesh for parastomal hernia repair.
Similar to ventral hernia repair, the prosthesis is placed in either the inlay, onlay, sublay, or underlay (intraperitoneal) position during parastomal hernia repair. None of the included studies used an inlay placement of the prosthesis. Onlay mesh repair showed the highest recurrence rate, whereas the sublay technique showed the lowest in the current study. There was no difference in wound and mesh infection rates between the various anatomic positions. However, firm conclusions cannot be drawn from this subanalysis because these results were obtained from small groups. Each method of mesh repair has its own theoretical advantages and disadvantages. Laparotomy is avoided with the onlay technique, but it requires extensive dissection of subcutaneous tissue, which predisposes patients to haematoma and seroma formation. Disruption of skin vascularization may lead to impaired wound healing. Additionally, intra-abdominal pressure may lead to lateral detachment of the prosthesis, resulting in higher recurrence rates. The sublay technique protects the mesh from bacterial contamination while minimizing contact with the bowel, because the mesh is enveloped in well-vascularized tissue, whereas the fascia and peritoneum form a natural barrier between the prosthesis and the abdominal organs. This technique reduces the risk of infection, adhesion or fistulation. The anatomic positions of the sublay and intraperitoneal mesh techniques are more attractive because they benefit from intra-abdominal pressure, which helps to keep the mesh in place.
When performing laparoscopic intraperitoneal repair, there was a significantly lower recurrence rate of parastomal hernia using the Sugarbaker technique compared to the keyhole technique (10.9% vs 35.6%, OR = 0.35; 95%CI: 0.21-0.59; p < 0.0001). Remarkably, it appears that all failures using the keyhole technique were related to the use of an e-PTFE mesh. As noted by Hansson et al [9], when using the keyhole technique, estimation of the size of the hole is difficult, as mesh shrinkage may result in enlargement of the central hole and reherniation.
Unfortunately, the recurrence rate is still up to one third after mesh repair of parastomal hernias.
Our systematic review with meta-analysis shows that prevention of parastomal hernia by the use of mesh at the time of stoma formation significantly reduces the incidence of parastomal hernia compared to the conventional stoma group (14.9% vs 46.8%; OR = 0.20, 95%CI: 0.08-0.50; p = 0.0006). Interestingly, placement of preventive mesh did not result in increased wound infection or mesh infection. Recently published reviews also confirm our conclusion that prophylactic insertion of a mesh when forming a stoma prevents parastomal hernia without increasing the incidence of wound infections or other mesh-related complications [56,57]. One point of discussion remains whether universal reinforcement is expedient and cost-effective. Other non-mesh prophylactic measures can be considered, such as stomas positioned through the lateral rectus abdominis or extraperitoneally [58,59]. Most patients who develop a parastomal hernia are asymptomatic. However, complications of an untreated parastomal hernia (incarceration, obstruction, strangulation) can be severe and are associated with significant morbidity and mortality. Identification of patients in whom reinforcement is beneficial is essential, as the patient can avoid unnecessarily longer operative time, costs and possible long-term complications associated with mesh placement. As noted by Hotouras et al [60], risk factors for parastomal hernia formation include abdominal obesity, increasing age, corticosteroid use, poor nutritional status, increased intra-abdominal pressure, connective tissue disorders and other disorders that predispose patients to wound infection, such as diabetes. Factors that need to be considered include the reason for the stoma (temporary or permanent), patient comorbidity, the chance of reoperations and risk factors for parastomal hernia formation. Patients undergoing stoma formation with short life expectancies will often not survive long enough to develop a parastomal hernia, and patients who are healthy enough to undergo stoma reversal before hernia occurrence would not benefit from prophylactic mesh placement.
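To make the reported effect size concrete, the sketch below computes an odds ratio and a Woolf-type 95%CI for a single hypothetical 2×2 table whose event rates roughly match the pooled percentages above (15% vs 47.5%). The counts are illustrative only; the review itself pooled effects across trials in Review Manager rather than computing a single-table OR.

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 table: rows = (prophylactic mesh, conventional stoma),
# columns = (hernia, no hernia). Counts are illustrative, not pooled data.
a, b = 6, 34    # mesh group: events, non-events  (6/40 = 15%)
c, d = 19, 21   # conventional group: events, non-events (19/40 = 47.5%)

or_ = (a * d) / (b * c)
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)       # Woolf's method
z = stats.norm.ppf(0.975)
lo, hi = np.exp(np.log(or_) + np.array([-z, z]) * se_log_or)
print(f"OR = {or_:.2f}, 95%CI = ({lo:.2f}, {hi:.2f})")  # OR ~ 0.20
```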
The median direct cost of complex ventral hernia repair with biologic mesh ($16,970) is more than twice that of repair with synthetic mesh ($7,590) [61]. Parastomal hernia repair probably costs less due to the need for smaller meshes; however, a substantial cost difference is expected to remain. Figel et al [62] calculated that, using a bioprosthetic and considering a 30% incidence of surgical management of parastomal hernia, prophylaxis would be cost-effective if the prosthesis cost less than $4312. The decision to place prophylactic mesh at stoma formation must be patient-tailored and may certainly be justified in selected patients. However, standard application in all patients does not seem warranted. More adequately powered randomized controlled trials addressing risk stratification and the costs of biologic and synthetic mesh use are needed.
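The break-even logic attributed to Figel et al [62] can be written out in a few lines: prophylaxis pays for itself when the mesh price is below the expected cost of the hernia repairs it avoids. The repair cost below is a hypothetical input, chosen only so that the quoted 30% incidence reproduces the ~$4312 threshold; the actual figure used by Figel et al is not given in this text.

```python
# Break-even sketch for prophylactic bioprosthetic mesh placement.
repair_incidence = 0.30       # fraction of patients needing surgical repair
cost_per_repair = 14373.0     # hypothetical cost of one hernia repair (USD)

breakeven_mesh_price = repair_incidence * cost_per_repair
print(f"cost-effective if the prosthesis costs < ${breakeven_mesh_price:,.0f}")
# -> cost-effective if the prosthesis costs < $4,312
```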
Most of the included studies are retrospective cohorts (level 3 evidence), which could introduce selection and information bias and are affected by heterogeneity. Most study populations were diverse, with different types of stomas and indications for the initial surgery. The high recurrence rate for biologic parastomal mesh repairs was mostly determined by one study: a 75% recurrence rate for 16 stoma repairs lateral to the rectus sheath compared to a 27% rate when the repair was within the rectus sheath. As noted by Smart et al [4], parastomal hernia repairs in which the stoma lies lateral to the rectus sheath had a significantly higher risk of recurrence, likely due to the inherent strength of the tissue onto which the onlay mesh was sutured.
Unfortunately, reporting was insufficient to allow proper stratification for individual risk factors for parastomal hernia. Follow-up time and the diagnostic modalities used for determining recurrence rates had a strong impact on the outcome. The longer the follow-up period was, the more recurrences were found. In addition, the diagnostic modalities differ in terms of sensitivity and specificity, and some of the recurrences found may be of no clinical relevance. Reported follow-up periods within and between studies varied from 7 mo to 51 mo. As recurrence occurs mostly in the first years after the operation, a minimum follow-up of 12 mo seems appropriate.
Parastomal hernia, wound infection and mesh infection were poorly defined in most studies, and the modality used to determine hernia recurrence (e.g., clinical evaluation or CT imaging) was often not clearly stated. Therefore, the results of this review should be interpreted with care.
In an effort to reduce the effect of low-quality studies, we excluded the randomized controlled trials with a high risk of bias from the prophylactic mesh meta-analysis. Only three studies considered to be of sufficient methodological quality remained, and a second meta-analysis was performed [20,23,40]. No significant difference was found in the occurrence of parastomal hernia when comparing the prophylactic group to the conventional group. However, given the large number of parastomal hernia repairs included in the current report, meaningful conclusions may be drawn regarding the optimal surgical management of parastomal hernia with synthetic and biologic mesh repair.
Clinical implications
The current evidence suggests there is no superiority of (more expensive) biologic mesh over synthetic mesh for parastomal hernia repair with regard to hernia recurrence, wound infection and mesh infection. In the context of cost-effective healthcare, careful consideration must be given to the choice of materials [55]. Sublay seemed to be the most advantageous anatomic position for the mesh, as this position resulted in the lowest recurrence rate and protects the mesh from bacterial contamination while minimizing contact with the bowel. No difference was found in parastomal hernia recurrence between open and laparoscopic parastomal hernia repairs. When performing laparoscopic repair, the keyhole technique should be abandoned in favour of the Sugarbaker technique when using an ePTFE mesh because of much higher recurrence rates. As shown by Wara et al [5], the keyhole technique can be considered when using a polypropylene-based mesh or with open parastomal keyhole hernia repairs.
Prophylactic mesh placement at the initial surgery significantly reduced parastomal hernia occurrence in the mid- to long-term without increasing wound infection or mesh infection. However, it has yet to become clear what the long-term results will be. The number of recurrences will increase over time, though at a slower pace than in the first few years after mesh placement. The same applies to some specific long-term side effects such as mesh infection and mesh-related fistulas. Although their incidence may be low, their impact is disproportionately high.
Identification of patients in whom reinforcement is mandatory is essential, as the patient can avoid unnecessarily longer operative time, costs and possible long-term complications associated with mesh placement.
Altogether, there is still not enough evidence to recommend the use of a biologic mesh over synthetic mesh under contaminated conditions in general, and specifically not for parastomal hernia repair. Prophylactic mesh reinforcement during stoma formation significantly reduces parastomal hernia occurrence regardless of mesh type. Yet, a significant number of patients will develop an asymptomatic parastomal hernia, and there are no data on the long-term effects of preventive mesh placement. Therefore, it is essential to select the right patients for whom prophylactic reinforcement is mandatory.
Background
Parastomal hernia develops in 50% of patients. Hernias are often asymptomatic and managed with conservative treatment; however, 11% to 70% of patients undergo surgery due to discomfort, pain, obstructive symptoms and cosmetic dissatisfaction. Although standard care is mesh repair, prevention by prophylactic mesh placement is gaining popularity. The use of biologic mesh is becoming more popular, as it is claimed to cause fewer infections while sustaining the durability of the repair compared with synthetic mesh. The primary aim of the current study was to compare biologic and synthetic mesh use for the treatment and prevention of parastomal hernia by systematic review and meta-analysis of available data in the literature. The secondary aim was to evaluate the different anatomical positions and surgical techniques used for parastomal hernia repair.
Research frontiers
The recurrence rate of parastomal hernia is the lowest after mesh repair (0%-33%), whereas primary fascial closure (46%-100%) and relocation of the stoma (0%-76%) result in much higher rates. Although low recurrence rates are reported after synthetic mesh repair, concerns have been raised regarding the safety of synthetic meshes in (potentially) contaminated fields due to the risk of mesh infection and subsequent removal.
Innovations and breakthroughs
Biologic mesh was first introduced in the 1980s and was developed with the concept that, due to its biodegradable nature, it has the potential to ameliorate problems in infected and contaminated fields. No clear answer can be given as to whether there is a difference in the clinical outcomes between synthetic and biologic mesh repairs. The high prevalence of parastomal hernia and the difficulty of repair have led to a shift of focus from repair towards prevention using prophylactic mesh reinforcement at the time of stoma formation.
Applications
This review and meta-analysis suggests there is no superiority of biologic over synthetic mesh for parastomal hernia repair with regard to parastomal hernia recurrence, wound infection and mesh infection. Prophylactic mesh reinforcement during stoma formation significantly reduces parastomal hernia occurrence regardless of the mesh type. Identification of patients for whom reinforcement is mandatory is essential, and mesh reinforcement should be reserved for selected patients.
Terminology
Ostomy formation requires the creation of a full-thickness defect within the abdominal wall. Parastomal hernia is a type of incisional hernia that allows protrusion of abdominal contents through the created abdominal wall defect. Both synthetic mesh and biologic mesh (acellular collagen matrix) are used in parastomal hernia repair. There are various approaches regarding the anatomic position of the mesh during parastomal hernia repair. Meshes can be implanted in an inlay (within the fascial defect), onlay (over the fascia), sublay (below the anterior fascia and muscular level but above the peritoneum) or underlay (intraperitoneal) position. Laparoscopic repair involves the intraperitoneal technique, and open repair may involve any of the anatomical planes of the mesh. When performing intraperitoneal repair, the choice can be made between the Sugarbaker and keyhole repair techniques.
Peer-review
In this systematic review, the authors have presented a thorough and critical analysis of biologic and synthetic mesh use for the treatment and prevention of parastomal hernias. With a focus on hernia recurrence in the absence of rigorous data in the literature, the current review contributes to the increased understanding of parastomal hernias. | 2019-03-17T13:10:30.042Z | 2017-12-26T00:00:00.000 | {
"year": 2017,
"sha1": "f19b8f752a405dfd42ee8972541314e05607603b",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.13105/wjma.v5.i6.150",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "17624eb013eaf09de692d6d93b0663994846505f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
73718244 | pes2o/s2orc | v3-fos-license | Human Papillomavirus Infection in Pregnant Adolescentes : Is There an Association Between Genital and Mouth Infection ?
The prevalence of HPV infection in the genital region varies from 5.9% to 81.7% for adolescents [22,23]. During pregnancy, the genital area of young women is more vulnerable to HPV infection than that of older women [10]. The prevalence of genital HPV infection in young pregnant women varies from 49% to 60% in different countries [24-26].
Nearly 23% of healthy Brazilian individuals are infected by HPV in the oral mucosa [27]. HPV may infect the oral epithelium in a latent and asymptomatic form, or it may produce oral lesions. The virus has been detected at sites of periodontal disease in immunocompetent and immunosuppressed individuals [28-31]. Thus, it has been suggested that the periodontium may be a reservoir for HPV in the mouth [28].

Subgingival samples were collected from each subject. Prior to collection, supragingival plaque and saliva were removed from the teeth using sterile cotton rolls. A sterile 11-12 Gracey periodontal curette (Hu-Friedy, Chicago, IL, USA) was gently inserted into the periodontal pocket, and the subgingival material was collected. The samples were stored in dry sterile tubes and kept at -80 ºC until analysis.
Laboratorial analysis
The slides of the cervical and oral samples were stained by the Papanicolaou (Pap) method and were evaluated by a cytopathologist using the Bethesda criteria.
Statistical analysis
Descriptive data analysis was reported as the absolute frequency and percentage for categorical variables, and as the mean and standard deviation for continuous variables. The Fisher exact test was used to verify differences between HPV presence in the uterine cervix and in periodontal disease. The Kappa test was used to assess the agreement among the three methods of diagnosing HPV infection (clinical, cytological and molecular). P values ≤ 0.05 were considered statistically significant.
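A minimal sketch of the two tests named above, using standard Python libraries; the 2×2 table and the rating vectors are illustrative placeholders, not the study's data.

```python
from scipy.stats import fisher_exact
from sklearn.metrics import cohen_kappa_score

# Fisher exact test on a hypothetical 2x2 table:
# rows = cervical HPV (positive / negative), columns = gingivitis (yes / no)
table = [[14, 3],
         [6, 7]]
odds_ratio, p_value = fisher_exact(table)
print(f"Fisher exact: OR = {odds_ratio:.2f}, p = {p_value:.3f}")

# Cohen's kappa between two diagnostic methods (1 = HPV positive, 0 = negative)
clinical  = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]
molecular = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]
print(f"kappa = {cohen_kappa_score(clinical, molecular):.2f}")
```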
Socio-Demographic characteristics
Thirty-one pregnant adolescents were recruited for the study, but one refused to participate. The mean age of the 30 included subjects was 15.2 years (standard deviation [SD] ± 1.3), and the mean gestational period was 28.8 weeks (SD ± 7.3). Social and behavioral characteristics are shown in Table 1. The number of sexual partners reported by the subjects varied from 1 to 10, with a mean of 2.4 partners (SD ± 2.3). The practice of oral sex was reported by 18 (60%) of them, while the other 12 (40%) reported not having this type of sexual practice.
Clinical, cytological and molecular exams
In the clinical exam of the genital area, eight (23%) subjects presented HPV-induced lesions (condyloma acuminatum). Five of them showed lesions in the external genitalia and three in both the external and internal genitalia.
In the cytological analysis of the uterine cervix, all the smears presented appropriate material for the evaluation. Three (10%) subjects presented HPV-induced cell changes.

An additional aim of the study was to evaluate the concordance of the clinical exam and the cytological/molecular HPV methods in identifying HPV infection.
Design
This was a cross-sectional study in which pregnant adolescents attending the Maternity School of the Universidade Federal do Rio de Janeiro (UFRJ/Brazil) were evaluated for HPV infection in the mouth and genital areas.
Population
A convenience sample of adolescents seeking assistance for prenatal care during a period of six months represented the population of the study. All pregnant teenagers from the Maternity School of UFRJ were invited to participate in the study. Individuals were included if they were adolescents aged between 10 and 19 years, according to the World Health Organization definition, and agreed to participate in the study. Subjects were excluded if they presented any other genital infection. The research was approved by the institutional review board at the University, and all subjects signed a consent form.
Clinical analysis
The pregnant adolescents were invited to participate in the study during the routine gynecological appointment. A complete examination of the genital and oral regions was performed by an experienced gynecologist and dentist, respectively, to investigate for HPV infection. Clinical, socio-demographic and behavioral characteristics were collected from medical records and interviews.
In the genital region, the external genitalia (labia minora and majora) and the perianal and anal regions were inspected. The internal genitalia were examined after insertion of the speculum and application of 2% acetic acid, in order to identify possible staining suggestive of HPV infection.
The oral exam was performed on a hospital gurney with a forehead light-emitting diode (LED) lamp. Acetic acid was not applied because it is not considered a good indicator of HPV infection in the oral tissues [32]. A complete periodontal exam was performed by a trained periodontist, and the reliability of the evaluation was tested (P = 0.885, intraclass correlation coefficient [95% CI: 0.883-0.887]). The gingival index system [33], probing depth, clinical attachment level, and bleeding on probing were obtained and measured with a conventional North Carolina periodontal probe (Hu-Friedy, Chicago, IL, USA). Six sites per tooth were measured in a full-mouth exam. Periodontal disease (gingivitis and periodontitis) was diagnosed through the evaluation of these parameters. Gingivitis was diagnosed when supragingival bleeding was present in > 10% of the sites [34]. Periodontitis was considered if the clinical attachment level was ≥ 5 mm [35] in at least 4 sites, and bleeding on probing was observed in 3 different teeth.
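The diagnostic thresholds above translate directly into a small classification rule. The Python sketch below encodes them for illustration; the function name and inputs are hypothetical conveniences, not part of the study protocol.

```python
def periodontal_status(bleeding_sites, total_sites, cal_mm_by_site, bop_teeth):
    """Classify periodontal status from the study's stated criteria.

    bleeding_sites / total_sites: supragingival bleeding over all measured sites
    cal_mm_by_site: clinical attachment levels (mm), one value per site
    bop_teeth: set of tooth numbers showing bleeding on probing
    """
    gingivitis = bleeding_sites / total_sites > 0.10               # > 10% of sites
    periodontitis = (sum(cal >= 5 for cal in cal_mm_by_site) >= 4  # CAL >= 5 mm in >= 4 sites
                     and len(bop_teeth) >= 3)                      # BOP in 3 different teeth
    if periodontitis:
        return "periodontitis"
    if gingivitis:
        return "gingivitis"
    return "normal"

# example: 20 of 168 sites bleeding (11.9%), no deep attachment loss
print(periodontal_status(20, 168, [2, 3, 2, 4, 3], {11, 24}))  # -> gingivitis
```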
Sample collection
Cytology foam brushes were used to collect two samples from the uterine cervix and two from the mouth. Each sample was taken with six full turns of the brush. The first sample from the uterine cervix was spread on a slide and fixed in 70% alcohol for cytological analysis. The second sample was collected from the same region and placed in a tube containing 3 ml of specimen transport medium (STM) buffer solution (Papillocheck® collection kit, Greiner Bio-One GmbH, Germany). The tube was cooled at 4 °C until HPV genotyping.
In the mouth, the samples were taken from the dorsum of the tongue and the area between the hard and soft palate, with the same technique. The first sample was spread on a slide and fixed in alcohol, and the second sample, taken from the same area, was placed in STM and cooled until genotyping.
Additionally, four subgingival biofilm samples were collected from the deepest sulcus sites identified in the periodontal exam. HPV infection of the uterine cervix was significantly more frequent in subjects who showed gingivitis than in those who did not present gingivitis (P = 0.04, Fisher exact test). This association was not found in the subjects who presented periodontitis.
One hundred and twenty subgingival biofilm samples were collected (the four deepest pockets per subject). HPV was identified in one (0.8%) of the subgingival samples (low risk, HPV 6). Gingivitis was observed in the adolescent with positive HPV in the gingival sulcus, but the site where HPV was detected presented a normal clinical aspect. This adolescent, who exhibited HPV in the biofilm, also exhibited HPV in the uterine cervix (high risk, HPV 56). Of the three subjects with HPV-induced cell changes in the cytological analysis of the uterine cervix, two (6%) samples presented low-grade squamous intraepithelial lesions and one (3%) slide showed atypical squamous cells of undetermined significance (ASC-US).
In the microarray assay of the uterine cervix, seventeen (56.7%) subjects were positive for HPV. One adolescent did not present conclusive cytological alterations for HPV infection (ASC-US) but was positive for HPV 6 and HPV 56 in the microarray assay (Papillocheck®). Fourteen (51.9%) of the 27 subjects who did not show HPV-induced cell changes in the Pap test were positive for HPV DNA (Table 2). The most prevalent subtype was HPV 16 (n = 4; 23.5%), followed by HPV 68 (n = 3; 17.6%). Table 2 shows the different subtypes identified by the microarray test. Multiple infections were present in eight (47.1%) subjects, and one of them exhibited five different HPV subtypes (high-risk HPV 16, 31, 58, 39, 73).
In the mouth, no HPV-induced lesions were found in the clinical exam. In the cytological analysis, all the samples showed appropriate material for the evaluation, but none of them presented HPV-induced cytopathic cells. The microarray assay did not identify HPV DNA in any oral smear sample.
Status of periodontal disease
None of the adolescents presented normal periodontal status. Table 3 shows the frequency of gingivitis and periodontitis in the studied population and their association with the microarray results.
Genital and mouth HPV infection association
There was no association of HPV infection in the uterine cervix with infection in the mouth. The simultaneous presence of HPV infection in the genitalia and the tongue/palate area was not observed in any subject.
There was no concordance between the clinical exam, the cytological exam and the molecular HPV assay in identifying HPV infection in the genital area (clinical and cytologic, P = 0.10; clinical and molecular, P = 0.19; cytologic and molecular, P = 0.15; Kappa test). The concordance between methods for HPV detection could not be tested in the oral mucosa, because none of them detected HPV infection. In the periodontium, the single positive sample was not enough for correlation.
Discussion
More than half of the pregnant adolescents in the present study presented HPV infection of the uterine cervix, but none of them presented HPV infection of the tongue/palate. There was no association between genital and oral HPV infection in the studied population. This is in agreement with other studies that evaluated older populations of pregnant women [10,11]. This association was reported in only one case report of a pregnant adolescent in the literature [36] but had never been studied in a group of pregnant adolescents.
The lack of association between HPV infection in the mouth and the genital area has also been observed in other populations [10,12,13,15]. However, there are some studies that reported an association of HPV infection in these two regions [2,4-6,8,9,32]. Studies in the Brazilian population were performed in groups of non-pregnant women, men and heterosexual couples, and they are also controversial [2,6,12,24,25]. Some risk factors have been suggested for concomitant infection in the genital area and the mouth, such as orogenital sex practice [4,5,9,32], young age at first intercourse [5], and alcohol consumption [4]. In this study, more than half of the adolescents reported oral sex practices, but HPV DNA was not found in any sample from their tongue/palate. Among the Brazilian studies that showed an association of HPV infection in the genitalia and the mouth, smoking was considered a risk factor in one study [6], but orogenital sex was not regarded as a risk factor [2].
Pregnancy has been identified as a risk condition for HPV infection in the cervical region [8,22,24]. The present study was not designed for risk calculation; therefore, we did not have a control group of non-pregnant adolescents for comparison. However, almost half of the adolescents presented multiple high-risk HPV DNA infections, which is a higher frequency than that observed in other studies [37,38]. HPV 16 was the most prevalent subtype observed in the present study, as in other studies [39].
The majority of the pregnant adolescents in the studied population did not show HPV-induced cytological changes in the uterine cervix, but over fifty percent of them had positive HPV DNA on the microarray assay. This is a common finding in other studies as well [22,39,40]. According to the Bethesda criteria, cellular changes on smears of the cervix are not conclusive for the absence of HPV. Molecular tests are better suited to detect viral DNA.
There was no HPV infection in the mouth (tongue/palate) of the pregnant adolescents in the clinical, cytological and molecular evaluations. Cytology is not considered the first-choice method for the analysis of HPV infection in the mouth when patients do not present lesions, and it does not seem to be a reliable screening technique in the clinically healthy oral mucosa [32]. Furthermore, the oral sample collection may not be representative of the whole oral mucosa. Despite these arguments, the results of the three methods agreed on the absence of HPV infection in the mouth in the studied population.
There are specific estrogen receptors in gingival cells [41]. Hormonal peaks that occur in adolescence and pregnancy may change the immune response and thus influence susceptibility and resistance to infections in the periodontal tissues during these periods of a woman's life [41]. Many studies have shown controversial results regarding the presence of HPV at the site of periodontal disease [28,31,42,43]. In none of these studies were oral sex habits investigated and related to the presence of HPV in the periodontium. The only pregnant adolescent in this study who presented HPV DNA in the biofilm reported having had oral sex one week before the sample collection.
The patient with positive HPV in the biofilm presented gingivitis, but not at the site of sample collection. Perhaps the lack of association was a result of non-advanced periodontal disease due to young age. However, it was observed that subjects with HPV infection of the uterine cervix presented more gingivitis than those negative for HPV. Pregnant women are more susceptible both to gingivitis and to HPV infection in the uterine cervix because of hormonal changes [23,42]. Thus, these findings are probably consequences of pregnancy and may not be related to each other.
This study had some limitations. Although it showed agreement with the studies that evaluated pregnant women, our sample size was relatively small and composed only of pregnant adolescents. Moreover, the young age of the subjects limited the evaluation of some social and behavioral characteristics that may be related to HPV infection but require more cumulative lifetime experience.
Conclusions
There was no association between HPV infection in the genital and mouth regions in the studied population of pregnant adolescents. The methods used to detect HPV infection-induced lesions showed concordance in the mouth, but not in the genital region.
Table 1 :
Socio-demographic characteristics of the 30 pregnant adolescents.
Table 2 :
HPV infection evaluation in the 30 pregnant adolescents.
Table 3 :
Periodontal status of 30 pregnant adolescents according to HPV infection in the cervix through the microarray results. | 2018-12-29T13:06:13.980Z | 2016-12-31T00:00:00.000 | {
"year": 2016,
"sha1": "ebb0d5d4719d90a5bb3eb0667b4ce6a450e45c19",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.23937/2469-5734/1510037",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "ebb0d5d4719d90a5bb3eb0667b4ce6a450e45c19",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
199046337 | pes2o/s2orc | v3-fos-license | Article J Comparisons of Test-Retest Reliability of Strength Measurement of Gluteus Medius Strength between Break and Make Test in Subjects with Pelvic
Purpose: The purpose of this study was to compare the reliability of unilateral hip abductor strength assessment in side-lying between the break and make tests in subjects with pelvic drop. The hip abductor muscles are very important structures of the hip joint. Therefore, it is essential to evaluate their strength in a reliable way. Methods: Twenty-five subjects participated in this study. Unilateral isometric hip abductor muscle strength was measured in side-lying, with use of a specialized tensiometer using the Smart KEMA system for the make test and of a hand-held dynamometer for the break test. Coefficients of variation and intraclass correlation coefficients were calculated to determine the test-retest reliability of hip abductor strength. Results: In the make test, maximal hip abductor strength in the side-lying position was significantly higher compared with the break test (p<0.05). Additionally, the test-retest reliability of hip abductor strength measurements in terms of coefficients of variation (3.7% for the make test, 16.1% for the break test) was better in the side-lying position with the make test. All intraclass correlation coefficients with the break test were lower than with the make test (0.90 for the make test, 0.73 for the break test). Conclusion: The side-lying body position with the make test offers more reliable assessment of unilateral hip abductor strength than the same position with the break test. The make test in side-lying can be recommended for reliable measurement of hip abductor strength in subjects with pelvic drop.
INTRODUCTION
The gluteus medius (Gmed) muscle is very important in maintaining the stabilization of the hip joint. 1 The Gmed acts as a hip abductor and as a dynamic stabilizer of the hip joint, especially during single-limb stance and walking 2 and in the side-lying position. 3 Insufficient hip abductor muscle strength may result in a Trendelenburg gait. 4 Pelvic drop (PD) is defined as a drop of the pelvis due to Gmed weakness on the weight-bearing side in the one-leg standing position. 1,4 PD can be caused by insufficient Gmed muscle strength, as in a positive Trendelenburg sign. 4 Optimal Gmed strength is required to maintain the height of the top of the iliac crest in the one-leg standing position. The Gmed strength of the weight-bearing side in the one-leg standing position is tested with the Trendelenburg sign; weakness of the tested side of the Gmed contributes to pelvic drop in the one-leg standing position. The Gmed of the tested side in the side-lying position has been investigated in relation to lumbopelvic stabilization with the core muscles. 3 In addition, the findings of a previous study reported that pelvic height was increased in subjects with Gmed weakness in side-lying. 3 Therefore, excessive changes in Gmed strength cause functional limitations and movement impairments during standing and side-lying. 1 A hand-held dynamometer (HHD) is a common tool used to clinically measure muscle strength. 5,6 The advantages of the HHD include being a quick tool that provides objective values in clinical and experimental settings. However, Schwartz 7 reported that the HHD is less sensitive for muscles graded more than 4. In addition, HHD measurements may vary between examiners. No prior study has compared the test-retest reliability of strength measurement of the Gmed between break and make tests in subjects with PD. The purpose of the present study was to determine the test-retest reliability of the break test and make test for strength measurement of the Gmed in a side-lying position. We hypothesized that the reliability would be better in the side-lying position with the make test because of its more stable counterbalanced resistance than the break test.
The results of this study may guide the preferred measurement regarding clinical techniques for testing Gmed performance.
Subjects
G*Power software (ver. 3.1.2, University of Kiel, Germany) was used for a pilot study of seven participants. The sample size calculation was conducted with a power of 0.80, an alpha level of 0.05, and an effect size of 1.41. The result indicated that the required sample size for the study was at least fifteen participants. Twenty-five subjects with PD, aged 20-30 years, were enrolled in this study (Table 1). The side for measurement was defined as the side opposite the PD while the subject was in a one-leg standing position, indicating the weak side of the Gmed. 4 The specific resistance region for the Gmed muscle was placed at the lateral malleolus in the side-lying position, and a straight line was marked on the skin at the same region to minimize regional differences. 2 All measurements were performed on the same day to assess test-retest reliability. The order of the make and break tests was randomized. Prior to the experimental procedure, the examiners and subjects were familiarized with the break and make tests to minimize measurement errors.
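The a-priori power analysis can be reproduced approximately with statsmodels. G*Power's exact test family and tail settings are not stated in the text, so the sketch below assumes a two-sided independent-samples t-test; under different assumptions (e.g., a paired design), the resulting n will differ from the fifteen participants the authors report.

```python
from statsmodels.stats.power import TTestIndPower

# a-priori sample size for alpha = 0.05, power = 0.80, effect size d = 1.41
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=1.41, alpha=0.05,
                                   power=0.80, alternative='two-sided')
print(f"required n per group ~ {n_per_group:.1f}")
```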
The examiner provided support by holding the ipsilateral iliac crest to minimize pelvic compensations (Figure 1). 10 A tensiometer using a non-elastic band was used to measure Gmed strength for the make test. For the break test, the examiner used the HHD to measure Gmed strength, with support by holding the ipsilateral iliac crest to minimize pelvic compensations (Figure 2). 10 For the measurements, the knee joint on the tested side was fully extended during the measurement. The duration of the contractions was approximately 5 seconds to measure Gmed strength. The maximal strength provided by the tensiometer (in kg) was retained. Each task was performed 3 times, and the highest force was selected.
DISCUSSION
The purpose of the current study was to determine the test-retest reliability of the make and break tests for strength measurement of the Gmed in the side-lying position in subjects with PD. We believe that the present research is the first reported study to investigate the test-retest reliability of the make and break tests for strength measurement of a weak Gmed in a functional position such as side-lying. The results of this study showed that the test-retest ICC for the make test was higher than that of the break test in subjects with PD. The break test, which used an HHD, is the conventional way to measure muscle strength in clinical settings, whereas the make test, which used a tensiometer, is less common. These tests assess muscle contraction based on differences in resistance between isometric (make test) and eccentric (break test) contractions.
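For readers unfamiliar with the two reliability indices, the sketch below computes a one-way random-effects ICC(1,1) and a mean within-subject coefficient of variation from a subjects × trials matrix. The paper does not state which ICC form was used, and the strength values at the bottom are hypothetical.

```python
import numpy as np

def icc_one_way(data):
    """One-way random-effects ICC(1,1) for a subjects x trials matrix."""
    data = np.asarray(data, float)
    n, k = data.shape
    grand = data.mean()
    subj_means = data.mean(axis=1)
    ms_between = k * np.sum((subj_means - grand) ** 2) / (n - 1)
    ms_within = np.sum((data - subj_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

def cv_percent(data):
    """Mean within-subject coefficient of variation (%): SD/mean per subject."""
    data = np.asarray(data, float)
    return float(np.mean(data.std(axis=1, ddof=1) / data.mean(axis=1)) * 100)

# hypothetical repeated strength measurements (kg): rows = subjects, cols = trials
strength = [[12.1, 12.4], [9.8, 10.1], [14.0, 13.6], [11.2, 11.5]]
print(f"ICC = {icc_one_way(strength):.2f}, CV = {cv_percent(strength):.1f}%")
```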
There are several possible explanations for these findings. The fact that the make test measures strength by assessing isometric contractions may have contributed to its higher reliability compared to the break test in this study. A tensiometer using a non-elastic band was employed to maintain hip abduction at a consistent angle in the side-lying position. Although fatigue of the Gmed muscle occurred, the non-elastic band used for the make test may have contributed to maintaining a consistent abduction angle, 4 because the end position of hip abduction in the side-lying position was controlled within the acceptable range of the band. In contrast, the break test using an HHD may have allowed variations in the abduction angle of the hip joint in the side-lying position depending on Gmed performance, especially in subjects with PD. In addition, the strength of the Gmed with the break test was smaller than with the make test because of the length-tension relationship and the insufficient performance of the Gmed muscle. 4,7,8 A previous study reported that the electromyography (EMG) activity of the Gmed was significantly increased and that of the quadratus lumborum significantly decreased with lumbar stabilization. 3 Although
"year": 2019,
"sha1": "e540c3141874bc9a6e4e8103b2a0fe090bb1d55a",
"oa_license": "CCBYNC",
"oa_url": "http://www.kptjournal.org/journal/download_pdf.php?doi=10.18857/jkpt.2019.31.3.147",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "0af7a38bdd9ff000946bbb279352987a30f0b342",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252095713 | pes2o/s2orc | v3-fos-license | Surface morphology engineering of metal oxide-transition metal dichalcogenide heterojunction
ABSTRACT A tremendous effort has been made to develop 2D materials-based FETs for electronic applications due to their atomically thin structures. Typically, the electrical performance of the device can vary with the surface roughness and thickness of the channel layer. Therefore, a two-step surface engineering process is demonstrated to tailor the surface roughness and thickness of MoSe2 multilayers, involving exposure to O2 plasma followed by dipping in (NH4)2S(aq) solution. The O2 plasma treatment generated an amorphous MoOx layer to form a MoOx/MoSe2 heterojunction, and the (NH4)2S(aq) treatment tailored the surface roughness of the heterojunction. The ON/OFF current ratio of the MoSe2 FET is about 1.1 × 10⁵ and 5.7 × 10⁴ for the bare and chemically etched MoSe2, respectively. The surface roughness of the chemically treated MoSe2 is higher than that of the bare sample, 4.2 ± 0.5 nm against 3.6 ± 0.5 nm. Meanwhile, a 1-hour exposure of the multilayer MoOx/MoSe2 heterostructure to the (NH4)2S(aq) solution removed the amorphous oxide layer and scaled down the thickness of the MoSe2 from ~92.2 nm to ~38.9 nm. This preliminary study shows that this simple two-step strategy can achieve a higher surface-area-to-volume ratio and thickness engineering with acceptable variation in electrical properties.
Introduction
Two-dimensional (2D) transition metal dichalcogenides (TMD) are well-studied layered materials with unique physical properties, widely used in next-generation electronic and optoelectronic devices such as chemical sensors [1,2], memory devices [3,4], batteries [5,6], phototransistors [7], and photodetectors [8]. Molybdenum diselenide (MoSe2) is an excellent choice for low-power electronic applications and exhibits relatively low internal resistance and high electron and hole mobilities of about 200 and 150 cm² V⁻¹ s⁻¹, respectively [9], enabling it to be a promising material for field-effect transistors (FETs). Furthermore, vertical MoSe2-MoOx heterojunctions exhibit current-rectifying characteristics and better photoresponse performance and have wide applications in electronic and optoelectronic devices [10].
Surface engineering and thickness tailoring of MoSe2 is of great practical interest in preparing various devices. It is well known that the electronic structure of layered semiconductors can be altered by tuning the thickness; typically, the bandgap of MoSe2 can be tuned by changing the thickness, with an indirect bandgap of ~1.1 eV for the bulk and a direct bandgap of 1.5 eV for the monolayer, which is smaller than that of MoS2 and comparable to Si [9]. In addition, the I_on/I_off ratios can be pushed to extremely high levels, 10⁶-10⁹ [11,12], by varying the thickness of semiconducting TMDs from monolayer to multilayer. In TMD-based FETs, the gate control of the electron density is limited with increasing channel thickness. For a channel thickness greater than W_max (the maximum depletion depth), the electrostatic gate voltage loses control over the electrons in the extra thickness of the channel beyond W_max. Thus, for multilayer films beyond W_max, a leakage current is generated, which leads to an increased OFF current, and correspondingly the I_on/I_off ratio decreases [13]. Therefore, obtaining a proper thickness for the channel is necessary for achieving better electrical characteristics of the FET. However, the thinning-down process of TMDs via exfoliation has poor efficiency and yield [14-17].
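The maximum depletion depth invoked above is not defined in the text; in the standard MOS textbook form (an assumption here, not the authors' derivation), it can be written as

```latex
W_{\max} = \sqrt{\frac{4\,\varepsilon_s\,\phi_F}{q\,N_A}},
\qquad
\phi_F = \frac{kT}{q}\,\ln\!\left(\frac{N_A}{n_i}\right),
```

where ε_s is the semiconductor permittivity, N_A the doping density, and n_i the intrinsic carrier concentration. Channel layers thicker than W_max contain charge the gate cannot deplete, consistent with the leakage current and reduced I_on/I_off ratio described above.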
Moreover, recent research shows that MoSe 2 can be a potential candidate for sensory applications [18,19]. The sensitivity of semiconductors can be significantly enhanced by engineering the surface morphology to improve the molecular adsorption behavior on channel surfaces [20][21][22][23][24]. However, the sensitivity of these MoSe 2 sensors is limited due to the weak van der Waals attraction between MoSe 2 and gas molecules. Several researchers have tried to increase the sensitivity of MoSe 2 gas sensors by incorporating heterostructures [20,21], by defect engineering [22,23], and by increasing the surface-area-to-volume ratio using MoSe 2 porous nanowalls [24]. Therefore, engineering MoSe 2 to achieve a proper thickness and roughness is necessary for obtaining better performance. Several approaches, such as post-growth thermal annealing [25], plasma etching [26], and laser etching techniques [27], are widely used for morphology and thickness engineering [16,28,29]. In particular, plasma etching based on CF 4 or SF 6 is a well-known technique for surface engineering of semiconductors; however, this etching process involves the bombardment of energetic ions onto the semiconductor surface, which may deteriorate the electrical properties of the semiconductor [30]. In addition, CF 4 or SF 6 plasma etching is expensive and requires sophisticated instruments. Chemical etching with KOH is an exceptional candidate for selective etching of oxide layers [31]. However, the K + ions released during the etching process can contaminate the SiO 2 substrate, thereby affecting the durability of the device. Moreover, KOH creates H 2 gas bubbles, which can destroy the microstructure [31,32]. Conversely, chemical etching with (NH 4 ) 2 S(aq) solution does not produce byproducts that can degrade the semiconductor material, and this approach can be broadened to other TMDCs [33,34].
We demonstrate a versatile and effective two-step process involving an O 2 plasma treatment followed by chemical etching with (NH 4 ) 2 S(aq) solution to tailor the surface roughness and thickness of MoSe 2 multilayers for various applications. The two-step process in this work is carried out in ambient conditions at room temperature. In addition, it does not require any complex instrumentation. The O 2 plasma treatment of MoSe 2 leads to the formation of an amorphous MoO x layer. With this two-step process, the surface roughness and thickness of the MoSe 2 can be tuned, and the change in roughness and thickness can influence the electrical properties of the system. Oxidation of the MoSe 2 surface is confirmed using Raman and X-ray photoelectron spectroscopy. The FET characteristics of the surface-engineered MoO x /MoSe 2 heterostructure are measured after the O 2 plasma treatment and chemical etching. The surface morphology of MoSe 2 after the O 2 plasma and (NH 4 ) 2 S(aq) treatments is analyzed by atomic force microscopy.
Device fabrication
The CVT-grown multilayer MoSe 2 was mechanically exfoliated by the scotch tape method. The flakes were transferred onto a 300 nm thick SiO 2 layer thermally grown on a heavily p-doped Si substrate. E-beam lithography and wet etching were employed to fabricate the source and drain electrodes with Ti/Au (20 nm/200 nm). Finally, to reduce the contact resistance between the electrodes and the MoSe 2 multilayer, the fabricated devices were annealed at 200 °C under a 5% H 2 environment diluted by Ar for 2 hours.
The two-step treatment
The O 2 plasma treatment was performed on multilayer MoSe 2 FETs using a plasma generator (PDC-32 G-2, Harrick Plasma). During the O 2 plasma treatment, the power, the chamber pressure, and the oxygen flow rate were maintained at 10.5 W, 2.33 torr, and 60 sccm, respectively, for 20 min. After the O 2 plasma treatment, the multilayer MoSe 2 FETs were fully dipped in (NH 4 ) 2 S(aq) (Sigma-Aldrich, 20% diluted in H 2 O) for 30 min at 323 K, followed by thorough rinsing in IPA.
Measurement and analysis method
The electrical properties of bare, O 2 plasma-treated, and (NH 4 ) 2 S(aq) solution-treated multilayer MoSe 2 FETs were measured under ambient pressure using a semiconductor parameter analyzer (4200A-SCS, Keithley). The Raman spectra were obtained at room temperature under ambient pressure using a Raman spectrometer (DXR2xi, Thermo Fisher Scientific) with a laser excitation wavelength of 532 nm and an incident laser power of 6.1 mW. X-ray photoelectron spectroscopy (XPS) (Nexsa, Thermo Fisher Scientific) using Al K α radiation was performed to analyze the chemical configuration of MoSe 2 . The surface morphology of MoSe 2 after the O 2 plasma and (NH 4 ) 2 S(aq) treatments was analyzed by atomic force microscopy (XE-100, Park Systems) in non-contact mode.
Result and discussion
The formation and surface engineering of the MoO x /MoSe 2 heterostructure are described in the schematic diagram of Figure 1(a). When the MoSe 2 layer is exposed to O 2 plasma, the Mo atoms react with atomic oxygen to form the MoO x layer. The MoO x /MoSe 2 samples are further dipped in 25% (NH 4 ) 2 S(aq) diluted in H 2 O to tune the surface morphology of MoO x . Even though the functionalization induces partial oxidation and etching of the topmost MoSe 2 layer, the initial shape of the MoSe 2 flake is maintained, as shown in the optical images of Figure 1(b). The structural transition of the MoSe 2 surface upon applying O 2 plasma is elucidated using Raman spectroscopy. A 532 nm laser is irradiated onto the surface of bare multilayer MoSe 2 in ambient conditions. The prominent vibrational mode in Figure 1(d) (black curve) is observed at 239.7 cm −1 , consistent with the out-of-plane vibration A 1g [10,[35][36][37][38][39][40]. However, the A 1g peak is shifted to 241.8 cm −1 , and the intensity is reduced by 70% after the exposure to O 2 plasma, as shown in the red curve of Figures 1(d,e). The intensity decay of the peak around 240 cm −1 is consistent with the chemical transition of a few-nanometer-thick MoSe 2 into MoO x at the surface; thereby, the Raman signal from MoSe 2 is screened by the plasma-induced MoO x layers. After the oxidation, MoSe 2 is dipped in (NH 4 ) 2 S(aq) for 30 min at 323 K. Although the intensity of the A 1g peak slightly increases with respect to MoO x /MoSe 2 , the position of the peak remains constant. In addition, the E 1g peak is also observed in all spectra, as shown in Figure 1(f). Regardless of the oxidation process and chemical etching, the E 1g peak remains unchanged. The peak-to-peak distance of the chemically treated MoSe 2 is about 74.2 cm −1 , similar to the value of bare MoSe 2 , indicating that only a few top layers are oxidized by the plasma treatment and the MoSe 2 structure persists under the MoO x layers.
The chemical transition of MoSe 2 surfaces is probed using XPS. The spectra of Mo 3d and O 1s for bare bulk MoSe 2 are shown in Figures 2(a,b), respectively. As shown in the black curve of Figure 2(a), two distinct peaks can be observed at 231.6 eV and 228.5 eV, corresponding to Mo 3d 3/2 and Mo 3d 5/2 signals originating from Mo-Se chemical bonding [41][42][43][44]. A broad peak (purple curve) corresponding to Se 3s is also detected at 229.4 eV [38]. A weak and broad peak of oxygen signal (black curve) is also observed in the range from 529 eV to 534 eV, shown in Figure 2(b). The MoSe 2 bulk samples are probed using XPS without an annealing process to prevent unintentional chemical transition induced by thermal energy. It can be hypothesized that multiple chemical configurations of oxygen exist on the MoSe 2 resulting from the adsorption of oxygen-containing molecules introduced from ambient conditions, such as CO or O 2 [42], as shown in the red curve of Figure 2(b). A broad peak observed at 530.5 eV in Figure 2(b) (blue curve) reveals the presence of partial oxidation states in the bulk MoSe 2 sample [45].
Applying O 2 plasma onto the surface of MoSe 2 induces the formation of amorphous MoO x layers, which is consistent with the generation of XPS peaks corresponding to MoO x at high binding energy, as shown in Figure 2(c). Apart from the Mo 3d signal originating from the Mo-Se bond in MoSe 2 , two new peaks are observed at 232.5 eV and 235.5 eV due to the exposure of MoSe 2 to energetically activated O, which is consistent with the transition of the top MoSe 2 layers to MoO x [42,46,47]. The position of the Mo 3d signal at higher binding energy is the result of oxygen atoms pulling electrons from Mo atoms in MoO x . As shown in Figure 2(c), the Mo-O peaks corresponding to MoO x are broadened, which is expected due to the absence of post-annealing processes and is consistent with the coexistence of multiple chemical configurations in MoO x . Also, it can be hypothesized that the plasma-induced MoO x is amorphous and substoichiometric, based on the previous report [42]. The chemical transition from MoSe 2 to MoO x can also be confirmed by probing the O 1s spectra, as displayed in the blue curve of Figure 2(d).

The electrical characterization of mechanically exfoliated multilayer MoSe 2 FETs is investigated at room temperature (300 K) under ambient conditions to explore the impact of the chemical transition. Figure 3(a) illustrates the schematic of a back-gated multilayer MoO x /MoSe 2 FET fabricated on 300 nm thermally grown SiO 2 /Si substrates. The fabricated device's channel length (L) and width (W) are 0.75 µm and 10 µm, respectively. Details of the device fabrication process are described in the experimental section. The log-scale I D -V BG transfer characteristics of the multilayer MoSe 2 FET, measured at V DS = 1 V while applying back-gate control over 300 nm SiO 2 , are shown in Figure 3(b). It is noted that the back-gate voltage V BG is swept in both the forward (negative bias to positive bias) and backward direction (positive bias to negative bias), while the source and drain voltages are kept constant for all measurements. As shown in the black curve of Figure 3(b), the as-fabricated FET displays n-dominant behavior with Ti/Au contacts [49,50]. The OFF and ON currents are about 7.2 × 10 −5 μA and 7.9 μA, respectively, which is consistent with a gate-dependent ON/OFF ratio of 1.1 × 10 5 for applied gate voltages (V BG ) ranging from -50 to 50 V. The threshold voltages (V TH ) of the forward and backward transfer curves are measured at -6.9 V and -26.2 V, respectively; thereby, a 19.3 V hysteresis width is extracted for the bare MoSe 2 FET, as shown in the linear I D vs. V BG plot of Figure 3(c).
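For readers reproducing this kind of analysis, the ON/OFF ratio, threshold voltage, and hysteresis quoted above can be extracted from measured sweep data with a few lines of NumPy. This is a hedged sketch: the paper does not state its extraction method, so the linear-extrapolation-at-peak-transconductance convention used here is an assumption, and the demonstration curve is synthetic.

```python
import numpy as np

def on_off_ratio(I_d):
    """ON/OFF ratio of one transfer sweep: max over min of |I_D|."""
    I = np.abs(np.asarray(I_d, dtype=float))
    return I.max() / I.min()

def v_threshold(V_bg, I_d):
    """Threshold voltage by linear extrapolation at peak transconductance
    (g_m = dI_D/dV_BG); this extraction convention is an assumption."""
    V = np.asarray(V_bg, dtype=float)
    I = np.asarray(I_d, dtype=float)
    gm = np.gradient(I, V)
    i = int(np.argmax(gm))
    return V[i] - I[i] / gm[i]   # x-intercept of the tangent at point i

def hysteresis(V_fwd, I_fwd, V_bwd, I_bwd):
    """Hysteresis width as |V_TH(forward) - V_TH(backward)|."""
    return abs(v_threshold(V_fwd, I_fwd) - v_threshold(V_bwd, I_bwd))

# Toy n-type transfer curve (currents in uA) for demonstration only
V = np.linspace(-50, 50, 201)
I = 1e-4 + 8.0 / (1 + np.exp(-(V + 7) / 6))
print(f"ON/OFF = {on_off_ratio(I):.2e}, V_TH = {v_threshold(V, I):.1f} V")
```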
However, after the O 2 plasma treatment, the transfer characteristic of pristine MoSe 2 (red curve in Figures 3(b,c)) is drastically changed. It is clear from the figures that the drain current is independent of the gate voltage in the V BG range of -50 to 50 V. Therefore, it can be hypothesized that the oxidation of MoSe 2 via O 2 plasma generates a high density of O vacancies; thereby, a defect-associated conducting channel is induced at the MoO x /MoSe 2 interfaces between the metal contacts, resulting in a gate voltage-independent transfer curve [42,51,52].
The electrical characteristics of bare MoSe 2 after the (NH 4 ) 2 S(aq) chemical treatment are shown in the supplemental material, Figure S1. As shown in Figure S1, an increase of the current at the p-branch is observed, consistent with p-doping via molecular adsorption of -SH species [34]. Conversely, after the (NH 4 ) 2 S(aq) chemical treatment of the O 2 plasma-treated MoSe 2 , a pronounced difference in the measured transfer characteristic is observed compared to the O 2 plasma-treated state, as shown in Figure 3(b) (blue curve) [33]. A clear dependence of the drain current on the back-gate bias is recovered. The maximum n-branch current at 50 V is 9.7 µA, similar to the n-branch current of bare MoSe 2 , while the p-branch current at -50 V is 0.02 µA, which is much smaller than that of bare MoSe 2 . The minimum current level of the chemically treated MoO x /MoSe 2 FET is thus higher than the OFF current of bare MoSe 2 , and the ON/OFF current ratio of the chemically etched MoSe 2 FET is about 5.7 × 10 4 , roughly half the ON/OFF ratio of the bare MoSe 2 FET. The threshold voltages (V TH ) of the forward and backward transfer curves of the treated MoO x /MoSe 2 FET are measured as −15.5 V and −25 V, respectively, with a hysteresis of 40.5 V. The higher OFF current level and increased hysteresis after the chemical etching lead to the conclusion that plasma-induced defects still exist at the MoO x -MoSe 2 interface.
To elucidate the topographical change in MoSe 2 upon oxidation and chemical treatment, the surface of bulk MoSe 2 is probed using atomic force microscopy (AFM). As shown in Figures 4(a,b), a flat surface of bare bulk MoSe 2 is observed across the entire scan area (10 μm × 10 μm). The line trace corresponding to the yellow line of Figure 4(a) shows no noticeable variation of topography except at the step edges of the MoSe 2 multilayers [44,53,54]. After applying O 2 plasma, Figures 4(c,d) reveal the formation of particle-like features. The particles are distributed across the entire surface with variable diameters (0.12 nm to 0.89 nm), and the particles' heights range from 1 nm to 60 nm. Similar particle-like features were also observed on transferred WSe 2 treated with UV-O 3 and were confirmed to be WO x particles without any polymer residues [34]. Furthermore, the intensity reduction of the carbon peak after O 2 plasma treatment (supplemental Figure S2) compared to bare MoSe 2 indicates that carbon-containing residues or adsorbates can be removed during the O 2 plasma [55,56]. Therefore, we can conclude that these features are MoO x particles without any polymer residues. After dipping in (NH 4 ) 2 S(aq) solution for 1 hr, the MoO x particles are almost entirely removed from the surface of MoO x /MoSe 2 , although small-sized particles remain on MoSe 2 , as shown in Figures 4(e,f). It is assumed that the agglomerated MoO x particles are removed mainly by the (NH 4 ) 2 S(aq) chemical treatment and the oxidized layers are etched in the (NH 4 ) 2 S(aq) solution; thereby, the underlying MoSe 2 surface is exposed to the ambient atmosphere. Consequently, sub-5 nm-sized particles are uniformly distributed on the MoO x /MoSe 2 surface, consistent with the formation of nanostructures.
An AFM topographic image of a mechanically exfoliated MoSe 2 flake on the SiO 2 substrate after 30 min of chemical treatment is shown in Figure 5(a). Static root-mean-square (RMS) roughness analysis is performed on 20 AFM images for each condition, as shown in Figure 5(b). The RMS roughness of the bare MoSe 2 surface is about 1.48 ± 0.2 nm and dramatically increases to 4.9 ± 0.5 nm after the O 2 plasma exposure of MoSe 2 , as shown in Figure 5(b). Furthermore, the chemical treatment of MoO x /MoSe 2 decreases the RMS roughness (4.2 ± 0.5 nm) compared to the plasma-treated samples; however, it is still higher than that of the bare sample. These results are consistent with the height profile analysis. The AFM topographic image and height profile of the bare multilayer MoSe 2 are shown in Figures 6(a,b), respectively. After the O 2 plasma treatment for 20 min, the multilayer thickness increased to ~92.2 nm, indicating an amorphous MoO x layer thickness of ~3.8 nm, which agrees well with the TEM image in Figure 1(c). The multilayer thickness is further scaled down after 30 min of chemical etching with (NH 4 ) 2 S(aq) solution. The total thickness of MoSe 2 after the chemical etching was ~86.3 nm, indicating the removal of the MoO x layer, as shown in Figures 6(e,f). A drastic decrease in the total thickness to ~38.9 nm was observed after chemical etching for 1 hour, as shown in Figures 6(g,h).
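The RMS roughness values quoted above can be reproduced from raw AFM height maps with a short script. The sketch below assumes first-order (least-squares plane) flattening before computing Rq; the exact flattening procedure used in the paper is not stated, so this is illustrative only.

```python
import numpy as np

def rms_roughness(height_map):
    """Rq of an AFM height map after least-squares plane subtraction
    (first-order flattening, a common but here assumed preprocessing step)."""
    z = np.asarray(height_map, dtype=float)
    ny, nx = z.shape
    x, y = np.meshgrid(np.arange(nx), np.arange(ny))
    # Fit the plane z ~ a*x + b*y + c and keep the residuals
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(z.size)])
    coef, *_ = np.linalg.lstsq(A, z.ravel(), rcond=None)
    residual = z.ravel() - A @ coef
    return np.sqrt(np.mean(residual ** 2))

# For the 20 maps recorded per condition, one would report
# np.mean([rms_roughness(m) for m in maps]) with its standard deviation.
```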
It is noted that the etching in the (NH 4 ) 2 S(aq) dipping process occurs only at the metal oxide surface. For a bare MoSe 2 surface without formation of MoO x , the thickness and surface roughness of the MoSe 2 flake are nearly identical after dipping in (NH 4 ) 2 S(aq) solution, as shown in the supplemental material, Figures S3-S6. Therefore, it can be concluded that the (NH 4 ) 2 S(aq) dipping process induces selective etching of the oxidized layers. However, a significant thickness reduction (~53 nm), surpassing the oxide layer, is observed after 1 hour of chemical treatment (Figures 6(g,h)). It is understood that the O 2 plasma treatment generates an ultra-thin MoO x layer on bare MoSe 2 ; in addition, it is possible that energetically activated atomic oxygen penetrates further into the MoSe 2 and forms oxidized defect sites [42,[57][58][59]. Therefore, (NH 4 ) 2 S(aq) can react with the underlying MoSe 2 layer containing atomic oxygen, resulting in thinning of the underlying MoSe 2 . It is noted that measuring the electrical properties of the FETs after the 30 min and 1-hour chemical treatments requires additional optimization of the device fabrication process; this will be the future scope of this work.
Conclusion
In the present report, the surface engineering and thinning down of MoSe 2 is carried out through a two-step process involving exposure to O 2 plasma followed by chemical etching with (NH 4 ) 2 S(aq) solution. The O 2 plasma treatment resulted in the formation of an ultra-thin (~3.8 nm) amorphous MoO x layer on the surface of MoSe 2 due to the reaction of Mo atoms with oxygen. Partial etching of the MoO x /MoSe 2 heterostructure with the (NH 4 ) 2 S(aq) chemical treatment yielded a surface roughness higher than that of bare MoSe 2 . A 1-hour exposure of the multilayer MoO x /MoSe 2 heterostructure to the (NH 4 ) 2 S(aq) solution removed the amorphous oxide layer and scaled down the thickness of MoSe 2 from ~92.2 nm to ~38.9 nm. The structural transition of the few top layers of MoSe 2 into MoO x is confirmed using Raman spectroscopy and XPS. The topographical changes and thickness of multilayer MoSe 2 after oxidation and chemical treatment were determined using AFM. Due to the chemical and topographical transition of the surface, the RMS roughness of MoO x /MoSe 2 significantly increased, resulting in an increase in the surface-area-to-volume ratio. The electrical characterization of the multilayer MoSe 2 FET is conducted to explore the effect of the two-step functionalization on the chemical transition of the surface. A gate voltage-independent transfer characteristic is obtained for the O 2 plasma-treated MoSe 2 samples. The negligible gate voltage dependence of I D is attributed to the formation of defect-associated amorphous MoO x conducting channels. Moreover, after the (NH 4 ) 2 S(aq) chemical etching, the device demonstrated a back-gate bias-dependent drain current similar to that of bare MoSe 2 . Gate-dependent ON/OFF current ratios of about 1.1 × 10 5 and 5.7 × 10 4 were obtained for bare MoSe 2 and chemically etched MoSe 2 , respectively. In this work, a 30-min chemical etching gives a surface roughness of 4.2 ± 0.5 nm without significant degradation of the ON/OFF current ratio. Therefore, the surface roughness and thickness of the MoO x /MoSe 2 heterostructure can be tailored using the two-step process. An increase in roughness, in turn, increases the surface-area-to-volume ratio of MoSe 2 : the 30-min chemical etching yielded a higher surface roughness for the MoO x /MoSe 2 heterostructure than for bare MoSe 2 , while a more extended chemical etching removed the oxide layer and scaled down the thickness of the multilayer MoSe 2 . The analysis shows that a higher surface-area-to-volume ratio can be obtained with an acceptable variation in the electrical properties. Therefore, this work can be extended to fabricate highly functional electronic devices based on layered materials. Since the present two-step treatment involves the formation of a transition metal oxide followed by the reduction of the metal oxide in (NH 4 ) 2 S(aq), this surface engineering approach can be applied to various TMDCs such as WSe 2 and MoS 2 [33,34].
Disclosure statement
No potential conflict of interest was reported by the authors. | 2022-09-07T15:07:42.622Z | 2022-09-05T00:00:00.000 | {
"year": 2022,
"sha1": "3fafe49536399df41ce137e486113e722b0b2151",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/21870764.2022.2117892?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "3194631ab39c2e701d3acb2b11cb452ce64a09de",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
} |
158279974 | pes2o/s2orc | v3-fos-license | Adaptation of Globalization and Their Effect to the Tanzania Economic Growth
Globalization can be defined as a process based on international cooperation strategies; its aim is to expand the operation of a business or service to a worldwide level. Globalization facilitates modern advanced technology, which helps communities undergo social, political, and economic development. Economic globalization has reinforced the marginalization of developing African economies and made them dependent on a few primary commodities or services whose price and demand are largely determined by outsiders. As a result, some African countries have been pushed into poverty or economic inequality by letting their own resources be controlled by developed countries. In this paper, we analyze the effect of the adoption of globalization on Tanzania's economic growth.
1. Introduction
(Kuepper, June 19, 2017) Globalization has an impact on every aspect of modern human life and continues to be a growing force in the global economy. While there are a few drawbacks to globalization, most economists agree that it is a force that is both beneficial on net and unstoppable for the world economy; through periods of protectionism and nationalism, globalization has continued to be the most widely accepted solution for ensuring consistent economic growth in the world. (Kilic) Economic Globalization Index: the index includes two sub-indexes, restrictions and actual flows. Actual flows are calculated from trade as a percentage of GDP, foreign direct investment (FDI) flows and stocks, portfolio investments, and income payments to foreign nationals. Restrictions are calculated from hidden import barriers, the mean tariff rate, taxes on international trade as a percentage of current revenue, and capital account restrictions. Restrictions and actual flows each carry a weight of 50 percent in the economic globalization index.
Social Globalization Index: the index includes three sub-indexes: cultural proximity, personal contact, and information flows. Personal contact is calculated from telephone traffic, international tourism, transfers as a percentage of GDP, the foreign population as a share of the total population, and international letters per capita.
Political Globalization Index: the index is calculated from four sub-indexes: the number of embassies in the country, membership in international organizations, participation in United Nations (UN) Security Council missions, and international treaties.
(Ibrahim, August 2013) Globalization refers to the process of intensification of political, economic, cultural, and social relations across international boundaries, aimed at a transcendental homogenization of socio-economic and political theory across the globe. It has had significant impacts on African countries through the systematic restructuring of interactions among nations, breaking down barriers in the areas of commerce, culture, communication, and several other fields. These processes have impelled a cumulative international division of labor, economic distribution, and political power, whereby African countries have been pushed toward deepening poverty, disease, and unemployment.
• Globalization's expanding financial integration organizations could be seen as a threat to sovereignty; this scenario could cause some leaders to become nationalistic.
Equity Distribution. The distribution of the benefits of globalization is unfairly skewed towards the richest nations and individuals; globalization creates greater inequality, which can lead to potential conflict both internationally and nationally.
Theoretical Literature Review
(Pologeorgis, March 6, 2017) Globalization compels businesses to adapt to different strategy concepts based on the new ideological trend, which tries to balance doing the right thing with the interests of both the individual and the whole community. This change is able to push competitive businesses worldwide. (UNIVERSITY, May 2012) The impact of globalization has driven massive change in our country. Tanzania had no health policy in rural areas before 1990, but after the spread of globalization in Tanzania, health services were introduced in the 1990-2003 period. The main purpose of spreading health policy services in Tanzania was to improve the well-being of all Tanzanian people and to encourage the health system to be more responsive to the people.
Globalization and Situation of Tanzania Economy
(Africa Development Group, 2018) Economic growth has slowed since 2016, following real GDP growth of at least 7% in 2013-2016. Growth averaged 6.8% in the first two quarters of 2017 and was estimated at 6.5% for the full year. Infrastructure construction, such as communications, the mining sector, and transportation, was a key factor in economic growth in 2017. Economic growth is projected to remain robust at 6.7% and 6.9% in 2018 and 2019, respectively, representing one of the best performances among regional economies. Economies of scale: large companies are able to realize economies of scale in a given market, which helps reduce prices and consumption costs, which in turn can support further economic growth.
Increased competition: with more competitors producing quality products and services to fight over a given market, each company has to constantly look for ways to improve its services and goods to build customer loyalty.
Decreased Employment: the entry of foreign direct investment companies bringing new technology into developing countries like Tanzania can increase employment opportunities in many sectors; domestic company workers are able to acquire training and skills to operate modern machine technology and produce quality products.
Conceptual framework
The independent variables of the project are aspects of the rapid growth of globalization: financial integration, international trade, Foreign Direct Investment (FDI), technical change, and change in the standard of living.
The dependent variable in the study is economic growth. Source: (Kilic).
From the above conceptual framework, the following main hypotheses are developed.
H1: Financial integration is positively associated with economic growth.
H2: International trade is positively associated with economic growth.
H3: Foreign Direct Investment (FDI) is positively associated with economic growth.
H4: Technical change is negatively associated with economic growth.
H5: Change in the standard of living is positively associated with economic growth.
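Association hypotheses of this kind are typically tested with a correlation coefficient and its p-value. The following is a minimal sketch using a Pearson correlation; the paper does not state which test backed its correlation analysis, and the data below are synthetic stand-ins, not survey responses from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
growth = rng.uniform(1, 5, 150)                        # composite economic-growth score
fdi = np.clip(growth + rng.normal(0, 1.0, 150), 1, 5)  # toy variable built to correlate

r, p = stats.pearsonr(fdi, growth)  # H3-style test: positive association with growth
verdict = "supported" if (r > 0 and p < 0.05) else "not supported"
print(f"r = {r:.2f}, p = {p:.3g} -> H3 {verdict}")
```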
Research Methodology
Both primary and secondary data were collected to complete this research project; the primary data were obtained in the field during the study period by the researchers, and the secondary data were obtained from different official documents. The researchers selected a sample of 150 respondents, which was achieved through random sampling methods. The researchers designed open and closed questions in a Google Docs form, which allowed respondents to log in directly, complete the questionnaire, and submit it online. The link was convenient and was sent to all respondents through social media. Data were summarized through quantitative methods and analyzed using the data processing software SPSS. The researchers arranged the questionnaire according to a rating scale, which helped respondents answer the questions, as shown below:
Validity and Reliability
(Thanasegaran) Construct validity and reliability measures are directly concerned with the theoretical relationships among variables: the extent to which a measure "behaves" the way the construct it purports to measure should behave with regard to established measures of other constructs. According to the criteria for judging Cronbach's alpha, the highest possible alpha is 1.0 and an acceptable alpha should be greater than 0.5 (0.5 < α ≤ 1.0). The Cronbach's alpha of our project is 0.846, which means that over 80% of the variance of the combined six-item scale is reliable, true-score variance, indicating good internal consistency.
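For reference, Cronbach's alpha can be computed outside SPSS with a few lines of Python. The sketch below implements the standard formula α = k/(k−1) · (1 − Σσ²_item / σ²_total); the simulated 150 × 6 response matrix is made up for the demonstration and does not reproduce the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    for an (n_respondents x n_items) score matrix."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

# Simulated 150 x 6 Likert matrix (correlated items) for demonstration only
rng = np.random.default_rng(0)
base = rng.normal(3, 1, size=(150, 1))
scores = np.clip(np.rint(base + rng.normal(0, 0.7, size=(150, 6))), 1, 5)
print(round(cronbach_alpha(scores), 3))
```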
(Nord) In line with the rapid growth of globalization and the Millennium goals for 2025, Tanzania has set objectives to ensure the targets are reached. These economic policy objectives are:
Raise growth and reduce poverty (Mkukuta)
Maintaining macroeconomic stability
Durably strengthening public finances
Accelerating financial sector reform
Improving the business environment
Globalization leads to increased industry competition, which stimulates new technology development; an increasing rate of foreign direct investment automatically improves the technological output of the country's economy.
Table 2. Five-point rating scale
Table 3. Reliability statistics
Based on the correlation analysis results, the researchers found a new factor model which could help other | 2018-12-06T23:04:41.387Z | 2018-06-30T00:00:00.000 | {
"year": 2018,
"sha1": "7dccae53f6190a3e862aad9eadc42350488782ce",
"oa_license": "CCBY",
"oa_url": "https://www.ccsenet.org/journal/index.php/ijbm/article/download/75331/42207",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "7dccae53f6190a3e862aad9eadc42350488782ce",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
12867764 | pes2o/s2orc | v3-fos-license | Effects of astaxanthin in mice acutely infected with Trypanosoma cruzi
During Trypanosoma cruzi infection, oxidative stress is considered a contributing factor for dilated cardiomyopathy development. In this study, the effects of astaxanthin (ASTX) were evaluated as an alternative drug treatment for Chagas disease in a mouse model during the acute infection phase, given its anti-inflammatory, immunomodulating, and anti-oxidative properties. ASTX was tested in vitro in parasites grown axenically and in co-culture with Vero cells. In vivo tests were performed in BALB/c mice (4–6 weeks old) infected with Trypanosoma cruzi and supplemented with ASTX (10 mg/kg/day) and/or nifurtimox (NFMX; 100 mg/kg/day). Results show that ASTX has some detrimental effects on axenically cultured parasites, but not when cultured with mammalian cell monolayers. In vivo, ASTX did not have any therapeutic value against acute Trypanosoma cruzi infection, used either alone or in combination with NFMX. Infected animals treated with NFMX or ASTX/NFMX survived the experimental period (60 days), while infected animals treated only with ASTX died before day 30 post-infection. ASTX did not show any effect on the control of parasitemia; however, it was associated with an increment in focal heart lymphoplasmacytic infiltration, a reduced number of amastigote nests in cardiac tissue, and less hyperplasic spleen follicles when compared to control groups. Unexpectedly, ASTX showed a negative effect in infected animals co-treated with NFMX. An increment in parasitemia duration was observed, possibly due to ASTX blocking of free radicals, an anti-parasitic mechanism of NFMX. In conclusion, astaxanthin is not recommended during the acute phase of Chagas disease, either alone or in combination with nifurtimox.
Introduction
Chagas disease is a zoonotic health concern in Latin America caused by Trypanosoma cruzi, with an estimated 6-7 million people infected. The infection is not limited to vectorial transmission, since it can be transmitted through blood transfusion or organ or tissue transplantation, and many cases of non-vectorial transmission have been reported in non-endemic areas [52].
The drugs available for the treatment of Trypanosoma cruzi infection in institutional health systems in Latin America are nifurtimox and benznidazole. However, these drugs have limited therapeutic value since they are effective only during the acute stages of the disease, and because these drugs may induce severe side effects in people undergoing long-term treatment [10,50]. Furthermore, resistance to NFMX and benznidazole has been reported in parasites of different genotypes in endemic zones [9]. These therapeutic drawbacks leave people of all ages at risk [46,47], and therefore, new strategies should be studied if an effective treatment is to be found.
During the acute phase of Chagas disease, an excessive production of free radicals in the heart has been correlated with irreversible oxidative stress (OS)-induced cardiomyocyte damage. Recent studies that analyzed the condition of the heart in Chagas disease have suggested that factors other than myocardial parasitism and autoimmune aggression are involved. It is unclear whether the tissue destruction is caused directly by factors related to the parasite, or indirectly by an immuno-inflammatory response amplified by the systemic overgeneration of reactive oxygen species (ROS) and reactive nitrogen species (RNS) [14,6,53]. Chagasic cardiomyopathy develops in 30-40% of chronically infected people. Cardiomyopathy may progress to cardiac insufficiency and sudden death because of progressive damage to cardiomyocytes and the ventricular intertruncal plexus [23].
Several studies in chronic chagasic patients suggest that the use of antioxidants, such as vitamins E and C, decreases free radical levels and the OS associated with the disease [30,42], protecting the myocardium and preventing the progression of chagasic cardiomyopathy into more severe syndromes [51]. Astaxanthin (ASTX), a reddish carotenoid that belongs to the xanthophyll class, is a potent antioxidant naturally found in several sea animals and plants, and notably produced by the microalga Haematococcus pluvialis [18,22]. It has anti-inflammatory [25] and immunomodulatory properties [12], and can stabilize free radicals and decrease oxidative stress damage, protecting biologically important molecules. Studies have shown that ASTX counteracts OS caused by some heart diseases, preventing tissue damage caused by cell oxidation and contributing to a healthier myocardium [17,34]. Here, we evaluated the effects of ASTX supplementation during the acute phase of Chagas disease in an induced infection with a pathogenic strain (Ninoa) of Trypanosoma cruzi in BALB/c mice.
Ethics
Mice were kept, fed, and reared under standard conditions (18-23°C, 50-60% relative humidity), according to the guidelines of the Bioethics Committee of the FMVZ-UAEM, the Official Mexican Standard regarding technical specifications for the care and use of laboratory animals (NOM-062-ZOO-1995) [37], and the standards of the National Academy of Science [35].
Parasite harvest from cell culture
Parasites were cultured for 1-2 weeks on Vero cell monolayers, until they started to break out from the infected cells. The medium with free-swimming parasites was then collected in 15 mL sterile conical tubes and centrifuged at 2700 rpm for 7 min. The supernatant was discarded and the pellet was resuspended in 1 mL of DMEM (Gibco Laboratories, USA). Parasites were counted using a hemocytometer, and the number of parasites was adjusted to the specific needs of each assay (in vitro or in vivo).
Astaxanthin preparation for in vitro assays
In order to purify astaxanthin from the commercial preparation for the in vitro assay, one gram of microencapsulated astaxanthin (AstaPure®, Algatechnologies, Israel) was ground in a sterile mortar, placed in a 15 mL sterile conical tube, and suspended in 6 mL of extraction solution (petroleum ether:acetone:water, [15:75:10]) [33]. The suspension was mixed by inversion several times and gently vortexed for 15 min. The tube was centrifuged at 7500 rpm for 10 min at 4°C, and the supernatant collected in a fresh sterile 15 mL tube. The solvents were evaporated at 40°C for 12 h in dark conditions and the astaxanthin was resuspended in 2 mL of DMEM-dimethyl sulfoxide (DMSO [Sigma-Aldrich, USA]) (99.7/0.3% V/V solution). This suspension was gently vortexed for 10 min, and then filtered using an acrodisc syringe filter (0.22 µm) into a 1.5 mL sterile tube and kept at 4°C until use. The ASTX concentration was determined in a 96-well plate using a β-carotene (Sigma-Aldrich, USA) standard curve and read at 450 nm in a spectrophotometer (BioTek, USA). A simple linear regression was used to determine ASTX concentrations in µg/µL.
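A simple linear-regression calibration of this kind can be scripted as follows. The standard concentrations and absorbance readings in the sketch are hypothetical placeholders, not values measured in this study; the regression is fitted in the inverse (concentration-on-OD) direction so that unknowns can be read off directly.

```python
import numpy as np

# Hypothetical beta-carotene standard curve: concentration (ug/uL) vs. OD450
std_conc = np.array([0.0, 0.05, 0.10, 0.20, 0.40])   # assumed standard points
std_od = np.array([0.02, 0.11, 0.21, 0.40, 0.79])    # assumed absorbances

# Inverse calibration: regress concentration on absorbance
slope, intercept = np.polyfit(std_od, std_conc, 1)

def astx_concentration(od450):
    """Read an unknown ASTX concentration off the fitted standard curve."""
    return slope * od450 + intercept

print(f"{astx_concentration(0.33):.3f} ug/uL")
```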
Astaxanthin in vitro toxicity assay for T. cruzi and Vero cells

Trypomastigotes (5 × 10 5 /well) or Vero cells (2 × 10 4 /well) were cultured in a 96-well plate (Sarstedt, USA) in supplemented DMEM (2% FBS, penicillin 10,000 units/mL, and streptomycin 10,000 µg/mL) and astaxanthin at 1, 5, 10, 20, or 30 µg/100 µL. The assay was performed in triplicate with the following controls: a) C-T (untreated trypomastigotes), b) C-V (untreated Vero cells), c) DMEM/DMSO in a proportion equivalent to the amount of DMSO used in the highest ASTX dose (99.7/0.33% V/V, respectively) (this control was necessary since ASTX and NFMX were solubilized in this solvent), and d) nifurtimox (400 µg/100 µL) (Lampit®, Bayer). NFMX was prepared as previously described by Rolón et al. [43]. One tablet of the commercial presentation of NFMX (120 mg) was ground in a sterile mortar and resuspended in 1 mL of DMSO. The final DMSO concentration in the culture media never exceeded 0.3% in a V/V solution. Plates were incubated for 24 h in controlled conditions (37°C, 5% CO 2 , and saturated humidity). After treatment, the viability of parasites and cells was estimated using MTS (3-[4,5,dimethylthiazol-2-yl]-5-[3-carboxymethoxy-phenyl]-2-[4-sulfophenyl]-2H-tetrazolium, inner salt) from the CellTiter 96® Aqueous One Solution kit (Promega, USA), following the manufacturer's instructions. The metabolic activity of parasites and cells over MTS was estimated by colorimetry at 490 nm wavelength. In this assay, the higher the optical density (OD) values, the higher the cell viability.
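Percent viability from MTS absorbance readings is typically computed relative to the untreated control. The exact formula used in this study is not stated, so the following is a hedged sketch with made-up OD values; blank subtraction is included as a common, assumed step.

```python
def percent_viability(od_treated, od_control, od_blank=0.0):
    """Viability relative to the untreated control from MTS OD490 readings.
    Blank subtraction is an assumed step; the paper does not spell out
    its exact formula."""
    return 100.0 * (od_treated - od_blank) / (od_control - od_blank)

# Made-up ODs: parasites at the highest ASTX dose vs. untreated control
print(percent_viability(0.22, 1.05, od_blank=0.05))  # -> 17.0 (% survival)
```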
Morphologic evaluation of changes induced by ASTX on Vero cells and T. cruzi co-cultures
Vero cells (5 × 10 3 /well) were seeded and cultured for 24 h as previously described and then infected with trypomastigotes (10 parasites/cell) [13]. Once intracellular parasites were observed (about 96 h after infection), the old medium was replaced with fresh supplemented DMEM with different ASTX doses (1, 5, 10, 20, or 30 µg/100 µL). As a control, co-cultures were kept with NFMX (400 µg/100 µL) or with no ASTX or NFMX supplementation. After 24 h of incubation, microscopic morphological changes in the co-culture, such as loss of the normal shape of T. cruzi-infected Vero cells, changes in normal parasite shape or motility, and variations in the presence of intra- or extra-cellular parasites, were evaluated by a trained technician. Additionally, parasite viability was evaluated by the Trypan blue stain assay [2].
Animals and challenge
BALB/c female mice (N = 48), 4-6 weeks old, were distributed into eight groups (n = 6): G1 (Tc); G2 (Tc/ASTX); G3 (Tc/ASTX/NFMX); G4 (Tc/NFMX); and four non-infected controls: G5 (saline solution); G6 (NFMX); G7 (ASTX/NFMX); and G8 (ASTX). Animals from groups G1 to G4 were infected intraperitoneally with 10 trypomastigotes each. Specimens were clinically evaluated on a daily basis; any animal health changes, such as weight loss, hirsutism, morbidity, lameness, or any other behavioral changes, were recorded. We decided to use ASTX during the acute phase of infection in BALB/c mice because there are no previous reports on the use of antioxidants at this stage of infection and because, in in vitro experiments in our laboratory, ASTX had some antiparasitic effect. We also decided to test ASTX as an antiparasitic agent during an early stage of infection in BALB/c mice because this mouse strain is susceptible to infection with the Ninoa strain of T. cruzi, with a predictable outcome, and the parasitemia is easily detected. Therefore, during the acute phase of infection, the level of parasitemia was used as an indicator of disease development [15], providing an easy-to-evaluate parameter to determine the possible effects of ASTX on the infection while the animals were alive.
Parasitemia
Parasitemia was analyzed for each mouse by fresh blood smear test. Samples were collected twice a week starting on day 5, until day 60 post-infection, or when parasitemia was undetectable microscopically in fresh blood preparations. Sampling was performed according to Brener [7] with slight modifications. Briefly, a small cut was made on the tip of the tail of the mouse; blood (4 µL) was collected with a micropipette, placed on a glass slide, and covered with a coverslip (18 × 18 mm). Samples were observed under light microscopy at 400×. Parasites in 100 fields were counted, and the number of parasites/µL was estimated with standard protocols [28,45].

The treatments administered to each group are summarized in Table 1. ASTX was prepared from 400 mg beadlets of AstaPure®. Beadlets were ground in a sterile mortar in aseptic conditions and resuspended and homogenized in 3 mL of a 20% (V/V) sterile solution of Tween-20/distilled water [36] for a final volume of 3.1 mL. ASTX supplementation (60 µL of the ASTX preparation, equivalent to 10 mg/kg/day of pure ASTX) was orally administered with a micropipette until day 60 post-infection. This concentration has exhibited immunomodulatory and anti-inflammatory effects in mice and other species, including humans [24,27,34,40]. NFMX was prepared in aseptic conditions by grinding one tablet containing 120 mg of NFMX (Lampit®, Bayer) in a mortar and resuspending it in 1 mL of sterile distilled water [11]. This solution was administered orally at a single daily dose of 100 mg/kg/day [8] (Table 1) in a 60 µL volume. Treatment was carried out until the day when parasitemia could no longer be detected through fresh blood preparations, as described above [6,28,45].
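The conversion from parasites counted in 100 microscope fields to parasites/µL (the Brener-style estimate described above) depends on the coverslip area, the blood volume, and the microscope field diameter. The sketch below makes that arithmetic explicit; the 0.45 mm field diameter at 400× is an assumed value (it depends on the eyepiece field number), so the function is illustrative rather than a reproduction of the protocol in [28,45].

```python
import math

def parasites_per_ul(count_in_100_fields, blood_ul=4.0,
                     coverslip_mm=18.0, field_diam_mm=0.45):
    """Scale a 100-field count to the whole 18 x 18 mm coverslip, then
    divide by the blood volume under it."""
    field_area = math.pi * (field_diam_mm / 2.0) ** 2   # mm^2 per field
    total_fields = coverslip_mm ** 2 / field_area       # fields under coverslip
    total_parasites = count_in_100_fields * total_fields / 100.0
    return total_parasites / blood_ul

print(round(parasites_per_ul(16), 1))   # ~81.5 parasites/uL with these assumptions
```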
Animal sacrifice and tissue sampling
Heart and spleen tissues were collected from mice after they died from infection or when they were euthanized. Mice were sacrificed either because they were very ill or on day 60 after infection. Euthanasia was performed by cervical dislocation following protocols established by Norma Oficial Mexicana (NOM-033-ZOO-1999) [38], the Bioethics Committee from UAEM-FMVZ, and from the Council for International Organizations of Medical Sciences [35]. Blood samples were taken directly from the heart to obtain sera on the day of sacrifice and tissues were fixed in 10% formaldehyde for histopathological studies.
Histopathological study
Tissues were fixed in 10% formaldehyde for 24 h, dehydrated in absolute ethanol, and embedded in paraffin. Tissue sections (5 µm) were prepared, stained with hematoxylin-eosin, and observed under a light microscope (Carl Zeiss Axiostar, USA). Images were recorded with a Tucsen 5 MP camera (Tucsen, China) with the Image-Pro Plus 7 software. Tissue samples were studied microscopically at 400× magnification to assess parasite burden (amastigote nests observed in 100 random fields). The severity of inflammation was estimated from the severity of lymphocyte infiltration in the tissue, in 400 random fields, using the scale proposed by Barbabosa-Pliego et al. [5]: (−), none; (+), light; (++), moderate; and (+++), severe.
Malondialdehyde (MDA) assay
Malondialdehyde levels were determined in sera following the instructions of an OxiSelect™ MDA Adduct ELISA Kit (Cell Biolabs, USA). Standards and samples were incubated in a 96-well plate for 2 h at 37°C. The MDA-protein adducts present in the samples and in the standards were probed with an anti-MDA antibody followed by the HRP-conjugated secondary antibody, revealed with 3,3′,5,5′-tetramethylbenzidine (TMB), and read by spectrophotometry at 450 nm. The MDA-protein adduct content in each sample was determined by comparison with a standard curve that was prepared from a predetermined MDA-BSA standard [16]. A simple linear regression was used to determine the MDA concentration in pmol/mL.
Statistical analysis
Analysis of variance (ANOVA) was used to analyze results from the in vitro viability assay, parasitemia, and MDA. Mean differences for all assays were assessed by a Tukey test, except for parasitemia where a Bartlett's test was used. Statistical analyses were conducted with the GraphPad Prism 5.0 software package (GraphPad Software Inc., USA). Differences were considered significant at p < 0.05.
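The ANOVA-plus-Tukey workflow described above can be reproduced with SciPy as sketched below. The group values are invented placeholders (loosely mimicking serum MDA in pmol/mL, with n = 6 per group), not data from this study.

```python
import numpy as np
from scipy import stats

# Invented per-group values for demonstration only
g3 = np.array([41.0, 38.5, 44.2, 39.8, 42.6, 40.3])
g4 = np.array([40.1, 43.3, 37.9, 42.0, 39.6, 41.8])
g5 = np.array([22.4, 25.1, 23.8, 24.6, 23.2, 24.9])

F, p = stats.f_oneway(g3, g4, g5)          # one-way ANOVA across groups
print(f"F = {F:.2f}, p = {p:.4g}")
if p < 0.05:
    print(stats.tukey_hsd(g3, g4, g5))     # pairwise Tukey post-hoc comparison
```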
Results
In vitro Trypanosoma cruzi and Vero cell viability after exposure to ASTX

Trypomastigote and Vero cell viability was evaluated 24 h after treatment. Figure 1 shows parasite and Vero cell survival after treatment, either with ASTX (1, 5, 10, 20, or 30 µg/100 µL), NFMX (400 µg/100 µL), DMSO (0.33% V/V), or untreated (C-). Parasite viability was progressively affected (p < 0.05) as ASTX doses were increased, from nearly 100% parasite survival (with no ASTX) down to 18% survival at the higher doses (20-30 µg/100 µL) of ASTX. Vero cell viability was not affected to the same extent.

Parasites were not affected by ASTX (1-20 µg/100 µL) when evaluated in co-culture with Vero cells, unlike the results observed in axenic culture (Table 2). These results call into question whether the effects of ASTX would be detrimental or not to the parasite in an in vivo model, and therefore we decided to continue testing ASTX as a therapeutic treatment in an experimental animal model.
Parasitemia in BALB/c mice infected with T. cruzi
Experimental groups showed differences in the number of blood trypomastigotes (Fig. 2). Challenged groups G1 (Tc) and G2 (Tc/ASTX) showed the highest parasitemia and did not survive beyond day 23 post-infection. It is worth mentioning that ASTX supplementation on its own, in infected animals, did not show any survival advantage over the control group. Challenged groups G3 and G4, treated with ASTX/NFMX or just NFMX, respectively, developed low levels of parasitemia. This was controlled by days 28 and 22 post-infection, respectively ( Figs. 2A and 2B). Parasitemia levels in groups G3 (33 ± 12.7 parasites/lL) and G4 (10 ± 5 parasites/lL) were statistically different (p < 0.05) from those found in animals in groups G1 (321 ± 138.2 parasites/lL) and G2 (362 ± 156.2 parasites/lL) around day 20 post-infection. All non-infected animals were in good health until the day of sacrifice (day 60 post-infection).
Anatomopathologic findings

Heart
The size of the heart in all experimental groups (G1-G8) did not show differences. Hearts were measured in sagittal position and average length was 0.79 ± 0.036 cm. No apparent morphological changes were found macroscopically.
Spleen
Spleens were clearly enlarged in all T. cruzi-challenged groups (G1-G4), where splenomegaly was observed (Fig. 3). The average size of the spleen was 2.4 ± 0.26 cm for animals from groups G1 (Tc) and G2 (Tc/ASTX), and 2 ± 0.17 and 1.8 ± 0.26 cm in groups G3 and G4, respectively. All control groups (G6-G8) had an average spleen size of 1.5 ± 0.08 cm, similar in size and appearance to mice treated with saline solution (G5), which was considered normal.
Spleen
Morphological changes in the spleen were observed mainly as hyperplasia of lymphoid follicles and loss of characteristic shape. The G1 (Tc) group showed very diffuse and extended follicles with severe hyperplasia of lymphoid follicles (Fig. 5A). In animals from the G2 group (Tc/ASTX supplementation), slight hyperplasia of lymphoid follicles was observed (Fig. 5B). In groups G3 (Tc/ASTX/NFMX) and G4 (Tc/NFMX), as well as in control groups (G5-G8), lymphoid follicles appeared normal, with no pathological changes (Figs. 5C and 5D).
Discussion
Several in vitro research studies have reported that the antioxidants found in some plants might have a detrimental effect on the viability of different parasites [1,19], including Trypanosomatids [29,32,48,49]. In our laboratory, initial in vitro results showed that ASTX induced T. cruzi trypomastigote death in a dose-dependent manner (Fig. 1). Therefore, we wanted to address the question of whether ASTX would be able to control an in vivo T. cruzi infection using a mouse model. Results did not support our hypothesis, since ASTX did not control in vivo parasitemia loads (Fig. 2A), and the infected animals treated only with the antioxidant (G2) died during the acute phase of infection, as occurred with infected animals with no treatment (G1 group). Furthermore, ASTX seemed to interfere with the efficacy of NFMX against the parasites in vivo, since parasitemias observed in animals from group G3 (Tc/ASTX/NFMX) were significantly higher (p < 0.05) and longer-lasting (p < 0.05) than parasitemias found in infected animals from group G4 treated only with NFMX (Fig. 2B). Therefore, also considering the results reported by Wen et al. [51], who found that PBN (phenyl-α-tert-butylnitrone), a synthetic antioxidant, used in Sprague Dawley rats infected with T. cruzi, did not decrease parasite load during the acute phase of infection, it could be concluded that the use of antioxidants is not indicated during this phase of Chagas disease. However, strong antioxidants, such as ASTX, could still be useful during the chronic phase of Chagas disease. This idea is supported by the findings of Maçao et al. [30] and Ribeiro et al. [42], who found that supplementation with vitamins E and C after the use of benznidazole for the treatment of Chagas disease in humans reduced oxidative stress and contributed to minimizing the risk of chagasic cardiomyopathy in chronically infected patients. If we consider that ASTX is a stronger antioxidant than vitamins E and C, and additionally that it has anti-inflammatory and immunomodulatory properties [17,34,40], the question that remains to be answered is whether ASTX supplementation, after the administration of anti-chagasic agents such as benznidazole or nifurtimox during the chronic phase of Chagas disease, would be beneficial to improve chronic chagasic cardiomyopathy. When comparing the histopathological appearance of the left ventricle from animals in groups G1 (Tc) and G2 (Tc/ASTX), it was observed that G2 animals had an increment in the number of focal lymphoplasmacytic infiltrations and necrotic cardiomyocytes, and a lower number of amastigote nests (Figs. 4A, 4B and Table 3). These differences suggest that ASTX had an immunomodulatory effect, which would promote the strong immune reaction observed, accompanied by a lower number of amastigote nests in cardiac tissue. It has been reported that the immunomodulatory properties of ASTX include the stimulated proliferation of T and B lymphocytes and NK cells, production of pro-inflammatory cytokines such as IL-1α and TNF-α, as well as promoting an increment in antibody production against various antigens [4,12,39,40].

Table 2 legend: ASTX, astaxanthin dose (1-40 µg); NFMX, nifurtimox (400 µg); IP, intracellular parasite; EP, extracellular parasite; +, presence; −, absence; Tc, Trypanosoma cruzi; Vc, Vero cell. Integrity of the cell membrane was evaluated through the Trypan blue assay [28,45].
Therefore, it would be interesting to further study whether ASTX could be used as a therapeutic drug in Chagas disease, either in combination with an anti-T. cruzi non-oxidative stress-inducing drug or in combination with antiparasitic (prophylactic or therapeutic) vaccines.
Splenomegaly has been reported in animals and humans infected with T. cruzi. This reaction is related to host inflammatory responses to the parasitic infection [41] and to reactive oxygen species (ROS) generated by neutrophils and macrophages in the spleen [3,4,44], which induce the expression of inflammatory genes that contribute to inflammation [26]. In the present study, animals from non-infected groups had an average spleen size of 1.5 cm with normal histology. In comparison, animals from all infected groups (G1-G4) showed splenomegaly. The average spleen size for groups G1 (Tc) and G2 (Tc/ASTX) (Fig. 3) was 2.3 cm, i.e. 53% larger in comparison with the non-infected control groups. These spleens displayed hyperplasic lymphoid follicles (Fig. 5A). Animals from groups G3 (Tc/ASTX/NFMX) and G4 (Tc/NFMX) showed 33% (2.0 cm) and 20% (1.8 cm) larger spleens than normal animals, respectively (Fig. 3). This inflammatory response could be partially explained by the fact that, before the infection was controlled by NFMX, there was a period when parasites proliferated in the animals and inflammation was triggered.
Oxidative stress is one of the main features of the immune response that is triggered during the development of chagasic cardiomyopathy [20]. Oxidative stress induced by T. cruzi infection in the myocardium can be studied through markers such as MDA [16]. Our results showed statistical differences between serum MDA from infected (G3 and G4 groups) and non-infected animals (G5-G8 groups) (Fig. 6). However, unlike what was expected, no differences were observed in non-infected animals among the groups receiving NFMX, ASTX/NFMX, ASTX, or saline solution. This outcome is difficult to explain, as NFMX was expected to increase MDA values and ASTX to reduce them. A possible explanation could be that the MDA assay used to detect OS was not sensitive enough to identify small differences, and that the effects of NFMX and ASTX on mouse physiology were not large enough to be detected. T. cruzi infection did induce OS, and this was detected by the MDA assay. However, no statistical differences were observed between serum MDA levels from groups G3 and G4. We had hypothesized that animals receiving ASTX would have lower levels of OS [17], but this could not be proven. This outcome could probably also be explained if we assume that the doses of ASTX used in this experiment were not sufficiently high to promote an antioxidant effect detectable by the MDA assay. As a whole, the findings of the present study do not support the idea that ASTX has a positive effect during an acute T. cruzi infection, and the question that remains to be answered is whether ASTX could be used in chronic Chagas infections to possibly improve the results observed with other antioxidants, such as vitamins E and C or synthetic antioxidants such as PBN, in chronically infected chagasic human patients [30] and rats [51], considering that ASTX is a more active antioxidant than those previously described [21].
Conclusions
The use of ASTX during the acute phase of T. cruzi infection is not recommended, whether alone or in combination with therapeutic drugs that induce oxidative stress, such as NFMX. However, the potential beneficial effects of ASTX if used in the chronic phase of Chagas disease, or in combination with non-OS-inducing antiparasitic drugs, or with prophylactic or therapeutic vaccines, remain to be studied.

Figure 6. Malondialdehyde serum levels in animals after experimental Trypanosoma cruzi infection under treatments G3-G8 at the day of sacrifice (60 days post-infection). Each bar represents the mean pmol/mL value ± SD. Statistical differences (p < 0.05) among groups are shown with different characters above the bars. Groups G1 and G2 were not included because the mice died before day 30 post-infection. | 2018-04-03T00:33:58.223Z | 2017-05-31T00:00:00.000 | {
"year": 2017,
"sha1": "96f3ec03d85a057afbab4487d15b7ed73ce5c386",
"oa_license": "CCBY",
"oa_url": "https://www.parasite-journal.org/articles/parasite/pdf/2017/01/parasite170020.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "96f3ec03d85a057afbab4487d15b7ed73ce5c386",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
19924017 | pes2o/s2orc | v3-fos-license | Visual Laser Ablation of the Prostate ( VILAP ) : Experience of the First One Hundred Cases in Jakarta
Transurethral resection of the prostate (TUR P) is still regarded as the standard treatment for BPH, but it still has complications. Visual laser ablation of the prostate (VILAP) using the Neodymium Yttrium Aluminum Garnet laser has been introduced as an alternative treatment for BPH since 1992. We treated 100 cases of BPH by VILAP from April 1993 till May 1994 in Sumber Waras Hospital. The mean age of the patients was 67.8 years, range 41-91 years. Of the 100 patients, sufficient data for analysis were obtained in only 85 patients. There were improvements in the mean symptom score (Madsen Iversen) from 18.3 initially to 8.4 and 6.0 after 1 month and 2 months respectively (p<0.05); mean maximal flow rate increased from 6.7 ml/second to 10.8 ml/second and 13 ml/second after 1 month and 2 months respectively (p<0.05); while residual urine dropped from 107.8 ml initially to 34.4 ml and 11.2 ml at 1 month and 2 months respectively (p<0.05). Complications: postoperative bleeding occurred in 2 patients but no blood transfusion was required; postoperative pain was found in 3 cases, and prolonged retention (more than one week) in 19 cases, and in two cases TUR P was done. Of 74 cases who were still sexually active, retrograde ejaculation was found in 2 cases, impotence in one case, and in 3 cases failure to achieve full rigidity was noticed.
INTRODUCTION
Benign Prostate Hyperplasia (BPH) is the second most frequent disease in the urological clinic in Indonesia.1 Nearly 60% of the time of a practicing urologist is focused on this disease.2 The incidence of BPH is related to age: at the age of 60 years the incidence is about 50%, and around one third of these need treatment.3-7 Transurethral Resection of the Prostate (TUR P) is still regarded as the standard of treatment, with advantages such as fast relief of obstructive symptoms and provision of tissue for pathological specimens, but at the same time it also has many disadvantages, such as being a time-consuming operation, bleeding which may need blood transfusion, clot retention, possible hyponatremia, retrograde ejaculation, and impotence in about 4-40%.3,6,8-10 Therefore, in the last 10 years many other alternative methods of treatment for BPH have been proposed.
Less invasive treatments, such as balloon dilatation and urethral stenting, give only temporary relief of obstruction.16,17 Hyperthermia and thermotherapy give improvement of the obstruction in only 50-60% of the cases.18 The Neodymium Yttrium Aluminum Garnet (Nd YAG) laser was first tried to ablate the prostate gland in 1991, by Roth and Aretz.19 Since the laser beam gives coagulation necrosis, visual laser ablation of the prostate causes only minimal bleeding.20 Because no specimen can be obtained by VILAP, preoperative assessment to exclude malignancy is becoming very important.21,22
To our knowledge, there has been no report on experience with VILAP in Indonesia; we therefore report our first experience in treating 100 cases of BPH by VILAP.
MATERIALS AND METHODS
From April 1993 until May 1994, 100 patients with the diagnosis of BPH have been treated by VILAP in Sumber Waras Hospital, Jakarta, using an Nd YAG laser from a TRIMEDYNE generator, with a Myriad side-firing laser fiber with a diameter of 600 micron.
The inclusion criteria were patients with the diagnosis of BPH, signing informed consent for laser treatment, with a Madsen Iversen symptom score of more than 10, a maximal flow rate of less than 10 ml/second measured by Dantec flowmeter, or a residual urine volume of more than 50 ml measured by post-voiding catheterization, no suspicion of malignancy on digital rectal examination (DRE) and on suprapubic ultrasonography, a serum prostate specific antigen (PSA) level of less than 4 ng/ml, a negative urine culture, and no bladder stone detected by intravenous urography (IVU) and cystoscopy.
Exclusion criteria were urinary tract infection, signs and clinical symptoms of neurological bladder disorders, urethral stricture, bladder stone and bladder diverticulum.
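Read as executable logic, this inclusion screen reduces to a handful of threshold checks. The sketch below encodes our reading of the criteria (in particular, that a low flow rate and a high residual urine volume are alternatives); the record fields and the helper name are hypothetical, introduced only for illustration.

```python
# A minimal sketch of the study's inclusion screen; thresholds follow
# the Materials and Methods, field names are hypothetical.
def eligible_for_vilap(p: dict) -> bool:
    """Return True if a BPH patient meets the stated inclusion criteria."""
    return (
        p["madsen_iversen_score"] > 10
        and (p["qmax_ml_per_s"] < 10 or p["residual_urine_ml"] > 50)
        and p["psa_ng_per_ml"] < 4
        and not p["suspicion_of_malignancy"]   # DRE and ultrasonography
        and p["urine_culture_negative"]
        and not p["bladder_stone"]             # IVU and cystoscopy
        and p["informed_consent_signed"]
    )

patient = {
    "madsen_iversen_score": 18, "qmax_ml_per_s": 6.7,
    "residual_urine_ml": 108, "psa_ng_per_ml": 2.1,
    "suspicion_of_malignancy": False, "urine_culture_negative": True,
    "bladder_stone": False, "informed_consent_signed": True,
}
print(eligible_for_vilap(patient))  # True
```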
Treatment was performed by using a GU cystoscope with a 22 Charrier sheath to introduce the laser fiber, with distilled water as the irrigating fluid. Lasing was done at 10, 2, 4, and 8 o'clock of the endoscopic view, 60 seconds at 60 watts at each point of the lateral lobes; if the prostatic urethra was more than 2.5 cm in length, two rows of lasing points with the same method were done, and for the median lobe two or three lasing points, depending on the size of the median lobe, 30 seconds at 60 watts, were done (see Figure 1). Preoperative assessment consisted of physical examination, including DRE, and measurement of the Madsen Iversen symptom score, maximal flow rate, post-voiding residual urine, serum prostate specific antigen (PSA), intravenous urography (IVU), and suprapubic ultrasonography.
Postoperative assessment was done one month and two months postoperatively; measurements of symptom score, maximal flow rate and post-voiding residual urine were performed, and complications were noted.
Improvement of symptom score, maximal flow rate, and residual urine volume was analyzed by comparing their mean values preoperatively and postoperatively using the Student t test, and p < 0.05 was considered significant.
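As a concrete illustration of this analysis, the sketch below runs a Student t test in Python/SciPy. Whether the original comparison was paired or unpaired is not stated, so the paired variant used here is an assumption, and the symptom-score values are hypothetical.

```python
# A minimal sketch of the pre/post comparison described above using a
# paired Student t test; the score pairs are hypothetical placeholders.
from scipy import stats

pre  = [18, 21, 15, 19, 22, 17, 20, 16]   # Madsen Iversen score, baseline
post = [ 8, 10,  6,  9, 11,  7,  9,  5]   # same patients, 1 month later

t_stat, p_value = stats.ttest_rel(pre, post)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 => significant
```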
RESULTS
From the 100 patients treated by VILAP, the mean age was 67.8 years (range 41 to 91 years). The laser energy applied was between 14.4 and 32.6 kilojoules. The energy was calculated by multiplying the number of seconds by the number of watts used during the treatment. The preoperative mean symptom score was 18.35 with a standard deviation of 4.37, the mean maximal flow rate was 6.62 ml/sec with a standard deviation of 2.16 ml/sec, and the mean residual urine volume was 107.8 ml with a standard deviation of 83.09 ml. At the one-month and two-month follow-ups, only in 85 cases could the data on symptom score, maximal flow rate and residual urine volume be obtained; in 10 cases the patients refused catheterization to measure residual urine, and 5 cases were lost to follow-up. Only in those cases with complete data were the improvements of symptom score, maximal flow rate, and residual urine volume analyzed. From these 85 cases, the mean symptom score, initially 18.3 with a standard deviation of 4.37 preoperatively, decreased to 8.4 with a standard deviation of 3.75 after one month of follow-up (p<0.05) and to 6.0 with a standard deviation of 3.03 after two months (p<0.05). The score after two months of follow-up consisted mostly of irritative symptoms. Mean maximal flow rate increased from 6.7 ml/sec with a standard deviation of 2.16 ml/sec initially to 10.8 ml/sec with a standard deviation of 2.50 ml/sec after one month (p<0.05) and 13 ml/sec with a standard deviation of 1.63 ml/sec after two months (p<0.05), while mean residual urine volume decreased from 107.8 ml with a standard deviation of 83.09 ml initially to 34.4 ml with a standard deviation of 53.63 ml after one month (p<0.05) and 11.2 ml with a standard deviation of 24.01 ml after two months (p<0.05). For all three parameters, the improvements at one month and two months after treatment were significant (p<0.05).
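The energy bookkeeping stated above (watts multiplied by seconds, summed over lasing points) can be written out explicitly. The helper below is ours, with the per-point powers and durations taken from the protocol in Materials and Methods; note that a single row of four lateral-lobe points with no median-lobe lasing reproduces the reported 14.4 kJ minimum.

```python
# A minimal sketch of the delivered-energy calculation; the function
# name and the call pattern are ours, the powers/durations are the
# protocol values given in Materials and Methods.
def delivered_energy_kj(lateral_points: int, median_points: int) -> float:
    lateral_j = lateral_points * 60 * 60   # 60 W for 60 s per lateral point
    median_j = median_points * 60 * 30     # 60 W for 30 s per median point
    return (lateral_j + median_j) / 1000.0

# One row of four lateral-lobe points, no median-lobe lasing:
print(delivered_energy_kj(4, 0))  # 14.4 kJ -- the reported minimum
```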
Complications
Bleeding was observed in two cases: in one of the cases we had to coagulate the bleeding point using a cutting loop, and in the other the bleeding was overcome by putting traction on the balloon-inflated Foley catheter. No blood transfusion was needed in either case. Postoperative pain was observed in three cases, and ketoprofen 100 mg suppository tdd for two days was given.
Prolonged urinary retention (more than seven days) was observed in 19 cases, the longest lasting 11 days. Diversion to TUR P occurred in 2 cases: in one, the median lobe was unintentionally chopped during the lasing process and the free-floating median lobe caused obstruction, so TUR P was performed; in the second case, TUR P was done because the patient came from a remote area and wanted to be free from the suprapubic tube as soon as possible.
No postoperative true incontinence was observed.
Only 74 cases in our series still had concern with sexual life; among them, retrograde ejaculation was observed in 2 cases aged 65 and 51 years, total impotence in 1 case (a 74-year-old patient who had been suffering from diabetes for 10 years), and failure in having full rigidity during erection in 3 cases who preoperatively never had that problem. From those 85 cases with evaluable data, we observed significant improvements in the mean symptom score, which decreased from 18.3 initially to 8.4 and 6.0 at one month and two months respectively; the mean maximal flow rate increased from 6.7 ml/sec initially to 10.8 ml/sec and 13.0 ml/sec, and the mean residual urine volume decreased from 107.8 ml initially to 34.4 ml and 11.2 ml at one and two months respectively. These findings are comparable with the results of Costello et al.8,9 The improvements of all parameters were observed most clearly during the first month, but still continued until two months after treatment (p<0.05); this is consistent with the histological findings reported by Costello,23 namely that the process of coagulation necrosis and tissue sloughing still occurs after one month or more.
In our series, gross hematuria was found in 2 cases (2%), which is much less than the result of Cowles et al.24 but higher if compared to the result of Kabalin et al.,25 who claimed no changes in hematuria. Postoperative pain was observed in three cases (3%), which is also less than the series of Cowles (17.7%), and in those three cases ketoprofen 100 mg suppository was needed. Prolonged retention (more than seven days) was observed in 19 cases (19%), in two of which it lasted until 11 days, and in two cases (2%) diversion to TUR P had to be done: in one case, part of the median lobe was chopped and the free-floating median lobe obstructed the bladder neck and was removed by TUR after being resected into smaller pieces; in the other case, for the practical reason of freeing the patient from cystostomy as soon as possible, since the patient came from a remote area. This number of diversions to TUR P is smaller than in the series of Cowles et al.,24 who found four cases out of 56.
The prolonged retention rate in our series (19%) is lower compared to the laser coagulation therapy group (25%) but higher than the evaporation therapy group (6.25%) reported by Narajan et al. in his comparative study.
Retrograde ejaculation was noted in two cases out of the 74 who still had concern with sexual life, and unfortunately those two cases belonged to the younger age group, 55 and 59 years old. This finding is better than the results reported by Cowles et al.24 and Kabalin et al.,25,26 the latter reporting 27% after three years of follow-up. Total impotence after VILAP was noted in one 74-year-old patient who had been suffering from diabetes mellitus for the last 10 years; in this case, we think that the cause of the total impotence is not only the VILAP. In three cases, failure in having full rigidity during erection was observed two months after VILAP, and the ages of the patients were 55, 56 and 77 years, respectively. It is very difficult to correlate the failure of erection to the age of the patient. This failure-of-erection rate is higher than in the series of Kabalin et al.26 and Narajan et al., who claimed no impotence in their series, but lower than the result of Cowles et al.24

DISCUSSION

VILAP is still relatively a new method of treatment of BPH. As far as we know, no single study on this new technique has been reported in Indonesia. Therefore we think it is necessary to report our results in using VILAP for the treatment of BPH as a first experience in Indonesia.
We were still using the Madsen Iversen scoring system for symptoms, since most of our cases consist of elderly people with a not-so-high educational level, which makes it very difficult for them to make a self-judgement if we use the International Prostate Symptom Score (IPSS).
CONCLUSIONS
In our experience with the treatment of BPH patients by VILAP in our first 100 cases, using the Nd YAG laser resulted in significant improvement of the mean Madsen Iversen symptom score, mean maximal flow rate and mean residual urine volume, one month and two months after the procedure.
Two months post-treatment, the mean symptom score was still 6.0, but it consisted mostly of irritative symptoms.
The adverse effects of VILAP in our series are comparable with those of other investigators.
Based on our first experience, a further study on the long-term results of VILAP should be done to establish the durability of the results, which is very important if we are to offer VILAP as an alternative to TUR P.
From previous reports by different investigators, we found that TUR P gives more morbidity and mortality3,6,8-10 compared with our short-term follow-up of VILAP.
Table 1 .
Age distribution of patients treated by VILAP.
Table 2 .
Results of VILAP using Nd YAG for BPH in 85 evaluable cases.
Table 3 .
Adverse effects of VILAP in 100 patients with BPH.
+ amongst the 74 patients who still had concern with sexual life.
| 2017-05-01T20:54:23.907Z | 1997-10-01T00:00:00.000 | {
"year": 1997,
"sha1": "a0da2214d8119ee4a236634b512606642d10985a",
"oa_license": "CCBYNC",
"oa_url": "http://mji.ui.ac.id/journal/index.php/mji/article/download/834/787",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a0da2214d8119ee4a236634b512606642d10985a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
202138 | pes2o/s2orc | v3-fos-license | High photocatalytic activity of Fe2O3/TiO2 nanocomposites prepared by photodeposition for degradation of 2,4-dichlorophenoxyacetic acid
Two series of Fe2O3/TiO2 samples were prepared via impregnation and photodeposition methods. The effect of preparation method on the properties and performance of Fe2O3/TiO2 for photocatalytic degradation of 2,4-dichlorophenoxyacetic acid (2,4-D) under UV light irradiation was examined. The Fe2O3/TiO2 nanocomposites prepared by impregnation showed lower activity than the unmodified TiO2, mainly due to lower specific surface area caused by heat treatment. On the other hand, the Fe2O3/TiO2 nanocomposites prepared by photodeposition showed higher photocatalytic activity than the unmodified TiO2. Three times higher photocatalytic activity was obtained on the best photocatalyst, Fe2O3(0.5)/TiO2. The improved activity of TiO2 after photodeposition of Fe2O3 was contributed to the formation of a heterojunction between the Fe2O3 and TiO2 nanoparticles that improved charge transfer and suppressed electron–hole recombination. A further investigation on the role of the active species on Fe2O3/TiO2 confirmed that the crucial active species were both holes and superoxide radicals. The Fe2O3(0.5)/TiO2 sample also showed good stability and reusability, suggesting its potential for water purification applications.
Introduction
Photocatalytic reactions have been widely suggested for environmental remediation under mild conditions. In the presence of only a photocatalyst and a light source of appropriate energy, the process can mineralize organic pollutants to harmless products such as carbon dioxide and water. Among the semiconductor photocatalysts, titanium dioxide (TiO 2 ) has been the foremost established material for degradation of organic pollutants [1,2]. In addition to its nontoxicity, abundance and relatively low cost, TiO 2 also shows excellent photocatalytic activity in many degradation reactions. Unfortunately, the photocatalytic performance of TiO 2 is generally restricted by its high charge carrier recombination rate. Therefore, the modification of TiO 2 in order to reduce such recombination remains a critical task. Another important point is the emphasis on using an environmentally safe and sustainable material as the modifier.
As one of the best modifiers, the use of a co-catalyst has been recognized to improve the photocatalytic performance of semiconductor photocatalysts, as it promotes charge separation and suppresses photocorrosion of the semiconductor photocatalyst [3,4]. One of the potential co-catalyst modifiers is iron(III) oxide (Fe 2 O 3 ), which is nontoxic, stable, cost effective and found abundantly in the earth. It has been reported that Fe 2 O 3 can be used to increase the photocatalytic activity or selectivity of semiconductor photocatalysts for degradation of organic pollutants [5-15]. Commonly, the reported methods for the preparation of Fe 2 O 3 /TiO 2 include impregnation [5,6,16-18], sol-gel [7,19], and hydrothermal methods [8-10]. A combination of several processes has also been employed, such as the electrospinning method combined with a hydrothermal approach [11], plasma enhanced-chemical vapor deposition (PE-CVD) combined with a radio frequency (RF) sputtering approach [12], and plasma enhanced-chemical vapor deposition and atomic layer deposition (ALD) followed by thermal treatment [13]. Among these preparation methods, impregnation is a commonly used approach for the preparation of Fe 2 O 3 /TiO 2 as it offers a simple process. However, there are contradicting reports on the performance of Fe 2 O 3 /TiO 2 catalysts prepared by the impregnation method. While some groups reported good photocatalytic activity [5,6], others showed contrasting results [16-18], which have resulted in different opinions regarding the contribution of the Fe 2 O 3 . Since the impregnation method usually involves heat treatment, the properties of TiO 2 such as the anatase/rutile ratio, particle size, and specific surface area may be altered during this process and could influence the photocatalytic activity of TiO 2 [16,17]. Therefore, careful consideration shall be taken before concluding whether the Fe 2 O 3 is beneficial or not in regards to improving the photocatalytic activity of TiO 2 .
Another simple method to produce Fe 2 O 3 /TiO 2 is a mechanochemical milling approach that can be carried out at ambient conditions [14]. Even though high activity was obtained, evidence of the formation of good contact between Fe 2 O 3 and TiO 2 nanoparticles was not provided. Recently, the photodeposition method has been proposed as a suitable method to directly investigate the role of added copper or lanthanum species without such heat-treatment effects [20,21]. Moreover, the modification of TiO 2 nanoparticles by photodeposition resulted in an improved photocatalytic activity as compared to unmodified TiO 2 [20-22]. Therefore, it is meaningful to employ the photodeposition method to prepare Fe 2 O 3 /TiO 2 catalysts without heat treatment at ambient conditions. Using iron(III) nitrate nonahydrate as the precursor, active and stable Fe 2 O 3 /TiO 2 was successfully prepared via photodeposition [15]. However, the actual amount of iron precursor in the prepared Fe 2 O 3 /TiO 2 was much lower than that added. In the present study, Fe 2 O 3 /TiO 2 nanocomposites were prepared by a similar approach but using iron(III) acetylacetonate as the precursor to facilitate a complete photodeposition process. The properties and activity results were compared with those prepared by the commonly used impregnation approach. Furthermore, to the best of our knowledge, there is no study on the activity comparison between Fe 2 O 3 /TiO 2 prepared by the widely used impregnation method and the photodeposition method, which is important to determine the optimal method for the preparation of photocatalyst materials with good properties.
In this study, both impregnation and photodeposition methods were used to modify TiO 2 nanoparticles with Fe 2 O 3 in order to investigate the effect of the preparation method on the properties and photocatalytic activity of the nanocomposites with respect to the degradation of 2,4-dichlorophenoxyacetic acid (2,4-D) under irradiation of UV light. 2,4-D is a herbicide widely utilized in the agricultural industry; it can be found in water sources due to its common use in controlling broadleaf weeds [23]. Excessive exposure to 2,4-D leads to adverse impacts on the ecosystem, and thus, the toxic organic pollutant must be eliminated from the water source utilizing efficient approaches. Various removal methods of 2,4-D have been developed, including adsorption [24], biodegradation [25], ozonation [26], and photocatalytic degradation [15,20-22,27-32], of which the latter has been recognized for its capability to decompose the organic pollutant under a mild environment. In the present work, it was shown that the different preparation methods resulted in distinctly different properties and photocatalytic activity. The better properties and the improved activity of Fe 2 O 3 /TiO 2 nanocomposites prepared by photodeposition for the degradation of 2,4-D are discussed. In addition to identifying the charge transfer capability of the Fe 2 O 3 /TiO 2 catalyst for improved photocatalytic activity, the role of the active species on the Fe 2 O 3 /TiO 2 nanocomposites prepared by the photodeposition method was further investigated in order to understand the important active species contributing to the photocatalytic activity.
Photocatalytic activity comparison
The photocatalytic efficiency of the Fe 2 O 3 /TiO 2 nanocomposites prepared by impregnation was evaluated for the removal of 2,4-D under UV light illumination at room temperature for 1 h. Under the same conditions, it was confirmed that no photolysis of 2,4-D was obtained without photocatalyst. After adsorption-desorption equilibrium was achieved in 1 h, adsorption experiments were conducted in the absence of light for another 1 h. Related to the following sample descriptions, NT represents no treatment, IM indicates the samples were prepared by impregnation, PD indicates samples were prepared by photodeposition, and T indicates an additional heat treatment was carried out. Figure 1A demonstrates that the TiO 2 (NT) sample gave 30% adsorption of 2,4-D. After heat treatment at 500 °C, the adsorption of 2,4-D on the samples was greatly suppressed. All the TiO 2 (IM_T) and Fe 2 O 3 /TiO 2 (IM) nanocomposites showed 2,4-D adsorption of 2-3%. The photocatalytic activity of each photocatalyst was determined after exclusion of 2,4-D adsorption, and the results are shown in Figure 1B. There was no significant difference observed between the TiO 2 (NT) and the TiO 2 (IM_T), which showed 2,4-D removal of 78 and 76%, respectively. Introducing various amounts of Fe 2 O 3 on the TiO 2 material via impregnation did not improve the photocatalytic activity of the TiO 2 . With increased loading of Fe 2 O 3 , the photocatalytic performance of TiO 2 in fact decreased. As another control experiment, α-Fe 2 O 3 synthesized at 500 °C for 4 h was also tested for the removal of 2,4-D. The removal of 2,4-D using α-Fe 2 O 3 was only 2% after 1 h of UV illumination, which might be due to the fast charge recombination in hematite [13,15,33].
In contrast to the samples synthesized by the impregnation method, a high adsorption of 2,4-D at 25-30% was still achieved on the photodeposition-synthesized samples, as shown in Figure 2A. Only a slight decrease in adsorption was obtained with increasing Fe/Ti ratio, suggesting that the adsorption sites were not covered by the deposition of Fe 2 O 3 . Figure 2B shows the photocatalytic performance of the TiO 2 and the Fe 2 O 3 /TiO 2 (PD) nanocomposites after the exclusion of the 2,4-D adsorption. No significant difference in the activity was obtained for the TiO 2 (NT) and the TiO 2 (PD_T), which showed 2,4-D removal of 78 and 76%, respectively. This result clearly demonstrated that, in contrast to the heat treatment, the photodeposition treatment did not alter the photocatalytic performance of TiO 2 . It is worth noting that after the Fe species were photodeposited on the TiO 2 , all the nanocomposites gave superior activity as compared to that of unmodified TiO 2 . The Fe/Ti ratio of 0.5 mol % was found to be the optimum loading, at which the Fe 2 O 3 (0.5)/TiO 2 (PD) sample showed the highest removal of 88% after 1 h of irradiation. These results showed that different synthesis methods lead to different photocatalytic performance. The photocatalysts prepared by photodeposition showed superior performance compared to those prepared by the impregnation method.
Properties comparison
The structural, optical, and physical properties of the Fe 2 O 3 /TiO 2 photocatalysts synthesized by impregnation and photodeposition were investigated and compared to clarify the characteristic differences of the photocatalysts obtained from the different preparation methods. X-ray diffraction (XRD) patterns were collected for the Fe 2 O 3 /TiO 2 (IM) series prepared by the impregnation method. TiO 2 (NT) exhibited diffraction peaks corresponding to the anatase phase (JCPDS file No. 21-1272), which were observed at 2θ of 25.35, 38.10, 48.05, 54.55, and 62.60°, corresponding to the (101), (004), (200), (105), and (204) diffraction planes, respectively (see Supporting Information File 1, Figure S1). After heat treatment, the TiO 2 (IM_T) sample showed improved crystallinity without any changes in the structural phase, which was found to be pure anatase. After the addition of Fe species, the crystallinity of the Fe 2 O 3 /TiO 2 (IM) nanocomposites did not change and was confirmed to be similar to that of the TiO 2 (IM_T) sample. The characteristic diffraction peaks corresponding to the anatase phase of TiO 2 remained in all samples without any peak shifting. Furthermore, no new diffraction peaks of α-Fe 2 O 3 (JCPDS file No. 33-0664) were identified, suggesting that the low loading of Fe 2 O 3 might be dispersed well on the surface of the TiO 2 .
The Scherrer equation was used to calculate the crystallite size of the samples based on the (101) peak at 2θ of 25.35°. As listed in Table 1, the crystallite size of the TiO 2 (NT) was initially 9.3 nm (Table 1, entry 1). After heat treatment, the crystallite size of TiO 2 (IM_T) increased to 14.3 nm (Table 1, entry 2). The addition of Fe 2 O 3 did not further influence the crystallite size. All the Fe 2 O 3 /TiO 2 (IM) nanocomposites had a crystallite size in the range of 14.3-15.9 nm (Table 1, entries 3-7), which was close to that of the TiO 2 (IM_T). Since there was not much difference in crystallinity and crystallite size between the TiO 2 (IM_T) and the Fe 2 O 3 /TiO 2 (IM), it was suggested that the improved crystallinity and larger crystallite size as compared to TiO 2 (NT) were mostly due to the heat treatment only and not to the addition of Fe 2 O 3 .
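The Scherrer estimate referred to above can be reproduced with a few lines of Python. The Cu Kα wavelength is the value given in the characterization section; the shape factor K = 0.9 and the peak width are illustrative assumptions, since the measured FWHM values are not reported.

```python
# A minimal sketch of the Scherrer crystallite-size estimate; K and the
# FWHM are assumed values, the wavelength is Cu K-alpha as in the text.
import math

def scherrer_size_nm(two_theta_deg: float, fwhm_deg: float,
                     wavelength_nm: float = 0.15406, k: float = 0.9) -> float:
    theta = math.radians(two_theta_deg / 2)
    beta = math.radians(fwhm_deg)          # peak broadening in radians
    return k * wavelength_nm / (beta * math.cos(theta))

# Anatase (101) peak at 2-theta = 25.35 deg with an assumed 0.9 deg FWHM:
print(f"{scherrer_size_nm(25.35, 0.9):.1f} nm")  # ~9.0 nm
```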
The XRD patterns of the Fe 2 O 3 /TiO 2 (PD) series that was synthesized by the photodeposition method were also recorded (see Supporting Information File 1, Figure S2). Different from the case of heat treatment with the impregnation method, the photodeposition treatment did not change the crystallinity of either the TiO 2 (PD_T) or the Fe 2 O 3 /TiO 2 (PD) nanocomposites. The absence of peak shifting and of new diffraction peaks suggested good dispersion of the Fe species on the surface of the TiO 2 . The crystallite sizes of the Fe 2 O 3 /TiO 2 (PD) are given in Table 1. All samples have a crystallite size in the range of 8.8-9.3 nm (Table 1, entries 8-13), suggesting that the crystallite size was not altered by the photodeposition method.
Comparing the two synthesis methods, it was obvious that the photodeposition method maintained both the crystallinity and the crystallite size of the TiO 2 , while the impregnation method led to higher crystallinity and a larger crystallite size. This difference was caused by the different preparation conditions: the photodeposition was conducted under mild synthesis conditions under illumination of UV light at room temperature, whereas a high heating temperature of 500 °C was used during the impregnation method.
The optical absorption properties of the nanocomposites prepared by the impregnation method were investigated (see Supporting Information File 1, Figure S3). The TiO 2 (NT) sample absorbs light in the UV region and exhibits a characteristic band for TiO 2 at about 370 nm due to the charge transfer of O 2− →Ti 4+ and electron excitation from the valence band (VB) to the conduction band (CB) [7,20,21]. Neither the heat treatment nor the addition of Fe species affected the light absorption of the TiO 2 (NT) in the UV and visible regions. Owing to the low loading of Fe, there was no additional absorption peak corresponding to the Fe species. The bandgap energies (E g ) of the unmodified TiO 2 and the nanocomposites were studied by a Tauc plot, considering the indirect transition in anatase TiO 2 [34]. The Tauc plot of the TiO 2 (NT) and the Fe 2 O 3 /TiO 2 (IM) nanocomposites was derived by plotting (αhv) 1/2 versus hv. The E g value was obtained from the x-intercept using linear extrapolation in the plot. Table 1 summarizes the E g values of the samples. The TiO 2 (NT) sample has an E g of 3.30 eV (Table 1, entry 1). The heat-treated TiO 2 (IM_T) showed an E g value of 3.29 eV (Table 1, entry 2), close to the value of the TiO 2 (NT), indicating that the high calcination temperature of 500 °C did not affect the optical properties of the TiO 2 . The addition of Fe species did not result in significant changes to the E g of the TiO 2 ; increasing the Fe/Ti ratio from 0.1 to 1 mol % only slightly reduced the E g from 3.29 to 3.25 eV (Table 1, entries 3-7). The insignificant change in the E g suggested that the Fe species might be loaded on the surface instead of incorporated into the TiO 2 lattice. The obtained results matched well with the nanocomposite prepared via adsorption and decomposition of the Fe(III) complex at 400 °C [5]. This is in contrast to the one prepared by the sol-gel method, which showed an obviously reduced E g value as the Fe ions were incorporated into the TiO 2 lattice [7,19].
Diffuse reflectance (DR) UV-vis spectra and Tauc plots of the nanocomposites prepared by the photodeposition method were also measured (see Supporting Information File 1, Figure S4). Similar to the nanocomposites prepared by the impregnation method, the photodeposition treatment and the addition of Fe species did not much affect the light absorption or the E g of either the TiO 2 (PD_T) or the Fe 2 O 3 /TiO 2 (PD) samples. Besides, the slightly decreased E g from 3.28 to 3.24 eV (Table 1, entries 9-13) also suggested that the Fe species might be loaded on the surface of the TiO 2 via photodeposition.
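The Tauc extrapolation used for both sample series can be sketched as a simple linear fit. In the minimal Python example below, the (αhν)^(1/2) points are hypothetical digitized values for the linear region, not the paper's spectra; E g is read off as the x-intercept.

```python
# A minimal sketch of the Tauc extrapolation for an indirect-gap
# material: fit the linear region of (alpha*h*nu)^(1/2) vs h*nu and
# take the x-intercept as Eg. The arrays are hypothetical points.
import numpy as np

hv = np.array([3.35, 3.40, 3.45, 3.50, 3.55])        # photon energy (eV)
tauc_y = np.array([0.10, 0.22, 0.34, 0.46, 0.58])    # (alpha*hv)^(1/2)

slope, intercept = np.polyfit(hv, tauc_y, 1)
eg = -intercept / slope                               # y = 0 crossing
print(f"Eg ~ {eg:.2f} eV")                            # ~3.31 eV here
```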
The amount of Fe loaded on the Fe 2 O 3 /TiO 2 nanocomposites was determined by inductively coupled plasma optical emission spectrometry (ICP-OES), as listed in Table 2. The Fe/Ti composition (mol %) obtained from the measurement confirmed that the Fe content loaded on the TiO 2 was close to the nominal added amount. These results clearly suggested that in the given range of Fe loading (0.1-1 mol %), all the iron precursor was successfully photodeposited onto the TiO 2 .
The Brunauer-Emmett-Teller (BET) specific surface areas of the TiO 2 and the Fe 2 O 3 /TiO 2 nanocomposites prepared by the impregnation and the photodeposition methods are shown in Figure 3. The TiO 2 (NT) has a large specific surface area of 298 m 2 /g. After calcination at 500 °C, the specific surface area of the TiO 2 (IM_T) dropped drastically to 80 m 2 /g. The addition of Fe 2 O 3 to TiO 2 via the impregnation method did not significantly change the specific surface area of the TiO 2 (IM_T), given that all nanocomposites have values in the range of 72-80 m 2 /g. This result obviously showed that it was the heat treatment and not the Fe 2 O 3 addition that caused the decrease in the BET specific surface area.
In contrast to the nanocomposites prepared by the impregnation method, only a slight gradual decrease was observed with increasing Fe/Ti ratio in the Fe 2 O 3 /TiO 2 nanocomposites prepared by the photodeposition method. The nanocomposite sample with the lowest Fe/Ti ratio of 0.1 mol % still showed a large surface area of 297 m 2 /g, while the nanocomposite sample with the highest Fe/Ti ratio of 1 mol % showed a value of 265 m 2 /g. These results again confirmed that the mild photodeposition method did not influence the properties of the TiO 2 (NT).
As shown in Figure 1 and Figure 2, nanocomposites synthesized by the photodeposition method exhibited superior adsorption and photocatalytic activity compared with those synthesized by the impregnation method. The higher percentage of 2,4-D adsorption could result from the larger BET specific surface area of the samples prepared by the photodeposition method. As for the photocatalytic activity, a few important parameters have been reported to contribute to a high photocatalytic activity, including high crystallinity [35], small crystallite size [36], and high specific surface area [30,36]. Generally, materials with high crystallinity have fewer crystal defects, while a smaller crystallite size decreases the diffusion path length of the charge carriers; these two parameters can suppress recombination of photogenerated electrons and holes. On the other hand, materials with a large specific surface area have many available surface active sites for the reaction to take place, which can lead to high photocatalytic activity. In the case of nanocomposites prepared by the impregnation method, even though improved crystallinity was observed, it might be compensated by the larger crystallite size and the lower specific surface area, which overall led to decreased photocatalytic activity. Since the photodeposition method did not have a great influence on the crystallinity, crystallite size, or the BET specific surface area, the effects caused by such changes can be avoided, and the main factor contributing to the photocatalytic activity can be narrowed down solely to the added Fe species.
Improved properties
Since nanocomposites synthesized by the photodeposition method showed better photocatalytic activity than the nanocomposites synthesized by the impregnation method, further detailed investigations were carried out on the nanocomposites synthesized by the photodeposition method. Transmission electron microscopy (TEM) and high-resolution TEM (HRTEM) images of both unmodified TiO 2 (NT) and Fe 2 O 3 (0.5)/TiO 2 (PD) are shown in Figure 4. As shown in Figure 4a, the TiO 2 (NT) sample has spherical particles with a diameter of 7-9 nm. This result agreed well with the crystallite size calculated by the Scherrer equation previously discussed. The HRTEM image of the TiO 2 (NT) sample displayed in Figure 4b shows a lattice fringe spacing of 0.35 nm attributed to the anatase TiO 2 (101) crystal plane. Figure 4d shows a HRTEM image of Fe 2 O 3 (0.5)/TiO 2 (PD). It was evident that the deposition of Fe did not change the morphology of the TiO 2 . Since a lattice fringe spacing of 0.27 nm related to the Fe 2 O 3 (104) crystal plane was observed, the possible formation of a heterojunction between Fe 2 O 3 and TiO 2 was considered. Such close contact would shorten the carrier diffusion length and, in turn, improve the charge transfer; this would thus suppress charge recombination, which is crucial to enhance the photocatalytic activity. The formation of Fe 2 O 3 was in good agreement with other reported photodeposition methods using a different iron precursor, Fe(III) nitrate nonahydrate [15]. Due to the oxidative conditions during the synthesis process, the Fe(III) acetylacetonate precursor could be decomposed to Fe 2 O 3 , e.g., by the photogenerated oxygen radicals [21]. It was demonstrated that the use of the Fe(III) acetylacetonate precursor led to a complete photodeposition to form Fe 2 O 3 , as also supported by the ICP-OES results discussed above.
The improved charge transfer of the Fe 2 O 3 (0.5)/TiO 2 (PD) sample was further clarified using electrochemical impedance spectroscopy (EIS). Figure 5 shows the Nyquist plots of the unmodified TiO 2 (NT) and Fe 2 O 3 (0.5)/TiO 2 (PD) samples. The arc radius of the Nyquist plot reflects the impedance of the interface layer arising at the electrode surface; the smaller the arc radius, the better the charge transfer [37]. It is worth noting here that the Fe 2 O 3 (0.5)/TiO 2 (PD) material has a smaller arc radius than unmodified TiO 2 . These results clearly suggest that the Fe 2 O 3 (0.5)/TiO 2 (PD) material has a lower impedance than unmodified TiO 2 , indicating enhanced conductivity of TiO 2 after photodeposition of Fe 2 O 3 . The electron transfer kinetics of a material can be calculated using Equation 1:

k = RT/(n 2 F 2 A R ct C°) (1)

where k is the heterogeneous electron-transfer rate constant, R is the gas constant, T is the temperature (K), n represents the number of transferred electrons per molecule of the redox probe, F is the Faraday constant, A is the electrode area (cm 2 ), R ct is the charge transfer resistance that can be obtained from the fitted Nyquist plot, and C° is the concentration of the redox couple in the bulk solution (ferricyanide/ferrocyanide) [38]. From the fitted impedance data shown in Figure 5, the Fe 2 O 3 (0.5)/TiO 2 (PD) material gave an R ct value of 2.87 kΩ, which was smaller than that of unmodified TiO 2 (NT) with R ct = 3.40 kΩ. The lower R ct value obviously suggested that the Fe 2 O 3 (0.5)/TiO 2 (PD) has faster electron transfer. For comparison, a Nyquist plot and emission spectrum of the Fe 2 O 3 (0.1)/TiO 2 (IM) material were also measured and are given in Supporting Information File 1, Figures S5 and S6, respectively. It was clear that the Fe 2 O 3 (0.1)/TiO 2 (IM) had a smaller arc radius of the Nyquist plot and a slightly lower emission intensity than the TiO 2 (NT), suggesting that the Fe 2 O 3 (0.1)/TiO 2 (IM) has better charge transfer and suppressed electron-hole recombination. Unfortunately, these better properties did not promote the photocatalytic activity of the Fe 2 O 3 (0.1)/TiO 2 (IM); it turns out that the photocatalytic activity of Fe 2 O 3 (0.1)/TiO 2 (IM) was more influenced by the distinct decrease in the specific surface area, as discussed previously.
Photoluminescence has been associated with electron-hole recombination of a photocatalyst [39]. In this study, the ability of the Fe 2 O 3 co-catalyst to accept photogenerated electrons as well as to suppress the recombination of electrons and holes on the TiO 2 was supported by the fluorescence spectroscopy results. The emission spectra of the unmodified TiO 2 (NT) and the Fe 2 O 3 (0.5)/TiO 2 (PD) samples under a fixed excitation wavelength of 218 nm are shown in Figure 6. TiO 2 exhibited three emission peaks at 407, 466 and 562 nm. The emission at 407 nm could be caused by the radiative recombination of self-trapped excitons, while the peaks at 466 and 562 nm were attributed to the charge transfer of an oxygen-vacancy-trapped electron. The obtained results agreed well with the reported literature [39]. The Fe 2 O 3 (0.5)/TiO 2 (PD) material showed a decreased emission intensity as compared to the unmodified TiO 2 (NT), suggesting that the photogenerated electrons on TiO 2 could be transferred to and trapped by Fe 2 O 3 . This resulted in a suppression of the electron-hole recombination on TiO 2 , which led to the improved removal of 2,4-D.
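Equation 1 can be evaluated directly with the fitted R ct values quoted above. In the sketch below, R, T, n, F and C° follow the text, while the working-electrode area A is an assumed value for a typical screen-printed electrode, since it is not reported.

```python
# A minimal sketch evaluating Equation 1 with the fitted Rct values;
# the electrode area A is an assumption, all other inputs follow the text.
R = 8.314        # gas constant, J/(mol K)
T = 298.0        # K (room temperature, assumed)
n = 1            # electrons per ferri/ferrocyanide redox event
F = 96485.0      # Faraday constant, C/mol
A = 0.126        # cm^2, assumed SPE working-electrode area
C = 2.5e-6       # mol/cm^3 (2.5 mM redox probe)

def k_et(r_ct_ohm: float) -> float:
    """Heterogeneous electron-transfer rate constant, cm/s."""
    return R * T / (n**2 * F**2 * A * r_ct_ohm * C)

print(f"{k_et(2870):.2e} cm/s vs {k_et(3400):.2e} cm/s")
# smaller Rct (Fe2O3(0.5)/TiO2) -> larger k -> faster electron transfer
```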
Active species and stability
It has been reported that the reaction pathways for photocatalytic oxidation of organic pollutants are dominated by several active species, such as holes, superoxide radicals, and hydroxyl radicals [39]. Among the scavengers of active species, ammonium oxalate has been reported as an efficient hole scavenger [40], benzoquinone acts to scavenge superoxide radicals efficiently [40], and tert-butanol reacts quickly with hydroxyl radicals [27,40]; hence, these were selected for the scavenger studies. As shown in Figure 7, the photocatalytic reactions under 1 h of UV illumination were evaluated in the presence of each scavenger on the unmodified TiO 2 (NT) and the Fe 2 O 3 (0.5)/TiO 2 (PD). For the reaction conducted on the unmodified TiO 2 (NT), the addition of ammonium oxalate was found to drastically suppress the activity, which was reduced from 78 to 13%, equivalent to 5.8 times lower than without scavenger. The inhibited activity indicated the importance of the photogenerated holes for the oxidation of 2,4-D.
When benzoquinone was added, the activity was reduced from 78 to 66%, suggesting that superoxide radicals also played a role in the oxidation process of 2,4-D. In contrast, the addition of tert-butanol did not affect the activity of the TiO 2 (NT), indicating that hydroxyl radicals are not important active species for the reaction.
Since the photogenerated holes on the TiO 2 have strong oxidizing power among the oxidizing species [41], it is reasonable that holes are the most dominant active species in the oxidation of 2,4-D. Moreover, it has been reported that the oxidation of 2,4-D via a direct-holes mechanism is favored at pH 3 [27]. In this study, the initial pH of the 2,4-D solution was confirmed to be 3.2. On the other hand, superoxide radicals could also be easily formed for the oxidation reaction, since the reaction was conducted in an open reactor, whereby the reduction of oxygen can easily take place. Related to the formation of hydroxyl radicals, it has been revealed that more hydroxyl radicals are formed from the adsorbed hydroxide ions with increased pH [28,42]. Therefore, it is likely that under the present conditions they did not contribute as active species, probably due to their low formation.
The scavenger study was also conducted using the Fe 2 O 3 (0.5)/TiO 2 (PD), as shown in Figure 7. It was clear that the Fe 2 O 3 (0.5)/TiO 2 (PD) gave a similar trend of activity as that obtained on the unmodified TiO 2 (NT). Both the photogenerated holes and superoxide radicals were important species, while hydroxyl radicals did not have much influence on the photocatalytic oxidation of 2,4-D. As compared to the unmodified TiO 2 (NT), the Fe 2 O 3 (0.5)/TiO 2 (PD) showed a more drastic reduction in the activity when the reactions were conducted in the presence of hole and superoxide radical scavengers: the activity decreased 8.8 and 1.4 times, respectively, as compared to 5.8 and 1.2 times, respectively, on TiO 2 (NT). Such a result suggested the crucial role of Fe 2 O 3 as a co-catalyst to improve the interfacial charge transfer and suppress electron-hole recombination. This leads to the formation of more photogenerated holes and superoxide radicals, which contributed to the improved photocatalytic activity, as was also supported by the HRTEM, EIS and fluorescence spectroscopy results.
The stability of the Fe 2 O 3 (0.5)/TiO 2 (PD) sample was investigated by performing several cycles of photocatalytic reactions under UV light irradiation for 1 h each. The Fe 2 O 3 (0.5)/TiO 2 (PD) sample gave a comparable activity in the range of 82-88% even after 3 cycles of reactions, suggesting the good photostability of the Fe 2 O 3 (0.5)/TiO 2 (PD) nanocomposite and its potential application for photocatalytic water purification.
Degradation and proposed mechanism
After the photocatalytic reactions on all samples, the formation of a 2,4-dichlorophenol (2,4-DCP) intermediate was observed from the HPLC analysis, which was in good agreement with reported studies [15,28-32]. The 2,4-D degradation was then determined by Equation 2. The best photocatalyst, Fe 2 O 3 (0.5)/TiO 2 (PD), showed a 2,4-D degradation of 18%, which was three times higher than the unmodified TiO 2 (NT). Such enhanced performance was only slightly higher than that reported when using a Fe(III) nitrate nonahydrate precursor, which gave more than two times higher activity than the bare TiO 2 [15].
The photocatalytic oxidation of 2,4-D by active species involves various steps, including the formation of intermediates before its mineralization to CO 2 and H 2 O. Decarboxylation has been reported as the initial step during the photocatalytic oxidation of 2,4-D when it is carried out at pH 3 [27]. Benzene ring opening and hydrocarbon chain breaking then take place, which finally lead to the formation of CO 2 [29]. Since 2,4-DCP was detected as the dominant intermediate after the photocatalytic reactions, it could be suggested that 2,4-D was first oxidized by the active species (photogenerated holes and superoxide radicals) before decarboxylation and the formation of 2,4-DCP. The dechlorination of 2,4-DCP then took place, leading to ring opening, hydrocarbon chain breaking, and finally, mineralization to CO 2 and H 2 O (see Supporting Information File 1, Figure S7).
The mechanism of the major charge transfer pathways on the Fe 2 O 3 (0.5)/TiO 2 (PD) was also proposed and is shown in Figure 9. When the photocatalyst is exposed to UV light, photogenerated electrons are excited from the VB to the CB of TiO 2 , while photogenerated holes are left in the VB. The photogenerated electrons could reduce oxygen to form superoxide radicals, while the holes could directly oxidize 2,4-D to 2,4-DCP before its mineralization. The presence of Fe 2 O 3 reduces electron-hole recombination on the TiO 2 . Since the CB edge energy level of Fe 2 O 3 (−4.78 eV relative to the absolute vacuum scale (AVS)) is lower than that of TiO 2 (−4.21 eV relative to AVS) [43], Fe 2 O 3 could act as an electron trapper that captures the photogenerated electrons from the TiO 2 that were not used for the reduction of oxygen, instead of letting them recombine with holes. Such electron transfer could suppress charge recombination on TiO 2 [5,10,12,14,15], whereby the oxidation of 2,4-D could still occur in the VB of TiO 2 , and therefore, the photocatalytic degradation of 2,4-D could be improved. On the other hand, owing to the fast recombination of holes and electrons, the photocatalytic degradation of 2,4-D on bare Fe 2 O 3 was negligible (1%). The oxidation of 2,4-D is unlikely to take place in the valence band of Fe 2 O 3 , and this would be a very minor pathway. Similar mechanisms have also been reported elsewhere [15].
Conclusion
Two series of Fe 2 O 3 /TiO 2 nanocomposites were prepared by the impregnation and the photodeposition methods. The Fe 2 O 3 /TiO 2 nanocomposites prepared by the impregnation method showed less activity than the unmodified TiO 2 (NT), which was mainly due to the lower specific surface area caused by heat treatment. On the other hand, all the Fe 2 O 3 /TiO 2 nanocomposites prepared by the photodeposition method exhibited superior photocatalytic activity as compared to the unmodified samples. The good photocatalytic activity of the nanocomposites was associated with the formation of a heterojunction between Fe 2 O 3 and TiO 2 nanoparticles that promoted good charge transfer and suppressed electron-hole recombination. Scavenger studies showed that the photogenerated holes and superoxide radicals were the important active species in the reaction. The Fe 2 O 3 (0.5)/TiO 2 material showed excellent stability and reusability for the removal of 2,4-D. Among the nanocomposites, the Fe 2 O 3 (0.5)/TiO 2 sample showed the best activity, exhibiting 18% degradation of 2,4-D after 1 h of reaction, corresponding to three times higher activity compared to unmodified TiO 2 .
Sample preparation
The TiO 2 material used in this study was from the commercial supplier Hombikat, UV100 TiO 2 . The Fe 2 O 3 used as a control was prepared by direct calcination of Fe(III) acetylacetonate under air atmosphere at 500 °C for 4 h. Two series of Fe 2 O 3 /TiO 2 nanocomposites were prepared by impregnation and photodeposition methods. For the synthesis of the nanocomposites via the impregnation method, appropriate amounts of Fe(III) acetylacetonate with varying mole percentages (mol %) of Fe/Ti of 0.1, 0.25, 0.5, 0.75 and 1 mol % were first dissolved in mixed solvents of water and ethanol (20 mL). Then, the commercial Hombikat UV100 TiO 2 (1 g) was dispersed in the Fe(III) acetylacetonate solution for 10 min by an ultrasonicator. The mixture was stirred and heated at 80 °C until all solvents were completely evaporated. The dried solid powder was then ground and calcined at a temperature of 500 °C for 4 h. The prepared samples were labelled as Fe 2 O 3 (x)/TiO 2 (IM), where x relates to the loading of Fe/Ti in mol %. Bare TiO 2 with a similar heat treatment without the addition of the iron precursor was also prepared and denoted as TiO 2 (IM_T), while the TiO 2 without any pretreatment was denoted as TiO 2 (NT).
For the synthesis of the nanocomposites via the photodeposition method [20-22], appropriate amounts of Fe(III) acetylacetonate with various mole percentages of Fe/Ti (0.1, 0.25, 0.5, 0.75 and 1 mol %) were first dissolved in mixed solvents of water and ethanol (20 mL) by ultrasonication for a few minutes. Then, the commercial Hombikat UV100 TiO 2 (1 g) was dispersed in the Fe(III) acetylacetonate solution by ultrasonic mixing for 10 min. The mixture was then stirred and irradiated under a 200 W Hg-Xe lamp (Hamamatsu, light intensity of 8 mW/cm 2 at 365 nm) at room temperature for 5 h. The solid was washed a few times with ethanol followed by deionized water before drying overnight inside an oven at 80 °C. Finally, the obtained solid powder was ground. The prepared samples were denoted as Fe 2 O 3 (x)/TiO 2 (PD), where x relates to the loading of Fe/Ti (in mol %). Bare TiO 2 undergoing a similar photodeposition treatment without the addition of the iron precursor was also produced and was denoted as TiO 2 (PD_T).
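The precursor weighing implied by these recipes is a one-line mole calculation. The sketch below estimates how much Fe(III) acetylacetonate gives a target Fe/Ti mole ratio per gram of TiO 2 ; the molar masses are standard values and the helper name is ours.

```python
# A minimal sketch of the precursor-mass calculation; molar masses are
# standard values, the function name is ours.
MW_TIO2 = 79.87      # g/mol
MW_FEACAC3 = 353.17  # g/mol, Fe(C5H7O2)3

def precursor_mass_g(fe_ti_mol_percent: float, tio2_g: float = 1.0) -> float:
    mol_ti = tio2_g / MW_TIO2
    mol_fe = mol_ti * fe_ti_mol_percent / 100.0
    return mol_fe * MW_FEACAC3

for x in (0.1, 0.25, 0.5, 0.75, 1.0):
    print(f"Fe/Ti = {x:>4} mol % -> {precursor_mass_g(x)*1000:.1f} mg")
# e.g., 0.5 mol % per gram of TiO2 requires about 22.1 mg of precursor
```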
Sample characterization
A Bruker D8 Advance diffractometer was used to measure the XRD patterns of the TiO 2 and the Fe 2 O 3 /TiO 2 samples prepared by both impregnation and photodeposition methods using a Cu Kα radiation source (λ = 0.15406 nm) at 40 kV and 40 mA. A Shimadzu UV-2600 DR UV-vis spectrophotometer was used to record the absorption spectra of the samples, in which barium sulfate (BaSO 4 ) was used as a reference. The elemental compositions (Fe, Ti) of the Fe 2 O 3 /TiO 2 (PD) nanocomposites were determined using an Agilent 700 series ICP-OES. The adsorption of nitrogen gas on the samples was measured at 77 K on a Quantachrome Novatouch LX4 instrument in order to determine the BET specific surface area of the samples.
TEM and HRTEM were performed on a JEOL JEM-2100 electron microscope with an electron acceleration energy of 200 kV. EIS measurements were performed on a Gamry Interface 1000 potentiostat/galvanostat/ZRA. For the measurements of EIS, a screen printed electrode (SPE, DropSens) was used and prepared as follows. The photocatalyst sample (10 mg) was dispersed in water (6 mL) and the mixture was homogeneously mixed in an ultrasonic bath for 15 min. The mixture (20 µL) was then dropped onto the working electrode of the SPE, followed by immersion of the SPE in 6 mL of electrolyte, which was a mixture of sodium sulfate (0.1 M) and potassium ferricyanide (2.5 mM). The frequency range was set to 1 MHz to 100 mHz. A simplex model program (Gamry Echem Analyst) was selected to fit the obtained Nyquist plot by using a constant phase element (CPE) with diffusion as the equivalent circuit model. The emission sites of the samples were investigated using a JASCO FP-8500 spectrofluorometer, in which the excitation wavelength was fixed at 218 nm. The reproducibility of the emission spectra measurements was around 4%.
Photocatalytic tests
The photocatalytic activity of the Fe 2 O 3 /TiO 2 nanocomposites prepared by both photodeposition and impregnation methods was tested for the removal of 2,4-D under irradiation of UV light for 1 h. The photocatalyst (50 mg) was dispersed in a 2,4-D solution (0.5 mM, 50 mL) and stirred for 1 h in the dark to achieve adsorption-desorption equilibrium. The photocatalytic reaction was then conducted under irradiation of a 200 W Hg-Xe lamp (Hamamatsu, light intensity of 8 mW/cm 2 at 365 nm) for 1 h at room temperature. After each reaction, the solution was separated from the photocatalyst by using a membrane filter. The concentration of 2,4-D was determined using a high-performance liquid chromatography instrument (Shimadzu, Prominence LC-20A with Hypersil Gold PFP column), which was monitored at a wavelength of 283 nm. The percentage of 2,4-D removal was determined following Equation 3:

removal (%) = [(C o − C t )/C o ] × 100 (3)

where C o is the initial concentration of 2,4-D after reaching adsorption-desorption equilibrium under dark conditions, while C t is the remaining concentration of 2,4-D after the reaction. Further investigation of the role of the active species contributing to the removal of 2,4-D was carried out on the Fe 2 O 3 (0.5)/TiO 2 (PD) nanocomposite, which showed the best photocatalytic activity. Ammonium oxalate, benzoquinone, and tert-butanol were used as scavengers for photogenerated holes, superoxide radicals and hydroxyl radicals, respectively. The scavenger was introduced to the 2,4-D solution in the presence of the photocatalyst at a 1:1 mole ratio of scavenger/pollutant.
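Equation 3 translates directly into a small helper. In the sketch below, the HPLC concentration values are hypothetical placeholders chosen only to illustrate the arithmetic; they are not measured data.

```python
# A minimal sketch of Equation 3; the concentration values are
# hypothetical placeholders.
def removal_percent(c0_mM: float, ct_mM: float) -> float:
    """2,4-D removal (%) from pre- and post-reaction concentrations."""
    return (c0_mM - ct_mM) / c0_mM * 100.0

c0 = 0.35   # mM after dark adsorption-desorption equilibrium
ct = 0.042  # mM after 1 h of UV irradiation
print(f"{removal_percent(c0, ct):.0f}% removal")  # 88%
```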
The photostability of the Fe 2 O 3 (0.5)/TiO 2 (PD) nanocomposite was investigated by evaluating the photocatalytic activity for removal of 2,4-D over three cycles. After the first run of the reaction under 1 h of UV irradiation, the photocatalyst was collected from the 2,4-D solution and washed with deionized water before drying at 80 °C overnight. The second and third cycles of reactions were conducted using the recovered photocatalyst under similar experimental and treatment conditions, as mentioned above.
Figure 1 :
Figure 1: (A) Adsorption and (B) photocatalytic removal of 2,4-D using TiO 2 (NT), TiO 2 (IM_T) and a series of Fe 2 O 3 /TiO 2 (IM). NT represents no treatment, IM shows the samples were prepared by the impregnation method, and T indicates an additional heat treatment was carried out on the sample.
Figure 2 :
Figure 2: (A) Adsorption and (B) photocatalytic removal of 2,4-D over TiO 2 (NT), TiO 2 (PD_T) and a series of Fe 2 O 3 /TiO 2 (PD) samples. Error bars in (B) are shown for comparison purposes. NT represents no treatment, PD shows the samples were prepared by the photodeposition method, and T indicates an additional photodeposition treatment was carried out on the sample.
Figure 3 :
Figure 3: BET specific surface area of TiO 2 (NT), TiO 2 (T) and the series of Fe 2 O 3 /TiO 2 samples prepared by both impregnation (IM) and photodeposition (PD). NT represents no treatment and T indicates an additional heat treatment or photodeposition treatment was carried out on the sample.
Figure 7 :
Figure 7: Percentage removal of 2,4-D on unmodified TiO 2 (NT) and Fe 2 O 3 (0.5)/TiO 2 (PD) in the absence and presence of various scavengers under UV light irradiation for 1 h.
Figure 8 :
Figure 8: Photocatalytic degradation of 2,4-D on TiO 2 (NT), TiO 2 (PD_T) and the series of Fe 2 O 3 /TiO 2 (PD) samples. NT represents no treatment, PD shows the samples were prepared by the photodeposition method, and T indicates an additional photodeposition treatment was carried out on the sample.
Table 1 :
Crystallite size and band gap energy (E g ) of the unmodified TiO 2 and Fe 2 O 3 /TiO 2 nanocomposites prepared by impregnation (IM) and photodeposition (PD) methods.NT represents no treatment and T indicates an additional heat treatment was carried out on the sample.
a Scherrer equation was used to calculate the crystallite size. b Tauc plot was used to determine the E g .
Table 2 :
The composition of the Fe 2 O 3 /TiO 2 (PD) nanocomposites (ratio of Fe/Ti (mol %)) determined from ICP-OES measurements.PD indicates samples that were prepared with the photodeposition method.
| 2017-04-28T22:27:49.996Z | 2017-04-24T00:00:00.000 | {
"year": 2017,
"sha1": "c51ed5231d3dfc309ee1c7be386b71a97cbb53b2",
"oa_license": "CCBY",
"oa_url": "https://www.beilstein-journals.org/bjnano/content/pdf/2190-4286-8-93.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "716a7b18319a686d871e7008a765feb1105f2f59",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
18949134 | pes2o/s2orc | v3-fos-license | Neuregulin-1 (Nrg1) signaling has a preventive role and is altered in the frontal cortex under the pathological conditions of Alzheimer's disease
Alzheimer's disease (AD), one of the neurodegenerative disorders that may develop in the elderly, is characterized by the deposition of β-amyloid protein (Aβ) and extensive neuronal cell death in the brain. Neuregulin-1 (Nrg1)-mediated intercellular and intracellular communication via binding to ErbB receptors regulates a diverse set of biological processes involved in the development of the nervous system. In the present study, a linear correlation was identified between Nrg1 and phosphorylated ErbB (pNeu and pErbB4) receptors in a human cortical tissue microarray. In addition, increased expression levels of Nrg1, but reduced pErbB receptor levels, were detected in the frontal lobe of a patient with AD. Western blotting and immunofluorescence staining were subsequently performed to uncover the potential preventive role of Nrg1 in cortical neurons affected by the neurodegenerative processes of AD. It was observed that the expression of Nrg1 increased as the culture time of the cortical neurons progressed. In addition, H2O2 and Aβ1–42, two inducers of oxidative stress and neuronal damage, led to a dose-dependent decrease in Nrg1 expression. Recombinant Nrg1β, however, was revealed to exert a pivotal role in preventing oxidative stress and neuronal damage from occurring in the mouse cortical neurons. Taken together, these results suggest that changes in Nrg1 signaling may influence the pathological development of AD, and exogenous Nrg1 may serve as a potential candidate for the prevention and treatment of AD.
Introduction
Alzheimer's disease (AD) is the most common age-associated disorder, accounting for ~60-80% of all cases of dementia (1). Previous studies have shown that atrophy of the hippocampus and amygdala may occur in AD, even at the preclinical stages (2-4). AD is typically characterized by a progressive loss of memory, impairment of higher cognitive functions and major degeneration in the brain cortex. This degeneration includes the production and deposition of β-amyloid (Aβ) peptide, intracellular neurofibrillary tangles (5,6) and extensive neuronal cell death (7) in specific cortical and subcortical zones. AD shares a number of common pathological features with other neurodegenerative diseases, including activated apoptotic biochemical cascades, up-regulated oxidative stress levels, abnormal protein processing, and so forth (8). Age-associated oxidative insults have been associated with neurodegenerative diseases, including AD and Parkinson's disease (9,10). Aβ peptide fragments are capable of inducing neuronal cell death directly or indirectly (11-13). In addition, transgenic mice with mutant amyloid precursor protein are considered a valuable animal model to test preventative and therapeutic interventions for AD, due to the occurrence of biochemical, behavioral and histopathological changes that are similar to those observed in patients with AD (14). Although current therapeutic candidates for the treatment of AD, including cholinesterase inhibitors such as meserine (15) and memantine (16), may modestly improve memory and cognitive function in this transgenic mouse model, these drugs do not show disease-modifying effects in patients. To date, the available therapies for AD only serve the purpose of ameliorating disease symptoms, and there are no effective therapeutic approaches that address the underlying pathological processes of AD (17).
Neuregulin-1 (Nrg1), a protein encoded by the NRG1 gene, has been identified as an active epidermal growth factor (EGF) family member (18). At least 31 isoforms of Nrg1, grouped into six types (I to VI) and including the Nrg1α and Nrg1β variants, have been identified as a result of alternative splicing (19). These types, or isoforms, perform a broad spectrum of functions. Nrg1 has been implicated in glioma malignancy (20), in gastrointestinal systems (21) and in prolactin secretion (22)(23)(24). Specific direct binding of Nrg1 to ErbB receptors, including ErbB3 and ErbB4 (25,26), activates a diverse set of biological processes,
including myelination, neurite outgrowth, cell proliferation, differentiation and protection against apoptosis (27,28). Nrg1, as well as its receptor ErbB tyrosine kinase, is expressed in the developing nervous system and the adult brain, where they exert a key role in regulating the development and regeneration of the nervous system (29)(30)(31). Nrg1 has been reported to prevent brain injury following stroke (32), and to exert a protective role for dopaminergic neurons in a mouse model of Parkinson's disease (33). A burgeoning body of evidence suggests that Nrg1 is associated with traumatic brain injury (34) and AD (35). Given the alteration of Nrg1 signaling in patients with AD and the protective role of Nrg1 in the lesioned nervous system (29,30), it was hypothesized that Nrg1 may exert a preventive role in the maintenance of cell survival-associated signaling under the pathological conditions present in AD. In the present study, it is shown that the levels of Nrg1 are altered in response to hydrogen peroxide (H2O2)- or Aβ1-42-induced oxidative stress and neuronal damage, in an attempt to protect the cortical neurons from abnormal changes in cell signaling. Notably, exogenous Nrg1 was revealed to have a pivotal role in preventing oxidative damage to neurons and in triggering changes in Nrg1-ErbB signaling in response to the harmful situation. These results demonstrated that Nrg1 signaling is perturbed under the pathological conditions of AD, and this alteration may be partially reversed by the exogenous application of Nrg1. Taken together, these data indicate a neuroprotective role of Nrg1 against pathological damage during the development of AD.
Materials and methods
Tissue microarray. A human brain tissue microarray containing 4-µm-thick cortical tissues was purchased from Shaanxi Chaoying Biotechnology Co., Ltd. (BN 126; Xi'an, Shaanxi, China). In addition, human brain frontal lobe sections from a normal individual (cat. no. ab4304; Abcam, Cambridge, MA, USA) and from a patient with AD (cat. no. ab4582; Abcam), at a thickness of 5 µm, were used.
Animals. Female and male C57BL/6 mice (n=20; age, 3 months) were purchased from Guangdong Medical Laboratory Animal Center (Foshan, Guangdong, China) and maintained in the animal center of Shantou University Medical College (SUMC). All the animals were housed in the SUMC animal center at 25˚C in a reversed 12/12 h dark-light cycle, and food and water were provided ad libitum. All experiments conducted on animals were reviewed and approved by the Animal Ethics Committee of SUMC and the authorities of the Guangdong Province. All efforts were made to minimize the suffering of animals and to reduce the number of animals used in these experiments.
Preparation of recombinant Nrg1β (rNrg1β) and oligomeric Aβ.

Primary culture of the mouse cortical neurons. Mouse frontal cortical tissues were obtained from postnatal C57BL/6 mice on day 0 (P0) and crudely homogenized by chopping following the removal of the vessels and meninges. The tissues were kept on ice in DMEM/F-12 culture medium (HyClone™; Thermo Fisher Scientific, Inc.) without serum, and subsequently digested with 0.125% trypsin (Solarbio Biotech Corp., Beijing, China) at 37˚C in a humidified 5% CO2 atmosphere for 30 min. The finely separated cortical neurons were seeded in a volume of 200 µl at a density of 2x10^5 cells per well in 48-well cell culture plates pre-coated with 100 µg/ml poly-D-lysine (C0312; Beyotime Institute of Biotechnology, Shanghai, China). Cells were cultured in DMEM/F-12 culture medium supplemented with 10% fetal bovine serum (FBS; Sijiqing Biotech Corp., Hangzhou, China) and 1% penicillin/streptomycin (Solarbio Biotech Corp.) for 6 h to enable cell adhesion to the plates. The medium was subsequently aspirated and replaced with Neurobasal-A (cat. no. 21103-049; Gibco; Thermo Fisher Scientific, Inc.) culture medium supplemented with 2% B-27 (cat. no. 17504-001, Gibco, Thermo Fisher Scientific, Inc.) and 1% penicillin/streptomycin.
To investigate changes in Nrg1 signaling in vitro at the protein level, the primary cortical neurons were treated using two different paradigms: i) the cortical neurons were maintained at 37˚C in a humidified 5% CO2 atmosphere for 24 h, and subsequently the culture medium was replaced with Neurobasal-A medium containing H2O2 at various concentrations (0, 1, 2.5, 5, 10 and 20 µM) for 24 h; ii) the cells were cultured for 6 h, 1, 3, 6 and 10 days. Cells cultured for 6 h were used as a control (0 days). At the indicated time points, whole-cell lysates were collected.
To study the neuroprotective role of rNrg1β in regulating Nrg1 signaling at the protein level, cortical neurons were utilized in two different cell models: i) cortical neurons were treated with 0, 5 or 10 nM rNrg1β for 2 h, followed by exposure to 2.5 µM H2O2 for 24 h; ii) after a 24-h incubation period, cells were treated with 0, 5 or 10 nM rNrg1β for 2 h prior to incubation with a sublethal dose of 10 µM oligomeric Aβ1-42 for 24 h. Finally, whole-cell lysates were collected for western blotting.
The protein levels of Nrg1, pNeu and pErbB4 in the human brain cortical tissue microarray were evaluated using integrated fluorescence intensity (IFI). The IFI at each tissue point was obtained using the MultiImage Light Cabinet CY3 filter for Nrg1, and the CY5 filter of the FluorChem HD2 gel-imaging system for pErbB4 or pNeu (Alpha Innotech, San Leandro, CA, USA). The IFI was analyzed using Image Tool II software 3.0 (University of Texas Health Science Center, San Antonio, TX, USA).
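As an illustration of this quantification step, the sketch below computes an IFI value as the sum of above-background pixel intensities for a single-channel image of one tissue spot. The background-subtraction rule and the synthetic data are assumptions for illustration only; the study itself used the FluorChem HD2 gel-imaging system and Image Tool II rather than custom code.

```python
# Minimal sketch of integrated fluorescence intensity (IFI) quantification
# for one tissue spot, assuming the spot has been exported as a single-
# channel image array. The background rule below is an illustrative choice.
import numpy as np

def integrated_fluorescence_intensity(img, background_percentile=50):
    """Sum of above-background pixel intensities within a tissue spot."""
    img = np.asarray(img, dtype=np.float64)
    background = np.percentile(img, background_percentile)
    return np.clip(img - background, 0, None).sum()

# Synthetic stand-in for a CY3 (Nrg1) channel image of one spot:
# Poisson background plus a bright diagonal "signal" streak.
rng = np.random.default_rng(0)
cy3 = rng.poisson(5, size=(64, 64)) + 40 * np.eye(64)
print(integrated_fluorescence_intensity(cy3))
```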
Immunohistochemical staining. Deparaffinized human frontal cortical sections from a normal individual and a patient with AD were rehydrated via a graded series of ethanol to PBS. Subsequently, heat-induced antigen retrieval was performed in citrate buffer (10 mM, pH 6.0) at 95˚C for 40 min, followed by cooling down to room temperature (RT) for at least 60 min. Sections were then incubated in a 3% H2O2 solution for endogenous peroxidase clearance at RT for 10 min. Sections were subsequently washed in PBS for 5 min three times. Following blocking in 10% PBS-buffered normal goat serum for 30 min, sections were incubated with primary antibodies, including mouse monoclonal anti-Nrg1 antibody (1:200, cat. no. MS-272-P1, Thermo Fisher Scientific, Inc.), rabbit polyclonal anti-pErbB4 antibody (1:200, cat. no. sc-33040, Santa Cruz Biotechnology, Inc.), rabbit polyclonal anti-pNeu antibody (1:200, cat. no. sc-12352-R, Santa Cruz Biotechnology, Inc.) and rabbit polyclonal anti-Aβ1-42 antibody (1:1,000, cat. no. ab39377, Abcam) overnight at 4˚C, followed by incubation with an Enhanced Polymer DAB Detection kit (cat. no. PV-900; ZSGB-Bio, Beijing, China) and an AEC kit (cat. no. ZLI-9036; ZSGB-Bio). Stained sections were mounted on slides, dehydrated and sealed with coverslips using a commercial water-soluble mounting kit (cat. no. AR1018; Boster Biological Technology, Wuhan, China). Counterstaining was performed with Mayer's hematoxylin in some of the tissue sections.
Congo red reagent (cat. no. DG0025, Beijing Leagene Biotech. Co., Ltd., Beijing, China) was used to confirm the formation of amyloid plaques in frontal lobe sections from a patient with AD, according to the manufacturer's protocol.
Statistical analysis. Statistical analyses were performed with SPSS 17.0 software (SPSS, Inc., Chicago, IL, USA). Data were expressed as the mean ± standard error of the mean and analyzed with one-way analysis of variance (ANOVA) with Tukey's post-hoc test for independent samples. P<0.05 was considered to indicate a statistically significant difference.
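For readers without SPSS, the analyses described above (Pearson correlation of IFI values, and one-way ANOVA with Tukey's post-hoc test) can be reproduced along the following lines in Python; the numbers below are placeholder data, not values from the paper.

```python
# Hedged sketch of the statistical analyses, using scipy/statsmodels as a
# stand-in for SPSS 17.0. All arrays here are illustrative placeholders.
import numpy as np
from scipy.stats import pearsonr, f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Pearson correlation between Nrg1 and pErbB4 IFI values across tissue spots.
nrg1_ifi = np.array([1.2, 2.4, 3.1, 4.0, 5.2])
perbb4_ifi = np.array([1.0, 2.6, 2.9, 4.3, 5.0])
r, p = pearsonr(nrg1_ifi, perbb4_ifi)
print(f"Pearson r={r:.3f}, P={p:.4f}")

# One-way ANOVA with Tukey's post-hoc test across treatment groups.
ctrl = np.array([1.00, 0.95, 1.05])
h2o2 = np.array([0.55, 0.60, 0.50])
h2o2_rnrg1 = np.array([0.85, 0.90, 0.80])
f_stat, p_anova = f_oneway(ctrl, h2o2, h2o2_rnrg1)
print(f"ANOVA F={f_stat:.2f}, P={p_anova:.4f}")

values = np.concatenate([ctrl, h2o2, h2o2_rnrg1])
groups = ["ctrl"] * 3 + ["H2O2"] * 3 + ["H2O2+rNrg1b"] * 3
print(pairwise_tukeyhsd(values, groups))
```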
Results
Co-immunostaining correlation analysis of Nrg1 with the phosphorylation levels of ErbB4 and Neu receptors in a human cortical tissue microarray. To determine a functional correlative relationship between the level of Nrg1 and the phosphorylation levels of either the ErbB4 or the Neu receptors, the co-localization of Nrg1 with either pErbB4 or pNeu receptors was examined. The signal intensity for Nrg1 with pErbB4 receptors was revealed by double immunofluorescence (Fig. 1A), and a positive correlation between Nrg1 and pErbB4 (r=0.932, P<0.0001) (Fig. 1B) was identified. Similarly, the signal intensity for Nrg1 with pNeu receptors was also revealed by double immunofluorescence (Fig. 1C), and an apparent positive correlation between Nrg1 and pNeu (r=0.979, P<0.0001) (Fig. 1D) was revealed in the human cortical region.
Detection of Nrg1 signaling in the frontal lobe of a human AD brain. Congo red staining was used to confirm the formation of the amyloid plaques in the frontal lobe of the brain of a human patient with AD. The results demonstrated that there were numerous amyloid plaques distributed in the frontal lobe of an AD brain, whereas no amyloid plaque was detected in the normal control (Fig. 2A). In addition, immunohistochemical staining also revealed the formation of Aβ1-42-positive plaques in the frontal cortical zone, whereas few Aβ1-42-positive plaques were found in the normal individual (Fig. 2A).
To study the changes in Nrg1 signaling in the frontal cortex of an AD brain, western blotting was used to determine the protein levels of Nrg1 and the phosphorylation levels of ErbB4 and Neu. The protein level of Nrg1 was increased in the frontal cortical gray matter from an AD brain (Fig. 2B). In contrast, the levels of pErbB4 and pNeu showed a tendency towards a decrease when compared with a normal control. Notably, the phosphorylation level of Erk1/2, which is involved in downstream Nrg1-ErbB signaling, was increased when compared with that in the normal control (Fig. 2B).
To further investigate changes in Nrg1 signaling under the conditions of AD, Nrg1, pErbB4 and pNeu in the frontal cortical gray matter from a patient with AD and a normal control were immunohistochemically stained. A tendency for there to be an increased level of Nrg1 was observed in the gray matter of an AD brain compared with a normal control (Fig. 2C). By contrast, the staining intensities for pErbB4 and pNeu were clearly reduced (Fig. 2C).
To evaluate the expression and potential co-localization of Nrg1 with pErbB4 or pNeu, co-immunostaining of Nrg1 with these receptors was performed using the frontal cortical tissue from an AD brain. Co-localization of Nrg1 (green) with pErbB4 (red) or with pNeu (red) was observed. In the human AD brain, a tendency towards an up-regulation of the levels of Nrg1 was observed, whereas the levels of pErbB4 and
pNeu were not up-regulated compared with those of normal brain tissue ( Fig. 2D and E). In addition, pErbB4 and pNeu were detected in a smaller number of the neuronal cells in the AD-affected cerebral cortex, which may explain, in part, the reduced levels of the two molecules observed in the western blots.
Western blotting analysis of Nrg1 isoform expression and the ErbB receptor phosphorylation level in primary cortical neurons during the progression of cell senescence. The primary cortical neurons were routinely cultured for 1, 3, 6 and 10 days without any treatment. The expression of Nrg1 isoforms, including the 123-, 95-, 75-, 71-, 60-, 54- and 36-kDa variants, showed a time-dependent increase that reached statistical significance at 3, 6 and 10 days when compared with the 0-day control (Fig. 3A). A marked increase in the protein levels of the pNeu and pErbB4 receptors was observed at 1 to 10 days when compared with the 0-day control (Fig. 3B).

Investigation of the protective role of rNrg1β in cortical neurons exposed to H2O2. Cortical neurons were treated with increasing concentrations of H2O2 (0-20 µM) for 24 h (Fig. 4A). In comparison with the vehicle control, the expression of Nrg1 was down-regulated, accompanied by a marked reduction in the receptor levels of pNeu and pErbB4 (Fig. 4B). Furthermore, in order to explore the role of rNrg1β in alleviating oxidative stress and axonal damage, western blotting was conducted to evaluate whether the Nrg1-ErbB signaling pathway was involved in the preventive mechanism on exposure to H2O2. Based on the data from a previous study (Chen et al, unpublished), a concentration of 2.5 µM was adopted as the optimal concentration of H2O2 for treatment of the cortical neurons following pretreatment with 0, 5 or 10 nM rNrg1β for 2 h. It was revealed that the levels of pErbB4 and pNeu were markedly decreased following treatment with H2O2, and this effect was reversed on addition of rNrg1β, to a maximal extent at 10 nM for pErbB4 and at 5 nM for pNeu (Fig. 4C). The levels of Akt1 and Erk1/2 activation exhibited a similar trend to that of pErbB4 (Fig. 4D and E). Double immunofluorescence staining was subsequently used to further confirm these observations. It was revealed that, compared with non-stressed neurons, H2O2-treated neurons exhibited diminished levels of neurite outgrowth, with reduced levels of Nrg1 and of pNeu or pErbB4 detected. In contrast, pretreatment with 5 and 10 nM rNrg1β prior to H2O2 exposure was able to partially reverse these effects by increasing the levels of both pNeu and pErbB4, with more clearly recognizable effects observed at a concentration of 10 nM (Fig. 4F and G).
The preventive role of rNrg1β in counteracting the effects of Aβ1-42 on mouse cortical neurons. The protective role of rNrg1β in mouse cortical neurons treated with Aβ1-42 was subsequently investigated. Western blotting was utilized to evaluate the influence of rNrg1β pretreatment on the phosphorylation levels of Neu and ErbB4 and on the downstream signaling pathways in primary cortical neurons following a 24-h incubation with 10 µM oligomeric Aβ1-42. Administration of 10 µM oligomeric Aβ1-42 significantly downregulated the protein levels of several Nrg1 isoforms, including the 60-kDa and 36-kDa Nrg1, whereas pretreatment with 5 nM rNrg1β counteracted the effects of Aβ1-42 by increasing the Nrg1 isoforms. By contrast, pretreatment with 10 nM rNrg1β showed no apparent effects on the function of Aβ1-42 (Fig. 5A). It was also demonstrated that the relative levels of pErbB4 and pNeu were markedly decreased following treatment with Aβ1-42, and this effect on pErbB4 was compensated for by pretreatment with 10 nM rNrg1β and on pNeu by pretreatment with 5 nM rNrg1β (Fig. 5B). In addition, pAkt1 levels were increased, and pErk levels were decreased, when treated with Aβ1-42 alone (Fig. 5C and D). However, rNrg1β pretreatment upregulated the levels of pAkt1 and pErk in Aβ1-42-challenged cortical neurons (Fig. 5C and D).
Double immunofluorescence staining was subsequently applied to further confirm these observations. It was observed that, compared with non-stressed neurons, Aβ1-42-treated neurons demonstrated diminished neurite outgrowth, with reduced levels of pNeu or pErbB4 detected. In contrast, pretreatment with 5 or 10 nM rNrg1β prior to Aβ1-42 exposure was able to partially reverse these effects by increasing the levels of both pNeu and pErbB4, with the most marked effects being observed with 10 nM rNrg1β for pErbB4 and 5 nM rNrg1β for pNeu (Fig. 5E and F).
Discussion
It is widely acknowledged that the Nrg1-ErbB signaling pathway exerts a crucial role in multiple biological processes, including cell differentiation, organ development and tumorigenesis. Receptors for Nrg1 signaling undergo phosphorylation of their cytoplasmic tyrosine residues, which elicits downstream effects and biological responses (36). The binding of the ligand results in the dimerization and activation of ErbB receptors. Phosphorylation of the intracellular domains creates docking sites for adaptor proteins, including growth factor receptor-bound protein 2 (Grb2) and Shc for the activation of the Erk pathway, and p85 for the activation of the phosphoinositide 3-kinase pathway (37). A previous study revealed an association between Nrg1 and ErbB4 immunoreactivity and the formation of neuritic plaques in patients with AD in a transgenic animal model of AD (35). In the present study, a linear correlation was observed between Nrg1 and the phosphorylation of Neu and ErbB4 receptors in a normal human cortical tissue microarray. To elucidate the exact mechanism by which Nrg1 contributes to AD development, two cell models were applied. Based on our results using cortical neurons under the pathological conditions of AD, multiple isoforms of Nrg1 were altered, including the 123-, 95-, 75-, 71-, 60-, 54-, 40-and 36-kDa proteins. These bands represent alternatively spliced products of the NRG1 gene, post-translationally modified forms of the proteins, and/or a shedding of the ectodomains from the initial precursors. All isoforms of Nrg1 contain an epidermal growth factor (EGF)-like signaling domain that is required for activation of the receptors (38). In addition, the interaction of Nrg1 with its receptors was shown to be associated with the activation of intracellular signaling pathways that are associated with the development and regeneration of the nervous system (29,30). In the present study, it has been demonstrated that the changes in Nrg1 isoform expression and receptor phosphorylation are highly influenced by the pathological conditions observed in AD. Thus, expression changes in Nrg1 isoforms appear to be associated with the pathological development of AD, suggesting that Nrg1 may be a critical molecule in the development of AD.
Oxidative stress plays an essential role in the onset and development of AD (39,40), and cellular oxidative stress levels are increased in vulnerable regions of the AD brain (41,42). The brain is particularly sensitive to oxidative stress due to special cellular features, including a large dependence on oxidative phosphorylation for energy production, low antioxidant concentrations, high levels of peroxidizable membrane lipids and high levels of iron, which are associated with free radical injury (43)(44)(45). Previous studies reported that Nrg1 was up-regulated following nerve injury, and it served as an essential agent to protect the neurons from ischemic damage (34,46,47). In addition, Nrg proteins attenuated the release of free radicals and protected neuronal cells from H2O2-induced apoptosis (48,49). In the present study, the protein levels of multiple Nrg1 isoforms and the phosphorylation of their receptors were observed to increase in a time-dependent manner. Changes in Nrg1 signaling in cortical neurons exposed to oxidative stress were further investigated. It was observed that the protein levels of Nrg1, and the phosphorylation of its receptors, were down-regulated in response to high concentrations of H2O2. Although the Nrg1 protein level was up-regulated at low concentrations of H2O2 compared with the control, no up-regulation of receptor phosphorylation was identified. This suggested that the interactions between Nrg1 and the ErbB receptors were perturbed under conditions of oxidative stress. By contrast, exogenous rNrg1β was able to protect the cortical neurons from oxidative stress and neuraxonal damage via the up-regulation of the Nrg1-ErbB cell signaling pathway. Cui et al (50) proposed that endogenous Nrg1 was increased in response to the production of Aβ to protect the neurons against damage. However, the injured neurons were not capable of expressing sufficient Nrg1β1 to adapt to prolonged damage, which ultimately led to apoptosis. Increased oxidative stress occurs in response to increased Aβ levels (51); therefore, in the present study, it was originally hypothesized that the up-regulation of endogenous Nrg1 in primary cortical neurons exposed to H2O2 may be an initial, local protective response against abnormal cell signaling. However, the cortical neurons were unable to express sufficient Nrg1 over time, eventually resulting in the dysfunction of Nrg1 signaling. These results suggest the existence of an intrinsic self-protective mechanism by which injured cortical neurons may adapt to, and automatically counteract, neuronal injury.
Aβ is associated with the generation of reactive oxygen species, which cause cell damage, apoptosis, mitochondrial dysfunction and the peroxidation of membrane lipids (52,53). In addition, the accumulation of Aβ peptides has been identified as a key step in the multiple pathogenic changes associated with neurodegeneration and dementia (54,55). Previous studies demonstrated that the neurotoxicity induced by Aβ1-42 may lead to apoptotic cell death (56), and that Aβ is able to disrupt signaling pathways, including those involving Erk1/2 and Akt, in primary rat cortical neurons (57,58). In the present study, it was observed that exposure of primary cortical neurons to Aβ1-42 caused an up-regulation in the level of Nrg1 protein and in Akt1 phosphorylation, and a down-regulation of Neu/ErbB4 phosphorylation and pErk1/2 levels. In vitro studies have demonstrated that Nrg1β treatment may protect neuronal cells (59)(60)(61)(62). Moreover, the Aβ1-42-induced reduction in the levels of pErbB4, pNeu and pErk1/2 was antagonized by rNrg1β pretreatment. Recombinant human Nrg1 contains an EGF-like domain that is essential for the phosphorylation-dependent activation of Neu/ErbB4 receptors (63). Nrg1 is able to signal to target cells via interactions with transmembrane tyrosine kinase receptors of the ErbB family. The interaction of Nrg1 with ErbB receptors may result in the dimerization of receptors, tyrosine phosphorylation, and activation of intracellular signaling pathways (59,64). Activation of ErbB4 by Nrg1 may induce a marked increase in ErbB4 phosphorylation (65) and lead to a sustained activation of Akt and Erk (66). In addition, the Akt and Erk1/2 signaling cascades have an essential role in regulating gene expression and in preventing apoptosis (67). A wide spectrum of in vivo and in vitro studies have demonstrated that phosphorylation of Erk facilitates cell survival (68), and that the dephosphorylation of Akt is involved in the development of AD (58,69,70). These results indicated that Nrg1 signaling in mouse cortical neurons is altered in response to the accumulation of Aβ, suggesting that Nrg1 may function as a crucial candidate for the prevention and treatment of AD.
In view of these observations, it was our hypothesis that, although Nrg1 is up-regulated in cortical neurons during the early stages of AD to protect against abnormal changes in cell signaling, the phosphorylation levels of its receptors are relatively less responsive due to some unknown interrupting factors. As a consequence, Nrg1 signaling is not able to function properly when ErbB4 is not adequately activated. In addition, sufficient levels of Nrg1 are not expressed when the damage is prolonged, thus failing to prevent the development and progression of AD. Notably, the present study revealed that pretreatment of neural cells with rNrg1β partially reversed the neurotoxicity of Aβ 1-42 . These findings have provided a foundational basis for Nrg1 signaling as a potential therapeutic target for the prevention, and possibly the treatment, of AD. | 2018-04-03T01:57:07.997Z | 2016-07-25T00:00:00.000 | {
"year": 2016,
"sha1": "76f577a4b08dec4fe664a7ed780d6189a99e2385",
"oa_license": "CCBYNCND",
"oa_url": "https://www.spandidos-publications.com/10.3892/mmr.2016.5542/download",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "76f577a4b08dec4fe664a7ed780d6189a99e2385",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
17108530 | pes2o/s2orc | v3-fos-license | Non-linear Barrier Coverage using Mobile Wireless Sensors
A belt region is said to be k-barrier covered by a set of sensors if all paths crossing the width of the belt region intersect the sensing regions of at least k sensors. Barrier coverage can be achieved from a random initial deployment of mobile sensors by suitably relocating the sensors to form a barrier. Reducing the movement of the sensors is important in such scenarios due to the energy constraints of sensor devices. In this paper, we propose a centralized algorithm which achieves 1-barrier coverage by forming a non-linear barrier from a random initial deployment of sensors in a belt. The algorithm uses a novel idea of physical behavior of chains along with the concept of virtual force. Formation of non-linear barrier reduces the movement of the sensors needed as compared to linear barriers. Detailed simulation results are presented to show that the proposed algorithm achieves barrier coverage with less movement of sensors compared to other existing algorithms in the literature.
I. INTRODUCTION
A Wireless Sensor Network (WSN) consists of a set of sensor nodes. Each node can sense some physical parameters, and has limited computation and communication capability. The data sensed by the sensor nodes is usually transmitted to a central base station using a wireless network connecting the sensors for further processing. Mobile Wireless Sensor Networks (MWSNs) [1], [2], [3] are a class of wireless sensor networks in which some or all of the sensor nodes are mobile. Wireless sensor networks have been used for a wide variety of applications such as habitat monitoring, target tracking, intruder detection etc.
Many different problems have been addressed on wireless sensor networks, such as routing, topology control, localization, coverage, data aggregation etc. In this work, we focus on the coverage problem in sensor networks, which addresses the problem of covering a given area or set of objects using sensors. Several different variations of the problem exist depending on the nature of coverage required. As an example, Area Coverage requires that all points in a given area are within the sensing field of at least one sensor. In contrast, Point Coverage requires that only a given set of points within an area be covered by at least one sensor. Many other definitions of coverage exist such as perimeter coverage, barrier coverage, sweep coverage, path coverage etc. [4].
In this paper, we focus on Barrier Coverage in sensor networks. A target belt region provides strong k-barrier coverage if all crossing paths (a path crossing the width of the belt region, originating from one parallel boundary of the belt region and terminating at the other) intersect the sensing region of at least k distinct sensors. In the rest of this paper, we will refer to strong barrier coverage as just barrier coverage. Barrier coverage has many important applications such as intrusion detection along international borders, identifying the spread of lethal chemicals around chemical factories, detecting potential sabotage in gas pipelines etc. [5], [6].
Static WSNs may not always work well in barrier coverage applications for different reasons. In many applications, sensors cannot be placed exactly at the locations desired due to deployment constraints, and hence sensors may be randomly deployed around the area. Such applications can benefit from mobile WSNs if the deployed sensors can autonomously move after deployment to achieve the desired barrier coverage. Mobile sensors can also help in scenarios where one or more sensors fail, thereby breaking barrier coverage. In such scenarios, some of the nearby sensors can readjust their positions to recreate the barrier. However, designing algorithms that utilize mobility efficiently is a challenging problem. Keeping the energy constraint of battery-powered sensor devices in mind, algorithms for mobility control should be able to achieve barrier coverage with low movement of the sensors.
There exist many works on formation of barrier coverage using mobile sensors. Some of these works propose centralized solutions for deploying sensors to achieve barrier coverage [7], [8], [6]. Centralized solutions are good since all the information is available at a central station. Hence, all computations regarding movement can be done centrally and only the final locations are sent to the sensors which then move to those locations. However, maintaining the information centrally incurs some overhead. To address this problem, distributed approaches in which sensors locally coordinate to adjust and move to their final positions have also been proposed [9], [10], [11], [12], [13]. However, most of these solutions (centralized or distributed) try to organize all sensors in a straight line to achieve barrier coverage. While a linear barrier is optimal with respect to the number of sensors needed to create a barrier, it can cause large redundant movement to move randomly deployed sensors to a linear configuration.
In this paper, we propose a centralized algorithm for 1-barrier coverage which, given a random deployment of sensors in a belt, creates a non-linear barrier of sensors that provides barrier coverage of the belt region. The proposed algorithm views the barrier as a chain of sensors whose sensing disks overlap with each other, and uses some novel ideas based on the physical behavior of a chain along with the concept of virtual force to move the sensors to achieve barrier coverage. Detailed simulation results are presented to show that the proposed algorithm achieves barrier coverage from random initial deployments with both less average displacement and less maximum displacement of the sensors compared to other existing algorithms in the literature.
The rest of the paper is organized as follows. Section II discusses some related works. The problem statement is defined in Section III, Section IV describes the centralized algorithm and evaluates its performance. Finally, Section V concludes the work.
II. RELATED WORKS
Barrier coverage has been widely studied in WSNs [14], [15]. Saipulla et al. [6] suggested an approach to relocate sensors from an initial randomized line-based deployment model to replicate a scenario of sensors dropped from an aircraft. The work in [8] studied the problem of relocating mobile sensors with limited mobility in order to save energy. Bhattacharya et al. [7] proposed and solved an optimization problem to calculate an optimal movement strategy for barrier coverage on a circular region. All these approaches are centralized. Distributed approaches to barrier coverage formation are addressed in [12], [13], [9], [10], [16]. Kong et al. [12] used the concept of virtual forces to solve the k-barrier coverage problem. Silvestri [13] presented a novel approach, MobiBar, that outperforms the algorithm of [12] in k-barrier coverage. Cheng and Savkin [10] presented a decentralized approach for creating 1-barrier coverage between any two prespecified landmarks in the belt region. Shen et al. [9] suggested a centralized algorithm, CBarrier, and a distributed algorithm, DBarrier (based on virtual forces), that create a barrier from a random initial deployment. Eftekhari et al. [16] presented distributed algorithms for barrier coverage using sensor relocation. All of the algorithms proposed in the literature (except DBarrier, whose performance has been shown to be poorer than CBarrier's) try to rearrange sensors on a straight line to form a linear barrier. Ban et al. [17] defined a special type of non-linear k-barrier coverage called a grid barrier that is formed out of linear segments coinciding with grid boundaries in the region; however, the algorithm they presented for 1-barrier coverage (named CBGB) still forms only linear barriers. A linear barrier uses the minimum possible number of sensors, but causes more movement of the sensors to arrange them along a straight line. When some extra sensors (over the minimum number required) are available, forming a non-linear barrier can reduce the movement of the sensors and consequently cause less energy usage. Ban et al. [17] also presented a more general k-barrier coverage algorithm that creates a non-linear grid barrier by breaking the region into subregions, forming linear barriers in each subregion, and then forming vertical isolation barriers between subregions to connect the horizontal barriers. However, the algorithm uses a very large number of redundant sensors, and for 1-barrier coverage (k = 1), provides no significant advantage over the CBGB algorithm. The algorithm proposed in this paper differs from the other algorithms (except DBarrier [9]) in that it finds truly non-linear barriers, which reduces the movement of the sensors. The number of redundant sensors used is also low.
III. PROBLEM FORMULATION
We assume a rectangular belt region of length L and width W, with L ≫ W. A set of N mobile sensors with unique IDs are initially randomly deployed in this belt region. Each sensor has a sensing range R_s and thus can cover a circular area of radius R_s centered around the position where the sensor is placed. We assume that a sensor knows its own location.
The displacement of a sensor is defined as the Euclidean distance between the initial and final location of the sensor. As noted earlier, a centralized algorithm can compute the final location of the sensors and then the sensors can move to that location directly. Thus, the goal of the centralized barrier formation algorithm is to relocate the sensors to form 1-barrier coverage over the belt region while minimizing the average and maximum displacement of the sensors.
IV. CENTRALIZED ALGORITHM FOR BARRIER FORMATION
We first describe the intuition behind the proposed algorithm. A more formal description of the algorithm is given next.
A barrier can be viewed as a physical chain where each sensor, with its imaginary sensing disk, is analogous to a circular chain link. In a physical chain, if one chain link is pulled, it will exert a force on all chain links connected to it, and a chain link connected to it will move when the distance between them becomes maximum, i.e., when their rims start touching. We primarily use this simple property in designing the algorithm. Some of the terms we will use to describe the operation of the algorithm are described below.
• Chain Link: A chain link in the algorithm refers to a sensor with sensing radius R_s. We will use the terms sensor and chain link interchangeably in the rest of this paper. The leftmost sensor in the belt region is called the left chain link. Similarly, the rightmost sensor is called the right chain link. In case there are multiple leftmost (rightmost) sensors, any one sensor is chosen as the left (right) chain link.
• Connected Chain Links: Two chain links are said to be connected if the sensing regions of the corresponding sensors intersect. Note that the distance between the centers of two connected chain links cannot be more than 2R_s. If one chain link is pulled, a chain link connected to it will feel a force when the distance between their centers becomes equal to 2R_s.
• Chain Graph: Consider the undirected graph G with each chain link as a node and an edge added between two nodes if the corresponding chain links are connected. A chain graph is a connected component of G. The chain graph that contains the left chain link is called the left chain graph. Similarly, the chain graph containing the right chain link is called the right chain graph.
Note that though a distance of at most 2R_s between the positions of two sensors implies that the sensors are connected by the above definitions, sometimes we will delete a connection even if the distance is at most 2R_s in order to work with a subgraph of the chain graph. Such deletions will be clearly specified while describing the algorithm. A sketch of the chain-graph construction is given below.
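As a sketch of how these definitions translate into code, the following builds the graph G and its chain graphs (connected components) from sensor coordinates; the coordinates and the use of networkx are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the chain-graph construction implied by the definitions above:
# sensors are nodes, and an edge joins two sensors whose sensing disks
# intersect (centre distance <= 2 * Rs). Coordinates are illustrative.
import math
import networkx as nx

RS = 0.5  # sensing radius R_s

def build_chain_graphs(positions):
    """positions: dict sensor_id -> (x, y). Returns the graph G and its
    connected components (the chain graphs)."""
    g = nx.Graph()
    g.add_nodes_from(positions)
    ids = list(positions)
    for i, u in enumerate(ids):
        for v in ids[i + 1:]:
            if math.dist(positions[u], positions[v]) <= 2 * RS:
                g.add_edge(u, v)  # sensing disks of u and v overlap
    return g, list(nx.connected_components(g))

g, chains = build_chain_graphs({1: (0.0, 0.0), 2: (0.8, 0.1), 3: (5.0, 2.0)})
print(chains)  # [{1, 2}, {3}] -> two chain graphs
```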
Given the above virtual constructs, the main idea of the algorithm can be described as follows. Each chain graph, by virtue of its connectedness, can provide barrier coverage for a part of the belt region (the part between the left boundary of its leftmost chain link and the right boundary of its rightmost chain link). The algorithm tries to extend the barrier coverage to the entire belt region by merging smaller chain graphs into larger ones, until a large enough chain graph is formed that spans across the length of the belt. In order to merge two chain graphs, one special chain link of one chain graph is pulled towards a chain link of the other chain graph in steps, pulling the other chain links along with it until the two chains merge. We next describe this technique in more detail.
A. Merging Chain Graphs
Let C denote the set of all chain links (sensors). Let C_G denote the set of chain links in a chain graph G. For any chain link u ∈ C_G and v ∈ C \ C_G, we define the force f(u, v) exerted on u by v as f(u, v) = α/d(u, v), where d(u, v) is the Euclidean distance between u and v. Note that f represents the attractive force between two chain links in two chain graphs, with the force becoming stronger as the distance between the chain links decreases. α is simply a scaling parameter.
For a chain link u in C_G, let F(u) = max{f(u, v) : v ∈ C \ C_G} denote the maximum force exerted on u by a chain link in another chain graph. Then the dominant point of the chain graph G is defined as the chain link d_G ∈ C_G such that F(d_G) is the maximum among all F(u), u ∈ C_G. The chain link in a chain graph X ≠ G that exerts the maximum force on d_G is defined as the co-dominant point of G. If there is more than one pair of dominant and co-dominant points with the maximum force, then we choose the pair that includes the sensor with the lowest ID.
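A minimal sketch of the dominant/co-dominant point selection is given below, assuming the inverse-distance force given above; the scaling constant and positions are illustrative.

```python
# Sketch of dominant/co-dominant point selection under the inverse-distance
# force f(u, v) = ALPHA / d(u, v). Positions and ALPHA are illustrative.
import math

ALPHA = 1.0

def dominant_pair(chain, others, pos):
    """chain: sensor ids in C_G; others: ids in C \\ C_G; pos: id -> (x, y).
    Returns (dominant point d_G, its co-dominant point), breaking force
    ties in favour of the pair containing the lowest sensor ID."""
    def force(u, v):
        return ALPHA / math.dist(pos[u], pos[v])
    pairs = [(u, v) for u in chain for v in others]
    return max(pairs, key=lambda p: (force(*p), -min(p)))

pos = {1: (0, 0), 2: (1, 0), 3: (4, 0), 4: (6, 0)}
print(dominant_pair({1, 2}, {3, 4}, pos))  # (2, 3): the closest pair
```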
For merging two chain graphs, we compute the dominant and the co-dominant points of the chain graphs and then pull the dominant point towards the co-dominant point using the attractive force defined above, pulling the other chain links in the chain graph along with it. The movements of a chain link, both due to the attractive force and when a connected chain link is pulled, follow the usual laws of physics. The algorithm for merging chain graphs works in four phases:

1) Initialization Phase: In this phase, the algorithm first identifies all the chain graphs based on the initial locations of the sensors. For each chain graph, a DFS spanning tree is constructed starting from an arbitrary node, and all edges not in the tree are deleted from the chain graph. Removing an edge between two chain links implies that they are not considered as connected (even if their sensing regions overlap) in the rest of the algorithm and are not constrained to move together in any way. The DFS tree helps in identifying long paths in chain graphs that will be used to flatten the chain graphs later. At the end of this phase, all chain graphs to be considered are trees. We will refer to the DFS tree corresponding to a chain graph as a chain tree in the rest of this section.

2) Left-Attach Phase: In this phase, the left chain link is pulled horizontally towards the left boundary of the belt until it touches the left boundary. Note that pulling the left chain link may pull other chain links in the left chain graph. Once the left chain link touches the left boundary, it is only allowed to slide along the width of the belt and cannot move away from the left boundary in the rest of the algorithm.

3) Right-Attach Phase: This phase is similar to the Left-Attach Phase, the only difference being that the right chain link is moved horizontally towards the right boundary of the belt.

4) Barrier-Formation Phase: This phase actually creates the barrier by merging chain graphs. The dominant and co-dominant points are first determined for all chain graphs. The algorithm then iterates over the following steps, with every iterative step corresponding logically to a time step τ of movement defined suitably.

a) One chain graph G is picked randomly from the set of chain graphs.

b) The dominant point d_G of G is moved towards the co-dominant point of G for duration τ (or until it touches the co-dominant point, whichever is earlier), following the laws of physics according to the force f between them. Note that this may move other chain links in G, again following the laws of physics.

c) If d_G does not touch the co-dominant point after the movement, the force f between d_G and the co-dominant point is recomputed (as it may have changed due to the change in distance between them), and the algorithm goes to the next iteration. If d_G touches the co-dominant point (i.e., the two chain graphs have merged), then the dominant and co-dominant points of all the chain graphs are recomputed before going to the next iteration.

This phase terminates when the merging of the chain graphs causes a barrier to be formed. Thus, at each step of this phase, one chain graph is moved slightly. Note that moving one arbitrary chain graph completely in one step to cause two chain graphs to merge may cause a lot of redundant movement unless the proper chain graph is chosen; the random choice reduces this redundant movement in case of a bad choice of chain graph. A simplified sketch of this phase is given below.
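The following runnable toy illustrates the control flow of the Barrier-Formation Phase only. It deliberately replaces the physics engine with a rigid translation of the entire chain graph at a fixed speed, recomputes connectivity by proximity rather than maintaining chain trees, and stops once a single chain graph remains (a stand-in for barrier formation); all constants and coordinates are illustrative. Under the inverse-distance force, the maximum-force pair is simply the closest pair of chain links, which the toy exploits.

```python
# Toy version of the Barrier-Formation Phase: rigid translation of a
# randomly chosen chain graph towards its co-dominant point, repeated until
# all chain graphs have merged. A simplification of the chain dynamics.
import math
import random

RS, SPEED = 0.5, 0.1

def components(pos):
    """Recompute chain graphs (connected components) by disk overlap."""
    ids, seen, comps = list(pos), set(), []
    for s in ids:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(v for v in ids
                         if v not in comp and math.dist(pos[u], pos[v]) <= 2 * RS)
        seen |= comp
        comps.append(comp)
    return comps

def closest_pair(g, others, pos):   # dominant / co-dominant under f = a/d
    return min(((u, v) for u in g for v in others),
               key=lambda e: math.dist(pos[e[0]], pos[e[1]]))

def barrier_formation(pos):
    while len(comps := components(pos)) > 1:
        g = random.choice(comps)                  # step (a)
        d, cod = closest_pair(g, set(pos) - g, pos)
        (dx, dy), (cx, cy) = pos[d], pos[cod]
        dist = math.dist(pos[d], pos[cod])
        step = min(SPEED, dist)                   # step (b): one time step
        shift = (step * (cx - dx) / dist, step * (cy - dy) / dist)
        for u in g:                               # rigid pull of the chain
            pos[u] = (pos[u][0] + shift[0], pos[u][1] + shift[1])
    return pos                                    # step (c) folded into loop

pos = {1: (0.0, 0.0), 2: (0.9, 0.0), 3: (3.0, 1.0), 4: (6.0, -1.0)}
print(barrier_formation(pos))
```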
Note that a chain graph can potentially cover a larger part of the belt if more nodes in it have degree two, as higher-degree nodes in a chain graph create more redundant nodes that do not contribute to extending the barrier. Therefore, at each step of moving the dominant point, a flattening logic is applied to the chain graph that flattens the chain to make it longer.
B. Flattening Chain Graphs
The flattening logic is applied to the chain tree formed from the chain graph. For any chain tree, the dominant point of the chain tree is defined as the root chain link of the tree. For any chain tree (except the left and the right chain trees), the longest path from the root chain link in the chain tree is first computed. This path is called the flatten path. The following steps are applied after each step of the Barrier-Formation Phase.
The flatten path is traversed starting from the root chain link until the first node with degree greater than two (a branching) is found. Let this node be called the current chain link c. Let N_c denote the set of chain links connected to c and S_c denote the set of chain links connected to c that are not on the flatten path. For every chain link u ∈ S_c, the chain link v (v ∈ N_c \ S_c) closest to u in the flatten path is found, and a small fixed attractive force (taken to be β × f, 0 < β ≪ 1, where f is the force between the dominant and the co-dominant point of the chain graph) is exerted upon u towards v to make u move towards v. If u touches v as a result of this movement, an edge is added in the chain graph between u and v and the edge between c and v is deleted (even if their sensing regions overlap). Note that this extends the flatten path by one sensor (replacing the edge (c, v) in the flatten path with the edges (c, u) and (u, v)). The change in edges still maintains a tree.
Thus, as the dominant point of a chain graph moves towards another chain graph in successive steps, the application of the above logic brings more and more nodes into the flatten path, eventually causing all chain links not present in this flatten path to collapse onto it. This, along with the movement of the dominant point which pulls the chain links on the flatten path, causes a chain to become longer with a sufficient number of steps, thus allowing a single chain to form a larger part of the barrier. Note that for a long enough belt region (large L), if the merging step and the flattening logic are applied to a chain graph for long enough, the graph will be transformed into a linear chain with all nodes having degree two (except the end nodes, which have degree one), with the distance between the centers of two connected chain links becoming maximum. A toy implementation of one flattening step is sketched below.
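The toy below runs one flattening step at a time; the straight-line pull, the assumption that the chosen on-path neighbor v immediately follows c on the path, and all data are simplifications for illustration, not the authors' implementation.

```python
# Toy version of one flattening step on a chain tree, following the
# description above. adjacency is a dict id -> set of neighbor ids.
import math

RS, PULL = 0.5, 0.05

def flatten_step(adj, path, pos):
    for c in path:
        if len(adj[c]) <= 2:
            continue                      # walk until the first branching node
        on_path = [v for v in adj[c] if v in path]
        for u in [w for w in adj[c] if w not in path]:
            v = min(on_path, key=lambda w: math.dist(pos[u], pos[w]))
            d = math.dist(pos[u], pos[v])
            if d > 2 * RS:                # not touching yet: pull u towards v
                step = min(PULL, d - 2 * RS)
                pos[u] = (pos[u][0] + step * (pos[v][0] - pos[u][0]) / d,
                          pos[u][1] + step * (pos[v][1] - pos[u][1]) / d)
            if math.dist(pos[u], pos[v]) <= 2 * RS:
                adj[u].add(v); adj[v].add(u)          # add edge (u, v)
                adj[c].discard(v); adj[v].discard(c)  # delete edge (c, v)
                path.insert(path.index(c) + 1, u)     # path grows by one link
        return                            # one branching node per step

adj = {0: {1}, 1: {0, 2, 3}, 2: {1}, 3: {1, 4}, 4: {3}}
pos = {0: (0, 0), 1: (0.9, 0), 2: (1.8, 0.3), 3: (1.2, 0.9), 4: (2.0, 1.2)}
path = [0, 1, 2]                          # flatten path from the root
for _ in range(100):
    flatten_step(adj, path, pos)
print(path)                               # all five links end up on the path
```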
An example of the flattening logic is shown in Figure 1. Figure 1a shows the initial state of a chain tree. The green chain link denotes the dominant point, with the red arrow depicting the direction of the force exerted. The chain links on the flatten path computed from the root chain link (dominant point) are shown in blue. As shown in Figure 1b, a node connected to the first node on the flatten path (starting from the root) with degree greater than two is moved towards the flatten path. When this node meets the flatten path after a sufficient number of steps, the flatten path is changed as shown in Figure 1c. This adds one more sensor to the flatten path. At later steps, the next chain link on the flatten path with degree greater than two is chosen and a chain link connected to it is moved towards, and finally added to, the flatten path (Figures 1d and 1e). The next chain link on this new flatten path with degree greater than two is then processed in the same manner.

For the left and the right chain trees, the flatten path is fixed to be the path from the dominant point to the left and the right chain links respectively. The rest of the logic remains the same. This is done because of the special constraint on the leftmost and the rightmost sensors. The pseudocode for the flattening logic is given in Algorithm 1; its core step is: for the current chain link, take S_c, the set of its neighbor nodes that are not on the flatten path P; for each u ∈ S_c, find v, the neighbor node of current closest to u on P, apply force f on u towards v to move u, and if the distance between u and v becomes at most 2R_s, delete the edge (current, v) from G. More details are omitted because of space constraints.

Figure 2 graphically shows the complete algorithm running on an example scenario that is generated by simulating the algorithm. The example scenario has a belt region of length 50 m and width 8 m, with 65 uniformly randomly distributed mobile sensors, each having a sensing radius of 0.5 m. The color coding of the sensors is the same as in Figure 1. After the Initialization Phase, the sensor positions are as shown in Figure 2a. As seen in the figure, only small chain graphs exist, spread all over the belt region, with no discernible parts of the barrier. Figure 2b shows the state after the Left-Attach Phase and Right-Attach Phase. Dominant points for all the chain graphs are calculated and are pulled towards their corresponding co-dominant points, pulling the chains with them. Some merged chain graphs can also be observed compared to Figure 2a. After a few steps in the simulation, larger chain graphs gradually form in the belt region, as shown in Figures 2c and 2d, until only two chain graphs remain.

The following theorem can be proved for the correctness of the algorithm; we omit the proof here due to lack of space.
Theorem 1: For a belt region of length L and sensors with sensing radius R_s, the proposed algorithm always terminates with a strong barrier coverage if the number of sensors is greater than or equal to ⌈L/(2R_s)⌉.
C. Performance Evaluation
The centralized algorithm is simulated using pymunk (a Python physics simulation library) and pygame (a Python game development library, used for visualizing sensor movements). Each sensor is modeled as a point rigid body in a gravity-free environment. The connections between chain links are modeled as slide joints. A slide joint is like an imaginary rod between two ends, which can keep the ends from getting too far apart but will allow them to get closer to each other. The constraints for the leftmost and the rightmost sensors are modeled as groove joints. The anchor point in a groove joint can slide on a specified line, which helps the leftmost and the rightmost sensors to slide on the left and right boundary when required. Pymunk allows us to apply forces on mobile rigid bodies and can simulate the results in time steps. The coordinates from the pymunk simulation space are used in pygame to draw the sensors and the connections between them.
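A minimal pymunk sketch of this set-up is shown below; the masses, positions and joint parameters are illustrative and do not reproduce the authors' actual configuration.

```python
# Hedged sketch of the physics set-up described above, using pymunk.
import pymunk

space = pymunk.Space()
space.gravity = (0, 0)                      # gravity-free environment

def make_sensor(x, y):
    body = pymunk.Body(1, pymunk.moment_for_circle(1, 0, 0.5))
    body.position = (x, y)
    space.add(body, pymunk.Circle(body, 0.5))
    return body

a, b = make_sensor(1.0, 4.0), make_sensor(1.8, 4.0)
# Slide joint: keeps connected chain links from drifting more than 2 * Rs
# apart, while allowing them to come closer together.
space.add(pymunk.SlideJoint(a, b, (0, 0), (0, 0), 0, 1.0))

# Groove joint for the leftmost sensor: its anchor can only slide along the
# left boundary (the line x = 0 across the belt width).
left = make_sensor(0.0, 2.0)
space.add(pymunk.GrooveJoint(space.static_body, left, (0, 0), (0, 8), (0, 0)))

# Apply an attractive force on a body and advance the simulation one step.
a.apply_force_at_local_point((5.0, 0.0), (0, 0))
space.step(1 / 60.0)
print(a.position, b.position)
```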
The proposed algorithm is compared with two existing algorithms: the CBarrier algorithm proposed by Shen et al. [9], and the CBGB algorithm proposed by Ban et al. [17]. The CBarrier algorithm is chosen because the DBarrier algorithm proposed in the same work is the only other one that forms a truly non-linear barrier, and CBarrier has been shown to perform better than DBarrier. The CBGB algorithm is chosen as it has been shown to find a near-optimal movement strategy for linear barrier formation. We simulate both algorithms and measure the average and maximum displacement of a sensor. It should be noted that the algorithm can be executed centrally given the initial locations of the sensors, so that only the final locations need to be sent to the sensors, which can then move to them directly without any intermediate movement.
A belt of dimensions 50 m × 8 m is taken, with varying numbers of sensors. The value of the sensing radius R_s is taken as 0.5 m. Note that these parameter values imply that a minimum of 50 sensors is needed to provide barrier coverage (when a linear barrier is formed). The results reported are the average of running the algorithms on 100 random initial deployments. The results are shown in Figure 3.
The results indicate that the proposed algorithm outperforms both the CBarrier and CBGB algorithms with respect to both the average and the maximum displacement of a sensor. In particular, the maximum displacement of the proposed algorithm is significantly better than that of both algorithms. As the number of sensors is increased, the performance of the proposed algorithm improves further relative to the other algorithms.
V. CONCLUSION
In this paper, we have presented a centralized algorithm to form a barrier from a random initial deployment using mobile sensors. The algorithm uses some novel techniques of viewing a barrier as a chain and applying laws of physics to create non-linear barriers that reduce the movement of sensors.
Simulation results indicate that the algorithm outperforms other existing algorithms in terms of average and maximum displacement even with a small number of redundant sensors. The work can be extended further to investigate the design of distributed algorithms for the problem and algorithms for localized maintenance of the barrier after failures using similar techniques. | 2016-11-22T16:34:05.000Z | 2016-11-22T00:00:00.000 | {
"year": 2016,
"sha1": "f9ff3dadc018086587eaf42d63a883f4b78e8181",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1611.07397",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f9ff3dadc018086587eaf42d63a883f4b78e8181",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
258772023 | pes2o/s2orc | v3-fos-license | ANALYSIS OF FACTORS INFLUENCING THE TRUST LEVELS OF KYRGYZSTAN RESIDENTS, USING NEURAL NETWORK ANALYSIS
Abstract Objective: Kyrgyzstan, located in Central Asia, is a country with a strong will to achieve national development. The aim of this study is to measure the levels of trust of local residents, a highly important factor in national development, and to derive suggestions for improving it. To this end, the primary means employed is to target the residents of Kyrgyzstan and measure the levels of trust they have towards each other. Methods: The study uses data relating to aid projects for rural development that Korea's Good Neighbors International organization (GNI) is jointly carrying out in Kyrgyzstan along with the Korea International Cooperation Agency (KOICA), a Korean aid provider. In order to carry out the aid project in Kyrgyzstan, these organizations conducted a baseline survey at the initial stage, and the results of that survey were used for the analysis in this study. As regards the analytical method, neural network analysis was employed on the questionnaire survey data of 583 people in Kyrgyzstan collected for the baseline survey. Results: Neural network analysis, one of the methods of big data analysis, has recently come into the academic limelight. The analysis revealed that ethnicity had the greatest influence on the trust levels of Kyrgyzstan residents, followed by gender and education level, in that order. Conclusions: From this, it can be seen that multifaceted efforts are needed to increase the levels of trust of peoples other than the ethnic Kyrgyz, as the latter occupy a central position in Kyrgyzstan.
Introduction
Almost all countries, developed or developing, focus their social policy on improving the quality of life of their citizens. Among the factors that affect quality of life are institutional factors such as the capacity of the government, but individual characteristics such as each person's level of trust are also important (ECONOMIDES; KALYVITIS; PHILIPPOPOULOS, 2004; GROOTAERT, 2004; GROOTAERT; VAN BASTELAER, 2001). In particular, many social science studies today report that social capital exerts a great influence on individuals' quality of life, and it is pointed out that individuals' levels of trust are an important factor in such social capital (ADLER; KWON, 2002; AGNEESSENS; WITTEK, 2008; AVEY et al., 2010; GARA; LA PORTE, 2020; KROT; LEWICKA, 2012). Even when developed countries successfully promote Official Development Assistance (ODA) projects that support developing countries, it is necessary to give attention not only to the institutional capacities of the aid-recipient countries, but also to social capital such as the levels of trust of the people receiving the aid.
In 1960, per-capita income in Korea was only 92 US dollars. However, by successfully carrying out a series of economic development plans, it achieved economic development at a globally unprecedented rate. As a result, as of 2022 Korea is among the world's top ten economic powerhouses, and it passes on its own successful development experience to developing countries in the form of ODA. Korea was itself a recipient of foreign aid from the mid-1950s up until 1999, whereas now it is the only formerly aid-receiving country that provides aid. Although the role of government officials, including that of the President in the 1960s, was a major factor in Korea's rapid economic growth, the high level of social capital of the people also acted as an important factor. The case of Korea shows that the level of public trust acts as an important factor influencing national development (KOICA website, n.d.).
Against this background, this study measures the trust levels of the residents of Kyrgyzstan, analyses the factors that affect this, and suggests ways of further improving this level of trust. Good Neighbors International (GNI), a private Korean organization, along with the Korea International Cooperation Agency (KOICA), the Korean Government's aid agency, is carrying out an integrated rural development project in Kyrgyzstan. The ultimate goal of the project is to develop rural areas in Kyrgyzstan in an integrated way, and this particularly requires the voluntary participation of residents. The aim of the project is to improve the social and economic environment of Kyrgyzstan, improve women's rights, and establish a foundation to promote a sustainable increase in residents' incomes.
The first phase of the project began on 14 September 2021 and runs to 31 December 2025.
The project will cost around 10.6 billion Korean won (around 9 million in US dollars).
Naturally, if the first stage of the project is carried out successfully, there is the possibility of a second (2026-30) and a third (2031-5) stage. The target areas for the project are the Osh Oblast (Province) and Batken Oblast (Province) regions of Kyrgyzstan, with thirty villages in these two regions included in the project targets. The number of direct beneficiaries of this project is estimated to be 85,570 (KOICA; GNI, 2021).
To summarize the purpose of this study once again: the trust levels of local residents of Kyrgyzstan, a country located in Central Asia, are measured, and the importance of the variables affecting those trust levels is analysed. Based on these data, we intend to draw out implications that can help increase residents' acceptance of ODA policies and increase the chances of success when carrying out ODA projects in Kyrgyzstan in the future.
Theoretical Discussion and Research Problems
Trust can be defined in various ways. The dictionary definition of trust speaks of the belief 'that someone is good and honest and will not harm you, or that something is safe and reliable' (BURKE et al., 2007; DIENER, 2000). From an academic point of view, meanwhile, trust occupies the most important place in the concept of social capital, social capital being defined as 'the trust and the networks of relationships among people who live and work in a particular society, enabling that society to function effectively'. Among conceptions of social capital, trust is the most important core concept. In general terms, however, trust means mutual trust and reliance among people, in particular the mutual trust and reliance that people have in the neighbours around them. This mutual trust leads to involvement in community activities and a sense of community (BEN HADOR, 2016; FERRES; CONNELL; TRAVAGLIONE, 2004; GRUMAN; SAKS, 2013).
For this reason, trust among residents can foster a sense of co-operation, reduce unnecessary regulation, and reduce the transaction costs required in business execution.
Further, compliance with government policies can be enhanced, enabling policy implementers to implement those policies successfully and quickly (HANSEN; BUITENDACH; KANENGONI, 2015; HELLIWELL; HUANG, 2010; HOBFOLL, 2002; LEUNG et al., 2013; SPENCE LASCHINGER et al., 2012). Above all, a high level of trust among residents can foster a sense of ownership by cultivating community spirit. In addition, high levels of trust among residents can be a factor in the successful implementation of ODA projects such as GNI's rural development project currently being carried out in Kyrgyzstan.
There have been many studies (LI et al., 2014; LUTHANS, 2002; MAYER; DAVIS; SCHOORMAN, 1995; MINCU, 2015) on the factors affecting the level of trust among residents. Some studies have suggested that personal factors greatly affect levels of trust.
Other studies (NATVIG; ALBREKTSEN; QVARNSTRØM, 2003; PASTORIZA, 2008; PERERA; WEERAKKODY, 2018; SCHOORMAN; MAYER; DAVIS, 2007; SELIGMAN, 2002; SENDJAYA et al., 2019) have argued that government institutions, or policy factors, have a strong influence. However, recent studies (SALAS-VALLINA; ALEGRE, 2021; SINGH; AGGARWAL, 2018; TAŞTAN, 2018; TAŞTAN; GÜÇEL, 2017) claim that various factors within the community, such as race, income levels and gender, are becoming increasingly important in influencing individual levels of trust. Considering that there are various factors that both make up and affect levels of trust, the following research questions were set in this study: 1. What is the level of trust among Kyrgyzstan residents? 2. What are the important variables that affect this?
Target areas
The target areas for analysis in this study are two provinces in Kyrgyzstan, Osh and Batken. These two provinces were selected because the Korean ODA projects are being carried out there, including a survey of residents. More specifically, the target areas comprise thirty villages in the two provinces of Osh and Batken. Survey data on local residents living in these villages were used for the analysis. The number of households that responded to the survey was 583, and the total number of respondents, including family members and heads of family, was 3,591; however, in this study, only the 583 heads of families were analysed.
Survey period
This survey was conducted at Osh University of Technology and Science from March 1 to March 11, 2022; the analysis of the results and the writing of the report were carried out between March and May 2022.
Questionnaire composition
The questionnaire used in this study was prepared by Professor Yang-Hoon Song, who is in charge of monitoring the ODA project in Kyrgyzstan. The questionnaire consists of four sections: Section 1 covers Household Demographics, Section 2 Income Structure, Section 3 Living Expenditure & Government Support Regarding Poverty, and Section 4 Community Activity. In particular, 'trust level', used as the dependent variable in this study, is measured by a single question: 'How much do you generally trust your neighbours?' The measurement scale is a 5-point Likert scale, ranging from 'Do not trust at all' (1 point) to 'Highly trust' (5 points).
Variables
As stated, the dependent variable in this study is trust. Although the concept of trust is difficult to measure with a single variable, and multiple indices can be used, there is no great difficulty in measuring the level of trust by asking how much trust a person has in their neighbours. The independent variables, meanwhile, consist of ethnicity, type of residence, gender, marital status, income level, education level, age and occupation.
The results of coding and processing this information are shown in Table 1. Neural network analysis was performed to analyse the importance of the variables affecting the trust level of residents. Such analysis divides into the multi-layer perceptron (MLP) and the radial basis function; in this study the MLP method is used, because MLP is the most basic algorithm of neural network analysis and has a great advantage in performing classification and discrimination (CHO, 2020; REED; O'DONOGHUE, 2005). The basic structure of the multi-layer perceptron model used for this paper is shown in Figure 1. The model has a three-layer structure consisting of one input layer, one or more hidden layers and one output layer, each layer consisting of nodes. Each layer processes the data; each node takes over the output value of the previous step and computes its own output value via the activation function. Neural network analysis then goes through a process of adjusting the connection weights of the nodes in the direction that reduces the error between the output value and the actual value, which is called learning. In this study, an optimal neural network model that was not overfitted was built through a total of 2,000 repetitions of learning. Neural network analysis also produces a parameter estimation table (Table 2), which reports the selected synapse weight values in the artificial neural network structure. That is, the table gives the connection strength between each independent variable in the neural network structure diagram and the hidden layer, and between the hidden layer and the output layer. From the parameter estimates, it can be seen that ethnicity has a large influence.
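As a concrete illustration of this workflow, the following is a minimal sketch using scikit-learn in place of the SPSS procedure actually used. The input file, column names, and hidden-layer size are hypothetical assumptions; only the 2,000 learning repetitions and the weight-division idea for variable importance are taken from the text.

```python
# Minimal sketch, not the authors' SPSS procedure. File and column
# names are hypothetical; hidden_layer_sizes is an assumption.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("survey_heads_of_household.csv")  # hypothetical file
# Variables assumed numerically coded, as in Table 1.
X = df[["ethnicity", "residence", "gender", "marital_status",
        "income", "education", "age", "occupation"]]
y = df["trust"]  # Likert response, 1 (not at all) to 5 (highly trust)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
scaler = StandardScaler().fit(X_train)

# max_iter mirrors the 2,000 learning repetitions reported in the text.
mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
mlp.fit(scaler.transform(X_train), y_train)

# mlp.coefs_[0]: input-to-hidden synapse weights; mlp.coefs_[1]:
# hidden-to-output weights -- the quantities tabulated in Table 2.
w_ih = np.abs(mlp.coefs_[0])              # shape (n_inputs, n_hidden)
w_ho = np.abs(mlp.coefs_[1]).sum(axis=1)  # per-hidden-node magnitude

# Garson-style weight division: split each hidden node's output weight
# among inputs in proportion to the input-to-hidden weights.
share = w_ih / w_ih.sum(axis=0)
importance = (share * w_ho).sum(axis=1)
importance = 100 * importance / importance.max()  # top variable = 100
print(dict(zip(X.columns, importance.round(1))))
```

Scaling the scores so the strongest variable equals 100 matches how the relative importances are reported later (ethnicity = 100, sex = 91).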
Verification of the model's goodness of fit
For the neural network analysis used as the analysis model in this study, the suitability and predictive power of the multilayer perceptron model were analysed. To test the suitability of the model, prediction accuracy and Receiver Operating Characteristic (ROC) analysis were examined; ROC analysis provides a criterion for judging the suitability of a neural network model. The ROC result is derived by plotting sensitivity on the y-axis against 1-specificity on the x-axis. The ROC standard is classified as fail if the ROC is less than 0.6, poor if less than 0.7, fair if less than 0.8, good if less than 0.9, and excellent if less than 1.0 (CHO, 2020).
In this study, as Table 3 indicates, the ROC value is 0.773, which falls at the 'fair' level; there thus appears to be no major problem with the adequacy of the model. Figure 2 shows the predicted-probability result derived from the neural network analysis: the x-axis is the value of trust_1, the actual target variable, and the y-axis is the probability of the predicted outcome. The most important criterion in neural network analysis is the ROC. Figure 3 shows the ROC curve derived from the analysis, which is used to verify the suitability of the model; the curve is evaluated as good when it rises steeply and then levels off. Judging from Figure 3 alone, the goodness of fit is not very great, but as Table 3 shows, the ROC value is 0.773, so the model's goodness of fit can be said to be fair. Figure 4 shows the cumulative gains chart derived from the neural network analysis.
Here, the same logic as for the ROC chart analysis is applied to determine goodness of fit.
In other words, it may be interpreted that a rapidly rising curve indicates a better fit.
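Continuing the earlier sketch, the ROC adequacy check could be reproduced roughly as follows; the one-vs-rest macro averaging is an assumption, since the text does not state how the five Likert classes were collapsed into the single reported value of 0.773.

```python
# Sketch of the ROC adequacy check; assumes `mlp`, `scaler`, X_test,
# and y_test from the previous snippet. The ovr/macro averaging is an
# assumption, not stated in the text.
from sklearn.metrics import roc_auc_score

proba = mlp.predict_proba(scaler.transform(X_test))
auc = roc_auc_score(y_test, proba, multi_class="ovr", average="macro")

def rate_roc(a: float) -> str:
    """Fail/poor/fair/good/excellent bands cited from CHO (2020)."""
    for cutoff, label in [(0.6, "fail"), (0.7, "poor"), (0.8, "fair"),
                          (0.9, "good"), (1.0, "excellent")]:
        if a < cutoff:
            return label
    return "excellent"

print(f"ROC AUC = {auc:.3f} ({rate_roc(auc)})")  # 0.773 -> 'fair'
```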
Importance of independent variables
Table 4 shows the relative importance of the various independent variables that affect the trust (trust_1) of Kyrgyzstan residents. Among these, the ethnicity variable appears to have the greatest effect: ethnic Kyrgyz people have higher levels of trust than other peoples. The next most important variable is gender, with men having higher trust levels than women. Figure 6 shows the degree of influence of the independent variables on the dependent variable, trust level, in order of importance. The variable with the greatest influence is ethnicity; taking its influence on the dependent variable as 100, the next most important variable is sex, with a relative importance of 91 per cent.
Conclusion and Implications
This study analysed the important factors that affect the level of trust that residents of Kyrgyzstan have in their neighbours, using a multi-layer perceptron model, one type of neural network analysis. In the analysis procedure, importance analysis was performed by applying the weight division method to the weight value of each node calculated by the neural network analysis. The important variables that affect the level of trust of local residents in Kyrgyzstan are, in descending order of importance, ethnicity, sex, education, income, type of residence, occupation, marital status and age.
The results of this study and its implications may be summarized as follows: 1. For Kyrgyzstan residents, the ethnicity variable has the greatest influence on the level of trust. In other words, ethnic Kyrgyz people have a higher level of trust than other ethnic groups, such as Uzbeks. More in-depth study is needed to determine the reason(s) for this phenomenon.
2. The trust level of male local residents is higher than that of female residents. Men are often the heads of households. Even if the difference here is not a significant one, the fact that men have a higher level of trust than women is also an issue that requires more in-depth research.
3. There is a significant difference in terms of education levels. It was found that the level of trust among residents with higher levels of education appeared greater than among those with lower levels of education. This phenomenon also represents an area that requires more in-depth research through questionnaire analysis or interviews with Kyrgyzstan residents.
4. This study is significant in that it uses a new methodological approach instead of the statistical methods traditionally used. The distinctive contribution of the study resides in the fact that it derived meaningful research results by analysing important factors affecting the trust levels of local residents in Kyrgyzstan.
Nevertheless, it should be recognized that there are limitations to neural network analysis. In particular, due to its black-box characteristics, it cannot provide an explanatory basis for the causal relationships between the variables and the model's calculated results. In addition, since this neural network analysis assumes that all independent variables inputted into the analysis are statistically significant, the statistical significance of the relationships between the dependent variable, trust, and the independent variables assumed to affect it is not evaluated.
However, the analysis does check whether the order of importance of the calculated independent variables matches the theoretical background; if the size of an influence differs from, or is unsatisfactory with respect to, common-sense or theoretical expectations, the procedure is characterized by going back to the beginning and re-creating the model.
Recognizing the above points, it is necessary to supplement the work of identifying the influence relationship between important variables and dependent variables through complementary analysis such as logit model analysis.In the future, there is a need for methodological improvement that can overcome the limitations of neural network analysis and further increase the accuracy of prediction and classification.
Figure 6: Relative importance of independent variables | 2023-05-19T15:06:56.910Z | 2023-05-17T00:00:00.000 | {
"year": 2023,
"sha1": "c977f5e5e42005092428236c96d80eb6f061d2d3",
"oa_license": "CCBY",
"oa_url": "https://periodicos.ufsc.br/index.php/eb/article/download/93526/53229",
"oa_status": "CLOSED",
"pdf_src": "Dynamic",
"pdf_hash": "52e06f1cf03dcf87f8f3971e6a79b966049d20f7",
"s2fieldsofstudy": [
"Economics",
"Political Science"
],
"extfieldsofstudy": []
} |
9046288 | pes2o/s2orc | v3-fos-license | A nine-country study of the protein content and amino acid composition of mature human milk
Background Numerous studies have evaluated protein and amino acid levels in human milk. However, research in this area has been limited by small sample sizes and study populations with little ethnic or racial diversity. Objective Evaluate the protein and amino acid composition of mature (≥30 days) human milk samples collected from a large, multinational study using highly standardized methods for sample collection, storage, and analysis. Design Using a single, centralized laboratory, human milk samples from 220 women (30–188 days postpartum) from nine countries were analyzed for amino acid composition using Waters AccQ-Tag high-performance liquid chromatography and total nitrogen content using the LECO FP-528 nitrogen analyzer. Total protein was calculated as total nitrogen×6.25. True protein, which includes protein, free amino acids, and peptides, was calculated from the total amino acids. Results Mean total protein from individual countries (standard deviation [SD]) ranged from 1,133 (125.5) to 1,366 (341.4) mg/dL; the mean across all countries (SD) was 1,192 (200.9) mg/dL. Total protein, true protein, and amino acid composition were not significantly different across countries except Chile, which had higher total and true protein. Amino acid profiles (percent of total amino acids) did not differ across countries. Total and true protein concentrations and 16 of 18 amino acid concentrations declined with the stage of lactation. Conclusions Total protein, true protein, and individual amino acid concentrations in human milk steadily decline from 30 to 151 days of lactation, and are significantly higher in the second month of lactation compared with the following 4 months. There is a high level of consistency in the protein content and amino acid composition of human milk across geographic locations. The size and diversity of the study population and highly standardized procedures for the collection, storage, and analysis of human milk support the validity and broad application of these findings.
Human milk is considered the best source of nutrition for term infants. The World Health Organization recommends exclusive breast feeding during the first 6 months of life (1). Within the nutrient-rich matrix of human milk, the quantity and quality of protein are vitally important to provide the infant with a source of peptides, amino acids, and nitrogen for visceral protein synthesis, tissue accretion, and growth. Additionally, human milk provides the amino acids required to synthesize hormones, enzymes, antibodies, and other compounds such as glutathione, nucleotides, and some neurotransmitters (2).
Numerous studies have evaluated protein and amino acid levels in human milk. The earliest studies yielded widely divergent findings that were attributed to variability among donors with respect to age, parity, and duration of lactation, as well as differences in the collection and storage of human milk samples, and methods of analysis (3). The introduction of the automated amino acid analyzer (4) represented a clear improvement in methodology that resulted in more consistent data on the protein and amino acid composition of human milk (5–21). Despite such advances in analytical methods, research on the protein and amino acid content of human milk has been limited by small sample sizes and homogeneous study populations. Moreover, studies differ with respect to sample collection, storage, and methods of analysis, all of which can introduce variability to the measurement of protein and amino acid levels in human milk.
To our knowledge, this study is the largest, multinational study of protein levels and amino acid composition in human milk. A review of the published literature of total protein and amino acid composition of human milk from various regions and varying collection techniques is summarized in Table 1. In our study, milk samples from 220 women from nine countries across five continents were analyzed for amino acid composition, total nitrogen, and true protein concentration, using a unified protocol and standardized methodology. Therefore, the potential variability inherent from any differences in the sample collection, handling, storage, shipping procedures, and sample analyses was essentially eliminated, thereby increasing our confidence that the variations in our data reflect true biological variations among the samples. Enrolling mothers over a broad range of days, post-partum, permitted the assessment of amino acid and protein levels across several stages of lactation. It has been shown previously that the protein level and amino acid content in human milk decrease over the course of lactation (20), whereas maternal race/ethnicity, age, and maternal dietary protein intake appear to have little effect on the total protein in human milk (22).
By using a single commercial entity for shipping, a single commercial laboratory for sample storage, a single research laboratory for sample analysis, and standardized milk collection procedures, our methodology was highly consistent across sites and assures a reliable data set.
Study design and subjects
The human milk samples were collected as part of a cross-sectional survey of major carotenoids (23) and fatty acids (24) in human milk from healthy, well-nourished lactating women in nine countries: Australia, Canada, Chile, China, Japan, Mexico, the Philippines, the United Kingdom, and the United States. All participants were aged 18–40 years; were mothers of a healthy, full-term singleton infant; and were between 1 and 12 months postpartum at the time of milk collection. Participants signed written, informed consent in their native language prior to enrollment in the study, and the same two individuals conducted on-site training for all study personnel. The study was conducted in accordance with the principles of the Declaration of Helsinki and was approved by the Human Subjects Committee of the University of Arizona and the Human Ethics Committees associated with each participating institution.
A minimum of 50 human milk samples were collected from each of the nine countries represented in the study, for a total of 509 samples. Of these, 445 samples were from women who were 30–188 days (1–6 months) postpartum at the time of milk collection. Within this subset of 445 human milk samples, 220 were randomly selected for an analysis of amino acid and nitrogen composition. The random selection of samples was stratified by country to ensure the inclusion of at least 20 samples from each country.
(Note to Table 1: mean total protein concentration in human milk calculated using 6.25 as the factor = 11.9 g/L, range 8.5–22.9 g/L; calculated using 6.38 as the factor = 12.2 g/L, range 8.9–23.4 g/L.)

Collection and handling of human milk samples

Our collection and handling of the human milk samples has been described in detail (23). Briefly, a complete breast expression containing a minimum of 50 mL of human milk was collected between 1:00 p.m. and 5:00 p.m. on the day of the sampling. In all countries except Japan, samples were collected using an electric pump. In Japan, where the use of an electric pump is considered unacceptable to women, samples were collected manually using a handheld breast pump under the supervision of clinic staff. After collection, milk samples were immediately placed on dry ice or in a freezer at −20°C, then shipped within 10–14 days on dry ice to a central laboratory where the samples were stored at −70°C. Prior to analysis, frozen samples from a single country were thawed overnight in the refrigerator. Under subdued lighting conditions, samples were warmed to 37°C in a water bath and gently stirred, and approximately 10 mL of each sample was transferred to a tube and stored at −70°C for a group analysis. One day before amino acid and nitrogen analyses, samples were thawed overnight in the refrigerator. Thawed samples were warmed to 37°C in a water bath and gently stirred before subsequent analysis. The analysis of samples was grouped daily by randomly choosing two samples from each country to eliminate day-to-day bias.
General amino acid analysis
To determine the concentration of 16 amino acids in the human milk samples, 10 mL of 6 M HCl (containing 0.1% phenol) was added to a hydrolysis tube that contained 1 mL of human milk. Following vacuum and nitrogen flush, repeated three times, the tube was sealed under a nitrogen blanket and the sample hydrolyzed by placing in an oven at 110°C for 24 h. The hydrolysate was then quantitatively transferred to a volumetric flask and made up to a volume of 50 mL using distilled water. A total of 15 mL of filtered hydrolysate solution was quantitatively pipetted into a derivatization tube, dried under vacuum, and then combined with alpha-amino butyric acid as an internal standard and analyzed using AccQ-Tag (Waters Corporation, Milford, MA) derivatization and high-performance liquid chromatography (HPLC) (25,26).
Cysteine analysis
Distilled water was added to 1 mL of the milk sample in a 50-mL volumetric flask; 15 mL of the solution was quantitatively pipetted into a derivatization tube. The sample was dried under vacuum and oxidized with performic acid for the conversion of cysteine and cystine into cysteic acid; this was followed by vapor phase acidic hydrolysis using a boiling HCl solution at 110°C for 24 h. The sample was combined with alpha-amino butyric acid as an internal standard and then analyzed for cysteic acid using AccQ-Tag derivatization and HPLC (25,26).
Tryptophan analysis
A total of 10 mL of 4.2 M NaOH solution was added to a hydrolysis tube that contained 3 mL of human milk. In addition, 800 mL of 5-methyl-tryptophan was added to the hydrolysis tube as an internal standard. Following vacuum and nitrogen flush repeated three times, the tube was sealed under vacuum and placed in an oven at 110°C for 20 h to hydrolyze the sample. Following adjustment to pH 4.2 using 12 M HCl, centrifugation, and filtration, tryptophan was determined using reversed-phase HPLC.
Nitrogen analysis
Total nitrogen was determined by complete combustion of each human milk sample using the LECO FP-528 nitrogen analyzer (LECO Corporation, St. Joseph, MI). An infant formula standard (NIST infant formula reference material 1846) solution with a nitrogen content of 0.221% (w/w) was used as the calibration standard.

Protein and nitrogen calculations

Total protein content was calculated from total nitrogen as follows: Total protein = total nitrogen × 6.25. True protein, including protein, free amino acids, and peptides, was calculated from total amino acids as follows: True protein = total amino acids × 100/116. This calculation corrects the amino acid sum to a corresponding weight of polypeptide. Specifically, 100 g of protein (milk source) generates approximately 116 g of hydrolyzed amino acids due to water molecules added during protein hydrolysis (5,27). Thus, true protein is a calculation of the amino acid sum, corrected for water added during hydrolysis to individual amino acids.
The percentage of protein nitrogen was calculated as true protein divided by total protein. Non-protein nitrogen was calculated as the amount remaining after subtracting the percentage of protein nitrogen from 100.
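To make the two conversions concrete, the following minimal sketch expresses them as functions and runs a worked check against the study's own reported means (1,053 mg/dL total amino acids, 1,192 mg/dL total protein); the nitrogen value of 190.7 mg/dL is simply back-calculated from the reported total protein, not a measured figure.

```python
def total_protein(total_nitrogen_mg_dl: float) -> float:
    """Total protein (mg/dL) from total nitrogen via the 6.25 factor."""
    return total_nitrogen_mg_dl * 6.25

def true_protein(total_amino_acids_mg_dl: float) -> float:
    """True protein (mg/dL): amino acid sum corrected for the water
    added during hydrolysis (100 g protein -> ~116 g amino acids)."""
    return total_amino_acids_mg_dl * 100 / 116

# Worked check with the study means: 1,053 mg/dL total amino acids
# gives ~908 mg/dL true protein, which against the 1,192 mg/dL mean
# total protein is ~76% protein nitrogen (~24% non-protein nitrogen),
# matching the reported results.
tp_total = total_protein(190.7)   # ~190.7 mg/dL N implied by 1,192 mg/dL
tp_true = true_protein(1053)
pct_protein_n = 100 * tp_true / tp_total
print(f"total protein = {tp_total:.0f} mg/dL, "
      f"true protein = {tp_true:.0f} mg/dL, "
      f"protein N = {pct_protein_n:.0f}%, "
      f"non-protein N = {100 - pct_protein_n:.0f}%")
```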
Statistical analysis
Descriptive statistics were calculated for all variables, including individual amino acids, total amino acids, and total protein. An ANOVA model was used in the assessment of total protein and amino acid concentrations with the stage of lactation, country, mother's age and parity as covariates. Post-hoc pair-wise comparisons for total protein and total and individual amino acid concentrations in different countries were done by the Fisher's LSD test; as this was an exploratory analysis, no adjustments were made for multiple comparisons.
Study population
Between 20 and 28 human milk samples were analyzed from each of the nine countries included in the study population (Fig. 1). The number of samples varied by the stage of lactation: n = 62 for lactation days 30–60; n = 66 for lactation days 61–91; n = 38 for lactation days 92–121; n = 31 for lactation days 122–151; and n = 23 for lactation days 152–188. The mean (standard deviation [SD]) age of the mothers who provided the samples was 30 (4.8) years; median (range) parity was 1 (1–4) (Table 2).
Protein and amino acid concentrations in the overall study population

Table 3 summarizes the mean concentrations of amino acids, total protein, and true protein in human milk samples from the overall study population and by stage of lactation. The mean (SD) total protein concentration across all samples was 1,192 (200.9) mg/dL, and the true protein concentration across all samples was 908 (176.2) mg/dL. Overall, the mean concentration of true protein was 76% of the total protein concentration. Mean (SD) total amino acid concentration was 1,053 (204.4) mg/dL. As expected, the true protein concentration and total protein concentration were highly correlated (R² = 0.7929) (Fig. 2). Results from multivariable analysis of variance (Table 4) demonstrated that the stage of lactation was significantly correlated with total protein concentration (P < 0.0001) and total amino acid concentration (P < 0.0001). Correlations with other variables (that is, the country, mother's age, and parity) were not statistically significant for either total protein or total amino acid concentration. Variability in the individual amino acid concentrations, assessed by the coefficient of variation (CV), ranged from 14 to 32% for absolute concentrations (mg/dL) of amino acids in the 220 samples; the mean CV was 23% (Table 5). When normalized according to the percentage of total amino acids, the CVs were much lower, ranging from 3 to 13%, with a mean of 7%.
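As a brief illustration of the normalization just described, this sketch computes each amino acid's CV first on absolute concentrations and then on each sample's percent-of-total profile; the input table and its column layout are hypothetical.

```python
# Hypothetical input: one row per milk sample, one column per amino
# acid, concentrations in mg/dL.
import pandas as pd

aa = pd.read_csv("amino_acids_mg_dl.csv")

def cv_percent(col: pd.Series) -> float:
    """Coefficient of variation as a percentage."""
    return 100 * col.std() / col.mean()

cv_absolute = aa.apply(cv_percent)              # ~14-32% in the study

# Express each sample as percent of its own total amino acids; this
# removes between-mother differences in overall milk concentration.
profile = aa.div(aa.sum(axis=1), axis=0) * 100
cv_profile = profile.apply(cv_percent)          # ~3-13% in the study

print(pd.DataFrame({"CV (mg/dL)": cv_absolute,
                    "CV (% of total AA)": cv_profile}))
```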
Protein and amino acid concentrations by stage of lactation
The mean concentrations of total protein, true protein, total amino acids, and most individual amino acids in human milk declined steadily from 30 to 188 days of lactation (Table 3: mean concentrations of amino acids, total protein, and true protein in human milk samples from the overall study population (n = 220) and in samples by stage of lactation). The total protein concentration and total amino acid concentration were both significantly higher (P < 0.0001) in the second month of lactation (days 30–60) compared with the following 4 months. In addition, the total amino acid concentration was significantly higher (P = 0.029) in the third month of lactation (days 61–91) as compared with the sixth month (days 152–188). The decline in total protein that occurred from the second month of lactation through the sixth month reflected nearly equal declines in the various components of total protein (Fig. 3a). As such, despite steady declines, the proportion of essential amino acids, non-essential amino acids, and non-protein nitrogen components remained relatively unchanged as the duration of lactation increased (Fig. 3b). Similarly, with the exception of cysteine and glutamic acid, the relative contributions of each individual amino acid to total amino acids remained consistent between lactation months 2 and 6 (Table 6).
Protein and amino acid concentrations by country
Mean concentrations of amino acids, total protein, and true protein in human milk samples by country are summarized in Table 7. The mean (SD) total protein concentration in human milk for individual countries ranged from 1,133 (125.5) to 1,366 (341.4) mg/dL. Protein and amino acid concentrations were similar across countries, and the overall effect of the country on the levels of total protein and total amino acids was not statistically significant, with the exception of Chile. Statistical analyses were performed after adjusting for the mother's age and stage of lactation, as the numbers of samples were unevenly distributed across lactation stages. Post-hoc comparisons between Chile and all the other countries were performed to evaluate the significance of the higher protein and amino acid levels in Chile. Even with adjustment, the results of these analyses showed that total protein, total amino acids, and most individual amino acid concentrations were significantly higher in the human milk samples from Chile as compared with the mean concentrations in samples from Australia, China, the Philippines, the United Kingdom, and the United States (P < 0.05 for each comparison) (Fig. 4).
Discussion
Human milk has a substantially lower total protein concentration than the milks of other species. However, human milk provides a richer source of essential amino acids, which allows infants to meet their protein requirements at a lower protein concentration (28). The unique quantity and quality of proteins in human milk are important because of the elevated requirements for essential amino acids and the conditionally essential nature of certain other amino acids during infancy (29). Human milk changes in both protein content and whey-to-casein composition over the course of lactation. During the first 30 days of lactation, the decline in protein content and the compositional shift in the whey-to-casein ratio are clearly apparent. Early milk has a whey-to-casein ratio of approximately 90:10, which evolves to approximately 50:50 in late lactation (30,31); beyond the first month, however, the rate of change in protein content and composition becomes less obvious. It has been estimated that, during infancy, when protein accretion is at its highest, essential amino acids make up one-fifth of protein requirements. By comparison, later in childhood, essential amino acids comprise one-fifth of protein requirements and reflect only one-tenth of the protein requirements in adults (32). Thus, the amino acid composition of human milk has relevance for understanding the nutritional needs of infants. A further measure of nutritional value is true protein, which represents only the polypeptide portion of total protein. (The 'Methods' section includes the calculation of true protein.)
Protein and amino acid analyses in this study included samples from the second to the sixth month of lactation because of the known changes in human milk composition during the first month postpartum. The data from this study show a relatively large variation of amino acids and protein concentration among mothers' milk samples. However, the variations in amino acid content, protein content, protein nitrogen, and non-protein nitrogen composition observed in this study are consistent with other human milk studies (17,19). When the absolute concentrations of the individual amino acids were normalized to percentage of total amino acids, the variation in the data decreased considerably, indicating high consistency in the amino acid profile of human milk with very little impact from mother's race/ethnicity or age. Larger CVs in the amino acid profile (the percentage of amino acids) for cysteine and tryptophan may be explained by the use of separate procedures for the analysis of these two amino acids. The higher CVs in both the amino acid amount and the profile for methionine, arginine, and glycine may be due to the susceptibility of methionine and glycine to oxidation under hydrolysis conditions; the oxidized product of methionine may have affected the integration and quantification of arginine in the analysis method.
Over the years, the Institute of Medicine of the US National Academy of Sciences has organized scientific expert panels to evaluate the totality of scientific literature on individual nutrients and publish Dietary Reference Intakes for macronutrients and micronutrients for all age groups. The recommendation for protein intake for infants during the first 6 months of life is based on the average volume of human milk (0.78 L/day) consumed by infants during this age range and the average protein content of human milk (11.7 g/L), as determined from data from several studies conducted in the United States using various methods of analysis (2). The mean protein content of human milk from US-based studies is consistent with the results described in this paper, which represent a more global analysis. Moreover, the amino acid profile reported in the current study is similar to the calculated mean based on references in the literature that included the amino acid composition of human milk (Fig. 5) (5–19). The mean total protein and amino acid concentrations reported here are also consistent with previously reported data (6,12,14,15,17,18,20).
The human milk samples from Chile contained significantly higher amounts of each amino acid, total amino acids, and protein than most of the other eight countries. These findings are consistent with the results of an analysis of lactoferrin levels in human milk samples from the same study, which found that the mean lactoferrin concentration was significantly higher in samples from Chile compared with the samples from the other eight countries (33), as were levels of zinc (34). Additionally, and also from this same study, the alpha-lactalbumin (a protein fraction) concentration throughout lactation differed between Chile and the other countries (35). The total amino acid concentration and total protein concentration were, however, highly correlated in the samples from Chile just as in the other countries. Thus, we cannot speculate why total protein levels would differ in milks from mothers in Chile versus other countries.
Assessment of maternal diet
The maternal dietary intake of many important micronutrients has been shown to influence the concentration of those nutrients in human milk including vitamins A and E (36), fatty acids such as DHA (24), and carotenoids such as lutein and beta-carotene (23); however, the protein content of human milk has been shown to be relatively unaffected by maternal diet (36,37). This lack of influence was one of the reasons we were interested in evaluating protein content across countries and explains why we did not evaluate maternal dietary protein intake in this study.
This study provides considerable evidence that the protein content and amino acid composition of human milk are relatively uniform across geographic regions when compared by stage of lactation. Our amino acid analysis revealed little change in amino acid profiles over time. Only cysteine and glutamic acid showed any significant variation with stage of lactation. The shift in these amino acids would be consistent with an increase in the proportion of casein as lactation continues because the concentration of cysteine is lower (and glutamic acid is higher) in the casein fraction of human milk, compared with the whey fraction.
In summary, the results from this large-scale, multinational study of 220 human milk samples revealed a high level of uniformity in protein and amino acid composition across a wide range of geographic locations. With the exception of Chile, there were no significant differences between countries in amino acid, true protein, and total protein concentrations. In all nine countries included in the study, protein and amino acid concentrations declined steadily from 30 to 188 days postpartum. Moreover, the proportion of true protein and the amino acid profiles of human milk were generally consistent across lactation stages and countries. Several features of this study, including the size and diversity of the study population, and our use of highly standardized procedures for collection, storage, and analysis of human milk samples strengthen the validity of our findings and enhance their applicability.
Conflicts of interest and funding

PF and KP are employees of Wyeth Nutrition. MG, THZ, and AB were employees of Wyeth Nutrition at the time of the study. This study was funded by Wyeth Nutrition, a Nestlé business. | 2018-04-03T00:26:30.370Z | 2016-08-26T00:00:00.000 | {
"year": 2016,
"sha1": "38a700657bc9804cc37254da8377854bfdc426a1",
"oa_license": "CCBY",
"oa_url": "https://foodandnutritionresearch.net/index.php/fnr/article/download/989/3774",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "38a700657bc9804cc37254da8377854bfdc426a1",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
244084462 | pes2o/s2orc | v3-fos-license | Association of Socio-Demographic and Climatic Factors with the Duration of Hospital Stay of Under-Five Children with Severe Pneumonia in Urban Bangladesh: An Observational Study
Severe pneumonia is one of the leading contributors to morbidity and deaths among hospitalized under-five children. We aimed to assess the association of the socio-demographic characteristics of the patients and the climatic factors with the length of hospital stay (LoS) of under-five children with severe pneumonia managed at urban hospitals in Bangladesh. We extracted relevant data from a clinical trial, as well as collecting data on daily temperature, humidity, and rainfall from the Meteorological Department of Bangladesh for the entire study period (February 2016 to February 2019). We analyzed the data of 944 children with a generalized linear model using gamma distribution. The average duration of the hospitalization of the children was 5.4 ± 2.4 days. In the multivariate analysis using adjusted estimation of duration (beta; β), extended LoS showed remarkably positive associations regarding three variables: the number of household family members (β: 1.020, 95% confidence interval (CI): 1.005–1.036, p = 0.010), humidity variation (β: 1.040, 95% CI: 1.029–1.052, p < 0.001), and rainfall variation (β: 1.014, 95% CI: 1.008–1.019, p < 0.001). There was also a significant negative association with LoS for children's age (β: 0.996, 95% CI: 0.994–0.999, p = 0.006), well-nourishment (β: 0.936, 95% CI: 0.881–0.994, p = 0.031), and average rainfall (β: 0.980, 95% CI: 0.973–0.987, p < 0.001). The results suggest that the LoS of children admitted to the urban hospitals of Bangladesh with severe pneumonia is associated with certain socio-demographic characteristics of patients, and the average rainfall with variation in humidity and rainfall.
Introduction
Pneumonia is a leading cause of death, accounting for about 15% of all deaths among children under five years old in 2017, particularly in South Asia and sub-Saharan Africa [1]. Several socio-demographic factors, such as age and gender, have been shown to be associated with the severity of pneumonia [2,3]. Children aged one month to five years old, and male children, are more likely to present with pneumonia [2,3]. It has also been reported that a lack of maternal education and lower family income are significantly related to an increase in the severity of childhood pneumonia [4]. Female children younger than 12 months old who were malnourished were found to be at greater risk of childhood pneumonia-related death [5]. Moreover, pneumonia in malnourished children carries a greater risk of death than pneumonia alone, worldwide [6]. One study observed that children who were preterm and of low birthweight had a higher risk of developing severe pneumonia and requiring admission to intensive care units (ICU) compared to full-term and normal-weight babies [7]. Another study in the UK also revealed that infants born by caesarean section had an increased risk of having severe pneumonia [8]. A study in China noted more deaths from pneumonia in rural areas compared to urban areas [9].
Climate change impacts the health of children with regard to all types of infectious diseases, including pneumonia [10]. Bangladesh is directly affected by global warming and has ranked ninth among the ten countries most affected globally by this change [11]. The weather pattern of this country has been changing, with temperatures increasing between 0.6 and 2.0 degrees Celsius (°C) over the last 100 years [12]. Bangladesh has three distinct seasons: summer between March and June with temperatures from 30°C to 40°C, rainy between June and October, and winter between October and March, with an average temperature of about 10°C. April is the warmest month, with a peak temperature of 40°C, and January is the coldest month, with a drop in temperature to an average of 10°C throughout the country [13]. A study in rural Bangladesh has reported a positive association between the temperature and humidity and the length of stay (LoS) of children hospitalized for severe pneumonia [14]. The duration of the hospital stay is usually indicative of the severity of disease [15], and its rise causes a significant economic burden in both developed and developing countries [16]. Peaks in daily temperature, humidity, and rainfall increase the risk of childhood pneumonia [2,10]. Therefore, climate variability can affect the severity of childhood pneumonia and might influence the length of the hospital stay.
Children with pneumonia are recommended to be managed at primary health care facilities and referred to higher-level health care facilities for treatment when their condition becomes severe [17,18]. Unfortunately, 22% of such referred children in Bangladesh are not admitted to those health care facilities due to a shortage of beds, leading to their heightened risk of morbidity and death [19]. An inadequate number of beds is a constraint for the hospitalized management of children with severe pneumonia because consequently some are not able to be admitted and are deprived of getting appropriate treatment from the hospitals [19]. A longer LoS in the hospital increases the occupancy of the beds and the total cost of hospitalization [20,21].
There has been no study in Bangladesh that has assessed the correlation of the sociodemographic and climate factors with the LoS of under-five children with severe pneumonia in urban sites. Identifying such correlating factors could enable the targeting of measures to reduce LoS, which would in turn result in the availability of hospital treatment to more children suffering from severe pneumonia. For example, if high environmental temperature and humidity were found to influence the patients' LoS in hospital, the provision of effective air conditioning systems in hospitals along with an uninterrupted power supply might act to reduce the LoS.
Our study, which was nested within a clinical trial, was designed to assess the association between factors related to the parents' socio-demographic characteristics, the children's birth history (normal, preterm, and post-term; delivered by normal vaginal birth or caesarean section) and nutritional status, and variations in climatic factors (temperature, relative humidity, and rainfall), with the LoS of under-five-year-old children hospitalized for severe pneumonia in urban areas of Bangladesh.
Study Design
This was an observational study, reported following the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) checklist.
Data Collection
We extracted data from two different sources. The clinical data were collected from a cluster randomized controlled trial (ClinicalTrials.gov, Identifier: NCT02669654) conducted among under-five children with severe pneumonia. Children who were included in the clinical trial were admitted to different pediatric departments of general hospitals (i.e., Dhaka Shishu Hospital [DSH], Dhaka Medical College and Hospital, icddr,b Dhaka Hospital, and other nearby hospitals) in the study areas for the management of severe pneumonia. That study enrolled 954 admitted children with severe pneumonia from those hospitals, and among them 946 children completed the study. We analyzed the data of 944 children for this study, excluding two children due to missing data (Figure 1). The clinical trial was conducted between February 2016 and February 2019. We used the hospitals' data during the period of hospitalization of the children and computed the LoS in days. For our analysis, we considered the individual parents' background socio-demographic characteristics; the children's birth history (full-term, preterm, post-term) and delivery history (normal vaginal, caesarean section); and their nutritional status. We collected data on the climatic factors (temperature, relative humidity, and rainfall) of the study period from the Meteorological Department of Bangladesh (Agargaon, Dhaka 1207, Bangladesh) [22]. Then, from the climate data, we extracted and used the temperature, humidity, and rainfall data for only those days corresponding to the dates of each child's stay in hospital. The meteorological station, called 'Dhaka Station', was located at Agargaon, 1.2 km from DSH, 6.2 km from icddr,b Dhaka Hospital, and within about a 6.3 km radius of the other hospitals. We collected daily minimum and maximum values of the factors and computed the average for each day.
Severe pneumonia is described as the appearance of cough or difficulty in breathing and the presence of one or more of the following signs: central cyanosis, hypoxemia (oxygen saturation < 90% on pulse oximetry), severe respiratory distress (grunting, very severe chest indrawing), or pneumonia with danger signs (inability to breastfeed or drink, lethargy, unconsciousness, and/or convulsions) [23].
Average temperature, humidity, and rainfall: the mean value of each factor over the days of each child's hospital stay [25].
Temperature, humidity, and rainfall variation: the magnitudes of these factors vary over time during the hospital stay; the variation was measured as the dispersion of the daily values around the mean, i.e., the standard deviation [25].
Daily temperature, humidity, and rainfall range: the difference between the maximum and minimum values during the days of the hospital stay [25].
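To make these three definitions concrete, a minimal sketch of the per-child computation follows; the input files, column names, and date layout are hypothetical placeholders rather than the study's actual data format.

```python
import pandas as pd

# Hypothetical inputs: `daily` has one row per calendar day with the
# station's temperature, humidity, and rainfall; `stays` has the
# admission and discharge dates of each child.
daily = pd.read_csv("dhaka_station_daily.csv", parse_dates=["date"])
stays = pd.read_csv("hospital_stays.csv",
                    parse_dates=["admitted", "discharged"])

rows = []
for _, s in stays.iterrows():
    # Restrict the climate series to this child's hospitalization window.
    win = daily[(daily.date >= s.admitted) & (daily.date <= s.discharged)]
    rec = {"child_id": s.child_id,
           "los_days": (s.discharged - s.admitted).days + 1}
    for v in ["temperature", "humidity", "rainfall"]:
        rec[f"{v}_avg"] = win[v].mean()            # average over the stay
        rec[f"{v}_var"] = win[v].std()             # "variation" = SD
        rec[f"{v}_range"] = win[v].max() - win[v].min()
    rows.append(rec)

climate_by_child = pd.DataFrame(rows)
```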
Study Population
Primary health care in urban areas was provided by the Ministry of Local Government, Rural Development, and Cooperatives (LGRD) of Bangladesh through partnership with non-government organizations. Maternal and child health, including respiratory tract infection, was treated by the Primary Health Care Centers (PHCCs) [26]. Each PHCC provided primary health care at the urban community level for populations of 30,000 to 50,000 [27]. Participants of the clinical trial were identified from the eight selected PHCCs of the urban areas of Dhaka city within the existing health systems. Community health workers (CHWs), who resided at the community level, routinely visited door-to-door in their catchment areas to find out if there were children with any illness, including pneumonia. If a patient had the symptoms of pneumonia, the health workers sent the patient to the nearby PHCC. Children with pneumonia were treated by the PHCC's physicians, staying and taking medication at their homes. If the children's condition deteriorated and they developed severe pneumonia, they were transferred from these PHCCs to the referral hospitals. Children with severe pneumonia were also self-referred by their parents, attending the hospitals directly. After admission to the hospitals, the designated study nurse obtained written informed consent from the attending parents or caregivers for enrollment into the clinical trial and the use of their data for this study. Children with severe pneumonia, aged 2 to 59 months, of both sexes, living in the designated study areas, with written informed consent from their parents or caregivers, were included. Children who had severe acute malnutrition (SAM: weight-for-height Z-score [ZWH] < −3, bilateral pitting edema, or a mid-upper arm circumference [MUAC] of < 115 mm, occurring alone or in combination) were excluded from the study, as they need special care (e.g., nasogastric tube feeding, micronutrients, rehabilitation, etc.). The trial collected relevant data on individual children onto a predesigned and pretested Case Report Form (CRF) and transcribed them into a database.
Statistical Analysis
Demographic categorical data were presented as frequencies, and continuous data were presented as mean and standard deviation along with their range (maximum and minimum). All relevant study data were analyzed using SPSS for Windows (version 25.0; SPSS Inc., Chicago, IL, USA) and Epi Info (version 7.0, USD, Stone Mountain, GA, USA). A probability of less than 5% (p < 0.05) was considered statistically significant. The strength of associations between the LoS (i.e., the response variable in the present study) and other variables was determined by calculating the beta (β) coefficients, and their 95% confidence intervals (CI), of bivariate and multivariate generalized linear models that assume gamma-distributed error (gamma GLM). As explanatory variables in the gamma GLM, we included four categorical variables on children and their family members' status (children's sex, nutritional status, birth history, and parents' occupation), six continuous variables on children and their family members' socio-demographic characteristics (children's age, the number of family members, the number of siblings, parents' age, years of parents' education, and household income), and six continuous variables on climate (the average temperature, humidity, and rainfall, as well as their variations and ranges). We used the gamma GLM because the response variable (LoS in the hospital, in days, a time-to-event measure) is continuous, not normally distributed, and skewed to the right. To see the monthly trends of climate variables during the hospitalization of children, a time series analysis was performed, and association was estimated by Pearson's correlation method. We calculated the average temperature, humidity, and rainfall by summing the daily values and dividing by the length of the hospitalization period for each child. We also calculated the variation and range of temperature, humidity, and rainfall using the data for the period of the stay in the hospitals: the range was calculated by subtracting the minimum value from the maximum value of each climate factor, and the variation used in this research was the standard deviation of each climate factor. We examined the explanatory variables and dropped variables with variance inflation factor (VIF) values above ten, due to the presence of multicollinearity.
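As an illustration of this modelling step, the sketch below fits a gamma GLM in Python with statsmodels rather than SPSS. The data frame, file, and variable names are hypothetical, and the log link is an assumption (the text does not state the link function) chosen so that exponentiated coefficients read as the multiplicative effects on LoS reported in the Results.

```python
# Hedged sketch of the VIF screen and gamma GLM; `df` and its column
# names are hypothetical stand-ins for the analysis table.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("analysis_table.csv")  # hypothetical file

# VIF screen: drop any covariate with VIF > 10 (the daily ranges were
# removed this way in the study).
covariates = ["age_months", "n_family", "humidity_var",
              "rain_avg", "rain_var", "temp_avg"]
X = sm.add_constant(df[covariates])
vif = {c: variance_inflation_factor(X.values, i)
       for i, c in enumerate(X.columns) if c != "const"}
keep = [c for c, v in vif.items() if v <= 10]

# Gamma GLM with a log link (assumed), so exp(coef) is the
# multiplicative change in LoS per unit increase in a covariate.
formula = "los_days ~ well_nourished + " + " + ".join(keep)
fit = smf.glm(formula, data=df,
              family=sm.families.Gamma(link=sm.families.links.Log())).fit()

# e.g., exp(coef) ~ 1.04 for humidity variation would mean ~4% longer
# stay per 1% increase in daily humidity variation.
print(np.exp(fit.params))
print(np.exp(fit.conf_int()))  # 95% CIs on the multiplicative scale
```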
Demographic Characteristics of the Participants
Of a total of 1,693 under-five children with severe pneumonia who were screened, 954 (56.3%) children admitted to the different hospitals were enrolled, and of them, the study was completed for 946 (99.2%). We analyzed the data of 944 children for this study (Figure 1). Table 1 shows the demographic characteristics, parents' occupation, nutritional status, and birth history. Of the children's fathers, 98.5% were employed, in the following categories: 321 (34.1%) in skilled occupations, including office executives, large-business owners, and government officials; and 605 (64.4%) as unskilled workers, such as office non-executives, rickshaw and pushcart pullers, and taxi, bus, and tempo (local transport) drivers. Mothers of 871 (92.3%) children were unemployed, a category including housewives and students. Of the children, 23.3% were malnourished and 49.0% were delivered by caesarean section. Table 2 presents the other demographic characteristics, the LoS, and the climate (temperature, humidity, and rainfall) data for all enrolled study children and all their corresponding hospital days. The average LoS was 5.4 ± 2.4 days, with a range of 1 to 16 days.
Days of Hospitalization for Study Children (%)
Children with severe pneumonia were admitted and stayed in hospital between 1 day and 16 days. The highest percentage of children stayed in hospital for 5 days (19.1%) followed by 4 (18.1%), 6 (17.8%), and 3 (12.1%) days (Figure 2), respectively.
Trends of Temperature, Humidity, and Rainfall during Hospitalization
The monthly trend of temperature, humidity, and rainfall during the hospitalization among under-five children with severe pneumonia is shown in Figure 3. We observed there was no statistically significant association of monthly patient stays in the hospital with monthly average temperature (r = 0.114, p = 0.500), monthly temperature variation (r = −0.170, p = 0.315), monthly average humidity (r = 0.108, p = 0.525), monthly humidity variation (r = −0.019, p = 0.910), monthly average rainfall (r = 0.090, p = 0.596), and monthly rainfall variation (r = 0.113, p = 0.505). There were downward trends observed for all climate variables between October and February in every year. From April to October in every year, the temperature and humidity were maintained higher. Although rainfall was less in the winter season from December to February in every year, the influence on monthly patient stays in the hospital was not clearly observed.
Results of Generalized Linear Model with Gamma Distribution
We analyzed the study data and dropped the daily range of temperature, humidity, and rainfall variables due to the presence of multicollinearity. Table 3 shows the coefficients (β) with 95% confidence intervals (CI) and p-values of the socio-demographic and climatic factors with respect to the LoS in days, as estimated in the bivariate and multivariate analyses in the gamma GLM. After adjusting for the children's sex, the mother's and father's age, the mother's and father's education, the mother's and father's occupation, the number of siblings, the household income, and the birth and delivery history, the children's age (β: 0.996, 95% CI: 0.994-0.999, p = 0.006), well-nourished status (β: 0.936, 95% CI: 0.881-0.994, p = 0.031), the number of household family members (β: 1.020, 95% CI: 1.005-1.036, p = 0.010), the humidity variation (β: 1.040, 95% CI: 1.029-1.052, p < 0.001), the average rainfall (β: 0.980, 95% CI: 0.973-0.987, p < 0.001), and the rainfall variation (β: 1.014, 95% CI: 1.008-1.019, p < 0.001) were significantly associated with the LoS among under-five children with severe pneumonia.
In summary, our analysis indicates that for every 1% increase in daily humidity variation and every 1 mm increase in daily rainfall variation, the LoS increased by 4.0% and 1.4%, respectively, while a 1 mm decrease in average rainfall increased the LoS by 2.0%. The LoS was positively associated with humidity variation (β: 1.040, 95% CI: 1.029-1.052, p < 0.001) and rainfall variation (β: 1.014, 95% CI: 1.008-1.019, p < 0.001), and negatively associated with average rainfall (β: 0.980, 95% CI: 0.973-0.987, p < 0.001). These results suggest that greater variations in ambient humidity and rainfall, and lower average rainfall, were associated with a longer hospital stay. The average temperature and humidity, however, were not associated with the LoS, and neither were the sex of the children, their parental education, the mother's occupation, or the household income.
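As a minimal sketch of how such estimates are obtained and read, the snippet below fits a gamma GLM with a log link and exponentiates the coefficients into the percentage changes quoted above. The data frame and column names are illustrative assumptions, not the study's actual variables:

```python
# Minimal sketch of a gamma GLM with log link for LoS, and of turning model
# coefficients into percentage changes. Data and column names are invented.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Assumed layout: one row per child, LoS in days plus covariates.
df = pd.DataFrame({
    "los_days":     [5, 7, 4, 9, 6, 8, 3, 10],
    "humidity_var": [8.2, 9.1, 7.5, 10.3, 8.8, 9.7, 7.0, 10.9],
    "avg_rainfall": [12.0, 3.5, 15.2, 1.0, 8.4, 2.2, 18.0, 0.5],
})

model = smf.glm("los_days ~ humidity_var + avg_rainfall", data=df,
                family=sm.families.Gamma(link=sm.families.links.Log())).fit()

# With a log link, exp(beta) acts multiplicatively on the expected LoS:
# exp(beta) = 1.040 means a 4.0% longer stay per unit increase, matching
# the way the betas above (all close to 1) are interpreted in the text.
rate_ratios = np.exp(model.params)
pct_change = 100 * (rate_ratios - 1)
print(pct_change)
```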
Discussion
Our study demonstrated an association between the LoS of under-five children admitted to urban hospitals with severe pneumonia and their socio-demographic characteristics and some climatic factors. Younger and malnourished children, and those from households with more family members, had a higher risk of a longer LoS. Greater humidity and rainfall variation, and reduced average rainfall, also increased the risk of a longer LoS in the hospital.
We observed a significant negative association between the LoS and both the age and the nutritional status of the children: younger and more poorly nourished children stayed longer in hospital. Our study children had a wide age distribution, 1-59 months, and the lower end of this range was associated with the longest duration of hospitalization. This finding is consistent with another study in which infants (less than one year old) had longer stays than children of one year and above [28].
Several previous studies in developing countries reported that male children receive better treatment than their female counterparts; however, we found no effect of sex on the LoS, a finding similar to other studies conducted in rural Bangladesh [12], the UK [29], and Iran [30].
We did not observe any association between the occupation of the mother or the father and a longer LoS. One study in the UK, however, reported longer hospitalization of children whose fathers had lower incomes. Such discrepancies may reflect functional differences in the operation, and therefore the outcomes, of the health services of individual hospitals, regions, and countries [29].
We noted a positive association between the number of household family members and a longer LoS in the hospital. In some families, the sick child did not receive the care required because of the competing care needs of other family members. The severity of pneumonia may be related to a delay in seeking care, and thus to a longer stay, in children from larger households. For cultural reasons, some physicians also extend a patient's hospital stay for fear that the parents cannot provide proper care at home while caring for other family members [31].
With respect to the nutritional status of our study children, we found that malnutrition was associated with a longer LoS in hospital. Similar findings were observed in previous studies, which reported a strong association between better nutritional status and a shorter LoS [14,28,32]. One study in China reported that preterm and low-birth-weight children developed more severe illness and needed more rigorous care than children of normal birth weight [14]; however, we did not find any such association.
We did not find a significant association between the children's LoS and their sex, birth or delivery history, parental education and occupation, household income, or number of siblings.
Climatic factors, including daily humidity and daily rainfall variations, had a significant positive association with the LoS in hospital, with daily humidity variation having the greatest impact. Greater variations in humidity and rainfall could thus be associated with a longer hospital stay; the opposite has been reported elsewhere [33,34]. We also found that average rainfall had a significantly negative association with the LoS, whereas the average temperature and average humidity showed no significant association. Previous studies have attempted to evaluate the relationship between climate variability and hospitalization [35]; however, only a single study, conducted in rural Bangladesh, has explored the association of temperature and humidity with the LoS of hospitalized under-five children suffering from severe pneumonia [14], leaving a knowledge gap regarding the impact of rainfall on the LoS. We observed that humidity variation increased in summer, and average rainfall and rainfall variation in the rainy season, together with an increasing number of monthly patient stays in the hospital; similar findings were reported in other studies [2,10]. We assessed variations in the climatic factors, particularly the ambient temperature, humidity, and rainfall, against the LoS of under-five children with severe pneumonia. Our study hospitals had no indoor air-conditioning facilities; therefore, the ambient temperature and humidity prevailed inside the hospitals during the participants' stay.
Many attributes of children, such as their physical condition, comorbidities, history of previous hospitalization, attitude and characteristics during their illness, as well as medical practice, hospital admission criteria, treatment cost, and health insurance, have been reported to influence the LoS [30,36]. Our findings might have been influenced by not accounting for these confounding factors, due to a lack of data. All the study children were diagnosed and managed by qualified physicians in the hospitals according to the WHO guidelines for the clinical diagnosis and treatment of pneumonia in under-five children. An earlier study reported that a shorter LoS was associated with the management of hospitalized children by more qualified physicians [37].
We conducted our study at urban hospitals, and the mean LoS was longer than that reported previously for a rural hospital in Bangladesh [25]. The reason might be that children admitted to urban hospitals were in more severe condition than those in rural areas. In urban areas, there are more opportunities to consult a physician or a child specialist, so children were likely admitted only when their condition was more critical. Children were also referred from rural hospitals to urban ones for better treatment. Rural hospitals have fewer laboratory facilities, which might delay diagnosis and could explain the shorter stays there [38]. Similar outcomes were observed at other urban hospitals, where air pollution is more likely than in rural areas [39]. However, we did not consider the effects of air pollution on the LoS in our study.
Strengths and Limitations of This Study
We explored the impact of major climatic factors, namely the ambient temperature, humidity, and rainfall, on the duration of the hospital stay. Our study children were managed by qualified physicians according to the severity of their conditions and following the WHO guidelines; thus, outcome bias was unlikely. The study children were discharged from the hospitals after recovery from their severe conditions. Our study was conducted at hospitals that provide free care and treatment, and thus the LoS was unlikely to have been influenced by financial affordability.
There are some limitations to our study. First, our analysis excluded severely malnourished under-five children, and thus the findings may not generalize to all under-five children. Second, we did not take into account the children's general health conditions, including comorbidities, which might have influenced the outcome. Third, we did not have data on the time course of the pneumonia before hospitalization; therefore, we could not analyze the association between climate factors and pneumonia before the children were admitted. Fourth, as we used weather station data, humidity and rainfall were not measured in the actual hospital environment; thus, their effects on the LoS need to be interpreted with caution. Fifth, our study children received treatment at different hospitals, and the hospitals' different rates of patient admission might have influenced the LoS, although the study physicians and nurses handled discharge timing so as to minimize this influence. Finally, as the hospital treatment in our study was free of cost, we did not perform a cost analysis.
Conclusions
Our study found patients' socio-demographic characteristics and some climatic factors to be associated with the duration of hospital stay of under-five children admitted to urban hospitals in Bangladesh with severe pneumonia. As Bangladesh is a climate-susceptible country, mass awareness and health education are needed for parents/caregivers and health staff. Changing climate conditions and seasonal variation need to be considered in future health-related policy making, including hospital management. Heating, ventilation, and air conditioning systems can be included in infection control strategies. On the basis of our findings, controlling for variation in humidity and rainfall, for example through air conditioning, could be adopted as a mitigation strategy to shorten the LoS and prevent delays in recovery from severe pneumonia, especially in resource-impoverished countries. Further prospective studies could be conducted among hospitalized children with severe pneumonia in other countries with similar socio-demographic and climatic conditions, with humidity and rainfall measured inside the hospitals so as to limit confounding effects and to provide external validation.
"year": 2021,
"sha1": "0986b9fa88936b08515f21b359dfc962cfda9460",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9067/8/11/1036/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "adf049121c2785c8083f2c17dd48ac2e3dedf8e8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The Effects of Tannins in Monogastric Animals with Special Reference to Alternative Feed Ingredients
Over recent years, the monogastric animal industry has witnessed an increase in feed prices due to several factors, and this trend is likely to continue. The hike in feed prices is mostly due to intense competition over commonly used conventional ingredients. To curb this trend, alternative ingredients of both plant and animal origin need to be sourced. Such ingredients are investigated with the aim of substituting all or some of the conventional compounds. However, alternative ingredients are often a double-edged sword: they can supply animals with the necessary nutrients, but they also contain antinutritional factors such as tannins. Tannins are complex secondary metabolites, common in the plant kingdom and known to bind protein and make it unavailable; recently, however, they have been shown to have the potential to replace conventional ingredients, in addition to offering health benefits, particularly the control of zoonotic pathogens such as Salmonella. Thus, the purpose of this review is to (1) classify the types of tannins present in alternative feed ingredients, and (2) outline the effects and benefits of tannins in monogastric animals. Several processing methods have been reported to reduce tannins in diets for monogastric animals; these, furthermore, need to be cost-effective. It can thus be concluded that the level of inclusion of tannins in diets will depend on the type of ingredient and the animal species.
Introduction
Monogastric animal production, in particular the poultry production sector, is growing continuously, driven mostly by the demand for meat and eggs. However, this rapidly growing industry and the increasing demand for poultry feeds have led to a considerable increase in feedstuff prices. The gap between demand and supply of balanced feed is expected to widen, and consequently increase the cost of production. Meanwhile, conventional feed ingredients such as maize, wheat and rice can no longer meet the poultry industry's demand for feed. In addition, in-feed antibiotics have long been used as growth promoters, which improves feed conversion rates and consequently reduces costs. However, it was recently discovered that the inclusion of antibiotics can leave residues in the meat and promote antibiotic-resistant bacteria that pose a risk to humans [1]. These multifaceted challenges have compelled researchers to look for alternative ingredients that can fill the gap. Tannin-containing feedstuffs are considered valid alternatives to conventional feed ingredients, with tannins additionally acting as antipathogenic molecules.
Tannins are a group of polyphenolic compounds commonly found in the plant kingdom [2]. Because they are antimicrobial, antiparasitic, antiviral, antioxidant, and anti-inflammatory [2], they are considered valuable as a replacement for antibiotics in chicken feeds [2]. Although the use of tannins in monogastric animal feed has long been discouraged because of their antinutrient content [3], recent studies have revealed that, used with caution, they can benefit monogastric animals [4]. Tannins can also decrease the risk of livestock diseases and the spread of zoonotic pathogens. Current studies on the use of tannins in the poultry production sector show favorable outcomes [5].
The mechanisms by which tannins promote growth in monogastric animals are not as clear as in ruminants [2]. A popular suggestion is that the inclusion of tannins at low concentrations increases feed intake and consequently the performance of monogastric animals [2]. It has also been suggested that the improvement in performance results from a balance between the negative effects of tannins on feed palatability and nutrient digestion and their positive effects on the health of the intestinal ecosystem [2]. A study by [6] found that the condensed tannins in grape seed extract reduced the fecal shedding of E. tenella and increased the growth performance of broiler chickens infected with E. tenella.
To render tannin-containing feedstuffs useful for monogastric animals, different processing methods to reduce the antinutrient effects are recommended. For example, reducing the tannin component of sorghum has improved its nutritional quality, making it the closest alternative to maize in poultry diets [7]. Lately, different processing methods have been introduced to reduce the tannin content in feed ingredients. The main methods are cooking, dehulling, autoclaving, toasting, soaking, using wood ash, adding tallow, and using tannin-binding agents and enzymes. Hence, the aims of this review are (1) to elaborate on the use of tannins as alternative ingredients in monogastric animal feed; (2) to identify different structures and types of tannins; and (3) to identify successful processing methods to reduce the harmful effects of tannins.
Methodology
This review was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement guidelines [8]. A comprehensive search was conducted to identify eligible studies. The databases Web of Science, Science Direct, Google Scholar, PubMed and Wiley Online Database were searched for all relevant studies published before September 2020. The search strategy involved a combination of the keywords "tannins", "alternative ingredients", "monogastric animals", "health benefits", "condensed tannins", "hydrolysable tannins", "medicinal uses of tannins", "antinutrients in tannins", "antibiotic resistance" and "tannin processing methods". The search was narrowed to the period 1977-2020 to include both old and new studies, allowing a comparison between earlier and current uses of tannins in monogastric animals. The search was not restricted by language, date, or study type. A total of 315 records were screened after the removal of duplicates, of which 218 were excluded as irrelevant. Articles were excluded from the first draft for the following reasons: (a) they did not cover the alternative feed subject; (b) they did not adequately address the importance of tannins in livestock nutrition; or (c) they focused only on the undesirable antinutritional factors in tannins. A total of 97 records were initially used to prepare the review.
In the second stage, additional records were searched using "antibiotic resistant strains" as a keyword, to add to the knowledge regarding antibiotic resistance and the health benefits of tannins. The overall number of records used to prepare this review was 122.
Structural Properties of Tannins
The physical and chemical properties of tannins differ according to the plant species [9]. Tannins are classified into two main groups: the hydrolysable tannins (HTs) and the condensed tannins (CTs), also known as proanthocyanidins [10,11]. Hydrolysable tannins, as the name indicates, can be hydrolyzed by acids or enzymes, and their structure is characterized by a polyol core [12]. The condensed tannins, on the other hand, are non-hydrolysable oligomeric and polymeric proanthocyanidins [13], in which the single units are coupled through bonds between C-4 of one unit and C-8 or C-6 of the next [14]. The two most common condensed tannins are the procyanidins and the prodelphinidins [12]. The hydrolysable tannins comprise three types, gallotannins, ellagitannins, and complex tannins, whereas the condensed tannins are called procyanidins [15] (Figure 1). Gallic acid is mainly found in rhubarb and clove, while ellagic acid is found in eucalyptus leaves, myrobalans and pomegranate bark [16].
Further to this, recent research showed that tannins are produced inside an organelle named tannosome, which is believed to arise in cell plastids occurring in the green parts of plants that contain chlorophyll pigments. After creation, the tannosome is encapsulated in a membrane, and later transported to a plant vacuole for safe storage [17]. According to [12], the structures of the condensed tannins from different species can be differentiated based on the proportion of trihydroxylated subunits, ratio of cis/trans monomers, and the degree of polymerization. Figure 1 shows the classification of tannins into different classes.
Mode of Action and Functions of Tannins
Tannins are a complex group of polyphenolic compounds found in a wide range of plant species. They are characterized by astringency and tanning properties, which are believed to be associated with the higher-molecular-weight proanthocyanidins [20]. Hagerman [21] reported the molecular weight of tannins to be between 500 and 5000 Da. They are found in wood, bark, leaves and fruits; acacia species, which belong to the family Leguminosae, are considered the most common sources of tannins [22]. Previously, harmful nutritional consequences were attributed to tannins because they can precipitate proteins, inhibit digestive enzymes, and decrease the utilization of vitamins and minerals [23]. In addition, tannins were assumed to be unabsorbable due to their high molecular weight and their ability to form insoluble complexes with food components such as proteins [24]. Hagerman et al. [11] reported that tannins in poultry feed reduce dry matter intake and consequently weight gain. Hydrolysable tannins are found in smaller amounts in plants, while condensed tannins are found in abundance. The concentration of tannins depends on the plant genotype, tissue developmental stage, and the environmental conditions [12].
Biologically, tannins are significant in that they protect the plant while it is growing and retain potential effects after the plant has been harvested [25]. In recent research, tannins have been proposed as an alternative to antibiotics because of their antimicrobial properties, namely the ability to inhibit extracellular microbial enzymes. In addition, hydrolysable tannins could be used in lieu of antibiotics because bacteria such as Clostridium perfringens cannot develop resistance to them. However, their use in animal feed has been discouraged because of their negative impact on nutrition: their use has been linked with lower feed intake and digestibility, and thus poorer animal performance.
Tannins have numerous applications that benefit humans, including their use as nutraceuticals to prevent, for example, cancer, cardiovascular disease, kidney disease, and diabetes [26]. They are also used for tanning leather and for manufacturing ink and wood adhesives. Medicinally, tannins are haemostatic and antidiarrheal, and serve as a remedy for alkaloid and heavy-metal toxicity. In the laboratory, tannins are used as reagents for the detection of proteins, alkaloids, and heavy metals, owing to their precipitating properties. In the food industry, tannins are used to clarify wine, beer, and fruit juices. Other industrial uses include textile dyes and coagulants in rubber production.
Antibiotic Resistance in Animal Byproducts
Antibiotic resistance is a concern for animal welfare and a hazard to public health, since resistant bacteria can be passed on to humans through animal byproducts. Although some contributing factors are unavoidable, such as the ability of bacteria to adapt to a changing environment [27], others are caused by humans, such as the excessive use of antibiotics for growth promotion in farm animals [28]. For example, antibiotic-resistant Salmonella has been detected in meat [29]. Food animals are considered the main reservoir of antibiotic-resistant bacteria, which can be transferred to humans through zoonoses and the food chain [30,31]. Some of the antibiotic-resistant strains are presented in Table 1.
Table 1. Examples of antibiotic resistant strains in animal by-products.
Medicinal Uses of Tannins
Tannins in plants are believed to function as chemical guards that protect the plants against pathogens and herbivores [38]. The antioxidant and free-radical-scavenging activities of tannins have also been reported [39]. The ability of tannins to chelate metals, their antioxidant activity, antibacterial action, and complexation are believed to underlie their capacity to treat and prevent conditions such as diarrhea and gastritis [40]. Their antimicrobial mechanisms include inhibition of extracellular microbial enzymes, deprivation of the substrates required for microbial growth, and direct action on microbial metabolism through inhibition of oxidative phosphorylation. The authors of [41] state that the antimicrobial properties of tannins are believed to be associated with the hydrolysis of the ester linkage between gallic acid and polyols after the ripening of many edible fruits, which enables tannins to function as a natural defense against microbial infections. Table 2 demonstrates some of the medicinal uses of tannins [42].
Table 2. Uses of tannins as medicinal sources and industrial agents.
Components: Medicinal uses [References]
Sweet chestnut extracts: activity against Escherichia coli, Bacillus subtilis, Salmonella enterica serovar Enteritidis [43]
Extract of chestnut shell: activity against Salmonella Enteritidis, Clostridium perfringens, Staphylococcus aureus, and Campylobacter jejuni [44]
Gall nuts: treatment of diarrhea and dermatitis [45]
Acacia nilotica: antimutagenic and cytotoxic effects [46]
Sweet chestnut extracts: reduction of Salmonella infection [47]
Quebracho tannins: reduction of worm egg counts and inhibition of the development of nematodes and lungworms [48]
Chestnut extracts: control of Clostridium perfringens [49]
Pine needles and dry oak leaves: control of coccidian infection [50]
Tannins as Adhesives
Tannins are used as a partial or complete substitute for phenols in wood adhesives, in the form of tannin resins, because of their phenolic structure [51]. Tannin adhesives were first successfully commercialized in South Africa in the early 1970s [52], and earlier research on starch adhesives fortified with wattle bark tannin was also carried out in South Africa [53]. Mimosa tannin adhesives were used instead of synthetic phenolic adhesives to manufacture particleboard and plywood for external and marine applications [51]. In Kenya, the commercial wattle (Acacia mearnsii) is a well-known tannin-rich species and source of tannin-based adhesive [54]. Current industrial technologies are based mostly on paraformaldehyde or hexamethylene tetramine, which are considered more environmentally friendly [55]. The drive to create more environmentally friendly adhesives has spurred research in this field; for example, corn-starch-tannin adhesives developed by [56] as a replacement for synthetic resins have shown excellent structural stability.
Nutritive and Antinutritive Effects of Tannins
Tannins, commonly found in most cereal grains and legume seeds, are, as already indicated, considered antinutritional factors that hamper the use of some feeds by monogastric animals. Tannins have been reported to bind protein and thereby weaken protein digestion [57]. They are blamed for a bitter feed taste, lowering feed consumption through reduced palatability [58]. They are regarded as polyphenolic secondary metabolites; however, recent reports have shown that low concentrations of some tannin sources can improve the nutrition and health status of monogastric animals [2]. Antinutrients are natural or synthetic compounds that interfere with the absorption of nutrients. Condensed tannins are known to inhibit several digestive enzymes, including amylases, cellulases, pectinases, lipases, and proteases [59]. They have a major antinutritive effect that can negatively influence the digestibility of lipids, starch, and amino acids [60,61]. Tannins are a heterogeneous group of phenolic compounds, found in nature in many different plant families. In oakwood, trillo, myrobalan and divi-divi, they occur in almost every part of the plant, such as the leaves, fruits, seeds, bark, wood and roots.
Supplementation of rabbit feed with chestnut HT at concentrations of 0.5% and 1.0% had no effect on growth performance [62]. However, [63] found different results when chestnut HT was included in rabbit feed at levels of 0.45% and 0.5%, as it increased feed intake and the live weight of rabbits. Similarly, [64] reported that adding 0.20% of chestnut tannin increased the average daily gain and daily feed intake of broilers. The authors of [65] reported that when sweet chestnut wood extract was used as a supplement at 0.07% and 0.02% for broiler chickens, no antinutritive activity was observed, and crude ash, crude protein, calcium and phosphorus were not affected. The addition of tannic acid (HT) at dietary levels of 0.0125% and 0.1% had a negative impact on hematological indices and plasma iron of pigs [66]. According to [67], the ileal digestibility of energy, protein, arginine and leucine in broiler chickens was lowered as dietary tannin levels rose to 20 g/kg diet and beyond, while phenylalanine and methionine were affected negatively only at tannin levels of 25 g/kg diet. In another study with broiler chickens [68], a tannin content of 16 g/kg in red sorghum had no effect on phosphorus, calcium, and nitrogen retention. Treating high-tannin sorghum with wood ash extract improves its nutritive value [69]. Tannins can act as a double-edged sword; therefore, solutions specific to the tannin content could affect their utilization. Although tanniniferous feeds and forages containing >5% tannin in dry matter are not safe to use as animal feed, low to moderate contents (<5% of dry matter) are safe for animal consumption [59]. Table 3 shows the antinutritive and nutritive effects of tannins from different plant sources.
Influence of Tannins on the Productivity of Monogastric Animals
Tannins have been classified as antinutritional factors for monogastric animals, with negative effects on feed intake, nutrient digestibility, and production performance [1]. However, many researchers have recently shown that some tannins can improve the intestinal microbial ecosystem, enhance gut health, and hence increase productive performance when applied appropriately in monogastric diets [62,70,75]. Strong protein affinity is a well-recognized property of plant tannins, which has been successfully exploited in monogastric nutrition. Nevertheless, adverse effects of high-tannin diets on the performance of monogastric animals have been reported by many researchers [71]. In monogastric animals, the main effects of tannins are related to their protein-binding capacity and the resulting reduction in protein, starch, and energy digestibility [76,77]. According to [10,78], dry matter intake, body weight, feed efficiency and nutrient digestibility were reduced when chickens were fed diets with tannins, whilst Ebrahim et al. [71] reported a decrease in body weight gain and feed intake. However, [72,75] reported no effects on growth performance or on the egg weight, shell thickness or yolk color of layers. Several studies showed that low concentrations of tannins improved feed intake, health status, nutrition, and animal performance in monogastric farm animals [2,4,79].
According to [80], supplementing pigs' diets with 0.2% chestnut wood extract rich in tannins had no effect on the growth rate, carcass traits or meat quality of pigs raised up to 26 weeks of age, whereas Bee et al. [81] reported that pigs fed diets with 3% hydrolysable tannins from chestnuts from day 105 until day 165 showed no negative effects on growth performance. The authors of [49] reported an increase in small intestinal villus height, villus perimeter and mucosal thickness in pigs fed diets containing 3% hydrolysable tannins from chestnuts. Moreover, [4] reported increased growth performance in pigs aged 23-127 days fed chestnuts rich in tannins at a 0.91% supplementation level. According to [82], pigs develop parotid gland hypertrophy and secrete proline-rich proteins in the saliva that bind and neutralize the toxic effects of tannins, which makes them relatively resistant to tanniniferous diets compared with other monogastric animals (Table 3).
In rabbits [62], no difference was observed in the performance of animals fed diets supplemented with up to 10 g of tannins from chestnuts; nor were improvements observed in the health status, diet nutritive value, growth performance, carcass traits or oxidative stability of rabbits fed up to 400 g/100 kg of hydrolysable tannins from chestnuts. According to [83], rabbits fed diets with 4% of the tanniniferous browses Acacia karroo, Acacia nilotica and Acacia tortilis showed no significant differences in intake and digestibility. Mancini et al. [84] also reported no significant difference in growth rate, feed intake, feed conversion ratio or carcass traits of rabbits fed a mixture of quebracho and chestnut tannins. Moreover, [85] observed no significant difference in the growth rate, feed intake or feed conversion ratio of rabbits fed low-tannin sorghum grains. Thus, tannins included in monogastric animal diets can have both positive and negative effects on animal performance, depending on the concentration. Therefore, it is important to minimize the inclusion of feedstuffs containing high concentrations of tannins in monogastric diets, or to take measures to decrease their concentrations. In Table 4, the effect of tannins on the productivity of monogastric animals is reported; the recoverable rows are:
(source and level not recovered), broilers: decreased body weight gain and feed intake; improved the fatty acid profile of breast muscle [71]
Chestnut, layers: no effect on egg weights, shell thickness or yolk colour; reduced cholesterol content [72]
0.45% and 0.5% chestnut, rabbits: increased live weight gain and feed intake [79,86]
0.5% and 1.0% quebracho and chestnut, rabbits: no effect on growth performance [62,84]
4% Acacia karroo, Acacia nilotica and Acacia tortilis, rabbits: no significant differences in intake and digestibility [83]
Processing Techniques Used to Reduce Effects of Tannins
Several processing techniques to reduce tannin levels in different feedstuffs, especially unconventional ingredients, have been suggested [86,87]. Processing is the application of suitable techniques to reduce or eliminate the tannins present in alternative feedstuffs. These techniques include enzyme supplementation, soaking, dehulling, alkali treatment, extrusion, and germination.
Enzyme Supplementation
Supplementation of enzymes to reduce the tannin content is an effective method, although it might not be the most economical. It has proven to reduce tannins better than other processing methods, such as soaking and dehulling. Several studies have shown that enzyme supplementation is effective in reducing tannins in alternative energy and protein feedstuffs [88,89]. A study by [88] found that treatment of sorghum with both polyphenol oxidase and phytase enzymes decreased hydrolysable and condensed tannins by 72.3% and 81.3%, respectively. Moreover, [89] reported decreases in hydrolysable and condensed tannins of 40.6%, 38.92% and 58.00%, respectively, when sorghum grains were treated with tannase, phytase and Paecilomyces variotii.
Soaking
Soaking is one of the cheapest traditional methods and has been used by animal nutritionists for many years. The addition of sodium bicarbonate, prolonged soaking times, and higher temperatures have proved effective during the soaking process [90]. Kyarisiima et al. [69] reported that high-tannin sorghum soaked in wood ash extract showed a decreased level of tannins without loss of nutrient content, and noted that tannin levels decreased not only with soaking but also with roasting. The decrease in tannins during soaking may result from leaching into the soaking water [77]. Moreover, [91] reported a decrease of about 73-82% in velvet beans.
Dehulling
Dehulling is the process of removing the outer coat/hull of a seed [92]. The seeds of most alternative feedstuffs have seed coats/hulls in which tannins are normally concentrated. When tannins are removed, legume seed meals have shown a significant increase in protein digestibility and protein content. The authors of [93] reported that dehulling reduced tannins in chickpea without lowering protein digestibility, whereas in faba beans a 92% decrease in tannins was achieved by dehulling [94].
Extrusion
The extrusion method is used to decrease tannin levels in feedstuffs. According to [95], extrusion cooking is a high-temperature, short-time process in which starchy food materials are plasticized and cooked by a combination of moisture, pressure, temperature, and mechanical shear. Extrusion has shown the ability to inactivate antinutritional elements [96][97][98]. For example, [99] reported that extrusion significantly reduced tannins with minimal oil loss in flaxseed meal. The authors of [100] reported a reduction in tannins in lentil splits after extrusion treatment. Moreover, [101] reported reductions of 34.52% to 57.41% in sorghum.
Germination
During the germination process, complex sugars are converted into simple sugars [91]. Germination, one of the cheapest methods, has been shown to reduce tannin content. A maximum reduction in tannins of up to 75% was observed when pearl millet was treated by germination [102]. Rusydi and Azlan [103] observed a reduction of 57.12% when peanuts were germinated. The reduction of tannins may improve the nutritional quality of feedstuffs. Thus, processing techniques may help to remove or reduce tannin levels in different feedstuffs, which might be favorable for animal production (Table 5).
Cooking
Cooking is important in reducing the antinutrient activity associated with tannins. As stated by [104], cooking reduces the antinutrients present in tuber crops such as cocoyam.
Autoclaving
Autoclaving is one of the most effective methods for eliminating antinutrients, although it might not be cost-effective because of its reliance on electricity [105].
Grinding
Grinding is considered an effective method of reducing tannin content because it increases the surface area, which in turn reduces the contact between tannins and the phenol oxidase in the plant [106,107].
Health Benefits of Tannins in Monogastric Animal Production
Tannins are plant extracts that can be used as additives in monogastric animal feed to control diseases [1]. In vitro studies have shown that most tannins have antiviral, antibacterial and antitumor properties [15]. Tannins have shown favorable outcomes in the promotion of gut health when used together with other antimicrobial growth-promoting factors (AGP) such as probiotics [1]. Condensed tannins extracted from green tea or quebracho have been shown to contain antimicrobial substances [108]. However, [109] reported that condensed tannins may be less effective than hydrolysable tannins in controlling Campylobacter jejuni in the presence of high concentrations of amino acids. Moreover, tannins derived from chestnuts (Castanea sativa) can inhibit the in vitro growth of Salmonella Typhimurium [110]. Several in vitro studies have revealed that polyphenols of the procyanidin (CT) type have antioxidant properties, while tannic acid has anti-enzymatic, antibacterial and astringent properties, as well as a constringing action on mucous tissues [111]. The ingestion of tannic acid causes constipation, so it can be used to treat diarrhea in the absence of inflammation [112]. Kumar et al. [69] reported that a tannin content of 16 g/kg in red sorghum had no effect on certain animal welfare parameters of broiler chickens; similarly, globulin, protein, plasma albumin, phosphorus, glucose, calcium, and uric acid levels were not affected, even when maize was replaced 100% with red sorghum. However, mild histopathological changes in kidney and liver tissues, as well as a high cell-mediated immune response, were detected when raw red sorghum containing 23 g tannins/kg was fed to the same group of broiler chickens. The supplementation of purple loosestrife (Lythrum salicaria) in rabbits led to a significant increase in total white blood cells and higher concentrations of volatile fatty acids and acetic acid; therefore, a low level of loosestrife supplementation (<0.4%) has been suggested to gain health benefits while preventing adverse effects on animal health and performance [113].
Farmatan tannin concentrations of 0.05%, 0.025% and 0.0125% can inhibit the growth of Clostridium perfringens by more than 54-fold [114]. Another in vitro study was conducted by [108] to evaluate the effects of tannins from chestnut, quebracho, or a combination of both, on Clostridium perfringens. All three products reduced the presence of C. perfringens, and comparative analysis showed that quebracho tannin was more effective in inhibiting the growth of C. perfringens than chestnut tannin. Commensal bacteria such as Bifidobacterium breve or Lactobacillus salivarius are very useful, and their growth should not be inhibited by the tannin. Kamijo et al. [115] reported that ellagitannins isolated from Rosa rugosa petals have antibacterial activity against pathogenic bacteria such as Salmonella sp., Bacillus cereus, S. aureus and E. coli, but no effect on beneficial bacteria. Most in vitro results are supported by in vivo experiments showing that the inclusion of tannins in monogastric diets can lower the occurrence and severity of diarrhea [116]. However, tannins that robustly inhibit pathogens in vitro need to be evaluated further in in vivo experiments involving poultry and pigs. These disparities regarding the types of tannins that are efficient against certain pathogens warrant further research. Table 6 shows different health benefits of tannins in monogastric animals; the recoverable rows are:
(source and level not recovered): reduction in the absorption of mycotoxins at the gastrointestinal surface [118]
Grape pomace (CT), broiler chickens, 6%: increased commensal bacteria (Lactobacillus) and decreased counts of clostridial bacteria in ileal content [118]
Conclusions
In the quest to find alternative feed ingredients for the production of monogastric animals, tannins have proven to be of value. Tannins can be beneficial both as feed ingredients and as valuable contributors to animal health. Although tannins contain antinutrients, different processing methods have proved effective in reducing or eliminating them. This review has provided extensive literature on the benefits and impacts of tannins in poultry production, and has elaborated on the different processing methods which can be employed to reduce their negative effects. The methods chosen should be cost-effective, easy to use, and should not defeat the purpose of alternative feed ingredients. Even though tannins can act as feed additives, their inclusion level will depend on the source, age and species of poultry. Thus, future research should focus on the optimum tannin inclusion level in poultry and on more cost-effective processing methods, especially for small-scale poultry keepers, who mostly utilize these alternative feed ingredients. The development of convenient, readily available tannin products ready to be incorporated into monogastric animal feed is encouraged.
"year": 2020,
"sha1": "359d19497f1b9b25d594461e2fce673f97502e2e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/25/20/4680/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1b0ecc6384060140b7e09194496c1b4f482cc49e",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
What should organic farmers grow: heritage or modern spring wheat cultivars?
To achieve a complete organic value chain, we need organic seed from cultivars adapted to organic growing. A separate breeding for organic growing is difficult to achieve in small markets. Many breeding goals are equal for organic and conventional cereals, and cultivars failing to qualify as a commercial variety for conventional growing may possibly perform well in organic growing, with different regimes of fertilisation and plant protection. A field trial was conducted over 2 years to compare 25 cultivars of spring wheat, ranging from one land race and some old varieties released between 1940 and 1967, to modern market varieties and breeding lines. Grain yield, agronomic characteristics and grain and flour quality, including mineral content, were recorded. The performance of the 20 most interesting cultivars in artisan bread baking was measured, as were sensory attributes in sourdough bread from six cultivars. Modern varieties and breeding lines gave higher yields and had larger kernels, better grain filling, higher falling numbers and higher SDS-sedimentation volumes compared with old cultivars. The old cultivars, on average, had higher concentrations of minerals, although the growing site had a strong effect on mineral concentrations. Bread from modern cultivars performed best in a baking test. Several sensory attributes such as juiciness, chew resistance, firmness, acid taste and vinegar odour varied significantly between the six tested cultivars. Land races and old varieties have an important cultural value, and many consumers are willing to pay a premium price for such products. This will be required since yield levels are often considerably lower, especially with humid weather conditions at harvest.
Introduction
Wheat (Triticum aestivum var. aestivum) is an important food crop, and the main ingredient in Norwegian bread. About 320,000 tons of wheat are milled to flour for human consumption annually in Norway, comprising about 80% of the total milled volume (Norwegian Agriculture Agency 2019). Within the organic sector, there is high interest in heritage varieties of cereals, including spelt (Triticum aestivum var. spelta), emmer (Triticum turgidum var. dicoccum) and einkorn (Triticum monococcum var. monococcum). However, these varieties were traditionally not grown in Norway and are not included in the Norwegian plant heritage. Along with land races and old cultivars, such wheat varieties are claimed to possess better characteristics than modern cultivars in several respects (e.g. Martineau 2016). People experiencing digestive problems when eating bread made by industrial baking with flour from modern cultivars may experience no digestive problems when eating bread made from heritage varieties, land races or old cultivars. It is also often claimed that heritage varieties and old wheat cultivars are more nutritious due to higher concentrations of minerals in the grains, since breeding for higher yields has caused a "dilution effect", where the grain contains relatively more starch and fewer minerals per kilogram. This argument is supported by scientific studies (e.g. Fan et al. 2008; Zhao et al. 2009). Zhao et al. (2009) found a generally negative relationship between grain yield and the concentrations of protein, zinc (Zn) and sulphur (S) in 175 lines of wheat grown in a field experiment in Hungary in 2004-2005. The study comprised 150 lines of bread wheat (130 of winter wheat, 20 of spring wheat), out of which 59 were characterised as land races, old varieties or transitional varieties, with long straw. Furthermore, the trial comprised 15 heritage varieties (spelt, emmer, einkorn) and 10 lines of durum wheat (Triticum turgidum var. durum). The bread wheat varieties in the data set were released between 1948 and 2003. Durum wheat was comparable to the bread wheat with respect to grain concentrations of Zn, iron (Fe) and selenium (Se), with mean values for these groups of 21 mg Zn, 36 mg Fe and 90 μg Se per kg. On average for the 15 heritage varieties, these concentrations were 23 mg, 41 mg and 239 μg per kg, respectively. For the whole data set, Zhao et al. (2009) also found a significant negative relationship between grain yield and bran yield, and positive relationships between grain yield and thousand kernel weight and kernel diameter. A subset of 26 bread wheat varieties, released between 1948 and 2003, showed a significantly decreasing trend for grain Zn concentration over time, whereas for Fe the trend was less apparent (Zhao et al. 2009). Fan et al. (2008) analysed the concentrations of Fe, Zn, copper (Cu), manganese (Mn), phosphorus (P), S and calcium (Ca) in a total of 362 wheat grain samples stored in the archive of the Rothamsted Broadbalk Experiment close to London, UK. The period of study was 1845-2005 and comprised 16 varieties of winter wheat, each being, in its time, one of the most commonly grown varieties. The authors draw a line of distinction at 1967, since before this year varieties had long straw, whereas after 1967 breeding for higher harvest index and short straw changed the morphology of wheat varieties significantly.
For most minerals, the average concentration in winter wheat samples taken between 1968 and 2005 was significantly lower than in samples taken before 1968, and most micronutrients showed significantly decreasing concentrations over time in the most recent period. There was a negative relationship between mineral content and grain yield. Overall, mineral concentrations decreased by 20-30%, and the authors explain this as a dilution effect due to the breeding aim of harvesting more grain and less straw (increased harvest index).
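The dilution effect can be illustrated with simple arithmetic: if roughly the same mass of a mineral ends up in the grain per hectare while grain yield rises, the concentration per kilogram of grain must fall. A minimal sketch with invented numbers:

```python
# Illustrative arithmetic for the "dilution effect": a constant mineral
# uptake per hectare spread over a rising grain yield lowers the
# concentration. Numbers are invented for illustration only.
zn_uptake_g_per_ha = 120.0   # grain Zn removed per hectare, assumed fixed

for yield_t_per_ha in (3.0, 5.0, 8.0):
    grain_kg = yield_t_per_ha * 1000
    zn_mg_per_kg = zn_uptake_g_per_ha * 1000 / grain_kg
    print(f"{yield_t_per_ha:.0f} t/ha -> {zn_mg_per_kg:.1f} mg Zn per kg grain")
# 3 t/ha -> 40.0 mg/kg; 8 t/ha -> 15.0 mg/kg: higher yield, lower concentration.
```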
As also mentioned by Zhao et al. (2009), land races and old varieties commonly have longer straw, and it is a tempting idea that this may be related to an ability of the root system to better cope with restricted growing conditions such as drought or lack of mineral nutrients. However, as discussed by Rich and Watt (2013), studies of cereal root architecture in the field are very time-consuming, and interactions between soil conditions and genotype cannot be excluded, making this a very challenging topic to confirm scientifically. What remains a fact is that farmers have always tried to increase their grain yields. So, when cereal breeding turned into a highly advanced field of research around 1900 to serve a more intensive agriculture, there were good reasons to aim for characteristics where land races often performed poorly, such as resistance to fungal diseases, better resistance to lodging and resistance to pre-harvest sprouting. Such breeding goals, and since about 1970 also strong gluten quality as requested by the Norwegian bakeries, have guided the breeding of cereal cultivars in Norway (Graminor 2020). As a result, modern cultivars tend to give significantly higher yields, often with better quality than old varieties and land races (Zhao et al. 2009). For every breeding line that qualifies as a new commercial cereal cultivar, there are many lines which do not pass the qualification. This may be due to reasons of less importance in organic growing (Table 1). For instance, a line with excellent agronomic characteristics may have too "soft" a gluten to perform well in industrial baking. Such discarded lines represent valuable genetic resources, which may possibly be valuable in organic growing, where a larger proportion of the produced wheat may be used for home baking or artisan baking, where a "strong" gluten is less important.
In Norway, the growing of spring wheat for bread making is challenging due to a short, cold and often wet growing season, and conditions become more challenging towards the north. Under such conditions, around the city of Steinkjer, 400 km south of the polar circle, a successful small company, Gullimunn AS (https://gullimunn.no/), has been established, which grows, mills and distributes flour for artisan baking from locally grown cereals of old cultivars, heritage varieties and land races. Against this background, at this location, we studied a collection comprising land races, old cultivars (approved 1940-1967), modern cultivars (approved 1970-2014) and breeding lines still under assessment for qualification as commercial cultivars, for their performance under organic growing conditions and in artisan baking. The aim of this study was to increase the diversity of cultivars used for the growing of organic cereals, by searching for modern cultivars possibly performing well under organic growing conditions and with artisan baking in Mid-Norway. In the present paper, we present the results and output of this study and discuss the benefits and drawbacks of growing old or modern cultivars in organic farming. A comprehensive report, with more detailed results for some characteristics, is available in Norwegian (Løes et al. 2019).
Material and methods
The study consisted of a field trial over 2 years, with subsequent grain quality analyses, including minerals, a baking test and sensory analyses.
Field trial
The field trial was conducted at two locations close to Steinkjer, Norway, during 2017 and 2018. At both experimental sites, we compared 25 lines of spring wheat, ranging from a Swedish land race to breeding lines still under testing. The experiment had a lattice design, with two replicates and plots sized 1.5 m × 8 m. In a lattice design, the replicates are grouped in sub-blocks according to a defined system, allowing for statistical corrections of variation in soil and other experimental conditions related to blocks. This arrangement reduced the experimental coefficient of variation (CV) in both years.
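As a rough illustration of the CV statistic mentioned here, the sketch below computes a CV from plot yields. In the actual lattice analysis, the CV would be based on the residual error after block corrections; the numbers here are invented:

```python
# Sketch: coefficient of variation (CV, %) of plot yields. In the trial,
# the lattice design first removes block effects; the CV is then computed
# from the residual variation. Plot yields below are invented examples.
import statistics

plot_yields = [412, 389, 455, 430, 398, 441]  # g per m2, hypothetical
cv = 100 * statistics.stdev(plot_yields) / statistics.mean(plot_yields)
print(f"CV = {cv:.1f} %")
```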
The selection of varieties (Table 5) comprised old varieties grown and marketed by Gullimunn AS for artisan baking (Dala landhvete, Fram II), two other old Norwegian cultivars (Norrøna, Møystad), one cultivar representing the success story of early Norwegian wheat breeding (Runar), two Swedish cultivars (Polkka, with a "softer" gluten quality than most modern Norwegian varieties, and Sport, with a very high protein concentration) and two common market cultivars in Norway during -2018. The variety Mirakel, which is quite tall, was originally bred for organic farming, but because of high yields, good disease resistance and exceptionally good baking quality, it has gained popularity among conventional farmers and is now the most grown wheat variety in Norway. The selection further comprised 16 breeding lines (from 2003 to 2017) from the Norwegian breeding company Graminor, of which three had yielded well in former testing under organic growing conditions (GN12634, GN15621, GN16503), four had "soft" gluten (GN17632, 17633, 17634 and 176353) and nine were selected for vigorous early growth (remaining lines in Table 3). Seniorita also scored high on this characteristic, which was assessed in a separate study of about 200 spring wheat lines grown at Ås, Norway, in 2016 in the project "Expanding the technology base for Norwegian wheat breeding: genomic tools for breeding of high-quality bread wheat" (Research Council of Norway (RCN) 2020).
The experimental sites were fertilised according to the farmer's practice (Table 2). Topsoil analysis (0-20 cm) of samples taken after harvest (Table 2) revealed that the soil's nutrient status was medium (P-AL between 5 and 7, K-AL between 7 and 15 mg 100 g−1 air-dried soil) or high (P-AL between 8 and 14, K-AL between 16 and 30 mg 100 g−1 air-dried soil; Eurofins 2020). Extractable nutrients in agricultural soil in Norway are assessed by extraction with 0.1 M ammonium lactate and 0.4 M acetic acid at pH 3.75 (AL method; Egnèr et al. 1960). Seeds were planted with an experimental seed drill at a seeding rate of 24 g seed m−2. For the 2017 trials, the seed was delivered by Gullimunn AS and Graminor AS. For the 2018 season, seed from the 2017 experiment yield was used. Weed harrowing was performed once before the wheat germinated, using the farmer's equipment. Harvesting was done with an experimental combine harvester; only grain yields were recorded.
In 2017, the growing season began with a cold and dry spring, followed by a wet and cool early summer (Fig. 1). The amount of rain in June was high compared with normal values in this area (Table 3). In late July, the weather changed, and harvest conditions were favourable, with a warm and dry September. Despite a late start, the growing conditions were generally favourable for cereals in 2017 and yields were satisfactory. In 2018, the conditions were almost the opposite. May was warm and dry, and the summer proceeded with very little precipitation. This negatively affected cereal yields, giving short straw and small grains. A very wet harvest with poor harvesting conditions further contributed to very low yield levels in that season.
Experimental plots were regularly observed, and characteristics such as days to maturity, length of straw and lodging were registered. The developmental stage was recorded at yellow ripening, using the BBCH scale (Lancashire et al. 1991). For this observation, plots with the cultivar Mirakel were used as a standard, and other plots were assessed relative to these plots by giving a value of minus 1-8 days for earlier cultivars, 0 for similar cultivars and plus 1-4 days for later maturing cultivars (Table 4). At yellow ripening, the grain water content is 38%. At this stage, the whole aboveground plant is yellow, except for some green colouring of the nodes. Straw length was recorded on the same date, by choosing one row of plants per plot and measuring the lengths of 10 typical straws. Lodging was recorded before harvest, as the percentage of the plot with flat-lying plants. For example, if the straw inclined 45° on 50% of the plot, a value of 25% lodging was recorded. At harvest, the fresh weight of grains was recorded, and a representative sample of about 1 kg was used for determination of dry weight (DW). Grain yields are presented at a standard water content of 15%. The samples used for DW determination were also used for the grain quality analyses described below. Grain from 20 cultivars was selected, by leaving out the five least promising breeding lines (GN), and used for analysis of minerals and test baking. The lines not included were GN12634, GN12741, GN14529 and GN17633, being very late in maturation and/or low in yield, and GN17634, being the only modern cultivar with a low falling number value in 2017.
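The two scoring rules above (lodging percentage and yield at standard water content) reduce to simple arithmetic. The sketch below is illustrative; function names and input values are not from the study.

```python
# Hedged sketch of the lodging score and the yield standardisation
# described above. All names and numbers are illustrative.

def lodging_percent(area_fraction_affected, inclination_fraction):
    """Lodging score: straw inclined 45 deg (= 0.5 of flat) on 50% of the
    plot gives 0.5 * 0.5 = 25% lodging, matching the example in the text."""
    return 100 * area_fraction_affected * inclination_fraction

def yield_at_15pct_water(fresh_weight_kg, dry_matter_fraction):
    """Convert recorded fresh grain weight to yield at the standard 15%
    water content: dry weight / (1 - 0.15)."""
    dry_weight = fresh_weight_kg * dry_matter_fraction
    return dry_weight / 0.85

print(lodging_percent(0.5, 0.5))        # 25.0
print(yield_at_15pct_water(100, 0.80))  # 80 kg DW -> ~94.1 kg at 15% water
```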
Grain quality was described by water content at harvest, test weight, thousand grain weight, starch quality (falling number), protein content and technological protein quality measured as SDS-sedimentation volume (see below). Grain water content at harvest provides useful information about earliness and ripeness at harvest, provided that the harvested lines were not dead ripe (completely dead), because then the water content is a function of precipitation or dew, not maturity. In our case, results for grain water content from 2017 are useful to evaluate earliness, whereas results from 2018 are less reliable due to very wet harvest conditions (Fig. 2). Dead ripeness occurs 7-10 days after yellow ripeness. The grain test weight is measured on a standard volume of grains (0.5 dm³) at standard water content and is given as kg/hl. It gives information about kernel development and grain filling. Well-filled grains increase the test weight, and numbers approaching 80 kg/hl indicate good grain filling. The milling industry usually demands a test weight of wheat above 75 kg/hl. The test weight is closely related to the proportion of flour after sieving, because less well-filled kernels have relatively more bran. The thousand grain weight (TGW) indicates the average grain size; wheat cultivars have different potential maximum grain sizes, typically varying from 30 to 45 mg per grain.
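For concreteness, the two kernel measures can be computed as below. The sample weights are made up; only the thresholds and units come from the text.

```python
# Illustrative arithmetic for TGW and test weight (values are invented).

grain_count = 500
sample_weight_g = 16.4
tgw_g = sample_weight_g / grain_count * 1000   # thousand grain weight = 32.8 g

sample_volume_dm3 = 0.5                        # standard 0.5 dm3 sample
sample_weight_kg = 0.395
test_weight_kg_hl = sample_weight_kg / sample_volume_dm3 * 100  # kg per hectolitre

meets_milling_demand = test_weight_kg_hl > 75  # milling industry threshold
print(tgw_g, test_weight_kg_hl, meets_milling_demand)  # 32.8 79.0 True
```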
For the baking industry, the starch quality (falling number), protein content and technological protein quality measured by SDS-sedimentation volume are important characteristics for assessing whether a batch of grain is usable for bread making.
The falling number describes the ability of the starch to swell and absorb water within its complex structure.
Starch damage by amylase enzymes activated by the onset of grain germination will affect this ability. Starch quality is measured as the Hagberg Falling Number (Perkin Elmer 2019). Wet and warm weather conditions after grain maturity in autumn may initiate pre-harvest sprouting and thereby cause poor starch quality and low falling numbers. The amylases activated by the onset of germination will continue the decomposition of starch during dough making. Hence, even a small batch of grain with a low falling number (and high enzymatic activity) may reduce the baking quality of a large batch of wheat with a high falling number. The Norwegian milling industry demands a falling number > 200 to accept wheat for bread making. Grain with values < 200 may be treated (heated) to reduce enzyme activity or used for other products, but is mostly used for feed.
Definitions of the sensory attributes used in the sensory analyses (Table 4, excerpt):
Cloying: relates to a flat, non-fresh, watery flavour.
Firmness: related to the force needed to push the bread curvature completely flat.
Hardness: mechanical texture attribute related to the force needed to bite through the sample with the molars.
Juiciness: surface textural property describing liquid absorbed or emitted from a product; perception of water after 4-5 chews.
Chewing resistance: mechanical texture attribute related to the time and number of chews necessary to masticate the sample ready for swallowing.
Doughiness: mechanical structural attribute related to the effort required to break the product down into a condition ready for swallowing.

The protein content is measured as Kjeldahl-N: the total N is multiplied by 6.25 to derive the protein content, assuming a standard N concentration of 16% in grain proteins (100/16 = 6.25). Total N in the grains was measured by near-infrared transmittance with an Infratec 1241 Grain Analyzer (FOSS). The Norwegian milling industry demands at least 11.5% protein in wheat. Grain batches with lower protein levels are normally used for feed but may also be applicable for other purposes, such as making porridge or flatbread. Wheat grain batches with similar protein content may vary significantly in their ability to produce bread which rises efficiently and keeps its form after baking (= baking quality). This is largely dependent on the protein quality, which depends both on the proportion of gluten proteins in the total protein and on the ability of this gluten to construct a stable network which keeps the bread risen after baking. Without such a network, the carbon dioxide produced by the yeast during raising of the dough escapes, resulting in a heavy bread. Breeding wheat for industrial baking implies fostering the presence of some gluten components, whereas other components should be avoided. In modern industrial baking, the dough must behave similarly over time, and it has to withstand tough mechanical processing. This has resulted in modern cultivars with "strong" gluten. For artisan baking, where the baker can become familiar with various batches of flour and adapt the practice to the characteristics in each case, cultivars with less strong gluten may be of high interest, especially if such cultivars can be tolerated by people who react poorly to industrial bread products.
There are many rheological tests for measuring the bread-making quality of flour, but such tests are usually time-consuming and demand costly equipment. There is a satisfactory positive correlation between the fast SDS-sedimentation method and industrial bread-making quality with high mixing intensity. In the SDS-sedimentation method, milled grains are mixed with water, lactic acid and sodium dodecyl sulphate (SDS), hence the name SDS test. The volume of precipitated material (sediment) is recorded. Proteins from grains with strong gluten components swell more and hence give a higher volume of sediment. However, the volume also increases with a higher content of protein. Hence, specific SDS values (SDS divided by the protein content) can provide additional information about the gluten quality.
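The two derived quantities described above (crude protein from Kjeldahl-N, and specific SDS) are simple ratios. A minimal sketch, with invented input values:

```python
# Illustrative helpers for the two formulas stated in the text.

def protein_percent(total_n_percent):
    # Standard conversion assuming 16% N in grain protein (100/16 = 6.25)
    return total_n_percent * 6.25

def specific_sds(sds_volume_cm3, protein):
    # SDS-sedimentation volume normalised by protein content
    return sds_volume_cm3 / protein

protein = protein_percent(1.9)            # 11.875% -> meets the 11.5% demand
print(protein, specific_sds(65.0, protein))
```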
For analyses of minerals, a composite sample of 200 g dry grains, 100 g per replicate plot, was made for each of the 20 selected lines from both sites in 2017. The samples were shipped to Actlabs, Canada, to measure the concentrations of P, K, Ca, Mg, S, Na, Fe, Co, Cu, Mn, Mo, Se and Zn. The laboratory milled the grains and extracted the minerals with strong acids. The concentrations of Co, Mo and Se were then measured by inductively coupled plasma mass spectrometry (ICP-MS), and the other minerals were recorded by inductively coupled plasma optical emission spectrometry (ICP-OES). To study the relationship between the "age" of a line and the mineral concentrations, each line was assigned a number computed as 2018 minus the year of approval or, for not yet qualified lines, the year they entered testing. For the land race Dala landhvete, the year of approval was set to 1900, giving an age of 118. For GN lines, the year of entering testing is indicated by the first two digits of the line number; e.g. GN06557 was first tested in 2006.

Similarly, composite samples of grains from both experimental sites grown in 2017 from the 20 selected cultivars were used to produce flour and sourdough breads, which were assessed for quality. The 40 samples (cultivar × site) were treated anonymously. The assessment of how well the grain samples performed for artisan sourdough bread baking was conducted by Caroline Lindö in Sweden between April 3 and May 31, 2018. Lindö holds a PhD in microbiology and is a very experienced artisan baker. The grains were milled into flour using a KoMo Fidibus XL table-sized stone mill on the same day as the test baking occurred. After milling, the flour was sieved through a sieving cloth, with about 85% of the particles passing the cloth. The amount of gluten proteins was measured by washing out the starch from samples of dough prepared from 10 g of flour; the remains after washing are gluten proteins. The remains were assessed for stability, elasticity and extensibility, with results presented in Løes et al. (2019). For bread baking, 300 g of non-sieved flour, 300 g of sieved flour, 80 g of sourdough culture and 14 g of salt were mixed with an appropriate amount of water, and the water volume was recorded. The dough was mixed by hand and kept at room temperature with careful stretching and folding once per hour to level out the temperature and strengthen the gluten. When assessed ready for further treatment, the dough was divided in two, loaves were formed and placed in small baskets overnight in a refrigerator. The next day, the bread was baked in an electric oven with a stone plate, at a temperature between 240 and 270°C. After baking, the form, colour, crust and odour of each bread were assessed. After cooling, one loaf was cut in two to assess the structure, elasticity, colour and odour of the crumb (the inside of the bread). Then the bread was tasted, and a photograph was taken of the grains, the complete bread and the crumb (see Løes et al. 2019). Several evaluation criteria were merged into a single score between 1 (least good bread: raw, poorly risen, not keeping the initial spherical form well) and 5 (best bread: well baked, well risen and keeping its form).
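Returning to the "age" variable defined at the start of the preceding paragraph, the rule is easy to encode. The helper below is hypothetical; only the rule itself (2018 minus year of approval, first two digits of GN numbers, Dala landhvete set to 1900) comes from the text.

```python
# Hypothetical helper implementing the "age" rule from the mineral analyses.

def cultivar_age(name, year_approved=None, reference_year=2018):
    if name == "Dala landhvete":           # land race, approval set to 1900
        return reference_year - 1900       # age = 118
    if name.startswith("GN"):              # e.g. GN06557 entered testing in 2006
        return reference_year - (2000 + int(name[2:4]))
    return reference_year - year_approved  # approved cultivars

print(cultivar_age("Dala landhvete"))          # 118
print(cultivar_age("GN06557"))                 # 12
print(cultivar_age("SomeOldCultivar", 1967))   # 51
```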
Six bread loaves from growing site 2, which all received a score of 3 or better, were selected for sensory analysis at Nofima AS in Ås, Norway. The selected cultivars comprised the old cultivar Fram II, the early bred cultivar Runar, the two Swedish cultivars Polkka and Sport, and the market cultivars Mirakel and Seniorita. The aim of this analysis was to describe objectively possible differences in sensory attributes between the six bread loaves, based on 16 observed characteristics. Prior to the assessments, two samples (Polkka, Mirakel) were used in a training session to agree on the variation in attribute intensity. The results from the training session were reviewed in a profile plot, using the software PanelCheck for visual performance monitoring. The output of this session was a list of attributes comprising odour, flavour/taste and texture (Table 4).
The bread loaves were sent frozen overnight from Sweden to Nofima and, after thawing, were heated in a baking oven for 10 min at 200°C to allow the aroma of the breads to develop. After cooling, even slices were cut in a bread slicing machine, and standardised pieces for testing were taken by cutting circles with a diameter of 22 mm from the middle slices of the loaves. For each bread, each panellist received two circles of the inner part of the bread (crust excluded), kept at room temperature. The two pieces were used to assess odour; thereafter, one piece was used to assess flavour/taste, and the other to assess texture. To neutralise the mouth, the panellists were required to rinse with lukewarm water and eat unsalted crackers between samples. The limited amount of bread complicated the study, because it was a challenge for the sensory panel to catch all nuances with the small pieces available. The coded samples were served in blind trials randomised according to sample, assessor and replicate. The bread was served to the assessors in white plastic cups with a three-digit code, covered by a lid. The panellists evaluated the samples in duplicate, during four sessions with at least a 15 min break between sessions. The assessors recorded their results at individual speed on a 15 cm unstructured continuous scale, with the left side of the scale corresponding to the lowest intensity and the right side to the highest intensity. The computer transformed the responses into numbers between 1 = no intensity and 9 = distinct intensity.
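The text does not state how the 15 cm line positions were converted to the 1-9 values; a linear mapping is the obvious candidate and is sketched below as an assumption.

```python
# Assumed linear transformation from the 15 cm line scale to 1-9 intensity.
# The exact mapping was not stated in the text; this is a guess.

def intensity(mark_cm, scale_cm=15.0, lo=1.0, hi=9.0):
    return lo + (mark_cm / scale_cm) * (hi - lo)

print(intensity(0.0))    # 1.0 = no intensity (left end of the scale)
print(intensity(7.5))    # 5.0 = mid-scale
print(intensity(15.0))   # 9.0 = distinct intensity (right end)
```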
Grain yield and quality
Due to the weather conditions during the growing seasons, the cereal yields were quite high in 2017 but very low in 2018 (Table 5). In 2017, cold weather delayed growth early in the growing season, but a warm and dry autumn contributed to acceptable yields with good grain quality. In 2018, summer drought and high precipitation in autumn resulted in low yields and poor grain quality. However, the field experiments produced reliable results in both years, which allows the performance of the different lines to be assessed in two very different growing seasons.
There was no strong relationship between grain yields in the two growing seasons. The two modern cultivars Mirakel and GN16503 gave the highest yields in both growing seasons, about 3.6 tons per hectare (ha) in 2017 (Table 5). The old cultivars Dala landhvete and Fram II yielded about 30% less (around 2.5 tons per ha), but Norrøna and Møystad performed remarkably well in 2017 and yielded about 3.5 tons. In the challenging year 2018, all cultivars yielded below 2 tons per ha on average. The old cultivars performed better than several modern cultivars in that year, but the differences were not significant. Runar, which was the earliest cultivar in our study, matured 5 days before Mirakel and yielded on average 3.31 tons of grain (15% water content) per ha in 2017.
The poor growing conditions in 2018 were also reflected in a considerably lower test weight (Table 6), 71 kg per hectolitre as compared with 81.3 in 2017, indicating poor grain filling, especially at site 2, where the drought had even more negative effects than at site 1 with its lighter soil (Table 1). This is also shown by the very short mean straw length at this site in 2018, only 65 cm (Table 5), whereas the length was 74 cm at site 1 and more than 90 cm at both sites in 2017. The kernels were also smaller in 2018 than in 2017 (thousand grain weight (TGW), Table 6).
The old cultivars had smaller kernels, with poorer grain filling, than the modern cultivars (Table 6). Across sites and years, the four cultivars approved before 1970 had a TGW of 29.6 g, whereas the other 21 cultivars had a mean TGW of 33.3 g. Long and weak straw resulted in significant lodging in the oldest cultivars in both years (Table 5); this is a very clear difference between old and modern cultivars. As shown by low falling numbers (Table 6), even in 2017 the old cultivars were susceptible to pre-harvest sprouting damage, which was increased by lodging. In 2018, many cultivars had very low falling numbers, due to the extremely wet harvest conditions.
Breeding for increased grain yield tends to reduce the protein concentration, which was also found in our study. In general, the lower yielding old cultivars had higher protein content than the modern cultivars (Table 6), but lower protein yield.
The SDS-sedimentation volumes were considerably higher in 2018, on average 70 cm³ as compared with 58 cm³ in 2017. The values are somewhat lower than reported by Åssveen et al. (2017) in a study of conventionally grown spring wheat in Southern Norway during 2015-2017. These authors found SDS values around 90 for Mirakel and Seniorita, likely due to better climatic conditions and higher fertiliser application. On average across sites and years, the four old varieties had SDS-sedimentation volumes of 45 cm³, while the mean of the other 21 lines was 65 cm³. This confirms that modern cultivars have different gluten characteristics from old cultivars. No correlation was found between protein content and SDS value within each site and year, but a positive correlation (r² = 0.47) was found for the average values over the four experiments (Løes et al. 2019).
Based on an overall assessment of the tested lines, GN03503, GN14649 and GN16503 are interesting for further testing under organic growing conditions in this region. They have stable, high yields, high falling numbers and good grain quality characteristics. Among the tested current market varieties, Mirakel performed better than Seniorita, which is shorter and may compete less well with weeds. Runar is an interesting variety which could be considered for being brought back into practical use in organic growing, due to its earliness, long straw and good performance, at least in the season with good conditions for cereal growing.
Grain mineral concentrations
The concentrations of both selenium and cobalt were below 1 mg kg−1 in all grain samples analysed (n = 40). This may seem negative, since selenium is an essential micronutrient for humans and animals which often needs supplementation (Haug et al. 2007). However, concentrations well below 1 mg kg−1 of these minerals in cereals are to be expected (see e.g. Díaz-Alarcón et al. 1996; MacPherson and Dixon 2003), and if these minerals had been of special interest in our study, we would have had to use another laboratory package with a lower limit of detection.
For the other minerals, we found statistically significant relationships (p < 0.01) between concentration and the "age" of the cultivars for zinc (Zn), iron (Fe) and P. For Zn, 39-51% of the variation was explained by "age" (Fig. 2). For Fe, variation in "age" explained 33-68% of the variation in grain concentration, and for P it explained 39-42% (Løes et al. 2019). For the other minerals tested (S, K, Mg, Ca, Na, Cu, Mo), the relationship with "age" was less clear than for P, Fe and Zn. The sum of all minerals analysed, except total N, showed a significant negative relationship with the mean grain yield across growing sites (Fig. 3). The variation in grain yield explained about 42% of the variation in accumulated minerals. This demonstrates the dilution of minerals in modern lines of spring wheat.
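The variance-explained figures above come from simple linear regressions (concentration on "age", and total minerals on yield). A sketch using scipy follows; the data arrays are placeholders, not the study's values, which are in Løes et al. (2019).

```python
# Sketch of the regression analyses described above (placeholder data).
import numpy as np
from scipy.stats import linregress

age = np.array([118, 78, 60, 48, 12, 5, 2])                 # "age" of lines
zn = np.array([38.0, 35.5, 33.0, 31.5, 28.0, 27.5, 26.0])   # Zn, mg/kg

fit = linregress(age, zn)
print(f"r^2 = {fit.rvalue**2:.2f}")   # share of Zn variation explained by age

yield_t_ha = np.array([2.4, 2.6, 2.9, 3.1, 3.4, 3.5, 3.6])
total_minerals = np.array([5200, 5100, 4900, 4800, 4600, 4550, 4500])  # mg/kg
fit2 = linregress(yield_t_ha, total_minerals)
print(fit2.slope < 0)   # negative slope = dilution of minerals at higher yields
```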
Baking quality and sensory analyses
An important overall result from the baking test was that fully edible and palatable breads were produced from all 40 grain samples. Ten percent of the 40 bread loaves received score 1, 15% score 2, 20% score 3, 23% score 4 and 32% score 5, where 5 was given to the overall best bread. However, there were some interesting differences, and the modern cultivars Seniorita and Mirakel, some breeding lines and Runar achieved the best overall evaluations (Table 6). The old cultivars had much lower SDS values (45 or below), whereas modern cultivars had 51 or above. Concurrently, the old cultivars scored only 1-3 in the baking test at both sites.
Since only a restricted number of sensory analyses could be conducted, we agreed to send bread loaves from cultivars which received a score of 3 or higher at both sites, hence excluding bread from the oldest cultivars. The sensory analyses were performed on bread made from each of the six cultivars Runar, Fram II, Seniorita, Mirakel, Polkka and Sport, grown at site 2 in 2017. Significant differences were found above all for odour and texture, but for only one flavour/taste attribute, acidic taste (Fig. 4; Tables 7 and 8). The bread made from Polkka had considerably less acidic taste than the loaves from the other cultivars, where the acidic taste was quite similar. The bread made from Seniorita had the most distinct vinegar odour, the highest total odour intensity and the highest mean value for acidic taste. Bread from Mirakel had a significantly weaker vinegar odour than Seniorita. The two Swedish cultivars Polkka and Sport were significantly firmer and harder than all the Norwegian cultivars. Fram II, Runar and Seniorita also had a lower chewing resistance than Mirakel, Polkka and Sport. Another obvious difference was juiciness, where bread from Runar was the juiciest and Polkka the least juicy. Loaves from the two Swedish cultivars were less juicy than bread from the four Norwegian cultivars.
Discussion
The main aim of this study was to increase the diversity of cultivars used for growing organic cereals, by searching for modern cultivars possibly performing well under organic growing conditions in Mid-Norway and with artisan baking. Our assessments of agronomic traits, grain quality, baking quality and sensory attributes revealed that modern cultivars yield more grain and have stronger straw, less risk of lodging and pre-harvest sprouting damage, and better grain quality. This result is in agreement with several other studies of breeding progress in spring wheat. For 10 Finnish wheat cultivars released in 1939-1990, grain yield increased by 20%, with a 7% reduction in straw length and an 80% improvement in lodging resistance in the most recent cultivars (Peltonen-Sainio and Peltonen 1994). For 316 winter wheat varieties studied in Germany, the increase in grain yield from 1983 to 2014 was 24%, while the protein concentration declined by 8% over the same period (Laidig et al. 2017). On average, a lower protein concentration in modern cultivars was also found in the present study, but the variation was high (10.7-12.4% in 2017). Due to higher yields, the protein yield will normally be higher in modern cultivars. For 16 Italian winter wheat varieties released between 1900 and 1994, grown with different N applications, a significant yield gain with modern cultivars was again confirmed, estimated at 33.5 kg per hectare per year (Guarda et al. 2004). These authors also found a decreasing protein concentration over time, of 0.03% per year. However, the total grain uptake of N increased with increasing N application, and also increased significantly in modern cultivars, irrespective of N application. The most recent cultivars gave 13, 37 and 43% more protein per hectare with application of zero, 80 or 160 kg N per hectare, respectively, when compared with the oldest. These authors conclude that even with restricted N application, as may occur in organic growing, modern cultivars will give the best yield and the best grain quality. The lower concentration of minerals in modern cultivars found here is in line with the results of Fan et al. (2008) and Zhao et al. (2009). For Zn, which was the mineral for which the effect of age was most evident also in these studies, the difference between the lowest concentration (found at both sites in GN13618) and the highest (at both sites in Dala landhvete) was more than 10 mg/kg (Løes et al. 2019). However, for all recorded minerals except sodium (Na) and manganese (Mn), there was a significant effect of growing site. As shown in Fig. 2 for Zn, the concentrations were higher at site 2, with more clay in the soil (Table 1). For Zn, the effect of growing site amounted to about 5 mg per kg grain.

Fig. 3 Relationship between accumulated mineral content (P, K, Ca, Mg, S, Na, Fe, Co, Cu, Mn, Mo, Se and Zn) and mean grain yield for 20 cultivars of spring wheat grown at two sites in Mid-Norway in 2017.
As shown in the present study, modern cultivars also perform well in artisan baking, whereas old cultivars gave bread with less favourable characteristics. For bread from six cultivars, we also found a significant effect of genotype (cultivar) on sensory attributes. One former study also combined a baking test with sensory analysis of wheat products (Kucek et al. 2017). These authors studied altogether 16 wheat cultivars used for making sourdough and/or yeast bread, cookies, pasta and cooked grain. Interestingly, the ranking of cultivars differed among products. For example, one variety performing well for crackers was the poorest for making cookies, and one cultivar performing poorly for sourdough bread was not the poorest for making yeast bread. Similarly, intense flavour in cooked grain (which is desired) was not linked with intense flavour when the same cultivar was used for sourdough bread, and one test panel which ranked an emmer cultivar highest when cooked ranked the same cultivar lowest when prepared as pasta. Hence, the authors conclude that the choice of the best cultivar depends on what type of product will be made from it.
The colour of the wheat bran (red or white) may also affect the palatability of products made from the wheat. A French study showed that consumers there preferred the taste of red wheat cultivars (Vindras-Fouillet et al. 2014). In Norway, all spring wheat cultivars are red, since white cultivars have a higher risk of pre-harvest sprouting, an important negative characteristic under Norwegian climatic conditions.
A study from New Zealand (Heenan et al. 2008) showed that the positive drivers of consumer liking of fresh bread were porous appearance, floury odour, malty odour, toasted odour, sweet flavour and sweet aftertaste. For the breads in our study, there was no significant difference in sweet taste, and since the crust was not assessed, there was no toasted odour or flavour. However, most consumers prefer a juicy bread to a less juicy one. Since the Norwegian varieties were juicier than the Swedish cultivars, and also less hard with less chewing resistance, we would expect consumers to appreciate these breads in terms of texture.
Overall, it should be of interest for organic farmers to grow modern cultivars, even for artisan baking. However, that does not seem to be the case: consumers' interest in old cultivars is high, following successful marketing and product development. The price of 1 kg of flour from Dala landhvete and spelt wheat in Norwegian web shops as of March 2020 is NOK 40-50, which is 4-5 Euro, whereas the price of 1 kg of conventional wheat flour on the web (and in conventional shops) was NOK 13, about 1.3 Euro. In spite of this, we recommend expanding the number of varieties grown organically, and introducing new stories about some interesting varieties, e.g. Runar, which is a successful example of early modern cereal breeding in Norway. Six varieties have been selected for a follow-up project during 2019-2021 (Dala landhvete, Runar, Mirakel, Seniorita, GN16503 and GN17635). With the aim of increasing the quality and diversity of organically grown grain seed, we will study how active selection of evenly large seed grains by air separation may affect seed quality. The project will inform about possibilities within national regulations to broaden the offer of cultivars for organic growing and disseminate knowledge on how to establish small-scale distribution of seed grains.
"year": 2020,
"sha1": "69764ba300d1581f17839ef26855de8e00ca0587",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s13165-020-00301-7.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "69764ba300d1581f17839ef26855de8e00ca0587",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
Cortical thickness in human V1 associated with central vision loss
Better understanding of the extent and scope of visual cortex plasticity following central vision loss is essential both for clarifying the mechanisms of brain plasticity and for future development of interventions to retain or restore visual function. This study investigated structural differences in primary visual cortex between normally-sighted controls and participants with central vision loss due to macular degeneration (MD). Ten participants with MD and ten age-, gender-, and education-matched controls with normal vision were included. The thickness of primary visual cortex was assessed using T1-weighted anatomical scans, and central and peripheral cortical regions were carefully compared between well-characterized participants with MD and controls. Results suggest that, compared to controls, participants with MD had significantly thinner cortex in typically centrally-responsive primary visual cortex – the region of cortex that normally receives visual input from the damaged area of the retina. Conversely, peripherally-responsive primary visual cortex demonstrated significantly increased cortical thickness relative to controls. These results suggest that central vision loss may give rise to cortical thinning, while in the same group of people, compensatory recruitment of spared peripheral vision may give rise to cortical thickening. This work furthers our understanding of neural plasticity in the context of adult vision loss.
density and volume. For example, it is unknown whether these measurements reflect changes in cortical thickness, cortical area, or gyrification patterns [16][17][18]. Further, little attention has been given to the question of whether increased use of parts of the visual field leads to changes in the structure of early visual areas. This is the first study, to our knowledge, to address the degree to which increased use of a visual region leads to increases in cortical thickness.
This study compared the cortical thickness of primary visual cortex (V1) between participants with MD who have central vision loss but intact peripheral vision and matched (age, gender, education) normally sighted control participants. Our experimental design allowed us to examine, in the same group of carefully-chosen and rare participants who have dense central vision loss in both eyes, the consequences of both increased and decreased use of a visual field. We hypothesized that participants with MD, as compared to the matched controls, would (a) have thinner cortex in centrally responsive parts of V1 (e.g. lesion projection zone) following the reduced use of central vision, and conversely (b) thicker cortex in peripherally responsive parts of V1 following the increased use of peripheral vision as a compensatory strategy for central vision loss.
Materials and Methods
Participants. The University of Alabama at Birmingham (UAB) Institutional Review Board approved this study, and all participants provided informed consent for their participation. All methods were carried out in accordance to the approved guidelines. We recruited ten participants with MD (six females and four males; mean age 63.1 years, range 34-81 years; mean education 14.8, range 3-18 years; see Table 1) and ten control participants with normal vision (minimum visual acuity: 20/44 best eye) matched to each MD participant for age (within five years), gender, and education level (no high school degree, high school degree, some college, college, or advanced degree). The groups were not significantly different in age (T(18) = 0.1103, p = 0.913). Eligibility criteria required that MD participants had been diagnosed with MD in both eyes for at least 2 years and did not suffer from any neurological disorder. The requirement of central vision loss in both eyes can make this a challenging population to recruit. Within the MD group, three participants had juvenile-onset MD (Stargardt disease), and seven MD participants had age-related MD. All MD participants had significant central visual field loss as measured by retinal microperimetry 6 at a hospital-affiliated clinical center for low vision.
Prior to MRI scanning, participants with MD underwent visual acuity testing (ETDRS) 19, optical coherence tomography, and retinal microperimetry using the Rodenstock Scanning Laser Ophthalmoscope (SLO). The SLO confirmed that all participants with MD had significant central visual field loss. Each participant's scotoma was at least 3 degrees visual angle wide as determined by the SLO. The scotoma extent was hand drawn on each SLO image by a trained clinician (author DKD; see Fig. 1 for an example SLO image), and the extent of the scotoma for each eye was calculated from this hand-drawn measurement (Table 1). Measurements of scotoma size were calculated on a participant-by-participant basis using each individual's SLO image of their retina. We measured the diameter of the scotoma horizontally through the fovea as well as vertically through the fovea, and converted these measurements to degrees visual angle using previously reported methods 20. Table 1 provides the mean diameter in visual angle of the scotoma in each eye and is consistent with previous literature 13,15. Each cortical hemisphere processes one half of the visual field; therefore, to determine the scotoma's cortical representation, we used the radius of the scotoma instead of the diameter. Because the cortex receives input from both eyes, the minimum of the left vs. right eye values was used to determine the scotoma extent used in later analyses.

Cortical Reconstruction. Cortical thickness and grey matter volume were calculated using Freesurfer (version 5.3.0), a surface-based analysis tool that calculates the distance between the grey/white matter boundary and the pial surface [21][22][23]. All regions of interest (ROIs) were created in Freesurfer (version 5.3.0).
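The scotoma-extent rule described above (use the radius rather than the diameter, and the minimum across the two eyes) is a one-line computation. A minimal illustrative helper:

```python
# Illustrative helper encoding the stated rule; names are not from the study.

def scotoma_extent_deg(diam_left_eye_deg, diam_right_eye_deg):
    """Each hemisphere represents half the visual field, so the radius (not
    the diameter) matters; the cortex receives input from both eyes, so the
    minimum across eyes is used."""
    return min(diam_left_eye_deg, diam_right_eye_deg) / 2.0

print(scotoma_extent_deg(12.0, 14.0))   # 6.0 degrees of eccentricity
```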
Bar ROIs.
We created a set (9 per hemisphere, 18 total) of ROIs in V1 of varying eccentricity on a flat-map of the occipital pole using the Freesurfer fsaverage brain. These ROIs were defined as bars extending across the dorsal-ventral axis of V1, perpendicular to the calcarine sulcus (Fig. 2A,B), each corresponding to the left or right half of an annulus in the visual field. We hand drew eight such bar ROIs along the calcarine sulcus using the Freesurfer V1 label file (the yellow line in Fig. 2) as a guide. Each ROI had an approximate width of 10 mm as calculated with the plot_curv function in tksurfer. We also created a ninth ROI consisting of the remaining vertices between the eighth ROI and the end of the V1 label file (Fig. 2A,B). These regions spanned from the V1/V2 border on the inferior gyrus across the depth of the calcarine sulcus to the V1/V2 border on the superior gyrus, and therefore span the upper, middle, and lower visual field representations in V1 24. We hand drew these 9 ROIs for both the left and right hemisphere (total of 18 ROIs) on the Freesurfer fsaverage brain, and then used an automated process to transform the ROIs from fsaverage space to each participant's anatomical space. This approach allows creation of consistent regions across participants. These regions are labeled 1 through 9 in Fig. 2, and each region's surface area is 309 mm² on average. Published data 25 provide estimates of the mean eccentricities of regions 2 to 9.

Additional circle ROIs specific to locations along the gyrus and sulcus. Throughout the cortex, the depth of the sulcus is generally thinner than the gyral crowns 23. Therefore, any results from the ROIs presented above could be driven by a difference in the proportion of gyrus and sulcus represented in any one ROI. To control for the possibility that the proportion of surface area from the gyrus compared to the sulcus could influence our results, we performed the same tests in a separate set of ROIs in which gyral and sulcal regions were separated. We created these ROIs as "circles" in V1 (Fig. 2C,D), using the initial set of ROIs constructed on the flat-map as guides to space the new ROIs approximately 10 mm apart. We created each ROI in Freesurfer as follows: we selected a vertex by hand based on the initial set of ROIs and converted the vertex to a Freesurfer 'label'. This single vertex was expanded using the FreeSurfer "Dilate Label" function, which expands the region to include the original vertex and all neighboring vertices. This dilation was repeated a total of three times for each region, with the identical procedure applied to each of the predefined ROIs, creating the regions shown in Fig. 2C,D. The area of each of these regions was roughly 20 mm². We created the ROIs on both hemispheres and then transformed them to each subject's anatomical space.
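The seed-and-dilate construction above can be sketched generically as repeated neighborhood expansion on a mesh. This is not FreeSurfer code, just a minimal illustration of the logic, with a toy adjacency structure:

```python
# Generic sketch of "select a vertex, dilate the label three times" on a
# triangulated mesh, mimicking the logic (not the implementation) of
# FreeSurfer's Dilate Label step.

def dilate_label(label, neighbors, n_iter=3):
    region = set(label)
    for _ in range(n_iter):            # repeat the dilation three times
        grown = set(region)
        for v in region:
            grown |= neighbors[v]      # add all vertices adjacent to the region
        region = grown
    return region

# toy 1-D "mesh": vertex i neighbors i-1 and i+1
nbrs = {i: {max(i - 1, 0), min(i + 1, 9)} for i in range(10)}
print(sorted(dilate_label({5}, nbrs)))   # [2..8]: seed grown by three rings
```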
Data Analysis. All MD participants had central vision loss with a scotoma of at least three degrees visual angle in diameter (see Participants section). ROIs one through three corresponded approximately to the central 3 degrees visual angle, according to published retinotopic mapping data 25. Thus, ROIs one through three are likely to correspond to the regions of vision loss in our MD participants. ROIs four and five correspond to mean eccentricities of about 4 and 7 degrees, respectively, according to the same data 25. These ROIs corresponded generally to the border between the scotoma and healthy retinal tissue. ROIs six through nine corresponded to mean eccentricities of 14 to 63 degrees, and represented the mid to far periphery 25. Each of the presented analyses used data averaged across both hemispheres for each ROI. Similarly, data from the upper and lower bank ROIs (the gyrus and bank of sulcus ROIs from Fig. 2C,D) were averaged together. For each set of regions, we performed a two-way mixed-measures ANOVA with a between-subjects factor of group (2 levels) and a within-subjects factor of ROI (ROIs from Fig. 2B; 9 levels). This analysis was chosen because measurements of cortical thickness across V1 were assumed to be dependent samples. We followed up any significant interaction with post-hoc t-tests. Figure 3 shows cortical thickness results across central to peripheral eccentricities in V1.
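The ANOVAs in the Results are reported with Huynh-Feldt-corrected degrees of freedom whenever sphericity was violated. The correction simply rescales both within-subject dfs by the epsilon; a sketch using the bar-ROI design (20 subjects, 2 groups, 9 ROIs; the epsilon and F value below are illustrative):

```python
# Sketch of the Huynh-Feldt df correction: multiply both within-subject
# degrees of freedom by epsilon before computing the p-value.
from scipy.stats import f

n_subjects, n_groups, n_rois = 20, 2, 9
eps = 0.77                                           # Huynh-Feldt epsilon

df1 = eps * (n_rois - 1)                             # corrected numerator df
df2 = eps * (n_rois - 1) * (n_subjects - n_groups)   # corrected error df

F_value = 2.5                                        # illustrative F statistic
p = f.sf(F_value, df1, df2)                          # upper-tail p-value
print(round(df1, 1), round(df2, 1), p)               # 6.2 110.9 ...
```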
Previous research has found a decrease in grey matter volume in the lesion projection zone in patients with central vision loss [13][14][15] . Therefore, in order to directly compare our data to these previous results, we investigated grey matter volume in our participant group using the bar ROIs created for the cortical thickness analysis. For this analysis we used the Freesurfer estimated grey matter volume 26 . Previous papers used voxel-based techniques [13][14][15] , which are similar in concept but not entirely identical to our technique. We used Freesurfer instead of a voxel-based technique because we could use the same ROIs that we used for cortical thickness analysis, making comparison straightforward.
We first present the results from the Bar ROIs that represent the upper, middle, and lower visual fields for a given eccentricity. Following this we present the results from the additional "circle" ROIs that are specific to a location along the gyrus or sulcus, to control for differences in sulcus/gyrus surface area ratios between groups in a particular ROI. We then present the results of an analysis of cortical thickness relative to the scotoma border. Finally we conclude with the results of the Freesurfer volume based analysis to align our results with previous volume based experiments [13][14][15] .
Results
Cortical Thickness. Bar ROI. We performed a two-way mixed-measures ANOVA on data from the bar ROIs with a between-subjects factor of group (2 levels) and a within-subjects factor of ROI (ROIs from Fig. 2B; 9 levels). These data violated the repeated measures ANOVA assumption of sphericity, with a Huynh-Feldt Epsilon = 0.77. The data are reported with the corrected degrees of freedom from the Huynh-Feldt Epsilon, an approach appropriate with Epsilon greater than 0.75 27.

Circle ROI -crown of gyrus. We performed a two-way mixed-measures ANOVA on data from the crown of the gyrus ROIs with a between-subjects factor of group (2 levels) and a within-subjects factor of ROI (ROIs from Fig. 2C, blue; 9 levels). These data violated the repeated measures ANOVA assumption of sphericity, with a Huynh-Feldt Epsilon = 0.84. The data are reported with the corrected degrees of freedom. There was no main effect of group (F(1,18) = 0.02, p = 0.90), but there was a main effect of ROI (F(6.8,121.5) = 18.07, p < 0.001), and a significant interaction of group by ROI (F(6.8,121.5) = 2.79, p = 0.011). To follow up this significant interaction, we performed independent sample t-tests. We found that the 2nd ROI along the crown of the gyrus (corresponding approximately to 1.34 degrees eccentricity) was significantly thinner in the MD group compared to the control group (T(18) = −2.41, p = 0.03). The 5th crown of the gyrus ROI (corresponding approximately to 7.3 degrees eccentricity) was significantly thicker in the MD group compared to the control group (T(18) = 2.58, p = 0.02). No other post hoc t-test with this set of ROIs was significant (Fig. 3B).
Circle ROI -bank of the sulcus. We performed a two-way mixed-measures ANOVA on data from the bank of the sulcus ROIs (from Fig. 2D) with a between-subjects factor of group (2 levels) and within-subjects factor of ROI (9 levels). These data violated the repeated measures ANOVA assumption of sphericity with a Huynh-Feldt Epsilon = 0.86. The data are reported with the corrected degrees of freedom. There was no main effect of group (F(1,18) = 0.216, p = 0.648), but there was a main effect of ROI (F(6.8,123.1) = 15.72, p < 0.001). There was no significant interaction of group by ROI (F(6.8,123.1) = 1.251, p = 0.28) (Fig. 3C).
Circle ROI -depth of the sulcus. We performed a two-way mixed-measures ANOVA on data from the depth of the sulcus ROIs (from Fig. 2C, Magenta colored regions) with a between-subjects factor of group (2 levels) and within-subjects factor of ROI (9 levels). These data violated the repeated measures ANOVA assumption of sphericity with a Huynh-Feldt Epsilon = 0.84. The data are reported with the corrected degrees of freedom. There was no main effect of group (F(1,18) = 1.32, p = 0.27), but there was a main effect of ROI (F(6.7,121.2) = 5.24, p < 0.001). There was no significant interaction of group by ROI (F(6.7,121.2) = 1.10, p = 0.37) (Fig. 3D).
Cortical thickness near the scotoma border. In order to investigate how changes in cortical thickness relate to the border between the scotoma and the start of spared retinal tissue, we performed a series of analyses that took into account the location of each participant's scotoma border. The participants with central vision loss had scotoma diameters ranging from 8 to 14 degrees. Because these scotomas were centered on the fovea, this corresponds to border eccentricities of 4-7 degrees. Based on these eccentricities, the V1 cortical representation of the scotoma border lay in either the 4th or 5th ROI in each MD participant. In order to examine how cortical thickness changed relative to the scotoma border, we aligned the data from Fig. 3 to the scotoma border ROI for each subject and their matched control. We included 5 regions in total for this analysis: the two closest centrally responsive regions (Border − 2, Border − 1), the border region, and the two closest peripherally responsive regions (Border + 1, Border + 2). For each set of ROIs in Fig. 2 we conducted a two-way mixed-measures ANOVA with factors of group (2 levels) and ROI (5 levels). The results are presented in Fig. 4.
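The border alignment amounts to re-indexing each subject's 9 ROI values around their border ROI (index 3 or 4 when zero-based) and keeping the 5 surrounding regions. A small sketch with placeholder values:

```python
# Sketch of aligning per-subject ROI values to the scotoma border
# (Border-2 .. Border+2). Array contents are placeholders.
import numpy as np

def align_to_border(thickness, border_idx):
    """thickness: 9 values, central (index 0) to peripheral (index 8);
    border_idx: 3 or 4 (the 4th or 5th ROI)."""
    return thickness[border_idx - 2 : border_idx + 3]   # 5 aligned regions

subj = np.array([2.1, 2.0, 1.9, 1.9, 2.0, 2.1, 2.0, 1.9, 1.8])
print(align_to_border(subj, border_idx=4))   # Border-2 .. Border+2
```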
Bar ROIs aligned based on the scotoma border.
We performed a two-way mixed-measures ANOVA on data from the bar ROIs aligned to the scotoma border, with a between-subjects factor of group (2 levels) and a within-subjects factor of ROI (ROIs from Fig. 2B; 5 levels).
Crown of gyrus circle ROIs aligned based on the scotoma border.
We performed a two-way mixed-measures ANOVA on data from the crown of the gyrus ROIs (from Fig. 2C, blue) aligned to the scotoma border, with a between-subjects factor of group (2 levels) and a within-subjects factor of ROI (5 levels). These data violated the repeated measures ANOVA assumption of sphericity, with a Huynh-Feldt Epsilon = 0.85. The data are reported with the corrected degrees of freedom. There was no main effect of group (F(1,4) = 0.00, p = 0.96), but there was a main effect of ROI (F(3.4,61.0) = 16.87, p < 0.001) and a significant interaction of group and ROI (F(3.4,61.0) = 3.20, p = 0.03). To follow up this significant interaction, we performed independent sample t-tests. The peripheral region directly adjacent to the scotoma border (Border + 1) was significantly thicker in the MD group compared to the control group (T(18) = 2.42, p = 0.03). No other post hoc t-test with this set of ROIs was significant (Fig. 4B).
Bank of the sulcus circle ROIs aligned based on the scotoma border.
We performed a two-way mixed-measures ANOVA on data from the bank of the sulcus ROIs (from Fig. 2D) aligned to the scotoma border, with a between-subjects factor of group (2 levels) and a within-subjects factor of ROI (5 levels). These data violated the repeated measures ANOVA assumption of sphericity, with a Huynh-Feldt Epsilon = 0.80. The data are reported with the corrected degrees of freedom. There was no main effect of group (F(1,4) = 0.76, p = 0.40), but there was a main effect of ROI (F(3.2,57.3) = 5.03, p = 0.003). There was no significant interaction of group and ROI (F(3.2,57.3) = 1.85, p = 0.14) (Fig. 4C).
Depth of the sulcus circle ROIs aligned based on the scotoma border.
We performed a two-way mixed-measures ANOVA on data from the depth of the sulcus ROIs (from Fig. 2C, magenta) aligned to the scotoma border, with a between-subjects factor of group (2 levels) and a within-subjects factor of ROI (5 levels). These data did not violate the repeated measures ANOVA assumption of sphericity. There was no main effect of group.

Although not significant, Fig. 5 shows that MD participants exhibited decreased grey matter volume in centrally responsive V1, consistent with the significant interaction of group by ROI. This is in line with previous work using grey matter volume 13,15, and provides further support that the participants enrolled in the current study have similar anatomy to participants in previous studies.
Discussion
To our knowledge, these data are the first to suggest that central vision loss is associated with both increases and decreases in primary visual cortical thickness, in the same group of participants. As compared to matched controls, participants with MD had thinner cortex in central V1 areas no longer recruited due to retinal loss as well as thicker cortex in peripheral V1 areas corresponding to spared peripheral vision (Figs 3A,B, 4A,B). Regions whose cortical thickness increased or decreased (relative to controls) mirrored the increased or decreased behavioral importance of the corresponding visual field. Further, these data suggest the increase in cortical thickness preferentially occurred near the border between spared retina and damaged retina (Fig. 4A,B). MD, by definition, leads to impairment of central vision, and decreased reliance on information from that part of the visual field. As a partial compensation, patients may increase their dependence on peripheral vision. In fact, many individuals develop specific "preferred retinal loci" which they learn to use for tasks involving fine scale vision, such as reading 28 . The structural differences we observe here may underlie compensatory improvements in peripheral vision after central vision loss. This is a novel study that examined structural plasticity in human V1 as a function of eccentricity using surface-based morphometry. Results from this study will be important for researchers aiming to restore loss of vision due to retinal diseases, because vision involves levels of processing beyond the retina. Plasticity in V1 and other cortical areas following central vision loss will need to be understood to better determine how to restore vision. In addition, the strategy used here, segmenting areas of cortex that have typically been treated as homogeneous 29 , can provide valuable insight into disease states, as well as basic anatomical properties of the cortex that have previously been overlooked.
Although we did observe differences in cortical thickness between the MD and control groups in some ROIs, several regions were not different between the groups. From Fig. 3, one can observe that cortical thickness did not significantly differ between groups at the 1st, 4th, and 7th through 9th regions in any of the sets of ROIs defined in Fig. 2. The 1st ROI is located at the foveal confluence, a region of cortex that is notoriously difficult to separate into different visual areas (V1, V2, V3) 30; it may therefore have included visual areas other than V1. Further, given that it is located at the occipital pole, the geometry of the cortical folds at that location may differ from the rest of the sulcus, and because cortical thickness does change between the sulcus and gyrus, this difference in folding may influence the plasticity of cortical thickness there. The 4th ROI lies near the border between the representations of the scotoma and healthy retina, and therefore represents a transition between lost vision and increased vision recruitment. The 7th through 9th ROIs correspond to regions in the far periphery (with means of 25.5, 40, and 63.3 degrees eccentricity, respectively), an area of visual space that neither control nor MD participants are likely to use frequently. Thus, the visual space used most differently by the MD and control groups corresponded to the regions with the strongest observed cortical thickness differences. Further evidence for this is found in Fig. 4, which shows that cortical thickness changes are selective for the regions nearest the scotoma border.
It is unclear why differences between groups may be present at the crown of the gyrus, but not in the depth of the sulcus. One possibility is that this is due to inhomogeneity of scotoma shape on the retina. On average, our MD participants had a vertical scotoma of 9.5 degrees in diameter and a horizontal scotoma of 12.7 degrees in diameter. The depth of the sulcus corresponds to the horizontal meridian, while the crown of the gyrus corresponds to the vertical meridian. Thus the MD participants were more impaired in the visual field associated with the depth of the sulcus, and this may have contributed to our not finding significantly thicker-than-control cortex in that region. Another possible explanation is that the depth of the sulcus may be under more rigid anatomical constraints to remain thin, due to the patterns of gyrification in the cortex 31.
Previous studies have investigated how the morphometry of visual cortex changes with experience. Plasticity is maximized if sensory loss occurs prior to the critical period 32 . Studies of the early blind have shown an increase in cortical thickness in occipital cortex compared to both sighted controls and the late blind [33][34][35] . However, our data suggest that the loss of a specific portion of the visual field causes eccentricity-specific morphometric changes in V1.
Previous research into anatomical changes associated with central vision loss has reported decreased grey matter density and decreased grey matter volume in typically centrally responsive V1 13,15, consistent with the grey matter volume analysis shown in Fig. 5. These results are also consistent with the data presented in Fig. 3, which demonstrate that central visual cortex is thicker in controls than in MD patients. Previous studies using grey matter density and volume measures, as in our data in Fig. 5, did not observe an increase in grey matter density in peripherally responsive V1. The discrepancy between the cortical thickness effects observed in cortical areas outside the scotoma border (Figs 3 and 4) and the lack of an effect using grey matter volume measurements (Fig. 5, and previous data) is likely due to the fact that cortical thickness assesses distinct aspects of cortical morphometry compared to grey matter density/volume measurements 16. Thus our use of a cortical thickness metric, coupled with our very detailed ROI approach examining all of V1, has made our study more sensitive than previous studies to anatomical differences as a function of eccentricity.
It remains unclear what cellular mechanisms might underlie the differences in cortical thickness reported here. One possibility is that changes in the long-range connections into V1, resulting from central vision loss, lead to changes in cortical thickness. There is evidence for eccentricity-dependent effects of attention in V1 36. These attentional inputs might be modified after increased or decreased use of portions of the visual field. Future studies should test this hypothesis through the use of diffusion-weighted imaging or functional connectivity MRI in humans, or tract-tracing studies in foveate primates.
Cortical thickness modifications in V1 may also be driven by mechanisms at the local circuit level. Animal models have shown a decrease in interneuron axon density in the lesion projection zone in V1 following binocular foveal lesions 11. The decreased thickness reported here, and the decreased volume previously reported [13][14][15], might reflect this decreased interneuron axonal density. Animal models have also shown an increase in horizontal projections at the border of the lesion projection zone for both excitatory and inhibitory neurons 11,12. Further, central and peripheral V1 might also have different cellular architecture: central V1 has higher cellular and neuronal densities than any other part of non-human primate cortex, including peripheral V1 37, and the dendritic structure of interneurons varies as a function of eccentricity in V1 38. The loss of central vision in participants with MD and the resulting compensatory visual strategies may have an impact on cellular structure and organization in V1. Future studies should use postmortem human tissue or animal models to investigate the relationship between cortical thickness changes in V1 and the underlying cellular structure.
Data from the current study show a significant decrease in cortical thickness from centrally responsive to peripherally responsive regions of V1 in our normally sighted controls. These data are consistent with the left portion of Fig. 2A in a recent report 39 , which does not focus on or statistically test this result. These findings suggest a structural difference between centrally and peripherally responsive regions of V1. More work is needed to investigate this basic anatomical finding in V1.
The present study had several limitations. We did not measure individual subjects' retinotopy in this experiment. Retinotopy is difficult to measure in patients with MD, but is possible 40. However, the retinotopy of early visual areas, especially V1, is stereotyped, so anatomy is an excellent predictor of retinotopy 25; therefore, we feel confident that our estimates of retinotopy are generally appropriate. The current work is somewhat limited by the cross-sectional design, although we closely matched the participants on gender, education, and age. A longitudinal study would be necessary to confirm that visual experience caused (rather than is merely correlated with) the differences in cortical thickness we observed. However, a longitudinal study would be extremely difficult to perform, as it would require identifying MD participants who will eventually develop complete bilateral central scotomas early in their disease. Future work should examine the causal relationship between changes in vision and V1 cortical thickness. Finally, the present study included a limited number of participants, due to our very selective inclusion criteria. We studied only a subset of patients with macular degeneration: to be eligible for this analysis, subjects had to have a central scotoma in both eyes, which rules out many possible participants with partial macular vision in at least one eye. Obtaining data from such a select group is difficult and time consuming, and required a strong relationship with our colleagues who see patients. Future work examining larger populations, perhaps with data shared across sites, may provide enough power to tease apart possible differences between different forms of MD.
To facilitate other researchers' contributions to examining these types of questions in their own populations, we have made the Matlab code and the regions of interest that were used in this analysis available online at http://labs.uab.edu/visscher/resources/software-protocols. These regions of interest are useful to anyone studying V1, as they span central to peripheral vision, and have been used in previously published work 41.
To conclude, these results suggest use-dependent modifications in V1 following central vision loss: both increases in cortical thickness following increased use, and decreases in cortical thickness following decreased use. Compensatory visual strategies (e.g., peripheral vision use) in patients with MD may contribute to cortical modifications in V1. These findings are important to the neuroscience and neuroplasticity fields, as they imply that robust changes in cortical thickness, both increases and decreases, are possible in the adult brain. This work improves our understanding of the scope of adult neuroplasticity.
"year": 2016,
"sha1": "6213759293c3278785cc960f5c42394b877fcfbd",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/srep23268.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6213759293c3278785cc960f5c42394b877fcfbd",
"s2fieldsofstudy": [
"Biology",
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Understanding DNA replication by the bacteriophage T4 replisome
The T4 replisome has provided a unique opportunity to investigate the intricacies of DNA replication. We present a comprehensive review of this system focusing on the following: its 8-protein composition, their individual and synergistic activities, and assembly in vitro and in vivo into a replisome capable of coordinated leading/lagging strand DNA synthesis. We conclude with a brief comparison with other replisomes with emphasis on how coordinated DNA replication is achieved.
Enterobacteria phage T4 infects Escherichia coli bacteria. Its genome is 170 kb long (1) and encodes 289 proteins. The DNA genome, packaged within an icosahedral head, passes through the hollow tail into the bacterial cell during infection. The rate of DNA replication in the cell is 400-700 nucleotides s⁻¹ (2), with a mutation rate per base pair of only 7 × 10⁻⁸ (3). This efficient and highly accurate replication system is the subject of this review.
In vitro reconstitution of a T4 replication system capable of leading- and lagging-strand synthesis on a duplex DNA substrate requires a minimum of seven proteins: the DNA polymerase (gp43); the ssDNA-binding protein (gp32); the clamp loader (gp44/62); the clamp (gp45); the helicase (gp41); and the primase (gp61). The helicase loader (gp59) accelerates the reconstitution of the replication system but is not essential once the system is assembled. The numbers in parentheses are the T4 gene designations. The pioneering biochemistry by the Alberts group, in parallel with the Nossal laboratory, established the basis for functional and structural characterization of the system (4-7). An attractive reason for studying the T4 system is the strong similarity between it and the less accessible eukaryotic replication complexes (8).
Polymerase (gp43)
A first step in understanding the contributions of individual proteins to the functional properties of the complex is the elucidation of their individual properties. Kinetic schemes for the 5′→3′ polymerase and 3′→5′ exonuclease activities of gp43 were determined by pre-steady-state kinetic methods and fit by computer simulation (9). The minimal kinetic scheme for the action of gp43 on a model duplex is depicted in Fig. 1.
Incorporation of a single correct base follows a minimal five-step kinetic sequence with an observed rate constant for single-nucleotide incorporation of >400 s⁻¹ assigned to the chemical step, close to that observed in the cell. Thus, the accessory proteins do not increase the rate of the polymerization reaction per se. The dissociation rate of gp43 from the duplex sets the observed steady-state velocity. The polymerization process is further distinguished by a high degree of dNTP-binding discrimination, up to 300-fold, in the ternary reactive complex.
gp43 exhibits an active 3′→5′ exonuclease cleavage rate of 100 s⁻¹. Note that a kinetic barrier protects a properly base-paired 3′ terminus from excision, as illustrated by a partitioning between the polymerase site and the exonuclease site biased 23:1 in favor of the polymerase site for a correct base pair. In the case of a mismatched 3′ terminus, this partitioning shifts to 5:1 in favor of the exonuclease site, markedly favoring excision. The distance between the two active sites has been estimated as 2-3 nucleotides (10). The 2.8-Å-resolution structure of the bacteriophage RB69 gp43 (63% identical in amino acid sequence to its homolog from T4 phage), as well as a 2.6-Å structure with primer/template and nucleotide bound, are central for examining the protein-protein interactions that maintain the replisome (11). A ribbon representation of the enzyme showing gp43 and its active sites can be found in supplemental Fig. S1. The palm domain contains the three conserved carboxylates (Asp-411, Asp-621, and Asp-623) implicated in catalyzing the nucleotidyl transfer reaction. Conversion of Asp-219 in the exonuclease domain to Ala-219 generates an enzyme that is devoid of exonuclease activity but retains unchanged polymerase activity (12). The distance between the polymerase and exonuclease sites in the crystal structure can be spanned by a 4-base oligonucleotide or a three-nucleotide duplex DNA (13).
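As a back-of-the-envelope illustration (our arithmetic, using only the partitioning ratios quoted above): a correct 3′ terminus is committed to extension with probability 23/24, a mismatched one with probability 1/6, so partitioning alone disfavors fixation of a mismatch roughly 6-fold, multiplying the ~300-fold discrimination already present at the dNTP-binding step.

```latex
P(\mathrm{extend}\mid\mathrm{correct}) = \tfrac{23}{24} \approx 0.96,
\qquad
P(\mathrm{extend}\mid\mathrm{mismatch}) = \tfrac{1}{6} \approx 0.17,
\qquad
\frac{0.96}{0.17} \approx 6\text{-fold}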
Holoenzyme (HE)
In the absence of accessory proteins, gp43 has limited ability to extend DNA templates (2). Numerous physical studies and processivity assays found that gp44/62, gp45, and gp43 in the presence of ATP form an active replication complex capable of extending large primed circular ssDNA templates (M13 or ΦX174) or polyoligonucleotides (14-18). Experiments on end-blocked linear primer/template substrates established that the core HE consists of a complex of gp45 and gp43, with gp44/62 acting catalytically to load gp45 but playing no role in its unloading (19-21). The molecular basis for the increased processivity of the HE was revealed by the structure of gp45, which, like other processivity factors for the bacterial and eukaryotic DNA polymerases, is a highly symmetrical, three-subunit, ring-shaped structure through which duplex DNA can be threaded (supplemental Fig. S2) (22).
Investigations of the solution structure of gp45 by various physical methods found a cooperative assembly of the monomers into an open complex composed of two closed subunit interfaces, with the third subunit interface separated by a distance of 35-38 Å (23). This intriguing finding was further scrutinized by time-resolved fluorescence spectroscopy of gp45 labeled across its three subunit interfaces with a pair of dyes capable of FRET. gp45 was found to exist in two forms in solution, with 67% in a closed state and 33% with a gap of 42 Å between two subunits (24). The gap is sufficiently large to permit clamp loading to form a functional replication complex without gp44/62 (25).
Interactions between gp45 and gp43 were first uncovered by deleting the last six C-terminal amino acids of gp43. This mutant, which retained all the kinetic parameters of the parent, does not form an HE (26). A solution model of the HE bound to DNA (supplemental Fig. S3) (27) was created through studies with the following: (i) a fluorescently labeled peptide based on the gp43 C-terminal residues as well as an analogous peptide cross-linker; (ii) FRET-based stopped-flow measurements (gp45 was labeled in multiple positions on opposite sides of the subunit interface) that tracked the kinetics of HE formation (28,29); and (iii) a crystal structure of the C-terminal peptide bound to gp45 (13). The model shows the C terminus of gp43 inserted into the open subunit interface of gp45.
Assembly/disassembly of the HE
Before discussing how gp44/62 solves the topological problem of opening and closing gp45 on DNA, it is instructive to view the structure of gp44/62. The general organization of the clamp loader consists of one copy of gp62 and four copies of gp44. The architecture of a gp44/62·gp45·DNA complex with gp44/62 bound to an open gp45 has been solved by X-ray crystallography (supplemental Fig. S4) (30).
Germane to integrating this structure with function is the proposed reaction cycle for loading gp45 by gp44/62, followed by the latter's dissociation from the DNA. With ATP bound to the four subunits of gp44, gp44/62 binds and opens gp45 and loads it onto the duplex (Fig. 2, steps 1-4). ATP hydrolysis is associated with both loading and departure of gp44/62 but not with gp43 binding (31). Hydrolysis of ATP promotes closure of gp45 and ejection of gp44/62 (Fig. 2, steps 4-10). All of these steps are associated with large conformational changes in the four gp44 subunits (supplemental Fig. S5) (30, 32). The stoichiometry of ATP hydrolysis by gp44/62 has been measured by pulse-chase kinetics; however, the numbers vary from 2 to 4 ATPs per turnover cycle, apparently arising from differences in the quenching procedures (33-35).
Besides loading gp45 onto duplex DNA, gp44/62 acts as a chaperone to escort gp43 to its binding site on the DNA duplex, consistent with the finding that gp43 binds to the same face of gp45 as gp44/62 (36). Moreover, the formation of the HE can occur through one of four pathways as illustrated in supplemental Fig. S6 (37,38).
The dissociation of the HE from the DNA duplex is governed by the dissociation of gp45 subunits into monomers at a rate of (3.3 ± 0.6) × 10⁻³ s⁻¹, as measured using a FRET signal engineered across the subunit interface (39). Unexpectedly, gp43 in the HE was found to exchange with unbound gp43 in solution (40). Neither ATP hydrolysis nor the presence of gp44/62 was required for the exchange. Two possible models for the exchange process are shown in supplemental Fig. S7 (40).
gp32 is often considered part of the HE because it stimulates replisome processivity and the rate at which gp43 traverses helical regions of the DNA by melting out adventitious secondary structure (41). It is essential for leading- and lagging-strand synthesis in vitro (42). Its crystal structure revealed an ssDNA-binding cleft with a positively charged surface parallel to a series of hydrophobic pockets, conferring sequence independence and high discrimination against duplex DNA (43). Consequently, the protein may slide along the ssDNA, although its cooperative binding favors complete coverage of the ssDNA.
The primosome (helicase gp41/primase gp61)
The gp41·gp61 complex exhibits both helicase and primase activities (44, 45). The preferred substrate for gp41 is a replication fork with single-stranded extensions of >29 nucleotides on both strands of the fork duplex, consistent with gp41 interacting with both the leading and lagging strands (46). Unwinding requires ATP or GTP hydrolysis and proceeds at a rate of 30 bp/s (47). gp61 stimulates gp41 unwinding less than 2-fold by facilitating its binding to the ssDNA (46). At physiological concentrations, gp41 exists primarily as a dimer, but the binding of ATP/GTP or ATPγS/GTPγS drives the assembly of the dimers into an asymmetric hexameric ring complex (supplemental Fig. S8) (48). Electron microscopy images of gp41 support open and closed forms of the ring (49). gp41 is highly processive in the presence of the six other replication proteins (excluding gp59), with an association half-life of ~11 min (50), sufficiently long to accomplish the replication of the entire T4 genome, implying that gp41 also has an accelerated rate in the presence of the other replication proteins. gp61 generates the pentameric ribonucleotide primers required to initiate Okazaki fragment synthesis (51, 52). The biologically relevant primers with the sequence pppApC(pN)₃ are the products of the gp41·gp61 complex; in the absence of gp41, gp61 can generate dimers as well as products longer than five nucleotides (52). The primase activity is greatly stimulated by complexation with gp41. In fact, gp41·gp61 complexes on templates coated with gp32 exhibit a physiologically relevant priming rate of about 1 primer s⁻¹, sufficient to supply primers for lagging-strand synthesis given the rate of replication (53). The stoichiometry of gp61 binding to a gp41 hexamer has been reported as 1:1 (54), 6:1 (53, 55), or 3:1. The last measurement was done with single-molecule photobleaching and is the most definitive and reflective of physiological conditions. Moreover, the variability of this stoichiometry probably arises from the dissociative rather than processive nature of gp61 (56), and it is likely that only one gp61 is scanning the ssDNA to synthesize a primer at a given time. The primosome synthesizes far more primers than needed, with only ~25% being utilized for Okazaki fragment synthesis (57).
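A quick consistency check (our arithmetic) on the claim that an ~11-min gp41 association half-life suffices for the whole genome, using the 170-kb genome size and the 400-700 nt/s fork rates quoted earlier:

```latex
t_{\mathrm{replication}} \approx \frac{1.7\times 10^{5}\ \mathrm{nt}}{400\text{--}700\ \mathrm{nt\,s^{-1}}}
\approx 240\text{--}425\ \mathrm{s} \approx 4\text{--}7\ \mathrm{min} \;<\; 11\ \mathrm{min}
```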
Assembling the replisome
Replication initiates from R- and D-loops for origin-dependent and recombination-dependent replication, respectively (58). Origins of replication facilitate the formation of RNA primers within the R-loop to start leading-strand DNA synthesis, implying that it is primed by the gp41·gp61 complex or coupled with gp61-dependent lagging-strand synthesis. Recombination-dependent replication begins with a strand-invasion reaction that creates D-loops, with the invading 3′ end of the DNA being used to prime leading-strand synthesis following loading of a gp41·gp61 complex and gp59 on the displaced strand of the D-loop. gp41 and gp59 form a 1:1 hexameric complex with the lagging strand of the replication fork passing through the center of the ring-shaped helicase (47). Loading of gp41 onto gp32-coated ssDNA exposed during replication initiation is facilitated by gp59, which destabilizes the interaction between gp32 and ssDNA through direct contact with gp32 (59). At least two to three gp32 proteins (binding-site size of 8 nucleotides each) must be released for loading of gp41 (binding-site size of 12-20 nucleotides per hexamer) (60, 61), and indeed, gp32 promotes gp59 oligomerization (62). In turn, direct interactions between the C-terminal peptides of gp59 and those of gp41 promote the latter's assembly (63). A plausible mechanism revealed by cross-linking experiments is depicted in supplemental Fig. S9. The hexameric nature of gp59 when bound to forked DNA was definitively confirmed by single-molecule photobleaching (64). The stepwise assembly of the primosome was then traced by single-molecule FRET microscopy, leading to the sequence illustrated in Fig. 3 (65).
The leading-strand HE can readily assemble on the DNA fork as the primer strand becomes available. What prevents premature synthesis before the primosome is in place? Mutations in gp59 that have a deleterious effect on origin-dependent DNA replication in vivo suggest a gatekeeper role for the protein (66). In vitro studies found a complex formed between gp59 and the leading-strand gp43, whose structure was modeled (supplemental Fig. S10) (67). Both the synthesis and exonuclease activities of gp43 are inhibited in this complex. Single-molecule FRET microscopy showed that the complex was "unlocked" by the addition of gp41 followed by gp61 to form a functional primosome and subsequently a fully active replisome (supplemental Fig. S11) (68). The "unlocking" stems from the loss of gp59 contacts with the replication fork (69), leading to its loss from the active replisome (68). We have summarized the known interactions between the proteins within the replisome in supplemental Fig. S12.
The in vitro gp41 unwinding rate is ~10-fold less than the replication rates observed in vivo and in vitro. Likewise, an independent HE is very inefficient at strand-displacement synthesis, proceeding at a rate of about 1 nt/s (70). However, no physical interaction was found between the two to account for their rate of replication when both are present at a replication fork, leading to the postulate of a functional coupling that depends on interactions modulated by the DNA replication fork (71). In this model, the trailing HE traps the ssDNA product of the gp41 unwinding activity, preventing the separated strands from reannealing and causing back-slippage of gp41, thereby increasing the unwinding rate. From investigations with magnetic tweezers, a collaborative model was postulated in which gp41 prevents the HE from stalling and, in turn, the HE blocks gp41 slippage, so both activities are stimulated through the DNA fork (key data and instrumentation are shown in supplemental Fig. S13) (70, 72). Consequently, the coupled replication by gp41 and gp43 manifests a rate of 300-400 bp/s, in accord with the rates for replication fork movement noted above.
Coordinated leading- and lagging-strand replication
DNA synthesis in vivo is tightly coordinated with the synthesis of both strands completed simultaneously despite the continuous replication of the leading strand and the discontinuous replication of the lagging strand. How is this achieved?
As first steps in understanding leading- and lagging-strand DNA synthesis, reconstitution in vitro of an active replisome was achieved on a model replication fork or a minicircle substrate (42, 73). The latter allows quantitation of the synthesis of each strand (supplemental Fig. S14). Synthesis initiated with all eight proteins was tightly coordinated. With a two-hybrid system based on the phage CI repressor, a homodimerization domain was established in the 400-600-amino acid region of gp43 (42) and then narrowed by cross-linking specifically to Cys-507 (74). The physical tethering of the two gp43s in a replisome necessitates the formation of a replication loop (5), visually observed in electron micrographs (75).
Experiments showed that coordinated synthesis by a reconstituted replisome was highly resistant to dilution provided the buffer was supplemented with gp45, gp44/62, and gp32 (76). More sensitive trapping experiments additionally revealed that gp61 dissociates as well (56). As noted earlier, in the presence of excess gp43 in solution, the gp43s in the replisome will exchange without impeding the processivity of the replisome (40). Thus, only gp41 and the gp43s remain associated with the replisome for lifetimes sufficiently long to permit processive duplication of the entire T4 genome.
Central to the synthesis of Okazaki fragments on the lagging strand is the need to accommodate gp41 and gp61 translocating in opposite directions (5′→3′ unwinding versus 3′→5′ primer synthesis). Three possible scenarios can be visualized (supplemental Fig. S15): pausing both gp41 and gp61; disassembly of the primosome to synthesize a primer, forming a pppRNA·gp61 complex while gp41 continues unwinding; or having the primosome remain intact, forming a priming loop with the unwound DNA. In magnetic tweezers experiments, two mechanisms were observed: disassembly of the primosome to form pppRNA·gp61 complexes, and priming-loop formation with an intact primosome. No pausing was found (77). Primosome disassembly during primer synthesis has important ramifications for discontinuous lagging-strand synthesis.
Lagging-strand synthesis requires transient release of gp43 from the DNA template upon Okazaki fragment completion, yet gp43 remains associated with the replisome during recycling to initiate a new fragment (76, 78). Recycling can be triggered by the lagging-strand gp43 colliding with the end of the previous Okazaki fragment, which accelerates the transient dissociation, i.e., the collision mechanism (5, 79, 80). However, the size and number of the Okazaki fragments can be manipulated by varying the activity of gp61, the gp45 and gp44/62 levels, and the rate of synthesis by the lagging-strand gp43 to create a pattern of gapped Okazaki fragments, i.e., the signaling mechanism (57, 81). The cumulative events of primer synthesis and gp43 recycling to initiate another Okazaki fragment compete with the advance of the leading-strand HE to increase the separation of the two HEs and potentially disrupt coordinated replication by the replisome. How then is replication coordination maintained?

[Figure 3. Assembly mechanism of the T4 lagging-strand primosome on forked DNA. The gp32 protein binds to forked DNA with either subsequent or concurrent binding of gp59. Subsequently, gp41 binds to gp59 and is loaded onto the forked DNA in the presence of nucleotide. ATP hydrolysis is required for gp41 to displace gp32 and gp59, either directly or by translocation. The gp61 protein then binds and interacts closely with gp41 on forked DNA. In the absence of gp32 and gp59, both gp41 and gp61 bind to forked DNA.]
Recognizing that only 20-30% of the primers produced by gp61 are actually utilized for Okazaki fragment synthesis, many unused primers in the form of pppRNA·gp61 complexes could build up ahead of the lagging-strand HE. Indeed, collision with such complexes was found to trigger early termination of Okazaki fragment synthesis (82). Consequently, the trigger for dissociation of gp43 and its recycling is always a collision, either with the previous Okazaki fragment or with an unused pppRNA·gp61 complex. These signaling and collision mechanisms are illustrated in Fig. 4 (82) and accommodate the above factors that affect the signaling pathway.
A model derived from all the available kinetic data (82) reproduces the observed distribution of Okazaki fragments (83). Several important conclusions may be drawn from the modeling. First, all Okazaki fragments originate from primers synthesized by the looping mechanism. Second, the interplay between the recycling gp43 binding to an already available pppRNA·gp45·gp44/62 complex (<1 s) and a median time of ~6 s for the formation of a pppRNA·gp45·gp44/62 complex provides a mechanism for a semi-random distribution of Okazaki fragment lengths in our simulation. Finally, the lagging-strand gp43 will recycle to bind newly synthesized primers a mean distance of 400 nt from gp41 as a result of the brief lifetime of a naked primer and the slightly longer lifetime of a pppRNA·gp45·gp44/62 complex. Thus, signaling alone is sufficient to keep leading- and lagging-strand synthesis coordinated even though the two gp43s are moving at the same mean velocity.
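A deliberately crude Monte Carlo sketch (not the published kinetic model of refs. 82-83; all simplifications below are ours) shows how the quoted numbers alone, ~400 nt/s fork movement, ~1 primer/s priming, and ~25% primer utilization, already yield kilobase-scale fragments with a semi-random length distribution:

```python
import random

FORK_RATE = 400      # nt/s leading-fork speed (from the rates quoted above)
PRIMING_RATE = 1.0   # primers synthesized per second (from the text)
P_UTILIZED = 0.25    # ~25% of primers actually start Okazaki fragments
SIM_TIME = 2000.0    # seconds of fork movement to simulate

def simulate_fragments(seed=0):
    rng = random.Random(seed)
    t, pos, primer_positions = 0.0, 0.0, []
    # lay down primers behind the advancing fork as a Poisson process
    while t < SIM_TIME:
        dt = rng.expovariate(PRIMING_RATE)
        t += dt
        pos += dt * FORK_RATE
        primer_positions.append(pos)
    # a fragment runs from its initiating (utilized) primer back to the
    # start of the previous fragment, so its length is the spacing between
    # successive utilized primers (collision/signaling collapsed together)
    fragments, last_start = [], None
    for p in primer_positions:
        if rng.random() < P_UTILIZED:
            if last_start is not None:
                fragments.append(p - last_start)
            last_start = p
    return fragments

frags = simulate_fragments()
print(f"n = {len(frags)}, mean length = {sum(frags)/len(frags):.0f} nt")
```

With these numbers the mean fragment length comes out near FORK_RATE/(PRIMING_RATE × P_UTILIZED) ≈ 1.6 kb, in the kilobase range typical of prokaryotic Okazaki fragments, with an approximately exponential spread.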
Comparison with three DNA replication systems
We conclude with a comparison of three DNA replication systems, focusing on how they coordinate leading- and lagging-strand synthesis.
The bacteriophage T7 replication system has been the subject of a very recent comprehensive review (84). The replisome and its constituent proteins are depicted in supplemental Fig. S16. The slowest event in coordinated replication stems from the collection of steps involved in primer synthesis (a short tetraribonucleotide), requiring some 6-12 s. In contrast, the polymerase (gp5), with its bound processivity factor thioredoxin, extends primers at a rate of 114-122 bp/s. Importantly, this is also the rate of duplex unwinding by the helicase (gp4) in the gp4·gp5 complex, accelerated some 13-fold over that of gp4 alone. Recall that a similar phenomenon was noted for the T4 helicase in a gp41·HE complex (72).
This large difference between the rate of priming and the rate of advance of the gp4·gp5 complex would result in a loss of coordination between the syntheses of the two strands (85). The problem can be resolved by pausing duplex unwinding and leading-strand synthesis, by having the lagging-strand gp5 advance at a faster rate than the leading-strand gp5, or by more frequent gp5 recycling and Okazaki fragment initiation via the signaling mechanism. An initial report found pausing of gp4 and leading-strand synthesis (86); a later report found no pausing but an ~30% faster rate for lagging-strand synthesis versus leading-strand synthesis (87). Given the observations of concomitant primer synthesis and efficient interprotein transfer of the primer to gp5, the faster copying of the lagging-strand template ensures that synthesis of both strands remains coordinated.
The T7 system, however, also exhibits both signaling and collision mechanisms in the synthesis of Okazaki fragments. The signal for premature release of gp5 appears to be primer synthesis followed by replication loop release before completion of the previous Okazaki fragment (88), rather than polymerase release upon collision with unused primer·primase complexes, because the helicase and primase activities are properties of a single polypeptide.
An unusual characteristic is the binding of additional units of gp5 to gp4 (up to six gp5s can theoretically bind to the six C termini of hexameric gp4), which offers a switching mechanism enabling exchange between the synthesizing lagging-strand polymerase and the non-synthesizing polymerase reservoir (89). This exchange could also contribute to the early termination of Okazaki fragments as a signal along with primer synthesis. Both the faster synthesis rate of the lagging-strand gp5 and the added benefit of premature recycling of gp5 via the switching mechanism act together to retain coordination of DNA replication by the T7 replisome.
The Escherichia coli replisome has also been the subject of a recent review (90). The replisome, with its more complex constituent proteins, is exhibited in supplemental Fig. S17. Moreover, the E. coli system features helicase (DnaB) and primase (DnaG) activities residing on separate proteins, like the T4 replisome. The processive coupling of DNA synthesis on both template strands faces the same timing issues noted above. The slowest event surrounds priming of the lagging-strand polymerase. As in T4, the primer synthesis activity of DnaG is accelerated ~5,000-fold by the presence of DnaB, reducing the time for this part of the process to ~0.2 s. Deposition of primers then occurs at the physiological 1-2-s interval, similar to the T4 replisome (91).
The leading-strand polymerase complexed to the β-clamp and in the presence of DnaB shows, in single-molecule experiments, bursts of synthesis at 500-600 nt/s and mean rates of about 350 nt/s independent of DnaG activity (92, 93). Again, the activity of DnaB is stimulated by the presence of the leading-strand HE; by itself, DnaB unwinds duplex DNA at a rate of only 84-86 bp/s (93). Consequently, a displacement of about 350-700 bp between the two polymerases can be imagined as a consequence of each Okazaki fragment synthesized (90).
The question again is how the lagging-strand polymerase is recycled after Okazaki fragment synthesis. Testing for the collision mechanism found it might act 40% of the time (94). However, recent reports contend the polymerase recycling time for the collision mechanism is greater than 2 min and would lead to lengthening of successive Okazaki fragments contrary to observation (95). An additional hypothesis is that the torsional strain generated by a replisome with two interconnected polymerases is responsible for polymerase recycling (96), but a direct measure of the force required to break the connection would bolster this proposal. Consequently, the literature appears to favor a more conventional signaling mechanism (93,97).
The signal that triggers lagging-strand polymerase recycling has been ascribed to the synthesis of a primer (97), consistent with an earlier observation that the frequency of primer synthesis and the efficiency of primer utilization control Okazaki fragment size (98). Pertinent to our discussion, however, is that primer utilization varies from 70 to 95% and that DnaG acts distributively (99), like gp61. This raises the untested possibility that the recycling of the E. coli lagging-strand polymerase might also be triggered by collision with an unused primer or pppRNA·DnaG complex, similar to the T4 system.
Recent single-molecule experiments found the replication velocities of the leading- and lagging-strand polymerases to be equivalent but punctuated with stops/starts that change replication rates (93). Consequently, their behavior would not be coordinated but stochastic. Because there is no difference in velocity to compensate for the time required for the various events related to Okazaki fragment synthesis, one can question whether the stochastic rate fluctuations can be coordinated to achieve the DNA replication necessary for a long genome. A further complicating factor is the observation that both in vivo and in vitro studies indicate that two polymerases may function on the lagging strand, which is not taken into account in the single-molecule experiments. So the issue of replisome coordination, or lack thereof, is somewhat unsettled.
Much remains to be done on eukaryotic replisomes, whose constituent proteins are more numerous and complex. For example, there are two distinct, multisubunit polymerases: one for leading-strand (ε) and one for lagging-strand replication (δ). The fact that the primase is both an RNA and a DNA polymerase is another prominent difference. An understanding of the present status of the eukaryotic replication machine and a brief comparison with bacterial replisomes have appeared recently (100). Many questions that have been answered for T4, T7, and E. coli are only now being explored for eukaryotic replisomes. One wonders to what extent similarities will be universal and whether the differences between phage, bacterial, and eukaryotic replisomes are merely evolutionary nuances, with the key features of the mechanistic solution to DNA replication already solved by primitive organisms.
"year": 2017,
"sha1": "9f20a8e6adf1b415a25ebf746b607494ecb840a9",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/292/45/18434.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "991a328222b26e8ef586b824de5bdf26ae98e555",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Fusion of Microgrid Control With Model-Free Reinforcement Learning: Review and Vision
Challenges and opportunities coexist in microgrids as a result of emerging large-scale distributed energy resources (DERs) and advanced control techniques. In this paper, a comprehensive review of microgrid control is presented together with its fusion with model-free reinforcement learning (MFRL). A high-level research map of microgrid control is developed from six distinct perspectives, followed by bottom-level modularized control blocks illustrating the configurations of grid-following (GFL) and grid-forming (GFM) inverters. Then, mainstream MFRL algorithms are introduced with an explanation of how MFRL can be integrated into the existing control framework. Next, application guidelines for MFRL are summarized, with a discussion of three approaches for fusing it with the existing control framework: model identification and parameter tuning, supplementary signal generation, and controller substitution. Finally, the fundamental challenges associated with adopting MFRL in microgrid control and corresponding insights for addressing these concerns are fully discussed.
I. INTRODUCTION
MICROGRIDS are gaining popularity due to their capability for accommodating distributed energy resources (DERs) and forming a self-sufficient system [1]. Microgrids not only contribute to the development of a zero-carbon city but also work as a fundamental component of 'source, network, and load' integrated energy systems. A microgrid may incorporate various types of energy sources and act as an energy router [2], making it possible for the grid to survive severe events while also making the country more energy-resilient and secure [3].
A typical microgrid is composed of various DERs, energy storage systems, and loads that are connected locally as a united controlled entity [4]. In comparison to a traditional synchronous generator-dominated bulk power system, microgrids have a larger penetration of DERs [5]-[6], a smaller system size [7], a greater degree of uncertainty [8], lower system inertia [9]-[10], and a stronger coupling of voltage and frequency (V-f). All these features pose challenges to the design of a microgrid control system. A complete microgrid control system comprises software and hardware that can both perform microgrid functionalities and guarantee stability at the same time [11]. The software is also referred to as the microgrid controller; this paper focuses on its control algorithm design. Existing microgrid controllers are usually designed under a hierarchical framework that includes the primary, secondary, and tertiary controllers [12]. Ref. [13] conducted a thorough review of the hierarchical control of microgrids.
There are also articles providing overviews from the different perspectives of communication interfaces [14], operation modes [15], and control techniques [16]. All these reviews provided excellent summaries and future directions for microgrid control research. As a result, we synthesize these valuable viewpoints and develop a high-level research map of microgrid control based on existing work. Furthermore, modularized control blocks have been developed to dive into the design of the fundamental units of microgrids, grid-following (GFL) and grid-forming (GFM) inverters [17], which is advantageous for microgrid researchers.
Model-free controllers have been used previously in microgrid control because they are easy to set up and independent of the physical model of the microgrid components. For example, fuzzy logic controllers [18]-[19] and adaptive controllers [20]-[21] can adjust their output based on predefined membership functions and adaptation laws, respectively. However, they are difficult to scale up and cannot deal with emerging uncertainties in microgrids. Neural network control [22]-[23] is another type of well-known model-free method. Although a neural network is good at perception and decision-making based on historical data, it lacks exploration capability and cannot adapt to the rapidly changing microgrid environment. Apart from the above-mentioned model-free techniques, reinforcement learning (RL) is a prominent approach concerned with how an intelligent agent learns to solve Markov Decision Processes (MDP) in an environment. If we do not assume knowledge or an exact mathematical model of the environment, RL is referred to as model-free reinforcement learning (MFRL). The RL agent then finds the optimal policy through repeated interactions with the environment [24]-[25]. MFRL is a promising data-driven and model-free approach since it does not depend on an accurate system model and does not need as many labeled datasets as supervised learning. In addition, it has strong exploration capability and can achieve autonomous operation once set up. MFRL is gaining more and more attention due to its successful applications in video games [26], autonomous driving [27], robotics [28], and power systems [29]. Recently, researchers from DeepMind and École Polytechnique Fédérale de Lausanne developed a nonlinear, high-dimensional, RL-based magnetic controller for nuclear fusion [30] and published their work in Nature. This indicates the great potential of implementing MFRL in engineering control, including microgrid control.
For now, MFRL is still under development and needs further study. While some research has been conducted on MFRL for its application in microgrid control, there has been no in-depth review of how MFRL can be integrated into the current microgrid control framework. Hence, this paper performs a comprehensive review of the control framework of microgrids and summarizes how MFRL fuses with the existing control schemes.
Compared with other review papers on microgrid control, the main merits of this manuscript include:
• Plotting of a high-level research map of microgrid control from the perspectives of operation mode, function grouping, timescale, hierarchical structure, communication interface, and control techniques.
• Development of modularized control blocks to dive into the fundamental units of microgrids: GFL and GFM inverters.
• Introduction of the mainstream MFRL algorithms, a summary of MFRL application guidelines, and answers to two important questions: i) 'What kinds of tasks is MFRL suitable for?'; ii) 'How can MFRL be fused with the existing microgrid control framework?'.
• Discussion of the primary challenges associated with adopting MFRL in microgrid control and providing insights for addressing these concerns.
The rest of this paper is organized as follows. Section II introduces the current microgrid control framework, including a high-level research map and modularized control blocks. Section III gives a brief introduction to RL and the mainstream algorithms of MFRL. The characteristics of each algorithm and its application scenarios in microgrid control are also summarized. A full discussion of the fusion of microgrid control with MFRL is presented in Section IV, along with the associated challenges and insights. Section V concludes this paper.
II. MICROGRID CONTROL FRAMEWORK
This section first plots a high-level research map of microgrid control, and then develops modularized control blocks to dive into GFL and GFM inverters.
A. High-level research map of microgrid control

Fig. 1 shows the high-level research map of microgrid control from the perspectives of 1) operation mode, 2) function grouping, 3) timescale, 4) hierarchical structure, 5) communication interface, and 6) control techniques. For each perspective, there are articles providing comprehensive reviews; they are denoted in Fig. 1 for the reader's reference.
1) Operation mode: A microgrid can operate in either grid-connected (GC) mode or islanded (IS) mode depending on its connectivity to the main grid [31]-[32]. In GC mode, the microgrid keeps tracking the phase of the main grid through the phase-locking loop (PLL) and exchanges the mismatched power at the point of common coupling (PCC). In IS mode, the microgrid forms a self-sufficient system based on local generation. Ref. [33] summarized the strategies for the seamless transition between GC and IS modes.
2) Function grouping: To meet the objectives of microgrid operation, the second viewpoint is associated with function grouping, which specifically includes the microgrid controller and the device controller [34]. Grid-level controllers focus on supervisory control functions and grid-interactive control functions, and they are more likely to be software-based and applied to the hardware, while device controllers focus on device-level control functions and local-area control functions, and they are more likely to be applied directly on the hardware (devices and assets).
3) Timescale: The timescale of microgrid control is tightly related to the control structure, so it is discussed in detail together with the hierarchical structure below.
4) Hierarchical structure: The hierarchical control structure is another specific function-grouping perspective that clearly sets up the control targets for all the controllers, with which each level of controller can work independently within its distinct timescale [11].
The primary controller is responsible for voltage and current control of inverters and automatic power sharing among generations while maintaining V-f stability on a timescale of seconds [35]. Indirect current control was used in the early stages [36]-[37] and was later replaced by direct current control due to its fast response and accurate current control capability [38]. More details can be found in the review paper [39]. Because the primary controller pertains to fast control actions, it predominantly determines the stability of microgrids [2]. Ref. [40] gave an overview of the primary control of microgrids. The secondary controller mitigates the V-f deviation left unsolved by the primary controller on a timescale of seconds to minutes. It improves the power quality by generating supplementary signals based on the errors between the measurements and reference values. Refs. [41]-[42] performed a review on the secondary control of ac microgrids. The tertiary controller mainly focuses on economic and resilient operation on a timescale of minutes to hours. It adjusts the setting points of the primary and secondary controllers by solving optimal power flow and considering the load-side demand response. Some reviews can be found in [43]-[44].
5) Communication interface: Depending on the communication interface, the control structure of the microgrid can also be categorized into centralized control, decentralized control, and distributed control [45].
In centralized control, the microgrid control center coordinates the load and generation and responds to all disturbances. It collects and processes all the local information before sending the control signals to each device. Centralized control has the advantages of accurate power sharing and good transient performance but suffers from the high cost of communication devices and single-point failure. In distributed control, each node communicates only with its adjacent nodes. Average-based, consensus-based, and event-triggered distributed algorithms are employed in microgrid control [46]. Distributed control algorithms require a connected communication graph of the microgrid. They also have a reduced convergence speed as the network grows [47]. In decentralized control, the control signals are generated based on local measurements. It has the advantages of plug-and-play capability and freedom from communication channel time delay, but it suffers from inaccurate power sharing and large V-f deviation after disturbances. Ref. [48] conducted a review of these control structures.
6) Control techniques: Beginning with classical linear control theory, advanced model-based control approaches such as nonlinear control, optimal control, and model-predictive control (MPC) have been extensively used in microgrids. Ref. [49] summarized the advances and opportunities of employing MPC in microgrids, and [50] reviewed the robust control strategies in microgrids. To address the problems of model uncertainty and unavailability, a variety of data-driven methodologies such as cutting-edge machine learning (ML) and deep learning (DL) are also employed in microgrid control. Ref. [51] reviewed the application of big data in microgrids, and [36] conducted a survey on DL for microgrid load and DER forecasting. A review of MFRL for microgrid control has yet to be done, which is why it is the main scope of this manuscript.
In summary, MFRL is a promising approach that is worth investigating and employing in microgrids. As shown in the high-level research map, MFRL is not meant to replace the existing control framework but to complement it, improve it in a data-driven way, and finally work as an integrated part of the microgrid controller.
B. Configuration of grid-following and grid-forming inverters

GFL and GFM inverters are no doubt among the most important units in microgrids [52]. This subsection develops modularized control blocks to present the bottom-level control details of GFL and GFM inverters. Fig. 2 shows the diagram of the modularized control blocks, with which a GFL or GFM inverter can be configured easily by connecting the modules in series. In addition, the diagram is beneficial to the fusion summary in Section IV because it clearly shows the control details that could couple with MFRL.
1) M1: Grid ∪ Inverter module: The first module (M1) is named the 'Grid ∪ Inverter Module' because it illustrates the connection of an inverter to the main grid. As shown in Fig. 2, the dc source, dc-ac inverter, and RLC filter are linked in series and then connected to the main grid through the PCC. Here, an average model of an inverter that neglects the switching of pulse-width modulation (PWM) is often employed for control system design. All the higher-level controllers work together to generate the reference terminal voltage e_abc-ref for PWM.

[Fig. 2: Modularized control blocks of GFL and GFM inverters]

3) M3: Current-ref module: The third module (M3) is named the 'Current-ref Module' since it generates the reference current [i_dref, i_qref] for M2. For a GFL inverter, [i_dref, i_qref] are regulated based on the error between the actual output and the reference value. Eqs. (3)-(4) show the transfer function of M3 using PI controllers, where two low-pass filters are used to filter the measured power output.
For a GFM inverter, the physical model is formulated using Kirchhoff's voltage law (KVL) at point u_abc. After Park transformation and PI controller integration, the algebraic equation and the control transfer function in the dq frame are shown in (5) and (6), respectively.
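For orientation, a representative textbook form of the dq-frame loops that Eqs. (3)-(6) describe is sketched below; the exact expressions in the paper may differ, and the gains K_p, K_i, the filter cutoff ω_c, and the filter inductance L_f are generic symbols of ours:

```latex
% Outer power loop of a GFL inverter (analog of Eqs. (3)-(4)),
% with first-order low-pass filters on the measured P and Q:
i_{d,\mathrm{ref}} = \Big(K_p + \tfrac{K_i}{s}\Big)\Big(P_{\mathrm{ref}} - \tfrac{\omega_c}{s+\omega_c}P\Big),
\qquad
i_{q,\mathrm{ref}} = \Big(K_p + \tfrac{K_i}{s}\Big)\Big(Q_{\mathrm{ref}} - \tfrac{\omega_c}{s+\omega_c}Q\Big)
% Inner dq-frame current loop after KVL and Park transformation
% (analog of Eqs. (5)-(6)), including the cross-coupling terms:
e_d = u_d - \omega L_f\, i_q + \Big(K_p + \tfrac{K_i}{s}\Big)(i_{d,\mathrm{ref}} - i_d),
\qquad
e_q = u_q + \omega L_f\, i_d + \Big(K_p + \tfrac{K_i}{s}\Big)(i_{q,\mathrm{ref}} - i_q)
```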
4) M4: Power ∩ Voltage module: The fourth module (M4) is named the 'Power ∩ Voltage Module', which indicates the fundamental difference between GFL and GFM inverters. A GFL inverter is controlled as a current source and requires a power reference as an input, while a GFM inverter is controlled as a voltage source and needs a voltage reference as an input [39]. Another big difference is that a GFL inverter needs a PLL to track the phase of the main grid, while a GFM inverter is self-synchronized [53]. Droop control is the most widely used control method in microgrids. It takes advantage of the coupling between power generation and the grid V-f [54]. Typically, an inductive microgrid employs the P-f and Q-V droop curves, while a resistive microgrid uses the reverse P-V and Q-f droop curves. The M4 block plotted in Fig. 2 shows the control blocks for an inductive microgrid, and the corresponding control models are shown below.
• Droop-controlled GFL inverter
• Droop-controlled GFM inverter
To provide more inertia support to microgrids leveraging DERs, the virtual synchronous generator (VSG) control method was proposed to emulate the behavior of synchronous generators [55]. Mathematically speaking, the VSG belongs to proportional-differential control. Below are the transfer functions of the GFL and GFM inverters implementing the VSG.
• VSG-controlled GFL inverter
• VSG-controlled GFM inverter
Readers are encouraged to check Refs. [56]-[57] for some modified VSG and droop control techniques that provide more effective inertia support to microgrids.
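As a reference for the four bullets above, the standard droop and VSG laws for an inductive grid take the following form; the droop gains m_p, n_q and the VSG inertia and damping J, D are generic symbols, and the paper's exact transfer functions may differ:

```latex
% P-f and Q-V droop for an inductive grid:
\omega = \omega_0 - m_p\,(P - P_0), \qquad V = V_0 - n_q\,(Q - Q_0)
% VSG swing-equation emulation (proportional-differential action on P):
J\frac{d\omega}{dt} = P_{\mathrm{ref}} - P_e - D\,(\omega - \omega_0),
\qquad \theta = \int \omega \, dt
```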
5) M5: Auxiliary service ∪ Optimization module: Microgrids exploiting M1-M4 can withstand normal disturbances such as load changes and plug-and-play generation. M5 then participates in grid optimization and provides auxiliary services, i.e., optimized active and reactive power sharing [28], demand-side management, and V-f support [58]. For more economic energy management, M5 also calculates steady-state setting points such as (P_0, Q_0) by solving optimal power flow [59]. On the other hand, it generates supplementary signals for controller parameters and outputs [60] according to the targets of the auxiliary service. Review papers regarding M5 can be seen in [61]-[62].
C. Motivation for MFRL

1) Challenges in the existing control framework: The high-level research map and modularized control blocks clearly show how existing microgrids are controlled. However, the evolution of microgrids brings more challenges to the existing control framework. The challenges are five-fold:
i) The penetration of DERs results in higher uncertainties. Although some robust and stochastic techniques have been employed to address the emerging uncertainties, they are somewhat conservative, and the probability distribution function still needs to be accurately estimated.
ii) It is difficult to model each element of a microgrid in detail, e.g., customer behavior. Information that is difficult to model is critical for energy management in M5.
iii) Some system parameters are not always accessible; even when accessible, they are not necessarily accurate.
iv) Microgrid dynamics are becoming faster because more and more inverter-based resources participate in grid services by adaptively changing their control modes and control parameters. The existing controllers may then no longer be valid.
v) Smart grids call for autonomous microgrids, which free engineers and grid operators from parameter tuning for the modules in Fig. 2. Even other model-free controllers still need elaborate tuning of hyper-parameters, e.g., the membership functions of the fuzzy logic controller and the coefficients of the adaptation law.
2) Why MFRL?: Microgrid operators now have access to massive data sampled by phasor measurement units (PMUs) and advanced metering infrastructures (AMIs) [63]. This opens the possibility for data-driven control. MFRL is an advanced decision-making technique with goal-oriented, data-driven, and model-free characteristics [64]. With the help of MFRL, the uncertainties of the model and parameters may be mitigated through repeated interaction between the environment and the RL agent. MFRL is also beneficial to the autonomous operation of microgrids because the RL agent can actively update its policy based on the microgrid dynamics.
To better fuse MFRL with the existing microgrid control framework, it is necessary to first know the capabilities of each MFRL algorithm and then choose the proper algorithms in real applications. Thus, the following sections introduce the map of MFRL, the features of mainstream MFRL algorithms, and how MFRL can be incorporated into the existing microgrid control framework.
III. MODEL-FREE REINFORCEMENT LEARNING
This section first gives a brief introduction to RL and then summarizes the methodology of MFRL.
A. Formulation of RL
RL is a basic ML paradigm formulated as an MDP. As shown in Fig. 3a, the environment defines the state space S and the agent holds the action space A. The agent keeps interacting with the environment to update its policy π, which maps environment states to actions. In each iteration, the agent chooses an action a_t ∈ A according to π. Then, the environment generates the next state according to its intrinsic transition probability P(s_{t+1} | s_t, a_t): S × A → ∆(S) and feeds back the instant reward r(s_t, a_t) to the agent. The iteration is repeated until the agent finds the optimal policy π* as follows:

π* = arg max_π J(π), with J(π) = E[ Σ_{t=0}^{∞} γ^t r(s_t, a_t) ],

where γ is the discounting factor and J(π) is the infinite-horizon discounted reward. The optimal policy guarantees the maximum accumulated reward obtained from the environment. In MFRL, A and S can be either continuous or discrete. For the sake of illustration, this paper uses discrete notation to introduce the methodology.
B. Methodology of MFRL
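As background for the value-based methods below (standard RL definitions, not specific to this paper), the action value Q^π and its Bellman optimality recursion are:

```latex
Q^{\pi}(s,a) = \mathbb{E}\Big[\sum_{t=0}^{\infty}\gamma^{t} r(s_t,a_t)\,\Big|\,s_0=s,\ a_0=a,\ \pi\Big],
\qquad
Q^{*}(s,a) = r(s,a) + \gamma\,\mathbb{E}_{s'}\Big[\max_{a'} Q^{*}(s',a')\Big]
```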
1) Value-based algorithms: Through temporal-difference learning, Q^π can finally converge to its true value under mild assumptions [65].
The approximated Q^π was first recorded in a Q-table [66]. Considering the table's inefficiency, the deep Q-learning network (DQN) [67] replaced the Q-table with a deep artificial neural network (ANN), which has a strong fitting capability that maps the states to Q-values with less memory. The DQN was then further improved using the following tricks [68]:
• (Prioritized) Replay Buffer enhances the training efficiency.
• Double Network relieves the overestimation of Q-value.
• Dueling Network improves the performance in high-dimensional action spaces.
Later, a distributional DQN [69] and a quantile regression DQN [70] were proposed to model the full distribution of returns rather than only its mean, and these improvements were combined as 'Rainbow DQN' by David Silver's group [71] in 2017.
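A minimal tabular Q-learning sketch makes the temporal-difference update above concrete; it assumes a discrete Gymnasium-style environment and is illustrative only:

```python
import numpy as np

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, eps=0.1):
    """Tabular Q-learning for a discrete Gymnasium-style environment,
    e.g. q_learning(gymnasium.make("FrozenLake-v1"))."""
    Q = np.zeros((env.observation_space.n, env.action_space.n))
    for _ in range(episodes):
        s, _ = env.reset()
        done = False
        while not done:
            # epsilon-greedy exploration
            a = env.action_space.sample() if np.random.rand() < eps else int(np.argmax(Q[s]))
            s2, r, terminated, truncated, _ = env.step(a)
            done = terminated or truncated
            # temporal-difference update toward r + gamma * max_a' Q(s', a')
            target = r + gamma * np.max(Q[s2]) * (not terminated)
            Q[s, a] += alpha * (target - Q[s, a])
            s = s2
    return Q
```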
2) Policy-based algorithms: Policy gradient methods directly learn the parameterized policy based on feedback from the environment. Before diving into policy gradient algorithms, it is necessary to introduce the actor-critic (AC) structure. The AC structure has two ANN models that optionally share parameters: i) the critic updates the parameters of the value functions; ii) the actor updates the policy parameters under the guidance of the critic. Under the AC structure, the policy function can be either stochastic or deterministic. The stochastic policy is modeled as a probability distribution, a ∼ π_θ(a | s), while the deterministic policy is modeled as a deterministic decision, a = π_θ(s). This distinction classifies the policy-gradient methods.
a) Stochastic policy: For a stochastic policy a ∼ π_θ(a | s), the gradient of the expected reward with respect to the policy parameters is given by the policy gradient theorem [72] as

∇_θ J(π_θ) = E_{s∼μ_θ, a∼π_θ}[ ∇_θ log π_θ(a | s) · Q^{π_θ}(s, a) ],

where μ_θ(S) ∈ ∆(S) is the state distribution. The policy is then updated using gradient ascent,

θ ← θ + η ∇_θ J(π_θ),

where η is the learning rate. It is necessary to avoid a large update step in each iteration, since the policy gradient readily falls into a local maximum. To make policy-gradient training more stable, trust region policy optimization (TRPO) added a Kullback-Leibler (KL) divergence constraint to the policy update [73], solving

max_θ E[ (π_θ(a | s) / π_θ_old(a | s)) · A^{π_θ_old}(s, a) ] subject to E[ D_KL(π_θ_old(· | s) ‖ π_θ(· | s)) ] ≤ δ.

Proximal policy optimization (PPO) replaces this constraint with a clipped surrogate objective. In PPO, the actor network and critic network share the same learned features, which may result in conflicts between competing objectives during simultaneous training. Hence, the phasic policy gradient (PPG) separates the training phases of the actor and critic networks [75], leading to a significant improvement in sampling efficiency. Other improved versions of the AC structure include advantage actor-critic (A2C), asynchronous advantage actor-critic (A3C), and soft actor-critic (SAC). A2C and A3C both enable parallel training using multiple actors, but the actors of A2C work synchronously while those of A3C work asynchronously [76]. SAC improves agent exploration by incorporating policy entropy [77].
b) Deterministic policy: The gradient of a deterministic policy a = π_θ(s) is expressed as

∇_θ J(π_θ) = E_{s∼μ}[ ∇_θ π_θ(s) · ∇_a Q^π(s, a) |_{a=π_θ(s)} ].

The deterministic policy gradient (DPG) method was the first to use a deterministic policy [78]. The deep deterministic policy gradient (DDPG) was then developed by combining the DPG and the DQN [79]. The DDPG extends the discrete action space of the DQN to continuous space while learning a deterministic policy. Later, the twin delayed deep deterministic (TD3) policy gradient applied three tricks, i.e., clipped double Q-networks, delayed policy updates, and target policy smoothing, to prevent the overestimation of Q-values seen in the DDPG.
3) Summary: The DQN, DDPG, and A3C are three basic paradigms of MFRL representing value-based methods, deterministic policy methods, and stochastic policy methods, respectively. Their upgraded versions, the Rainbow DQN, TD3, and PPG/SAC, represent the state of the art of each paradigm and are the best choices for fusing MFRL with the existing microgrid control framework. Moreover, value-based methods such as the DQN are more suitable for discrete control tasks like transformer tap and switch on/off control, while policy-based methods like TD3 are more suitable for continuous tasks such as active and reactive power reference generation.
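As a sketch of this algorithm-to-task matching, assuming the stable-baselines3 library, with standard Gymnasium tasks standing in for microgrid environments:

```python
import gymnasium as gym
from stable_baselines3 import DQN, TD3

# Discrete-action task (stand-in for transformer tap / switch on-off control):
# a value-based agent such as DQN is the natural fit.
dqn = DQN("MlpPolicy", gym.make("CartPole-v1"))
dqn.learn(total_timesteps=10_000)

# Continuous-action task (stand-in for P/Q reference generation):
# a deterministic policy-gradient agent such as TD3 is the natural fit.
td3 = TD3("MlpPolicy", gym.make("Pendulum-v1"))
td3.learn(total_timesteps=10_000)
```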
IV. FUSION OF MODEL-FREE REINFORCEMENT LEARNING WITH MICROGRID CONTROL
Sections II and III introduced the existing microgrid control framework and MFRL separately. This section details their fusion, including the application guidelines and the challenges and insights of using MFRL in microgrid control.
A. Application guideline

1) Problem formulation: Microgrid control is intrinsically an infinite-horizon MDP that MFRL can solve. Ref. [80] answered the question of 'How', that is, 'How to formulate a control problem that can be solved by MFRL?', in four steps: i) determine the environment, state space S, and action space A; ii) design the reward function R according to the control targets; iii) select a proper learning algorithm; iv) train the agent and validate the learned policy. The four steps are exemplified below for two specific application scenarios, frequency regulation and voltage regulation.
i) Formulation of frequency regulation: Eqs. (19)-(21) show the general configuration of an MFRL agent for frequency regulation in microgrids. The agent has a unique action space when fusing with different modules in Fig. 2.
Here, ω_i is the frequency at each bus i; (P_ij, Q_ij) is the power flow over the line from bus i to bus j; M2-M5 are the modules summarized in Fig. 2; I is the inverter set; and I_GFL and I_GFM are the sets of GFL and GFM inverters, respectively. Since the control target is to maintain frequency, the frequency deviation is designed as the reward function.
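A toy, single-bus illustration of steps i)-ii) follows; the class name, first-order dynamics, and constants are our own stand-ins, not the paper's Eqs. (19)-(21):

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class FreqRegEnv(gym.Env):
    """Toy single-bus frequency-regulation environment (illustrative only).

    State: frequency deviation dw; action: supplementary power dP fed to M4;
    reward: negative squared deviation, mirroring the intent of Eq. (21).
    """

    def __init__(self, inertia=5.0, damping=1.0, dt=0.1, horizon=200):
        self.M, self.D, self.dt, self.horizon = inertia, damping, dt, horizon
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(1,), dtype=np.float32)
        self.action_space = spaces.Box(-1.0, 1.0, shape=(1,), dtype=np.float32)
        self.dw, self.t = 0.0, 0

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.dw = float(self.np_random.uniform(-0.5, 0.5))  # random disturbance
        self.t = 0
        return np.array([self.dw], dtype=np.float32), {}

    def step(self, action):
        # swing-equation-like update: M * d(dw)/dt = dP - D * dw
        self.dw += self.dt / self.M * (float(action[0]) - self.D * self.dw)
        self.t += 1
        reward = -self.dw ** 2                               # penalize deviation
        truncated = self.t >= self.horizon
        return np.array([self.dw], dtype=np.float32), reward, False, truncated, {}
```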
ii) Formulation of voltage regulation: Eqs. (22)-(24) show the general configuration of an MFRL agent for voltage regulation in microgrids.
where v_i is the voltage magnitude of bus i, and τ_i is the tap position of the on-load tap changer (OLTC) of the transformer.
Compared with frequency regulation, the agent has the distinct OLTC action in M5 for voltage regulation. After selecting S, A, and R, a mainstream MFRL algorithm is selected to update the policy of the agent. Note that the selected algorithm should be applicable to the application scenario; for example, a discrete algorithm from Fig. 3b is better suited to discrete control actions like OLTC taps. In addition, the above formulations give a general form for configuring an MFRL agent for microgrid control, and they can be modified according to customized control tasks.
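For contrast, a matching voltage-regulation sketch follows, with an illustrative discrete OLTC tap action; the tap range and per-unit limits are assumptions, not values from the surveyed literature.

```python
# Illustrative sketch of the voltage-regulation MDP: the state adds bus
# voltage magnitudes v_i, the action includes a discrete OLTC tap
# position tau_i, and the reward penalizes deviation from 1.0 p.u.
import numpy as np

TAP_POSITIONS = np.arange(-8, 9)  # e.g., 17 discrete tap steps

def voltage_reward(v_pu):
    """Negative total deviation of bus voltages from 1.0 p.u."""
    return -float(np.sum(np.abs(np.asarray(v_pu) - 1.0)))

# A discrete algorithm (e.g., the DQN) would pick an index into
# TAP_POSITIONS, while continuous P/Q references suit policy-based methods.
tap_action_index = 9  # selects tap position TAP_POSITIONS[9] == 1
print(voltage_reward([0.98, 1.01, 1.00]))  # approx. -0.03
```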
In addition to problem formulation, two further fundamental questions remain to be answered:
• Q1: What kinds of tasks is MFRL suitable for?
• Q2: How can MFRL be fused with the existing microgrid control framework?
The following two subsections try to answer these two questions based on the state of the art of MFRL. The answers can serve as an application guideline for adopting MFRL in microgrids.
2) What kinds of tasks is MFRL suitable for?: In general, MFRL is suitable for tasks with the following four features: i) Relatively unchanged environment. The policy learned by an RL agent reflects the physical laws of the training environment, which fundamentally determine the state transition probability. As shown in the diagram in Fig. 3a, the environment generates rewards based on P(s_{t+1} | s_t, a_t) : S × A → ∆(S) and feeds the rewards to the RL agent for policy updating. A new environment has a distinct state transition probability function, which may conflict with the buffer data and the trained policy. Thus, the working environment should not differ too much from the training environment. That is why, in Table I, the training microgrids and validation microgrids usually have fixed topology and predefined disturbances.
ii) Clear control target. Clear control targets facilitate the design of reward functions. The objective function in an optimization problem, optimal control, or MPC can be directly transformed into a reward function. With the function grouping and hierarchical structure in Fig. 1, the specific control targets can be briefly categorized into frequency regulation, voltage regulation, and economic benefits. Then, the voltage deviation [81], frequency deviation [82], and energy management cost/revenue [83], [84] are transformed into the reward functions in (21) and (24). Crucially, a well-designed reward function gives the MFRL agent the best guidance for learning the optimal policy.
iii) Available data. Environmental data must be accessible if the agent interacts with a real system. Also, the real environment should tolerate improper actions during exploration. If the environment is a simulator, the simulation should run quickly to allow for thousands of repetitions. For example, a fast simulator was used for training and a real tokamak vessel for validation in [30]. iv) Acceptable control complexity. 'Acceptable' means the control complexity should be neither too low nor too high. For each perspective summarized in the high-level research map, no research has tried to replace all the controllers. Most of the research has focused on a specific task that a model-based controller cannot handle but MFRL can, because there is no need to replace a simple model-based controller that already performs well, and it is unrealistic to let AI directly control the whole microgrid for now.
3) How can MFRL be fused with the existing microgrid control framework?: MFRL is essentially a useful tool that serves microgrid control. It follows microgrid control targets when fused with the existing control framework. In general, there are three ways of fusing, as follows.
i) Model identification and parameter tuning. MFRL assists in accurately identifying the uncertain models of grid components. It can also address the uncertainty and unavailability of model parameters and relieve grid operators of complex and time-consuming parameter tuning, especially for a large model with many parameters.
ii) Supplementary signal generation. MFRL can generate supplementary control signals for model-based controllers, with which the current controllers can be made more robust and can handle complicated control tasks.
iii) Controller substitution. MFRL can completely replace the existing model-based controllers if they are no longer effective. It needs fewer inputs but achieves better performance than model-based controllers owing to the ANN's strong fitting capability.
In general, the application guideline above is summarized from the existing microgrid control research that employs MFRL. A detailed literature review follows in the next subsection.
B. Literature review
Sorted by fusion approach, Table I summarizes the literature adopting MFRL in microgrids, with the key features listed in the last column. In general, MFRL has been fused with both optimization and control tasks in microgrids. Most research has tried to replace the existing model-based controllers with MFRL agents. In addition, many researchers focus on optimization problems that have clear targets, whose objective functions are directly transformed or incorporated into the reward function.
C. Challenges and insights
Although many researchers have been investigating the applications of MFRL in microgrid control, there is still a clear gap between theory (simulation) and practice (real microgrid operation). The main concerns are the environment, scalability, generalization, security, and stability. This subsection summarizes these challenges and gives some insights on how to tackle them.
1) Environment: • Challenges: As shown in Fig. 4, conventional model-based microgrid controllers undergo several types of tests before implementation, i.e., simulation, controller hardware-in-the-loop (HIL) tests, power HIL tests, subscale system tests, and full system tests. These are the options for the MFRL environment. Existing literature suggests offline training in a numerical simulator and online implementation in real systems [94], because the RL agent requires extensive exploration during training, which is unrealistic in HIL or real systems. That is why early RL was mainly used in video games, where the simulator could perfectly emulate the working environment [99]. Among the current power testbed types, simulation has the highest coverage of test scenarios but the least fidelity, which is the major concern in employing MFRL. Even if the agent learns a good policy in a numerical simulator, it may not function effectively in a real microgrid.
Fig. 4: Microgrid testbeds [34] and the MFRL environment
• Insights: As for numerical simulators, work is under way on more accurate and faster toolboxes capable of serving as high-fidelity MFRL environments. Improved power system modeling [100] and more efficient numerical simulation techniques, such as the hybrid symbolic-numeric framework [101], are currently being developed. Further, it would be better to develop a standardized and customizable training environment that assists in setting up the interface with power simulators such as PSCAD, PSSE, and MATLAB-Simulink, just like 'Gym' in the field of deep RL [102]. A standardized environment can also serve as a baseline for algorithm tests and comparisons. On the other hand, it is a good idea to design a HIL test system that is equipped with specialized protection and can tolerate random exploration to some degree. In this way, the HIL test system may work as an environment that closely resembles an actual microgrid. Moreover, an MFRL agent can learn from historical data. To improve learning efficiency and address the problem of real-data insufficiency, some advanced techniques have been developed: i) long-tail learning [103] can learn effectively on biased data sets; ii) deep active learning [104] can be used to label disturbance data more efficiently.
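To illustrate the 'Gym'-style standardization idea, the sketch below wraps a hypothetical power-simulator handle in the modern gymnasium interface. MicrogridSimulator and its reset/apply methods are placeholders for an interface to a tool such as PSCAD, PSSE, or MATLAB-Simulink; they are assumptions, not real APIs.

```python
# Minimal Gym-style wrapper sketch around a hypothetical co-simulation
# handle, so standard MFRL libraries can train against a power simulator.
import numpy as np
import gymnasium as gym

class MicrogridEnv(gym.Env):
    def __init__(self, simulator):
        self.sim = simulator  # hypothetical simulator interface
        self.observation_space = gym.spaces.Box(-np.inf, np.inf, shape=(6,))
        self.action_space = gym.spaces.Box(-1.0, 1.0, shape=(2,))

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        obs = self.sim.reset()                  # assumed simulator call
        return np.asarray(obs, np.float32), {}

    def step(self, action):
        obs, freq_dev = self.sim.apply(action)  # assumed simulator call
        reward = -abs(freq_dev)                 # e.g., frequency deviation
        terminated = abs(freq_dev) > 5.0        # illustrative trip limit
        return np.asarray(obs, np.float32), reward, terminated, False, {}
```

Such a wrapper also gives a natural seam for the HIL idea above: the same agent code can later point at a protected HIL testbed instead of the numerical simulator.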
2) Scalability: • Challenges: MFRL suffers from the curse of dimensionality, like some model-based controllers. The expansion of the state and action spaces results in an exponential increase in control complexity, thereby increasing the difficulty of exploration and training. Existing MFRL research on microgrid control mainly focuses on small-scale problems [105] and utilizes ANNs with only a few layers. To promote the application of MFRL in microgrid control, it is necessary to improve its scalability.
• Insights: On the one hand, integrating domain knowledge into the problem formulation is an effective way to reduce control complexity. For example, [106] narrowed down the learning space and avoided baseline violations based on the generation constraints. On the other hand, it would be better to increase the capability of existing MFRL models by: i) increasing the exploration efficiency with guided exploration strategies like evolutionary RL [107]; ii) increasing the fitting capability of ANNs through modern network structures, i.e., sequence-to-sequence networks and transformers [108]; iii) increasing the training efficiency through distributed techniques like federated learning [109] and edge computing [110]. All of these methods can help relieve the pressure of training and make MFRL more scalable for microgrid control.
3) Generalization: • Challenges: Similar to DL, MFRL has been criticized for its inability to generalize, because a well-trained agent does not function effectively in a changing environment [111]. Even in an unchanged environment, the diversity of disturbances may also degrade the agent's policy. In microgrid control, it is difficult to cover all disturbances during training, which is critical when RL agents replace the existing controllers.
• Insights: Firstly, rich training scenarios benefit the generalization of MFRL. For example, [112] addressed the uncertainty of Volt-Var control in active distribution systems by generating a large set of offline training scenarios. It is also worthwhile to employ robust RL, which can tolerate uncertainty in the environment [113]. Further, transfer learning can also enhance MFRL's generalization capability, as has proven effective in the field of DL.
4) Security: • Challenges: Security refers to static security in this paper, meaning that the system state should respect static physical constraints to avoid damaging devices. In microgrids, these constraints can be thermal limits and control signal limits determined by the physical capability of microgrid components. They are usually explicit and known from microgrid device manufacturers, and IEEE standards set the secure operational ranges of voltage and frequency. However, due to the non-interpretability of ANNs, the learned policy cannot always guarantee that each variable respects the constraints. Furthermore, it is also a problem to guarantee secure exploration in a HIL or real system. In the future, MFRL agents may be trained in a HIL microgrid to overcome the shortcomings of numerical simulators, where exploration must never violate the physical constraints of the HIL or real system.
• Insights: Through constrained RL [114], [115] and safe RL [116], [117], the actions of RL agents can be projected onto a safety region and thus always respect the physical operational constraints. In addition, physics-constrained and physics-informed deep learning [118] are also under development and can be integrated into MFRL to address security concerns. In physics-constrained deep learning, a 'safety layer' is often leveraged to maintain constraint satisfaction under different kinds of physics knowledge, while physics-informed learning embeds into training the knowledge of physical laws governed by partial differential equations.
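A minimal sketch of the safety-layer idea follows. For simple box constraints the projection reduces to clipping (coupled constraints would instead need, e.g., a small QP); the limits shown are illustrative assumptions, not values from any standard.

```python
# Sketch of a "safety layer": project the raw RL action onto the known
# static operating box before it reaches the plant.
import numpy as np

P_MIN, P_MAX = 0.0, 1.0    # active-power reference limits (p.u., assumed)
Q_MIN, Q_MAX = -0.3, 0.3   # reactive-power reference limits (p.u., assumed)

def safety_layer(raw_action):
    """Clip each component of the agent's action into its secure range,
    guaranteeing the explicit box constraints regardless of what the
    ANN emits."""
    p_ref, q_ref = raw_action
    return np.array([np.clip(p_ref, P_MIN, P_MAX),
                     np.clip(q_ref, Q_MIN, Q_MAX)])

print(safety_layer(np.array([1.4, -0.9])))  # -> [1.0, -0.3]
```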
5) Stability: • Challenges: Stability refers here to dynamic stability under a disturbance. According to the definition in [119], stability is the ability of an electric power system, for a given initial operating condition, to regain a state of operating equilibrium after being subjected to a physical disturbance, with most system variables bounded so that practically the entire system remains intact. Model-based microgrid controllers must pass stability tests through eigenvalue analysis or Lyapunov function validation before implementation. However, the employment of MFRL challenges these model-based criteria because the uninterpretable RL agents dramatically change the closed-loop dynamics of microgrids.
• Insights: Integrating domain knowledge is the best way to guarantee microgrid stability for now. For the first two fusion approaches, i) model identification and parameter tuning and ii) supplementary signal generation, model-based stability criteria can still be used to verify system stability because the MFRL agent does not break the closed-loop structure. MFRL complements the model-based approaches and improves them in a data-driven way. The supplementary signals generated by the MFRL agent can be viewed as hyper-parameters. Through techniques like semidefinite programming (SDP), linear matrix inequalities (LMIs), and sum-of-squares programming [120], the secure range of these hyper-parameters can be obtained to guarantee dynamic stability [121]. For the third way of fusion, complete controller substitution, MFRL agents dramatically change the closed-loop dynamics and make the system difficult to model. To address the stability issues in this condition, this paper gives three potential solutions. i) Enrich the training data and training scenarios. The learned policies basically reflect the state transitions of the environment; if the training data set covers sufficient instability scenarios, the corresponding punishment rewards can help RL agents avoid unstable actions. ii) Use a physics-informed approach by integrating model-based stability criteria into MFRL training. For example, the Lyapunov function [122] and Gaussian process estimation [117] can be used to generate stability criteria for MFRL training, and [123] proposed a Lyapunov-regularized RL for transmission system transient stability. iii) Perform policy stability validation through time-domain simulation (TDS). TDS has been widely used in power systems to validate the stability of nonlinear components or modules; it can also help validate the stability of an otherwise uninterpretable RL policy.
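As a rough illustration of solution ii), the sketch below shapes the reward with a penalty whenever a candidate Lyapunov function increases along a transition. The quadratic V and the penalty weight are placeholder assumptions; a real study would derive or learn V for the specific closed-loop system.

```python
# Sketch of Lyapunov-style reward shaping: penalize transitions that
# violate the decrease condition V(s') <= V(s).
import numpy as np

def lyapunov_penalty(s, s_next,
                     V=lambda x: float(np.dot(x, x)),  # placeholder V
                     weight=10.0):
    """Extra negative reward whenever the candidate Lyapunov function
    increases along the trajectory, nudging the policy toward
    stability-preserving actions."""
    violation = max(0.0, V(np.asarray(s_next)) - V(np.asarray(s)))
    return -weight * violation

def shaped_reward(task_reward, s, s_next):
    return task_reward + lyapunov_penalty(s, s_next)

print(shaped_reward(-0.05, s=[0.1, 0.0], s_next=[0.3, 0.1]))  # penalized
```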
V. CONCLUSION
Model-based controllers are still the foundation of existing microgrid control systems. However, the emerging challenges caused by the uncertainty of DERs and extreme weather call for advanced control techniques. As a model-free and data-driven approach, MFRL opens the possibility of nonlinear, high-dimensional, and highly complex microgrid control. It may contribute to a major upgrade of the existing control framework.
Against this background, this paper first performs a comprehensive review of the current microgrid control framework and then summarizes the applications of MFRL. In general, there are three ways of fusing MFRL with the existing model-based controllers: i) model identification and parameter tuning, ii) supplementary signal generation, and iii) controller substitution. For now, there is still an obvious gap between the theory (simulation) and practical application. The challenges are mainly categorized into environment, scalability, generalization, security, and stability. With the rapidly developing techniques in the fields of both power and artificial intelligence, the author believes the challenges summarized in this paper will eventually be overcome, and that MFRL will one day fuse seamlessly with the existing microgrid control framework.
Fig. 1: High-level research map of microgrid control
M2: Terminal Voltage-ref module: The 2nd module (M2) is named the 'Terminal Voltage-ref Module' since it directly generates the reference terminal voltage. The control model is formulated by applying Kirchhoff's current law (KCL) from e_abc to u_abc and conducting a Park transformation. Then, after implementing proportional-integral (PI) controllers, the physical model and the control transfer function in the dq framework are given in (1) and (2), respectively.
Fig. 3: The framework and map of MFRL (a) agent-environment interaction in an MDP (b) methodology. Fig. 3b shows the mainstream MFRL methodology, categorized into value-based and policy-based algorithms.
TABLE I: Literature summary of implementing MFRL in microgrids. | 2022-06-24T01:16:03.866Z | 2022-06-22T00:00:00.000 |
"year": 2023,
"sha1": "46a5ca882f570db7160310f488382beba7a65f7f",
"oa_license": "CCBY",
"oa_url": "https://ieeexplore.ieee.org/ielx7/5165411/5446437/09951405.pdf",
"oa_status": "HYBRID",
"pdf_src": "ArXiv",
"pdf_hash": "6f97556af8920c8e3a67f2db28220997bf593a0e",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
} |
144977551 | pes2o/s2orc | v3-fos-license | Research and trends in the studies of WebQuest from 2005 to 2012: A content analysis of publications in selected journals
This paper provides a trend analysis and content analysis of studies in the field of WebQuest that were published in seven major journals: TOJET, Educational Technology & Society, Educational Technology Research & Development, Computers & Education, Learning and Instruction, Australasian Journal of Educational Technology, and British Journal of Educational Technology. These articles were cross-analyzed by publication year. Content analysis was implemented for further analysis based on research topics, issue categories, research settings and sampling, research designs, research methods, and data analysis. It was found that WebQuest benefited students academically. The results of the analysis also provide insights for educators and researchers into research trends and patterns of WebQuest. © 2013 The Authors. Published by Elsevier Ltd. Selection and peer-review under responsibility of The Association of Science, Education and Technology-TASET, Sakarya Universitesi, Turkey.
Introduction
The process of learning and improving students' performance through appropriate technological processes and resources has become routine. In this regard, the WebQuest application is a new phenomenon that motivates students in their learning processes (Dodge, 1995). It has been used by students as a web-based tool for collecting and evaluating information to increase their learning performance. According to Dudeney (2003), its application, which requires students to perform certain tasks, analyse, evaluate, and solve problems, motivates students more than outdated course books and other such teaching materials. A WebQuest is a web-based activity that requires students to be active learners and allows them to enhance higher-order thinking skills such as finding topic-related web sites and examining and selecting well-prepared and reliable web sites (Halat, 2008). In finding relevant resources, students must evaluate the sites so that all unnecessary information is eliminated, which helps them develop their critical-thinking skills. WebQuest is a valuable tool for providing students with many interaction opportunities in realistic settings, thus making for a more meaningful, experiential, and very motivating learning experience. If the WebQuest is associated with students' professional needs, its implementation can be very successful, and it helps to enhance students' skills in both academic and cooperative work (Laborda, 2009). Research has documented that WebQuests are effective in promoting student engagement, motivation, connection to authentic contexts, critical thinking, creativity, literacy skills, problem-solving skills, social interaction, scaffolded learning, and collaborative learning (Abu-Elwan, 2007; Allan & Street, 2007; Abbitt & Ophus, 2008; Ikpeze & Boyd, 2007; Kanuka, Rourke & Laflamme, 2007; Lara & Reparaz, 2005; Lim & Hernandez, 2007; Segers & Verhoeven, 2009; Yasemin, Madran & Kalelioglu, 2010). That is why a majority of teachers have used WebQuest as a tool for teaching and learning to achieve the learning outcomes (Yasemin, Madran, & Kalelioglu, 2010; Segers & Verhoeven, 2009; Cheng, Tzung, & Wei, 2011; Pear & Crone-Todd, 2001) and to assist in bridging the gap between theory and practice (Lim & Hernandez, 2007). A WebQuest is one example of how teachers can integrate technology into classrooms, a growing area of interest as information technology creates new learning opportunities and becomes more accessible across the world (Krismiyati Latuperissa, 2012; Garry, 2001; Lin & Hsieh, 2001). Research has shown that integrating technology, especially the internet, in teaching and learning can have positive influences on students' motivation, inquiry-based learning, attitudes, achievement, and peer interactions in the classroom (Abu Bakar Nordin & Norlidah Alias, 2013; Chandra & Lloyd, 2008; Dorothy Dewitt, Saedah Siraj & Norlidah Alias, 2013; Lim & Tay, 2003; Norlidah Alias & Saedah Siraj, 2012; Wang, Kinzie, Mcguire, & Pan, 2010).
In Malaysia, research on WebQuest still lags far behind that of other countries. A 2009 study by Norazah Mohd Nordin & Ngau Chai Hong on the development and evaluation of a WebQuest for the Information and Communication Technology subject found that WebQuest could attract students to search for more information on the web using the attached links. It also serves as an easy route into e-learning.
There have been numerous publications about WebQuests in the last few years; the majority have fallen within the categories of conceptual, descriptive, design, or other technical aspects (Sox & Rubinstein-Avila, 2006). Since then, there has been increasing research on WebQuest applications across a wide range of topics. However, none has focused on a content analysis of publications on WebQuests.
Research Objective
This review concentrates on the research trends related to WebQuest covering the years 2005 to 2012. The articles were obtained from seven professional journals and abstract databases published in the Social Sciences Citation Index (SSCI), which has comprehensive coverage of the world's most important and influential journals and research results. This paper covers only the 7 journals selected from the ISI. The study undertaken extracts the similarities and differences from the above classification, including the findings of those selected journal articles. It may shed some light on future research and provide guidance for researchers and a basis for discourse for policy makers.
3. Method
Content analysis is a research technique and tool for social science and media researchers. It is a scientific method for the objective, systematic, and quantitative description of the manifest content of communication (Berelson, 1952).
Furthermore, it can be extended to describe the characteristics of a document's content, make observations, and provide analysis. Krippendorff (1980) defined content analysis as a research technique for making replicable and valid inferences from data to their context. One of the most frequent uses of content analysis is to study changing trends in the theoretical content and methodological approaches of a discipline by content-analyzing its journal articles (Loy, 1979).
Firstly, a content analysis of journal articles was conducted by formulating the research question and the objectives of the study. Secondly, 7 journals published in the Social Science Citation Index (SSCI) and pertaining to WebQuest research were selected. Thirdly, a set of criteria was established for the analysis and the development of content categories. Fourthly, data preparation for the necessary analysis was completed before the final analysis and drafting of the report.
In order to examine the research trends in WebQuest research, the study classified the research topic of each published article. The categories consist of (i) research trends, (ii) research topics, (iii) research design and methodology, and (iv) data analysis and findings.
WebQuest Research Trend
The utilization of information technology such as the internet, digital programs, and gadgets has increased in education. Recently, internet learning tools such as WebQuest, Facebook, Wiki, YouTube, and web-based platforms have been integrated into classroom instruction. Focusing on the topic of WebQuest in all 7 selected journals, the trend analysis from 2005-2012 shows that out of 3614 articles published, only 13 were related to WebQuest. This is only about 0.35% of the total number of articles analyzed, as shown in Table 1. For this reason, teachers and students need to be trained in order to use WebQuest more effectively (2012). Pre-service elementary school teachers who used designed WebQuest-based applications in teaching and learning mathematics showed greatly increased motivation (2011). The results of these studies imply that developing WebQuest-based activities in college-level methodology courses may have a more positive effect on the attitudes of pre-service elementary school teachers towards teaching and learning mathematics than doing spreadsheet activities. If they design WebQuest activities as a group or individual project in their methodology courses, they may have the opportunity to practice their pedagogical and content knowledge in a different environment, which will yield more benefit in developing their competency. Multimethodology in teaching and learning processes also shows promise, not only for teachers but also for students. The distribution of WebQuest articles in the selected journals is shown in Table 2. One of the main purposes of this study was to categorize research topics in WebQuest and to help identify research trends from 2005-2012 in the 7 selected journals. The reviewed articles were classified into three categories: WebQuest as a learning environment, WebQuest as a learning tool, and WebQuest for self-development. Our study found that 40% of the articles focused on the learning environment. For example, research by Allan and Street (2007) on the impact of a knowledge-pooling WebQuest in primary initial teacher training shows that WebQuest has the potential to promote higher-order learning within different disciplines in higher education. It also creates a new learning environment. Overall, the study of the 10 articles on WebQuest research found that: (1) when WebQuest was used in real situations, students could acquire more knowledge and experience, and (2) in the experimental learning activity, the students accomplished different learning tasks and expressed their own opinions and perspectives, which could foster their critical-thinking skills. Moreover, students in outdoor situations participated positively in learning activities. Based on these outcomes, WebQuest has the potential to develop into a pedagogical model for teaching and learning. Table 3 summarizes the content analysis from the selected journals.
This paper also analyzed the publications by the methodology involved in WebQuest research. The results show that 60% of the articles used a quasi-experiment as the research design, with research objectives oriented mainly towards the positive aspects of using WebQuest. The other methods used were developmental research and concept papers. For example, in TOJET (April 2011, Vol. 10, Issue 2), a study on the impact of mathematical representation developed through WebQuest and spreadsheet activities on the motivation of pre-service elementary school teachers used a quasi-experiment involving 30 students in the experimental group and 40 students in the control group. Meanwhile, the study by Zacharias, Nikoletta, and Constantinos (2011) used a qualitative approach to study the effect of two different cooperative learning approaches, namely the Jigsaw Cooperative Approach (JCA) and the Traditional Cooperative Approach (TCA), on students' learning and practices/actions within the context of a WebQuest science investigation. Various aspects of the learning processes, including performance, comprehension, understanding, creativity, and motivation, were highlighted. The study described the implementation of a WebQuest in science in which 38 seventh-graders participated to learn about the ecology, architecture, energy, and insulation of CO2-friendly houses. Applying the Wilcoxon test procedure revealed that both the JCA and TCA conditions improved the students' understanding of concepts related to the ecology, architecture, energy, and insulation of CO2-friendly houses. However, using the Mann-Whitney procedure, there were no differences between the two approaches in terms of enhancing students' understanding of concepts related to CO2-friendly houses. The study described 6 categories of working mode that the students followed within the context of a WebQuest science investigation. It also identified 4 categories of problems the students face within that context (i.e., problems with regard to the actions/practices (working mode) to follow, the WebQuest material, the web-based platform tools, and student interaction within a group).
In comparison, the articles in C&E that studied WebQuest used an experimental design. A group of 103 sixth-grade students participating in the study was divided into three conditions: traditional instruction, traditional instruction with WebQuest, and WebQuest instruction outdoors. Their assignment was to learn more about resource recycling and classification, with the objectives of understanding that the use of natural resources not only improves the quality of life but also destroys the natural landscape and causes environmental pollution; learning the concept of resource recovery so as to form habits of resource recovery and classification; and understanding that the earth's resources are limited. The results of the study showed that using WebQuest in outdoor instruction positively influences students' learning performance.
Overall, the research methodology used across the articles in the seven selected journals was descriptive research involving WebQuest, with a focus on informational research regarding the concept or the use of WebQuest in education.
Conclusion
The present study examined the WebQuest research trends between 2005-2012 in seven selected journals. In this period, the WebQuest research trend involved quasi-experimental studies of how WebQuest can be used as a tool in teaching and learning, enhancing student potential and creating a positive learning environment. The analysis shows that all the research relied on descriptive methods and critical analysis, with less concern for interpretative methodology. Both types of research could attempt to understand the phenomenon of WebQuest application in order to address issues of the ability, reliability, and effectiveness of such technology in creating better students and teachers. The research also shows that there is no evidence that the studies were able to address those issues. Finally, this study suggests that future research should expand the data sources for more deliberate analysis. Future researchers are also encouraged to conduct similar studies, but with more current information and research data from various sources.
ISI, namely the Educational Technology Research and Development (ETRD), Turkish Online Journal of Education and Technology (TOJET), The Educational Technology and Society Journal (ETS), The Learning and Instruction Journal (L&I), Australasian Journal of Educational Technology (AET), and British Journal of Educational Technology, containing 906 articles, and Computers & Education (C&E), comprising 1240 articles. From the total of 3707 articles, only 10 articles were successfully extracted and identified as covering WebQuest research; these appeared in journals such as Educational Technology Research & Development.
Table 1: Number of research articles on WebQuest published in the 7 selected online journals. The issues highlighted from 2009-2012 were reading comprehension performance (2012), the motivation of pre-service elementary school teachers (2011), oral communication in English (2009), and WebQuest-Web Macaresi for teaching and learning (2010). Two articles from TOJET (2011 and 2012) clearly discussed the advantages of using WebQuest; the post-tests administered in that research found that WebQuest helps improve students' reading comprehension performance and has the potential to promote competency in reading comprehension.
Figure 1: Percentage and number of articles in selected journals. The study found that from 2005 until 2012, only 1 or 2 WebQuest-related articles were published across the seven selected journals in any year. For example, AET carried a study on a self-regulated WebQuest learning system for Chinese elementary schools, while in TOJET two articles studying WebQuest were found, in 2010 and 2012. ETS showed the same pattern, with WebQuest-related articles in 2009 and 2010.
Table 2: Distribution of WebQuest articles in selected journals
Table 3: Content analysis of WebQuest articles in selected journals. | 2019-05-05T13:07:59.784Z | 2013-11-26T00:00:00.000 |
"year": 2013,
"sha1": "e23b7b5e8728cb67aaa84151af2f76ed3aa2ad49",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.sbspro.2013.10.397",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "bc60688f3feba6e63c44031caff6cc90392e58d0",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
245503858 | pes2o/s2orc | v3-fos-license | A deep dive into the latest European Glaucoma Society and Asia-Pacific Glaucoma Society guidelines and their relevance to India
Glaucoma is the second leading cause of blindness in India. Despite advances in diagnosing and managing glaucoma, there is a lack of India-specific clinical guidelines on glaucoma. Ophthalmologists often refer to the European Glaucoma Society (EGS) and Asia-Pacific Glaucoma Society (APGS) guidelines. A group of glaucoma experts was convened to review the recently released EGS guideline (fifth edition) and the APGS guideline and explore their relevance to the Indian context. This review provides the salient features of the EGS and APGS guidelines and their utility in the Indian scenario. Glaucoma diagnosis should be based on visual acuity and refractive errors, slit-lamp examination, gonioscopy, tonometry, visual field (VF) testing, and clinical assessment of the optic nerve head, retinal nerve fiber layer (RNFL), and macula. The intraocular pressure target must be individualized to the eye and revised at every visit. Prostaglandin analogues are the most effective medications and are recommended as the first choice in open-angle glaucoma (OAG). In patients with cataract and primary angle-closure glaucoma (PACG), phacoemulsification alone or combined phacoemulsification and glaucoma surgery is recommended. Trabeculectomy augmented with antifibrotic agents is recommended as the initial surgical treatment for OAG. Laser peripheral iridotomy and surgery in combination with medical treatment should be considered in high-risk individuals aged <50 years. In phakic patients with PACG, phacoemulsification alone or combined phacoemulsification and glaucoma surgery is recommended. Visual acuity, VF testing, clinical assessment of the optic disc and RNFL, and tonometry are strongly recommended for monitoring glaucoma progression.
(11.08 million) was due to OAG. In 2010, George et al. [5] estimated that India has approximately 11.2 million persons aged 40 years and above with glaucoma, with 6.48 million and 2.54 million cases of primary OAG (POAG) and ACG, respectively. An additional 28.1 million people were estimated to have ocular hypertension, primary angle-closure suspects (PACSs), or PAC. One in eight persons aged 40 years or above has glaucoma or is at risk of glaucoma. Worldwide, the number of people with glaucoma is expected to increase to 111.8 million in 2040, especially among women and Asians. [6] Amid this growing prevalence of glaucoma, advances in the understanding of glaucoma and in technology have broadened the tools and options for better screening, diagnosis, monitoring, and management of glaucoma. However, in clinical practice, ophthalmologists look for evidence-based recommendations as a guide to patient management.
Several associations, including the European Glaucoma Society (EGS) and the Asia-Pacific Glaucoma Society (APGS), have released guidelines to support ophthalmologists in managing people with or at risk of glaucoma and to provide useful information to fellow ophthalmologists and trainees. [7,8] Ophthalmologists in India often refer to the EGS and APGS guidelines as a guide in their daily practice.
Purpose
In the milieu of the growing burden of glaucoma in India, a group of experts in glaucoma came together to review the recently released EGS guideline (fifth edition) and the APGS guideline and explore their relevance to practice in India.
This paper aims to capture the salient features of EGS and APGS guidelines and their utility in the Indian scenario.
Methods
A group of glaucoma experts reviewed the similarities and differences between EGS and APGS guidelines and voiced their opinion about their relevance to Indian practice.
Panel Opinion on APGS and EGS
The APGS and EGS guidelines have primarily covered the major clinical aspects of glaucoma management. There are subtle differences between the APGS and EGS.
The APGS advocates screening for glaucoma in at-risk patients, but the EGS does not. The APGS prefers opportunistic screening over universal screening. The APGS emphasizes reassessment of risk factors, IOP/target IOP, all basic investigations, adverse effects of medications, and quality of life (QoL) issues. It offers timings for follow-up depending on the extent of the damage. The APGS provides additional information on the calibration of Goldmann applanation tonometry (GAT). Compared to the EGS, the panellists found that the APGS guideline was more robust in detailing the procedures and providing diagnostic tips for using different tools. The format of each of the diagnostic tests provided in the APGS is practice-oriented. It provides notes on interpreting optical coherence tomography (OCT) and visual fields (VFs). It also includes the Glaucoma QoL-15 Questionnaire for assessing the QoL of the patient.
In the context of Indian practice, the experts' opinions on several aspects of diagnosis and management within the purview of the EGS and APGS are provided in this article. An overview of the experts' opinions is summarized in Box 1.
Risk factors
The panel observed and agreed that the risk factors mentioned in both guidelines are more or less universal and relevant to India as well [Box 1]. Besides the risk factors stated in the EGS guideline, hyperopia, included as a risk factor for PACG in the APGS, holds good for the Indian context too. [7,8]
Risk profiling
According to the Ocular Hypertension Treatment Study, central corneal thickness (CCT) is a significant predictor of future glaucoma (POAG) in patients with ocular hypertension. [9] The risk of glaucoma was threefold greater in eyes with a CCT of 555 µm or less compared with eyes that had a CCT of more than 588 µm. [10] In a cross-sectional study (n = 81,082) of a multiethnic population, CCT was used to clarify the findings of increased risk of glaucoma among Blacks and Hispanics.
Variation in CCT accounted for nearly 30% of the increased risk of glaucoma among Blacks and Hispanics. [11] In the Chennai Glaucoma Study (n = 7774), CCT positively correlated with IOP. The CCT among urban subjects was 18 µm thicker than in the rural population; the average CCT in the study was 511.4 µm. [12] Concerning the role of CCT in risk profiling, the panel agreed with the EGS recommendation that CCT may be a useful tool for profiling risk at baseline. In agreement with both the EGS and APGS, the panel also felt that IOP correction algorithms/nomograms based on CCT are not valid and must be avoided [Box 1]. The APGS has given CCT ranges of ≈520 ± 30 and ≈505 ± 30 µm for urban and rural Indians, respectively. [8]
Screening for glaucoma
Unquestionably, glaucoma screening is important; however, population-based screening has not been found to be cost effective. [13] Although assessing the cost effectiveness of screening for glaucoma is more relevant to the European setting, the EGS could not find substantiating evidence relating glaucoma screening to disease progression, visual loss, IOP, or patient-reported outcomes. [7] In accordance with the APGS guideline, the panellists also agreed that opportunistic glaucoma screening is a viable option in India and that universal screening for glaucoma alone may not be feasible. The panellists also opined that the clinical and cost effectiveness of screening, detection, or monitoring tests for glaucoma is pertinent to the Indian setting [Box 1]; however, there is no evidence from the Indian context relating the cost effectiveness of the available tests to improvement in outcomes.
Diagnosis
Patients with acute ACG may show signs and symptoms such as pain radiating from the eye, visual impairment, conjunctival hyperemia, and sometimes nausea and vomiting. In contrast, OAG is asymptomatic until it has progressed to an advanced stage; hence, nearly one-third of patients with OAG will be in an advanced or late stage in at least one eye at the time of diagnosis. [2] The available diagnostic methods include ophthalmoscopy, tonometry, imaging techniques, and perimetry. An overview of the panellists' opinions relating to the diagnosis of glaucoma is given in Box 1.
History taking
The complete initial glaucoma evaluation begins with history taking and eye examination. In agreement with the APGS, the panellists suggested that patients be questioned on their past and current medical factors, social factors, past ophthalmic history, socioeconomic factors, and family history of glaucoma. [8] To help ophthalmologists get the right picture of the clinical condition at baseline, the panellists reviewed both the EGS and APGS guidelines on history taking and endorsed the questionnaire proposed in the EGS for glaucoma patients [Table 1]. [7]
Initial optical examination
The panellists discussed the recommended tools for the initial assessment of glaucoma stated in the EGS and APGS and felt
Intraocular pressure and tonometry
The mean IOP in adult populations is estimated at 15-16 mmHg, with a standard deviation of nearly 3.0 mmHg. An IOP of ≥21 mmHg is considered elevated. [7] Patients with glaucoma tend to show diurnal variations; hence, IOP has to be evaluated at different times of the day in such patients. IOP diurnal variations are more common and extensive in glaucoma patients than in healthy individuals. IOP readings are to be taken at 3-h intervals between 7 am and 10 pm. [14] Patients with a high baseline IOP might benefit from diurnal IOP monitoring. Diurnal IOP fluctuation is also high in patients with pseudoexfoliative or exfoliative glaucoma. [7] GAT is the gold standard and the preferred tonometry for measuring IOP. [15] Errors due to the presence of high or irregular astigmatism warrant correction when measuring with GAT.
Antifibrotic agents in glaucoma management
• Mitomycin C is the drug of choice in glaucoma surgery
• Antifibrotics should be used judiciously
• Intraoperative mitomycin C can be used at 0.1-0.4 mg/mL for 1-3 min, depending on the condition of the disease
• Postoperatively, both 5-FU and mitomycin C can be used
• 5-FU concentration: 0.1 mL injection of a 50 mg/mL undiluted solution (pH 9), administered as a subconjunctival injection adjacent to but not into the bleb, with a small-caliber needle (e.g., a 30 G needle on an insulin syringe)
• Mitomycin C concentration: 0.1 mL injection of a 0.1-0.5 mg/mL solution, administered adjacent to but not into the bleb, with a small-caliber needle (e.g., a 30 G needle on an insulin syringe)
Cataract and glaucoma surgery
• In patients with cataract and PACG, phacoemulsification alone or combined phacoemulsification + glaucoma surgery is recommended. However, the decision should be made based on the disc and field damage and the status of the angle
Open-angle glaucoma
• Trabeculectomy augmented with antifibrotic agents is recommended as the initial surgical treatment for OAG, provided the ophthalmologist is familiar with the use of antifibrotics. Antifibrotics should be used with caution
• Alternatives like Ologen® should not be a preferred option, owing to a lack of evidence on their equality or superiority over trabeculectomy
Angle-closure disease
• Treatment of PACG depends on the spectrum of disease and the presence of cataract
• Laser peripheral iridotomy and surgery in combination with medical treatment should be considered in high-risk individuals below the age of 50 years, e.g., those with high hyperopia and patients requiring repeated pupil dilation for retinal disease
• Primary angle-closure suspect: LPI in high-risk individuals such as those with very high hyperopia, a family history, or those requiring pupil dilatation due to retinal disease
• PAC or PACG: laser peripheral iridotomy is the first line of treatment
• Visually significant cataract and PAC: laser peripheral iridotomy to manage PAC or PACG, with lens extraction considered based on the level and extent of angle closure and IOP
• There may be a risk of aqueous misdirection or surgical complications if cataract surgery is done without LPI in patients with cataract and PAC or PACG
• Ophthalmologists should be proficient in handling patients with cataract and PAC or PACG
• Prostaglandin analogues are the most effective medications and are usually recommended as the first choice in PACG
• In phakic patients with PACG, phacoemulsification alone or combined phacoemulsification + glaucoma surgery is recommended. However, the decision should be made based on the disc and field damage and the status of the angle
Monitoring glaucoma progression
• Despite a very low level of direct evidence, the panellists endorsed the EGS recommendations
• Keeping in view the goal of preventing vision impairment, visual acuity, VF testing, clinical assessment of the optic disc and RNFL, and tonometry are strongly recommended for monitoring glaucoma progression. However, OCT of the disc/RNFL/macula and repeat gonioscopy carry a weak recommendation
• In preperimetric glaucoma, OCT is used for monitoring disease progression. Visual fields are mandatory for diagnosing and monitoring the progression of glaucoma
• OCT is always complementary to visual field testing but cannot replace it in monitoring glaucoma progression
Abbreviations: CCT, central corneal thickness; CDR, cup-to-disc ratio; OAG, open-angle glaucoma; OCT, optical coherence tomography; IOP, intraocular pressure; LPI, laser peripheral iridotomy; ONH, optic nerve head; PGAs, prostaglandin analogues; RNFL, retinal nerve fiber layer; PAC, primary angle closure; PACG, primary angle-closure glaucoma.
Besides GAT, alternative tonometers such as self-tonometers, noncontact tonometry, the rebound tonometer (ICare®), and the hand-held tonometer (Tono-pen®) are also available. [7] A meta-analysis of six studies showed no significant difference in intraindividual IOP deviation between the ICare® PRO and GAT. [16] Results from another meta-analysis of studies comparing tonometers (dynamic contour tonometer, noncontact tonometer (NCT), ocular response analyzer, Ocuton S, hand-held applanation tonometer (HAT), rebound tonometer, transpalpebral tonometer, and Tono-pen®) with GAT were hampered by poor reporting from the studies; however, it concluded that the NCT and HAT were comparable to GAT. [17] The APGS exclusively described the risk factors that affect IOP measurement [Table 2], [8] which the panellists found relevant to the Indian context.
In agreement with the EGS and APGS, the panellists unanimously agreed that GAT is the gold standard for measuring IOP. They also noted that ophthalmologists in India use the Tono-pen® or ICare®, especially in children and in patients with scarred or edematous corneas. However, they opined that neither self-tonometry nor ICare® tonometry should replace GAT for clinical measurement. They also pointed out that despite the influence of CCT on GAT readings, CCT-adjusted IOP values should not be used in the diagnosis of glaucoma. The panellists endorsed the instructions for calibrating the tonometer described in detail in the APGS.
Gonioscopy
Gonioscopy is used to inspect the anterior chamber angle and forms an essential component in evaluating patients with, or suspected of having, glaucoma. [7] Gonioscopy offers information about the pathophysiology of glaucoma. [2] A wide range of instruments is available for ophthalmologists to explore the anterior chamber angle configuration; however, none of these methods may be considered a reliable substitute for slit-lamp gonioscopy. [18] The Van Herick technique is a helpful adjunct to gonioscopy for grading the depth of the anterior chamber; nevertheless, it is not a substitute for gonioscopy, as it fails to provide information about neovascularization, inflammation, or tumors in the angle. [19] After reviewing the EGS and APGS, the panellists found the APGS recommendation on gonioscopy more practical for the Indian context [Fig. 1]. [8]
Clinical evaluation of the optic nerve head and retinal nerve fiber layer
Funduscopic examination of the optic disc and the RNFL is key to glaucoma diagnosis. [2] OCT facilitates assessment of RNFL damage, while retinal tomography characterizes changes in optic nerve topography. Photographic images are warranted to assess static optic nerve damage and to detect glaucoma progression. Confocal scanning laser ophthalmoscopy, OCT, and scanning laser polarimetry are available for quantitative imaging of the ONH, retinal nerve fiber layer, and inner macular layers. [20] Devices based on these technologies help in glaucoma diagnosis and in detecting glaucomatous progression during follow-up. [7,20] OCT is a valuable clinical tool for glaucoma diagnosis and detection of progression; however, the quality of the diagnostic accuracy reviews on OCT for diagnosing glaucoma has not been encouraging. [21] Quantifying the size and shape of the optic disc, cup, and neuroretinal rim enables one to detect the onset of glaucoma and follow up on its progression. Damage to the disc can be assessed through the cup-to-disc ratio (CDR) or rim-to-disc ratio. [22] After reviewing the APGS and EGS, the panellists concluded that the diagnosis of glaucoma should not be made on OCT findings alone. As suggested by the EGS, the panellists also opined that ophthalmologists should focus on the neuroretinal rim: the rule Inferior > Superior > Nasal > Temporal can be used, with the caution that the pattern may be less conspicuous in larger discs. The APGS has described the RNFL assessment technique in detail. The visibility of the RNFL decreases with age, and it is more difficult to visualize in less pigmented fundi. Disc hemorrhage is a common finding but is often overlooked, so the panellists suggested that ophthalmologists specifically look for hemorrhages, especially in patients at high risk of progression. As vessel position changes, the panellists suggested that vessel positions be assessed in sequential photography. They also reiterated the EGS and APGS guidance on the need for sequential photography or imaging of ONH and RNFL features for detecting disease progression. Where photographs are unavailable, disc drawing with documentation of disc features was strongly endorsed by the panellists as an alternative. The EGS guideline recommends against the use of CDR to classify patients as glaucomatous, and the panellists concurred. The panellists suggested that ophthalmologists refer to the APGS for the range of normal vertical CDRs by disc size for Indians.
As per the guidelines, the panellists pointed out that ophthalmologists should be cautious while interpreting CDR in patients with different disc sizes.
Perimetry
VF testing using static automated perimetry is a vital tool for detecting and monitoring the visual function loss associated with glaucoma. It is also necessary for understanding visual loss relative to the level or future risk of functional disability; such an understanding helps ophthalmologists make clinical decisions. [23] Glaucoma staging is based on the severity of VF damage. The panellists reviewed the EGS guideline and agreed that a simple system based on mean deviation (MD) alone is acceptable in the Indian context. [7] Glaucomatous loss is defined as early, moderate, or advanced when the MD loss is ≤6 dB, 6-12 dB, or >12 dB, respectively. From a clinical perspective, the panellists found the EGS-based diagnostic strategy for an initial VF abnormality [Fig. 2] a valuable addition for Indian ophthalmologists. [7]
Management of glaucoma
Treatment of glaucoma involves medical therapy, laser, or surgery, depending on the underlying cause and stage of the disease. The primary goal of glaucoma therapy is to slow or prevent disease progression by adequately lowering the IOP. [24]
Setting a target IOP
To achieve a targeted IOP, aggressive treatment and frequent changes of therapy may be necessary. Setting a target IOP range is a dynamic concept that takes into account patient risk factors, life expectancy, and social circumstances. [24] Nevertheless, setting a target and applying it as a therapeutic guide remains a source of contention among ophthalmologists. In many randomized controlled trials and studies, a 'target' IOP was set as a percentage reduction or a threshold value. Another method is formula-based 'target' IOP setting, which is more time-consuming yet beneficial in addressing the risk factors of an individual patient. [25] The target IOP is the upper limit of IOP judged sufficient to slow the rate of VF deterioration and maintain quality of life. [7] Both the EGS and APGS guidelines discuss setting an IOP target. The panellists agreed that a target IOP should be personalized and constantly reevaluated in light of the stage of disease [Fig. 3]. [7]
Based on a detailed review and understanding of the APGS and EGS guidelines, the panellists made some observations relevant to Indian settings [Box 1]. They suggested that the IOP target must be individualized to the eye and revised at every visit. The target IOP is the upper limit of IOP judged to be compatible with this treatment goal. Documentation of the target IOP is at the discretion of the ophthalmologist. In early glaucoma, an IOP of 18-20 mmHg with a reduction of at least 20% may be sufficient. In moderate glaucoma, an IOP of 15-17 mmHg with a reduction of at least 30% may be required. In advanced glaucoma, a reduction of at least 40% may be required.
Topical therapy
IOP-lowering topical therapy remains the mainstay of glaucoma management. As mentioned in the EGS and APGS, the panellists also identified prostaglandin analogues (PGAs), β-blockers, α-adrenergic agonists, carbonic anhydrase inhibitors (CAIs), and pilocarpine as the commonly used classes of topical therapy for glaucoma in India [Table 2]. [2,7,24] PGAs have been shown to have a greater ability to reduce IOP than the other therapeutic classes prescribed for patients with glaucoma. In addition, PGAs are associated with greater persistence than other classes of medications. [26] After reviewing the EGS and APGS, the panellists agreed that treatment should start with monotherapy and viewed PGAs as the most effective medication and the first choice in OAG, provided cost is not a limiting factor. The panellists felt that PGAs should be the first choice, followed by nonselective beta blockers, alpha agonists, Rho kinase inhibitors, selective beta blockers, and topical CAIs.
Use of lasers in glaucoma
Lasers have revolutionized the treatment of glaucoma. Owing to the simplicity of laser procedures, most ophthalmologists routinely use lasers at a fundamental level. Neodymium:yttrium-aluminium-garnet (Nd:YAG) laser peripheral iridotomy (LPI) is the most common procedure for angle closure. Laser trabeculoplasty (LTP), gonioplasty/iridoplasty, diode laser cyclophotocoagulation and endocyclophotocoagulation, laser suture lysis, bleb remodelling, iridolenticular synechiolysis, and Nd:YAG laser hyaloidotomy are the other procedures currently in use. [27]
Laser peripheral iridotomy
LPI is indicated in angle-closure disease (high-risk PACS, PAC, PACG) and in the treatment of AAC with a suspected pupillary block or plateau iris mechanism. [7] LPI is contraindicated in neovascular glaucoma and in eyes with angle closure due to a nonpupillary block mechanism. [27] After reviewing the guidelines, the panellists opined that laser iridotomy is usually possible and surgical iridotomy is rarely required.
Laser trabeculoplasty
LTP is indicated for lowering IOP in POAG, pseudoexfoliative glaucoma (PXFG), and/or pigmentary dispersion glaucoma (PDG), and in high-risk ocular hypertension (OHT), either as initial treatment or as an add-on or replacement treatment. [7] LTP is contraindicated in the event of inadequate visualization of the angle structures and in glaucoma associated with uveitis, trauma, or angle dysgenesis. It is relatively contraindicated in eyes with normal-tension glaucoma, aphakia, and PACG with PAS. [27] The Laser in Glaucoma and Ocular Hypertension (LiGHT) trial supports selective laser trabeculoplasty (SLT) as a first-line treatment for OAG and ocular hypertension; SLT as the first treatment was more cost-effective than eye drops. [28] The panellists reviewed the laser options provided in the EGS and APGS and agreed that argon laser trabeculoplasty (ALT) and SLT have similar IOP-reducing effects. However, they stressed that SLT is the one commonly practiced in India. They added that the success of SLT depends on the pigmentation of the trabecular meshwork.
Thermal laser peripheral iridoplasty
Thermal laser peripheral iridoplasty (TLPI) may be considered in those with plateau iris syndrome who have persistent angle closure and elevated IOP despite a patent peripheral iridotomy. However, its efficacy in reducing IOP is limited. [7] TLPI is contraindicated in the event of nonvisibility of the iris due to corneal edema or opacity, or a flat anterior chamber. [27] The panellists reviewed the EGS and APGS guidelines and opined that TLPI has limited IOP-lowering efficacy. They also added that once-daily pilocarpine could be used as an alternative to TLPI in plateau iris syndrome with a patent peripheral iridotomy.
Cyclodestructive procedures
Cyclodestructive procedures are considered for treating refractory glaucoma, i.e., glaucoma uncontrolled despite previous filtration surgery and/or laser treatment and/or maximum tolerated medical treatment. [27] The available cyclodestructive modalities are lasers (endoscopic, transpupillary, and transscleral cyclophotocoagulation), ultrasound, and the cryoprobe. [7] Micropulse transscleral cyclophotocoagulation is replacing continuous-mode diode cyclophotocoagulation and causes fewer complications; however, there are concerns about unexplained visual loss in some eyes. [7] After reviewing the EGS and APGS guidelines, the panellists concluded that transscleral cyclophotocoagulation is the most commonly used method in India. As ultrasound is not common in India, they made no recommendation on it.
Incisional surgery
Ophthalmologists resort to surgery when nonsurgical treatment options fail to lower the IOP to the target pressure or cause intolerable side effects. [2] However, it is also recommended in some patients whose glaucoma is relatively nonprogressive. [7] Primary congenital glaucoma is also treated surgically. Complicated glaucoma may require additional therapy in addition to trabeculectomy; cyclodestructive procedures and long-tube implants are more commonly used in repeat surgery. The outcome of surgery can be evaluated in terms of IOP lowering in the absence of IOP-lowering medications. The commonly preferred techniques for penetrating glaucoma surgery are trabeculectomy and trabeculotomy with goniotomy. [7] The advantage of trabeculectomy is that it is associated with lower long-term postoperative IOP and requires fewer postoperative IOP-lowering medications. However, it also has disadvantages, including a higher rate of cataract formation, postoperative bleb complications, and a higher risk of complications from postoperative hypotony (e.g., choroidal detachment). [7] Long-tube glaucoma drainage devices are generally reserved for patients with risk factors for a poor result with trabeculectomy with antifibrotics, although recent trials have established their potential role as a primary surgical procedure in select cases. [7] The ab interno non-bleb-forming procedures are termed minimally invasive glaucoma surgery (MIGS) and can be combined with phacoemulsification. MIGS procedures are suitable for patients with mild to moderate glaucoma, but there is currently insufficient evidence to support their superiority or equivalence relative to trabeculectomy. [7] The panellists discussed the EGS and APGS guidelines and concluded that trabeculectomy in adults and trabeculotomy-trabeculectomy in congenital glaucoma are the commonly preferred techniques for penetrating glaucoma surgery. They added that nonpenetrating glaucoma surgery is not helpful in the Indian context and agreed that MIGS is not widely available in India.
Role of antifibrotic agents in glaucoma management
Antifibrotics such as 5-fluorouracil (5-FU) and mitomycin-C are generally used in patients undergoing glaucoma filtration surgery to reduce postoperative conjunctival scarring and improve drainage. [7] General precautions apply to the use of antifibrotic agents: they carry a potential risk of postoperative infection, and their use requires careful surgical technique to prevent complications. The agent should not enter the eye, and contact with the cut edge of the conjunctival flap should be avoided. The usual precautions for the use and disposal of cytotoxic substances should be observed. [7] After reviewing the use of adjunctive agents in glaucoma surgery in the guidelines, the panellists put forth a series of suggestions on their intraoperative and postoperative use [Box 1]. They felt that mitomycin-C is the drug of choice in glaucoma surgery and that antifibrotics should be used judiciously. Intraoperatively, mitomycin-C can be used at 0.1-0.5 mg/mL for 1-3 min, depending on the condition of the disease. Postoperatively, both 5-FU and mitomycin-C can be used.
Cataract and glaucoma surgery
In the Indian setting, glaucoma is most often detected in cataract screening camps. Optimizing the management of coexisting glaucoma and cataract is challenging because one has to achieve glaucoma control, accomplish visual improvement, and minimize surgical complications. [29] Cataract and glaucoma surgery can be combined or performed sequentially. Cataract surgery alone is of limited benefit in lowering IOP in OAG and is not recommended for that purpose. Clear lens extraction is an option in PACG and in PAC with high IOP. Combined surgery allows a more significant IOP reduction, although the success rate of combined phacoemulsification and filtration surgery is lower than that of filtration surgery alone. The comparative evidence on the outcomes of sequential versus combined cataract and glaucoma surgery is, however, insufficient. In the context of the guidelines, the panellists commented that in phakic patients with PACG, phacoemulsification alone or combined phacoemulsification plus glaucoma surgery could be considered; the decision should be based on the disc and field damage and the status of the angle.
Treatment algorithms
Topical therapy in glaucoma
Topical treatment for glaucoma is initiated with monotherapy to minimize side effects. After reviewing the EGS and APGS guidelines, the experts agreed that the different drug classes lower IOP to different degrees [Table 3]. [7] PGAs are the first line of therapy, largely on the basis of their efficacy, once-daily dosing, and safety profile.
In accordance with the EGS and APGS guidelines, the panellists suggested that if the initial therapy fails to achieve the target IOP or is not tolerated, the patient should be switched to another monotherapy or LTP should be considered. If, on the other hand, monotherapy is well tolerated and effective but the desired IOP reduction is not achieved, the addition of a second drug of a different class should be considered, as per the APGS guideline [Fig. 4]. [7,8] The panel's expert opinion on the management of OAG with respect to the EGS and APGS guidelines is summarized in Box 1.
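The escalation logic just described can be summarized as a small decision function. This is a hedged sketch of the flow described in the text (and depicted in Fig. 4); the function and argument names are illustrative and do not come from the guidelines.

```r
# Hypothetical encoding of the monotherapy escalation logic.
next_step <- function(tolerated, iop_lowered, target_reached) {
  if (!tolerated || !iop_lowered) {
    "switch to another monotherapy or consider LTP"
  } else if (!target_reached) {
    "add a second drug of a different class"
  } else {
    "continue current therapy and monitor"
  }
}

next_step(tolerated = TRUE, iop_lowered = TRUE, target_reached = FALSE)
#> "add a second drug of a different class"
```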
Angle-closure glaucoma
Angle-closure glaucoma is defined by the presence of iridotrabecular contact (ITC > 180°). PACS is an angle in which 180-270° of the posterior trabecular meshwork cannot be seen gonioscopically. [8] Angle-closure disease is diagnosed by gonioscopy, the gold standard, and it is essential to rule out secondary causes. As provocative tests are of little diagnostic value, they can be avoided. Diagnostic mydriasis is generally safe and can be used to evaluate the retina as long as the angle is reasonably wide. [7] Patients with chronic ACG have to be evaluated for the underlying pathophysiological mechanism. In the event of a pupillary block, medication along with LPI should be considered; in the case of plateau iris, medical therapy and LPI can likewise be considered. Iridoplasty should be performed only if the angle remains closed and the IOP remains high even after LPI; alternatively, medical management with pilocarpine can be considered. Eventually, trabeculectomy (filtration surgery) may also be considered in either pupillary block or plateau iris, while lens-induced blockage warrants lens extraction. [7] In an acute primary angle-closure attack, the treatment is targeted at lowering aqueous humor production, reopening the angle, and reducing inflammation. Topical therapy with beta-blockers and alpha-2 agonists or systemic treatment with acetazolamide/mannitol helps reduce aqueous humor production, pilocarpine can help reopen the angle, and steroids take care of the inflammation.
The experts endorsed the EGS guideline on the laser or surgical approach to an acute primary angle-closure attack [Fig. 6]. [7] The panel's expert opinion on the management of ACG in the context of the EGS and APGS guidelines is summarized in Box 1.
Monitoring glaucoma progression
The panellists endorsed the EGS guideline's strong recommendation on monitoring glaucoma. Keeping in view the goal of preventing vision impairment, they agreed to use visual acuity, VF testing, clinical assessment of the optic disc and RNFL, and tonometry to monitor glaucoma progression. Repeat gonioscopy and OCT of the disc/RNFL/macula may not be helpful, as OCT analysis cannot replace VF analysis for assessing progression. [7] The panellists also concurred that VF testing is mandatory not only for diagnosing but also for monitoring the progression of glaucoma. They opined that OCT should complement VF testing and cannot replace it in monitoring progression; in preperimetric glaucoma, however, OCT is used for monitoring disease progression. [7]
Summary and key expert opinions
After reviewing and discussing the EGS and APGS guidelines, the panellists issued recommendations that are practical for the Indian context [Box 1].
Conclusion
This review provides an expert-based assessment of the updated EGS and APGS guidelines from an Indian perspective. While the EGS guidelines are largely applicable to the Indian context, the APGS guidelines are closer to Indian practice, especially for angle-closure disease. This highlights the impact of health care resources and disease prevalence on the relevance of global glaucoma guidelines in the Indian context.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest. | 2021-12-24T06:17:02.778Z | 2021-12-23T00:00:00.000 | {
"year": 2021,
"sha1": "1c544ff1c7a9dfa2a2cded8515478a7164c23aab",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/ijo.ijo_1762_21",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dc5a8beaed0184e40fb02fe61141054aec5035fc",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
267268145 | pes2o/s2orc | v3-fos-license | Impact of air humidity on the tenacity of different agents in bioaerosols
Despite the variety of pathogens that are transmitted via the airborne route, few data are available on the factors that influence the tenacity of airborne pathogens. In order to better understand and thus control airborne infections, knowledge of these factors is important. In this study, three agents, S. aureus, G. stearothermophilus spores, and the MS2 bacteriophage, were aerosolized at relative humidities (RH) varying between 30% and 70%. Air samples were then analyzed to determine the concentration of the agents. S. aureus was found to have a significantly lower survival rate in the aerosol at RH above 60%. It showed the lowest recovery rates of the three agents, ranging from 0.13% at approximately 70% RH to 4.39% at 30% RH. G. stearothermophilus spores showed the highest tenacity, with recovery rates ranging from 41.85% to 61.73% and little effect of RH. For the MS2 bacteriophage, a significantly lower tenacity in the aerosol was observed, with a recovery rate of 4.24% at an intermediate RH of approximately 50%. The results of this study confirm the significant influence of RH on the tenacity of airborne microorganisms, depending on the specific agent. These data show that the behavior of microorganisms in bioaerosols varies under different environmental conditions.
Introduction
It has long been known that a wide variety of infective agents are transmitted via the airborne route [1]. Aerosols as a significant transmission medium for pathogens became a focus of worldwide attention with the COVID-19 pandemic, in which airborne transmission of the virus plays a key role in spreading the disease [2]. Infectious severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) [3], as well as viral RNA [4], has been detected in air samples. However, few data are available on the factors affecting the tenacity of airborne pathogens [5]. Sedimentation of large particles, temperature, humidity, and the UV-C component of solar radiation probably influence the infectivity of airborne microbes in the environment [6-9]. Indoors, relative humidity in particular is subject to large variations, while other environmental conditions such as temperature or UV radiation remain rather constant [10]. The objective of this study was therefore to investigate and compare the influence of relative humidity on a representative of bacteria, of spores, and of viruses under standardized experimental conditions.
The aerosolization process and the detection methods within this study are well standardized, so all data are highly comparable.
We used non- or low-pathogenic agents as surrogate microorganisms because this greatly increased experimental safety and facilitated the generation of large data sets [11], especially when working with aerosols. Staphylococcus aureus (S. aureus) was chosen as a surrogate primarily for MRSA, but the results may also be extrapolated to other related gram-positive airborne pathogens, e.g., Streptococcus pneumoniae. For MRSA, airborne transmission in hospital settings has long been presumed [12], and MRSA has been detected in air samples taken in hospitals [13,14], veterinary clinics [15], and livestock farms [16]. Bacterial spores are very resistant; they can persist for a very long time under unfavorable environmental conditions and can also be found in the air [17,18]. Geobacillus stearothermophilus (G. stearothermophilus) was used in this study as a surrogate for spores of gram-positive, aerobic or facultatively anaerobic pathogens such as Bacillus anthracis. The MS2 bacteriophage served as a surrogate for non-enveloped pathogenic viruses because of its characteristics, including a size of approximately 23-30 nm, a single-stranded RNA genome, and the absence of a complex tail structure [19]. It is used as a surrogate for viruses causing enteric and respiratory diseases [20]. Non-enveloped viruses for which airborne spread is an important transmission path include norovirus, adenovirus, and rotavirus in human medicine [21,22] and the foot-and-mouth disease virus in veterinary medicine [23]. This is the first study that allows a direct comparison of the airborne tenacity of three surrogates with entirely different properties at different RH, since all three were aerosolized using the same methodology.
Experimental design
For this experimental series, we used Staphylococcus aureus DSM-799 purchased from the German Collection of Microorganisms and Cell Cultures GmbH, as well as G. stearothermophilus DSM-22 and the MS2 bacteriophage DSM-13767 provided by the Robert Koch Institute.
Characteristics of the aerosol chamber
The aerosol chamber, made of stainless steel, has a volume of approximately 7 m³ and is used to generate defined bioaerosols (particle number and particle size) under defined, adjustable climatic conditions (airflow, RH, temperature). Fig 1 shows a schematic drawing of the chamber, adapted from Rosen et al. (2018); technical details were also previously published by Rosen et al. (2018) [24]. For all experiments, the airflow was set to 100 m³/h and the temperature to 24˚C. The specific climatic conditions were set in the chamber supply air. All parameters were measured in the exhaust air and automatically documented every minute. Deviations of ±5% in temperature and airflow were accepted.
A perfusion pump transported the bacterial, spore, and viral suspensions at a rate of 36 ml/h to an ultrasonic nebulizer (SONO-TEK Corporation, Milton, USA), in which droplet aerosols were generated. We used an ultrasonic nozzle with a conical nose for a divergent spray pattern, operating at a frequency of 120 kHz; the average diameter of the droplets generated by the device is initially 18 μm, after which the droplet size in the aerosol changes very rapidly as the droplets shrink [25]. The collection of air samples began only after the agent had been aerosolized in the chamber for 8 min, allowing the formation of a stable bioaerosol within the whole chamber volume.
A fan in the ceiling of the chamber dispersed the aerosol. The aerosol chamber is located in a building for experimental studies, where the ambient air is standardized before entering: it is passed through an E11 filter according to DIN EN 1822 and an activated charcoal filter before entering the air supply duct in the ceiling of the aerosol chamber. The E11 filter removes at least 95% of airborne particles with a size of 0.3 μm and larger. This was installed because a cattle and pig barn is in the immediate vicinity of the experimental building housing the aerosol chamber, so dust-intensive work can at times increase the particle load of the ambient air; the supply air was therefore standardized. The activated charcoal filter was installed to absorb any odors, such as ammonia, from the surrounding cattle and pig stables. The airborne particle size distribution and the particle number were monitored at one-minute intervals during the 30-min experimental period using an aerosol spectrometer (Grimm, model 1.109, GRIMM Aerosol Technik Ainring GmbH & Co. KG, Germany), which also calculated the aerodynamic diameter. The spectrometer probe was positioned at a height of 1.30 m, approximately between the fan and the exhaust air outlet.
Air sampling
Air sampling was performed with three AGI-30 impingers connected in parallel (AGI-30; Neubert Glas GbR, Geschwenda, Germany; VDI Norm 4252-3) at different heights (0.3 m, 0.8 m, 1.3 m) for each experimental replicate. For the air samples from S. aureus aerosols, the impingers were filled with 30 ml of sterile phosphate-buffered saline (PBS; Oxoid, Wesel, Germany). For the trials with G. stearothermophilus spores and the MS2 bacteriophage, 30 ml of sterile deionized water was used. The sampling time for each experiment was 30 min. The impingers were connected to vacuum pumps (Leybold S4B; Leybold, Cologne, Germany and Edwards RV3; Edwards, Feldkirchen, Germany). The airflow was approximately 12.5 l/min and was monitored with a rotameter (Analyt, Müllheim, Germany).
Preparation of the bacterial and viral suspensions
S. aureus suspensions were prepared individually for each experimental replicate. S. aureus DSM-799 was incubated overnight in Mueller Hinton broth (Oxoid, Wesel, Germany) with 6.5% NaCl added (MHB+) in a shaking incubator at 37˚C and 200 rpm (Multitron, Infors HT, Germany). The following day, 5 ml of this suspension was added to 100 ml MHB+ and incubated for 8 h in the shaking incubator. For each trial, 100 μl of this culture was plated on one plate of blood base agar (Blood Agar Base No. 2, Oxoid, Wesel, Germany) and incubated aerobically for 8 h at 37˚C to achieve the exponential growth phase. All colonies on one plate were removed with a plate spreader after adding 3 ml of PBS and homogenized on a vortex mixer for 3 min with glass beads. The suspension was adjusted to the target concentration of 10⁹ colony-forming units (cfu)/ml using McFarland standard measurement as well as measurement of the optical density. Concentrations ranging from 5×10⁸ to 5×10⁹ cfu/ml were accepted for the validity of the experiment.
Cryopreserved vegetative G. stearothermophilus DSM-22 bacteria were thawed and streaked onto four previously prepared spore agar plates. The ingredients of 1 l of spore agar are: 2 g meat extract, 3 g peptone, 0.5 g glucose, 0.36 g CaCl₂, 0.15 g MgSO₄·2H₂O, 0.03 g MgSO₄·H₂O, and 15 g agar, adjusted to a pH of 7.0 and dissolved at 100˚C. After streaking, the plates were incubated for 11 d at 60˚C. On days 4 and 7, the bacteria were removed from the agar plates using an inoculation loop, and the bacteria from each plate were split onto two fresh spore agar plates. From day 11 to day 15, the agar plates were stored at room temperature, resulting in sporulation of the bacteria. The spores were then suspended in 5 ml of sterile deionized water using a plate spreader. Three washing steps ensued, using centrifugation (2700 g for 15 min) and resuspension. Finally, the spores were suspended in 70% ethanol and stored at 4˚C. The concentration of the resulting stock G. stearothermophilus spore suspension was determined by plating serial dilutions on tryptone soy agar (TSA; Oxoid, Wesel, Germany) and incubating at 60˚C for 24 h. For each experimental replicate, spore solutions for aerosolization were diluted from the stock solution in sterile deionized water to the target concentration of 10⁷ cfu/ml. Concentrations ranging from 5×10⁶ to 5×10⁷ cfu/ml were accepted for the validity of the experiment.
For the stock solution of the MS2 bacteriophage, three colonies of the susceptible host strain E. coli DSM-5695 were incubated overnight in 10 ml of Luria/Miller broth (LB; Roth, Karlsruhe, Germany) at 37˚C. The following day, 200 μl of the overnight culture and 100 μl of MS2 bacteriophage suspension were mixed and incubated for 10 min at room temperature. Then 5.5 ml of soft agar was added. The soft agar was prepared by adding bacteriological agar (Agar No. 1; Oxoid, Wesel, Germany) to LB medium at a ratio of 1:200 and boiling the solution three times. The suspension was then transferred to an LB agar plate (Roth, Karlsruhe, Germany) and incubated at 37˚C overnight. The following day, the soft agar was removed using a plate spreader and covered with SM buffer (5.8 g NaCl, 2.0 g MgSO₄·7H₂O, 50 ml 1 M Tris-HCl in 1 liter H₂O, pH 7.4) in an Erlenmeyer flask. The mixture was homogenized on a magnetic stirrer (Phoenix Instruments, Garbsen, Germany) for 4 h at room temperature, then transferred to a tube and centrifuged (16,000 g, 30 min). Finally, the supernatant was filtered using 0.22 μm sterile filters (Merck, Darmstadt, Germany) and stored at 4˚C until use. For each experimental replicate, phage solutions for aerosolization were diluted from the stock solution in sterile deionized water to the target concentration of 10⁸ plaque-forming units (pfu)/ml. Concentrations ranging from 5×10⁷ to 5×10⁸ pfu/ml were accepted for the validity of the experiment.
Quantification of the suspensions and air samples
The bacterial and spore suspensions used for aerosolization and the air samples from the bacteria and spore experiments were quantified by streaking out triplicates of 100 μl of the impinger collection fluid on blood base agar for S. aureus and on TSA for G. stearothermophilus spores. The agar plates were incubated for 24 h at 37˚C for S. aureus and at 60˚C for G. stearothermophilus spores. Cfu were then counted and expressed as cfu per ml of impinger fluid and extrapolated to the final volume of collection fluid in the impinger, which was determined using serological pipettes when transferred into sterile tubes and varied from 18 to 28.6 ml after the experiments. From this, the cfu in the sampled air volume were calculated and finally expressed as cfu per m³ of air.
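As an illustration of this calculation chain, the following R sketch works through one hypothetical air sample. Only the plated volume (100 μl), the sampling time (30 min), and the nominal impinger airflow (12.5 l/min) come from the Methods; the plate counts and fluid volume are invented example values.

```r
# Hypothetical worked example of the cfu-per-m^3 calculation.
plate_counts   <- c(42, 38, 45)             # invented triplicate cfu counts
cfu_per_ml     <- mean(plate_counts) / 0.1  # 100 ul plated per plate
fluid_ml       <- 24.5                      # invented remaining impinger fluid
cfu_collected  <- cfu_per_ml * fluid_ml     # total cfu captured by the impinger
air_volume_m3  <- 12.5 * 30 / 1000          # 12.5 l/min x 30 min = 0.375 m^3
cfu_per_m3_air <- cfu_collected / air_volume_m3
cfu_per_m3_air
#> ~27222 cfu per m^3 of air
```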
For the quantification of the MS2 bacteriophage, a soft-agar overlay technique was used. Serial dilutions of the phage suspensions and air samples were prepared, and 100 μl was incubated in triplicate with 100 μl of an overnight culture of E. coli DSM-5695 for 10 min at room temperature. Afterward, the fluid was mixed with 5.5 ml of soft agar, transferred to an LB agar plate, and incubated for 24 h at 37˚C. Plaques were counted the following day.
Statistical analysis
All statistical analyses were performed using R version 4.0.2 (R Foundation, Vienna). Since the bacterial counts were log-normally distributed, we used the geometric mean for averaging. For statistical analysis, we used a mixed-effects count regression; due to overdispersion, we chose a negative binomial distribution. For the aerosol chamber experiments, a random effect was used for each of the 72 experiments to account for repeated measures, as there were three AGI-30 impinger measurements per experiment. We checked for interactions using interaction plots and found an interaction between targeted humidity and surrogate and an interaction between impinger height and surrogate; both were included in the model as fixed effects. Post-hoc comparisons between the surrogates were adjusted for multiple comparisons using the Bonferroni method. The mixed model was fitted using the R package lme4 (version 1.1-23). Estimated marginal means and multiple-comparison post-hoc tests were computed using the emmeans R package (version 1.4.6). Results are reported with 95% confidence intervals, and a significance threshold of 0.05 was used. For visualization, we fitted a restricted cubic spline with the actually measured humidity as a continuous variable using the package mgcv (version 1.8-33).
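A sketch of how such a model could be specified with the named packages is shown below. The data frame and its column names are hypothetical; the exact model specification used by the authors is not given in the paper, so this is only a plausible reconstruction from their description.

```r
library(lme4)     # glmer.nb for the negative binomial mixed model
library(emmeans)  # estimated marginal means and post-hoc tests
library(mgcv)     # gam() for the spline visualization

# `dat` is a hypothetical data frame with one row per impinger sample:
#   count       - cfu or pfu per m^3 (integer)
#   rh_target   - factor: targeted RH (30/40/50/60/70%)
#   surrogate   - factor: S. aureus / G. stearothermophilus / MS2
#   height      - factor: impinger height (0.3/0.8/1.3 m)
#   trial       - factor: experiment ID (72 levels, random effect)
#   rh_measured - numeric: actually measured RH

fit <- glmer.nb(count ~ rh_target * surrogate + height * surrogate +
                  (1 | trial), data = dat)

# Pairwise RH comparisons within each surrogate, Bonferroni-adjusted,
# on the response scale
emm <- emmeans(fit, ~ rh_target | surrogate, type = "response")
pairs(emm, adjust = "bonferroni")

# Smooth over measured humidity; bs = "cr" gives a penalized cubic
# regression spline, here standing in for the restricted cubic spline
spline_fit <- gam(count ~ s(rh_measured, bs = "cr"), family = nb(),
                  data = subset(dat, surrogate == "MS2"))
```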
Results
A negative binomial model was used to compare the concentrations of the surrogates per m³ of air for the different targeted RH. All data, including the raw measurement points, are shown in Fig 2; the exact values of all air samples along with their corresponding humidity can be found in S1 Table. For S. aureus, the concentration decreased significantly between each RH level except between 60% and 70% RH, where no significant difference was found (p = .695). The largest differences were found when comparing the concentrations at approximately 30% RH to those at 60% and 70% RH, with 25.9-fold (p < .001) and 20.1-fold (p < .001) higher concentrations at approximately 30% RH, respectively. The concentration of G. stearothermophilus spores detected in the air samples was comparatively stable across the different RH, with slightly lower concentrations at the lowest (30%) and highest (70%) RH ranges. Significant differences (p < .05) were therefore observed only for the comparisons of 30% to 40%, 30% to 60%, and 40% to 70%. Concentrations were 2.0-fold higher at 40% than at 30%, which represented the largest deviation.
For the MS2 bacteriophage, the concentration dropped sharply to approximately 10⁶ pfu per m³ of air around 50% RH, while at all other RH the concentration was stable at approximately 10⁷ pfu per m³ of air. Significant differences (p < .001) were detected when comparing the pfu/m³ at 50% RH to all other RH ranges. The largest deviations were observed when comparing the concentrations per m³ at 30% and 40% RH to that at 50% RH, with 11.3-fold (p < .001) and 8.3-fold (p < .001) higher concentrations, respectively. Another significant but small difference was detected between 30% RH and 60% RH, with a 1.9-fold higher concentration at 30% (p = .003).
Since the actual humidity in the aerosol chamber was also measured for each trial, and a continuous effect of humidity on agent recovery from the aerosol was expected, a restricted cubic spline was fitted (Fig 2).
When comparing the concentrations of the surrogates collected at the different impinger heights, no significant differences were observed for G. stearothermophilus spores or the MS2 bacteriophage. However, the concentration of S. aureus in the lowest impinger (0.3 m) was 1.22 times higher than in the impinger at 0.8 m (p = .003) and 1.44 times higher than in the impinger at 1.3 m (p < .001).
Recovery rates from the aerosol were calculated by dividing the concentration of surrogates detected in the aerosol by the theoretical concentration in the aerosol chamber, which was calculated from the concentration of the suspensions, the feed rate of the perfusion pump, and the airflow measured in the aerosol chamber. The recovery rate of S. aureus decreased steadily with increasing RH; it was highest at around 30% RH (4.39%) and lowest at 70% RH (0.13%). For G. stearothermophilus spores, the recovery rates were higher than for S. aureus at every RH level, ranging from 41.85% at 30% RH to 61.73% at 50% RH. The MS2 bacteriophage showed intermediate recovery rates compared to the bacterial surrogates, ranging from 4.24% at 50% RH to a maximum of 41.57% at 30% RH (Table 1).
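The following R sketch reproduces this recovery-rate calculation for one case. The suspension concentration, pump feed rate, and chamber airflow come from the Methods; the measured air concentration is a back-calculated example chosen to match the reported S. aureus recovery at about 30% RH, not a value taken from the paper.

```r
# Theoretical steady-state aerosol concentration and recovery rate.
suspension_cfu_ml  <- 1e9   # S. aureus target suspension concentration
pump_ml_per_h      <- 36    # perfusion pump feed rate
airflow_m3_per_h   <- 100   # chamber airflow
theoretical_cfu_m3 <- suspension_cfu_ml * pump_ml_per_h / airflow_m3_per_h
theoretical_cfu_m3
#> 3.6e8 cfu per m^3 if no losses occurred

measured_cfu_m3 <- 1.58e7   # illustrative air-sample result
recovery_pct    <- 100 * measured_cfu_m3 / theoretical_cfu_m3
recovery_pct
#> ~4.39 (the S. aureus recovery reported at ~30% RH)
```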
The mean aerodynamic diameter of the aerosolized particles was 3.4 μm in the experiments with S. aureus, 1.8 μm in the experiments with G. stearothermophilus spores, and 1.4 μm in the experiments with the MS2 bacteriophage.
Discussion
Knowledge of the tenacity of aerosolized microorganisms under different environmental conditions is useful for assessing the risk of airborne pathogen transmission. However, Tang (2009) pointed out that studies on the tenacity of airborne organisms vary widely in their methodologies, making it difficult to compare results [6].
In the aerosol chamber used for our experiments, microorganism suspensions of defined concentrations are continuously aerosolized into a large volume of air under stable, defined environmental conditions. This dynamic aerosol appears to be advantageous over the use of rotating drums or chambers as described by Goldberg et al. (1958) and recently applied by van Doremalen et al. (2020) [26,27]. The system used here appears to be closer to reality when one thinks of a person or animal in a room or barn continuously shedding a pathogen via the airborne route.
To the best of our knowledge, this is the first laboratory study to investigate the influence of RH on the tenacity of experimentally aerosolized S. aureus. We observed the highest recovery rates below an RH of 50%. Madsen et al. (2018) did not find a significant effect of RH on the concentration of airborne S. aureus in indoor air [28]. However, several studies do indicate a lower tenacity of S. aureus at high RH. Wilkoff et al. (1969) exposed textiles to airborne S. aureus at 35% and 78% RH and reported a substantially shorter persistence at 78% RH [29]. Zukeran et al. (2017) investigated the influence of RH on the inactivation of S. aureus in an electrostatic precipitator and reported a significantly lower survival rate at RH above 60% [30]. In contrast, bacteria other than S. aureus, such as gram-negative microorganisms like Pseudomonas or E. coli, showed higher survival rates in air at high humidities in previous studies [31,32]. We think the difference in cell walls between gram-positive and gram-negative bacteria may account for this phenomenon. Gram-positive bacteria possess cell walls with a thick peptidoglycan layer, rendering them less adaptable to fluctuations in water availability. In high-humidity environments, water infiltrates their cells, and they may have difficulty expelling the excess; consequently, some bacteria may swell and eventually burst under these conditions. This highlights the potentially highly variable and individual behavior of different airborne bacteria in the environment. The phenomenon of differential detection of pathogens in bioaerosols under varying environmental conditions was also recently demonstrated in a field study [33]. This underlines the importance of documenting the climatic conditions during every air sampling for the proper interpretation of results, both in experimental settings and during field investigations.
In our study, which is the first to systematically analyze the tenacity of aerosolized G. stearothermophilus spores at different RH, high recovery rates (from 41.85% to 61.73%) were measured over the whole range of RH, with only slightly lower detection rates at the extremes. The influence of RH on tenacity in air was low compared to the other surrogates, confirming the high tenacity of spores in the aerosolized state. The cell wall of bacterial spores is complex and multilayered, and water activity has no relevant impact on their ability to survive. This can be an advantage for aerosol studies in which methods, devices, or other factors of microbial air sampling are to be tested, as a significant influencing factor is thereby reduced when spores are used. Spores of Bacillus subtilis subsp. niger have been widely used as an indicator in airborne survival studies with bacteria [34,35] and viruses [36] and to validate methods [37,38], owing to their extremely high stability in aerosols demonstrated in several studies.
Concerning the MS2 bacteriophage, the significantly lower recovery rates at an intermediate RH of approximately 50% may appear unexpected. However, Verreault et al. (2008) pointed out that the impact of RH on viral infectivity should be determined for each virus individually, as viruses have their maximum tenacity at either low, intermediate, or high RH [39]. For the MS2 bacteriophage, the airborne tenacity at different RH has been investigated previously. Dubovi and Akers (1979) aerosolized the MS2 bacteriophage from a buffered salt solution at 20%, 50%, and 80% RH and showed that recovery was lowest at intermediate (50%) RH and highest at low (20%) RH [19], which is in line with the results of our study. Trouwborst and De Jong (1973) described the lowest recovery rates of aerosolized MS2 bacteriophage at an RH of 75% [40]. In their study, recovery rates were higher at RH above and below 75%, meaning that they likewise did not detect a linear correlation between the recovery rate of the MS2 bacteriophage and the RH, but instead found the lowest recovery rates at an intermediate RH. Similarly, for the mycobacteriophage D29, Liu et al. (2012) found the fastest inactivation at intermediate RH (55% ± 5%), followed by high (85% ± 5%) and low (25% ± 5%) RH [41]. The lowest survival rates at intermediate RH have been described for decades for both enveloped and non-enveloped viruses. Songer (1967) aerosolized Newcastle disease virus, infectious bovine rhinotracheitis virus, vesicular stomatitis virus, and the Escherichia coli B T3 bacteriophage at RH of 10%, 35%, and 90% and described the lowest survival rates at 35% RH for all four viruses. It was assumed that initial losses upon aerosolization are highest at low RH, while inactivation over time in the aerosol is highest at high RH [42]. This once again shows that precise and stable adjustment and documentation of the relative humidity during experiments with virus aerosols are absolutely necessary and must be taken into account when interpreting results.
We must mention that, in addition to the precise recording of microorganisms and environmental conditions, the exact number and distribution of particles throughout the room also play a major role in the dynamics of bioaerosols. This is a limitation of this study, as the distribution of particles across the different areas of the aerosol chamber was not recorded, only the concentration at one point. In addition, we did not document the particles before aerosolization of the microorganisms in order to assess background levels; however, by pretreating the supply air with an E11 filter, we assume a certain degree of standardization. Nevertheless, particles in aerosols can be distributed very differently depending on their size and the room structure [43], and thus so can the microorganisms bound to them. This is particularly important in dusty environments, as is often the case in animal stables, where particle sizes can vary greatly [44]. For future studies in the aerosol chamber, it is therefore advisable to record the particles before starting aerosolization and, in addition, at different heights and positions along the chamber in order to evaluate the overall particle situation more comprehensively.
In our study, the suspension medium differed between the investigated agents, which may also affect the tenacity of the aerosolized microbes [45]. However, we decided to choose the most suitable medium for each specific agent in order to evaluate the influence of different humidities. S. aureus was aerosolized from PBS to avoid osmotic shock and the associated loss of viability [46]. The MS2 bacteriophage was suspended in sterile deionized water because, in a preliminary study, we observed greatly reduced recovery rates when suspending it in PBS. This is consistent with the results of another study, in which approximately 30-fold higher survival rates were observed when aerosolizing the D29 phage from sterile deionized water instead of PBS [41]. In addition, studies with E. coli using the same experimental design, also with PBS as the suspension medium, yielded exactly the opposite results regarding the influence of air humidity on the infectivity of the airborne bacterial pathogen compared to S. aureus [32]. We therefore conclude that the survival rates are attributable more to the pathogen itself than to the carrier medium.
The choice of medium is also interesting in terms of its relevance to a real clinical setting. In such a setting, the pathogen is present in a protein- and cell-rich medium, such as saliva or other secretions, and is subsequently expelled during activities such as speaking, coughing, or sneezing. Artificially produced saliva substitutes have been used in recent studies to investigate this point further [47]. More complex media, in combination with different initially generated particle sizes, would be interesting starting points for further studies.
Conclusions
This study confirms that the airborne survival of microorganisms varies greatly depending on the characteristics of the microorganism and on environmental factors, in this case RH. This finding should be considered when taking air samples in the field, as the detection limit of air samples can vary with the RH. It is also of great importance in experimental studies, where an accurate and stable relative humidity setting must be ensured, especially for bacterial and viral pathogens, in order to obtain comparable results. In this study, a significantly reduced tenacity of S. aureus at high RH was described, and S. aureus showed the lowest recovery rates of the three surrogates. We confirmed the high tenacity of Geobacillus spores in the aerosol, which was also the least influenced by RH. For the MS2 bacteriophage, a significantly lower tenacity in the aerosol was confirmed at intermediate RH around 50%. Our study highlights that, for airborne tenacity studies, appropriate methods, standardized as far as possible, are crucial for comparing the tenacity of different microorganisms.
Fig 1. Schematic drawing of the aerosol chamber, adapted from Rosen et al. (2018) [24]. https://doi.org/10.1371/journal.pone.0297193.g001
Fig 2. Predicted concentration of S. aureus (a), G. stearothermophilus spores (b), and MS2 bacteriophage (c) per m³ at different RH (black line). The symbols represent the raw measurements in cfu or pfu per m³ at different impinger heights. The shaded areas indicate the upper and lower 95% confidence intervals. https://doi.org/10.1371/journal.pone.0297193.g002
Table 1. Mean recovery rates (%) of the aerosolized surrogates at different RH. N represents the number of measurements per humidity range. Three air samples were analyzed simultaneously per measurement. | 2024-01-28T05:13:11.712Z | 2024-01-26T00:00:00.000 | {
"year": 2024,
"sha1": "352b64dd4ac196a286fb4d6d5d854f10960bfb55",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "be94019dab867c605696faabf8560d2dc25cad82",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |