| id | source | version | text | added | created | metadata |
|---|---|---|---|---|---|---|
233664357 | pes2o/s2orc | v3-fos-license | Enhanced Photostability and Photoluminescence of PbI2 via Constructing Type-I Heterostructure with ZnO
Since the discovery of graphene, 2D materials have attracted significant attention in basic science and shown promising applications in electronics, valleytronics, optoelectronics, and sensors, owing to their unique physical and chemical properties. Apart from graphene, there are many other semiconducting or insulating materials in the family of 2D materials, such as hexagonal boron nitride (h-BN), black phosphorus (BP), transition metal oxides (TMOs), transition metal dichalcogenides (TMDs), and so forth. Among them, TMD materials, with a rich variety of electronic and optical characteristics, have been widely studied over the past decade for optoelectronic devices. However, the band structure of TMDs is quite sensitive to thickness: the bandgap crosses over from indirect to direct only when a TMD is thinned down to a monolayer. Unfortunately, the relatively low absorbance of monolayer TMDs limits further improvement of their optoelectronic device performance. Compared with TMD materials, lead iodide (PbI2) is also a layered semiconductor but exhibits the opposite thickness dependence to TMDs: the band structure of PbI2 shifts from direct to indirect as the thickness decreases from multilayer to monolayer. Therefore, PbI2 could become a good complement to the existing TMDs and other 2D optoelectronic materials. In addition, PbI2 has a wider bandgap, a higher light absorption coefficient, and better preservation of its direct bandgap than TMD materials, and has potential applications in nuclear radiation detectors, low-threshold lasers, and high-efficiency photodetectors. However, there is a fly in the ointment: the stability, especially the photostability, of PbI2 is inferior to that of TMD materials. The poor photostability of PbI2 results in structural damage and property degradation upon laser irradiation and obstructs its progress toward practical applications. Regrettably, the problem of improving the photostability of PbI2 has not yet been effectively solved. One common strategy for protecting PbI2 is to deposit organic polymers as encapsulation layers, such as polydimethylsiloxane (PDMS), but the relatively low thermal conductivity of these polymers cannot produce the desired improvement in the photostability of PbI2. Therefore, it is urgent to seek a more effective strategy to prevent the PbI2 from suffering structural damage and property degradation upon laser irradiation while maintaining its photon absorption and radiation. In this work, we propose a novel and feasible strategy for improving the photostability of PbI2 via the construction of a type-I heterostructure with ZnO, which has a high thermal conductivity. In the PbI2/ZnO heterostructure, the photostability of PbI2 can be improved at different excitation wavelengths, including 320, 405, and 532 nm. The essential reason for the improvement of
photostability is that the ZnO offers the desired path of heat dissipation for PbI2, which benefits from the high thermal conductivity of ZnO that is approximately two orders of magnitude higher than organic polymers. More impressively, the photoluminescence (PL) of PbI2 in the heterostructure demonstrates nearly eightfold PL enhancement compared with the pristine PbI2 because the photogenerated electrons and holes can be transferred from the ZnO to the PbI2 due to the type-I band alignment. This work not only opens up a novel and feasible strategy to improve the photostability and PL of PbI2, but also minimizes the negative influences from the protective layer and obtains significant PL enhancement of PbI2.
Results and Discussion
Like TMD materials, 2D layered PbI2 has a hexagonal lattice structure, in which a layer of lead atoms is sandwiched between two layers of iodine atoms. [28,29] Strong covalent bonds provide in-plane stability within each layer, while weak van der Waals forces hold the layers stacked together, as shown in Figure 1a. Similar to other 2D materials, PbI2 can be used to build van der Waals heterostructures with other semiconductor materials, regardless of lattice mismatch and other stringent conditions. [30,31] In addition, large-area, uniform, and high-quality PbI2 flakes can be synthesized via a facile solution-processing method under ambient conditions, [32,33] which broadens the prospects for PbI2-based electronic devices with low manufacturing cost and high integration in the future. Accordingly, we fabricated 2D PbI2 flakes on a Si/SiO2 substrate via drop-cast technology (more details in the Experimental Section) instead of physical vapor deposition, which generally requires rigorous conditions such as high temperature and a vacuum environment. Figure 1b shows an optical image of PbI2 flakes of different thicknesses; the flakes exhibit regular hexagonal or triangular shapes with sharp edges. To characterize their thicknesses, we first performed atomic force microscopy (AFM) scans of three different samples, marked by white dashed lines in Figure 1b. The corresponding AFM images are shown in Figure 1c. The PbI2 flakes in the three regions have smooth, flat surfaces and different thicknesses of around 12, 207, and 553 nm, respectively.
Subsequently, we conducted PL and Raman spectroscopy measurements to further characterize these PbI2 flakes of different thicknesses at room temperature. According to previous works, [34,35] the PL emission of PbI2 at room temperature mainly originates from the recombination of bound and free excitons as well as the recombination of donor-acceptor pairs (DAPs), and the corresponding PL peak is located at about 510 nm (i.e., 2.43-2.44 eV), which fits well with our experimental data. In Figure 1d, the PL intensity of PbI2 at around 510 nm gradually decays as the thickness decreases from 553 to 12 nm. Thinning the PbI2 flakes reduces light absorption and therefore the PL quantum efficiency, and also induces the band structure to change from direct to indirect as the thickness approaches the monolayer limit. In contrast to the PL spectra, the Raman spectra of PbI2 are too weak to recognize at room temperature, so the excitation power has to be increased to obtain observable Raman signals. It is then inevitable that the crystal quality of the PbI2 flakes is damaged by the high excitation power owing to their poor photostability, which dramatically reduces the signal-to-noise ratio of the Raman spectra of the degraded flakes. To balance intensity and signal-to-noise ratio in the Raman spectra, we optimized the laser power on the samples and chose a moderate excitation power of 500 μW. Figure 1e shows the characteristic Raman peaks of PbI2 at around 75, 97, 113, and 216 cm⁻¹, which are identified as the E2g, A1g, 2LA(M), and 2LO(M) modes, respectively. [22] As with the PL spectra, the reduced light absorption at smaller PbI2 thickness makes these Raman modes weaker, even barely visible. Based on the synthesized PbI2 flakes, we next focus on how to improve their photostability without sacrificing the efficiency of photon absorption and radiation. Currently, the mainstream approach to improving the photostability of PbI2 is to cover its surface directly with protective layers, not only to conduct the heat caused by the photothermal effect away from the PbI2, but also to protect the PbI2 from hydration and oxidation in the atmosphere. Following this principle, we sought a wide-bandgap semiconductor with high thermal conductivity to serve as the protective layer and form a type-I heterostructure with PbI2. ZnO, a typical representative of the third-generation semiconductors, has a wide direct bandgap of 3.37 eV [36] and a high thermal conductivity of ≈37 W m⁻¹ K⁻¹, [37] which is generally two orders of magnitude higher than that of organic polymers. Therefore, ZnO could serve as the desired protective-layer material for PbI2 and, at the same time, possibly form a type-I band alignment with PbI2. To verify the feasibility of this combination intuitively, the ZnO capping layer should not have large coverage, because otherwise it would not be possible to build contrasting regions of naked PbI2 and ZnO/PbI2 heterostructure on PbI2 flakes only tens of microns across. For this purpose, we chose ZnO nanowires, rather than other structures such as quantum dots or thin films, as a demonstration to combine with the PbI2 flakes.
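To put the thermal-conductivity contrast mentioned above into numbers, the quoted ZnO value can be compared with a typical value for an encapsulating polymer such as PDMS (the polymer value of roughly 0.15 W m⁻¹ K⁻¹ is an assumed, order-of-magnitude figure, not taken from this work):

$$
\frac{\kappa_{\mathrm{ZnO}}}{\kappa_{\mathrm{polymer}}} \approx \frac{37\ \mathrm{W\,m^{-1}\,K^{-1}}}{0.15\ \mathrm{W\,m^{-1}\,K^{-1}}} \approx 2.5\times 10^{2},
$$

i.e., roughly two orders of magnitude, consistent with the statement above.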
The ZnO nanowires can be readily synthesized via the hydrothermal method (see Experimental Section for more details) and transferred anywhere on the surface of the PbI2 flakes, forming a van der Waals heterostructure with controllable regions (see Figure 2a). Here, the prepared ZnO/PbI2 heterostructure consists of a PbI2 flake about 105 nm thick and a ZnO nanowire around 563 nm thick. Their AFM scan profiles are shown in Figure 2b.
For the protective layer of PbI2, an important prerequisite is high optical transmissivity so that photon absorption in the protective layer is reduced to a minimum. As shown in Figure 2c, the absorption edge of the ZnO nanowire is located at about 380 nm, while the absorption edge of the PbI2 flake is located at about 520 nm. The wide-bandgap ZnO therefore provides an ideal photon-absorption window for PbI2 in the near-UV-visible range. We then carried out the following experiments to examine the protective effect of the ZnO nanowire on PbI2. Power-dependent irradiation experiments were performed on the PbI2 flake and the ZnO/PbI2 heterostructure, respectively, with a 405 nm continuous-wave (CW) laser as the excitation source. Figure 2d,e shows the PL spectra of the PbI2 flake and the ZnO/PbI2 heterostructure at power densities ranging from 200 to 700 W cm⁻². With increasing power density, the PL peak of the pristine PbI2 flake shows a significant attenuation trend, while the PL signal from the ZnO/PbI2 heterostructure is almost unchanged. To compare the difference intuitively, the PL intensities of the pristine PbI2 and the ZnO/PbI2 heterostructure are extracted and shown in Figure 2f. The PL intensity of PbI2 in the heterostructure remains at around 2.8 × 10³ even as the power density increases from 200 to 700 W cm⁻². By contrast, the PL intensity of pristine PbI2 decreases from 1.6 × 10³ to 0.3 × 10³ over the same range of power density. This behavior can be attributed to the degradation of PbI2 upon laser irradiation due to its poor photostability, which reduces the PL quantum efficiency. In contrast, the photostability of PbI2 in the heterostructure is dramatically improved, indicating that the ZnO nanowire can provide effective protection for the PbI2 flake. In addition, we compared the protective effect of PDMS with that of the ZnO nanowire; the corresponding results are shown in Figure S1, Supporting Information. As expected, the photostability of the PbI2 flake covered by the ZnO nanowire is demonstrably superior to that covered by PDMS. Because the aforementioned power-dependent irradiation experiments were performed only with a single 405 nm excitation source, additional excitation sources should be considered to fully establish that the ZnO nanowire improves the photostability of PbI2. In this regard, we chose two other excitation sources besides 405 nm to irradiate the PbI2 and the ZnO/PbI2 heterostructure: UV light at 320 nm and visible light at 532 nm. During these power-dependent irradiation experiments, the two excitation sources were used to irradiate the naked PbI2 and the ZnO/PbI2 heterostructure at various power densities, with the irradiation time fixed at nearly 2 s. Subsequently, the PL spectra of the irradiated PbI2 flakes were collected under 405 nm laser excitation to monitor the degradation of PbI2.
To avoid negative effects caused by the 405 nm laser, the output power was set at a relatively low level, only 1 μW (power density ≈ 100 W cm⁻²), in all PL measurements. Figure 3a,b shows the PL spectra of the pristine PbI2 flake and the ZnO/PbI2 heterostructure irradiated by the 320 nm laser at power densities from 200 to 700 W cm⁻². As expected, the PL intensity of pristine PbI2 gradually decreases as the power density increases, indicating that the PbI2 flake without a protective layer undergoes visible degradation upon 320 nm laser irradiation (Figure 3c). For the heterostructure, the PL intensity of PbI2 stays stable below a power density of 400 W cm⁻² and then begins to decay slightly as the power density further increases. The reason might be that the higher-energy photons of the 320 nm laser generate more phonons via the relaxation process, enhancing the photothermal effect and eroding the protective effect of ZnO. Even so, the PL signal-to-noise ratio and intensity of PbI2 in the heterostructure are still higher than those of the naked PbI2. In contrast, the PL spectra of both the pristine PbI2 flake and the ZnO/PbI2 heterostructure irradiated by the 532 nm laser are almost unchanged with increasing irradiation power density (see Figure 3d-f). This is because the photon energy of 2.33 eV (532 nm) is lower than the bandgap of PbI2, which suppresses photon absorption at 532 nm and reduces the degradation of PbI2 induced by the photothermal effect. We also recorded illumination-time-dependent PL spectra of the PbI2 flake and the ZnO/PbI2 heterostructure irradiated by the 320, 405, and 532 nm lasers, shown in Figure S2, S3, and S4, Supporting Information, respectively. Under the three excitation wavelengths, the PL intensities of the PbI2 flake and the ZnO/PbI2 heterostructure exhibit similar trends with increasing illumination time: the PL intensity of pristine PbI2 gradually decreases as the illumination time increases, while the PL intensity of the ZnO/PbI2 heterostructure remains relatively stable, further indicating that the ZnO can provide effective protection for the PbI2 flake. In addition, it is intriguing that the PL intensities of PbI2 in the heterostructure irradiated by the 320 and 532 nm lasers are higher than those of the naked PbI2 at each irradiation power density, as shown in Figure 3c,f. To find out the underlying reason for this PL enhancement, we performed PL measurements on nonirradiated PbI2 and ZnO/PbI2 heterostructure to rule out any influence of the laser irradiation treatment, as shown in Figure 4a.

Figure 3. PL spectra of a) pristine PbI2 and b) ZnO/PbI2 heterostructure treated by the 320 nm laser with power density ranging from 200 to 700 W cm⁻². c) The PL intensity of pristine PbI2 and the ZnO/PbI2 heterostructure as a function of power density, extracted from parts (a) and (b), respectively. PL spectra of d) pristine PbI2 and e) ZnO/PbI2 heterostructure treated by the 532 nm laser in the same range of power density. f) The PL intensity of pristine PbI2 and the ZnO/PbI2 heterostructure as a function of power density, extracted from parts (d) and (e), respectively. These PL spectra are all excited by a 405 nm laser with a low power density of 100 W cm⁻².
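As a quick check on the 2.33 eV photon energy quoted in the discussion above (a routine unit conversion, not an additional measurement from this work):

$$
E_{\mathrm{photon}} = \frac{hc}{\lambda} \approx \frac{1239.84\ \mathrm{eV\,nm}}{532\ \mathrm{nm}} \approx 2.33\ \mathrm{eV},
$$

which indeed lies below the ≈2.43-2.44 eV PbI2 emission energy mentioned earlier, consistent with the suppressed absorption at 532 nm.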
Similarly, the PL intensity of PbI2 in the heterostructure is much stronger than that of pristine PbI2 under the same 405 nm laser excitation. To verify the repeatability of this phenomenon, PL intensity mapping of PbI2 was conducted over a rectangular area containing naked PbI2, the ZnO nanowire, and the ZnO/PbI2 heterostructure (see Figure 4b). The corresponding optical image of this area is shown in the inset of Figure 4a. The overall PL intensity in the heterostructure (marked by the dashed red line) is stronger than that of the naked PbI2 region, even though the PL intensity distribution is not uniform; the unequal distribution is analyzed later. It should be noted that there is no appreciable PL signal from the suspended part of the ZnO nanowire, that is, the portion outside the PbI2 flake, which further indicates that the PL enhancement is unrelated to the isolated ZnO nanowire. Therefore, we focus on the ZnO/PbI2 heterostructure and monitor the variation in structure and symmetry of the PbI2 flake via Raman spectroscopy. For clarity, we compared the Raman spectrum at the location with the strongest PL intensity in the heterostructure with that of the naked PbI2.
Coincidentally, the Raman signal collected from the heterostructure is also stronger than that of the naked PbI2, just as for the PL intensity, but there is no significant difference between the two spectra except in intensity. Consequently, the structure and symmetry of PbI2 can be considered to remain undamaged during the PL enhancement. According to previous works, [38,39] a ZnO nanowire with a hexagonal cross section is a natural whispering-gallery-mode (WGM) microcavity and can strengthen light-matter interactions through total internal reflection. Figure 4d shows a schematic illustration of the WGM light path generated inside the ZnO nanowire. A reasonable explanation is therefore that the ZnO nanowire acts as a WGM resonator whose enhanced light-matter interactions increase the intensity of the excitation light incident on the PbI2, resulting in the enhancement of both the PL and Raman spectra of PbI2. The nonuniform intensity distribution may be attributed to the inhomogeneous topography of the microcavity along the ZnO nanowire, possibly caused by the nanowire transfer process; the inhomogeneous microcavity gives rise to varying degrees of enhancement of the light incident on the PbI2.
According to our aforementioned assumption, if the ZnO/PbI2 heterostructure exhibits a type-I alignment of the band positions, photocarriers in the ZnO would be driven toward the PbI2 as long as the photon energy of the excitation source exceeds the bandgap of ZnO. The corresponding schematic diagram is shown in Figure 5a. According to the literature, [40,41] the conduction band minimum (CBM) and valence band maximum (VBM) of ZnO are located at −4.35 and −7.72 eV, while the CBM and VBM of PbI2 are located at −4.44 and −6.82 eV, respectively. In principle, therefore, the combination of ZnO and PbI2 can form a type-I band alignment (see Figure 5b). This means that in the heterostructure the PL intensity of ZnO would be dramatically reduced, while the increased number of photocarriers in the PbI2 would enhance its PL intensity. To verify this, we conducted PL measurements on pristine PbI2, the ZnO nanowire, and the ZnO/PbI2 heterostructure, with the excited areas marked by the black, red, and blue dots, respectively, in the inset of Figure 5c. The PL spectra were collected under the same experimental conditions: the excitation source was the 320 nm laser with a power of ≈400 nW and a laser spot of about 1 μm. In Figure 5c, the PL spectrum of the ZnO nanowire consists of a narrow near-band-edge exciton emission at ≈380 nm and a broad deep-level defect emission ranging from 430 to 660 nm, in good accord with previous reports. [40] The deep-level defect emission of ZnO mainly originates from the capture of band-edge excitons at defects; that is, the intensity of the deep-level defect emission is proportional to the number of band-edge excitons. Because the ZnO/PbI2 heterostructure has a type-I band alignment, band-edge excitons can transfer from the ZnO toward the PbI2, decreasing the number of band-edge excitons in the ZnO. Therefore, the intensities of both the band-edge exciton emission and the deep-level defect emission of ZnO in the heterostructure would be dramatically reduced compared with the pristine ZnO nanowire, as predicted above. As the broad deep-level defect emission of ZnO overlaps the PL emission of PbI2 (see Figure 5d), the actual contribution of the deep-level defect emission cannot be observed directly in the PL spectra of the heterostructure. In contrast, the band-edge exciton emission of ZnO, which does not overlap the PL emission of PbI2, can be used as a reference to acquire the PL contribution of ZnO in the heterostructure indirectly. First, the PL spectra of the ZnO nanowire and the heterostructure are normalized by the respective band-edge emission of ZnO, as shown in Figure 5e. The band-edge exciton emission of the pristine ZnO essentially coincides with that of the ZnO in the heterostructure. Because the intensity of the defect emission is positively correlated with the number of band-edge excitons, we presume that the normalized PL emission of pristine ZnO is approximately equal to the PL contribution of ZnO in the normalized PL spectra of the heterostructure. Accordingly, the difference between the normalized PL spectra of the ZnO and the heterostructure can be considered the PL contribution of PbI2 in the heterostructure. The normalized PL spectrum of the ZnO nanowire is therefore subtracted from the normalized PL spectrum of the heterostructure.
In this way, the PL emission of the ZnO nanowire can be removed from that of the ZnO/PbI2 heterostructure, and the remaining part is only the PL emission of PbI2. For convenient comparison with the heterostructure, the PL spectrum of pristine PbI2, shown in Figure 5d, is also normalized by the near-band-edge emission of ZnO from the heterostructure.

Figure 5. a) Schematic diagram of the ZnO/PbI2 heterostructure illustrating interlayer carrier transport. The blue ball represents a carrier; the red straight arrow represents the interlayer transport process of the carrier. b) Band positions of ZnO and multilayer PbI2, implying a type-I band alignment of the ZnO/PbI2 heterostructure. c) PL spectra of the pristine ZnO and the ZnO/PbI2 heterostructure. The inset is an optical image of the ZnO nanowire and PbI2 flake, where the black, red, and blue dots represent the excited areas of PbI2, ZnO, and the ZnO/PbI2 heterostructure for the PL spectra, respectively. Scale bar: 5 μm. d) PL spectra of the naked PbI2 and the ZnO/PbI2 heterostructure. e) PL spectra of pristine ZnO and the ZnO/PbI2 heterostructure, normalized at the respective near-band-edge emission of ZnO. f) PL spectra of the PbI2 from the heterostructure (red curve) and the pristine PbI2 normalized by the near-band-edge emission of ZnO from the heterostructure (olive curve). These PL spectra are all excited by a 320 nm laser with an output power of 400 nW.
Figure 5f shows the normalized PL spectra of the pristine PbI2 and the PbI2 from the heterostructure. Compared with pristine PbI2, the PL peak of the PbI2 from the heterostructure is significantly enhanced, accompanied by a redshift and broadening. Considering that the quality of the interface in the heterostructure was not systematically optimized in this work, the broadened PL peak of PbI2 may contain both band-edge emission and interfacial defect emission. Even so, the integrated PL intensity of the PbI2 from the heterostructure is still enhanced by about eightfold compared with pristine PbI2. In conclusion, the photostability and PL intensity of PbI2 can be effectively enhanced by constructing a type-I heterostructure with the ZnO nanowire.
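Before moving to the conclusions, the type-I assignment invoked above can be checked compactly against the band positions quoted earlier (a worked comparison using the literature values cited in the text, not an additional measurement):

$$
E_{\mathrm{CBM}}^{\mathrm{PbI_2}} = -4.44\ \mathrm{eV} < E_{\mathrm{CBM}}^{\mathrm{ZnO}} = -4.35\ \mathrm{eV}, \qquad
E_{\mathrm{VBM}}^{\mathrm{PbI_2}} = -6.82\ \mathrm{eV} > E_{\mathrm{VBM}}^{\mathrm{ZnO}} = -7.72\ \mathrm{eV},
$$

so the PbI2 gap (≈2.38 eV) lies entirely within the ZnO gap (≈3.37 eV), which is precisely the straddling (type-I) condition that allows both electrons and holes generated in the ZnO to relax into the PbI2.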
Conclusion
In summary, we have successfully realized the improvement of the photostability of PbI2 at different excitation wavelengths, including 320, 405, and 532 nm, by constructing a type-I heterostructure with a ZnO nanowire. In the heterostructure, the ZnO nanowire synthesized by the hydrothermal method offers a desired path of heat dissipation for PbI2, which is attributed to the high thermal conductivity of ZnO, two orders of magnitude higher than that of organic polymers. In addition, owing to the type-I band alignment between PbI2 and ZnO, the photogenerated electrons and holes in the ZnO nanowire can be transferred to the PbI2 flake to enhance the PL intensity of PbI2, amounting to a nearly eightfold enhancement in the heterostructure based on the analysis of the PL spectra. Our work not only provides a feasible strategy to improve the photostability and PL intensity of PbI2, but also opens possibilities for propelling its progress toward practical applications.
Experimental Section
Fabrication: A recrystallization reaction method was used to synthesize the PbI2 flakes. First, PbI2 powder (99.9%) was dissolved in secondary deionized water (1 mg mL⁻¹) and heated with stirring at 110 °C for 1 h to guarantee that the PbI2 powder was dissolved completely. Subsequently, the PbI2 aqueous solution was kept for 1 h at room temperature and then dropped onto a cleaned SiO2/Si substrate (1 cm × 1 cm) using a high-precision pipette. After several minutes, the excess undried liquid was absorbed with a dropper, and PbI2 flakes of various shapes and thicknesses finally grew. ZnO nanowires were prepared by a typical hydrothermal process. Zinc acetate and hexamethylenetetramine (HMT) were separately dissolved in water with adequate stirring and then mixed to obtain a 20 mmol L⁻¹ ZnO precursor solution. The mixture was poured into a reaction kettle in which the sapphire substrate was placed facing down. Finally, the reaction kettle was heated for 2.5 h at 95 °C in an oven to obtain ZnO nanowires. The ZnO nanowires were mechanically exfoliated from the sapphire substrate and then accurately transferred onto the pregrown PbI2 flakes by a PDMS-assisted dry transfer technique.
Characterization: Tapping-mode AFM (Dimension Icon, Bruker) was used to monitor the surface morphology of the ZnO nanowires and PbI2 nanoflakes. All steady-state PL spectra and mappings were measured with a micro-PL system (Metatest, ScanPro) equipped with a monochromator, a charge-coupled device (CCD), and three different excitation sources: 320, 405, and 532 nm CW lasers. The Raman spectra were collected with a spectrometer system equipped with a Si-based CCD (HR Evolution, Horiba). The excitation source was a 488 nm CW laser with an output power of 1 μW, focused to a spot diameter of ≈1 μm. A Hitachi UV-4150 spectrophotometer was used to measure the steady-state absorption spectra.
Supporting Information
Supporting Information is available from the Wiley Online Library or from the author. | 2021-05-05T00:09:59.636Z | 2021-03-09T00:00:00.000 | {
"year": 2021,
"sha1": "085dde2734d68455bc36c28ed4b4d8207ac4c2ee",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1002/adpr.202170017",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "2b304a87e81269737826816d3b10dfebc2cdb315",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
16479571 | pes2o/s2orc | v3-fos-license | Sex-dependent differences in avian malaria prevalence and consequences of infections on nestling growth and adult condition in the Tawny pipit, Anthus campestris
Background Parasites play pivotal roles in host population dynamics and can have strong ecological impacts on hosts. Knowledge of the effects of parasites on hosts is often limited by the general observation of a fraction of individuals (mostly adults) within a population. The aim of this study was to assess the prevalence of malaria parasites in adult (≥1 year old) and nestling (7–11 day old) Tawny pipits Anthus campestris, to evaluate the influence of the host sex on parasite prevalence in both groups of age, and explore the association between infections and body condition (adults) and growth (nestlings). Methods Two hundred Tawny pipits (105 adults and 95 nestlings) from one Spanish population were screened for avian malaria parasites (Haemoproteus and Plasmodium) using the polymerase chain reaction (PCR)-based methods. Body condition (body mass against a linear measure of size) was measured in adults and growth rate (daily mass gain) was calculated for nestlings. Results The overall prevalence of infection was 46 %. Sixteen different mitochondrial cytochrome b haplotypes of Plasmodium spp. and one Haemoproteus spp. haplotype were found. Malaria parasites were equally prevalent in nestlings and adults (45 and 46 %, respectively). Males were more likely to be infected by parasites than females, and this sex-bias parasitism was evident in both adults and nestlings. Furthermore, a lower daily mass gain during nestling growth in males than in females following infections were found, whereas the effect of infections on body condition of adults was detrimental for females but not for males. Conclusions Age-specific differences in physiological trade-offs and ecological factors, such as nest predation would explain, at least in part, the observed host sex and age-related patterns in Tawny pipits.
This type of parasite has attracted considerable attention because the disease causes a variety of adverse effects on hosts [9][10][11][12][13][14][15][16][17], including the extinction of naive host populations [18]. Although infections caused by these parasites can also persist in hosts for years, causing no direct effects or only mild effects on fitness [9,12,[19][20][21], recent studies have revealed that long-term chronic infections can produce important hidden costs for hosts [17].
The vast majority of studies exploring malaria impacts on wild avian hosts have focused on adults, which inevitably restricts the ability to properly understand the consequences of infections for hosts. For example, during the acute phase of malaria infection many individuals are naturally removed from the population by dying [21], which makes it really difficult to assess what type of parasites are involved and the real damage they cause at the population level. In this context, one interesting approach is to use nestlings as the research focus. Newborns represent an attractive target for malaria parasites because of their low immunological and behavioural defences, which are relatively immature at hatching [22,23]. Development of the immune system begins post-hatching and continues during the growth period, when birds are able to mount a significant immune response to blood parasites [24]. Therefore, age determines host cell maturation and the capacity to resist parasites and support infections (reviewed in [25]). A naive immune system is known to increase susceptibility to parasitism and, therefore, parasites that can take advantage of immunologically naive hosts to increase their recruitment and survival rates should be favoured [25]. In birds, immunologically naive hosts (nestlings) are highly available to vectors only during early stages of development, whereas for much of the remaining year the host population consists of individuals with experience of prior infection. Moreover, infected nestlings may face at this stage a trade-off between allocating resources to growth or keeping them as immune enhancers for fighting parasites or diseases [26][27][28][29]. Thus, adults and nestlings should be considered together in order to better understand how parasites (in terms of species composition and prevalence) impact a population globally. Despite its importance, surprisingly little is known about avian malaria infections during the crucial phase of nestling growth compared with the adult phase.
Furthermore, parasite prevalence in vertebrates is often different between males and females [30,31], which has been mainly attributed to sex-specific host characteristics, such as the endocrine-immune interactions. Mounting evidence indicates that sex hormones influence the immune system [32][33][34]. Androgens, primarily testosterone, can suppress cell-mediated and humoral immunity in males, whereas oestrogen can suppress cell-mediated immunity while boosting humoral immunity in females [32]. In birds, evidence suggests that an effective immune system is costly, and that there are trade-offs among investment in immune function and other physiological and ecological aspects during reproduction [35][36][37]. This may result in differences in parasite prevalence between the sexes [4,38]. There is, therefore, a need to assess what role, if any, sex-related trade-offs between investment in immunity and other aspects of reproduction play in mediating acquisition of parasites, and if costs of parasite infections are different for each sex.
In this study, nestling and adults from a wild population of the open-ground nestling passerine Tawny pipit (Anthus campestris) were used as a novel host-parasite system to explore: (1) how malaria parasites are distributed among adults and nestlings, (2) the influence of the host sex on parasite prevalence in both groups of age, and (3) the association of parasite prevalence with adult body condition and nestling growth.
Ethical consideration
All research was carried out within Spanish standard requirements (project license reference JCCM/PAC 06-137) and the guidelines of the University of Castilla-La Mancha (CCM). All methods were approved by the University of Castilla-La Mancha ethical committee for Animal experimentation (CEEA) and permission to capture and manipulate birds was obtained from the Organismo Autonomo de Espacios Naturales de Castilla-La Mancha (permission numbers: OAEN/SVSIA/avp_10_153, and DGPF/08031701). Birds were caught under Spanish standard requirements (bird ringing license numbers: SEO/BirdLife 520038 and 530351).
Host species, study areas and sampling method
The Tawny pipit is a small (~28 g) migratory passerine of the family Motacillidae. It is a widespread summer visitor of open habitats throughout Eurasia [39]. Tawny pipits are sexually monomorphic, with slightly larger males than females (mean ± SD; wing length 94.0 ± 3.42 and 88.0 ± 2.32 mm for males and females, respectively; [39]). Males and females have different roles during breeding; while nest attendance and care for nestlings are exclusive to females, males are responsible for territory defence [40]. The open-cup nests are constructed directly on the ground [41]. Clutch size varies between three and five eggs, and nestlings are altricial (range time in the nest between 8 and 11 days) [40].
The study was conducted during the reproductive seasons of 2008, 2009, and 2010 in a study area of approximately 2.5 km² located in Valeria, central Spain (Cuenca province, 39º 48′ N, 2º 10′ W, 1090 m above sea level). The study site lies on flat terrain, where the species' density is about 15 breeding pairs/km². The climate is Mediterranean (annual rainfall is 750 mm and mean annual temperature 16 °C) and the prevailing habitat consists of natural shrub-steppe vegetation (Rosmarinus officinalis, Thymus spp. and pastures dominated by Brachypodium phoenicoides) with scattered cereal crops.
Nest visits were performed at intervals of 3-5 days throughout the breeding season (May-July). A total of 105 adult birds were captured in their territory around nests (males, n = 63) or at nest when feeding nestlings (females, n = 42). Adults were measured (wing length and mass), ringed with a metal ring to avoid resampling, and 5-10 μl of blood was collected from the jugular vein. Additionally, 95 nestlings (48 males and 47 females) from 28 nests were sampled twice at nest, when they were 1-3 day-old and when they were 7-11 day-old. During the first visit, all nestlings were marked with different waterproof colours on tarsus and feet to allow individual recognition. Wing length was measured with a ruler (accuracy 0.5 mm) and body mass with a digital balance (accuracy 0.01 g) in the two visits. During the last visit all nestlings were banded with a metal ring and 2-5 μl of blood was taken. Tawny pipits were classed in two age groups for analyses: adults (≥1 year old) and nestlings (7-11 day old). Blood samples were preserved in ethanol until DNA analyses. All birds sampled were captured and handled with the corresponding permissions of both regional and national Spanish authorities.
Genetic characterization of parasites
DNA was extracted from blood samples using ammonium acetate/ethanol precipitation methods [7] and diluted to a working concentration of 25 ng/μl. All birds were molecularly sexed by amplifying introns of the CHD1Z/W gene [42]. For the detection of parasites we used the nested polymerase chain reaction (PCR) protocol of Waldenström [43], which amplifies a fragment of the mitochondrial cytochrome b gene of the parasite genera Plasmodium spp. and Haemoproteus spp. Each PCR included two positive samples from infected birds to verify the proper functioning of the reaction and a negative control (ddH2O) to check for false positives. A volume of 2.5 µl of each final reaction was evaluated on 2 % agarose gels stained with ethidium bromide. Pre- and post-PCR work was performed with different material and in different laboratory sections to avoid contamination. The protocol was repeated three times to confirm negative results. Samples with positive PCR reactions were cleaned up using Exonuclease I and Shrimp Alkaline Phosphatase (Fermentas) and sequenced using the forward PCR primer and the BigDye Terminator Kit (Applied Biosystems). DNA sequences were obtained using an ABI 3130XL Automated Sequencer (Applied Biosystems). Data were processed with the ABI PRISM Sequencing Analysis Software v3.7 (Applied Biosystems). New parasite lineages were sequenced twice to ensure the accuracy of the sequences.
Phylogenetic analysis
Sequences were edited manually using BioEdit 7.0.5.3 [44]. All new haplotypes were deposited in GenBank (accession numbers: JF279937-JF279958 and KF747759-KF747764). The taxonomic identity of each haplotype was inferred by assessing the phylogenetic affinities with published sequences from GenBank that were reliably identified as morphological species of Plasmodium and Haemoproteus, as compiled in the MalAvi database [45]. Multiple infections (more than one parasite haplotype in the same individual) were identified by the presence of double peaks on the electropherograms [46]. Phylogenetic relationships among parasite haplotypes were estimated by Bayesian inference using MrBayes 3.1.2 [47]. The GTR+I+G model of molecular evolution was identified by jModelTest [48] as the most appropriate for the dataset. Two simultaneous runs of four Markov chain Monte Carlo (MCMC) chains were conducted over 5 million generations, sampled every 100 generations. The posterior probability distribution of the 50 % majority rule consensus tree was calculated after discarding the first 25 % of generations as the burn-in period. The phylogenies were visualized using FigTree v1.3.1 [49].
The genetic divergence between different parasite haplotypes was calculated using the Tajima-Nei distance model computed in the software MEGA4 [50].
Statistical analysis
The effects of host sex and age on infection probability were analysed by means of generalized linear/nonlinear mixed models (GLMMs) with a binomial distribution for our dependent variable (infection status: infected or uninfected), and a logit link function. The model included year as random factor and age, sex, and their interaction as fixed effects. Date of sampling was incorporated as a continuous predictor to account for its possible effect on the infection probability. Date was calculated as the number of days from 1st April within each study year until the day of sampling. Individuals carrying multiple infections were treated as 'infected' in this analysis. Difference in the proportion of sexes and age cohorts infected by each parasite clade was tested using Fisher's exact test (two-tailed).
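A minimal sketch of this infection-probability model, written in the syntax of the lme4 package named later in this section, is given below; the data-frame and column names are illustrative assumptions, not taken from the original dataset.

```r
library(lme4)

# Binomial GLMM with logit link: infection status (0/1) modelled by age class,
# sex, their interaction, and sampling date, with study year as a random intercept.
infection_model <- glmer(
  infected ~ age * sex + date + (1 | year),
  data   = pipits,                      # hypothetical data frame
  family = binomial(link = "logit")
)
summary(infection_model)
```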
The effect of infection status on body condition was estimated for adults by means of linear mixed models (LMMs). The model included body mass as dependent variable and wing length as continuous predictor (to account for differences in size between individuals). Infection status (infected or uninfected), sex and their interaction were included as fixed factors. The model included also the year of study as a random factor and date of sampling as continuous predictor.
Additionally, we performed LMMs to investigate whether infection influence nestling growth (growth increments for mass and wing length) using the two measurements taken on each nestling during its nest stage. The rate of daily mass gain during growth was expressed as [(mass at second visit-mass at first visit)/ number of days between first and second visit], and the rate of daily wing length increment was expressed as [(wing length at second visit-wing length at first visit)/ number of days between first and second visit]. Two independent models were built, with mass gain or wing length increment as dependent variables, infection status (infected or uninfected), sex and their interaction as fixed factors, age of nestling and sampling date as continuous predictors, and year and nest identity as random factors.
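For completeness, a comparable sketch of the nestling mass-gain model described above (again with illustrative, assumed variable names; the wing-length model is analogous):

```r
library(lme4)

# Daily mass gain between the two nest visits, computed as defined in the text.
nestlings$daily_mass_gain <-
  (nestlings$mass_visit2 - nestlings$mass_visit1) / nestlings$days_between

# LMM: growth modelled by infection status, sex, and their interaction, with
# nestling age and sampling date as covariates and random intercepts for
# study year and nest identity.
growth_model <- lmer(
  daily_mass_gain ~ infection * sex + nestling_age + date +
    (1 | year) + (1 | nest_id),
  data = nestlings
)
summary(growth_model)
```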
All analyses were performed using R 2.14 [51] (R Core Team 2013). Specifically, package lme4 [52] was used to fit LMMs and GLMMs, and package phia [53] to perform post hoc analyses. Means and parameter estimates are reported together with their standard errors.
Clustering of parasite lineages
The phylogenetic tree that resulted from Bayesian analyses is shown in Fig. 1. Morphospecies nomenclature was tentatively assigned to smaller cladistic groupings within the Plasmodium and Haemoproteus genera based on the affiliation and bootstrap support of branches with lineages published previously.
Five distinct Plasmodium clades (clades A-E; Fig. 1) were found, with genetic distances between haplotypes within clades ranging from 0.2 to 0.7 % (mean = 0.44 %, S.D. = 0.2 %), whereas genetic divergence between these clades varies between 3.4 and 8.5 % (mean difference = 6.4 %, S.D. = 2.1 %). Of the five clades, only two can be associated to a known parasite morphospecies (clade C: Plasmodium cathemerium, and clade B: Plasmodium relictum). One single Haemoproteus clade (clade F) was detected, which included only one haplotype isolated from Tawny Pipits (BIC15) that did not contain sequences of any of the Haemoproteus morphospecies previously described.
Variation in parasite prevalence
The prevalence of malaria infections in the Tawny pipit population studied did not show significant variation with date of sampling (Table 2). Overall, there were no significant differences between the infection status of the two age groups, being the proportion of infected adults and nestlings 0.45 and 0.46, respectively (see also (Tables 1, 2). The proportion of infected females and males was 0.37 and 0.53, respectively, and this pattern is consistent across ages, as supported by the nonsignificant interaction sex × age ( Table 2). At parasite-clade level, avian malaria clades A-E (see Fig. 1) found in our study area were all present in both adults and nestling, whereas clade F (Haemoproteus) was only retrieved from adults. The prevalence of clade A in Tawny There were significant differences between the prevalence of clades B and C according to sex, being both clades more frequent in males than in females (clade B: males 12.0 %, females 1.12 %, Fisher exact test, two tailed, P = 0.02; clade C: males 9.0 %, females 1.12 %, P = 0.02). There were no statistical differences between males and females for clades A, D and E (P > 0.05).
Infection status and body condition in adults
Overall, body mass was positively related with wing length (t = 2.70, P = 0.008), and was larger in males than in females (t = 1.94, P = 0.055; Fig. 2). Infected individuals were in poorer condition than uninfected ones (t = −2.91, P = 0.004), and the significant interaction (infection × sex: t = 2.00, P = 0.048) indicates that the effect of infection status on body mass was significant for females (post hoc: χ²₁ = 8.49, P = 0.007) but not for males (χ²₁ = 0.23, P = 0.63) (Fig. 2). There was no support for a significant effect of date of sampling on body mass (t = −0.78, P = 0.44).
Wing length increment was higher for males than for females (males: 5.18 ± 0.08 mm, females: 4.87 ± 0.09 for females, t = 3.52, P = 0.0008). Differences in wing length increment among infected and uninfected individuals were not significant (t = 1.03, P = 0.31), and the interaction between sex and infection status was also not significant (t = −0.51, P = 0.61). Date of sampling was not significantly related to wing length increment (t = 0.07, P = 0.97), whereas age of nestling is marginally significant in the model (t = −1.98, P = 0.053).
Discussion
The present study adds data on host-parasite relationships for a previously uninvestigated passerine species, the Tawny pipit. Host sex-related differences in the prevalence of avian malaria, but no evidence of age-related differences, were found. Tawny pipit males were more likely to be infected by malaria parasites than females, and this sex-biased parasitism was evident in both adults and nestlings. Avian malaria parasites were also prevalent in 7-11 day-old nestlings, consistent with other reports [54]. Most importantly, infected males showed lower daily mass gain during nestling growth than infected females, whereas infections at adulthood were associated with lower condition in females than in males.
The effect of host age on parasite prevalence is a controversial topic in avian ecology. On one hand, the number of host-parasite encounters and the accumulation of infections over time means that we can expect a pattern of higher parasite prevalence in adults than at early stages of life [55,56]. On the other hand, the low immunological and behavioural defences against parasites of nestlings compared to adults suggest the opposite pattern, with higher expected prevalence in nestlings than in adults [57,58]. Few studies have explicitly compared avian malaria infections between nestlings and adults in the wild, since the prepatent period (when the parasite is developing in the host tissues and absent from blood) is often longer than the nestling period of most bird species (see [59]). Results from Tawny pipit supported the hypothesis that the naïve immune system makes nestlings highly susceptible to developing infections. The agepattern of malaria infection found in this study indicated a rapid accumulation of parasites at the age at which an individual is first susceptible to infection. Considering the age of nestlings studied (7-11 days) and that prepatent periods for avian species of Plasmodium and Haemoproteus range from 2 days to several months [60], it is likely that some of the nestlings considered as uninfected in this study might be truly infected, but the disease has not yet reached the blood (prepatent stage) at the time of sampling. If so, the real parasite prevalence in nestlings can be underestimated, and the age-prevalence curve would reflect a decreasing incidence of malaria with age [58] typical of a parasite-mediated viability selection pattern [61,62]. A reduction of prevalence with age should merely indicate the loss (death) of infected individuals from the population [58,61], or that parasites were cleared from hosts or entered a latent stage (remaining usually in host tissues) later in life [63,64]. Unfortunately discrimination among these possibilities is not possible in the present study, but the topic merits further research as they can reveal patterns that may allow quantification of rates of parasite transmission, parasite-induced host mortality, and development of host resistance.
Regarding the processes whereby infections occur, age-specific differences in physiological trade-offs would explain, at least in part, the observed pattern [58]. Nestlings hatch when parasite prevalence peaks in vector populations [65,66], so they are expected to rapidly acquire numerous parasite lineages present in the breeding area. Maturation of the immune system in birds may take several weeks after hatching and, therefore, investment in immune defence may come at the expense of investment in somatic growth (reviewed in [67]), which may be especially evident for species with determinate growth (i.e. growth ceases before chicks leave the nest) like passerines. Moreover, in species or populations at higher risk of nest predation the trade-off between investment in immune defence and investment in growth may be particularly important, as predation is one ecological factor that exerts strong selection on growth rates in passerines [68,69]. Ground-nesting species living in open environments in the Mediterranean region are under strong predation rates [40,70], and previous work conducted on the same Tawny pipit population has shown that nest predation is a key factor driving the species' reproduction [40]. As a consequence, nestlings of altricial or semi-altricial species, such as Tawny pipits, should reach adult size faster than precocial species [71], indicating that growth is compressed into a shorter period, fuelled by a strong evolutionary pressure to leave the nest as soon as possible in order to avoid or minimize predation [72]. Therefore, nestlings may prioritize growth over development of immune function, causing high disease incidence in this age group despite the short time of exposure to parasites. The high prevalence found in nestlings, together with the fact that most parasite haplotypes were found infecting both nestlings and adults, points towards transmission of these parasites in the European breeding grounds of Tawny pipits. The only exception was the Haemoproteus haplotype, which was detected only in adults. This suggests that parasites of this clade could have been transmitted in the winter quarters (i.e. Africa), as proposed in the case of many other Haemoproteus and Plasmodium haplotypes for which suitable vectors may currently be absent in Europe [73]. Alternatively, perhaps, the prepatent period for Haemoproteus spp. described in the literature (between 11 days and 3 weeks) diminishes the probability of detection in the nestling age cohort.
Differences between males and females in prevalence and/or intensity of infection, in particular male-biased parasitic infections, are often observed in nature [6,31,35,[74][75][76]. However, sex-bias in parasitism is not universal and consistent, and often varies between and within host-parasite systems [3,58,77]. In Tawny pipits, males had higher parasite prevalence than females in both age cohorts, a pattern compatible with gender differences in susceptibility to parasites according to biological differences between host sexes [78]. In the avian malaria system, host sex is considered as a potentially important source of variation in both prevalence and cost of parasites for hosts [78]. The proximate explanations for sex-bias in parasitism may be caused by many different factors that revolve around two hypotheses, which are not mutually exclusive: 1) certain hormones have an immunosuppressive effect upon the host defence against pathogens, and 2) because of morphological and/ or behavioural differences, one sex has a greater likelihood of being parasitized by differential exposure to vectors [35,79]. In Tawny pipits, males and females do not differ substantially in mass or colour [80] and their home ranges overlap within territories, which were probably not large enough (3.5-12.1 ha; [81]) to explain differences in vector exposure between males and females. Although behavioural differences cannot be excluded as a potential explanation, there is overwhelming evidence that sex-associated hormones can directly influence the susceptibility of each sex to infections [35]. For example, testosterone has immunosuppressive effects in some species, leading to increased susceptibility of males to parasite infections [82]. However, while the immunosuppressive effect of sex-linked hormones is a well-recognized phenomenon in adulthood [83,84], that could explain the results found in adults, its rationale in the case of nestlings is not clear, as differences in susceptibility to parasite infection might not be attributable directly to sexual differences in circulating testosterone levels [85]. Firm conclusions cannot be drawn about why males were more infected than female nestlings, but the results found on differential growth between sexes (discussed below) should shed some light on this issue.
Male and female nestlings leave the nest at the same time; therefore, for a species where males are slighter larger in wing size than females [80], males may need to grow faster than their smaller sisters, as supported by the higher wing growth rate of male nestling compared to females found in this study. The growth strategy of small breeding-ground passerines under strong nest-predation rates may be due to the need to leave the nest as soon as possible, which depends on their ability to fly and, thus, the development of the wings. If male nestlings need to allocate more resources to growth and less to prevent infections than females this could result in increased parasite prevalence in males as compared to females. In a comparative study with amphibians, Johnson et al. [86] showed that amphibian species with rapid growth were more susceptible to infections and pathology than species with slow growth. Within birds, selection for growth compromises the immune function in lines of commercial poultry (meta-analysed by van der Most et al. [87]). Although the increment in wing length was higher for male than for female nestlings, wing-growth strategies in both sexes seem to be unaffected by parasite infections. As a faster development of wings can facilitate both escape from predators and survival outside the nest, growth of body components that reduces the risk of predation may be relatively prioritized over mass in species at higher risk of nestling predation [71]. In contrast, there is a clear sex-biased growth strategy in relation to the rate at which individuals gain body mass under malaria infestation. The advantage of 'being a male'-attaining higher weight and larger biometric size than sisters-, and the competitive strength related to this [88] becomes a handicap under parasite infestation, when the selective pressure for a faster growth is strong. In our study, the impact of parasites in term of mass gain is particularly important in the fastest growing sex [89], which support this idea. Unfortunately, the potential incidence of a reduced condition during growth on posterior mortality rates cannot be evaluated here, but several authors have demonstrated this kind of relationships [90,91]. Therefore, in species/ populations that evolved under strong (nest) predation pressures, such as Tawny pipits [40], the impact of malaria parasites might ultimately influence offspring mortality rates.
Once adulthood is reached, selection pressures changed, and the trade-offs between immune system and investment in reproduction makes females more susceptible to the effect of malaria parasites than males. In our species model, females make greater investment than males in terms of time and energy during breeding, since females are solely responsible for incubation and chick rearing, which included the search for food and feeding chicks [40]. As a result, Tawny pipit females show a continuous decline in body condition during the breeding season of around 27 % of initial mass [92]. Here, infection was associated to a reduced host condition for females but not for males, supporting the view that infections by malaria parasites cause sex-related differences in host body condition, presumably as a result of increased investment in reproduction by females compared to males. However, it is also possible that host with reduced body mass may be less likely to fight off parasites or their vectors. Further work is clearly needed to conclusively evaluate the link between reproductive effort in both sexes and the effect of parasites acquired by Tawny pipits.
Conclusions
Most studies only consider adults when investigating avian malaria parasites, and as a result infection in nestlings is under-represented in the literature. This study focused not only on adults (parasite prevalence and its association with body condition) but also on infections in nestlings (prevalence and effects on nestling growth). The study demonstrated a high prevalence of avian malaria parasites in nestlings (7-11 days old), comparable to adult prevalence, and sex-related differences in both age groups. Both the acquisition of these parasites and their consequences for individuals depend on host sex and age. Trade-offs between investment in the immune system and investment in other tasks (reproduction in adults and growth in nestlings) may explain the results found, and some ecological factors (nest predation) could exacerbate the effects of malaria parasites for avian hosts at the population level.
Authors' contributions
MC-R and JTG conceived and designed the study, analysed data and wrote the manuscript. MC-R collected data and performed molecular analyses under the supervision of JTG. Both authors read and approved the final manuscript.
Can hockey playoffs harm your hearing?
Excessive exposure to loud sounds is the leading cause of preventable hearing loss, 1 and most cases of noise-induced hearing loss are due to occupational exposure. The importance of hearing protection in the workplace is now well recognized, and most industries in North America have programs and regulations in place to ensure the hearing health of their workers. Far less attention has been paid to auditory damage caused by noise outside of work. With the popularity of loud devices, such as MP3 players and cellular telephones, and noisy activities, such as rock concerts and sporting events, everyday life is increasingly hazardous to hearing for all members of society. Therefore, there is a growing need to increase awareness of potential sources of damaging sounds and education about the use of hearing protection during leisure pursuits.
This report illustrates the impact that even brief exposure to leisure noise can have on an individual's hearing, through the example of a Stanley Cup final hockey game. The success that the Edmonton Oilers enjoyed during the 2006 Stanley Cup playoffs electrified the city. It was suggested in the media that the arena used by the team was one of the loudest buildings in the National Hockey League, and the Canadian Broadcasting Corporation demonstrated noise levels at certain times during broadcasts with the use of a sound level meter. Although measuring sound levels at key points is informative, what matters most is the exposure of a given individual over the course of the entire game and the effects of that exposure on the person's hearing.
To measure cumulative sound exposure, the second author wore a noise dosimeter to games 3, 4 and 6 of the 2006 Stanley Cup finals between the Edmonton Oilers and Carolina Hurricanes. The effect on the hearing function of the second author and his wife was measured by audiological testing immediately before and after game 3.
Noise measurement
A data-logging noise dosimeter was set to sample the noise level near the second author's ear every second for the entire game. Thus, no matter where he was in the building, the dosimeter sampled his noise exposure.
Audiometric tests
Two audiometric tests were used for the pre- and post-game measures: pure-tone audiometry and otoacoustic emissions. Both tests were performed in a double-walled audiometric booth by a licensed audiologist using calibrated equipment. For the pure-tone audiometry test, we measured the softest pure tone that could be detected (threshold) at the following frequencies: 250, 500, 1000, 2000, 4000 and 8000 Hz. The distortion product otoacoustic emissions test assesses the integrity of the outer hair cells of the inner ear. The outer hair cells are important for detecting soft sounds and allow tolerance of a wide range of input intensities. Unfortunately, outer hair cells are usually the first structures to be damaged by exposure to loud noise.
Noise data
During game 3 of the series, the scoring of goals led to fairly obvious spikes in the noise level (Fig. 1). A level of 120 dB A is roughly equivalent to the sound level of a jet taking flight. (A-weighting is a filtering function applied to the noise dosimeter so that it is sensitive to input frequencies in the same way as the typical adult ear is.) The intermissions offered a temporary reprieve for the ears, but even during those interludes, the noise level was such that in an equivalent 8 h/day workplace environment, hearing protection would be required by law.
The average exposure levels for each game (> 3 hours) were 104.1, 100.7 and 103.1 dB. Standards have been defined for maximum allowable daily noise doses, 2 and an average level of 85 dB A for 8 hours is generally considered the maximum allowable daily noise dose. Stated differently, this means that there is a risk of hearing damage if you experience that level of noise for more than 8 hours. For each 3 dB increase in average noise level, the time you can safely stay at a level is halved. Thus, at 88 dB, it would take only 4 hours to reach the maximum allowable daily noise dose, at 91 dB it would take only 2 hours, and so on. For the levels experienced in game 3 of the series, the time to reach the maximum allowable daily noise dose was less than 6 minutes. In terms of projected noise dose, each person in the arena not wearing hearing protection received about 8100% of their daily allowable noise dose. Given that most fans do not wear hearing protection during hockey games, thousands are at risk for hearing damage.
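As a rough illustration, the dose arithmetic described above can be sketched in a few lines of Python (a minimal sketch assuming the 85 dB A / 8 h criterion and the 3 dB exchange rate stated in the text; the ~8100% projected dose quoted above came from the dosimeter's integration of the full time series, so this average-level estimate will not reproduce it exactly):

def allowable_hours(level_dba, criterion_dba=85.0, criterion_hours=8.0, exchange_db=3.0):
    # Hours of exposure at level_dba before 100% of the daily allowable dose is reached
    return criterion_hours / 2 ** ((level_dba - criterion_dba) / exchange_db)

def noise_dose_percent(level_dba, exposure_hours):
    # Exposure expressed as a percentage of the maximum allowable daily dose
    return 100.0 * exposure_hours / allowable_hours(level_dba)

for level in (104.1, 100.7, 103.1):   # average levels reported for games 3, 4 and 6
    minutes = allowable_hours(level) * 60
    print(f"{level} dB: daily dose reached in {minutes:.1f} min; "
          f"3 h of exposure is roughly {noise_dose_percent(level, 3.0):.0f}% of the daily dose")

For game 3, this reproduces the figure of less than 6 minutes to reach the maximum allowable daily dose.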
Audiometric data
Pure-tone audiometric data indicated that the hearing thresholds of both subjects deteriorated by 5 to 10 dB for most frequencies. The biggest changes occurred at 4000 Hz (the frequency known to be most susceptible to noise damage), where subject 2 experienced a temporary threshold shift in one ear of 20 dB. Whereas 5 to 10 dB may be within the test-retest confidence limits of pure-tone audiometry, 20 dB represents a real change in hearing status. It is important to note that this temporary threshold shift usually disappears in a day or two. However, if the ears are subjected to further noise exposure before full recovery, the temporary threshold shift may become permanent. 3 According to the otoacoustic emissions data, subject 1 experienced a decrease in the strength of the outer hair cell responses. Consistent with the pure-tone results, the decrease was more pronounced at higher frequencies. For subject 2, the otoacoustic emissions were so strong both before and after the game that any decrease in emissions might have been masked by an equipment ceiling effect. Both subjects described the world as sounding muffled after the games, and both experienced mild ringing tinnitus.
Interpretation
Most people do not consider the risk of excessive noise exposure when participating in leisure activities. However, as this brief report shows, leisure noise over a period of a few hours can be harmful if precautions are not taken. The risk of hearing loss for those who attend hockey games frequently (e.g., season ticket holders, arena workers and the hockey players themselves) warrants serious consideration. Even the cheapest foam earplugs will attenuate sounds by about 25 to 30 dB. At the levels experienced during these hockey games, such earplugs would drop the average sound exposure to below 80 dB, where no hearing damage is likely to occur (even if the game were to go into quadruple overtime). And, contrary to popular belief, communication in noisy environments is actually easier with earplugs than without. 4 The 2 most common symptoms of excessive noise exposure are hearing loss and tinnitus, both of which can have a substantial negative impact on quality of life. We live in an increasingly clamorous world, and many of our occupations and leisure activities are potentially hazardous to hearing. More than ever before, there is a need to broaden awareness and better educate everyone about the need to protect hearing, both at work and at play.
Robotized Knee-Ankle-Foot Orthosis-Assisted Gait Training on Genu Recurvatum during Gait in Patients with Chronic Stroke: A Feasibility Study and Case Report
Genu recurvatum (knee hyperextension) is a common problem after stroke. It is important to promote coordination between knee and ankle movements during gait; however, no study has investigated how multi-joint assistance affects genu recurvatum. We are developing a gait training technique that uses a robotized knee-ankle-foot orthosis (KAFO) to assist the knee and ankle joints simultaneously. This report aimed to investigate the safety of robotized KAFO-assisted gait training (Experiment 1) and to conduct a clinical trial to treat genu recurvatum in a patient with stroke (Experiment 2). Six healthy participants and eight patients with chronic stroke participated in Experiment 1. They received robotized KAFO-assisted gait training for one or 10 sessions. One patient with chronic stroke participated in Experiment 2 to investigate the effect of robotized KAFO-assisted gait training on genu recurvatum. The patient received the training for 30 min/day for nine days. The robot consisted of a KAFO and an attached actuator of four pneumatic artificial muscles. The assistance parameters were adjusted by therapists to prevent genu recurvatum during gait. In Experiment 2, we evaluated the knee joint angle during overground gait, the Fugl-Meyer Assessment of the lower extremity (FMA-LE), the modified Ashworth scale (MAS), the Gait Assessment and Intervention Tool (G.A.I.T.), the 10-m gait speed test, and the 6-min walk test (6MWT) before and after the intervention without the robot. All participants completed the training safely in both experiments. In Experiment 2, genu recurvatum, FMA-LE, MAS, G.A.I.T., and 6MWT improved after robotized KAFO-assisted gait training. The results indicate that the multi-joint assistance robot may be effective for genu recurvatum after stroke.
Introduction
Genu recurvatum (knee hyperextension) refers to the extension of the knee in the affected leg beyond the neutral anatomical position during gait [1]. Genu recurvatum is a common disorder after stroke [2]. The incidence of genu recurvatum due to hemiplegia has been reported to be 19.5-65% among patients who could walk without assistance [3,4]. Problems associated with genu recurvatum include an increase in asymmetry and energy cost in walking [5], an increased impact due to foot contact in the knee extension position [6], and a risk of joint deformity due to pain and ligament extension [7]. Treatment for genu recurvatum is necessary for patients with frequent walking opportunities.

Healthy Participants
We recruited healthy adults aged 20-80 years. None of the healthy individuals had a history of neurological disease or orthopedic disorders of the legs.
Stroke Patients
We recruited nine patients with chronic stroke. The inclusion criteria were as follows: (1) patients with a stroke for more than six months; (2) patients with hemiparetic stroke and motor disability of the lower extremity; (3) first occurrence of stroke; (4) no medical history of fracture or injury from a fall within the three months preceding the study; (5) ability to understand and consent to the objectives and methods of the study; (6) ability to safely maintain a standing position, with or without assistance; and (7) presence of genu recurvatum during gait. The exclusion criteria were as follows: (1) serious cardiac disease; (2) uncontrolled hypertension; (3) acute systemic disease or fever; (4) recent pulmonary embolism, acute cor pulmonale, or serious pulmonary hypertension; (5) serious liver or renal dysfunction; (6) serious orthopedic disease that bars exercise capability; (7) serious cognitive dysfunction or psychiatric disorder; (8) other metabolic abnormalities; (9) serious contractures of the joints of the legs; (10) implanted electronic pacing or defibrillation devices; (11) surgical history of shunting or clipping; and (12) medical history of epilepsy. Patients A-H participated in Experiment 1, and patient I participated in Experiment 2. All patients were community ambulators and walked independently using an ankle-foot orthosis (AFO). Some patients used a T-cane. The demographic and clinical characteristics of the study participants are summarized in Table 1.
Figure 1 illustrates the exoskeleton robotic device, which extends the ankle exoskeleton robot detailed in a prior report [22] to a knee-ankle exoskeleton robot. In the prior report [22], the widely used AFO with a Klenzak double-channeled ankle joint was robotized instead of designing an ankle-joint exoskeleton robot from scratch, and was termed "the robotized AFO". Accordingly, the robotic device in Figure 1 is termed "the robotized KAFO". The robotized KAFO consists of the following four parts: the exoskeleton body consisting of the KAFO of metal struts, the actuator of the PAMs, the operation computers, and the control computer. As the engineering researchers in our team aimed to develop a gait-assist robot that can be easily used in a clinical setting, they developed novel Modular Exoskeletal Joints to actively drive a double-bar AFO. Each modular joint is driven by external PAMs to achieve both a large push-off force during the terminal stance phase and a light weight during the swing phase, as in the prior work [22]. In this study, we applied this technology to a KAFO and developed a robotized KAFO for patients with knee-ankle linkage problems. The specifications of the robotized KAFO are listed in Table 2.
Figure 1. Robotized knee-ankle-foot orthosis. The device consisted of an orthosis, Modular Exoskeletal Joints, nested chamber pneumatic artificial muscles (NcPAMs), and a control personal computer. The NcPAM unit was carried on the patient's back and its weight was relieved by a body-weight support device. The four PAMs assisted knee flexion and extension, ankle dorsiflexion, and plantar flexion. The Modular Exoskeletal Joints were attached to the knee and ankle joints of the KAFO to convert cable tension to joint torque. The forces generated by the PAMs were transmitted to the Modular Exoskeletal Joints via Bowden cables. The stator of each Modular Exoskeletal Joint is attached to the upper link of the orthosis, and the mover is attached to the lower link. Robot assistance was provided by the Modular Exoskeletal Joints driving the knee and ankle joints of the orthosis.
The actuators were four nested chamber PAMs (NcPAMs) attached to the two Modular Exoskeletal Joints at the knee and ankle. Each Modular Exoskeletal Joint is actuated by an antagonistic pair of NcPAMs. The NcPAMs can be located away from the exoskeleton thanks to the Bowden cable transmission system. Although the NcPAMs themselves are lightweight, the NcPAM unit was attached to the participant's back and suspended from the unloading device so that it felt light. As a result, the robotized KAFO weight borne by the participant was 2.9 kg. The KAFO is an exoskeleton that is not mechanically strong enough to directly receive the large radial force generated by the NcPAMs; therefore, it was necessary to design an exoskeletal joint that transmits only torque to the knee and ankle joints while the joint structure itself supports the large radial force. This structure also makes it possible to reduce the weight of the robotic exoskeleton. The functions of the two NcPAMs connected to the knee joint were knee extension and flexion, and those of the two NcPAMs connected to the ankle joint were plantarflexion and dorsiflexion. The robotized KAFO joints are inherently compliant when the NcPAMs are activated. More importantly, the joints are also mechanically transparent and backdrivable: when the NcPAMs are not activated, the knee and ankle joints can move freely without any feedback control.
Assist Setting
The robotized KAFO is an assistive device that allows patients to move their joints by themselves. It also inhibits abnormal joint motion and assists weak joint motion. The timing and amount of assistance can be changed to suit each patient; therefore, it responds flexibly to patients with various gait peculiarities. We regulated the assistive power and timing of each NcPAM using a parameter control device. The adjustment of assistive timing was applied as feed-forward regulation based on identification of the walking phase. The walking phase was identified by an algorithm based on foot pressure data obtained from force-sensing resistor (FSR) sensors on both feet. The FSR sensors were placed under the balls of the great toes and the heels. The FSR values during unassisted walking were used to identify the walking phase of each patient. Normalized data were linearized, and the gait cycle was identified based on the linearized values. The timings to begin and end the assist were independently adjusted in the four motion directions as a value between 1% and 100%, with one gait cycle being 100%.
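As an illustration of this feed-forward scheme, the following sketch maps the identified gait-cycle percentage to pressure commands for the four NcPAMs. It is a simplified sketch, not the authors' controller: the function and parameter names, window boundaries and pressure levels are hypothetical placeholders, the gait-cycle percentage would come from the linearized FSR signals, and in practice the therapist sets the timing and power values.

from dataclasses import dataclass

@dataclass
class AssistWindow:
    start_pct: float   # % of the gait cycle at which assistance begins (1-100)
    end_pct: float     # % of the gait cycle at which assistance ends (1-100)
    pressure: float    # commanded PAM pressure (arbitrary units), chosen by the therapist

# One window per motion direction; all values below are placeholders.
windows = {
    "knee_extension":    AssistWindow(start_pct=5.0,  end_pct=40.0,  pressure=0.6),
    "knee_flexion":      AssistWindow(start_pct=60.0, end_pct=75.0,  pressure=0.5),
    "ankle_plantarflex": AssistWindow(start_pct=40.0, end_pct=60.0,  pressure=0.7),
    "ankle_dorsiflex":   AssistWindow(start_pct=60.0, end_pct=100.0, pressure=0.4),
}

def pam_commands(gait_cycle_pct):
    # Return the commanded pressure for each PAM at the current point in the gait cycle.
    return {
        name: (w.pressure if w.start_pct <= gait_cycle_pct <= w.end_pct else 0.0)
        for name, w in windows.items()
    }

print(pam_commands(50.0))   # example: mid terminal stance, 50% of the gait cycle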
The assistive parameters were adjusted to prevent genu recurvatum, enhance push-off during the terminal stance phase, and achieve foot clearance and heel contact during the swing phase. Genu recurvatum was prevented by adjusting the parameters to prevent the knee joint angle from exceeding 0°. We monitored the foot pressure of each FSR sensor and the knee and ankle joint angles of the robot on a display throughout robot-assisted gait training, and adjusted the assistive parameters in response to changes in gait. In Experiment 2, we introduced a real-time evaluation system for the patient's knee angle to achieve a greater effect on genu recurvatum. An electric goniometer attached to the patient's paralyzed knee joint was synchronized with the robot so that the assistive parameters could be adjusted according to the patient's actual knee-joint angle.
Experiment 1
We conducted Experiment 1 to confirm the safety of robotized KAFO-assisted gait training, because this trial was the first clinical use of the robotized KAFO. First, we developed a safety assessment (Appendix A) to forestall the possible risks of robot-assisted gait training through experiments in six healthy participants. Experienced doctors, physical therapists, and robot developers participated in developing the safety assessment. The safety assessment consisted of the following three parts: (A) at the time of robot donning (physical condition, risk assessment of robot and harness fitting, and risk assessment of the environment); (B) during training (presence of unexpected assistance; presence of pain due to orthosis-leg contact, excessive leg motion, or antagonism between robot assistance and active muscle movement; and physical condition); and (C) after taking off the robot (skin condition at the contact points of the robot or harness, physical condition, and state of gait). The assessment included items about the participant's physical condition or presence of pain, and items about the state of gait to check for risks such as instability or falls due to changes in gait caused by training.
Second, we assessed the safety of both single-day and multi-day interventions in patients with stroke. Four patients with stroke (IDs A, B, C, and D) received single-day robotized KAFO-assisted gait training on a treadmill (20 min in total). Another four patients with stroke (IDs E, F, G, and H) received robotized KAFO-assisted gait training for 10 days. The participants donned the robot with the assistance of physical therapists within 5 min. The training was carried out with a harness connected to a body-weight support device to prevent falls, although the body-weight support was set to 0 kg.
Experiment 2
Since the safety of robotized KAFO-assisted gait training and the feasibility of multi-day interventions were confirmed in Experiment 1, Experiment 2 was conducted to preliminarily examine the therapeutic effect on genu recurvatum. A patient with stroke (patient I) received robotized KAFO-assisted gait training on a treadmill for 30 min/day, four days a week, for nine days in total. Patient I received no other physical therapy during the robotized KAFO-assisted gait training period or for at least one month prior to the intervention. A session consisted of 6 min of assistive parameter adjustment and three 8-min intervention trials, for a total of 30 min. Patient I donned the robot with assistance from physical therapists within 5 min. The training was carried out with a harness connected to the body-weight support device for fall prevention, although the body-weight support was set to 0 kg. During training, we instructed the patient to walk with the intention of relearning the movement of the leg in accordance with the robot assistance. Real-time feedback of the knee joint angle from an electronic goniometer was also applied. After training, a physical therapist instructed the patient on overground gait for 5 min to prevent falls due to changes in gait pattern. We performed assessments before (pre) and after nine days of training (post). Pre-assessments were done the day before the intervention start date, and post-assessments were done the day after the intervention ended. We also evaluated the safety of the training using our safety assessment.
Kinematic Data during Overground Gait
We used a three-dimensional (3D) motion capture system (Vicon Motion Systems Ltd., Oxford, UK) for gait analysis. The reflective marker set was chosen according to the Plug-in Gait lower-body model. Sixteen markers were attached to anatomical landmarks on both sides as follows: anterior superior iliac spine, posterior superior iliac spine, thigh, knee, tibia, ankle, toe, and heel. We sampled motion data at a frequency of 100 Hz. Foot sensors were also used to identify the gait cycles. A total of 12 gait cycles were measured.
The FMA is a commonly used assessment to rate the recovery of motor function. We used the MAS to assess spasticity in the quadriceps, hamstrings, and triceps surae muscles. This scale includes grades from 0 to 4; a higher score indicates more severe spasticity. A 10-m gait speed test was performed at both normal and fast speeds. The 6MWT was assessed according to the American Thoracic Society guidelines [26]. We wanted to reproduce the patient's usual walking both inside and outside her house; therefore, the 6MWT was assessed in two situations. First, the patient walked barefoot and without a cane, just as she walked inside her home. Afterwards, the patient wore an AFO and walked using a T-cane, just as she walked outdoors. For observational assessment of gait patterns, we used the G.A.I.T., which was developed to evaluate coordinated gait components after neural injury. The examiner rated the gait components based on a video of the patient's walking. The maximum total score is 62, and a lower score indicates better gait.
Data Analysis of Kinematic Data
We analyzed the 3D and EMG data using MATLAB software (MathWorks, Natick, MA, USA). Both sets of data were time-normalized for each gait cycle. The mean value of the joint angle over the total gait cycle was calculated for each time point (0-100%, 101 points in total) and plotted. EMG data were band-pass filtered at 20-450 Hz (4th-order Butterworth filter). We calculated the root mean square (RMS) with a time window of 200 ms and a convolution interval of 1/1500 s. The 101-point RMS for one gait cycle was calculated and averaged over 12 steps. This averaged RMS was normalized by the mean RMS of the 101 points. This 'normalized EMG' was plotted as representative data. Foot pressure under the heel and the ball of the foot was time-normalized for each stance phase and calculated as the average of 12 steps.
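For illustration, the EMG pipeline described above can be re-expressed in Python (the analysis itself was done in MATLAB). This is a minimal sketch: the 1500 Hz sampling rate is inferred from the stated 1/1500 s convolution interval, and the variable names and synthetic input are assumptions.

import numpy as np
from scipy.signal import butter, filtfilt

FS = 1500                    # assumed EMG sampling rate in Hz (from the 1/1500 s interval)
RMS_WINDOW = int(0.2 * FS)   # 200 ms moving window

def process_stride(emg_stride):
    # Band-pass filter (4th-order Butterworth, 20-450 Hz), moving RMS,
    # and time-normalization to 101 points spanning 0-100% of the gait cycle.
    b, a = butter(4, [20, 450], btype="bandpass", fs=FS)
    filtered = filtfilt(b, a, emg_stride)
    rms = np.sqrt(np.convolve(filtered**2, np.ones(RMS_WINDOW) / RMS_WINDOW, mode="same"))
    return np.interp(np.linspace(0, len(rms) - 1, 101), np.arange(len(rms)), rms)

def normalized_emg(strides):
    # Average the 101-point RMS curves over the strides and normalize by the mean value.
    mean_rms = np.mean([process_stride(s) for s in strides], axis=0)
    return mean_rms / mean_rms.mean()

strides = [np.random.randn(FS) for _ in range(12)]   # synthetic stand-in for 12 recorded strides
curve = normalized_emg(strides)                      # 101 values, one per % of the gait cycle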
Safety of Robot-Assisted Gait Training
In both experiments, all participants safely completed the training. No participant showed a considerable change in physical condition, and there were no problems with robot fitting, no pain during training, and no dangerous changes in the state of gait. Four patients with stroke completed the single-day intervention (20 min in total). Another four patients with stroke completed 20-min/day interventions for 10 days in total. In one patient, the pre-training safety assessment (robot-fitting section) identified contact between the fibular head and the brace. Since the contact area was padded before the training, the patient completed the training without any pain or adverse events. There was also no trouble continuing the intervention for multiple days. It was confirmed that robotized KAFO-assisted gait training can be performed safely by using the safety assessment, even with a multi-day intervention, so we moved on to Experiment 2.
In Experiment 2, slight pain at the lateral malleolus on the affected side was noted by the pre-training safety assessment (robot-fitting section) on the first intervention day. The cause of the pain was identified as contact between the shoe and the medial malleolus, so the contact area was padded. The patient then completed the intervention without pain.
Experiment 2
The results of Experiment 2 are presented in Table 3. Patient I showed improvement in the peak knee extension angle of the affected side during the stance phase of overground gait without the robotic device (pre: −13.7°; post: 12.1°) (Figure 2). The patient also showed improvement in the peak knee flexion angle of the affected side during the swing phase (pre: 12.5°; post: 33.3°). That is, after the training period, genu recurvatum did not appear during overground gait even without robotic assistance. The peak ankle plantarflexion angle changed from −34.6° to −31.1°, and the peak ankle dorsiflexion angle changed from 2.7° to 15.0° (Figure 2). The EMG data are shown in Figure 3. We compared the normalized RMS before and after training: the activation patterns of the RF and VM on the affected side during the early stance phase (immediately after heel contact) changed after training, with the VM becoming especially active in the early stance phase. The GM activity pattern during the stance phase also changed after training. The foot pressure data are shown in Figure 4. Before training, heel pressure responded throughout the stance phase (Figure 4A); after training, the response decreased at about 80% of the stance phase (Figure 4B). Before training, pressure under the ball of the foot was slow to rise, with the response starting only in the latter half of the stance phase (Figure 4C); after training, the response started at about 20% of the stance phase (Figure 4D). Data from the two sensors suggested an increase in forward load transfer during the stance phase.
Figure 3. Electromyography (EMG) on the affected side during gait before and after nine days of robot-assisted gait training. Graphs indicate normalized EMG during barefoot overground walking before (left) and after training (right). The horizontal axis indicates one gait cycle, with 0% indicating initial foot contact on the affected side and 100% indicating the next contact. The thick solid line represents the mean normalized EMG, and the thin vertical lines represent the standard deviation. We assessed eight muscles: the tibialis anterior muscle, the medial gastrocnemius muscle, the soleus muscle, the rectus femoris muscle, the vastus medialis muscle, the semitendinosus muscle, the biceps femoris muscle, and the gluteus maximus muscle. After the training, the genu recurvatum disappeared, and the activity of the vastus medialis muscle increased in the early stance phase.
Discussion
We found that robotized KAFO-assisted gait training could be performed safely when the safety assessment was used to prevent risk. In patients with chronic stroke, long-term robotized KAFO-assisted gait training may improve genu recurvatum during overground gait without the robotic device, as well as motor function of the paralyzed lower limb. Changes in the pattern of muscle activity during overground gait were also observed along with the improvement of genu recurvatum. We consider that robotized KAFO-assisted gait training has a therapeutic effect on genu recurvatum in patients with stroke.
Safety of Robotized KAFO-Assisted Gait Training
Robotized KAFO-assisted gait training was safely performed in all participants. We were also able to construct the safety assessment of robot-assisted gait training by conducting Experiment 1. Previous studies have assessed the safety of robot-assisted gait training in patients with neurological disorders using a self-report questionnaire [27], assessment of pain [28], recording of adverse events [27-30], and assessment using the U.S. Food and Drug Administration's list of known and unforeseen adverse events [31]. Before the training, we listed the possible risks and adverse events and included them in the safety assessment. As a result, we were able to detect pain caused by fitting the robot and thus prevent adverse events. Using the safety assessment may make it possible to perform robot-assisted gait training safely. Many differences between robotic devices and diseases remain to be analyzed; however, we suggest that this safety assessment is potentially useful for various robotic devices and robot-assisted gait rehabilitation.
Case Study of the 9-Day Intervention on Genu Recurvatum (Experiment 2)
The duration between the onset of stroke in patient I and her inclusion in this study was 3.7 years. Her genu recurvatum was habitual; therefore, before the intervention the patient had difficulty preventing genu recurvatum during gait by herself. However, after nine days of the intervention, the patient achieved overground gait without genu recurvatum, even without the use of the robotized KAFO. Robotized KAFO-assisted training can stimulate the learning of joint movements while preventing genu recurvatum.
In a previous report on robotic intervention for genu recurvatum, a 15-day intervention using a robotized knee orthosis could not prevent the appearance of knee hyperextension during both the stance and the swing phases [6]. The causes of genu recurvatum are not only disorders of the knee joint, but also movement disorders of the hip and ankle joints [2,5,7]. Therefore, gait training may be more effective if devices that assist both the knee and ankle joints are used during training sessions. A previous study examined kinematic changes before and after robot-assisted gait training in patients with chronic stroke; however, the study did not focus solely on genu recurvatum [21]. They found that Lokomat-assisted gait training for four weeks did not change the average coefficient of correspondence or the peak flexion/extension angles of the hip and knee. Although the Lokomat is an exoskeleton-type gait-assist robot, changing the kinematics of a stroke patient's gait appears to be relatively challenging. Our robotized KAFO is also an exoskeleton type, but none of its links is fixed, whereas the root link of the Lokomat is fixed to a stationary frame. One of its novel features is that the assist settings can be adjusted in detail for each patient. Such detailed adjustment of the assist settings may be important for changing the gait kinematics of patients with stroke, whose gait patterns are highly individual.
Before the intervention, the patient's barefoot overground gait showed smaller angular changes at the knee and ankle on the affected side and a loss of coordination between the knee and ankle joints. The training therefore focused on heel contact with the knee in a slightly flexed position, promotion of forward tilting of the lower leg after heel contact, and relearning of push-off in the pre-swing phase. The device can independently adjust the timing and power of the assistance for each joint motion, which may have enabled tailor-made assistance for her gait pattern. Furthermore, real-time feedback of the knee joint angle was provided on a monitor so that the patient could be aware of heel contact with the knee in a slightly flexed position. As a result, the patient acquired an overground gait without genu recurvatum. We suggest that relearning joint motion with robot assistance and motor learning with angle feedback may improve genu recurvatum. Considering that proprioceptive training using videographic observation improved genu recurvatum [14] and that electrogoniometric feedback of the knee joint showed consistent effects on genu recurvatum [2,32,33], motor learning with joint-angle feedback also plays an important role in the improvement of genu recurvatum.
In Experiment 2, the FMA-LE also showed improvement in E-I, E-II, E-IV, and F. Previous studies in patients with acute and sub-acute stroke have reported improvement in FMA-LE with robot-assisted gait training [34-39]. Although some studies have shown similar improvements in FMA-LE after robot-assisted gait training in patients with chronic stroke [40,41], the number of sessions or the duration of the intervention was longer than in our intervention. The robotized KAFO can independently adjust knee flexion/extension assistance and ankle plantarflexion/dorsiflexion assistance, as required. Furthermore, the MAS scores of the quadriceps, hamstrings, and triceps surae muscles improved after this intervention. The causes of spasticity include neural changes, muscle atrophy, and muscle contractures [41-44]. Increased motion of the knee joint during gait may have resulted in stretching and shortening of the thigh muscles and could have affected the viscoelasticity of the muscle-tendon complex. Spasticity in the quadriceps or triceps surae muscle is thought to be part of the cause of genu recurvatum [2,5,7]. Reduced spasticity of these muscles may have led to reduced extensor synergy and improved genu recurvatum.
In addition to the qualitative component of lower-limb movement, it is important not to lower the efficiency of gait. Walking speed did not worsen after training while genu recurvatum was controlled. Rather, the results of the barefoot 6MWT showed improved walking efficiency over long distances. After the intervention, as assessed using the sub-items of the FMA-LE (E-II and E-IV), flexor synergy and voluntary ankle dorsiflexion improved. Furthermore, the EMG activation pattern of the knee extensors changed. It has also been reported that the activity of the knee extensor muscle increases immediately after initial foot contact (in the early stance phase) in the gait of healthy individuals [3]. Before the intervention, the patient showed low VM activity in the early stance phase due to genu recurvatum. After the intervention, VM activity increased at the time of heel contact. Simultaneously, GM activity decreased. It is generally known that the GM is active in maintaining forward movement of the center of gravity against the impact of the ground at initial contact [45]. The decrease in GM activity at initial contact in this patient may be related to the reduced impact of contact with the ground owing to regaining initial contact with a slightly flexed knee. In relation to these changes in muscle activity and kinematics, the foot pressure data also suggested increased forward load transfer on the affected side during the stance phase. Since this was a single case study, the training effect of treadmill walking itself should be considered. There are limited reports showing that treadmill walking training alone changed the kinematics of the affected lower-limb joints in patients with stroke [46]. The change that Druzbicki et al. [46] reported in hemiparetic knee angle before and after training was about 5° in the mean value of 10 patients. In comparison with this result, our case showed larger knee and ankle joint angle changes. The changes in leg motor function and in the muscle activation pattern, together with the kinematic changes on the affected side, may contribute to improved walking efficiency. Further studies are needed to investigate muscle activity after the training.
Clinical Implication
Many patients suffer from genu recurvatum after a stroke [3,4] and show decreased walking efficiency or pain; therefore, treatment for genu recurvatum is required in this patient population. Our robotic device has the potential to be a therapeutic device for genu recurvatum.
Limitations
This study was conducted with a very small number of participants, and statistical analysis was not performed; therefore, the extent to which the effects of the intervention could be verified is limited. Follow-up assessments after completion of the intervention are unavailable and will be needed in the future. Another limitation of this study is that we could not assess the participants' activities of daily living. Additionally, robotized KAFO-assisted gait training was performed on a treadmill, joint-angle feedback was also provided in Experiment 2, and there was no control group; therefore, the observed effects cannot be attributed to the robotic assistance alone. A randomized controlled trial with a sufficient sample size is necessary in the future.
"year": 2023,
"sha1": "9f2ab2299696d90605ed4f332b4aaf58c460c0f2",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/12/2/415/pdf?version=1672881719",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bbf429790d0ff29806dcdc2f913cb107f1ab388e",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": []
} |
A knowledge management system in the strategic development of universities
The purpose of this study is a conceptual description of the implementation of knowledge management systems (KMS) as a mechanism for universities' strategic development. Knowledge management (KM) practice from around the world has demonstrated the positive influence of KMS on the productivity of educational institutions. The theoretical provisions and concept for KMS are determined based on an analysis of international experience of KMS use in higher education (HE). The theoretical provisions are that 1) staff activities, and the knowledge resulting from them, are the object of KM; 2) the specificity of HE restrains a direct transfer of the KM mechanism from business to HE; and 3) the uniqueness of each university determines the structure and content of a KMS for strategic development. The KM process in HE is reflected in the Socialization-Externalization-Combination-Internalization (SECI) model, where each stage contains a list of staff activities and a set of digital services. The novelty of the KM process model in HE is that knowledge flows in a wave, not a spiral. In this motion, knowledge passes from uncodified to partly codified and codified form. The study demonstrates that knowledge can go back from the partly codified stage to uncodified form for revision, and the knowledge flow can stop at any stage. The advantage of the concept we designed is the ability to control the flow of knowledge before it takes the codified form of a document. The digital environment for KM first allows management to control faculty activities at the initial stage of uncodified knowledge through measurement of activities, and then to estimate the knowledge flow itself. The gathered indicators help to make decisions to motivate or restrain faculty. The university management gets a complete picture of faculty activities with knowledge and of the intensity of knowledge flow in training courses and educational programs.
Introduction
The strategic management of a university has to respond in a timely manner to ongoing changes in society, ensuring its development and the fulfillment by the university of its mission [1]. The source of the current transformation is digitalization, which creates new capabilities for universities and new requirements for the structure and content of education [2-4]. Research into a proper way to meet the challenges of digitalization is within the responsibility of the strategic management of a higher education (HE) institution [5]. The purpose of the study is to present a conceptual model of knowledge management (KM) as a mechanism of university development in the context of the digitalization of society.
Business practice gives examples of implementation of KM as a tool for the development of large companies. Relying on KM, companies create innovations, develop their personnel, and extend their activities [6,7]. Corporate management has transmitted modern features to the theory and practice of KM. However, universities were the first to create knowledge management systems (KMS) based on scientific and learning activities. From the beginning, universities provided facilities for working with knowledge for society at large. Traditional methods and technologies in HE should be extended with innovative mechanisms in order to leverage the potential of digitalization for the evolution of HE and to respond to digital challenges properly.
Digital transformation, compared with the previous stage of technological transformation, is characterized by an increase in the complexity of knowledge [4]; acceleration of scientific and technological progress [8]; expansion of areas of interdisciplinary knowledge; an increase in the intensity of the use of intellectual assets [9]; and the emergence of new forms and methods of professional employment of people [10]. These and other characteristics of digitalization give rise to challenges to HE.
Overcoming these challenges depends on how the university organizes work with knowledge in a digital environment, including the creation, acquisition, accumulation and use of knowledge, which is the main asset of HE. Meanwhile, digitalization opens up new sources, technologies, and methods for KMS, which allow faculty to redistribute routine operations to IT and creative cognitive tasks to humans [11]. The high importance of KMS in HE is due to the great influence of HE on society and its transformation into a knowledge society [12].
Statement of research tasks
The ISO standard Knowledge management systems - Requirements establishes a broad framework for the definition of KM and admits a variety of different tools and methods for KM [13], because the structure and content of knowledge are very specific to each industry and even activity. A generalized definition can be formulated relying on sources [13-15]: a KMS is an organizational and IT environment in which a set of available methods and tools is used by people to create, accumulate, store, search, enrich, exchange and apply knowledge.
The KM mechanism should be adapted to the specificity of HE in the context of digitalization in order to ensure the achievement of the strategic goals of university development. For the conceptual design of a KMS for HE, it is necessary to solve the following tasks:
♦ development of theoretical provisions of KMS in higher education;
♦ design of a conceptual model of KMS to support the strategic management of the university.
There are many different dimensions of activity and processes in any university. This study focuses on the business process that involves most of the faculty staff: the development of training courses and educational materials. Deja [14] and Yeh [15] highlighted this specific activity as academic KM.
Literature review
Some KM methods have penetrated the operational level of universities' everyday staff activities. Tools embedded in KMS, such as network drives, web services and messengers, are actively used by teachers in the educational process for collaboration and sharing [16]. Occasionally, the implementation of a method or tool from the KM toolbox is not labeled as KMS by researchers. Publications that consider KMS as a field of science can be classified according to KM objectives. KMS can be focused on the development of innovation [17], maintaining partnerships with businesses in industries [18], promoting technological transformation in HE [19,20], facilitating e-learning [21] and teaching the theory and practice of KM [22,23]. This list of KM objectives is not complete due to the huge variety of tasks and dimensions in HE. Meanwhile, Quarchioni et al. [24] summarized KM practices in HE as six conceptual approaches to the KM objective: 1) control of intellectual assets, 2) transfer of knowledge and best practices, 3) improvement of KM technologies, 4) KM training, 5) creation and sharing of academic knowledge and 6) implementation of KM.
KM in HE as a scientific field is multidisciplinary, so papers on the topic "knowledge management in higher education" cover 123 research areas in the Web of Science. There were 3102 publications retrieved in the Web of Science for the 2000-2020 period: 29% of the papers are in the research area of education, 25.1% in business economics, 9.7% in engineering, 8% in computer science, and 7% in information science and library science. Most of the research findings on KM in HE are focused on the practice of solving operational tasks in HE. Only 1% of the papers raise issues of KMS in the strategic management of HE.
The practice of KMS in strategic management has already been studied at several universities included in the World University Rankings¹. The experience of King Abdulaziz University in Saudi Arabia [25] demonstrates organizational culture as a main driver of KM. The Italian University of Bari [26] and Ca' Foscari University of Venice [27] use the facilities of KMS as an environment for interactions between the academic and business communities, and also as a mechanism for attracting enrollees. At China's Wuhan University of Technology [28] and India's University of Delhi [29], KMS is implemented to ensure demand for graduates and their future employment. Another Chinese university, Northwestern Polytechnical University [30], uses KMS to enhance research activities among faculty staff and students. A case study of Moscow State University of Economics, Statistics, and Informatics (Russia) presents KMS as a means for the innovative transformation of education into e-learning [31]. Consideration of the above sources shows that the effectiveness of KMS is measured through indicators of university performance. There is extensive experience of applying performance indicators for strategic management in HE within large simulation systems, decision support systems and business intelligence [32]. Digital transformation projects, regardless of the field of activity, are always aimed at the strategic development of an organization [33,34].
Meanwhile, a conclusion about the positive impact of KMS on university performance can hardly be drawn without reservations, owing to so-called survivorship bias. In the sources under consideration, the assessment of KMS influence on university performance is based on surveys among students [29,35,36] and lecturers [25,37] showing their satisfaction with KMS and the relevance of KMS to their activities. It is necessary to take into account the limitations of surveys and expert judgment in evaluating performance when interpreting the results. The conclusions drawn from our review of the literature cannot be extrapolated to HE as a whole due to differences in the understanding of KMS and its tools on a case-by-case basis.
Several studies have been carried out on a national scale covering the practice of KMS at a few universities: in the United Kingdom [36,38], Australia [39], Spain [40,41], Poland [14] and Malaysia [31,42]. In studies at the national level, the tasks of the KMS are revealed in the context of state regulation and regional specifics. National HE systems differ significantly from each other, but they are united by the dominant influence of public authorities on the operational and strategic activities of universities. The introduction of KMS in Italian [27] and Australian [39] universities may be hindered by regulations. In Poland, the KMS is supported and implemented at the national level as a mechanism to ensure the transparency and manageability of intellectual assets at each university and throughout the country [14].
It is quite difficult to extract from these sources a universal KMS structure that could fit even universities of a single type. Moreover, in other industries there is no shared understanding of KMS structure and content. Common to HE and other industries is the awareness that KMS supports and ensures the achievement of strategic goals. A review of the literature shows a gap in the disclosure of the conceptual scheme of the KMS as a mechanism for strategic management of the university in the context of the digitalization of society. The academic community will have to conduct full-scale studies of KMS in the strategic management of the university.
Methodology of the study
The empirical data for studying the content of KMS in strategic management in HE were extracted from sources describing KMS practice of universities located in Australia [39], the United Kingdom [36,38], India [29], Italy [26,27], Spain [40,41], China [28,30], Malaysia [31,42], Saudi Arabia [25], Poland [14] and Russia [31]. Methods of analysis, comparison and generalization are applied to develop the theoretical provisions of KMS. Methods of categorization [43] and semantic modeling [44] are used to design a conceptual model of KMS in strategic management in HE.
Because of the huge number of university activities, the study is limited to considering the processes of developing educational programs and their educational and methodological content, i.e., academic knowledge. An attempt to cover all university fields at once could lead to blurry, non-specific results.
KM Object
Universities were the first organizations to practice KM systematically. Their managerial activity focused on knowledge accumulation, storage and dissemination. The relevance of knowledge control appeared in processes where knowledge is prioritized as an asset. The first business cases of KM considered the problem of knowledge retention that arises when a generation of employees changes [7]. A well-known and widespread solution to this problem is documenting and storing information about knowledge in an information system, library or knowledge base. Knowledge has been defined for centuries as subjective [45]: it does not exist outside the context of human activity. Thus, information systems store information about knowledge, not the knowledge itself. Recent studies support the concept of knowledge as a subjective category [46] and expand the list of knowledge subjects to include an organization and a local community [6,11,47]. Therefore, an organization can learn, create, store and use knowledge. Organizational knowledge as a management resource is characterized as intellectual capital and connects human, social and operational assets [48].
The subjective nature of knowledge determines the priority of the qualitative measurement of its value over quantitative characteristics [14]. The academic community discusses the issue of the qualitative measurement of scientific results, because quantitative measurement through bibliometric indicators does not reflect the level and significance of scientific results [49]. Such measurement should instead be given by an expert in the relevant scientific area [50]. Expert review is a time-consuming and expensive method, so it can be applied in cases where the main function of the KMS is distinguishing the most important knowledge. If the main function is the creation, sharing, dissemination and modification of knowledge, expert review will slow down KM processes. When scientific and technological progress is accelerating, such a slowdown of KM limits the flexibility and intensity of work with knowledge.
Processes of external and/or internal peer review are used to approve the syllabi of training courses in almost all Russian universities. The process of assessing the quality of knowledge is laborious and cannot cover the entire volume of knowledge circulating at a university.
Early KM theories relied on various surrogates for knowledge to separate knowledge from humans and extract the most valuable information from the available content. The founders of KM theory, Nonaka and Takeuchi [6], took the use of knowledge by people as the sign that knowledge is of value to the organization. For the purposes of innovation management, Kurlov and Petrov [51] introduce a concept of instrumental knowledge, on the basis of which an activity is transformed. The ISO standard [13] deals with the value of knowledge, not knowledge itself. In order to consider KMS as a mechanism for strategic management, it is necessary to put aside the discussion about the structure and content of knowledge.
The value of knowledge is defined by staff activities with knowledge in the performance of their labor functions. Thus, staff activities related to knowledge should be considered the object of KMS. The first theoretical provision is that the object of KMS is the activity of users in the knowledge environment.
ISO [13] defines an environment that provides favorable conditions for people to work with knowledge as a common means of KM. In a broad sense, the environment contains the internal capabilities of the organization and a part of the external sources of knowledge and experts. In a narrow sense, the environment is supported by the KMS, which is a set of organizational and information solutions for performing the functions of KM. Through the KMS, employees get access to knowledge, can interact with each other, and can use different methods and tools to work with knowledge.
Staff activities drive the knowledge flow in educational and other areas of universities. A study of communication between lecturers shows their high appreciation of the opportunity to interact with each other [52]. A series of conversations conducted with Nobel laureates in economics emphasizes the great role of the communication environment in their scientific progress. World science leaders highlight the importance of informal discussion of hypotheses and theories with colleagues [53]. The stage of informal discussion is included in the cycle of scientific and technical information, including unpublished materials; it is from this stage that the life cycle of knowledge begins in the knowledge management system of the state corporation Rosatom [7].
Specificity of KM in HE
The spread of KM technologies and methods among businesses is uneven. Almost every industry has its own KM methodology. The need to adapt and develop a special approach to KMS for a given industry is due to the specific properties of knowledge in each industry and even organization [13,54]. The dependence of knowledge on subjective interpretation in the context of an industry makes it difficult to directly transfer best KM practices across industries and organizations. KMS as a mechanism for strategic development came to HE from the business community [55]. In business, various ways of implementing KMS are used, which differ depending on the goals of strategic development and the specifics of the industry or market in which the organization operates. Rosatom developed its KMS on the basis of a scientific and technical information system to control codified (documented) intellectual assets [7]. The Japanese companies Honda Motors and Eisai relied on a knowledge environment in which employees deal mainly with uncodified (undocumented) knowledge [56].
The specifics of HE institutions influence the methodology of KMS for universities. The main feature lies in how the goal of strategic development is set. Kuzminov and Yudkevich [1] point out that the goals of strategic development for Russian universities are set by public authorities. There is also a dependence of the national HE system on budget funding, which limits any initiative of universities in choosing their own path of development. A large role of public authorities in KM practice in HE also stands out in Australia, Italy and Poland.
The KM environment is often considered from the perspective of three enlarged groups of elements: people, processes and technologies [57,58]. Through human activities, knowledge acquires its value and meaning. Often the department responsible for personnel development is also responsible for KMS. The processes performed in an organization determine the possibility and space to include KM activities in the business. These processes impose requirements on the structure and content of the KMS. Organizational development policies and regulations should rely on KMS and describe the KMS contribution to the performance of the organization and the productivity of employees. Current digital technologies provide KMS with ingenious tools for creating and sharing knowledge. The emphasis on one of the three enlarged groups of KMS elements puts responsibility for KMS on the HR, administration, or IT department of an institution. Table 1 summarizes the features of KMS at universities by people, processes and technologies.
Table 1. Features of KMS at universities

People:
- Confirmed high intellectual potential of employees (scientists and lecturers) [3,40]
- The ability to use intellectual potential from the business environment through graduates [59]
- Employees' acceptance of the value of the free exchange of knowledge for the development of education and science [42,60]
- Academic competition among faculty staff [36]

Processes:
- Conducting research and educational programs in a large number of fields [61]
- Diverse approaches to forming and supporting creative teams and projects
- Priority for fulfilling the public mission of science and education [35]
- Integration and intensive interaction with external communities [3,40]
- Strict regulation and control of HE by public authorities [62]

Technologies:
- External content sources: digital libraries, databases
- Scattered internal sources of content: teaching materials, scientific reports, regulations, etc.
- Strict information security requirements apply to work with personal data, but not to content that is created, used and distributed in education and research
- The concept of "BYOD", according to which lecturers and students themselves choose the computers, software and web services that suit them in terms of performance and cost [63]

The second theoretical provision is that the design, structure and contents of KMS for universities should take into account the features of HE in order to fully realize the high intellectual potential of employees and cover many scientific areas and training courses with the help of the heterogeneous IT facilities for education and science.
An analysis of the KMS practice in universities shows that each group of elements contributes to success and strategic development. Elements of KMS provide cultural [25], organizational [39] and technological [41] conditions for the success of KMS in HE.
Adaptation of the KM mechanism to strategic goals of university development

KMS as a mechanism for strategic development is based on the mission and values of the university [56]. KM cases in universities differ significantly from each other, but their common features are revealed when they are grouped by mission type. The practices of KM implementation in universities that follow a common mission type also have common features. There are three types of mission in HE: educational, scientific and the so-called third mission. The third mission appeared because of changes in society under the influence of scientific and technological progress, economic globalization, and political and economic crises [64]. The third mission of the university directly influences the socio-economic development of a city or local area by facilitating interaction between communities of entrepreneurs and citizens, the dissemination of best practices and new business models, etc. [65]. Meanwhile, universities that stay with their educational and scientific missions indirectly influence societal development through their graduates and scientific results. Universities have been guided by an educational and scientific mission for centuries. Lomonosov Moscow State University still follows the mission formulated in the 18th century.
The productivity of KMS is measured by the performance indicators of a university. Based on an analysis of the KM practice in different universities, the characteristics of KMS are extracted in accordance with the type of mission, in terms of geographic scope and KM means (Table 2). The university's educational mission focuses on the value of professional development and the demand for its graduates. Employment of graduates is regarded as one of the main key performance indicators of the university. Therefore, KMS aims to ensure that graduates of educational programs are in demand in the labor market. Universities with an educational mission conduct their activities in selected regions to build relationships with employers and interact with the labor market.
The scientific mission of the university sets the task for strategic management of advancing in world rankings, promoting papers in top scientific resources and obtaining world-class scientific results. These tasks determine the global scope of KMS [67]. The activities of faculty staff dealing with knowledge may be located outside the campus. Case studies of research universities raise the issue of the negative impact of some tools or practices of KMS on performance indicators. An analysis of the implementation of KMS by 70 Spanish universities found a relationship between the spread of IT for collaboration and a decrease in the number of publications in top-cited journals [41].
Universities of the third mission focus on the social, cultural and technological development of a particular region, such as a city [68]. The third mission is most often characteristic of entrepreneurial universities [26], which act as a connector between businesses, citizens and public authorities [65]. In smart city projects, universities perform the functions of generating, collecting and selecting knowledge to fill a lack of scientific and educational expertise in business and society. Rapid changes in technology, the economy and society require HE institutions to diversify sources of knowledge and ensure their transfer to society. Thus, universities link parts of a societal ecosystem: production, education, public administration and research. The considered cases of the use of KMS in universities of the third mission show a local or clearly defined regional scale of their activities.

Table 2. Characteristics of KMS for mission in HE

Educational mission [25,28,29]:
- Key performance indicators: employment, competencies, education, employer, student satisfaction, rating
- Geographic scope of activity: in selected regions or countries
- The most typical knowledge management tools: corporate portals, collaboration tools based on cloud computing

Scientific mission [30,40,63]:
- Key performance indicators: publications, rating, citations, patents on scientific results, innovations
- Geographic scope of activity: global
- The most typical knowledge management tools: communities of practice [66], knowledge libraries, variety of information sources, collaboration tools

Third mission [26,27]:
- Key performance indicators: innovations, competitive advantage, value, strategy, improvement of society
- Geographic scope of activity: regional
- The most typical knowledge management tools: LivingLabs [65], communities of practice
The third theoretical provision of KMS in HE is to ensure that the university fulfills its mission. At the same time, the productivity of KMS is measured by the key performance indicators of the university, and not by the performance of individual functions of KM.
Following key performance indicators in the strategic management of the university is the basis of BPM (business performance management) systems, which are already used in HE [32]. Thus, KMS can be embedded into an existing IT landscape of strategic management using the available IT infrastructure for data storage and analytical processing.
Conceptual scheme of knowledge flow in HE
The activities of faculty staff drive the knowledge flow in the university, which goes through stages from the birth of an idea of knowledge (the creation of a training course) to its use and distribution in codified form as educational and methodological materials. In HE, knowledge is often understood as scientific and technical information, and the process of creating knowledge goes through a cycle of unpublished knowledge, primary sources of knowledge publication and secondary sources of knowledge publication [7, p. 75]. In business practice, the SECI model by Nonaka et al. [56] is widespread. This model of the creation and use of knowledge in organizations consists of the stages Socialization, Externalization, Combination and Internalization (SECI). The authors of the SECI model distinguish the stages depending on the degree of knowledge codification and the number of participants involved. Based on the SECI model, Fig. 1 shows the stages in the development of teaching and learning materials. Figure 1 demonstrates a sequence of stages in a clockwise direction. The inner circle contains a list of staff activities, and the outer circle contains the means of the digital environment for the performance of these activities. For three stages (E, C, I), types of codified knowledge are given as examples; the figure does not contain a complete list of possible documents.
Stage S is the initiation or relaunch of a knowledge project. The stage consists of interpersonal interactions of a few lecturers. The results of this stage can be recorded in the form of drafts and a set of ideas, but they are not published as documents. Thus, knowledge is not registered and included in information systems or libraries, because it is uncodified. E-mail or social media can be used at this stage. The participants are a small group of authors.
Teece [69] points out that supporting staff activities with uncodified knowledge ensures that intellectual assets are a stable source of competitive advantage for an organization. In Russian universities, this stage is practically not controlled by management, since it takes place in the lecturers' environment and is not supported at the university level. Consequently, universities do not receive the possible benefits for their development from stage S of the creation of educational materials.
At stage E, knowledge is partly codified to involve more people in knowledge processing, such as the review, discussion and approval of materials submitted by authors. The approval of educational materials can be done in different ways. At HSE University, the academic council of an educational program reviews and approves the syllabi of training courses. At the Plekhanov Russian University of Economics, this is done by a scientific methodological council of higher schools.
Educational, learning and teaching materials are codified at stage C, when materials are approved and accepted. At this stage educational materials become available in libraries and information systems. They are open for lecturers and students to use in training courses of the university.
The final stage I in the KM process includes assessment, feedback, analysis and synthesis based on the experience of using knowledge. At stage I, we find the students' assessment of their learning experience during a training course and the lecturers' assessment of their teaching experience. The knowledge gained at stage I is codified as ratings, proposals, comments and recommendations resulting from the analysis and synthesis of the practice of using the materials.
The SECI model is often presented as a spiral on the timeline, where knowledge sequentially passes through the stages and the cycle of working with knowledge is repeated on a new round. The development of training and methodological materials in general goes through all the stages of SECI, but the trajectories can differ. Different trajectories arise because knowledge can move back and forth. For instance, after discussion at stage E a syllabus may return to the previous stage S for revision. Thus, on the timeline, the knowledge flow looks like a wave. Figure 2 is a schematic presentation of the academic knowledge flow, where the x-axis is a time scale and the y-axis is a categorical scale reflecting the levels of knowledge codification.
In Fig. 2 the wave shown by the solid line crosses level a of codified knowledge three times. This means that the training materials went through three full cycles and were used in the training course. At the peak of the wave, the educational materials are being approved and accepted. The waves shown as dashed lines do not reach stage C and do not cross level a; they do not enter a library or repository, and they are not introduced into training courses. Meanwhile, the work on this material is ongoing. A full stop at the end of a wave means the end of work on the materials. Some flows of knowledge are stopped after a week, while others can run on for years. Knowledge flows differ significantly in the duration and intensity of the waves, depending on the training course, the scientific area, and the motivation and competence of the author team. In some scientific areas the life cycle of knowledge can be more than five years, while in others it will not exceed a year [70]. Knowledge flows in various areas of training and scientific areas can take various periods of time, from several weeks to years.
The number of knowledge flows in a university can be indirectly measured through the number of educational programs and training courses. Knowledge flows can be grouped in an educational program or a scientific area based on departments.
The model of academic knowledge flow offered here does not change the usual course of its development but formalizes it for control and management. The traditional approach to KM through the codification and storage of knowledge in libraries allows universities to control knowledge that entered a library in codified forms such as syllabi, curricula, textbooks, etc. The importance of libraries as knowledge repositories is not subject to revision, but they should be complemented by digital means that support knowledge operations and interaction between employees. Staff activities use partly codified knowledge and are only partly controlled by the administration. All activities below level b are out of sight for the university administration. The flow of knowledge in the digital environment allows the administration to bring all its stages out of the blind zone and ensure control over them.
System of indicators for knowledge flow measurement
The main function of KMS is to support the knowledge flow, which is provided through measurement and control. The control of knowledge flow requires a system of indicators to assess the state, intensity and volume of knowledge flows.
The digitalization of society enhances the transfer of many activities and processes to the digital environment. One of the advantages of the digital environment is the ability to automatically gather data on selected metrics. The modern knowledge environment is a digital environment. A significant part of activities with knowledge is carried out using digital services, such as e-mail, messengers, online conferences, collaboration through cloud services and network storage disks. Thus, the digital environment of KM meets the necessary condition for the automatic measurement of the staff activities that set the knowledge flow in motion.
The SECI model shows that knowledge codification is preceded by the stage of knowledge emergence, which involves operations with implicit knowledge. It is impossible to measure uncodified or implicit knowledge, but it is known that it appears in staff interaction. This stage is usually not considered or controlled by the university administration. The existence of implicit knowledge in KMS can be compared to the phenomenon of a black box in cybernetics, in which input and output can be under control, but not the inside of the black box [71]. It is precisely at stage S (socialization) that new knowledge occurs or already known knowledge is adapted to changes and new requirements.
The digital environment allows for the capture of the state of each stage of the knowledge flow and control of its progress. The object of control in KMS is a staff activity; therefore, the system of indicators of the knowledge flow quantitatively measures the staff activities in the knowledge flow. In accordance with the SECI model for the stages of developing training materials, the indicators can be grouped as follows: 1) interaction and communication between employees characterize stage S, which does not contain codified knowledge; 2) the contribution of employees to the knowledge library characterizes stage C; 3) knowledge sharing characterizes stages E and I, where knowledge is partially codified. In the knowledge flow scorecard, the latter two stages are combined because they both involve discussion and interaction involving a group of stakeholders (a supervisor of the educational program, students, lecturers, etc.).
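To make this grouping concrete, the sketch below shows how event counts collected from the digital environment could be aggregated into the three indicator groups for a single knowledge flow. It is a minimal illustration only; the event names and their mapping to SECI stages are assumptions made for the example and are not prescribed by the model.

from collections import defaultdict

# Hypothetical mapping of digital-environment events to indicator groups:
# S  - interaction and communication (uncodified knowledge)
# C  - contribution to the knowledge library (codified knowledge)
# EI - knowledge sharing at stages E and I (partially codified knowledge)
EVENT_GROUPS = {
    "message_sent": "S", "meeting_held": "S",
    "document_uploaded": "C", "syllabus_published": "C",
    "comment_posted": "EI", "review_submitted": "EI", "rating_given": "EI",
}

def knowledge_flow_indicators(events):
    """Count events per indicator group for one knowledge flow (e.g., one training course)."""
    counts = defaultdict(int)
    for event_type in events:
        group = EVENT_GROUPS.get(event_type)
        if group:
            counts[group] += 1
    return dict(counts)

# Example: events logged for one training course
log = ["message_sent", "document_uploaded", "comment_posted", "message_sent", "rating_given"]
print(knowledge_flow_indicators(log))  # {'S': 2, 'C': 1, 'EI': 2}

In practice, such counts would be collected automatically from the services of the digital environment and compared across training courses and educational programs.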
The knowledge flow indicators are presented in Table 3, which contains a short description of the source, data type and method of measurement for each.
A comparison of the values of the indicators of knowledge flows for different training courses and educational programs is a function of KMS specific to HE. Similar values of staff activities at stage S for most knowledge flows point to the homogeneity of the organizational culture at a university. A means for managerial impact on the promotion of the organizational culture is justified by measuring the indicators of stage S. If the values of staff activities in one knowledge flow are lower than in other flows, this indicates the disunity of the lecturer team in that area. In business, the phenomenon of sabotage is known [72]: this is when employees deliberately exclude themselves from the flow of knowledge.
At stage C, the contribution of a lecturer to the accumulation and keeping of knowledge is assessed. Meanwhile, the value of the knowledge itself is not assessed. The indirect assessment of the value of knowledge through its relevance carries a risk that some knowledge may be underestimated and lost. This risk was first described in the middle of the 20th century, when it was discovered that society does not have enough capacity to store and process the entire information flow, which is permanently growing and varying [73]. Despite the breakthrough development and spread of digital technologies, this risk remains relevant [74].
Knowledge sharing indicators characterize the stages of work with partially codified knowledge, when other persons in addition to the authors join the knowledge flow to discuss and improve materials. The values of these indicators point to the intensity and volume of the flow of knowledge, and help determine the need to support staff activities at stages E and I.
The knowledge flows of a contemporary university are growing and changing all the time. The digital environment is suitable for measuring and monitoring the processes of working with knowledge. The stages of creation and use of academic knowledge become transparent for control and, therefore, manageable. KMS should be considered as one of the application-layer elements of the IT architecture of a university, as shown in Fig. 3. Using the service approach, KMS is integrated into the IT landscape of the university in such a way as to use the capabilities of the multidimensional warehouse for storing and processing the indicators of the knowledge flow, and of BPM systems for measuring performance indicators and evaluating the performance of KMS.
On the one hand, KMS uses the possibilities of digitalization in terms of simulation modeling and predictive analytics of knowledge flows. On the other hand, KMS complements the strategic management systems of HE with data on the flows of knowledge, which have a decisive impact on university performance.
Conclusion
In the context of high technological and economic dynamics, the university, along with business, needs a favorable environment for creating the innovations that ensure its development. In business practice, an approach using the methods and technologies of knowledge management has become widespread. These means, combined in KMS, can complement traditional higher education approaches based on scientific research and systematic university staff training. The specificity of KMS in higher education lies in the fact that the object of control is the activities of faculty staff in the development, modification, discussion and use of educational materials. The flow of academic knowledge is set in motion by lecturers, from the birth of an idea to its implementation in the educational process and subsequent refinement. The introduction of KMS in a university requires taking into account the specifics of higher education, such as the large number of training courses and scientific areas, the proven high intellectual potential of staff, and the disparate IT infrastructure of the university with many involved technologies and knowledge sources. Also, the methods and technologies in KMS should be adapted to the individual needs and capabilities of each university, which are determined by its mission, region, scale and other parameters. The specifics of each university make it difficult to develop a standard of KMS suitable for all institutions of higher education, but they do not prevent knowledge flow modeling.
The flow of academic knowledge at the university is presented based on the SECI model of the process of creating and using knowledge in organizations. Our modified SECI model, adapted to higher education, contains a list of activities and digital services that ensure the motion of the knowledge flows. The flow moves in waves through the stages of uncodified knowledge (S), partially codified knowledge (E, I) and fully codified knowledge (C). Currently, almost all knowledge management functions are carried out using IT, which allows us to control the indicators of the intensity of the knowledge flows.
A knowledge flow in the digital environment becomes transparent, making it possible to measure its scope, intensity and volume. Timely and informed decision making relies on the measurement of knowledge flows. The proposed system of indicators measures the interaction and communication between faculty staff, their contribution to the creation of educational materials, and their sharing of knowledge. The modern methodology of KMS makes it possible to form a set of events to involve almost all university staff in the development and dissemination of knowledge. A university that does not fully control its knowledge flows does not have a complete understanding of the innovative potential of its strategic development. Further research in the field of KM in higher education is aimed at developing the principles of KMS at universities and structuring the methods and technologies of KMS by levels of management and areas. The authors of this study are working on testing the theoretical and methodological provisions of KMS proposed in this article at a team level in Russian universities.
Fig. 1. Process of development of learning and teaching materials based on the SECI model.
"year": 2023,
"sha1": "0ef02d74f4398542ab71bf805c7287e6bea2305d",
"oa_license": null,
"oa_url": "https://bijournal.hse.ru/data/2023/06/28/2077689505/2.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "03c4f11d9925350a12b73e9ba95d973d37c99f33",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
Overview of Virus Metagenomic Classification Methods and Their Biological Applications
Metagenomics poses opportunities for clinical and public health virology applications by offering a way to assess complete taxonomic composition of a clinical sample in an unbiased way. However, the techniques required are complicated and analysis standards have yet to develop. This, together with the wealth of different tools and workflows that have been proposed, poses a barrier for new users. We evaluated 49 published computational classification workflows for virus metagenomics in a literature review. To this end, we described the methods of existing workflows by breaking them up into five general steps and assessed their ease-of-use and validation experiments. Performance scores of previous benchmarks were summarized and correlations between methods and performance were investigated. We indicate the potential suitability of the different workflows for (1) time-constrained diagnostics, (2) surveillance and outbreak source tracing, (3) detection of remote homologies (discovery), and (4) biodiversity studies. We provide two decision trees for virologists to help select a workflow for medical or biodiversity studies, as well as directions for future developments in clinical viral metagenomics.
INTRODUCTION
Unbiased sequencing of nucleic acids from environmental samples has great potential for the discovery and identification of diverse microorganisms (Tang and Chiu, 2010;Chiu, 2013;Culligan et al., 2014;Pallen, 2014). We know this technique as metagenomics, or random, agnostic or shotgun high-throughput sequencing. In theory, metagenomics techniques enable the identification and genomic characterisation of all microorganisms present in a sample with a generic lab procedure (Wooley and Ye, 2009). The approach has gained popularity with the introduction of next-generation sequencing (NGS) methods that provide more data in less time at a lower cost than previous sequencing techniques. While initially mainly applied to the analysis of the bacterial diversity, modifications in sample preparation protocols allowed characterisation of viral genomes as well. The fields of virus discovery and biodiversity characterisation have seized the opportunity to expand their knowledge (Cardenas and Tiedje, 2008;Tang and Chiu, 2010;Chiu, 2013;Pallen, 2014).
There is interest among virology researchers to explore the use of metagenomics techniques, in particular as a catch-all for viruses that cannot be cultured (Yozwiak et al., 2012;Smits and Osterhaus, 2013;Byrd et al., 2014;Naccache et al., 2014;Pallen, 2014;Smits et al., 2015;Graf et al., 2016). Metagenomics can also be used to benefit patients with uncommon disease etiologies that otherwise require multiple targeted tests to resolve (Chiu, 2013;Pallen, 2014). However, implementation of metagenomics in the routine clinical and public health research still faces challenges, because clinical application requires standardized, validated wet-lab procedures, meeting requirements compatible with accreditation demands (Hall et al., 2015). Another barrier is the requirement of appropriate bioinformatics analysis of the datasets generated. Here, we review computational workflows for data analysis from a user perspective.
Translating NGS outputs into clinically or biologically relevant information requires robust classification of sequence reads-the classical "what is there?" question of metagenomics. With previous sequencing methods, sequences were typically classified by NCBI BLAST (Altschul et al., 1990) against the NCBI nt database (NCBI, 2017). With NGS, however, the analysis needs to handle much larger quantities of short (up to 300 bp) reads for which proper references are not always available and take into account possible sequencing errors made by the machine. Therefore, NGS needs specialized analysis methods. Many bioinformaticians have developed computational workflows to analyse viral metagenomes. Their publications describe a range of computer tools for taxonomic classification. Although these tools can be useful, selecting the appropriate workflow can be difficult, especially for the computationally less-experienced user (Posada-Cespedes et al., 2016;Rose et al., 2016).
Some of the metagenomics workflows have been tested and described in review articles (Bazinet and Cummings, 2012; Garcia-Etxebarria et al., 2014; Peabody et al., 2015; Sharma et al., 2015; Lindgreen et al., 2016; Posada-Cespedes et al., 2016; Rose et al., 2016; Sangwan et al., 2016; Tangherlini et al., 2016) and on websites of projects that collect, describe, compare and test metagenomics analysis tools (Henry et al., 2014; CAMI, 2016; ELIXIR, 2016). Some of these studies involve benchmark tests of a selection of tools, while others provide brief descriptions. Also, when a new pipeline is published the authors often compare it to its main competitors. Such tests are invaluable for assessing performance and they help create insight into which tool is applicable to which type of study.
We present an overview and critical appraisal of available virus metagenomic classification tools and present guidelines for virologists to select a workflow suitable for their studies by (1) listing available methods, (2) describing how the methods work, (3) assessing how well these methods perform by summarizing previous benchmarks, and (4) listing for which purposes they can be used. To this end, we reviewed publications describing 49 different virus classification tools and workflows-collectively referred to as workflows-that have been published since 2010.
METHODS
We searched literature in PubMed and Google Scholar on classification methods for virus metagenomics data, using the terms "virus metagenomics" and "viral metagenomics." The results were limited to publications between January 2010 and January 2017. We assessed the workflows with regard to technical characteristics: algorithms used, reference databases, and search strategy used; their user-friendliness: whether a graphical user interface is provided, whether results are visualized, approximate runtime, accepted data types, the type of computer that was used to test the software and the operating system, availability and licensing, and provision of a user manual. In addition, we extracted information that supports the validity of the workflow: tests by the developers, wet-lab experimental work and computational benchmarks, benchmark tests by other groups, whether and when the software had been updated as of 19 July 2017, and the number of citations in Google Scholar as of 28 March 2017 (Data Sheet 1; https://compare.cbs.dtu.dk/inventory#pipeline). We listed only benchmark results from in silico tests using simulated viral sequence reads, and only sensitivity, specificity and precision, because these were most often reported (Data Sheet 2). Sensitivity is defined as reads correctly annotated as viral by the pipeline, on the taxonomic level chosen in that benchmark, as a fraction of the total number of simulated viral reads (true positives / (true positives + false negatives)). Specificity is defined as reads correctly annotated as non-viral by the pipeline as a fraction of the total number of simulated non-viral reads (true negatives / (true negatives + false positives)). Precision is defined as reads correctly annotated as viral by the pipeline as a fraction of all reads annotated as viral (true positives / (true positives + false positives)). Different publications have used different taxonomic levels for classification, from kingdom to species. We used all benchmark scores for our analyses (details are in Data Sheet 2). Correlations between performance (sensitivity, specificity, precision and runtime) and methodical factors (different analysis steps, search algorithms and reference databases) were calculated and visualized with R v3.3.2 (https://www.r-project.org/), using RStudio v1.0.136 (https://www.rstudio.com).
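As an illustration of these definitions, the following sketch computes the three scores from read-level counts. It is a generic example in Python, not the R code used for the analyses reported here, and the example numbers are arbitrary.

def classification_scores(tp, fp, tn, fn):
    """Sensitivity, specificity and precision for read classification.

    tp: viral reads annotated as viral; fn: viral reads missed;
    tn: non-viral reads annotated as non-viral; fp: non-viral reads annotated as viral.
    """
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return sensitivity, specificity, precision

# Example: 9,000 of 10,000 simulated viral reads are called viral,
# and 500 of 90,000 non-viral reads are falsely called viral.
print(classification_scores(tp=9000, fp=500, tn=89500, fn=1000))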
Next, based on our inventory, we grouped workflows by compiling two decision trees to help readers select a workflow applicable to their research. We defined "time-restrained diagnostics" as being able to detect viruses and classify them to genus or species level in under 5 h per sample. "Surveillance and outbreak tracing" refers to the ability to make more specific identifications at the subspecies level (e.g., genotype). "Discovery" refers to the ability to detect remote homologs by using a reference database that covers a wide range of viral taxa combined with a sensitive search algorithm, i.e., amino acid (protein) alignment or composition search. For "biodiversity studies" we qualified all workflows that can classify different viruses (i.e., are not focused on a single species).
RESULTS AND WORKFLOW DESCRIPTIONS

Available Workflows
We found 56 publications describing the development and testing of 49 classification workflows, of which three were unavailable for download or online use and two were only available upon request (Table 1). Among these were 24 virus-specific workflows, while 25 were developed for broader use, such as classification of bacteria and archaea. The information on the unavailable workflows has been summarized, but they were not included in the decision trees. An overview of all publications, workflows and scoring criteria is available in Data Sheet 1 and on https://compare.cbs.dtu.dk/inventory#pipeline.
Metagenomics Classification Methods
The selected metagenomics classification workflows consist of up to five different steps: pre-process, filter, assembly, search and post-process (Figure 1A). Only three workflows (SRSA, Isakov et al., 2011; Exhaustive Iterative Assembly, Schürch et al., 2014; and VIP, Li et al., 2016) incorporated all of these steps. All workflows minimally included a "search" step (Figure 1B, Table 4), as this was an inclusion criterion. The order in which the steps are performed varies between workflows and in some workflows steps are performed multiple times. Workflows are often combinations of existing (open source) software, while sometimes custom solutions are made.
Quality Control and Pre-processing
A major determinant for the success of a workflow is the quality of the input reads. Thus, the first step is to assess the data quality and exclude technical errors from further analysis. This may consist of several processes, depending on the sequencing method and demands such as sensitivity and time constraints. The pre-processing may include: removing adapter sequences, trimming low quality reads to a set quality score, removing low quality reads-defined by a low mean or median Phred score assigned by the sequencing machine-removing low complexity reads (nucleotide repeats), removing short reads, deduplication, matching paired-end reads (or removing unmated reads) and removing reads that contain Ns (unresolved nucleotides). The adapters, quality, paired-end reads and accuracy of repeats depend on the sequencing technology. Quality cutoffs for removal are chosen in a trade-off between sensitivity and time constraints: removing reads may result in not finding rare viruses, while having fewer reads to process will speed up the analysis. Twenty-four workflows include a pre-processing step, applying at least one of the components listed above ( Figure 1B, Table 2). Other workflows require input of reads pre-processed elsewhere.
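As a simplified illustration of such criteria, the sketch below filters FASTQ records by minimum length, mean Phred score and the presence of Ns. It is not the procedure of any particular workflow; the thresholds are arbitrary examples, and real pipelines typically rely on dedicated trimming tools rather than ad hoc scripts.

def mean_phred(quality_string, offset=33):
    """Mean base quality of a read, assuming Sanger/Illumina 1.8+ (Phred+33) encoding."""
    return sum(ord(c) - offset for c in quality_string) / len(quality_string)

def passes_qc(sequence, quality_string, min_length=50, min_mean_q=20, max_n_fraction=0.0):
    """Keep a read only if it is long enough, of sufficient mean quality and free of Ns."""
    if len(sequence) < min_length:
        return False
    if mean_phred(quality_string) < min_mean_q:
        return False
    if sequence.upper().count("N") / len(sequence) > max_n_fraction:
        return False
    return True

def filter_fastq(records):
    """records: iterable of (header, sequence, quality) tuples parsed from a FASTQ file."""
    return [r for r in records if passes_qc(r[1], r[2])]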
Filtering Non-target Reads
The second step is filtering of non-target, in this case nonviral, reads. Filtering theoretically speeds up subsequent database searches by reducing the number of queries, it helps reduce false positive results and prevents assembly of chimaeric virus-host sequences. However, with lenient homology cutoffs, too many reads may be identified as non-viral, resulting in loss of potential viral target reads. Choice of filtering method depends on the sample type and research goal. For example, with human clinical samples a complete human reference genome is often used, as is the case with SRSA (Isakov et al., 2011), RINS (Bhaduri et al., 2012), VirusHunter , MePIC (Takeuchi et al., 2014), Ensemble Assembler (Deng et al., 2015), ViromeScan (Rampelli et al., 2016), and MetaShot (Fosso et al., 2017). Depending on the sample type and expected contaminants, this can be extended to filtering rRNA, mtRNA, mRNA, bacterial or fungal sequences or non-human host genomes. More thorough filtering is displayed by PathSeq (Kostic et al., 2011), SURPI (Naccache et al., 2014), Clinical PathoScope (Byrd et al., 2014), Exhaustive Iterative Assembly (Schürch et al., 2014), VIP (Li et al., 2016), Taxonomer , and VirusSeeker (Zhao et al., 2017). PathSeq removes human reads in a series of filtering steps in an attempt to concentrate pathogen-derived data. Clinical PathoScope filters human genomic reads as well as human rRNA reads. Exhaustive Iterative Assembly removes reads from diverse animal species, depending on the sample, to remove non-pathogen reads for different samples. SURPI uses 29 databases to remove different non-targets. VIP includes filtering by first comparing to host and bacterial databases and then to viruses. It only removes reads that are more similar to non-viral references in an attempt to achieve high sensitivity for viruses and potentially reducing false positive results by removing non-viral reads. Taxonomer simultaneously matches reads against human, bacterial, fungal and viral references and attempts to classify all. This only works well on high-performance computing facilities that can handle many concurrent search actions on large data sets. VirusSeeker uses the complete NCBI nucleotide (nt) and non-redundant protein (nr) databases to classify all reads and then filter non-viral reads. Some workflows require a custom, user-provided database for filtering, providing more flexibility but requiring more user-input. This is seen in IMSA (Dimon et al., 2013), VirusHunter , VirFind (Ho and Tzanetakis, 2014), and MetLab (Norling et al., 2016), although other workflows may accept custom references as well. In total, 22 workflows filter non-virus reads prior to further analysis ( Figure 1B, Table 3). Popular filter tools are read mappers such as Bowtie (Langmead, 2010;Langmead and Salzberg, 2012) and BWA (Li and Durbin, 2009), while specialized software, such as Human Best Match Tagger (BMTagger, NCBI, 2011) or riboPicker (Schmieder, 2011), is less commonly used ( Table 2).
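One common way to implement such a filter is to map all reads against a host reference and retain only the reads that do not align. The sketch below illustrates this idea by parsing a SAM file produced by such a mapping and keeping reads whose FLAG field has the "unmapped" bit (0x4) set; it is a minimal example, not the exact procedure of any of the workflows discussed, and assumes single-end reads.

def unmapped_read_ids(sam_path):
    """Collect names of reads that did not align to the host reference.

    In the SAM format, bit 0x4 of the FLAG field (column 2) marks an unmapped read.
    """
    keep = set()
    with open(sam_path) as sam:
        for line in sam:
            if line.startswith("@"):          # skip SAM header lines
                continue
            fields = line.rstrip("\n").split("\t")
            name, flag = fields[0], int(fields[1])
            if flag & 0x4:                    # read is unmapped to the host
                keep.add(name)
    return keep

def filter_host_reads(records, sam_path):
    """records: (header, sequence, quality) tuples; keep only reads not mapping to the host."""
    keep = unmapped_read_ids(sam_path)
    return [r for r in records if r[0].lstrip("@").split()[0] in keep]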
Short Read Assembly
Prior to classification, the short reads may be assembled into longer contiguous sequences (contigs) and generate consensus sequences by mapping individual reads to these contigs. This helps filter out errors from individual reads, and reduce the amount of data for further analysis. This can be done by mapping reads to a reference, or through so-called de novo assembly by linking together reads based on, for instance, overlaps, frequencies and paired-end read information. In viral metagenomics approaches, de novo assembly is often the method of choice. Since viruses evolve so rapidly, suitable references are not always available. Furthermore, the short viral genomes generally result in high sequencing coverage, at least for hightitre samples, facilitating de novo assembly. However, de novo assembly is liable to generate erroneous contigs by linking together reads containing technical errors, such as sequencing (base calling) errors and remaining adapter sequences. Another source of erroneous contigs may be when reads from different organisms in the same sample are similar, resulting in the formation of chimeras. Thus, de novo assembly of correct contigs benefits from strict quality control and pre-processing, filtering and taxonomic clustering-i.e., grouping reads according to their respective taxa before assembly. Assembly improvement by taxonomic clustering is exemplified in five workflows: Metavir (Roux et al., 2011), RINS (Bhaduri et al., 2012), VirusFinder (Wang et al., 2013), SURPI (in comprehensive mode) (Naccache et al., 2014), and VIP (Li et al., 2016). Two of the discussed workflows have multiple iterations of assembly and combine algorithms to improve overall assembly: Exhaustive Iterative Assembly (Schürch et al., 2014) and Ensemble Assembler (Deng et al., 2015). In total, 18 of the tools incorporate an assembly step ( Figure 1B, Table 4). Some of the more commonly used assembly programs are Velvet (Zerbino and Birney, 2008), Trinity (Grabherr et al., 2011), Newbler (454 Life Sciences), and SPAdes (Bankevich et al., 2012) (Table 2).
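As a highly simplified illustration of the principle behind de novo assembly, the sketch below greedily merges sequences that share the longest exact suffix-prefix overlap. Real assemblers rely on de Bruijn graphs or overlap-layout-consensus algorithms with error handling, so this toy example is didactic only.

def best_overlap(a, b, min_len=20):
    """Length of the longest suffix of a that equals a prefix of b (at least min_len)."""
    for length in range(min(len(a), len(b)), min_len - 1, -1):
        if a[-length:] == b[:length]:
            return length
    return 0

def greedy_assemble(reads, min_overlap=20):
    """Repeatedly merge the pair of sequences with the largest overlap into a contig."""
    contigs = list(reads)
    while len(contigs) > 1:
        best = (0, None, None)
        for i, a in enumerate(contigs):
            for j, b in enumerate(contigs):
                if i == j:
                    continue
                olen = best_overlap(a, b, min_overlap)
                if olen > best[0]:
                    best = (olen, i, j)
        olen, i, j = best
        if olen == 0:                      # no remaining overlaps: stop merging
            break
        merged = contigs[i] + contigs[j][olen:]
        contigs = [c for k, c in enumerate(contigs) if k not in (i, j)] + [merged]
    return contigs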
Database Searching
In the search step, sequences (either reads or contigs) are matched to a reference database. Twenty-six of the workflows we found search with the well-known BLAST algorithms BLASTn or BLASTx (Altschul et al., 1990; Table 2). Other often-used programs are Bowtie (Langmead, 2010; Langmead and Salzberg, 2012), BWA (Li and Durbin, 2009), and Diamond (Buchfink et al., 2015). These programs rely on alignments to a reference database and report matched sequences with alignment scores. Bowtie and BWA, which are also popular programs for the filtering step, align nucleotide sequences exclusively.
Diamond aligns amino acid sequences and BLAST can do either nucleotides or amino acids. As analysis time can be quite long for large datasets, algorithms have been developed to reduce this time by using alternatives to classical alignment. One approach is to match k-mers with a reference, as used in FACS (Stranneheim et al., 2010), LMAT (Ames et al., 2013), Kraken (Wood and Salzberg, 2014), Taxonomer, and MetLab (Norling et al., 2016). Exact k-mer matching is generally faster than alignment, but requires a lot of computer memory. Another approach is to use probabilistic models of multiple sequence alignments, or profile hidden Markov models (HMMs). For HMM methods, protein domains are used, which allows the detection of more remote homology between query and reference. A popular HMM search program is HMMER (Mistry et al., 2013). ClassyFlu (Van der Auwera et al., 2014) and vFam (Skewes-Cox et al., 2014) rely exclusively on HMM searches, while VMGAP (Lorenzi et al., 2011), Metavir (Roux et al., 2011), VirSorter (Roux et al., 2015), and MetLab can also use HMMER. All of these search methods are examples of similarity search, i.e., homology- or alignment-based methods. The other search method is composition search, in which oligonucleotide frequencies or k-mer counts are matched to references. Composition search requires the program to be "trained" on reference data and it is not used much in viral genomics. Only two workflows discussed here use composition search: NBC (Rosen et al., 2011) and Metavir 2 (Roux et al., 2014), while Metavir 2 only uses it complementary to similarity search (Data Sheet 1).
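To illustrate the idea behind exact k-mer matching, the sketch below builds a dictionary of k-mers from reference sequences and assigns each read to the taxon with which it shares the most k-mers. This toy version conveys the concept only; tools such as Kraken additionally use memory-efficient data structures and resolve k-mers shared between taxa to a lowest common ancestor, which is omitted here.

from collections import Counter

def build_kmer_index(references, k=21):
    """references: dict mapping taxon name -> genome sequence; returns k-mer -> set of taxa."""
    index = {}
    for taxon, seq in references.items():
        for i in range(len(seq) - k + 1):
            index.setdefault(seq[i:i + k], set()).add(taxon)
    return index

def classify_read(read, index, k=21):
    """Assign a read to the taxon sharing the most k-mers, or None if nothing matches."""
    votes = Counter()
    for i in range(len(read) - k + 1):
        for taxon in index.get(read[i:i + k], ()):
            votes[taxon] += 1
    return votes.most_common(1)[0][0] if votes else None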
All search methods rely on reference databases, such as NCBI GenBank (https://www.ncbi.nlm.nih.gov/genbank/), RefSeq (https://www.ncbi.nlm.nih.gov/refseq/), or the BLAST nucleotide (nt) and non-redundant protein (nr) databases (ftp://ftp.ncbi.nlm.nih.gov/blast/db/). Thirty-four workflows use GenBank for their references, most of which select only reference sequences from organisms of interest (Table 2). GenBank has the benefits of being a large, frequently updated database with many different organisms, although annotation depends largely on the data providers.
Other tools make use of virus-specific databases such as GIB-V (Hirahata et al., 2007) or ViPR (Pickett et al., 2012), which have the advantage of better annotation and curation at the expense of the number of included sequences. Also, protein databases like Pfam (Sonnhammer et al., 1998) and UniProt (UniProt, 2015) are used, which provide a broad range of sequences. Search at the protein level may allow for the detection of more remote homology, which may improve detection of divergent viruses, but non-translated genomic regions are left unused. A last group of workflows requires the user to provide a reference database file. This enables customization of the workflow to the user's research question and requires more effort.
Post-processing
Classifications of the sequencing reads can be made by setting the parameters of the search algorithm beforehand to return a single annotation per sequence (cut-offs). Another option is to return multiple hits and then determine the relationship between the query sequence and a cluster of similar reference sequences. This process of finding the most likely or best supported taxonomic assignment among a set of references is called post-processing. Post-processing uses phylogenetic or other computational methods such as the lowest common ancestor (LCA) algorithm, as introduced by MEGAN (Huson et al., 2007). Six workflows use phylogeny to place sequences in a phylogenetic tree with homologous reference sequences and thereby classify them. This is especially useful for outbreak tracing to elucidate relationships between samples. Twelve workflows use other computational methods such as the LCA taxonomy-based algorithm to make more confident but less specific classifications (Data Sheet 1). In total, 18 workflows include post-processing ( Figure 1B).
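The lowest common ancestor idea can be sketched as follows: given the taxa of all database hits for a read and a taxonomy with parent links, the read is assigned to the deepest taxon that is an ancestor of (or equal to) every hit. The sketch below is a generic illustration of the concept popularized by MEGAN, not that program's implementation, and the toy taxonomy is only an example.

def lineage(taxon, parent):
    """Path from a taxon up to the root, using a dict of child -> parent links."""
    path = [taxon]
    while taxon in parent:
        taxon = parent[taxon]
        path.append(taxon)
    return path

def lowest_common_ancestor(hit_taxa, parent):
    """Deepest taxon that is an ancestor of (or equal to) every hit taxon."""
    lineages = [lineage(t, parent) for t in hit_taxa]
    shared = set(lineages[0])
    for lin in lineages[1:]:
        shared &= set(lin)
    # the first shared node encountered when walking up from any hit is the LCA
    for node in lineages[0]:
        if node in shared:
            return node
    return None

# Example with a toy taxonomy
parent = {"Enterovirus A": "Enterovirus", "Enterovirus B": "Enterovirus",
          "Enterovirus": "Picornaviridae", "Picornaviridae": "Viruses"}
print(lowest_common_ancestor(["Enterovirus A", "Enterovirus B"], parent))  # Enterovirus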
Usability and Validation
For broader acceptance and eventual application in a clinical setting, workflows need to be user-friendly and need to be validated. Usability of the workflows varied vastly. Some provide web-services with a graphical user-interface that work fast on any PC, whereas other workflows only work on one operating system, from a command line interface with no user manual. Processing time per sample ranges from minutes to several days (Table 3). Although web-services with a graphical user-interface are very easy to use, such a format requires uploading large GB-sized short read files to a distant server. The speed of upload and the constraint to work with one sample at a time may limit its usability. Diagnostic centers may also have concerns about the security of the data transferred, especially if patient-identifying reads and confidential metadata are included in the transfer. Validation of workflows ranged from high, i.e., tested by several groups, validated by wet-lab experiments, receiving frequent updates and used in many studies, to no evidence of validation. The most frequently cited workflows included NBC (125 citations) and the Rega Typing Tool (377 citations from two highly cited publications).
Classification Performance
Next, we summarized workflow performance by aggregating benchmark results on simulated viral data from different publications (Figure 2). Twenty-five workflows had been tested for sensitivity, of which 19 were tested more than once. For some workflows, sensitivity varied between 0 and 100%, while for others sensitivity was less variable or only single values were available. For 10 workflows, specificities, or true negative rates, were provided. Six workflows had only single scores, all above 75%. The other four had variable specificities between 2 and 95%.
Runtimes had been determined or estimated for 36 workflows. Comparison of these outcomes is difficult as different input data were used (for instance varying file sizes, consisting of raw reads or assembled contigs), as well as different computing systems. Thus a crude categorisation was made, dividing workflows into three groups that process a file in a timeframe of minutes, hours, or days.
Correlations Between Methods, Runtime, and Performance
For the 17 workflows for which these data were available, we looked for correlations by plotting performance scores against the analysis steps included (Figure 3). Workflows that included a pre-processing or assembly step scored higher in sensitivity, specificity and precision. Contrastingly, workflows with post-processing on average scored lower on all measures. Pipelines that filter non-viral reads generally had lower sensitivity and specificity, while precision remained high.
Next, we visualized correlations between the used search algorithms and the runtime, and the performance scores (Figure 4). Different search algorithms had different performance scores on average. Similarity search methods had lower sensitivity, but higher specificity and precision than composition search. The use of nucleotide vs. amino acid search also affected performance. Amino acid sequences generally led to higher sensitivity and lower specificity and precision scores. Combining nucleotide sequences and amino acid sequences in the analysis seemed to provide the best results. Performance was generally higher for workflows that used more time.
Finally, we inventoried the overall runtime of 17 workflows ( Table 5) and separated them based on the inclusion of analysis steps that seemed to affect runtime. This indicated that workflows that included pre-processing, filtering, and similarity search by alignment were more time-consuming than workflows that did not use these analysis steps.
Applications of Workflows
Based on the results of our inventory, decision trees were drafted to address the question of which workflow a virologist could use for medical and environmental studies (Figures 5, 6).
DISCUSSION
Based on available literature, 49 available virus metagenomics classification workflows were evaluated for their analysis methods and performance and guidelines are provided to select the proper workflow for particular purposes (Figures 5, 6). Only workflows that have been tested with viral data were included, thus leaving out a number of metagenomics workflows that had been tested only on bacterial data, which may be applicable to virus classification as well. Also note that our inclusion criteria leave out most phylogenetic analysis tools, which start from contigs or classifications. The variety in methods is striking. Although each workflow is designed to provide taxonomic classification, the strategies employed to achieve this differ from simple one-step tools to analyses with five or more steps and creative combinations of algorithms. Clearly, the field has not yet adopted a standard method to facilitate comparison of classification results. Usability varied from a few remarkably user-friendly workflows with easy access online to many command-line programs, which are generally more difficult to use. Comparison of the results of the validation experiments is precarious. Every test is different and if the reader has different study goals than the writers, assessing classification performance is complex.
Due to the variable benchmark tests with different workflows, the data we looked at is inherently limited and heterogeneous. This has left confounding factors in the data, such as test data, references used, algorithms and computing platforms. These factors are the result of the intended use of the workflow, e.g., Clinical PathoScope was developed for clinical use and was not intended or validated for biodiversity studies. Also, benchmarks usually only take one type of data to simulate a particular use case. Therefore, not all benchmark scores are directly comparable and it is impossible to significantly determine correlations and draw firm conclusions.
We do highlight some general findings. For instance, when high sensitivity is required filtering steps should be minimized, as these might accidentally remove viral reads. Furthermore, the choice of search algorithms has an impact on sensitivity. High sensitivity may be required in characterization of environmental biodiversity (Tangherlini et al., 2016) and virus discovery. Additionally, for identification of novel variant viruses and virus discovery de novo assembly of genomes is beneficial. Discoveries typically are confirmed by secondary methods, thus reducing the impact in case of lower specificity. For example, RIEMS showed high sensitivity and applies de novo assembly.

FIGURE 2 | Different benchmark scores of virus classification workflows. Twenty-seven different workflows (Left) have been subjected to benchmarks, by the developers (Top) or by independent groups (Bottom), measuring sensitivity (Left column), specificity (Middle column) and precision (Right column) in different numbers of tests. Numbers between brackets (n = a, b, c) indicate the numbers of sensitivity, specificity, and precision tests, respectively.
MetLab combines de novo assembly with Kraken, which also displayed high sensitivity. When higher specificity is required, in medical settings for example, pre-processing and search methods with the appropriate references are recommended. RIEMS and MetLab are also examples of high-specificity workflows that include pre-processing. Studies that require high precision benefit from pre-processing, filtering and assembly. High-precision methods are essential in variant calling analyses for the characterization of viral quasispecies diversity (Posada-Cespedes et al., 2016), and in medical settings for preventing wrong diagnoses. RINS performs pre-processing, filtering and assembly and scored high in precision tests, while Kraken also scored well in precision and, via MetLab, can be combined with filtering and assembly as needed.
Clinicians and public health policymakers would be served by taxonomic output accompanied by reliability scores, as is possible with HMM-based search methods and phylogeny with bootstrapping, for example. Reliability scores could also be based on similarity to known pathogens and contig coverage. However, classification to a higher taxonomic rank (e.g., order) is more generally reliable, but less informative than a classification at a lower rank (e.g., species) (Randle-Boggis et al., 2016). Therefore, the use of reliability scores and the associated trade-offs need to be properly addressed per application.
Besides, medical applications may be better served by a functional rather than a taxonomic annotation. For example, a clinician would probably find more use in a report of known pathogenicity markers than in a report of species composition. Bacterial metagenomics analyses often include this, but it is hardly applied to virus metagenomics.

FIGURE 3 | Correlations between performance scores and analysis steps. Sensitivity, specificity and precision scores (in columns) for workflows that incorporated different analysis steps (in rows). Numbers at the bottom indicate the number of benchmarks performed.

FIGURE 4 | Correlation between performance and search algorithm and runtime. Sensitivity, specificity and precision scores (in columns) for workflows that incorporated different search algorithms, using either nucleotide sequences, amino acid sequences or both, and for workflows with different runtimes (rows). Numbers at the bottom indicate the number of benchmarks performed.
FIGURE 5 | Decision tree for selecting a virus metagenomics classification workflow for medical applications. Workflows are suitable for medical purposes when they can detect pathogenic viruses by classifying sequences to a genus level or further (e.g., species, genotype), or when they detect integration sites. Forty workflows matched these criteria. Workflows can be applied to surveillance or outbreak tracing studies when very specific classifications are made, i.e., genotypes, strains or lineages. A 1-day analysis corresponds to being able to analyse a sample within 5 h. Detection of novel variants is made possible by sensitive search methods, amino acid alignment or composition search, and a broad reference database of potential hits. Numbers indicate the number of workflows available on the corresponding branch of the tree.
FIGURE 6 | Decision tree for selecting a virus metagenomics classification workflow for biodiversity studies. Workflows for the characterisation of biodiversity of viruses have to classify a range of different viruses, i.e., have multiple reference taxa in the database. Forty-three workflows fitted this requirement. Novel variants can potentially be detected by using more sensitive search methods, amino acid alignment and composition search, and by using diverse reference sequences. Finally, workflows are grouped by the taxonomic groups they can classify. Numbers indicate the number of workflows available on the corresponding branch of the tree.
Numerous challenges remain in analyzing viral metagenomes. First is the problem of sensitivity and false positive detections. Some viruses that exist in a patient may not be detected by sequencing, or viruses that are not present may be detected because of homology to other viruses, wrong annotation in databases or sample cross-contamination. Both can lead to wrong diagnoses. Second, viruses are notorious for their recombination rate and horizontal gene transfer or reassortment of genomic segments. These may be important for certain analyses and may be handled by bioinformatics software. For instance, Rega Typing Tool and QuasQ include methods for detecting recombination. Since these events usually happen within species and most classification workflows do not go deeper into the taxonomy than the species level, this is something that has to be addressed in further analysis. Therefore, recombination should not affect the results of the reviewed workflows much. Further information about the challenges of analyzing metagenomes can be found in Edwards and Rohwer (2005) (2017). An important step in the much-awaited standardization in viral metagenomics (Fancello et al., 2012; Posada-Cespedes et al., 2016; Rose et al., 2016), necessary to bring metagenomics to the clinic, is the possibility to compare and validate results between labs. This requires standardized terminology and study aims across publications, which would enable medically oriented reviews that assess suitability for diagnostics and outbreak source tracing. Examples of such application-focused reviews can be found among environmental biodiversity studies (Oulas et al., 2015; Posada-Cespedes et al., 2016; Tangherlini et al., 2016). Reviews then
provide directions for establishing best practices by pointing out which algorithms perform best in reproducible tests. For proper comparison, metadata such as the sample preparation method and sequencing technology should always be included and, ideally, standardized. In addition, the true and false positive and negative results of synthetic tests have to be provided to allow comparison between benchmarks.
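Where such per-test counts are available, the three scores used throughout this review follow directly from the confusion-matrix counts. The short Python sketch below only illustrates these standard definitions; the counts are hypothetical and do not come from any of the reviewed benchmarks.

def benchmark_scores(tp, fp, tn, fn):
    # Standard definitions used for workflow benchmarks.
    sensitivity = tp / (tp + fn)   # fraction of truly viral reads that are detected
    specificity = tn / (tn + fp)   # fraction of non-viral reads correctly rejected
    precision = tp / (tp + fp)     # fraction of reported viral reads that are truly viral
    return sensitivity, specificity, precision

# Hypothetical counts from a synthetic (in silico) test set
sens, spec, prec = benchmark_scores(tp=950, fp=30, tn=8970, fn=50)
print(f"sensitivity={sens:.3f}, specificity={spec:.3f}, precision={prec:.3f}")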
Optimal strategies for particular goals should then be integrated in a user-friendly and flexible software framework that enables easy analysis and continuous benchmarking to evaluate current and new methods. The evaluation should include complete workflow comparisons and comparisons of individual analysis steps. For example, benchmarks should be done to assess the addition of a de novo assembly step to the workflow and measure the change in sensitivity, specificity, etc. Additionally, it remains interesting to know which assembler works best for specific use cases as has been tested by several groups (Treangen et al., 2013;Scholz et al., 2014;Smits et al., 2014;Vázquez-Castellanos et al., 2014;Deng et al., 2015). The flexible framework should then facilitate easy swapping of these steps, so that users can always use the best possible workflow. Finally, it is important to keep reference databases up-to-date by sharing new classified sequences, for instance by uploading to GenBank.
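As a minimal sketch of what such a flexible framework could look like, the Python outline below treats each analysis step as a swappable function behind a common interface; all step names and implementations here are hypothetical placeholders, not existing tools.

def quality_filter(reads):
    # Pre-processing placeholder: drop short reads.
    return [r for r in reads if len(r) >= 50]

def naive_assembler(reads):
    # De novo assembly placeholder: a real assembler would merge overlapping reads.
    return reads

def kmer_classifier(contigs, reference_db):
    # Classification placeholder: look up an 8-mer prefix in a reference dictionary.
    return {c: reference_db.get(c[:8], "unclassified") for c in contigs}

PIPELINE = [("preprocess", quality_filter), ("assemble", naive_assembler)]

def run_workflow(reads, reference_db, steps=PIPELINE, classifier=kmer_classifier):
    data = reads
    for name, step in steps:   # individual steps can be swapped and re-benchmarked
        data = step(data)
    return classifier(data, reference_db)

Swapping, for example, the assembler then amounts to replacing one entry of the step list, which keeps continuous benchmarking of individual analysis steps straightforward.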
All these steps toward standardization benefit from the implementation of a common way to report results, or a minimum set of metadata, such as the MIxS by the Genomic Standards Consortium (Yilmaz et al., 2011). Currently several projects exist that aim to advance the field to wider acceptance by validating methods and sharing information, e.g., the CAMI challenge (http://cami-challenge.org/), OMICtools (Henry et al., 2014), and COMPARE (http://www.compare-europe.eu/). We anticipate steady development and validation of genomics techniques to enable clinical application and international collaborations in the near future.
AUTHOR CONTRIBUTIONS
AK and MK conceived the study. SN designed the experiments and carried out the research. AK, DS, and HV contributed to the design of the analyses. SN prepared the draft manuscript. All authors were involved in discussions on the manuscript and revision and have agreed to the final content.
FUNDING
This work was supported by funding from the European Community's Horizon 2020 research and innovation programme under the VIROGENESIS project, grant agreement No 634650, and COMPARE, grant agreement No 643476. | 2018-04-23T13:04:11.206Z | 2018-04-23T00:00:00.000 | {
"year": 2018,
"sha1": "38b6634cd639fcb94166d9492b84bc9fef380f7e",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2018.00749/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2585403015b1b1b31ab187eace93d67f4441e3cf",
"s2fieldsofstudy": [
"Biology",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
85458595 | pes2o/s2orc | v3-fos-license | Gate-controlled anisotropy in Aharonov-Casher spin interference: signatures of Dresselhaus spin-orbit inversion and spin phases
The coexistence of Rashba and Dresselhaus spin-orbit interactions (SOIs) in semiconductor quantum wells leads to an anisotropic effective field coupled to carriers' spins. We demonstrate a gate-controlled anisotropy in Aharonov-Casher (AC) spin interferometry experiments with InGaAs mesoscopic rings by using an in-plane magnetic field as a probe. Supported by a perturbation-theory approach, we find that the Rashba SOI strength controls the AC resistance anisotropy via spin dynamic and geometric phases and establish ways to manipulate them by employing electric and magnetic tunings. Moreover, assisted by two-dimensional numerical simulations, we identify a remarkable anisotropy inversion in our experiments attributed to a sign change in the renormalized linear Dresselhaus SOI controlled by electrical means, which would open a door to new possibilities for spin manipulation.
I. INTRODUCTION
Spintronics and spin-based quantum computing rely on the precise manipulation of spin orientations and related spin phases. Electron spins may couple directly to a magnetic field (Zeeman interaction) as well as to an electric field via spin-orbit interaction (SOI), resulting in a momentum-dependent effective magnetic field acting on itinerant spins. In particular, the electric-field-controllable Rashba SOI [1,2,3] is a prominent resource for spin-orbitronics [4], i.e., for the generation [5,6,7], manipulation [8,9], and detection [10,11] of spins by electrical means only. The direction of the effective Rashba field is perpendicular to the momentum of the spin carriers, but its strength is isotropic. In III-V compound semiconductors, the Dresselhaus SOI [12] induced by bulk inversion asymmetry also plays an important role in spin dynamics [13]. The direction of the effective Dresselhaus field has a different symmetry from that of the Rashba field. Therefore, the combination of Rashba and Dresselhaus SOIs gives rise to an anisotropic, momentum-dependent field.
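As a rough numerical illustration of this anisotropy (not taken from the paper, and using one common sign convention for the linear terms, which may differ from the convention adopted below), the magnitude of the combined Rashba-Dresselhaus field varies with the momentum direction with two-fold symmetry:

import numpy as np

def so_field_magnitude(phi_k, alpha, beta, k=1.0):
    # Effective in-plane SO field (up to overall constants) for momentum direction phi_k,
    # taking the Rashba part ~ alpha*(k_y, -k_x) and the Dresselhaus part ~ beta*(k_x, -k_y);
    # this sign convention is an assumption and may differ from the one used in this paper.
    kx, ky = k * np.cos(phi_k), k * np.sin(phi_k)
    return np.hypot(alpha * ky + beta * kx, -alpha * kx - beta * ky)

phi = np.linspace(0.0, 2 * np.pi, 9)
print(np.round(so_field_magnitude(phi, alpha=1.0, beta=0.3), 3))
# The magnitude is maximal near phi_k = pi/4 and minimal near 3*pi/4 (for alpha*beta > 0),
# i.e., along [110] and [1-10], reflecting the two-fold anisotropy described above.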
A spin interferometer is an invaluable tool to probe the spin-phase information carried by electrons via the Aharonov-Casher (AC) effect [14,15], the electromagnetic dual of the Aharonov-Bohm (AB) effect [16,17]. The role played by Rashba and Zeeman fields on spin phases has been widely investigated in this context [18]- [22]. In contrast, the effects of introducing Dresselhaus SOI on spin phases are not yet well understood. Since hybrid-field engineering is a prerequisite to attain spin manipulation at the nanoscale, the electric control of the Dresselhaus SOI strength and sign appears as a challenging goal that would supply us with new tools for efficient spin control.
In this paper, we use Aharonov-Casher (AC) spin interferometry to extract information about the spin-orbit fields and related spin phases. We study the anisotropic response of AC resistance measurements in an array of InGaAs-based mesoscopic rings subject to in-plane magnetic fields oriented along different directions. The experiment shows that the sign of the AC resistance anisotropy changes as a function of the Rashba SOI strength. Perturbation-theory calculations indicate that the AC resistance anisotropy is modulated by the Rashba SOI strength via spin dynamic and geometric phases as well as by the direction of the in-plane Zeeman field [Section II & Appendix A2]. In addition, we find that the reported data are to a great extent reproduced by numerical results performed at constant Dresselhaus SOI strength [23]. There is, however, a remarkable discrepancy: the experiment reveals an extra sign inversion in the anisotropy which is not reproduced by the numerical calculations. This is consistently explained by a sign change of the renormalized linear Dresselhaus SOI emerging from strain effects in the working material, which is controlled electrically. Our results provide crucial information about the SO fields and show how different spin-phase contributions can be manipulated, demonstrating a potential for applications in spintronics and spin-based quantum technologies.
In the following Sec. II, the concept of spin dynamic and geometric phases in magnetic textures is introduced. In Sec. III, we show the anisotropic response of these phases when perturbed by additional Dresselhaus and Zeeman terms. The analytical details on the perturbation theory is described in Appendix A. In Sec. IV, we describe the gate-controlled anisotropy in Aharonov-Casher (AC) spin interferometry experiments with InGaAs mesoscopic rings by using an in-plane magnetic field as a probe. In Sec. V, we discuss a sign change of the renormalized linear Dresselhaus SOI. Section VI summarizes the paper.
II. SPIN DYNAMICS IN MAGNETIC TEXTURES
A magnetic texture is a magnetic field, either of real or effective (e.g., spin-orbit) origin, with nonuniform orientation. The spin dynamics of a carrier travelling through a magnetic texture is determined by the ratio of two characteristic frequencies: the Larmor frequency of spin precession around the local magnetic field, ω_s, and an orbital frequency ω accounting for the change of direction of the magnetic field from the point of view of the spin carrier [24]. The spin dynamics is said to be adiabatic if the carrier's spin can remain (anti)aligned with the local magnetic field all across the magnetic texture. This corresponds to the regime where the spin precession frequency is much larger than the orbital frequency, ω_s ≫ ω. In the adiabatic limit, spin states have been shown to acquire phase contributions of geometric nature in addition to the usual dynamic quantum phases [25]. These geometric (or Berry) phases are identified with the solid angle Ω subtended by spins after a round trip in the Bloch sphere (spin texture). However, the adiabatic limit is difficult to achieve in usual experimental setups, where both frequencies tend to be of comparable magnitude and the spin dynamics is non-adiabatic. Still, geometric phases can be generalized to non-adiabatic situations with an identical interpretation in terms of spin solid angles [26], even when the non-adiabatic spin texture does not coincide with the magnetic texture. Complementary dynamic spin phases are identified with the projection of the spin texture on the magnetic texture. See Fig. 1 for an illustration in the case of AC rings from the point of view of the spin carrier's rest frame.
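As an order-of-magnitude sketch with assumed, representative InGaAs parameters (not the values extracted in this work), the ratio of the two frequencies in a Rashba ring can be estimated as ω_s/ω ≈ 2m*αr/ℏ²:

hbar = 1.054571817e-34            # J*s
m_e = 9.1093837015e-31            # kg
m_eff = 0.05 * m_e                # assumed effective mass for InGaAs
alpha = 2e-12 * 1.602176634e-19   # assumed Rashba coupling of 2e-12 eVm, converted to J*m
r = 610e-9                        # ring radius in m (value quoted later in the paper)

ratio = 2 * m_eff * alpha * r / hbar**2   # ~ spin-precession / orbital frequency
print(f"omega_s/omega ~ {ratio:.1f}")
# A value of order unity indicates the non-adiabatic regime discussed in the text.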
III. ANISOTROPIC AHARONOV-CASHER INTERFERENCE
The coexistence of Rashba and Dresselhaus SOIs leads to an anisotropic effective magnetic field with two-fold symmetry, Eq. (1). The second-order-perturbation Zeeman phase shift reported in [22] was demonstrated to be of purely geometric origin. In contrast, the corresponding second-order-perturbation Dresselhaus phase shift shows a hybrid geometric/dynamic origin [Appendix A2]. This is also the case for the third-order-perturbation spin phase [Appendix A2], which is responsible for the anisotropic response of the conductance to the in-plane Zeeman field's direction θ, defined with respect to the [100] direction. Moreover, its linear dependence on the Dresselhaus coupling shows that the anisotropic phase is sensitive to a sign inversion of the Dresselhaus SOI.
The AC conductance anisotropy can be studied by defining the difference G(θ = π/4) − G(θ = 3π/4) between the conductances for in-plane Zeeman fields B_∥ oriented along different symmetry axes. The resulting expression in this approximation is given in Eq. (4). The anisotropy oscillates as a function of the Rashba strength, the Dresselhaus strength, and B_∥ as the corresponding phases increase.
However, within this perturbative regime, only the dominating Rashba AC phase, proportional to √(1 + Q_R²) − 1 with Q_R = 2m*αr/ℏ² the dimensionless Rashba coupling, is expected to induce a sign inversion of the anisotropy as the Rashba SOI strength changes (a sign inversion due to the Zeeman field beyond the perturbative approach was confirmed by numerical analysis in [23]).
Moreover, an additional sign inversion is expected in the anisotropy in case the Dresselhaus SOI changes sign. Also notice that Eq. (4) implies that the anisotropic response originates from the joint action of the Dresselhaus and Zeeman perturbations on the Rashba system.
Additionally, a phenomenological discussion on the role of disorder in the conductance and, particularly, in the resistance (better suited to experiments) can be found in Appendix A3.
IV. EXPERIMENTS
The experimental setup consists of a top-gate-attached 40×40 ring array (ring radius r = 610 nm) fabricated by electron beam lithography and reactive ion etching. A scanning electron microscope image of the array is shown in Fig. 2. A common strategy is to investigate the gate-voltage dependence of the Al'tshuler-Aronov-Spivak (AAS) [17] oscillation amplitude, originated from the interference of time-reversal (TR) paths in the absence of magnetic flux (i.e., for vanishing perpendicular magnetic field B_⊥ = 0). The phase contribution from the orbital part of the wave function to TR-path interference is always constructive at B_⊥ = 0. Therefore, the AAS amplitude dependence on voltage reflects a phase contribution from the spin part of the wave function. This gives access to the AC spin-interference effect independently from the orbital phases at any gate-voltage value.
Ensemble averaging in the ring-array structure leads to clear AAS-interference patterns in transport measurements. We focused on AC spin interference under in-plane magnetic fields of variable strength B_∥ and direction θ, defined with respect to the [100] axis. The magnetoresistance (MR) for fixed gate voltage and B_∥ = 1 T was measured for different orientations θ. This behavior is explained by the narrowing of the effective channel width for decreasing carrier densities [28]. Most importantly, for α = -2.8 x 10^-12 eVm (Fig. 4(b)) the AAS amplitude shows its minimum at θ = π/4 and its maximum at θ = 3π/4, a response opposite to the one observed at α = -1.5 x 10^-12 eVm (Fig. 3(b)). This demonstrates the Rashba-SOI-induced anisotropy inversion without changing the sign of the Rashba SOI.
To study the observed inversion of the anisotropic response, in Fig. 5(a) we show detailed experimental data on the Zeeman-field-angle dependence of the AAS amplitude for a field strength B_∥ = 1 T at two different Rashba SOI strengths. We find that the angle-dependent pattern inverts as α changes from -1.5 x 10^-12 eVm to -2.8 x 10^-12 eVm while α's sign remains constant. This is well accounted for by perturbation theory, Eq. (4), where the anisotropy inversion is attributed to the AC phase √(1 + Q_R²) − 1, which shares geometric and dynamic phase contributions [20], [29]. The AAS amplitude dependence on the Zeeman field angle (for a given α) also has a hybrid geometric/dynamic phase origin [Appendix A2]. We notice that a purely geometric spin-phase tuning by the Zeeman field's strength is possible at the magic angles θ = 0, π (where the anisotropic contribution vanishes) through the purely geometric Zeeman phase shift reported in [22].
In order to account for realistic conditions in our models beyond the limitations of perturbation theory, we resort to 2D numerical simulations of disordered multi-mode rings. We use the Kwant code [30] with a disorder potential corresponding to a mean-free path of 1.8 µm, which is shorter than the ring circumference of 3.8 µm. This disorder is crucial to develop dominating AAS interference paths [17].
The calculation details are described in [23]. We assume a ring radius of 610 nm and a ring channel including 5 modes, with carrier density Ns = 1.52 x 10^16 m^-2. The in-plane Zeeman energy is set to gμ_B B_∥ = 0.17 meV, with g = 3 and B_∥ = 1 T, while the Dresselhaus SOI strength is fixed to 0.3 x 10^-12 eVm. These parameters are very similar to those of the InGaAs QW used in the present experiment. The results, depicted in Fig. 5(b), show that the maximum and minimum AAS amplitudes appear around θ = π/4 and θ = 3π/4 for α = -1.5 x 10^-12 eVm, while this anisotropy is reversed for α = -2.8 x 10^-12 eVm.
This is in quite good agreement with the experimental results shown in Fig. 5 (a).
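The quoted Zeeman energy can be checked with a one-line estimate (standard constants; the g-factor and field are the values given above):

mu_B = 5.7883818060e-5   # Bohr magneton in eV/T
g, B_par = 3.0, 1.0      # g-factor and in-plane field (T) used in the simulations
print(f"g*mu_B*B = {g * mu_B * B_par * 1e3:.2f} meV")   # ~0.17 meV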
In Fig. 6 we present the AAS amplitude measured as a function of the gate voltage for two different in-plane field angles θ = π/4 (red) and θ = 3π/4 (blue) and field strengths B_∥ = 1 T (Fig. 6, left) and B_∥ = 2 T (Fig. 6, right), at B_⊥ = 0. The oscillatory response as a function of the gate voltage is due to the AC effect induced by spin phases in TR-path interference. The observed period is well reproduced by perturbation theory, Eq. (2), once the gate-voltage dependence of α is taken into account. We find that the AC oscillation amplitude decreases on increasing B_∥ from 1 T to 2 T. This is explained by the spin-induced dephasing effect, as discussed in Ref. [27] and experimentally confirmed in [31]. This discrepancy is remarkable. The most plausible reason for such an additional anisotropy reversal is a sign change in the Dresselhaus SOI, as expected from the two-fold symmetry of the effective field of Eq. (1) and the perturbation theory in Eq. (4). By taking into account higher-order and strain-induced Dresselhaus effects, one notices that the sign of the resulting renormalized linear Dresselhaus SOI can be controlled by modifying the carrier density [32]. The Dresselhaus SOI Hamiltonian H_D is then supplemented by an additional strain-induced term.
The value of the renormalized linear Dresselhaus coupling β' is controlled electrically by the carrier density through the Fermi wave vector k_F = √(2πNs). The confinement expectation value ⟨k_z²⟩ = ∫ Ψ*(z)(−∂²/∂z²)Ψ(z) dz in the InGaAs QW is estimated following [35]. The renormalized linear Dresselhaus SOI β', including the strain term, is plotted as a function of the carrier density in Fig. 8. From Fig. 8, we obtain the critical carrier density 1.95 x 10^16 m^-2 at which β' changes its sign. It is difficult to explain our result without considering the strain term, as shown by the red dashed line. It should be emphasized that the critical density is not changed if the ratio between the bulk Dresselhaus coefficient and the strain-induced coefficient is preserved. This critical carrier density is consistent with the one corresponding to the additional anisotropy reversal in Fig. 7(a) and supports the conclusion of a sign change of β' in our experiment by electrical means.
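The density-controlled sign change of β' can be illustrated with a toy model of the form β'(Ns) = γ(⟨k_z²⟩ − k_F²/4) + β_strain, with k_F² = 2πNs; the parameter values in the sketch below are assumed placeholders chosen only for illustration, not the values fitted in this work.

import numpy as np

gamma = 27e-30          # bulk Dresselhaus coefficient in eV m^3 (typical III-V scale; assumed)
kz2 = 4.0e16            # confinement expectation <k_z^2> in m^-2 (assumed)
beta_strain = -2.5e-13  # strain-induced linear term in eV m (assumed)

def beta_prime(Ns):
    kF2 = 2 * np.pi * Ns
    return gamma * (kz2 - kF2 / 4) + beta_strain

print(f"beta'(1.0e16 m^-2) = {beta_prime(1.0e16):.2e} eVm")
print(f"beta'(3.0e16 m^-2) = {beta_prime(3.0e16):.2e} eVm")
Ns_c = (2 / np.pi) * (kz2 + beta_strain / gamma)   # analytic zero of beta_prime
print(f"sign change near Ns ~ {Ns_c:.2e} m^-2")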
VI. CONCLUSIONS
APPENDIX A: PERTURBATION THEORY
The Hamiltonian for spin carriers with effective mass m* confined in a Rashba 1D ring of radius r (parametrized by the azimuthal angle φ) is given by [19]
H_0 = ℏω_0(−i∂_φ)² + ℏω_R(−i∂_φ)(σ_x cos φ + σ_y sin φ) − i(ℏω_R/2)(σ_y cos φ − σ_x sin φ),   (S1)
with frequencies
ω_0 = ℏ/(2m*r²),   ω_R = α/(ℏr).   (S2)
The main contributions to (S1) are the kinetic energy (first term) and the Rashba spin-orbit coupling (second term), corresponding to an effective (momentum-dependent) magnetic field pointing along the radial direction. The third term is Meijer's correction [19] that guarantees the hermiticity of the Hamiltonian. The latter can be neglected in the semiclassic limit of large Fermi momentum, typically satisfied in mesoscopic semiconductors.
We perturb the radial magnetic texture in ℋ_0 by introducing an in-plane Zeeman term Δℋ_1 (of strength ℏω_B = gμ_B B_∥ and in-plane direction θ) and a linear Dresselhaus spin-orbit term Δℋ_2 (of strength ℏω_D = β/r). By following the standard perturbation theory for nondegenerate systems [36], we find the first signs of anisotropy in the perturbed eigenenergies only after a 3rd-order expansion in Δℋ = Δℋ_1 + Δℋ_2. This procedure leads to the perturbed eigenenergies (S10). The anisotropic response of the perturbed eigenenergies (S10) to the Zeeman field orientation appears at 1st order in the Dresselhaus coupling strength and at 2nd order in the Zeeman one, through a term proportional to ω_D ω_B² sin(2θ),
showing that the anisotropy can discriminate the sign of the Dresselhaus term. The perturbed eigenstates need to be expanded only up to 2nd order in Δℋ = Δℋ_1 + Δℋ_2 to show the first anisotropic features due to the joint Dresselhaus-Zeeman action. Up to a normalization factor, they read as in (S11), where the sums run such that the denominators do not vanish. The perturbative corrections to the first term in (S11) lead to ω_B² and ω_D² contributions only, while the last term shows additional joint contributions. We point out that the results (S10) and (S11) hold in the regime where the perturbations are small compared with the level spacing, such that degeneracy mixing is avoided and the perturbative approach is sound.
Anisotropic conductance and the role of geometric/dynamic spin phases.
We calculate the AAS (Al'tshuler-Aronov-Spivak) corrections to the conductance of a two-terminal AC 1D ring originated from the interference of time-reversed paths at the lowest order (i.e., semiclassical paths describing single windings around the ring, corresponding to strongly coupled contacts) by following a procedure similar to our previous works on Rashba rings [20,22], where the phase difference gathered by counter-propagating spin carriers is found by solving E(n) = E_F for noninteger orbital numbers n, with E_F the Fermi energy. As a result, the AAS conductance takes the form (S12), with the phase shifts (S13)-(S15). These contributions represent a Zeeman phase shift, a Dresselhaus phase shift, and an anisotropic phase shift, respectively. The latter depends explicitly on θ, showing the two-fold symmetry anticipated in Eq. (1), with opposite extreme values at θ = π/4 and 3π/4. Notice that the Zeeman and Dresselhaus phase shifts derive from the quadratic contributions to the perturbed eigenenergies (S10). Hence, according to perturbation theory [36,37], they originate from the linear contributions to the perturbed eigenstates (S11). As for the anisotropic phase shift, it is a consequence of the cubic contributions to (S10) and the quadratic ones to (S11).
Each of the phases (S13)-(S15) can be of either pure or hybrid geometric/dynamic origin. The geometric-phase contribution to the conductance (S12) can be evaluated from the perturbed eigenstates (S11) as in Eq. (S16) [22,25], in terms of the solid angle subtended by the spin texture, where Ω_0 denotes the unperturbed (Rashba) contribution in the presence of an in-plane field [20,22]. The additional contributions to the geometric phase in (S16) are interpreted as perturbations ΔΩ to Ω_0. The share of these geometric-phase contributions in the conductance (S12) depends on the corresponding weight factors appearing in (S16). The Zeeman contribution to (S12) turns out to be of purely geometric origin (as reported in [22]) as a consequence of a weight factor 1 (absolute value) in (S16). The Dresselhaus contribution to (S12), with a weight factor ½ in (S16), turns out to be only 50% geometric (the other 50% is of dynamic origin), likely due to the different symmetry class of the Zeeman and Dresselhaus perturbations. As for the geometric-phase contribution to the anisotropic phase, the corresponding weight factor ½ in (S16) indicates a 50% share. Namely, it has a hybrid geometric/dynamical origin.
Role of disorder.
The role of disorder can be effectively accounted for by introducing a classical conductance G_0 and a quantum-correction amplitude a ≪ 1, as in Eq. (S17). The resistance R = 1/G, better suited to experiments, then reads as in Eq. (S18), with R_0 = 1/G_0 the classical resistance.
By noticing that the anisotropic phase is much smaller than the unperturbed AC phase √(1 + Q_R²) − 1, we rewrite the resistance as in Eq. (S19), whose anisotropic part is proportional to sin(2θ). Equation (S19) shows an anisotropic response of the resistance to the Zeeman field's direction θ.
Moreover, the sign of the anisotropy can be independently modulated by the Rashba strength, even though the Rashba response itself is isotropic.
APPENDIX B: CARRIER DENSITY DEPENDENCE OF RASHBA SOI STRENGTH
The gate-fitted Hall bar (70 µm x 280 µm) was fabricated on the same chip as the spin interferometer (40 x 40 ring array). The relation between carrier density and Rashba SOI strength was obtained from the analysis of Shubnikov-de Haas (SdH) oscillations, as shown in Fig. 9.
The SdH oscillations show a beating pattern because of spin splitting due to the strong Rashba SOI.
The Rashba SOI strength is extracted from the spin-split densities n↑ and n↓, which can be obtained from the fast Fourier transform (FFT) spectra of the SdH oscillations. The electron effective mass m* = 0.05 m_0 can be estimated by analyzing the temperature dependence of the SdH oscillation amplitude. The relation between the Rashba SOI parameter α and the carrier density is plotted in Fig. 10. In the above analysis, we assumed that the Dresselhaus SOI strength is negligible since β = γ⟨k_z²⟩ is one order of magnitude smaller than the Rashba SOI strength.
FIG. 1 | (Left) In the moving electron's rest frame, the SOI field subtends a solid angle (blue) in a round trip around the interference ring. The solid angle is proportional to the spin geometric phase. Only when the Larmor frequency of spin precession ω_s is fast enough compared with the orbital frequency ω is the SOI field confined to the x-y plane (adiabatic limit). Spin precession around the total SOI field B_total is associated with the dynamical phase. The angle is set by the ratio of the orbital and spin-precession frequencies.
(Right) The in-plane field modulates the geometric phase by changing the solid angle subtended by the total effective field. | 2019-03-18T05:10:09.803Z | 2018-03-30T00:00:00.000 | {
"year": 2018,
"sha1": "800e46c0c93d54391eda90ee022cd9ebdd52967b",
"oa_license": "CCBY",
"oa_url": "https://ri.conicet.gov.ar/bitstream/11336/116786/5/CONICET_Digital_Nro.5d3220e2-d6dc-4bc7-bf8a-4b7706f89d30_X.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "800e46c0c93d54391eda90ee022cd9ebdd52967b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
81122363 | pes2o/s2orc | v3-fos-license | Surgical anatomy of cricothyroid membrane with reference to airway surgeries in North Indian population: a cadaveric study’
ABSTRACT
Cricothyroid space is the space that extends between the arch of the cricoid cartilage below and the inferior edge of the thyroid lamina above. The cricothyroid membrane covers this space between the cricoid and the thyroid, which on either side of the midline is occupied by the thicker median cricothyroid ligament. The cricothyroid space and membrane is easily accessible from the surface. Although orotracheal intubation is the preferred method of securing the airway, many conditions demand establishing an urgent surgical airway. Surgical cricothyroidotomy is one such technique used to rapidly gain entry into the subglottic airway by creating an opening in the cricothyroid membrane. However, the size and position of the cricothyroid membrane is variable depending on racial characteristics of the individual. Statistics regarding the dimensions of the cricothyroid membrane have been documented extensively in the Caucasian race.
INTRODUCTION
Cricothyroid space is the space that extends between the arch of the cricoid cartilage below and the inferior edge of the thyroid lamina above. The cricothyroid membrane covers this space between the cricoid and the thyroid, which on either side of the midline is occupied by the thicker median cricothyroid ligament. The cricothyroid space and membrane is easily accessible from the surface.
Although orotracheal intubation is the preferred method of securing the airway, many conditions demand establishing an urgent surgical airway. Surgical cricothyroidotomy is one such technique used to rapidly gain entry into the subglottic airway by creating an opening in the cricothyroid membrane. However, the size and position of the cricothyroid membrane is variable depending on racial characteristics of the individual. 1 Statistics regarding the dimensions of the cricothyroid membrane has been documented extensively in the Caucasian race. 2,3 Race, heredity, climate and nutritional status are known to affect the body size of a population.
More knowledge regarding the cricothyroid membrane in Indian population would facilitate optimal procedural guidelines for cricothyroidotomy in Indians.
Objective
The purpose of the current paper was to measure the dimensions of the cricothyroid membrane and the depth of subglottic space in the adult north Indian population.
METHODS
The study was performed at a university teaching hospital - Dept of ENT, Army College of Medical Sciences and associated Base Hospital, Delhi Cantt - 110010, from Jul 2016 to Mar 2018, on thirty-nine (n=39) apparently normal adult Indian cadaveric larynges (F:M = 14:25) obtained from the Dept of Anatomy, Army College of Medical Sciences, Delhi Cantt. Laryngeal specimens excised from cadavers with any possibility of laryngeal damage as a result of disease or manipulation were not taken into consideration.
All the larynges were removed from the hyoid till the second tracheal ring. All soft tissues (ligaments and muscles) were carefully removed. These larynges were serially numbered F-01 to F-14 for female specimens and M-01 to M-25 for male specimens. The cricothyroid membrane was identified in the space between the thyroid and cricoid cartilages, and its dimensions were measured using an electronic vernier caliper (with a least count of 0.1 mm), as shown in Figure 1. The findings were recorded in predesigned proformas. The data obtained were finally entered into Microsoft Excel (Microsoft Corporation, Silicon Valley, Ca. USA) and analyzed. For each of the parameters, the range (minimum value to maximum value), arithmetic mean and standard deviation (S.D.) were calculated. The measurements of the various dimensions in female larynges (n=14) are shown in Table 1, while those of male specimens (n=25) are shown in Table 2. As can be seen from the data, the average dimensions of the cricothyroid space and cricothyroid membrane were uniformly larger in males compared to females. The height of the cricothyroid space in the midline was marginally larger in males (range-F: 5.
*'Working' dimensions (exposed area of the membrane between the two cricothyroid muscles) were measured in the other two studies, whereas 'full' dimensions (complete widths) were measured in the current study.
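The descriptive statistics reported in Tables 1 and 2 (range, mean and S.D.) follow from the raw caliper readings as in the short sketch below; the sample values are invented solely to show the calculation and are not actual study data.

import statistics as st

readings_mm = [9.1, 8.7, 10.2, 9.8, 9.5]   # hypothetical caliper readings for one dimension
print(f"range: {min(readings_mm)}-{max(readings_mm)} mm")
print(f"mean +/- S.D.: {st.mean(readings_mm):.2f} +/- {st.stdev(readings_mm):.2f} mm")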
DISCUSSION
In substantial neck swelling caused by subcutaneous oedema, haematoma or emphysema, emergency cricothyroidotomy is very complicated as normal anatomical landmarks are obliterated. 1 Blind attempts have frequently caused injury to the thyroid cartilage, accidental penetration of the thyrohyoid membrane or tracheal insertion of the airway. 3 Puncture of the posterior wall of the larynx and oesophagus has been reported as well following cricothyroidotomy. 3 Fatal airway hemorrhage has also been reported following cricothyroidotomy, resulting in endobronchial hemorrhage and asphyxia. 3 Placing outsized tubes results in fracture of the laryngeal cartilages and, later, dysphonia and subglottic stenosis. 2,5-7 Hence a thorough understanding of the dimensions of the cricothyroid membrane is vital to prevent such complications, especially in relevance to the population we serve. The present study on 39 adult laryngeal specimens (25 male and 14 female) focuses entirely on the Indian race (north Indian subset). 3,8 It is to be noted that although the previous studies take into account the 'working dimension', meaning the exposed cricothyroid membrane (between the two cricothyroid muscles), this study measures the entire membrane that is available in the cricothyroid space, as we feel the entire space is available for cricothyroidotomy should the need arise. It is observed that the dimensions are consistently smaller in the adult Indian population compared with their Caucasian counterparts, and almost comparable between the north and south Indian populations. Tube sizes recommended in studies with Caucasian subjects might not be applicable to the Indian population. Further, there is minor variation in dimensions between the north and south Indian populations.
A cannula with an outer diameter of 8 mm and an internal diameter of at least 5 mm has been suggested by the American Association of Clinical Anatomists for use in cricothyroidotomy. 9 Smaller cannulas are easier to introduce, but the narrower the tube, the greater the resistance to airflow, as per Poiseuille's law. Although it is quicker to insert a smaller diameter cannula, more reliable oxygenation (↑ PaO2) was attained only with larger cannulas. 10 Narrod et al have recommended a size 6 tracheostomy tube (which has an internal diameter of 6 mm and an outer diameter of 8 mm) for cricothyroidotomy. Larger cannulas might fracture the thyroid or cricoid cartilage and also cause subglottic stenosis. 2,5-7 Dover et al cautioned that tubes frequently utilized for tracheostomy (#8 and #10 Shiley tracheostomy tubes with outer diameters of 12 mm and 13 mm respectively) could cause laryngeal injury when used for cricothyroidotomy. 3 Develi et al established that the vital anatomical structures (cricothyroid vessels, pyramidal lobe of thyroid gland) were mostly located in the upper half and lower left quadrant of the cricothyroid membrane. 11 They further recommend that the lower right quadrant of the membrane is safer for invasive procedures such as needle cricothyroidotomy or other cannulation techniques.
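The Poiseuille-law argument can be made concrete: under the laminar-flow assumption, resistance scales with the inverse fourth power of the radius, so modest differences in internal diameter translate into large differences in airflow resistance. A minimal sketch (relative values only, ignoring tube length and turbulence):

def relative_resistance(internal_diameter_mm):
    # Poiseuille's law: resistance ~ 1/r^4 for laminar flow.
    r = internal_diameter_mm / 2.0
    return 1.0 / r**4

for d in (4.0, 5.0, 6.0):   # candidate cannula internal diameters in mm
    ratio = relative_resistance(d) / relative_resistance(6.0)
    print(f"ID {d} mm -> {ratio:.1f}x the resistance of a 6 mm ID tube")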
Perforation of the posterior wall of the larynx and oesophagus is a reported complication following cricothyroidotomy, resulting in the formation of tracheoesophageal fistulae. 3,12 This study showed that the depth of the subglottic larynx ranged from 13.1 to 27.2 mm, with the mean being 17.24±2.09 mm in women and 21.94±2.93 mm in men. Care should be taken not to incise too deeply while entering the subglottic larynx.
CONCLUSION
When rapid surgical access to the airway is required following traumatic injury, cricothyroidotomy is the procedure of choice. To avoid or manage complications following surgical cricothyroidotomy, understanding of the dimensions and relations of the cricothyroid membrane is crucial.
As the dimensions of the cricothyroid membrane are smaller in the Indian population compared to the Caucasian population, ET tubes ranging from size 3.0 to 5.0 in females and size 4.0 to 6.0 in males are suggested for use in cricothyroidotomy in the north Indian population. Insertion of oversized tubes is known to cause dysphonia, laryngeal damage and subglottic stenosis. There is scope for additional research to corroborate these findings and to study whether there is any disparity in size in other ethnic groups around the world.
"year": 2018,
"sha1": "c6f9869695e08f1084b7eb3789add0fa1069cce5",
"oa_license": null,
"oa_url": "https://www.ijorl.com/index.php/ijorl/article/download/1038/578",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "e65b28e27d3b4dc903db555f6d1f6c6281419d61",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
225150084 | pes2o/s2orc | v3-fos-license | DOES INTERNAL CONTROL WORK ? Fraud Case in Government Sector Indonesian Evidence
This research analyzed the role of internal control in reducing the intention to commit fraud. Internal control becomes a variable that moderates the relationship of organizational culture, information asymmetry and law enforcement with a person's intention to commit fraud. This research took place in a district in Eastern Indonesia. The respondents of this research are government employees, selected with a convenience sampling method. The findings of this study are as follows: this study proved that the greater the information asymmetry, the higher the intention to commit fraud; in addition, law enforcement and regulation could reduce fraud intentions. However, this study did not prove an influence of organizational culture on fraud, and it also failed to prove the role of internal control in reducing fraud intention. This research has a practical contribution: law enforcement and regulation are the most important factors in reducing fraud intention, and it is necessary to improve the internal control system so that it functions as expected. As for the originality of this study, there is still little empirical research that discusses internal control functions in relation to the reduction of fraud intention.
INTRODUCTION
Fraud is an issue that is endlessly discussed. Various institutions and states always face this problem. Corruption committed by lower-level employees or even top management in private companies often adorns the mass media. Corruption or other fraud committed in the government sector has also become a common phenomenon in several countries. Fraud or corruption does not only happen in third world countries but also in developed countries. One of the biggest corruption cases in Indonesia is the e-KTP (electronic identity card) case, which has ensnared several political party figures. Until now this case is still a widespread concern of the community. In Malaysia, there is a corruption case that shocked the international public and involved several popular names and countries: the misappropriation of the Malaysian development fund flow (1MDB). The case involved the United States of America as a plaintiff and Singapore as an investigator and a bailiff for 240 million Singapore dollars worth of assets. This suspicious funding stream also involves major banks in Malaysia such as Good Star Limited, Aabar Investments PJS Limited and Tanore Finance Corp. (Tempo.co, 2016). The case has dragged in the name of the prime minister of Malaysia, Najib Razak, due to his stepson's involvement, and also the name of the famous movie star, Leonardo Di Caprio (Ferdian, 2016).
A bribery issue involving 12 million yen from a construction company made the Japanese economic minister, Akira Amari, resign from the cabinet of prime minister Shinzo Abe (bbc.com, 2016). In mid-2015, there was a financial scandal in Japan: Toshiba's management allegedly engaged in earnings management by overstating Toshiba's revenue. The Toshiba accounting scandal is estimated to have reached more than US $1 billion as of March 2014. As a result of this event, the public questioned the performance of the company's management. Toshiba Corp.'s CEO Hisao Tanaka finally decided to resign along with other board members, including vice chairman Norio Sasaki, to take responsibility for the accounting irregularities (Panji, 2015).
Related to fraud cases, a number of theories have tried to explain and understand the underlying causes and motivations of fraud perpetrators, from the Fraud Triangle theory (Cressey, 1953); the Fraud Scale (Albrecht, Howe and Romney, 1984); the Fraud Diamond (Wolfe and Hermanson, 2004); the ABCs of White Collar Crime (Ramamoorti, 2009); the Fraud Pentagon (Marks, 2009); to A Disposition-based Fraud Model (Raval, 2016). One of the contributing factors to the occurrence of fraud is internal control weakness (fraud triangle, fraud diamond and fraud pentagon). The best way to minimize the chances of fraud is to apply a good internal control system (Buckhoff, 2002). Rae and Subramaniam (2008) examined the quality of internal control procedures (ICP). They found that the quality of ICP moderates the relationship between organizational fairness perceptions and employees' fraud. The quality of ICP is also significantly and positively related to three major organizational factors: the company's ethical environment, the extent of risk management training of staff, and the internal audit (IA) activity level. In addition to internal control, there are many other factors that also cause the occurrence of fraud. Faisal (2013) found a negative influence of internal control system compliance, ethical behavior and leadership style on fraud in the government sector. However, compensation amount and organizational culture had no effect on employees' fraud. Pramudita (2013) also found a negative influence of internal control effectiveness on fraud in the government sector. Pressure is also one of the drivers of fraud. Compliance (demand) and obedience (order) pressures from the CEO significantly increased the willingness of CFOs to revise their initial inventory adjustments. In addition, CFOs' accounting experience is inversely proportional to the revised initial estimates. CFOs who yield to the CEO's pressure ultimately take personal responsibility for their adjustments (Bishop et al, 2017).
Corruption is a worldwide phenomenon. Farooq and Brooks (2013) tried to examine cultural linkages with corruption. Their research found that, instead of treating public sector fraud as a cultural issue, respondents felt that they were part of the solution in preventing fraud and corruption. However, as elsewhere, those who try to prevent corruption in the Gulf region will meet deep-rooted attitudes found throughout the world. Research in India found that the regulatory system was weak and that the role of auditors desperately needed to be redefined. Coordination between different regulatory authorities is poor, so fraud continues and is then followed by a blame game. Reporting of fraud and publication of fraud prevention policies are also missing. Banks and financial institutions are not effective in due diligence, and there is a lack of professionalism within the board and at other executive levels (Gupta and Gupta, 2015). Mihret (2014) examined cultural influences on fraud rates in 66 countries using Hofstede's cultural dimensions. Results from a sample of 66 countries showed a higher fraud risk for countries characterized by high power distance (PDI) and lower fraud risks for countries with long-term oriented cultures.
Various factors can motivate a person to commit fraud. Corporate culture, law enforcement and information asymmetry are some of the factors that can lead to fraud. Pramudita (2013) found that there was a negative influence of organizational ethical culture on fraud in the government sector, but no influence of law enforcement on fraud. Mustikasari (2013) found a positive correlation between information asymmetry and fraud. This research aimed to examine several factors that could trigger a person to commit fraud. These factors were organizational culture, information asymmetry and law enforcement. In addition, this research wants to analyze whether the existence of internal control could minimize the occurrence of fraud in the presence of these trigger factors. This study has a practical contribution for the government in establishing a better internal control system to minimize fraud in the government sector, and law enforcement is shown to be a significant factor in reducing fraud intention. There is still little empirical research about fraud in the government sector and about whether internal controls function as expected according to theory.
Fraud Theory
There are various behavioral theories that address the causes of fraud. The line of theories was initiated by the fraud triangle theory (Cressey, 1953). According to this theory, there are three causes for a person to commit fraud: pressure, opportunity and rationalization. This theory was then developed into the fraud diamond (Wolfe and Hermanson, 2004) by adding one dimension, namely individual capability. Without adequate capability, someone could not possibly succeed in committing fraud (Ruankaew, 2016). Besides the fraud diamond theory that developed from the fraud triangle, there is the theory of fraud square I (Bressler and Bressler, 2007). This theory consists of 4 elements: incentive, opportunity, capability, and realization. A variant of the fraud square is the theory made by Cieslewicz (2010) (in Mackevicius & Giriunas, 2013), which adds social influence as the 4th element (Fraud Square II).
This theory was later developed by Marks (2009) into Fraud Pentagon I. According to Marks (2009), there are five motivations for people to commit fraud, i.e., pressure, opportunity, rationalization, competence, and arrogance, where "competence" expands Cressey's opportunity element to incorporate an individual's ability to override internal controls and social controls for his or her own benefit. "Pride or arrogance or lack of conscience" is an attitude of superiority, entitlement or greed on the part of people who believe that company policies and procedures do not personally apply to them. Mui and Mailey (2015) combined the fraud triangle and the crime triangle into the Symbiosis of Fraud Triangle and Crime Triangle theory. The crime triangle complements the offender-centered focus of the fraud triangle by examining the environment in which fraud occurs and the parties who are supposed to prevent fraud but do not play their roles.
Albrecht, Howe and Romney (1984) developed the theory of fraud scale I by taking two elements of the fraud triangle, pressure and opportunity, but replacing rationalization with personal integrity. Mackevicius and Giriunas (2013) developed Albrecht's version of the fraud scale further, where the elements of the fraud scale are motive, condition, possibility and realization (Fraud Scale II). The first element of the fraud scale is the motive. It determines whether an employee is likely to behave unfairly and why. The second element of the fraud scale is the condition that increases the risk. The third element is the possibility, which is treated as an option given to employees who wish to cheat. The fourth element is realization, which is seen as a means to justify unfair behavior.
Ramamoorti et al (2009) used a different approach from previous theories. Their theory is called the ABCs of White Collar Crime. ABC: A - the Bad Apple: individual personality characteristics of those who commit fraud; B - the Bad Bushel: the dynamics of collusive group behavior; C - the Bad Crop: larger cultural/community factors that increase or allow fraud. The MICE theory proposed by Dorminey et al (2012) says that the causes of fraud are M for Money, I for Ideology, C for Coercion and E for Ego or Entitlement. Looking at the elements of this theory, it appears that it is still a development of the fraud triangle. Raval (2016) proposed the disposition-based fraud model. This model frames financial fraud as an act of indulgence. A person commits an act of fraud by giving in to moral temptation, which leads to deliberate action. Thus, the Disposition-Based Fraud Model is essentially an interaction between (a) the circumstances indicated by the stimuli that make up the moral temptation at hand, and (b) the character of the actor (disposition).
Internal Control Theory
The definition of internal control according to COSO (Committee of Sponsoring Organizations of the Treadway Commission) is: "a process, effected by an entity's board of directors, management and other personnel, designed to provide reasonable assurance of the achievement of objectives in the following categories: (1) Effectiveness and efficiency of operations, (2) Reliability of financial reporting, (3) Compliance with applicable laws and regulations" According to COSO there are 5 components that must exist so that internal controls can run effectively so that management can achieve the vision, mission and goals of the company. The five components are: control environment; risk assessment; control activities; information and communication and monitoring.
The control environment consists of various aspects such as ethical values, operating style and organizational structure, management philosophy, and quality of human resources. The control environment acts as the initial foundation that determines the next component. Risk assessment requires identification, understanding and action to an event, problems, conditions, opportunities and threats faced by the company either operationally, financially and even the ultimate goal of the entity. Risk assessment keeps the entity alert in the presence of monetary and non-monetary risks that threaten the company, its potential risks and how to manage those risks.
Control activities such as procedures, rules and policies have a role in ensuring that the objectives of the control system are met and that risks are properly managed. Control activities fall into three categories: operational control, financial information control, and compliance control. Operational control is exercised through control over the company's operations. Control of financial information is carried out by ensuring the reliability of the entity's financial reporting and the protection of its assets. Control over compliance is carried out by ensuring that all applicable laws and regulations have been implemented.
The information and communication element of internal control concerns all the information collected and how a consistent message flows through the entity. This means that everyone should be able to receive top management's message on aspects of the control system, how they work, and their roles and responsibilities within the entity. Furthermore, the control system should be periodically monitored to assess its effectiveness. Monitoring activities can be carried out by providing employee training, evaluating personnel performance and acting on audit feedback. Most importantly, internal auditors are responsible for reporting non-conformities in the entity's internal control system to top management, the board of directors and the audit committee.
Organizational Culture, Internal Control and Fraud Intention
Organizational culture includes values and behaviors that contribute to the unique social and psychological environment of an organization. Organizational culture represents the collective values, beliefs and principles of organizational members and management styles, and includes the vision, values, norms, systems, symbols, languages, assumptions, environment, location, beliefs, and habits of the organization (Needle, 2004). In relation to fraud, organizational culture can provide the justification (rationalization) to commit fraudulent acts. According to COSO's internal control theory, organizational culture is part of the control environment. The control environment is the foundation of the success of the internal control system. If the members of the organization are accustomed to cheating and management does not instill norms of integrity and honesty, these bad values will spread throughout the organization, so the intention to commit fraud will be higher. Conversely, if the corporate culture instills good values and norms of behavior within the company, fraud intentions will decrease.
Internal control is a set of procedures developed by management with one of its goals being to minimize the occurrence of fraud. A good organizational culture, combined with a good internal control system, will minimize fraud intentions. Conversely, if the organizational culture is not conducive and the internal control system does not work, fraud intention will increase. Rae and Subramaniam (2008) found that the company's ethical environment affected the level of employees' fraud. Pramudita (2013) found a negative influence of organizational ethical culture on fraud in the governmental sector, but Faisal (2013) did not find an influence of organizational culture on fraud in the government sector. Meanwhile, good internal control has a negative effect on fraud (Faisal, 2013 and Pramudita, 2013). From here the following hypotheses are derived: H1a: The better the organizational culture, the lower the fraud intention. H1b: Internal control moderates the relationship between organizational culture and fraud intention.
Information Asymmetry, Internal Control and Fraud Intention
Information asymmetry is a manifestation of agency theory (Jensen and Meckling, 1976). In this theory, the agent is the person who runs the company and therefore has more information than the principal, so there is a possibility that agents will commit acts of fraud to maximize their own welfare while ignoring the interests of the principal. Certain procedures are needed to minimize agent behavior that is not in harmony with the interests of the principal. Companies need internal controls such as budget restrictions, compensation rules, standard operating procedures and others. Wilopo (2006), Najah (2013) and Prawira et al (2014) found a positive influence of information asymmetry on fraud. Rae and Subramaniam (2008) found that internal audit activity levels affect employee fraud rates. From here the following hypotheses are derived: H2a: The higher the level of information asymmetry, the higher the fraud intention. H2b: Internal control moderates the relationship between information asymmetry and fraud intention.
Law Enforcement, Internal Control and Fraud Intention
One of the internal control objectives is compliance with applicable laws and regulations. If applicable laws and regulations are not enforced by the authorities, there is an opportunity to commit fraudulent acts; conversely, if laws and regulations are enforced, members of the organization will be less inclined to commit fraud. Fraudsters will usually try to cover up their fraudulent acts, therefore there must be a system that can detect the occurrence of fraud or actions that violate laws or regulations. Good law enforcement combined with a good internal control system will thus be able to minimize the intention to commit fraud. Pramudita (2013) did not find an influence of law enforcement on fraud tendencies in the government sector, while Mustikasari (2013) and Faisal (2013) found a negative influence of law enforcement on fraud tendencies. Rae and Subramaniam (2008) found that the level of internal audit activity affects the level of employee fraud. From here the following hypotheses are derived: H3a: The better the law enforcement, the lower the fraud intention. H3b: Internal control moderates the relationship between law enforcement and fraud intention.
RESEARCH METHOD
This is an empirical study that examines the causal relationship of organizational culture, information asymmetry and law enforcement with fraud, and analyzes the role of internal control in those relationships. This research uses 3 types of variables, i.e., independent variables, a dependent variable and a moderating variable. The independent variables are organizational culture, information asymmetry and law enforcement. Organizational culture was measured by 5 questions adopted from Faisal's (2013) research. Information asymmetry was measured by 6 questions developed from Dunk's (1993) theory, as used by Wilopo (2006). Law enforcement was measured by 5 questions developed from Robin's (2008) theory, as used by Pramudita (2013). The dependent variable is fraud intention. The moderating variable is internal control, measured by 5 questions developed from COSO and used in Mustikasari's (2013) research. All questions used a Likert scale.
Organizational culture refers to employees' perceptions of the atmosphere and habits in the institution where they work, including communication between employees and superiors and the work environment formed within the institution. Information asymmetry refers to employees' perceptions of the process of disseminating information within the institution, i.e., whether or not all employees know all the information, especially financial information that is very easy to misuse. Law enforcement refers to employees' perceptions of the existing rules within the institution, how management operates those rules, and how sanctions are applied to those who violate them. Fraud intention refers to employees' perceptions of fraudulent behavior that often occurs in the government sector. Internal control refers to employees' perceptions of the procedures that guide and supervise them in achieving agency goals, including reliability of financial reporting, effectiveness and efficiency of work, and compliance with the rules and applicable laws of the institution where they work.
Population and sampling
The population is permanent employees who work for a district government in one of the provinces in East Indonesia. The sample was taken using a convenience sampling technique, i.e., the collection of information from members of the population who volunteered to answer the questionnaires. We distributed 200 questionnaires and 155 were returned, so the response rate was 77.5%. The questionnaires were distributed to all levels of employees in the district government with the help of a division head there.
Data analysis technique
Data were analyzed in two stages. In the first stage, we tested the quality of the data; in the second stage, we tested the hypotheses using regression. The data quality test was performed with a reliability test using Cronbach's alpha and a validity test using Pearson correlation. The results showed that the data were reliable, because the Cronbach's alpha values of all variables were greater than 0.6. All data were also valid, because the p-values of the Pearson correlations were smaller than the 5% significance level.
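For concreteness, the sketch below (not the authors' code; the DataFrame items, holding one column per Likert-scale question of a single construct, is an assumption) shows how these two checks are commonly computed:

import pandas as pd
from scipy import stats

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def item_validity_pvalues(items: pd.DataFrame) -> pd.Series:
    # p-value of the Pearson correlation of each item with the construct total
    total = items.sum(axis=1)
    return pd.Series({c: stats.pearsonr(items[c], total)[1] for c in items.columns})

# reliability: cronbach_alpha(items) > 0.6; validity: all p-values < 0.05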
Demographics of Respondents
The following table describes the demographics of respondents regarding age, gender, and level of education. An individual's age can be a depiction of his or her experience and responsibilities in the organization. Of the 155 respondents in this study, the majority were aged between 31 and 35 years (26.5%), followed by 26 to 30 years (23.2%). This result shows that the majority of respondents are of productive age.
In terms of gender, the number of male respondents was 81 (52.3%) and the number of female respondents was 74 (47.7%). This balance between male and female respondents suggests that male and female civil servants generally have equal opportunities to work as employees at the Sorong district government.
Table 1 Demographics of respondents
Regarding the level of education, the data show that the majority of respondents have a Bachelor degree (S1), namely 112 people or 72.3%; next, 31 people or 20% have a senior high school (SMA) education, and 12 people or 7.7% hold a diploma (D3). This shows that most of the Sorong district government's employees already have adequate education to carry out their functions as civil servants.
As for the period of employment, the data show that there is a fair share of new employees in Sorong regency, around 28.4%, although there are more experienced staff (who have worked for more than 5 years), namely 71.6%. This shows a good balance between the number of new employees and the number of experienced staff.
Hypothesis Testing
To test the hypothesis, we used 2 models. The first model examined direct influence of independent variables (organizational culture, law enforcement and information asymmetry) toward the fraud intention. The second model included internal control variables as a moderating variable of relationships among organizational culture, law enforcement and information asymmetry variables with fraud intention.
Model 1: FI = a + b1.OLCUL + b2.IA + b3.LE + e

Model 2: FI = a + b1.OLCUL + b2.IA + b3.LE + b4.OLCUL*IC + b5.IA*IC + b6.LE*IC + e

Table 2 shows the results of model 1, which tested the direct relationships of the independent variables (organizational culture, information asymmetry, and law enforcement) with the dependent variable, the intention to commit fraud. There was no relationship between organizational culture and fraud intention, while hypotheses 2a and 3a were successfully proven. The test results showed that the greater the information asymmetry, the higher the intention to commit fraud (p-value of 0.010, smaller than 5%, with a positive coefficient), and that good law enforcement reduces the occurrence of fraud (p-value of 0.001, smaller than 5%, with a negative coefficient). Table 2 also shows the test results when internal control is included to moderate the relationships of organizational culture, information asymmetry, and law enforcement with the intention to commit fraud. These results show that internal control does not moderate the relationships between the independent variables and fraud intention: all of the interaction terms, OLCUL*IC (organizational culture with internal control), IA*IC (information asymmetry with internal control), and LE*IC (law enforcement with internal control), had p-values greater than 5%.
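As an illustration only (this is not the authors' code, and the synthetic placeholder data below merely stand in for the questionnaire scores), the two specifications can be estimated in Python with statsmodels as follows:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
# placeholder Likert-style construct scores for 155 respondents (assumption)
df = pd.DataFrame(rng.integers(1, 6, size=(155, 5)).astype(float),
                  columns=["FI", "OLCUL", "IA", "LE", "IC"])

model1 = smf.ols("FI ~ OLCUL + IA + LE", data=df).fit()
# the ':' operator adds the moderation (product) terms OLCUL*IC, IA*IC, LE*IC
model2 = smf.ols("FI ~ OLCUL + IA + LE + OLCUL:IC + IA:IC + LE:IC", data=df).fit()
print(model1.summary())
print(model2.summary())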
If we compare the results of the M1 test (model 1) with M2 (model 2) for the independent variables, we can see that in M1 the information asymmetry and law enforcement variables have a significant effect on the intention to commit fraud; however, when the internal control variable is included as the moderating variable, these two variables become insignificant. This further reinforces that this study fails to prove that internal control can be a factor that reduces the intention to commit fraud.
Discussion and Analysis
This study fails to prove the first hypothesis, which tests the influence of organizational culture on the intention to commit fraud, whether tested directly (model 1) or with internal control included as a moderating variable (model 2). Basically, organizational culture is a collective value formed from the habits of senior employees, which are then believed and passed on to new employees. From the descriptive data, 71.6% are employees who have worked for more than 5 years, and 20.6% have even worked for more than 15 years. However, organizational culture, which is part of environmental control within the COSO framework, has no effect on the intention to commit fraud. This is also an interesting finding and deserves further investigation. Additionally, 72.3% of employees have an undergraduate education, which means they have sufficient maturity, in terms of their way of thinking and their insight, to form a good organizational culture that can reduce the intention to commit fraud. This result contradicts the research of Rae and Subramaniam (2008) and Pramudita (2013) but is in line with the research results of Faisal (2013).
The second hypothesis examines the relationship between information asymmetry and the intention to commit fraud. This study has succeeded in reconfirming agency theory (Jensen and Meckling, 1976), whereby the greater the information asymmetry, the higher the intention to commit fraud in order to maximize the welfare of the fraudster. The ability to see whether information is useful or not can be influenced by age, education level, length of work, and gender. As people get older, they will have more needs, which increase further if they are married. The demographics of the respondents indicate that the majority are in the age range of 26 to 45 years (77.4% in total), the ages at which people are newly married and raising children, which means they face higher living costs. The education level is also dominated by undergraduate (S1) education, which means that respondents have the ability to select, sort, and analyze information; furthermore, 71.6% are employees who have more than 5 years of work experience. The length of work provides unique experience and knowledge with which to select, sort, and analyze information. Meanwhile, the gender distribution in this study is nearly equal. This result is in line with the studies of Wilopo (2006), Najah (2013), and Prawira (2014).
This study also succeeded in proving directly that law enforcement has a significant effect on reducing the intention to commit fraud. In terms of the age, education level, and length of employment of the respondents, this is not surprising. Age and education level affect emotional maturity and the ability to consider risks. Although the age range of 26 to 45 years is a productive age with many needs, emotional maturity and educational level will make someone afraid to take the risk of breaking the law. If the law is properly enforced, committing fraud carries risk. This result is in line with Mustikasari (2013) and Faisal (2013).
In contrast, this study failed to prove that good internal control could be a factor that reinforces organizational culture and law enforcement in reducing fraud. Nor could the test results prove that internal control weakens the effect of information asymmetry on fraud. These results are not in line with Rae and Subramaniam (2008), who also tested whether internal controls could moderate the relationship between organizational fairness and employee fraud. The findings also differ from the studies of Faisal (2013) and Pramudita (2013), which examined the direct relationship of internal controls with fraud.
From these findings it appears that, in this case, there was no effect of internal control on the intention to commit fraud. Internal control was not proven to be a factor that reduces the effect of information asymmetry on fraud intention as described in Jensen and Meckling's (1976) theory. As for law enforcement, this study proved that this element is more important in reducing fraud intentions. This may be due to the perception of respondents who consider internal control as merely a set of work procedures, not as procedures that should be able to prevent and detect the occurrence of fraud. For some people, internal control procedures will not work properly if there is no sanction for violating a procedure. In other words, if there is no sanction, then the procedure tends to be violated. This is in accordance with the fraud diamond theory, in which internal control can still be deceived by people who have the capability. However, if there is a sanction, there will be a fear of outsmarting the procedure, because there is a risk of being caught and punished.
Conclusion
This study proved that the greater the information asymmetry, the higher the intention to commit fraud. In addition, enforcement of laws and regulations can reduce the intention to commit fraud. However, this study failed to prove the influence of organizational culture on fraud, and it did not prove the power of internal control to reduce fraud. The practical contribution of this study is that the enforcement of laws and regulations is the most important factor in reducing fraud intention. Additionally, it is necessary for the organization to improve its internal control system so that it functions as expected.
Limitations
There were several limitations to this research: (1) this research used questionnaires for data collection, so there is a possibility of respondent bias in answering the questions; (2) in relation to organizational culture, the question items were not able to capture the type of organizational culture in the organization where the respondents work, as the questions only describe the relationships between colleagues; (3) this study only covered one government office in Eastern Indonesia, so the results cannot be generalized to all regions in Indonesia.
Suggestion
Based on the above limitations, future research can use a mixed method of data gathering, such as conducting interviews, so that the findings and analysis will be stronger. The questionnaire instruments, especially for the organizational culture variable, can be improved by using Hofstede's theory or another organizational culture theory.
"year": 2020,
"sha1": "2fcedde89936ea25483913ad55ffd54135327a52",
"oa_license": "CCBYNC",
"oa_url": "https://trijurnal.lemlit.trisakti.ac.id/mraai/article/download/7347/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "ba2e27a4f6de4f5cfe42a1f8a39897f9c47120d1",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
The association of XRCC1 gene single nucleotide polymorphisms with response to neoadjuvant chemotherapy in locally advanced cervical carcinoma
Background Platinum-based neoadjuvant chemotherapy (NAC) is a new therapeutic strategy for locally advanced cervical carcinoma, but the variables used to predict NAC response are still infrequently reported. The aim of our study was to investigate the association between XRCC1 gene single nucleotide polymorphisms (SNPs) and NAC response. Methods Seventy patients with locally advanced cervical carcinoma who underwent NAC were enrolled. SNPs of XRCC1 (at codons 194 and 399) and XRCC1 protein expression were detected. The associations of XRCC1 gene SNPs and protein expression with NAC response were analyzed. Results Response to NAC did not differ significantly among the three genotypes Arg/Arg, Arg/Trp, and Trp/Trp of XRCC1 at codon 194 (χ² = 1.243, P = 0.07), while responses were significantly different among the genotypes Arg/Arg, Arg/Gln, and Gln/Gln of XRCC1 at codon 399 (χ² = 2.283, P = 0.020). The risk of failure of chemotherapy in patients with a Gln allele (Arg/Gln + Gln/Gln) was significantly greater than in those with Arg/Arg (OR = 3.254, 95% CI 1.708 to 14.951). The expression level of XRCC1 protein was significantly associated with response to NAC. Moreover, the genotypes with the Gln allele (Arg/Gln + Gln/Gln) at codon 399, but not at codon 194, presented a significantly higher level of XRCC1 protein expression than the Arg/Arg genotype (F = 2.699, P = 0.009). Conclusion The SNP of the XRCC1 gene at codon 399 influences the response of cervical carcinoma to platinum-based NAC, probably through changes in the expression of XRCC1 protein that affect the response to chemotherapy.
Background
Cervical carcinoma is the second most common malignancy, and continues to be a leading cause of cancer death in women. It is generally accepted that radical surgery or radiotherapy can be curative for the majority of patients with early-stage cervical carcinoma. However, the prognosis of locally advanced or bulky disease remains very poor, and the optimal management of those patients is still a matter of debate; new therapeutic strategies, such as neoadjuvant chemotherapy (NAC) and concurrent chemoradiation, have been adopted to improve the prognosis of those patients [1].
Many clinical studies have revealed that NAC is highly effective for patients with locally advanced cervical carcinoma, and the use of NAC followed by radical surgery and/or radiation for the treatment of cervical carcinoma has been investigated extensively in the past decade; it has been reported that NAC with cisplatinum-based chemotherapeutic regimens has high response rates (ranging from 53% to 94%) [1,2]. However, patients who have a poor response to chemotherapy usually fail to respond to radiotherapy as well, and have a poor prognosis. Thus, NAC may delay definitive treatment, increase cost, and result in poorer outcomes in those patients [3]. It is important to select appropriate patients before undergoing NAC; however, the variables used to predict NAC response are infrequently reported for locally advanced cervical carcinoma.
Cisplatin is considered to be the most effective drug for the treatment of cervical carcinoma, and is usually an essential element of the NAC regimen, but the mechanisms dictating the variable response to chemotherapy among individuals are still unknown. Because platinum compounds produce adducts and breaks in the DNA double helix, individual variability in DNA repair may be relevant in modulating the efficacy of such cytotoxic agents. In recent years, some studies have shown that the molecular condition of DNA repair genes can predict the response to chemotherapy in some human cancers [4]. The presence of single-nucleotide polymorphisms (SNPs) among patients suggests that genetic variability may contribute to variations in responsiveness to chemotherapy [5].
X-ray repair cross-complementing gene 1 (XRCC1) is one of the most important DNA repair genes. The XRCC1 protein physically interacts with ligase III and poly(ADP-ribose) polymerase, acting as a scaffold in the removal of adducts through both single-strand break repair and base excision repair (BER), and in the repair of other types of cisplatin-induced damage, including double-strand breaks, through a nonhomologous end-joining pathway [6]. There are three main coding polymorphisms in the XRCC1 gene: at codon 194 (Arg to Trp), 280 (Arg to His), and 399 (Arg to Gln). It has been suggested that SNPs in the XRCC1 gene may alter the ability of XRCC1 to repair damaged DNA, especially SNPs at codon 399 [7]. Some studies have shown that genetic polymorphisms of the XRCC1 gene are associated with response to platinum-based chemotherapy in non-small-cell lung cancer, colorectal cancer, and breast cancer [8,9], but few studies have investigated the association of XRCC1 SNPs with response to chemotherapy in locally advanced cervical carcinoma. Only one study has analyzed XRCC1 SNPs at codon 399, and another study has recently analyzed SNPs at codon 194; the results showed that the XRCC1 Arg399Gln polymorphism or the XRCC1 Arg194Trp polymorphism is associated with the response to platinum-based NAC in cervical cancer, but the numbers of cases were small (36 and 66 patients, respectively) [10,11]. No results for these two SNPs in the same patients have been reported.
To clarify the influence of the XRCC1 gene polymorphisms on the response to NAC, in the present study, we examined the association of the different genotypes (at codons 194 and 399), as well as protein expression with NAC response in patients with locally advanced cervical carcinoma.
Patient enrollment
From June 2003 to June 2007, a total of 109 patients with histologically confirmed locally advanced cervical carcinoma (FIGO stage IB2-IIA, at least 4 cm in diameter) underwent NAC and subsequent radical hysterectomy in the Women's Hospital, School of Medicine, Zhejiang University. Of those, 70 patients who had complete clinical data, peripheral blood samples, and cervical carcinoma tissues obtained by biopsy just before chemotherapy were enrolled in the study. Each patient signed an informed consent form before chemotherapy.
Evaluation of chemotherapy response
The chemotherapy response was evaluated two weeks after completion of the final cycle according to WHO criteria; if no obvious response occurred after two cycles, the patient would not receive another cycle of chemotherapy. Tumor size was measured by pelvic examination and colposcopy as the product of the maximal perpendicular diameters of the tumor. Complete response (CR) indicates disappearance of the disease, partial response (PR) indicates at least a 50% reduction in tumor load, stable disease (SD) indicates that the lesion showed ≤25% progression or <50% shrinkage, and progression of disease (PD) indicates >25% enlargement of the lesion, or the appearance of a new lesion. CR and PR were considered a good response; SD and PD, a poor response.
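A minimal sketch of this classification rule (illustrative only, not the authors' code), taking the measured tumor size before and after chemotherapy, could read:

def who_response(size_before: float, size_after: float) -> str:
    if size_after == 0:
        return "CR"  # complete response: disappearance of the disease
    change = (size_after - size_before) / size_before
    if change <= -0.5:
        return "PR"  # at least 50% reduction in tumor load
    if change > 0.25:
        return "PD"  # >25% enlargement (a new lesion would also count as PD)
    return "SD"      # <=25% progression or <50% shrinkage

# CR and PR count as a good response; SD and PD as a poor response.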
DNA extraction
Genomic DNA was extracted from peripheral blood lymphocytes by the routine phenol/chloroform method. First, white blood cells were separated from red blood cells by washing three times in phosphate buffer solution. Then, the DNA was extracted with phenol/chloroform and was precipitated with cold ethanol. All DNA samples were dissolved in water and stored at -20°C.
Genotyping
The two SNPs were detected using modified polymerase chain reaction (PCR) mismatch amplification (MA-PCR). The two forward primers for XRCC1 gene Arg194Trp site were 5'-GGGGGCTCTCTTCTTCAGGC-3' and 5'-GGG GGCTCTCTTCTTCAGGT-3', which differ in the last base; the reverse primer was 5'-CGCTGGCTGTGACTATGAAG-3', which together produce a 362 bp fragment. The two forward primers for the XRCC1 gene Arg399Gln site were 5'-CGTCGGCGGCTGCCCTCCTG-3' and 5'-CGTCGGCG-GCTGCCCTCCTA-3'; the reverse primer was 5'-TTACAG-GCGTGAGCCACTGC-3', which together produce a 354 bp fragment. For assessing the reproducibility of results, all samples were tested twice by different technical personnel and the results were concordant for all masked duplicate sets.
Detection of protein expression

Primary Antibodies
The rabbit anti-human polyclonal antibodies specific for XRCC1 were purchased from Santa Cruz Biotechnology™, Inc, Santa Cruz, California, USA.
Immunohistochemistry and Evaluation

XRCC1 protein expression was detected by immunohistochemistry, using the EnVision two-step method. The cervical carcinoma samples were obtained from paraffin-embedded tissue blocks from cervical biopsies taken before therapy.
The quantitative immunoreactive score (H-Score method) was used to evaluate the results, calculated as Σ p(i+1), where i represents the stain level: 0, no detectable stain in the nucleus or cytoplasm; 1, yellowish stain; 2, yellow stain; 3, brown stain; and p represents the percentage of cells at each stain level. Five random fields (400× objective) were counted, and slides were reviewed independently by two pathologists without knowledge of the clinical data. The average of the quantitative immunohistochemical scores was taken as the final result for each sample.
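As a worked example (the fractions below are made up for illustration), the H-Score of a sample is computed as:

def h_score(p):
    # p[i] is the fraction of cells at stain level i (0 = none, 1 = yellowish,
    # 2 = yellow, 3 = brown); the fractions must sum to 1
    assert abs(sum(p) - 1.0) < 1e-9
    return sum(p_i * (i + 1) for i, p_i in enumerate(p))

# e.g., 10% unstained, 30% yellowish, 40% yellow, 20% brown:
print(h_score([0.1, 0.3, 0.4, 0.2]))  # 2.7, on the scale from 1 to 4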
Statistical analysis
Differences in the frequencies of the XRCC1 genotypes and alleles between the different chemotherapy response groups were evaluated by the χ² test and Fisher's test. The association between XRCC1 polymorphisms and protein expression was evaluated by variance analysis. We also compared the observed genotype frequencies with those calculated from the Hardy-Weinberg equilibrium equation (p² + 2pq + q² = 1, where p is the frequency of the variant allele and q = 1 - p). We applied logistic regression to calculate odds ratios (ORs) and 95% confidence intervals (95% CI) for the association between the genotypes and the risk of chemotherapy failure (SD or PD). Variance analysis was used for measurement data. All P-values were two-tailed and values < 0.05 were considered statistically significant. Statistical Package for the Social Sciences software (Version 11.5, SPSS Inc, Chicago, IL) was used to perform all of the statistical analysis.
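The Hardy-Weinberg check, for instance, can be sketched as follows (not the authors' code; the function takes the three observed genotype counts and returns the chi-square statistic and p-value):

from scipy.stats import chisquare

def hwe_chi2(n_AA, n_Aa, n_aa):
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2.0 * n)   # estimated frequency of one allele
    q = 1 - p
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    # ddof=1 because the allele frequency is estimated from the same sample
    return chisquare([n_AA, n_Aa, n_aa], f_exp=expected, ddof=1)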
Response to NAC
Of the total of 70 patients, the NAC responses were as follows: CR in 2 patients, PR in 58 patients, and SD in 10 patients. No PD was found. Accordingly, the good response rate was 85.71% and the poor response rate was 14.29%.
XRCC1 allele and genotype frequencies
The allele frequencies of XRCC1 194Arg (C) and 194Trp (T) were 65.8% and 34.2%, respectively, in all patients; the allele frequencies of XRCC1 399Arg (G) and 399Gln (A) were 80.1% and 19.9%, respectively. The distributions of these genotype frequencies were all in agreement with those expected from the Hardy-Weinberg equilibrium model (χ² = 0.03 and χ² = 1.62, respectively).
The association between XRCC1 polymorphisms and response to NAC
Results of the analysis of the NAC response of patients with different genotypes are shown in Table 1. The NAC good response rates (CR+PR) among patients with locally advanced cervical carcinoma carrying the three different genotypes at codon 194 [Arg/Arg (CC), Arg/Trp (CT), and Trp/Trp (TT)] were 82.35%, 100%, and 66.7%, respectively. No statistically significant differences were found among the polymorphisms of XRCC1 at codon 194 (χ² = 1.243, P = 0.07).
The association between XRCC1 protein expression and NAC response
The level of XRCC1 protein expression was significantly higher in patients with poor response to NAC (SD+PD) than it was in those with good response (CR+PR) (2.99 ± 0.38 vs. 1.94 ± 0.28; t = 13.64, P = 0.008).
The association between XRCC1 polymorphisms and protein expression
The association of the variant genotypes at codons 194 and 399 with expression of the XRCC1 protein in locally advanced cervical carcinoma tissues was further evaluated, as shown in Table 2. No statistically significant difference was found between the codon 194 polymorphism and XRCC1 protein expression (F = 1.186, P = 0.103); however, there was a statistically significant association between the codon 399 polymorphism and XRCC1 protein expression (F = 15.915, P < 0.001).
In addition, the level of expression of XRCC1 protein in patients with at least one Gln allele [Arg/Gln (GA) + Gln/ Gln (AA)] was significantly higher than that in the patients with the Arg/Arg (GG) genotype (F = 2.699, P = 0.009).
Discussion
It is well known that DNA repair is very important in the maintenance of genetic stability, and in protection against the initiation of cancer. Owing to its possible effects on gene expression, polymorphisms of DNA repair genes related to metabolism may influence tumor response to chemotherapy or radiotherapy. The identification of molecular variables that predict either sensitivity or resistance to chemotherapy is of major interest in selecting the first-line treatment most likely to be effective. Because XRCC1 is one of the most important DNA repair genes, the main aim of the present study was to determine whether the XRCC1 genetic polymorphisms could predict clinical response of patients with locally advanced cervical carcinoma to platinum-based NAC.
Some studies have assessed the association between XRCC1 gene polymorphisms and chemotherapy response in various carcinomas, but the results are inconsistent. There has been increasing evidence that decreased DNA repair capacity resulting from genetic polymorphisms of various DNA repair genes is associated with improved survival of cancer patients treated with platinum-based chemotherapy, especially in non-small cell lung cancer [12]. Studies addressing the association of XRCC1 gene polymorphisms at codon 194 with chemotherapy response have focused mainly on non-small cell lung cancer. Wang and his colleagues reported on 105 patients with non-small cell lung cancer undergoing platinum-based chemotherapy, and found that the response rate was significantly higher in patients carrying at least one Trp allele than in those with the Arg/Arg genotype (43.1% vs. 20.3%) [13]. In patients with advanced-stage lung cancer, the risk of failure of chemotherapy was five-fold higher in patients with the Arg/Arg genotype at codon 194 than in those carrying at least one Trp allele [14]. On the other hand, some other studies did not find that the SNPs of XRCC1 contributed to susceptibility to cancer or to sensitivity to chemotherapy. These inconsistent results may be related to the different types of cancers studied in different ethnic populations [15,16].
Only one study has assessed the association between XRCC1 gene polymorphisms at codon 194 and NAC response in cervical cancer: recently, Kim and his colleagues reported on 66 patients with cervical cancer undergoing platinum-based NAC, and the results showed that the XRCC1 Arg194Trp genotype was associated with the response [11]. However, our current study did not find any significant association; the inconsistent results may be related to the different ethnic populations and the limitation of the sample size.
It has been suggested that the SNPs of XRCC1 at codon 399 may influence the outcome of cisplatinum-based chemotherapy in some human carcinomas, but the results are also variable. Wang and his colleagues reported that, in patients with non-small cell lung cancer who received platinum-based chemotherapy, the response rate was significantly higher in patients with the Arg/Arg genotype than in those with at least one Gln allele (41.5% vs. 21.2%). In contrast, other studies of patients with neck cancer revealed that sensitivity to chemotherapy was higher in patients with a Gln allele than in those with other genotypes [13,17]. Moreno and colleagues also found that the prognosis of colorectal cancer patients receiving chemotherapy with 5-FU was better in patients with the 399Gln/Gln genotype than in those with the Arg/Arg or Arg/Gln genotype [18]. In a recent study, however, no significant association was found between the SNPs of XRCC1 at codon 399 and the response to chemotherapy in non-small cell lung cancer [14].
Our study showed that the rate of response to chemotherapy in locally advanced cervical carcinoma was significantly higher in patients with the Arg/Arg genotype at codon 399 than in those with the Arg/Gln or Gln/Gln genotype (90.0% vs. 76.92%). The risk of failure of NAC therapy was 3.254-fold higher in patients carrying at least one Gln allele compared with those carrying no Gln allele. Our findings suggest that the SNP of the XRCC1 gene at codon 399 influences the response of cervical carcinoma to platinum-based neoadjuvant chemotherapy, and that the genotype carrying at least one Gln allele may be considered a candidate molecular marker to predict poor response to NAC in locally advanced cervical carcinoma.
The fact that the SNP of XRCC1 at codon 399 influences response to NAC in locally advanced cervical carcinoma affirms previous results reported in studies of other carcinomas, but the exact mechanism remains unknown [13,17,19]. Some studies have shown that resistance to platinum-based agents is related to the overexpression of DNA-repair proteins [20]. Dabholkar and colleagues found that the mRNA levels of some DNA repair genes were significantly increased in platinum-resistant ovarian carcinoma, indicating that the level of DNA repair gene expression correlates with the response to platinum-based chemotherapy [21].
Similarly, our results also showed that the level of XRCC1 protein expression was significantly higher in patients with a poor response than in those with a good response to NAC in locally advanced cervical carcinoma. In addition, we found that this altered expression of the XRCC1 protein was associated with XRCC1 genotype variation at codon 399: protein expression was significantly higher in patients with a Gln allele (Arg/Gln or Gln/Gln) than in those with the Arg/Arg genotype. Our findings suggest that the genotype with at least one Gln allele probably increases the expression of XRCC1 protein and, consequently, results in a poor response to platinum-based chemotherapy in patients with locally advanced cervical carcinoma. To our knowledge, this is the first investigation of XRCC1 gene SNPs, protein expression, and their association with response to chemotherapy. Further study is needed to clarify the mechanism behind this phenomenon.
We have demonstrated that SNPs of the XRCC1 gene at codon 399 influence the response of patients with locally advanced cervical carcinoma to platinum-based NAC. Patients with a genotype carrying at least one Gln allele have an increased risk of failure to respond to chemotherapy compared with those carrying no Gln allele. This reduced response to chemotherapy is probably due to elevated expression of XRCC1 protein in those patients who have at least one Gln allele.
"year": 2009,
"sha1": "2334a3d63d661d5d0feda9d6d4cf09f5c271cae3",
"oa_license": "CCBY",
"oa_url": "https://jeccr.biomedcentral.com/track/pdf/10.1186/1756-9966-28-91",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2334a3d63d661d5d0feda9d6d4cf09f5c271cae3",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
SHIFT: A Highly Realistic Financial Market Simulation Platform
This paper presents a new financial market simulator that may be used as a tool in both industry and academia for research in market microstructure. It allows multiple automated traders and/or researchers to simultaneously connect to an exchange-like environment, where they are able to asynchronously trade several financial assets at the same time. In its current iteration, this order-driven market implements the basic rules of U.S. equity markets, supporting both market and limit orders, and executing them in a first-in-first-out fashion. We overview the system architecture and we present possible use cases. We demonstrate how a set of automated agents is capable of producing a price process with characteristics similar to the statistics of real prices from financial markets. Finally, we detail a market stress scenario and we draw what we believe to be interesting conclusions about crash events.
Introduction
A recent Congressional Research Service report on high frequency trading (Miller and Shorter, 2016) estimates that it accounts for 55% of the U.S. equity market and 40% of European equity markets. Many studies have examined the advantages and disadvantages such a group of traders poses to the health of financial markets. Some are discussed in Ye and Florescu (2019). High frequency trading, however, is just a subset of the larger field of algorithmic trading.
There are many possible interpretations to what algorithmic trading actually means. In general, it refers to advanced mathematical models used in either automatic trading strategies or optimal order execution algorithms, with little to no human interaction. Kissell (2013) estimates that algorithmic trading as a whole accounted for 85% of market volume in 2012. A report from Research and Markets 1 estimates the global algorithmic trading market size to grow from 11.1 billion U.S. dollars in 2019 to 18.8 billion U.S. dollars by 2024. In fact, according to JPMorgan analysts, only 10% of 2017's stock market trading volume was performed by fundamental value traders. Among other possible effects, this high activity of algorithmic-controlled trading may cause sell-off episodes when machines act immediately after data releases, without the proper analysis a human would do.
In order to try and mitigate the effects of ill-constructed algorithms, regulations such as Regulation Automated Trading ("Reg AT") by the Commodity Futures Trading Commission ("CFTC") (CFTC, 2015) of the United States (one of the main regulators for derivatives markets in the U.S.) have been proposed. In general terms, Reg AT recognizes the urgent need to adapt financial market regulations to the current business models under which exchanges and most traders are operating today. Specifically, today's trades are based on high-speed automated market processes for all segments of typical financial transactions, from order placement and cancellation, to the operation of matching engines for connecting and clearing bids and offers, to post-trade processing and data reporting. The CFTC points out in its extensive and ambitious rule-making effort (CFTC, 2015) that most of its supervisory policies assume a world in which trades are executed "by hand" -with extensive human intermediation, and at "human speed" -whereas the technology in the market today operates at "machine speed" with latency as low as a few hundred microseconds. The broad charter of Reg AT calls for a rethinking and revamping of market regulation to bring the framework up to date with modern technology.
Reg AT specifically calls for development of capability to test all forms of automated trading or trade-support systems that interface directly with financial markets before they are introduced into a real exchange environment. This capability should operate in a controlled, off-line test-bed environment -but one that is realistic enough to allow reasonable assessment of the likely impact and risk of operating those systems in a "live" market.
We believe such a test-bed would allow exchange operators to explore the consequences of possible rule changes, new order type offerings, or anti-spoofing measures. The system would potentially have value for private sector participants who wish to test the effectiveness of algorithmic trading systems. Moreover, researchers proposing specific changes to the way markets operate, e.g. Budish et al. (2015), could benefit from such platform. Most of the current research involving policy changes are either based on a theoretical framework, with no empirical evidence that rules would properly work in practice, or they study the consequences of a rule change implementation on a particular exchange, months after the fact (Jørgensen et al., 2018).
This technical capability does not yet exist, and Reg AT is vague as to the requirements such a system would have to meet. The models developed thus far are primarily based on agent-based simulations. These existing models are generally limited, often based on agents trading a single instrument, with simulated low-frequency data and highly artificial trading rules.
The simulator described in this paper was constructed with the goal of creating a test environment as close to reality as possible. We replicated all the basic characteristics of a financial market exchange and tried to expand on what is presented in the agent-based modeling literature. The final result is a system that is very versatile and can be applied to different scenarios in education and research.
Reviewing Agent-Based Models in Finance
The Santa Fe Artificial Stock Market (Palmer et al., 1999) is one of the most cited agent-based systems applied to finance. With the development work taking place in the 90's, the system models a market with one risk-free bond and a single stock traded by agents, which follow a set of pre-defined basic rules. The system is now viewed as groundbreaking since it is the first one that models traders and measures the result of their interaction, i.e., the equity behavior.
Another, more developed market simulator is presented in Jacobs et al. (2004). The authors point to the fact that although asynchronous-time, discrete-event simulations are commonly used to model complex systems, they are rarely used to model financial markets. The system is a multi-asset trading environment with asynchronous events, meaning that the agents' states are updated at different times (not all agents are updated at every turn). Outstanding buy and sell orders remain in a book, and simulation sessions last several (virtual) days, with trading events happening throughout the day. The agents are mean-variance portfolio holders that trade at most once a day.
The next references are important as they detail a system used to study multiple aspects of financial markets. The Genoa Artificial Stock Market (GASM) (Raberto et al., 2001) is a market simulator that has served as the basis for multiple research papers published since 2000. The authors set out to build a simple market structure that would be able to reproduce some stylized facts, such as volatility clustering and heavy tails, observable in the distribution of real data returns. There is only one risky asset in the market, and agents send random limit orders at every simulation step, based on a finite amount of cash and the current realized volatility. Price is formed by the intersection of the demand and supply curves (since the system does not implement a limit order book). Raberto et al. (2003) extends the simulation with different agent strategies, and compares their performance by looking at their wealth evolution. Cincotti et al. (2003) adds multi-asset support to the simulator. Each agent then holds a portfolio, with no short positions allowed. Most of the agents act in a completely random fashion, but the paper explores the application of three different trading strategies (mean-variance, mean-reversion, and a relative chartist strategy) acting in the resulting market. Both Ponta et al. (2011) and Ponta and Cincotti (2018) explore information exchange networks among traders in variations of this multi-asset market. In Raberto and Cincotti (2005) and Ponta et al. (2012), the single-asset model from Raberto et al. (2001) is extended to use a limit order book as its pricing mechanism. To accommodate the simulation to this new pricing mechanism (with price-time priority), one single agent is chosen at random at every time step to perform an action. Jacob Leal et al. (2016) proposes a model designed to study the interaction between low frequency traders and high frequency traders (HFT) in a single-asset market. Slow (low frequency) traders submit orders every θ turns, with each agent having a different θ, based on either a fundamentalist or a chartist strategy. The orders sent by low frequency traders are placed ahead of other agents at every simulation step. High speed traders act every time they see a profit opportunity, employing directional strategies. They submit their orders after the submissions from low frequency traders are completed, the idea being that they are fast enough to exploit the information generated by slow traders. The authors conclude that their approach is able to reproduce the main stylized facts of current financial markets. We review this paper as one of the few examples of agent-based models that attempt to model the low frequency/high frequency interaction. Please note that the framework is a turn-based simulator similar to the traditional ones described above.
A thorough review of such agent-based simulation studies is presented in Alsulaiman and Khashanah (2015). In general, authors set out to solve a research problem and adapt the most suitable agent-based simulator to answer it, focusing on the few features that affect the problem studied. Once the problem has been answered and a new problem appears, the old simulator likely needs to be redone. The GASM model mentioned above is symptomatic in this respect, as every paper added a new layer of complexity to answer a new problem, in effect evolving the system toward a more realistic one.
In 2014, when we started the development of the system described in this paper, we wanted an "as close to reality as possible" replica of a real market exchange. To this end, we set out to replicate a real market. This task was extremely complicated, and in fact we rebuilt the system from scratch four times until it became the completely expandable system we have today. We believe the resulting SHIFT 2 system described in Section 2 behaves as close to a real market as possible in a research environment. In fact, we can trade any real standardized asset in the SHIFT system. We shall discuss this in Sections 3 and 4.
When comparing SHIFT with the existing agent-based models we found three features that are all present in our system and which we think are crucial to replicate how markets operate today. The artificial markets in existing literature may contain one or at most two of these features. These features in order of importance are: real pricing mechanism, distributed asynchronous, and multi-asset.
Real pricing mechanism. Most exchanges today are order driven, while the rest are quote driven. Both types of exchanges as well as exchange participants need to keep track of supply and demand as these are the main drivers of market microstructure. Alsulaiman and Khashanah (2015) cites only four studies which use the limit order book as the pricing mechanism and no quote driven markets.
Distributed asynchronous market. The majority of financial market simulators in the literature employ some type of "turn-based system". Even if the agents do not "play" at every turn, e.g., they perform an action every ∆t, most of the time there is a notion of an action taken at step t = 1...T and a central unit dictating the order of agent turns. 3 Despite some serious attempts to introduce a real batch auction exchange, operating in discrete time (Budish et al., 2015), all exchanges today operate in real time. Therefore, we believe that having a distributed asynchronous system, where clients may be all over the world dealing with real latency, as well as a market exchange operator processing the orders in the order that they arrive, is crucial to be able to simulate a high frequency trading environment. Jacobs et al. (2004) goes in this direction with its implementation of asynchronous events, but even though events are rendered randomly or are caused by other events, the central unit controlling the simulation knows what the next event will be.
In our system, agents perform actions whenever they want to, and the central unit is constantly listening for incoming messages, with no control over when they are sent and by whom. In a turn based system, high frequency traders are commonly simulated using a smaller ∆t, and thus the orders from the low frequency traders never arrive before their orders. However, in a real system low frequency orders operating on outdated information may arrive earlier at the exchange and by chance predate the HFT orders.
Multi-asset market. Most of the academic literature employing agent-based simulators is using a single risky asset and a risk free asset. This one-traded-asset model is certainly the basis of any simulation, and many interesting conclusions may be derived. However, allowing agents to trade multiple assets can potentially recreate the highly correlated markets we are experiencing today. We note Cincotti et al. (2003) as early work using a portfolio of traded assets. Furthermore, the ability to trade an ETF (i.e., a basket of stocks), as well as the ETF's components allows us to study complex events such as the May 2010 Flash Crash (Kirilenko et al., 2017;Paddrik et al., 2012).
We would like to make a special mention of the Penn-Lehman Automated Trading Project (Kearns and Ortiz, 2003), developed by the Computer Science Department at the University of Pennsylvania in partnership with Lehman Brothers. The concept was similar to our system, 4 but it was limited to single-asset trading. Further, it required a constant feed of real market data (either historical or live every 3 seconds) to operate (probably to provide liquidity to its users).
Although the system was used to organize algorithmic trading competitions, we were not able to find any evidence of agents trading against each other or being capable of moving the market through their trades.
Focus of this Paper
This paper is focused on defining a realistic test bed capability, extending the agent-based approach to encompass a much higher degree of realism in a rich market environment capable of dealing with:
• Large numbers of agents trading large numbers of assets.
• Realistic and robust trading strategies.
• Real-time, high-frequency market pricing and limit order book data.
• The ability to observe interactions between multiple agents (traders) employing potentially overlapping and competing strategies, to enable the study of realistic market events such as crowded trades and liquidity crises.
• The ability to test under realistic conditions the effects of regulatory measures, either imposed by a central regulator (e.g., the CFTC or the U.S. Securities and Exchange Commission -"SEC") or introduced by the exchange operators or researchers.
Our research builds upon an extensive modeling effort conducted over the past six years at the Hanlon Financial Systems Center of the Stevens Institute of Technology (Hoboken, NJ -USA), known as the Stevens High Frequency Trading Market Simulation System (SHIFT) project. We aim to demonstrate that this tool is extremely versatile and provides a financial laboratory environment akin to laboratory environments from other research areas -where experiments can be run in isolation, but in realistic conditions. To accomplish this, the rest of this paper is organized as follows. Section 2 provides a description of the system, presenting its modules and some of the design decisions behind them. Section 3 discusses market event replay capabilities, along with their applications on research and teaching. Section 4 presents the use of the platform when creating a completely artificial market through the use of autonomous agents. The agents can reproduce actual market stylized facts, and we study the effects caused by changing their parameters. Section 5 concludes our paper, and presents future possible directions for our work.
System Description
SHIFT is a complete and standalone system designed to emulate the essential parts of an exchange: a distributed, real-time, and order-driven market. Its initial development focus has been on equity markets, however, the platform can be extended to commodity, future, and option markets, and potentially to any other asset class. The system may be thought of as more of a replica of a real time market exchange rather than a simulation environment.
The platform operates in two different ways. In one mode, SHIFT works with live, real-time, order-level market data sent by market participants which influence everything in the market. This is typically the format used in research studies. This implementation provides researchers the ability to assess unexpected interactions between different strategies. In a second mode, the system replays recorded datasets of quote data. This implementation is typical for commercial market simulators (e.g., paper trading accounts from Interactive Brokers 5 , Quantopian 6 , etc.) and we normally use this type of implementation for trading competitions and teaching. In either mode, SHIFT is capable of generating trade and quote records that may be used to evaluate the effectiveness of complex trading strategies under conditions similar to a real high frequency market.
A realistic platform such as SHIFT needs to process a massive amount of real-time data while interacting with an undefined number of clients; thus, one of its major challenges is performance. In a high frequency market in particular, speed is a critical factor. To accomplish this, apart from developing in a high performance programming language (C++), we separate the server side of the system into different modules, each with a specialized task. This allows us to avoid overloading any of the modules, as well as to divide the work of each layer of the system into multiple copies of the same module, if necessary.
A simplified schematic of the primary modules in our system is shown in Figure 1. The arrows in Figure 1 point from the server to the client, however information flows both ways in all connected levels of the platform. All communication is done using the Financial Information eXchange ("FIX") protocol 7 , the industry standard. In the following subsections, we offer a brief description of the system's architecture, with details on both server side, called "Exchange", and client side. A note on the scalability of the platform is also given.
Exchange
Our financial market exchange simulator contains three distinct modules: Datafeed Engine; Matching Engine; and Brokerage Center.
Datafeed Engine. This module works as a streamer of data to the Matching Engine when the system is running in replay mode. It requests and stores historical quoting and trading data from a market data provider, by implementing the necessary API (application programming interface). Replay mode may be used to test single-user trading strategies with historical data, or to provide liquidity in a multi-user environment (e.g., artificial agents or students in a classroom).
Matching Engine. As is the case for all its real market counterparts, this module is the brain of our exchange. It is responsible for managing the limit order book (LOB) of all of the platform's traded assets, implementing the dynamics of an order-driven market. The Matching Engine manages a local LOB, containing only orders from the clients that are connected to the platform, for each ticker. It also maintains a global LOB, which functions as the National Best Bid and Offer (NBBO) system specific to U.S. markets. The Matching Engine automatically routes orders from the local LOB to the global LOB whenever a better price may be obtained in an outside exchange.
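To make the order-driven dynamics concrete, the following is a minimal, illustrative sketch (in Python, for brevity; it is not SHIFT's actual C++ implementation) of first-in-first-out, price-time priority matching of an incoming buy order against the ask side of a local LOB:

from collections import deque
from dataclasses import dataclass

@dataclass
class Order:
    trader: str
    price: float
    size: int

def match_buy(incoming: Order, asks: dict) -> list:
    """Match an incoming buy against resting asks; returns the executions."""
    fills = []
    for price in sorted(asks):             # best (lowest) ask level first
        if price > incoming.price or incoming.size == 0:
            break                          # no more marketable liquidity
        queue = asks[price]                # FIFO deque of orders at this level
        while queue and incoming.size > 0:
            resting = queue[0]             # oldest order at the level
            traded = min(incoming.size, resting.size)
            fills.append((incoming.trader, resting.trader, price, traded))
            incoming.size -= traded
            resting.size -= traded
            if resting.size == 0:
                queue.popleft()
        if not queue:
            del asks[price]                # level fully consumed
    return fills  # any remaining incoming.size would rest on the bid side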
Brokerage Center. This is the hub that centralizes all communication between clients and Matching Engine. It was initially conceived as a way to remove unnecessary load from the Matching Engine in functions such as providing all current limit order book data to newly connected clients, as well as broadcasting changes in the limit order books to clients. However, in its current implementation, its use has been expanded. It charges transaction fees (both long and short sells), and it keeps portfolio information for all connected clients, along with their current buying power 8 . This information is used for account persistence and portfolio valuation, as well as assessing trading limits and margin calls. The Brokerage Center also stores permanent records of all trading data generated by the users of the exchange.
Clients
We have developed two main ways for users to access our platform: a web interface and APIs in C++ and Python. The web interface was developed with students in mind, so that they could use it in market microstructure classes to learn the rules of operating a trading account in a real market. A sample of the interface is presented in Figure 2. In addition to the overview page ( Figure 2a) and the limit order book page (Figure 2b), users can also see their portfolio information.
(a) Overview page, with last and best prices data for each of the trading symbols. The green and red coloring indicate up and down movements, respectively, since the last update.
(b) Each trading symbol has its own LOB page, containing a candlestick data plot of the simulated price, as well as the global and local LOBs, as explained in Section 2.1. Figure 2: SHIFT web interface. 8 The amount of money a user of the system has available to spend.
For more advanced uses, we have created APIs in both C++ and Python. These can be used to create complete algorithmic trading strategies, and we use them in teaching and in research. For research, each client can be viewed as an agent in an agent-based simulation. Since agents are actual trading accounts operated by individual pieces of software or real people, SHIFT provides a more complex and close-to-reality simulation than existing literature. Because of its server-client architecture, multiple simultaneous agent connections are naturally asynchronous, and even the effects of network latency can be explored.
Examples of use of the platform as an agent-based simulation tool are presented in Section 4. Some basic examples of use of our Python API can be found in Appendix A.
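For flavor, a hedged sketch of an agent loop in the style of such a client API is shown below; the module name shift and every identifier in it (Trader, connect, sub_order_book, get_best_price, submit_order) are assumptions made for illustration, not the documented interface, which is given in Appendix A:

import time
import shift  # assumed package name, for illustration only

trader = shift.Trader("agent01")               # hypothetical client object
trader.connect("initiator.cfg", "password")    # assumed FIX session setup
trader.sub_order_book("AAPL")                  # assumed LOB subscription

for _ in range(60):
    best = trader.get_best_price("AAPL")       # assumed accessor
    mid = (best.get_bid_price() + best.get_ask_price()) / 2.0
    # place a passive buy one cent below the midpoint (assumed order type)
    trader.submit_order(shift.Order(shift.Order.LIMIT_BUY, "AAPL", 1, mid - 0.01))
    time.sleep(1)                              # agents act in real, continuous time

trader.disconnect()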
Scalability
The platform was developed so that it may be scaled to any number and types of assets as well as any number of clients. The modular architecture allows us to add more instances of each module as needed. For example, a common issue in high frequency studies is when a large number of simultaneous client connections causes the system to slow down due to increased network traffic. A solution is presented in Figure 3a where we add more instances of the Brokerage Center.
In the case when the Matching Engine starts receiving more orders than it can process in real time, or if we simply want to add different financial assets, we may add more Matching Engine modules (Figure 3b).
Replaying Market Events
Outstanding orders of users connected to the platform are placed in what we call the local limit order book. These orders follow the usual rules of order-driven markets, with price-time priority of orders. When in replay mode, the system also makes use of market data obtained from a particular provider. We currently collect microsecond last and best prices data from different exchanges, along with their volume, and we use this information to create what we call the global limit order book of each asset -representing the National Best Bid and Offer (NBBO) system.
The Datafeed Engine streams data to the Matching Engine, which keeps track of the best prices as they were at a given moment in time. These global quotes, together with the orders coming from the users in the system, create market liquidity. Liquidity is therefore not infinite in the system. There are two major consequences of this design. First, users are in fact competing for liquidity, so two equal orders submitted at exactly the same time may have completely different outcomes, depending on which order arrives first at the Matching Engine. Second, even though users cannot cause long-term impact on market prices in replay mode, traded prices may deviate from the real prices for a little while.
In general, researchers and students who backtest trading strategies use downloaded historical price and quote data. They therefore rely on unrealistic assumptions such as infinite liquidity and minimal reaction time. In our system we can account for order timing, bid-ask spread, and available volume, thus creating much more realistic results. Moreover, the capability of replaying any given day, or of creating completely artificial market scenarios (see Section 4), allows researchers to better design take-profit and stop-loss rules, as well as stress test their trading strategies.
Using the System as a Tool to Understand Market Dynamics
In an effort to engage students with hands-on experience in modeling and algorithmic trading, we introduced SHIFT into lectures at Stevens Institute of Technology. From computing basic statistics from a live stream of limit order book and last price data, to implementing and verifying their own trading strategies in market microstructure and algorithmic trading classes, the feedback from students so far has been very positive. We should mention that SHIFT is invaluable in demonstrating a specific point to students: every strategy we implemented that is profitable when using daily data ends up losing money in a realistic system using intraday, high-frequency data.
A pilot algorithmic trading competition ran during the Spring semester of 2019, and others are planned for the future. In this first edition, there were 38 participating students, divided into 11 teams of 3 to 4 students each, trading any of the 30 Dow Jones Industrial Average stocks. Each week, teams were given access to their own instance of the simulator for 6 days of training, which culminated in all 11 algorithmic trading strategies running against each other and competing for the best opportunities on day 7. Every competition day had a different theme, from low volatility days to flash crash days; no outside (human) intervention was allowed, and portfolios would reset, giving teams a fair chance to recover from a bad week. In the end, the team with the highest total profit after 6 weeks of competition won.
The competition was beneficial for us, since it allowed us to discover and fix many issues as well as improve the system's usability. It was also beneficial for the students, who learned about trading and the difficulties of applying classroom concepts to the real world. Figure 4 shows the daily profit of the top 7 teams along with their average (red dotted line) during the competition's 6 weeks. The lines generally display a positive trend, showing that students were learning from their mistakes and enhancing their algorithms.
Artificial Market Capabilities
When not replaying market events, the global limit order book functionality is turned off, and all market formation happens in the local limit order book, with orders coming from the users of the system. These can be researchers, students, market practitioners, or completely artificial agents.
As an initial proof of concept, we set out to create the simplest possible market, where zero intelligence agents with no notion of profit or loss trade a single asset. We describe these agents in the next subsection, followed by the results of experiments we performed with such agents in Sections 4.2 and 4.3.
Zero Intelligence Agents
The trading strategy we chose for our zero intelligence agents is inspired by previous work done in the Genoa Artificial Stock Market, described in Raberto et al. (2001) and Ponta et al. (2012). Modifications were necessary due to the real-time nature of our simulation.
Trading Strategy
During the trading session (i.e., a simulation execution), each agent trades according to a Poisson process with fixed rate $\lambda$ ($\lambda$ is the same for every trader). Details on generating a Poisson process can be found in (Florescu, 2014, Chapter 10). Each of the $N$ traders trades $\Phi_i$ times, at times $\tau_{i,j}$, $j = 1, \dots, \Phi_i$.
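The following is a minimal sketch (not the SHIFT implementation) of how one agent's trading times could be drawn; the rate of 390 actions over a 390-minute session, i.e., one action per minute on average, matches the baseline experiment described later.

```python
import numpy as np

rng = np.random.default_rng(42)

def trading_times(lam=390, session_minutes=390):
    """Times (in minutes) at which one agent acts: a homogeneous Poisson
    process simulated via exponential inter-arrival times."""
    mean_gap = session_minutes / lam   # one minute in the baseline case
    times = []
    t = rng.exponential(mean_gap)
    while t < session_minutes:
        times.append(t)
        t += rng.exponential(mean_gap)
    return np.array(times)

tau = trading_times()
print(len(tau))  # the realized number of trades Phi_i, ~ Poisson(390)
```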
At time $\tau_{i,j}$, the $i$-th trader will execute two simple actions:
1. If the trader has an outstanding order, i.e., if their last limit order (or a portion of it) is still in the limit order book, send a corresponding cancel for the remaining (buy/sell) order.
2. Decide whether their next limit order is going to be a buy or a sell (probability 0.5 each):
• If the order is a limit buy, the limit price of the order will be $P^b_{\tau_{i,j}} \sim N(\mu^b_{\tau_{i,j}}, \sigma^2)$, where $\mu^b_{\tau_{i,j}}$ is the smaller value between the current best bid and the last available price. This simulates the fact that buyers want to pay the lowest possible value to acquire assets.
• If the order is a limit sell, the limit price of the order will be $P^a_{\tau_{i,j}} \sim N(\mu^a_{\tau_{i,j}}, \sigma^2)$, where $\mu^a_{\tau_{i,j}}$ is the larger value between the current best offer and the last available price. Sellers want to receive the highest possible value for their assets.
An initial price value $P_0$ is given as a parameter to our autonomous agents, representing the close price of the previous day. This value is used as the initial $\mu^b_{\tau_{i,0}}$ and $\mu^a_{\tau_{i,0}}$ if no other information is available at the moment, i.e., if no other agent has submitted limit orders yet. Furthermore, the volume of each submitted limit order is determined as a proportion $r_{\tau_{i,j}}$, which we call the current confidence level, of the buying power (for limit bids) or number of shares (for limit offers) that the $i$-th trader has available at the moment of order submission.
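A minimal sketch of the order-decision step just described, with illustrative inputs (`best_bid`, `best_ask`, `last_price`, and the agent's `buying_power` and `shares` are assumed names, not SHIFT's API) and a constant volatility parameter sigma; the confidence level range matches the baseline experiment below:

```python
import numpy as np

rng = np.random.default_rng(0)

def next_limit_order(best_bid, best_ask, last_price,
                     buying_power, shares, sigma=0.2):
    r = rng.uniform(0.2, 0.6)                 # current confidence level
    if rng.random() < 0.5:                    # limit buy
        mu = min(best_bid, last_price)        # buyers aim low
        price = rng.normal(mu, sigma)
        size = int(r * buying_power / price)  # spend r of buying power
        return ("BUY", round(price, 2), size)
    else:                                     # limit sell
        mu = max(best_ask, last_price)        # sellers aim high
        price = rng.normal(mu, sigma)
        size = int(r * shares)                # offer r of inventory
        return ("SELL", round(price, 2), size)

print(next_limit_order(99.98, 100.02, 100.00, 1_000_000, 10_000))
```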
Wealth Distribution
The GASM papers that inspired our agents' implementation use an equal distribution of buying power and number of shares among their autonomous agents. We discovered that such a homogeneous distribution has an important effect on the resulting price formation process, as will be shown in Section 4.2.3. Therefore, we have opted for a randomized wealth distribution in our experiments.
The initial division of shares $S = (S_1, \dots, S_N)$, with $N$ the total number of traders in the simulation, follows a Dirichlet distribution. The probability density function of a Dirichlet distribution has the following form:
$$f(x_1, \dots, x_N; \alpha_1, \dots, \alpha_N) = \frac{\Gamma\left(\sum_{i=1}^N \alpha_i\right)}{\prod_{i=1}^N \Gamma(\alpha_i)} \prod_{i=1}^N x_i^{\alpha_i - 1},$$
where $x_i \geq 0$, $\sum_{i=1}^N x_i = 1$, and $\alpha_1, \dots, \alpha_N$ are the concentration parameters. A symmetric Dirichlet distribution is a particular case of a Dirichlet distribution with $\alpha_1 = \dots = \alpha_N = \alpha$. The probability density function is then simplified to
$$f(x_1, \dots, x_N; \alpha) = \frac{\Gamma(N\alpha)}{\Gamma(\alpha)^N} \prod_{i=1}^N x_i^{\alpha - 1},$$
with $\alpha$ the concentration parameter. The higher the value of $\alpha$, the more homogeneous the distribution of wealth among traders.
When a trader $i$ is assigned $S_i$ initial shares, we also give them an initial buying power (cash) equal in value to their selling power (shares). That is, $BP_i = P_0 S_i$, where $P_0$ is the initial share value.
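A minimal sketch of this endowment scheme using NumPy's Dirichlet sampler, with the parameter values of Experiment 1 below; for $\alpha = 200$ the standard deviation of the share endowments comes out near the reported value of about 670:

```python
import numpy as np

rng = np.random.default_rng(1)

N, total_shares, P0, alpha = 200, 2_000_000, 100.0, 200.0
weights = rng.dirichlet(np.full(N, alpha))   # a point on the simplex
S = np.round(weights * total_shares).astype(int)
BP = P0 * S                                  # cash equal in value to shares

print(S.mean(), S.std())   # mean ~10,000; std shrinks as alpha grows
```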
Experiment 1: Establishing a Functioning Market Exchange
In this section we introduce parameters and agents that create a well-functioning exchange. Our goal is to demonstrate that the resulting price process has characteristics similar to a real price process during a normal trading day devoid of any financial events. We also aim to study how the agent parameters affect the behavior of the formed price process.
There are 2,000,000 available shares of CS1, a fictitious stock ticker, with an initial price $P_0 = \$100.00$. The initial market capitalization of CS1 is thus \$200,000,000. Since traders receive a sum of cash equal in value to their endowed shares, the total initial value of assets in the market (the sum of all buying and selling powers) is \$400,000,000. The rest of the parameters in this experiment are as follows:
• $N = 200$ traders.
• Agents attempt to trade an average of λ = 390 times during the trading session, i.e. on average, they submit one limit order every minute.
• Confidence level $r_{\tau_{i,j}} \sim U(0.2, 0.6)$, i.e., the limit order size is uniformly distributed between 20% and 60% of the total buying or selling power of the agent, depending on whether it is a limit buy or a limit sell order, respectively.
• Wealth concentration parameter $\alpha = N = 200$. With this $\alpha$ value, traders have on average 10,000 initial shares each, with a standard deviation of about 670 initial shares among all traders.
Example simulated price paths are shown in Figure 5, with the respective return plots in Figure 6. Visually, these plots resemble real stock price/return behavior during a given day. Even though there is no flux of outside information into the system, prices display characteristics of real price series, which we discuss in the next sections.
Comparing Statistics of Simulated and Real Traded Price Data
Figures 7 and 8 present statistics for the returns displayed in Figure 6. Although we only discuss the results of two simulated series here, the statistics of all simulated experiments are very much in line with the known stylized facts of return time series (Cont, 2001). We present two results to display the consistency of the resulting statistics.
Negative autocorrelation. Because of the "bounce effect" caused by the bid-ask spread, where market orders may match against either side of the book, returns are expected to exhibit negative autocorrelation when sampled at small time scales, as shown in Figures 7a and 8a.
Leptokurtic behavior. The distribution of returns has "heavy tails" (Bouchaud and Potters, 2003; Voit, 2005). That is, return values far from the average occur more frequently than they would if they followed a Gaussian distribution. This is evidenced in the Q-Q plots (Figures 7b and 8b), where the excess kurtosis is also reported. Here we use data sampled every second, and the average excess kurtosis over all experiments we ran was around 2.
Volatility clustering. When looking at the realized volatility, more precisely at the autocorrelation function of squared returns, it is possible to see in Figures 7c and 8c that periods of high volatility lead to other periods of high volatility. This phenomenon, known as volatility clustering, is another known feature of financial market data (Cont, 2007). In fact, the slow decay found in these autocorrelation plots, showing signs of long memory in the volatility, is also documented in the literature (Lobato and Velasco, 2000).
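A minimal sketch of this check, computing the autocorrelation function of squared returns with statsmodels; the `returns` array below is only a placeholder standing in for the one-second simulated returns:

```python
import numpy as np
from statsmodels.tsa.stattools import acf

# Placeholder: substitute the simulated one-second return series here.
returns = np.random.default_rng(2).standard_t(df=4, size=23_400) * 1e-4

acf_sq = acf(returns ** 2, nlags=100, fft=True)
print(acf_sq[:10])   # a slow decay here signals volatility clustering
```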
Furthermore, the resulting time series data exhibit heteroskedastic effects. When performing the autoregressive conditional heteroskedasticity (ARCH) test (Engle, 1982), we obtain extremely low p-values and reject the null hypothesis of no ARCH effects for all experiments. If we fit an actual ARCH model, we need around 9 lags to best fit returns sampled every second.
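A minimal sketch of the two heteroskedasticity checks, using Engle's ARCH-LM test from statsmodels and an ARCH(9) fit from the separate `arch` package; `returns` is again a placeholder for the simulated series:

```python
import numpy as np
from statsmodels.stats.diagnostic import het_arch
from arch import arch_model

returns = np.random.default_rng(3).normal(0, 1e-4, 23_400)  # placeholder

lm_stat, lm_pvalue, f_stat, f_pvalue = het_arch(returns, nlags=9)
print(f"ARCH-LM p-value: {lm_pvalue:.2e}")  # near 0 rejects 'no ARCH effects'

res = arch_model(returns * 1e4, vol="ARCH", p=9).fit(disp="off")
print(res.params)                           # the nine fitted lag coefficients
```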
We note that, unlike the GASM model, our zero intelligence agents do not look at the realized volatility and adapt their strategy depending on its current value. Indeed, the volatility parameter they use when choosing the price of submitted limit orders remains constant during the whole simulation. Our system does not need the agents to adapt in order to create a price process with all the characteristics mentioned above. We make this observation since it is argued in the literature (see, e.g., Lux and Marchesi (1998)) that the arrival of news and the reaction of agents to the news and the market play a big role in creating such characteristics. In our system, even though there is no external news and the agents are very basic, we still observe these market characteristics. We thus argue that implementing and respecting the actual trading rules of current financial markets is instrumental in creating a proper market simulation.
Limit Order Book Dynamics for Simulated Versus Real Data
The limit order book data gathered from our simulations displays characteristics found in real market data. Figure 9a shows the average shape of the limit order book, i.e., the average volume at each tick (in our case, \$0.01) distance from the mid price. Here, we present bids and offers together, since their average volume behavior is the same. In real data this shape is characterized (Bouchaud et al., 2002) by a peak a few ticks away from the mid price, since volumes closer to the mid price tend to be executed more frequently, followed by a power-law decay of the average volume of more "patient" traders. We observe similar characteristics in Figure 9a. In Figure 9b, we plot the dynamic volume imbalance in the limit order book. Specifically, we plot the difference between the volumes on each side of the limit order book during the trading day. There are imbalance peaks on both sides of the spectrum throughout the trading day, when there is more pressure from one of the market sides. However, as expected near equilibrium, the general trend is mean-reverting.
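A minimal sketch of how the average book shape in Figure 9a can be computed from book snapshots; `snapshots` is an assumed list of (mid_price, {price: volume}) pairs, not SHIFT's actual data format:

```python
from collections import defaultdict

TICK = 0.01

def average_book_shape(snapshots, max_ticks=50):
    """Average resting volume at each tick distance from the mid price."""
    sums = defaultdict(float)
    for mid, levels in snapshots:
        for price, volume in levels.items():
            d = int(round(abs(price - mid) / TICK))  # distance in ticks
            if 1 <= d <= max_ticks:
                sums[d] += volume
    n = len(snapshots)
    return {d: sums[d] / n for d in sorted(sums)}

shape = average_book_shape([(100.00, {99.97: 300, 99.99: 800, 100.02: 650})])
print(shape)   # expect a peak a few ticks out, then a power-law-like decay
```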
We then turn to spread characteristics in Figure 10. The spread is the difference between the current best bid and offer prices in the limit order book, and it represents the cost someone incurs when executing a market order. The spread is one of the best proxies for liquidity in high frequency trading (Mago et al., 2017). As previously described in the literature (Plerou et al., 2005), the time series of spread values should be characterized by persistence (Figure 10b). Furthermore, the asymptotic shape of the spread distribution should be best described by a power law (Figure 10c).
Connection Between Agent Parameters and Resulting Price Process
Agent-based modeling papers generally provide a set of parameters that is tested to create market behavior similar to real markets. Here, since the system is so close to reality, we can study the impact of the parameters on the resulting price formation. Intuitively, in a homogeneous environment there is an entropy/central-limit principle that tells us the resulting quantity (temperature, pressure, price) has Gaussian behavior. The more non-homogeneous the environment, the greater the departure from Gaussianity. Thus, we wanted to create parameters characterizing the agents that allow us to go from homogeneous agents to non-homogeneous ones.
Impact of order size. In our base experiment scenario, traders submit buy or sell orders with sizes ranging from 20% to 60% of their current cash or share value, respectively. This proportion $r_{\tau_{i,j}}$ is random for every trade and represents the trader's confidence level at the moment the order is sent. This confidence level turns out to be very important for the distribution of the resulting returns.
Figure 11: Normal Q-Q plots for simulated returns with different confidence levels.
We ran three different experiments, varying the agent confidence levels from conservative (20% to 40%) through baseline (20% to 60%) to risky (20% to 80%), and we show the Q-Q plots of the returns in Figure 11. We can see from the plots that the larger the orders the traders can execute, the more leptokurtic the resulting return distribution. When running several repetitions of the same experiment, the average excess kurtosis increases from 1.75 for the conservative case, through 1.96 for the baseline case, to 2.12 for the risky case, all of these values being statistically different.
Impact of wealth distribution. Recall that in typical agent-based simulations all agents have the same initial wealth. This is a typical homogeneous environment. In SHIFT we wanted to have a random initial endowment, which is why we use the Dirichlet distribution. We experimented with modifying the wealth concentration parameter $\alpha$ from $N$ to 1. When $\alpha = N$, traders have on average 10,000 initial shares each, with a standard deviation of about 670 shares among all traders. When $\alpha = 1$, the agents are much more heterogeneous from the perspective of initial wealth. Their expected value is still 10,000 shares, but the standard deviation is now about 9,590 shares. In our experiments, the trader with the largest endowment had 66,500 shares at the beginning of the trading day, while the poorest trader had only 100 shares.
The homogeneous wealth results ($\alpha = N$) were presented in Figure 11. We contrast those results with the results in Figure 12. This non-homogeneous distribution of wealth is likely to be closer to reality: exchanges today have a small number of large institutional traders that dominate through their volume of trades.
Figure 12: Normal Q-Q plots for simulated returns with different confidence levels, when the wealth distribution is non-homogeneous.
The resulting excess kurtosis is always larger than in the previous (more homogeneous) experiments. We note that, although for this particular run Figure 12b shows an excess kurtosis below the value in Figure 12a, on average the relationship between the different confidence level ranges stays the same. The average values are 2.80, 2.98, and 3.30 from the conservative to the risky case.
The relationship between "heavy tails" and the impact of orders coming from large market participants has been previously studied (Gabaix et al., 2003). It is nonetheless interesting that we can easily reproduce it with simple parameter value changes in our simulation.
Effects of Different Sampling Frequencies and Trading Activity
This section presents some of the most interesting observations we made simply by running the system and varying parameters. It is well known that using different sampling frequencies produces different parameter values. For example, in one of the most cited papers in the mathematical finance literature (Zhang et al., 2005), the authors observe that realized variance has different values depending on the sampling frequency of the price data used. They attribute this discrepancy to noise in the market and propose a new estimator that is used extensively today (multi-grid realized volatility). However, in our simulations the same exact run produces completely different distribution shapes depending on the sampling frequency used.
Figure 13: Normal Q-Q plots for simulated returns with different sampling frequencies: (a) 0.5-second returns, (b) 1-second returns, (c) 2-second returns.
Figure 13 exemplifies such behavior. The returns presented in Figures 13a, 13b, and 13c all come from the same price series (Figure 5a), but the smaller the time scale, the "heavier" the tails of the distribution of returns. This effect is actually present when sampling real financial data as well (Aldrich et al., 2014). In fact, this particular effect is called aggregational Gaussianity in Cont (2001): as the sampling time scale is increased, the returns distribution gets closer to a Normal distribution. Moreover, we found that this aggregational Gaussianity is not only related to the sampling time interval, but also to the total trading activity in the market. In the base experiment scenario, with agents submitting orders on average every minute ($\lambda = 390$), we averaged 98,757,500 shares traded during a simulation day. If we increase the action frequency to once every half a minute ($\lambda = 780$), we average 197,323,570 shares traded during a simulation day. This increase corresponds to a more actively traded equity.
Figures 14 and 15 exemplify the relation between sampling frequency, trading activity, and the Gaussian distribution. First, we note the aggregational Gaussianity: specifically, we see the excess kurtosis dropping as the sampling interval increases. Second, an even more interesting phenomenon is observed by looking at the "two" equities: the baseline and the more active equity. As we double both the trading activity and the sampling frequency, we obtain kurtosis values similar to the baseline case. Specifically, compare Figure 14a with Figure 14c and Figure 14b with Figure 14d. We observe a similar phenomenon in Figure 15, where we decrease the sampling frequency further. In these results the excess kurtosis values are negligible, but the visual resemblance of the Q-Q plots is present. This is interesting because it points toward studying and comparing different financial asset time series differently depending on their characteristics, such as trading volume.
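A minimal sketch of the aggregational Gaussianity check: excess kurtosis of returns from one price path at several sampling intervals. The synthetic Gaussian path below is only a placeholder for a simulated price series:

```python
import numpy as np
import pandas as pd
from scipy.stats import kurtosis

idx = pd.date_range("2019-01-02 09:30", periods=46_800, freq="500ms")
rng = np.random.default_rng(5)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 1e-4, len(idx)))),
                   index=idx)   # placeholder path, two samples per second

def excess_kurtosis_by_interval(p, intervals=("500ms", "1s", "2s", "5s")):
    out = {}
    for iv in intervals:
        rets = p.resample(iv).last().dropna().pct_change().dropna()
        out[iv] = kurtosis(rets)   # Fisher definition: 0 for a Gaussian
    return out

# For simulated (leptokurtic) returns, values fall as the interval grows.
print(excess_kurtosis_by_interval(prices))
```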
Experiment 2: Market Stress Scenarios
Following our findings on replicating stylized facts in SHIFT, we demonstrate the system's capability to study market stress conditions. Specifically, we study the relationship between market factors and crash characteristics.
To set up the experiments, we use $N = 200$ traders, and the simulation length is set to $M = 3600$ seconds (1 hour). Around 30 minutes into the simulation, we create a crash by having new trader(s) forcefully place a large order on the market. We study the differences in the way we create the crash and the interaction with the market conditions.
Market factors:
• Trading frequency: Market traders attempt to trade on average every minute (1 min) or every half a minute (0.5 min).
• Homogeneity: The market can be homogeneous (H), with an even distribution of wealth ($\alpha = N$) and trader confidence level $r_{\tau_{i,j}} \sim U(0.2, 0.4)$, or non-homogeneous (NH), with an uneven distribution of wealth ($\alpha = 1$) and trader confidence level $r_{\tau_{i,j}} \sim U(0.2, 0.8)$. That is, we choose the extreme cases described in Section 4.2.3 to represent the homogeneous and the heterogeneous market conditions.
Crash condition factors:
• Stress size: Crash traders own 5% (level one of the factor) or 10% (level two of the factor) of the total amount of shares available in the market.
• Stress traders: This factor has three levels. The first level is a single crash trader placing a large order around 30 min into the simulation (labeled 1 in the output). For the second level, we consider 20 crash traders collectively owning the same quantity as the single trader and all placing their orders at about the same time (20 S, simultaneously). For the third level, we consider 20 crash traders placing the same total quantity with a 3-second interval between their actions (20 NS, non-simultaneously). A sketch of the resulting factor grid follows this list.
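A minimal sketch of the experimental design, crossing the two market factors with the two crash factors and replicating each cell 10 times:

```python
from itertools import product

trading_freq = ["1min", "0.5min"]        # market factor
homogeneity  = ["H", "NH"]               # market factor
stress_size  = [0.05, 0.10]              # crash factor
stress_kind  = ["1", "20 S", "20 NS"]    # crash factor

runs = [(f, h, s, k, rep)
        for f, h, s, k in product(trading_freq, homogeneity,
                                  stress_size, stress_kind)
        for rep in range(10)]
print(len(runs))   # 2 * 2 * 2 * 3 * 10 = 240 experiments
```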
We ran 10 experiments for each possible combination of factors, for a total of 240 experiments. We were initially planning more experiments for each factor combination, but the results were very stable. Sections 4.3.1 and 4.3.2 discuss the results obtained.
Market Drawdown Analysis
In the vast majority of our stress event experiments, the price of the CS1 stock falls after the sell-off event. In some cases, the price drop was considerably large, as exemplified in Figure 16a. In other cases, the price decrease was not large, and the market would either continue to drop slowly after the stress event (Figure 16b) or completely recover (Figure 16c). Based on the results obtained, we analyze which of the factors listed in Section 4.3 are influential for a stress event. One of the main issues is constructing a variable that measures a crash. We could use the drawdown or the time to drawdown to measure the impact of the crash. Here we chose to use the slope of the market drawdown, since we know the exact time when the stress event starts. The slope combines both the drawdown size and its duration. We illustrate the drawdown slope in blue in Figure 16.
Since we use categorical variables as inputs and a quantitative variable (drawdown slope) as output, an analysis of variance (ANOVA) is the most appropriate statistical analysis. We display the ANOVA table of the final model, which eliminated all non-significant interaction terms, in Table 1. These results indicate a strong influence of each of the four factors individually, as well as some significant factor interactions. As expected, the market conditions do not interact; however, the crash agent characteristics interact with market conditions. Since three-way and higher-order interactions are not significant, we next investigated how the combination of factors affects the drawdown slope. We apply a multiple pairwise procedure (Tukey's honestly significant difference, HSD, test) to the resulting significant factors, and we summarize the results in Table 2. Looking at individual factor effects, we see results that we more or less suspected. A more active (0.5 min) market exacerbates the drawdown. A non-homogeneous (NH) market creates steeper market drawdown movements. Similarly, when traders liquidate a larger market share (10%), this produces a larger slope.
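A minimal sketch of this analysis with statsmodels; `df` below is a synthetic placeholder with one row per experiment, and the two-way interactions included are the ones discussed next (frequency x size, homogeneity x size, homogeneity x traders):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(4)
df = pd.DataFrame({
    "freq": rng.choice(["1min", "0.5min"], 240),
    "homog": rng.choice(["H", "NH"], 240),
    "size": rng.choice(["5%", "10%"], 240),
    "traders": rng.choice(["1", "20S", "20NS"], 240),
})
df["slope"] = rng.normal(-1.0, 0.2, 240)   # placeholder drawdown slopes

model = smf.ols(
    "slope ~ C(freq) + C(homog) + C(size) + C(traders)"
    " + C(freq):C(size) + C(homog):C(size) + C(homog):C(traders)",
    data=df,
).fit()
print(sm.stats.anova_lm(model, typ=2))                 # Table 1 analogue
print(pairwise_tukeyhsd(df["slope"], df["traders"]))   # Table 2 analogue
```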
Looking at the stress trader characteristics produces interesting conclusions. Markets in which the stress event is caused by a single trader have a stronger tendency toward steeper market drawdown movements than markets in which the stress event is caused by 20 traders. In fact, there is no statistical difference when comparing 20 simultaneous traders (20 S) liquidating their shares with 20 non-simultaneous traders (20 NS) liquidating the same amount. It is easy to understand why a difference may exist when comparing a single trader with 20 traders liquidating the same order over a longer period: it is the difference between absorbing a sudden shock all at once or in smaller doses. The reason why the behavior of the 20 simultaneous traders is closer to that of the 20 non-simultaneous traders rather than to the single trader is not as easy. We believe what we are seeing is related to the price-time order priority of order-driven markets, and to the fact that there are 200 other traders in our simulation competing for this priority. That is, the market order of the single stress trader will be executed in its entirety, all at once. The 20 orders from the 20 simultaneous traders are programmed to be submitted all at the same time; however, random orders from the other 200 traders may arrive between these orders, thus sometimes smoothing the stress event effect. This in turn produces statistically different results. We highlight this finding since such an impact may be difficult to observe unless using an order-driven and distributed asynchronous market implementation.
Studying the interaction terms, trading frequency and stress size show a clear multiplicative behavior. Less active markets (1 min) with stress events of smaller magnitude (5%) show a smoother drawdown compared with highly active markets (0.5 min and 10% stress size). When looking at the interaction between homogeneity and stress size, we see a different picture: liquidating a 10% order impacts the market much more strongly than liquidating 5%, regardless of market conditions (H versus NH). Most interestingly, when studying the interaction between market conditions (homogeneity) and stress trader characteristics (1 versus 20), it is clear that stress events caused by a single trader in non-homogeneous market conditions produce a steeper drawdown movement.
Immediate Market Impact Analysis
Drawdown is a classical measure which may be calculated in our experiments since we know the exact start time of the crash. However, we also want to study the immediate effect of the order liquidation on the exchange price. Visually, we can see that some of our experiments show signs of an immediate impact on the stock price while others do not (Figure 17 versus Figure 16). The price drop may have different magnitudes, as presented in Figures 17a and 17b. The drop may in fact happen a few minutes after the sell-off event, as is the case in Figure 17c.
In order to distinguish between the situations depicted in the figures, we have to devise a distinguishing criterion. To do this, we use the return statistics from the identical experiments in Section 4.2, which lacked the crash traders. Specifically, we use the levels of the two market factors to identify the corresponding non-stress experiment and use its statistics.
The procedure looks at the one-second returns of the asset during a window of time starting a few moments before and ending 5 minutes after the stress event. The largest negative one-second return during this period must be greater than 3 standard deviations from the mean return of the corresponding non-stress simulation day. If there is no such return, the market did not experience an immediate impact.
If such a large return exists, we use it as the starting point for further investigation. We denote the time of the largest return by $\tau_0$. We next look for returns at least 2 standard deviations away from the mean return of a calm day, $k$ seconds prior to and $k$ seconds past $\tau_0$. If they exist, these returns may be positive or negative, since at this point we are interested in any market disturbance that might be part of the immediate impact. We continue to expand our immediate impact window in both directions, $k$ seconds at a time, until no such returns are found anymore. Finally, the resulting total return (the sum of all returns for the period) must be at least 4 standard deviations away from the mean return of a non-stress simulation day. If everything passes these checks, the immediate impact period is returned. In practice we use $k = 15$ seconds and 2 standard deviations, as these parameter values maximized the recognition of events with an immediate impact visible in the plots, while also minimizing the number of false positives.
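A minimal sketch of this detection procedure; `returns` is a one-second return series, `event_idx` the (known) stress time, and (mu, sd) are the mean and standard deviation taken from the matching non-stress day. The final 4-standard-deviation check is implemented literally as stated above:

```python
import numpy as np

def immediate_impact(returns, event_idx, mu, sd, k=15, horizon=300):
    """Return (start, end) of the immediate impact window, or None."""
    window = returns[event_idx:event_idx + horizon]
    if window.min() >= mu - 3 * sd:          # no 3-sd negative return
        return None
    t0 = event_idx + int(np.argmin(window))  # time of the largest drop
    lo = hi = t0
    while True:                              # expand k seconds at a time
        left = returns[max(lo - k, 0):lo]
        right = returns[hi + 1:hi + 1 + k]
        grew = False
        if left.size and np.any(np.abs(left - mu) > 2 * sd):
            lo = max(lo - k, 0); grew = True
        if right.size and np.any(np.abs(right - mu) > 2 * sd):
            hi = min(hi + k, len(returns) - 1); grew = True
        if not grew:
            break
    total = returns[lo:hi + 1].sum()         # total return over the window
    return (lo, hi) if abs(total - mu) >= 4 * sd else None

rng = np.random.default_rng(6)
r = rng.normal(0, 1e-4, 3600); r[1800] = -1e-3   # inject a shock at t=1800
print(immediate_impact(r, 1750, mu=0.0, sd=1e-4))
```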
This simple technique allows us to identify the experiments which had immediate impacts. Next, we analyze which factors had the most influence on the probability of having an immediate impact. The most significant factors and their interactions are presented in Table 3. Both homogeneity and stress size seem to play a large role in the probability of immediate impacts. In fact, about 68% of the simulations in which the market was homogeneous and the stress size was 10% present immediate impact events. The contribution of the stress size to this probability is expected, but the homogeneity behavior complements the findings in Table 2. In the previous results, the non-homogeneous markets created a larger drawdown slope. However, when coupled with the results from Table 3, we see that even though heterogeneous markets may suffer more drastically overall from a stress event, the impact is not as sudden and there is a smaller likelihood of an observed crash.
The conclusions related to the stress traders are the same, i.e., the single stress trader creates an immediate impact more often than the 20 traders, and there is no significant difference between the two 20-trader cases.
Looking at the actual magnitude of the immediate impact, we see that stress traders with larger holdings (10%) produce an average return drop of −0.5%, as opposed to −0.3% when the stress event size is 5%. The immediate impact lasts longer if the market is less active, with an average of 43 s against 26 s in more active markets.
The stress traders' characteristics affect the duration of the immediate impact event. When a single trader causes the stress event, the average duration is 29 s, while 20 simultaneous traders produce an average duration of 34 s; these two quantities are not significantly different. However, when 20 non-simultaneous traders cause the stress event, the average duration of 45 s is significantly different from the other two cases. This is as expected, as the sell-off in this scenario is executed in waves instead of all at once.
Conclusion
In 2015, market participants reviewed the Regulation AT (CFTC, 2015) proposal. The proposed regulation required that any algorithm be tested in "laboratory conditions" before being put into practice. The tool for doing so was never explicitly mentioned, and the absence of such a tool meant that traders would test algorithms in a replica of a real exchange without any market impact in effect, backtesting paper trades. Further, implementing the algorithms in a system accessible to regulators meant that proprietary algorithms could potentially be analyzed by regulators.
In fact, CFTC Chairman J. Christopher Giancarlo made the following remarks at the FIA Expo in Chicago, Illinois, on October 17, 2018: As you know, Regulation AT was an initiative of my predecessor, Chairman Massad. My position was and continues to be that, while there were some good things in the proposal, there were other things that were unacceptable and perhaps unconstitutional, including that proprietary source code used in trading algorithms be accessible without a subpoena at any time to the CFTC and the Justice Department.
At heart, Reg AT is a registration scheme that would put hundreds if not thousands of automated traders under CFTC oversight, a role for which our agency has inadequate resources and capabilities. While I share genuine concerns about the inevitability of some future market disruption exacerbated by automated trading algorithms, there is nothing in Reg AT's proposed imposition of burdensome fees and registration requirements that will prevent such an event. The blunt act of registering automated traders does not begin to address the complex public policy considerations that arise from the digital revolution in modern markets. Worse is that it would give a false sense of security that the CFTC had regulatorily foreclosed such market disruption, which is impossible. That is why I voted against Reg AT. I do not intend to advance it in its current iteration. (Giancarlo, 2018)
This paper details SHIFT, a financial market replica with applications to learning and research. Our goal is to replicate real market conditions rather than create software specialized in agent-based modeling. SHIFT offers a unique environment combining a real pricing mechanism, a distributed asynchronous market, and multi-asset support. We believe that SHIFT can create an environment where algorithms can be tested and stressed in laboratory conditions. The environment may be set up so that proprietary source code can be tested adequately in the absence of, and without participation by, other market participants.
This paper describes the system architecture and discusses several use cases. We show how a simple setup can reproduce known stylized facts of financial markets, such as leptokurtic return distributions and volatility clustering. We investigate the resulting order book dynamics, and show that the system reproduces the known average shape of the order book and the statistics of the spread. We hope we have convinced the reader that the resulting price process has very similar characteristics to real price behavior.
We think one of the most important contributions of this paper is the study of how price behavior is affected by the trading agents' characteristics. Finally, we analyzed a stress experiment in a statistical manner and drew some interesting conclusions. We found that a single trader with a large order is more likely to produce a market crash than 20 traders liquidating the same order. However, the impact on the market of the 20 traders lasts longer and has a larger impact on the price in the long term. A crash event in a non-homogeneous market (a market in turmoil) has a larger long-term impact, but it is less conducive to an immediate price impact than a crash in homogeneous market conditions.
Moreover, SHIFT has been successfully used in market microstructure classes at Stevens Institute of Technology for over a year and plans for future editions of the algorithmic trading competition are under way. Students have the opportunity to try out what they learn in class either by using the web interface or one of our APIs.
We envision a multitude of experiments taking advantage of our financial laboratory environment. Future work includes analyzing market participants' wealth evolution and many potential expansions. | 2020-02-27T02:00:35.568Z | 2020-02-25T00:00:00.000 | {
"year": 2020,
"sha1": "57764e1807309eb586ca9a6f99598a4f999422dd",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2002.11158",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "57764e1807309eb586ca9a6f99598a4f999422dd",
"s2fieldsofstudy": [
"Computer Science",
"Business"
],
"extfieldsofstudy": [
"Business",
"Economics",
"Computer Science"
]
} |
219049471 | pes2o/s2orc | v3-fos-license | Crosstalk Aware Register Reallocation Method for Green Compilation
As nanoscale processing becomes the mainstream in IC manufacturing, the crosstalk problem arises as a serious challenge, not only for energy efficiency and performance but also for security requirements. In this paper, we propose a register reallocation algorithm called Nearby Access based Register Reallocation (NARR) to reduce the crosstalk on instruction buses. The method includes the construction of a software Nearby Access Aware Interference Graph (NAIG), using data-flow analysis at the assembly level, and the reallocation of registers in the software. Experimental results show that the crosstalk can be dramatically reduced, especially the 4C crosstalk, with a reduction of 80.84% on average, and up to 99.99% at most.
Introduction
With the progress of technological development, the size of embedded devices becomes smaller and smaller, so bus lines are laid out more and more densely, making crosstalk an increasingly serious challenge for circuit design. The increase in crosstalk not only affects the scalability and performance of the embedded system, but also consumes more power, making the device more vulnerable to overheating and to malicious attackers. The additional power consumption caused by crosstalk can in particular be exploited by attackers, through Differential Power Analysis, to extract secret and hidden information from the system, such as revealing hidden hardware faults on integrated circuits, accessing cryptographic keys, and recovering the actual executing code of the microprocessors [Mangard, Oswald and Standaert (2011); Zhang, Fang, Li et al. (2016); Park, Xu, Jin et al. (2018); Liu, Yarom, Ge et al. (2015)]. Furthermore, the extra power needed increases noise and decreases the lifetime of the embedded device, therefore compromising the current "green compilation" pursuit.
Crosstalk is a traditional problem in circuit design, and many efforts have addressed it. Circuit designers have proposed various methods to reduce the crosstalk between coupled buses, such as codecs [Duan, Calle and Khatri (2009); Shirmohammadi, Mozafari and Miremadi (2017)], buffer insertion [Halak and Yakovlev (2010)], shielding [Mutyam (2009)], gate sizing [Gupta and Ranganathan (2011)], and so on. Lucas et al. [Lucas and Moraes (2009)] evaluated different crosstalk fault tolerant approaches for Networks-on-Chip (NoCs) links, such that the network can maintain its original performance even in the presence of errors. Their results demonstrated that the use of CRC coding at each link should be preferred if minimal area and power overhead are the main goals. Cui et al. [Cui, Ni, Miao et al. (2017)] proposed an enhanced code based on the Fibonacci number system (FNS) to suppress the crosstalk noise below the 6C level, in which both the redundancy of numbers and the non-uniqueness of Fibonacci-based binary codewords were utilized to search for the proper codeword. Experimental results showed that the proposed technique decreased the latency of TSVs by about 22% compared with the worst crosstalk cases. Shirmohammadi et al. [Shirmohammadi and Sabzi (2018)] proposed the DR coding mechanism, which uses a novel numerical system for generating codewords that minimizes codec overhead and is applicable to any arbitrary width of wires. Experimental results show that the worst crosstalk-induced transition patterns are completely avoided in wires using the DR coding mechanism. Jiao et al. [Jiao, Wang and He (2018)] proposed a crosstalk-noise-aware bus coding scheme with ground-gated repeaters. This approach minimized the routing overhead as well as the power consumption of data bus systems. The routing overhead was reduced by 12.31% with the new bus coding scheme compared to a conventional data bus with shielding wires. Furthermore, the leakage power and worst-case active power consumption were reduced by 12.5% and 18.26%, respectively, with the new crosstalk-noise-aware data bus system compared to the previously published bus coding system in an industrial 40 nm CMOS technology. Ohama et al. [Ohama, Yotsuyanagi, Hashizume et al. (2017)] proposed a selection method of adjacent lines for assigning signal transitions in test pattern generation. The selection method can reduce the number of adjacent lines used in test pattern generation without degrading the quality of test patterns that can excite the fault effect. Bamberg et al. [Bamberg, Najafi and Garciaortiz (2019)] presented a 3D CAC method based on an intelligent fixed mapping of the bits of existing 2D CACs onto rectangular or hexagonal TSV arrangements. Their method required less hardware and reduced the maximum crosstalk of modern TSV and metal wire buses by 37.8% and 47.6%, respectively, while leaving their power consumption almost unaffected. However, these methods either need extra hardware support or must increase the chip area, making them unfavorable for the development of advanced embedded devices requiring portability and minimized cost. Building on the existing Selective Shielding method, Weng et al. [Weng, Lin, and Shann (2010)] proposed a combined hardware/software register relabeling scheme to reduce the crosstalk of the instruction bus. Kuo et al. [Kuo, Chiang and Hwang (2007)] also adopted a combined approach with instruction rescheduling, register renaming, NOP instruction padding, and instruction opcode assignment.
They proposed software methods to eliminate the 4C crosstalk. These methods, however, either rely on existing hardware support or have limitations, illustrated in the next section, in reducing the crosstalk.
Register allocation is an important component of compilers, and many techniques have been proposed, such as graph-based register allocation [Florea and Geliert (2016); Odaira, Nakaike, Inagaki et al. (2010)], linear scan register allocation [Poletto and Sarkar (1999); Wimmer and Franz (2010)], tree-based register allocation, and others [Lozano, Carlsson, Blindell et al. (2019); Su, Wu and Xue (2017); Chen, Lueh and Ashar (2018)]. Tabani et al. [Tabani, Arnau, Tubella et al. (2018)] proposed a new register renaming technique that leverages physical register sharing by introducing minor changes in the register map table and the issue queue. Experimental results show that it provides a 6% speedup on average for the SPEC2006 benchmarks on a modern out-of-order processor. Kananizadeh et al. [Kananizadeh and Kononenko (2018)] proposed a new class of register allocation and code generation algorithms that can be performed in linear time. These algorithms are based on the mathematical foundations of abstract interpretation and the computation of the level of abstraction. They have been implemented in a specialized library for just-in-time compilation. The specialization of this library involves the execution of common intermediate language (CIL) and low level virtual machine (LLVM) code, with a focus on embedded systems. However, most of these methods aim at increasing performance with little spill code, while the crosstalk between instructions is seldom considered. We propose here a software method, Nearby Access based Register Reallocation (NARR), to reduce the crosstalk. Though similar to the graph-coloring register allocation method, it is distinguished by combining the frequency of nearby accesses to assign the registers. Our register reallocation approach is not only a software-only method requiring no hardware modifications, but also improves on prior work in reducing instruction bus crosstalk, since it deeply analyzes the data flow. Our contributions to crosstalk reduction by software can be summarized as follows:
- A new register reallocation algorithm called NARR to reduce the crosstalk. It is a pure software method without any hardware modifications and can theoretically achieve extra power savings and security enhancements by reducing crosstalk that register renaming and other software techniques fail to eliminate.
- A modified interference graph called the Nearby Access Aware Interference Graph (NAIG), designed and implemented with the help of assembly-level data-flow analysis and profiling information, which makes register reallocation feasible and easier.
- An implementation of the algorithm and an evaluation of the crosstalk improvement. Results show that NARR is an efficient algorithm for reducing crosstalk, especially the 4C class of crosstalk.
Section 2 first illustrates the background and motivation, and then introduces our new crosstalk aware register allocation algorithm (NARR). Section 3 presents the performance evaluation using benchmarks from MiBench. Section 4 draws some conclusions and highlights future directions.
Methods and materials
2.1 Crosstalk overview
Crosstalk is the noise induced on one circuit or channel of a transmission system by other circuits or channels, usually ones that run parallel to the affected one. The strength of crosstalk depends on factors such as wire length, wire width, and the switching pattern of nearby wires. To better evaluate the delay and energy caused by crosstalk, researchers have established crosstalk delay and energy models (Eqs. (1) and (2), respectively) [Moll, Roca and Isern (2003); Duan, Calle and Khatri (2009); Mutyam (2009)], where k is a constant determined by the driver strength and the wire resistance. These models show that different transition patterns can influence the effect of crosstalk significantly, due to their different effective capacitances. According to the effective capacitance of the different switching patterns of nearby wires in consecutive cycles, crosstalk is classified into six classes, shown in Tab. 1 (the symbols -, ↑, ↓, and x stand for no, positive, negative, and any transitions, respectively). Current research focuses on eliminating the 3C and 4C classes of crosstalk and reducing the other classes. However, the established techniques are more or less based on modifications of the integrated circuit, which increase the overhead of the system and therefore cannot be used for cost-constrained embedded systems. There is some software research attempting to reduce the crosstalk between instruction data buses: some approaches need to insert extra "NOP" instructions, while others cannot exploit the full power of changing registers because an insufficient amount of program information, such as data flow, is analyzed. In the next section, we illustrate the limitations of register renaming techniques, as well as how to overcome these limitations by using register reallocation, with a simple example.
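A minimal sketch of the commonly used classification behind Tab. 1, assuming the standard model in which a neighbor switching in the same direction as the victim wire contributes 0C, a quiet neighbor 1C, and a neighbor switching in the opposite direction 2C:

```python
def transition(prev_bit, next_bit):
    return next_bit - prev_bit        # +1 rising, -1 falling, 0 quiet

def crosstalk_class(left, victim, right):
    """Each argument is a (previous_bit, next_bit) pair for one wire."""
    tv = transition(*victim)
    if tv == 0:
        return 0    # only a switching victim experiences these delay classes
    c = 0
    for neighbor in (left, right):
        tn = transition(*neighbor)
        if tn == 0:
            c += 1                    # quiet neighbor: 1C
        elif tn != tv:
            c += 2                    # opposite transition: 2C
    return c                          # 0C .. 4C

# Opposite transitions on both sides of a switching victim give 4C:
print(crosstalk_class((0, 1), (1, 0), (0, 1)))   # prints 4
```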
Crosstalk case study
Register renaming has been used as a software-only or combined software/hardware technique for reducing the crosstalk. Since the lifetimes of registers are not analyzed and the results of register allocation are not reused, register renaming alone cannot overcome the limitation of register allocation, which aims to use a minimal number of registers to obtain good program performance, and therefore loses potential improvement in crosstalk reduction.
Consider the example instruction list from Kuo et al. [Kuo, Chiang and Hwang (2007)], illustrated in Fig. 1(a). We can see that instruction scheduling fails to reduce the crosstalk on the instruction buses between I1 → I2 → I3. From lines I2 and I3, we can see that the previous definition of R5 is last used in I2, and to save registers, the register allocator assigns R5 to hold the result of I3, which causes the 4C crosstalk between I2 and I3. In this piece of code, R6 is not used, and we assume that R6 is available too. If the register allocator uses R6 instead of R5 to hold the result of I3, the crosstalk is eliminated (shown in Fig. 1(b)). However, if we use register renaming to rename R5 to R6, the crosstalk between I2 and I3 is eliminated, but new 3C crosstalk occurs between I1, I2 and between I2, I3 (see Fig. 1(c)). This example shows the potentially better capability of register allocation in crosstalk reduction, compared to register renaming and other software techniques.
Crosstalk aware register allocation, optimization process outline
In order to achieve an effective optimization for the whole program, including system libraries, our optimization process takes the disassembled code and the profiling results as inputs.
The NAIG constructor is then used to build the NAIG from the disassembled code and to set its weights. Finally, the NARR processor analyzes the NAIG to reallocate the registers and generate the optimized code. The outline of the process is presented below (Fig. 2).
Figure 2: Outline of the optimization process
From this outline, we can see that the kernel of the optimization is the NAIG construction and the NARR process. The details are presented in the following two subsections.
NAIG construction
The goal of this work is to reduce the crosstalk on the instruction data bus, so the more frequently a pair of registers is accessed in nearby instructions, the more important those registers are. To better capture this nearby-access frequency in combination with register allocation, we enhanced the original interference graph widely used for register allocation and constructed a new nearby access aware interference graph, called NAIG. The NAIG is a weighted undirected graph represented by a four-tuple $G = (V, E_I, E_N, W_E)$, where $v \in V$ represents a variable or constant of the program, $e(u, v) \in E_I$ expresses that nodes u and v cannot share the same register, $e'(u', v') \in E_N$ expresses that nodes u' and v' may be accessed nearby each other, and the weight $w(e) \in W_E$ represents the frequency of such an access pattern $e(u, v)$. To build the NAIG, we take the disassembled code as input and suppose that there is an unlimited number of registers, the same as the virtual registers used in many compilers.
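A minimal sketch of the NAIG four-tuple as a plain data structure; the names are illustrative, not the authors' implementation:

```python
from collections import defaultdict

class NAIG:
    """G = (V, E_I, E_N, W_E): interference plus weighted nearby-access edges."""

    def __init__(self):
        self.nodes = set()                  # V: virtual registers / values
        self.interference = set()           # E_I: cannot share a register
        self.nearby = defaultdict(float)    # E_N with weights W_E

    def add_interference(self, u, v):
        self.nodes |= {u, v}
        self.interference.add(frozenset((u, v)))

    def add_nearby_access(self, u, v, freq=1.0):
        self.nodes |= {u, v}
        self.nearby[frozenset((u, v))] += freq  # profile-weighted frequency

    def edges_by_weight(self):
        # Decreasing weight order, as required by the first step of NARR.
        return sorted(self.nearby, key=self.nearby.get, reverse=True)
```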
First, we convert the disassembled code of each basic block into SSA form, which ensures that each register is defined only once (Algorithm 1, lines 1-4). Then, we use standard data-flow analysis methods to construct the data flow of each basic block and obtain the lifetime of each register at each instruction (lines 5-6). The interference graph can then be constructed by analyzing the live registers at each instruction (lines 7-15). After obtaining the interference graph, we use the profiling results to add the weights of the edges (lines 16-21), and finally we return the constructed NAIG (line 22). The detailed construction algorithm is given in Algorithm 1. In this algorithm, CFG is the control flow graph of the program, and each node $v \in V'$ represents a basic block containing a number of instructions executed in order. $Livereg_i$ denotes the set of registers defined before instruction i and used after it, i.e., the live registers.
Reducing the cost of spill nodes is very complex work, because spilling changes the original instruction order by inserting extra spill code, which forces the NAIG to be rebuilt. Fortunately, our algorithm avoids generating spill code: since the source code was already allocated successfully, we can always eliminate a potential spill by assigning the spill node its original register. The detailed reallocation algorithm is presented in the next section. Figs. 3(a) and 3(b) show the SSA representation and the corresponding NAIG for the first three instructions of Fig. 1(a), respectively.
NARR algorithm
Based on the above NAIG, we implement our new NARR algorithm as follows. First, we construct the NAIG for each function of the program (Fig. 3); then, we sort the edges of the NAIG in decreasing weight order (Algorithm 2, line 1). Since a heavily weighted edge indicates that the nodes it connects are frequently accessed nearby each other on the instruction data bus, we prefer to place them in the same register or in the registers with the least crosstalk. At the same time, we want to avoid spill code, which would not only hurt the performance of the system but also add crosstalk undetected by this algorithm, so we ensure that no new spill code emerges in our algorithm. We then analyze the ordered edges one by one to finish the register allocation for each node (lines 2-36). For each edge $e(u,v) \in E_I$, we first check whether either node is already assigned. If one node, say u, is assigned register $r_i$, we choose a register other than $r_i$ but with minimal crosstalk against it and assign it to v (lines 5-7). If both nodes are unassigned, we first assign one of them to some register and then find a suitable register for the other, as in the previous case (lines 15-19). If the two nodes are assigned the same register, we try to move one of them to another register (lines 8-14). For an edge not in $E_I$, we first try to assign the two nodes to the same register; if that is not possible, we handle it as an edge in $E_I$ (lines 22-34). The details are shown in Algorithm 2.
Input: the NAIG $(V, E_I, E_N, W_E)$ for each function of the program; the available registers $R = \{r_0, r_1, \dots, r_n\}$
Output: the allocation map M for each node v in the NAIG
1: E' := sort E_N in decreasing order of W_E
2: while E' is not empty do
3:   e(u,v) := pop the first element of E'
4:   if e(u,v) ∈ E_I then
5:     if only one node (say u) is assigned a register r_i then
6:       r_j := the register with minimal cost crosstalk(r_i, r_j)
7:       M.add(v, r_j)
8:     else if both u and v are assigned the same register r_i then
9:       if one of the two nodes (say u) can be changed to another register in set R' without violating the IG of the current analysis then
10:        r_k := the register in R' that minimizes crosstalk(r_k, r_i)
11:        M(u) := r_k
12:      else
13:        keep the two nodes in their original registers
14:      end if
15:    else if neither u nor v is assigned a register then
16:      r_i := a random register that node v can use
17:      M.add(v, r_i)
18:      r_j := the register with minimal cost crosstalk(r_i, r_j)
19:      M.add(u, r_j)
20:    end if
21:  else
22:    if only one node (say u) is assigned a register r_i then
23:      if r_i does not violate the conflicts of the other assignments made so far then
24:        M.add(v, r_i)
25:      else
26:        assign as in lines 5-7
27:      end if
28:    else if neither node is assigned then
29:      if there exists a register r_k that can be used for both nodes without violating the conflicts of the other assignments made so far then
30:        M.add(v, r_k), M.add(u, r_k)
Experimental setup for performance evaluation
The experiments are run on Fedora 12 combined with Windows 7 Home Basic. The test cases are selected from MiBench [Guthaus, Ringenberg, Ernst et al. (2001)]. The compiler toolchain is arm-linux-gcc 4.4.3, combined with objdump 2.19.51 to obtain the disassembled code. The sim-profile tool for ARM is used as the profiling tool to obtain the access frequency of instructions. The whole experimental framework is shown in Fig. 4. First, we use arm-linux-gcc to compile the source code to binary code in the Fedora 12 environment. Then, we disassemble the binary code and collect its profiling information. With the disassembled code and profiling information as inputs, the NARR processor produces the crosstalk aware optimized binary code. Finally, we compare the original binary code with the optimized binary code to evaluate the performance of NARR, and analyze the improvement details in crosstalk reduction, especially for the 3C and 4C classes.
From this benchmark, we can see that the 4C crosstalk has been significantly reduced. In cases such as stringsearch_large, stringsearch_small, dijkstra, and crc, the reduction of 4C crosstalk is higher than 95%, eliminating almost all 4C crosstalk in the program, and the average reduction rate is about 81%. For 3C+4C crosstalk, most benchmarks also show significant reductions, except dijkstra, which has many conflicts between 3C and 4C crosstalk. Since we enforce a 4C-first avoidance priority, the dijkstra result is not as good under the 3C+4C criterion. Nevertheless, we still obtain an excellent reduction rate for the majority of benchmarks in the 3C+4C case, and the average reduction rate over all tested benchmarks is about 44%.
Figure 5: 4C and 3C+4C crosstalk reduction in terms of crosstalk counts.
To better understand the crosstalk avoidance at the instruction level, we also analyze the 4C and 3C+4C crosstalk in dynamic execution, with the profile recorded in Fig. 6. From these results, we can see that the 4C crosstalk again shows a good reduction, with an average decrease of 80.87%. The highest reduction rate is 99.99%, for the crc test, where only two 4C crosstalk events appear in the program after optimization (shown in Tab. 2). For 3C+4C crosstalk, the average reduction is 37.01%, similar to the results shown in Fig. 5. Smaller reduction rates are again recorded under the 3C+4C condition for the stringsearch_large, patricia, and dijkstra tests. The main reason could be that a single instruction may contain several crosstalk events of the same class, such as 3C; if that instruction executes frequently, the counts at the instruction level will be lower than those at the crosstalk-count level. However, gathering crosstalk statistics at the instruction level is reasonable, since the program executes at the instruction level and possible attackers might also work at the instruction level to obtain the most detailed information about the system.
Figure 6: 4C and 3C+4C crosstalk reduction in terms of executed instructions.
Tab. 3 shows the evaluation results of applying NARR to reduce the 4C and 3C+4C crosstalk with respect to the whole set of executed instructions. We can see that after NARR, the crosstalk percentage is significantly reduced for almost every tested benchmark, in both the 4C and 3C+4C cases, in comparison with GCC.
The average percentage of 4C crosstalk is reduced to 0.89%, compared with the initially compiled result of 9.89% with GCC (a relative reduction rate of 91% based on the GCC value). Furthermore, for specific tests such as crc and dijkstra, we get nearly zero 4C crosstalk after NARR. The 3C+4C crosstalk is also reduced from 40.77% for GCC to 25.85% after NARR, on average. So the NARR method is effective at reducing the crosstalk, especially in the 4C case.
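The relative reduction rate quoted above is simply the drop measured against the GCC baseline; a one-line check using the figures from the text:

# Relative reduction rate = (GCC value - NARR value) / GCC value.
gcc_4c, narr_4c = 9.89, 0.89
print(f"{(gcc_4c - narr_4c) / gcc_4c:.0%}")   # -> 91%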
Tab. 4 shows the reduction percentage of all crosstalk types achieved by NARR compared to GCC. We can see that the overall crosstalk is also decreased substantially. For bitcnts and qsort, the reduction rate reaches more than 44%. The average reduction rate is 24.24%. So our NARR method achieves good performance for overall crosstalk as well, not only for the 4C case. | 2020-05-07T09:10:42.141Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "d7792ce1e491fd199be633970596fc5f0c0c24e3",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.32604/cmc.2020.09929",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "41adcc798b3012dfaea84fd9c443094fcd9f8d56",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
245362590 | pes2o/s2orc | v3-fos-license | Multicolor Fluorescence Imaging for the Early Detection of Salt Stress in Arabidopsis
Salt stress is one of the abiotic factors that causes adverse effects in plants and there is an urgent need to detect salt stress in plants as early as possible. Multicolor fluorescence imaging, as a powerful tool in plant phenotyping, can provide information about primary and secondary metabolism in plants to detect the responses of the plants exposed to stress in the early stage. The purpose of this study was to evaluate the potential of multicolor fluorescence imaging’s application in the early detection of salt stress in plants. In this study, the measurements were conducted on Arabidopsis and the multicolor fluorescence images were acquired at 440, 520, 690, and 740 nm with a self-developed imaging system consisting of a UV light-emitting diode (LED) panel for an excitation at 365 nm, a charge coupled device (CCD) camera, interference filters, and a computer. We developed a classification method using the imaging analysis of multicolor fluorescence based on principal component analysis (PCA) and a support vector machine (SVM). The results showed that the four principal fluorescence feature combinations were the ideal indicators as the inputs of the SVM model, and the classification accuracies of the control and salt-stress treatment at 5 days and 9 days were 92.65% and 98.53%, respectively. The results indicated that multicolor fluorescence imaging combined with PCA and SVM could act as a tool for early detection in salt-stressed plants.
Introduction
Salinity has become one of the most challenging problems in plant development and crop productivity worldwide [1,2]. Salt stress affects plant growth in various ways, such as through osmotic effects, ion toxicity, and nutrition disorder [3,4], which greatly inhibits agricultural production. Thus, in order to accelerate the process of breeding and the cultivation of salt-resistant crops, it is essential to monitor the growth status of plants. At present, the evaluation of plant growth performance is often conducted via the analysis of the physiological, biochemical, and molecular responses of plants to environmental stress [5][6][7]. However, these methods of analyses are prone to being affected by environmental and genetic factors, and are sometimes destructive for plants. Therefore, the development of noninvasive, fast, and efficient technologies for detecting plant growth status to promote the selection of useful plant traits has become a research focus in recent years [8,9].
Spectral imaging technologies are powerful non-destructive tools that have been widely applied in evaluating the performance of plants exposed to abiotic stresses, such as salinity, water, and heat [10][11][12][13][14]. These imaging technologies include red-green-blue (RGB) imaging, thermal imaging, hyperspectral imaging, and kinetic chlorophyll fluorescence imaging. RGB imaging technology has been used to assess the effect of soil and water salinity on date palm growth, which provided support for the application of RGB imaging in monitoring salinity-stressed plants [15]. Thermal infrared imaging technology can be utilized to analyze the responses of plants to water stress by characterizing the change of canopy and leaf temperature in plants [16,17]. Combining hyperspectral imaging and machine learning allowed for the estimation of the salinity tolerance of 13 okra genotypes by investigating fresh weight, SPAD, and transpiration rate [18]. The photoinhibition and reduction of plants exposed to heat and salinity stress were revealed by using kinetic chlorophyll fluorescence imaging [19].
The studies listed above indicated that spectral imaging technologies can be used to analyze plant phenotype changes responding to abiotic stresses and provide useful information for the detection of the plants exposed to abiotic stresses [15][16][17][18][19]. However, the early responses of plants to stress are often manifested in physiological and biochemical microscopic changes, so RGB imaging is limited in early detection as it only can recognize the visible changes observed by human inspection. Although hyperspectral imaging, thermal infrared imaging, and kinetic chlorophyll fluorescence imaging could reveal the microscopic performance of plants to stress, they also have limitations, such as requiring an expensive and complex device, being subject to the impact of environmental temperature, and requiring dark adaptation.
Multicolor fluorescence with the excitation of UV light and its emission spectrum is usually characterized by four bands near 440 nm (blue; F440), 520 nm (green; F520), 690 nm (red; F690), and 740 nm (far-red; F740) [20]. The blue and green fluorescence are often grouped as blue-green fluorescence (BGF) emitted by secondary metabolites bound to cell walls. Additionally, red, and far-red fluorescence are treated as chlorophyll fluorescence (ChlF) emitted by chlorophyll a in the chloroplasts of green mesophyll cells [21]. Multicolor fluorescence imaging can evaluate the physiologic state of plants before the symptoms induced by different environmental stress factors become evident [22], which has an advantage in the early stress detection of plants. Thus, multicolor fluorescence imaging has been applied to evaluate the physiological state of plants in many studies. Multicolor fluorescence imaging technology has been used for the early detection of pathogens in plants such as zucchini, melon, and Nicotiana benthamiana, demonstrating the potential of multicolor fluorescence imaging in revealing stress-associated signatures [23,24]. The combination of kinetic chlorophyll fluorescence and multicolor fluorescence imaging has been successfully applied in the early detection of drought stress responses in Arabidopsis [25]. Based on a multicolor fluorescence system coupled with a dynamic fluorescence index (DFI), the fluorescence index has been utilized in predicting the water stress status of cabbage seedlings [26]. In addition, multicolor fluorescence imaging technology has been employed in the study of plant nutrients, as reviewed by Tremblay et al. [27]. However, there are still limited studies applying multicolor fluorescence imaging technology as a tool to analyze the plant response to salt stress.
Hence, the main objective of this work was to explore the possible application of multicolor fluorescence imaging technology for evaluation of the early detection of salt stress in plants. In this work, light-emitting diode (LED)-induced multicolor fluorescence was detected by a charge coupled device (CCD) sensor through different light filters with four bands (440, 520, 690, and 740 nm) to acquire multicolor fluorescence images. Based on this, multiple fluorescence parameters were extracted to assess the effects of salt stress, and data dimension reduction was conducted through principal component analysis (PCA) to reduce the number of features and select the best features. Then, the best features and an optimized support vector machine (SVM) model were integrated to construct the early detection model for salt stress. The results showed that multicolor fluorescence imaging has great potential in the early detection of salt stress in plants.
Plant Material and Growth Conditions
Arabidopsis Columbia (Col-0) was used to establish the cultivation and salt treatment in the experiment. Similar to the procedure in [28], seeds were sown in Petri dishes in half-strength Murashige and Skoog salts (1/2 MS; Sigma), 1.5% (w/v) sucrose (Sigma), and 0.8% (w/v) agar. At the four-leaf stage (day 15 after sowing), plants were transplanted to pots (70 × 70 mm and 50 × 50 mm top and bottom, respectively, and 54 mm height, with holes in the bottom) filled with a mixture of nutrient soil and vermiculite (3:1, v/v). Arabidopsis plants were cultivated in a climate-controlled growth chamber (F731, Hipoint, Taiwan, China) at 22 °C, 65% RH, 8/16 h light/dark, under an optimal light intensity of 120 μmol·m−2·s−1 [29].
Salt-Stress Treatment
In this experiment, the Arabidopsis plants were irrigated with NaCl solution with a concentration of 100 mM. The experimental treatment began 28 days after sowing (day 28), when the plants were irrigated with NaCl solution for 9 days, while watered plants served as the controls. In the experiment, the control and salt-treated plants were used for nondestructive multicolor fluorescence imaging at day 1, 3, 5, 7, and 9 after treatment.
Multicolor Fluorescence Imaging (MFI) System
The schematic representation of the self-developed MFI system is shown in Figure 1.
The system consists of a LED panel, a monochrome CCD camera (MV-CA005-20GM, Hikvision, Hangzhou, China), band-pass filters, and a computer. The LED panel containing 48 × 3W LEDs provided an excitation light at 365 nm to measure the fluorescence signals emitted by plants. In order to reduce the influence of the heterogeneous intensity field of LED, four mirrors were installed at four sides of the dark box. The monochrome CCD camera had a spatial resolution of 1024 pixels × 1280 pixels. The focal length of the camera lens was 12 mm with a standard view (H0514-MP2, Computar, Tokyo, Japan). Multicolor fluorescence imaging was implemented by using band-pass filters (half-band width of 15 nm) placed in front of the lens in the respective bands (440, 520, 690, and 740 nm). The images were captured by the CCD camera controlled by a software developed by our group using the C++ programming language based on the platform Visual Studio 2018 (Microsoft, Redmond, WA, USA). During the process of multicolor fluorescence image acquisition, the sample was placed on flat surface and the distance between the lens and the sample was approximately 25 cm.
Determination of Leaf Area
In order to observe the effect of salt stress on plant morphology features, the RGB images were acquired at day 1, 3, 5, 7, and 9 after salt stress using an RGB camera (Nikon D5600, Tokyo, Japan) and the projected leaf area of the plants was calculated from the total pixels.
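The projected leaf area can be obtained from an RGB image with a simple colour-based segmentation. The paper does not state which segmentation rule was used, so the excess-green index, the threshold, and the pixel-to-area scale factor below are assumptions chosen only for illustration.

import numpy as np

def projected_leaf_area(rgb, pixel_area_mm2=1.0):
    """Segment plant pixels with a simple excess-green rule (2G - R - B > threshold)
    and return the projected area together with the mask."""
    rgb = rgb.astype(float)
    exg = 2 * rgb[..., 1] - rgb[..., 0] - rgb[..., 2]
    mask = exg > 20                      # illustrative threshold, not from the paper
    return mask.sum() * pixel_area_mm2, mask

# Toy example: a 4x4 image containing a 2x2 green patch.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1:3, 1:3, 1] = 200
area, _ = projected_leaf_area(img)
print(area)   # -> 4.0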
Image Processing and Statistical Analysis
There were no fluorescence signals in the non-plant region, where it was dark compared to the plant region in the multicolor fluorescence images. Hence, we set a threshold to segment the plant region from the image, in which an image can be grouped into two classes representing the plant and the background, respectively. Based on the four basic multicolor fluorescence images F440, F520, F690, and F740, we calculated the individual fluorescence intensity from the corresponding plant region. Additionally, the fluorescence ratios could be calculated, including F440/F520, F440/F690, F440/F740, F520/F690, F520/F740, and F690/F740. After image processing, an analysis of variance (ANOVA) was employed to evaluate the differences in four basic multicolor fluorescence parameters between the control and salt-stressed plants. In this study, we obtained 10 fluorescence parameters, including 4 basic parameters and 6 fluorescence ratios, and Pearson's correlation analysis was utilized for the linear correlation among these parameters and to check whether all parameters had potential as an input for the detection model of salt stress to obtain better results. Imaging processing and the calculation of fluorescence intensity were carried out using MATLAB 2016b (Mathworks, Natick, MA, USA). ANOVA and Pearson's correlation analysis were conducted on IBM SPSS Statistics 26 (IBM Corporation, Armonk, New York, NY, USA).
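A minimal sketch of the band-image processing described above, in Python with NumPy. The global threshold rule is an assumption (the text only states that a threshold separating plant and background was used), and the synthetic arrays stand in for real F440-F740 images.

import numpy as np

def mean_plant_intensity(img, threshold=None):
    """Mean fluorescence intensity over the plant region of one band image."""
    if threshold is None:
        threshold = img.mean() + img.std()   # illustrative rule, not the authors'
    mask = img > threshold
    return img[mask].mean() if mask.any() else 0.0

def fluorescence_parameters(bands):
    """bands: dict like {'F440': array, ...}; returns the 4 basic parameters and 6 ratios."""
    f = {k: mean_plant_intensity(v) for k, v in bands.items()}
    pairs = [("F440", "F520"), ("F440", "F690"), ("F440", "F740"),
             ("F520", "F690"), ("F520", "F740"), ("F690", "F740")]
    ratios = {f"{a}/{b}": f[a] / f[b] for a, b in pairs}
    return {**f, **ratios}

rng = np.random.default_rng(0)
bands = {k: rng.uniform(0, 255, (64, 64)) for k in ("F440", "F520", "F690", "F740")}
print(fluorescence_parameters(bands)["F440/F690"])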
Construction of a Classification Model Based on PCA and SVM
In this paper, to further select ideal principal features in limited sample data to differentiate the salt-stressed plants from the controls, PCA was performed to reduce the dimensionality of such datasets, increasing interpretability and at the same time minimizing information loss [30]. After the application of PCA, the principal components were uncorrelated variables that successively maximized variance and then the ideal principal component features were determined. An SVM is a versatile and configurable model that could be treated as a classification problem, which has a better performance than other traditional machine learning algorithms [31,32]. Therefore, it is a great alternative to classify the salt-stressed plants from the controls based on the model that combines PCA and SVM. In this study, four principal component combinations characterizing the plant response to salt stress were extracted from all multicolor fluorescence parameters, and these combinations were preprocessed together to form sample data. The two classifications, with the labeling "1" for the control plants and "2" for the salt-stressed ones, were conducted by an SVM classifier. For the classification scheme, the data set consisted of multicolor fluorescence data of 168 pots (84 controls and 84 salt-stress treatments) for 5 days, from which 100 pots (50 controls and 50 salt-stress treatments) for 5 days were taken as the training set, and the remaining 68 pots (34 controls and 34 salt-stress treatments) per day were used as the testing set with 10 repetitions using 10-fold cross-validation. In this study, PCA for feature selection and SVM for classification were performed in MATLAB 2016b (Mathworks, United States) and Python 3.6 (Python Software Foundation, Wilmington, DE, USA).
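The PCA-plus-SVM classifier described above can be expressed as a short scikit-learn pipeline. The sketch keeps the four retained components and the two class labels from the text, but the SVM kernel, its hyperparameters, the standardization step, and the synthetic data are assumptions; the study's 100/68 train-test split is replaced here by a plain 10-fold cross-validation for brevity.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# X: one row per pot, columns = the 10 fluorescence parameters; y: 1 = control, 2 = salt stress.
rng = np.random.default_rng(1)
X = rng.normal(size=(168, 10))
y = np.repeat([1, 2], 84)
X[y == 2] += 0.8                                  # synthetic separation for the demo only

model = make_pipeline(StandardScaler(),
                      PCA(n_components=4),        # the four principal feature combinations
                      SVC(kernel="rbf", C=1.0))   # kernel and C are assumptions
scores = cross_val_score(model, X, y, cv=10)
print(f"10-fold CV accuracy: {scores.mean():.2%}")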
Salt Stress Affected Growth over Time
The RGB images of Arabidopsis Columbia (Col-0) under control and salt stress treatment at day 1, 3, 5, 7, and 9 after exposure to the salt stress are presented in Figure 2A. From the RGB images, it can be observed that the slender leaves of the plant under salt stress treatment changed in roundness over time and the salt stress caused a significant decrease in the projected leaf area after salt stress treatment for 7 days (Figure 2B). However, the color of the plants exposed to salt stress did not show significant changes over time from the RGB images, which was consistent with an earlier report [33]. These results showed that early salt-induced changes had limited effects on the structural traits of the plants from RGB images over time. However, early salt-induced changes ignored by RGB images could be revealed using multicolor fluorescence imaging in this study.
Effect of Salt Stress on Basic Fluorescence Parameters of Arabidopsis Leaves
From the multicolor fluorescence images in the 440 nm, 520 nm, 690 nm, and 740 nm regions, four basic multicolor fluorescence parameters were derived (F440, F520, F690, and F740), and these multicolor fluorescence parameters and pseudo-color images are shown in Figures 3 and 4. Differences in the four parameters between the control and salt-stressed plants were observed at day 1, 3, 5, 7, and 9 after treatments by using ANOVA analysis (Figure 3). It was found that the values of F440 and F520 for the control were relatively consistent during plant growth, while the salt-stressed plants showed statistically significant increases in F440 and F520 starting from day 5 and day 3 after salt-stress treatment, respectively. However, the values of F690 and F740 decreased after salt-stress treatment, and the difference in these values between the control and salt-stress treatment conditions was obvious from day 5 after salt-stress treatment. These differences caused by salt stress became more significant between the control and salt-stressed plants from day 3 after salt-stress treatment.
To visualize the effect of salt stress on Arabidopsis plants, the representative pseudocolor images of control and salt-stressed plants at day 5 and 9 after treatments are shown in Figure 4. When viewing the blue and green fluorescence images, it could be found that the fluorescence intensity of F440 and F520 increased considerably and salt stress could cause the spatial heterogeneities within the leaf level, specifically at the edge of leaves. However, the decreased signals of F690 and F740 appeared in the entire canopy from the red and far-red fluorescence images.
Often, F440 and F520 are treated as blue-green fluorescence (BGF), which could be primarily emitted from several phenolic compounds located in the cell walls of the epidermis or vacuoles of leaves [34,35]. According to the study by Lang et al., the blue fluorescence emission is often caused by several phenolic substances such as chlorogenic acid, caffeic acid, coumarins (aesculetin, scopoletin), and stilbenes (t-stilbene, rhaponticin), while the green fluorescence emission is derived from substances such as the alkaloid berberine and the flavonoid quercetin [36]. Based on a previous study, salt stress could contribute to the significant accumulation of cinnamic acids and ferulic acid in salt-stressed plants over time, which could lead to the increase of F440 [37]. Yastreb et al. showed that plants under salt stress could enhance the level of flavonoids and form a protective system against salt stress [38]. Thus, the increase of flavonoid content could be one of the reasons to explain the increase of F520. Additionally, a previous study reported that the exposure of Arabidopsis to salt stress resulted in a decline in chlorophyll content [29], which also could contribute to the increase of F440 and F520 since the reabsorption of blue-green fluorescence is reduced [39]. The chlorophyll-fluorescence emission spectra usually exhibit two emission maxima around 690 nm and 740 nm, which are termed F690 and F740 [40,41]. The decrease of F690 is partially caused by in vivo re-absorption by chlorophyll (overlapping of the absorption and fluorescence emission bands of chlorophyll a forms) [42,43]. The far-red chlorophyll fluorescence band F740 is not affected by this re-absorption process [44], and the decline of F740 reflects the increasing chlorophyll loss as well as the breakdown of chlorophyll in salt-stressed plants over time.
Correlation Analysis for Multicolor Fluorescence Parameters
From the effect of salt stress on the basic multicolor fluorescence parameters, consistent trends of F440 with F520 and of F690 with F740 were observed, which indicated that a high correlation occurred between them. Thus, Pearson's correlation analysis on these multicolor fluorescence parameters was conducted and the correlation matrix diagram of the multicolor fluorescence parameters is shown in Figure 5. It can be observed that there were different degrees of correlation among the fluorescence parameters, indicating that these fluorescence parameters contain different types of information, but also have repeatability. Correlation analysis indicated that extremely high correlation coefficients exceeding 0.85 appeared among F440/F690, F440/F740, F520/F690, and F520/F740. High correlation coefficients exceeding 0.7 appeared in the feature values including F440 with F520, F690 with F740, F690 with F440/F740 and F520/F740, and F740 with F440/F690 and F520/F690. Although these parameters were related to the class label, there was redundancy. Hence, a data dimension reduction was needed to remove the redundant information and construct the optimized detection model for salt stress.
Figure 4. Representative images for F440 and F520 at day 5 and day 9 and those of F690 and F740 at day 5 and day 9 after salt-stress treatment. The color code depicted at the right of the images ranges from black (minimum value) to red (maximum value).
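The pairwise screening described above amounts to computing a Pearson correlation matrix over the ten parameters and flagging the strongly correlated pairs. A small sketch (pandas; the parameter names and the 0.85 cut-off are taken from the text, the data are synthetic placeholders):

import numpy as np
import pandas as pd

params = ["F440", "F520", "F690", "F740",
          "F440/F520", "F440/F690", "F440/F740",
          "F520/F690", "F520/F740", "F690/F740"]
rng = np.random.default_rng(2)
df = pd.DataFrame(rng.normal(size=(168, 10)), columns=params)   # placeholder measurements

corr = df.corr(method="pearson")
high = [(a, b, round(corr.loc[a, b], 2))
        for i, a in enumerate(params) for b in params[i + 1:]
        if abs(corr.loc[a, b]) > 0.85]        # redundancy threshold used in the text
print(high)                                   # with real data this flags the redundant pairs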
Principal Component Analysis of Effect of Salt Stress in Arabidopsis
The feature information contained from the high correlation feature values was also highly similar to the data represented in Figure 5, and thus there was a need to reduce the redundant information and simultaneously decrease the dimension of data for constructing an optimal classification model. Therefore, principal component analysis (PCA) was used to reduce the dimensionality of the preprocessed multi-dimension data, and its first several components can express most of the contributions of the original data. In this study, principal component analysis (PCA) with these fluorescence parameters obtained from fluorescence images of the control and salt-stressed plants was performed at day 1, 3, 5, 7, and 9, respectively. It was found that the first two components could explain over 70% of the total variances in the vector graph at day 1, 3, 5, 7, and 9, respectively ( Figure 6), and their contribution rate increased from 72% to 82% over the treatment time. Additionally, principal component analysis showed that the control and salt-stressed samples were gathered in whole at day 1 after treatment, while they were gradually clustered into two groups from day 5 after treatment and the control samples were grouped on the left side and the salt-stressed samples were clustered on the right side with higher values of PC1 based on an overall analysis ( Figure 6). However, the clustering ability for the control and the salt-stressed plants based on the PC2 score was not as good as that based on PC1. This demonstrated that the distribution of salt-stressed plants was strongly related to the increased value of the PC1 score. In order to further understand changes of the functional and structural information of secondary metabolism in salt-stressed plants, the effects of each multicolor parameter on the first several principal components were also taken into consideration. According to PCA at day 1, 3, 5, 7, and 9, respectively, the contribution rate of the first four components could obtain 99%, which could explain most of the variance in the original data, so the relative "contribution" of each multicolor parameter to the formation of the four components was then used for further analysis. The component matrix details the factor loadings onto the four components. The loading matrix of the variables onto the components at day 1, 3, 5, 7, and 9 are shown in Supplementary Table S1. The names addressed to each component were based on an understanding of the content of the variables. PC1 was the most dominant pattern in the four principal components and its contribution rate increased from 40.71% at day 1 to 71.03% at day 9, whereas each of the remaining three principal components was varied between 11.24-31.77% (PC2), 10.40-21.5% (PC3), and 5.78-12.45% (PC4) of the variance.
From the data represented in Supplementary Table S1, it can be observed that the parameters for the formation of the four principal components differentially responded to salt stress at day 1 and day 3 after treatment. However, with the extension of treatment time, there were some common fluorescence parameters with high positive values in the first four components, such as F440/F690, F440/F740, F520/F690, F520/F740 in PC1, F740 in PC2, and F440/F520 in PC3. It was indicated that these parameters were more sensitive indicators of gradients or changes in fluorescence emission over the leaf surface and presented a relatively high contribution for the detection of salt stress compared to other parameters. The significant changes of the fluorescence ratios, including F440/F690, F440/F740, F520/F690, and F520/F740, were a result of the decline of the red and far-red fluorescence and the increase of the blue-green fluorescence. Although there was no doubt that the sensitivity of the different multicolor parameters to salt stress varied in its degree, the first four components extracted from these fluorescence parameters based on PCA could explain approximately 99% of the variance in these parameters. Additionally, the highest coefficient with a positive value in each principal component was different, which indicated that principal component analysis was an appropriate approach to fuse multidimensional data and preserve valid information to the maximum extent possible. After principal component analysis, it could be found that the contribution rate in PC1 increased with the decline of the contribution rate in the remaining three principal components, but they still accounted for a significant proportion. Therefore, the four principal component combinations could be considered as the input for SVM model to detect salt stress in plants.
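For completeness, the contribution rates and factor loadings discussed above are exactly what a fitted PCA exposes; the snippet below shows where such numbers come from (synthetic data, illustrative names, and the loadings shown are the raw component coefficients):

import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
X = rng.normal(size=(168, 10))                      # 10 fluorescence parameters per pot
pca = PCA(n_components=4).fit(StandardScaler().fit_transform(X))

print(pca.explained_variance_ratio_)                # contribution rates of PC1..PC4
loadings = pd.DataFrame(pca.components_.T, columns=["PC1", "PC2", "PC3", "PC4"])
print(loadings.round(2))                            # loading-style matrix, analogous to Table S1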
Classification Model for Salt Stress Detection
Multicolor fluorescence imaging could visualize the changes in plants under salt-stress treatment and provide information about the primary and secondary metabolism of plants. The changes of the fluorescence parameters could be used to detect the growth status of plants in combination with machine learning. Therefore, the classification models at the different salt-stress stages were constructed with the SVM classifier using the four fluorescence feature combinations selected by PCA, and the results are summarized in Table 1. It was found that at day 1 and day 3 after treatment, the overall accuracies were 60.29% and 73.53%, respectively, while the accuracy of the classification model was 98.53% at day 9 after treatment. This implied that the degree of difference caused by salt stress varied with stress time and growth stage.
The results of the classification showed that the detection accuracy of the SVM model was low at day 1 and 3 after salt-stress treatment, which was due to the fact that the earlier changes caused by salt stress in the metabolite were not significant [32]. With the elongation of stress time, however, these differences also became more significant and the overall accuracy obtained was 90% starting from day 5, when few visual symptoms were observed from the RGB images. This revealed that the salt stress could be detected using multicolor fluorescence imaging at the early stage, and the SVM classifier with PCA selection showed a good performance for classifying healthy and salt-stressed plants.
Conclusions
In this study, the fluorescence images of normally growing and salt-stressed plants were acquired using multicolor fluorescence imaging, from which the difference in the growth status of the plants could be clearly visualized after 5 days of salt-stress treatment. The results from PCA showed that the control and salt-stressed plants can be potentially distinguished by the basic multicolor fluorescence parameters and their ratio values. The four fluorescence feature combinations selected by PCA were used as inputs to establish an SVM classifier with an overall accuracy of 92% for salt-stressed plants after salt treatment for 5 days. Additionally, the classification accuracy gradually increased to 98% at day 9 after salt-stress treatment, which demonstrated that multicolor fluorescence imaging has the potential to be used as a diagnostic tool to assess salt stress in plants. This study provides a reference for detecting salt-stressed plants by multicolor fluorescence imaging and its application in other salt-stressed crops. Further development of the technique would be desirable in order to facilitate its applicability in assessing plant stress and related breeding programs. | 2021-12-22T16:03:11.798Z | 2021-12-18T00:00:00.000 | {
"year": 2021,
"sha1": "ed99018b88c358d149d4d3dd8b73f26cfc63da04",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4395/11/12/2577/pdf?version=1639988521",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "9d6ce6637ff6a6aa16d90f19c0a0b304169f6275",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": []
} |
236632106 | pes2o/s2orc | v3-fos-license | Natural products in Cyperus rotundus L. (Cyperaceae): an update of the chemistry and pharmacological activities
Cyperus rotundus L. (Nutgrass, family Cyperaceae) is a notorious weed which is widespread in temperate, tropical and subtropical regions of the world. Owing to its richness and potent pharmacological activities, efforts have been devoted to identifying its bioactive constituents. Since 1965, a total of about 192 compounds including terpenoids, flavonoids, stilbenes, aromatics and aliphatic fatty acids have been characterized. This review summarizes the bioactivities and mechanism of action of some of the compounds from C. rotundus L.
Introduction
[2][3] It is a notorious weed and has a destructive effect on agricultural yields after it invades crop fields. 4,5 It is a smooth, erect, glabrous, grass-like, fibrous-rooted, perennial herb that grows up to 15-60 cm in height (Fig. 1) and reproduces widely through rhizomes and tubers. 6 [15][16] In West Asia, the roots are applied in traditional medicine for the treatment of leprosy, thirst, fever, and blood diseases. 17,18 In Egyptian folk medicine, the tubers are used as an anthelmintic, aphrodisiac, diuretic, sedative, carminative, stimulant and tonic, and for treating renal colic and stomach ache. 19 This perennial herb has recently received much attention due to its broad range of pharmacological and biological activities. 5
After decades of detailed phytochemical investigation, it is evident that this plant species contains two major classes of secondary metabolites, namely, terpenoids and flavonoids.
Flavonoids
[130][131][132][133][134][135][136][137][138][139][140] In this report, summaries of the most interesting results for flavonoids (132-153) isolated from CR are shown in Table 2, while the chemical structures of the isolated compounds are shown in Fig. 8 and 9. In Table 2, the biological activities of the compounds and the organisms studied have been provided. Harborne et al.141 isolated luteolin-7-O-glucoside (132), tricin (133) and aureusidin (134) from the leaves/stems of CR by paper electrophoresis. The structures of the compounds were identified by standard procedures and co-chromatography with authentic samples carried out in at least 4 solvents.
Stilbenes and derivatives
Stilbenes are polyphenols containing resveratrol as a basic subunit. These compounds have received much attention because of their cardioprotective effects, but they also display anti-inflammatory, antioxidative, and antimicrobial activities. They are also known as anticancer and cancer-chemopreventive agents. 142 The chemical structures of those isolated from this herb are shown in Fig. 10 and 11.
Miscellaneous compounds
Steroidal glycosides, furochromones and aromatics (177-185) isolated from the aerial parts of CR collected from Egypt demonstrated antioxidant, α-amylase inhibitory and antifeedant activities. 8,33 A summary of these bioactive compounds isolated from CR is provided in Table 4 and their chemical structures in Fig. 12. New iridoids, a cerebroside, known aliphatic fatty acids and a coumarin (186-192) from this plant species have also shown hepatoprotective, anti-proliferation, and growth inhibitory properties, respectively. 78,86,143,144
Bioactivities and proposed mechanisms of C. rotundus compounds
[77][78] The summaries of the most interesting results for some NPs isolated from this weed have been shown in Fig. 12.
Conclusions
Cyperus rotundus L. (Nutgrass, family Cyperaceae), popularly called "the world's worst weed", has attracted particular attention as a medicinal plant due to its broad spectrum of pharmacological activities. In the past six decades, about 192 NPs have been isolated and characterized from this plant species. Among them, terpenoids and flavonoids are the major bioactive constituents, mostly harvested from Asia and Africa. The chemical structures of the pure compounds were retrieved from literature sources comprising data collected from articles in major peer-reviewed journals from all over the world, spanning the period 1965 to 2020. The collected data include the region of collection of the plant material, voucher specimen number, isolated metabolites and their class, and the measured biological activities of the isolated compounds. The study has provided a survey of the biological activities of 192 NPs and the mechanism of action of some of the compounds isolated from C. rotundus. It is worth mentioning that C. rotundus and its NPs have shown good safety in in vitro and in vivo studies. Thus, it would be interesting in future work to evaluate the toxicities of the NPs from this weed using in silico approaches.
Fig. 2 Pie chart showing the distribution by compound class.
(+)-Nootkatone (40) has been found to have a potent inhibitory effect on collagen-, thrombin-, and AA-induced platelet aggregation. Mice treated with compound 40 exhibited significantly prolonged bleeding times. It has also shown a significant inhibitory effect on rat platelet aggregation ex vivo.77 Three novel sesquiterpene alkaloids, rotundines A (44), B (45), and C (46), were isolated from the MeOH extract using standard methods for the extraction of alkaloids. The structures of the compounds were determined by comprehensive spectroscopic analyses and chemical methods.15 Ohira et al.82 isolated the new sesquiterpenoids 2α-(5-oxopentyl)-2β-methyl-5β-isopropenylcyclohexanone (48), 2β-(5-oxopentyl)-2β-methyl-5β-isopropenylcyclohexanone (49) and cyperolone (50), together with the known compounds 17, 19, 40, 51 and 52, from the roots of CR. The antibacterial activities of the new hits were screened against Escherichia coli and Bacillus subtilis using the paper disk method. Cyperolone (50) possessed moderate activity against B. subtilis at a concentration of 0.5 mg per disk; the other compounds did not show notable activities.82
Table 1
Summary of the bioactivity of derived terpenoids from Cyperus rotundus a
Table 2
Summary of the bioactivity of derived flavonoids from Cyperus rotundus a | 2021-06-15T02:55:07.001Z | 2021-04-21T00:00:00.000 | {
"year": 2021,
"sha1": "7164f3c5d9af7f189e43679af61395e0e5d239e6",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2021/ra/d1ra00478f",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "8b9729e33795aef0e20b2f91d2454b4d142eb4cd",
"s2fieldsofstudy": [
"Chemistry",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
226192792 | pes2o/s2orc | v3-fos-license | Impact of prebiotics on equol production from soymilk isoflavones by two Bifidobacterium species
The influence of commercial prebiotics (fructo-oligosaccharides and inulin) and sugars (glucose and sucrose) on enhancing equol production from soymilk isoflavones by Bifidobacterium longum BB536 and Bifidobacterium breve ATCC 15700 was evaluated in vitro. Sterilized soymilk was inoculated with each bacterial species at 37 °C for 48 h. The growth and β-glucosidase enzyme activity of the two Bifidobacterium species in soymilk throughout fermentation were assessed. The highest viable count for B. breve (8.75 log CFU/ml) was reached at 36 h and for B. longum (8.55 log CFU/ml) at 24 h. Both bacterial species displayed β-glucosidase activity. B. breve showed increased enzyme activity (4.126 U) at 36 h, while B. longum exhibited maximum activity (3.935 U) at 24 h of fermentation. Among the prebiotics screened for their effect on isoflavone transformation to equol, inulin had the greatest effect on equol production. The co-culture of B. longum BB536 and B. breve ATCC 15700 in soymilk supplemented with inulin produced the highest level (11.49 mmol/l) of equol at 48 h of the fermentation process. The level of daidzin declined whereas that of daidzein increased and then gradually decreased due to the formation of equol when soymilk was fermented with bifidobacteria. This suggests that the nutritional value of soymilk may be increased by increasing the bioavailability of its bioactive ingredients. Collectively, these data identify probiotic and prebiotic combinations suitable for inclusion in soymilk to enhance equol production.
Introduction
A significant body of research has been directed to the nutritious and healthy properties of soybean and soy products. It has been found that soybean isoflavones and isoflavone-derived metabolites resemble estrogen and exhibit some of its health benefits (Chen et al., 2018; Wee et al., 2017; Bilal et al., 2014). Isoflavones include aglycones and their glycosides (Hughes et al., 2003). It is important to clarify that the aglycones (daidzein and genistein) are more biologically active forms of isoflavones than their glycosides (genistin, daidzin) (Elghali et al., 2012; Kawakami et al., 2005). Daidzein (7-hydroxy-3-(4-hydroxyphenyl)-4H-chromen-4-one) is one of the therapeutically important natural isoflavones found in soybean. Daidzein has been approved for relieving menopausal syndromes in females and for the treatment of hypertension, coronary heart disease, cerebral clotting, dizziness, and deafness. However, daidzein does not commonly show estrogenic activity unless it is converted to equol by the intestinal bacteria (Wang et al., 2017). Equol (4′,7-isoflavandiol) is an isoflavone metabolite derived from daidzin/daidzein by certain bacterial biotypes in the small intestine and colon of humans; it has a non-planar structure, which confers its physiological properties (Raffi, 2015; Del Rio et al., 2013; Setchell and Clerici, 2010). It is more stable, more easily absorbed, and has stronger estrogenic activity than the other isoflavones or its precursor molecule daidzein (Jackson et al., 2011; Setchell et al., 2005).
In addition, equol has been confirmed as having a protective action against osteoporosis by upregulating mineral content and bone density in menopausal women (Lambert et al., 2017). (S)-Equol exhibits potential neuro-protective effects when used by Alzheimer's patients (Wilkins et al., 2017). About 25-30% of younger individuals are able to produce equol in vivo when fed soybean products. Thus, there is a need to improve the methods used for equol production. One promising approach to equol production is natural bacterial fermentation. However, low growth and productivity are the major problems of this procedure that should be resolved (Li, 2019).
Bifidobacterium species are reported to exhibit health-promoting effects and are classified as probiotic organisms, since they are thought to enhance bacterial homeostasis in the human digestive tract (Schrezenmeir and de Vrese, 2001). Probiotics possess several healthy features, including antimicrobial and anticarcinogenic activities, as well as other valuable health effects for the host (Lourens-Hattingh and Viljoen, 2001). Soymilk helps in delivering probiotics to the consumer (Otieno et al., 2005). Moreover, studies have reported that soymilk is a good culture medium for bifidobacterial growth. This is because it contains various carbohydrates, including sucrose, raffinose, glucose and stachyose, which are fermented by the majority of strains affiliated to this genus (Liu, 1997; Desjardins et al., 1990). However, humans are not able to produce sufficient amounts of α-galactosidase (an enzyme that catalyzes the breakdown of the terminal α-galactosyl moieties of polysaccharides and oligosaccharides) in the digestive system to completely digest the galactosaccharides of soymilk. Therefore, bacterial metabolism of these α-galactosyl oligosaccharides requires strains with higher α-galactosidase activity (Lu-Kwang et al., 2018; Sengupta et al., 2015).
A prebiotic is defined as "a substrate that is selectively utilized by host microorganisms conferring a health benefit". This definition expands the idea of prebiotics to possibly include non-carbohydrate substances, applications to body sites other than the gastrointestinal tract, and diverse categories other than food (Gibson et al., 2017). Since the major influence of prebiotics is to stimulate bacterial growth and/or activity, primarily of Bifidobacteria, they have a role in promoting human health (Park et al., 2016; Kaur and Gupta, 2002; Gibson and Roberfroid, 1995). Besides, prebiotics (FOS and inulin) are recognized to influence the development of Lactobacillus and/or Bifidobacterium spp. Therefore, supplementation of soymilk with a prebiotic could enhance bacterial growth in soymilk by offering an additional supply of oligosaccharides. Furthermore, fructo-oligosaccharides (FOS), inulin and galacto-oligosaccharides (GOS) have attracted wide attention because they are appropriate food for Bifidobacteria in the intestine and can enhance the stability of useful bacteria in the gut; therefore, they can improve human health (Simpson and Campbell, 2015; Huebner et al., 2007; Tuohy et al., 2003). A study by Roberfroid et al. (1998) stated that the inulin-type fructans are the only prebiotics characterized as functional food ingredients; however, another study reported that prebiotics with specific effective features (established in in vivo and in vitro experiments) include inulin, fructo-oligosaccharides (FOS) and galacto-oligosaccharides (GOS) (Florowska et al., 2016).
In the present study, soymilk was used as a natural source of isoflavones, and it should be explained that the selection of bacterial species for the screening of equol production from soymilk was based on the β-glucosidase activity of the bacterial species. Owing to our interest in the β-glucosidase enzyme, this study only included screening of the β-glucosidase activity, as it is essential for the enzymatic transformation of isoflavone glycosides to aglycones to provide higher levels of daidzein, the direct precursor of equol (Yuksekdag et al., 2017; Otieno et al., 2006; Tsangalis et al., 2002). This study also evaluated in vitro the influence of two commercial prebiotics (fructo-oligosaccharides and inulin) and two sugars (glucose and sucrose) on equol production from soymilk isoflavones by Bifidobacterium longum BB536 and Bifidobacterium breve ATCC 15700.
Materials
All standards (daidzein, equol and daidzin) were bought from Millipore Sigma Chemical Co. (St. Louis, USA). Soybean (Glycine max (L.) Merrill) was bought from the local market in Serdang, Selangor, Malaysia. The chemicals of analytical HPLC grade were purchased from Merck (Darmstadt, Germany). Brain Heart Infusion (BHI) broth was used for activation of the bacterial strains and was handled in compliance with the manufacturer's instructions (Oxoid Ltd., West Heidelberg/Vic., Australia). Glucose and sucrose were from Millipore Sigma (St. Louis, USA), while inulin and fructo-oligosaccharides were from Orafti Pty. Ltd. (Tienen, Belgium).
Bifidobacteria culture conditions
Unadulterated cultures of B. breve ATCC 15700 and B. longum BB536 were used. Gram staining was used to check the purity of the bacterial cultures. The standard bacterial cultures were propagated and stored in 40% glycerol at −80 °C for further use. Bifidobacteria grow anaerobically; an anaerobic environment was obtained with AnaeroGen sachets (Oxoid Ltd., West Heidelberg/Vic., Australia).
Production of soymilk
Soymilk was produced following the procedure described by Hou et al. (2000) with a few changes. Soybean grains were first cleaned and soaked overnight in distilled water. The soaked soybeans were added to ten times their dry weight of distilled water (100 g dry soybean to 1000 ml water) and boiled for 30 min at 95 °C in a water bath. The mixture was then blended for 5 min. The obtained slurry was filtered through double-layered cheesecloth (New England Cheesemaking Supply Company, South Deerfield, MA, USA) to yield soymilk. The soymilk was autoclaved at 121 °C for 15 min and stored in a refrigerator (4 °C).
Enumeration of bacterial population
Viable cell counts of B. breve and B. longum were established in duplicate using the pour plate method on BHI agar medium. Each fermented soymilk sample was added to 90 ml of sterile 0.85% (w/v) saline and vortexed for 30 s. The resulting suspension was serially diluted in 9 ml of sterile saline, and 1 ml of the appropriate dilution was used for selective enumeration by the pour plate technique. The cell growth of each organism was assessed by enumerating the bacterial population on BHI agar at 0, 12, 24, 36 and 48 h of fermentation. Only plates containing 30–300 colonies were counted, and counts were recorded as CFU per ml of fermented soymilk.
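To make the plate-count arithmetic explicit, a minimal Python sketch is given below; the colony count and dilution factor are hypothetical illustrations, not data from this study.

```python
import math

def cfu_per_ml(colony_count, dilution_factor, plated_volume_ml=1.0):
    """Convert a pour-plate count to CFU per ml of fermented soymilk.

    colony_count     -- colonies counted on the plate (only 30-300 are valid)
    dilution_factor  -- total dilution of the plated aliquot, e.g. 1e-6
    plated_volume_ml -- volume plated (1 ml in the pour-plate method above)
    """
    if not 30 <= colony_count <= 300:
        raise ValueError("plates outside 30-300 colonies are not counted")
    return colony_count / (dilution_factor * plated_volume_ml)

# Hypothetical example: 87 colonies on the 10^-6 dilution plate
count = cfu_per_ml(87, 1e-6)            # 8.7e7 CFU/ml
print(f"{count:.2e} CFU/ml = {math.log10(count):.2f} log CFU/ml")
```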
Preparation of bacterial single and co-culture inoculums
The bacterial species (B. breve ATCC 15700 and B. longum BB536) were activated in BHI medium by transferring three times in 10 ml of BHI broth and incubating at 37 °C for 20 h, followed by collecting the bacterial cells by centrifugation (3000 × g for 15 min). To obtain bacterial co-culture cell suspensions, the two cell suspensions were mixed at a volume ratio of 1:1. Inoculums of the single and co-cultures were prepared using 100 ml of sterile soymilk and incubated for 20 h at 37 °C.
2.2.5. β-Glucosidase activity assay
B. longum BB536 and B. breve ATCC 15700 were activated by incubating in 10 ml of BHI broth. Incubation was carried out at 37 °C for 20 h. Bacterial cells were collected by centrifugation at 3000 × g for 15 min. The inoculum of single culture for every bacterial strain was made with 50 ml of sterile soymilk and incubation for 20 h at 37 °C. Ten milliliters of the vigorous culture were injected into 250 ml batches of soymilk (5% v/v) and incubated for 48 h at 37 °C. Fifty milliliters were withdrawn aseptically from every inoculum at 12, 24, 36 and 48 h of incubation to measure the enzyme activity. β-Glucosidase activity of the bacterial strains was evaluated by determining the degree of hydrolysis of the substrate p-NPG, which was prepared in 100 mM sodium phosphate buffer (pH 7.0) (Millipore Sigma Chemical Co., St. Louis, MO, USA). One milliliter of p-NPG (5 mM) was added to 10 ml of each aliquot and incubated at 37 °C for 30 min (Otieno et al., 2006; Scalabrini et al., 1998). The reaction was stopped by adding 500 μl of cold 1 M sodium carbonate. The aliquot was transferred to a centrifuge tube followed by centrifugation (14,000 × g for 30 min) using an Eppendorf refrigerated centrifuge (Model 5810 R). The quantity of p-nitrophenol released was determined with a Perkin Elmer spectrophotometer (Model: Lambda 25 UV/VIS Spectrophotometer) at 420 nm. One unit of the enzyme was defined as the amount of enzyme that released 1 μmol of p-nitrophenol from the substrate p-NPG per ml per min under assay conditions.
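The unit definition above can be turned into a small calculation. The sketch below assumes a p-nitrophenol standard curve measured on the same spectrophotometer; the slope value and absorbance readings are placeholders, and dilution of the aliquot by the added reagents is ignored.

```python
def beta_glucosidase_units(a420, blank_a420, std_slope, incubation_min=30.0):
    """Estimate beta-glucosidase activity in U (umol p-nitrophenol per ml per min).

    std_slope -- slope of a p-nitrophenol standard curve in absorbance units
                 per (umol/ml); it must be measured for the instrument used.
    Dilution of the 10 ml aliquot by the added p-NPG and Na2CO3 is ignored here.
    """
    released_umol_per_ml = (a420 - blank_a420) / std_slope
    return released_umol_per_ml / incubation_min

# Placeholder readings and standard-curve slope, for illustration only
activity = beta_glucosidase_units(a420=0.85, blank_a420=0.05, std_slope=0.0065)
print(f"{activity:.2f} U")   # ~4.1 U, the order of magnitude reported in Table 2
```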
Batch fermentation conditions
The fermentation process was executed in a 1 L bioreactor, BIOSTAT QDCU3 (Sartorius BBI System GmbH, Melsungen, Germany), and temperature control was achieved using a water bath (Jeio Tech Desk Top, Seoul, South Korea) and an electronic stirrer (Gas-Col Ltd, Northvale, NJ 07647, USA). The temperature was set at 37 °C. Anaerobic conditions for fermentation were maintained by flushing oxygen-free nitrogen gas through the medium. The pH was not controlled. The stirring speed for all batch fermentations was set at 200 rpm. One hundred ml inoculums of a single culture of each bacterial strain (B. longum BB536 and B. breve ATCC 15700) in sterile soymilk were transferred to the fermenter to inoculate the soymilk in a 2-L vessel (with 1 L working volume). Samples of fermented soymilk were taken at 0, 24 and 48 h into sterile universal bottles to examine changes in isoflavone concentrations.
Sample preparation for isoflavones investigation by high performance liquid chromatography (HPLC)
Fermented soymilk (2 ml) was added to 80% methanol (8 ml) and stirred for 2 h at 25 °C. Then, the blend was centrifuged at 9000 rpm for 20 min. The supernatant was filtered through a 0.22 μm syringe membrane filter into HPLC vials and kept at −20 °C for HPLC analysis.
The HPLC gradient elution was composed of 10% acetonitrile in water (solution A) and 90% acetonitrile in water (solution B). The elution program was as follows: solution B was run at 30% for 15 min, linearly increased to 50% over 10 min, and then linearly increased to 70% over 5 min. The flow rate was 1 ml/min. A diode array UV-visible detector was set at 270 nm. UV spectra and retention times of the metabolites produced from daidzin and daidzein by the bacteria were compared with those of the standard compounds daidzin, daidzein and equol in the HPLC chromatograms.
Screening of prebiotics for equol production
Commercial sugars and prebiotics were screened for their ability to enhance equol production from fermented soymilk. They were: glucose (≥99.5% purity) and sucrose (≥99.5% purity) (Sigma, St. Louis, USA), and inulin and fructo-oligosaccharides (Orafti Pty. Ltd., Tienen, Belgium). The inulin used was Raftiline ST with a purity of 92% and an average degree of polymerization (DP) of 10. The fructo-oligosaccharide (FOS) used was Raftilose P95, which contains 5% glucose, fructose and sucrose and is otherwise composed of oligofructose with a DP ranging from 2 to 7 (average 4). One hundred ml of sterile soymilk supplemented individually with inulin, FOS, glucose or sucrose at a final concentration of 1% (w/v) was inoculated with activated cultures of B. breve ATCC 15700 and B. longum BB536 and incubated anaerobically at 37 °C for 48 h. Samples of the inoculated soymilk were taken at 12, 24, 36 and 48 h to measure the quantity of isoflavones by HPLC (see section 2.2.8).
Statistical analysis
Analysis of the results was performed using SPSS version 16. The data obtained were subjected to analysis of variance (ANOVA) and least significant difference (LSD) tests. Fisher's test was used to identify significant differences among mean values (P ≤ 0.05).
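As a rough illustration of this workflow outside SPSS, the following Python sketch runs a one-way ANOVA followed by unprotected pairwise comparisons in the spirit of Fisher's LSD; the replicate equol values are invented around the means reported later in the paper.

```python
from itertools import combinations
from scipy import stats

# Invented triplicate equol values (mmol/l, 48 h) scattered around reported means
equol = {
    "plain":   [2.21, 2.25, 2.23],
    "sucrose": [7.28, 7.35, 7.30],
    "inulin":  [11.45, 11.52, 11.50],
}

f_stat, p_anova = stats.f_oneway(*equol.values())
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p_anova:.3g}")

if p_anova <= 0.05:
    # Fisher's LSD approximated here as unprotected pairwise t-tests
    for a, b in combinations(equol, 2):
        t, p = stats.ttest_ind(equol[a], equol[b])
        flag = " *" if p <= 0.05 else ""
        print(f"{a} vs {b}: p = {p:.3g}{flag}")
```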
Cell growth during fermentation
Growth of B. breve and B. longum in soymilk during fermentation was assayed by enumerating the viable cell counts. Table 1 shows the growth pattern of B. breve and B. longum at 0, 12, 24, 36 and 48 h in soymilk during fermentation at 37 °C. The highest viable counts for B. breve (8.75 log CFU/ml) and B. longum (8.55 log CFU/ml) were reached at 36 and 24 h, respectively. These findings agreed with reports that different lactic acid bacteria strains reached high (7–9 log CFU/ml) cell populations in soymilk (Rekha and Vijayalakshmi, 2011; Chun et al., 2007). Moreover, after 48 h there was a drop in B. breve and B. longum growth, indicating the transition from the exponential to the stationary growth phase. The reduction in population was 2.47 and 2.37 log CFU/ml, respectively, over 48 h of incubation. The reduction in the growth of bifidobacteria at 48 h of fermentation is probably owing to a shortage of nutrient supply in the medium, which is strongly supported by Rekha and Vijayalakshmi (2011) and Scalabrini et al. (1998), who found that the nutrient content of soymilk is reduced at 48 h of fermentation with bifidobacteria to about one-half of the original concentration. Donkor and Shah (2008) stated that the maximum viable count took place at 12 h for L. casei L26, 24 h for B. lactis B94, and 36 h for L. acidophilus L10. However, cell growth in soymilk fermentation is influenced by the cultures and the fermentation period (Jiyeon et al., 2008).
β-Glucosidase activity of Bifidobacterium species in fermented soymilk
β-Glucosidase activity of soymilk fermented with Bifidobacterium species is shown in Table 2. Both bacterial species exhibited measurable levels of enzyme activity, and the activity differed between the two species (Otieno et al., 2005). In general, β-glucosidase activity was found to depend on time and strain. It is noticed that soymilk fermented with B. breve, which had the maximum β-glucosidase activity (4.126 U) at 36 h of fermentation, also showed the highest cell number (8.75 log CFU/ml) at 36 h. Similarly, soymilk fermented with B. longum, which had the highest β-glucosidase activity (3.935 U) at 24 h of fermentation, had a maximum cell number (8.55 log CFU/ml) at 24 h. Therefore, increased cell growth may be accompanied by an increase in enzyme activity, and there appears to be a correlation between β-glucosidase activity and growth characteristics during fermentation of soymilk. Accordingly, the decrease in β-glucosidase activity at 48 h might be due to the decline of bacterial growth at 48 h of fermentation (Table 1). These findings agreed with those of Donkor and Shah (2008), who stated that there is a parallel relationship between growth of microorganisms in soymilk and β-glucosidase activity. Otieno et al. (2005) stated that the increase in β-glucosidase activity and the subsequent decline apparently corresponded to the growth of these probiotic microorganisms in the soy media (growth results not shown); the tested bacterial strains revealed an increase in β-glucosidase activity up to 24 h of incubation followed by a reduction as fermentation progressed, and three strains of L. acidophilus and two strains of L. casei exhibited increasing β-glucosidase activity up to 24 h and a decline as fermentation proceeded. According to the results of the present research, which was intended for the screening of β-glucosidase enzyme activity of different bacterial species, B. breve ATCC 15700 and B. longum BB536 exhibited different β-glucosidase activities during incubation in soymilk for 48 h. Accordingly, β-glucosidase activity is strain dependent and differs among organisms. In addition, Donkor and Shah (2008) reported that L. acidophilus L10 displayed higher β-glucosidase activity compared with B. lactis B94 and L. casei L26. Moreover, another study found that Lactobacillus acidophilus exhibited the highest β-glucosidase activity at 24 h of fermentation in soymilk compared to Bifidobacterium spp. and L. casei (Otieno et al., 2006). Furthermore, Bifidobacterium species showed different levels of β-glucosidase yield depending on the sugar content of the cultivation media required by the species and on the phase of growth (Tsangalis et al., 2002).
Concentrations of isoflavones in plain soymilk fermented with two bacterial species
As presented in Table 3, the amounts of the isoflavone isomers were not significantly changed and equol was not found in plain soymilk.
Moreover, the level of isoflavone glucosides (daidzin) declined significantly when soymilk was fermented with B. breve. The levels of daidzin at 0, 24 and 48 h were 10.36 ± 0.02, 8.45 ± 0.03 and 7.38 ± 0.01 mmol/l, respectively. In contrast, the concentrations of daidzein increased significantly during fermentation of soymilk with B. breve. At 0 h, the concentration of daidzein was 1.48 ± 0.02 mmol/l and after 12 h of incubation it was 6.61 ± 0.02 mmol/l; it then decreased gradually owing to the production of equol. Moreover, at 0 h equol was not detected; after 12 h it was 0.56 ± 0.04 mmol/l and it then increased steadily to 2.23 ± 0.04 mmol/l after 48 h of incubation. Furthermore, once soymilk was fermented with B. longum, the concentrations of daidzin decreased significantly from 10.35 mmol/l at 0 h to 7.15 mmol/l after 48 h of incubation. In contrast, daidzein concentrations increased from 1.47 mmol/l at 0 h to 7.34 mmol/l after 24 h and then started to decrease slowly after 36 h owing to equol production. (Table footnote: a–c, means in the same column with different superscripts are significantly different (P ≤ 0.05); one unit of enzyme (U) is the amount of β-glucosidase that released 1 μmol of p-nitrophenol from p-NPG per ml per min at 37 °C.)
Effect of prebiotics on equol production
In the current research, the effects of the selected prebiotics (inulin, FOS) and of glucose and sucrose on equol production from soymilk isoflavones by different bacterial species (B. longum BB536 and B. breve ATCC 15700) were estimated. Table 3 shows the results of plain soymilk fermentation with B. longum BB536 and B. breve ATCC 15700. There was a noticeable decrease in the isoflavone glycoside (daidzin) and in daidzein in parallel with increasing equol production over fermentation time. Table 4 represents the influence of adding sucrose to soymilk on equol production. As shown, by 48 h of incubation, the B. longum BB536 and B. breve ATCC 15700 co-culture delivered a high quantity of equol (7.31 mmol/l); this amount is high compared to that produced in plain soymilk. These findings go along with those demonstrated by Wei et al. (2007), which revealed that supplementation of soymilk with sucrose for isoflavone aglycone and equol production using five strains of isoflavone-metabolizing microorganisms yielded smaller quantities of aglycones and equol than those observed when soymilk was enriched with fructose and lactose. Results for the effect of glucose addition on soymilk fermented with single and co-cultures of B. breve ATCC 15700 and B. longum BB536 for 48 h are also displayed in Table 4. The results show that there is no significant difference in the amounts of daidzin, daidzein and equol in soymilk supplemented with glucose compared to those of the plain soymilk during the fermentation time. This finding is consistent with that of Tsangalis et al. (2002), who stated that the concentrations of daidzin, daidzein and equol after 48 h incubation of four strains of Bifidobacterium in soymilk supplemented with glucose were approximately the same in supplemented soymilk and in ordinary soymilk by 24 h of fermentation. The effect of supplementation of soymilk with FOS on equol production varied between the Bifidobacterium species (Table 5). B. breve ATCC 15700 showed a high amount (4.94 mmol/l) of equol after a 48 h incubation period compared to plain soymilk. The co-culture of B. breve ATCC 15700 and B. longum BB536 showed a high level (8.63 mmol/l) of equol after a 48 h incubation period. These findings are parallel to those published by Uehara et al. (2001), who disclosed that the growth of bifidobacteria, and furthermore the transformation of isoflavone conjugates to the corresponding aglycones and equol, can be stimulated by FOS. The present results also agree with the finding that addition of FOS to soymilk significantly (P ≤ 0.05) increases β-glucosidase activity, which was most evident in soymilk fermented with L. acidophilus (Yeo and Liong, 2010), and with Ohta et al. (2002), who reported that FOS enhanced cecal β-glucosidase action and daidzein conversion to equol in both OVX and SH mice. Consequently, these findings suggest that FOS increased the growth of the bacterial species responsible for the transformation, the β-glucosidase activity and subsequently the bioavailability of isoflavones. Alternatively, Decroos et al. (2005) and Zafar et al. (2004) established that addition of fructo-oligosaccharides to the food could be a reason for inhibition of equol production.
As the digestion of FOS by gastrointestinal bacteria results in a large release of hydrogen, the presence of FOS may change the colonic microbiota and suppress the bacteria responsible for equol production, while at the same time initiating alterations in hydrogen utilization; therefore, daidzein may not be metabolized to dihydrodaidzein or equol. The present results indicate that addition of FOS and sucrose to soymilk significantly (P ≤ 0.05) increases equol production from daidzein in fermented soymilk. In contrast, Tsuji et al. (2010) confirmed that the addition of FOS or sucrose to soymilk significantly inhibited equol production by the human-isolated bacterium Slackia sp. strain NATTS. The results demonstrating the influence of inulin on the transformation of isoflavones to produce equol are shown in Table 5. It was noticed that addition of inulin to soymilk offered the highest amount of equol (co-culture = 11.49 mmol/l) among both single and co-cultures compared to the other carbohydrates added to soymilk. However, these findings differ from those established by Zafar et al. (2004), who published that the absorption and concentrations of plasma equol were affected negatively by inulin: levels of equol in serum were significantly lower in the group fed inulin relative to that fed an inulin-free isoflavone diet. Another study revealed that inulin exhibited the greatest impact in hydrolyzing malonyl daidzin, and this was most dominant in soymilk fermented by Bifidobacterium FTDC 8943 (P < 0.05); addition of inulin to soymilk significantly (P < 0.05) reduced the level of malonyl daidzin in soymilk fermented with Bifidobacterium FTDC 8943 by about 49.3% (Yeo and Liong, 2010). Moreover, a study described that ingestion of soy isoflavones with inulin for 21 days resulted in increased plasma daidzein concentrations in postmenopausal women compared with intake of soy isoflavones without inulin (Zafar et al., 2004). This indicated that inulin has an influence on the transformation of isoflavone glucosides by enhancing the growth of the colonic bacteria, thereby increasing the amount and activity of the bacterial enzymes responsible for isoflavone metabolism in the gut and, in addition, increasing their absorption and bioavailability (Piazza et al., 2007). These results agree with our finding of a high rate of conversion of daidzin to daidzein when inulin was added to the soymilk medium during the fermentation process, which made daidzein (the primary precursor of equol) more available. Table 6 summarizes the results for equol produced in fermented soymilk. The amount of equol produced by a single culture (B. breve ATCC 15700 or B. longum BB536) was less than that produced when fermentation was carried out with the co-culture of B. breve ATCC 15700 and B. longum BB536. The co-culture promotes higher rates of β-glucosidase hydrolysis to aglycones than a single bacterial culture. It may also offer nutrients and conditions that somehow preserve the viability of the other bacteria in the mixture of cultures (Garro et al., 2004).
Conclusion
Estimation of β-glucosidase activity for the bacterial species found that both species tested can generate different levels of β-glucosidase activity according to fermentation time. B. breve ATCC 15700 exhibited maximal β-glucosidase activity at 36 h, while B. longum BB536 reached its maximum by 24 h of the 48-h fermentation period in soymilk. Therefore, the hydrolytic ability and enzyme activity could be unique to each strain. These results enhance our understanding of the impact of prebiotics on equol production from soymilk isoflavones. The results established that all tested prebiotics had a significant effect on equol production, but inulin produced the highest level of equol compared with FOS. It is therefore recommended that, in order to obtain high levels of equol from soymilk isoflavones, a bacterial co-culture be used and the soymilk be enriched with inulin.
Declarations
Author contribution statement
Salma Elghali Mustafa: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Wrote the paper.
Funding statement
This work was supported by the Malaysian Government.
Additional information
No additional information is available for this paper. | 2020-10-29T09:03:20.502Z | 2020-10-01T00:00:00.000 | {
"year": 2020,
"sha1": "1aa3ea89c76da01fcfc474b4fce4cdb85c6e70e9",
"oa_license": "CCBYNCND",
"oa_url": "http://www.cell.com/article/S2405844020321411/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "da68efbe8c2ce96a1831ea0cb73cdbea9b524872",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
203985514 | pes2o/s2orc | v3-fos-license | DECA, A Comprehensive, Automatic Post-processing Program for HDX-MS Data*
The open-source software, DECA, provides comprehensive back-end analysis of HDX-MS data that addresses the recent recommendations for HDX-MS data analysis and presentation. It provides options for back-exchange correction and rigorous statistical analysis of the significance of differences in exchange.
Highlights: Open source software for comprehensive HDX-MS data analysis. Automatic back-exchange correction options. Rigorous statistical analysis of the significance of uptake differences. High quality visualization tools.
Amide hydrogen-deuterium exchange mass spectrometry (HDX-MS) has become widely popular for mapping protein-ligand interfaces, for understanding protein-protein interactions, and for discovering dynamic allostery. Several platforms are now available which provide large data sets of amide hydrogen/deuterium exchange mass spectrometry (HDX-MS) data. Although many of these platforms provide some down-stream processing, a comprehensive software that provides the most commonly used down-stream processing tools such as automatic back-exchange correction options, analysis of overlapping peptides, calculations of relative deuterium uptake into regions of the protein after such corrections, rigorous statistical analysis of the significance of uptake differences, and generation of high quality figures for data presentation is not yet available. Here we describe the Deuterium Exchange Correction and Analysis (DECA) software package, which provides all these downstream processing options for data from the most popular mass spectrometry platforms. The major functions of the software are demonstrated on sample data.
Hydrogen deuterium exchange mass spectrometry (HDX-MS) 1 probes protein structure and dynamics by measuring amide proton exchange. HDX reports on solvent-accessible surface area, protein-protein interfaces, and allosteric changes, and the data can be used to constrain docking or homology modeling (1)(2)(3)(4). Our group recently showed that HDX-MS experiments in which proteins are incubated in a deuterated solvent for seconds to minutes probe microsecond to millisecond motions in samples (5).
Two caveats limit HDX-MS analysis. First, HDX-MS resolution is limited to the length of observable peptides. Approaches to achieve single amino acid resolution with HDX-MS include using several proteases to increase peptide overlap (6), ETD fragmentation to better localize deuterons on each peptide (3,7), and deconvolution of isotopic envelopes to extract information from the peptide data itself (8,9). Second, HDX-MS suffers from deuterium back exchange during sample handling and chromatography. Several methods exist for the correction of back exchange, a necessary step for downstream data processing to obtain reproducible numbers of amides exchanged. Correction for back exchange has previously been performed using several different methods including internal standards or using fully deuterated control samples to determine the extent of uptake on each peptide (8,10,11).
To decongest complex mixtures of biomolecules, it is helpful to integrate ion mobility (IMS) and m/z data simultaneously (12,13). IMS-integrated HDX-MS uniquely excels in the study of large, complex protein samples (14,15). Waters pioneered IMS as a third dimension of resolution for HDX-MS experiments with the SYNAPT system and the Protein Lynx Global Server/DynamX software workflow. IMS provides a third independent piece of information with which to identify each peptide and markedly improves the accuracy of ion clustering and assignment of peptides from HDX-MS data (14).
As HDX-MS became more high-throughput, various efforts to automate aspects of the data analysis became available. HXExpress, a Microsoft Excel utility, was the first freely-available semi-automatic data analysis platform (16). This utility was further developed for deconvoluting overlapped peptide mass envelopes (17). Several automatic analysis platforms were developed beginning with The Deuterator and HD Desktop, which were further developed into HDX Workbench, a comprehensive fully automatic program that incorporates analysis of ETD fragments, isotopic fitting, overlapping peptide segmentation, statistical analysis, and data visualization (8). During this time, Schriemer's group developed Hydra (18), Mayer's group developed Hexicon (19), and Englander's group developed ExMS (9), all of which provide a similar list of functionalities. Realizing the need for more rigorous backexchange correction, Z. Zhang developed MassAnalyzer, a fully-automated software that generated data in the form of protection factors (10). Two commercially available software programs were also developed, HD Examiner by Sierra Analytics and DynamX by Waters. These two software programs aptly demonstrate the two cultures that have grown out of the HDX-MS community; those researchers who desire a fullyautomated platform such as that provided by HD Examiner and HDX Workbench versus those who wish to have some automation but with user-controlled examination of the raw data such as is provided by DynamX. Currently, only HD Examiner and DynamX allow import of IMS data. Several software programs also emerged that provided further data analysis once the initial centroid data was obtained. HDX Analyzer provides rigorous statistical analysis (20). MEMHDX (21) and Deuteros (22) allow downstream analysis of HDX Workbench and DynamX output data, providing further statistical analyses and data visualization. However, neither address back exchange analysis, which is missing from DynamX.
Here we present the Deuterium Exchange Correction and Analysis (DECA) software package, which was designed as a simple, rapid backend to DynamX, but can also be used for other data in .csv format. DECA provides several options for back exchange correction, resolution-increases by Overlapping Peptide Segmentation (OPS), a rigorous evaluation of the statistical significance of observed differences, and visualization tools. To our knowledge, DECA is the only freely-available software that provides all these functionalities in a single platform. In addition, DECA is able to extract detailed ion mobility, retention time, and deuterium uptake information from DynamX project files in order to measure the summary statistics of the ions assigned to each peptide, check the quality of the data through outlier analysis, and more accurately determine the statistical significance of differences between protein states than is possible with exported summary data. For data from other platforms which require back exchange correction and/or automatic analysis of overlapping peptides, DECA can be a helpful backend as well.
Here, we demonstrate the effectiveness of DECA on a representative HDX data set from DynamX and explain how it can be used for uptake data output files from other platforms. DECA is an initial attempt to provide software that will allow the HDX-MS community to comply with the recently published recommendations for performing, interpreting, and reporting HDX-MS experimental data. DECA is open-source with the intent that users will contribute additional functionalities and improvements.
MATERIALS AND METHODS
Software Design-DECA was written entirely in Python, and it implements modules from external libraries for visualization and analysis. The graphical interface implements the built-in Tkinter framework, and it was initially designed in PAGE, a Python GUI Generator. DECA performs statistical analysis of imported data by implementing functions from the Scipy and Statsmodels libraries (23). Curve fitting in deuterium uptake plots uses a linear regression function from the Scipy library. Uptake plots, coverage maps and spectra are generated through functions from the Matplotlib library (24). The retention time prediction feature implements a function from the Pyteomics library (25). The PyInstaller script is used to package all DECA code and python dependences into executable binaries for OSX and Windows (26). The packages used are PSF, BSD, GPL or Apache licensed (All free software licenses, with some share-alike copyleft restrictions). The DECA source-code is compatible with Python 2.7 and Python 3.7, which are freely available and pre-installed on UNIX operating systems. This source code is executable through the python environment upon the installation of the external libraries mentioned above. The source code and binaries are available at github.com/komiveslab/DECA.
Data Format-DECA primarily accepts tabular peptide lists of deuterium uptake data from CSV files. It was designed to import DynamX state data, HDXWorkbench data, and a third generic alternative. Sample data sets of each type are available in the supplementary information. Between the three data styles, DECA can import most tabular HDX-MS data with only minor alterations to the data set's column headers. For deeper statistical analysis and visualization of the raw spectra, a ".dnx" DynamX project file can be imported into DECA.
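For readers who want to pre-process such tables outside DECA, a minimal pandas sketch is shown below; pandas is not part of DECA, the file name is hypothetical, and the column names are only typical of DynamX-style state-data exports and may need adjusting.

```python
import pandas as pd

df = pd.read_csv("state_data.csv")   # hypothetical export file

# Column names typical of a DynamX-style state-data table; adapt as needed
required = ["Protein", "Start", "End", "Sequence", "State", "Exposure", "Uptake"]
missing = [c for c in required if c not in df.columns]
if missing:
    raise ValueError(f"unexpected header, missing columns: {missing}")

# Mean, spread and replicate count of uptake per peptide, state and exposure
summary = (df.groupby(["Protein", "Start", "End", "Sequence", "State", "Exposure"])
             ["Uptake"].agg(["mean", "std", "count"])
             .reset_index())
print(summary.head())
```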
Data Analysis-Each data set imported is evaluated to identify high relative standard deviations or errors in peptide mass values, retention times, and ion mobility values. The statistical significance is determined for every time point between each protein state in the data set. If a DynamX project file is imported, DECA interrogates every ion assigned to each peptide in order to flag outlier ion assignments and report on the spread of m/z, retention time, and mobility values for each ion cluster.
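A minimal sketch of the kind of per-time-point comparison described here is given below, using a Welch t-test from SciPy on hypothetical triplicate uptake values; DECA's own implementation may differ in the exact test and corrections applied.

```python
from scipy import stats

# Hypothetical triplicate uptake values (Da) for one peptide at one exposure
state_a = [2.10, 2.15, 2.08]     # e.g. RelA homodimer
state_b = [2.85, 2.91, 2.88]     # e.g. RelA-p50 heterodimer

t, p = stats.ttest_ind(state_a, state_b, equal_var=False)   # Welch's t-test
diff = sum(state_b) / len(state_b) - sum(state_a) / len(state_a)
print(f"difference = {diff:.2f} Da, p = {p:.2e}")
```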
Sample Data Set-HDXMS experiments were performed and published previously (27). The data set includes peptides of a single protein, RelA, in two states: the RelA homodimer and the RelA-p50 heterodimer. Deuterium exchange was measured in triplicate at 30 s, 1 min, 2 min, and 5 min. Independent biological replicates of the triplicate experiment were performed to verify the results. Peptides were identified using the Protein Lynx Global Server and analyzed in DynamX 3.0 taking advantage of the ion mobility data. The peptides were identified from triplicate MSE analyses and data were analyzed using PLGS 3.0 (Waters Corporation). Peptide masses were identified using a minimum number of 250 ion counts for low energy peptides and 50 ion counts for their fragment ions. The peptides identified in PLGS were then analyzed in DynamX 3.0 (Waters Corporation, Milford, MA) implementing a score cut-off of 7, the peptide must be present in at least 2 files, have at least 0.2 products per amino acid, a maximum MH+ error of 5 ppm, and less than a 5% error in the retention time. The relative deuterium uptake for each peptide was calculated by comparing the centroids of the mass envelopes of the deuterated samples versus the undeuterated controls following previously published methods (28) and corrected for back-exchange as previously described (29). The experiments were performed in triplicate, and independent replicates of the triplicate experiment were performed to verify the results.
Fitting of Deuterium Uptake Plots-A curve-fitting algorithm was implemented based on a least-squares minimization fit of the double-exponential function y = a*(1 − e^(−b*t)) + c*(1 − e^(−0.01*t)) to the HDX data. Constants a and b reflect the asymptote and rate of uptake by fast exchange, whereas constant c scales the contribution of uptake by slow exchange. If the timepoints do not cover the curvature of the first exponential, the minimization may overfit the data. In such cases, the first exponential is minimized so that y(t = n/4) = (3/4)*y(t = n), where n is the lowest non-zero time point, to solve for the value of b that creates a curve that passes through the point at 3/4 of the uptake in 1/4 of the exposure of time point n. This curvature adjustment may be modified in the global settings. Constants a and c are subsequently minimized with the new b value to create a smooth curve to the first non-zero time point. Notably, in such scenarios where the data does not cover both sides of the inflection of the uptake curve, the curve fit does not reflect the rate of exchange of the peptide. DECA makes no attempt to extract the rates of exchange of individual amides from the peptide uptake curve. Such estimates are usually unreliable.
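The double-exponential model quoted above can be fitted directly with SciPy's curve_fit; the sketch below is illustrative (the exposure times and uptake values are invented), and it does not reproduce DECA's curvature-adjustment heuristic.

```python
import numpy as np
from scipy.optimize import curve_fit

def uptake_model(t, a, b, c):
    # fast phase with fitted rate b, slow phase with fixed rate 0.01
    return a * (1 - np.exp(-b * t)) + c * (1 - np.exp(-0.01 * t))

t = np.array([0.5, 1.0, 2.0, 5.0])      # exposure times (invented)
y = np.array([1.9, 2.4, 2.8, 3.1])      # deuterium uptake in Da (invented)

popt, _ = curve_fit(uptake_model, t, y, p0=[2.0, 1.0, 1.0], bounds=(0, np.inf))
a, b, c = popt
print(f"a = {a:.2f}, b = {b:.2f}, c = {c:.2f}")
```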
RESULTS
Graphical User Interface-The DECA interface enables rapid review and visualization of HDXMS data (Fig. 1). From the main window, the user can import a data file, which will display the contents of the file in a spreadsheet and enable data correction and visualization functions. Deuterium uptake plots are displayed adjacent to the data table, whereas additional figures will be displayed in separate windows. Diagnostic messages and errors are printed to a console window that opens with the main interface.
File Merging-HDX experiments from a single study may be analyzed separately because of gaps in time between the experiments, changes to the instrumental setup, or in order to reduce the computational demand. Because data sets may contain redundant or complimentary data, DECA may be used to intelligently merge data sets together by combining data or renaming proteins/states. Additionally, File Merge may be implemented either before or after performing back exchange correction on separate files in order to compare data sets with different levels of back exchange.
Back Exchange Correction-DECA is designed around solving the need to perform back exchange correction on hydrogen-deuterium exchange data before further processing or visualization. Two distinct forms of back exchange may influence deuterium uptake. Back exchange can occur from the quench to the point of sample injection into the mass spectrometer, and this can vary from peptide to peptide because of differences in retention time. The most accurate back exchange correction comes from the use of a fully deuterated control sample in order to calculate the level of back exchange for each peptide.
Correction Factor = Uptake_toc / MaxUptake
Corrected Uptake = Uptake / Correction Factor
The second form of back exchange is a systematic time point-dependent difference resulting from different liquid handling procedures for shorter and longer times of deuterium exposures. When a LEAP robot is used for sample preparation, for timepoints shorter than 2 min the mixing syringes skip a step resulting in a slightly lower back exchange.
Correction Factor (t > t_maxD) = (Uptake_maxD − Uptake_t) / Uptake_maxD; Correction Factor (t ≤ t_maxD) = 1.
Back Exchange Correction is implemented in DECA with two settings: (1) a global correction factor or a fully-deuterated exposure correction factor per peptide, and (2) the Long Exposure Adjustment Patch, a set of exposure correction factors which are applied universally to every peptide in the data set. These two settings may be used separately or in tandem. Back exchange corrected data can be saved and reimported as desired. Fig. 2 demonstrates the necessity for both forms of back exchange correction. A perceptible difference of nearly a half deuteron from the first non-zero time point to the final time point demonstrates clear time point-dependent back exchange. Following global back exchange correction, LEAP correction values should be obtained from the peptide with the greatest difference between the maximum observed uptake, in this case 3.5 Da at 30 s, and the uptake at the final time point (or fully deuterated control). This should correspond to the peptide with the highest fractional uptake (typically a spiked-in control peptide or a peptide corresponding to a completely disordered region of the protein). The correction factors for timepoints longer than the time point with the highest exchange will be set to adjust the uptake to the highest exchange value. The correction factors for each time point shorter than that with the highest exchange will be set to keep their ratio to the maximum the same. These correction factors are then applied to all other peptides.
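A compact sketch of the two correction modes described above is given below. The fully-deuterated-control correction follows the first pair of equations; for the exposure-dependent (LEAP) factors only the factor definition is taken from the text, and the particular way the factor is applied to other peptides' uptake is an assumption.

```python
def fd_corrected_uptake(uptake, fd_uptake, max_uptake):
    """Per-peptide correction from a fully deuterated (FD) control:
    factor = FD uptake / maximum theoretical uptake; corrected = uptake / factor."""
    correction_factor = fd_uptake / max_uptake
    return uptake / correction_factor

def leap_factors(reference_uptake_by_time):
    """Exposure-dependent factors from the reference peptide (the one with the
    highest fractional uptake): (u_max - u_t) / u_max for t > t_max, else 0."""
    t_max = max(reference_uptake_by_time, key=reference_uptake_by_time.get)
    u_max = reference_uptake_by_time[t_max]
    return {t: (u_max - u) / u_max if t > t_max else 0.0
            for t, u in reference_uptake_by_time.items()}

def apply_leap(uptake_by_time, factors):
    """Assumed application: scale each exposure's uptake up by its factor."""
    return {t: u * (1 + factors[t]) for t, u in uptake_by_time.items()}

# Illustrative numbers only
print(fd_corrected_uptake(uptake=2.4, fd_uptake=5.6, max_uptake=8.0))
factors = leap_factors({0.5: 3.5, 1: 3.4, 2: 3.3, 5: 3.2})
print(apply_leap({0.5: 1.9, 1: 2.4, 2: 2.8, 5: 3.1}, factors))
```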
Generation of Coverage Maps from Back-exchange Corrected Data-At this point, the heat map and uptake plots representing the complete data set can be rapidly generated in DECA. We recommend plotting the coverage map of all the final peptides for which data is analyzed and presented in the manuscript as also recommended recently by the HDXMS community (30). DECA provides additional functions allowing visualization of deuterium uptake, fractional uptake, standard deviation of uptake, or relative standard deviation of uptake, for all real peptides in the data set according to sequence position. The coverage maps reveal the sequence coverage as well as the redundancy or average number of peptides that cover each amino acid. Several colormaps are available to choose from, and the range of the color assignment can be manipulated to highlight features in the data set.
Generation of Publication-quality Deuterium Uptake Plots from Back-exchange Corrected Data-A key feature of DECA is the generation of deuterium uptake plots after back exchange correction (Fig. 2). Plots generated by DECA allow accurate comparison of deuterium uptake for different regions of a protein sequence, which requires prior correction of back exchange. The deuterium uptake plots generated by DECA show the deuterium-uptake for each peptide, with time on the x axis and deuterium uptake in Da on the y axis with the maximum corresponding to the maximum possible uptake. Publication quality deuterium uptake plots generated in DECA have various features such as automatic plotting of error bars, plot annotation with the peptide sequence and residue numbers, and options for data symbol types and colors. The data are most readily visualized by scrolling through the peptides viewing each uptake plot.
In the data set used as an example here, deuterium uptake into the RelA homodimer was compared with uptake into the RelA-p50 heterodimer. We wanted to compare the uptake into both RelA and p50 as well. These data were collected months apart and required file merging to present both the p50 uptake and the RelA uptake in a manner where they could be directly compared. This process, which took several weeks to be done manually took only one hour when done with DECA.
Comparison of Overlapping Peptides for Increased Resolution of Deuterium Uptake Data-DECA contains a feature called Overlapping Peptide Segmentation (OPS) which computationally increases the sequence resolution of the data. HDX-MS is limited by the size of peptides observable on a mass spectrometer, which usually is in the range of 10 -30 amino acids. As a result, HDX data is spread over a large sequence range such that the uptake values may not always localize the exchange events effectively. Because of the use of nonspecific proteases for HDX-MS proteolysis, however, overlapping peptides are often produced. OPS exploits overlapping peptides to assign better-resolved uptake values to the non-overlapping regions in a manner similar to that previously described (31) (Fig. 3, Table I). This propagates error, however, and may lead to a mischaracterization of the data, so DECA implements this OPS only once per data set. In other words, the function may not be repeatedly applied to continue generating overlaps from overlap peptides until there are no new peptides being generated. HDX-MS analysis programs such as HDXWorkbench, HDsite and HRHDXMS offer similar features.
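The idea behind OPS can be illustrated with a small difference calculation between two peptides that share a start residue; the uptake values are invented, and errors are propagated in quadrature, which is one common choice rather than necessarily DECA's.

```python
import math

def ops_difference(long_pep, short_pep):
    """Each peptide is (start, end, uptake, sd); both must share the same start."""
    s1, e1, u1, sd1 = long_pep
    s2, e2, u2, sd2 = short_pep
    assert s1 == s2 and e1 > e2, "peptides must share a start and differ at the end"
    segment = (e2 + 1, e1)                       # residues unique to the long peptide
    return segment, u1 - u2, math.sqrt(sd1**2 + sd2**2)

# Invented uptake values (Da) for two overlapping peptides
seg, uptake, sd = ops_difference((10, 25, 6.4, 0.15), (10, 20, 4.1, 0.12))
print(f"residues {seg[0]}-{seg[1]}: {uptake:.2f} +/- {sd:.2f} Da")
```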
Visualizing Highest Resolution Deuterium Exchange Data-Heat maps, butterfly plots and pymol scripts for coloring 3D structures are generated in DECA using the highest resolution data at each residue by assigning the smallest peptide covering each amino acid to that position, including the OPS analysis, if performed (Fig. 4). The visualized data is subsequently taken from the assigned peptide at each position. Heat maps display data in the same format as coverage maps but with only a single value per residue instead of showing multiple peptides covering each position (Fig. 5A). Butterfly plots consist of two line plots for the comparison of two states (Fig. 5B). OPS can be used to isolate sites of deuterium uptake and visualize them in these types of plots.
The PyMOL Script function creates a script that can be imported into PyMOL to easily assign data values to the b factor of each residue of a protein structure. This script simultaneously assigns a color gradient and range to visualize the differences by replacing the b factor data column (Fig. 5C). Importantly, this feature implemented in DECA considers the back-exchange-corrected uptake amounts which is important for the colors to be comparable across the protein molecule.
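A sketch of what such a generated script might look like is shown below; the residue values and output file name are hypothetical, while `alter` and `spectrum` are standard PyMOL commands used to overwrite b-factors and apply a color gradient.

```python
# Hypothetical per-residue uptake differences (Da) after back-exchange correction
uptake_by_residue = {19: -0.12, 20: -0.35, 21: -0.41, 22: 0.02}

with open("color_uptake.pml", "w") as fh:
    fh.write("alter all, b=0\n")                          # clear existing b-factors
    for resi, value in uptake_by_residue.items():
        fh.write(f"alter resi {resi}, b={value:.3f}\n")   # store the data in b
    fh.write("spectrum b, blue_white_red, minimum=-0.5, maximum=0.5\n")
```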
Statistical Significance-DECA produces a report on the consistency and accuracy of ion assignments upon import of a DynamX project file. DECA flags outliers in each ion cluster and calculates the standard deviation of the m/z, retention time, and mobility values for every charge state and every replicate. In the example data sets, assigned ions were within 5% of the mean retention time and mean mobility per peptide. 93-98% of all assigned ions had a search error below 10 ppm. DECA makes such details accessible through the GUI in DECA or exported as a spreadsheet. The raw spectra can be visualized along m/z, retention time, and mobility dimensions.
FIG. 4. Peptide-to-Residue Assignment. In order to generate one-dimensional, residue-resolved data needed for heat maps, butterfly plots, and structure-coloring scripts, residues are assigned the data from the most representative (smallest) peptide covering that region of the protein.
Data merged or processed by back-exchange correction or peptide recombination may be saved into a comma-separated value (CSV) formatted spreadsheet mimicking an import format. Exported spreadsheets additionally contain information about any back-exchange correction performed on the data set which complies with community recommendations (28).
DISCUSSION
HDX-MS is a rapidly growing technique, yet back-exchange, noisy data, instrumental drift, and poor peptide resolution limit the information content of this type of data. Significant additions to the HDX-MS workflow over the last decade include back-exchange correction, analysis of overlapping peptides, and extraction of data from isotopic envelopes. Waters' DynamX uniquely enables the study of high complexity data sets that otherwise would be limited by spectral overlap through the implementation of ion mobility. Because IMS enables extension to much larger data sets, an automatic downstream data analysis tool is required to not only ascertain the statistical significance underlying large data sets but also to prepare back-exchange corrected uptake plots and PyMOL scripts in a seamless and rapid manner.
FIG. 5. Visualizing and Comparing Uptake Data. Several options for the visualization of HDX-MS data are provided in DECA. The full coverage map including uptake information may be generated, or, through Peptide-to-Residue assignment, the data can be displayed in one of the three formats shown above. Heat Maps (A), PyMOL coloring (B), and Butterfly Plots (C) can visualize fractional uptake differences between protein states.
Here we present DECA, a feature-rich data analysis backend that provides many of these functionalities in an open-source, cross-platform package. Although ion mobility enables the separation of otherwise-overlapping spectra, automatic processing of large data sets in DynamX can result in occasional incorrect assignments, and manual correction is time-consuming and can sometimes leave the incorrect assignments undetected. DECA performs statistical evaluation on DynamX assignments and implements well-established back exchange correction as well as the Long Exposure Adjustment Patch, which corrects for systematic time point-dependent differences we have observed when the LEAP robot is used for sample preparation. DECA enables overlapping peptide analysis that has been previously implemented to take advantage of high redundancy and sequence coverage to generate virtual peptides with higher resolution. DECA can subsequently export the analyzed, filtered, and corrected data to a spreadsheet, or it can produce publication-ready visuals, including 2D and 3D spectra, deuterium uptake plots, coverage maps, heat maps, butterfly plots, and pymol scripts from the back-exchange corrected data.
DECA is developed entirely in Python and compiled into executable binaries compatible with macOS or Windows. DECA can also be run directly from the source code on any computer with Python installed, enabling developers to modify and improve the software. Both the source code and the executable files are available at https://github.com/komiveslab/DECA. Acknowledgment-We thank Dominic Narang for use of his previously published data.
DATA AVAILABILITY
The sample data set raw files, DynamX project, and peptides list are available at the MassIVE repository (massive.ucsd.edu) under data set ID: MSV000084200.
FIG. 6. Analysis of Variance Determines Significance. Analysis of Variance tests are implemented in DECA to identify statistically significant differences between deuterium uptake in each peptide from different protein states. A, Uptake plot for a peptide from two different states with a deuterium uptake difference of 0.76 Da at 5 min. A t test was used to determine that the difference is significant with a p value of 2.7 × 10^-3. B, Confidence intervals can be calculated and plotted in DECA which illustrate the confident difference between the two states.
"year": 2019,
"sha1": "3fceaf683fd103c96b82b4e7759f7787d3ee7712",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1074/mcp.tir119.001731",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "ae1b824de58603dbdd82fd280917d4849abcce43",
"s2fieldsofstudy": [
"Chemistry",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
241692265 | pes2o/s2orc | v3-fos-license | Private Security in Ethiopia: Key Challenges and the Ways Forward
This study sought to explore private security in Ethiopia, focusing on the challenges that hinder it from providing effective security services to its clients. To this end, the study employed a qualitative research method. Data were generated from both primary and secondary sources using in-depth interviews, focus group discussions, observation, and document review. Primary data were purposively gathered from lawmakers, security service providers, law enforcement organs, and private security service users, identified as the major actors with direct relevance to the subject under investigation. Secondary data were drawn from pertinent legislation and other materials. Thematic analysis was used to analyze the data. The study found that the major challenges facing private security are the lack of comprehensive legal and policy frameworks, the absence of institutional arrangements, and the lack of standardized training and education. Given these, the researcher, among other things, suggested strong normative frameworks, institutional arrangements, and standardized training along with periodic evaluations.
Introduction
The private security (PSC) industry is becoming one of the key players in the security arena. It plays a significant role in guaranteeing and maintaining the peace and security of the public at various levels. Nowadays, private security personnel are coming to outnumber the police. As Lemon (2018:4) states, "in the private security industry more than a million officers contribute daily to crime prevention and reduction". These personnel have played a great role in providing a partial solution to improving the security situation. The size of the private security industry and the role it plays in the national security systems of the countries concerned are extremely significant.
In the private security industry, relevant professional knowledge and skills are required so that private security personnel can effectively accomplish their roles. These skills are attained through continuous and robust professional training. As Olonisakin, Ikpe, and Badong (2009:15) argued, training improves the tactical and operational competence of these agencies and thereby contributes to the required security service. Given the nature of undertakings in the field of PSC, it is essential that a person has adequate practical and theoretical knowledge and psychological readiness before joining the industry. This helps security personnel to perform their security tasks legally, ethically, and professionally and reduces the possible dangers the personnel may pose as a result of needless deeds. Proper and continuous professional training can increase the quality of services and performance; it allows security personnel to acquire sufficient knowledge and experience in the field and enables them to develop self-confidence and provide security and safety services competently (Akinade, 2019). According to George (2020), security personnel training provides pertinent information that helps them to respond to incidents, gives them a better understanding of the legal standards and limitations that enable them to observe their duties and rights, and opens access to career options in the industry. "As private security becomes more involved in public patrols, interaction with citizens, securing sensitive installations, and engaging in more frequent detentions and arrests, it becomes paramount that the industry is expected to train their security personnel in an appropriate manner" (Law Commission of Canada, 2006:103). There should be a mechanism to ensure that everyone undergoes proper security training before being granted a license to work as a security practitioner.
Studies have shown that the growth of the private security industry has brought many problems as far as entrenching peace and security is concerned. Private security compromises national and human security. Furthermore, this industry is often criticized for its lack of institutional arrangements and lack of adequate recruitment and training standards, including problems of certification and regulation of the behaviors and practices of recruits. People engage in the industry without proper initial training and relevant experience. The duration and content of training courses also vary across countries. According to Griffiths et al. (2010), some initiatives have been taken by a few countries and security companies, but the degree of enforcement is insignificant. As a result, companies, employees, clients, and society in general do not benefit from these initiatives.
In Ethiopia, PSCs are now becoming an important partner that complements police efforts. At the moment,
Literature Review
2.1 Private Security: Explained
The term private security is highly contested. As a result, numerous definitions have been used by various scholars. The private security industry is made up of entities of various forms, ranging from multinational companies to individual contractors. In regulating the industry, it is left to each country to define private security providers and set out what type of entities and services are covered by its regulation. The definition hinges on the local context and the services performed within the respective country's jurisdiction (The Danish Institute for Human Rights, 2019). A few model definitions are discussed in this article. To Shah (2016), private security means the protection of property, people, and information by the private sector without the help of the government. This includes protection in case of manmade and natural catastrophes, including terrorist attacks and natural disasters. Private security also refers to the individuals providing all types of security-related services, such as investigation, guard, patrol, detection, alarm, and armored transportation, all aimed at crime prevention and detection; the term mainly indicates a security service provided by a person other than a public servant. It protects or guards people or property or both, and includes the provision of armored car services (Anicent, 2014). Similarly, Abrahamsen (2011) defines a 'private security agency' as a person or body of persons, other than a government agency, department, or institution, engaged in providing private security services such as training to private security guards or their supervisors, or providing security guards to any industrial or business undertaking, a company, or any other person or property.
Hence, the term private security embodies a wide array of institutions such as security guard companies and investigative services among others. The security personnel hired by these companies can be armed or unarmed, can be employed as either in-house or contract employees and can have different powers, depending on where they work and what responsibilities they fulfill. The term private security is not defined in Ethiopia.
Institutional and Normative Frameworks vis-à-vis Private Security in Ethiopia
The role of the FDRE Attorney General and the Federal Police Commission (FPC) is pertinent, as private security providers are actively engaged in one of the major policing tasks. At the federal level, the Attorney General has the power to supervise the overall activities of policing tasks, including crime prevention and investigation, by directing the FPC. In this regard, the Attorney General may play its role in deciding who must or must not engage in the service of crime prevention, as an appropriate law enforcement organ directly involved in the enforcement of the law (Federal Attorney General Establishment Proclamation No. 943/2016, Articles 5 and 6).
Under the Attorney General, the FPC is also established by law to serve the public, to respect and ensure the observance of human and democratic rights, and to maintain the peace and welfare of the public. Among other things, the FPC is entrusted with the general objective of maintaining the peace and security of the public by complying with and enforcing the Constitution and other laws of the country, and by preventing crime through the participation of the community. Further powers and functions of the FPC that have particular relevance to the context of security services are also set out under the Ethiopian Federal Police Commission Establishment Proclamation No. 720/2011. For instance, under its Article 6, sub-Articles 9 and 10, the FPC has the power and duty to safeguard institutions of the Federal government; to provide security protection to higher officials of the Federal government and to dignitaries and diplomats of foreign countries; and to install CCTV cameras at the proper places to prevent and investigate crimes. Under Article 7(16) of the Ethiopian Federal Police Commission Establishment Proclamation No. 313/2003, the FPC can delegate its powers and functions for law enforcement tasks to other institutions when it deems necessary. One can argue that it is in this context that the private security industry can provide security services to private institutions like banks and insurance companies, hotels, different industrial sectors, and so on. Article 6(28) of Proclamation No. 720/2011 states that the FPC issues certificates of competence to private institutions to enable them to take part in providing security services.
The FDRE National Crime Prevention Strategy (2020) has also recognized private security institutions as an important partner in law enforcement activities. Private security institutions, along with the police and the community, are required to perform crime prevention activities. Yet the role of these institutions in crime prevention is not indicated. Hence, the role of the private security industry is to some extent legitimized under the Ethiopian legal infrastructure. Nevertheless, the role the security industry can play is mentioned nowhere; in other words, the roles that the FPC can delegate to these agencies are not indicated.
The FPC also prepared a working guideline through which PSCs are governed. Guideline 01/2011, inter alia, insists that PSCs should have a training syllabus and be obliged to send their personnel for proper training. According to the guideline, PSC personnel must complete 90 days of training under category I and 15 days under category II (Article 7 of the Police Guideline). However, this does not amount to the existence of a national standard for the training of private security guards in the country. The Guideline also does not specify who (the police, the company itself, or any other institution) conducts this training. In addition, the standard curriculum and module, including the contents of the training and its quality, are not mentioned. As a consequence, the duration and content of training for recruits and the quality of trainers in the industry are left to the discretion and capability of the various PSCs. During the preliminary observation, it was also observed that the training facilities of some companies are poor and below standard.
In the 18th session of the United Nations Commission on Crime Prevention and Criminal Justice (UNCCPCJ), the Secretary-General requested the member states to examine the role played by private security agencies in their jurisdictions. Later, in its 20th session, an analysis of the replies provided by member states was presented to the Commission; as a result, most of the states noted that private security industries had a role in policing, such as crime prevention and community safety. Most of these states had adequate normative frameworks and institutional arrangements on private security, along with adequate monitoring mechanisms; however, some noted deficiencies in this regard (UNCCPCJ, 2009).
The International Code of Conduct for Private Security Service Providers (ICOC, 2013) is also relevant here. The signatories to this code were private security service providers, mainly from the UK, USA, Sweden, Canada, South Africa, and India. No private security company from Ethiopia has signed the code of conduct, nor has the Ethiopian government yet enacted a proclamation or regulation on the matter. The Code regulates both private security companies and private security guards. Its preamble affirms that, in providing security services, the activities of private security companies can have positive and negative impacts on their clients, the local population in the area of operation, the general security environment, and the enjoyment of human rights and the rule of law.
Research Method
The study employed a qualitative research method. The qualitative approach helps to understand the real-life setting and allows the active involvement of the research participants (Creswell, 2007, p. 40; Yin, 2011). As it allows for the possibility of gaining significant knowledge about the problem under investigation, this study used a case study approach (Yin, 2003). The study setting was selected purposively, drawing on the researcher's knowledge of the population in relation to the research objectives. The researcher collected data from respondents with better knowledge of and experience with the problem (Creswell, 2007).
Specifically, this study used an exploratory case study approach. As Creswell (2007) stated, the major data collection tools in a qualitative approach include interviews, observation, and document review; a case study design involves detailed, in-depth data collection drawing on multiple sources of information (2014). Both primary and secondary data sources were used to collect pertinent data. To draw adequate conclusions, the researcher used various data collection tools: in-depth interviews with key informants, focus group discussions, direct observation, and document review.
As stated above, the researcher employed a purposive sampling technique to select the major institutional data sources. Accordingly, lawmakers, security service providers, law enforcement organs, and private security service users were identified as the major sectors with a direct connection to the subject under exploration. A total of 65 respondents were selected purposively for interviews (36) and FGDs (29): 34 police and civilian government officials from the concerned ministries and bureaus; five representatives of in-house/proprietary security institutions; 18 from private security companies; and eight from service recipients.
Results and Analysis
Professionalism is an important element in the provision of private security services; hence, any security-related task must be undertaken with the necessary security knowledge, skill, and competency. Defining the professionalism of private security requires a code of ethics, security experience, proper education, and training (Fischer et al., 2008). This means that security personnel need to receive proper training regarding their duties as well as the ethical standards and professional conduct expected of them, in addition to those developed by the companies. Being a security professional means displaying competence in one's area of expertise and striving to demonstrate the core values and competence of the profession (Lawrence, 2017). As ISS (2017) states, "a security guard is professional and ethical if he/she does what is considered to be morally sound and acts in a manner one would expect him/her" (para. 2). Today, private security is moving towards professionalism, and several efforts have been made by participants in the industry.
Despite some efforts to professionalize PSCs, there are major obstacles to overcome. One such problem is the training and education of security guards, many of whom are poorly paid and undertrained. Some minimal standards exist in different places; however, countries and companies are often unwilling to educate and train these personnel adequately. Considering the importance of private security personnel in the crime prevention effort, they are expected to be provided with a minimum standard of training. However, this is not the case, since most of these personnel receive very little training compared with their public-sector counterparts (Inter-State Security, 2017).
As confirmed by the respondents, the Ethiopian Private Security Industry (EPSI) lacks professionalism. For example, one police interviewee stated that currently any person who has a degree or diploma can register, obtain a license to establish a security firm, and practice security-related tasks. He added that there is a common practice of recruiting incompetent individuals to perform specialized tasks that require special skills and training. Another police interviewee stated that most private security guards in Ethiopia have a very low educational background, because the job is regarded as low-level work requiring no profession or skill except physical fitness. According to most FGD respondents and key informants, the majority of people in the industry are poorly educated, with schooling limited to primary level or less. Today, the sector accommodates almost anyone, since the country's high unemployment rate has left job seekers looking for any employment that brings income to sustain the household.
As noted above, lack of adequate training is one of the challenges facing the PSC industry in Ethiopia. Currently, many PSCs operate in Ethiopia with large workforces. Despite this, there are no specific standards and policies governing the training activities of the companies or their personnel. As one private security company owner replied, the quality and content of training offered by these companies differ significantly from one to another. One police interviewee confirmed this: most PSCs in Ethiopia offer some security-related courses for their personnel, but many deploy security personnel to duty with little or no knowledge of security issues. As the FGD discussants replied, this problem mainly concerns unregistered PSCs, yet several registered big security firms also show little interest in providing periodic training for their personnel. They further claimed that this problem has contributed to the incompetence, lack of professionalism, and inefficiency characterizing many private security personnel in Ethiopia. Under the existing system, the FPC is the organ responsible for licensing private security companies in Ethiopia, and, as discussed above, its working Guideline 01/2011 requires PSCs to have a training syllabus and to send their personnel for training (90 days under Category I and 15 days under Category II), but it establishes neither a national training standard, nor who conducts the training, nor its contents, quality, curriculum, or modules. As a consequence, the duration and content of training for recruits, and the quality of trainers, are left to the discretion and capability of the various PSCs. During the fieldwork, it was also observed that the training facilities of some companies are poor and below standard.
Due to the aforesaid problems, each PSC has adopted its own training procedures for its personnel. Currently, PSCs have failed to coordinate their efforts and to harmonize the content of the training they provide. As most of the respondents described, this is due to the absence of a comprehensive and unified training standard for PSCs. Since each PSC administers its own training, the quality of security service varies, and no one can be sure whether security personnel undergo rigorous periodic training. While the FPC is duty-bound to inspect training, it does not do so properly. Police Guideline 01/2011 mandates security guard certification, but actual practice shows that no one abides by this rule. As one police interviewee revealed, each PSC offers training with its own methodologies and philosophies, with the result that personnel engage in private security service without acquiring common techniques, ethics, knowledge, and skills.
As one police interviewee stated, many security companies have designed training packages of less than eight hours for their security personnel, and this training is provided by way of orientation. In most cases, on-the-job training is rare, and security personnel start work without adequate information about their engagement in the industry. Similarly, the focus group discussants reported that several PSCs recruit staff with questionable backgrounds; many of these personnel lack training and licenses and are deployed for security work without proper background checks.
According to police officials and industry experts, several security companies invest little or nothing in training their employees. Many of these companies shape their subjects and courses in line with the owner's interest, and the courses are mostly given either by the employers themselves or by lower-level staff such as supervisors. In this case, the quality of instruction is questionable, since these people do not have the required knowledge and skill in the area. As one police interviewee stated, these situations allow inexperienced security practitioners to enter the industry. One private security company owner described how his company had employed several personnel over the last five years; however, these often "appear unprofessional since they lack the requisite skills and knowledge both at individual and the institutional levels".
This lack of standard training does not adequately prepare PSC employees to competently provide private security services. Their training does not equip PSC personnel to respond to more complex tasks and emerging security threats such as organized crime and terrorism. It also makes them vulnerable to manipulation, "as they have no bargaining power for higher salaries or better working conditions nor are they aware of their labor rights" (Vinko, 2013, p. 11). Today, criminals make the task of security providers very tough through the emergence and use of advanced technologies. To counter this, security providers and their personnel must undergo proper training and make sure that they have the necessary knowledge, skills, and experience. The quality and skills of security personnel have a positive influence on the quality of security provision, and where proper training and licensing are absent, professional standards will be low. In this respect, Akinade (2019, p. 4) argued that the standard of the profession will suffer because of a lack of relevant experience among the guards engaged in security practice. Lack of experience can lead to clumsy and substandard performance, resulting in serious mistakes and inadequate professional practice; it can also reduce self-confidence, which in turn decreases motivation and morale.
According to Inter-State Security (ISS) (2017), the development of specialized skills and training is crucial for the development of professionalism. It is therefore important for companies to ensure that security employees have all the basic training that qualifies them to serve as professional private security officers. It is also essential for the state to introduce changes in the training requirements for all security practitioners before licensing them. For Akinade (2019), this move aims to improve the profession, knowledge, and skill of individual security personnel, thereby increasing satisfaction on safety matters for both security providers and users. A high commitment to professional values and occupational integrity is an essential component of the true test of professionalism in the security industry, so companies and their personnel must adhere to a professional code of ethics. Even without the government's frameworks, PSCs and security practitioners can set standards of their own to increase professionalism in the industry.
Regarding the institutional framework, Hans (2007) explained that in some countries the issue of training is regulated by the ministry of the interior, as in Spain. In others, companies handle the training themselves and the training they give is deemed sufficient, as in Italy. Some other countries prefer an autonomous body to handle the matter. In South Africa, different security institutions recognized by the Private Security Industry Regulatory Authority (PSIRA) provide training for PSC personnel. In Kenya, the Private Security Training Academy (PSTA) is responsible for the provision of security training (Usalama Reforms Forum, 2019). Interviewed PSC managers and police officers recognized an enhanced training package as the most significant need of the industry. Most of these respondents were of the opinion that there should be certified training centers and approved courses for PSC personnel to harmonize and raise standards. This strongly suggests that adequate and proper training should be provided by an autonomous institution, preferably a certified and well-recognized one, rather than by the PSCs themselves. However, no such systems and models exist in Ethiopia.
To qualify for the work, PSC personnel must undergo meaningful training and enhance their professional development. In this regard, standardized pre-assignment training and on-the-job training are of paramount importance. According to UNODC (2014), several issues need to be considered by the state when setting training standards for civilian private security providers: the contents of basic training, the mandatory number of hours, and the types of refresher courses must be identified. This enables states to make sound decisions on the issue and to create an effective training regime. As can be understood from the foregoing, the nature of PSC services calls for front-line practitioners with knowledge and skill in the subject area. In this respect, UNODC (2014) suggests that PSC personnel must cover the following basic topics before deployment: their role (when, how, and its limitations) and the use of security equipment and devices such as security alarms, screening equipment, and radios (p. 65). Though it varies greatly from country to country, there should also be a minimum number of training hours. In the USA, the states are responsible for formulating training standards, and the compulsory training hours for security personnel vary from eight hours in Washington to 40 hours in California. In Europe, the standard differs from country to country: "a security guard needs 320hrs of training in Hungary, 288hrs in Sweden, 180hrs in Spain, 127hrs in Belgium, 70hrs in France, 40hrs in Bulgaria and Germany" (p. 72). Looking at the African experience, security personnel train for 15 to 30 days in Kenya and two weeks in Tanzania, to mention some. Compared with other countries' experience, the 90-day training in Ethiopia is too long and thus needs some adjustment.
Similarly, it is important to reiterate that private security personnel should have up-to-date knowledge and skills. To enhance service efficiency and maximize outputs, periodic upgrading of employees' skills is paramount. One such method is on-the-job training, which can improve the skills and knowledge of security personnel and increase their level of performance. In this respect, Button (2012) provides different models that states can apply. In South Korea, for example, the law obliges each security employee to take monthly refresher training on different topics and subjects. In Belgium, there are compulsory eight-hour refresher courses on law and security matters every five years. Similarly, guards must take a two-day refresher course every year in the United Arab Emirates. Moreover, states must create training standards for specialized areas of expertise, since these require additional knowledge and skill; examples include critical infrastructure security, crowd management, VIP protection, and cash transport security. One thing that must be noted here is that states must ensure that all security personnel, including supervisors and managers, complete the course as part of the licensing requirement. This gives them the same operational security know-how as frontline security workers and supports the proper management of their subordinates.
From the overall analysis of this study, private security in Ethiopia faces many challenges with respect to training, normative frameworks, institutional setup, and professionalism. This absence of standards poses major challenges to PSCs and their personnel, drastically reducing their capacity to effectively provide the essential services that meet clients' security needs. It also curtails the bargaining capacity of security personnel in exercising their rights. Hence, there is a need to develop feasible policies and strategic options that meet international standards as well as clients' expectations.
Conclusion and Ways Forward
The study unveiled that the lack of legal and policy frameworks, the lack of standardized training and regular supervision, and the lack of coordinated training between private security agencies and the police are among the major challenges hindering effective security service to clients. The study uncovered that PSCs and their personnel lack professionalism in the security field. Currently, there is no comprehensive law or standard governing the training of security personnel in Ethiopia. Different methods and curricula are offered to PSC staff; however, these do not adequately prepare PSC employees to discharge their expected duties and drastically reduce their capacity to effectively provide essential services for the security needs of their clients.
The study further identified that there is no independent institution providing training targeted at PSC personnel. At the moment, training is undertaken haphazardly by each private security agency. It is suggested that adequate, standardized training be provided by an autonomous institution, preferably a certified and recognized agency, rather than by the private security companies themselves.
As the findings revealed, given the role that the private security sector is playing and can play in improving the overall security situation in Ethiopia, it is important to develop feasible policies and strategic options to address recurrent problems. In this respect, training standards and requirements at the national, local, industry-association, and individual-company levels are highly recommended. Furthermore, it is essential to take appropriate steps to ensure that the activities of PSCs conform to national standards and policy as well as to internationally recognized standards and best practices. | 2021-09-09T20:36:19.084Z | 2021-08-01T00:00:00.000 | {
"year": 2021,
"sha1": "447bc9266672a8f389dc2ed12c197fa412c7728d",
"oa_license": "CCBY",
"oa_url": "https://www.iiste.org/Journals/index.php/JAAS/article/download/57001/58865",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "bb1dd98b214ca07ad81f9859808f5aca2ea7c0f7",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
64297068 | pes2o/s2orc | v3-fos-license | New insights of the local immune response against both fertile and infertile hydatid cysts
Background: Cystic echinococcosis is caused by the metacestode of the zoonotic flatworm Echinococcus granulosus. Within the viscera of the intermediate host, the metacestode grows as a unilocular cyst known as a hydatid cyst. This cyst comprises two layers of parasite origin, the germinal and laminated layers, and one of host origin, the adventitial layer, which encapsulates the parasite. The adventitial layer is composed of collagen fibers, epithelioid cells, eosinophils, and lymphocytes. To establish itself inside the host, the germinal layer produces the laminated layer, and to continue its life cycle, it generates protoscoleces. Some cysts are unable to produce protoscoleces and are defined as infertile cysts. The molecular mechanisms involved in cyst fertility are not clear; however, the host immune response could play a crucial role. Methodology/Principal findings: We collected hydatid cysts from both the liver and lungs of slaughtered cattle, and histological sections of fertile, infertile, and small hydatid cysts were stained with haematoxylin-eosin. A common feature observed in infertile cysts was disorganization of the laminated layer by infiltrating host immune cells, which eventually destroy parts of the laminated layer. Immunohistochemical analysis of both parasite and host antigens identifies these cells as cattle macrophages, present inside the cysts in association with the germinal layer. Conclusions/Significance: This is the first report indicating that immune cells present in the adventitial layer of infertile bovine hydatid cysts can disrupt the laminated layer, infiltrating it and probably causing cyst infertility.
Introduction
Cystic echinococcosis (CE) is a major zoonotic disease caused by infection with the metacestode stage (hydatid cyst) of the flatworm Echinococcus granulosus. It has a worldwide distribution, with an estimated 4 million people infected and another 40 million at risk [1]. High parasite prevalence is found in Eurasia, Africa, Australia and South America. CE most severely affects South American countries characterized by extensive grazing livestock farming, including Argentina, Brazil, Chile, Peru, and Uruguay [2]. The life cycle of this parasite includes two mammalian hosts. The definitive hosts are dogs and other canids, while ungulates and other mammals act as intermediate hosts [3], such as sheep, goats, cattle, pigs, buffaloes, horses and camels [4]. The most common infection sites in cattle are the liver and lungs [5][6][7]. Within these viscera, a unilocular cyst forms that grows gradually, 1 to 5 cm a year [8]. The hydatid cyst is circumscribed by a layer generated by the intermediate host in response to the parasite, named the adventitial layer, which mainly consists of epithelial cells and connective tissue [9]. The adventitial layer can have variable thickness and may present some focal fibrosis as a result of the host immune response, which treats the cyst as a foreign body [10]. The lumen of the hydatid cyst is filled with the so-called hydatid fluid and is surrounded by two layers of parasite tissue; the innermost cellular layer is called the germinal layer and is intimately attached to an acellular layer called the laminated layer, the latter being in close contact with the adventitial layer. The germinal layer is composed of embryonic cells whose function is to elaborate the different elements of hydatid cysts [10]. These embryonic cells differentiate into buds that finally generate protoscoleces (PSC), the parasite form infectious to the definitive host [8]. The laminated layer is generated by the germinal layer and is described as a specialized extracellular matrix that is found only in the genus Echinococcus [11]. Macroscopically, it is seen as a whitish membrane, formed by various layers of mucopolysaccharides and keratin, evolutionarily adapted to maintain the physical integrity of metacestodes and to protect the cells of the germinal layer from host immunity [12]. In the intermediate host, it is possible to find two different types of hydatid cysts: fertile hydatid cysts, in which PSC are attached to the germinal layer and float free in the hydatid fluid. Fertile hydatid cyst PSC viability, that is, the percentage of live PSC, varies between 100% and 2.8% [13][14][15][16][17]. Contrarily, infertile hydatid cysts (also called sterile hydatid cysts [18][19][20][21][22][23]) have no PSC either attached to the germinal layer or floating free in the hydatid fluid, and thus are unable to continue the parasite life cycle. The reason why infertile hydatid cysts are unable to produce PSC remains unclear [24]. In many geographical areas, including Chile [25], cattle have been associated with low fertile hydatid cyst counts (<30%) in both Echinococcus granulosus sensu lato [26][27][28][29] and Echinococcus granulosus sensu stricto [6,22], so it is a suitable model to study cyst infertility mechanisms. Our research team has so far been working to understand the causes of hydatid cyst infertility in cattle, identifying both higher apoptosis levels in the germinal layer of infertile cysts [24] and different immunoglobulin profiles [30].
Possible relations of the laminated and adventitial layers with fertility or infertility, however, have never been addressed. In this work, we present a systematic comparative study of both the laminated and adventitial layers in fertile and infertile hydatid cysts obtained from naturally infected cattle. Protoscolex viability of fertile hydatid cysts is determined; the morphohistological characteristics of hydatid cysts are described and compared, demonstrating the infiltration of host immune cells inside infertile hydatid cysts and providing evidence of their effect on and contribution to cyst integrity and fertility.
Sample collection, classification and genotyping
All bovine hydatid cyst samples were obtained at an abattoir in Santiago, Chile, as part of the normal work of the abattoir and with consent from both the veterinarians and the owners of the abattoir for sample collection. Both lung and liver samples were manually inspected by the official health office veterinarian and afterwards by our research team. This study protocol was approved by the Universidad Andres Bello Bioethics Board (protocol number 016/2016). For each positive organ, the hydatid cysts were removed, placed in a sealed plastic bag, and stored at 4°C.
In the laboratory, each hydatid cyst was assigned a number, the hydatid fluid was aseptically aspirated, and the cyst was opened along its longer axial plane. Hydatid cysts were classified as fertile when: (1) the laminated and germinal layers were white and thick and detached easily from the adventitial layer; and (2) PSC were found both attached to the germinal layer and floating in the hydatid fluid. PSC viability was determined by trypan blue staining, and samples with viability lower than 85% were classified as low viability. Hydatid cysts were classified as infertile when: (1) the laminated layer was yellow/ochre, thin, and tightly adhered to the adventitial layer; (2) there were no visible PSC attached to the germinal layer or floating in the hydatid fluid; and (3) a sample of the germinal layer observed under a conventional light microscope confirmed the absence of PSC. Since smaller hydatid cysts (<1 cm in diameter) are still developing and have not started producing PSC, they were classified as "small cysts". After the cyst fertility assessment, a sample of the cyst wall containing all three layers was fixed in Glyo-Fixx and embedded in paraffin. All samples were processed within 24 h of their procurement.
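For clarity, the classification criteria above can be summarized as a simple decision rule. The following is a minimal sketch in R (the statistical environment used later in this study); the function and its input fields are hypothetical, introduced purely for illustration, with viability expressed as the fraction of live PSC.

```r
# Hypothetical helper applying the fertility classification criteria above.
# All argument names are illustrative, not taken from the study.
classify_cyst <- function(diameter_cm, layers_white_thick, detaches_easily,
                          psc_present, psc_viability) {
  if (diameter_cm < 1) {
    return("small")                     # still developing; no PSC expected yet
  }
  if (layers_white_thick && detaches_easily && psc_present) {
    # fertile cysts are further split by trypan blue PSC viability
    if (psc_viability >= 0.85) return("fertile") else return("fertile_low_viability")
  }
  "infertile"                           # yellow/ochre, adherent layer, no PSC
}

classify_cyst(diameter_cm = 3.2, layers_white_thick = TRUE,
              detaches_easily = TRUE, psc_present = TRUE, psc_viability = 0.92)
#> [1] "fertile"
```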
All hydatid cyst samples were genotyped using the method described by Bowles et al. [31] in combination with sequencing of PCR products. Only samples from Echinococcus granulosus sensu stricto were used in this study.
Haematoxylin-eosin morphohistological analysis
Paraffin blocks were cut into 5-μm-thick sections and stained with haematoxylin-eosin (H&E). Using an Olympus FSX100 microscope, each slide was examined to confirm that all three hydatid cyst layers were present. All infertile and small cyst samples that lacked the laminated layer were excluded from the analysis; however, all fertile cysts were included owing to the small sample size.
A seasoned pathologist blindly examined each slide, describing the features of the germinal, laminated, and adventitial layers. For the adventitial layer, a score index was assigned to the overall inflammation present in the tissue, as well as separate scores for lymphocytes, plasma cells, fibroblasts, macrophages, giant multinucleated cells, and eosinophils, grading each from 0 to 3 according to the following criteria: mild (1), up to 30 inflammatory cells per high-power field (HPF); moderate (2), 30 to 100 inflammatory cells per HPF; severe (3), more than 100 inflammatory cells per HPF. Assessment was done over an average of 10 HPF. Host immune cells were identified by the pathologist based on their morphology and staining pattern in H&E. For statistical analysis, inflammation scores were dichotomized into low (0.5 to 1.5) and high (2 to 3).
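The scoring scheme above amounts to binning mean counts per HPF and then dichotomizing the averaged scores. A minimal R sketch follows, with hypothetical function names and example counts; treating a score of 0 as the complete absence of inflammatory cells is an assumption implied, but not explicitly stated, by the 0-to-3 range.

```r
# Map a mean inflammatory-cell count per high-power field (HPF) to the 0-3 score.
score_inflammation <- function(cells_per_hpf) {
  if (cells_per_hpf == 0)   return(0)  # absent (assumed meaning of score 0)
  if (cells_per_hpf <= 30)  return(1)  # mild
  if (cells_per_hpf <= 100) return(2)  # moderate
  3                                    # severe
}

# Dichotomize averaged scores for the categorical comparisons.
dichotomize_score <- function(score) {
  ifelse(score >= 2, "high", "low")    # paper's bins: low 0.5-1.5, high 2-3
}

counts <- c(12, 45, 130, 8)            # illustrative mean counts over ~10 HPF
scores <- sapply(counts, score_inflammation)
dichotomize_score(scores)
#> [1] "low"  "high" "high" "low"
```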
The thickness of the laminated layer was evaluated with the FSX-BSW software package included with the FSX100 microscope. The laminated layer thickness was measured in 20 consecutive areas (in μm), and the mean and standard deviation were obtained.
For statistical analysis, data were recorded in an Excel 2010 spreadsheet and analyzed using the RStudio IDE version 1.0.136 with R version 3.3.3. Outliers were identified using the ROUT method with Q = 1%, and differences in quantitative variables were assessed using two-way ANOVA with Tukey's post hoc test. The chi-squared test was applied to compare categorical variables. Statistical significance was set at P < 0.05.
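Assuming tidy data frames with hypothetical column names, the quantitative and categorical comparisons could be run along these lines in R; note that the ROUT outlier method is a GraphPad Prism procedure, so this sketch assumes outliers have already been flagged and removed before the tests.

```r
# thickness: data frame with columns thickness_um, fertility (fertile/infertile/small)
# and organ (liver/lung); outliers assumed already removed (ROUT, Q = 1%).
fit <- aov(thickness_um ~ fertility * organ, data = thickness)  # two-way ANOVA
summary(fit)
TukeyHSD(fit)                  # Tukey's post hoc pairwise comparisons

# Categorical variables, e.g., low vs high inflammation by cyst type:
tab <- table(scores$fertility, scores$inflammation_level)
chisq.test(tab)                # significance threshold P < 0.05
```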
Immunohistochemical (IHC) analysis
To differentiate between parasite and host cells, we used two antibodies: one targeting Echinococcus granulosus aldolase (EgAldo) and another targeting host macrophages (Invitrogen S100A9 monoclonal antibody [MAC387]). Briefly, paraffin blocks were cut into 3-μm-thick sections. After deparaffinization and rehydration, antigen retrieval for EgAldo was performed with citrate buffer (10 mM sodium citrate, 0.05% Tween-20, pH 6.2), and the primary antibody was incubated overnight at 4°C at a dilution of 1:1000. For cattle macrophages, antigen retrieval was performed by incubating slides with a 0.05% trypsin solution at 37°C for 15 minutes, and the primary antibody was incubated for 1 hour at room temperature at a dilution of 1:200. An HRP-conjugated secondary antibody (Jackson ImmunoResearch), anti-rabbit (EgAldo) or anti-mouse (cattle macrophages), was incubated for 1 h at room temperature at a dilution of 1:1000. Finally, the DAB-Plus Substrate Kit (Life Technologies) was used for detection, and slides were counterstained with haematoxylin.
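As a small worked example of the dilution arithmetic, the volume of antibody stock needed follows directly from the dilution factor; the 200 μL working volume below is illustrative, not a value from the protocol.

```r
# Volume of stock antibody needed for a given dilution and working volume.
antibody_volume <- function(total_ul, dilution) total_ul / dilution

antibody_volume(200, 1000)  # EgAldo at 1:1000 -> 0.2 uL stock in 199.8 uL diluent
antibody_volume(200, 200)   # macrophage marker at 1:200 -> 1 uL stock
```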
The laminated layer thickness varies according to cyst location and type
The difference in the laminated layer noted on gross examination of fertile and infertile hydatid cysts was verified by microscopic measurement: fertile hydatid cysts have a laminated layer more than six-fold thicker than that of infertile and small hydatid cysts (217 ± 18.4 μm vs 33 ± 5.4 μm), a statistically significant difference (p < 0.05). Meanwhile, small hydatid cysts have a laminated layer thickness similar to that of infertile hydatid cysts. There are also differences in laminated layer thickness according to cyst location: whereas fertile and infertile hydatid cysts have overall thicker laminated layers when found in the lungs (36 μm in lungs vs 29 μm in livers), small cysts have thicker laminated layers when found in the liver (26 μm in lungs vs 49 μm in liver); these differences, however, are not statistically significant (p > 0.05) (Fig 1). The complete laminated layer measurements are available as S1 Table. No other statistically significant differences were found between lung and liver cysts, so subsequent results for fertile, infertile, and small hydatid cysts combine lung and liver samples.
The laminated layer disorganizes and is infiltrated by the adventitial layer
Histological sections of the hydatid cyst wall reveal that the adventitial layer infiltrates the laminated layer. This is visualized as a disorganization of the sublayers that compose the laminated layer, with host cells in between. A representative image of this feature is shown in Fig 2. Also, whole sections of the laminated layer can be found within the adventitial layer, while in other samples the boundary between the laminated and adventitial layers becomes difficult to establish (S1 Fig).
Fertile, infertile and small hydatid cysts present hallmark histological features in the adventitial layer
Fertile hydatid cysts with high PSC viability have a germinal layer with PSC attached, cells in different developmental stages, and thick laminated layers (>100 μm) that easily detach from the adventitial layer (Fig 3A); this adventitial layer is composed mainly of collagen fibers and fibroblasts (Fig 3B). Inflammatory cells, when present, are found beneath the collagen and fibroblast layer. Fertile hydatid cysts with low PSC viability have thinner laminated layers (<100 μm), and while the collagen and fibroblast layer is present in the adventitial layer, the inflammatory cells are found beneath the laminated layer (Fig 3C and 3D). Conversely, infertile hydatid cysts all share common features: a thin laminated layer, sometimes thinner than 5 μm. Beneath the laminated layer, all infertile cysts have palisading foamy macrophages. Supporting these macrophages, there are lymphoid follicles and multinucleated giant cells throughout the adventitial layer, with little presence of collagen fibers or fibroblasts (Fig 3E and 3F). Likewise, small hydatid cysts share the same histological features as infertile cysts (Fig 3G and 3H).
Inflammatory cell composition of the adventitial layer in fertile, infertile and small hydatid cysts
All hydatid cyst samples had immune cells in the adventitial layer; however, the magnitude and pattern of inflammation differed. Fertile hydatid cysts had low adventitial layer inflammation scores, with relatively high numbers of lymphocytes and fibroblasts. None of the fertile hydatid cysts had high infiltration with eosinophils. On the contrary, more than 50% of infertile hydatid cyst samples had high adventitial layer inflammation scores. High lymphocyte infiltration was present in less than 80% of the samples, and high infiltration of giant multinucleated cells was present in more than 60% of the samples. Small hydatid cysts follow the same pattern, with high adventitial layer inflammation scores in more than 70% of the samples and high infiltration of lymphocytes and giant multinucleated cells in more than 80% of the samples (Fig 4). Total inflammatory cells, and especially lymphocytes and multinucleated giant cells, were significantly higher in the infertile and small cysts when compared with fertile ones (p < 0.05). Raw inflammation score data are available as S2 Table.
Infertile and small hydatid cysts present host immune cells inside the germinal layer
Morphohistological analysis of both infertile and small hydatid cysts revealed that, among the germinal layer cells, there are many cells with nuclei larger than the small pyknotic nuclei of Echinococcus granulosus germinal layer cells, suggesting a mammalian origin rather than parasite tissue. This feature was absent in fertile hydatid cysts, regardless of PSC viability. Cells with large nuclei could be found as large sheets (Fig 5A and 5B), as single cells (Fig 5C), and in some cases within both the laminated and germinal layers (Fig 5D). Morphologically, these cells suggested a host rather than a parasite origin. To confirm the host origin of these cells, IHC analysis of fertile, and of both infiltrated and non-infiltrated infertile, hydatid cysts, using the EgAldo antibody for parasite cells and the macrophage marker for host cells, demonstrates that these cells are not of parasite origin: EgAldo is strongly positive in PSC (Fig 6A) and in the germinal layer of non-infiltrated infertile hydatid cysts (Fig 6B), but only partially positive in the germinal layer of infiltrated infertile hydatid cysts, with negative detection in the cytoplasm of the large-nucleus cells (Fig 6C). Consistently, the macrophage marker is negative in both PSC (Fig 6D) and the germinal layer of non-infiltrated infertile hydatid cysts (Fig 6E), while strongly positive in the adventitial layer (Fig 6E and 6F) as well as in the germinal layer of infiltrated infertile hydatid cysts (Fig 6F).
Discussion
Echinococcus granulosus infection of the intermediate host elicits a granulomatous tissue reaction, characterized by the accumulation of cells of monocytic origin, which is thought to be directed both at walling off and at eliminating the persistent foreign body. The hallmarks of granulomatous reactions are special types of activated macrophages called epithelioid cells, and multinucleated giant cells [32].
The adventitial layer is usually described as a fibrous layer resulting from the host's reaction to the parasite [9,[33][34][35][36][37][38][39], and several studies have described its cell composition. A study of fertile hydatid cysts found in the liver showed that the adventitial layer has a significant amount of B lymphocytes, occasional polymorphonuclear cells, and monocytes; however, the authors did not correlate these data with PSC viability, as the study was done on formalin-fixed, paraffin-embedded tissue samples [33]. Another study compared the differences in the adventitial layer between ovine and macropod hydatid cysts, with the adventitial layer of fertile hydatid cysts consisting of palisading macrophages with foamy cytoplasm and multinucleated giant cells, or of granulation-type fibrous tissue devoid of a discernible covering epithelial layer [35]; although the authors ponder whether hydatid cysts will develop and become fertile under inflammatory conditions, they do not correlate PSC viability with the characteristics they describe. The most complete study of the adventitial layer of bovine hydatid cysts was done by Sakamoto and Cabrera [9], who describe that infertile cysts have lymphocytes, macrophages, granulocytes, and polynuclear giant cells infiltrating the adventitial layer, with smaller infertile cysts being surrounded by macrophage-derived cells while eosinophils are involved in the response against larger hydatid cysts; although they did exhaustive IHC analysis of the adventitial layer, surprisingly they did not report positive staining in the germinal layer of infertile hydatid cysts. Our results expand the characterization of the adventitial layer of hydatid cysts, adding to the described cellular infiltrate the disorganization of the laminated layer and its infiltration by adventitial layer cells, a feature not reported by the previous authors. We also found a correlation between PSC viability and the overall inflammatory infiltrate in the adventitial layer of fertile hydatid cysts: low-viability fertile cysts correlated with higher inflammatory infiltrates.
When comparing hydatid cysts from liver and lungs, the thickness of the laminated layer differed both between fertile and infertile hydatid cysts and between liver and lung tissue. It has been described that the parenchyma of these organs limits the growth rate of the hydatid cyst, with the lung being less dense than the liver [37], so it makes sense that the laminated layer is thicker in lung cysts than in liver cysts; small cysts, on the other hand, showed an inverse tendency, although these differences were not statistically significant. No differences were found between the inflammatory reactions in the adventitial layer of liver and lung hydatid cysts; as both organs have resident macrophages (Kupffer cells and interstitial macrophages, respectively [40]), the granulomatous reaction against the hydatid cyst could proceed through similar mechanisms.
Although a fertile hydatid cyst is defined solely by the presence of PSC, we propose that a more complete definition should include viable PSC; as shown in this study, fertile hydatid cysts with low-viability or dead PSC have adventitial layer characteristics of infertile hydatid cysts and, if sampled later, would possibly be classified as such. However, because we worked with samples from natural infections, we were not able to confirm when the animals acquired the infection.
Many small hydatid cyst samples had adventitial layer characteristics found in both low-viability fertile hydatid cysts and infertile hydatid cysts. There is evidence that cysts from the same parasite strain, in the same organ and host, can differ in size, viability, and fertility [39]. After examining 16 small hydatid cysts, all of which had adventitial layers with a strong immune reaction, we propose that small hydatid cysts whose adventitial layer features palisading foamy macrophages, lymphocytes arranged in follicles, multinucleated giant cells, a thin (<50 μm) laminated layer, and host immune cells inside the germinal layer should be regarded as infertile, while small cysts without these characteristic histological features could develop into either fertile or infertile hydatid cysts. Moreover, small cysts, as well as infertile ones, showed significantly higher inflammatory infiltrates, particularly of lymphocytes and multinucleated giant cells, suggesting that the immune response is directly involved in cyst viability.
The laminated layer, which is secreted by the parasite, contains mucins with O-type glycosylations and inositol hexakisphosphate (InsP6); these features are related to parasite survival inside large mammals, and it has been proposed that the laminated layer is involved in down-regulating the local inflammatory response [11]. It has also been shown in mice that both macrophages and dendritic cells are activated by portions of the laminated layer [41]. In our results, many infertile and small hydatid cyst samples show clear signs of host immune cells infiltrating and disorganizing the laminated layer, with the eosin staining being more intense where this is happening; this could be due to host macrophages secreting a combination of cathepsin K [32] and MMP-9 [42], although further experiments are needed to corroborate this.
The presence of host immune cells in direct contact with the germinal layer of infertile hydatid cysts has not been described before. The hydatid fluid and germinal layer of both fertile and infertile hydatid cysts usually contain many proteins of host origin [1,43]. How these proteins enter the hydatid cyst is not clearly defined; the germinal layer consists of a distal cytoplasmic syncytium from which microtriches project into the laminated layer, and these two parasite structures form a barrier that denies access to both host defense macromolecules and cells [44]. Particles of the laminated layer have been described to inhibit macrophage proliferation [45] and to induce the production of arginase (inhibiting nitric oxide (NO) activity) [46]. In fact, infertile hydatid cysts are correlated with higher levels of NO, and it has been proposed that NO-producing immune cells are unable to penetrate the physical barrier imposed by the laminated layer [47]; however, as seen in Fig 5D, such penetration appears possible.
The production of both the laminated layer and protoscoleces is a major metabolic activity of the germinal layer [44]. Our results show that palisading cattle macrophages surround infertile hydatid cysts and are able to disorganize the laminated layer and infiltrate between its layers. The hydatid cyst will continue to grow as long as there is steady production of laminated layer instead of protoscoleces, maintaining its infertility; if the balance shifts towards the host immune cells, they can reach the germinal layer, with the subsequent destruction of the metacestode.
In conclusion, fertile hydatid cysts with mostly viable protoscoleces have adventitial layers with scar tissue, which means that the immune regulation molecules that the parasite secretes are probably aimed at triggering inflammation resolution in the adventitial layer. In cattle this event is rare, with a granulomatous immune response associated with low protoscolex viability, laminated layer disorganization, and consequent immune cell infiltration.
S2 Table. Inflammation scores of fertile, infertile, and small bovine hydatid cysts from both liver and lungs. Sheet one contains the raw data for each sample; sheet two shows the sample proportion for each inflammation score value; sheet three contains the dichotomized data used for statistical analysis. (XLSX) | 2019-02-01T14:02:47.295Z | 2018-08-13T00:00:00.000 | {
"year": 2019,
"sha1": "b3848f162a9f32b5997815a0bd7515357a4d245c",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0211542&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b3848f162a9f32b5997815a0bd7515357a4d245c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
257639108 | pes2o/s2orc | v3-fos-license | Retracted and Republished from: “Gut Microbiota Mediates the Therapeutic Effect of Monoclonal Anti-TLR4 Antibody on Acetaminophen-Induced Acute Liver Injury in Mice”
ABSTRACT Acetaminophen (APAP) overdose is one of the most common causes of acute liver injury (ALI) in Western countries. Many studies have shown that the gut microbiota plays an important role in liver injury. Currently, the only approved treatment for APAP-induced ALI is N-acetylcysteine; therefore, it is essential to develop new therapeutic agents and explore the underlying mechanisms. We developed a novel monoclonal anti-Toll-like receptor 4 (TLR4) antibody (ATAB) and hypothesized that it has therapeutic effects on APAP-induced ALI and that the gut microbiota may be involved in the underlying mechanism of ATAB treatment. Male C57BL/6 mice were treated with APAP and ATAB, which produced a therapeutic effect on ALI and altered members of the gut microbiota (such as Roseburia, Lactobacillus, and Akkermansia) and their metabolic pathways (such as fatty acid metabolism). Furthermore, we verified that purified short-chain fatty acids (SCFAs) could alleviate ALI. Moreover, a separate group of mice that received feces from the ATAB group showed less severe liver injury than mice that received feces from the APAP group. ATAB therapy also improved gut barrier function in mice and reduced the expression of the protein zonulin. Our results revealed that the gut microbiota plays an important role in the therapeutic effect of ATAB on APAP-induced ALI. IMPORTANCE In this study, we found that a monoclonal anti-Toll-like receptor 4 antibody can alleviate APAP-induced acute liver injury through changes in the gut microbiota, metabolic pathways, and gut barrier function. This work suggests that the gut microbiota can be a therapeutic target in APAP-induced acute liver injury and lays a foundation for further research.
ATAB altered the gut microbiota composition during ALI treatment. First, we used a 16S rRNA sequencing method to identify the changes in the gut microbiota after APAP overuse. Besides the control group, we chose two time points to detect the gut microbiota, 6 h and 24 h after mice were treated with APAP (APAP 6-h group and APAP 24-h group). Principal-coordinate analysis (PCoA) and nonmetric multidimensional scaling (NMDS) methods showed that the gut microbiota structures were different among the three groups (see Fig. S1 in the supplemental material), and the heat map at the genus and phylum levels also showed changes (Fig. S2).
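Ordinations such as PCoA and NMDS are commonly computed from a sample-by-taxon abundance table. A minimal R sketch using the vegan package is shown below; the `otu` matrix and the choice of Bray-Curtis dissimilarity are assumptions for illustration, not details stated by the authors.

```r
library(vegan)  # community ecology toolkit commonly used for 16S analyses

# otu: samples x taxa abundance matrix (hypothetical object name)
bray <- vegdist(otu, method = "bray")       # Bray-Curtis dissimilarity

pcoa <- cmdscale(bray, k = 2, eig = TRUE)   # principal-coordinate analysis
nmds <- metaMDS(otu, distance = "bray", k = 2, trymax = 100)  # NMDS

plot(pcoa$points, xlab = "PCoA1", ylab = "PCoA2")
stressplot(nmds)  # Shepard plot; low stress indicates a faithful ordination
```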
To explore the influence of ATAB, the composition of the gut microbiota was evaluated. We analyzed the Shannon diversity index and the Chao1 index; the ATAB group had higher values than the APAP group (Fig. 2A). These results indicated that the gut microbiota diversity in the APAP-induced ALI mice had changed and that ATAB could enhance the biodiversity of the gut microbiota. We also examined the Firmicutes/Bacteroidetes (F/B) ratio and the composition of the gut microbiota at the phylum level. These results showed that the APAP group had a higher F/B ratio than the ATAB group. In addition, the compositions at the phylum level differed between the two groups: the ATAB group had lower Firmicutes and Bacteroidetes proportions, while Proteobacteria proportions were higher than those in the APAP group (Fig. 2B and C).
[Figure caption fragment, panels E to G: (E and F) the ATAB group exhibited low levels of inflammatory cytokines, LPS, and chemokines in plasma and liver tissues (n = 7 per group); IFN-α, interferon alpha. (G) HE staining revealed less necrosis of liver cells in the ATAB group than in the APAP group. Data are presented as means ± SEM. *, P < 0.05; **, P < 0.01; ***, P < 0.001; NS, not significant (for the comparisons).]
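The diversity indices and the F/B ratio described above can likewise be computed from the abundance tables. In the R sketch below, `otu` (a count matrix), `phyla` (a phylum-level abundance table), and `group_labels` are hypothetical objects, and the Wilcoxon test is only one plausible choice of between-group comparison.

```r
library(vegan)

shannon <- diversity(otu, index = "shannon")   # Shannon diversity per sample
chao1   <- estimateR(otu)["S.chao1", ]         # Chao1 richness (needs raw counts)

# Firmicutes/Bacteroidetes ratio from a phylum-level abundance table `phyla`
fb_ratio <- phyla[, "Firmicutes"] / phyla[, "Bacteroidetes"]

wilcox.test(fb_ratio ~ group_labels)           # e.g., APAP vs ATAB comparison
```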
Next, we analyzed the gut microbiota composition using PCoA and NMDS methods (Fig. 2D and E). These results revealed significant differences among the APAP, ATAB, and control groups, indicating that the use of ATAB can alter the gut microbiota composition. The ATAB group had higher abundances of some bacterial genera, such as Roseburia, Lactobacillus, and Akkermansia (Fig. 2F).
ATAB altered the gut microbiota metabolome during ALI treatment. A metabolomic analysis was performed using liquid chromatography-mass spectrometry (LC-MS) to determine whether ATAB can change the gut microbiota metabolome. More than a thousand metabolites were identified, including sugars, amino acids, fatty acids, and organic acids; these compounds are involved in metabolism, genetic information processing, environmental information processing, cellular processes, and organismal systems. Principal-component analysis (PCA) and partial least-squares discriminant analysis (PLS-DA) were used to analyze the metabolites (positively and negatively charged metabolites were treated separately) and revealed that the APAP group displayed metabolic profiles significantly different from those of the ATAB group (Fig. 3A and B). A heatmap (Fig. 3C) revealed the differences in the metabolites between the two groups.
[Figure 2 caption: ATAB altered the gut microbiota composition during ALI treatment. (A) The ATAB group had higher Shannon diversity indices and Chao1 indices. (B and C) The APAP group showed a higher F/B ratio of the gut microbiota, and the composition at the phylum level differs between the two groups: the ATAB group has lower Firmicutes and Bacteroidetes levels, but the level of Proteobacteria appears to be higher than that in the APAP group. (D and E) PCoA and NMDS analysis of the gut microbiota, showing significant differences among the APAP, ATAB, and control groups. (F) The ATAB group had higher abundances of some bacterial genera such as Roseburia, Lactobacillus, and Akkermansia (n = 3 to 4 per group). Data are presented as means ± SEM. *, P < 0.05; **, P < 0.01; ***, P < 0.001 (for the comparisons). MDS, metric multidimensional scaling.]
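For the metabolome analysis described above, unsupervised PCA and supervised PLS-DA can be sketched as follows in R; the `mets` matrix and `groups` factor are hypothetical, and the mixOmics package is one common implementation of PLS-DA rather than necessarily the tool the authors used.

```r
# mets: samples x metabolites intensity matrix (hypothetical); groups: factor
pca <- prcomp(mets, center = TRUE, scale. = TRUE)   # unsupervised PCA
plot(pca$x[, 1:2], col = groups)                    # score plot, first two PCs

# Supervised PLS-DA, here via the mixOmics package
library(mixOmics)
plsda_fit <- plsda(mets, groups, ncomp = 2)
plotIndiv(plsda_fit, comp = c(1, 2), group = groups, ellipse = TRUE)
```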
Based on 16S rRNA sequencing and metabolome analysis, we performed a conjoint analysis, which revealed key bacteria, key metabolites, and their potential correlations; for example, Lactobacillus, a key bacterial genus, was correlated with isopropyl myristate, vitamin B2, and taurocholic acid, among others (positively and negatively charged metabolites were analyzed separately) (Fig. 3E and F). These results provided us with many target bacteria and metabolites for further research.
[Figure 3 caption fragment, panels D to F: (D) various metabolic pathways, including fatty acid degradation, linoleic acid metabolism, fatty acid metabolism, fatty acid elongation, the biosynthesis of unsaturated fatty acids, retrograde endocannabinoid signaling, biotin metabolism, the PI3K-Akt signaling pathway, and the mTOR signaling pathway; (E and F) conjoint analyses of 16S rRNA sequences and the metabolome showing key bacteria and metabolites as well as potential correlations (metabolites separated by positive and negative charge) (n = 3 to 4 per group). IKK, inhibitor of kappa B kinase; FPK, fructose-phosphokinase; iPF2a, isoprostaglandin F2a; HpODE, hydroperoxyoctadeca-10,12-dienoic acid; Cer-AP, caerulein-AP; RNK, the oligopeptide of arginine, asparagine, and lysine.]
The gut microbiota of the ATAB group alleviated ALI via fecal microbiota transplantation. To further analyze the role of the gut microbiota in the progression of ATAB treatment, fecal microbiota transplantation (FMT) was performed. The feces of the ATAB and APAP groups were orally administered to recipients whose gut microbiota had been depleted with broad-spectrum antibiotics (ABX) for 5 days. The gut microbiota of the recipients was similar to that of the donors, indicating that the FMT was successful. After 3 days, the mice were treated with 600 mg/kg APAP and sacrificed 24 h after APAP treatment.
The mice that received feces from the ATAB group had lower interleukin-1 (IL-1), tumor necrosis factor alpha (TNF-α), and IL-6 levels than the APAP group (Fig. 4A). HE staining showed that ALI in the recipients of feces from the ATAB group was less severe than in the recipients of feces from the APAP group (Fig. 4B). However, the amelioration of ALI was not as evident as that observed with direct administration of ATAB. Moreover, the colon tissues of recipients of feces from the APAP group showed inflammatory cell infiltration (Fig. 4C), indicating that these feces may contribute to inflammation.
[Figure 4 caption: The gut microbiota from the ATAB group can alleviate ALI through FMT. (A) Mice that received ATAB feces had lower IL-1, TNF-α, and IL-6 levels than the APAP group. (B) HE staining shows that ALI in mice receiving feces from the ATAB group was less severe than that observed in mice receiving feces from the APAP group. (C) The colon tissues of mice receiving feces from the APAP group display inflammatory cell infiltration, but those of mice receiving feces from the ATAB group show no obvious changes (n = 4 to 5 per group). Data are presented as means ± SEM. *, P < 0.05; **, P < 0.01; ***, P < 0.001 (for the comparisons).]
ATAB promoted gut barrier function compared with APAP administration. We used oral administration of fluorescein isothiocyanate (FITC)-dextran to assess gut barrier function. Approximately 20 h after APAP injection, mice were orally administered 4-kDa FITC-dextran (FD4); blood samples were collected after 4 h, and the fluorescence intensity of FITC-dextran in the peripheral blood was measured. The APAP group showed a high fluorescence intensity, whereas the ATAB group showed a lower fluorescence intensity in the serum (Fig. 5A). We also examined colon tissues: the APAP group showed extravasation of FITC-dextran, whereas in the ATAB group FITC-dextran was confined to the colon lumen (Fig. 5B). As zonulin is a modulator of intestinal epithelial tight junctions, we quantified this protein in the colon tissues of mice from the APAP and ATAB groups using an enzyme-linked immunosorbent assay (ELISA). The results revealed that the APAP group had higher zonulin levels than the ATAB group (Fig. 5C). Therefore, our results demonstrated that ATAB can promote gut barrier function.
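Quantifying FITC-dextran in serum typically relies on a linear standard curve relating fluorescence to concentration. A minimal R sketch with entirely illustrative numbers (not study data) follows.

```r
# Convert serum fluorescence readings to FITC-dextran concentration via a
# linear standard curve; all numbers below are illustrative, not study data.
std_conc <- c(0, 0.125, 0.25, 0.5, 1, 2)      # ug/mL dilution series of FD4
std_fluo <- c(3, 52, 104, 215, 430, 845)      # measured fluorescence units

curve <- lm(std_fluo ~ std_conc)              # fit the standard curve

serum_fluo <- c(310, 95)                      # e.g., one APAP and one ATAB sample
(serum_fluo - coef(curve)[1]) / coef(curve)[2]  # back-calculate concentration
```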
Short-chain fatty acids can alleviate APAP-induced ALI without ATAB. Metabolomic analysis revealed that the fatty acid metabolic pathways differed between the two groups, so well-established metabolites, short-chain fatty acids (SCFAs), were chosen to determine whether they might influence APAP-induced ALI. The SCFAs were dissolved in water and given to mice for free drinking for 7 days. Thereafter, APAP was injected, and the blood and liver tissues of mice were collected after 24 h. The results showed that liver congestion was milder (Fig. 6A) and plasma ALT and AST levels were much lower in the SCFA group (Fig. 6B) than in the control group. HE staining showed reduced liver cell necrosis in the SCFA group (Fig. 6C). Besides, the survival rate was 100% in the SCFA group but only 40% in the APAP group (Fig. 6D). The treatment effect of SCFAs was evident as they alleviated APAP-induced ALI without ATAB administration.
DISCUSSION
In this study, we investigated the therapeutic effect of ATAB on APAP-induced ALI and analyzed the role of the gut microbiota, gut metabolism, and the gut barrier in this process.
The ATAB in our experiment is a TLR4 IgG2 antibody (30). There are four subclasses of IgG (IgG1, IgG2, IgG3, and IgG4), which are defined by their unique structural and functional characteristics (31). IgG1 has functions in antibody-dependent cellular cytotoxicity (ADCC) and apoptosis induction (32-34). IgG4 is a neutralizing inhibitory signal in T cells (35)(36)(37). IgG3 has a long hinge region and a polymorphic nature, which raise concerns about stability and immunogenicity (38,39). Thus, these subclasses were inappropriate for the development of a TLR4 antibody, and we chose IgG2 to develop the ATAB.
We found that ATAB treatment altered the composition of the gut microbiota, including the Firmicutes-to-Bacteroidetes (F/B) ratio, i.e., the composition at the phylum level. We observed that specific bacteria differed between the ATAB and APAP groups, such as Lactobacillus, Akkermansia, and Roseburia. Lactobacillus is a well-studied probiotic, and some specific strains are useful for treating liver disease; for example, Lactobacillus rhamnosus LGG can ameliorate liver injury and hypoxic hepatitis in a rat model of CLP (cecal ligation and puncture)-induced sepsis (40), Lactobacillus plantarum CMU995 can ameliorate alcohol-induced liver injury by improving both the intestinal barrier and antioxidant activity (41), and Lactobacillus acidophilus LA14 alleviates liver injury induced by D-galactosamine (D-GalN) in rats (42). ATAB may increase the abundance of Lactobacillus in the gut microbiota and play a role in the alleviation of ALI. Akkermansia muciniphila, a novel probiotic, has recently attracted increasing attention. A recent study showed that Akkermansia muciniphila can protect mice from high-fat diet (HFD)/CCl4-induced liver injury (43). In our study, ATAB enhanced Akkermansia levels, which may contribute to the therapeutic effect. Roseburia is another bacterial genus that is more abundant in the ATAB group. Roseburia can ameliorate alcoholic fatty liver in mice (44) and is a source of propionate and butyrate in the gut (45)(46)(47). Therefore, these bacteria may have different probiotic functions.
Interestingly, the change in the gut microbiota lasted for a longer time than expected, as some studies had confirmed that centrilobular necrosis occurred within 6 to 12 h after APAP overuse (48), but our results showed that 6 h after APAP overuse, the gut microbiota had changed, and after 24 h of APAP overuse, the gut microbiota was continually changing. This showed that the change in the gut microbiota is different from the necrosis and apoptosis of hepatocytes. So the therapeutic modality targeting the gut microbiota may have a longer therapeutic window.
FMT research is necessary for microbiome studies to transform correlation to causation (49); therefore, we performed FMT to evaluate the exact function of the gut microbiota. Feces from the ATAB group can alleviate ALI, although the therapeutic effect is not as obvious as that of ATAB, suggesting that the gut microbiota and metabolites mediate the therapeutic effect of ATAB, at least in part. Thus, ATAB may also function via other underlying mechanisms that treat APAP-induced ALI. The importance of metabolites in gut microbiota studies is acknowledged; they exert their effects as signaling molecules and substrates for metabolic reactions (50). In this study, we found that several metabolic pathways differ between the ATAB and APAP groups, such as fatty acid- and bile acid-relevant metabolic pathways. The synthesis and secretion of bile acids are two of the most important functions of the liver, and they influence the gut-liver axis. Lipopolysaccharides from gut bacteria can enter the liver through the portal circulation, which can combine with TLR4 and aggravate the inflammatory response, causing liver cell death (51). Therefore, this may also represent the mechanism by which ATAB alleviates liver injury.
SCFAs are the products of bacteria in the cecum and colon (52), have many beneficial effects (47), and are also involved in fatty acid metabolism. However, there have been few studies on SCFAs and liver injury. We provided SCFAs directly to APAP-treated mice and found, unexpectedly, that SCFAs alleviated ALI. Metabolites of the gut microbiota influence physical health; therefore, we performed metabolomic analysis to reveal multiple metabolites to be further explored as research targets.
Intestinal barrier function is associated with intestinal and extraintestinal diseases, including liver disease (27,53). APAP increases intestinal permeability, which may explain the induction of ALI but not the direct cytotoxicity to liver cells (54,55). We found that ATAB could improve gut barrier functionality; this may contribute to the mechanisms underlying the therapeutic effects of ATAB on ALI.
To the best of our knowledge, this is the first study to explain the role of the gut microbiota during ATAB therapy for APAP-induced ALI. However, this study has certain limitations. First, ATAB is a monoclonal anti-TLR4 antibody, and its effects were most likely mediated through the antagonism of TLR4. Our study lacked treatment of APAP-treated mice with an isotype-matched control antibody, so we cannot rule out off-target effects in this study. Second, the causal relationship between the gut microbiota and the therapeutic effect of ATAB was analyzed insufficiently; for example, we did not perform research into specific bacterial strains. Different bacterial strains between the two groups should be screened and their functions should be verified in APAP-induced ALI. For instance, previous research has shown that Lactobacillus acidophilus LA14 could alleviate liver injury (42). Regarding the metabolites, we verified the function of SCFAs in ALI; however, we observed only that they can alleviate ALI, and the exact mechanism is still unclear and requires further research. Third, ATAB can improve gut barrier function; however, the molecular mechanism is unknown. Gut barrier function requires further analysis because of its importance in liver disease.
In conclusion, we reported a new therapeutic method for APAP-induced ALI in a mouse model and analyzed the role of the gut microbiota in this therapy. The fundamental data provided in this study may provide the foundation for further studies in this research area, leading to the development of new therapeutic strategies to treat APAP-induced ALI.
MATERIALS AND METHODS
Mouse model and treatment. Male C57BL/6 mice (aged 4 to 6 weeks) were obtained from Sibeifu Biotechnology Company Limited (Beijing, China). The mice were housed in a specific-pathogen-free animal experiment facility with a temperature of 23°C ± 1°C, 53% ± 2% humidity, and a 12-h light/dark cycle. The mice were fed a standard laboratory diet (ad libitum) in individual standard stainless steel cages. To eliminate sex as an influencing factor in our experiments, we used only male C57BL/6 mice in this study. Therefore, it is important to bear in mind that the results for female mice may be different. All animals were acclimatized for 1 week before the experiment. The mice were randomly assigned to three groups (APAP group, ATAB group, and control group [n = 8 for each group]). The animals in the APAP group were intraperitoneally injected with 600 mg/kg acetaminophen dissolved in PBS, the animals in the ATAB group were intraperitoneally injected with 5 mg/kg ATAB 2 h after acetaminophen injection, and the animals in the control group were intraperitoneally injected with the same volume of PBS. For antibiotic treatment, mice received vancomycin (100 mg/kg), neomycin sulfate (200 mg/kg), metronidazole (200 mg/kg), and ampicillin (200 mg/kg) intragastrically once daily for 5 days. Mice were not fasted before APAP treatment and were sacrificed 24 h after APAP treatment (56,57).
The mouse experiments were carried out in accordance with the recommendations of the ethics provision for experiments on mice of the ethics committee of the Centre for Diseases Prevention and Control of Eastern Theater. The protocol was approved by the ethics committee of the Centre for Diseases Prevention and Control of Eastern Theater.
Determination of biochemical parameters of serum and tissues. The serum ALT and AST levels were determined using a detection kit (Jiancheng Bioengineering, Nanjing, China). Total SOD (T-SOD), MDA, GSH, and CAT levels in liver tissues and zonulin levels in colon tissues were determined using a detection kit (Jiancheng Bioengineering, Nanjing, China). The serum LPS level was measured with a detection kit (Jiancheng Bioengineering, Nanjing, China) according to the manufacturer's instructions.
Real-time fluorescence quantitative PCR experiments. Total RNA was extracted from tissues using an RNA extraction kit (Fastagen, Shanghai, China) according to the manufacturer's instructions. Next, the total RNA concentration was determined by using a visible spectrophotometer, and reverse transcription was then used to synthesize cDNA. Real-time PCR was carried out on an ABI 7500 real-time PCR system. The real-time quantitative PCR (RT-qPCR) primers are shown in Table 1.
Morphological analysis. Tissue was collected and fixed in 4% paraformaldehyde. The samples were then dehydrated, embedded in paraffin, sectioned, and stained with hematoxylin and eosin (HE).
Fecal microbiota transplantation. The mice were randomly assigned to four groups (ATAB group, APAP group, ATAB.R group, and APAP.R group [n = 6 for each group]). The ATAB.R and APAP.R groups received antibiotics intragastrically once daily for 5 days to deplete the gut microbiota. The feces of the donor mice (APAP group and ATAB group) were collected and resuspended in PBS at 0.125 g/mL. A total of 0.15 mL was administered to the ATAB.R and APAP.R groups. After 3 days, mice were intraperitoneally injected with 600 mg/kg APAP and sacrificed 24 h after APAP treatment.
Microbial analysis. Fresh fecal samples were collected using a metabolic cage and stored at −80°C. The fecal contents were resuspended in PBS (pH 7.4) containing 0.5% Tween 20, and the fecal suspension was stirred with a stirrer to disrupt the bacterial membranes. The samples were sequenced on the Illumina NovaSeq platform according to the manufacturer's recommendations. Briefly, DNA was extracted from the fecal contents, and variable region 4 (V4) of the bacterial 16S rRNA gene was amplified by PCR. Next, the product was purified, and the library was prepared and sequenced. A t test, a Wilcoxon rank sum test, and a Tukey test were used for data processing to analyze the differences between the alpha diversity indices and beta diversity indices of the groups. Other charts were generated using R packages.
Metabolomics analysis. The metabolites were extracted from feces, the supernatant was collected, and the sample was injected for LC-MS analysis. The chromatographic column used was a Hypersil GOLD column (100 by 2.1 mm, 1.9 μm). The column temperature was 40°C, the flow rate was 0.2 mL/min, and the injection volume was 2 μL. Mobile phase A was 0.1% formic acid, and mobile phase B was methanol. Gradient elution was performed as follows: 0 to 1.5 min, 98% mobile phase A and 2% B; 1.5 to 3 min, 98% A and 2% B; 3 to 10 min, 0% A and 100% B; 10 to 10.1 min, 98% A and 2% B; and 10.1 to 13 min, 98% A and 2% B. The specific conditions for mass spectrometry were as follows: a spray voltage of 3.5 kV, a sheath gas flow rate of 35 arb (arb is a unit of measure of the gas flow rate), an auxiliary gas flow rate of 10 arb, a capillary temperature of 320°C, an S-lens radio frequency (RF) level of 60, and an auxiliary gas heater temperature of 350°C.
Intestinal permeability. To determine intestinal mucosal barrier permeability, 20 h after APAP injection, mice were given 4-kDa FITC-dextran (FD4) (500 mg/kg of body weight) (Sigma) orally. After 4 h, blood samples were collected from the orbit, and sera were separated. The fluorescence intensity of FITC-dextran in peripheral blood was measured using a full-wavelength microplate reader (Tecan Spark) (excitation wavelength, 485 nm; emission wavelength, 525 nm). Paraffin sections of mouse colon specimens were dewaxed and dehydrated. The fluorescence intensity of colon tissue was observed by using a fluorescence microscope (Olympus, Japan).
Application of SCFAs. The mice were randomly assigned to three groups (APAP group, SCFA group, and control group [n = 8 for each group]). For the SCFA group, a short-chain fatty acid mixture (67.5 mM sodium acetate, 40 mM sodium propionate, 25.9 mM sodium butyrate) was dissolved in water and given to mice for free drinking for 7 days; next, APAP (600 mg/kg) was injected. For the APAP group and the control group, the experimental methods were the same as the ones described above. The blood and liver tissues of mice were collected after 24 h.
Statistical analysis. Data are expressed as means ± standard errors of the means (SEM). An unpaired two-tailed Student's t test was used to assess differences between two groups. Data sets involving more than two groups were evaluated using a Newman-Keuls test. MetaX software was used to process the metabolomics data and for multivariate statistical analysis. Correlation analysis between the differential metabolites and bacteria was also performed.
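For readers who wish to reproduce the group comparisons, the sketch below shows how the two-group and multi-group tests described above could be run in Python with SciPy. The array contents are placeholders rather than the measured data, and the post-hoc Newman-Keuls step is only indicated, since it is not part of SciPy.

```python
# Minimal sketch of the statistical comparisons described above (placeholder data).
import numpy as np
from scipy import stats

# Hypothetical ALT values (U/L) for two groups; replace with the measured data.
apap = np.array([812.0, 905.0, 770.0, 988.0, 840.0])
atab = np.array([310.0, 280.0, 355.0, 402.0, 298.0])

# Unpaired two-tailed Student's t test between two groups.
t_stat, p_two_groups = stats.ttest_ind(apap, atab)

# For more than two groups, a one-way ANOVA is a common first step;
# a post-hoc test (e.g., Newman-Keuls) would then compare the pairs.
control = np.array([35.0, 42.0, 29.0, 38.0, 41.0])
f_stat, p_anova = stats.f_oneway(apap, atab, control)

print(f"t test: t = {t_stat:.2f}, P = {p_two_groups:.4f}")
print(f"ANOVA:  F = {f_stat:.2f}, P = {p_anova:.4f}")
```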
SUPPLEMENTAL MATERIAL
Supplemental material is available online only. SUPPLEMENTAL FILE 1, PDF file, 0.3 MB.
ACKNOWLEDGMENTS
This study was supported by NSFC81871242 and the National Key R&D Program of China (2018YFC1200603).
Z.Y. and Y.W. conceived and designed the experiments; X.S. and Q.C. performed the animal and molecular experiments; J.N., X.L., and T.Z. collected the samples; K.O., H.H., and J.Z. analyzed the data; and Z.Y. wrote the paper.
We declare that we have no conflict of interest. | 2023-03-22T06:16:48.069Z | 2023-03-21T00:00:00.000 | {
"year": 2023,
"sha1": "dacf41540580dd09714f04c52efcf83a84608308",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1128/spectrum.04715-22",
"oa_status": "GOLD",
"pdf_src": "ASMUSA",
"pdf_hash": "70b232ecdf89adb31d516b53870a000e15dfda6b",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
85499862 | pes2o/s2orc | v3-fos-license | On the connection between radiative outbursts and timing irregularities in magnetars
Magnetars are strongly magnetized pulsars, and they occasionally show violent radiative outbursts. They also often exhibit glitches, which are sudden changes in the spin frequency. It was found that some glitches were associated with outbursts, but their connection remains unclear. We present a systematic study to identify possible correlations between them. We find that the glitch size of magnetars likely shows a bimodal distribution, different from the distribution of the Vela-like recurrent glitches but consistent with the high end of that of normal pulsars. A glitch is likely a necessary condition for an outburst but not a sufficient condition, because only 30% of glitches were associated with outbursts. In the outburst cases, the glitches tend to induce larger frequency changes compared to the unassociated ones. We argue that a larger glitch is more likely to trigger the outburst mechanism, either by reconfiguration of the magnetosphere or deformation of the crust. More frequent and deeper monitoring of magnetars is necessary for further investigation of their connection.
INTRODUCTION
Pulsars are one of the most precise clocks in the Universe. They are powered by rotational energy and show periodic signals with gradual spin-down. Two types of timing irregularities, timing noise and glitches, are commonly found in young pulsars. They provide hints of the stellar interior and its interaction with the magnetosphere. Timing noise has several forms, and its mechanisms remain unclear (see, e.g., Lyne, Hobbs, Kramer, Stairs, & Stappers, 2010). A glitch is a sudden increase of the spin frequency, and it is often followed by a recovery (Espinoza, McCulloch, Hamilton, Royle, & Manchester, 1983). A statistical analysis shows that the glitch size is bimodally distributed, which could indicate different triggering mechanisms. Several theoretical interpretations have been proposed, such as rearrangement of the crust shape triggered by starquakes (Baym, Pethick, Pines, & Ruderman, 1969; Baym & Pines, 1971) and a catastrophic breakdown of vortex pinning in the superfluid component (Alpar, Pines, Anderson, & Shaham, 1984; Anderson & Itoh, 1975).
Magnetars are a special class of pulsars that contain extremely high magnetic fields (see review by Kaspi & Beloborodov, 2017). The most remarkable features of them are the short-term bursts with a time scale of seconds and long-term outbursts with a time scale from months to years. They usually have thermal luminosities higher than that inferred from the spin-down and hence are believed to be powered by the decay of the magnetic field (Duncan & Thompson, 1992). The triggering mechanisms of the burst and outburst remain clouded with controversy. A burst could be triggered internally such as instability of the core and cracking of the crust (Thompson & Duncan, 1995, 2001, or externally like a sudden reconnection of a twisted magnetosphere (Lyutikov, 2003;Parfrey, Beloborodov, & Hui, 2013).
An outburst, generally accompanied by an intensive burst epoch (Woods et al., 2007), could be powered by gradually untwisting of the magnetosphere (Beloborodov, 2009;Thompson, Lyutikov, & Kulkarni, 2002). Observations of several magnetars showed that additional hotspots, which are originated from the bombardment by particles accelerated in the magnetosphere, shrank gradually during the tail of the outburst and hence supporting this model (Beloborodov & Li, 2016).
Magnetars also show glitches frequently. Five bright magnetars, 1E 1841−045, 1RXS J170849.0−400910, 1E 2259+586, 4U 0142+61, and 1E 1048.1−5937, have been monitored with the Rossi X-ray Timing Explorer (RXTE) between 1996 and 2012, and 17 glitches/timing anomalies were observed (Dib & Kaspi, 2014). Their fractional glitch sizes (Δν/ν) are huge, but their absolute sizes (Δν) spread over a wide range with much lower values than those of Vela-like pulsars. Moreover, all the outbursts were accompanied by glitches, but not vice versa (Dib & Kaspi, 2014). Timing anomalies and radiative outbursts are believed to have some connection because they share several common origins. However, it remains unclear if glitches associated with outbursts have any distinct properties compared to others. This motivates us to examine the differences between these two types of glitches with an extended database.
We describe the current glitch sample of magnetars in section 2. The statistic of the glitch size and its correlation with physical properties are shown in section 3. We discuss the possible connection between the radiative outbursts and the glitches in Section 4. We then summarize our work and propose future prospects in Section 5.
ANALYSIS RESULTS
We first investigate the glitch size distribution of magnetars. Figure 1 shows the histogram of jumps in frequency (Δν) and frequency derivative (Δν̇). Glitches without outbursts have Gaussian-like distributions in 10⁻⁸ Hz < Δν < 10⁻⁶ Hz and 10⁻¹⁶ Hz s⁻¹ < |Δν̇| < 10⁻¹² Hz s⁻¹. They are consistent with the high end of the glitch distribution of the major pulsar population, although they are mainly observed from a limited sample. On the other hand, glitches with outbursts have a much wider distribution in Δν but no significantly different distribution in |Δν̇|. They occupy the saddle between the major pulsar population and the Vela-like pulsars.
We then search for the connection between Δν and the physical parameters, including the characteristic age and the B-field strength. All the glitches of canonical RPPs are also included for comparison. We adopt ∼480 glitches in canonical RPPs from the Pulsar Glitch Catalog 2. The result is shown in Figure 2. We found that magnetar glitches with outbursts show larger sizes than those without outbursts, but they have no significant dependence on characteristic age or B-field.
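For readers who want to reproduce the quantities used on the axes of Figure 2, the sketch below computes the characteristic age and the inferred surface dipole field from a pulsar's period and period derivative using the standard spin-down relations; the example values are placeholders, not entries from our glitch sample.

```python
# Characteristic age and dipole B-field from P and Pdot (standard spin-down relations).
import numpy as np

SEC_PER_YEAR = 3.156e7

def characteristic_age_yr(p, pdot):
    """tau_c = P / (2 Pdot), returned in years."""
    return p / (2.0 * pdot) / SEC_PER_YEAR

def dipole_field_gauss(p, pdot):
    """B ~ 3.2e19 * sqrt(P * Pdot) gauss (vacuum dipole estimate)."""
    return 3.2e19 * np.sqrt(p * pdot)

# Placeholder timing parameters (roughly magnetar-like values).
p, pdot = 6.0, 5.0e-11          # spin period [s], period derivative [s/s]
dnu = 1.0e-6                    # glitch size in frequency [Hz]

print(f"tau_c ~ {characteristic_age_yr(p, pdot):.2e} yr")
print(f"B     ~ {dipole_field_gauss(p, pdot):.2e} G")
print(f"fractional glitch size dnu/nu ~ {dnu * p:.2e}")   # nu = 1/P
```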
We further investigate the relation between Δν and Δν̇ (see Figure 3). Since Vela-like pulsars have large Δν ∼ 10⁻⁵-10⁻⁴ Hz and large Δν̇ ∼ 10⁻¹⁴-10⁻¹² Hz s⁻¹, they occupy the upper-right corner. Other glitches show a positive correlation between Δν and Δν̇, but several outliers can be seen in between (5 × 10⁻⁷ Hz ≲ Δν ≲ 10⁻⁵ Hz and 10⁻¹⁶ Hz s⁻¹ ≲ Δν̇ ≲ 10⁻¹¹ Hz s⁻¹). Magnetars' glitches without outbursts followed the positive trend well, implying that they belong to the major glitch class. On the other hand, those glitches with outbursts are distributed between the positive correlation trend and the Vela-like glitches. They could belong to the outliers and may have different triggering mechanisms.
Table notes: (a) Followed by a spin-down glitch of Δν = −1.27(2) × 10⁻⁸ and a short-term, limited flux increase. (b) Followed by a spin-down glitch of Δν = −3.7(1) × 10⁻⁸ and a short-term, limited flux increase. (c) A change in torque, but unlikely to be a glitch.
It has been proposed that magnetars have strong toroidal B-fields. This non-dipolar component drives the decay of the B-field and heats the surface of the magnetars (Glampedakis, Jones, & Samuelsson, 2011; Pons & Geppert, 2007; Pons, Miralles, & Geppert, 2009). The thermal luminosity could provide hints about the hidden B-field components and ages of magnetars (Viganò et al., 2013). Therefore, we plot the glitch sizes against thermal luminosities in Figure 4. A more luminous magnetar is believed to be a younger one with a higher total B-field. Most of the glitches without outbursts are observed from these bright sources. Glitches with outbursts have a size distribution with a higher mean value and a wider deviation. Unfortunately, the timing behaviors of those transient magnetars in quiescence are difficult to monitor due to the insufficient sensitivity of current X-ray observatories. PSR J1119−6127 is the only source that shows one glitch without an outburst (Δν = 1.1 × 10⁻⁸ Hz) and a quiescent thermal luminosity of ∼2 × 10³³ erg s⁻¹. In contrast, its glitch accompanied by an outburst has a much larger size of Δν = 1.4 × 10⁻⁵ Hz. This provides a hint that the glitch size could be an important factor in the triggering of outbursts.
DISCUSSION
We have collected historical glitch events of magnetars and found that glitches associated/unassociated with outbursts could have a bimodal distribution in Δν, although it could be biased due to a limited sample. Moreover, we do not observe significant age and B-field dependences of glitches in magnetars. They are consistent with glitches of canonical RPPs with characteristic ages ≲ 10⁵ yr on the Δν̇-Δν plot. The current leading neutron star model suggests a superfluid layer under the solid crust. The angular momentum of the superfluid component is proportional to the density of vortices, which are pinned to the lattice in the inner crust. This effectively forms a detached component containing a higher angular momentum as the neutron star (NS) spins down. When the pinning force suddenly breaks, the vortices migrate outwards and carry angular momentum out. The superfluid component is attached to the rest of the NS and causes a sudden spin-up (Anderson & Itoh, 1975).
FIGURE 3 Glitch size Δν versus Δν̇ for RPPs and magnetars.
From statistics, radiative outbursts are almost always accompanied with glitches (Dib & Kaspi, 2014). Radiative outbursts are believed to be determined by the untwisting of the closed field lines (Beloborodov, 2009;Beloborodov & Thompson, 2007). The footprints of the magnetic field lines could be twisted by the motion of the crust. Hence, the triggering mechanism and location of the glitch could determine whether radiative outbursts will occur or not (Archibald et al., 2017). Moreover, the degree of twist could also correlate with the glitch size. We compared the glitch size and the flux increment of the outburst but found no significant correlation.
It was also suggested that glitches are always accompanied by radiative events, but bright sources show limited flux increases and much shorter decay time scales compared to faint magnetars (Pons & Rea, 2012). Two tiny radiative outburst events accompanied by glitches were indeed observed in 4U 0142 (Archibald et al., 2017; Gavriil et al., 2011). This could explain the lack of correlation between the glitch size and flux increment. However, a difference in size between glitches with/without outbursts is seen in Figures 1 and 3. Those glitches with outbursts and small Δν values are mainly observed from bright magnetars. Their sizes are comparable to those of glitches of regular RPPs with characteristic ages < 10⁵ yr. Other glitches with huge sizes are observed in faint sources with violent and long outbursts. They are not located on the linear trend of regular RPPs or in the clustering region of Vela-like pulsars (Figure 3). We suggest that these glitches involve a more violent deformation of the crust and cause a significant twist of the B-field lines. Similar events could also be seen in canonical RPPs, but their B-fields are not strong enough to trigger radiative events. Because the sample remains limited, we are unable to determine if violent glitches with outbursts occur more often in faint magnetars or in high-B-field RPPs. Fortunately, PSR J1119−6127 provides a good opportunity to test the connection between glitches and outbursts because its timing behavior in the quiescent state can be tracked in the radio band (Janssen & Stappers, 2006; Weltevrede, Johnston, & Espinoza, 2011; Weltevrede et al., 2011). The glitch accompanied by an outburst has the largest size, which supports the above idea. Therefore, monitoring the timing behaviors of high-B-field RPPs in radio bands could play an important role in exploring the connection between glitch size and outburst behavior. Moreover, monitoring faint magnetars in quiescence with future X-ray missions is also critical to see if there are any glitches that do not trigger outbursts.
SUMMARY
We have carried out a comprehensive analysis of glitches in the current magnetar sample. A bimodal distribution of Δν is observed. The glitch size does not show a significant correlation with characteristic age or B-field. Glitches without outbursts are fully consistent with glitches in regular RPPs on the Δν-|Δν̇| plot, while those with outbursts have a distribution with larger sizes and are more consistent with the glitches scattered between regular RPPs and Vela-like pulsars in the Δν-|Δν̇| plot. Unfortunately, the lack of knowledge about the timing properties of low-luminosity magnetars in quiescence prevents us from drawing a strong conclusion on the connection between timing irregularities and radiative outbursts. Monitoring them with future X-ray missions and monitoring high-B-field RPPs in other wavelengths will be helpful for building a complete sample. | 2019-03-23T00:13:19.000Z | 2019-03-23T00:00:00.000 | {
"year": 2019,
"sha1": "758c0cca63b4414dfe3e6eddf6e49bc1dbd2b457",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1903.09736",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "758c0cca63b4414dfe3e6eddf6e49bc1dbd2b457",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
51704682 | pes2o/s2orc | v3-fos-license | Calibration and Noise Identification of a Rolling Shutter Camera and a Low-Cost Inertial Measurement Unit
A low-cost inertial measurement unit (IMU) and a rolling shutter camera form a conventional device configuration for localization of a mobile platform due to their complementary properties and low costs. This paper proposes a new calibration method that jointly estimates calibration and noise parameters of the low-cost IMU and the rolling shutter camera for effective sensor fusion in which accurate sensor calibration is very critical. Based on the graybox system identification, the proposed method estimates unknown noise density so that we can minimize calibration error and its covariance by using the unscented Kalman filter. Then, we refine the estimated calibration parameters with the estimated noise density in batch manner. Experimental results on synthetic and real data demonstrate the accuracy and stability of the proposed method and show that the proposed method provides consistent results even with unknown noise density of the IMU. Furthermore, a real experiment using a commercial smartphone validates the performance of the proposed calibration method in off-the-shelf devices.
Introduction
An inertial measurement unit (IMU) and a camera form a widely used sensor configuration for mobile platform localization. For example, visual-inertial SLAM [1][2][3][4][5][6] is an alternative navigation method in GPS-denied environments such as tunnels and indoor areas where GPS signals are not available. In particular, a MEMS-based IMU and a rolling shutter camera, which capture images line-by-line, are commonly used owing to their low-cost sensing capability.
Most IMU-camera calibration methods estimate the calibration parameters with fixed noise densities of the IMU and camera measurements, which are typically handled as tuning parameters. To achieve accurate calibration, we need to know such noise densities of the IMU and camera. Fortunately, for cameras, the variation in noise densities is not large because calibration patterns (e.g., a checkerboard) are used for corner extraction. Therefore, it is dependent on feature extraction algorithms, whose accuracy in terms of pixels can be easily evaluated and is well-known [20]. In contrast, setting the IMU noise density parameters is difficult and heuristic, because IMUs have various noise densities according to the types and costs of IMUs, and the noise density is not intuitive. Furthermore, noise information of low-cost IMUs is not provided in general.
To address these problems, we jointly estimate the noise density of the low-cost IMU and the intrinsic/extrinsic calibration parameters through the graybox system identification, which estimates the unknown parameters describing the prediction of the dynamic system well [21][22][23]. In general, the graybox method is implemented on a nonlinear optimization problem that minimizes the residual between the predicted and observed measurements of filtering (i.e., the Kalman filter). The proposed framework is composed of two types of graybox methods, which include filtering and optimization, for calibration and noise identification, respectively.
For calibration, rather than finding the calibration parameters in a single optimization step, we divide the calibration process into the filtering step for initialization and the optimization step for refinement. As a result, the convergence time of our framework becomes much faster than the single optimization method. This is because it finds sub-optimal calibration parameters in the filtering step, whose computation time is more efficient than the optimization, and then uses the parameters as an initial value of optimization. Hence, the good initial estimates from filtering reduces the convergence time of optimization for the calibration problem. In fact, it is difficult to design both the calibration and noise parameter estimation problems together in the single optimization step. In the filtering step, we utilize the unscented Kalman filter (UKF) to address the strong nonlinearity of our calibration problem, and its state vector consists of the intrinsic/extrinsic calibration parameters as well as the IMU motion parameters including the position, velocity, and orientation. In the optimization step, we refine the calibration parameters with the noise density estimated from the previous filtering step, while the UKF only estimates the IMU motion including the position, velocity, and orientation.
For noise identification, the optimizer estimates the noise density that attempts to make the UKF converge properly while enabling the estimation of calibration parameters by UKF. Unlike the previous graybox method [23], we define both the prediction errors and state error covariances, which is a constraint, as the cost function of optimization. The IMU noise density is parameterized in terms of system noise covariances in UKF, and both the prediction errors and state error covariances in the cost function involve the system noise covariance in each frame. Without the constraint, the optimization solver forces the system noise covariance to become very large, because it estimates the noise density, which is over-fitted to measurements minimizing the errors (i.e., residuals) in UKF. To avoid such divergence, the proposed method not only minimizes the residuals but also constrains the state error covariances. The initialization of the noise parameters for this optimization is performed using a grid search strategy with the cost function for noise estimation.
We demonstrate the overall procedure of the proposed method with the input and output in Figure 1, and summarize the outputs as follows. Figure 1. Overall framework of the proposed algorithm. x 0 is the initial state vector and θ i,c,e indicates IMU intrinsics (i), camera intrinsics (c), and extrinsics (e). u and z denote inertial and visual measurements, respectively. R is a known measurement noise covariance. The notation· denotes the final estimates and subscript 0 denotes an initial step.
The remainder of this paper is organized as follows. Section 2 briefly reviews the previous studies related to IMU-camera calibration. Section 3 illustrates the problem formulation for our calibration method. A detailed description about the calibration system model and its measurement model is provided in Section 4. Section 5 describes the framework and the implementation of our calibration method. Section 6 analyzes the experimental results to validate the advantages of the proposed calibration method. Finally, we conclude the paper in Section 7.
Related Work
As mentioned in Section 1, the goal of calibration is to obtain the intrinsic parameters of an IMU and a camera and extrinsic parameters of these two sensors. In this section, we review the separate calibration methods of each sensor and the joint calibration methods of an IMU and a camera. In addition, we introduce several studies related to the noise identification of an IMU.
Separate Calibration
The intrinsic calibration of a camera is a well-known problem in computer vision [24]. Therefore, rather than reviewing conventional camera intrinsic calibration methods, we focus on rolling shutter calibration, which additionally finds the readout time delay between lines. Geyer et al. [10] proposed the exploitation of LED flashing at a high frequency to estimate the delay. They assumed constant velocity models and removed the lens for good illumination. In contrast, Ringaby et al. [11] introduced a more efficient approach that does not need to remove lens. O'Sullivan and Corke [25] designed a new rolling shutter camera model, which can handle arbitrary camera motions and projection models such as fisheye or panoramic model. Oth et al. [12] used video sequences with a known calibration pattern. They used a continuous-time trajectory model with a rolling shutter camera model, and applied a batch optimization approach to estimate the sequential camera poses and the line delay of the rolling shutter camera.
The intrinsic calibration of an IMU has been performed using mechanical platforms such as robotic manipulators, which precisely moves IMU along known motion trajectories [7][8][9]. The intrinsic parameters of an accelerometer and a gyroscope are estimated by comparing the output of IMU and the motion dynamics, i.e., acceleration and angular velocity, which represent the generated known motion. In [26,27], authors proposed to exploit a marker-based motion tracking system or GPS measurements rather than using the expensive mechanical platforms. Recently, calibration methods that do not require any additional device have also been proposed in [13,28].
The extrinsic calibration of an IMU and a camera has been studied for the last decade in robotics community. Mirzaei and Roumeliotis [14] first presented a filter-based IMU-camera calibration method. They exploited the extended Kalman filter (EKF) to approximate nonlinear system and measurement models. Hol and Gustafsson [23] adopted the graybox system identification with which they integrated a filter-based framework and an optimization method. Kelly and Sukhatme [2] employed UKF to handle the nonlinear calibration model. They also proposed a general self-calibration algorithm that does not require additional equipment. Unlike other approaches, they included gravity as one of the state parameters to be estimated considering geo-location dependency. Dong-Si and Mourikis formulated rotation calibration as a convex problem [29]. Fleps et al. [30] proposed a new cont function that includes alignment between each trajectory of an IMU and a camera. Furgale et al. [15] proposed spatial and temporal calibration by modeling the IMU and camera motion based on a spline function. In photogrammetry community, the calibration of multiple heterogeneous sensors is called bore-sight alignment. Blazquez and Colomina [31] utilized INS/GNSS and cameras on unmanned aerial vehicle (UAV), and estimated relative boresight (rotation) and lever-arm (translation) between these sensors. Cucci et al. [32] proposed self-calibration of the sensors and 3D pose estimation using a low-cost IMU, GNSS, and cameras in a small UAV.
Joint Calibration
Joint estimation of the intrinsic and extrinsic parameters of an IMU and a camera is efficient because it does not need to conduct calibration independently for each sensor. Furthermore, it generates more accurate results because it minimizes the intrinsic and extrinsic calibration errors simultaneously. Moreover, the joint estimation is more practical since most mobile devices are equipped with a low-cost IMU and a rolling shutter camera. The joint estimation has been introduced in [16], where a low-cost IMU and a pinhole camera model were used, and in [33], where a low-cost gyroscope and a rolling shutter camera were used. Li et al. additionally considered the temporal synchronization between the IMU and the camera and the rolling shutter camera model for calibration of a low-cost IMU and a rolling shutter camera [34,35]. Lee et al. proposed to suppress uncertain noises of low-cost IMU for IMU-camera calibration [18]. Rehder et al. proposed the calibration of multiple IMUs and a camera, and considered the transformation between individual axes of the IMUs [19]. Li et al. introduced self-calibration methods for a low-cost IMU-camera system while estimating sequential camera poses in vision-aided inertial navigation [17]. It estimates the intrinsic parameters, extrinsic parameters, rolling shutter readout time, and temporal offset between an IMU and camera measurements. We highlight the differences between the proposed method and the aforementioned methods in Table 1. Table 1. Comparison of related studies: the parameters denote IMU-camera translation ( I C t), IMU-camera rotation ( I C q), IMU-camera time delay (λ t ), translation between IMU axes ( B I t), camera focal length ( f ), camera radial distortion (d), rolling shutter readout time (λ rs ), IMU bias (b a,g ), IMU scale factor(s a,g ), IMU misalignment(m a,g ), and IMU noise density (σ a,g ). Refer to Figure 2 and Table 2 for the notations I, C, and W.
Noise Identification
The Allan variance (AV) and the non-parametric power spectral density are the classical tools used to describe noise characteristics of an IMU. Refer to [36] for a detailed description of these procedures. Vaccaro et al., introduced the automatic noise identification by computing and matching the covariances of the AV [37]. In addition, the IMU noise parameter estimation methods were proposed in [38,39] by using a maximum likelihood with the KF in both online and offline manners.
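As a reference for readers unfamiliar with the tool, a basic non-overlapping Allan variance computation for a gyroscope (or accelerometer) sample stream is sketched below. It implements the standard cluster-averaging definition and is not taken from any of the implementations cited above; the sampling rate, cluster sizes, and input signal are all placeholders.

```python
# Sketch: non-overlapping Allan variance of an inertial rate signal sampled at rate fs.
import numpy as np

def allan_variance(rates, fs, cluster_sizes):
    """rates: 1-D array of gyro/accel samples; returns (tau, avar) arrays."""
    taus, avars = [], []
    for m in cluster_sizes:                       # m = samples per cluster
        n_clusters = rates.size // m
        if n_clusters < 2:
            break
        clipped = rates[: n_clusters * m].reshape(n_clusters, m)
        means = clipped.mean(axis=1)              # cluster averages y_k
        diffs = np.diff(means)
        taus.append(m / fs)
        avars.append(0.5 * np.mean(diffs ** 2))   # AVAR(tau) = <(y_{k+1} - y_k)^2> / 2
    return np.array(taus), np.array(avars)

# The white-noise density (angle/velocity random walk) is usually read off the
# sqrt(AVAR) curve where its log-log slope is -1/2, e.g., at tau = 1 s.
```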
Problem Formulation
Our system comprises a low-cost IMU and a rolling shutter camera, and these two sensors are rigidly connected as in Figure 2. The IMU measures 3-DOF acceleration and 3-DOF angular velocity, and a rolling shutter camera captures an image row-by-row. In this paper, we represent the coordinates of the IMU, the camera, and the world by {I}, {C}, and {W}, respectively. These notations are used as a left superscript to denote a reference coordinate and a left subscript to represent a target coordinate, as shown in Figure 2. For example, the 3D position p of the IMU coordinate {I} from the world coordinate {W} is represented by W I p, and the 3D translation t of the camera {C} coordinate from the IMU coordinate {I} is denoted by I C t. To generate calibration data, we sequentially obtain inertial measurements and images by capturing a checkerboard along three-axis linear and angular motion. The inertial measurements u from the IMU are composed of acceleration a ∈ R 3 and angular velocity w ∈ R 3 . Visual measurements z ∈ R 2×M comprise M corners of the checkerboard in the image coordinate. Given the visual and inertial measurements, we formulate the calibration and noise identification as a minimization problem as follows: θ̂ = arg min θ E(u, z, θ), (1) where the cost function E(·) illustrates the alignment error between motions of the IMU and the camera, which is computed from the visual and inertial measurements u, z, with the calibration and noise parameters denoted by θ = {θ i , θ c , θ e , θ n }. The intrinsic parameter θ i of the IMU describes the inherent difference between the raw measurements of sensors and the real-world value. The intrinsic parameter θ c of the rolling shutter camera explains the mapping from the camera coordinate to the image coordinate. The extrinsic parameter θ e between the IMU and the camera describes the relative geometric difference between the IMU coordinate and the camera coordinate. The noise parameter θ n of the IMU represents noise characteristics of inertial measurements.
Equation (2) expresses these parameter groups through the functions a(·), b(·), c(·), and d(·), the last of which corresponds to the IMU noises; there, u indicates the intrinsically calibrated inertial measurements, and X C ∈ R 3 indicates the position of a three-dimensional point in the camera coordinate. The calibration and noise parameters are composed of several variables. Table 2 shows a detailed description of the unknown variables in each parameter.
Model
In this section, we describe the system and measurement models for calibration, x k = F(x k−1 , u k−1 ) + w k−1 and z k = H(x k ) + v k (Equation (3)), as follows.
where the state vector x represents the sensor motion state and calibration parameters and is propagated with the inertial measurements u based on the system model F(·). The nonlinear measurement model H(·) describes the relation between the state vector and the measurement z. w is a process noise and v is a measurement noise. Both are assumed to be white Gaussian noise, w ∼ N(0, Q) and v ∼ N(0, G), with the noise covariances Q and G, respectively. Subscripts k − 1 and k denote time steps. We explain the aforementioned notations in detail as follows.
We include the sensor motion parameters in the state vector. Therefore, the state vector x ∈ R 40 is composed of a motion state ρ, extrinsic parameters θ e , IMU intrinsic parameters θ i , and camera intrinsic parameters θ c .
Here, the motion state ρ ∈ R 10 is represented by ρ = [ W I p, W I v, W I q ] (stacked as a column vector), where W I p ∈ R 3 and W I v ∈ R 3 indicate the position and velocity of the IMU coordinate from the world coordinate, respectively, and W I q ∈ R 4 is an orientation expressed as a unit quaternion of the IMU coordinate from the world coordinate.
The extrinsic parameter θ e ∈ R 8 is denoted by θ e = [ C I t, C I q, λ d ] (stacked as a column vector), where C I t ∈ R 3 is a translation from the IMU coordinate to the camera coordinate, C I q ∈ R 4 is a rotation represented as a unit quaternion from the IMU coordinate to the camera coordinate, and λ d is a temporal delay between the IMU and the camera.
The IMU intrinsic parameter θ i ∈ R 18 is represented by θ i = [ b a , m a , s a , b g , m g , s g ] (stacked as a column vector), where b a ∈ R 3 , m a ∈ R 3 , and s a ∈ R 3 indicate a bias, a misalignment, and a scale factor of the accelerometer, respectively, and b g ∈ R 3 , m g ∈ R 3 , and s g ∈ R 3 are the corresponding parameters of the gyroscope. The camera's intrinsic parameter θ c ∈ R 4 is represented by θ c = [ f, d, λ rs ], where f ∈ R is the focal length, d ∈ R 2 is the radial distortion, and λ rs ∈ R is the rolling shutter readout time of the rolling shutter camera.
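To make the bookkeeping of the 40-dimensional state concrete, the following sketch shows one possible packing of the motion state and the calibration parameters into a single vector. The ordering follows the description above but is otherwise an implementation choice, not something prescribed here; all initial values are illustrative.

```python
# Sketch: packing the 40-D state x = [rho(10), theta_e(8), theta_i(18), theta_c(4)].
import numpy as np

def pack_state(p, v, q, t_ic, q_ic, lam_d, b_a, m_a, s_a, b_g, m_g, s_g, f, d, lam_rs):
    """Concatenate the motion state and calibration parameters into x in R^40."""
    rho     = np.concatenate([p, v, q])                       # position(3) + velocity(3) + quaternion(4)
    theta_e = np.concatenate([t_ic, q_ic, [lam_d]])           # translation(3) + rotation quat(4) + delay(1)
    theta_i = np.concatenate([b_a, m_a, s_a, b_g, m_g, s_g])  # accel/gyro bias, misalignment, scale (18)
    theta_c = np.concatenate([[f], d, [lam_rs]])              # focal(1) + radial distortion(2) + readout(1)
    return np.concatenate([rho, theta_e, theta_i, theta_c])

x0 = pack_state(
    p=np.zeros(3), v=np.zeros(3), q=np.array([1.0, 0, 0, 0]),
    t_ic=np.zeros(3), q_ic=np.array([1.0, 0, 0, 0]), lam_d=0.0,
    b_a=np.zeros(3), m_a=np.zeros(3), s_a=np.ones(3),
    b_g=np.zeros(3), m_g=np.zeros(3), s_g=np.ones(3),
    f=1280.0, d=np.zeros(2), lam_rs=0.0)
assert x0.shape == (40,)
```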
System Model: Low-Cost IMU
We describe the system model F(·) in Equation (3) performing the transition of the state vector x. The visual and inertial measurements are obtained at different frame rates and their time intervals have some variation. An example of this time interval is demonstrated in Figure 3. The state transition (or prediction) is sequentially performed with the inertial measurements u s until a new visual measurement is obtained, as demonstrated in Figure 3. The number of predictions becomes #prediction = #u s + 1. We define the time interval ∆t for each prediction as follows.
where k is the time-step of the visual measurements, while s is that of the inertial measurements. The nonlinear system model in Equation (3) is defined as follows.
Here, the motion state is sequentially predicted with the IMU measurement u s and the time interval ∆t, and the other calibration parameters are considered as constant because they are static and two sensors are rigidly connected. Note that we do not regard the bias states as smoothly time varying parameters, unlike [19] because the input inertial measurements for calibration are obtained for a short time (about 1-2 min).
where the quaternion kinematic function Ω : R 3 → R 4×4 and the angular velocity w s explain the time variation of IMU orientation. The linear acceleration W I a s and angular velocity w s are converted from the raw measurements a m and w m via the intrinsic parameters of the IMU as follows.
where W I R ∈ R 3×3 is a rotation matrix expressing the orientation of the IMU, which is converted from W I q, and g is the gravity vector in the world coordinate. The scale factor matrices S a and S g are diagonal matrices generated from the scale factor states s a and s g , and the misalignment matrices (i.e., M a and M g ) between the orthogonal IMU coordinate and the accelerometer/gyroscope coordinates are upper-triangular; their lower-triangular elements are zero because we assume that the x-axes of the accelerometer and the gyroscope coincide with one axis of the orthogonal IMU coordinate frame, as in [13,19].
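A compact sketch of this prediction step is given below. The order in which the bias, scale factor, and misalignment are undone, and the sign convention for gravity, follow one common IMU formulation and only approximate the equations referred to above; SciPy's rotation utilities are used for brevity, and all parameter values would come from the state vector.

```python
# Sketch of the IMU-driven state prediction (approximation of the system model above).
import numpy as np
from scipy.spatial.transform import Rotation as R

GRAVITY = np.array([0.0, 0.0, 9.81])   # world-frame gravity, z-axis vertical

def correct_imu(raw, bias, scale, misalign):
    """Undo bias, scale factor and (upper-triangular) misalignment of a raw 3-axis sample."""
    S = np.diag(scale)
    M = np.array([[1.0, misalign[0], misalign[1]],
                  [0.0, 1.0,         misalign[2]],
                  [0.0, 0.0,         1.0]])       # lower-triangular part assumed zero
    return np.linalg.solve(M @ S, raw - bias)     # assumed measurement convention raw = M S x + b

def predict(p, v, q_wxyz, a_raw, w_raw, theta_i, dt):
    """Strapdown propagation of position, velocity and orientation over dt."""
    b_a, m_a, s_a, b_g, m_g, s_g = theta_i
    a_body = correct_imu(a_raw, b_a, s_a, m_a)
    w_body = correct_imu(w_raw, b_g, s_g, m_g)
    rot = R.from_quat([q_wxyz[1], q_wxyz[2], q_wxyz[3], q_wxyz[0]])   # SciPy uses xyzw order
    a_world = rot.apply(a_body) - GRAVITY          # gravity sign convention assumed
    p_new = p + v * dt + 0.5 * a_world * dt ** 2
    v_new = v + a_world * dt
    rot_new = rot * R.from_rotvec(w_body * dt)     # first-order quaternion integration
    x, y, z, w = rot_new.as_quat()
    return p_new, v_new, np.array([w, x, y, z])
```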
Measurement Model: Rolling Shutter Camera
The measurement model in Equation (3) describes the relation between the state vector x and the measurements z. The visual measurement z is a set of calibration pattern corners obtained from an image. Unlike a global shutter camera, a low-cost rolling shutter camera captures an image row-by-row. The projective geometry of a rolling shutter camera is formulated as follows.
where W X ∈ P 3 denotes a 3D point in the world coordinate. The intrinsic matrix is represented by K ∈ R 3×3 . The notation ∼ denotes normalization of a projected point. The rotation matrix and translation vector of the world coordinate from a camera coordinate are represented by C W R ∈ SO(3) and C W t ∈ R 3 , respectively. They are computed from our state vector, which is the position W I p and orientation W I q of the IMU from the world coordinate and transformation I C t, I C q between the IMU and the camera.
Here, C v R and C v t denote the rolling shutter transformation from the center row (h/2) to a row v of the projected point, because we use the center row as the reference row of an image when defining the rolling shutter distortion; h is the height of the image. To compute the rolling shutter transformation, we use the Cayley transform model described in [40].
where λ rs is the rolling shutter readout time included in the state vector and C w and C v are angular and linear velocities of the camera. The angular and linear velocities are computed from gyroscope measurements and states.
where C I R is the rotation of the camera from the IMU, I w s is an intrinsically calibrated angular velocity of the IMU in Equation (12), and W I v k|k−1 is a predicted velocity of the IMU. We use them at the closest point to the time step t img + λ d , as follows.
Finally, each corner z i ∈ z in the visual measurement is obtained by considering radial distortion.
where the subscript i is an index of a projected point and {d 1 , d 2 } ⊂ d is the radial distortion state.
To utilize multiple points, we concatenate the corner positions in the image coordinate as follows.
where M is the number of corner points.
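The measurement model can be summarized in code as follows. The sketch projects one checkerboard corner with a per-row pose correction and then applies radial distortion; the small-rotation row correction and the pinhole/distortion conventions are simplifications of the equations above (including their sign conventions), and all variable values are placeholders.

```python
# Sketch of the rolling shutter projection of a single 3-D corner (simplified model).
import numpy as np

def skew(v):
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])

def project_rs(X_w, R_cw, t_cw, K, cam_w, cam_v, lam_rs, img_h, d, n_iter=3):
    """Project world point X_w with a row-dependent camera-motion compensation."""
    x = K @ (R_cw @ X_w + t_cw)
    u, v = x[:2] / x[2]                        # global-shutter estimate of the row
    for _ in range(n_iter):                    # fixed-point refinement of the row index
        tau = lam_rs * (v - img_h / 2.0)       # readout delay relative to the center row
        R_row = np.eye(3) + skew(cam_w * tau)  # small-rotation (first-order) approximation
        t_row = cam_v * tau
        x = K @ (R_row @ (R_cw @ X_w + t_cw) + t_row)
        u, v = x[:2] / x[2]
    # Radial distortion with coefficients d = (d1, d2), about the principal point.
    cx, cy, fx, fy = K[0, 2], K[1, 2], K[0, 0], K[1, 1]
    xn, yn = (u - cx) / fx, (v - cy) / fy
    r2 = xn * xn + yn * yn
    scale = 1.0 + d[0] * r2 + d[1] * r2 * r2
    return np.array([cx + fx * xn * scale, cy + fy * yn * scale])
```

The predicted measurement vector is then simply the concatenation of the outputs of this projection for the M corners, matching the stacked form described above.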
Proposed Method
Our calibration and noise identification is based on the graybox system identification [22,41], which is a nonlinear optimization method minimizing the residuals of the Kalman filter (KF). Figure 1 demonstrates the overall framework of the proposed method. We first initialize the noise parameters by grid search because we do not have any prior knowledge on the noise. Then, we estimate the calibration and noise parameters with our novel graybox method, whose cost function considers not only residuals but also state error covariances of KF. Here, the noise parameters are estimated by nonlinear optimization, while the UKF estimates the motion state and the calibration parameters together. Finally, in the refinement step, the optimization module estimates the calibration parameters using the conventional graybox-based calibration [23], whose cost function considers only residuals, while the UKF module only estimates the motion state.
We define the cost function in Equation (1) as a square form of the state error covariances of the UKF and the residuals between the predicted and observed measurements.
where diag(·) is the diagonal of a square matrix and M is the number of visual measurements. The covariance S k|k is computed in filtering with the updated state error covariance P k|k and a Jacobian matrix of the measurement function H. The computation of the predicted measurements and the state error covariances in UKF is described in Equation (22). For the initialization of the noise parameters, we use the same cost function and a grid search strategy. The search range is 0 < θ n < 1, and the grid size is 5 with log-scale intervals. Figure 4 demonstrates the cost map computed by the grid search. Then, we estimate the noise and calibration parameters using the graybox method. For the optimization, we exploit the Levenberg-Marquardt algorithm, a nonlinear least-squares solver, to solve arg min over θ n of E 1 (θ n ) in Equation (19). At this time, the calibration parameters are simultaneously estimated by the UKF. Therefore, we estimate the noise parameters that result in the optimal convergence of the UKF for calibration.
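The noise-identification objective and its grid-search initialization can be written compactly as below. Here run_ukf is a stand-in for the filter described in the next subsection, assumed to return the innovation and the diagonal of the innovation/state covariance per frame; the cost combines the squared residuals with the covariance diagonal, and the 5-point log-spaced grid mirrors the search described above (the lower bound of 1e-3 is an assumption).

```python
# Sketch of the noise-identification cost E1 and its log-scale grid-search initialization.
import numpy as np

def noise_cost(theta_n, run_ukf, measurements):
    """E1(theta_n): squared innovations plus diagonal of the covariance, summed over frames."""
    cost = 0.0
    for residual, cov_diag in run_ukf(theta_n, measurements):
        cost += float(residual @ residual) + float(np.sum(cov_diag))
    return cost

def grid_init(run_ukf, measurements, n_grid=5):
    """Initialize (sigma_a, sigma_g) on a log-spaced grid within 0 < theta_n < 1."""
    grid = np.logspace(-3, 0, n_grid)
    best, best_cost = None, np.inf
    for sa in grid:
        for sg in grid:
            c = noise_cost(np.array([sa, sg]), run_ukf, measurements)
            if c < best_cost:
                best, best_cost = np.array([sa, sg]), c
    return best

# The grid minimizer would then seed a Levenberg-Marquardt refinement of theta_n,
# with the UKF re-run inside every cost evaluation.
```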
Noise and Calibration Parameters in UKF
We adopt the UKF [42] owing to its efficient performance on the nonlinear system and measurement model. The prediction of the state x k−1|k−1 and its error covariance P k−1|k−1 are formulated as below.
where sp is the sigma point generation function, χ is the generated sigma points of the state, the superscript i is the index of the sigma point (i = 1, · · · , N), and W s , W c are weights of sigma points for the state and covariance, respectively. The sigma points χ i k−1|k−1 are generated with the state x k−1|k−1 , the state error covariance P k−1|k−1 , and the system noise covariance Q. Each sigma point is predicted using system model F(·) with the inertial measurement u k−1 and the IMU intrinsic parameter θ i . The system noise covariance Q ρ for motion states is defined from the IMU noise parameter {σ a , σ g } ⊂ θ n . The system noise covariances for other states are set to zero because they are static. As mentioned before, we regard the bias states as constant parameters, unlike [19].
The update of the predicted state x k|k−1 and the predicted state error covariance P k|k−1 are formulated as follows.
The predicted sigma point χ i k|k−1 is transformed through the measurement model H(·) with the camera intrinsic parameter θ c and the IMU-camera extrinsic parameter θ e . Then, the Kalman gain K k is computed with the predicted measurement covariance P̂ z k z k and the state-measurement cross-covariance matrix P̂ x k z k . The predicted state and state error covariance are updated with the Kalman gain. The measurement noise covariance G is defined as a block-diagonal matrix.
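For completeness, a generic unscented prediction/update step of the kind used here is sketched below. It follows the standard unscented transform with additive noise covariances Q and G and is independent of the specific models F and H; quaternion renormalization and the other implementation details of our filter are omitted.

```python
# Generic UKF predict/update step with additive process noise Q and measurement noise G.
import numpy as np

def sigma_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    n = x.size
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)
    pts = np.vstack([x, x + S.T, x - S.T])                 # 2n+1 sigma points
    wm = np.full(2 * n + 1, 0.5 / (n + lam)); wc = wm.copy()
    wm[0] = lam / (n + lam); wc[0] = wm[0] + 1 - alpha ** 2 + beta
    return pts, wm, wc

def ukf_step(x, P, u, z, F, H, Q, G):
    # Prediction through the system model.
    pts, wm, wc = sigma_points(x, P)
    Xp = np.array([F(p, u) for p in pts])
    x_pred = wm @ Xp
    P_pred = Q + sum(w * np.outer(d, d) for w, d in zip(wc, Xp - x_pred))
    # Update through the measurement model.
    pts, wm, wc = sigma_points(x_pred, P_pred)
    Zp = np.array([H(p) for p in pts])
    z_pred = wm @ Zp
    S = G + sum(w * np.outer(d, d) for w, d in zip(wc, Zp - z_pred))
    C = sum(w * np.outer(dx, dz) for w, dx, dz in zip(wc, pts - x_pred, Zp - z_pred))
    K = C @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - z_pred)
    P_new = P_pred - K @ S @ K.T
    return x_new, P_new, z - z_pred, S   # innovation and S feed the graybox cost
```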
Refinement
We refine the estimated calibration parameters with the conventional graybox method. Here, we only use the residuals between the predicted and observed measurements in the cost function, and the filter module only estimates the IMU motion state. As a result, the fixed system noise covariance Q ρ for the 10 motion states, instead of Q for all 40 states, is used in the UKF. However, unlike [23], which estimated only extrinsic parameters, we estimate the intrinsic parameters of the IMU and the camera as well as the extrinsic parameters. Therefore, the cost function in Equation (1) reduces to minimizing the stacked measurement residuals over the calibration parameters, i.e., arg min over {θ i , θ c , θ e } of the sum of the squared residuals (z k − ẑ k|k−1 ) over all frames and corners. Since this optimization is performed in a batch manner, unlike the calibration parameter estimation by filtering in the previous step, outliers or disturbances such as abrupt motion or illumination changes can be corrected in this step.
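The batch refinement can be realized with an off-the-shelf nonlinear least-squares solver. In the sketch below, filter_innovations is a stand-in that re-runs the motion-only UKF for a candidate set of calibration parameters and returns the stacked reprojection residuals over all frames; SciPy's least_squares then performs the Levenberg-Marquardt-style minimization.

```python
# Sketch of the batch refinement over theta = (theta_i, theta_c, theta_e).
import numpy as np
from scipy.optimize import least_squares

def refine_calibration(theta0, filter_innovations, imu_data, images):
    """theta0: calibration estimate from the filtering step (used as the initial value)."""
    def residuals(theta):
        # Re-run the UKF (motion state only, fixed noise covariance) and stack
        # the innovations z_k - z_hat of every corner in every frame.
        return np.concatenate(filter_innovations(theta, imu_data, images))

    result = least_squares(residuals, theta0, method="lm")   # Levenberg-Marquardt
    return result.x

# theta_refined = refine_calibration(theta_ukf, filter_innovations, imu_data, images)
```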
UKF Initialization
In this section, we describe the initialization of the states, the state error covariance, and the measurement noise covariance for the UKF. The initial position and orientation of the IMU are computed by transforming the position and the orientation of the camera at the first frame through the initial relative rotation and translation between the IMU and the camera. The velocity of the IMU is initialized to zero. The initial focal length f is set to the width of the image. The relative rotation between the IMU and the camera C I q is initialized by the angular velocities of the IMU and estimated rotation from the camera [15]. The IMU provides the raw angular velocity measurements. To compute angular velocities of the camera, we estimate the camera orientations with respect to the checkerboard pattern using homography [24]. Then, the angular velocity is obtained from the derivative of the orientation of the camera. The other parameters-the relative translation C I t and time delay λ d between the IMU and the camera, biases b a , b g , misalignments m a , m g , distortion coefficient d, and rolling shutter readout time λ rs -are initially set to zero, and scale factors s a , s g are set to one. The initial state error covariance matrix is empirically set as shown in Table 3. Based on the pixel localization error of the corner extraction algorithm, the standard deviation σ z for the measurement noise covariance is set to 1. The gravity g in the world coordinate is set to [ 0, 0, 9.81 ] , and the z-axis of the world coordinate is the vertical direction. The initial calibration parameters for the refinement step are obtained from the estimated calibration parameter from the UKF in the calibration and noise identification step.
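The rotation initialization described above can be implemented by aligning the two angular-velocity sequences. A standard SVD-based orthogonal Procrustes (Kabsch) alignment, assuming the IMU and camera angular velocities have already been time-synchronized and resampled to common timestamps, is sketched below; the camera angular velocities would be obtained by differentiating the homography-based orientations, as stated above.

```python
# Sketch: initial IMU-to-camera rotation from paired angular-velocity vectors.
import numpy as np

def rotation_from_angular_velocities(w_imu, w_cam):
    """w_imu, w_cam: (N, 3) angular velocities at common timestamps.
    Returns R such that w_cam ~= R @ w_imu (SVD-based orthogonal Procrustes)."""
    H = w_imu.T @ w_cam
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # enforce det(R) = +1
    return Vt.T @ D @ U.T
```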
Experimental Results
We evaluate the proposed method on the synthetic and real data. The synthetic data contain synthetically generated motion information of an IMU-camera system and corresponding calibration pattern points. Note that the two sensors are rigidly connected. Using the synthetic data, we validate the accuracy and stability of the proposed method by running 100 Monte-Carlo simulations with fixed noise parameters. Furthermore, we compare the calibration results with and without noise identification using random noise parameters to demonstrate the effect of noise estimation. In real-data experiments, we utilize two experimental setups: a self-designed system, which is equipped with two rolling shutter cameras and one low-cost IMU, and a commercial smartphone, which has a rolling shutter camera and a low-cost IMU. The first setup is specially designed to evaluate the extrinsic calibration accuracy. Since the ground truth cannot be obtained for real-data experiments, we indirectly measure the performance based on loop-closing errors between the two cameras and the IMU. With these two setups, we first analyze the performance of the proposed method based on the standard deviation of the estimate. We also compare the proposed method with hand-measure estimates in terms of extrinsics and existing calibration methods in terms of intrinsics and extrinsics for reference. These comparisons show that the proposed method produces comparable results on the camera intrinsic calibration. In addition, the smartphone experiments show that the proposed method is more robust to measurement noises than the existing calibration method.
Synthetic Data
We generate the synthetic measurements for a frame length of 60 s. For this, we synthetically create 20 corners of a checkerboard pattern in the world coordinate and a smooth trajectory of the mobile platform by applying a sinusoidal function, as shown in Figure 5a. The points in the world coordinate are projected to the image coordinate as in [40], and are used as synthetic visual measurements for IMU-camera calibration. We set the image resolution to 1280 × 960. The focal length is set to 700 and the radial distortions are set to [0.1, −0.1]. The rolling shutter readout time λ rs is set to 41.8 µs. The standard deviation of the Gaussian noise for the visual measurements is set to 1 pixel. The IMU acceleration and angular velocity measurements along the world coordinate are obtained by differentiating the trajectory of the IMU. Then, to generate uncalibrated measurements, intrinsic parameters including a scale factor, a misalignment, a bias, and gravity in the world coordinate are applied to the inertial measurements based on Equation (12). The scale factors (s a , s g ), misalignments (m a , m g ), and biases (b a , b g ) are set to 1. In the first experiment, we evaluate the proposed method with fixed IMU noise parameters, with the noise density of the IMU set to σ a = 0.1 m/s²/√Hz and σ g = 0.1 °/s/√Hz. The purpose of this experiment is to show how accurately and stably the proposed method estimates the calibration and noise parameters. Figure 6 shows the estimates of the calibration parameters and the standard deviation of their errors, which is the square root of the diagonal of the state error covariance, over the 60-s frame length in the calibration and noise identification step. In our framework, the calibration parameters and their covariances are estimated by the UKF while the noise parameters are estimated using nonlinear optimization. The graphs in Figure 6 show the calibration estimates obtained by the UKF at the optimal noise parameter estimates from the nonlinear optimization. In Figure 6a, the errors of the states, including the rotation I C R, translation I C t, time delay λ d , biases b a and b g , misalignments m a and m g , scale factors s a and s g , focal length f , radial distortion d and rolling shutter readout time λ rs , converge rapidly to zero. The errors are computed using the ground truth calibration parameters. The rotation errors are displayed in Euler angle form for intuitive representation. In Figure 6b, the error standard deviations obtained by the UKF also converge rapidly. Table 4 summarizes the final calibration and noise parameter estimates of the proposed method over 100 Monte-Carlo simulations. We report the mean and standard deviation of the estimates to show statistically meaningful results. Table 4d shows the noise parameter estimates obtained by the proposed method. We find that the ground truth of the noise parameters is within the error bound of the noise parameter estimates. This result indicates that the proposed noise identification reasonably estimates the noise parameters. Therefore, the estimated noise parameters can be used to further refine the calibration parameters in the refinement step. Table 4a-c shows the extrinsic and intrinsic parameter estimates of the IMU and the camera. The differences between the estimates and the ground truth are smaller than their standard deviations.
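As a rough illustration of the synthetic-data generation, the sketch below builds a sinusoidal camera trajectory observing 20 fixed checkerboard corners and projects them with a simple pinhole model plus 1-pixel Gaussian noise. Rotation, distortion, and the rolling shutter model are omitted, and all numerical choices other than the resolution, focal length, duration, and noise level are assumptions.

```python
# A simplified sketch of the synthetic visual-measurement generator:
# fixed checkerboard corners observed from a sinusoidal trajectory.
import numpy as np

f, cx, cy = 700.0, 640.0, 480.0                       # pinhole intrinsics (1280x960)
grid = np.stack(np.meshgrid(np.arange(5), np.arange(4)), -1).reshape(-1, 2)
corners_w = np.c_[0.05 * grid, np.zeros(len(grid))]   # 20 corners on the z = 0 plane (m)

t = np.arange(0.0, 60.0, 1.0 / 30.0)                  # 60 s of frames at 30 Hz
cam_pos = np.c_[0.1 * np.sin(0.5 * t),
                0.1 * np.cos(0.5 * t),
                -1.0 + 0.05 * np.sin(t)]              # smooth sinusoidal trajectory

def project(points_w, cam_p):
    # Camera axes kept aligned with the world axes for brevity.
    rel = points_w - cam_p
    u = f * rel[:, 0] / rel[:, 2] + cx
    v = f * rel[:, 1] / rel[:, 2] + cy
    return np.c_[u, v]

pix = np.array([project(corners_w, p) for p in cam_pos])
pix += np.random.default_rng(1).normal(0.0, 1.0, pix.shape)  # 1-pixel visual noise
print(pix.shape)   # (frames, 20 corners, 2)
```

In the full simulator, the inertial measurements would then be obtained by differentiating this trajectory and corrupting the result with the IMU intrinsic parameters and noise, as described above.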
This validates that the proposed method accurately estimates all calibration parameters and noise parameters. Table 4. Estimated calibration and noise parameters obtained using the proposed method on synthetic data. The noise parameters are fixed. We report each result as a parameter mean ± its standard deviation. The standard deviation of the ground truth is set to zero. (a) Extrinsic parameters; (b) IMU intrinsic parameters; (c) Camera intrinsic parameters; (d) Noise parameters. In the second experiment, we generate 100 sequences based on different IMU noise parameters, and, using these sequences, we analyze the proposed calibration method with and without the noise density estimation. The purpose of this experiment is to show how the proposed noise identification robustly estimates the calibration parameters under unknown IMU noise density. The noise parameters σ a , σ g are randomly selected from 0.001 to 0.2, and the generated noises are added to the inertial measurements (see the sketch below). These values cover most of the noise characteristics from low- to high-cost IMUs. We run the proposed method with noise estimation and with fixed noise parameters (σ a / σ g = 0.2/0.2 and 0.001/0.001). Table 5 shows the calibration parameter estimates of the proposed method with and without noise estimation. With noise estimation, the proposed calibration method estimates accurate calibration parameters. The mean of the errors between the estimated results and the ground truth is smaller than the standard deviation of the estimates. On the contrary, fixed noise densities cause inaccurate and unstable estimates. The mean of the translation estimates without noise estimation is out of the error bound, and their standard deviation is about two times larger than that of the proposed method with noise estimation. For the bias estimates, the mean of the estimates is not within the error bound and their standard deviation is larger. Besides, the estimates of the focal length and rolling shutter readout time are largely affected by the IMU noise parameters, whereas the rotation, misalignments, scale factors, and radial distortions are not dependent on the IMU noise parameters. These results validate that the proposed noise identification approach improves the accuracy and stability of the calibration parameter estimation, especially under unknown IMU noise density. Table 5. Comparison of the proposed calibration method with and without the proposed noise identification on synthetic data. The noise parameters are randomly selected. Here, we denote the estimates with noise identification as "auto". We report each result as a parameter mean ± its standard deviation. (a) Extrinsic parameters; (b) IMU intrinsic parameters; (c) Camera intrinsic parameters.
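A minimal sketch of the randomized-noise experiment follows, assuming the usual convention that a continuous noise density σ (per √Hz) maps to a per-sample standard deviation of σ·√rate for sampled white noise; the clean signals are zero placeholders rather than simulated trajectories.

```python
# A sketch of the randomized IMU-noise generation for the second experiment.
import numpy as np

rng = np.random.default_rng(42)
rate_hz = 100.0
n = 6000                                   # 60 s of IMU samples at 100 Hz
accel_clean = np.zeros((n, 3))             # stand-in for simulated accelerations
gyro_clean = np.zeros((n, 3))              # stand-in for simulated angular rates

sigma_a = rng.uniform(0.001, 0.2)          # m/s^2/sqrt(Hz), drawn per sequence
sigma_g = rng.uniform(0.001, 0.2)          # deg/s/sqrt(Hz), drawn per sequence

# Density-to-sample conversion for discrete white noise: std = sigma * sqrt(rate).
accel_meas = accel_clean + rng.normal(0.0, sigma_a * np.sqrt(rate_hz), (n, 3))
gyro_meas = gyro_clean + rng.normal(0.0, sigma_g * np.sqrt(rate_hz), (n, 3))
print(f"drawn densities: sigma_a={sigma_a:.4f}, sigma_g={sigma_g:.4f}")
```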
Real Data
For the real-data experiments, we use two setups: a self-designed system and a commercial smartphone. Since it is difficult to obtain ground truth for real data, we indirectly evaluate the performance of the proposed method using the results of existing methods and the hand-measured values. In addition, we use the standard deviation of estimates as an evaluation metric to analyze the algorithm stability. Moreover, the loop closing error metric with two cameras and the IMU is utilized to evaluate extrinsic calibration performance.
First, we evaluate the proposed method using the self-designed system, which consists of two rolling shutter cameras and a low-cost IMU. Figure 7a shows our experimental setup. The rolling shutter cameras are Logitech C80 (Logitech, Lausanne, Switzerland) and the low-cost IMU is a Withrobot myAHRS+. The image resolution is 640 × 480, and the frame rates of the camera and the IMU are, respectively, 30 and 100 Hz. We record a set of measurements for approximately 90 s and perform experiments on the 10 datasets. Figure 7. Two types of low-cost IMU and rolling shutter camera setups for real-data experiments: (a) our self-designed system having two rolling shutter cameras and a low-cost IMU; and (b) Samsung Galaxy Alpha (Samsung, Seoul, South Korea) having a rolling shutter camera and a low-cost IMU. Table 6 presents the calibration parameter estimates of camera 1 and the IMU. For the extrinsic and IMU intrinsic parameters, we compare our method with the existing calibration method (i.e., the Kalibr-imucam method [19]) and the hand-measured values. The noise density of the IMU for the Kalibr-imucam method is set to the noise density estimated by the proposed method. Table 6a describes the extrinsic parameter estimates of the proposed method, the Kalibr-imucam method, and the hand-measured values. The rotation parameter I C q is parameterized by the Euler angle φ instead of a unit quaternion for intuitive understanding. The average rotation, translation, and time delay estimates of the proposed and Kalibr-imucam methods are close to each other, and the average rotation and translation estimates of both methods are within the error bound of the hand-measured values. In addition, the standard deviation of the extrinsic parameter estimates from both methods is very small. This comparison shows that the proposed method successfully estimates the extrinsic parameters; moreover, the noise parameters estimated by the proposed method lead to the successful calibration of the existing calibration method. Furthermore, with the two rolling shutter cameras, we evaluate the extrinsic calibration performance using the loop-closing error metric, which is defined as the transformation error between the low-cost IMU and the two cameras (i.e., C 1 and C 2 ). Here, I C 1 T represents the relative transformation from the IMU to camera 1, C 1 C 2 T represents the relative transformation between the two cameras, and C 2 I T denotes the relative transformation from camera 2 to the IMU. C 1 C 2 T is estimated through the stereo camera calibration algorithm [43], and I C 1 T and I C 2 T −1 are estimated through the IMU-camera calibration algorithm. We run 10 sequences with the left camera-IMU and right camera-IMU pairs. Then, we compute 10 × 10 loop closing errors by using bipartite matching (i.e., 100 sets in total). Table 6b shows the mean and standard deviation of the loop closing errors on the rotation and translation motions. The proposed and Kalibr-imucam methods produce very small errors on rotation and translation. Since the error bound of C 1 C 2 T is about ±2 mm, the errors in our method and the Kalibr-imucam method are negligible. We do not compare with [7][8][9] because they require special equipment such as robot arms to estimate intrinsic parameters. Besides, since the Kalibr-imucam method regards biases as time-varying parameters, unlike the proposed method, we do not compare it in this table. In both methods, the standard deviations of the misalignments and the scale factors are small enough, which indicates that the estimates are reliable.
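The loop-closing metric can be made concrete with a small sketch: chaining the three estimated transforms IMU→camera 1, camera 1→camera 2, and camera 2→IMU should return the identity, and the residual rotation angle and translation norm are the reported errors. All numeric values below are hypothetical stand-ins for calibration outputs, not the paper's estimates.

```python
# A sketch of the loop-closing error: T_loop = T(I->C1) @ T(C1->C2) @ T(C2->I)
# is the identity if all three calibrations agree exactly.
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def make_T(R, t):
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical estimates: the IMU->camera2 chain carries a small extra error.
T_I_C1 = make_T(rot_z(0.5000), [0.100, 0.000, 0.020])    # IMU -> camera 1
T_C1_C2 = make_T(rot_z(0.0000), [0.120, 0.000, 0.000])   # camera 1 -> camera 2 (stereo)
T_I_C2 = T_I_C1 @ T_C1_C2 @ make_T(rot_z(0.0005), [0.001, 0.000, 0.000])
T_C2_I = np.linalg.inv(T_I_C2)                           # camera 2 -> IMU

T_loop = T_I_C1 @ T_C1_C2 @ T_C2_I
angle = np.degrees(np.arccos(np.clip((np.trace(T_loop[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)))
trans_mm = 1000.0 * np.linalg.norm(T_loop[:3, 3])
print(f"loop rotation error: {angle:.4f} deg, translation error: {trans_mm:.3f} mm")
```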
The results show that the z-axis bias of the accelerometer is larger than the x,y-axis biases due to the effect of the gravity vector located on the z-axis of the IMU coordinate (i.e., z-axis values of inertial measurements are much larger than those of x,y-axis). Interestingly, the misalignments are close to zero and the scale factors are close to one. This means that the IMU is somewhat intrinsically calibrated, although it is a low-cost sensor. The average estimates of the misalignments and scale factors from both methods have small differences because the Kalibr-imucam method also considers the effects of linear accelerations on gyroscopes. However, there are negligible numerical errors in the misalignments and scale factors.
For the camera intrinsic parameters, we compare our method with the Kalibr-rs method [12] and the MATLAB calibration toolbox, as shown in Table 6d. The Kalibr-rs method uses image sequences, which are the same measurements as those used for the proposed method, and the MATLAB toolbox uses 20 still images of the same checkerboard. The focal length and radial distortion estimates of the three methods are close to each other. In addition, the average rolling shutter readout time estimates of the proposed method are close to the average estimates of the Kalibr-rs method. Although there are small differences between the estimates obtained from the three methods, they are negligible. Besides, the smaller standard deviation of the estimates from the proposed method indicates that the proposed method is more reliable than the Kalibr-rs method. This comparison of the intrinsic parameter estimates from the proposed method and the two existing methods, which use various measurements, validates the camera intrinsic calibration of the proposed method. Table 6e describes the estimated noise parameters. The standard deviation of the IMU noise density estimates is small enough, and their mean can be used for other sensor fusion algorithms such as IMU-camera calibration and visual-inertial SLAM methods. Although we do not compare with the noise densities provided by other algorithms or manufacturers, the noise estimation performance of the proposed method is indirectly verified through the comparison of the extrinsic parameters with the hand-measured values and of the camera intrinsic parameters with the MATLAB toolbox. In addition, the successful calibration of the Kalibr-imucam method with the noise density estimates obtained from the proposed method validates the noise estimation of the proposed method.
Second, we evaluate the proposed method using a smartphone, the "Samsung Galaxy Alpha", which is equipped with a rolling shutter camera and a low-cost IMU; its configuration is shown in Figure 7b. The image resolution is 720 × 480, and the frame rates of the camera and the IMU are, respectively, 30 and 50 Hz. We record 10 sets of measurements whose frame length is about 90 s. Table 7 shows the calibration parameter estimates of the smartphone. We compare the proposed method with the existing calibration method (i.e., Kalibr-imucam [19]) and hand-measured values, similarly to the above experiment. The average rotation estimates of the proposed method and the Kalibr-imucam method are close to those of the hand-measured values. The standard deviation of the rotation estimates from the proposed method is smaller than that obtained from the Kalibr-imucam method, and this result indicates that the proposed method is more reliable than the Kalibr-imucam method. Besides, the translation estimates of the Kalibr-imucam method converge to unrealistic values, namely zero, whereas our estimates are close to the hand-measured values. The average time delay estimates of the proposed method are close to those of [19]; however, the standard deviation is large because an irregular time delay occurs due to the camera module activation of the smartphone. In summary, in this experiment, the extrinsic calibration of the proposed method outperforms the Kalibr-imucam method, unlike in the first experiment, which uses the self-designed setup. We argue that the proposed method is more robust to noisy measurements because our framework is based on the graybox method, which internally uses the Kalman filter. In practice, the rolling shutter camera and the IMU of the commercial smartphone contain larger noises than those of the self-designed setup, as described in Table 7d. Table 7b demonstrates the IMU intrinsic estimates of the proposed method. Similar to the results of the first experiment, the z-axis bias of the accelerometer is larger than those of the other axes. The misalignment and scale factor estimates of the accelerometer from the proposed and Kalibr-imucam methods are, respectively, close to zero and one, and the biases and misalignments of the gyroscope are close to zero. This result indicates that they were already intrinsically calibrated. However, the scale factors of the gyroscope are calibrated through the proposed and Kalibr-imucam methods, as shown in the table. Although the scale factor estimates from the two methods are slightly different, the lower standard deviation of our estimates indicates that the proposed method is more reliable. For the camera intrinsic parameters, we compare our estimates to the Kalibr-rs method [12] and the MATLAB calibration toolbox, as shown in Table 7c. The focal length, radial distortion, and rolling shutter readout time estimates of the proposed method are close to the estimates of the two existing methods. Although the estimates of the second coefficient of radial distortion are different for each method, the first coefficient is more important than the second. Besides, the proposed method provides a lower standard deviation of the rolling shutter readout time estimates, as in the first experiment. Table 7d describes the estimated noise densities. They are about two times larger than the noise densities estimated on the self-designed setup. This means that the IMU noise of the smartphone setup is more severe than that of the self-designed setup.
This experiment shows that a full calibration of off-the-shelf devices is possible with the proposed method.
Furthermore, we compare the operating time of the existing method (Kalibr [12,19]) and our method. The experimental environment for the comparison is an Intel i7-4790K CPU running at 4.00 GHz on a single core. Table 8 reports the operating times of Kalibr [12,19] and the proposed method. The timing statistics are measured on the 10 datasets used for the above "Samsung Galaxy Alpha" experiment. In the case of the existing method, it is required to use both Kalibr-rs [12] and Kalibr-imucam [19] together because they separately estimate the intrinsic and extrinsic parameters, whereas the proposed method does not. The total operating time of the Kalibr method is about 2 h on average for the full calibration. In particular, the operating time of Kalibr-rs [12] is longer than 1 h because of the iterative batch optimization for adaptive knot placement. However, the proposed method estimates the noise parameters as well as the calibration parameters, and it is 1.733 times faster than the Kalibr method on average. Table 8. Timing comparison of the existing and proposed methods. The mean and standard deviation of the operating times are reported together.
Time (min)
Kalibr: Kalibr-imucam [19]: 24.80 ± 2.89; Kalibr-rs [12]: 102
We also compare the prediction errors of the UKF for motion estimation with calibration parameters estimated by the Kalibr method and by the proposed method. In this experiment, the states of the UKF are the position, velocity, and orientation of the IMU; they are predicted from the inertial measurements and corrected with the visual measurements, which are the corners of a checkerboard pattern. We record a set of inertial and visual measurements using the smartphone "Samsung Galaxy Alpha" for 60 s. The prediction errors using the Kalibr parameters accumulate due to the IMU noise and result in motion drift. However, Figure 8 shows that the calibration parameters of the proposed method reduce the mean and variance of the prediction errors. The average RMSE in pixels over all frames decreases from 3.03 to 2.45 (about 19.2%). Finally, we compare the localization accuracy with calibration parameters estimated by the Kalibr method and by the proposed method. In this experiment, we capture image sequences and inertial measurements using the smartphone "Samsung Galaxy Alpha" in an outdoor vehicle driving environment. The trajectories are estimated by VINS-mono [44], an open-source visual-inertial SLAM algorithm. In this experiment, we do not use loop closure correction or online calibration of the extrinsic parameters and temporal delay. Figure 9 shows that the estimated trajectory with the proposed method is close to the ground truth trajectory, whereas the trajectory estimated with the Kalibr method suffers from scale drift as the trajectory becomes longer. These experimental results validate that the proposed method improves the performance of the real-world system as well.
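As a quick check on the quoted figures, the following snippet reproduces the improvement percentages from the numbers given in the text; the Kalibr-rs mean of 102 min is taken at face value from Table 8.

```python
# Drift reduction quoted from the Figure 8 statistics.
rmse_kalibr, rmse_ours = 3.03, 2.45
print(f"RMSE reduction: {100.0 * (1.0 - rmse_ours / rmse_kalibr):.1f}%")
# ~19%, matching the ~19.2% quoted above up to rounding.

# Runtime implied by the quoted 1.733x average speedup over the full Kalibr pipeline.
t_kalibr_min = 24.80 + 102.0   # Kalibr-imucam + Kalibr-rs (minutes)
print(f"implied runtime of the proposed method: {t_kalibr_min / 1.733:.1f} min")
```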
Conclusions
This paper proposes a robust and accurate calibration method for a low-cost IMU and a rolling shutter camera. The proposed joint calibration estimates not only the intrinsic and extrinsic parameters but also the IMU noise parameters. To improve calibration efficiency including runtime, we divide the framework into two steps. In the first step, we roughly estimate the intrinsic and extrinsic parameters through filtering while estimating the noise parameters in the optimization module. In the second step, we refine the intrinsic and extrinsic parameters via the optimization module while estimating the sensor motion in filtering. The experimental results of the synthetic data demonstrate the superiority of our framework, and, in particular, the experiments on two real-data setups validate the performance of the proposed method in off-the-shelf devices.
As a result, the proposed method improves the runtime by about 73.3% and reduces the IMU drift by 19.2% in comparison with the Kalibr method [12,19]. In particular, the results of the visual-inertial SLAM on the real-world system demonstrate that the proposed method outperforms the Kalibr method. | 2018-08-06T13:07:18.741Z | 2018-07-01T00:00:00.000 | {
"year": 2018,
"sha1": "5d5201201db4366813d997029e778d91e190aff1",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/18/7/2345/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5d5201201db4366813d997029e778d91e190aff1",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
232381978 | pes2o/s2orc | v3-fos-license | Comparison of Two Transmission Electron Microscopy Methods to Visualize Drug-Induced Alterations of Gram-Negative Bacterial Morphology
In this study, we optimized and compared different transmission electron microscopy (TEM) methods to visualize changes to Gram-negative bacterial morphology induced by treatment with a robenidine analogue (NCL195) and colistin combination. Aldehyde-fixed bacterial cells (untreated, treated with colistin or NCL195 + colistin) were prepared using conventional TEM methods and compared with ultrathin Tokuyasu cryo-sections. The results of this study indicate superiority of ultrathin cryo-sections in visualizing the membrane ultrastructure of Escherichia coli and Pseudomonas aeruginosa, with a clear delineation of the outer and inner membrane as well as the peptidoglycan layer. We suggest that the use of ultrathin cryo-sectioning can be used to better visualize and understand drug interaction mechanisms on the bacterial cell membrane.
Introduction
Gram-negative bacterial pathogens exhibit high-level resistance to most classes of antibiotics due to the presence of an impermeable outer membrane [1,2]. Polymyxins are considered as last-line agents for the treatment of Gram-negative infections due to their unique mechanism of action targeting the outer membrane [3][4][5][6]. However, polymyxins are highly nephrotoxic and neurotoxic agents if high doses are used [7,8], resulting in a narrow therapeutic window for Gram-negative infections. The usage of polymyxins in combination with other agents is being considered as a strategy for overcoming reduced polymyxin susceptibility and toxicity without increasing polymyxin exposure [3,9]. The mechanism of beneficial combination treatment is proposed to involve complete integration of polymyxins into the outer membrane causing disorganization and neutralization of cell surface charge and consequently loss of envelope barrier function. Subsequently, the affected outer membrane is hypothesized to transiently open, allowing entry of the second antibiotic and interaction with otherwise inaccessible drug target sites [2,[10][11][12][13].
Our ongoing studies have indicated potential therapeutic options using the novel pyrimidine NCL195, 4,6-bis(2-((E)-4-methylbenzylidene)hydrazineyl)pyrimidin-2-amine (Figure 1), combined with subinhibitory concentrations of polymyxin B (PMB) or colistin against Gram-negative infections [10,11]. We showed synergistic activity of the NCL195-PMB or NCL195-colistin combination against clinical Gram-negative bacterial pathogens, with MICs for NCL195 ranging from 0.25-4 µg/mL for Acinetobacter baumannii, Escherichia coli, Klebsiella pneumoniae and Pseudomonas aeruginosa, whereas NCL195 alone had no activity. For decades, transmission electron microscopy (TEM) has been a valuable research tool in microbiology for high-resolution structural studies of bacteria and their components [14,15]. TEM has been applied to study the effect of drug treatment on both Gram-negative and Gram-positive bacteria [16,17]. We have also used TEM to study NCL195-colistin interactions on the Gram-negative cell membrane [10]. During that investigation, the stability of Gram-negative bacterial cell morphology was affected by several factors, including buffer conditions, selected fixatives, type of resin and the embedding method. Cryo-EM and Tokuyasu cryo-ultramicrotomy have been shown to offer some advantages over conventional TEM for investigating bacterial ultrastructure, including better resolution, artifact reduction, clearer visualization of the bacterial cytoskeleton and better preservation of bacterial structural integrity [18][19][20][21][22][23]. Therefore, determining the most effective technique to accurately visualize and elucidate drug interactions on bacteria is essential. The objective of the present investigation was to compare two sample preparation methods for TEM (conventional resin embedding and Tokuyasu cryo-ultramicrotomy) to visualize the morphological changes occurring on the cell membrane of E. coli and P. aeruginosa after exposure to NCL195 alone, colistin alone or the NCL195-colistin combination.
Antibiotics and Chemicals
NCL195, a novel pyrimidine compound [24,25] (Figure 1), was synthesized at the University of Newcastle. The compound was stored in a sealed container in the dark at 4 °C at the Infectious Diseases Laboratory, Roseworthy campus, The University of Adelaide. Colistin sulphate, kanamycin and tetracycline were purchased from Sigma-Aldrich (Australia). Stock solutions containing 25.6 mg/mL of each compound (NCL195 dissolved in DMSO, colistin and kanamycin dissolved in water and tetracycline dissolved in 70% ethanol) were stored in 1 mL aliquots at −20 °C away from direct light. Ruthenium red, L-lysine acetate and sucrose were purchased from Sigma-Aldrich, Australia, and dissolved in water to the appropriate concentrations. Fixatives and cacodylate buffer were provided by Adelaide Microscopy, The University of Adelaide, Adelaide, South Australia, Australia.
Bacterial Strains and Growth Conditions
Bioluminescent E. coli Xen14 (derived from the parental strain E. coli WS2572) and bioluminescent P. aeruginosa Xen41 (derived from the parental strain PAO1) were purchased from PerkinElmer Inc. (Waltham, MA, USA). E. coli Xen14 was grown on horse blood agar (HBA) containing 30 µg/mL kanamycin and P. aeruginosa Xen41 was grown on HBA containing 60 µg/mL tetracycline overnight at 37 °C in normal air for selection.
Xen14 Processing for TEM
Several fixation and embedding procedures were compared (Table 1) to minimize factors that may affect the quality of TEM images. The Xen14 cells were cultured as described above, and then harvested by centrifugation at 2900× g for 5 min at 4 °C to avoid cell damage. The cells were initially resuspended in either cacodylate buffer (pH 7.0) or phosphate-buffered saline (PBS; pH 7.0) and centrifuged twice for 5 min at 2900× g. Thereafter, cell pellets were fixed overnight in fixative containing 3.0% formaldehyde, 0.035% glutaraldehyde and 4% sucrose in cacodylate buffer (Procedure 1); fixative containing 4.0% formaldehyde, 1.25% glutaraldehyde and 4% sucrose in PBS buffer (Procedure 2); fixative containing 4.0% formaldehyde and 1.25% glutaraldehyde in cacodylate buffer without sucrose supplementation (Procedure 3); or fixative containing 4.0% formaldehyde, 1.25% glutaraldehyde, 4% sucrose and 0.01 M CaCl2 in cacodylate buffer (Procedures 4 and 5), as detailed in Table 1. The fixed cells were then washed in the corresponding buffer as described above, post-fixed in 1% osmium tetroxide in cacodylate buffer or PBS containing 0.075% ruthenium red for 1 h, and subsequently washed as described above. Cells were then dehydrated in a graded series of ethanol (50%, 70%, 90%, 2× each for 10 min, and 100%, 3× for 15 min). Thereafter, the cells were infiltrated for 1 h each in propylene oxide:Epon-Araldite resin (50:50 ratio; Procedures 2, 3 and 5) or 100% ethanol:LR-White resin (50:50 ratio; Procedures 1 and 4). Samples were incubated in 100% Epon-Araldite resin (Procedures 2, 3 and 5) or LR-White resin (Procedures 1 and 4) overnight, followed by two resin changes 5 h apart the following day. Subsequently, the cells were polymerized in fresh Epon-Araldite resin or LR-White resin at 70 °C or 58 °C, respectively, for 48 h.
Xen41 Processing for TEM
Xen41 cells were prepared essentially as described for Xen14, then processed for TEM using Procedure 4 (Table 1) with either 1 h or overnight fixation, followed by post-fixation in 1% osmium tetroxide for 1.5 h on ice.
Sections of Xen14 and Xen41 embedded in resin were cut to 1 µm using a glass knife, stained with 1% toluidine blue containing 1% borax and viewed under a light microscope at 400× magnification to identify stained bacteria. Ultrathin sections were then cut to 90 nm with an EM-UC6 ultramicrotome (Leica) using a diamond knife (Diatome) and placed on 200-mesh copper EM grids (Proscitech). Sections were sequentially stained with uranyl acetate (4% in distilled H2O) and Reynolds lead citrate for 10 min each, with three washes in distilled water in between each stain. Sections were then viewed on a Tecnai G2 Spirit (FEI Company, Hillsboro, OR, USA) transmission electron microscope operated at 100 kV at Adelaide Microscopy, The University of Adelaide.
Cryo-Ultramicrotomy
Xen14 and Xen41 cells were prepared as described above before being fixed in 1 mL cacodylate buffer containing 4.0% formaldehyde, 1.25% glutaraldehyde, 0.01 M CaCl2, 4% sucrose, 0.075% ruthenium red and 0.075% L-lysine acetate (to stabilize the peptidoglycan layer and aid in locating the bacteria during sectioning) [26,27]. Samples were then stored at 4 °C until processing for cryo-ultramicrotomy. Thereafter, cells were washed twice in buffer and embedded in 12% gelatin. Small gelatin blocks containing bacteria (<1 mm3) were cut and infiltrated with 2.3 M sucrose in phosphate buffer overnight at 4 °C with gentle rocking. Blocks were stored in 2.3 M sucrose at 4 °C prior to sectioning. Blocks were transferred to aluminum sectioning pins (Leica) and quickly plunge-frozen in liquid nitrogen. Thin cryo-sections (80 nm) were cut at −100 °C with an EM-UC6/FC7 cryo-ultramicrotome (Leica) using a cryo-diamond knife (Diatome). Cryo-sections were removed from the knife with 2.3 M sucrose using a wire loop and transferred to formvar/carbon-coated, plasma-cleaned 200-mesh copper EM grids. Grids were stored in an airtight container on sucrose droplets at 4 °C. To stain, grids were floated face down on 2% gelatin for 30 min at 37 °C before washing in PBS (3 × 2 min) and staining with 2% uranyl-oxalate acetate (pH 7) for 5 min at 22 °C and methyl cellulose-uranyl acetate (pH 4) on ice for 10 min. Grids were looped out, drained and allowed to dry. Samples were imaged with a Tecnai G2 Spirit electron microscope (FEI Company) operated at 100 kV at Adelaide Microscopy, The University of Adelaide.
Treated Samples Processing for TEM and Cryo-Ultramicrotomy
To determine the optimal conditions to observe the NCL195-colistin interaction on Gram-negative membranes, Xen14 cells were initially grown until A600nm = 0.1 (early logarithmic phase) or 0.5 (mid logarithmic phase) and then treated with colistin at 0.5 µg/mL for 1 h. Subsequently, Xen14 cells grown to A600nm = 0.1 were chosen for further analysis and were incubated with 0.5 µg/mL colistin for 2 h and 4 h to determine the optimal treatment time.
Bacterial Cell Morphology Is Affected by the Fixative Used, Buffer Conditions and the Embedding Method
In this work, we sought to determine the most effective technique to accurately visualize drug interactions on the bacterial membrane as part of our on-going research aimed at gaining a better understanding of the complex interactions between membrane-active drugs and the consequent morphological changes occurring on the bacterial surface. To accomplish this, we compared two sample preparation methods for TEM (conventional resin embedding and cryo-ultramicrotomy) to visualize the cell membrane of E. coli and P. aeruginosa after exposure to NCL195 alone, colistin alone or the NCL195-colistin combination. For this study, we examined the morphological changes to bacterial cells exposed to the test drugs using bioluminescent derivatives of E. coli and P. aeruginosa used routinely in our real-time in vivo assessments of drug efficacy. We initially used Xen14 cells to optimize the best TEM protocol for observing the NCL195-colistin interaction on Gram-negative membranes. We found that several factors affected the morphology of the bacterial cells.
i. Fixative: Fixative containing 3.0% formaldehyde, 0.035% glutaraldehyde, 4% sucrose, 0.075% ruthenium red and 0.075% L-lysine acetate (Procedure 1) caused shrinkage of the bacterial cells as well as detachment and perturbation of the cell membrane (Figure 2A,B). Therefore, the concentrations of formaldehyde and glutaraldehyde used in this procedure were not high enough to preserve cell membrane structure and could have affected the cell size. To circumvent this, Li, et al. [28] described a 4.0% formaldehyde solution in fixative as optimal for preservation of bacterial cell size, and our result supports this observation.
ii. Buffer: We also found that the type of buffer used resulted in altered cell membrane morphology. PBS buffer caused detachment of the cell membrane (Figure 2C,D; Procedure 2), a similar observation to those described by others [14,29]. Furthermore, the addition of sucrose to the buffer improved preservation of cell morphology, as the cell membrane appeared brittle if sucrose was omitted from the fixative (Figure 2E,F; Procedure 3), in agreement with a previous study [30].
iii. Embedding method: Following from the optimized fixative and buffer conditions above, we observed that a TEM protocol using cacodylate buffer with fixative containing 4.0% formaldehyde, 1.25% glutaraldehyde, 0.075% ruthenium red, 0.075% L-lysine acetate and 4% sucrose, followed by embedding in LR-White resin (Procedure 4), provided the best delineation of the outer membrane, cell wall and inner membrane, with no wavy, detached or shrunken membranes (Figure 3A). This protocol is similar in some respects to that described by Voget et al. [14], but differs in the buffer and fixative composition, treatment time and embedding method. The use of Epon-Araldite resin (Procedure 5) did not appear to make a difference to the overall TEM result (Figure 3A vs. Figure 3B). However, given our findings, we suggest using LR-White resin due to its ease of use during TEM processing.
Having optimized the best TEM protocol for visualizing Xen14 morphology (Procedure 4), this was then applied to P. aeruginosa Xen41. However, due to the production of exopolysaccharide by P. aeruginosa [15], which may prevent access of fixatives to the bacterial cell membrane, Xen41 was grown overnight on horse blood agar to reduce the amount of polysaccharide produced. Cells were washed twice in PBS buffer as described before fixation. Overnight fixation resulted in darker images which masked cell membranes/walls (Figure 4A). Therefore, the procedure was modified by first fixing cells for 1 h, followed by washing in PBS buffer and a subsequent fixation step for 1.5 h before continuing the process as described in Procedure 4. Using this method, the inner membrane, cell wall and outer membrane could clearly be observed under TEM (Figure 4B).
Bacterial Cell Morphology Is Affected by Cell Density and Exposure Time to Drugs
It is known that colistin interacts with the lipopolysaccharide on the surface of Gram-negative bacteria and then crosses the outer membrane via the self-promoted uptake pathway, resulting in disruption of the normal barrier property of the outer membrane [2,31]. Subsequently, the outer membrane is hypothesized to transiently open, thereby allowing passage of NCL195 into the cell to the drug target site(s), likely to be located on the plasma membrane, as we described recently [11]. Based on this hypothesis, we initially determined the optimal time point of colistin treatment that would result in the disruption of the outer membrane using two growth stages of Xen14 (A600nm = 0.1 or A600nm = 0.5). For this initial analysis, the Xen14 cells were treated with colistin at 0.5 µg/mL for 1 h. Significant morphological changes were observed following 1 h incubation in colistin at A600nm = 0.1. Compared to untreated cells (Figure 5A,B), the majority of cells showed a swollen envelope morphology with tubular and fimbria-like radiant appendages; different layers of membrane structure could also be distinguished (Figure 5C,D) under these conditions. However, treatment at A600nm = 0.5 showed less effect (Figure 5E,F). Therefore, A600nm = 0.1 was used for subsequent experiments as it gave better results. The effect of 2 h and 4 h colistin treatments on Xen14 cells grown to A600nm = 0.1 was also investigated. Compared to untreated cells (Figure 6A,B), the majority of the tubular and fimbria-like radiant appendages were broken and had disappeared after 2 h treatment, although layered membrane structure could still be observed (Figure 6C,D). Following 4 h treatment, all tubular and fimbria-like radiant appendages had disappeared, membrane layers could not be distinguished, and shrinkage of cell contents and detached/wavy membrane structures were observed (Figure 6E,F). These results are similar to those reported previously for polymyxin B [14], a drug with a similar mechanism of action to colistin [32].
Comparison of TEM and Cryo-Ultramicrotomy for Visualizing NCL195-Colistin Interaction on Cell Membrane
On the basis of the foregoing outcomes, subsequent experiments to determine the effect of the NCL195 + colistin combination were conducted on Xen14 and Xen41 at A600nm = 0.1 followed by a 1 h drug treatment. TEM of Xen14 sections cut under cryo conditions provided a clear delineation of the membrane structure, showing the outer and inner membrane and wall peptidoglycan layer, typical of Gram-negative bacteria, compared with a more traditional, resin-embedded TEM preparation (Figure 7A vs. Figure 7B). This result was observed in both the untreated and NCL195-treated cells (Figure 7C vs. Figure 7D). For Xen14 cells exposed to colistin at 0.125 µg/mL and sectioned under cryo conditions, mesosome-like structures and swollen membranes were observed, whereas conventional TEM micrographs of cells exposed to colistin at 0.125 µg/mL showed no difference in membrane morphology compared to untreated cells (Figure 7E vs. Figure 7F). With increased colistin concentration (0.25 µg/mL), swollen envelopes and mesosome-like structures (m) were observed following cryo preparation (Figure 7G), in addition to the presence of tubular appendages (t) observed using the traditional TEM technique (Figure 7H). Cells treated with a combination of NCL195 (2 µg/mL) and colistin (0.25 µg/mL) and processed under cryo conditions exhibited increased morphological damage including coronate tubular appendages and mesosome-like structures (Figure 7I). Cells treated with the combination and visualized using conventional TEM also showed increased morphological damage, coronate tubular appendages and a swollen and detached membrane (Figure 7J), similar to that observed with cells treated with 0.25 µg/mL colistin alone. These results are summarized in Table 2. The observed morphological effects of NCL195 in the NCL195 + colistin combination are consistent with the dissipation of inner cell membrane potential demonstrated in our recent work [10], potentially resulting in leakage of vital metabolites [25].
As seen with Xen14, untreated Xen41 cells processed under cryo conditions produced a clear image of bacterial morphology, with cell walls and inner and outer membranes clearly distinguishable (Figure 8A), compared to traditional TEM processing methods (Figure 8B). Cells treated with NCL195 alone under cryo conditions showed similar ultrastructural morphology to the control cells (Figure 8C), although a slightly wavy membrane morphology was observed when using traditional TEM preparations (Figure 8D).
The addition of colistin (1 µg/mL) produced clearly visualized membrane damage in cells processed under cryo conditions (Figure 8E), whereas a ruffling of the cells and a wavy cell membrane structure were observed following traditional TEM embedding (Figure 8F). Increased ultrastructural damage was observed following the combined colistin and NCL195 treatment (Figure 8G vs. Figure 8H). Again, cells processed under cryo conditions showed the increased morphological changes more clearly, with broken outer membranes and mesosome-like structures within the cell (Figure 8G vs. Figure 8H). These results are summarized in Table 3.
Conclusions
In this study, we describe optimized TEM conditions for visualizing changes to Gram-negative bacterial morphology induced by treatment with a combination of NCL195, a novel pyrimidine, and colistin. We show that cacodylate buffer works better than PBS buffer, and that fixative containing 4.0% formaldehyde, 1.25% glutaraldehyde, 0.01 M CaCl2, 4% sucrose, 0.075% ruthenium red and 0.075% L-lysine acetate is the optimal mixture for the stability of the bacterial cell membrane. We also suggest using LR-White resin due to its ease of use during TEM processing. Additionally, we show that the cryo-ultramicrotomy technique provides higher resolution, artifact reduction, clearer visualization of the bacterial cytoskeleton and better preservation of bacterial structural integrity compared to conventional TEM processing methods. To our knowledge, this study is the first to use Tokuyasu cryo-ultramicrotomy to examine the effects of multiple drug interactions on the bacterial cell surface. Cryo-ultramicrotomy can also be employed in conjunction with other imaging techniques such as that described for correlative light and electron microscopy [33]. We suggest cryo-ultramicrotomy can be used for a wide range of applications including host-pathogen interaction studies and high-resolution visualization of macromolecular interactions occurring on the prokaryotic surface or other biological membranes. These should promote a better understanding of complex cellular and molecular interactions.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to size and access restrictions. | 2021-03-29T05:21:25.662Z | 2021-03-01T00:00:00.000 | {
"year": 2021,
"sha1": "68545529d899103ca61e58eebeb488610b6fee2c",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-6382/10/3/307/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "68545529d899103ca61e58eebeb488610b6fee2c",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119262885 | pes2o/s2orc | v3-fos-license | The FLARE mission: Deep and Wide-field 1-5$\mu$m Imaging and Spectroscopy for the Early Universe: a proposal for M5 Cosmic Vision call
FLARE (First Light And Reionization Explorer) is a space mission that will be submitted to ESA (M5 call). Its primary goal (about 80% of the lifetime) is to identify and study galaxies that dwell in the universe before the end of the reionization, up to about z = 15, a redshift range that might not be statistically reachable for JWST, Euclid and WFIRST. A secondary objective (about 20% of the lifetime) is to survey star formation in the Milky Way. The strategy selected for FLARE optimizes the science return: imaging and spectroscopic integral-field observations will be carried out simultaneously on two parallel focal planes and over very wide instantaneous fields of view. FLARE will feature an instantaneous field of view of about 0.2 deg$^2$ with 0.2-arcsec pixels and an instantaneous integral-field spectroscopic field of view of about 1 arcmin$^2$ with R = 500 - 1000 and an angular resolution of about 0.4 arcsec. To detect first-light galaxies, the imaging and spectroscopic survey (parallel observations over about 6 years) will reach m$_{AB}$ = 28 and f$_{\lambda}$ = 10$^{-18}$ erg/cm$^2$/s. FLARE will help address two of ESA's Cosmic Vision themes: a) "How did the universe originate and what is it made of?" and b) "What are the conditions for planet formation and the emergence of life?" and, more specifically, "From gas and dust to stars and planets". FLARE will provide the ESA community with a leading position to statistically study the early universe after JWST's deep but pin-hole surveys. Moreover, the instrumental development of wide-field imaging and wide-field integral-field spectroscopy in space will be a major breakthrough after making them available on ground-based telescopes.
INTRODUCTION
Large ground-based and space-borne telescopes have started to scratch the surface of the discovery space in the universe before the end of the reionisation, i.e., the first Gyr of the universe's lifetime. In this era, we will find the very first-light objects, galaxies and black holes. A primordial galaxy, i.e., one with zero metallicity, might contain some Pop III stars, the first generation of stars created in the early universe at z > 10 [1]. Even more recently, the first solid candidate at z ~ 11, i.e. about 400 Myrs after the big bang, was detected and confirmed by [2]. This latter galaxy is apparently very massive (given the redshift) and rare, and wide-field surveys will be necessary to detect such galaxies if they happen to be common in the early universe.
On the other hand, the very high redshift of these objects (10 < z < 15) calls for an excellent sensitivity in the near-infrared, with a spectral coverage large enough to detect the Lyman break but also fluxes beyond the Lyman break: at z = 15, the Lyman break is redshifted to λ ~ 2 µm, which is beyond the Euclid and WFIRST wavelength range (Fig. 1).
*Denis.Burgarella@lam.fr
Figure 1. The spectral energy distribution of the galaxy discovered at z = 11.09 [2] presents a discontinuity at λ = 1.6 µm. To get a correct and safe estimate of photometric redshifts at 10 < z < 15, it is mandatory to make use of data below and above the Lyman break. The 1 - 5 µm wavelength range is therefore a must-do. That is JWST's range, but on small fields unlike FLARE. On the other hand, WFIRST, Euclid and the E-ELT will not provide such data.
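As a worked check of the wavelength argument, the observed Lyα break simply scales as λ_obs = (1 + z) × 1216 Å; the short snippet below evaluates it at the redshifts discussed here.

```python
# Observed wavelength of the Lyman-alpha break: lambda_obs = (1 + z) * 0.1216 um.
for z in (10.0, 11.09, 15.0):
    print(f"z = {z:5.2f}: Lyman-alpha break at {(1.0 + z) * 0.1216:.2f} um")
# z = 10 -> ~1.34 um, z = 11.09 -> ~1.47 um, z = 15 -> ~1.95 um,
# which is why coverage beyond 2 um is required to bracket the break at z ~ 15.
```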
BUILDING A CENSUS OF THE OBJECTS AT 10 < Z < 15
The science objectives have been detailed during a workshop http://mission.lam.fr/flare/AgendaMar2016.html.
In the following, we detail FLARE's science case and explain why FLARE's capabilities allow us to build a census of objects in the early universe up to z ~ 15. FLARE aims at using 3 complementary approaches within the same mission to build a census of the objects at 10 < z < 15. At z = 15, the universe is only 0.3 Gyr old. It is neutral and (probably) sees the formation of the first stars and maybe larger objects.
Detection and identification of a sample of about 100 primordial galaxies at z ~ 15
The expected density of these objects is estimated to be about 1 deg-2 at mAB = 28 [3]. To reach this number of 100 (Tab. 1), we therefore need to cover at least 100 deg2. JWST is highly unlikely to build surveys much larger than about 1 deg2, i.e., HST-like, and will not observe the same type of objects. Moreover, even though JWST will reach magnitudes fainter by about 2 magnitudes, even the E-ELT will not be able to confirm their redshifts spectroscopically because they are too faint. Besides, detecting these primordial galaxies requires any facility to have at least 2 bands at λ > 1.3 µm (for z = 10) and λ > 2.0 µm (for z = 15). To date, only JWST and FLARE have a wavelength range extending beyond λ = 2.0 µm. This sample of galaxies at z ~ 15 will bring invaluable information on the very first phases of galaxy formation but also on larger-scale structures. This topic will mainly use the imaging survey, which is the only one to cover an area > 100 deg2. A recent work detected the most remote object in the universe at z = 11.1 [2]. This galaxy is remarkably, and unexpectedly, luminous for a galaxy at such an early time, but also very rare. If confirmed, this result implies that the best strategy to detect z > 10 objects is wide fields, as featured by FLARE.
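The survey-area requirement follows from simple counting statistics, as the sketch below shows for the quoted surface density of about 1 source per square degree at mAB = 28 [3].

```python
# Expected number of z ~ 15 candidates as a function of surveyed area,
# assuming the quoted density of ~1 deg^-2 at m_AB = 28.
density_per_deg2 = 1.0
for area_deg2 in (1.0, 100.0, 200.0):
    n = density_per_deg2 * area_deg2
    print(f"{area_deg2:6.1f} deg^2 -> ~{n:.0f} candidates")
# A JWST-like ~1 deg^2 survey yields ~1 object; ~100 deg^2 is needed for ~100.
```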
Blind spectroscopic survey
Imaging surveys allow detecting galaxies with a strong continuum. However, we know that some galaxies, younger and undergoing strong starbursting events, are better (only?) detectable via spectroscopic surveys aiming at strong emission lines, without any priors coming from broad-band surveys. About 30% of the emission-line objects have no HST counterparts down to I814 > 29.5 (Fig. 2), and their redshift distribution is clearly flatter and reaches much higher redshifts [4]. A blind and relatively wide-field integral-field spectroscopic (IFS) survey is the unique way to detect these objects. No other facility on the sky, existing or planned, will feature such an instrument. FLARE's integral-field spectrograph will build a survey via parallel observations and reach magnitudes as deep as the shallow JWST NIRSpec survey, but will cover about 1.5 deg2, versus about 500 arcmin2 for NIRSpec (that is, only about 10% of FLARE's area at the same limiting flux). Moreover, JWST/NIRSpec will not be blind since a prior photometric detection is needed to define the slits. Figure 2. Integral-field spectroscopic observations with MUSE on the VLT [4] in the Hubble Deep Field South allowed the detection of as many as 30% of the entire Lyα emitter sample that have no HST counterparts and thus have I814 > 29.5. Moreover, the MUSE-HDFS (bottom) and VUDS (top) normalized redshift distributions are quite different. This was expected given the very different observational strategies: the VUDS redshift distribution is the result of a photometric redshift selection zphot > 2.3 ± 1σ (with first and second peaks of the PDF) combined with a continuum selection IAB < 25, while MUSE does not make any pre-selection. With 22% of galaxies at z > 4, in contrast to 6% for the VUDS, MUSE demonstrates a higher efficiency for finding high-redshift galaxies. We expect the FLARE IFS to produce a large number of sources that will not be detected, even by JWST. ID#553 is a z = 5.08 Lyα emitter without an HST counterpart. The HST images in the F606W and F814W filters are shown at the top left, and the MUSE reconstructed white-light and Lyα narrow-band images at the top right. The one-arcsec-radius red circles show the emission line location. The spectrum is displayed in the bottom panels, including a zoom on the emission line. At the bottom left, the full spectrum (in blue), smoothed with a 4 Å boxcar, and its 3σ error (in grey) are displayed. A zoom of the unsmoothed spectrum, centred around the Lyα emission line, is also shown at the bottom right.
Quasars before the end of the reionisation
Why do we wish to study quasars beyond the end of the reionisation? The four main motivations are:
• To better understand the formation of the first massive black holes: what are black hole seeds and what drives their early growth?
• To study the environments of the first massive black holes: their metallicity, obscuration, host properties (mass; star formation rate; larger-scale environment) and quasar-driven outflows.
• To constrain the reionisation of the Universe: the contribution of quasars to reionisation and bright sight lines through the reionisation era.
• To understand the link between the evolution of galaxies and the evolution of the first black holes, as suggested by the similar shape of the redshift evolution of the star formation rate density and the black hole accretion density (Fig. 3).
The density of very high redshift quasars (z > 6, before the end of the reionisation) is low. FLARE's imaging survey is large enough to directly detect 200 of them, which we can then follow up with FLARE's integral-field spectrograph. What is unique to FLARE is the opportunity to detect objects that might be missed by small-field telescopes like JWST, but also by wide-field, lower-wavelength and shallower surveys like Euclid and WFIRST, via their near-infrared emission and also via emission lines.
An important point to stress is that we need to keep enough time in FLARE's observation schedule to directly observe these quasars spectroscopically in the rest-frame ultraviolet-optical range (Fig. 4), and maybe others that would be detected by ATHENA. They will provide unique information on the early co-evolution of galaxies and super-massive black holes (Fig. 3), but they will also allow us to study the intergalactic medium along the line of sight. Figure 4. Using [6], we can check that the main ultraviolet-optical lines will be detectable in FLARE's 1 - 5 µm wavelength range at redshifts z > 6, beyond the end of the reionisation. Of course, this also applies to star-forming galaxies. Figure 3. This plot, extracted from [5], shows that the star formation rate and black hole accretion densities present almost the same evolution in redshift. This suggests a co-evolution of the star formation process along with the black hole one. What happened before the end of the reionisation is of major interest and requires a synergy between several facilities observing quasars at z > 6: ATHENA, FLARE, the E-ELT and, of course, JWST.
STAR FORMATION IN THE MILKY WAY
Herschel and Spitzer allowed us to look into star-forming regions and to study the interstellar medium (ISM) in the Milky Way. High angular resolution is very important to analyse locally the physics of star formation. However, the dust attenuation in these regions is very high because the young (proto-)stars are still embedded in their molecular, dusty clouds. Both the angular resolution and the wavelength range are needed to "see" deep inside the clouds. Once again, JWST can provide them, but JWST's field of view is much too small to cover large areas in the Milky Way.
• Unveiling the dark cloud structure and content in the Gould Belt with FLARE: the possibility to study dust scattering and extinction on the scale of full regions (Taurus, Rho Oph, Cham, …) in a few days for the small regions, a month for the "Taurus-Perseus-Aurigae" region, and less than 6 months in total, plus the possibility to retrieve ≥100,000 stellar spectra with significant extinction in 3 months total.
• 3D tomography of the Milky Way ISM: FLARE will enable full 3D tomography of the ISM (Fig. 5).
REQUIREMENTS AND OBSERVATION MODES
FLARE will make the best use of the 5-year mission lifetime by observing photometrically and spectroscopically in parallel. This strategy allows enough observing time to build both a wide imaging survey over 100-200 sq. deg and a (relatively) wide integral-field spectroscopic survey over 1-2 sq. deg (the size of the present COSMOS survey).
To minimize the risks, we will not have any filter mechanisms in FLARE. As illustrated in Fig. 6, the imaging survey will be divided into 6 filters (so far identical to NIRCAM's filters for simplicity, but an optimization of their shapes needs to be performed). Table 1 summarizes the present requirements assumed for FLARE. Figure 5. FLARE's strategy combines a wide-field imaging survey with a wide-field integral-field spectroscopic survey. To make these two surveys efficient, we plan to carry them out in parallel. In addition, targeted observations must be possible to observe rare quasars.
MISSION AND INSTRUMENT CONCEPT
The proposed FLARE mission consists of a spacecraft module with a large telescope and a payload module including a wide-field photometer and an integral field spectrometer. FLARE could be launched by an Ariane 6.2 launcher for injection into an orbit around the L2 Lagrangian point of the Sun-Earth system.
As thermal control is a strong driver in the design, in particular for the detectors, the satellite concept is based on two separate thermal zones: the cold zone integrates the whole optical instrumentation, including the telescope, imager and spectrometers, while the warm zone is mainly composed of the satellite platform, including the electronic boxes used for the command and control of the instrumentation. Thermal dissipation in the cold zone is kept to a minimum: there are no mechanisms, and only the detectors and their front-end electronics are active and dissipate. These two zones are thermally highly isolated from each other to limit conductive and radiative thermal fluxes between them. During observations, the optical instrumentation zone is fully protected from the Sun by the satellite platform. This configuration allows passive cooling of the focal plane assemblies to temperatures close to 40 K, as needed by the detectors for fine observations in the required 1-5 µm wavelength range.
The telescope dimensioning and the large number of detectors allow a deep imaging survey with a relatively high angular resolution while preserving a wide field of view. The instrument fields of view are arranged in such a way that the observing strategy will allow mapping of nearly 200 square degrees in imaging and nearly 2 square degrees in spectroscopy in all spectral bands during the nominal six-year life in orbit.
Telescope optical design
The baseline optical concept is based on a 3-mirror Korsch [9] telescope with an effective diameter of 1.8 meters. The combination of the primary and secondary mirrors looks like a Cassegrain configuration forming a real image just behind the primary. This intermediate image is re-imaged by a tertiary with a magnification close to one. This Korsch configuration forms an achromatic image limited by diffraction over a planar annular field of more than 1 square degree.
Mirrors must be made of a material able to guarantee a good WFE when cooled down to the low working temperatures. Although the baseline for the primary mirror diameter is 1.8 m, this parameter is not currently frozen, and it could vary in the range 1.5-2.0 meters. Indeed, it will be optimized in order to achieve the best possible sensitivity while preserving the overall cost of the mission. Figure 7 illustrates this optical design for a telescope length of 2.8 meters.
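As a rough sanity check on this aperture range (not a calculation from the proposal itself), the diffraction limit over the 1-5 µm band can be evaluated with the Rayleigh criterion; the short Python sketch below assumes the baseline 1.8 m aperture and the 1.5-2.0 m trade space quoted above.

```python
import math

def diffraction_limit_arcsec(wavelength_m, diameter_m):
    """Rayleigh criterion theta = 1.22 * lambda / D, converted to arcseconds."""
    theta_rad = 1.22 * wavelength_m / diameter_m
    return math.degrees(theta_rad) * 3600.0

# Illustrative values only: the 1-5 micron band and the 1.5-2.0 m aperture range.
for wavelength_um in (1.0, 2.0, 5.0):
    for diameter_m in (1.5, 1.8, 2.0):
        theta = diffraction_limit_arcsec(wavelength_um * 1e-6, diameter_m)
        print(f"lambda = {wavelength_um:.1f} um, D = {diameter_m:.1f} m -> {theta:.2f} arcsec")
```

For the baseline 1.8 m mirror this gives roughly 0.3 arcsec at 2 µm, consistent with a telescope that is diffraction limited over the annular field described above.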
Photometer
The photometer consists of 12 (possibly 10) HgCdTe 2k×2k-pixel infrared detectors located at the focal plane of the telescope, mainly for observations in a 15-degree half-cone around the zodiacal poles, through fixed broadband filters distributed in the 1-5 µm wavelength range. Its total field of view covers about 0.16 square degrees.
As shown in Fig. 6, each detector is dedicated to a specific color (with 2 detectors per color).
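The implied pixel scale is not stated here; a minimal back-of-the-envelope estimate, assuming 2048 × 2048-pixel detectors, square pixels, and no gaps or overlaps between detectors, is sketched below. These are assumptions made for illustration, not numbers from the proposal.

```python
import math

n_detectors = 12          # photometer focal plane (10 detectors is also under consideration)
pixels_per_side = 2048    # "2k x 2k" HgCdTe detectors
total_fov_deg2 = 0.16     # total field of view quoted above

total_pixels = n_detectors * pixels_per_side ** 2
fov_arcsec2 = total_fov_deg2 * 3600.0 ** 2            # 1 deg^2 = 3600^2 arcsec^2
pixel_scale = math.sqrt(fov_arcsec2 / total_pixels)   # assumes square pixels, no gaps

print(f"Implied pixel scale ~ {pixel_scale:.2f} arcsec/pixel")  # ~0.2 arcsec/pixel
```

A scale of about 0.2 arcsec per pixel would sample the roughly 0.3 arcsec diffraction limit estimated earlier at close to the Nyquist rate near 2 µm.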
Integral Field Spectrograph (IFS)
FLARE will also have spectroscopic capability with an integral field spectrograph (IFS) working in the same spectral region as the imager (i.e. 1.25-5 µm) with an instantaneous field of view of about 1 arcmin². The IFS consists of 2 identical arms, each made of three main modules: the Fore-Optics unit, which formats the 2D entrance FoV, and a set of 3 Image Slicer Units feeding 3 Spectrograph units. Each spectrograph has two spectral channels in order to accommodate the two octaves of the wavelength range. The total spectral range of the IFS precludes inserting lenses or dioptric elements within the overall optical layout. An all-reflective design is easier to adapt to a cryogenic environment and presents a higher throughput.
The Fore-Optics (FO) unit re-images the input F/10 telescope field onto the slicing mirrors and introduces an anamorphic magnification of the field (F/130 in the spectral direction; F/65 in the spatial direction). The anamorphism allows squeezing the spatial direction of the FoV to limit the length of the slices. The Fore-Optics unit is composed of four reimaging mirrors (FM1 to FM4) plus one folding mirror, as shown in Fig. 9. The two mirrors FM1 and FM3 are toroidal and perform the anamorphic compression of the beam by a factor of 2 in one direction. The second mirror is elliptical while the fourth mirror is spherical. Despite the apparent complexity, the use of four mirrors preserves manufacturability and allows reaching a high surface quality for each component.
Each Fore-Optics unit accepts a larger FoV of 1×2 arcmin (instead of the 1×1 arcmin required) in order to accommodate the 6 Image Slicer Units. Each Image Slicer Unit re-arranges the 2D FoV at the input focal plane of one spectrograph to form two entrance slits for the Spectrograph.
One Image Slicer Unit consists of three optical assemblies: the two slicing mirror arrays at the output image planes of the Fore-Optics, an array of pupil mirrors and an array of slit mirrors. Fig. 10 shows 6 channels (i.e. 6 slices) of one Image Slicer Unit.
The slicing mirror array consists of two stacks of 28 (56 in total) concave spherical mirrors (0.5 mm wide and 12 mm long), which slice the anamorphic field and produce an array of individual images of the pupil on the pupil mirrors. The proposed method for the manufacturing of the slicing mirrors is by glass polishing. Slices are made in Zerodur and assembled by optical contacting. The current design of the slicer (stack of 28 slices) is compatible with an innovative method [10] enabling the manufacture of one or more stacks of slices by a single standard polishing process thus reducing both the time and cost of production.
The pupil mirror array consists of four staggered lines of 7 rectangular mirrors. Each pupil mirror re-images its own slice of the anamorphic field onto a dedicated slit mirror located at the input focal plane (slit plane) of the Spectrograph. The slit mirror array consists of a single line of 28 rectangular mirrors, each 2 mm wide. Each slit mirror re-images the telescope pupil, imaged on its pupil mirror, onto the entrance pupil of the Spectrograph Unit. The surface of each slit mirror is spherical and concave. Each spectrograph unit is based on an Offner configuration with a magnification of 0.5. At the entrance of the spectrograph, a dichroic is located to split the octaves of the wavelength range. The secondary mirror acts as the grating and produces a spectrum with a spectral resolution higher than 500 on an HgCdTe, 2k×2k-pixel, infrared detector identical to the one of the photometer. Each spectrograph accepts two entrance slits, which are dispersed and imaged on a common detector.
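As a quick plausibility check (not from the proposal), one can verify that an octave of wavelength coverage at a resolving power of R ≥ 500 fits comfortably on a 2k detector; the sketch below assumes roughly two pixels per resolution element, which is an assumption, not a stated design value.

```python
import math

R = 500          # minimum spectral resolving power quoted for the IFS
pix_per_res = 2  # assumed Nyquist-like sampling of each resolution element

# For constant R = lambda / d_lambda, the number of resolution elements across one
# octave [lambda_0, 2*lambda_0] is the integral of R * d(lambda) / lambda = R * ln(2).
n_res_elements = R * math.log(2.0)
n_pixels_spectral = n_res_elements * pix_per_res

print(f"Resolution elements per octave: {n_res_elements:.0f}")              # ~347
print(f"Pixels needed in the spectral direction: {n_pixels_spectral:.0f}")  # ~693, well under 2048
```

Under these assumptions each octave needs only about a third of the detector length in the dispersion direction, leaving ample margin.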
CONCLUSION
FLARE can open up a new domain of exploration of the early universe during its first giga-year by providing simultaneously deep photometric and integral-field spectroscopic data over wide fields (100-200 sq. deg in photometry and 1-2 sq. deg in integral-field spectroscopy) in the near-infrared range up to 5 µm. No other project in operation or planned offers the same capabilities. FLARE provides a new opportunity that is complementary to the other missions, which are either a) wide-field with λ < 2 µm or b) small-field with λ > 2 µm.
FLARE's strategy will be optimized to build a census of the objects residing in the very early universe, namely: continuum-bright objects like Lyman-break galaxies using the wide-field imaging survey, emission-line objects like Lyman-alpha emitters using the 1-2 sq. deg integral-field spectroscopic survey, and a sample of at least 200 quasars.
Besides this early-universe facet, which will be dominant with about 70-80% of the mission lifetime, we plan to keep ~20% to observe star-forming regions in the Milky Way; finally, some parts of the mission could be dedicated to targeted observations in synergy with ATHENA, SKA or the E-ELT. | 2019-04-13T15:31:11.165Z | 2016-07-22T00:00:00.000 | {
"year": 2016,
"sha1": "9f912c20bfd997ce256611df87e848823c14c8a6",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1607.06606",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "79d22f565b1807aa4c27299f2b2bc6e5b653a13c",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Engineering",
"Physics"
]
} |
44065103 | pes2o/s2orc | v3-fos-license | The effect of inlet and outlet boundary conditions in image-based CFD modeling of aortic flow
Background Computational modeling of cardiovascular flow is a growing and useful field, but such simulations usually require the researcher to guess the flow’s inlet and outlet conditions since they are difficult and expensive to measure. It is critical to determine the amount of uncertainty introduced by these assumptions in order to evaluate the degree to which cardiovascular flow simulations are accurate. Our work begins to address this question by examining the sensitivity of flow to several different assumed velocity inlet and outlet conditions in a patient-specific aorta model. Methods We examined the differences between plug flow, parabolic flow, linear shear flows, skewed cubic flow profiles, and Womersley flow at the inlet. Only the shape of the inlet velocity profile was varied—all other parameters were identical among these simulations. Secondary flow in the form of a counter-rotating pair of vortices was also added to parabolic axial flow to study its effect on the solution. In addition, we examined the differences between two-element Windkessel, three element Windkessel and the outflow boundary conditions. In these simulations, only the outlet boundary condition was varied. Results The results show axial and in-plane velocities are considerably different close to the inlet for the cases with different inlet velocity profile shapes. However, the solutions are qualitatively similar beyond 1.75D, where D is the inlet diameter. This trend is also observed in other quantities such as pressure and wall shear stress. Normalized root-mean-square deviation, a measure of axial velocity magnitude differences between the different cases, generally decreases along the streamwise coordinate. The linear shear inlet velocity boundary condition and plug velocity boundary condition solution exhibit the highest time-averaged wall shear stress, approximately 8% higher than the parabolic inlet velocity boundary condition. Upstream of 1D from the inlet, adding secondary flow has a significant impact on temporal wall shear stress distributions. This is especially observable during diastole, when integrated wall shear stress magnitude varies about 26% between simulations with and without secondary flow. The results from the outlet boundary condition study show the Windkessel models differ from the outflow boundary condition by as much as 18% in terms of time-averaged wall shear stress. Furthermore, normalized root-mean-square deviation of axial velocity magnitude, a measure of deviation between Windkessel and the outflow boundary condition, increases along the streamwise coordinate indicating larger variations near outlets.
Conclusion It was found that the selection of inlet velocity conditions significantly affects only the flow region close to the inlet of the aorta. Beyond two diameters distal to the inlet, differences in flow solution are small. Although additional studies must be performed to verify this result, the data suggest that it is important to use patient-specific inlet conditions primarily if the researcher is concerned with the details of the flow very close to the inlet. Similarly, the selection of outlet conditions significantly affects the flow in the vicinity of the outlets. Upstream of five diameters proximal to the outlet, deviations between the outlet boundary conditions examined are insignificant. Although the inlet and outlet conditions only affect the flow significantly in their respective neighborhoods, our study indicates that outlet conditions influence a larger percentage of the solution domain.
Background
Cardiovascular computational fluid dynamics (CFD) models have the ability to aid physicians in non-invasive diagnostic decision making, and over the past decade, commercial, patient-specific modeling has become more common owing to numerous advancements in computing speed [1], medical image acquisition, and 3D data processing and visualization techniques [2][3][4][5].
Cardiovascular diseases (CVDs) are the leading cause of death globally [6], with the most common conditions including coronary artery disease (CAD), stroke, heart failure, rheumatic heart disease, heart arrhythmia, aortic aneurysms, and thromboembolic diseases [6,7]. CAD and stroke account for about 77% of CVD deaths [6], but many other conditions contribute to impairment or decreased quality of life of the patient. As a means to diagnosing and understanding these conditions, commercial, patient-specific modeling of CVDs has become more common in recent years. For instance, HeartFlow, Inc., Redwood City, California has developed a non-invasive CFD-based tool to identify lesions causing ischemia [8,9]. Another application of cardiovascular CFD is designing new surgical techniques and implantable medical devices [10,11]. Procedures and devices have traditionally been validated via clinical trials, animal tests, and evaluation of patients post-surgery. Cardiovascular modeling is now increasingly aiding these developments [11][12][13][14][15][16][17][18]. For example, [10] designed a 'virtual surgery' for pediatric surgeons based on patient-specific images. Their framework also computed post-operative hemodynamics based on the virtual surgery, thereby aiding surgeons in surgical planning. Furthermore, hemodynamic alterations are known to be a significant cause of ischemic disease progression [19]. Owing to these uses and other promising applications, there is a substantial need for accurate modeling of cardiovascular flows.
Unfortunately, much of the information required to perform accurate cardiovascular CFD is usually unavailable due to the difficulty of making in vivo flow measurements on live patients. Consequently, in order to formulate a well-posed problem, most researchers must guess parameters such as flow boundary conditions, vessel wall properties, and sometimes even geometric vessel parameters if patient imaging is not of sufficient quality. It has been shown that these factors and others can significantly alter the flow solution [20][21][22][23][24][25]. For example, [25,26] performed a numerical study to quantify the sensitivity of wall shear stress fields in the carotid bifurcation to geometric and secondary flow perturbations. They found that small geometric variations could significantly affect the flow solution. Sankaran et al. [27] quantified uncertainties due to geometry, boundary conditions, and blood viscosity in coronary blood flow simulations using a stochastic collocation method [28]. They concluded that solutions from modeling were most sensitive to variations in minimum lumen diameter. Sankaran et al. [29] developed a reduced-order model based on a machine learning approach to quantify uncertainties due to geometric variations. They found that larger arteries with significant stenosis were most sensitive to geometric variations. Liu et al. [19] modeled a patient-specific circle of Willis coupled with a zero-dimensional lumped parameter boundary condition. They determined that the accuracy and consistency of their method were improved relative to a resistance-based boundary condition. Steinman et al. [22] reported a collective study by 25 research groups to predict the variability of pressure drop in a giant aneurysm model with a proximal stenosis. The various research groups performed CFD analysis with the same lumen geometry, flow rates, and fluid properties. However, the researchers were free to choose their own numerical methods, discretization, and solution strategies. They concluded that pressure could be predicted with reasonable accuracy by CFD in the giant aneurysm model, but transitional patterns and derived quantities varied widely. Liu et al. [30] developed a new methodology for functional assessment of stenotic carotid arteries. Their methodology, based on thresholding the pressure gradient, successfully delineated severe stenoses from mild-moderate ones. Xiong et al. [31] investigated the effect of blood pressure variability on carotid atherosclerotic plaques. They determined that beat-to-beat blood pressure variability could severely exacerbate long-term outcomes of atherosclerosis. Wong et al. [32] studied the effect of fluid structure interaction on carotid bifurcation models with varying degrees of atherosclerosis. They concluded that wall shear stress and geometric deformation are significantly influenced by the severity of the disease. Liu et al. [33] simulated fluid structure interaction of blood flow and elastic arteries with eccentric stenotic plaques. They showed that wall shear stress, pressure drop and von Mises stress were positively correlated with the degree of vessel occlusion via plaques. Pekkan et al. [23] examined variations between solutions from a first-order accurate commercial software and a second-order accurate in-house flow solver. Only the second-order methods could accurately match the three-dimensional flow features found in an experimental model.
Recent studies [20,21] showed the effect of mesh resolution on patient-specific models and concluded that a typical mesh resolution in comparison to a higher mesh resolution resulted in pronounced underestimation of quantities such as wall shear stress and oscillatory shear index. They also showed that higher resolution meshes were able to capture flow instabilities.
Since cardiovascular CFD simulations are used to make critical decisions in diagnosis [30], surgical planning [10], and medical device designs [12,13,15], it is essential to verify that the assumptions made by the researcher do not negatively impact the fidelity of the solution. In this paper, we focus on the impact on flow solution of assumed inlet velocity boundary conditions in the human aorta. Some have argued that researchers concerned about the choice of inlet conditions should merely extend the size of the simulation domain so the flow is fully-developed by the time it reaches the point of interest. However, this is rarely a realistic solution since real arteries are poorly approximated by long, straight tubes, thus the flow is never truly fully-developed within the body. Furthermore, it is often prohibitively complex to add realistic upstream sections of the vasculature, as in the case of the aorta, which is immediately distal to the heart.
The aorta is of particular interest not only due to its position proximal to all other arteries, but also because invasive and non-invasive experimental measurements on the aortic arches of animals and humans have reported wide variations in the shape of the velocity profile, including flat [34], skewed [35], and highly patient-specific [36]. Consequently, in cases where patient-specific profiles are unavailable, the optimal profile shape to assume is not clear, and researchers have made many different choices [37][38][39][40][41][42][43][44]. To our knowledge, it is thus far undetermined to what extent the researcher's choice of aortic inlet boundary condition changes the solution, or how far distal to the inlet the flow is significantly affected by the choice of inlet condition. In addition, it is not always clear how the choice of outlet boundary condition affects the flow solution; most researchers choose between an outflow outlet condition, in which flow rate is specified at each outlet, and a Windkessel model, in which distal resistances and capacitances are modeled [45][46][47][48][49]. It is critical to answer these questions to determine the extent to which the hundreds of published studies with non-patient-specific inlet and outlet conditions are accurate. In the current study, we begin to address these issues by simulating aortic flow with a variety of idealized inlet and outlet conditions. At the inlet, we examine plug flow, parabolic flow with and without secondary flow, linear shear flows, skewed cubic profiles, and Womersley flow. At the outlet, we study the two-element and three-element Windkessel models and compare them with specified mass flow rate and zero diffusion flux (ANSYS ® Academic Research [Fluent], release 16.2, outflow boundary conditions, ANSYS, Inc.). The overall goal is to quantify the differences in flow solution caused by choice of inlet and outlet conditions for the purposes of evaluating the impact of assumed boundary conditions on previously-published aortic flow studies.
Methods
An image-based model of a patient-specific aorta of a healthy adult including the brachiocephalic trunk, common carotid arteries, and subclavian arteries was obtained through a personal correspondence (A. Marsden, personal communication, January 11, 2016). A perspective view of the model is shown in Fig. 1a.
The commercial CFD software package ANSYS Fluent (ANSYS ® Academic Research [Fluent], release 16.2) was used for our analysis. The built-in ANSYS meshing tool was employed to discretize the patient-specific geometry using tetrahedral grid elements. We solved the incompressible 3D Navier-Stokes equations shown in Eq. 1 using a finite volume discretization. While the pressure was computed using a second-order discretization, the momentum was determined employing a second-order upwind scheme. Pressure and velocity were coupled following the Semi-Implicit Method for Pressure Linked Equations (SIMPLE) algorithm. Blood was modeled as a Newtonian fluid with the density reported in [51] and a viscosity of 0.004 Pa s [52]. Although Newtonian models consistently underestimate significant physiological factors such as wall shear stress, the qualitative patterns have been shown to be similar to those predicted by non-Newtonian models [53][54][55][56][57][58]. In particular, the average difference in wall shear stress between Newtonian and non-Newtonian models, as demonstrated by [57,58], is about 10%. We also assumed the vessel walls to be rigid, which has been shown to overestimate quantities such as instantaneous wall shear stress. However, time-averaged wall shear stress has been shown to vary by only about 4.5% [59][60][61]. Moreover, this work is an attempt to study the effect of varying inlet boundary conditions along with the most commonly assumed parameters in cardiovascular simulations [62][63][64][65][66], rather than to perform an optimally realistic simulation of aortic flow.
In the present study, 6,484,130 tetrahedral elements were used to discretize the geometry, with a minimum element size of 6.98 × 10⁻⁵ m and a maximum element size of 2.52 × 10⁻⁴ m. Doubling the number of elements contributed only about 1.8% root-mean-square (RMS) differences in the velocity magnitude. A zoomed-in section of the grid is shown in Fig. 1b. Temporally, we employed a first-order implicit scheme with a time step of 0.01 s. This scheme was found to be both stable and efficient with our model in comparison to the other options.
$$\nabla \cdot \mathbf{u} = 0, \qquad \rho\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right) = -\nabla p + \mu \nabla^{2}\mathbf{u} \qquad (1)$$
Inlet sensitivity studies
In the first part of this study, the sensitivity of flow solutions to velocity inlet conditions was investigated. For these simulations, a zero diffusion flux for all flow variables at the outlets and an overall outlet flow rate were employed to impose specified % mass flow splits (ANSYS ® Academic Research [Fluent], release 16.2, 7.3.10, outflow boundary conditions, ANSYS, Inc.). The average outflow rates were obtained from [45,67].
The average outlet flow rates in the daughter vessels are shown in Table 1. Inlet boundary conditions in the model were set up using user-defined functions (UDFs). An external code (in C++) was written to generate custom inlet velocity boundary conditions. The total flow rate vs. time waveform, shown in Fig. 2, was adapted from [68]. Eighth order Fourier decomposition of the aforesaid waveform was used in the current study. For the different simulation cases, the shape of the inlet velocity profile was varied without changing the flow rate. Plug flow, parabolic flow, linear shear flows, skewed cubic flow profiles, and Womersley flow were examined. A schematic of all the primary flow inlet conditions examined except the Womersley condition is shown in Fig. 3. Womersley flow with an identical flow rate was modeled following the formulations of [69]. In addition to the aforementioned conditions, a parabolic primary inlet flow with a counter-rotating vortex pair secondary flow was simulated. The numerical formulation of this secondary flow is described by Eqs. 2 and 3. The mean secondary flow speed was 24% of the mean primary flow speed, as reported in [70], during the systolic periods. Realistic secondary flow in some vessels can be modeled by adding a simple proximal geometry extension, as is done for coronary arteries in [25]. However, such a model is not accurate for the aorta due to the complex in vivo upstream conditions caused by the beating heart and the aortic valve. In the current study, the secondary flow specified by Eqs. 2 and 3 was selected not because it accurately represented flow in an in vivo aorta, but because it aided evaluation of the effect on the flow of an arbitrary secondary flow of reasonable strength and shape [71][72][73][74]. The effect of secondary flow was studied on the parabolic primary velocity profile since it is the most commonly assumed primary velocity profile shape in cardiovascular simulations.
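As a concrete illustration of how such an inlet condition can be generated, the sketch below builds a parabolic axial profile whose instantaneous flow rate follows a truncated Fourier series, mirroring the eighth-order decomposition mentioned above. This is a hypothetical Python sketch, not the study's C++/UDF code; the inlet radius and Fourier coefficients are placeholders, not the study's values.

```python
import numpy as np

def flow_rate(t, a0, a, b, period):
    """Flow waveform Q(t) from a truncated Fourier series.
    a0, a, b are placeholder coefficients of an 8th-order fit (not the study's values)."""
    omega = 2.0 * np.pi / period
    q = a0
    for n in range(1, len(a) + 1):
        q += a[n - 1] * np.cos(n * omega * t) + b[n - 1] * np.sin(n * omega * t)
    return q

def parabolic_axial_velocity(r, t, R, a0, a, b, period):
    """Parabolic (Poiseuille-shaped) axial velocity at radius r, scaled so that the
    profile integrates to the instantaneous flow rate Q(t) over the inlet area."""
    u_mean = flow_rate(t, a0, a, b, period) / (np.pi * R ** 2)
    return 2.0 * u_mean * (1.0 - (r / R) ** 2)   # peak velocity = 2 x mean velocity

# Example usage with illustrative placeholder numbers only
R_inlet = 0.012                                   # assumed inlet radius [m]
a0, a, b = 1.0e-4, [2.0e-5] * 8, [1.0e-5] * 8     # hypothetical Fourier coefficients [m^3/s]
u = parabolic_axial_velocity(r=0.5 * R_inlet, t=0.2, R=R_inlet,
                             a0=a0, a=a, b=b, period=1.0)
```

Plug, linear shear, and skewed cubic profiles would differ only in how the radial shape function is defined, with the same flow-rate scaling applied.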
In Eqs. 2 and 3, $\vec{V}$ and $\vec{W}$ are velocity vectors perpendicular to the axial velocity vector $\vec{U}$; $(x_1, y_1)$ and $(x_2, y_2)$ are the coordinates of the centers of the vortices; $\vec{V}$ and $\vec{W}$ were set to $\vec{0}$ near the vortices' centers (within 15% of the vessel radius) to suppress the blow-up of the velocity components. K(t) was chosen to ensure that the mean secondary flow speed was 24% of the mean primary flow speed, as reported in [70].
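Since Eqs. 2 and 3 themselves are not reproduced in this excerpt, the sketch below shows only a generic counter-rotating vortex pair with the two features described in the text (zeroed velocity near the vortex cores and a time-dependent scaling K); it should be read as an illustrative stand-in, not the study's actual formulation.

```python
import numpy as np

def vortex_pair_secondary_flow(x, y, centers, K, core_cutoff):
    """Generic counter-rotating vortex pair (an illustrative stand-in, NOT the study's
    Eqs. 2-3). Each point vortex induces a tangential velocity of magnitude K/r; the
    in-plane velocity is zeroed inside a small core around each center, mirroring the
    cutoff described in the text, to avoid the 1/r blow-up."""
    v = np.zeros_like(x, dtype=float)
    w = np.zeros_like(y, dtype=float)
    for (xc, yc), sign in zip(centers, (+1.0, -1.0)):   # opposite senses of rotation
        dx, dy = x - xc, y - yc
        r = np.hypot(dx, dy)
        mask = r > core_cutoff
        # tangential velocity of a point vortex: (v, w) = sign * K / r^2 * (-dy, dx)
        v[mask] += sign * K * (-dy[mask]) / r[mask] ** 2
        w[mask] += sign * K * (dx[mask]) / r[mask] ** 2
    return v, w

# K would be rescaled at each time step so that the mean in-plane speed is 24% of the
# mean axial speed during systole, as the study reports for its own K(t).
```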
Outlet sensitivity studies
In the second part of this study, we examined the sensitivity of the flow to the choice of outlet boundary conditions, focusing on those most commonly assumed: the outflow condition and the three-element (RCR) and two-element (RC) Windkessel models [75][76][77]. For these simulations, a parabolic inlet velocity condition was prescribed and there was no secondary flow at the inlet. The Windkessel model was invented in 1899 [78] and later adapted to model transient outflow boundary conditions in [75]. The three-element Windkessel model is an electric-circuit analogue consisting of a proximal resistance R_p in series with a parallel network of a capacitor C and a distal resistance R_d, as shown in Fig. 4a. The two-element model is identical to the three-element model except for the absence of the proximal resistance, as shown in Fig. 4b. While the proximal resistance models the viscous resistance of the vasculature immediately downstream of the vessel, the distal resistance accounts for the resistance of the capillaries and the venous circulation. The capacitor is representative of the compliance of the downstream vessels. Assuming such an analogue yields Eq. 4 [75,79,80]. The outlet pressure was then obtained using an implicit time discretization of Eq. 4 as described in [80].
In Eq. 4, p represents the outlet pressure, and Q represents the flow rate through the vessel. Typically, the resistance and capacitance parameters of the Windkessel model are tuned to match the outlet flow rate from the in vivo model. However, since flow rates through the outlets were unavailable for this particular patient, these parameters were adapted from a similar aorta model [77]. Table 2 lists the resistance and capacitance values used for the various daughter vessels. For all simulations, flow was assumed to be laminar since the Reynolds number Re_D, based on the inlet aortic diameter D, was about 1700 at peak systole. The simulations were run for five cardiac cycles. Wall shear stress (WSS), pressure, and vorticity contours were examined from the fifth cardiac cycle. The centerline of the model was computed, and data slices perpendicular to the centerline were extracted at various locations along the aorta for analysis.
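As a sketch of how such an outlet update can be implemented, the function below advances the textbook three-element (RCR) Windkessel relation with a backward-Euler step; the study cites [75, 79, 80] for its exact Eq. 4 and discretization, which may differ in detail, and the numbers in the usage example are placeholders rather than the values of Table 2.

```python
def windkessel_rcr_pressure(p_old, q_new, q_old, dt, Rp, Rd, C):
    """One implicit (backward-Euler) update of the textbook RCR Windkessel ODE
        C dp/dt + p / Rd = (1 + Rp / Rd) * Q + C * Rp * dQ/dt.
    Setting Rp = 0 recovers the two-element (RC) model. This is a generic sketch,
    not necessarily identical to the study's Eq. 4."""
    dq_dt = (q_new - q_old) / dt
    rhs = C * p_old / dt + (1.0 + Rp / Rd) * q_new + C * Rp * dq_dt
    return rhs / (C / dt + 1.0 / Rd)

# Example: advance one outlet's pressure by the 0.01 s time step used in the study.
# Resistances [Pa s / m^3] and capacitance [m^3 / Pa] below are placeholders only.
p_next = windkessel_rcr_pressure(p_old=1.0e4, q_new=2.0e-5, q_old=1.8e-5,
                                 dt=0.01, Rp=1.0e8, Rd=1.5e9, C=1.0e-9)
```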
Effect of the shape of the inlet axial velocity profile
This subsection discusses the influence of the axial velocity profile shape on the solution. These flows had no secondary flow at the inlet. Data slices perpendicular to the centerline of the model were extracted at various locations along the aorta. Figure 5 shows data at streamwise coordinates of 0.5D and 1D, where 'D' is the diameter of the aorta's inlet. Axial velocity magnitudes are depicted by contours. In-plane velocities are represented by the vectors in Fig. 5. Surfaces closer to the inner and the outer arch are denoted by the letters 'I' and 'O' respectively. The effect of inlet boundary conditions is more pronounced closer to the inlet of the vessel. For instance, the peak in axial velocity is approximately at the center of the cross-section for the parabolic inlet boundary condition, as shown in Fig. 5a. Similarly, the contours in Fig. 5b, c show marked similarities to their respective inlet conditions, linear shear flows 1 and 2. Owing to inertia, flow inside the curved vessel gets pushed towards the outer side of the arch, labeled 'O'. This effect is apparent in the in-plane velocity vectors of the parabolic velocity inlet cross section in Fig. 5a. The counter-rotating vortex (CRV) pair, formed because of the aforesaid effect [81,82], is retained at a streamwise position of 1D for the parabolic inlet boundary condition. In addition to the CRV pair, there is a smaller vortex closer to the inner arch, 'I', for the parabolic inlet boundary condition. A counterclockwise rotating vortex is present in the flow with the linear shear 1 inlet condition. However, the linear shear 2 inlet has a clockwise rotating vortex, observed in Fig. 5c. For linear shear flow inlet boundary conditions, there is a change in the direction of rotation of the tangential velocity vectors with increasing streamwise coordinate. This effect can be observed by comparing Fig. 5b, e for the linear shear 1 inlet condition. A similar trend is also noticeable in Fig. 5c, f for the linear shear 2 inlet boundary condition. It is also notable that both the primary and the secondary in-plane flows look considerably different for the three boundary conditions illustrated in Fig. 5, but in all three cases, secondary flows are only a small percentage of the total flow velocity. Figure 6 shows data slices at streamwise distances of 1.75D and 2.5D from the inlet, where D is the inlet diameter, during peak systole. At these cross-sections, all boundary conditions shown yielded a clockwise-rotating secondary flow. Branching vessels have been shown to have a considerable effect on the secondary flow [40], so it is possible this was caused by the branching daughter vessels and the effect of the curvature of the vessel [40,83,84]. The velocity of the streamwise flow is skewed towards the inner wall of the vessel. This result agrees well with various other studies such as [40,85-88], which have observed reversed and skewed flow along the inner wall of the vessel. Figure 6. Axial velocity magnitude contours and in-plane velocity vectors along planes normal to the centerline at 1.75 and 2.5 inlet diameters downstream from the inlet during peak systole; a, d parabolic velocity inlet, b, e linear shear velocity 1 inlet, c, f linear shear velocity 2 inlet; note that the scales of the axial velocity contours are different for the two cross sections illustrated.
Although a direct validation of our simulation cannot be performed due to lack of availability of patient velocity data, the qualitative features from our simulations match well with previous aortic flow studies as indicated above.
There are a few minor differences between the three cases shown in Fig. 6, such as the shape of the peak in axial velocity contours and the direction of vectors in the secondary flow, especially at 1.75D. The differences in axial flow may be caused by a combination of the varying inlet velocity profiles and distortions to the secondary flow caused by the vessel's curvature. Figure 7 quantifies differences between various inlet boundary conditions and the parabolic inlet velocity boundary condition using normalized root-mean-square deviation (NRMSD) of axial velocity magnitude as described in Eq. 5, integrated over cross-sectional slices at the coordinates indicated. NRMSD generally decreases with increasing streamwise coordinate, although there is a slight increase at 1.75D. It is notable that NRMSD is within 0.03 at 2.5D for every inlet boundary condition examined. This is more than an order of magnitude smaller than its value at the inlet for most boundary conditions. Figure 8 compares surface pressure and wall shear stress contours for two representative inlet velocity profiles: parabolic and linear shear 1. The two cases are very similar except for minor differences close to the inlet of the vessel. This was also typical for other inlet velocity profile cases not shown in the figure. Table 3 shows differences between integrated wall shear stress of flows with different inlet conditions compared with the parabolic inlet condition, calculated using Eqs. 6 and 7. The spatial integrals in the aforementioned equations were computed following (ANSYS ® Academic Research [Fluent], release 16.2, 20.3, Surface Integration, ANSYS, Inc.). Temporally, the integrals were calculated using a composite trapezoidal rule. The percent difference in time-averaged wall shear stress (TAWSS) over a cardiac cycle (Eq. 7) was evaluated as
$$100 \cdot \frac{\int_{\text{cardiac cycle}}\int_{\text{wall}}\left[(\tau_w)_{\text{inlet condition}} - (\tau_w)_{\text{parabolic}}\right]\,dA\,dt}{\int_{\text{cardiac cycle}}\int_{\text{wall}}(\tau_w)_{\text{parabolic}}\,dA\,dt}, \qquad (7)$$
with the corresponding integrated wall shear stress (WSS) comparison (Eq. 6) defined in the same way but evaluated at peak systole. The table contains comparisons for integrated wall shear stress at peak systole (Eq. 6) and time-averaged wall shear stress over a cardiac cycle (Eq. 7). Both of these parameters are integrated over the entire simulation domain. These differences are also quantified locally across the vessel wall up to 1D from the inlet. Linear shear flow 1 and plug flow exhibit the largest differences integrated over the entire domain, about 8% in time-averaged wall shear stress. During peak systole these numbers are as high as 15% for plug flow. However, in the first 1D from the inlet, linear shear flow 1 has the largest local variations, about 18% in time-averaged wall shear stress and about 33% in integrated wall shear stress during peak systole. It is also notable that the parabolic inlet condition has the lowest integrated wall shear stress and time-averaged wall shear stress among the inlet conditions examined.
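For readers who want to reproduce these deviation metrics, the sketch below implements an NRMSD of the kind used here and the percent TAWSS deviation of Eq. 7. Two caveats: the paper's Eq. 5 normalization is not reproduced in this excerpt, so normalizing by the range of the reference field is an assumption, and the array layout is hypothetical.

```python
import numpy as np

def nrmsd(u_case, u_ref):
    """Normalized root-mean-square deviation between two axial-velocity fields sampled
    on the same cross-sectional slice. Normalizing by the range of the reference field
    is one common convention; the paper's exact Eq. 5 may differ."""
    rmsd = np.sqrt(np.mean((u_case - u_ref) ** 2))
    return rmsd / (u_ref.max() - u_ref.min())

def tawss_deviation_percent(wss_case, wss_ref, face_area, dt):
    """Percent difference in wall shear stress integrated over the wall and over a
    cardiac cycle (cf. Eq. 7). wss_case and wss_ref are (n_steps, n_faces) arrays of
    wall shear stress magnitude, face_area holds per-face areas, and dt is the time
    step; time integration uses a composite trapezoidal rule, as in the paper."""
    per_step_case = np.sum(wss_case * face_area, axis=1)   # spatial integral per step
    per_step_ref = np.sum(wss_ref * face_area, axis=1)
    int_case = np.sum(0.5 * (per_step_case[1:] + per_step_case[:-1])) * dt
    int_ref = np.sum(0.5 * (per_step_ref[1:] + per_step_ref[:-1])) * dt
    return 100.0 * (int_case - int_ref) / int_ref
```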
Effect of adding secondary flow to the inlet
In this subsection, the effect of adding secondary flow to a parabolic axial inlet velocity profile is discussed. Only the parabolic axial flow is considered since it is the most commonly assumed inlet velocity profile shape in cardiovascular simulations. Table 4 illustrates the variations in wall shear stress magnitudes between parabolic inlet flows with and without secondary flow at the inlet. Wall shear stress magnitude variations are significantly higher during diastole. Wall shear stress magnitudes vary the most near the inlet, but this phenomenon is also observed when wall shear stress is integrated over the entire domain.
The magnitude of these differences must be interpreted in the context of other uncertainties in cardiovascular flow simulation. For example, in an image-based coronary arterial model examined by [25,26], different models of blood rheology accounted for about 8% variability in the solution, the effect of secondary inlet flow yielded 13% variability, and geometric uncertainties resulted in 47% variability in wall shear stress. It is notable that they generated secondary flow using an extension to their model with added curvature and helical pitch. Another study, [83], examined the effect of curvature and inlet velocity profile on a right coronary artery model. They concluded that inlet velocity profile had little effect on the flow compared with the effect of changing the curvature of the model. From our study, it is evident that the effect of changing the shape of the primary flow inlet velocity profile is not felt significantly beyond 1.75D, with D being the aortic root diameter. However, upstream of 1D, the shape of the axial flow can lead to as much as 18% variability in terms of timeaveraged wall shear stress. Adding secondary flow on top of parabolic axial flow also results in significant variability in wall shear stress upstream of 1D, as high as 26% during diastole. Consequently, if accurate temporal modeling closer to the inlet and the aortic arch is desired, our results emphasize the need to model patient-specific inlet velocity conditions including secondary flow. Table 5 illustrates the differences in wall shear stress magnitude between the threeelement Windkessel model, the two-element Windkessel model, and the prescribed percentage outflow boundary conditions. All three of these cases had identical parabolic inlet axial velocity conditions and no secondary flow at the inlet. The data show no significant difference in wall shear stress between the two-element Windkessel and the three-element Windkessel conditions. However, the two-element and the threeelement models vary as much as about 18 % from the case with an outflow boundary condition. Comparing these results with the magnitude of variations from other factors suggests that outlet boundary conditions are a significant contributor to uncertainty in the solution. Figure 9 shows the differences between the Windkessel boundary conditions and the outflow condition using normalized root-mean-square deviation (NRMSD) of axial velocity magnitude as described in Eq. 5, integrated over cross-sectional slices at the coordinates indicated. A general increase in NRMSD is observed with increasing streamwise coordinate, although there is a slight decrease at 4.5D relative to that observed at 3.5D. Furthermore, variation in NRMSD beyond 3.5D is constant within 2.5% for both the Windkessel boundary conditions examined. The fact that NRMSD is highest near the outlet is expected when comparing cases that vary outlet conditions. However, it is notable that whereas NRMSD decayed nearly to zero for all inlet conditions by 2.5 diameters from the inlet, NRMSD remained high more than 5 diameters proximal to the outlet. This suggests that the choice of outlet condition has a noticeable effect on a larger percentage of the solution domain than the choice of inlet condition.
Conclusions and summary
This work investigated the variation introduced into a simulation of aortic blood flow by choice of inlet and outlet boundary conditions. Inlet plug flow, parabolic flow, linear shear flows, skewed cubic flows, and Womersley flow were simulated and the resulting flow solutions were compared to study the effect of inlet conditions. Parabolic flow with and without secondary flow at the inlet was also studied. All other parameters were identical among these simulations. While the parabolic inlet condition without secondary flow has the lowest time-averaged wall shear stress, linear shear flow and plug flow have the highest time-averaged wall shear stress, about 8% higher than parabolic inlet condition without secondary flow. The axial and in-plane velocities for the different flow solutions are considerably different across data slices extracted at 0.5D and 1D from the inlet, where D is the inlet diameter. Data slices at 1.75D and 2.5D are qualitatively similar but there are minor differences between secondary flows at 1.75D. Normalized root-mean-square deviation (NRMSD) evaluated between the parabolic inlet condition without secondary flow and other axial velocity boundary conditions generally decreases along the streamwise coordinate and is less than 0.03 at 2.5D for all cases. These statistics show that the effect of inlet conditions becomes less pronounced as the streamwise coordinate increases. Adding secondary inlet flow to parabolic axial flow results in a slight variation of about 4% in terms of the time-averaged wall shear stress. However, between the inlet and a streamwise coordinate of 1D, there are larger differences. This is especially noticeable during diastole when shear stress magnitude differences integrated up to 1D are as high as 26%.
Outlet conditions prescribing a zero-diffusion flux with specified mass flow rate (ANSYS ® Academic Research [Fluent], release 16.2, outflow boundary conditions, ANSYS, Inc.), two-element Windkessel, and three-element Windkessel conditions were investigated. Both the two-element and the three-element Windkessel models do not vary much near the inlet, as seen from the time-averaged wall shear stress variations. For instance, the two-element and the three-element models differ from the outflow boundary condition by 0.3544 and 0.3571% respectively in terms of time-averaged wall shear stress integrated up to 1D. However, in terms of time-averaged wall shear stress integrated throughout the model, they differ from the outflow boundary condition by as much as about 18%. Normalized root-mean-square deviation (NRMSD) evaluated between the outflow boundary condition and the Windkessel models generally increases along the streamwise coordinate. However, beyond 3.5D, NRMSD varies by less than 2.5% along the streamwise coordinate. These statistics indicate that NRMSD remains constant for more than 5 diameters proximal to the outlet and that the effect of outlet conditions is more pronounced as the streamwise coordinate increases.
Based on the current results along with other studies on the subject [70,89,90], it is reasonable to conclude that inlet conditions, including both primary and secondary velocity profile shape, significantly affect the solution up to about two inlet diameters distal to the inlet. Similarly, the type of outlet condition chosen affects the solution significantly up to five inlet diameters proximal to the outlet. This suggests that the outlet boundary conditions influence a larger percent of the solution domain. The amount of variation observed between the various flow cases in this study can be interpreted as a lower bound on the error that can be expected in aortic flow simulations that do not use patient-specific boundary conditions. Although this study is limited to one healthy model, the underlying mechanisms of flow over the curvature of the vessel and the effect of branches would likely render qualitatively similar results in other subject-specific models. Nevertheless, studying more subject-specific models along with corresponding physiologically realistic inlet velocity boundary conditions to verify our conclusions is of interest for future work.
Greek letters
τ : shear stress.
Non-dimensional numbers
Re : Reynolds number.
Authors' contributions EK designed the study and evaluation procedures. SM worked on implementing the study; preparing the model, simulating, and post processing the data. SM wrote this manuscript. EK contributed in reviewing and revising it. Both authors read and approved the final manuscript. | 2018-06-01T16:04:39.981Z | 2018-05-30T00:00:00.000 | {
"year": 2018,
"sha1": "4b03c0eecf545197401848da05fb4c6b951804d9",
"oa_license": "CCBY",
"oa_url": "https://biomedical-engineering-online.biomedcentral.com/track/pdf/10.1186/s12938-018-0497-1",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "51326a12164548af5dfac1f68d7ca8a6f6c8a471",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine",
"Geology"
]
} |
241615894 | pes2o/s2orc | v3-fos-license | The role of the University of Jordan in fighting against intellectual and religious extremism among their students
This study investigated the role of the University of Jordan in fighting against intellectual and religious extremism among its students. It explored this role from the perspective of the members of the councils of the departments of the Faculty of Educational Sciences at the University of Jordan; 73 such members were sampled. It was found that the University of Jordan plays a moderate role in fighting against intellectual and religious extremism among its students: the overall mean for the university's role in fighting against intellectual extremism is 2.91, and the overall mean for its role in fighting against religious extremism is also 2.91. The researcher recommends activating the role of Jordanian universities in fighting against all types of extremism. She also recommends conducting similar studies targeting Jordanian private and public universities to compare their results with those of the present study.
leaves. They are made in accordance with the applicable regulations. The faculty council is responsible for making the annual budget of the faculty and looking into the issues assigned to it by the faculty dean (regulations No 18 of 2018/ the regulations of the University of Jordan).
Youth represent the majority of the students enrolled at universities, and they are the most vulnerable to social problems. That is because youth are very active and strongly motivated to seek renewal and change, and because they face many political, economic and social challenges. Such challenges have made youth suffer from many problems, including alienation, neglect, deprivation, and cultural and political marginalization, which have led to the prevalence of extremism among youth (Al-Zahrani, 2013).
Extremism is a complex phenomenon. It involves a set of deviant beliefs, feelings, acts, strategies and attitudes that differ from those prevalent in society, and it may be adopted by an individual or a group (Chuprov and Zubok, 2010). Extremism refers to every behaviour that violates the prevalent social norms in society (Al-Rukabi, 2013). It is a serious deviation that poses a threat to the security of society, whether it is religious, social or political extremism (Ameen, 2013).
Extremism is associated with violence and terrorism. All societies worldwide have suffered from extremism throughout the ages. However, in the modern age, extremism has manifested in aggression against innocent people, damage to property, conflicts with authorities and threats to security. Although extremists are prosecuted, prosecution by itself is not enough to fight against extremism. In fact, fighting against extremism requires investigating the psychological, social, economic and political problems lying behind it. Such investigation shall enable decision makers to take precautionary measures (Al-Kawari, 2012).
There are several reasons for the prevalence of extremism in Arab and Islamic societies. They include the decline of the development of such societies; due to such decline, the enemies of Islam started to make devilish plans to control Arab and Islamic societies and take over their fortunes. They also include the spread of western values, hopelessness and ignorance among people in those societies, poor compliance with Islamic principles and weak faith, the increasing number of conflicts between people, and the prevalence of a rigid way of thinking (Yousif, 2011).
Thus, it is necessary to reform the way youth think; such reform has been sought by people throughout the ages. It shall contribute to fighting against extremism and deviant thinking, foster intellectual and social development, lead to the prevalence of a flexible way of thinking, and contribute to addressing many problems faced by society and to taking effective decisions for addressing them (Qatami and Al-Esbai'y, 2018).
Extremism is a global phenomenon that all countries suffer from. It is attributed to political, intellectual, economic and social reasons, and addressing such reasons shall contribute to fighting against it. Extremism involves holding deviant beliefs that violate social norms and the principles of Islam and hinder people from meeting public interests. There are two types of extremism: the first is extremism shown by an individual, which manifests in the way one thinks and behaves; the second is extremism shown by a group, which is more dangerous than the first type.
Statement of the Problem
Based on several studies (Al-Asali, 2010; Al-Khaza'leh, 2018; Al-Qudah and Ashour, 2019), the third millennium is associated with several technical and knowledge-related challenges. Such challenges affected people in political, cultural, economic, and social areas. They affected the educational process in universities and schools. Due to openness to other cultures, universities failed to preserve the cultural identities of people and to promote intellectual security among students. Thus, extremism started to spread among students. There are also other academic, social and economic reasons that led to the prevalence of extremism among university students. Based on a survey made by the Strategic Research Center at the University of Jordan (2016), 6% of the students enrolled at that university adopt extremist ideas and beliefs, about half have political interests, and 11% adopt Islamic thought.
Youth play a significant role in society. Many young university students are influenced by extremist ideas, attitudes and thoughts. As far as the researcher knows, there isn't any study that aimed to explore the role of the University of Jordan in fighting against extremism among students in the light of the contemporary challenges in society. Thus, the researcher conducted this study. The study's question: This study aimed to answer the following question: Q.1 What is the role of the University of Jordan in fighting against intellectual and religious extremism among its students from the perspective of the members of the councils of the departments of the Faculty of Educational Sciences at the University of Jordan?
The study's objective: This study aimed to explore the role of the University of Jordan in fighting against intellectual and religious extremism among its students, in order to identify the strengths and weaknesses in this regard and address such weaknesses.
The study's significance: This study is significant for the reasons below: -It sheds light on a significant topic (i.e. extremism and its types, reasons and impacts) and on a significant issue (i.e. the role of universities in fighting against extremism). Extremism is still receiving much attention from researchers, who are concerned with exploring the reasons behind it.
-This study aims at providing university students with knowledge about the way they ought to think, which shall contribute to protecting them from adopting deviant and extremist ideas.
-As far as the researcher knows, there isn't any study that aimed to explore the role of Jordanian public universities in fighting against intellectual and religious extremism among their students.
-This study serves as a significant reference for the researchers who are interested in such issues and assists them in conducting studies about them. It provides decision makers with knowledge about the mechanisms of fighting against intellectual and religious extremism among university students. The researcher believes that it is necessary to explore the role of the University of Jordan in fighting against intellectual and religious extremism among its students, because there aren't many studies that aim to explore the role of Arab universities in this regard, and there isn't any study that explored the role of Jordanian universities in particular. Thus, this study assists researchers in conducting studies that contribute to fighting against extremism.
Definition of terms:
Role: It refers to a set of functions, tasks, and duties that one, institution or organization must do in order to meet specific goals in society (Ahmad, 2000, 35).
Role (the procedural definition): It refers to a set of functions, duties, and acts that must be carried out by Jordanian universities in order to fight against extremism. It is investigated through the instrument designed by the researcher. Extremism: It refers to the process of adopting deviant values, rules, thoughts and acts that aren't consistent with the prevalent norms in society. Such values, rules, thoughts and acts may lead one to carry out violent acts. They may be adopted by an individual or a group. Extremism aims at promoting specific opinions in society and making changes through the use of force (Al-Omari, 2009: 42).
Intellectual extremism: It refers to extremist thoughts adopted by an individual. Such thoughts violate laws and principles, such as fairness, equality and justice (Al-Sultani, 2015: 56). The study's limits: This study was conducted during the second semester of the year 2019/2020. It targets the members of the councils of the departments of the Faculty of Educational Sciences at the University of Jordan.
Previous studies:
The researcher reviewed several studies that are related to the role of universities in fighting against intellectual and religious extremism among their students, including several MA theses and articles accessed through the relevant journals. Such studies are shown below, arranged from the oldest to the latest:
Rezeq (2006) investigated the reasons and signs of religious extremism and terrorism adopted by young university students. She explored the role of Islamic education in fighting against such religious extremism and terrorism. 322 female and male students were sampled; they were selected from Mansourah University in Egypt. To meet the study's goals, a descriptive analytical approach was adopted, and a questionnaire was used to explore the signs, reasons and impacts of religious extremism. It was found that the signs of religious extremism include the misinterpretation and misunderstanding of religious texts, and that the reasons behind religious extremism include terrorism, having too much unutilized free time and bad friends. It was also found that family and mosque play a significant role in promoting security and protecting people from religious extremism and terrorism.
Al-Asali (2010) investigated the degree to which religious extremism is prevalent among the students enrolled in Palestinian universities from the perspective of faculty members, investigated the reasons behind such prevalence, and suggested mechanisms for fighting against religious extremism. 157 faculty members were sampled. To meet the study's goals, a descriptive approach was adopted; interviews were conducted and a questionnaire was used. The psychological area was ranked first and the human relationships area was ranked second. It was found that there isn't any significant difference between the respondents' attitudes towards the prevalence of religious extremism among the students enrolled in Palestinian universities which can be attributed to gender, university or major, but that there is a significant difference which can be attributed to academic qualification, in favor of professors. It was found that the most significant reasons behind the prevalence of religious extremism include: economic sanctions, ignorance about the provisions of Shariah, divisions in society, conflicts between political parties and refusal to accept different opinions.
Davydov (2015) investigated the reasons behind extremism among youth and aimed to suggest mechanisms for fighting against extremism through educational institutions. To meet the study's goals, a survey-based approach was adopted and a survey was used. The researcher selected a sample that consists of 70 experts specialized in education and in fighting against extremism. It was found that economic reasons, such as low family income and unemployment, are the most significant reasons behind the prevalence of extremism. Other reasons include: the use of an ineffective parenting style; the impacts of political parties, media and other cultures; the failure of educational institutions to do their functions effectively; the presence of a great number of immigrants; and the lack of a culture of tolerance. The latter researcher suggests that educational institutions are responsible for preventing the prevalence of extremism through addressing the reasons behind it, delivering education and promoting knowledge. He suggests that media plays a significant role in promoting social and religious awareness, and that fighting against unemployment shall contribute to fighting against extremism.
Al-Khaza'leh (2018) investigated the role of faculty members in Jordanian universities in promoting awareness among students about the risks and implications of terrorist thinking and extremism and in promoting national belonging, from the perspective of the students in Jordanian universities. 459 female and male students were sampled; they were selected through the random sampling method from Jordanian universities. A descriptive analytical approach was adopted and a questionnaire was used to meet the goals. It was found that faculty members in Jordanian universities play a moderate role in promoting awareness among students about the risks and implications of terrorist thinking and extremism and in promoting national belonging. It was found that there is a statistically significant difference between the respondents' attitudes which can be attributed to gender, in favor of males, and a statistically significant difference which can be attributed to faculties, in favor of human sciences faculties.
Al-Eslaihat (2018) investigated the degree to which faculty members in Jordanian universities acknowledge the reasons behind intellectual extremism, and explored the impact of several variables on such perceptions. To meet the study's goals, he developed a questionnaire that sheds light on 4 areas and consists of 61 items. The questionnaire forms were passed to faculty members who were selected from three public universities in three provinces in Jordan. A survey-based approach was adopted. It was found that the degree to which faculty members in Jordanian universities acknowledge the reasons behind intellectual extremism is high, with political reasons ranked first. It was found that there isn't a statistically significant difference between the respondents' attitudes which can be attributed to gender, faculty type or academic rank.
Rabee'an and Al-Zboon (2018) investigated the role of Hail University in preventing the spread of intellectual extremism among youth. A survey-based descriptive approach was adopted and a survey was used. 162 faculty members were sampled; they were selected from the latter university. It was found that the role of Hail University in preventing the spread of intellectual extremism among youth is moderate. The economic area was ranked first, the social area second, the academic area third, and the political area fourth. It was found that there isn't any statistically significant difference between the respondents' attitudes which can be attributed to gender, but that there is a statistically significant difference which can be attributed to experience, in favor of the ones whose experience is less than 5 years and the ones whose experience is within the range of 5-10 years when compared with the ones whose experience is 10 years or more.
Al-Qudah and Ashour (2019) investigated the role of the faculties of Shariah and education in Jordanian public universities in fighting against religious extremism among students from the perspective of the faculty members. They aimed to explore the obstacles that hinder those faculties from fighting against religious extremism and to make suggestions for fighting against it from the perspective of educational leaders. 262 faculty members were sampled through the random sampling method, and 15 educational leaders (i.e. deans and heads of departments) were sampled from the latter faculties. To meet the study's goals, two instruments were developed: a questionnaire and interviews. It was found that the role of the faculties of Shariah and education in Jordanian public universities in fighting against religious extremism among students is moderate in all areas jointly and separately. The most significant obstacles that hinder those faculties from fighting against religious extremism include: the lack of financial and political support for those faculties, and the absence of a clear plan for fighting against religious extremism. The most significant suggestions for fighting against religious extremism include: adding courses that aim at fighting against religious extremism and promoting knowledge about Islamic principles and behaviours.
Al-Shahrani and Azab (2019) investigated the role of universities in fighting against religious extremism through education, scientific research, and community service. They aimed to make suggestions and recommendations for fighting against religious extremism. A descriptive analytical approach was adopted. A questionnaire was developed and used for collecting data from the sample, which consists of 246 faculty members who were selected from King Saud University in Saudi Arabia. The researchers found that universities play a significant role in fighting against religious extremism through education and through promoting respect for others and their ideas, opinions and religion. Universities fight against religious extremism through offering students discussion opportunities and encouraging faculty members to conduct research about the reasons of extremism. They also encourage faculty members to conduct research that contributes to solving the problems faced by society and promotes moderate Islamic beliefs, tolerance and ethics. They fight against religious extremism through serving the community; for instance, they develop programs that aim at enabling students to utilize their free time and meet the needs of the community.
Daboos and Salhah (2019) investigated the role of Palestinian universities in fighting against violence and extremist ideas. They used two scales. The first scale aimed at exploring the role of Palestinian universities in fighting against extremist ideas; it consists of 45 items and sheds light on five areas. The second scale aimed at investigating the role of Palestinian universities in fighting against violence; it consists of 46 items and sheds light on four areas. 44 faculty members and 288 university students were sampled from An-Najah National University and Palestine Technical University. It was found that the severity of the reasons behind the prevalence of violence and extremist ideas among students in those two universities is high. It was found that there is a statistically significant difference between the respondents' attitudes towards such reasons which can be attributed to gender and faculties, but there isn't any statistically significant difference which can be attributed to role in university (i.e. faculty member or student).
Comments on the aforementioned studies
Some of the aforementioned studies aimed to shed light on the prevalence of extremism in universities and offer suggestions for fighting against it; they include the ones conducted by Rezeq (2006), Al-Asali (2010) and Davydov (2015). As for the studies conducted by Al-Khaza'leh (2018), Rabee'an and Al-Zboon (2018), Al-Qudah and Ashour (2019), Al-Shahrani and Azab (2019) and Daboos and Salhah (2019), they investigated the role of universities in fighting against extremism. As for Al-Eslaihat (2018), he aimed to explore the degree to which faculty members in Jordanian universities acknowledge the reasons behind intellectual extremism.
Through reviewing the aforementioned studies, the researcher was able to develop the questionnaire and choose the most effective approach for meeting the goals. She adopted a descriptive approach.
Contrary to the aforementioned studies, the present study aimed to explore the role of the University of Jordan in fighting against intellectual and religious extremism among its students, from the perspective of the members of the councils of the departments of the educational sciences faculty at the University of Jordan. It offers new knowledge in this regard and also offers suggestions for activating the role of the University of Jordan in fighting against such extremism.
Methodology
The study's approach: The researcher adopted a descriptive approach in order to explore the role of the University of Jordan in fighting against intellectual and religious extremism among its students.
Population
The population consists of all the members of the councils of the departments of the educational sciences faculty at the University of Jordan (i.e. 133 members).
Sample
The researcher selected a sample consisting of 73 members of the councils of the departments of the educational sciences faculty at the University of Jordan.
Instrument
The researcher developed the study's questionnaire based on the relevant literature, such as the studies conducted by Rezeq (2006), Al-Asali (2010), Al-Khaza'leh (2018), and Al-Eslaihat (2018), and based on the study's goals. The questionnaire sheds light on the role of the University of Jordan in fighting against intellectual and religious extremism among its students. It consists of 34 items and sheds light on two areas: intellectual extremism and religious extremism.
A five-point Likert scale was adopted to explore the role of the University of Jordan in fighting against intellectual and religious extremism among its students. It consists of the following rating categories: to a very great extent, to a great extent, to a moderate extent, to a little extent, and to a very little extent. Those categories stand for the following scores respectively: 5, 4, 3, 2 and 1.
In order to set criteria for classifying means, the following equation was used:
(The highest score of the scale - the minimum score of the scale) / the number of the required criteria = the interval
The interval = (5 - 1) / 3 = 1.33
Thus, the following criteria were used for classifying means:
2.33 or less: Low
2.34-3.67: Moderate
3.68 or greater: High
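For illustration only, the classification rule above can be expressed as a short Python sketch; the function name and example value below are hypothetical, assuming the five-point scale and the three equal-width categories described above.

def classify_mean(mean, lowest=1.0, highest=5.0, n_categories=3):
    # Interval width: (5 - 1) / 3 = 1.33, as in the equation above.
    interval = (highest - lowest) / n_categories
    if mean <= lowest + interval:        # 2.33 or less
        return "Low"
    elif mean <= lowest + 2 * interval:  # 2.34-3.67
        return "Moderate"
    else:                                # 3.68 or greater
        return "High"

# Example: a mean of 2.91 (the overall mean reported below) falls in the moderate band.
print(classify_mean(2.91))  # prints "Moderate"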
The instrument's validity:
To check the instrument's validity, the initial version of the questionnaire was passed to 10 experts. Those experts work as faculty members and were selected from several Jordanian public universities. They were asked to assess the questionnaire in terms of its ability to measure what it is intended to measure, the relevancy of the items to the relevant area, and clarity. After making changes, the final version of the questionnaire consists of 30 items: 17 items shed light on intellectual extremism and 13 items shed light on religious extremism.
The instrument's reliability:
The instrument's reliability was measured through calculating the Pearson correlation coefficient values. The overall Pearson correlation coefficient value is 0.94; the value for intellectual extremism is 0.98 and the value for religious extremism is 0.91. In addition, the researcher calculated the Cronbach alpha coefficient values to measure the instrument's reliability. The overall Cronbach alpha coefficient value is 0.96; the value for intellectual extremism is 0.85 and the value for religious extremism is 0.90. Table (1) presents the relevant values.
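As a rough illustration of how the latter reliability coefficient is computed, the following is a minimal Python sketch of Cronbach's alpha; the item-response matrix shown is hypothetical and does not reproduce the study's data.

import numpy as np

def cronbach_alpha(responses):
    # responses: an n_respondents x k_items matrix of Likert scores.
    responses = np.asarray(responses, dtype=float)
    k = responses.shape[1]
    sum_of_item_variances = responses.var(axis=0, ddof=1).sum()
    variance_of_totals = responses.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_of_item_variances / variance_of_totals)

# Hypothetical responses from five respondents to four items on the 1-5 scale.
sample = [[4, 3, 4, 5], [2, 2, 3, 2], [5, 4, 4, 5], [3, 3, 2, 3], [4, 5, 4, 4]]
print(round(cronbach_alpha(sample), 2))  # prints the alpha for the hypothetical data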
Results and discussion
Q.1 What is the role of the University of Jordan in fighting against intellectual and religious extremism among its students from the perspective of the members of the councils of the departments of the educational sciences faculty at the University of Jordan?
To answer the first question, means and standard deviations were calculated for each area. They are shown below:
First: The role of the University of Jordan in fighting against intellectual extremism among its students
The results related to the role of the University of Jordan in fighting against intellectual extremism among its students are presented in table (2). As shown in table (2), the University of Jordan plays a moderate role in fighting against intellectual extremism among its students. The overall mean is 2.91 and the standard deviation is 0.73. The means are within the range of (3.43-2.24). The mean of statement (1) is 3.43, which is moderate and ranked first. The latter statement states the following: (The University of Jordan seeks promoting a culture of respect for the ones having different opinions among students). The mean of statement (17) is 2.24, which is low and ranked last. The latter statement states the following: (The University of Jordan holds brainstorming sessions for addressing the problems related to extremism).
Second: The role of the University of Jordan in fighting against religious extremism among its students
The results related to the role of the University of Jordan in fighting against religious extremism among its students are presented in table (3). As shown in table (3), the University of Jordan plays a moderate role in fighting against religious extremism among its students. The overall mean is 2.91. The means are within the range of (3.60-2.32). The mean of statement (1) is 3.60, which is moderate and ranked first. The latter statement states the following: (The faculty members at the University of Jordan adopt moderate ideas when dealing with others). The mean of statement (13) is 2.32, which is low and ranked last. The latter statement states the following: (The University of Jordan seeks promoting knowledge about the political goals of extremist groups through holding symposiums).
Regarding the answer to the first question, the University of Jordan plays a moderate role in fighting against intellectual and religious extremism among its students, as concluded from the perspective of the members of the councils of the departments of the educational sciences faculty at the University of Jordan; the mean of intellectual extremism is 2.91 and the mean of religious extremism is 2.91. The latter result may be attributed to the fact that there isn't any clear plan or model in the University of Jordan for fighting against intellectual and religious extremism. It may be attributed to the fact that the management of the University of Jordan doesn't exert much effort, such as holding training workshops and symposiums, to improve the capability of faculty members to guide students and improve their way of thinking. It may also be attributed to the fact that the management of the latter university doesn't engage in dialogue with students and faculty members to explore the students' attitudes and way of thinking.
Conclusion
It was found that the University of Jordan plays a moderate role in fighting against intellectual and religious extremism among its students; the overall mean in terms of its role in fighting against intellectual extremism is 2.91, and the overall mean in terms of its role in fighting against religious extremism is 2.91.
Recommendations:
In the light of the study's results, the researcher of the present study recommends:
-Activating the role of Jordanian universities in fighting against all types of extremism.
-Conducting similar studies targeting Jordanian private and public universities to compare their results with the results of the present study.
-Holding training courses and workshops for carrying out brainstorming activities that contribute to solving problems related to extremism. | 2021-08-25T17:47:11.072Z | 2021-02-01T00:00:00.000 | {
"year": 2021,
"sha1": "b5bb25e75061a0ee2da890b4881f4f6b0736828c",
"oa_license": "CCBY",
"oa_url": "https://iiste.org/Journals/index.php/JEP/article/download/55727/57552",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "7b9b729b31d5d57126a8ed811dded824f22d4852",
"s2fieldsofstudy": [
"Education",
"Political Science"
],
"extfieldsofstudy": []
} |
240543420 | pes2o/s2orc | v3-fos-license | Race, Racism, and the Hebrew Bible: The Case of the Queen of Sheba
The Queen of Sheba, best known for visiting Solomon at the height of his rule, is commonly understood to be one of the most famous Black queens of the Bible. However, biblical texts record nothing of her family or people, any physical characteristics, nor where, precisely, Sheba is located. How did this association between the Queen of Sheba and Blackness become naturalized? This article answers this question by mapping three first-millennium textual moments that racialize the Queen of Sheba through attention to geography, skin color, and lineage in the writings of Origen of Alexandria, Flavius Josephus, and Abu Ja'afar al-Tabari. These themes are transformed in the Ethiopic text the Kebra Nagast, which positively claims the Queen of Sheba as an African monarch in contrast to the Othering that is prominent in earlier texts. The Kebra Nagast has a complex afterlife, one which acts as the ground for the similarly complex modern reception of the character of the Queen of Sheba.
the late antique and medieval sources laid the groundwork for modern understandings of the Queen. I argue that texts that discuss the geographic location whence the Queen of Sheba came (whether Ethiopia, Egypt, or Yemen), her skin color, and her lineage are utilizing strategies of race-making to lay claim to the Solomonic past. It is not the goal of this paper to suggest that the Queen of Sheba is not Black. Absence of (biblical) evidence is not evidence of an absence (of Blackness, in this case). It might be tempting to reduce the disconnect between biblical reticence and modern assertiveness to some moment of invention between now and then, but to do so would belie the complexity both of race (as a mutable, culturally contingent category) and of the Queen of Sheba's reception history. There are more complex literary and social dynamics at play that offer a window into the historical process of race-making as it intersects with the reception history of the Queen of Sheba.
"Race, Racism, and the Hebrew Bible", is a timely but historically complex topic to tackle, not least because modern understandings of race and racism were not operant in the period that biblical texts were written, although other forms of race-making may have been present. 3 Despite the historical incongruity, in the modern period the Bible was used to articulate racist concepts (e.g., the belief that the "curse of Ham" is a curse of Blackness, or the Cushites were a despised Other) and etiologies of race. 4 In contrast to this approach, Black diaspora communities associated figures like Hagar with Black enslaved women in positive acts of reclamation, a dynamic that is certainly at play with respect to the Queen of Sheba. Recent scholarship has done much to de-naturalize these associations; while the understanding of Hagar or the Cushites as Black figures tells historians certain truths about the beliefs and/or lived realities of those who promulgate said views, they also come with attendant modern assumptions that can obscure the textual and historical dynamics of biblical texts. Rodney Sadler, for example, has argued that although Cush was a known African polity, there is no evidence in biblical texts that the Cushites were understood in a racialized or inherently negative manner; rather than viewed as an abject Other, they were powerful, even potentially threatening, allies to the powers in Jerusalem in the Iron Age. 5 Relatedly, Nyasha Junior has argued that the view of Hagar as a model of Black womanhood emerged in the nineteenth and twentieth centuries and was not a critical feature of earlier periods. 6 Neither of these arguments deny the existence of Black figures in the Bible, but instead they historicize some of the ways that race became a prominent feature of the reception history of the Bible. This article will contribute to this line of scholarship by historicizing the racialization of the Queen of Sheba-that is to say, by tracing the history of reception of the character that lays between the characteristically laconic scriptural sources and the positive identification of the Queen of Sheba with Blackness in modern thought. In order to articulate this history, the article will first establish the theoretical language of racialization and the framework of Premodern Critical Race Studies as explained by Margo Hendricks and Geraldine Heng. Biblical and first-millennium sources on the Queen of Sheba will be explored under three thematic foci: geographic origins, skin color, and lineage. The writing of Origen of Alexandria, Flavius Josephus, and Abu Ja'afar al-Tabari are given special attention as the earliest sources that explicitly discuss these themes. Skin color, geography, and lineage are not the only modes by which the race of the Queen of Sheba was articulated, but they appear early in our archive of materials and are transformed significantly in the final text under discussion, the Kebra Nagast. The Kebra Nagast is the single most important text for understanding the modern reception of the Queen of Sheba as Black, inasmuch as it inverts the Othering seen in earlier texts and models the positive reclamation of the Queen that is common in modern media.
Racialization and Premodern Critical Race Studies
This paper uses the term "racialization" to describe the dynamic process by which the Queen of Sheba came to be understood as Black. Nyasha Junior's Reimagining Hagar traces the links between Hagar and Blackness, and in doing so contextualizes the intersection of race and biblical studies. Junior notes that different ethnicities-a related but distinct concept-receive differentiated treatment in the Hebrew Bible, and that sources from the ancient world reflect an awareness of phenotypical differences, but those were often attributed to environmental or primordial reasons, rather than the biological reasoning that is used as a cover in modern racist discussions. 7 She notes that although black skin is often described or mentioned in ancient sources, such uses do not map onto racial categories and there is no consistency between different texts. Despite this and "despite the lack of physical description in the text, some biblical characters have become identified as Black or linked with Blackness". 8 The Queen of Sheba is one such character. Rodney Sadler offers a cogent synthesis of various theories of "race", noting that it is a political category, not one that can be traced solely to hereditary, genetic, or phenotypical features. 9 Rather, Sadler notes that a precursor to racism is "racial thought", and it is racial thought that is the object of his investigation. Through a chronological study of Hebrew writing from the Iron Age through the Rabbinic period, Sadler argues that biblical writings do not reflect racial thought, which is to say that they do not assume an essential and inherent link between, e.g., negative behavioral patterns, somatic features, group ontological differences, and legitimating ideology. Although Sadler's monograph does not claim to be a definitive statement about racial thought in all forms of biblical literature, his work nevertheless suggests that biblical texts do not straightforwardly reflect racial thought, and that racial associations with biblical figures emerge outside of biblical texts.
Sadler's work on the Cushites, and his persuasive argument that we do not see evidence of racial thought towards this group, as well as Junior's discussion of the process by which Hagar came to be associated with Blackness, together open up space for us to consider diachronically how race became such a significant feature to popular understanding of the Queen of Sheba, in what Margo Hendricks has called a "structuring process" of race-making visible in some premodern materials. 10 Hendricks' articulation of premodern critical race studies undergirds this article: she argues that race is not a one-time event or state of being, which we can divide into "before" race and "after" the concept gained traction. Rather, Hendricks invites scholars to identify key moments and processes by which we can better understand the structuring process of race-making-i.e., the dynamic means by which race or racial associations emerged and garnered cultural currency.
Heng's The Invention of Race in the European Middle Ages usefully distinguishes between the multiple locations of race in the premodern world: epidermal race, which indexes race by skin color and bodily features, but also cartographic race, the result of "marking differences of place through the insertion of distinctive objects, narratives, and peoples that it locates into place as stakeholders for the meaning of a site". 11 One of her crucial insights is that biological or somatic understandings of race have dominated discussions of race in the premodern world, and in order to counter this tendency, she "fan[s] out attention to how religion, the state, economic interests, colonization, war, and international contests for hegemony, among other determinants, have materialized race and configured racial attitudes, behavior, and phenomena across the centuries". 12 Utilizing Heng's methodological insights, this paper will focus on the elements of cartographic race, epidermal race, and discussions of the lineage of the Queen of Sheba in order to draw out multiple ways that racial attitudes about the Queen of Sheba have been articulated. This history has cumulatively become the ground upon which modern understandings of the Queen of Sheba as Black rest. 13
2. Geography: Where Is Sheba?
Sheba in the Hebrew Bible
Strong's Concordance lists 17 mentions of Sheba (a region, rather than a character) in the Hebrew Bible, eight in Kings and Chronicles in reference to the Queen of Sheba, with nine other references scattered across Ezekiel, Jeremiah, Isaiah, Psalms, and Job. These include Job 1:15, which states that raiders from Sheba enslaved the people of Job's household, as well as Psalm 72:10, which says kings from Sheba and Saba would offer tribute (מַלְכֵי שְׁבָא וּסְבָא אֶשְׁכָּר יַקְרִיבוּ) to Israel. The references never articulate the geographic proximity (or lack thereof) of Sheba, nor is it associated with anything beyond material wealth, whether in the form of the wealth of the Queen of Sheba of Kings and Chronicles, the gold of the tribute mentioned in Psalms, or the raiders described in Job. This is not unusual for place names in the Hebrew Bible-indeed, biblical archaeology has a long and hotly-debated history of arguments about the historical location of cities like Gath, Sodom, etc.-but it tells us, as modern scholars, that we cannot assume the location of the Queen of Sheba's kingdom. 14 At most, we know that Sheba is associated with desert trade, but it is evoked less as a known entity and more as a far-off, wealthy land, akin to mid-twentieth century American references to "Timbuktu". Much scholarship assumes that the land of Sheba is Saba, a port city on the southern Arabian peninsula in modern-day Yemen. 15 When not an independent (and relatively small) city-state, Saba was variously controlled by kings in Yemen as well as rulers from across the Red Sea in Ethiopia. 16 The later history of interpretation of the Queen of Sheba is marked by the variety of associations between Saba and various hegemonies to claim a connection between the figure who visited Solomon and later Arabian and African histories. 17 There are many reasons to make this assumption. For one, the distinctive spelling in Latin letters collapses in Hebrew; an unpointed shin could very well have been read as a sin instead, suggesting a direct linguistic overlap. Further, in many other Semitic languages, especially Arabic and Ge'ez, Saba and Sheba are spelled in exactly the same way, leading writers such as Muslim polymath al-Tabari to assert that the Queen of Sheba was from Yemen. Because of the philological reasons as well as longstanding traditions, modern scholarship often naturalizes the assumption that biblical Sheba is the historical Saba.
There is some evidence that more caution is warranted before fully endorsing such a position. As noted at the opening of this section, at least one biblical writer portrayed Sheba as a location distinct from Saba, as Psalm 72:10 refers to kings of Sheba and Saba (שְׁבָא וּסְבָא) as two poetically parallel but separate locations. It very well may have been that other biblical authors-such as the authors of Kings and Chronicles-considered themselves to be describing what we know today as Saba. However, the connection cannot be assumed. I suggest here an epistemologically cautious approach: it is entirely possible early writers, editors, redactors, tradents, and communities understood the references to Sheba to be references to the city-state Saba, of which we have significant archaeological evidence. However, we also know that not all biblical writers considered Sheba and Saba to be the same. Holding space for this uncertainty allows us to understand that later associations between the Queen of Sheba and specific locations do not evoke or occlude an obvious or natural connection, but rather make specific, historicizable claims which we can interrogate to better understand the process by which the Queen of Sheba's Blackness became obvious (at least, to modern eyes).
The Queen of Egypt, Ethiopia, and/or Yemen
In his Antiquities of the Jews, Josephus, the late first-century Jewish historian, describes the Queen of Sheba as "The Queen of Ethiopia and Egypt". Here, Josephus is translating "Saba" as "Ethiopia", which mirrors the Septuagintal translation of Cush (and Seba, with which it was associated) into "Ethiopia". 18 Other than this, Josephus' retelling of the story of the Queen of Sheba's visit does not depart significantly from either the Masoretic or Septuagintal versions of the narrative. With characteristic expansion into the interior state of the Queen and her emotional reactions to Solomon, Josephus follows the early narrative in plot points and detail. Here we see the work of cartographic race: Josephus inserts a distinctive (biblical) narrative into a place in order to generate meaning of the location of Egypt and Ethiopia. Josephus is the first writer in our archive of materials about the Queen of Sheba to introduce a clear connection between the Queen of Sheba and Africa. This is picked up by Origen of Alexandria, who uses Josephus' identification of her as a queen of Egypt and Ethiopia as justification for understanding her as the Black beloved of the Song of Songs.
Skin Color
Skin color is not an especially useful index of racial thought. As Junior notes, the identification of certain groups with certain physical characteristics, especially skin tone, shifts over time in culturally contingent ways. 19 Heng has a sharper critique: "color as the paramount signifier of race-the privileged site of race-is too commonly invoked as the deciding factor adjudicating whether racial attitudes and phenomena existed in premodernity". 20 She argues that the binary of white and black was used in theologically fruitful paradoxes in medieval literature precisely because "a signifying field has stabilized to the point that enables such play, and to the degree that allows paradox to be formed". 21 The Blackness of the Queen of Sheba was first discussed in the writings of third-century Christian exegete Origen of Alexandria, which established (part of) the symbolic ground upon which later discussions of the race of the Queen of Sheba flourished.
In the mid-third century (~260 CE), Origen wrote his Commentary on the Song of Songs in Greek, which was preserved in a Latin translation that Rufinus of Aquileia composed in the fourth century. Origen associates the Queen of Sheba with the beloved of the Song of Songs, who says that she is "Black and beautiful" in Songs 1:5. 22 He pairs this assertion with his argument for the metaphorical nature of Song of Songs triggered by verse 1:2b ("your breasts are better than wine"). Origen argues that the beloved was the Queen of Sheba, who symbolically represents the Gentile Church making a union with Solomon (i.e., Israel), and is also allegorically equivalent to the Cushite wife of Moses. Origen argues that the visit of the Queen of Sheba to Solomon as described in Kings and Chronicles must be an allegorical story, because her praise of Solomon-of his house, his food, his servants-is too ordinary; someone praising Solomon must be doing so in order to value his extraordinary spiritual position, not the daily mundanities of his household. Thus, while Origen is our earliest extant example of the association of the Queen of Sheba with Blackness, he does so by piling referents and allegorical meanings onto his reading of the Queen of Sheba (and the Song of Songs).
Heng writes that, particularly in medieval European literature, there is a distinction between "hermeneutic blackness in which exegetical considerations are paramount and often explicitly foregrounded, and physiognomic blackness linked to the characterization of black Africans in phenomena that extended beyond immediate theological exegesis. It is equally vital, of course, to recognize that distinct, and distinguishable, discourses on blackness might also at times converge and intertwine for ideological ends". 23 Indeed, Origen's exegesis of the Song of Songs represents precisely such a discursive moment, one that fuses these two concerns, the somatic and the symbolic. Because of this, it is not useful to draw too much of a contrast between historical and allegorical significance, as for Origen these were complementary epistemological modes, but it is notable that in the moment the historical claim of the Queen of Sheba's Blackness was made, it was already explicitly interwoven with broader symbolic significance rather than proffered as a link to a specific racial group.
Although the Blackness of the beloved does not function in Origen's third-century context the way it does in our contemporary world, it does mark a significant moment in the history of reception of this character, one which is most often associated with the Blackness of the Queen of Sheba. Origen was a hugely influential figure who inspired "Origenist" Christians in the centuries after his death; these Christians were condemned as heretics and participated in what has been called the "Origenist Crises" of the late fourth and sixth centuries. 24 In the wake of these controversies, Origen was not held up as a Church Father or mainstream thinker. Despite the complex afterlife of Origen's writings, this moment in the Commentary on the Song of Songs is often used to exemplify the early association between the Queen of Sheba and Blackness. 25 This may be a case where a particular textual moment seems significant in retrospect more than a reflection of Origen's actual influence, but it speaks to the fact that the Queen of Sheba's foreign status was linked to particular, differentiated physical attributes at a relatively early period, which was picked up in new and creative ways in the late antique and medieval world.
Lineage
In this final thematic section, I will depart slightly from Heng's framework, which has proved so useful thus far. In the Introduction to The Invention of Race, Heng notes several times that genealogy is far less important to premodern discourses about race than has often been assumed by scholars of antiquity and the European Middle Ages. 26 However, discussions of the lineage of the Queen of Sheba do not necessarily foreshadow the modern preoccupation with biological race, but rather work in tandem with cartographic race to emplace her and her ancestors (or descendants) and delineate her distinctiveness from Solomon.
The most important first-millennium commentator who discusses the Queen of Sheba's ancestry is Abu Ja'afar al-Tabari, who discusses the Queen of Sheba in his tafsir (commentary) on the Qur'an as well as in his tarikh, a universal history of the world from creation until the Abbasid period. 27 Al-Tabari's tafsir contains longer statements, with detailed chains of transmission attesting to various details of the life of the Queen; al-Tabari's tarikh is more condensed, with Qur'anic material re-ordered to form a tighter narrative and much abridged chains of transmission. 28 In both, al-Tabari asserts that the Queen of Sheba came from Yemen. 29 Al-Tabari cites others who say that the Queen of Sheba was part jinn, although he never asserts this himself. 30 When the Queen of Sheba visits Solomon, according to al-Tabari, the jinn under Solomon's control became afraid that they would have a child together. They feared this child because he might rule them eternally, unlike Solomon, whose control was limited to the span of his lifetime. 31 The jinn suggested that the Queen of Sheba had donkey legs underneath her skirt in order to dissuade Solomon from having a romantic interest in her. 32 In order to verify this rumor, Solomon has his supernatural servants set up the glass floor so that he could see what the Queen of Sheba's legs looked like. 33 Thus, a certain preoccupation with the presumed monstrousness of the Queen of Sheba's body is closely intertwined with a particular understanding of her genealogy as a part-jinn, part-human individual. This is closely connected with other inappropriate aspects of the Queen of Sheba's person: she comes to test Solomon with riddles and asks him about the color of God, a question which is so out of bounds that Solomon faints in response. It is at this point when the rumors of the Queen's legs are introduced, suggesting that her inappropriate curiosity is closely linked to her inappropriate body.
It is notable that the Queen of Sheba is not associated with Africa in the writings of al-Tabari but rather with Yemen, although, of course, Yemen is a short hop from the Horn of Africa across the Red Sea and Gulf of Aden and was at times controlled by Ethiopian polities. Al-Tabari's ninth-century moment in the history of interpretation of the Queen of Sheba marks a tendency, picked up by later writers, to associate the Queen of Sheba's Othered body with her lineage (inasmuch as the jinn assert that the Queen's demonic lineage caused her to have donkey legs). One intriguing aspect of this interest in her lineage is the fact that it stands in some contrast with the family ties that most interest the authors of the Kebra Nagast; where the Kebra Nagast dwells extensively on the children and descendants of the Queen of Sheba, it never discusses her parents or ancestors. In contrast, al-Tabari and other Muslim interlocutors explore, albeit briefly, her non-human ancestors, and give little if any attention to her descendants (except inasmuch as they might threaten the jinn). This concern with lineage is not in and of itself an example of racial thought, but it is a previously-unseen thematic interest in the Queen of Sheba that came to have enormous influence on later interpretations of her character.
The Kebra Nagast
The Kebra Nagast (the "Glory of the Kings") positively identifies the Queen of Sheba, there named Makeda, with the community of the compilers of the text; in other words, it claims her as ours in a way that was different from the narratives of the Queen of Sheba that came before and much of what came after. It is the single most important text for understanding the positive identification of the Queen of Sheba with Blackness in the modern world. It was written in order to claim the Solomonic past through the Queen of Sheba, claiming the two biblical monarchs as the ancestors of the Solomonic dynasty, which ruled Ethiopia between the thirteenth and the twentieth centuries.
The book, the longest premodern engagement with the Queen of Sheba, is a compilation of a number of sources that tells a selective history of Ethiopia from the period of the biblical patriarchs. The Kebra Nagast collates earlier traditions and builds on biblical frameworks, including, of course, the biblical texts of 1 Kings 10:1-13 and 2 Chronicles 9:1-12, which detail the visit of the Queen of Sheba to Solomon's court and the resulting son who, years later, took the Ark of the Covenant back to Ethiopia.
The Kebra Nagast reflects a wide array of Syriac, Coptic, and Arabic literary influences, detailed impressively by David A. Hubbard in his 1956 dissertation. 34 Written in Ge'ez, the liturgical language of the Ethiopic Orthodox Tawahedo Church, it was translated into German, French, and English, beginning in the nineteenth century. 35 Before that, its chapter titles and brief summaries were known in Portuguese and French literature as early as the last quarter of the sixteenth century. Although not everyone would readily recognize the title, the themes and narratives represented there have been widely adapted in the Western world. 36 The Ethiopic text was, according to a colophon found in many early manuscripts, translated from Arabic in the first half of the fourteenth century CE, which in turn was a translation of an earlier Coptic text. Scholars such as Gizachew Tiruneh and Muriel Debié have argued that some form of the text may have existed as early as the sixth century CE, because that is the date of the latest "king" whose glory is spoken of in the text. 37 Recent work by Wendy Belcher and Stuart Munro-Hay suggests that the Kebra Nagast that we have is a snapshot of a dynamic Ethiopian tradition, 38 but the Ethiopic version we have now dates itself to the thirteenth century, which suggests that it is best to consider it a culturally contingent creation that reflects earlier traditions such as the first-millennium sources already discussed.
The account of the relationship between the Queen of Sheba and Solomon in Kebra Nagast was used to explain the reign of the Solomonic royal family of Ethiopia, which ruled the country from the thirteenth to the twentieth century. The Solomonic royal family claimed to be the true inheritors of the Aksumite Empire, which ruled the Ethiopian highlands and various environs from the first century BCE to the ninth century CE. The Solomonic family made this claim to distinguish itself from the Zagwe dynasty, which ruled Ethiopia from the tenth to the thirteenth century, between the Aksumite and Solomonic periods. Complicating the claims to Aksumite heritage is the relative paucity of evidence from Ethiopia before the Middle Ages. As Aaron Butts has noted, our evidence of Aksumite rule-while certainly more substantial than evidence of the Zagwe dynasty-is relatively thin on the ground; we have coinage, monumental stone thrones, and archaeological architectural evidence, but relatively little writing or other textual evidence that might help us to understand Aksumite Christian self-conception of the relationship between Solomon and Ethiopia. 39 The Kebra Nagast, uniquely, presents the Queen of Sheba as a shrewd politician, moral exemplar, and native queen to the community for whom the text was written, a distinct departure from the foreign status that marks her appearance in the Hebrew Bible; Christian Gospels; and early Jewish, Christian, and Muslim accounts, although, as Luis Salés points out, the text is marked by an androcentric perspective that ultimately disempowers the Queen over the course of the narrative. 40 The Kebra Nagast devotes some forty chapters (of over one hundred) to the Queen of Sheba, detailing her visit to Solomon, their conversation together, and the complex circumstances that led to a sexual relationship between the monarchs. Here, she is an intelligent, self-assured woman, as Belcher has highlighted: by claiming her as indigenous to Ethiopia, and moreover asserting that the son produced of their union inherited the Ark of the Covenant and Solomon's blessed status, the Kebra Nagast offers a relatively positive vision of the Queen of Sheba that stands in some contrast to earlier and contemporaneous Jewish, Muslim, and Latinate Christian traditions that suggest she is monstrous or demonic. 41 In the Kebra Nagast, the Queen of Sheba (Makeda) is a wise queen. She learns of the wisdom of Solomon from Tamrin, a local merchant who had traveled to Jerusalem. She had worshipped the sun but is persuaded to worship the God of Israel because of what she learns of Solomon and Israel. She visits Solomon and they have an extended philosophical discussion; on the last night of her visit, he tricks her into having sexual intercourse with him and gives her a ring to give to their child to identify themselves. The Queen of Sheba has a son, Menelik I, who eventually comes to visit his father. He is feted and beloved in Jerusalem, and when he decides to return to his mother's kingdom to rule, many of the sons of Jerusalem's elites were sent with him. This group of young men takes the Ark of the Covenant from the Temple and successfully returns home with it. This text is the basis for the widespread belief amongst adherents of the Ethiopic Tawahedo Orthodox Church that the Ark of the Covenant is held in a church in Aksum. 42 The Ethiopic text does not describe the physical appearance of the Queen of Sheba in terms of her Blackness, nor does it concern itself with her ancestors.
Instead, there is a repeated concern with her descendants. The birth of Menelik I is the beginning of the Solomonic dynasty in Ethiopia, according to the text, and in this we can clearly see an interest in reproduction and social status within Ethiopia. Intriguingly, Ethiopians are described as Black in the Kebra Nagast, but only by outsiders. One of Menelik's attendants from Jerusalem, Azariah, teaches the Ethiopians how to obey the laws of Israel, and in so doing mentions their Black faces in passing, to contrast with the lightness of their hearts under the laws of the God of Israel. The only other time the Blackness of Ethiopians is mentioned is in chapter 64, when Pharaoh's daughter who seduced Solomon into worshipping idols (cf. 1 Kings 11) describes Menelik as of a foreign people and color-Black-in order to emphasize how lost the Tabernacle is to Solomon, to persuade him to turn to new gods. This argument is not persuasive to Solomon, for he asserts that the Egyptians and the Ethiopians are both descendants of Ham, and therefore the pharaonic princess is as foreign to Solomon as Makeda and Menelik. These discussions show that the Ethiopian translators were aware that Israelites and Ethiopians were understood to be distinct epidermically, to utilize vocabulary from Heng, but also suggest that they understood themselves to be closely related to Egyptians and did not view these differences as a source of inferiority. Indeed, the fact that these statements are placed in the mouths of non-Ethiopians underscores that the skin color of Ethiopians generally was only notable to non-Ethiopians.
The Kebra Nagast was extremely influential in Ethiopia and the global African Diaspora, and also in early modern and modern Europe and America. Belcher's upcoming project on the Queen of Sheba and the global influence of the Kebra Nagast will offer a fuller picture than can be surmised here, but consider two nineteenth-century examples. 43 Karl Goldmark's 1875 German opera, Die Königin von Saba, centers on an invented love triangle between the Queen of Sheba, an ambassador of Solomon's court, and the ambassador's beloved. In the opera, the Queen of Sheba is a seductive, beautiful figure with whom Solomon's advisor, Assad, falls in love, going so far as to blaspheme against God in his praise of her, ruining his wedding day. There, the Queen of Sheba's desirability is a major feature of her character, even more than her wisdom or wealth. Similarly, the 1862 French opera, La reine de Saba, by Charles Gounod, inspired by the poetry of Gérard de Nerval, presents the Queen of Sheba as a beautiful figure who breaks up an engagement, although this time, it is her own engagement to Solomon broken through her inconvenient love for Solomon's court sculptor. In these, in the surviving stills from the now-lost Sheba, starring Betty Blythe, and in Neil Gaiman's 2001 American Gods, we can see several examples of the romantic and sexual potential of the Queen of Sheba, realized as a celebration of her wisdom and power in the medieval Kebra Nagast, flattened in modern European and American imagination. These are familiar Orientalist sexualized fantasies that have particular cultural currency because of the racialization of the Queen of Sheba. We see aspects of modern racial discourses, particularly the sexualization of Black women and children, lending a distinct cadence to narratives of the Queen of Sheba that are relevantly similar to but distinct from the Kebra Nagast. 44 Despite this European and American treatment as a Black woman, the claim that the Queen of Sheba is the ancestor of the Solomonic royal house does not necessarily mean that the text makes the claim that she is Black. Indeed, many Ethiopians understand themselves to be Habesha rather than Black. 45 Moreover, Ethiopia is home to many different ethnic groups, including the Oromo and Amhara, who see themselves as categorically distinct from one another. Haile Selassie, the last emperor of the Solomonic House of Ethiopia and messianic figure of Rastafarianism, has been accused of acts of ethnic cleansing against the Oromo people. 46 Despite the importance of Selassie and the Queen of Sheba to many diasporic African communities, then, in some ways to read her as a Black figure generally is to undermine the specificity of the claims made about her lineage, which are made to justify the dominance of the Solomonic house over other Ethiopians.
While portions of the Kebra Nagast speak to late antique concerns (particularly the focus on King Ezana, a sixth-century figure, as has been noted by Debié), our evidence extends to the early Solomonic period, in the thirteenth century at the earliest. 47 More significant than even this complex web of evidence, however, is the fact that the earliest manuscripts of the Kebra Nagast have a colophon that notes it was translated from Arabic in the thirteenth century. 48 These colophons also assert that the Arabic was in turn a translation from earlier Coptic texts. This translational matrix suggests that the Kebra Nagast is best understood not only as an example of Ethiopic scribes understanding their local history through the universalized figure of Solomon, but also as a result of international, multilingual contact and exchange of ideas between different groups. The Kebra Nagast is essential for understanding the racialization of the Queen of Sheba in the modern world, and the power of this text is best understood in light of the transformation of earlier accounts of the Queen of Sheba, where she is Othered and racialized to various degrees.
Conclusions
What does historicizing the racial dynamics of the history of interpretation of the Queen of Sheba do? Kimberly Anne Coles and Dorothy Kim write in a forthcoming volume: "Race is a strategy. Each time that we examine the strategies that naturalize structures of power, we better understand the strategies themselves-and how these polemics serve specific interests". 49 Tracing modern perceptions of the Queen of Sheba back to the laconic early sources that first discuss her offers a lens to consider some of the complex dynamics of biblical reception history, which, following Gadamer, is often understood in terms of "filling in the gaps" of a limited frame of material. This model can be helpful but does not, in my view, account for the drastic differences between different iterations of the story of her visit to Solomon, which include divergent motifs, themes, secondary characters, literary genres, etc. I have, in the preceding pages, attempted to sketch where and why certain motifs about the Queen of Sheba emerged in our record of materials, which cumulatively lay the groundwork for a naturalized relationship between the Queen of Sheba and Blackness. By tracing the lines of tradition by which the Queen of Sheba came to inhabit the complex position she has today, this argument underscores the contingent, fraught history of the racialization of this particular figure, a contingency not dissimilar to the process by which race became an operant category in the modern world. This is to say that the insidious effects of race and racism notwithstanding, they are also social constructs with a history, one we can learn in the hopes of deconstructing and hopefully undermining their pernicious effects.
The preceding argument is based not on all or even most references to the Queen of Sheba in Jewish, Muslim, and Christian history, but rather on the most important elements of our remaining evidence. The distinct visions of the Queen of Sheba portrayed in the Kebra Nagast, al-Tabari's Tarikh, Origen's Commentary on the Song of Songs, and Josephus' Antiquities, each of which displays important innovations in the reception of the figure, show how a character from brief scriptural passages came to be understood as one of the most important Black figures from Israel's past. Further study might explore how both the modern sexualization of the Queen of Sheba and her status as a venerable ancestor are historically intertwined with her Blackness, or how the not-infrequent association with animal legs in ninth-century and later texts functioned as another trajectory of racialization. The production of the racial identity of the Queen of Sheba has a rich history. It is, among other things, the history by which her genealogy, bodily features, and association with Africa come into our archive of materials about the Queen of Sheba. The Kebra Nagast is not a singular moment of invention - it is not the point at which the Queen of Sheba became Black - but it is a singularly important moment in the history of reception of the Queen of Sheba. Under the constraints of space and evidence, I have highlighted the most important early ambiguities and historically contingent claims made about the person of the Queen of Sheba, showing how later interpreters - from medieval Christian writers to modern Hollywood depictions - rely on the often-contradictory earlier bodies of tradition that serve as a ground for a rich field of possibilities about the Queen of Sheba.
In a general sense, the case of the Queen of Sheba underscores the point made by Edward Said in Beginnings: the originary moment of a tradition offers far less heuristic value than do the events and moments that connect present traditions to the past. 50 Said argues that origins are divine, rhetorically useful moments highlighted as part of an ideological program to construct or attribute a particular character to the history told. Origins are moments of rupture, unlike what comes before or after. Rather than origins, Said argues that scholars should concern themselves with beginnings, which precede a middle and an end of a story and are definitionally and inherently tied up with what comes afterwards. The Queen of Sheba amply demonstrates the value of this argument, inasmuch as the biblical origins of the Queen do very little to explain the later history of reception of the figure, including and especially the racialization of the Queen.
2 This is not always the case, as characters such as Saul, David, and Absalom are described as beautiful, with some attention to their bodies; Solomon and the Queen of Sheba, however, are not given this treatment within the context of Kings or Chronicles.
3 The period at which concepts of race and racism become operant is a complex, debated issue; Geraldine Heng provocatively argued that race was invented in Medieval Europe in her important 2018 volume on the topic. Scholars such as Sarah Pearce have argued that the rhetoric of the title downplays the history of racialized thinking that is visible in earlier non-European (particularly Islamicate Jewish and Arabic) texts; see (Pearce 2020).
4 See discussions in (Goldenberg 2003; Haynes 2002). The intersection of the Hamitic Hypothesis with modern depictions of biblical scenes has been fruitfully explored in (Reed 2021). Junior's work is in conversation with a significant and growing body of scholarship that has explored the reception of biblical figures in African American communities, including (Wimbush 2000; Weems 2003, pp. 19-32; Smith 2017). (Heng 2018, p. 33). (Heng 2018, pp. 181-82).
13 This is not to say that those who understand or represent the Queen of Sheba as Black are aware of every earlier iteration, but rather that the existence of multiple overlapping-but-distinct discourses of the race of the Queen of Sheba acts as the ground upon which later iterations of the character are built.
14 Pioske (2018, pp. 85-133) offers a useful overview of the debates surrounding the location of cities such as Gath.
15 Examples of this tendency include Jamme (2003, pp. 450-51), whose entry in the New Catholic Encyclopedia is labeled "Saba (Sheba)", and (Bienkowski and Millard 2000, p. 266), who similarly conflate the two names in the Dictionary of the Ancient Near East. Bryce (2009) likewise labels a map in the Routledge Handbook of People and Places of Ancient Western Asia as "Saba (biblical Sheba)". Note that these examples include a dictionary, an encyclopedia, and a handbook, so the inclination to flatten the potential differences is naturalized in several volumes that serve as introductory texts. (Bowersock 2013; Hatke 2013; Schippmann 2001).
18 Josephus, Antiquities 8:165-75. On the translational relationship between Seba, Cush, and Ethiopia, see (Sadler 2009, pp. 29-30). Sadler notes, following Müller, that the difference between Sheba and Seba may be a mere dialectical difference, but I would argue that this possibility does not undermine the fundamental ambiguity of the location of Sheba.
22 This phrase has also been translated as "I am Black but beautiful", understanding the Hebrew vav as oppositional rather than conjunctive, but this choice speaks more to the assumptions of later translators than to any inherent meaning of the text. See (Lowe 2012, pp. 544-55). (Heng 2018, p. 185).
24 For an overview of the Origenist crises, see (Clark 1992).
28 For more on Ibn Abbas in Tabari and ibn Kathir, see (Jaffer 2007).
29 Al-Hamdani (d. 945), a fellow ninth- and tenth-century writer most famous for his geographic and historical account of Yemen, also repeats the claim that Bilqis was a member of the Yemeni royal family. See (Al-Hamdani 2004, pp. 132, 136, and 152). Al-Tha'labi (d. 1035), an eleventh-century Muslim writer, picks up on this tradition from al-Tabari and asserts that she was the queen of Yemen, part jinn and part human. Discussed in (Lassner 1993, pp. 51-52). (Al-Tabari 1960, scts. 582-83).
32 This particular tradition is picked up in some later European depictions of the Queen of Sheba, which suggest she had donkey or goose legs; see the discussion in (Baert 2004, pp. 289-349). Intriguingly, in several of the narratives in which she is described as having animal legs, she is also associated with the Gentile Church that comes to Israel, complicating what might otherwise seem to be a straightforward example of racialization. These traditions did not have a discernible impact on the Kebra Nagast, which offers no suggestion that the Queen of Sheba might be anything other than an exemplary human.
35 The history of this reception is first discussed in English in Budge ([1922] 2000).
36 Perhaps the most famous twentieth-century example is the Spielberg film Indiana Jones and the Raiders of the Lost Ark. The claim that the Ark of the Covenant was removed from Jerusalem and taken elsewhere has a multifaceted afterlife. For the broad influence of the Kebra Nagast, see (Belcher 2009, pp. 441-59).
40 (Salés 2020, pp. 1-2). Salés crucially integrates gender studies and a feminist perspective into his literary reading of the text, arguing that Pauline "androprimacy" undergirds the presentation of the Queen of Sheba. | 2021-10-19T15:43:10.222Z | 2021-09-23T00:00:00.000 | {
"year": 2021,
"sha1": "36722333c749997d5e0f549c59a7f6611a7400e4",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-1444/12/10/795/pdf?version=1632407366",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "3e6b91dfdab64835fd29b4b8f10b1bea91ed4bf5",
"s2fieldsofstudy": [
"History"
],
"extfieldsofstudy": []
} |
229396535 | pes2o/s2orc | v3-fos-license | Urban Heat Island in Moscow at different heights, depths and on the surface
The urban heat island (UHI) in Moscow was for the first time studied not only at the ground air level, but also at different heights, depths and on the surface using stationary, radiosonde and satellite data. The long-term dynamics of the UHI intensity in the ground air layer has been estimated since the end of the 19th century both as the traditional 'maximal intensity' (the difference between the city centre and the rural zone) and as the 'average intensity' (the difference between all urban and all rural stations). In recent years they have been 2.0 and 1.0 ºC, respectively. The quasi-stabilization of both parameters in the second half of the 20th century was probably the result of extensive city growth at that time; the new increase in the UHI intensity seems to be connected with the densification of urban development and heat sources in the last 20 years. The mean daily vertical extension of the UHI in the atmosphere is approximately 300 m. In the upper soil layer (up to 160 cm deep) the maximal UHI intensity was about 1.6-1.7 ºC half a century ago. The average UHI intensity in the field of surface temperature in recent years is 2.7 ºC.
Introduction
The urban heat island (UHI) is a well-known geographical phenomenon, first discovered in London [1]. Since then, this phenomenon has been found almost everywhere in the world except for oasis cities in dry deserts. Moscow is an excellent object for studying this phenomenon due to its simple ellipsoid shape, flat relief, the absence of sea coasts within at least 600 km of the city, and a symmetrical decrease of the urban built-up density from the city centre to its peripheries. Heat islands are usually studied in the ground air layer using thermometers installed at a height of 1.5-2.0 m above the ground. However, the density of the ground-based meteorological network is usually low. The more detailed structure of the UHI can be studied with special measurement campaigns, including the use of either temporary stations or travelling tools equipped with thermal sensors. However, these campaigns are sporadic and short in duration. Satellite radiometric measurements of the surface temperature have high spatial resolution (1 km or less), but they are only available in clear-sky conditions, which is a problem in cloudy climates. Classic reviews of UHI studies are presented in [2][3][4][5][6], etc.
Urban heat island in the ground air layer in Moscow
The study of the UHI in Moscow using stationary data (measurements by thermometers installed at weather stations at a height of 2.0 m above the ground) has been possible since 1879, when two stations in Moscow region, the Landmark Institute and Petrovsko-Razumovskoe (now the Mikhelson Observatory), began to operate simultaneously for the first time. Two versions of the UHI intensity can be used. The traditional, i.e. maximal, intensity IMAX is the difference between the air temperature T in the city centre, TC, and at one or several rural stations TR outside the city:

IMAX = TC − (1/m) Σj TR,j,   (1)

where j labels the rural stations and m is their number around the city.
Another possible parameter, suggested in [7], is the averaged UHI intensity IA. It can be used if there are several other urban stations in the urban periphery around the centre. They usually represent intermediate values of TU between the conditions of the warmest city centre and the relatively cool rural zone. So, the averaged UHI intensity is:

IA = (TC + Σi TU,i)/(n + 1) − (1/m) Σj TR,j,   (2)

where i labels the stations in the urban periphery and n is their number. The long-term dynamics of both parameters in Moscow was discussed in detail [8] for five separate periods (see Fig. 1) based on (1) and (2). The traditional maximal intensity was close to 1.0 ºC at the end of the 19th century (1887-1889); 1.2 ºC during World War I (1915-1916); 1.5-1.6 ºC both in the middle and at the end of the 20th century (1954-1955 and 1991-1997, respectively); and it increased again to 2.0 ºC in 2010-2014 (Fig. 2). Thus, this parameter in Moscow became twice as high as in the 1880s. It should be noted that there were two different stations in the centre of Moscow: the Landmark Institute until its closure in 1932 and the Balchug station since its foundation in 1946. However, both of them were located close to the Moscow Kremlin (city centre), and therefore the difference between their thermal conditions seems to be negligible. The average UHI intensity IA can be analyzed over a shorter period, since the 1940s, when many weather stations appeared in the city. It was 0.7-0.8 ºC in both the middle and late 20th century and rose to 1.0 ºC in the 2010s. Probably, the quasi-stabilization of both parameters in the second half of the 20th century was associated with the extensive growth of the city at that time. As shown in [8], the population density in the city centre has dropped significantly since the early 1960s due to the mass resettlement of inhabitants from the overpopulated city centre to its new periphery. It is likely that the new rise of the UHI in Moscow over the past two decades is due to the increase of urban density, especially in the city centre, and, in addition, the rapid growth in power consumption since the late 1990s.
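As a concrete illustration of Eqs. (1) and (2), the following minimal Python sketch computes both intensity parameters from lists of station temperatures; all station values here are illustrative placeholders, not measured data.

```python
# Minimal sketch of Eqs. (1)-(2): maximal and averaged UHI intensity.
# All station temperatures below are illustrative placeholders.

def uhi_max(t_centre, t_rural):
    """I_MAX = T_C - mean(T_R,j): city centre minus mean of rural stations."""
    return t_centre - sum(t_rural) / len(t_rural)

def uhi_avg(t_centre, t_urban, t_rural):
    """I_A: mean over all urban stations (centre + periphery) minus mean rural."""
    urban = [t_centre] + list(t_urban)
    return sum(urban) / len(urban) - sum(t_rural) / len(t_rural)

t_c = 6.9                    # city-centre station (e.g., Balchug), degC
t_u = [6.2, 6.0, 6.3, 5.9]   # urban-periphery stations, degC
t_r = [4.9, 5.0, 4.8]        # rural stations, degC

print(f"I_MAX = {uhi_max(t_c, t_r):.2f} degC")       # 2.00 degC
print(f"I_A   = {uhi_avg(t_c, t_u, t_r):.2f} degC")  # 1.36 degC
```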
Urban heat island in the lower troposphere of Moscow
Evidently, the UHI is a three-dimensional phenomenon. Its vertical extension was studied using long-term data of stationary measurements at different levels of the TV Tower (located in the Ostankino urban district of Moscow, 7 km from the city centre), a high meteorological mast in the town of Obninsk, Kaluga region (96 km south-west of the Moscow centre, in almost rural conditions), and the data of radiosondes in Dolgoprudny (a suburban town 2 km north of the borders of Moscow). The Ostankino TV Tower is 540 m high and is equipped with meteorological sensors at levels of 2, 85, 128, 201, 253, 305, 385 and 503 m. On the high (310 m) mast in Obninsk there are sensors at three levels: 2, 121 and 301 m. The aerological station in Dolgoprudny has long used Soviet and Russian MRZ radiosondes; their data on T have a spatial resolution of 100 m in the layer up to 1 km. The thermal sensor of these radiosondes was an MMT semiconductor thermistor with a time constant of 5-7 s. To compare the data from all three sources, it should be taken into account that, firstly, the climatic displacement of Obninsk according to the map of mean annual air temperature [9] is +0.3 ºC. In other words, Obninsk, due to its southern location, is on average 0.3 ºC warmer than Moscow. In addition, according to international comparisons of radiosondes in Dzhambul in 1989, the T values based on the data of MRZ radiosondes can be somewhat underestimated due to radiative cooling of the sensor surface during its ascent [10].

Table 1. Mean daily air temperature in the layer from 2 to 500-503 m above the ground according to the data of the Ostankino TV Tower, radio sounding in Dolgoprudny and the high mast in Obninsk, on average for the period 2006-2013 [11].

The profiles of T in all three locations were calculated and compared, averaged over eight years (2006 to 2013), for both 02:30 a.m. and 02:30 p.m., the times at which radiosonde ascents are carried out twice a day. As a result, the UHI was observed in the ground air layer at any time, but at night there was an elevated 'cool layer' above it [11]. This effect (the opposite difference between the city and the suburbs at night, the so-called crossover effect, when the air above 100 m in the city is cooler due to stronger surface inversions outside the city) was apparently first discovered in [12]. As can be seen from Table 1, the vertical extension of the UHI, on average per day, is ~300 m, where all three values are the same, taking into account the Obninsk climatic displacement. A slightly lower (by 0.1 ºC) value at 500 m in Dolgoprudny may be a result of sensor cooling. The same estimate of the UHI vertical extension was obtained for New York using helicopter sounding [13].
Underground urban heat island in Moscow
Besides the atmosphere, the urban heat island also exists in the upper soil layer beneath large cities. This phenomenon, the so-called 'underground urban heat island' (UUHI), is caused by various factors, such as the total heat flux from the warmer urban atmosphere into the soil; the local downward heat flux from individual buildings; the influence of underground heat sources (heating pipes, subway tunnels, etc.); the specific albedo of the surface in the city; reduced heat loss for evaporation in drier urban soil due to artificial rainfall runoff; the drainage of warm industrial wastewater; etc.
In the Russian Empire, the USSR and the Russian Federation, soil temperature has been regularly measured with mercury thermometers in the upper soil layer since the late 19th century. The results of these measurements at five urban and eight rural weather stations in Moscow region, on average for one year (1960), demonstrate a clear positive thermal anomaly in the soil at all depths up to 320 cm [14,15]; two examples are presented in Fig. 3. Unfortunately, the number of stations in Moscow region later decreased, and therefore a detailed UUHI analysis is possible only up to 1960. Both UUHI intensity parameters (IMAX and IA) were calculated according to (1) and (2). Unfortunately, at the central Balchug station the thermometer at 320 cm was absent, so the IMAX values are limited to the 160 cm range. It was found that in 1960 IMAX was 0.7 °C at 40 cm, 1.0 °C at 60 cm, 1.2 °C at 80 cm, 1.6 °C at 120 cm and 1.7 °C at 160 cm. The IA values, including the data of the Balchug station, range from 0.6 to 0.8 °C in the depth range from 20 to 160 cm. A separate calculation of IA without the Balchug data gives values from 0.4 to 0.6 °C in the deeper range from 20 to 320 cm. However, these are evidently underestimated, because only the urban periphery (without the city centre) is compared with the rural zone. The downward extension of the 'underground urban heat island' under Moscow cannot be determined from these stationary data, because it evidently extends below a depth of 320 cm.
It should be noted that the spatial field of deep soil temperature (more precisely, groundwater temperature), according to measurements in 35 deep wells [16], demonstrates a positive thermal anomaly in 'the neutral layer' at a depth of about 30 m under the city. This anomaly evidently reflects the same UUHI phenomenon. Its intensity was found by V.I. Prosenkov to be, on average, about 5 °C, although at some local points it was higher (up to 14 °C). However, the specifics of groundwater temperature should be taken into account. The annual dynamics of the difference between soil temperature in urban and rural areas, according to the data of the meteorological network, reaches a maximum in winter (from +0.9 to +1.2 °C depending on the depth, see Fig. 4) due to strong urban heating and drops to its minimum in summer (from -0.5 to +0.4 °C at different depths). The inverse (positive) sign of the UUHI intensity at the depth closest to the surface (20 cm) in July is unexpected. Probably, this can be the result of some local factors (different degrees of solar illumination of local areas of the surface, etc.).

Figure 4. Annual course of the "underground urban heat island" intensity at different depths under Moscow in 1960 [14].
The UUHI phenomenon has also been studied in German cities [17]; in Winnipeg, Canada [18]; in Ankara, Turkey [19], and in some other cities.
Surface urban heat island in Moscow
In addition to heights and depths, the UHI phenomenon can also be studied using the surface temperature TS. The "surface urban heat island" (SUHI) in Moscow was investigated as a thermal anomaly of TS based on long-term data from the Aqua and Terra satellites for the period from 2008
to 2018. Both instruments carry MODIS radiometers; the spatial resolution and accuracy of the radiometric TS measurements of the land surface are 1 km and ±1 ºC, respectively. The results of these measurements in 36 spectral bands from 0.6 to 14.4 μm are automatically converted to the brightness temperature TB using the Planck function, and then TB is converted to TS using surface emissivity data. The SUHI intensity is taken as the difference between the average TS values within the city and in the rural zone around it; evidently, it is similar to IA (2). However, there are some methodological problems, and one of them is the cloudy climate of the central part of European Russia: the average cloudiness in Moscow is 7.7 according to long-term observations at the Meteorological Observatory of Moscow University since 1954 [20]. Thus, ideal clear-sky conditions are extremely rare events in Moscow: only ~3% of the total sample of images. However, as shown in [21], satellite images may be used to analyze the SUHI even in the presence of clouds. Numerical experiments to simulate clouds were carried out in which some of the cells were cut out from an ideal clear-sky image, and the SUHI intensity was recalculated only for the remaining cells. As a result, it was found that if the total amount of clouds (i.e., the fraction of excluded cells) is less than 20% of the area of Moscow and less than 50% of the area of Moscow region, then the possible systematic error of the SUHI intensity is still relatively small (no more than ±0.2 of the intensity value). Under these conditions, the total sample of images used from both satellites was 747, covering 489 separate days over 11 years. Another important problem is the correct sizing of the outer rural zone. Three options were tested: the real boundaries of Moscow region (47,000 km², see Fig. 5); a small inscribed rectangle within these boundaries (16,000 km²); and a large circumscribed rectangle around them (95,000 km²). As a result of our calculations, it was found that the SUHI intensity does not significantly depend on the size of the outer rural zone with which the urban area is compared: the difference in intensity depending on this size is only about ±0.1 ºC if the size is more than 16,000 km² [21]. So, the circumscribed rectangle, which has the largest area, was chosen to calculate the SUHI intensity: it covers the entire territory of Moscow region and nearby areas of neighboring regions.

As one can see from Fig. 5, the overall geographical zonality is expressed in a general increase of values from northwest to southeast. Besides, a clear positive heat anomaly exists in Moscow and in its close eastern suburbs. This is confirmed by two closed isotherms around the city (+7 and +8 ºC) and, in addition, by a semi-closed isotherm (+7 ºC). Thus, the mean annual SUHI intensity appears to be close to 2.5 ºC. Another zone of high TS values (+8…+9 ºC and even slightly higher) is located in the southeastern part of the map. This can be explained mainly by the general climatic zonality and, possibly, by the additional influence of the SUHI of the city of Ryazan. The annual course of the SUHI intensity is presented in Figure 6. It is marked by a clear maximum in summer (4.3 and 4.0 ºC in June and July, respectively) and a minimum in autumn (1.0 and 1.1 ºC in October and November, respectively). In winter and spring, the values are intermediate. These seasonal differences are statistically significant even at the 0.999 confidence probability.
The main probable cause is the vegetation cycle: rich vegetation in summer causes heat loss through the transpiration of trees and grass in the rural zone and, as a result, the greatest difference between TS in the city and in the surrounding zone. On the contrary, the decay of vegetation in late autumn leads to minimal differences in TS within and outside the city. In winter, the snow cover in the rural zone is usually continuous, whereas in the city higher air temperatures lead to frequent thawed patches, so the ground is partially bare. But even in the case of continuous snow cover in the city, the snow is usually dirty and grey, i.e., it has a lower albedo compared to the rural zone. That is why the SUHI intensity in winter and in spring is not as low as in late autumn. The mean annual intensity of the SUHI in Moscow over 11 years is 2.7 ºC (it varies from -1.3 to +7.7 ºC in individual images). Extremely high values are connected with strong anticyclonic conditions, in the centres of anticyclones or on ridge axes. Extremely low and even negative values are observed, as a rule, in late autumn at the periphery of anticyclones or ridges, in zones of intense gradient flow, when the wind is very strong.
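To make the satellite procedure described above concrete, the following Python sketch computes the SUHI intensity from a gridded surface-temperature scene with a cloud mask, applying the acceptance criterion quoted earlier (at most 20% of the urban cells and 50% of the rural cells masked). All arrays are synthetic stand-ins, not MODIS data.

```python
import numpy as np

# Sketch: SUHI intensity from a gridded LST scene with cloud gaps, applying
# the cloud-cover acceptance criterion described in the text. All arrays
# below are synthetic placeholders for real MODIS-derived TS grids.

rng = np.random.default_rng(0)
lst = rng.normal(7.0, 1.0, size=(200, 300))        # surface temperature, degC
urban = np.zeros_like(lst, dtype=bool)
urban[80:120, 130:180] = True                      # toy "city" footprint
rural = ~urban
lst[urban] += 2.7                                  # impose a heat anomaly
cloud = rng.random(lst.shape) < 0.15               # 15% random cloud mask

def suhi_intensity(lst, urban, rural, cloud):
    if cloud[urban].mean() > 0.20 or cloud[rural].mean() > 0.50:
        return None                                # reject this image
    t_urb = lst[urban & ~cloud].mean()
    t_rur = lst[rural & ~cloud].mean()
    return t_urb - t_rur

print(suhi_intensity(lst, urban, rural, cloud))    # ~2.7 degC
```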
Summary of results
Thus, for the first time, the UHI intensity in a large city has been estimated at various heights and depths and at the surface using different methods. All the estimates obtained and discussed above are summarized in Fig. 7. This combined result needs some additional comments. First, the different timing and length of the averaging periods for the separate UHI estimates are a significant problem. As shown in [7,8], the general tendency of the UHI in Moscow is its strengthening in time. Thus, the oldest estimates of the UUHI intensity, obtained from measurements of soil temperature in 1960, may not be comparable with the other values. In addition, there is another issue related to the SUHI. As can be seen, the satellite radiometric measurements demonstrate the highest intensity value (2.7 ºC) among all sources. However, it should be noted that this value seems to be biased (overestimated), since satellite data are available only under a clear sky, which is connected with anticyclonic conditions (location of a site close to the anticyclone centre, a ridge axis, a saddle, or any weak-gradient baric field). These synoptic situations lead to a larger difference of T between urban and rural areas due to intensive cooling of both the surface and the ground air layer outside the city at night under a cloudless sky and calm weather (or light wind). | 2020-12-03T09:06:39.073Z | 2020-11-27T00:00:00.000 | {
"year": 2020,
"sha1": "d1785f2ee7a688b3149edf6b9602d671111eea0c",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/606/1/012030",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "7609bdb6d38ed372d109eeef9d504b21b1553471",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Geology"
]
} |
62402466 | pes2o/s2orc | v3-fos-license | Modeling and Simulation of Controller for Single Phase and Three Phase PWM Rectifiers
This paper presents the modeling of single-phase and three-phase rectifiers and a suitable controller design for the modeled rectifier circuits. The primary application of rectifiers is to derive DC power from an AC supply. Virtually all electronic devices require DC, so rectifiers are used inside the power supplies of all electronic equipment. The PWM rectifier is now becoming popular due to its low-distortion input current, unity-power-factor operation and bidirectional power flow ability, and it can offer excellent dynamic response of the dc output voltage. A three-phase PWM rectifier used together with closed-loop dc-dc converters for converting DC power from one voltage to another is much more complicated. In order to perform the above operation, the rectifier switches should be controlled properly. Hence it is necessary to design suitable controllers for single-phase and three-phase rectifiers. From the small-signal transfer function model, the controllers can be designed, and the simulated results are presented.
Introduction
The single-phase Voltage Source PWM Rectifier (VSR) is widely used in improving the quality of electric power. Recently, many studies have investigated the single-phase PWM rectifier. The single-phase PWM rectifier is becoming more and more popular due to its low-distortion input current, unity-power-factor operation and bidirectional power flow ability. As the filter capacitor required is generally small under balanced supply voltage conditions, these converters can also be expected to offer excellent dynamic response of the dc output voltage.
In general, the selection of an appropriate controller for a PWM rectifier, in consideration of stability and dynamic performance, requires good knowledge of the characteristics of the system to be controlled. Various strategies have been applied to the single-phase PWM rectifier, such as state-space averaging and circuit methods.
A three-phase PWM rectifier is often used together with closed-loop dc-dc converters as loads. The three-phase PWM rectifier can be compared with the operation of a DC-DC boost converter. A SISO model can be derived for the three-phase PWM rectifier so that it becomes comparable with the DC-DC boost converter. In this paper, a suitable controller for the small-signal transfer function model of the single-phase PWM rectifier is designed under closed-loop operation, and a SISO model and controller for closed-loop operation of the three-phase rectifier are also designed.
Boost Rectifier
The single-phase PWM rectifier based on the small-signal model is dealt with in detail. The purpose is to decrease the voltage fluctuations and increase the dynamic performance of the rectifier when the load changes suddenly 1 . The procedure for deriving the transfer functions is to make assumptions; define the state variables and write state equations for each interval of operation; average the state equations over a switching cycle; introduce perturbations in the state variables, equate ac and dc quantities, and proceed with the ac equations; take the Laplace transform; prepare the small-signal model in matrix form; and calculate the desired transfer functions. Figure 1 shows the basic equivalent circuit of the single-phase PWM rectifier. Figure 2 and Figure 3 show the modes of operation of the PWM rectifier acting as a boost rectifier. PWM rectifiers can be used as bidirectional rectifiers. Rectifier operation is performed by the diodes, while inverter operation is performed by the switches. As tabulated in Table 1, the circuit can operate in boost mode by controlling the switches T1, T2 with the help of the inductor at the source 2 . Under steady-state operation, in mode 1, when switch T2 is ON, the conduction path is given by VS-L-T2-D4. Hence the inductor is charging, since there is no connection to the load 3 . This mode of operation is similar to that of the DC-DC boost converter when its switch is ON. In mode 2, when T2 is OFF, the conduction path is VS-L-D1-load-D4. Here the inductor is in discharging mode. Moreover, the switches T3 and T4 have no effect on the operation of the boost rectifier.
Design of Converter Parameters
The inductor and capacitor play a major role in the boost rectifier, as shown in the operation above. The inductance is used for bidirectional power flow and boost operation, while the capacitance is used to maintain a constant DC output by reducing the output DC ripple 4 . Hence the design of the inductance and capacitance has a significant role in the operation. Moreover, the modulating index must be less than 1 for the PWM pulse; hence the amplitude of the modulating signal must be smaller than the amplitude of the carrier signal 5 .
Voltage Gain
The boost voltage obtained at the output can be calculated from the converter's voltage-gain relation, in which M.I. denotes the modulating index and d the switching duty ratio.
Inductance
The fundamental component of the PWM pole voltage is denoted by Vr. It is displaced from the supply voltage by an angle δ, since the input loop behaves like a transmission line. The power transferred across the inductance is then P = Vs·Vr·sin δ/(ωL). From the above equation we can find L as

L = Vs·Vr·sin δ/(ω·P).
Capacitance
The load current IL contains a DC component and a ripple component 6 . The capacitor makes IL close to pure DC, and the ripple should be at most 5%. By equating the output power to the input power and associating IL with its AC part, we can obtain the value of the capacitance as

C = IL/(2·ω·ΔV0),

where ΔV0 is the permitted output-voltage ripple.
Carrier Frequency
The carrier frequency should be at a minimum of the order of 11·fs, where fs is the supply (fundamental) frequency 7 . For a 50 Hz supply it should be above 550 Hz; most commonly 2 kHz is chosen.
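The design relations above can be collected into a small calculator, sketched below in Python: the inductance follows the transmission-line power-transfer relation and the capacitance the double-line-frequency ripple balance. The rated power, Vr and δ values are assumptions chosen only for illustration, not the paper's design data.

```python
import math

# Design sketch for the boost-rectifier passives; Vr, delta and P are assumed.
Vs    = 230.0               # rms supply voltage, V
Vr    = 250.0               # rms fundamental of the PWM pole voltage, V (assumed)
delta = math.radians(10)    # load angle between Vs and Vr (assumed)
P     = 1000.0              # rated power, W (assumed)
f     = 50.0                # supply frequency, Hz
V0    = 400.0               # dc output voltage, V
dV0   = 0.05 * V0           # allowed ripple: 5% of V0, as stated above

w = 2 * math.pi * f
L = Vs * Vr * math.sin(delta) / (w * P)   # interface inductance, H
I_L = P / V0                              # dc load current, A
C = I_L / (2 * w * dV0)                   # dc-link capacitance, F
f_carrier = max(11 * f, 2000.0)           # carrier >= 11*fs; 2 kHz typical

print(f"L = {L*1e3:.1f} mH, C = {C*1e6:.0f} uF, f_carrier = {f_carrier:.0f} Hz")
```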
Small Signal Model
Assume that all of the switches follow the ideal time-variant switch model. Also, take the inductor current and the capacitor voltage as the state variables 8 . Then the mathematical model can be written for each switch state. With the switch ON, the state equations are L·diL/dt = vs and C·dv0/dt = −v0/R; with the switch OFF, they are L·diL/dt = vs − v0 and C·dv0/dt = iL − v0/R. Together these constitute the ideal switched model from which the small-signal model follows.
Average State Equation
The first step is to use an averaged state function instead of the two per-interval state functions 9 . Taking the duty-ratio-weighted average of the state functions in the ON and OFF states, we get the state-space averaged model over one switching period. Let D be the average of the switching variable (the duty ratio). The state-space averaged model can then be written as L·diL/dt = vs − (1 − D)·v0 and C·dv0/dt = (1 − D)·iL − v0/R.
Perturbation and Linearization
Because the low-frequency, small-ripple characteristic of the rectifier is satisfied, the derivative of the state vector equals zero in the steady state; from this condition we obtain the DC operating point. Now assume that a small disturbance arises: each quantity is written as its steady-state value plus a small perturbation (x = X + x̃, d = D + d̃), and only first-order terms in the perturbations are retained.
Transfer Function
Taking the Laplace transform of the linearized equations, we obtain the transfer functions from the control (duty-ratio) perturbation to the state variables and the output variable.
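Since the text notes that the boost-mode rectifier mirrors the DC-DC boost converter, the standard averaged boost model provides a concrete instance of such a control-to-output transfer function; the sketch below assembles it with scipy. The L and C values are those quoted in the three-phase design section, while R and the operating-point duty ratio D are assumptions for illustration.

```python
from scipy import signal

# Sketch: small-signal control-to-output transfer function of the averaged
# boost model, as a concrete instance of the transfer functions derived above.
L, C, R = 3e-3, 136e-6, 160.0   # L, C as quoted later in the paper; R assumed
V0, D   = 400.0, 0.42           # dc output voltage; duty ratio D assumed

Le  = L / (1 - D) ** 2          # inductance reflected through the conversion ratio
num = [-V0 / (1 - D) * Le / R, V0 / (1 - D)]   # includes RHP zero at R*(1-D)^2/L
den = [Le * C, Le / R, 1.0]
G_vd = signal.TransferFunction(num, den)

w, mag, phase = signal.bode(G_vd)
print("low-frequency gain [dB]:", mag[0])      # ~= 20*log10(V0/(1-D))
```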
Closed Loop Control
The closed-loop control consists of an inner current loop and an outer voltage loop. The inner loop needs a current controller and the outer loop requires a voltage controller 10 . The overall block diagram of the closed-loop control for the single-phase rectifier is as follows. The comparison of Vref and V0 reflects changes in the capacitor voltage; a change in capacitor voltage in turn alters the output current and, since the output power is equal to the input power, a change in output current alters the input current. Hence the controller output is given as Is* (the reference supply current) 11 . To match Is* and Is, the current loop produces the fundamental component of Vr.
If I0 matches Iload, there are no fluctuations in the capacitor voltage; if they do not match, fluctuations appear in the capacitor voltage and, as mentioned above, all the parameters adjust in order to compensate for them.
If we add a product block, multiplying by a sine term which is in phase with Vs, then Vs and Is will be in phase with each other, which means unity power factor (UPF) on the input side.
Controller Design
Input current regulation in the converter is achieved by adjusting the duty cycle. Generally, the three basic algorithms used are P, PI and PID. Here a PI controller is used in the inner current loop, which regulates the input current and reduces the peak overshoot and the steady-state error. The PI controller consists of two basic modes, the proportional mode and the integral mode. A proportional controller (kp) reduces the settling time and reduces the error but does not eliminate it. An integral controller (ki) has the effect of eliminating the steady-state error. A limiter is used to keep the duty cycle within the desired band. The physical model of the closed-loop control is given as follows: Figure 8 shows the inner current loop control and Figure 9 shows the outer voltage loop control of the single-phase PWM rectifier. Figure 10 shows the combination of the inner current loop and the outer voltage loop. Figure 11 shows the inner current loop settling for a change in reference current. Figure 12 shows the outer voltage loop settling for different output voltages. Figure 13 shows the simulated waveform of the inner current loop settling for different source voltages (Vs). Figure 14 shows the outer voltage loop settling for different load currents. The closed-loop control using the physical model is simulated with Vref given as 400 V. The simulation output showing the output voltage settling at 400 V is given in Figure 15, and the input current stabilized by the inner current loop, which makes the input current and voltage in phase (UPF), is shown in Figure 16.
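A minimal discrete-time PI controller with the output limiter mentioned above can be sketched as follows; the gains are the paper's inner-loop values, while the sampling time and the anti-windup clamping are added assumptions.

```python
# Minimal discrete PI with an output limiter. Gains are the paper's
# inner-loop values; sampling time and anti-windup are assumptions.

class PI:
    def __init__(self, kp, ki, dt, out_min, out_max):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0

    def step(self, ref, meas):
        err = ref - meas
        self.integral += err * self.dt
        out = self.kp * err + self.ki * self.integral
        if out > self.out_max or out < self.out_min:
            self.integral -= err * self.dt          # anti-windup: undo integration
            out = min(max(out, self.out_min), self.out_max)
        return out                                  # duty-cycle command

current_pi = PI(kp=0.99724, ki=53.3, dt=1e-4, out_min=0.0, out_max=1.0)
duty = current_pi.step(ref=4.0, meas=3.2)           # Iref = 4 A example
print(duty)
```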
Simulation Results
Here the inner-loop PI controller is designed with Kp = 0.99724 and Ki = 53.3. With Iref = 4 A, after the initial oscillations and peak overshoot, the current settles at the given reference value. For the outer voltage loop, the PI controller is designed with Kp = 0.913 and Ki = 2.27, and Vref = 400 V.
Introduction
A simple single-input-single-output model is constructed by separating the d-axis and q-axis dynamics through appropriate nonlinear feedforward decoupling while maintaining nearly unity-power-factor operation. The model exhibits a close similarity to a dc-dc boost converter under both large-signal and small-signal operating conditions. This makes it possible to extend the system analysis and control design techniques of dc-dc converters to the three-phase PWM rectifier as well.
Three Phase PWM Rectifier
Over the past several years, considerable research work has been carried out on the control of ac-to-dc Pulse Width Modulation (PWM) rectifiers, since these converters possess many desirable features such as sinusoidal line currents at a required power factor, a nearly constant dc output voltage, and bidirectional power delivery capability. As the filter capacitor required is generally small under balanced supply voltage conditions, these converters can also offer excellent dynamic response of the dc output voltage. Control of three-phase PWM rectifiers in the d-q Synchronously Rotating Frame (SRF) was developed from field-oriented control techniques for ac drives in the early 1980s. Normally, the control objectives of a PWM rectifier are to regulate the dc output voltage on the dc side, achieve Unity Power Factor (UPF) operation on the ac side, and also to achieve fast dynamic response to line and load disturbances. A state-space averaged model has been proposed for the three-phase PWM rectifier in the d-q SRF. However, the model, though accurate, does not give sufficient insight into the controller design and behavior of the three-phase PWM rectifier system due to its complex Multi-Input-Multi-Output (MIMO) nonlinear structure and the presence of a non-minimum-phase feature. Because of this, designing a proper controller for such a converter has generally been a challenging task. The SISO model equation has been derived in 1 . This section deals with the closed-loop control of the three-phase rectifier using the SISO model.
Nonlinear Feedforward Decoupling Controller
In Figure 19 the coupling terms between the d-axis and the q-axis are represented by the two current-controlled dependent voltage sources. Decoupling may be achieved if the effects of these two voltage sources are nullified by appropriately adjusting the control inputs vd and vq.

Figure 19. Equivalent circuit in the SRF after decoupling the q-axis.
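A minimal sketch of such a decoupling law, assuming a grid-voltage-oriented SRF, is given below: the PI outputs ud and uq command the per-axis inductor voltage, while the ωL cross-coupling terms and the grid voltage are cancelled by feedforward. The sign conventions depend on the chosen SRF orientation and are assumptions here, not the paper's exact equations.

```python
import math

# Sketch of dq feedforward decoupling: cancel the grid voltage and the
# omega*L cross-coupling so each axis behaves as an independent plant.
# Sign conventions are an assumption tied to the chosen SRF orientation.

def decoupled_references(u_d, u_q, i_d, i_q, e_d, e_q, L, omega):
    v_d = e_d - u_d + omega * L * i_q   # cancels the +w*L*i_q coupling on the d-axis
    v_q = e_q - u_q - omega * L * i_d   # cancels the -w*L*i_d coupling on the q-axis
    return v_d, v_q

omega = 2 * math.pi * 50
v_d, v_q = decoupled_references(u_d=12.0, u_q=0.5, i_d=8.0, i_q=0.1,
                                e_d=230.0, e_q=0.0, L=3e-3, omega=omega)
print(v_d, v_q)   # converter voltage references for the PWM modulator
```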
Closed Loop Control
It is well known that the output performance of a single-phase power factor correction unit is limited by the slow response of the bulky capacitor. This drawback is overcome by a three-phase PWM rectifier, as it successfully gets rid of the line-frequency-related ripple on the dc side. This allows ripple-free output voltage operation to be achieved even with a small filter capacitor. Here, Fi(s) is the control-to-d-axis-current transfer function given in 1 .
In designing a multiloop controller, one first designs the inner loop. Here the controller used is a PI controller. Once the current loop is closed, the converter can be treated as a new open-loop plant with transfer function Fvi(s), given in 1 , which acts as the outer voltage-loop model. Figure 21 shows the inner current loop design with a PI controller, and Figure 22 shows the outer voltage loop control.
Simulation Results
Here the closed loop is designed for an input voltage Ed = 230 V, with L designed as 0.003 H and C = 0.000136 F (136 μF). The controller used is a PI controller; the proportional constant Kp is kept at unity, while the integral constant Ki is designed as 53.3 in the time domain to reduce the steady-state error.
Conclusion
The physical model and the mathematical model have been designed for the single-phase rectifier. The closed-loop control of the single-phase PWM rectifier with PI controllers has also been designed using the mathematical model and verified with the physical model. A SISO model and the corresponding closed-loop control have been designed for the three-phase rectifier. Simulation is done using MATLAB, and the simulation results are shown. In future work, the mathematical SISO model can be simulated as well.
"year": 2015,
"sha1": "97f7afbb68ccfc3b2d6fe8f27910ae9c69e1e1e7",
"oa_license": null,
"oa_url": "https://doi.org/10.17485/ijst/2015/v8i32/87869",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ba6dd16f971dd26d1d6e21ac7d462fb317a3a1e5",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
119180496 | pes2o/s2orc | v3-fos-license | Atomic scale electron vortices for nanoresearch
Electron vortex beams were only recently discovered, and their potential as a probe for magnetism in materials was shown. Here we demonstrate a new method to produce electron vortex beams with a diameter of less than 1.2 Å. This unique way to prepare free electrons in a state resembling atomic orbitals is fascinating from a fundamental physics point of view and opens the road for magnetic mapping with atomic resolution in an electron microscope.
The first experimental realisation of laser light carrying topological charge came in 1990 1 , founded on a theory of field vortices 2 . Optical vortices, as they are called, have opened a new era in optics 3,4 . Today, there are many applications, ranging from optical tweezers exerting a torque 5 , optical micromotors [6][7][8] , cooling mechanisms 9 , toroidal Bose-Einstein condensates 10 and exoplanet detection [11][12][13][14] to quantum correlation and entanglement in many-state systems [15][16][17][18] . For a review, see 19 . Electron vortex beams were demonstrated only recently 21,23 . With an effective diameter of several micrometers, these were still far away from the goal of atomic resolution.
Here, we describe the production of free electrons localised in Angstrom-sized regions and carrying topological charge. Electron vortex beams are free electrons carrying a discrete orbital angular momentum of mℏ. They are characterized by a spiraling wavefront, similar to optical vortices 3 . They also carry a magnetic moment which, even for beams without spin polarization, equals one Bohr magneton per electron and per unit of topological charge 24,25 .
Their original interest was their use as a filter for magnetic transitions 21 , thus facilitating energy-loss magnetic chiral dichroism (EMCD) experiments in the electron microscope [26][27][28] .
Their actual potential is much wider, ranging from probing chiral structures to the manipulation of nanoparticles, clusters and molecules, exploiting the transfer of angular momentum and the magnetic interaction. The holographic aperture used in 21 was located in a position conjugate to the object plane. Here we use an aperture in the condenser plane of a TEM.
This setup allows a small probe to be formed in the object plane of the microscope by means of the condenser lenses, as schematically outlined in Fig. 1A. For an ideal electron point source and ideal lenses, the probe is given by the Fourier transform of the aperture 21 . In practice, however, there are two effects which have to be taken into account. First of all, the electron source is not a point emitter but can be modelled as an incoherent source distribution over an area characterising the source size. The effect on the probe is a convolution of the intensity of the image produced by a point source with this distribution. A second effect affecting the probe size is caused by the aberrations of the probe-forming lens system. These can be expressed as a distortion of the ideal wavefront by a phase change. A goal in the design of electron microscopes is to minimise both effects as much as possible. Current state-of-the-art electron microscopes can reach probe sizes which deliver a resolution better than 0.8 Å when used in a scanning-probe approach 29 . Here we use such a state-of-the-art microscope to produce electron vortex beams making use of a holographic mask. The microscope used is the Qu-Ant-EM microscope installed at the University of Antwerp. This is a double-aberration-corrected FEI Titan G2 80-300 instrument capable of routinely making small probes which enable 0.8 Å resolution at an acceleration voltage of 300 kV.
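Since, for an ideal point source, the probe is the Fourier transform of the aperture, the vortex sidebands produced by a fork hologram can be illustrated in a few lines of numpy; the grating frequency, aperture radius and grid size below are illustrative choices, not the parameters of the actual mask.

```python
import numpy as np

# Sketch: ideal probe as the Fourier transform of the aperture. A binary
# "fork" hologram (grating with an edge dislocation) diffracts the beam into
# a central m=0 spot and m=+1/-1 vortex sidebands. Geometry is illustrative.

N = 512
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
r, phi = np.hypot(X, Y), np.arctan2(Y, X)

k = 40 * np.pi    # grating carrier frequency (assumed)
m = 1             # topological charge of the dislocation
fork = (np.cos(k * X + m * phi) > 0) & (r < 0.9)   # binary fork aperture

probe = np.fft.fftshift(np.fft.fft2(fork))
intensity = np.abs(probe) ** 2    # focal-plane pattern: doughnut sidebands

# The m=+1/-1 sidebands appear at the grating carrier frequency and show the
# characteristic on-axis zero of a vortex; the central m=0 beam does not.
print("on-axis intensity of central beam nonzero:",
      intensity[N // 2, N // 2] > 0)
```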
Imaging such a fine probe requires a second set of lenses with requirements similar to those of the probe-forming lenses. Therefore, another aberration-correction device is used in the image-forming lens system. Nevertheless, no imaging lens is perfect, and the image obtained will always overestimate the real size of the probe. Chromatic aberration due to a finite energy spread in the gun and image blurring in the electron detector further increase the size of the image of the probe.
The convergence semi-angle α can be changed over a wide range, which enables the user to choose between very small probes (large convergence angles) or larger probe sizes (smaller convergence angles). As a probe-defining aperture we use a holographic mask with a fork dislocation similar to that described in 21 . Our experiments demonstrate that it is possible to obtain sub-nm free-electron vortices even on standard equipment. Aberration correctors allow vortices with diameters as small as 1.2 Å. Comparing this to the size of a typical orbital in atoms, the shape of the beam in the focal plane resembles the electron distribution of, e.g., a 2p orbital in a nitrogen atom, as sketched in Fig. 1B,C,D, in both radial distribution and phase. The main difference between electrons in an atomic orbital and the free vortex is that in the latter the wave function evolves in time as the electron propagates in the electron-optical system. A detailed theoretical background of the properties of free electrons in a vortex state is given in 25 . The finite source size of the electron gun is not detrimental to the typical doughnut shape of a vortex down to the sub-nm scale, but it limits the ability to observe the smallest Angstrom-sized vortex beams that can be produced for the time being. At the same time, the finite source size reduces the vortex character of the beam in the center while maintaining its characteristics further away from the optical axis, as was studied in optics 30 . This would lead to a reduction of any scattering effect that hinges on the vorticity of the probe. Ongoing simulations show the trend that useful effects in inelastic scattering remain as long as the finite source size is smaller than the diffraction-limited size of the beam that would be obtained with an ideal point source 31 . Nevertheless, a further reduction of the finite source size in future electron microscopes would be strongly desirable for vortex experiments.
Engineering these atomic-sized electron vortices opens the road to magnetic information mapping on the atomic scale 32 . Indeed, it was shown in 21 that electron vortex beams provide information on the magnetic state of materials. With Angstrom-sized electron vortices, one would obtain magnetic information on the atomic scale. In this paper we have measured the diameter of such vortex probes as their full width at half maximum (FWHM); the resolution that can be obtained with such a probe is better than this for two reasons. First, as already mentioned, the measurement we present here is an overestimate, and source-size effects play an important role. Second, resolution in electron microscopy is commonly defined as the spatial frequency that still gives an interpretable contrast. This difference is apparent from the measurement of the central beam, which was found to have a FWHM of 1.0 Å, while the resolution which can be obtained with this probe is approximately 0.8 Å. Extrapolating this to the side beams that carry angular momentum, we estimate the possible resolution to be less than 1 Å. Sub-nm free electrons with topological charge can be produced in standard TEM equipment. Aberration correctors allow vortex probe sizes of less than 1.2 Å. The dominant factor that limits the probe size is the finite electron source size. The probe with topological charge m focussed on the specimen has a phase structure, extension and radial intensity distribution very similar to atomic p orbitals even in light atoms. This fact opens new options to couple a fast electron probe directly to the internal degrees of freedom of atoms and allows magnetic information to be probed at the sub-nm level.
"year": 2014,
"sha1": "1a56b8ed1eed877e50082f7b80012fb18f01e8dc",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1405.7247",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "1a56b8ed1eed877e50082f7b80012fb18f01e8dc",
"s2fieldsofstudy": [
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
13740708 | pes2o/s2orc | v3-fos-license | Local-duality QCD sum rules for strong isospin breaking in the decay constants of heavy–light mesons
We discuss the leptonic decay constants of heavy–light mesons by means of Borel QCD sum rules in the local-duality (LD) limit of infinitely large Borel mass parameter. In this limit, for an appropriate choice of the invariant structures in the QCD correlation functions, all vacuum-condensate contributions vanish and all nonperturbative effects are contained in only one quantity, the effective threshold. We study properties of the LD effective thresholds in the limits of large heavy-quark mass m_Q and small light-quark mass m_q. In the heavy-quark limit, we clarify the role played by the radiative corrections in the effective threshold for reproducing the pQCD expansion of the decay constants of pseudoscalar and vector mesons. We show that the dependence of the meson decay constants on m_q arises predominantly (at the level of 70–80%) from the calculable m_q-dependence of the perturbative spectral densities.
Making use of the lattice QCD results for the decay constants of nonstrange and strange pseudoscalar and vector heavy mesons, we obtain solid predictions for the decay constants of heavy–light mesons as functions of m_q in the range from a few to 100 MeV and evaluate the corresponding strong isospin-breaking effects: f_{D+} − f_{D0} = (0.96 ± 0.09) MeV, f_{D*+} − f_{D*0} = (1.18 ± 0.35) MeV, f_{B0} − f_{B+} = (1.01 ± 0.10) MeV, f_{B*0} − f_{B*+} = (0.89 ± 0.30) MeV.
Introduction
The method of QCD sum rules [1], based on the exploitation of Wilson's operator product expansion (OPE) in the study of properties of individual hadrons, has been extensively applied to the decay constants of heavy mesons [2][3][4]. An important finding of these analyses was the observation of the strong sensitivity of the decay constants to the precise values of the input OPE parameters and to the algorithm used for fixing the effective threshold [5][6][7][8][9]. For any given approximation of the hadronic spectral density based on quark-hadron duality, the effective threshold determines to a large extent the numerical prediction for the decay constants inferred from QCD sum rules: even if the parameters of the truncated OPE are known with high precision, the decay constants may be predicted with only a limited accuracy, which represents their systematic uncertainty. In a series of papers [10][11][12][13][14], we proposed a new algorithm for fixing the effective threshold within the Borel QCD sum rules which allowed us to obtain realistic estimates of the systematic uncertainties. Our procedure opened the possibility to get predictions for the decay constants with a controlled accuracy [15][16][17][18] and thus allowed us to address subtle effects that call for a profound accurate treatment, such as the ratios of the decay constants of heavy vector and pseudoscalar mesons [19,20] or the strong isospin-breaking (IB) effects in the decay constants of heavy–light mesons [21], generated by the mass difference (m_d − m_u) between up and down quarks.
a e-mail: simula@roma3.infn.it
Here, we discuss the application of another variant of QCD sum rules to the evaluation of the strong IB effects in the decay constants of heavy–light pseudoscalar and vector mesons. Our analysis takes advantage of the fact that the OPE provides the analytic dependence of the correlation functions on the quark masses; this allows us to study, e.g., the impact of the light-quark mass on heavy-meson decay constants, thus providing access to the strong IB effects. The approach we describe in this work seems quite promising for studying the dependence of a generic hadron observable on quark masses.

1.1 QCD sum rule in the local-duality limit

A typical Borel QCD sum rule for the decay constant f_H of a heavy (pseudoscalar or vector) Q̄q meson H of mass M_H, consisting of a heavy quark Q with mass m_Q and a light quark q with mass m_q, has the form

f_H² M_H^(2N) e^(−M_H² τ) = ∫_{(m_Q+m_q)²}^{s_eff} ds s^N e^(−s τ) ρ_pert(s, m_Q, m_q, α_s) + Π_power^(N)(τ, m_Q, m_q).   (1)

Here, τ is the Borel parameter, N is an integer that depends on the Lorentz structure in the correlation function chosen for the sum rule and on the number of subtractions in the corresponding dispersion representation, and s_eff is the effective threshold such that √s_eff lies between the mass of the ground state and the first excited state [1], namely s_eff = (M_H + z_eff)² with z_eff ≈ 0.4-0.5 GeV.
Nonperturbative effects appear on the r.h.s. of (1) at two places: as power corrections given in terms of vacuum condensates and in the effective threshold s_eff^(N). Depending on the chosen value of N, nonperturbative effects are distributed in a different way between power corrections and the effective threshold. Perturbative effects are encoded in the spectral density ρ_pert, in the effective threshold s_eff^(N) and in the power corrections Π_power^(N). Recall that Eq. (1) is based on modelling the hadron continuum as the effective continuum, i.e., on the substitution ρ_cont(s) = θ(s − s_eff) ρ_pert(s). This relation is fulfilled pointwise at large values of s above some s_up, but it is a "weak" relation and requires an appropriate smearing in s in the mid-energy region above the physical hadron continuum threshold s_th. An appropriate smearing is reached by performing the Borel transform, which yields for the continuum contribution

Π_cont(τ) = ∫_{s_eff}^{∞} ds s^N e^(−s τ) ρ_pert(s).   (2)

For nonzero τ, the contribution of the hadron continuum given by (2) is exponentially suppressed compared to the ground-state contribution in (1). Therefore, in the conventional use of QCD sum rules one works in some window of nonzero values of τ. However, one may ask whether or not Eq. (1) may be extended down to τ = 0, the so-called local-duality (LD) limit. 1 Obviously, at τ = 0 an appropriate smearing in (2) is guaranteed by the integration; on the other hand, the excited states are not suppressed and one can doubt that modelling the hadron continuum as the effective continuum remains a good approximation at small τ. First, note that power corrections contain singular terms of the form τ^(2−N) log τ. Therefore, the limit τ → 0 cannot easily be taken in the sum rule (1) for N ≥ 2. For N = 0 and N = 1, the limit τ → 0 in (1) is mathematically well defined. To demonstrate that this limit is also physically meaningful, one needs to show that the corresponding s_eff indeed lies in the expected range. Figure 1a presents the effective threshold obtained in the case of the vector B* meson by solving Eq. (1) for N = 0 and N = 1, using on its l.h.s. the results of recent lattice QCD simulations for f_B* [30] and the experimental value of M_B* [31]. Figure 1b shows the truncated series of power corrections, including operators of dimension up to 6, for both N = 0 and N = 1. Note that power corrections at τ = 0 vanish for N = 0 and take a finite value for N = 1. The vanishing of the N = 0 power corrections at τ = 0 is related to the absence in QCD of a dimension-2 condensate. Obviously, the truncated power corrections for N = 1 remain under control in a rather broad range of τ, but for N = 0 they explode relatively soon as τ increases.

1 The LD limit in Borel sum rules was introduced and discussed in [22][23][24][25] in connection with the pion and nucleon elastic form factors, and later applied to the analysis of meson transition form factors in [26][27][28][29]. A specific feature of this limit is the vanishing of power corrections in the two- and three-point Borelized correlation functions of axial-vector and vector currents of light quarks for N = 1 [32][33][34].
It is clear that for the N = 1 case the OPE is under good control in a broad range of τ, and therefore the lower boundary of the Borel window can be safely extended down to τ = 0. For N = 0, the OPE is under control only at relatively small τ, where the approximation of a τ-independent effective threshold may work well. The relevance of the unknown higher-order power corrections is reflected by the sharp rise of the effective threshold visible in Fig. 1a. Obviously, the standard QCD sum-rule analysis, based on a stability window with a constant effective threshold, may be problematic here. In this respect, an alternative approach based on a τ-dependent effective threshold seems more appropriate, but this issue goes well beyond the scope of the present paper. Here, the only important property is that the effective threshold at τ = 0 has the value expected on the basis of the standard considerations [1], i.e., it is around (M_B* + z_eff)^2 with z_eff ≈ 0.4-0.5 GeV. Therefore, for the correlators with N < 2, modelling the hadron continuum as an effective continuum remains a valid and equally accurate approximation as τ → 0, which does not represent a point of discontinuity.
In this work we show that the sum rule (1) for N = 0 can be of particular interest. Obviously, considering the sum rule at only one point, τ = 0, does not allow for the use of the usual sum-rule stability criteria [1] for determining s_eff. Consequently, the decay constants cannot be determined entirely within the QCD sum rules; some "external" inputs are needed to fix s_eff. Nevertheless, in this work we will show that, besides the reduction of the uncertainties related to the absence of the condensates, the LD sum rules represent an efficient tool to investigate the dependence of the pseudoscalar- and vector-meson decay constants on quark masses and their perturbative behaviour in QCD. Moreover, the LD sum rules turn out to be particularly suitable for the analysis of the strong IB effects in the two-point functions when implemented with only a few "external" inputs, e.g., from experiment or lattice QCD.
1.2 Strong isospin breaking from a QCD sum rule in the LD limit

We are interested in the dependence of the decay constants of heavy-light mesons on the quark masses, in particular, in the strong IB effects in the decay constants (i.e., the difference between the decay constants of Q̄d and Q̄u mesons induced by the small mass difference δm = m_d − m_u). We therefore need to properly take into account all effects depending on the light-quark flavour q in the correlation function of the appropriate Q̄q interpolating currents. Clearly, the m_q-dependence on the l.h.s. of (1) is encoded both in the decay constant f_H and in the meson mass M_H. On the r.h.s., the IB effects come from several sources: the m_q-dependence of ρ_pert(s, m_Q, m_q, α_s), the m_q-dependence of the effective threshold s_eff, the m_q-dependence of the power corrections, and the flavour dependence of the quark condensates, in particular, of ⟨q̄q⟩. In general, all these effects mix together, which renders the goal of isolating the IB effects in f_H a complicated task. A careful analysis has been carried out recently in [21], following the standard choice N = 2 for pseudoscalar and N = 1 for vector mesons.
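To make explicit the quantity that the rest of the analysis targets, note that to first order in δm the strong IB effect is simply the slope of f_H in the light-quark mass; this first-order expansion is our own spelling-out of the logic, not a formula quoted from the original equations:

```latex
f_{H_d} - f_{H_u} \;\simeq\; \delta m \,
\left.\frac{\partial f_H(m_q)}{\partial m_q}\right|_{m_q = m_{ud}},
\qquad \delta m \equiv m_d - m_u,
\qquad m_{ud} \equiv \tfrac{1}{2}\,(m_d + m_u).
```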
There is, however, a special case which makes the sum rule (1) particularly suitable for the analysis of IB effects. As discussed above, for N = 0 and N = 1 the power corrections are regular functions at τ = 0. Moreover, for N = 0 the power corrections at τ = 0 vanish (the power corrections for N = 1 are nonzero at τ = 0) and Eq. (1) reduces to

f_H^2 = ∫_{(m_Q+m_q)^2}^{s_eff} ds ρ_pert(s, m_Q, m_q, α_s).

On the l.h.s., the M_H contribution has dropped out, thus opening direct access to the m_q-dependence of the decay constant f_H. Since power corrections do not contribute to the sum rule in the LD limit, all nonperturbative effects now enter through a single quantity, the effective threshold. The functional dependence of the perturbative spectral density on the quark masses m_Q and m_q can be calculated to the necessary accuracy. The functional dependence of s_eff on the quark masses may be determined from the general properties of the decay constants of heavy-light mesons in QCD. Namely, its dependence on m_q may be parameterized by a polynomial in m_q plus chiral logs, which can be determined by matching to heavy-meson chiral perturbation theory. The numerical coefficients of the polynomial are not known but may be determined using only a few results on the decay constants from lattice QCD, e.g., for nonstrange and strange heavy mesons. Having at our disposal the explicit m_q-dependence of the effective threshold and of the spectral densities opens direct access to the strong IB effects related to the small difference of the u- and d-quark masses in QCD. We will demonstrate that the main m_q-dependence of the decay constants originates from the calculable m_q-dependence of the perturbative spectral densities. Therefore, the LD limit opens the possibility of a reliable analysis of the m_q-dependence and the strong IB effects in the decay constants of heavy-light mesons (and, in principle, also in other quantities).
This paper is organized as follows: in Sect. 2, we recall the spectral densities of the QCD correlation functions relevant for our LD sum-rule analysis. In Sect. 3, we study the m_Q- and m_q-dependences of the effective thresholds by making use of appropriate mass schemes (pole mass and running mass) for the heavy quarks. In Sect. 4, we perform the numerical analysis of the decay constants of heavy-light pseudoscalar and vector mesons and obtain predictions for the strong IB effects in the decay constants. Section 5 gives our conclusions. The Appendix collects some details of the treatment of the IB effects within the OPE which, in our opinion, deserve to be presented.
2 Local-duality sum rules for f_P and f_V
Let us consider two-point QCD sum rules for the decay constants of pseudoscalar (P) and vector (V) mesons built up of one massive quark Q with mass m_Q and one light quark q with mass m_q. We consider the axial-vector current and the vector current as interpolating currents for the pseudoscalar and the vector mesons, respectively. The corresponding correlation functions involve two Lorentz structures, the transverse structure g_μν p^2 − p_μ p_ν and the longitudinal structure p_μ p_ν. For Π^5_μν(p), we study the longitudinal structure p_μ p_ν, as it contains the ground-state pseudoscalar-meson contribution, proportional to f_P^2/(M_P^2 − p^2). For Π_μν(p), we study the transverse structure g_μν p^2 − p_μ p_ν, which contains the ground-state vector-meson contribution, proportional to f_V^2/(M_V^2 − p^2). As already noted, power corrections for dimension-2 correlation functions vanish in the LD limit τ = 0. The leading power correction to Π^5_μν(p) is given by the quark condensate and is easily derived; its Borel transform vanishes at τ = 0. The power corrections for the vector correlator are obtained by changing the sign of the light-quark mass. Obviously, the Borelized power corrections to both the pseudoscalar and the vector correlators vanish in the limit τ = 0.
Since the power corrections do not contribute to the LD sum rule under consideration, we need to consider only the perturbative contributions. After applying the duality cuts at s_eff, separately in the pseudoscalar and the vector channels, performing the Borel transform p^2 → τ, and setting τ → 0, the corresponding sum rules take the form

f_P^2 = ∫_{(m_Q+m_q)^2}^{s_eff^{(P)}} ds ρ_P^pert(s, m_Q, m_q, α_s),   f_V^2 = ∫_{(m_Q+m_q)^2}^{s_eff^{(V)}} ds ρ_V^pert(s, m_Q, m_q, α_s).   (14)

The functions ρ_P^pert and ρ_V^pert in (14) are the spectral densities of the invariant functions Π_L^5(p^2) and Π_T(p^2), respectively. Let us emphasize that in (14) both the full spectral densities and the decay constants are scale-independent quantities. Therefore, the effective thresholds are scale-independent objects, too. In perturbation theory, the spectral densities are calculated as power expansions in a ≡ α_s(μ)/π, with α_s(μ) the strong coupling in the MS scheme at scale μ,

ρ_i^pert(s) = ρ_i^{(0)}(s) + a ρ_i^{(1)}(s) + a^2 ρ_i^{(2)}(s) + ...,   (15)

with i = P, V. In practice, one adopts truncated expansions of the spectral densities; this leads to a scale dependence of the spectral densities. As a result, the effective thresholds will also depend on the scale, to compensate the scale dependence of the spectral densities emerging in the course of truncation. The explicit leading-order (LO) spectral densities, given by (16), follow from the free-quark loop. Obviously, the lower integration limit in (14) is determined by the threshold in the correlation functions. In Eq. (15), we may employ different definitions of the quark masses. The most advanced calculation of the pseudoscalar and vector spectral densities, including O(a^2) terms, was performed in [35,36], for a massless light quark, in terms of the heavy-quark pole mass. The expansion in terms of the heavy-quark pole mass is appropriate for considering the heavy-quark limit, which we address in Sect. 3.1.
However, the pole-mass expansion leads to a rather slow convergence of the perturbative expansion for the decay constants [15-18,37]. The convergence improves considerably when one rearranges the perturbative expansion in terms of the running MS masses. Therefore, for the practical analysis of the m_q-dependences of the meson decay constants in Sect. 4, we make use of the perturbative expansion in terms of the running MS masses of the light and the heavy quarks. The corresponding NLO and NNLO functions ρ_P^{(1,2)} entering (15), necessary for such an analysis, are found from the spectral densities of the pseudoscalar correlation function given in [37] by multiplying them by 1/s^2. Similarly, the transverse spectral densities ρ_V^{(1,2)} entering (15) are found from the spectral densities of [32-34] by multiplying them by 1/s. In our analysis, we make use of the exact LO perturbative spectral density given by (16); at NLO we keep the terms O(a m_q^0) and O(a m_q^1), and at NNLO we keep only the known terms of order O(a^2 m_q^0). We would like to emphasize that the perturbative spectral density (15) does not generate terms of order m_q log(m_q) in the dual correlator (14). This observation will be crucial for the discussion of the properties of the effective thresholds in the next section.
3 Dependence of the effective thresholds on the quark masses
Let us now consider the dependences of the effective threshold on the quark masses m_Q and m_q.
3.1 Heavy-quark limit in the pole-mass scheme

We start with the heavy-quark limit of the decay constants, originally discussed in Refs. [38,39] within Heavy Quark Effective Theory (HQET). In what follows, however, we do not consider the static decay constants and we work in full QCD.
For the sake of argument, we first consider a massless light quark, m_q = 0. We can make use of any scheme for the heavy-quark mass, but we start with the pole-mass scheme, which leads to a more transparent behaviour of the effective threshold. We first isolate the pole mass, which we denote M_Q, in the effective threshold, writing s_eff = (M_Q + z_eff^pole(M_Q))^2. For the decay constants of pseudoscalar and vector Q̄q mesons, using results from [35,36], we obtain the expressions (19) in the limit M_Q → ∞. Hereafter, we denote ā ≡ ᾱ_s(M_Q)/π, with ᾱ_s(M_Q) the running strong coupling in the MS scheme at scale M_Q, and we use the standard notations C_F = (N_c^2 − 1)/(2N_c), C_A = N_c, T = 1/2, and n_l for the number of massless quarks [35,36].
Since only the near-threshold behaviour of the spectral densities is relevant for the leading behaviour in the large-M_Q limit, we may also obtain the O(ā^2) terms in the dual correlation functions [i.e., on the r.h.s. of (19)] from the analytical expressions for these spectral densities given by Eqs. (30) and (31) of [35,36].
In the limit M_Q → ∞, the dual correlation functions, expressed in terms of z_eff^pole(M_Q), do not contain corrections of order ā^n M_Q (this property will not hold in the running-mass scheme) but still contain log(M_Q) terms of the type (ā log(M_Q/z_eff^pole))^n, ā(ā log(M_Q/z_eff^pole))^{n−1}, etc. The terms (ā log(M_Q/z_eff^pole))^n, although formally of order ā^n, remain unsuppressed in the limit M_Q → ∞. To treat all terms containing log(M_Q), it is important to emphasize that they are exactly the same in the vector and the pseudoscalar sum rules. Therefore, they may be resummed by introducing a properly defined effective threshold z_eff^HQ, one and the same in the pseudoscalar and the vector channels. The explicit relation between z_eff^pole(M_Q) and z_eff^HQ, including also the ā^2 terms, is given by (20). The new quantity z_eff^HQ, which has the meaning of the effective threshold in HQET, absorbs all log(M_Q) terms on the r.h.s. of the sum rules (19); the latter then assume the form (21), in which the HQ limit may easily be taken. We also calculated the O(ā^2) contributions but do not present their explicit expressions here. The expressions (21) immediately lead to the ratio of the decay constants in the heavy-quark limit [40,41]. Including also the O(ā^2) corrections, we obtain the ratio f_V/f_P given in (22),^2 with ζ_3 ≈ 1.202. The O(ā^2) term in (22) reproduces the result first presented in Eq. (3.12) of [41].
Notice that for finite M_Q, z_eff^pole contains not only the logarithmic corrections, which are the same in the pseudoscalar and the vector channels, but also the 1/M_Q corrections, which are different for the thresholds in the pseudoscalar and the vector sum rules.

^2 The second-order pseudoscalar and vector spectral densities near the threshold, Eqs. (30) and (31) in [35,36], contain three unknown constants, c_FF, c_FA, and c_FL, which cancel in the ratio f_V/f_P.
3.2 Combined heavy-quark and chiral limits in the pole-mass scheme
The results (19) are obtained for a massless light quark.
Switching on a small light-quark mass m_q, the leading corrections generated by the integration of the perturbative spectral densities are proportional to m_q. As already noted in [21], no chiral logs of the kind m_q log(m_q) arise from integrating the spectral densities. Therefore, chiral logs in the decay constants may be generated only by chiral logs in the effective threshold. Moreover, in order to study the chiral logs in the decay constants, it is sufficient to make use of the perturbative spectral densities for m_q = 0. On the other hand, heavy-meson chiral perturbation theory (HMChPT) [42] (S. R. Sharpe, private communication; see the appendix in [21]) requires the appearance of chiral logs, which we denote as z_L^HQ in the chiral expansion of the decay constants in the heavy-quark limit. Since the only source of such terms is the effective threshold, we write the chiral expansion of the effective threshold in the form (23), where the dots denote linear and higher-order terms in the light-quark mass m_q. The coefficient z_L^HQ can now be fixed by matching to HMChPT [42], which provides the explicit chiral logs R_χ(m_q) in the ratio f_{H_q}(m_q)/f_{H_ud}, with H_ud a heavy meson containing a light valence quark of the average mass m_ud ≡ (m_u + m_d)/2. This matching yields the relation (24). The explicit expression for R_χ(m_q) was derived in [42] and presented in Eq. (A.3) of [21].
3.3 Quark-mass dependences of the effective threshold in the running-mass scheme
For practical sum-rule analyses of the decay constants, one prefers the MS running-mass scheme, since it entails a better convergence of the perturbative expansion [15-18,37]. It is not difficult to perform the limit m_Q(μ) → ∞ also for the running-mass correlation function; also therein one can write s_eff = (m_Q(μ) + z̃_eff(μ))^2. The effective threshold z̃_eff(μ) in the MS scheme is related to z_eff^pole, introduced in the pole-mass scheme, through an obvious relation which just expresses the fact that the upper integration limit s_eff is a scheme-independent quantity:

(m_Q(μ) + z̃_eff(μ))^2 = (M_Q + z_eff^pole)^2.   (26)

In particular, for μ = m_Q, taking into account the known relation between the pole and the running mass, M_Q = m_Q(m_Q)[1 + (4/3)ā + O(ā^2)], one finds z̃_eff(m_Q) = z_eff^pole + (4/3) ā m_Q(m_Q) + O(ā^2). Since z_eff^pole does not contain terms scaling as M_Q in the limit M_Q → ∞, z̃_eff(μ) must contain terms which diverge as powers of ā^n M_Q in this limit. This is, of course, no obstacle to using z̃_eff(μ) in the analysis of the decay constants of charmed or beauty mesons, but it makes this quantity not particularly convenient for studying the heavy-quark limit of the sum rules. The terms in z̃_eff(μ) divergent as m_Q → ∞, however, do not lead to divergent terms in the decay constants; also, the behaviour of the spectral densities in the MS scheme is a bit more tricky than in the pole-mass scheme. The dual correlator is determined by the end-point behaviour of the spectral densities; as already mentioned in [37], the higher-order spectral densities in the MS scheme do not vanish at the threshold. Finally, when the MS spectral densities are used and the duality cut is expressed via z_eff^HQ, all terms containing powers of m_Q, those coming from the integrals of the spectral densities and those contained in z̃_eff(m_Q), cancel each other, yielding a sum rule for f_H^2 which can also be obtained by simply expressing M_Q via m_Q in (19).

Let us now switch on a small light-quark mass m_q. The spectral densities are now treated as functions of m_Q(μ) and m_q(μ). Taking into account that the effective threshold depends on the scale μ only because of the truncation of the perturbative series, and that the chiral logs have been fixed in the pole-mass scheme, it is convenient to work with the parameterisation (31) for s_eff. The pole mass M_Q^{(2)} there is understood as being expressed via the running mass m_Q(μ) (e.g., [43]) at O(a^2) accuracy, the available accuracy of the correlation function. We can rewrite this expression in a form (32) similar to (26). Let us recall that the chiral logs z_L^HQ have been calculated in the heavy-quark limit; at finite values of m_Q, the chiral logs receive corrections which are unknown. So we take into account only the known leading effect of the chiral logs, to study whether or not their impact on the IB is crucial. The two other parameters of the effective threshold, z̃_eff^pole and z̃_1(μ) if one makes use of the parameterisation (31), or z̃_0(μ) and z̃_1(μ) if one works with (32), are unknown and will be fixed by using external benchmark results for the decay constants from lattice QCD. The inclusion of higher-order terms in the light-quark mass has no noticeable impact on the decay constants; thus, such terms are not considered.
4 Numerical analysis of the sum rules
Now, we turn to the numerical estimates. For the relevant OPE parameters, we adopt the numerical input collected in (33). We work with the effective threshold in the form (32) and consider the following three Ansätze (a minimal numerical sketch of the fitting procedure is given after this list):

1. "Constant" threshold: the z̃_1(μ) term in the effective threshold (32) and the chiral logs z_L^HQ are neglected; the only unknown parameter z̃_0(μ) is fixed from the lattice results for the decay constants of the isospin-symmetric heavy mesons, with m_q = m_ud.

2. "Linear" threshold: the chiral logs z_L^HQ are neglected, and the parameters z̃_0(μ) and z̃_1(μ) are fixed by the lattice-QCD results for the decay constants at two values of m_q, for the isospin-symmetric and the strange heavy mesons.

3. "Linear + log" threshold: the known leading chiral logs, represented by z_L^HQ, are included; the parameters z̃_0(μ) and z̃_1(μ) are fixed from the lattice-QCD results for the decay constants at two values of m_q, for the isospin-symmetric and the strange heavy mesons.

Table 1. Parameters of the effective thresholds and the resulting IB in the decay constants of heavy pseudoscalar and vector mesons. The parameter z_L in the effective threshold for the "linear + log" Ansatz is fixed by HMChPT in the heavy-quark limit.
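As a rough illustration of how the "linear" Ansatz can be fixed in practice, the sketch below inverts the τ = 0 sum rule for the threshold at two light-quark masses and extracts (z̃_0, z̃_1) from the two solutions. The spectral density, the heavy-quark mass and the two decay-constant benchmarks are toy placeholders, not the inputs of the actual analysis:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

mQ = 4.2  # heavy-quark running mass in GeV (placeholder value)

def rho(s, mq):
    """Toy stand-in for the truncated perturbative spectral density
    rho0 + a*rho1 + a^2*rho2; only its qualitative shape matters here."""
    return 4e-3 * (1.0 - (mQ + mq) ** 2 / s) ** 2

def f2_dual(s_eff, mq):
    # Local-duality sum rule at tau = 0: f_H^2 = int_{(mQ+mq)^2}^{s_eff} rho ds
    val, _ = quad(lambda s: rho(s, mq), (mQ + mq) ** 2, s_eff)
    return val

def solve_threshold(f2_target, mq):
    # Invert the sum rule: find the s_eff reproducing a given lattice f_H^2
    return brentq(lambda s: f2_dual(s, mq) - f2_target,
                  (mQ + mq) ** 2 + 1e-6, 80.0)

# Two benchmarks (placeholders): isospin-symmetric and strange heavy mesons
m_ud, m_s = 0.0034, 0.094   # light-quark masses, GeV
f2_ud, f2_s = 0.036, 0.043  # decay constants squared, GeV^2

z_ud = np.sqrt(solve_threshold(f2_ud, m_ud)) - mQ
z_s = np.sqrt(solve_threshold(f2_s, m_s)) - mQ

# Linear Ansatz: z(mq) = z0 + z1*mq, fixed by the two benchmark points
z1 = (z_s - z_ud) / (m_s - m_ud)
z0 = z_ud - z1 * m_ud
print(f"z0 = {z0:.3f} GeV, z1 = {z1:.2f}")
```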
As we have already noted, because of the truncation of the perturbative expansion, the truncated spectral densities depend on the scale μ. Obviously, the parameters z̃_0 and z̃_1 are also μ-dependent.
For fixing the parameters of the effective thresholds, we make use of results from lattice QCD for the ratios of decay constants of strange and isospin-symmetric heavy mesons, e.g., f_{D_s}/f_D = 1.1716 ± 0.0032 [24]. In these formulae, f_H denotes the decay constant of the isospin-averaged heavy-light meson with the light-quark mass m_ud, whereas f_{H_s} denotes the decay constant of the heavy strange meson. Table 1 summarizes the effective thresholds corresponding to our three Ansätze and presents our estimates of the strong IB effect. For our final estimates, we perform a bootstrap analysis of the uncertainties, assuming that the OPE parameters in (33) have Gaussian distributions with the corresponding Gaussian errors, whereas the scale μ has a flat distribution in the range 1 < μ (GeV) < 3 for charmed mesons and 3 < μ (GeV) < 6 for beauty mesons.
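The bootstrap described above can be sketched schematically as follows; the central values, errors and the closed-form stand-in for the sum-rule evaluation are hypothetical, and only the Gaussian/flat sampling strategy reflects the text:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000  # bootstrap samples

# OPE parameters with Gaussian distributions (placeholder values and errors)
mQ = rng.normal(4.18, 0.03, N)         # heavy-quark mass, GeV
alpha = rng.normal(0.1182, 0.0008, N)  # alpha_s(M_Z)
# Flat scale distribution, e.g. 3 < mu < 6 GeV for beauty mesons
mu = rng.uniform(3.0, 6.0, N)

def f_H(mQ, alpha, mu):
    """Hypothetical closed-form stand-in for the full sum-rule evaluation;
    a real analysis would integrate the spectral density for each sample."""
    return (0.19 + 0.02 * (mQ - 4.18)
            - 0.01 * np.log(mu / 4.5)
            + 0.1 * (alpha - 0.1182))

samples = f_H(mQ, alpha, mu)
lo, med, hi = np.percentile(samples, [16, 50, 84])
print(f"f_H = {med:.4f} -{med - lo:.4f} +{hi - med:.4f} GeV")
```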
As soon as the effective thresholds are known, we readily obtain the decay constants f_{H_q} as functions of the scale-independent ratio (m_q − m_ud)/(m_s − m_ud). The results for the ratios of the decay constants f_{H_q}/f_{H_ud} are shown in Figs. 2 and 3.
Notice that the results corresponding to a constant effective threshold [Ansatz (1)] are quite close to those obtained including the m_q-dependence [Ansatz (2)] and to the results of Ref. [21], which include effects in the decay constants at any order in the light-quark mass. An important conclusion to be drawn from our results is therefore that effects of order O(m_q^2) in the effective threshold are not crucial for describing the m_q-dependence of the decay constants and for estimating the slope of the IB effect at the physical value of the light-quark mass: both are determined to a large extent by the known m_q-dependence of the spectral densities and can thus be reliably controlled in our approach.
5 Summary and conclusions
We addressed the local-duality (LD) limit, τ = 0, of the Borel QCD sum rules for the decay constants of heavy-light pseudoscalar and vector mesons. An invaluable feature of the LD limit is that, for a proper choice of the correlation function, all vacuum-condensate contributions vanish and the full nonperturbative QCD dynamics is parameterized in terms of merely one quantity, the effective threshold. Our analysis demonstrates that the effective threshold has a nontrivial functional dependence on the masses of the heavy and the light quarks, m_Q and m_q, respectively. This dependence has been parameterized in the form suggested by the behaviour of the decay constants in the known limits: the chiral limit for m_q and the heavy-quark limit for m_Q. In the heavy-quark limit, we clarify the role played by the radiative corrections in the effective threshold for reproducing the pQCD expansion of the decay constants of pseudoscalar and vector mesons.

Fig. 2 caption (fragment): ... the thresholds of Table 1 are displayed. We also show results from an alternative analysis based on Borel QCD sum rules [21]. (a) (b)

Fig. 3: The same as in Fig. 2, but for pseudoscalar b̄q (a) and vector b̄q (b) mesons.
This paper elucidates the dependence of the decay constants on a light-quark mass m_q in the range m_ud < m_q < m_s. Fixing a few numerical parameters of the effective threshold by using the available accurate inputs from lattice QCD, we have derived the full analytic dependence of the decay constants f_H(m_q) on the light-quark mass m_q. This dependence emerges from two sources: (i) the m_q-dependence of the QCD perturbative spectral densities, known explicitly as an expansion in powers of α_s, and (ii) the m_q-dependence of the effective threshold, known approximately. An important outcome of our analysis is that the variation of the decay constants with respect to m_q comes to a great extent (70-80% of the full effect) from the rigorously calculable m_q-dependence of the perturbative spectral densities and is therefore under good theoretical control.
It is noteworthy that the known perturbative expansion of the correlation functions [32-37], in which the sea-quark mass effects are neglected, limits the accuracy of the decay constants of the heavy-light mesons to O(m_s ā^2), with ā ∼ 0.1 at the appropriate renormalisation scales. Therefore, the accuracy of the individual decay constants obtained from QCD sum rules does not exceed a few MeV. Nevertheless, we would like to emphasize that the IB difference of the decay constants, f_{M_d} − f_{M_u}, in which the sea-quark contributions of order O(m_{s,u,d} ā^2) cancel each other, may be predicted with a much higher accuracy, O(δm ā^2). Therefore, the proposed method can potentially provide a higher accuracy for the IB effects than other approaches.
As our final estimates of the IB, we take the averages of the results corresponding to the linear and the linear + log effective thresholds in Table 1. The sizeably larger uncertainties of the IB in the decay constants of vector mesons, compared to pseudoscalar mesons, are related to the larger uncertainties of the input lattice-QCD results for the corresponding ratios f_{H_s}/f_{H_ud}. These estimates are in good agreement with the results of our recent analysis within a different version of QCD sum rules, the Borel sum rules with a τ-dependent threshold [21]. The only exception is the D* case, where one observes some tension between the two sets of results; note, however, that the uncertainties of these predictions are rather large.
Very recently, a new precise determination of the strong IB effect in the decay constants of D- and B-mesons has been carried out by the FNAL and MILC lattice collaborations [48].
In the charm sector, their result is f_{D^+} − f_{D^0} = 1.13(15) MeV, which nicely agrees with our findings (35) and (39). As for the bottom sector, it is shown that the available HPQCD and RBC/UKQCD calculations [49-51] significantly overestimate the strong IB effect because of an inappropriate use of unitary lattice points (i.e., those having the same mass for valence and sea light quarks). The FNAL/MILC result is f_{B^0} − f_{B^+} = 1.12(15) MeV, which is in excellent agreement with our findings (37) and (41).
Thus, our sum-rule predictions are nicely confirmed quantitatively by lattice QCD, both for the central values and for the overall uncertainties. This is reassuring: the strong IB effect and its uncertainty in the decay constants of heavy-light mesons can be reliably and accurately estimated within the QCD sum-rule approach.
It should be emphasized that the present approach, based on the combination of the OPE and a few inputs from lattice QCD, potentially has fewer theoretical uncertainties than other formulations of QCD sum rules: first, the condensate contributions, in particular those of the quark condensate, which produce the main OPE error in the decay constants, vanish in the LD limit; second, the systematic uncertainty of the sum-rule method is now encoded in only one quantity, the effective threshold, which may be fixed to good accuracy thanks to the use of a few accurate lattice inputs.
Thus, QCD sum rules for the mass dimension-2 Borelized invariant amplitudes at τ = 0 (i.e., an infinitely large Borel mass parameter) provide an efficient tool for the analysis of the dependence of decay constants (and potentially of other hadron observables) on quark masses.
Finally, we want to mention that, besides the strong IB effect due to the up and down quark mass difference, there are other isospin violating effects due to electromagnetism, i.e. to the difference between the up and down quark electric charges. However, the inclusion of such electromagnetic corrections within a sum-rule approach is not a trivial task and it requires the development of new strategies going beyond the traditional QCD sum-rule approaches. In this respect it is worth mentioning a new lattice strategy [52] developed to deal with QCD + QED effects on quantities that require the cancellation of infrared divergences in the intermediate steps of the calculation, like, e.g., the decay rate of charged pseudoscalar mesons [53].
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Funded by SCOAP3.
Appendix: Isospin breaking in the OPE
The two-point Green function Π of interest is given by the functional integral over the quark and gluon fields of the (gauge-invariant) operators j_1 and j_2, constructed from quark and gluon fields, weighted by exp(i ∫ d^4x L(x)), with

L(x) = L^{(0)}(x) − (δm/2) [d̄d(x) − ūu(x)],   δm = m_d − m_u.

Here, L^{(0)}(x) is the SU(2)-symmetric Lagrangian describing two equal-mass quarks, with the quark-mass term m(d̄d + ūu), m ≡ (m_d + m_u)/2.
After expanding Eq. (43) in powers of δm, one finds the expansion (47), in which the superscript "0" indicates that the full Green functions correspond to the SU(2)-symmetric QCD with two light quarks of degenerate mass m. Let us emphasize the appealing feature of the expansion (47): at each order in δm, one encounters the full Green functions of the SU(2)-symmetric QCD. One may expect that the power corrections in the OPE for the three-point Green function Γ of the SU(2)-symmetric QCD are the SU(2)-symmetric condensates, e.g., ⟨ūu⟩ = ⟨d̄d⟩ ≡ ⟨q̄q⟩. However, in order to obtain the contributions of interest, we need to perform the limit q → 0. This step cannot be done easily: the OPE for Γ is given in terms of local SU(2)-symmetric condensates only if one keeps q^2 large and negative; a straightforward extension of the known power corrections to q → 0 leads to a wrong result. Indeed, it is well known that if one naively extends the power corrections in the vector three-point function to q = 0, they do not satisfy the Ward identity (see, e.g., [26-29]).
On the other hand, one can proceed by expanding Π(p) in powers of the small quark mass; then the mass derivatives emerge. Translating the expression (47) into momentum space, to O(δm) accuracy one obtains

Π(p) = Π^{(0)}(p) ± (δm/2) Γ^{(0)}(p, q = 0) + O(δm^2),

with the sign determined by the flavour of the light valence quark, where Π^{(0)}(p) is the full two-point function of the SU(2)-symmetric QCD, and Γ^{(0)}(p, q = 0) is the three-point function of the scalar current q̄q at zero momentum transfer, also calculated in the full SU(2)-symmetric theory. Consequently, finding the leading-order SU(2)-breaking effects reduces to calculating the Green functions in SU(2)-symmetric QCD. Using the well-known relation Γ^{(0)}(p, q = 0) = ∂Π^{(0)}(p)/∂m, the three-point function at zero momentum transfer may be related to the mass derivative of the two-point function, which then leads to the appearance of the mass derivatives of the quark condensate.
"year": 2018,
"sha1": "bbd2bdb3ec7bb295c81cfeef2906705d71748ca0",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1140/epjc/s10052-018-5637-z.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c2d0635eebcad43eddf694b1dd91b3ebce217391",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
eIF2A represses cell wall biogenesis gene expression in Saccharomyces cerevisiae
Translation initiation is a complex and highly regulated process that represents an important mechanism controlling gene expression. eIF2A was proposed as an alternative initiation factor; however, its role and biological targets remain to be discovered. To gain further insight into the function of eIF2A in Saccharomyces cerevisiae, we identified mRNAs associated with the eIF2A complex and showed that 24% of the most enriched mRNAs encode proteins related to cell wall biogenesis and maintenance. In agreement with this result, we showed that an eIF2A deletion sensitized cells to cell wall damage induced by calcofluor white. eIF2A overexpression led to a growth defect, correlated with decreased synthesis of several cell wall proteins. In contrast, no changes were observed in the transcriptome, suggesting that eIF2A controls the expression of cell wall-related proteins at a translational level. The biochemical characterization of the eIF2A complex revealed that it strongly interacts with the RNA-binding protein Ssd1, a negative translational regulator controlling the expression of cell wall-related genes. Interestingly, eIF2A and Ssd1 bind several common mRNA targets, and we found that the binding of eIF2A to some targets was mediated by Ssd1. Surprisingly, we further showed that eIF2A is physically and functionally associated with the exonuclease Xrn1 and other mRNA degradation factors, suggesting an additional level of regulation. Altogether, our results highlight new aspects of the complex, redundant and fine-tuned regulation of the expression of proteins related to the cell wall, a structure required to maintain cell shape and rigidity, providing protection against harmful environmental stress.
Introduction
mRNA translation is a key cellular process composed of four stages: initiation, elongation, termination and ribosome recycling. Protein synthesis is mainly regulated during translation initiation, which mostly occurs through a 5'-cap-dependent mechanism. The initiation process begins with the formation of a ternary complex between the eukaryotic initiation factor 2 (eIF2, consisting of the three subunits α, β and γ), one molecule of GTP and the Met-tRNA. The 43S pre-initiation complex (PIC), composed of the ternary complex, the 40S ribosomal subunit and different initiation factors (eIF1, eIF1A, eIF5 and eIF3), is recruited to the cap structure at the 5' end of mRNAs. The eIF4F complex, consisting of the helicase eIF4A, the cap-binding protein eIF4E and the scaffolding protein eIF4G, is then recruited to form the 48S initiation complex (IC). After unwinding of the mRNA cap-proximal regions, the PIC scans the 5'-UTR until it encounters the initiation codon, which triggers eIF2-GTP hydrolysis, mediated by eIF5, and the release of initiation factors, including eIF2. The subsequent recruitment of eIF5B strengthens the association of the initiator tRNA with the small ribosomal subunit and its joining with the 60S ribosomal subunit, resulting in the formation of an 80S ribosome competent for polypeptide synthesis [1-3 and references therein].
Although most mRNAs use this canonical mechanism, translation initiation can also be mediated by alternative cap-independent mechanisms, for example through internal ribosome entry sites (IRES). This mechanism requires specific, often structured, RNA sequences within 5' UTRs that are recognized by the 40S ribosomal subunit to initiate translation, bypassing the scanning process. In mammalian cells, IRES-driven translation was first discovered in viral mRNAs, which are usually uncapped [4,5]. IRES translation is not limited to viral mRNAs, since 10 to 15% of cellular mRNAs can also be translated by this alternative mechanism under stress conditions, such as DNA damage, amino acid starvation or hypoxia, that could impair the canonical translation initiation pathway [6,7].
Eukaryotic translation initiation is a highly complex process, and although enormous progress has been made in elucidating the molecular mechanisms controlling the initiation of protein synthesis, the function of many factors still remains unclear.
In mammals, in addition to eIF2, there is another factor, eIF2A/IF-M1, which coordinates the binding of the initiator tRNA to the 40S ribosomal subunit. In contrast to eIF2, eIF2A does not require GTP for delivering the Met-tRNA to the 40S ribosomal subunit [8,9]. eIF2 activity is highly regulated by four stress-activated kinases in response to challenging environments, such as amino acid starvation or virus infection. For example, the "general control non-derepressible 2" kinase (GCN2) phosphorylates the eIF2α subunit, inhibiting the eIF2B-mediated nucleotide exchange from eIF2-GDP to eIF2-GTP and thus downregulating eIF2-dependent translation initiation [10 for review]. Translation of a subset of cellular and viral mRNAs is refractory to the inhibitory effects of eIF2α phosphorylation, as shown in the case of the hepatitis C virus (HCV) mRNA. During HCV infection, eIF2A coordinates translation of the HCV mRNA through its recruitment to the HCV IRES [11]. eIF2A also mediates translation initiation of the subgenomic (26S) Sindbis virus mRNA in the absence of functional eIF2 [12]. It has been proposed that under standard conditions, translation initiation is mainly mediated by the canonical pathway via eIF2, while eIF2A may be required to ensure persistent translation initiation under stress conditions. However, others did not observe any eIF2A involvement in the translation driven by the hepatitis C virus IRES in human cells [13] or in the translation of the Sindbis subgenomic mRNA [14], showing that the function of eIF2A in the regulation of translation initiation of viral mRNAs is not yet fully understood. Furthermore, under stress conditions, eIF2A was also required for the translation of the mRNA of the non-receptor protein tyrosine kinase c-Src through its recruitment to the IRES, which is required for cell proliferation and programmed cell death [15].
Protein synthesis is usually initiated at an AUG, but translation initiation can also occur at non-AUG start codons, such as AUA, UUG, CUG, ACG or GUG [16-19]. Furthermore, in response to a range of physiological changes, such as accumulation of misfolded proteins, amino acid starvation, viral infection or ER stress, the integrated stress response (ISR) is activated, leading to eIF2α phosphorylation and downregulation of cap-dependent protein synthesis. However, translation of ISR-induced proteins must be maintained, as was reported for the endoplasmic reticulum chaperone BiP (binding immunoglobulin protein). The BiP mRNA harbors upstream open reading frames (uORFs) in its 5'UTR, and eIF2A functions as a cis-acting regulatory element required for UUG-initiated uORF translation during the ISR [20].
eIF2A homologs are found in a wide range of eukaryotic species, suggesting a conserved physiological function. In Saccharomyces cerevisiae, the eIF2A homolog is encoded by the non-essential gene YGR054W [21]. This yeast eIF2A protein shares 28% identity and 58% similarity with human eIF2A [21]. eIF2A specifically associates with the 40S and 80S ribosomal subunits but is not found in the polysome fractions. Furthermore, eIF2A physically and genetically interacts with the translation initiation factors eIF5B and eIF4E. Indeed, eIF2A deletion associated with either an eIF5B deletion or an eIF4E-ts mutation strongly delayed cell growth compared to the wild-type strain [21,22]. Moreover, eIF2A acts as a negative regulator of the IRES-mediated translation of the URE2 mRNA, encoding a regulator of nitrogen metabolism [22,23].
Both mammalian and yeast studies show that under standard conditions, eIF2 is the major initiation factor, while eIF2A appears to act in an alternative translation initiation pathway [24 for review]. However, considering the diverse roles reported for eIF2A, the function of this protein currently remains obscure.
In this study, we showed that eIF2A specifically binds mRNAs encoding proteins required for cell wall biogenesis. eIF2A overexpression did not induce major changes in the transcriptome but decreased the amount of several cell wall proteins, strongly suggesting that eIF2A controls the expression of its mRNA targets at the translational level. We also showed that eIF2A interacts with the RNA-binding protein Ssd1 independently of RNA, and that this interaction is required for eIF2A binding to some of its targets. Altogether, these results are consistent with a role of eIF2A as a protein interfering with the translation of a very specific population of mRNAs coding for cell wall-related proteins.
Yeast and E. coli strains, growth conditions
Yeast strains used in this study are listed in S1 Table. All S. cerevisiae strains were derived from BY4741 and grown at 30˚C in YPGlu rich medium or in -URA or -HIS minimal medium. When necessary, media were supplemented with antibiotics at the following concentrations: 0.5 mg/ml G418, 0.25 mg/ml hygromycin, or 10 μg/ml doxycycline to repress gene expression under the control of the P_tet-off promoter, or with 100 μM auxin/IAA (SERVA) to deplete the Xrn1 protein fused to the degron.
The yeast strains were constructed by homologous recombination, using PCR fragments to transform the appropriate strains (S1 Table). The pAG32 plasmid was used to replace the KanMX6 cassette by the HphMX4 marker, conferring hygromycin resistance (S2 Table).
NEB 10-beta E. coli competent cells were used as the general cloning host and were grown at 37˚C in LB medium supplemented with 50 μg/ml ampicillin.
To induce cell wall damage, CFW (Fluorescent Brightener 28 disodium salt solution, Sigma-Aldrich) was added at a final concentration of 100 μg/ml for -URA medium and either 150 or 250 μg/ml for YPGlu medium.
DNA manipulation. Plasmid DNA was extracted from E. coli using the NucleoSpin plasmid miniprep kit (Macherey-Nagel). S. cerevisiae chromosomal DNA was isolated as previously described [25]. PCRs were carried out from bacterial colonies with Q5 high-fidelity DNA polymerase (NEB) to amplify DNA fragments used for cloning or strain constructions. PCR products were purified using a PCR clean-up kit (Macherey-Nagel).
Plasmid constructions. Plasmids used in this work are listed in S2 Table. The eIF2A and SSD1 coding sequences were amplified with the appropriate oligonucleotides (S3 Table), using genomic DNA from the BY4741 strain as the template. The PCR fragments were digested with the BamHI/NotI enzymes and then ligated into the expression vector pCM190, under the control of the P_tet-off promoter, which is repressed by doxycycline.
eIF2A or SSD1 overexpression. A pool of transformants harboring the pCM190:eIF2A or pCM190:SSD1 vectors was grown at 30˚C in -URA medium containing doxycycline up to an OD600nm of 0.6. Cells were harvested, washed twice with -URA medium without doxycycline and diluted to an OD600nm of 0.08 in -URA medium devoid of doxycycline. After 5 hours at 30˚C with shaking, to allow eIF2A or SSD1 overexpression, cells were harvested and the pellets were stored at -80˚C.
eIF2A-TAP affinity purification for mass spectrometry analysis and RNA sequencing. The yeast strain expressing the eIF2A-TAP protein was grown in YPGlu medium overnight at 30˚C with shaking. We used a strain in which the catalytic site of Xrn1 was mutated (D206A), in order to globally protect mRNAs from degradation. Upon reaching an OD600nm of 2.0, two liters of culture were centrifuged at 4˚C; cell pellets were washed with cold water and subsequently stored at -80˚C. Cells were then resuspended in 1 mL of cell lysis buffer per gram of cells (20 mM Hepes pH 7.4, 10 mM MgCl2, 100 mM KOAc) containing a protease-inhibiting reagent (Roche). The cell suspension was then added to 500 μL/mL acid-washed glass beads and vortexed three times for 40 sec at 4˚C, 6 m/sec (MP FastPrep, Fisher Scientific). The obtained cell lysate was clarified by centrifugation (14,000 rpm, 4˚C, 20 min), 0.5% Triton was added, and an aliquot of lysate was taken prior to eIF2A-TAP purification for the analysis of total proteins (input). Next, 25 μL of covalently coupled IgG-Dynabeads magnetic beads were resuspended in lysis buffer, added to the remaining supernatant and incubated for 2 hours at 4˚C with gentle agitation. The beads were harvested and washed five times with washing buffer (20 mM Hepes pH 7.4, 10 mM MgCl2, 100 mM KOAc, 0.5% Triton) and once in lysis buffer. Proteins associated with eIF2A-TAP were eluted by incubation in elution buffer (2% SDS and 1X TE), combined with heat treatment at 65˚C for 15 min and gentle agitation (300 rpm). For mass spectrometry analysis, SDS was removed from the supernatant using a HiPPR Detergent Removal Resin kit (Thermo Fisher Scientific), and proteins were precipitated by the methanol/chloroform method [26].
For Co-Immunoprecipitation, yeast strains expressing eIF2A-TAP or Ssd1-TAP together with Ssd1-HA or eIF2A-HA (S1 Table) were used, and eIF2A-TAP or Ssd1-TAP purification was done as described above, except that the beads were washed 3 times and, before elution of the Ssd1- or eIF2A-associated complexes, samples were treated with 1 μL of micrococcal nuclease (NEB, 2×10^6 U/mL) or left untreated for 10 min at 37˚C in washing buffer supplemented with 1 mM CaCl2. RNA extraction was performed on the remaining samples (see below, RNA extraction and Northern Blot) to confirm the absence of RNA after RNase treatment. Proteins were denatured, separated on a polyacrylamide gel and detected with the appropriate antibody (S4 Table) using Clarity ECL substrate (Bio-Rad), as described above (see Western Blot analysis).
For the RIP-Seq experiment, upon reaching an OD600nm of 0.8, twelve liters of culture were centrifuged, and eIF2A-TAP purification was carried out as described above, except that 1 μL of RNasin ribonuclease inhibitor (Promega) was added per mL of lysis and washing buffers. Cells were broken by vortexing two times for 90 sec at 3,000 rpm (MagNA Lyser, Roche). The lysates were recovered by centrifugation (20 min, 14,000 rpm, 4˚C) and incubated with magnetic beads for 1 hour at 4˚C. The beads were harvested and washed as described above, except that the last wash was done with buffer not containing MgCl2 (20 mM Hepes pH 7.4, 100 mM KOAc, 1 μL/mL RNasin). mRNAs associated with eIF2A-TAP were resuspended in elution buffer (2% SDS, 1X TE, 30 mM EDTA), eluted after heat treatment at 65˚C for 15 min and extracted as described below (see RNA extraction and Northern Blot). An aliquot of lysate was taken prior to eIF2A-TAP purification for the analysis of total RNA (input).
After transfer onto a Nylon membrane (BrightStar-Plus, Invitrogen), RNAs were UV-crosslinked at 0.120 Joules and revealed with strand-specific DIG-labeled riboprobes. RNA probes were transcribed using the DIG RNA Labeling Kit (SP6/T7, Roche), with pre-annealed oligonucleotides or PCR products including the T7 promoter as templates (S3 Table). Probes were then purified using Illustra MicroSpin G-25 columns (GE Healthcare) and diluted in hybridization buffer (Ambion ULTRAhyb Ultrasensitive, Invitrogen). After denaturation at 85˚C for 5 min, RNA probes were used to hybridize the Nylon membrane overnight at 60˚C or 65˚C for probes generated from pre-annealed oligonucleotides or PCR products, respectively. After washing using the Wash and Block Buffer Set (Roche), DIG-labeled RNA probes were revealed with the anti-DIG antibody (S4 Table) and Clarity ECL substrate (Bio-Rad).
Illumina RNA sequencing
Five micrograms of RNA were depleted of abundant ribosomal RNA using the RiboMinus Transcriptome Isolation Kit (Invitrogen), and the remaining RNAs were then sequenced using the TruSeq Stranded mRNA kit (Illumina). Briefly, mRNA samples were chemically fragmented and used as templates to be transcribed into first-strand complementary DNA (cDNA) using reverse transcriptase and random primers. The second-strand cDNA was then produced and, after purification with AMPure beads, the 3' ends of the blunt fragments were adenylated and index adapter sequences were added by PCR amplification, generating a dual-indexed library. The resulting products were purified using AMPure beads, and the concentration and quality of the libraries were checked by Qubit and Bioanalyzer (Agilent). Libraries were pooled at a final concentration of 2.1 pM, denatured and sequenced on a NextSeq500 sequencing system.
RNA-Seq data analysis
After demultiplexing and removal of adapter sequences from the Fastq files with Cutadapt [28], reads were mapped onto the S. cerevisiae S288C genome using RNA STAR [29]. Default parameters were used, except for the maximum intron size (1500), the maximum gap between two mates (1500) and the minimum overhang for spliced alignments (25); the annotated GTF file for Saccharomyces cerevisiae (R64-1-1.104) from ENSEMBL was used for mapping. Indexed BAM files were generated using Samtools sort [30], and read counts were then obtained using featureCounts [31], with a GTF file from [32] as the gene annotation file. Default parameters were used, except that both multi-mapping and multi-overlapping features were included. Differential analysis of the RNA-Seq data from three independent biological replicates was performed with SARTools, using the DESeq2 software [33]. To select the most significantly enriched targets based on a single factor, we multiplied the log2(fold change) by -log10(p-value) to generate a Vfactor, which depends both on the level of variation and on its significance. We retained the mRNAs having a Vfactor > 50. This filter led to the selection of 146 targets as the mRNAs most enriched by eIF2A.
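For concreteness, the Vfactor selection can be expressed in a few lines of Python; the file name and column names below are hypothetical stand-ins for the actual DESeq2 output:

```python
import numpy as np
import pandas as pd

# Hypothetical DESeq2 result table, one row per gene (assumed file name)
deseq = pd.read_csv("eif2a_rip_vs_input_deseq2.tsv", sep="\t")

# Vfactor combines the enrichment level and its significance
deseq["Vfactor"] = deseq["log2FoldChange"] * (-np.log10(deseq["pvalue"]))

# Keep mRNAs with Vfactor > 50 (146 targets in the actual analysis)
targets = deseq[deseq["Vfactor"] > 50].sort_values("Vfactor", ascending=False)
print(len(targets), "enriched mRNAs")
```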
LC-MS acquisition
Briefly, after reduction and alkylation, protein samples were digested with endoproteinase Lys-C (Wako) and trypsin (Trypsin Gold, Mass Spec Grade; Promega). LC-MS/MS analysis of the digested peptides was performed on an Orbitrap Q Exactive Plus mass spectrometer (Thermo Fisher Scientific, Bremen) coupled to an EASY-nLC 1200 (Thermo Fisher Scientific). Mass spectra were acquired in data-dependent acquisition mode, with automatic switching between MS and MS/MS scans using a top-10 method.
Protein database search
All RAW files were processed together in a single run by MaxQuant [34] version 2.0.3.0 with default parameters unless otherwise specified (http://www.maxquant.org). Database searches were performed with the built-in Andromeda search engine against the reference yeast proteome (downloaded on 2021.10.09 from UniProt, 6050 entries). The precursor mass tolerance was set to 6 ppm in the main search, and the fragment mass tolerance was set to 20 ppm. Digestion enzyme specificity was set to trypsin, with a maximum of two missed cleavages. A minimum peptide length of 7 residues was required for identification. Relative label-free quantification of proteins based on intensities was done using the MaxLFQ algorithm integrated into MaxQuant [35]. Proteins that shared the same identified peptides were combined into a single protein group.
Proteomic data analysis
To identify interactors, replicates of affinity-enriched bait samples were compared to a set of negative control samples (n ≥ 3). Proteomics data analysis was performed in the Perseus environment (version 1.6.15, https://maxquant.org/perseus/) [36]. The "proteinGroups.txt" file from MaxQuant was loaded. Protein groups identified by a single "razor and unique" peptide were filtered out of the data set. Protein-group LFQ intensities were log2-transformed. A minimum fraction of valid values (60%) was required in at least one group. Missing values were assumed to be biased toward low-abundance proteins below the MS detection limit. Imputation of these missing values was performed separately for each sample, drawing from a distribution with a width of 0.3 and a downshift of 1.8. Student's t-tests of the LFQ intensities, as implemented in Perseus, were used for the statistical analysis (all data sets approximated normal distributions), with FDR = 0.01 (false discovery rate) [36]. Significant interactors were determined by a volcano-plot-based strategy, combining t-test p-values with protein-ratio information.
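The filtering, imputation and testing steps described above can be approximated in Python as follows; this is a simplified re-implementation for illustration, not the Perseus code itself, although the width/downshift values are the ones quoted in the text:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def impute(log2_lfq):
    """Replace missing log2 LFQ intensities, column by column, with draws
    from a downshifted Gaussian (width 0.3, downshift 1.8), as in Perseus."""
    x = log2_lfq.copy()
    for j in range(x.shape[1]):
        col = x[:, j]
        missing = np.isnan(col)
        mu, sd = np.nanmean(col), np.nanstd(col)
        col[missing] = rng.normal(mu - 1.8 * sd, 0.3 * sd, missing.sum())
    return x

def enrichment(bait, ctrl):
    """bait, ctrl: proteins x replicates arrays of log2 LFQ intensities
    (NaN for missing). Returns log2 ratios and t-test p-values per protein."""
    b, c = impute(bait), impute(ctrl)
    _, p = stats.ttest_ind(b, c, axis=1)
    return b.mean(axis=1) - c.mean(axis=1), p
```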
eIF2A binds mRNAs encoding proteins required for the cell wall organization
While the function of eIF2A is not yet fully understood, some data from the literature indicate that eIF2A could be a non-canonical translation initiation factor [24 and references therein].
To identify mRNAs associated with eIF2A, we performed an RNA-binding-protein immunoprecipitation followed by sequencing (RIP-Seq). For this purpose, we affinity-purified an eIF2A-TAP protein, which is functional, as it allowed the growth of cells under stress conditions that affected the eif2aΔ strain, as shown in S1A Fig. Total and eIF2A-associated mRNAs from exponential-phase cultures were extracted, reverse-transcribed into cDNA and sequenced. Enrichment of each mRNA in the immunoprecipitated fraction relative to total RNA was calculated from three independent biological replicates and normalized using DESeq2 [33]. We found 146 mRNAs significantly enriched with eIF2A, according to the selection criteria that we applied (see Materials and Methods) (Fig 1, S5 Table and S1 Dataset). Gene ontology (GO) term enrichment analysis was done on the Saccharomyces Genome Database (SGD) site according to [37]. It revealed that 24% of the selected mRNAs encode proteins required for cell wall biogenesis, with a p-value of 6.56×10^-22. This high percentage reveals a strong enrichment for mRNAs involved in cell wall biogenesis, as this class of mRNAs represents less than 3% of the total mRNA [38]. Among the most highly enriched mRNAs, we found TOS1 and CCW14, which encode covalently bound cell wall proteins; SUN4, which encodes a glucanase localized in bud scars; and CTS1 and SRL1, which respectively encode an endochitinase and a mannoprotein exhibiting a tight association with the cell wall.

The specific association of cell wall biogenesis-related mRNAs with eIF2A suggests that this protein could play a role in cell wall homeostasis. The yeast cell wall is composed of a network including β-1,3-glucan, β-1,6-glucan, chitin and mannoproteins [39]. We decided to address the involvement of eIF2A in the control of cell wall integrity by using calcofluor white (CFW), a drug which binds chitin and interferes with cell wall biogenesis [40]. Compared to the wild-type strain, growth of the eif2aΔ mutant was not significantly affected in YPGlu rich medium (Fig 2A) but was delayed when cells were cultivated in the presence of CFW (Fig 2B). This slow-growth phenotype was fully restored by episomal expression of eIF2A in the eif2aΔ mutant strain (S1B Fig). Note that, to limit eIF2A induction and consequently avoid its toxicity, the cells were plated on YPGlu instead of on -URA. In contrast, eIF2A overexpression from the pCM190:eIF2A vector severely impaired the growth of the wild-type strain under standard conditions (Fig 2C), but CFW addition did not increase the cell growth defect observed when eIF2A was overexpressed, suggesting that eIF2A overexpression is epistatic over the effect of CFW (Fig 2D).
Together, our results are consistent with eIF2A association with cell wall-related mRNAs and strengthen the hypothesis that eIF2A could be involved in the expression of transcripts required for cell wall biogenesis.
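As a side note on the GO statistics above, the strength of such an over-representation (24% of 146 targets versus ~3% of the transcriptome) can be gauged with a hypergeometric test, as GO-term tools do; the background totals below are rough assumptions for illustration, so the resulting p-value only approximates the reported 6.56×10^-22:

```python
from scipy.stats import hypergeom

M = 6000  # approximate number of yeast genes in the background (assumed)
n = 180   # cell wall biogenesis genes in the background, ~3% (assumed)
N = 146   # eIF2A-enriched mRNAs
k = 35    # cell-wall mRNAs among them, ~24% of 146

# P(X >= k) under random sampling without replacement
p = hypergeom.sf(k - 1, M, n, N)
print(f"p = {p:.2e}")
```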
To validate the RIP-Seq results, we investigated four identified eIF2A mRNA targets (TOS1, CCW14, SUN4, CLN1) by Western Blot analysis, to further explore the behavior of the corresponding TAP-tagged proteins upon eIF2A overexpression. Interestingly, eIF2A overexpression reproducibly decreased the amount of the Tos1, Ccw14 and Sun4 proteins by approximately 2-fold, and that of Cln1 by 5-fold, while, as a control, the G6PDH protein level did not change (Fig 3A and 3B).
These changes are in agreement with the RIP-Seq data. To further explore whether the protein-level changes observed upon eIF2A overexpression occurred at the transcriptional or translational level (Fig 3), we performed a genome-wide RNA sequencing (RNA-Seq) analysis. For that purpose, we compared the transcriptomes of strains harboring either pCM190:eIF2A or the pCM190 empty vector and showed that no major changes in global mRNA levels were detected after 5 hours of eIF2A overexpression (Fig 4A and S2 Dataset). Similarly, compared to the wild-type strain, the absence of eIF2A did not substantially disturb the transcriptome (Fig 4B and S2 Dataset), suggesting that eIF2A does not control the expression of its mRNA targets at the transcriptional level.
Together, our results, combined with the fact that eIF2A associates specifically with the 40S and 80S ribosomal subunits [21], strongly suggest that eIF2A acts as a negative translational regulator of a class of mRNAs involved in cell wall organization and biogenesis.
The RNA binding protein Ssd1 is highly enriched by eIF2A independently of RNA
To gain further insight into the mechanistic role of eIF2A, we performed an in vivo affinity purification of proteins using eIF2A-TAP as bait. After purification on IgG-coupled magnetic beads, the eIF2A-associated complex was analyzed on a silver-stained polyacrylamide gel (Fig 5A), and eIF2A interaction partners were identified and quantified by mass spectrometry (S3 Dataset). A total of 1509 proteins were identified in either the input or the purified samples, but only 1037 were quantified in both. The enrichment level of each protein is indicated in the S3 Dataset. A volcano plot (Fig 5B) represents the analysis of the label-free quantitative (LFQ) MS data. This analysis highlighted three groups of proteins significantly enriched by eIF2A. The most enriched proteins are related to mRNA degradation pathways, such as the 5'-3' exonuclease Xrn1 [41] and Ska1, a SKI-complex-associated protein involved in the degradation of mRNAs containing long, ribosome-free 3' UTRs [42] (Fig 5B). We also found the mRNA decapping complex (Dcp1, Dcp2, Edc3 and Pby1) [43] and deadenylation-dependent mRNA decapping enhancers, including the Lsm1-7 complex and Pat1 [44,45] (see below) (Fig 5B). As expected, a second group of eIF2A-enriched proteins (RPL/RPS) is related to the small 40S and large 60S ribosomal subunits. Interestingly, we observed a robust enrichment of Ssd1 (Fig 5B). Ssd1 is an RNA-binding protein (RBP), and previous studies have reported that Ssd1 binds about a hundred mRNAs coding for proteins involved in cell wall biogenesis [46-49]. Ssd1 directly recognizes a consensus motif usually located in the 5' UTR of its mRNA targets [46].
We confirmed the eIF2A-Ssd1 interaction by Co-Immunoprecipitation using Ssd1-TAP or eIF2A-TAP as bait in a strain expressing eIF2A-HA or Ssd1-HA fusion proteins, respectively. We first showed the ability of the tagged proteins to complement the CFW-sensitive phenotype of the eif2aΔ or ssd1Δ mutants (S1A Fig).
We next carried out purification of Ssd1-TAP or eIF2A-TAP interaction partners and verified by Western Blot that Ssd1 interacts with eIF2A (Fig 5C and S2A Fig). To see whether this interaction requires the presence of RNA, we performed a nuclease treatment before elution of the Ssd1- or eIF2A-associated complex and showed that the eIF2A-Ssd1 interaction was RNA-independent (Fig 5C and S2A Fig). To ensure that RNA was properly digested, we extracted RNA from the immunoprecipitated fractions and verified the absence of RNA in the nuclease-treated samples.
eIF2A requires the presence of Ssd1 to associate with several of its mRNA targets and to regulate the expression of SUN4
RNA binding proteins (RBPs) play an important role in post-transcriptional control of fungal cell wall biogenesis and several studies characterized many RBPs to identify their RNA targets. In a systematic RIP-Seq study, at least four of a set of six cell wall-related RBPs (Ssd1, Khd1/Hek2, Pub1, Mrn1, Scp160 and Nab6) [50 for review] enriched a common subset of 78 mRNAs, suggesting that RBPs act together to regulate cell wall organization.
We observed that eIF2A deletion did not increase the sensitivity of the ssd1Δ mutant to CFW (S3 Fig). This epistatic behavior of ssd1Δ over eif2aΔ suggests that Ssd1 and eIF2A act in the same regulatory pathway in response to cell wall damage.
Comparison of Ssd1-associated mRNAs identified by CRAC analysis [46] with previous RIP and transcriptome data [47-49] showed that 11 mRNAs were systematically bound by Ssd1 (Table 1). Among these 11 mRNAs, 8 were also highly enriched by eIF2A, including CTS1, SUN4 and SRL1 (Fig 1, Table 1, S5 Table and S1 Dataset). As mentioned above, Ssd1 directly binds its mRNA targets [46], while currently none of our results demonstrated that this is the case for eIF2A. Given that Ssd1 and eIF2A seem to share the regulation of several targets, we tested the hypothesis that eIF2A binds some of its mRNA targets through Ssd1.
For that purpose, we performed a RIP experiment of eIF2A in the presence or absence of Ssd1 and evaluated the associated mRNAs by Northern Blot analysis. In agreement with our RIP-Seq data (S5 Table and S1 Dataset), we confirmed that SUN4, CTS1 and SRL1 mRNAs were strongly enriched by eIF2A-TAP in the wild-type strain, while SSD1 deletion dramatically decreased eIF2A association with these mRNAs (Fig 6). However, we found that eIF2A binding to these mRNA targets was not always Ssd1-dependent; for example, CCW14 mRNA was still enriched by eIF2A even in the absence of Ssd1 (Fig 6). As a control, we observed that the RPL28 mRNA, encoding the 60S ribosomal protein L28, was not differentially enriched by eIF2A in the presence or absence of Ssd1 (Fig 6). Finally, in line with the impact of Ssd1 on eIF2A binding to its target mRNAs, the 2-fold decrease in Sun4 protein amount upon eIF2A overexpression was abolished in the absence of Ssd1 (Fig 7). Taken together, our results show that the presence of Ssd1 is required for the binding of eIF2A to SUN4 mRNA and, consequently, is needed to control its translation.

Table 1. eIF2A shares common targets with Ssd1. Ssd1 mRNA targets systematically identified by RIP [47-49] and CRAC [46] experiments. The functions were extracted from SGD (https://www.yeastgenome.org/).
eIF2A physically and genetically interacts with the 5'-3' exonuclease Xrn1
Surprisingly, in addition to Ssd1, we identified, by affinity purification, a strong interaction between eIF2A and several players in the 5' to 3' mRNA degradation pathway, including the exoribonuclease Xrn1 [41], the decapping complex (Dcp1, Dcp2, Edc3 and Pby1) [43] and the cytoplasmic Lsm complex (Lsm1-7 and Pat1) [44,45]. The label-free quantitative analysis of the eIF2A partners identified by mass spectrometry revealed that the most enriched factor was Xrn1 (Fig 5B). This result confirmed the interaction previously identified by Co-Immunoprecipitation using eIF2A-HA as bait [51]. Moreover, a Co-Immunoprecipitation experiment using eIF2A-TAP as bait confirmed the interaction between eIF2A and Xrn1 (S2 Fig). Notably, a Co-Immunoprecipitation experiment using Ssd1-TAP as bait did not purify Xrn1, whereas eIF2A-HA was highly enriched (Fig 5C). These results suggest that eIF2A, but not Ssd1, forms a sub-complex with the 5' to 3' mRNA degradation machinery.
To investigate a potential functional link between Xrn1 and eIF2A, in addition to the physical link, we looked at the effect of combining mutants affecting Xrn1 and eIF2A on viability. Since XRN1 deletion affected cell viability under standard rich culture conditions (Fig 8), we first achieved conditional depletion of Xrn1, using an auxin-inducible degron system (AID) in which rapid degradation of the protein is induced in the presence of auxin (IAA). We combined eIF2A deletion with a Xrn1-degron fusion to compare cell viability when Xrn1 was depleted in the presence or absence of eIF2A. Under standard growth conditions, eIF2A deletion increased the growth defects of the xrn1-deg mutant in the presence of auxin (Fig 8). Interestingly, when we combined the double mutation with the presence of CFW, cell growth was dramatically affected (Fig 8 and S4 Fig). Thus, the synthetic slow-growth phenotype between Xrn1 depletion and the eif2aΔ mutation was amplified when cell wall biogenesis was affected. We noted that Xrn1 depletion in the presence of CFW affected cell growth, highlighting a potential role of Xrn1 in cell wall biogenesis or maintenance.
Discussion
In eukaryotes, translation initiation is a highly regulated process, which involves many factors. Most mRNA translation is initiated by eIF2-mediated binding of the initiator Met-tRNA to the 40S ribosomal subunit. However, an additional factor, eIF2A, has been described to ensure persistent translation initiation of a subset of cellular and viral mRNAs under stressful conditions [24].
The data presented here revealed the negative role of eIF2A in the synthesis of proteins related to cell wall biogenesis in S. cerevisiae. A large fraction of the most enriched mRNAs in the eIF2A-associated complex belongs to the cell wall pathway (Fig 1). eIF2A was important for the cell fate when the cell wall integrity was disturbed (Fig 2). eIF2A overexpression led to a growth defect correlated with a decrease of several cell wall-related proteins encoded by the mRNA targets, notably TOS1, CCW14 and SUN4 (Fig 3), while no major changes were found in the transcriptome compared to the wild-type strain (Fig 4). Taken together, our results combined with the fact that eIF2A-HA is associated with the 40S ribosomal subunit [22], strongly suggest that eIF2A acts as a negative translational regulator. Although eIF2A physically and genetically interacts with the translation initiation factors eIF5B and eIF4E [21,22], polysome profile analysis reported that eIF2A surprisingly associates with the 80S ribosomal subunits [21], which is unusual for initiation factors, commonly released from the 40S prior to 60S ribosomal subunit assembly. It has been previously proposed that eIF2A is slowly released from the initiation complex and participates in a late stage of translation initiation or during translation of the first amino acids, blocking the subsequent elongation step [21]. The exact function of eIF2A in translation regulation remains to be elucidated. The mammalian eIF2A mediates translation initiation of a subset of cellular and viral mRNAs under stress conditions, either through its recruitment to IRES [13,15] or by initiating translation from non-AUG start codons or uORFs. It has already been reported in yeast that eIF2A negatively regulates IRES-mediated translation of URE2 mRNA [22,23], and it would be of interest to investigate whether IRES or alternative start codons are found among the eIF2A mRNA targets.
The budding S. cerevisiae yeast cells are surrounded by a cell wall, which provides protection against environmental stress and maintains the shape and the rigidity of the cell. It is composed of crosslinked molecules, comprising β-1,3 glucans, β-1,6 glucans, mannoproteins and chitin [38 for review, 52]. Cell wall homeostasis is a dynamic process involving hundreds of proteins whose expression must be highly and quickly regulated by many RBPs, in response to environmental changes or depending on the cell cycle state [50 for review]. Ssd1 is one of the most studied RBPs linked to the cell wall. This protein directly binds to the 5'UTR of many cell wall-related mRNAs to mediate translational repression of the bound transcripts [46-49,53]. Two other RBPs, Nab6 and Mrn1, bind the 3'UTRs of mRNAs coding for cell wall proteins, but have antagonistic functions, supporting a model in which both proteins compete for RNA binding [53].
In contrast, in this study, biochemical characterization of the eIF2A-associated complex revealed that eIF2A robustly interacts with Ssd1 (Fig 5 and S2 Fig).
We also showed that the double ssd1Δ eif2aΔ mutant displays the same CFW-sensitive phenotype as the ssd1Δ mutant (S3 Fig) and we found that Ssd1 and eIF2A share some mRNA targets (Table 1). Taken together, our results suggest that Ssd1 and eIF2A act in the same regulatory pathway in response to cell wall damage. Even though Ssd1 and eIF2A have overlapping mRNA targets (Table 1), eIF2A or SSD1 overexpression in the ssd1Δ or eif2aΔ mutants, respectively, did not restore, even partially, resistance to CFW (S1B Fig), suggesting that Ssd1 and eIF2A have distinct functions in the regulatory pathway controlling the synthesis of cell wall-related proteins. Currently, none of our results demonstrate that eIF2A directly binds its mRNA targets, and we observed that the eIF2A-Ssd1 interaction is RNA-independent (Fig 5 and S2 Fig). This is consistent with the predicted structure of eIF2A, which harbors a WD-repeat β-propeller fold in the N-terminal part but no RRM [24]. We also showed that Ssd1 is required for eIF2A binding to CTS1, SUN4 and SRL1 mRNAs (Fig 6) and consequently for the regulation of their expression, for example, SUN4 (Fig 7).
More than 500 RBPs have been identified in S. cerevisiae and at least seven are related to cell wall synthesis, with substantial overlap of their targets [47 for review, 54]. The multiplicity of the RBPs could exert synergistic effects on mRNA stability, localization or translation efficiency. We found that not all eIF2A targets are systematically shared by Ssd1, and some shared eIF2A and Ssd1 targets, for example CCW14 mRNA, are still enriched by eIF2A in the ssd1Δ mutant. This observation leads us to hypothesize that eIF2A might regulate some of its targets through binding to other RBPs. The recruitment of several RBPs might lead to combinatorial effects that could allow a fine-tuned regulation of gene expression.
eIF2A also targets other mRNAs, including mRNAs involved in the cell cycle process, such as the CLN1, CLN2 and CLN3 mRNAs. Since cell wall biogenesis must be coordinated with cell growth, it is not surprising that eIF2A can directly regulate the expression of cell cycle actors. A significant number of eIF2A mRNA targets are related to the mitochondria, plasma membrane and endoplasmic reticulum. Among those, Bgl2, Gas5 [55] or the SUN family genes, such as Sun4 and Uth1, have a cell wall and mitochondrial localization [56]. Furthermore, recently, Barbara Koch and Ana Traven proposed a model in which the cell wall, plasma membrane, endoplasmic reticulum and mitochondria could be interconnected to respond to signal transduction during stress [57]. Our results suggest that eIF2A may be involved in the translation of a set of genes composed of mRNAs coding for macromolecular structure, which could be co-regulated in space and in response to environmental signals.
Finally, we highlighted that eIF2A physically interacts with several actors of mRNA degradation pathways, including the decapping complex and the cytoplasmic Lsm complex, and we found that the most enriched eIF2A interactor is the 5'-3' exonuclease Xrn1 (Fig 5B). We also observed that eIF2A functionally interacts with Xrn1, and this interaction is even more prominent in response to cell wall damage induced by the presence of CFW (Fig 8 and S4 Fig). In contrast, Co-Immunoprecipitation experiments using Ssd1-TAP as bait showed that Xrn1 is not enriched by Ssd1 (Fig 5C). The importance of post-transcriptional regulation by RBPs in ensuring cell wall homeostasis has been extensively characterized [50 for review]. Recently, RNA exosome activity was shown to be necessary for maintaining cell wall stability [49,58,59] and, interestingly, our results here indicate that additional control by the 5'-3' mRNA degradation machinery may exist. Further work is required to better understand the molecular function combining Xrn1 and eIF2A.
Here, we revealed the role of eIF2A as a new actor involved in the maintenance of cell wall homeostasis. The identification of post-transcriptional regulation mechanisms controlling cell wall biogenesis in fungal species can serve to better understand human fungal pathogens, enabling the design of novel antifungal therapeutic strategies.
Fig 1. mRNAs related to cell wall biogenesis are enriched in the complex associated with eIF2A. (A) Volcano plot of RNA sequencing results. The x-axis displays the log2 fold change between the average number of reads in the eIF2A-associated fraction relative to the total RNA fraction, while the y-axis displays the -log10 of the associated p-value. Significantly enriched transcripts (as described in Materials and Methods) are displayed in blue, with a yellow overlay for cell wall-related mRNAs. Red dots indicate candidates that were used for functional analyses (see below). (B) MA plot. The x-axis displays the average number of reads between the two conditions (log10 scale), while the y-axis represents the log2 fold change between the average number of reads in the eIF2A-associated fraction relative to the total RNA. Colored dots correspond to the same categories as those mentioned in (A). https://doi.org/10.1371/journal.pone.0293228.g001
Fig 2. eIF2A absence is detrimental for S. cerevisiae when the cell wall is affected, and its overexpression is detrimental under standard conditions. (A and B) The wild-type strain and cells deleted for the eIF2A gene were spotted in 10^-1 dilution series on rich medium plates without (A) or with (B) CFW and incubated at 30˚C for 40 hours. (C and D) Wild-type strains harboring either the empty pCM190 (ø) or the pCM190:eIF2A vector, allowing eIF2A overexpression (OE eIF2A) (on) or not (off), were serially diluted and spotted on (-URA) minimal medium supplemented (D) or not (C) with CFW. The precultures were done in the presence of doxycycline (Dox) to prevent the expression of eIF2A, which is under the control of the Ptet-off promoter, and the cells were spotted on -URA plates without Dox. Plates were incubated at 30˚C for 40 hours. https://doi.org/10.1371/journal.pone.0293228.g002
Fig 3. eIF2A overexpression decreases the cell wall protein levels. (A) Cells harboring pCM190 or pCM190:eIF2A plasmids and producing Tos1, Ccw14, Sun4 or Cln1 TAP-tagged proteins were grown in -URA medium and harvested 5 hours after eIF2A overexpression (+) or not (-). Protein extracts were separated on a denaturing polyacrylamide gel and TAP-tagged proteins were revealed by Western Blot with PAP antibodies, as described in Materials and Methods. G6PDH was used as a loading control. (B) Quantification of Western Blot analyses was performed using ImageJ and was based on the expression levels of target proteins relative to the G6PDH reference protein. The fold change indicated on the y-axis was defined as the ratio between the relative abundance of target proteins in cells harboring pCM190:eIF2A (+) and pCM190 plasmids (-). Error bars indicate the standard deviations of averages for at least three independent experiments. Statistical analysis was performed using a t-test, with the following obtained p-values: p = 0.00182; p = 0.01818; p = 0.01077; p = 0.02992 for Tos1, Ccw14, Sun4 and Cln1, respectively. Asterisks indicate statistical significance (*: p-value ≤ 0.05, **: p-value ≤ 0.01). The dots correspond to the value obtained for each individual replicate. https://doi.org/10.1371/journal.pone.0293228.g003
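A minimal sketch of the quantification described in this legend: band intensities (for example, exported from ImageJ) are first normalized to the G6PDH loading control within each lane, the fold change is the ratio of the normalized overexpression signal to the normalized control signal, and a t-test is applied across replicates. The intensity values below are invented purely to show the arithmetic.

```python
import numpy as np
from scipy import stats

def fold_change(target_oe, g6pdh_oe, target_ctrl, g6pdh_ctrl):
    """Fold change of a TAP-tagged protein upon eIF2A overexpression, with each lane
    normalized to its G6PDH loading control; returns mean, SD and a t-test p-value."""
    rel_oe = np.asarray(target_oe, float) / np.asarray(g6pdh_oe, float)
    rel_ctrl = np.asarray(target_ctrl, float) / np.asarray(g6pdh_ctrl, float)
    fc = rel_oe / rel_ctrl                       # pairs replicate i with replicate i
    _, p = stats.ttest_ind(rel_oe, rel_ctrl)
    return fc.mean(), fc.std(ddof=1), p

# Hypothetical band intensities (arbitrary units) for three biological replicates
mean_fc, sd_fc, p = fold_change(
    target_oe=[410, 395, 430], g6pdh_oe=[1000, 980, 1020],
    target_ctrl=[800, 790, 850], g6pdh_ctrl=[990, 1010, 1000],
)
print(f"fold change = {mean_fc:.2f} +/- {sd_fc:.2f}, p = {p:.3f}")
```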
Fig 4. Overexpression or deletion of eIF2A has no effect on the transcriptome compared to the wild-type strain. (A) Scatter plot of RNA-Seq data comparing mRNA abundance between strains transformed with pCM190:eIF2A or pCM190 plasmids. Cells were grown in -URA medium to exponential-growth phase and harvested after 5 hours of eIF2A overexpression or not. The x-axis and the y-axis represent the mRNA levels of strains harboring pCM190 or pCM190:eIF2A, respectively. (B) Scatter plot comparing transcriptomes of WT and eif2aΔ strains transformed with the pCM190 plasmid and grown on -URA medium to exponential-growth phase. The color code of the dots is the same as in Fig 1.
Fig 5. eIF2A interacts with the RNA binding protein Ssd1. (A) Affinity purification using eIF2A-TAP as bait. Total proteins (tot. Ext.) and the eIF2A-associated complex (SDS eluate) were separated on a polyacrylamide gel and visualized by silver staining. MW: Molecular Weight marker. (B) Volcano plot showing proteins enriched by eIF2A-TAP identified by mass spectrometry (LC-MS/MS). The x-axis represents the log2 fold change of each protein in the eIF2A-TAP enrichment from the lysate. The y-axis shows the log10 p-value calculated using a Student's t-test. Proteins above the curved lines on the right part of the plot are significantly enriched by eIF2A purification (red diamonds). Proteins linked to RNA degradation are indicated by yellow dots. Proteins of the small 40S or large 60S ribosomal subunits are visualized by blue or green dots, respectively. Results are from six independent experiments. (C) The interaction between eIF2A and Ssd1 was confirmed by Co-Immunoprecipitation of eIF2A-HA with Ssd1-TAP. Cells expressing eIF2A-HA and Ssd1-TAP proteins were cultivated to exponential-growth phase and Ssd1-TAP and its interaction partners were purified as described in Materials and Methods. The Ssd1-associated complex was eluted after a nuclease treatment (+) or not (-) using micrococcal nuclease. A strain lacking the TAP-tag fused to the Ssd1 protein was used as a control. Total (input) as well as purified proteins were separated on a polyacrylamide gel and TAP- or HA-tagged proteins were revealed by Western Blot with PAP or HA-tag antibodies, respectively. https://doi.org/10.1371/journal.pone.0293228.g005
Fig 6. The SUN4, CTS1 and SRL1 mRNAs are enriched by eIF2A-TAP only in the presence of Ssd1. Wild-type and ssd1Δ mutant strains expressing the eIF2A-TAP protein were cultivated to exponential-growth phase. The RIP experiment was performed as described in Materials and Methods. 8 μg of total RNA (input) and 1 μg of immunoprecipitated RNA (RIP eIF2A-TAP) were separated on an agarose gel. SUN4, CTS1, SRL1 and RPL28 mRNAs were revealed by Northern Blot with appropriate DIG-labeled probes and anti-DIG antibody. https://doi.org/10.1371/journal.pone.0293228.g006
Fig 7. Decrease of the Sun4 protein level upon eIF2A overexpression requires the presence of Ssd1. (A) Wild-type and ssd1Δ mutant cells harboring pCM190 or pCM190:eIF2A plasmids and expressing the Sun4-TAP protein were grown in -URA medium and harvested after 5 hours of eIF2A overexpression (+) or not (-). Protein extracts were separated on a polyacrylamide gel and Sun4-TAP was revealed by Western Blot using PAP antibodies. G6PDH was used as a loading control. (B) Quantification of Western Blot analyses was performed as described in Fig 3B. Error bars indicate the standard deviations of averages from at least three independent experiments. Statistical analysis was performed using a t-test, p = 0.0054. The dots correspond to the value obtained for each individual replicate. Asterisks indicate statistical significance (*: p-value ≤ 0.05, **: p-value ≤ 0.01). https://doi.org/10.1371/journal.pone.0293228.g007
Fig 8. Xrn1 depletion and eIF2A deletion are synthetic lethal. Wild-type and mutant strains were serially diluted and spotted on YPGlu rich medium supplemented or not with CFW, in the presence or not of IAA (100 μM auxin) to deplete the cells of Xrn1. uth1Δ and hsp150Δ mutants were used as a control. https://doi.org/10.1371/journal.pone.0293228.g008
| 2023-11-29T05:04:21.823Z | 2023-11-27T00:00:00.000 | {
"year": 2023,
"sha1": "947dc6ff6c895d996339054a208c4d71eef5a6ee",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "947dc6ff6c895d996339054a208c4d71eef5a6ee",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
222301156 | pes2o/s2orc | v3-fos-license | Is an ounce of prevention worth a pound of cure? A cross-sectional study of the impact of English public health grant on mortality and morbidity
Objectives The UK government is proposing to cease cutting the local authority public health grant by reallocating part of the treatment budget to preventative activity. This study examines whether this proposal is evidenced based and, in particular, whether these resources are best reallocated to prevention, or whether this expenditure would generate more health gains if used for treatment. Methods Instrumental variable regression methods are applied to English local authority data on mortality, healthcare and public health expenditure to estimate the responsiveness of mortality to variations in healthcare and public health expenditure in 2013/14. Using a well-established method, these mortality results are converted to a quality-adjusted life year (QALY) basis, and this facilitates the estimation of the cost per QALY for both National Health Service (NHS) healthcare and local public health expenditure. Results Saving lives and improving the quality of life requires resources. Our estimates suggest that each additional QALY costs about £3800 from the local public health budget, and that each additional QALY from the NHS budget costs about £13 500. These estimates can be used to calculate the number of QALYs generated by a budget boost. If we err on the side of caution and use the most conservative estimates that we have, then an additional £1 billion spent on public health will generate 206 398 QALYs (95% CI 36 591 to 3 76 205 QALYs), and an additional £1 billion spent on healthcare will generate 67 060 QALYs (95% CI 21 487 to 112 633 QALYs). Conclusions Additional public health expenditure is very productive of health and is more productive than additional NHS expenditure. However, both types of expenditure are more productive of health than the norms used by National Institute for Health and Care Excellence (£20 000–£30 000 per QALY) to judge whether new therapeutic technologies are suitable for adoption by the NHS.
and treatment services have on mortality. How can the authors assume the same effect on morbidity? A more detailed explanation and justification are needed in order to understand and validate this approach. Otherwise, only mortality effects estimates should be presented. 2. It is also quite unconvincing the assumption that the effect of public health on mortality plays out in two years from the time the money is spent. And this is particularly relevant when comparing treatment and prevention care, where the lag effects are likely to be very different. The authors mention a Californian study that finds out that more than half of lives saved by public health spending occurred after two years, although they seem to have phrased that in a way that supports they approach? This is a crucial point that merits more than a few lines in the limitations section. 3. The uncertainty around the estimates appears to be very large. In fact, the 95% confidence intervals presented in the abstract overlap, showing that we could not reject the null hypothesis that the effects of public health and healthcare on mortality are the same. This is not in line with the overall confidence granted to the results in the discussion, conclusion and abstract sections of the paper. 4. The implications of using an IV approach might need to be considered. Its use implies that the estimated coefficients now have a LATE interpretation. Is it important for policy recommendations the fact that the effects on mortality estimated on this paper are only relevant for the local areas that increase/decrease expenditure due to MFF and DFT issues? 5. Finally, in more general terms, the applications of these estimates into decision making need further discussion. Do they provide evidence for allocative or productive efficiency purposes? The authors appear to suggest both. They argue that their findings provide evidence for setting a larger size of the budget to public health than currently achieved, although one can ask, how far more and how should this be established? They also mentioned that under current budget constraints new public health interventions should have a cost per QALY lower than that estimated according to their findings. Does this imply that we ought to use a different (considerably lower) cost-effectiveness threshold value for preventive rather than curative care? Is this feasible?
GENERAL COMMENTS
Summary This paper estimates the responsiveness of mortality to changes in healthcare and public health expenditure in the UK. The authors employ similar estimation approaches as their previous work in the field and come up with estimates of £3800 and £13500 per QALY gained of public health and healthcare spending, respectively.
Major/general comments 1. The major contribution of this work is that we, for once, now can compare different broad approaches to generate population health; "prevention" and "cure". If we have faith in these results, they carry major policy implications and I believe the authors have done some great work here.
2. I am not familiar with the details of the UK system, and to be fair the authors are rather humble about this in the text, but it remains a bit unclear to me how much of the preventative measures are included in the healthcare budget and how much "treatment" is part of the public health budget. If we could get some figures here it would give the non-initiated reader an idea of these magnitudes.
3. Another general comment is around the balance of the paper in terms of technical sophistication and the policy issue at hand. It is difficult to present such a comprehensive study with plenty of analytical choices and corners to cover and I believe the authors are doing this well, although it takes some effort to cover all details and supplementary material even for a very interested reader (including myself, and I have not been through every detail of the supplementary material). Having said this, with the information provided the results and analytic methods are up for scrutiny and this is to be applauded. I would still consider the balance of the presentation and perhaps move some more of the regression methodology to the appendix and focus slightly more on the highly important policy issues raised. I also wonder whether some kind of graphical representation of the conceptual model including causal links would be a way to make the paper slightly more accessible for the general audience.
Minor/technical comments 1. Abstract, Methods. Are the methods for estimating QALY effects really well-established? I would argue they are probably still associated with a fair amount of uncertainty.
2. Abstract, Conclusion. This comment is also valid for the conclusions in the main text to some extent. I believe the comparison with the NICE threshold may be considered a bit off target. At least it should perhaps not take up more than half of the conclusion in the abstract. I believe there is a very important policy story to be told here: preventative measures should get more funding. I would not let that conclusion half disappear in a discussion about the NICE threshold.
3. P12.L43-50: The authors should clarify that they are not doing forbidden regression, i.e., all five instruments are used in both first stages. Also, could the authors explain the three additional instruments?
4. P8.L55-P9.L5: What are the authors' thoughts on health care input prices as an exogenous variable? Is price not determined by equilibrium in the market for these inputs?
5. The effect of DFT on spending seems fairly robust to specification (Table A2). What are the results (first and second stage) using only this instrument and no controls? If the results are affected by the inclusion of covariates (column 6, Table A1 and A2), why are these covariates necessary for conditional independence?
6. I am not familiar with all of the diagnostic tests used by the authors. (1) Why do we care about nonlinearities (RESET test) in IV regression? (2) My understanding is that the threshold for weak instruments goes up quite drastically with more than one endogenous regressor. Do the tests used reflect this issue?

Please leave your comments for the authors below
The paper entitled "Is an ounce of prevention worth a pound of cure? Estimates of the impact of English public health grant on mortality and morbidity" compares the average effect of health care (treatment) expenditure and the average effect of public health (prevention) expenditure on mortality using data across 150 local areas in England. This effect is then translated into a cost per QALY estimate. The paper makes use of a similar methodology developed by the authors to estimate the average opportunity cost of the English NHS. The authors find that public health expenditure is more productive than treatment expenditure, in the range of three to four times so. The paper thus concludes that "the recent proposal to shift resources away from [NHS healthcare expenditure] and towards [public health expenditure] is an evidence-based one". This is a well-written paper, focusing on an interesting topic that uses a robust methodology. Their findings can also have relevant implications for policy making.
Authors' response: Thank you for your kind comments.
My main concerns are as follows: 1. It is not clear how the authors translate the estimated effect of expenditure on mortality into a cost per QALY. They appear to have used a previous estimate derived from an assumption: that the effect on morbidity is proportional to the estimated mortality effect. However, instead of applying the estimated mortality effects derived from their own calculations they used that from a previous analysis that is only based on treatment expenditure data, and used that to both public health and healthcare. This is particularly unconvincing when one sees the different effects that public health and treatment services have on mortality. How can the authors assume the same effect on morbidity? A more detailed explanation and justification are needed in order to understand and validate this approach. Otherwise, only mortality effects estimates should be presented.
Authors' response: We would like to thank the referee for drawing this issue to our attention and giving us the opportunity to clarify this in the paper. As suggested by the reviewer, we have added two columns to table 3 that report the cost per death averted for public health and treatment expenditure. These mortality-based estimates confirm our broader QALY-based results that public health expenditure is more productive than healthcare expenditure. The purpose of the paper is to try and demonstrate the relative health benefits of these two types of expenditure and, if possible, to compare the size of these benefits with those associated with particular types of health care expenditure (for example, on new medical technologies). To do this it is necessary to convert mortality effects into broader QALY effects. At the moment there is no evidence about the mortality effects by disease area of public health expenditure. In the absence of any evidence, we assume that the distribution of mortality benefits across disease areas for public health is similar to that for healthcare expenditure. This assumption is now made explicit on p.19 (of the revised submission) so that readers can judge for themselves the usefulness of our cost per QALY estimates. It is not obvious that this assumption will either over-or under-estimate the total QALY benefits of public health expenditure. By making this assumption we are able to compare the health effects of both types of expenditure and we can compare these effects with, for example, the NICE threshold for the adoption of new medical technologies in the NHS. Moreover, by making this (now) explicit assumption we hope to stimulate research that will examine its accuracy.
2.
It is also quite unconvincing the assumption that the effect of public health on mortality plays out in two years from the time the money is spent. And this is particularly relevant when comparing treatment and prevention care, where the lag effects are likely to be very different. The authors mention a Californian study that finds out that more than half of lives saved by public health spending occurred after two years, although they seem to have phrased that in a way that supports their approach? This is a crucial point that merits more than a few lines in the limitations section.
Authors' response: In an ideal world we would have access to expenditure and mortality data that stretch back many years so that we could address this issue properly. Unfortunately such data do not exist and, instead, we make the best use of what data are available and we draw the reader's attention to the limitations that such data imply for our results. However, and as we point out in the paper, the way in which we use the expenditure and mortality data might not be as troublesome as first appears. In support of our approach we cite the Californian study that suggests over one-half of all cumulative lives saved through public health expenditure occur in the two years following that expenditure, and our mortality measure includes deaths in the expenditure year and the following two years.
Moreover, although we omit mortality effects for later years, some current mortality may reflect public health expenditure from many years ago. Implicitly we are assuming that the data represent a quasi long-run equilibrium situation, that relative expenditure levels and health outcomes within each local authority have been reasonably stable over a period of time, and that any lagged effect of current expenditure on future mortality is offset by the impact of previous expenditure on current mortality. These are not unreasonable assumptions in the English context but they are just assumptions, and they might be less appropriate for other geographies where, for example, relative expenditure and outcomes have changed through time. We have added a few sentences to this effect on pp.23-24.
3.
The uncertainty around the estimates appears to be very large. In fact, the 95% confidence intervals presented in the abstract overlap, showing that we could not reject the null hypothesis that the effects of public health and healthcare on mortality are the same. This is not in line with the overall confidence granted to the results in the discussion, conclusion and abstract sections of the paper.
Authors' response: We must thank the reviewer for highlighting this issue and enabling us to address it in the paper. Using the point and standard error estimates associated with the mortality elasticities in table 3, we undertook a simulation study of the difference between the public health and CCG QALY gains associated with the budget boost described in columns 7 and 8 of table 3. We made one million pairs of draws from the two distributions. We found that the size of the public health QALY gain was greater than the size of the CCG QALY gain in just over 94% of the draws from the backward selection estimates, and that this proportion increased to over 99% when the forward selection estimates were used. We feel that this allows us to conclude that the public health QALY effect is greater than the CCG effect. We have added details of this simulation to the paper on p.20.
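The simulation described in this response can be reproduced in outline as follows: draw a large number of paired values for the public health and CCG QALY gains from normal distributions defined by their point estimates and standard errors, and count how often the public health gain exceeds the CCG gain. The means and standard errors below are illustrative values back-calculated from the confidence intervals quoted in the abstract, not the Table 3 figures the authors actually used.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n_draws = 1_000_000

# Illustrative (mean, standard error) pairs for the QALY gain from a 1 billion pound budget boost,
# derived from the 95% confidence intervals in the abstract (not the Table 3 inputs).
ph_mean, ph_se = 206_398, 86_600      # public health expenditure
ccg_mean, ccg_se = 67_060, 23_250     # CCG healthcare expenditure

ph_draws = rng.normal(ph_mean, ph_se, n_draws)
ccg_draws = rng.normal(ccg_mean, ccg_se, n_draws)

prob_ph_larger = np.mean(ph_draws > ccg_draws)
print(f"P(public health QALY gain > CCG QALY gain) = {prob_ph_larger:.3f}")
```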
4.
The implications of using an IV approach might need to be considered. Its use implies that the estimated coefficients now have a LATE interpretation. Is it important for policy recommendations the fact that the effects on mortality estimated on this paper are only relevant for the local areas that increase/decrease expenditure due to MFF and DFT issues?
5a.
Finally, in more general terms, the applications of these estimates into decision making need further discussion. Do they provide evidence for allocative or productive efficiency purposes? The authors appear to suggest both. They argue that their findings provide evidence for setting a larger size of the budget to public health than currently achieved, although one can ask, how far more and how should this be established?
Authors' response: Our findings provide evidence for allocative efficiency purposes. Our study is motivated by the UK government's proposal to stop cutting the local authority public health grant by re-allocating part of the treatment budget to preventative activity. Our results suggest that this proposal is an evidence-based one although our estimates and other evidence also suggest that an increase in CCG expenditure would provide good value for money too (eg when compared with the Treasury's estimate of the consumption value of health). Our results do not allow us to recommend the size of the increase in the public health budget. However, given the very low PH marginal cost per QALY relative to that for CCG expenditure and to the NICE threshold for the adoption of new technologies by the NHS, we would recommend a return to pre-austerity expenditure levels.
5b. They also mentioned that under current budget constraints new public health interventions should have a cost per QALY lower than that estimated according to their findings. Does this imply that we ought to use a different (considerably lower) cost-effectiveness threshold value for preventive rather than curative care? Is this feasible?
Reviewer Name Martin Henriksson
Institution and Country: Linköping University, Sweden
Please leave your comments for the authors below
Summary This paper estimates the responsiveness of mortality to changes in healthcare and public health expenditure in the UK. The authors employ similar estimation approaches as their previous work in the field and come up with estimates of £3800 and £13500 per QALY gained of public health and healthcare spending, respectively.
Major/general comments 1. The major contribution of this work is that we, for once, now can compare different broad approaches to generate population health; "prevention" and "cure". If we have faith in these results, they carry major policy implications and I believe the authors have done some great work here.
Authors' response: Thank you for your kind comments.
2. I am not familiar with the details of the UK system, and to be fair the authors are rather humble about this in the text, but it remains a bit unclear to me how much of preventative measures are included in the healthcare budget and how much "treatment" is part of the public health budget. If we could get some figures here it would give the non-initiated reader an idea of these magnitudes.
Authors' response: Precise figures of the break down between prevention and treatment within the PH grant and NHS budget are not available.
As one very rough guide to the volume of preventative expenditure within the treatment total, CCG programme budgeting data for 2013/14 reports a total spend of £65bn of which £411m (less than 1%) is in the 'Healthy Individuals' programme and could be described as for preventative activity. With regard to the public health grant, there is the issue about how to view treatment expenditure that also has a preventative effect. For example, of the £2.5bn public health grant about £489m was spent on drug and alcohol misuse, and £381m on STI testing/treatment. So, if we ignore the preventative element associated with these expenditure components, it could be argued that up to £870m (35%) of the public health grant is on treatment. But, of course, part of this expenditure will have a preventative effect too. This issue is acknowledged in section 2.1 with further discussion in appendix section A1. Strictly speaking, we are comparing the productivity of the public health grant with CCG healthcare expenditure. However, we believe that it is reasonable to think of this as a comparison of the marginal productivity of preventative and treatment expenditure although our primary purpose is to estimate the marginal effect of these two different sources of public expenditure which are subject to different budgetary constraints/choices.
3. Another general comment is around the balance of the paper in terms of technical sophistication and the policy issue at hand. It is difficult to present such a comprehensive study with plenty of analytical choices and corners to cover and I believe the authors are doing this well although it takes some effort to cover all details and supplementary material even for a very interested reader (including myselfand I have not been through every detail of the supplementary material). Having said this, with the information provided the results and analytic methods are up for scrutiny and this is to be applauded. I would still consider the balance of the presentation and perhaps move some more of the regression methodology to the appendix and focus slightly more on the highly important policy issues raised. I also wonder whether some kind of graphical representation of the conceptual model including casual links would be a way to make the paper slightly more accessible for the general audience.
Authors' response: Thank you for your kind comments. We agree that it is very difficult to present this material so that it is both robust enough for the academic audience yet reasonably easy to follow for the more policy orientated reader. A good deal of the regression methodology is already in the appendix and we are reluctant to move more. However, we think that your suggestion to add a graphical representation of the conceptual model including casual links as a way of making the paper slightly more accessible for the general audience is an excellent one. Hence we have added a new figure and an additional explanatory paragraph of text to address this issue on p.9.
Minor/technical comments 1. Abstractmethods. Are the methods for estimating QALY effects really well-established? I would argue they are probably still associated with a fair amount of uncertainty.
Authors' response: These methods have been around for a few years now and have been used in several studies. We are not aware of any major criticisms or better alternatives given the data available so we are reasonably happy with them. Moreover, the very recent paper by Soares, Sculpher and Claxton (2020) presents an application of the structured elicitation of the judgments of key individuals (including clinical experts) about the size of the QALY benefits associated with English healthcare expenditure. This study, available at https://journals.sagepub.com/doi/abs/10.1177/0272989X20916450?journalCode=mdma, finds that although most experts found replying to the questions challenging, they were able to express their beliefs quantitatively. The experts' judgements suggest that the assumptions made by earlier work that estimated the quality-adjusted life-year (QALY) impacts of changes in expenditure are likely to have underestimated the QALY benefits and, as a consequence, to have overestimated the "central" estimate of the health opportunity cost associated with NHS expenditure (£12,936 per QALY) .
2. Abstractconclusion. This comment is also valid for the conclusions in the main text to some extent. I believe the comparison with the NICE threshold may be considered a bit off target. At least it should perhaps not take up more than half of the conclusion in the abstract. I believe there is a very important policy story to be told here, preventative measures should get more funding. I would not let that conclusion half disappear in a discussion about the NICE threshold.
Authors' response: We believe this paper provides evidence which can inform resource allocation and decisions across these two categories of public expenditure, which includes decisions made by NICE which carry a funding mandate (approved interventions must be funded). We are not convinced that a comparison of our results with the NICE threshold dilutes the finding that PH expenditure is more productive of health than NHS expenditure (at the margin). We feel that it is important to draw attention to just how much more productive of health both types of expenditure are than the threshold currently used by NICE.
3. P12.L43-50: The authors should clarify that they are not doing forbidden regression, i.e., all five instruments are used in both first stages. Also, could the authors explain the three additional instruments?
Authors' response: Our understanding of forbidden regression in the IV context comes from section 4.6.1 of Angrist and Pischke's book 'Mostly Harmless Econometrics: An Empiricist's Companion'. This focuses on such issues as how to handle dummy instruments and non-linearities in the first-stage, and the importance of including the same group of covariates in both the first and second stages. We are not convinced that the inclusion of all five instruments in both first stages is 'forbidden' regression.
We use Stata to estimate our specifications and we are unaware of how to estimate the specifications without including all 5 instruments in both first stages. We start by estimating the 'full' specification (i.e., with all controls and all instruments included) whether we are estimating a public health only or public health and treatment expenditure regression. We then use backward or forward selection to eliminate irrelevant controls and/or problematic instruments. The three additional instruments are for CCG expenditure and are explained in appendix A3. They are very similar to those for public health expenditure but relate to the allocation of CCG budgets rather than the public health budget. They comprise: the distance from the target allocation; the market forces factor; and the prescribing cost age index.
4. P8.L55-P9.L5: What are the authors thoughts on health care input prices as an exogenous variable? Is price not determined by equilibrium in the market for these inputs?
Authors' response: This instrument is suggested by the funding rule approach and, as always, we are guided by the Hansen-Sargan test for instrument validity. The local input price index could be correlated with unmeasured determinants of mortality but conditionally exogenous variation is likely to remain once other controls for need are used since the price index is unlikely to be a perfect adjustment. We have over a dozen potential socio-economic covariates (including the Index of Multiple Deprivation) in the full specification mortality equation and hence it is difficult to imagine what deprivation effect the input price index would detect that our covariates do not. Moreover, both MFFs (for public health and treatment expenditure) are not included as instruments in our preferred backward and forward parsimonious specifications so the issue does not really arise with our final results.
5. The effect of DFT on spending seems fairly robust to specification (Table A2). What are the results (first and second stage) using only this instrument and no controls? If the results are affected by the inclusion of covariates (column 6, Table A1 and A2), why are these covariates necessary for conditional independence?
Authors' response: We agree that the effect of DFT on spending seems fairly robust to the precise specification (Table A2). If we re-estimate the specification in column 6 of tables A1 and A2 (i.e., mortality as a function of spend with no controls, and DFT is the only instrument) then the coefficient on expenditure in the second-stage is +0.133 [t-ratio=1.97] and in the first-stage equation the coefficient on DFT is +1.178 [t-ratio=9.77]. Without the controls for need we detect a positive association between spend and mortality rather than the causal effect of expenditure on outcome. We require the controls for health care need to make the instruments conditionally exogenous.
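For readers unfamiliar with the mechanics behind these numbers, the sketch below shows a bare-bones two-stage least squares estimate on simulated data: expenditure is regressed on the instrument (and any need controls) in the first stage, and mortality is regressed on the fitted expenditure in the second stage. It is not the ivreg2 specification used in the paper (no robust standard errors, no diagnostic tests), and all values are simulated; it simply illustrates how omitting the need controls can attenuate or even flip the sign of the estimated expenditure effect when the instrument is only conditionally exogenous.

```python
import numpy as np

def ols(y, X):
    """OLS coefficients by least squares; X must already contain a constant column."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def two_sls(y, endog, instrument, controls=None):
    """2SLS with one endogenous regressor and one instrument, plus optional controls."""
    n = len(y)
    const = np.ones((n, 1))
    ctrl = controls if controls is not None else np.empty((n, 0))
    Z = np.hstack([const, instrument.reshape(-1, 1), ctrl])   # first-stage regressors
    first = ols(endog, Z)
    fitted = Z @ first                                         # predicted expenditure
    X2 = np.hstack([const, fitted.reshape(-1, 1), ctrl])       # second-stage regressors
    return first, ols(y, X2)

# Simulated data: the instrument is partly allocated on need, and need raises both spend and mortality
rng = np.random.default_rng(0)
n = 150                                     # roughly the number of English local areas
need = rng.normal(size=n)                   # need control (e.g., a deprivation index)
dft = 0.7 * need + rng.normal(size=n)       # instrument: distance from target allocation
spend = 1.0 + 1.2 * dft + 0.8 * need + rng.normal(scale=0.5, size=n)
mortality = 2.0 - 0.5 * spend + 3.0 * need + rng.normal(scale=0.5, size=n)

for label, ctrl in [("with need control", need.reshape(-1, 1)), ("no controls", None)]:
    first, second = two_sls(mortality, spend, dft, controls=ctrl)
    print(f"{label}: first-stage DFT coef = {first[1]:.2f}, "
          f"second-stage spend coef = {second[1]:.2f}")
```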
6. I am not familiar with all of the diagnostic tests used by the authors. (1) Why do we care about nonlinearities (RESET test) in IV regression? (2) My understanding is that the threshold for weak instruments goes up quite drastically with more than one endogenous regressor. Do the tests used reflect this issue?
Authors' response: (1) We care about nonlinearities because they suggest an omitted effect which, if ignored, might result in inconsistent coefficient estimates. (2) We use the Sanderson-Windmeijer test for the strength of the instruments associated with each individual endogenous regressor. These statistical results are generated as part of the output associated with the ivreg2 routine in Stata and are specifically designed for the presence of more than one endogenous regressor. We understand that there is no widely accepted rule of thumb threshold for weak instruments when there are two endogenous instruments. In the absence of theoretical guidance, we persevere with the single instrument rule of thumb and report the relevant test statistic so that the reader can judge for themselves. Moreover, the Sanderson-Windmeijer F-statistic for the public health instrument is way above ten in both the preferred backward and forward selection specifications (it is 70.8 in the former and 57.0 in the latter).
Reviewer: 4
Dr Gemma Bilkey
Department of Health Western Australia, Australia
Please leave your comments for the authors below Nicely written and timely work. While the final paragraph of the discussion describes discounting in a broad sense, it would be valuable to provide a comment how this may have changed the results, or a further justification for why this was not included.
Authors' response:
The referee is quite right that we do not discuss discounting of the QALY effects of PH and NHS expenditure. We now report cost per death averted, which does not require discounting and cost per QALY in Table 3 (see our responses to referee 2).
The translation of the estimated mortality effects to QALY effects is based on previous work which also used estimated mortality effects of changes in NHS expenditure to calculate the QALY effects. In this previous work the reported estimates reflect changes in undiscounted QALYs associated with changes in expenditure. Discounting these quality adjusted life year effects in previous work at 3.5% led to a very modest increase the cost per QALY (from £12,936 to £13,141 in Claxton et al 2015 (see https://www.ncbi.nlm.nih.gov/books/NBK274315/). The effects of discounting are modest because the health effects of changes in expenditure are restricted to one year. A large proportion of this health effect is quality of life (which occurs in that year so is not subject to discounting). The change in mortality due to a change in spend that occurs in that year does have life year effects (adjusted for quality) in subsequent years which are subject to discounting. Some changes in mortality will have life year effects over many years and other mortality effects will not. On average 4.5 life years is associated with each death averted, so, on average, the effect of discounting is modest even when a rate of 3.5% rate is applied, when 1.5% or lower is arguably more appropriate for health.
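The point about discounting can be illustrated with a small calculation. Suppose a share of the QALY gain from extra expenditure is quality of life realized in the spend year (not discounted) and the remainder is the survival gain of roughly 4.5 life years per death averted, spread over future years. Both the share and the spreading used below are illustrative assumptions, not parameters from the paper, but they show why discounting at 3.5% (or 1.5%) changes the cost per QALY only modestly.

```python
def retained_fraction(qol_share, survival_horizon_years, rate):
    """Share of the undiscounted QALY gain that survives discounting, assuming `qol_share`
    of the gain is same-year quality of life and the rest is survival spread evenly over
    `survival_horizon_years` future years (illustrative assumptions only)."""
    annuity = sum(1.0 / (1.0 + rate) ** t for t in range(1, survival_horizon_years + 1))
    return qol_share + (1.0 - qol_share) * annuity / survival_horizon_years

for rate in (0.035, 0.015):
    kept = retained_fraction(qol_share=0.9, survival_horizon_years=9, rate=rate)
    print(f"discount rate {rate:.1%}: {kept:.1%} of QALYs retained, "
          f"so cost per QALY rises by about {1.0 / kept - 1.0:.1%}")
```

With these assumed inputs the increase at 3.5% is on the order of 1 to 2%, broadly in line with the small move from £12,936 to £13,141 per QALY quoted above, and smaller still at 1.5%.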
Since discounting future health effects would apply equally to the effects of PH and NHS expenditure it would not change the comparison of the effects of PH or NHS expenditure. Of course, should more waves of data make it possible to estimate a longer lag structure then discounting would become more important (see our responses to reviewer 2 on these issues). However, overall this is likely to capture more total discounted health effects of changes in expenditure, reducing rather than increasing the estimates of cost per QALY for both PH and NHS expenditure. If the effects of PH expenditure tend to have longer lags than NHS expenditure then the total albeit discounted effects would tend to be greater for PH, reinforcing the findings of this paper.
For all these reasons we would on balance prefer to avoid the complications of a full discussion of discounting in the current text as doing this issue justice would not be feasible in this paper and failing to fully explain the issues is likely to confuse most readers who may not be familiar with debate about why health effects should be discounted and what an appropriate discount rate for health should be.
As an international reader, I was not clear on the remit of the CCG (is this purely tertiary spending?) | 2020-10-13T13:05:48.761Z | 2020-10-01T00:00:00.000 | {
"year": 2020,
"sha1": "d42fbb07bab79ff008560e1f21a010e86a254904",
"oa_license": "CCBYNC",
"oa_url": "https://bmjopen.bmj.com/content/bmjopen/10/10/e036411.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "872a4967f33efaf8decb304def83cbd3cb10dc02",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
25625625 | pes2o/s2orc | v3-fos-license | Shifting to Value-Based Principles in Sickness Insurance: Challenges in Changing Roles and Culture
Purpose Management principles in insurance agencies influence how benefits are administered, and how return to work processes for clients are managed and supported. This study analyses a change in managerial principles within the Swedish Sickness Insurance Agency, and how this has influenced the role of insurance officials in relation to discretion and accountability, and their relationship to clients. Methods The study is based on a qualitative approach comprising 57 interviews with officials and managers in four insurance offices. Results The reforms have led to a change in how public and professional accountability is defined, where the focus is shifted from routines and performance measurements toward professional discretion and the quality of encounters. However, the results show how these changes are interpreted differently across different layers of the organization, where New Public Management principles prevail in how line managers give feedback on and reward the work of officials. Conclusions The study illustrates how the introduction of new principles to promote officials’ discretion does not easily bypass longstanding management strategies, in this case managing accountability through top-down performance measures. The study points out the importance for public organizations to reconcile new organizational principles with the current organizational culture and how this is manifested through managerial styles, which may be resistant to change. Promoting client-oriented and value-driven approaches in client work hence needs to acknowledge the importance of organizational culture, and to secure that changes are reflected in organizational procedures and routines.
Introduction
New Public Management (NPM) has been a dominant principle for public agencies for several decades, where the principles have their background in a strive to design more efficient government services by adopting principles from private corporations: central aspects are to treat recipients as customers rather than citizens, and to emphasize constant monitoring of service performance to secure efficient use of resources [1]. Over the last decade or so, there has been a development away from the market-oriented principles of NPM into more value-driven principles of operation, e.g., through emphasizing democratic values, public interests and quality of services, which also have effects on the roles of the officials working within the organizations [1,2]. One example of a more value-based principle is lean, which is both a rhetoric and a set of managerial tools; key elements are to increase customer value, reducing 'waste' in the organizational processes, and increasing quality through including employees in improvement work [2,3]. The emerging value-based approach to public administration has its background in a networked and complex environment including several actors, where government agencies find themselves acting not only as administrators, but also as catalysts, collaborators or conveners [1]. In a sickness and return to work (RTW) setting, this is of importance given the multi-stakeholder environment in which administration of work disability and RTW promotion is taking place [4,5]. Changes in objectives of public organizations also influence the approach to accountability and key values within those organizations, calling for a focus on dialogue and deliberation with multiple actors, and to move from control-oriented to trust-oriented management [6].
In practice, however, post-NPM reforms may prove to be a continuation of, rather than a break with, NPM principles. For instance, the introduction of lean in the UK tax agency has been criticized for strengthening the industrial approach to public administration, with focus on efficiency through performance monitoring and a skewed sense of 'value' that undermined the public ethos [7]. What 'value' means in such reforms, and whether the reforms actually change how organizations approach issues of accountability or performance measurement, is an empirical question. Hence, there is a need for studies of the practical consequences of reforms, and how they are received and perceived on different levels within public organizations.
In this article, we aim to analyze how the introduction of value-based organizational principles in a sickness insurance agency affects the role of the officials working in the organization. This is done through an empirical investigation of the perceptions of Swedish sickness insurance officials, their office managers, and representatives from the senior management, on the role of sickness insurance officials; how this role has changed after new principles have been introduced; the conditions for officials to fulfil their role in a purposeful way; and how the work is managed and monitored. The data is analyzed through theories on accountability and discretion.
Accountability may be categorized differently depending on which aspects are in focus, e.g., public, administrative, legal, professional or personal accountability [8]. This article focuses on public and professional accountability. Public accountability may be defined as the obligation of a public servant to uphold the public interest, or through a principal-agent framework, in which agents (e.g., public servants) are responsible for acting in the interest of the principal (e.g., the authority), and are held answerable through rewards or punishments. While public organizations are supposed to work for the benefit of the citizens, the interpretation of public values and what constitutes a public interest may differ over time. Under NPM, public accountability has been interpreted through production-oriented managerial models as value at the end of the chain. NPM uses top-down strategies where accountability is measured through constant monitoring of civil servants in order to make them accountable for their actions [9]. Moving towards value-based management could imply a decreased focus on such top-down approaches to accountability, by focusing more on trusting officials' professionalism; hence, post-NPM reforms will to a larger extent emphasize professional discretion as a key value. Professional accountability refers to the role of the public official and their use of discretion in making decisions, where central concerns are maintaining equity and fairness while adhering to laws and procedures [8].
It is not possible to imagine public administration work without professional discretion; it is necessarily embedded in any rule structure, and hence a natural part of officials' work which they cannot avoid using [9,10]. Discretion may be considered as a value in itself, and public encounters considered as positive and necessary for attaining public accountability [11]. There are different elements of discretion [12], defined as rule discretion (limited by legal, fiscal or organizational constraints), value discretion (determined by notions of fairness or codes of conduct), and task discretion (the ability to carry out prescribed tasks). Discretion has often been depicted as a managerial problem, where research has focused on whether it is effective and desirable, and whether government agencies fulfil their goals better through top-down control or through trusting officials to exercise their discretion to deal with complex problems [11]. Increases in rules and accountability may decrease the rule discretion of public servants, although they may still have some discretion in cases where rules are not operable, and task discretion may be high for complex tasks where there are no clear procedures [12]. It has also been argued that discretion always needs to be analyzed in relation to the type and structure of the organizational context, where discretion is influenced both by the level of managerialism and the level of formalization within the organization [13]. Further, discretion is related to the complex networks between professionals and organizations [9,11] and how organizations are governed. Governing takes place in several layers of an organization [14], where some facets (mainly the productive components) tend to remain more stable, while others aim to shield the core production from environmental influences, both to avoid disturbing its operations and as a strategy to reduce uncertainty. On a production level, governing may hence be much focused on managing day-to-day challenges in accordance with established routines and procedures, while governing on a top managerial level may be focusing more on buffering and adapting current managerial and organizational trends external to the organization. This implies that organizations are both open and closed systems at the same time; or, in other words, simultaneously moving targets and relatively stable entities with specific modes of operation. The implementation of new organizational structures or changes in professional roles hence needs to be analyzed on several organizational levels, where changes orchestrated from the top managerial level, from a perspective of a layered organization, may paradoxically contribute to stability and prevent change [15] through discrepancies between hierarchical levels in how reforms are interpreted and carried out (or not carried out). In this article, the notion of layered organizations is used to analyze whether or not the introduction of principles that aim to increase professional discretion actually will influence the discretion in daily work.
Swedish Sickness Insurance as a Case
The Swedish sickness insurance system is a general social security system available to all people working in Sweden, and offers income protection in cases of work disability due to illness or injury, regardless of cause. Disability policies in most countries have changed over the last decades, from focusing primarily on passive compensation schemes to promoting activation and integration into work [16]. In this respect, the Swedish sickness insurance system is no exception: since the 1990s there has been a strong focus on activation, e.g., through policy pushes toward promoting job mobility and by introducing time limits and different criteria for work ability assessments at different points [17]. As a consequence, much of the work of sickness insurance officials has become focused on performing eligibility assessments in accordance with the pre-defined time limits. There has also been a general strive toward centralization and standardization, e.g., through centralizing regional offices into a state authority, and through introducing insurance medical guidelines for sick-listing. Many of these changes were introduced while principles of NPM were applied, which included a strong focus on results and performance measures [1,18].
Recently, the Swedish Social Insurance Agency (SSIA), the authority in charge of administering most state benefits related to sickness, parental leave, housing allowances, etc., introduced value-based principles with the ambition to develop the role of the sickness insurance official toward a more holistically oriented case manager. The vision presented was 'a society where people feel secure if life takes a new turn', which emphasized the role of the SSIA in offering social insurance services in a timely and reliable way. The SSIA has also promoted certain 'customer promises': to be more human, secure and simple, where these promises were developed in dialogue with the employees of the SSIA. To promote a new value-driven agency, the SSIA initiated a set of educational interventions and organizational changes: all officials were given a course in Motivational Interviewing [19], and the organization of the agency was changed into a structure based on their clients' 'life situations' (e.g., being temporarily work disabled, or living with a functional disability). A new managerial philosophy was introduced based on 'trust, respect and compassion' and a holistic perspective based on creating value for customers. For employees, this also meant the introduction of lean tools [20], and organization of officials into self-governing teams. Examples of lean tools used were value-flow analyses, mapping and simplifying processes, and visualization of results on whiteboards. These changes were introduced primarily as a response to a period of negative media attention and declining trust in the agency, reported in yearly surveys where citizens rate their perceptions of various state agencies. Hence, the changes were driven by internal development projects to improve public legitimacy, rather than by changes in regulation. Similar changes have later been promoted also through government initiatives, with the purpose of promoting trustful and quality-oriented management systems.
The SSIA has around 13,500 employees in offices across the country, of which approximately 3500 officials are working with the administration of the sickness insurance system. The system comprises benefits to people on sick leave, where the officials have responsibility for specific cases from the onset of sickness absence and onwards. Insurance officials are to administer pay-outs of benefits, but also to coordinate the rehabilitation process, involving contacts with stakeholders such as healthcare and employers. Officials have mixed educational backgrounds; in recent decades, recruits have exclusively held university diplomas. When newly recruited, officials receive internal training in regulations and other competences considered necessary to manage the professional role.
Previous research on the role of sickness insurance officials has shown how this group of professionals are generally client-oriented, rather than adherent to regulations [21], especially those with longer work experience. In the last decade, however, the SSIA has had a relatively high employee turnover, combined with increased old age retirement [22], which leads to an increasing number of officials with less work experience. There has also been much pressure on the SSIA during the last decade to lower the number of people on sick leave. This, in combination with the introduction of new regulations, may imply an increasing rule-orientation among insurance officials. Combined with the introduction of new managerial principles, there are thus many demands on sickness insurance officials to balance production requirements (i.e., handling enough cases) and securing a purposeful coordination of individuals' rehabilitation processes.
Methods
The data material for this study was originally collected through a project commissioned to study the implementation of Motivational Interviewing into the SSIA, which has been reported in a separate article [23]. In the current article, this material is used to analyze the broader change in managerial principles of which this implementation was a part. The data consists of 57 interviews with employees in different positions within the SSIA, comprising 24 sickness insurance officials in four different offices in the west, east, north and south parts of Sweden; 20 office managers; four regional coordinators; and nine senior management representatives (involving people with strategic functions in the SSIA headquarters, such as analysts and a national insurance coordinator). An overview of the material is presented in Table 1.
Insurance officials had a variety of educational backgrounds, including social work, political science, human resource management, economics, social sciences, behavioral sciences and nursing. Officials had previous professional experience from different sectors, such as education, private insurance companies, and healthcare. Some officials had worked in the agency for many years and lacked university education. Officials are generally not medically trained, but may consult specialists in insurance medicine when needed in their case work.
Data Collection
The data collection was carried out between May and September 2013. Four offices were chosen in dialogue with contact persons at the SSIA, where the purpose was to choose offices of average size in middle-sized cities in different regions (the north, south, west and east parts of Sweden). All interviews were semi-structured using an interview guide, covering questions about perceptions of the role of the sickness insurance official, how this has changed over time, and about the implementation and utilization of new tools, such as Motivational Interviewing, lean and teams. Interviews lasted between 45 and 60 min and were transcribed verbatim. All interviews were carried out in the respondents' workplaces, apart from three office managers and one coordinator, who were interviewed over the phone.
Analysis
The analysis was performed in two steps. The first step involved sorting the material according to the principles of a qualitative content analysis [24], where an inductive approach was used. The authors first read through the transcribed interviews repeatedly, to obtain a comprehensive view of the content. Comments and notes were made from first impressions, which became the initial coding. Parts of the text that seemed to capture key thoughts or concepts based on the aim of the study were marked in different colors. The colored parts were then read more systematically with the purpose of organizing the data into categories. Quotes from colored parts of the text were inserted in a separate table. To reduce the text, quotes were condensed into codes describing the data categories. This was done systematically with the first fifteen interviews, where officials, coordinators and office managers from the different offices were represented. The remaining interviews were then analyzed in order to confirm the categories, where any opposing data were highlighted. The initial categorization was discussed repeatedly among the authors and continuously during the emerging analysis. Senior management representatives were analyzed separately, where focus was on the more general development of the SSIA over time. Categories identified in this step were (1) the general development of the SSIA; (2) the past and current role of the sickness insurance official; and (3) conditions for managing the current role.
In the second step, a theoretical analysis of the material was carried out. The categories identified in the first step were here related to theories that corresponded to the topics in the material, where issues of accountability and discretion were identified as central for how the respondents described the organizational reforms and the changes in the role of the insurance official. In the analysis, the theories informed an organization of the material into two themes: (1) re-interpretation of professional roles; and (2) managing the implementation of new principles.
Ethical Considerations
All participants were informed about the purpose of the study and that they could withdraw their participation at any time. The project was approved by the regional ethics board in Linköping (dnr 2013/83-31).
Results
In this section, the results are presented in two broad themes: the first referring to how the changes in the SSIA have influenced a re-interpretation of the professional role of the insurance official, and the second how the implementation of new organizational principles is managed.
Re-interpretation of Professional Roles
Within the SSIA, the introduction of value-based principles was a reaction to declining public trust in the agency, following a period of increased restrictions in the sickness insurance system. In the interviews with senior management representatives, the respondents make a general description of the development of the SSIA over the last decade, where a broad image emerges of the SSIA having been too rigorous in their focus on assessing eligibility for sickness benefits, and thereby neglecting their responsibility for coordinating rehabilitation processes. Here, public accountability is raised as a central concern, and is described as a driver of the organizational changes. The centralization of the SSIA from several regional to one national authority is mentioned as an explaining factor for the previously strong focus on standardization. Now that this re-organization has settled, the pendulum is shifting toward a more client-centered way of working in order to meet expectations from the public. A senior management representative notes how today's role demands flexibility on the side of the official in coordinating stakeholders and adapting their actions to different circumstances, which calls for more discretion.
The discretion was probably greater 10 years ago; the process wasn't as structured, not as detailed as it is now. So, I definitely think officials perceive having less discretion. And I think that's counter-productive, since there's many stakeholders involved, and much variety in sick leave cases. There's different solutions needed, and officials need to act with flexibility and adapt their actions to the situation (National insurance coordinator).
Managers also emphasize how the public lacks knowledge about the current insurance regulations; there are unrealistic expectations of what the SSIA can offer their clients, where such expectations are reflections of the more generous system of the past. This understanding of public accountability as trust in the agency has introduced new interpretations of professional accountability, where officials are expected to focus more on how they are meeting clients and their pedagogical responsibility in describing the current system in order to manage expectations.
Officials also describe how their role has changed. Previously, their work was detail-oriented and all cases were to be handled in a standardized way. In the present role, the work is described as gentler and broader with a holistic perspective of the client. Further, the role is now concerned with the entire process of a client, compared to how officials previously could be responsible for only parts of that process. However, officials with longer experience note how the current role in a historical perspective is much more controlled with less discretion for officials to decide upon their work. Today's role is more structured, and the performance of the officials is measured in greater detail, which affects what is prioritized.
It's much more regulated today, what we are supposed to be doing. There are a number of measurements of our work and more goals, so it's more structured today. 10 years ago, you were expected to work toward a broader goal, such as bringing people back to work, or shorten the sick leave spells, or retire people, those kinds of overarching goals. There weren't these goals with numbers attached to them, such as managing an application within a certain number of days, or making a specific assessment within 180 days. It changes the work entirely (Insurance official 4, office 2).
That managers and officials describe a shift from administration to case management may thus be seen, at least partly, as a shift back to how the role was perceived before the SSIA was centralized, and before the regulations became stricter.
The values introduced into the SSIA in recent years are focusing on service-oriented, efficient and fair management of cases. Officials also emphasize working with coordination activities in order to promote stakeholder collaboration, and to promote the clients' own responsibility and involvement in their rehabilitation process. This is seen as related to changes in regulations where individual responsibility is more clearly emphasized.
To have this coordinating role is important, that people on sick leave can feel that they can talk to us. A plan where this person is helped in taking a new step, so that it may be more sustainable (Insurance official 1, office 1).
Officials describe their role as divided, as their responsibilities tend to move in two different directions, with focus on administration on the one hand, and coordination on the other. Managers clearly point out that the officials' role should not be therapeutic, which may be interpreted as a resistance to officials taking on too much responsibility for their cases.
You should be clear about that we are no therapists. We should listen to what is needed and try to arrange so that others do what they are supposed to. So, we shouldn't be too caring, it isn't our job. If you are, I think you will have problems managing it (Office manager 1, office 1).
The respondents identify several skills required for officials. Both office managers and officials describe a complex role where knowledge about the insurance and legislation is of central importance, as is the ability to deal with people. Further, the respondents mention the need of both experience-based knowledge and formal higher education.
I'm not sure we have been up to date in this development, in supplying the officials with knowledge and competence. You have to be holistic and see the context to come up with good solutions, and it's around that we have started to think about how to best professionalize our officials for this. Because this is a tough task, where many stakeholders are involved, and we are supposed to coordinate it (National insurance coordinator).
Several managers emphasize the importance of higher education for officials. Still, young academics have less life experience compared to older officials, which may have an impact on the professional ability to address the client's situation. A few managers also stress that it may be easier for the organization to form employees with less education. Hence, managers express an ambivalent stance toward the required skills of officials, where they simultaneously need to be educated enough to practice professional discretion, and obedient enough to comply with organizational procedures.
Discretion is described as embedded in a context of routines and procedures that are quite restrictive. Officials have discretion to decide over their daily agenda and planning (i.e., task discretion), although within the limits of a structured routine anchored in the time limits in the sickness insurance regulations (rule discretion). Hence, the officials view their current role as both more controlled than previously (due to regulatory changes and a recent history of micromanagement), and as involving more professional discretion (due to the new focus on case management). The SSIA of the past is described as having broader objectives and less detailed procedures and routines, and the SSIA under NPM as heavily structured through performance measures and micro-management. The accounts from the officials suggest that the current SSIA appears to be a mix of the two. Here, the professional accountability is interpreted as dependent on the professional skills of the official in meeting clients and managing cases, linked to an interpretation of public accountability into values related to client-orientation. The current approach to professional accountability is however still influenced by NPM, as the administrative routines are abundant and performance is heavily monitored.
Managing the Implementation of New Principles
Both office managers and officials with longer work experience describe the SSIA as an organization where changes in what is prioritized come and go with a certain regularity, and that they are therefore accustomed to reforms. Most of the interviewed officials had however been employed for a shorter time, and did not share these experiences of previous changes. Notably, most interviewed officials had been employed after the SSIA was centralized and stricter eligibility criteria were introduced into the sickness insurance system. An official who was employed in 2003 explains how the focus has shifted over the years:
When I started, we were just leaving one way of managing sick leave cases, where we were, what can I say… more generous.
[…] Then we entered a period with rising sick leave numbers, where we had increasing caseloads and it became a political issue that sick leave rates had to come down. It changed the atmosphere completely, and case management changed. We were told to end sick leave cases. And that didn't turn out good, I don't think many officials enjoyed working then, and we had to take a lot of frustration from people on sick leave. So that was a bit rougher. But now it has changed again (Insurance official 3, office 2).
While the changes toward client-orientation are broadly welcomed, it can be noted how the organizational culture changes slowly and that previous principles based on NPM and traditional public administration co-exist with new initiatives. Some officials express a certain weariness with the constant reforms and indicate that they usually work 'as usual' anyway. Leadership emerges as an important aspect of the change process, where the office managers carry much of the organizational culture and values in their managerial style, which still appears to be much influenced by NPM. One manager expresses how there needs to be a balance between change and stability:
The work never stands still, but at the same time I hope we have employees who like that there's always a development, a change. But it cannot be too much of those things; I can worry about that (Office manager 10).
Senior management representatives express how the new principles should lead to managers shifting from the previous system of micromanaging to focusing on coaching and emphasizing professionalism, e.g., through delegating responsibilities to teams of officials.
To show that you have faith in the employees and the work they are doing, and supply competence in the right direction, so to speak. To manage through knowledge and competence instead of numbers and statistics. I am personally convinced that this is a recipe for success (Competence manager).
This is not mirrored by the officials, who express how their performance is still very much measured in terms of quantitative production quotas. Hence, the NPM principles still linger in the workplace culture, and, according to the officials, the rhetoric of increased discretion is challenged by the plethora of routines. The public accountability of officials is, as a consequence, still perceived as being much determined by keeping to procedure, rather than by the quality of encounters with the clients. Performance measures do not appear to have changed when the new principles were introduced, and there are no strategies for determining the quality of meetings, or whether officials are using the proposed client-oriented methods in order to promote RTW (e.g., Motivational Interviewing).
Management representatives describe the recent changes as a coherent development, where the introduction of lean, teams and Motivational Interviewing are seen as parts of a larger strategy. Among officials, on the other hand, the various changes are generally seen as separate from each other, where it is uncommon to link these changes to a broad perspective of how the SSIA is moving toward a new direction. It is illustrative how the changes toward team and lean organization are considered by some officials to be examples of focusing on effectiveness rather than quality:
Motivational Interviewing is pretty far from the numbers, from the results and the graphs and everything. It becomes more of a quality issue, which is often lost when we speak about this team and lean stuff; that's more focused on the work, meaning results and effectiveness, in a way (Insurance official 4, office 2).
This indicates different interpretations in different layers of the organization, where the broad picture emphasized by management is not as common among the officials. Although officials do not connect the specific reforms to one another or to an overall strategy, they do however describe a general development of their role as moving from being an administrator focusing on assessing eligibility, to becoming a case manager, implying that they have perceived the broad orientation of the organizational changes.
Officials appear to struggle to balance the production demands of the line organization with the more client-oriented values that are currently promoted by the senior management. They mention how the constant stream of new initiatives from the management tends to increase their workload, resulting in increasingly poor working conditions. The heavy workload makes the work challenging, especially as the role becomes increasingly complex. The changes toward a more client-centered approach with more focus on coordination result in greater demands on officials to participate in different meetings, since other stakeholders are requesting their presence. Officials describe how the high workload and the need to attend meetings inhibit their ability to fulfil their role in relation to the client, and that they have to prioritize the most urgent issues (most often securing pay-outs of sickness benefits). Officials also express difficulties related to the coordinating function, where they do not have control over other stakeholders' activities, e.g., waiting lists in healthcare. Further, different views on the situation of the client among stakeholders can obstruct the rehabilitation process, as can other stakeholders' lack of understanding of the role and responsibilities of the SSIA.
When it comes to employers [of sick-listed clients], I can't do very much since they have their [concerns], you have to accept that. And healthcare has their waiting lists, and that I cannot influence at all (Insurance official 5, office 1).
It is likely that the more complex tasks, such as coordinating rehabilitation processes, managing stakeholder interactions and promoting RTW, are those requiring the most professional discretion, since routines and regulations are less detailed in this area. When working conditions force officials to focus on core tasks, the room for discretion diminishes, as does the room for reflection and the possibility for officials to engage in continuous improvements. For instance, the introduction of teams is mentioned as positive, although the officials in the present study primarily used the teams for scheduling, and not for peer consultation or developmental work.
Discussion
Respondents in the study describe a pendulum between standardization and client-orientation, where officials were given an increasing responsibility for managing their daily work, while still needing to comply with a complex set of regulations and routines. The recent shift of the pendulum is toward more task and value discretion, albeit within the limits of a strict legislative framework, which limits the rule discretion. Further, the task discretion is limited by detailed administrative routines. The officials in this study welcome increasing discretion since it facilitates the complex tasks of managing stakeholder coordination and promotion of RTW. On the other hand, they are struggling to manage the different concurrent management principles within the organization. While lean and team organization are being put forward on an organizational level, performance measures based on NPM and traditional bureaucratic principles are still prioritized in the daily work and in the feedback given by managers.
The introduction of new principles was due to a crisis in public trust in the system. Given that trust is mutual, this can be related to studies of how systems differ in how well they trust their clients, where social-democratic welfare regimes generally are more trusting [6]. It may be argued that the activation policies and the NPM principles have caused the sickness insurance system to deviate from the elements that generated high public trust. As reflected in previous research, NPM has been used as a tool to introduce the activation paradigm into disability policies, where caseworkers have internalized this into a belief system where a good caseworker is a person who has understood the notion of activation, and hence the importance of being more strict in relation to clients [25]. It may be argued that the NPM principles are much in line with such policies, and that these values are still strong within the organization.
The implementation of organizational reforms is complex, and often achieves only part of the intended results. Since public organizations are embedded in administrative traditions forming an institutional path dependency, new ways of organizing and managing officials' work may be difficult to implement [26,27]. Further, the notion of "value-based" principles is slippery, and the promoted values may be transformed during the implementation process. For instance, studies have pointed out how the adaptation of lean is often narrow when implemented, and often limited to certain tools [3], most likely those that are in line with the current principles and procedures within an organization and therefore are considered easier to implement. In this case, the use of lean tools to promote client-oriented values is complicated by the authoritative context of a state agency, and the NPM principles and performance measures. Holmgren et al. [28] argue that there are three parallel management principles in today's SSIA: the traditional public administration model with focus on bureaucracy and regulations; balanced scorecard management inspired by NPM with focus on results through detailed measures of performance; and lean, focusing on value for customers and efficient processes. These principles may complement each other, but may also come into conflict. The challenge of introducing lean in public organizations with strong cultures and structures has been recognized in previous literature [2], either because the change is in conflict with professional values, or because it is implemented top-down, which causes frontline staff to focus more on internal measures and targets than on the end-users. In this case, one ambition of introducing new principles was that officials, through using lean, should contribute to continuous improvements, which in the context of an authority is complicated by the amount of legislation that governs the work of the employees and by a hierarchical, rule-oriented organizational culture.
A previous study of the recent reforms in the SSIA has pointed out that the number of rules has not decreased, which makes the application of professional discretion through team work complicated and frustrating for officials, where rule- and result-orientation is still heavily prioritized by the management [29]. The recent reforms may hence be seen as supplements to the existing NPM paradigm, rather than as a distinctive break with it. Officials' discretion is limited to small details, while the overall work routines are much controlled: they are struggling with their professional discretion in relation to the legal demands, the detailed work routines and the recent history of micromanagement. The new value-based principles also place large demands on the front-line management, especially in how to combine safeguarding fundamental adherence to insurance regulations with supporting professional discretion, i.e., keeping a balance between discretion and governance. In turn, officials balance not only consideration toward clients and obedience to legislation, but also loyalty to their superiors [30], and hereby to the organizational culture that managers project through their management style.
This discrepancy between the rhetoric of the senior management and the practice of office managers and officials may be seen through the lens of a layered organization [14], in which the senior management adapts to new values in order to meet expectations from the public, while other layers (office managers) are still under the influence of how the organization measures its results. This creates a de-coupling of the managerial (and externally-oriented) rhetoric of the organization from the actual work performed by officials, where office managers' NPM-based strategies effectively shield off the influence of the rhetoric from how the work is monitored and rewarded.
The reforms introduced to promote professional discretion may be seen as an initiative to prevent negative effects of detailed top-down management, while the complications surrounding its implementation points to challenges in promoting discretion in a state authority governed by strict legislation. The material displays an interesting combination of bottom-up rhetoric (teams being drivers of innovation and flexible case management) and top-down management strategies (micromanagement, a strong focus on organizational order and routines, and detailed regulations). In their promotion of professional discretion, the SSIA has largely used top-down strategies where the officials, used as they are in following orders, have carried out the required changes, e.g., establishing teams. The actual changes, however, are limited in scope due to the high workload of officials, which is largely managed by falling back into the routines established in the previous NPM paradigm (also illustrated by the largely failed implementation of Motivational Interviewing, as reported elsewhere [23]). While top-down and routine-based strategies may be effective for repressing doubt and promoting standardized work routines, they may be destructive for employees' creativity and use of intellectual resources [31]. This, in turn, may be related to the conditions for officials to perform their work in an ethically sound way (value discretion), where the application of regulations needs to be balanced with attention to characteristics of the individual client. It may be argued that managing that balance requires an organization and a management that does not suppress officials' intellectual abilities or room to question the application of rules, especially in situations where routinized actions may have serious consequences.
The purpose of the reforms was to improve client satisfaction and public legitimacy through more client-oriented procedures. A possible implication for clients of introducing value-based management principles is, therefore, that they receive services that display more respect for the details of their individual case and hence get more adequate support in managing their situation. Although this study did not examine clients' perceptions, we can conclude from the data that the reforms did not have the desired effect on procedures, since previous organizational principles prevailed and complicated the introduction of new approaches. Finally, it should be mentioned that yet newer reforms have taken place since the data in this article was collected, which again shifts the balance between discretion and governance, towards the latter. At the time of writing, the SSIA has had a change of Director-General, which tends to lead to new reforms, and Sweden has had a change of government, which tends to lead to new policies. The influence of leaders and government policies on the practices in public agencies, and as a consequence on the services provided to clients, is outside the scope of this article, but may be an important topic for future research in this field.
Methodological Considerations
This is a qualitative study, where the perceptions and attitudes reported are to be seen as representative of the informants, and not necessarily of the SSIA as a whole. The results are however much in line with previous studies of the SSIA, both qualitative and quantitative, which strengthens the trustworthiness of the results. Since this is a single-case study, the results are limited to a Swedish sickness insurance context; the results may however be transferred to studies of public organizations in other contexts, where similar developments in management and organizational principles are described.
Conclusions
This study illustrates how re-interpretation of public values may lead to a change in how public and professional accountability is defined, where the focus shifted from routines and performance measurements toward professional discretion and the quality of encounters. This development is in line with current evidence on work disability prevention and promotion of RTW, where trustful cooperation structures between the central stakeholders are commonly argued for. However, the results also show how these cultural changes were interpreted differently across different layers of the organization, which illustrates the complexities of introducing changes. The NPM discourse is strong within many public organizations where accountability is commonly managed through top-down performance measures; the results point to the lingering influence of NPM, which makes it challenging to promote discretion and client-centered principles. It is therefore important for public organizations to reconcile new organizational principles with the current organizational culture and how this is manifested through managerial styles, which may be resistant to change.
Data Availability
The dataset analyzed during the current study is available from the corresponding author on reasonable request.
Funding
The study was funded by the Swedish Social Insurance Agency. The funding organization had no part in the analyses or the writing of the manuscript.
Metal Flux Synthesis and Atom Probe Tomography Analyses of Different Intermetallic Al–Mg Phases
Abstract Aluminium alloys and intermetallics are being widely investigated as potential aerospace materials. In this study, a metal flux has been employed as a synthesis method for different intermetallic phases. The considerable potential of liquid aluminium is demonstrated as a powerful synthesis solvent for important intermetallic phases such as Al2Mg, Mg2Si and CaMgSi. The mechanical properties of the synthesized system have been estimated through hardness analysis using a nanoindentation hardness test. The microstructure evolution and the phase analyses were examined using scanning electron microscopy (SEM) and X-ray diffraction (XRD). The interaction between intermetallic phases and the eutectic microstructure of the molten flux is rather complex and yet to be fully understood. Thus, tracing the local chemistry on an atomic scale is crucial. The atom probe tomography technique is utilized to characterize the intermediate reaction steps of the flux-grown intermetallic phases. The study proposes a direct approach to investigate the reactions involved during the formation of the synthesized intermetallic phases.
Introduction
Al and Mg are two highly important lightweight metals that are commonly used in applications that require reduced vehicle weight to improve fuel economy. Innovations in design strategies are always directed toward weight-saving measures; therefore, it is common to employ lightweight materials [1]. Other light elements such as Si, Ca and Zn are usually added to Al and/or Mg alloys to maximize their functionality. Al-Mg-Si alloys, for example, are versatile heat-treatable alloys with high strength/weight ratios. They are easy to extrude and have good hardening characteristics, and thus find application in a wide range of areas [2]. The potential of these alloys to save weight, improve fuel economy, and decrease exhaust emissions has led to a growing research interest in understanding and discovering new intermetallic compounds that contain light metals, i.e. Al and Mg.
In general, studying the intermetallic phases of known compounds is an important subject in metallurgy. To create advances in the understanding and discovery of new intermetallic compounds, it is also desirable to draw on other subjects, such as solid-state chemistry. The synthetic tool box for intermetallic compounds contains many powerful techniques. These techniques, such as arc melting and induction heating, involve very high temperatures, which poses a serious limitation [3]. High temperatures usually produce the most thermodynamically stable product and leave little room for kinetic control because of the high energies involved [3]. For this reason, methods that permit reactions to be carried out at lower temperatures are preferable. A molten metal flux is a good example of such a method. Metal flux has been highlighted as an important tool for the exploration of intermetallic compounds. It has also been used as a medium for the synthesis of a new class of intermetallics. This approach allows the discovery of new materials as well as the growth of large crystals of known materials [3]. The method for synthesizing alloys in this approach is simply to mix appropriate metals that will act as a flux. The aim is to lower the melting point of the solvent by forming eutectics. Moreover, a mixed flux introduces an additional avenue for controlling the reaction chemistry [4]. It is not necessary to completely dissolve the elemental components of the desired product when using the metal flux methodology, as the flux will act as a transportation medium that can dissolve a component in one location while the product grows at another location in the sample container [3].
Among the different fluxes used as media for the growth of single-crystal phases, Al has been shown to be useful in the synthesis of several interesting intermetallic compounds, owing to its favourable characteristics: it has a low melting point (660°C), can dissolve a large number of elements, and dissolves readily in non-oxidizing acids.
A large variety of intermetallic aluminides have been prepared from liquid Al, and many of them feature fascinating new structures. Many of these intermetallic aluminides are also key components in advanced Al alloys [5][6][7]. In particular, the use of an Al/Mg mixture is of practical interest [7]. The Al/Mg phase diagram exhibits a wide, low-melting range (40-60 at.% Mg, ∼450°C) between two binary phases (Mg2Al3 and Mg17Al12) [7]. Moreover, Al/Mg mixtures have proven to be good solvents for the synthesis of silicides such as CaMgSi and R5Mg5Fe4Al12Si6 (R = Gd, Dy, Y) [4,6].
In light of the information presented above, knowledge of the solidification process and of crystallization within molten Al is required. Here, the synthesis of technologically important phases from an Al metal flux is presented. Different characterization techniques, namely X-ray diffraction (XRD), scanning electron microscopy (SEM), hardness testing (HT) and atom probe tomography (APT), are then applied to shed light on the growth mechanism of the different synthesized phases. The influence of these synthesized phases on the mechanical properties of the Al-Mg based alloy was also investigated in this study.
Experimental
Appropriate quantities of Al sheet (99.9%, Sigma Aldrich), Mg metal slugs (99.9%, Sigma Aldrich), metallic Ca shots (99.9%, Acros Organics) and a Si wafer (99.9%, SVM) were weighed out in a 15/15/3/3 Mg/Al/Ca/Si mmol ratio in an N2 glove box and placed in a Ta crucible (Figure 1(a)). The Ta crucible was welded shut using an arc-melter (Edmund Bühler GmbH). After both sides of the Ta crucible were welded, it was placed into a quartz ampule; the Ta crucible was used to prevent Al vapour from attacking the quartz, which would lead to loss of the protective atmosphere. The Ta crucible was sealed in an evacuated quartz ampule and kept slightly elevated off the bottom by gently shaking the ampule, which was then placed into an electric furnace. The sample was heated from room temperature to 950°C and held for 5 h, cooled to 750°C over a period of 60 h and then held at 750°C for 20 h, at which point the reaction tubes were removed from the furnace. A chart illustrating this temperature programme is shown in Figure 1(b). The tube was then quickly inverted and quenched in cold water. The samples were then extracted from the Ta tube.
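For readers who wish to reproduce the weighing step, the 15/15/3/3 mmol Mg/Al/Ca/Si charge converts directly into masses via standard atomic weights. The short Python sketch below illustrates this conversion; the atomic weights are standard IUPAC values, and the script itself is only an illustration, not part of the original procedure.

```python
# Convert the 15/15/3/3 mmol Mg/Al/Ca/Si charge into masses to weigh out.
# Atomic weights (g/mol) are standard IUPAC values.
ATOMIC_WEIGHT = {"Mg": 24.305, "Al": 26.982, "Ca": 40.078, "Si": 28.085}
CHARGE_MMOL = {"Mg": 15.0, "Al": 15.0, "Ca": 3.0, "Si": 3.0}

for element, mmol in CHARGE_MMOL.items():
    mass_mg = mmol * ATOMIC_WEIGHT[element]  # mmol x g/mol = mg
    print(f"{element}: {mmol:5.1f} mmol -> {mass_mg:6.1f} mg")
```

For this charge, the script gives approximately 364.6 mg Mg, 404.7 mg Al, 120.2 mg Ca and 84.3 mg Si.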
Samples for SEM-EDX analysis were prepared via a standard metallographic procedure: the disk samples were mounted in a goniometer holder and mechanically ground with distilled water and SiC paper of 600, 1200 and 2400 grit size. This step was followed by final polishing to eliminate scratches, using colloidal silica on cloth and diamond pastes with sizes of 6, 3, 0.5, 0.2 and 0.1 µm, respectively. The samples were then ultrasonically cleaned using acetone. The samples were examined using an SEM (Quanta 200) equipped with energy-dispersive X-ray spectroscopy (EDX). For XRD, the specimens were ground into powder in an agate mortar. The analysis was carried out using a STOE STADI MP powder diffractometer with Cu Kα1 radiation operated at 40 kV.
The mechanical properties of the synthesized material were tested using hardness measurements. HT was performed using a nanoindentation system (NanoTest Vantage) with a load of 200 mN and a dwell time of 10 s. The hardness value was estimated by averaging the values of eight measurements taken at different locations in the sample.
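As a minimal illustration of this averaging step, the Python sketch below computes the mean and sample standard deviation of a set of indentation readings. The eight values used here are placeholders for illustration only; the individual measured readings are not given in the source, which reports only the averaged result (see the Results and discussion section).

```python
import statistics

# Eight nanoindentation hardness readings (GPa) taken at different sample
# locations. These values are illustrative placeholders, not measured data.
readings_gpa = [2.1, 3.4, 2.9, 1.8, 3.7, 2.6, 3.1, 2.8]

mean_h = statistics.mean(readings_gpa)
std_h = statistics.stdev(readings_gpa)  # sample standard deviation (n - 1)
print(f"Hardness: {mean_h:.1f} +/- {std_h:.1f} GPa")
```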
APT tips were prepared using the focused ion beam (FIB) method. FIB-based, site-specific specimen preparation was performed using an FEI Helios system. APT experiments were performed in a Cameca LEAP 4000 HR in laser pulse mode. A diode-pumped Nd:YAG solid-state laser operating in the frequency-tripled ultraviolet region, with a wavelength of 355 nm, a pulse duration of approximately 12 ps and a repetition rate of 200 kHz, was used. The best analytical parameters were chosen by performing different test analyses and were found to be 50 pJ for the laser pulse energy and 50 K for the base analysis temperature. The detection rate was set to 0.01 ions per pulse. The location of the laser spot on the specimen was monitored using a charge-coupled device camera. Data reconstruction was carried out using the IVAS software [8].
Results and discussion
The XRD pattern of the synthesized material is shown in Figure 2. The acquired pattern was found to match the calculated patterns for Al2Mg, Mg2Si and CaMgSi. An optical micrograph of the microstructure is shown in Figure 3. Areas of different contrast are readily seen in the image, indicating the presence of different phases within the microstructure.
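Such phase matching of a powder pattern rests on Bragg's law, nλ = 2d sinθ (with n = 1). As a minimal sketch, assuming a literature lattice parameter of a ≈ 6.35 Å for cubic antifluorite Mg2Si and Cu Kα1 radiation with λ = 1.5406 Å (neither value is stated in the source), the expected low-angle Mg2Si reflections can be computed and compared against the measured pattern:

```python
import math

WAVELENGTH = 1.5406  # Cu K-alpha1 wavelength in angstrom
A_MG2SI = 6.35       # assumed literature lattice parameter of cubic Mg2Si, angstrom

def two_theta_deg(h, k, l, a, lam=WAVELENGTH):
    """Diffraction angle 2*theta (degrees) for reflection (hkl) of a cubic lattice."""
    d = a / math.sqrt(h * h + k * k + l * l)  # cubic d-spacing
    return 2.0 * math.degrees(math.asin(lam / (2.0 * d)))

# Strong low-angle reflections allowed for the antifluorite (Fm-3m) structure
for hkl in [(1, 1, 1), (2, 2, 0), (3, 1, 1)]:
    print(hkl, f"2theta = {two_theta_deg(*hkl, A_MG2SI):.1f} deg")
```

With these assumed values, the (111), (220) and (311) reflections fall near 24.2°, 40.1° and 47.4° 2θ, respectively.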
SEM/EDX analyses were used to quantify the different phases that exist in the material. Selected pieces of synthesized material were arranged on double-sided carbon tape adhered to an Al sample puck. An example of an SEM micrograph is shown in Figure 4. Using EDX analysis, the phases present in the synthesized material were identified as AlMg, Al2Mg, Mg2Si and CaMgSi, as shown in Figure 4(a). The existence of multiple phases in the microstructure can be explained as follows: the use of a stoichiometric reaction does not result in the formation of a pure single crystalline phase but instead yields a mixed-phase powder composed of CaMgSi, Mg2Si and Ca2Si [4]. Reactions carried out in pure Mg flux do not usually yield any of the title compounds, such as CaMgSi; however, the addition of Al may promote this reaction. Al usually acts either as a transport agent within the flux or as a solvent for the incorporation of elements that have low solubility in molten Mg. Al metal has been shown in the literature to be a highly reactive solvent that facilitates the growth of aluminide intermetallics [9]. The four phases observed in Figure 4(a,b) might, however, be in a metastable stage, which would be expected given the absence of real single-crystal phases from the microstructure shown in Figure 4. According to the EDX analysis applied to different regions of the sample and presented in Figure 4, Al exists in the matrix in the form of AlMg and Al2Mg, and incorporation of Al atoms into the observed products, such as CaMgSi and Mg2Si, was not recorded. In general, the separation of a single-crystal phase from the surrounding flux or matrix has been carried out by centrifugation or by dissolving the excess flux in sodium hydroxide (NaOH), which etches away the Al [10]. Dissolving the excess flux or separating out a crystalline product is outside the scope of this study, and so a powder composed of multiple phases was used instead.
The correlation between the microstructure and the mechanical properties is also of great interest, as the ternary system of Al, Mg and Si is known to have excellent age-hardening characteristics [11]. Moreover, alloying Al/Mg elements with SiC particles has been reported to yield a good ultimate tensile strength of 250 ± 6 MPa [12]. In our study, the hardness properties of the synthesized material were evaluated. According to the hardness measurement, the material has a relatively high hardness value of 2.8 ± 1 GPa, which indicates good hardness behaviour. However, other mechanical tests are required in order to make a quantitative estimate of the overall mechanical properties. The relationship between the observed hardness behaviour and the microstructure of the synthesized material requires an investigation at an atomic scale. Through such an investigation, the correlation between the macroscopic and microscopic properties can be understood. Moreover, an atomic-scale investigation will allow the transitions between different phases to be followed.
Knowledge of the mechanisms that govern the growth of an intermetallic product from a molten flux is limited. Therefore, the APT technique is used to investigate this mechanism. APT is one of the most powerful microscopy techniques in existence: it provides a three-dimensional (3D) image at the atomic scale with single-atom sensitivity, in which each atom or isotope in the image is identified. The fundamental data format is the 3D position and identity of individual atoms in a volume that contains potentially millions of atoms; thus, different information about the analysed material can be gleaned. Detailed information about APT can be found in Refs [13,14]. The first step in preparing an APT needle-shaped specimen using site-specific FIB preparation is to select an area of interest, as marked in Figure 5. A number of FIB-based preparation techniques have been developed to create APT tips that contain the features of interest; the methods are described elsewhere [15]. Figure 6 illustrates the steps followed to prepare APT tips with the aid of the standard FIB lift-out method. After imaging the surface of the sample using the backscattering signal, the region of interest was selected (Figure 6(a)). A FIB-deposited platinum strip was added to protect the surface and mark the region to be extracted (Figure 6(b)). The platinum layer is typically 2-3 µm wide and 2.5 µm thick, with a length depending on the geometry of the region of interest (3 µm in our case). After depositing the Pt layer, staircase-shaped cross sections were cut on both sides beyond the Pt layer, resulting in a lamella of approximately 16 µm × 12 µm × 15 µm in size (Figure 6(c)). A micrometer-sized needle, called an Omniprobe, was then introduced and attached to the lamella via Pt welding (Figure 6(d)). After the lamella was cut free, it was manipulated and positioned on commercial flat-topped post arrays (Micro-tip™ arrays) (Figure 6(e)). The last step is to convert the lifted-out lamella in Figure 6(f) into a sharp needle with a diameter below 100 nm. This is accomplished through a series of annular milling steps with progressively smaller beam currents and inner diameters (Figure 7(a)) [14], until the desired radius is achieved. The ideal shape of a tip prepared via this technique is shown in Figure 7(b).
After several APT tips had been prepared, the first APT experiment was devoted to analysing the Al/Mg matrix or flux region (Figure 8). The radius of the tip apex was estimated to be approximately 50 nm (Figure 8(a)). The measured data set is a collection of 5 million atoms (Figure 8(b)). The data quality was assessed by inspecting the desorption map. Figure 8(c) shows the desorption map, i.e., the hit map that forms on the detector during analysis, for the data set in Figure 8(b). It is clear from Figure 8(c) that the hit density at the detector is almost homogeneous, which indicates that the atoms were field-evaporated from the specimen in a highly controlled order. An accurate chemical composition for the whole reconstructed volume shown in Figure 8(b) is given in Table 1. The distribution of impurity atoms such as Si and Ca in the Al/Mg flux is shown in the 3D atom map in Figure 8(b).
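This homogeneity check lends itself to a simple script. The sketch below is our own illustration (not from the original study; the file name and column layout are hypothetical): it bins detector hit coordinates into a 2D histogram and compares the per-bin spread with the Poisson expectation, which is one plausible way to flag an uneven evaporation sequence.

```python
import numpy as np

def hit_density_map(x, y, bins=64):
    """Bin detector hit coordinates into a 2D histogram (the 'desorption map')."""
    counts, _, _ = np.histogram2d(x, y, bins=bins)
    return counts

def homogeneity_ratio(counts):
    """Crude uniformity metric: relative spread of per-bin counts.

    A relative spread close to the Poisson expectation 1/sqrt(mean)
    suggests a homogeneous, well-controlled field evaporation sequence.
    """
    occupied = counts[counts > 0]              # ignore bins outside the detector
    rel_spread = occupied.std() / occupied.mean()
    poisson_expectation = 1.0 / np.sqrt(occupied.mean())
    return rel_spread, poisson_expectation

# Hypothetical usage with exported detector coordinates:
# x, y = np.loadtxt("tip1_detector_hits.txt", unpack=True)
# print(homogeneity_ratio(hit_density_map(x, y)))
```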
Transitioning from the Al/Mg matrix or flux region toward a single-phase region of Mg2Si comprises the next step of the analysis. In this case, the lamella extracted from the sample to prepare an APT tip was selected from the Mg2Si region (Figure 9(a)). APT analysis of this tip yielded a collection of 20 million atoms (Figure 9(b)). Once again, the desorption map in Figure 9(c) demonstrated that the APT data set is of good quality. The chemical composition of the whole reconstructed volume is shown in Table 1; it is in fairly good agreement with that obtained via EDX analysis. The distribution of impurity atoms of Al and Ca is also shown in Figure 9(b).
To understand the growth mechanism, it is important to perform APT analysis in an area of the sample that contains both flux and intermetallic phases. In this case, the position for cutting the lamella was selected to include the area at the interface between the phases and the flux. The top view of the reconstructed volumes of the analysed tips that were prepared from the selected areas in Figure 5 is shown in Figure 10. Here, the presence of different phases is clearly visible. Moreover, the APT analysis of the region corresponding to the CaMgSi phase is shown in Figure 11. In this case, no distribution of Al was observed, which confirms the idea that Al does not incorporate into the final product. The chemical composition of the reconstructed volume is shown in Table 1.
To follow the transition between these phases, different small volumes from the reconstructed volumes in Figure 10 were cut and quantified individually. Quantification of the chemical composition was done using the concentration depth profile method; detailed information about this method can be found elsewhere [13,14]. A series of the small cut volumes together with their corresponding depth concentration profiles are shown in Figure 12. According to the depth concentration profiles drawn from Figure 12(a), the chemical compositions of the observed phases have been identified as Ca2Si and Al2Mg. For the phases observed in Figure 12(b), the depth concentration profile shows that they correspond to AlMg and CaMgSi. Moreover, one region in the reconstructed volume represents the Mg2Si phase according to the depth concentration profile (Figure 12(c)). The chemical compositions of the Ca2Si and Al2Mg phases are also given in Table 1.
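As an illustration of the concentration depth profile method, the following minimal Python sketch (our own, not taken from Refs. [13,14]; all array names are hypothetical) slices a reconstructed APT volume along the analysis direction and reports the atomic fraction of each element per slice:

```python
import numpy as np

def depth_concentration_profile(z, species, bin_width=1.0):
    """1D concentration depth profile from an APT reconstruction.

    z       : atom depth coordinates along the analysis direction (nm)
    species : element label for each atom ('Al', 'Mg', 'Si', 'Ca', ...)
    Returns bin centers and a dict of atomic-percent profiles per element.
    """
    z = np.asarray(z, dtype=float)
    species = np.asarray(species)
    edges = np.arange(z.min(), z.max() + bin_width, bin_width)
    labels = np.unique(species)
    profile = {el: [] for el in labels}
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (z >= lo) & (z < hi)
        total = in_bin.sum()
        for el in labels:
            frac = (species[in_bin] == el).sum() / total if total else np.nan
            profile[el].append(100.0 * frac)      # atomic percent per slice
    return edges[:-1] + bin_width / 2.0, profile
```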
A eutectic flux composed of Mg and Al solidifies at 450 °C, but above this temperature it becomes very viscous and difficult to remove. This could explain the presence of both the AlMg and Al2Mg phases (Figure 12(a,b)). Before the crystallization of CaMgSi takes place, the formation of a solid solution between Mg2Si and Ca2Si is also observed (Figure 12(a,c)). The probability of forming this solid solution was investigated theoretically using density functional theory (DFT) [15]. In that study, the authors reported the substitution of Mg atoms in the Mg8Si4 unit cell with Ca atoms and the substitution of Ca atoms in the Ca8Si4 unit cell with Mg atoms, which clarified the possible formation of an Mg2Si-Ca2Si solid solution.
Theoretical investigation of this system also confirmed that CaMgSi of the Ca2Si type, in which all of the Ca atoms occupying one type of 4c site are completely substituted by Mg while all other 4c sites remain occupied by Ca, is energetically quite stable. The APT analysis presented in Figure 12 suggests a transition path in which the Mg2Si-Ca2Si solid solution reacts with the AlMg phase. It can thus be expected that the transition produces the CaMgSi phase through an intermediate Mg2Si-Ca2Si solid solution. It has been reported that CaMgSi is the only equilibrium phase in the Mg2Si-Ca2Si pseudobinary system [16].
Based on the quantitative APT analysis above together with the DFT study [16], the transition from the molten flux to the single-crystal phase of CaMgSi can be summarized as follows. From the SEM micrograph (Figure 4) and the quantitative APT analysis, it appears that the reaction forming a single phase of CaMgSi is a peritectic reaction, in which two solid phases are in equilibrium and the transition proceeds from solid and liquid phases to different solid phases. However, the shape of the CaMgSi phase does not match the recorded shape of this crystal [4]. This might be explained in terms of the short annealing time, resulting in an incomplete crystal shape.
The presence of this mixture of phases in the microstructure, in addition to the distribution of impurity atoms, is responsible for the good hardness characteristics of our synthesized material.
In this study, the considerable potential of liquid aluminium as a powerful synthesis solvent for important intermetallic phases such as Mg2Si, Al2Mg and CaMgSi is demonstrated. The atom probe tomography technique is utilized to characterize the intermediate reaction steps of the flux-grown intermetallic phases. The study proposes a direct approach to investigating the reactions involved during the formation of the synthesized intermetallic phases using the APT technique.
Conclusion
The great potential of liquid Al as a powerful solvent for the synthesis of important intermetallic phases is demonstrated. Several important phases, such as Mg2Si, Al2Mg and CaMgSi, have been synthesized. The APT technique provides critical knowledge of the nanoscale evolution of the microstructure. Intermediate steps in the mechanism of the flux-grown intermetallic phases were investigated. The proposed path of the reaction is: AlMg + Al(l) + Mg2Si + Ca2Si → Al2Mg + 2CaMgSi.
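The element balance of this proposed path can be checked with a few lines of code; the snippet below is only a stoichiometric sanity check, not part of the experimental analysis:

```python
import re
from collections import Counter

def tally(side):
    """Total atoms of each element on one side of the reaction."""
    totals = Counter()
    for formula, coeff in side:
        for el, n in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
            totals[el] += coeff * (int(n) if n else 1)
    return totals

# AlMg + Al(l) + Mg2Si + Ca2Si -> Al2Mg + 2 CaMgSi
reactants = [("AlMg", 1), ("Al", 1), ("Mg2Si", 1), ("Ca2Si", 1)]
products = [("Al2Mg", 1), ("CaMgSi", 2)]

assert tally(reactants) == tally(products)
print(tally(reactants))   # Counter({'Mg': 3, 'Al': 2, 'Si': 2, 'Ca': 2})
```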
The reaction forming a single phase of CaMgSi is a peritectic reaction, in which the solid AlMg phase reacts with liquid Al and the Mg2Si-Ca2Si solid solution to produce other intermetallic phases, i.e., Al2Mg and CaMgSi.
A combination of physical metallurgy and solid-state chemistry might be used as a direct approach to improve the production of Al alloys and their mechanical properties.
Disclosure statement
No potential conflict of interest was reported by the author.
"year": 2019,
"sha1": "e0ec30748cee2a633214c9a502f5c3cc6bb3fc2f",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/16583655.2018.1542869?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "e47cdfd25f62c33f65cef464408664ab1fe941ed",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
Acetaminophen oxidation under solar light using Fe-BiOBr as a mild Photo-Fenton catalyst
This work demonstrates that Fe-BiOBr eliminates acetaminophen in aqueous media at mild pH using the solar photo-Fenton process. Fe-BiOBr is produced using microwave-assisted solvothermal synthesis, and the formation of the BiOBr phase is confirmed with XRD. SEM and TEM demonstrated the flower-like morphology, in which crystallite size decreases as a function of the Fe loading. The chemical environment at the surface of Fe-BiOBr is investigated with XPS. The results are connected with Raman analysis, which suggests the presence of oxygen vacancies in Fe-BiOBr. Furthermore, the effect of Fe in BiOBr is assessed by determining the optical band gap with UV-Vis. The Fe-BiOBr functionality is assessed during acetaminophen degradation. Fe-BiOBr revealed excellent performance in degrading acetaminophen in the first minutes (Q = 10 kJ m−2) under natural sunlight. Results reveal that 1% Fe content in BiOBr can degrade acetaminophen and its main byproduct (30 min, Q = 50 kJ m−2) at pH 5 and using 0.25 g L−1 of catalyst. A synergistic mechanism between heterogeneous photocatalysis and Fenton processes, with superoxide (•O2−) as the primary radical, followed by hydroxyl radical (•OH) and photogenerated holes (h+), is proposed. Our research contributes to the degradation of pharmaceuticals under mild conditions and sunlight irradiation.
Introduction
Acetaminophen (ACP) is an analgesic used as a first-choice treatment for pain and fever [1]; an average of 4% can be excreted by humans through the urinary tract after intake [2], yielding high concentrations of ACP residues in wastewater [3]. The environmental impact of ACP is worrisome due to its high consumption during the COVID outbreak, which nearly tripled between 2019 and 2021 [4]. Some studies reported ACP concentrations in wastewater up to 22.8 μg/L in North Scotland, 51.22 μg/L in North Mexico, 105.8 μg/L in Singapore, and 667 μg/L in Colombia [5-8]. In response to pharmaceutical discharges in wastewater, the European Union has implemented measures through the Water Framework Directive (WFD) to monitor and evaluate health risks associated with emerging contaminants. The Watch List (WL) is updated every two years to track the presence of contaminants and issue legal regulations, if necessary, to limit their environmental impact. Since 2020, the WL has included antibiotics like amoxicillin, ciprofloxacin, sulfamethoxazole, and trimethoprim, among others, to evaluate bacterial resistance. However, until now, ACP (Fig. 1) is not on the WL despite its high consumption and discharge [9], probably because it is not used as an antibiotic, although similar painkillers have been demonstrated to have antimicrobial properties [10,11], which might lead to resistance in microorganisms [12]. Like European countries, other countries such as Mexico have not defined a permissible limit for ACP in water. Other organizations, such as the Minnesota Department of Health (MDH) in the U.S.A., have established permissible levels in drinking water (200 μg/L) [13]. Guidelines similar to those in the U.S.A. should be followed, since large concentrations of ACP in water (4.46 μg/L) can lead to ecotoxicological risk in aquatic ecosystems; e.g., Daphnia magna has a half-maximal effective concentration (EC50) of 2.04 mg/L [14]. From this view, pharmaceutical-free water is vital, since pharmaceutical residues can cause genotoxic, mutagenic, and ecotoxicological effects in plants and animals [15].
Advanced oxidation processes (AOPs) can remove pharmaceuticals from water. The process involves the generation of radical species that oxidize organic molecules, such as ACP, until mineralization, i.e., the formation of innocuous products such as CO2, H2O, and inorganic salts. Among the AOPs, the photo-Fenton process produces the hydroxyl radical (•OH), a very reactive and highly oxidizing species. Homogeneous photo-Fenton uses iron salts to catalyze the decomposition of hydrogen peroxide to •OH. Nevertheless, it requires highly acidic conditions (pH 2-3) to avoid the formation of hydroxides, which lead to iron deactivation and limit the photocatalytic activity. This is the case for the photo-Fenton degradation of ACP, which has been studied in fine detail [16-18]. Other photo-Fenton approaches rely on chelation and/or immobilization of iron (Fe) on solid surfaces, which are attractive options to overcome deactivation [19]. These approaches are known as heterogeneous Fenton, or heterogeneous photo-Fenton if light is involved in the degradation process [20]. The heterogeneous photo-Fenton degradation mechanism is a synergistic combination of the photocatalysis of a semiconductor and the Fenton chemistry of surface iron (≡Fe). When light reaches the semiconductor, the photogenerated electrons in the conduction band accelerate the Fenton redox reaction at the photocatalyst surface by increasing the cycling rate of Fe3+/Fe2+, which promotes the decomposition of H2O2, yielding strongly oxidizing •OH [21], as summarized in Eq. (1) to (6):

Semiconductor + hν → e−(CB) + h+(VB) (1)
≡Fe3+ + e−(CB) → ≡Fe2+ (2)
≡Fe2+ + H2O2 → ≡Fe3+ + •OH + OH− (3)
h+(VB) + H2O → •OH + H+ (4)
e−(CB) + O2 → •O2− (5)
•OH/•O2−/h+ + ACP → byproducts → CO2 + H2O (6)

Heterogeneous photo-Fenton catalysts used to degrade ACP under controlled UV or visible light include Fe3O4, Fe2O3, Fe2O3-TiO2-clay, α-Fe2O3/g-C3N4, F-C3N4/NiFe2O4 and zero-valent Fe [22-26].
Recently, BiOBr, a p-type semiconductor, has received attention because of its narrow band gap (2.62-2.90 eV), which allows degradation under visible light [27,28]. The photo-Fenton activity can be further increased by the incorporation of Fe. Diverse approaches have been used to synthesize Fe-BiOBr, including solvothermal and microwave-assisted solvothermal methods.
A benefit of microwave-assisted solvothermal synthesis is that it is less time-consuming (~a few minutes to 1 h) [29-31] than conventional solvothermal synthesis (8 h) [32-34]. The time reduction can be related to the uniform heating by electromagnetic waves, which eases photocatalyst synthesis [34,35]. Applications of solvothermally synthesized Fe-BiOBr include the degradation of rhodamine B and bisphenol A [20], phenol [31], and atrazine [36], achieving complete degradation within 30 to 120 min. The main degradation mechanism has been suggested to involve •OH and h+ as the main oxidizing species [31], in close similarity to the chemical mechanism described in Eq. (1) to (6).
This work demonstrates that ACP can be degraded under solar illumination and mild conditions using low Fe loadings in an Fe-BiOBr photo-Fenton catalyst synthesized via a less time-consuming microwave-assisted solvothermal method. The BiOBr phase in Fe-BiOBr is confirmed with XRD. SEM and TEM demonstrated the flower-like morphology, in which crystallite size decreases as a function of the Fe loading. The chemical environment at the surface of Fe-BiOBr is investigated with XPS. The results are connected with Raman analysis, which suggests the presence of oxygen vacancies in Fe-BiOBr. Furthermore, the effect of Fe in BiOBr is assessed by determining the optical band gap with UV-Vis. The Fe-BiOBr functionality is assessed during ACP photo-Fenton catalytic activity, carried out with 0.25 g L−1 of catalyst load and pH 5. The latter conditions are most favorable, since most international regulations allow wastewater discharges to surface water with a pH of 5.5 to 9.5 [37,38]. The Fe-BiOBr catalyst showed excellent performance in degrading ACP in less than 10 min (Q = 10 kJ m−2) under natural sunlight. Our results can contribute to developing a Fenton photocatalyst to degrade pharmaceuticals under natural sunlight and mild conditions.
Photocatalysts synthesis
The photocatalysts xFe-BiOBr (x = 1 and 3 wt% Fe) were synthesized as follows: 3 mmol of Bi(NO3)3·5H2O and appropriate amounts of Fe(NO3)3·9H2O (0.19 and 0.56 mmol) were dissolved in 30 mL of ethylene glycol (EG) by sonication; similarly, 3 mmol of CTAB were dissolved in EG (30 mL). Subsequently, both clear solutions were mixed, stirred vigorously for 20 min, and transferred to Teflon vessels of a MARS 6 system (CEM Corp., USA), which was operated at 160 °C for 20 min at 450 W. Afterwards, the microwave vessels were naturally cooled to room temperature, and the solids were recovered by centrifugation, repeatedly washed with ethanol and distilled water, and dried in an oven at 80 °C for 12 h. As a reference, pristine BiOBr was prepared using the same procedure without the addition of the Fe precursor.
Structural and morphological characterization
The morphology of the photocatalysts xFe-BiOBr (x = 1 and 3 wt% Fe) was analyzed using scanning electron microscopy (SEM, JEOL JSM6510-LV) equipped with an energy-dispersive X-ray detector (EDX) and a high-resolution transmission electron microscope (HR-TEM, FEI TITAN G2 80-300, operated at 300 keV). X-ray diffraction (XRD) determined the catalysts' crystalline phase and crystallite size. XRD was carried out using a Bruker AXS Model D2 PHASER diffractometer (Cu Kα radiation, λ = 1.5406 Å) at a scan rate of 0.1°/s over a diffraction angle 2θ from 5° to 80°. The average crystallite sizes of the photocatalysts have been calculated according to the classical Scherrer equation:

D = Kλ / (β cos θB)

where D is the crystallite size (nm), K is a constant equal to 0.9, λ is the X-ray wavelength (1.5406 Å), β is the full width at half maximum (in radians) of the (1 1 0) diffraction peak, and θB is the Bragg angle, i.e., half of the 2θ diffraction angle [39].
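For illustration, the Scherrer calculation can be scripted in a few lines of Python. The peak position and width below are placeholder numbers, not the measured values of this work:

```python
import numpy as np

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size D (nm) from D = K * lambda / (beta * cos(theta)).

    two_theta_deg : 2-theta position of the (1 1 0) reflection, in degrees
    fwhm_deg      : full width at half maximum of that peak, in degrees
    """
    theta = np.radians(two_theta_deg / 2.0)   # Bragg angle
    beta = np.radians(fwhm_deg)               # FWHM converted to radians
    return K * wavelength_nm / (beta * np.cos(theta))

# Illustrative input: a peak near 2-theta = 32 degrees with a 0.8-degree FWHM
print(f"D = {scherrer_size(32.0, 0.8):.1f} nm")   # ~10 nm, the HR-TEM scale
```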
Chemical composition and band gap determination
Chemical species in Fe-BiOBr were identified using a Thermo Scientific Raman microscope with a laser diode as the radiation source (780 nm). Chemical species present at the surface of BiOBr and Fe-BiOBr were investigated with X-ray photoelectron spectroscopy (XPS). The spectra were recorded using a Thermo Fisher Nexsa G2 (Al Kα, 1486.6 eV, 120 W). The amounts of Fe in the prepared photocatalysts were determined using atomic absorption spectroscopy (AAS, SpectrAA 220FS, Varian); before the AAS analysis, 0.015 g of each powder was digested in a mixture of HNO3 and HCl (4:1) by microwave heating (MARS 6, CEM) for 20 min at 180 °C and 800 W. UV-Vis diffuse reflectance spectra (UV-Vis/DRS) were recorded using a Lambda 365 spectrophotometer (Perkin Elmer) equipped with an integrating sphere, using BaSO4 as the reference material. From the acquired spectra, the band gap (Eg) of the prepared catalysts was estimated by extrapolation from the plot of [F(R)hν]^(1/n) vs. hν, where F(R) is the Kubelka-Munk function obtained from the UV-Vis diffuse reflectance data and hν is the photon energy, with the Planck constant h = 4.135 × 10−15 eV·s and the light frequency ν given by the ratio of the speed of light (3 × 10^8 m/s) to the wavelength (m); n depends on the transition characteristics of the semiconductor, and a value of n = 2 was used because BiOBr has indirect transitions [40].
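A minimal sketch of this Tauc-type extrapolation is given below. It is our own illustration rather than the routine of any specific instrument software; in particular, the linear fit window is an assumption that has to be chosen from the steep region of each measured spectrum.

```python
import numpy as np

def tauc_band_gap(wavelength_nm, reflectance, n=2, fit_window_eV=(2.6, 3.2)):
    """Estimate the optical band gap (eV) from diffuse reflectance data.

    wavelength_nm : wavelengths of the UV-Vis/DRS scan
    reflectance   : diffuse reflectance as a fraction (0-1)
    n = 2 corresponds to indirect transitions, as assumed for BiOBr.
    """
    h_nu = 1239.84 / np.asarray(wavelength_nm, dtype=float)   # photon energy, eV
    R = np.asarray(reflectance, dtype=float)
    FR = (1.0 - R) ** 2 / (2.0 * R)                           # Kubelka-Munk F(R)
    y = (FR * h_nu) ** (1.0 / n)                              # Tauc-type ordinate

    lo, hi = fit_window_eV                                    # assumed linear region
    mask = (h_nu >= lo) & (h_nu <= hi)
    slope, intercept = np.polyfit(h_nu[mask], y[mask], 1)
    return -intercept / slope                                 # extrapolation to y = 0
```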
Textural analysis
The specific surface area (SSA) of the catalysts was estimated by the Brunauer-Emmett-Teller (BET) method, and pore sizes by the Barrett-Joyner-Halenda (BJH) method, using N2 adsorption-desorption equipment (TriStar II Plus, Micromeritics).
Photocatalytic activity
The experiments were carried out under natural sunlight in Apodaca City, Nuevo Leon, Mexico (25°45′ N, 100°7′ W). The photocatalytic activity of the catalysts was evaluated on the degradation of an ACP solution (15 mg L−1) (ACP, paracetamol, C8H9NO2) prepared from local drug tablets (Medimart, 500 mg) in distilled water. The experiments were carried out with 100 mL of ACP solution in a Pyrex reactor; the pH was adjusted to 3 or 5 using 0.1 M HCl, and the added catalyst (0.25 g L−1) was stirred for 30 min in darkness to allow adsorption-desorption equilibrium between ACP and the catalyst. After that, 102 μL of H2O2 (30%, Fisherbrand; 10 mM) was added, and the reaction was conducted outdoors under natural sunlight. At regular intervals, samples were collected and filtered through a 0.45 μm nylon syringe filter to remove the photocatalyst. In the solar degradation runs, once light exposure started, the accumulated solar radiation was measured with a Delta OHM HD2102.2 radiometer (range: 315-400 nm), and samples were collected as needed, reaching a total of 300 kJ m−2 for each experiment. ACP quantification was performed by liquid chromatography on an HPLC Agilent Technologies 1260 Infinity system with a diode array detector, using a Thermo Scientific Accucore C18 column (150 × 4.6 mm). The mobile phase was a 25:75 v/v mixture of methanol (HPLC grade, Tedia) and 4% (v/v) acetic acid prepared in water, and the flow rate was 1.5 mL min−1. The injection volume was 25 μL, and the detection wavelength (λ) was 242 nm. Similar tests were carried out under simulated solar radiation until reaching an accumulated energy of 300 kJ m−2 (Suntest XLS+ solar simulator, Atlas, Germany, equipped with a daylight filter that emits radiation from 300 to 800 nm). The mineralization degree was determined from the total organic carbon (TOC) decrease on a Shimadzu TOC-V CSH analyzer with an ASI-V autosampler. Finally, to determine the main reactive species involved in ACP degradation, different radical scavengers were added in degradation tests under the same conditions: tert-butanol (TBA, 5 mM), p-benzoquinone (BQ, 0.5 mM), and sodium oxalate (OXA, 5 mM) were added as •OH, •O2− and h+ quenchers, respectively. All experiments were carried out in duplicate; the difference between each pair of experiments was less than 10%, and in the end all reached total ACP degradation.
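Because degradation is tracked against accumulated UV energy rather than clock time, the bookkeeping can be scripted as below. This is only an illustrative sketch: trapezoidal integration of the radiometer log and a pseudo-first-order fit versus Q are common practices in solar photocatalysis, not procedures stated in this work, and all variable names are hypothetical.

```python
import numpy as np

def accumulated_uv_energy(t_s, uv_wm2):
    """Accumulated UV energy Q (kJ m-2) at each sampling instant.

    t_s    : sampling times in seconds
    uv_wm2 : UV irradiance readings (W m-2, 315-400 nm band)
    """
    t = np.asarray(t_s, dtype=float)
    uv = np.asarray(uv_wm2, dtype=float)
    # Trapezoidal integration of irradiance over time (J m-2 -> kJ m-2)
    q = np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 * (uv[:-1] + uv[1:]))))
    return q / 1000.0

def apparent_rate_constant(q_kjm2, conc):
    """Pseudo-first-order fit C = C0 * exp(-kQ * Q); returns kQ in m2 kJ-1."""
    q = np.asarray(q_kjm2, dtype=float)
    c = np.asarray(conc, dtype=float)
    slope, _ = np.polyfit(q, np.log(c[0] / c), 1)
    return slope
```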
Results and discussions
A BiOBr photocatalyst with Fe is synthesized via the microwave-assisted solvothermal method. The synergy between Fe and BiOBr is investigated morphologically, structurally, chemically, and optically. The Fe-BiOBr functionality is assessed during the photocatalytic degradation of ACP under solar irradiation. Finally, a photocatalytic action mechanism is proposed.
Morphological and structural characteristics of Fe-BiOBr
The morphology of the synthesized BiOBr and of BiOBr loaded with 1 wt% (1Fe-BiOBr) and 3 wt% (3Fe-BiOBr) of Fe is investigated with SEM, as shown in Fig. 2. In Fig. 2(a), BiOBr has a spherical flower-like morphology, assembled from nanosheets in the form of interlocking petals with interstitial spaces between them [32]. No apparent morphological differences exist between BiOBr, 1Fe-BiOBr, and 3Fe-BiOBr in Fig. 2(b) and 2(c), which show the same morphology and behavior as the BiOBr and Fe-BiOBr materials reported by others [31,41,42]. Fig. 2(d) shows an SEM-EDX map of the 1Fe-BiOBr sample together with the (d1) Bi, (d2) Br, and (d3) Fe elemental maps. This analysis reveals that the elements present in the samples are homogeneously distributed, confirming that the microwave-assisted solvothermal synthesis method is suitable for this material.
The morphological and structural properties are further investigated with TEM. Fig. 3 shows TEM, STEM-HAADF, and HR-TEM images for (a, b, c) BiOBr, (d, e, f) 1Fe-BiOBr, and (g, h, i) 3Fe-BiOBr. The TEM images (a, d, and g) reveal that the structure retained a flower-like shape with a size of approximately 1.5 µm and did not show morphological changes as the Fe content increased. However, 3Fe-BiOBr appears less dense than BiOBr and 1Fe-BiOBr. The STEM-HAADF images (b, e, and h) confirm the intercalated sheet-like morphology characteristic of BiOBr. Nevertheless, 3Fe-BiOBr shows a bright core and darker spike-like features in a different sample area; the brighter regions can be associated with heavier atoms, most probably Bi. The HR-TEM images (c, f, and i) show the presence of crystals with sizes around 10 nm. The crystal size decreases with increasing iron content, which is consistent with the XRD results in Fig. 4a. In Fig. 3, high crystallinity is observed, and lattice fringes with interplanar spacings of 0.36, 0.23 and 0.27 nm are measured and assigned to the BiOBr (1 0 1), (1 1 2), and (1 1 0) crystallographic planes. 1Fe-BiOBr and 3Fe-BiOBr exhibit slightly smaller crystallites compared to BiOBr.
The optical properties of BiOBr, 1Fe-BiOBr, and 3Fe-BiOBr are measured using UV-Vis DRS in Fig. 4b to estimate and compare the band gaps. BiOBr shows a steep increase of absorption at wavelengths shorter than 413 nm, which can be assigned to the intrinsic band gap of pure BiOBr (~3.00 eV) [45]. The spectra of 1Fe-BiOBr and 3Fe-BiOBr exhibit a redshift and increased photoabsorption in the visible and near-infrared regions [31,32], which can be attributed to the Fe loading. This observation is also reflected in the color change of these materials from white to reddish yellow. A possible mechanism for such a redshift may be related to the transition of electrons between the conduction or valence band of BiOBr and Fe ions, or to the internal charge transfer between Fe ions (Fe3+ + Fe3+ → Fe4+ + Fe2+) [32,46,47]. The band gaps (Eg) of 1Fe-BiOBr and 3Fe-BiOBr are on the order of 2.73 and 2.75 eV, respectively (Table 1).
The Raman analysis in Fig. 4c shows the characteristic bands of BiOBr at 56.8, 95.0, 112.4, and 163.0 cm−1 [48], which are assigned to the internal Bi-Br stretching modes [49]. Likewise, a weak and broad signal corresponding to the motion of oxygen atoms is observed at 385 cm−1 [50], and the signal at 86 cm−1 is ascribed to the formation of oxygen vacancies (OVs) [51]. It should be noted that the Raman signal of BiOBr is the most intense, indicating its higher crystallinity compared with 1Fe-BiOBr and 3Fe-BiOBr. Furthermore, the decrease in the intensity of the Raman signal is attributed to the formation of oxygen vacancies [52]. These results are coherent with the TEM (Fig. 3) and XRD (Fig. 4a) analyses. Compared to BiOBr, a blue shift is observed for 1Fe-BiOBr and 3Fe-BiOBr. These variations could be associated with changes in the structural and chemical environment of BiOBr due to the incorporation of Fe.
Chemical species at the surface of Fe-BiOBr
XPS analysis confirmed the elemental compositions of BiOBr, 1Fe-BiOBr, and 3Fe-BiOBr. The XPS spectra are shown in Fig. 5; the signals present in the samples are Br 3d (a), Bi 4f (b), O 1s (c), and Fe 2p (d). The high-resolution XPS Fe 2p core-level spectra illustrate the increasing iron concentration in BiOBr, 1Fe-BiOBr, and 3Fe-BiOBr (Fig. 5d). The Fe 2p3/2 and Fe 2p1/2 contributions located at ca. 711.5 eV and ca. 725.1 eV suggest the insertion of Fe3+ in BiOBr [31,53,54]. The presence of Fe in 1Fe-BiOBr before and after the reaction is shown in Fig. S2; no major differences in Bi, O, Br, and Fe content have been found. However, in Table S1, a slight reduction in the Fe content of the used catalyst is observed, which can be explained by an increase in the carbon content due to pollutant adsorption or by low iron leaching [55,56]. For all studied samples, the high-resolution XPS spectra of the Br 3d (Fig. 5a) and Bi 4f core levels (Fig. 5b) revealed doublet pairs, i.e., Br 3d5/2 at ca. 68.7 eV and Br 3d3/2 at ca. 69.7 eV, and Bi 4f7/2 at ca. 159.4 eV and Bi 4f5/2 at ca. 164.8 eV (indicating the presence of Bi3+ in the materials) [57,58]. The high-resolution XPS O 1s core-level spectra all exhibit a dominant contribution at ca. 530.5 eV attributed to lattice O in BiOBr (Fig. 5c). Additional contributions, ascribed to surface-adsorbed oxygen, H2O, and -OH groups [54,57,59], are also detected at higher binding energies (ca. 532.0-534.0 eV).
Photocatalytic activity of Fe-BiOBr
The solar photo-Fenton oxidation of ACP using Fe-BiOBr under natural sunlight is showcased here. Fig. 6 shows the degradation of ACP [mg/L] as a function of the solar accumulated energy Q [kJ/m2]. For 1Fe-BiOBr and 3Fe-BiOBr in Fig. 6a and 6b, ACP is degraded in the first minutes (Q = 10 kJ m−2) (closed circles and squares). Concurrently, the ACP byproduct is formed (open circles and squares) and progressively degraded. For BiOBr, ACP degradation is not achieved until 240 kJ/m2 (i.e., 2.5 h), as shown in Fig. S3. Fig. 6a and 6b show that 15 mg/L of ACP and its main byproduct are fully degraded in 30 min under natural solar light with 10 mM of H2O2. It is important to highlight that, under solar irradiation, 1Fe-BiOBr and 3Fe-BiOBr degrade ACP at mild pH conditions, i.e., close to pH 5. At pH 5, more byproduct (Bypr) is produced (Fig. 6a and 6b), similar to that obtained by homogeneous solar photo-Fenton, where hydroxylated species are initially formed as intermediates, improving the degradation of ACP. Such species can lead to an improvement in the reduction of Fe3+ to Fe2+ in comparison to more recalcitrant intermediates such as acetamide, hydroquinone, or benzoquinone (Fig. S4) [17,60]. The mineralization percentage has been measured using TOC (Fig. 6c). 1Fe-BiOBr achieved the highest mineralization of ACP, ca. 58% at pH 5, similar to what was reported for atrazine and bisphenol A under visible light, but at pH 3, using similar catalysts [31,36]. The results represent a great advantage of our system under natural solar light and at pH 5, considered mild conditions.
The results demonstrate that 1Fe-BiOBr promotes the degradation of ACP. Since the estimated Eg values are similar (Fig. 4b), the variation in degradation can be related to the higher degree of crystallinity of 1Fe-BiOBr shown in Fig. 3f and Fig. 4a. A higher degree of crystallinity in 1Fe-BiOBr can provide the necessary pathways for charge carriers to be readily available, possibly degrading ACP side-products more efficiently [61]. Moreover, the effect of OVs in 1Fe-BiOBr should not be disregarded.
We further assay ACP degradation under controlled solar irradiance (i.e., a solar simulator), using 1Fe-BiOBr at pH 3 and pH 5. The results are compared with ACP photo-oxidation without H2O2 (photocatalysis) and with H2O2 (photo-Fenton) in Fig. 7. The photocatalytic degradation of ACP shows a significantly better performance in the presence of H2O2 (Fig. 7a). At pH 5, ACP oxidation without H2O2 is slightly better; the results are in accordance with other studies and relate to the surface interaction between ACP and the catalyst [62]. The results of the heterogeneous photo-Fenton process (with H2O2) show that the pH has a low effect, since similar outcomes are obtained at pH 3 and 5 (Fig. 7a). Nevertheless, pH 5 is attractive for the process, because the classical Fenton reaction typically needs more acidic conditions (near pH 3) to achieve maximum performance and to avoid iron precipitation at higher pH values. On the other hand, iron leaching in an acidic medium has been widely reported and depends on the catalyst's stability. However, it has been observed that low iron leaching and the presence of chelating agents (such as organic acids or byproducts) could contribute to contaminant elimination through the homogeneous Fenton reaction [55,56]. This phenomenon could explain the slight increase of degradation at pH 3 in Fig. 7c compared to that observed at pH 5.
The effectiveness of natural solar illumination is contrasted with controlled solar irradiance. Compared with solar irradiation (Fig. 6), the results in Fig. 7a show longer times for ACP degradation, close to 2 h (Q = 150 kJ m−2). Furthermore, a longer time is needed for the degradation of the byproducts, ca. 4 h (Fig. 7b). The main difference between the photocatalytic experiments under natural solar light and controlled solar irradiance is the solar energy contribution, which affects the degradation times. In other words, the solar spectrum has UV, Vis, and IR contributions, while the controlled solar irradiance might not fully include the UV contribution (Fig. S5). The results underline the benefit of reactions induced by natural solar light, which use a significant portion of the solar spectrum.
It is well known that heterogeneous photo-Fenton results from a synergistic combination of photocatalysis and Fenton reaction processes. Therefore, it is important to compare the mineralization with and without H2O2 (Fig. 7c). Higher mineralization is observed in heterogeneous photo-Fenton than in photocatalysis. In the latter, •OH is mainly produced by splitting water at the valence band, while in heterogeneous photo-Fenton the possibility of producing •OH is greatly increased, and other oxidizing species may also play an important role. To clarify the main species involved in ACP degradation by the synergistic combination of photocatalytic and Fenton processes using 1Fe-BiOBr, p-benzoquinone (BQ), sodium oxalate (OXA), and tert-butanol (TBA) are used as scavengers of superoxide radicals (•O2−), h+, and •OH, respectively [30]. As seen in Fig. 8, total ACP degradation is achieved using 1Fe-BiOBr + H2O2 without adding any quencher. In contrast, when TBA and OXA are added, ACP degradation diminishes by 20% and 30%, respectively, which proves that •OH and h+ play a significant role in the heterogeneous photo-Fenton process. However, ACP degradation drops abruptly (60%) in the presence of BQ, indicating that •O2− is the most important active species in the synergistic degradation process. The results demonstrate that •O2−, followed by h+ and •OH, contributes to the degradation of ACP, prominently produced under solar irradiation.
Proposed mechanism for the photo-Fenton system
From our results, the proposed reaction mechanism for ACP degradation by heterogeneous photo-Fenton using Fe-BiOBr considers the synergy between photocatalysis and Fenton processes. The mechanism in Fig. 9 considers solar irradiation, which promotes the generation of photoelectrons (e−) in the CB and holes (h+) in the VB; the e− migrates to the surface of BiOBr and is used to reduce Fe3+ (Fe(III)) to Fe2+ (Fe(II)). This drives the Fe(III)/Fe(II) cycle over the BiOBr surface and promotes the active generation of •OH in the presence of H2O2. Within this cycle, Fe(II) can react with oxygen (O2) in the presence of protons (H+) to regenerate Fe(III), which can then be reduced and participate in further cycles of the photo-Fenton reaction. In addition, the e− available in the OVs can react with H2O2 to form •OH. On the other hand, the h+ in the VB contributes to •OH generation by splitting the H2O molecule. Also, h+ can degrade ACP by direct oxidation. It is worth highlighting the importance of OVs in the process. OVs involve the release of two electrons per removed oxygen at the surface [63] that can act as an electron pump, leading to the formation of •O2− radicals. It is proposed that •O2− is the most important oxidant species in this system, generated, for instance, when dissolved oxygen reacts with photogenerated e− or with those available in the OVs. These mechanisms inhibit the recombination of photogenerated h+/e− pairs and favor the Fenton degradation process. The proposed mechanism can explain the observed effect for 1Fe-BiOBr, which shows a higher ACP degradation.
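To make the interplay of this cycle more concrete, a toy kinetic model can be integrated numerically. The sketch below is a deliberately simplified illustration: the rate constants are order-of-magnitude placeholders chosen by us, not values fitted to our data, and •OH is treated at quasi-steady state.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy rate constants (illustrative orders of magnitude, not fitted values)
k_red = 1e-3    # photo-assisted Fe(III) -> Fe(II) reduction, s-1
k_fen = 70.0    # Fe(II) + H2O2 -> Fe(III) + OH radical, M-1 s-1
k_acp = 1e9     # OH radical + ACP, M-1 s-1 (diffusion-limited scale)
k_scav = 1e6    # lumped OH-radical scavenging, s-1

def rhs(t, y):
    fe2, fe3, h2o2, acp = y
    r_fen = k_fen * fe2 * h2o2
    oh_ss = r_fen / k_scav                 # quasi-steady-state OH radical
    return [k_red * fe3 - r_fen,           # Fe(II)
            r_fen - k_red * fe3,           # Fe(III)
            -r_fen,                        # H2O2
            -k_acp * oh_ss * acp]          # ACP

y0 = [0.0, 1e-4, 1e-2, 1e-4]               # mol/L: Fe2+, Fe3+, H2O2, ACP
sol = solve_ivp(rhs, (0.0, 3600.0), y0)
print(f"ACP remaining after 1 h: {100 * sol.y[3, -1] / y0[3]:.1f}%")
```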
Conclusions
Fe-BiOBr has been successfully synthesized by a microwave-assisted solvothermal method and exhibited excellent degradation activity in the solar photo-Fenton system, which allowed total and almost immediate ACP removal under mild conditions. The results highlight that 1% Fe content in BiOBr is enough to degrade ACP and its main byproduct (30 min, Q = 50 kJ m−2) at pH 5 and using 0.25 g L−1 of catalyst loading. The Fe incorporated in the spherical flower-like BiOBr decreased the recombination of e−/h+ photocarriers and favored the generation of OVs. The solar photo-Fenton yield for ACP degradation is attributed to the synergistic combination of heterogeneous photocatalysis and Fenton reaction processes, which drives the Fe(III)/Fe(II) cycle over the catalyst surface in the presence of light and H2O2. The main active species involved in this process were confirmed to be superoxide radicals (•O2−), followed by photogenerated holes (h+) and hydroxyl radicals (•OH). Our research contributes to the development of Fenton photocatalysts for degrading pharmaceuticals under mild conditions and natural sunlight irradiation.
Fig. 9. Proposed reaction mechanism by heterogeneous photo-Fenton for ACP degradation under solar light.
"year": 2023,
"sha1": "590c8e9a1019fe728702af57404149528f7f1144",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.jphotochem.2023.115124",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "eae0b7dfad98e5dcda41a88e42330f1a58aac6bd",
"s2fieldsofstudy": [
"Environmental Science",
"Chemistry"
],
"extfieldsofstudy": []
} |
“Nobody's Children”? Political Responses to the Homecoming of First World War Veterans in Northern and Southern Ireland, 1918–1929
Abstract At the time when Irish veterans of the Great War were being demobilized, Ireland was in a period of profound social, political, and cultural change that was irreversibly transforming the island. Armistice and the veterans' relief at having survived the conflict and being back with family could not eclipse the overwhelming political climate they met on their homecoming. This article draws on the 1929 Report by the Committee on Claims of British Ex-servicemen, commissioned by the Irish Free State to investigate whether Irish veterans were discriminated against by the Southern Irish and British authorities. The research also makes use of a range of underexploited primary sources: the Liaison and Evacuation Papers in the Military Archives in Dublin, the collection of minutes of the Irish Sailors' and Soldiers' Land Trust in the National Archives in London, and original material from the Public Record Office of Northern Ireland and the National Archives of Ireland relating to economic programs for veterans. A comparative approach to the respective demobilizations of veterans in Northern and Southern Ireland in the 1920s reveals that disparities in formal recognition of their sacrifice and in special provision for housing and employment significantly and painfully complicated their repatriation.
while the country struggled with the repercussions of a global conflict and a national rebellion, acute, unresolved tensions between the aspirations and allegiances of Unionist and Nationalist constituencies almost led to armed conflict between paramilitary organizations. The December 1918 General Elections ushered in a new political era for the country, establishing the conditions for the convening of the first national assembly. In January 1919, the British government's refusal to recognize the legitimacy of the Irish parliament Dáil Éireann (a corollary of which was the determination of the newly elected Southern Irish MPs not to sit at Westminster) began the War of Independence. In the North, resistance to the prospect of pledging allegiance to a parliament in Dublin led six of the nine counties of Ulster to separate and form Northern Ireland under the Government of Ireland Act (1920). In December 1921, the Anglo-Irish Treaty established the Irish Free State, an autonomous entity associated with the British crown. Ratified by a majority of members of Dáil Éireann in January 1922, the treaty conjured up a political schism between pro- and anti-treaty forces that resulted in the Irish Civil War (1922-1923), ending with the capitulation of the anti-treaty factions in May 1923.
Irish servicemen had fought in the First World War and helped win it, but they now looked at the political transformation of their homeland with uncertainty. The Armistice and their relief at having survived the conflict and being back with their families could not eclipse their many political doubts. No sooner had they been demobilized than many war veterans became engaged in the struggle for independence alongside the republican brigades. 2 Among the 115,550 republicans allegedly belonging to the Irish Republican Army (IRA) during the War of Independence 3 (an estimated fifteen thousand men actively participated in the armed conflict against the crown forces 4 ) were hundreds of veterans, possibly as many as a thousand, who joined the IRA between 1919 and 1921. 5 That number did not reflect most of the trajectories of more than 150,000 Irish veterans, 6 as clearly "the great majority of ex-servicemen did not take part in the struggle for the independence of their country." 7 Therefore, I do not focus on the active veteran minority who rejected British rule in Ireland and helped transform the IRA into a paramilitary organization. Instead, I focus on the return to civilian life of veterans of the First World War. Historical accounts have been swift to portray the Irish veterans of the First World War as a community that suffered persecution and discrimination. In the late 1990s, historians Jane Leonard and Peter Hart argued that the Irish Republican Army purposely persecuted veterans during the War of Independence. 8 Claims were made that in Southern Ireland after partition, the Irish Free State sought to erase any public memory of the imperial war dead, 9 while the authorities in Northern Ireland honored and praised their sacrifice. 10
2 Richard Grayson, Dublin's Great Wars: The First World War, the Easter Rising and the Irish Revolution 1919-1921 (Oxford, 1975), 179. 5 Tracking veterans of the Great War enrolled in the IRA remains a challenging and almost impossible task. Weekly and monthly reports from the Royal Irish Constabulary inspectors sometimes underline a unit training under the supervision of former servicemen. However, they do not offer exact numbers of British veterans fighting with the republicans. These reports refer only to "large numbers of ex-servicemen," "some ex-servicemen," or "a number of ex-servicemen" fighting in the IRA. Witness statements from former IRA members give more precision when it comes to individuals. After consulting the weekly and monthly police reports between January 1919 and July 1921 in The National Archives in London and searching the Bureau of Military History (Military Archives of Ireland), I have identified seventy-eight witness statements and police reports allowing me to estimate at least 240 veterans. Several files mentioned a "large number of veterans" fighting with the IRA. Without being able to state with certainty what "large number" meant (most likely at least fifty), I therefore suggest that possibly up to one thousand veterans of the Great War took part in the struggle against the British Forces. For more information on the participation of First World War veterans in the Irish War of Independence, see Emmanuel Destenay, "Allégeances et transferts de loyauté: La contribution des anciens combattants irlandais de la Première Guerre mondiale à la guerre d'indépendance (1919-1921)," 20 & 21 Revue d'Histoire 142, no. 2 (2019): 61-74. 6 Report by the Committee on Claims of British Ex-servicemen (Dublin, 1929), 3.
This disparity fed the feeling among Southern veterans that they were not welcomed back. Recent studies have questioned these conclusions as inadequately considering the reasons why these veterans were targeted by the IRA; 11 they explore veterans' homecoming in Southern Ireland in relation to the established Irish Free State. 12 Paul Taylor has concluded that the British government fulfilled its obligations toward the Irish war veterans; he maintains that their "war service brought no privilege from the [Irish Free] State or community but neither did it result in discrimination." 13 While Taylor sheds valuable light on veterans' homecoming, a comparative approach to their repatriation in Northern and Southern Ireland would help determine whether Northern Ireland, as still a full member of the United Kingdom, did more to reintegrate veterans socioeconomically than the autonomous Irish Free State. Furthermore, a comparative approach would indicate whether there were significant differences in their reception by their respective societies and the attention they received from the imperial government after partition.
In this article, therefore, I explore the homecoming of veterans of the First World War in both Northern and Southern Ireland. 14 Moreover, I go beyond comparing their respective reintegration and situate the question of political responses to their homecoming in relation to state building and national identities.
7 Henry Harris, The Irish Regiments in the First World War (Cork, 1968), 203. 8 Jane Leonard, "Getting Them at Last: The I.R.A. and Ex-Servicemen," in Revolution? Ireland, 1917-1923. 13 Taylor, Heroes or Traitors?, 245. 14 I use Northern and Southern Ireland throughout to refer to the two political entities as established by the Government of Ireland Act 1920. The twenty-six counties of Southern Ireland became, in January 1922, the Irish Free State, following the Anglo-Irish Treaty; the six counties of Ulster that seceded from the rest of the island became Northern Ireland. While Northern Ireland remained fully part of the United Kingdom, Southern Ireland was given some degree of autonomy.
My research reappraises the claim that political actions were aimed at erasing or putting aside the
In contrast to Taylor's conclusions, my research reveals that veterans in Southern Ireland were undeniably angered by the absence of official recognition, even as Northern Ireland enshrined its veterans' collective sacrifice within the Unionist commemorative canon. Yet while that lack of recognition remained a legitimate concern for Southern Irish veterans, both Northern and Southern veterans resented the shortage of "houses for heroes," and in both jurisdictions they faced unemployment. Both groups relied heavily on private employers and companies to provide for their living, but this support did not materialize on a wide scale, as employment in this sector was limited. Local authorities helped alleviate unemployment among Northern and Southern Irish veterans both before and after partition. However, while state authorities in Northern Ireland endeavored to pass resolutions in support of veterans independently from the imperial government, the Irish Free State rejected any moral obligation, clearly regarding "British" veterans as an imperial debt. If a feeling of injustice prevailed among Southern Irish veterans, it was due not only to the transition from an imperial to an autonomous political entity but also to the indifference of the Dáil Éireann, which reinforced feelings among them that they were "nobody's children." 15 My research draws on the Report by the Committee of Claims of British Ex-servicemen (1929). The committee was established by the Irish Free State following a motion presented by William Archer Redmond and backed by other members of Dáil Éireann to investigate whether veterans had been discriminated against by the Southern Irish and British authorities. 16 To strengthen the analysis, I have consulted a range of underutilized primary sources: the minutes of the Irish Sailors' and Soldiers' Land Trust at the National Archives in London, the original material in the Public Record Office of Northern Ireland and in the National Archives of Ireland relating to the economic programs for former servicemen living in the thirty-two counties, and testimonies in the Colonial Office documents. The research brings to light the principal reasons that significantly complicated the homecoming of veterans and contributed to their despondency in the 1920s. First, following the Irish War of Independence and the Irish Civil War, Irish society underwent a political crisis that established a host of new martyrs and new heroes who were celebrated for their opposition to the British army. Southern authorities privileged this new group of veterans closely 15 associated with the establishment of the Free State over veterans of the First World War as a social group. Whereas Northern Irish authorities remembered, honored, and commemorated veterans and gave them a privileged place within the political ethos, the Irish Free State deliberately excluded them from the national myth as it went about the task of shaping the collective memory of Southern Ireland. Spurred by a desire to revive the nation's Gaelic past, Southern authorities built a national myth in accordance with a political and cultural agenda. Even though commemorations throughout Ireland anchored the memory of the First World War in the political landscape, veterans were differently commemorated in the twenty-six Southern counties.
In the second section of the article, I analyze the preoccupations of veterans in relation to the construction of so-called colonies 17 for former British officers and men. Veterans in both Northern and Southern Ireland wrongly blamed the British authorities for the straitened conditions in which they found themselves, feeding their frustrations. In the third section, I deal with unemployment and explore how veterans reacted to the various schemes enacted in both Northern and Southern Ireland. Unemployment, exacerbated by the British government's prohibition of Irish emigration in 1914, plagued both the Northern and Southern communities. The British government adopted several employment schemes for veterans of all ranks. However, even though British and local authorities in Ireland unconditionally backed the economic reintegration of demobilized troops, the scarcity of employment resulted in an undercurrent of despair and resentment among Northern and Southern veterans. Most importantly, after the signing of the Anglo-Irish Treaty (1921), the decision of the Southern authorities to offer preferential treatment in terms of employment to former members of the National Army reinforced the belief among imperial veterans that they had been abandoned.
COMPETING HEROISMS: THE MAKING OF HEROES IN NORTHERN AND SOUTHERN IRELAND
Throughout Ireland, those who had served during the First World War had expected to be honored for having fought to defeat the Central Powers. But while their contribution to the European restoration of peace was enshrined within the collective European memory, in Southern Ireland a new generation of combatants was being valorized.
Two watershed constitutional enactments-the Government of Ireland Act in 1920, dividing North and South, and the Anglo-Irish Treaty of December 1921, recognizing Southern Ireland as an autonomous free state still associated with the British crown-brought about a partition in terms of the official orchestration of popular memory. The two geographical spheres differed profoundly in their integration of veterans of the First World War into their national myths. From 1918 onward, and particularly after the signing of the Government of Ireland Act and King George V's inauguration of the Northern Irish Parliament in June 1921, Northern Ireland sought to reassert its British identity and loyalty to London. The six loyal counties anchored the memory of the First World War within their Unionist canon, welcoming and acknowledging their returning soldiers as heroes and martyrs. 18 Politicians, county councils, and government agencies praised their sacrifice. Northern Irish veterans played an important part in cementing political, cultural, and historical bonds with Great Britain. Commemorations of Armistice Day were substantial gatherings. Beyond the two-minute ritual silence to honor the memory of the departed, 11 November displayed the defining cultural and political features of the newly created state. Unionist banners and British anthems and songs all contributed to the explicitly British pathos at the heart of the commemorations. The 1916 Battle of the Somme, in which so many "sons of Ulster" died, became a new historical and cultural benchmark for the Unionist majority. 19 In the Unionist ceremonies of 12 July 1918 and 12 July 1919, the battle came to be incorporated within the liturgy of a loyal Ulster identity. 20 During the unveiling of the war memorial in Coleraine in November 1922, Northern Ireland's prime minister, Sir James Craig, asserted that the sacrifice of the 36th (Ulster) Division reinforced the need to "stand firm to give away none of Ulster's soil." 21 The commemorative liturgy associating the First World War with the celebration of British patriotism angered many Northern Irish Catholic and Nationalist veterans; 22 in 1924, a group of Derry Nationalist veterans chose not to participate in the 11 November ceremonies "as they felt the political overtones of the event was antithetical to their reasons for volunteering in the first place." 23 As Richard Grayson has explained, "Any Nationalist attending would be surrounded by the flags and symbols of a country to which they felt no allegiance, in a crowd singing songs that had nothing to do with nationalists' national identity." 24 Local and governmental authorities of Northern Ireland faithfully commemorated veterans' role in the First World War, yet enfolded them within a political ethos that meant that only Unionist veterans could identify with the ritual. Catholic and Nationalist veterans felt excluded from commemorations that seemed to imply that their participation in the First World War denoted unconditional loyalty to Britain. Partition magnified divisive political cultures and accentuated the Unionist liturgy of the Northern Irish State, triggering a reactionary identity in opposition to the South and unleashing an overarching unifying culture at the expense of Catholic and Nationalist groups.
In Southern Ireland, the postwar Irish Free State refrained from shaping any collective memory of the Great War. Authorities redefined the cultural benchmarks of the Irish collective memory, generating the state's own myths and its own veterans, thus establishing a clear difference between former IRA members and war veterans. Faced with the impossibility of achieving a United Ireland, the Irish Free State had to accept that Northern Ireland would not be subject to its authority. The newly elected members of the Dáil Éireann, in close association with the Catholic Church, undertook to set in motion the Sinn Féin agenda and to revive the country's Gaelic past. To do so, they relied on ancient myths and glorified the generations of Irish men and women who had participated in the struggle for independence. 25 Not only were veterans of the First World War demobilized in the middle of a conflict pitting the Irish Republican Army against the British forces but they now witnessed the redefinition of a collective identity in which they had no particular role. The Southern collective memory crystallized an Irish identity "founded on the Catholic-Gaelic cultural nationalism, which had developed in the nineteenth century in reaction to British domination and to the unionist discourse." 26 The Free State revived a "traditional vision of national identity derived from Irish cultural nationalism." 27 The newly crafted national myth anchored through the education system a Catholic and Nationalist ethos in the collective mentality of primary and secondary school pupils. 28 From the end of the conflict and throughout the 1920s and 1930s, commemorations of 11 November were the focus of a strong feeling of pride throughout Ireland. On Armistice Day, 1924, tens of thousands of people gathered to watch twenty thousand veterans parade through the center of Dublin. 29 Garrison towns and ports such as Tralee 30 and Cobh 31 observed a two-minute silence in the presence of veterans and the relatives of departed soldiers. In the Irish Free State, "one of the most famous, visible, public and participatory charitable events for ex-servicemen was its annual Poppy Day Appeal." 32 Before and after the War of Independence, civil populations actively joined remembrance ceremonies alongside veterans of the First World War. 33 But while Southern authorities acknowledged the First World War, it did not feature prominently in the Free State's calendar of commemorations, whose aim was instead to enshrine its existence and legitimacy within the genealogy of Irish rebellion and revolution. From 1918 onward, commemorations of the First World War operated on a vernacular basis. 34 Those who had participated in the Irish War of Independence (1919-1921) maintained alive the memory of that war. Communities, villages, towns, veteran associations, and families commemorated the sacrifice of their sons. The gap between vernacular memorials orchestrated by veterans and civilian communities and the lack of state-sponsored national commemorations has led some historians to suggest that the Southern authorities sought to erase any memory of the First World War. However, as Grayson revealed, the claim that veterans were not "officially" remembered was an overstatement, as between 1924 and 1932 the Irish Free State sent representatives to Armistice Day commemorations in Dublin, 35 while the Irish high commissioner participated in the ceremony at the Cenotaph in London on 11 November up until 1932. 36
De Valera's government later granted a public subsidy "for the construction of the national memorial at Islandbridge." 37 Such evidence, then, requires a more nuanced approach to the Irish Free State's attitude toward commemorating the memory of Irishmen who died in the First World War.
The absence of government-sponsored national commemorations spoke not only to the radical nature of the postimperial Irish Free State but also to its identity: a state born in reaction to British imperialism. This identity was again reflected in the issue of the national memorial to the Irish Fallen. Between 1918 and 1923, the ongoing conflict forced the Irish National War Memorial Committee to suspend the task of building a suitable memorial and instead to focus on producing the War Memorial Records. 38 When the civil war came to an end, the committee eventually considered ideas for the erection of a national memorial for the First World War. In 1923, as the Dáil debated the monument's location, the idea that it might be erected in Merrion Square in the center of the capital close to the parliament stirred up vehement resistance. William Cosgrave, president of the Executive Council, whose two brothers had served in the war (one was killed), recognized that "a large section of nationalist opinion regards the scheme as part of a political movement of an imperialist nature." 39 He warned the British Legion and the representatives of the Irish National War Memorial that erecting "a memorial distasteful to a large body of citizens" in the middle of the city was unthinkable. 40 The vice-president of the Assembly, Kevin O'Higgins, explained: "No one denies the sacrifice, and no one denies the patriotic motives which induced the vast majority of those men to join the British army to take part in the First World War; and yet, it is not on their sacrifice that this State is based, and I have no desire to see it suggested that it is." 41 While some individuals effectively sought to push commemorations away from Dublin's center (but refrained from overtly saying they did not want an official memorial to the First World War), the Dáil argued that the war, having not directly contributed to the state's creation, could not occupy a significant place in the political landscape of the capital. It would "give a wrong twist, as it were, a wrong suggestion to the origins of this State," 42 argued O'Higgins. "The State has other origins." 43 From the ashes of the War of Independence rose the Free State.
After years of hesitation and political opposition, Southern Irish authorities decided in 1929 to erect the national First World War memorial. Designed by Sir Edwin Lutyens in the 1930s, and completed by 1939, 44 its location at Islandbridge, opposite Phoenix Park, has generated much debate among historians. Some concluded that the state deliberately sought to put away any sign of Ireland's involvement in the conflict. 45 In fact, veterans and members of the British Legion in Ireland wanted the memorial to be erected in Phoenix Park. 46 General Sir William Hickie, a member of the council of the Irish National War Memorial, representative of the British Legion, and former officer-in-command of the 16th (Irish) Division, pointed out that building a national memorial in the capital's center would generate logistical issues for parades, marches, and gatherings: Merrion Square does not lend itself to accommodate the vast concourse that we have reason to believe will assemble in days to come. Some 50,000 people came all the way to the Phoenix Park last November. When we consider that the businessmen and businesswomen of Dublin and their employees can attend without long absence from work at a place where the Irish national war memorial is to be to men from the four provinces and as the crowd would greatly exceed the numbers that have already been seen in Dublin, we must ask ourselves whether the city authorities or the Government itself would permit such a gathering in the centre of the city. 47 Sir Bryan Mahon, former officer-in-command of the 10th Division, spoke in favor of Islandbridge as an ideal location to assemble for remembrance ceremonies: "Nor do I consider Merrion Square, in any way, a suitable site for a war memorial . . . Dublin is fortunately possessor of one of the finest, if not the finest, public parks in Europe. Why not take advantage of that and erect a memorial in the Phoenix Park and let it stand for ever as a memorial to 50,000 brave Irishmen who voluntarily gave their lives for their king, their country and for liberty?" 48 In the end, the choice of Islandbridge was acceptable to opponents of any sign of commemorations, the Free State government, the National War Memorial Committee, and leading veterans such as Hickie and Mahon. The choice of the site reflected the transitioning nature of the Irish state from an imperial to a national entity.
In contrast, Northern Ireland anchored its commemoration of the First World War, the Belfast Cenotaph, originally unveiled in 1929, close to City Hall. 49 54 Although they did not commemorate the sacrifice of Irishmen in the manner of contemporary British and Northern Irish public ceremonies, Southern authorities never forbade First World War commemorations and ceremonies and in fact provided some moral and financial support for them. Yet apart from the question of the degree of recognition that veterans met with, whether official in the North or at best unofficial in the South, was the reality of material hardship that both Northern and Southern veterans faced on their homecoming. Without question, British economic support for Irish veterans in the thirty-two counties represented an indirect benefit for Ireland, and after partition, for both Northern and Southern authorities. The building of houses for veterans helped relieve a general housing shortage while bringing immediate employment to Irish labor and profit to Irish constructors. Ireland had experienced a severe housing crisis since early in the twentieth century. 58 In 1914, "14,000 houses in Dublin were urgently needed to relieve congestion and to close tenements which were unfit for habitation." 59 In April 1917, the Irish Convention's Housing Committee demanded a large postwar program to build 67,500 houses throughout Ireland. 60 Thus British policy for imperial veterans "contributed to the Irish housing stock at a time when the country's housing problems and shortages were acute and economic circumstances discouraged public initiatives in housing." 61 Reconstruction work financed by the British Treasury alleviated the general problem of unemployment and enabled the undertaking of necessary work that would otherwise have been a charge on public funds or been left undone for lack of money. Moreover, the decision to give priority to the veterans and to hire them to build housing projects facilitated their socioeconomic integration and contributed to the Irish economy in the long run. 62 After partition and during negotiations between Southern authorities and the British government, it was clearly stipulated that arrangements had to be made between the respective governments to ensure the continuance of special assistance to veterans. 63 When the British suggested that a trust could provide and maintain veterans' housing, both the Free State and the Northern Irish authorities immediately accepted. William Ormsby-Gore, undersecretary of state for the colonies, saw immediate benefits for both sections of Ireland: "I have every reason to think that, as all the money comes from this country, to discharge an obligation to ex-servicemen, neither the North nor the South will reject that money." 64 The British Treasury would continue financing the scheme. During the state-building phase of Northern Ireland (1920) and the Irish Free State (1921), their respective representatives were aware that they could not financially provide for veterans. Thus both political entities welcomed the housing programs for veterans; both desperately needed houses and favored a pragmatic approach. In 1923, the Free State minister for finance, Ernest Blythe, publicly acknowledged the efficiency and importance of the British policy of house building for veterans: We all know that within the Saorstát 30,000 or 40,000 houses are wanted.
It will be a great asset to this State to have this money made available for the provision of houses, apart altogether from the carrying out of the pledges given the men who will actually be put into them . . . It would not have been easy, even if it had been possible, to have a British Government Department carrying out any function in the Saorstát. At the time it certainly was desirable that provision should be made so that the work which had been begun of providing for these men, having regard to the promises given them time and again by the British statesmen, should be continued. 65 The Irish Sailors' and Soldiers' Land Trust, or ISSLT, then took over from the Board of Works, an Irish agency. The trust was an imperial entity operating on an all-Ireland basis under the direct control of the British Treasury, entirely funded by the British government, in order to alleviate the shortage of houses in both jurisdictions. The imperial trust operated with no involvement whatsoever from the Free State or the Northern Irish government but was welcomed by both. As Ormsby-Gore noted, "Whatever happens, we will continue to have this obligation, which was incurred in the 1919 Act, to the soldiers and sailors. We, in this Parliament, have undertaken that obligation and, whatever happens, it is up to us to see that it is properly discharged." 66 Through the ISSLT, Britain clearly asserted that it would fulfill its obligations to veterans of the First World War in Northern and Southern Ireland.
But whereas Section 1 of the Irish Land Act (1919) recognized "any men who had served in His Majesty's Naval, Military or Air Forces in the present war" as eligible and entitled to receive untenanted land and a lodging, 67 the Land Act (1923) passed by the Free State abolished all existing special classes of persons for whom land might be provided. While in Northern Ireland the special provisions remained to acquire land for veterans of the First World War, in Southern Ireland British veterans were no longer specifically mentioned as a class; land could only be provided under Subsection 1 (f) of Section 31, which stated that land could go to any other person or body to whom in the opinion of the Land Commission it ought to be given. Whereas in 1919, under Section 17 of the Irish Land Act (1919) passed by the British Parliament, former servicemen (including officers) in all thirty-two counties were to be given priority, from 1923 onward that privilege came to an end in the Free State. 69 The ISSLT, which supervised construction of the dwellings, thus found itself in a delicate situation.
That same year, the British government fixed the total number of dwellings to be erected in Northern and Southern Ireland at 1,046 and 2,626, respectively. 70 For more than 150,000 veterans, 71 the provisions were completely inadequate, stirring up disillusionment among veterans in both jurisdictions. The limit imposed on the building of houses provoked widespread suspicion among veterans that Britain was not fulfilling its moral obligations. In 1915, to motivate men to join the war effort, recruiting agents in Ireland had promised that forty thousand houses for veterans would be available after the war. Sir Henry McLaughlin recalled that Lord Kitchener, secretary of state for war, had authorized him in his capacity as general director of recruitment in Ireland to guarantee accommodation to new recruits after the war. 72 The huge gap between the original forty thousand houses promised and the 3,672 lodgings to be built deepened feelings of despair among veterans. However, apart from the unofficial promises, the British had never stipulated how many houses would be built.
On the outskirts of the town of Boyle (County Roscommon, Southern Ireland), thirty-seven applicants submitted files to obtain one of eight lodgings erected in 1926. 73 83 In April 1929, veterans occupied 297 houses in Belfast, and the ISSLT had recently authorized an additional expenditure of £37,000 for the erection of fifty-eight more cottages. 84 The resentment and despair generated by the woefully insufficient housing for veterans throughout Ireland was laid at the door of British authorities. But the special provision from the Northern Irish authorities also created a widening gap between Northern and Southern veterans, illustrating the consequences of one section of the island agreeing to take some financial responsibility for them and the other section refusing to do so. Whereas the Irish Free State depended entirely upon British taxpayers for funding to build houses for the former servicemen, Northern Ireland did not. By November 1922, two years after the state's creation, Northern Irish authorities had "built between 800 and 900 houses in Belfast, and not one of those houses is occupied by other than ex-service men." 85 These dwellings were erected independently of the housing schemes supervised by the ISSLT. Overall, including the 1,046 dwellings that would be built by the ISSLT, nearly 2,000 houses in Northern Ireland would be erected for veterans only-almost as many as in the Free State. 86 Local initiatives accounted for the higher number of houses for veterans in Northern Ireland, and for the small numbers of complaints relative to those from Southern Irish veterans.
Moreover, the Northern Irish state had built the only colony dedicated to disabled men, as the MP Thomas M'Connell observed: "We have also built, something which does not exist anywhere else in the whole of these islands-a colony for absolutely disabled men. In that colony the men are living free of rent and taxes for the term of their natural life." 87 By contrast, under the ISSLT, veterans all over Ireland paid a weekly rent for every lodging. 88 Thus it seemed that, thanks to the Northern Irish taxpayer, disabled veterans living in the colony in Belfast were better treated than other disabled veterans under the ISSLT. In the House of Commons, on 22 November 1922, representatives from the three nations of Great Britain praised the initiatives of the Northern Irish ruling body: "We, on this side of the House, are exceedingly pleased that the representatives of Northern Ireland have been able to secure such exceedingly good conditions for the ex-servicemen, and we should have liked to see such conditions given, where required, to any other body of men or women." 89 MPs openly praised the endeavor of Northern Ireland in acting independently from the ISSLT to alleviate unemployment among its veterans.
To determine how Irish veterans fared in comparison with the rest of the United Kingdom, more comparisons are required. 90 In Scotland, by 1923, 1,304 holdings for veterans had been erected. 91 The same year, in England and Wales, some 56,000 applications had been received from veterans, and 16,800 (30 percent) had received a lodging. Clearly, a higher rate of veterans registered in England obtained a dwelling than did those in Ireland. However, the legislative situation of England must be taken into account. While in England the Land Settlement (Facilities) Act (1919) meant the national government worked "through county councils to increase the number of smallholdings," 92 in Southern Ireland, the body in charge of housing veterans operated with no support from the Irish Free State, county councils, or local authorities. (To some extent, the situation was similar in Scotland). In Northern Ireland, authorities worked alongside the ISSLT and even complemented its work in financing their own lodging schemes.
Once veterans moved in, all over Ireland, the deplorable quality of some construction soon led to numerous claims against the British government. Unreliable water supply endangered the lives of children and, coupled with problematic sewerage systems, enraged the veterans. For the fifteen tenants of Brookville, the nearest available water supply was located in the town of Tipperary, more than a mile distant. 93 Rain seeped in through the outside walls of all the houses. Even after four or five days of fine weather, "large damp patches were visible on the plaster of the inside walls." Rain came through the roof and into the bedrooms, forcing tenants "to resort to buckets to contain the flow." Road drainage also had serious effects on the habitability of the site. Water often came right up to the doors, washing away the thin layer of gravel on the pathways. Tenants were obliged to "wade knee deep in mud" to their houses. 94 The "Bluebell" colony had thirty-two houses crowded on eight acres of land. 95 At Killincarrig, ten houses were ideally located on nineteen acres of land on the outskirts of Bray, with each tenant entitled to half an acre of garden. However, the bad quality of the building work brought protests. The colony had to endure the stench of "open sump pits," infuriating tenants who also had to put up with the driving rain that came in "through the walls and windows," and "the gusting of the gales under the doors." 96 While the grievances were completely legitimate, they were wrongly directed against British authorities. After the establishment of the Irish Free State, the ISSLT had inherited 1,508 properties from the Board of Works, the Irish authority responsible for the construction of the houses. 97 The Board of Works had made some disastrous mistakes in the construction of these housing schemes; its inability to inspect and check the quality and reliability of the builders and hold them accountable was a major one. The ISSLT had to cover all the costs of repairs.
Northern Irish veterans experienced similar complications. In Ballymena, no provision had been made for a water supply and tenants had to fetch their water from a well 540 yards away. 98 Numerous complaints were received from the tenants of Cookstown; windows let in rain, mainly through the top portion of the sash. 99 In Hillsborough, defective bricks in the chimneys resulted in dampness leading to the rapid deterioration of the walls. In eight out of the nine cottages, chimneys had to be taken down and rebuilt. 100 In Lisburn, improvements to drainage had to be carried out to alleviate flooding. 101 In 1926, the Northern Irish daily Northern Whig published a diatribe from the chairman of the Ministry of Finance overtly targeting the British authorities: "It was a perfect scandal the way in which the ex-servicemen of Lurgan were treated by the Imperial Government." Originally, eighteen houses were to be built in Lurgan, but the number was decreased to twelve, generating much resentment among veterans. 102 News of the deplorable living conditions in the colonies for veterans spread across the island, to the point where in 1927 the Irish Times in Dublin published an article titled "Tenements for Heroes" 103 to illustrate the conditions in which veterans were left to live in both Northern and Southern Ireland. However, as scandalous as conditions were, they were already common. The sump-pit system had "generally been a failure throughout the country; a cause [was that it was] not fool-proof and the drains easily blocked by old rags, stones of all kinds and abnormal matter." 104 Overcrowding and poor sanitation were also widespread. In 1911, out of 861,879 dwellings registered for the whole island, more than 58,000 (6.4 percent) had only one room. 105 In 20 percent of cases in the category, more than five household members shared the single room, and as many as twelve people could be living there in conditions of severe overcrowding. 106 The Irish Times wanted to blame British authorities but failed to note that the Board of Works responsible for building the houses operated as an Irish agency totally independent from the British government. 107 Such unfounded accusations spoke to the charged political climate where all claims and concerns could be regarded through the lenses of ambiguous and conflicting Anglo-Irish relations.
Demand greatly exceeded supply in both North and South. Nevertheless, to blame the British government for the deplorable quality of the houses built would be erroneous. The ISSLT had inherited the properties and covered the costs of bringing them up to standard. The capping of the number of houses to be built indeed generated resentment, but both Northern and Southern Ireland were affected by limited stock. It is, however, true that the Northern Irish Parliament in Stormont passed a number of resolutions permitting the financing of hundreds of dwellings for veterans of the First World War; the Dáil Éireann did not. 108
PARTITION, IMPERIAL OBLIGATIONS, AND STATE BUILDING
Throughout the United Kingdom, unemployment severely affected the lives of those who had served in the First World War. One factor in the high unemployment rate was the wartime prohibition of emigration. When, "under normal conditions," thirty thousand people on average had emigrated from Ireland every year, the unemployed population "might have been absorbed by emigration." 109 By 1919, "emigration from Ireland had declined by 90% as compared with 1913 and the actual number of emigrants leaving Ireland in 1919 was something less than 4,300." 110 At the Ministry of Pensions in the Dublin district at that period, 117 the numbers were insignificant, representing only 350 veterans. 118 In Belfast and Tipperary, Government Instructional Factories undertook disability training. By January 1921, the instructional factory in Belfast had trained 102 men. One year later, in January 1922, it registered 1,101 men trained, 824 in training, and 2,669 on the waiting list. At that time, the instructional factory in Tipperary listed 1,772 men who had completed their training, 795 were in training, and 2,699 were awaiting a place. 119 In May 1923, taking into account all the various training schemes, Northern Ireland registered 2,330 men who had completed their training, while 1,057 were still finishing. In Southern Ireland, British authorities estimated that 2,966 veterans had completed the scheme and 617 were still in training. 120 In September 1919, King George V had launched a network throughout the United Kingdom intended to support the economic rehabilitation of veterans. The King's National Roll asked that British employers, industries, and companies hire a minimum of 5 percent of disabled war veterans. 121 In 1921, more than one hundred firms in Northern Ireland had signed up to the scheme. By 1924, the number had increased fivefold, with 505 companies having hired disabled veterans. 122 In Southern Ireland, however, no steps were taken to apply to the King's National Roll, as the country had no local employment committees. 123 That situation accounted for the claim that British veterans in the Irish Free State were at a decided disadvantage to their comrades in Ulster and across the water. 124 However, as Taylor noted, "employers who desired to co-operate in the scheme were entitled to do so." 125 Some of the largest employers in the Free State, such as the Guinness Brewery and Jacob's Biscuit Factory, adopted the principle of the King's Roll and gave preferential treatment to British veterans. 126 Local war pension committees also registered for the scheme. 127 In addition, local authorities and county councils in Southern Ireland contributed, with limited effect, to the employment of veterans. Between 1921 and 1922, the urban district of Fermoy (County Cork) hired 154 veterans to build and repair roads. The urban district of Lurgan (County Armagh) employed sixteen veterans, the Down county council (Northern Ireland) employed eleven, the urban district of Dun Laoghaire employed twenty-five, and Galway county council employed three. Twenty veterans found employment with the county council of Kilkenny, forty-one with the county council of Offaly, and seventy found work in the urban district of Longford. 128 By the end of 1919, local authorities and county councils had enabled 3,400 veterans to find employment in public works throughout Ireland. By February 1920, 8,610 veterans in Ireland had been placed in employment. 129
However, in 1920, in Belfast alone, 3,500 veterans and officers were still looking for work. 130 At that time, the Belfast Local Committee reported that six hundred men were awaiting training, and while no doubt a number of them would be placed in training with various employers and in the recently opened Instructional Workshops, the committee secretary noted, "A great many of them-I fear the majority of them-will not be placed for months to come." 131 After partition, Northern Ireland had decided not to be entirely dependent upon funding from the British government to relieve unemployment. Between 1920 and 1922, £1,000,000 had been spent from Northern Irish revenues in Belfast for the purpose of relieving unemployment among war veterans, and an additional £1,000,000 had been granted for the same purpose. 132 In 1924, Belfast's mayor invited the assistance of a delegation of more than one thousand business owners to employ the 6,363 disabled veterans living in the city. 133 Thus local and state authorities clearly became involved in the socioeconomic reintegration of veterans, something that did not happen in Southern Ireland.
Instead, in 1923, the Free State's Executive Council's president, William Cosgrave, called on private employers to rehire former members of the National Forces: "Manifestly, it is the first duty of employers to reinstate men who left their employment to join the National Forces in the hour of the country's need; and secondly, to set aside a fair proportion of vacancies for those who have rendered such loyal service to the people's cause." 134 In other words, the Southern authorities requested private employers to help reintegrate former members of only the IRA and the National Army, incorporating these groups of "loyal" veterans in the local framework-but no calls were launched to relieve unemployment among "British" veterans.
However, private employers and owners of companies and shops in Southern Ireland could, if they wanted, hire them. Mathew Delaney found employment in the accounting department of Messrs. Henry Ford & Son after serving two years in the Royal Air Force. 135 Former members of the crown forces received support from William Robinson, who carried on a business as wholesale and retail merchant under the name of James Pim & Son: "Over 25 per cent of my hands," Robinson stated, "were British ex-service men." 136 A well-known loyalist in Mountmellick (County Laois), Robinson's action was consistent with his Unionist convictions. Indeed, some employers who had enrolled in the British army during the war and returned to Ireland offered preferential treatment to their comrades-in-arms; comradeship among veterans played an important role in helping demobilized veterans find jobs. 137 Thus some private employers found themselves in a position of partially compensating for the covertly discriminatory policies against British veterans in some parts of the country. Although their contribution cannot be precisely measured, they undoubtedly helped to attenuate unemployment among veterans, facilitating their transition into civilian life.
Yet in the climate resulting from the War of Independence and the Irish Civil War, businesses in some places refused to hire veterans identified as being former British army. This was mainly the case in places where employment was scarce and where a majority of the population nursed resentment toward Britain after the atrocities committed locally by the Black and Tans during the struggle for independence. When Thomas McCarty obtained his demobilization in January 1919 after having served two and a half years in the British Expeditionary Corps in Egypt, he presented himself to his former employer in Ennistymon (County Clare) and was abruptly "told to go and work for the people [he] fought for." Searching for employment elsewhere, he faced "a blank refusal in each instance." 138 Thomas O'Brien, his three brothers, and their father "had served in the British Forces and no one wanted to have anything to do with the family." 139 After returning to Clare in November 1919, O'Brien was unable to secure employment for twelve months. John Wallace regretted that on his return from the war, "there was not much room for an ex-service man in Cork City." 140 Opposition to hiring the veterans was embedded in economic conditions. As David Fitzpatrick pointed out, by 1919, "the arrival home of hordes of former soldiers seeking jobs" 141 seriously threatened agricultural and working-class communities in the competition for work. In light of the endemic poverty and underemployment in Ireland, any policy of preference directed toward veterans fueled bitter hostility. Unemployment and underemployment in the post-independence period made it difficult for anyone to find a job, as evidenced by the following witness statements: "In Thurles . . . employment is scarce. Ex-servicemen are not young and sometimes disabled. There are many young men in competition against them and in the groups which congregate at corners ex-servicemen are not in the majority by any means. Templemore has no employment to offer a population prominently made up of ex-servicemen, as is the case in most Southern towns." 142 In western counties such as Galway, Clare, and Mayo "the ex-service population is quite small," yet unemployment was still "most rife." 143 Free State authorities cannot be held accountable for the refusal of some private entrepreneurs to hire veterans of the First World War, a matter of personal initiative. Southern Irish veterans had good reason to be worried when faced with economic ostracism, but what increased their concerns was the lack of any preferential treatment by the state. The British government unquestionably undertook to stand back from its imperial obligations during the Anglo-Irish Treaty negotiations with representatives of the Irish Provisional Government in 1921. Britain had tried to transfer some of the financial burden of supporting economic rehabilitation of veterans to the newly created Irish Free State, asking it to cover 50 percent of the costs. After several weeks of intense disagreement, the Free State insisted that rehabilitation of veterans was "clearly an imperial debt" 144 and categorically refused to contribute. It agreed that veterans were entitled to the fulfilment of promises made to secure their enlistment; Dublin was prepared to cooperate fully with the British government and to place at its disposal a reasonable extent of the powers necessary in order to fulfil these obligations.
However, Free State authorities underlined that what was owed these Irishmen was "a British Imperial obligation and [it could not] be contended that it owed its existence to any action or promise of a representative Irish authority." 145 This staunch position resulted from pragmatic considerations. First, Dublin could not consent to having to raise funds from the Irish taxpayers for a purpose that would have been contrary to its perception of the public interest. In addition, the liability of the British government had plainly been defined in Article 5 of the Treaty of the 6th December, although the article did not mention any sum relative to what was unquestionably the imperial moral responsibility.
The Irish authorities' refusal to contribute 50 percent of the cost of the reintegration of First World War men and officers, coupled with the British authorities at least partially reneging on imperial obligations, outraged veterans. Their fury was exacerbated when they learned that Southern authorities were supporting the reintegration of men and officers who had actively fought against the British presence in Ireland during the War of Independence, and that the Dáil was passing a number of circulars to support former IRA members. From the signing of the Anglo-Irish Treaty onward, it was the veterans who had participated in the Irish War of Independence who received preferential treatment in employment schemes. 146 Veterans of the First World War serving in the Irish Free State army were still receiving monthly British war pensions. Talks between the Irish Free State and British authorities underlined the consistent and pragmatic approach of both governments. Both agreed that a citizen of the Irish Free State should not receive a British war pension while serving in the National Army. Veterans of the First World War who were currently members of the Irish Free State army were demobilized and allowed to keep their British war pensions. Irish authorities demobilized these veterans in order to allow unemployed former members of the IRA to join the department. Yet the Irish Free State's intent was not to discriminate against veterans of the First World War. In terms of purely budgetary logic, it was undesirable to retain men who were receiving both a British pension and a monthly payment from the Irish Free State. 152 The British Legion in Ireland echoed the concerns of its members and requested that the Free State enact "a preferential policy for WWI veterans" 153 as was the case in Northern Ireland. There veterans were given preferential treatment for some civil service jobs; private companies such as Sirocco, 154 156 and Belfast Ropeworks 157 also pledged support. Children of deceased soldiers received clothing from firms and local authorities, 158 something that did not happen in Southern Ireland. In 1926, under the Local Government (War Service Payments) Act, the Northern Irish government demanded that local authorities take into account for all veterans the increase in salary that would have occurred had they not enrolled in the British army. Gas and electricity companies, tramway companies, and city halls received claims for veterans. For the year 1926 alone, the law permitted £18,000 to be dispatched to that end. It also stipulated that the widow of a deceased veteran was entitled to restitution of his income for a period of twenty-six weeks. 159 Endemic consequences of the Anglo-Irish Treaty did not allow for preferential treatment of veterans of the First World War in Southern Ireland. In addition, the decision made by the British government caused resentment among veterans, leading the Committee on Claims of British Ex-servicemen to point out that they undoubtedly suffered from the directives, as compared to their Northern Irish comrades. Moreover, under the Empire Settlement Act passed on 31 May 1922 (between the date of signature of the treaty and the date of its ratification), the British government took on certain financial provisions in order to assist veterans in the United Kingdom wishing to emigrate to other parts of the empire such as Canada and Australia. British veterans resident in the Free State were not eligible for the benefits of the act. 
As to whether or not this was an omission or a deliberate exclusion, the Committee of Claims concluded that the British government knew that veterans in Southern Ireland would later be excluded from the provisions. 160 One is inclined to argue, as the committee seems to suggest, that the British government ensured that veterans of the First World War in the twenty-six counties were not granted the same privileges as were those in the United Kingdom: "The Act should have been framed so as to define the position of these men in the Free State. Further, that such men should not be placed in less fortunate position than men resident in Great Britain, and that they should be permitted to benefit by the financial provisions of the Act." 161 In other words, special dispositions for the emigration of Southern Irish veterans should have been incorporated to grant them the same rights as Northern Irish veterans.
But from a legal aspect, Southern Irish veterans came directly under the authority of the Free State. Questions related to their emigration were purely a domestic affair. Beyond the continuation of schemes and the building of colonies, the British government did not want to encroach on the sovereignty of the Dáil. Significantly, before the signing of the Anglo-Irish Treaty (1921), the Provisional Government had opposed the emigration of veterans of the First World War. That is evident from the fact that before the passing of the Empire Settlement Act (1922), the newly elected MPs had voiced their indignation at the imperial British policy allowing veterans to emigrate to British dominions. On 5 June 1920, Cathal Brugha, minister for defense, had published a manifesto in which he accused British authorities of bleeding the country of its youth. 162 Dáil Éireann believed the immigration schemes were a strategy undertaken by the British to weaken Ireland: "The enemy has declared that there are too many young men in Ireland, and he is anxious to clear them out." 163 Northern Irish authorities too resented the possibility of veterans emigrating. As Kent Fedorowich has pointed out, "Sir James Craig and his Unionist supporters could ill afford to lose too many of their Protestant brethren to emigration agents." 164 Even though less than 10 percent of veterans had left Ireland in 1919, both Northern and Southern Irish authorities were apprehensive that the emigration of young Irishmen would weaken Nationalist and Unionist movements in the island. 165 However, the British approach during the passing of the Empire Settlement Act worsened the situation of Southern veterans-pointing to the need to nuance Taylor's claim that "complaints regarding the exclusion of Southern Irish ex-servicemen from the Empire Settlement Act 1922 seem unjustified." 166 Newspapers noted that emigration would have helped Southern Irish veterans "to escape from misery and unemployment," 167 a belief shared by the Committee of Claims. 168 Both Northern and Southern Ireland had objected to the emigration of war veterans during the revolution (1918-24), but in the end, while Northern Irish veterans were eligible under the act, Southern Irish veterans could not benefit from it and so had no legal means to emigrate.
Unemployment among Southern Irish veterans increased significantly when, following the Anglo-Irish Treaty, British troops left the twenty-six counties. Large sectors of the local population who had relied on the British army for employment as auxiliary or maintenance staff or for business contracts in supplying provisions now found themselves losing employment or contracts. 169 This was a cause of widespread concern. 170 When the British garrison was evacuated from Birr in 1922, the nearby village of Crinkle, where a large number of veterans lived, had directly experienced "greatly altered local conditions." 171 In May 1927, former padre (chaplain)
"year": 2021,
"sha1": "06b32204c4dfcaccb2c6ef3ad53f668004724a4e",
"oa_license": "CCBY",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/C61FFC5E2CBC1C0D2D7E6A2C8A44FC4D/S0021937121000617a.pdf/div-class-title-nobody-s-children-political-responses-to-the-homecoming-of-first-world-war-veterans-in-northern-and-southern-ireland-1918-1929-div.pdf",
"oa_status": "HYBRID",
"pdf_src": "Cambridge",
"pdf_hash": "06b32204c4dfcaccb2c6ef3ad53f668004724a4e",
"s2fieldsofstudy": [
"History"
],
"extfieldsofstudy": []
} |
Characterisation of Pea Milk Analogues Using Different Production Techniques
SUMMARY
Research background
Among legumes, peas are characterised by their high protein content, low glycaemic index and exceptional versatility. However, their potential as a food is often compromised by their undesirable off-flavour and taste. Hence, this study focuses on minimising off-flavours through simple pretreatments with the aim of improving the potential for the production of pea milk analogues. Pea milk analogues are a burgeoning type of plant-based milk alternatives in the growing plant-based market.
Experimental approach
Pea seeds were subjected to different pretreatments: (i) dry milling, (ii) blanching followed by soaking in alkaline solution and subsequent dehulling and (iii) vacuum. Typical physicochemical properties such as pH, viscosity, colour, titratable acidity and yield were measured to obtain a brief overview of the products. Consumer acceptance test, descriptive sensory analysis, gas chromatography-mass spectrometry and gas chromatography-olfactometry were used to map the complete sensory profile and appeal of the pea milk substitutes.
Results and conclusions
The L* values of the pea milk analogues were significantly lower than those of cow's milk, while a*, b*, viscosity and pH were similar. In the descriptive sensory analysis, sweet, astringent, pea-like, cooked, hay-like, boiled corn and green notes received relatively higher scores. The vacuum-treated pea milk analogues received higher scores for flavour and overall acceptability in the consumer acceptance test. The pretreatments resulted in significant changes in the volatile profiles of the pea milk analogues. Some volatiles typically associated with off-flavour, such as hexanal, were found in higher concentrations in blanched pea milk analogues. Among the applied pretreatments, vacuum proved to be the most effective method to reduce the content of volatile off-flavour compounds.
Novelty and scientific contribution
This study stands out as a rare investigation to characterise pea milk analogues and to evaluate the impact of simple pretreatments on the improvement of their sensory properties. The results of this study could contribute to the development of milk alternatives that offer both high nutritional value and strong appeal to consumers.
INTRODUCTION
In recent years, consumers have reduced their consumption of animal products due to growing awareness of sustainability, environmental impact of food and concerns about diseases associated with animal-based diets (1). In response to these trends, food manufacturers and researchers are developing plant-based alternatives such as meat and dairy analogues. The plant-based food market, open for further expansion and innovation, has experienced rapid growth in recent years and is expected to reach USD 161.9 billion by 2030 (2). Plant-based milk analogues represent the largest product category of the plant-based market (3). Plant-based milk analogues are water-soluble extracts of plant materials and they are similar in appearance and consistency to cow's milk.
Pulses are considered the most important raw materials for plant-based milk analogues due to their protein-rich and nutrient-rich properties. Commercially, the most popular and accessible pulse-based milk analogue is soy milk (4). Soybean is one of the richest sources of protein among pulses. However, soy allergy restricts the consumption of soy products (5). In addition, antinutrients such as enzyme inhibitors and tannins reduce the bioavailability of soy protein (6).
Peas, soybeans, wheat and rice are the most important sources for the production of plant-based alternatives (7). Peas are becoming a promising alternative to soy for the production of plant-based milk analogues due to their low allergenicity, widespread availability, and high nutritional value, thus attracting more and more attention (8). Pea (Pisum sativum L.) is one of the oldest crops in the world and is grown in 84 countries, including Australia, Canada, China and the United States (9). Moreover, the pea has the largest share (36 %) of total pulse production worldwide (10). Therefore, it is recognised as an excellent source of nutrients, especially its high-quality protein. Pea protein (~20-25 % of pea seed) is rich in essential amino acids such as tryptophan and lysine and characterised by its high digestibility and notably fewer allergenic reactions than soybean or other plant proteins (10). Peas are also rich in soluble and insoluble fibre, low in fat and sodium and a remarkable source of complex carbohydrates, B-group vitamins, folate and minerals, especially iron, calcium and potassium (9). In addition, the consumption of peas is associated with various health benefits, such as anticancer, antiobesity, antidiabetic and cardioprotective effects (11). However, the use of peas in food is limited, partly due to their undesirable sensory attributes, known as 'beany off-flavour' (12).
The off-flavour of peas can either be inherent or develop during processing and storage (13). The main off-flavours in peas are described as green, beany, earthy, hay-like, bitter and astringent. They are associated with volatile compounds such as aldehydes, ketones and alcohols, as well as non-volatile compounds such as isoflavones and saponins (13,14). The presence of off-flavour related volatiles is mostly attributed to the oxidation of unsaturated fatty acids catalysed by enzymes (15). In this context, lipoxygenase (LOX), hydroperoxide lyase enzymes and indirectly lipase have been reported to play an important role in the formation of volatile off-flavour compounds (16,17).
There are only a few studies on the improvement of the sensory properties of products made from green pea seeds. Azarnia et al. (18) investigated the volatiles in yellow, green and greyish-brown cotyledons of field pea cultivars grown under uniform conditions to evaluate the effect of cultivar, harvest year and processing methods (dry milling, cooking and dehulling) on the volatile flavour compounds. The authors indicated that the volatile flavour compounds in peas were affected by the cultivar, harvest year and processing conditions. Moreover, cooking significantly reduced the total area counts of these volatile compounds.
Bi et al. (19) performed roasting (160 °C for 30 min), high hydrostatic pressure (200-550 MPa for 10 min) and treatment with inhibitors (ascorbic acid, quercetin, epigallocatechin-3-gallate and reduced glutathione) to improve the sensory properties of pea milk. The authors found that high hydrostatic pressure in combination with quercetin had the best inhibitory effect on LOX-2 enzyme activity, which correlated significantly with hexanal content.
Ma et al. (8) applied different pretreatments to dried yellow peas, such as dehulling, blanching, acid soaking, alkaline soaking and their combinations. The authors produced pea milk yoghurt and found that a combination of blanching and acid soaking led to the highest sensory scores, as evaluated by a panel of ten trained members. It was concluded that this pretreatment improved the sensory appeal compared to the control sample. Yen and Pratap-Singh (15) reported that microwave-vacuum drying significantly reduced the total volatile compounds in pea protein and had great potential to reduce off-flavour intensity. Lan et al. (20) evaluated the effects of spray drying based on solid dispersions on the sensory properties of pea protein isolate and found that dispersions with gum Arabic and maltodextrin reduced the beany flavour. Tanger et al. (21) reported that both spray drying and freeze-drying reduced the beany off-flavour and improved the sensory properties of pea protein.
The main objective of this study is to evaluate the effectiveness of simple pretreatments (dry milling, which served as a control, blanching followed by soaking in alkaline water and subsequent dehulling, and vacuum), which can be easily transferred to large-scale production, in mitigating the characteristic off-flavour in pea milk analogues and to investigate the correlation between LOX activity and sensory acceptance. A further aim is to investigate the effect of the treatments on the physicochemical and sensory properties of pea milk analogues.
Materials
Pea (Pisum sativum L.) seeds were purchased at local markets in Çanakkale, Turkey. Pea seeds from three different brands were combined to increase the representativeness of the sample. The combined material had average mass fraction of moisture, crude protein, ash, crude fat, insoluble, soluble and total dietary fibre of 9.25, 24.05, 2.83, 2.32, 7.97, 0.56 and 8.53 %, respectively. The moisture mass fraction was measured at 130 °C (22). The crude protein mass fraction was measured by the macro-Kjeldahl method, with a nitrogen conversion factor of 6.25 to calculate the protein content (23). The ash mass fraction of the samples was measured by linear heating to 650 °C (24). The crude fat mass fraction was determined by the Soxhlet method with hexane as the solvent (25). Soluble, insoluble and total dietary fibre mass fractions were analysed using a commercial enzyme kit (Megazyme, Wicklow, Ireland) by an enzymatic-gravimetric mechanism (26).
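The crude protein figure above comes from a Kjeldahl nitrogen measurement multiplied by a conversion factor. A minimal Python sketch of that arithmetic follows; the nitrogen value is back-calculated purely for illustration and is not reported in the text:

```python
def crude_protein(nitrogen_pct: float, factor: float = 6.25) -> float:
    # Kjeldahl nitrogen-to-protein conversion; 6.25 is the factor used here.
    return nitrogen_pct * factor

# 24.05 % crude protein implies roughly 3.85 % nitrogen (illustrative only).
print(f"{crude_protein(3.848):.2f} % crude protein")
```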
Additionally, three different brands of whole milk and two different brands of semi-skimmed cow's milk were purchased to compare some physicochemical properties.
Pea seed pretreatments
Three different pretreatments of pea seeds were used: (i) dry milling (control): pea seeds were ground with a laboratory grinder (IC-02A; Yuhong Industry, Jiangsu, PR China) and sieved through a 300-μm sieve, (ii) blanching followed by soaking in alkaline solution and then dehulling: the pea seeds were blanched by immersing them in boiling water (~100 °C) for 3 min to inactivate the LOX enzyme. They were then soaked in alkaline water (pH=9) for 1 h, dehulled manually and wet milled using a blender (8011S; Waring, Stamford, CT, USA) for 5 min at high speed, and (iii) vacuum: the pea seeds were dry milled and then hydrated for 30 min on a magnetic stirrer at room temperature. The suspension (m(solid):V(water)=1:10) was then transferred to a rotary evaporator (RV 8; IKA, Staufen, Germany) and subjected to a constant vacuum (0.08 MPa) at 50 °C for 30 min with a rotation speed of 50 rpm. The pea milk analogues produced from peas subjected to the above pretreatments were named DPMA, BPMA and VPMA, respectively.
Determination of LOX activity
The LOX activity of pea seeds was determined according to Lampi et al. (27) with some modifications. To extract LOX, 10 g of pea seeds were weighed and milled with distilled water (1:10) in a blender (8011S; Waring) for 2 min. The mixture was centrifuged (NF 800R; Nüve, Ankara, Turkey) at 9435×g and 4 °C for 15 min and the supernatant was used as enzyme extract after dilution with M/15 phosphate buffer, pH=6.8. The substrate was a 10 mM linoleic acid (Sigma-Aldrich, Merck, St. Louis, MO, USA) solution in 1 % Tween 20 in water, which was clarified with 1 M NaOH. The change in absorbance at 234 nm was recorded immediately (UV-160A; Shimadzu, Kyoto, Japan) after the addition of 0.2 mL of enzyme extract to a mixture of 2.6 mL of M/15 phosphate buffer and 0.2 mL of substrate solution for a period of 270 s. The LOX activity results were calculated using the following equation proposed by Baltierra-Trejo et al. (28):

U = (ΔA · V_t · D_f · 10^6) / (t · ε · d · V_s)

where U is the enzyme activity (μmol/(min·L)), ΔA is the difference between the final and initial absorbance, V_t is the total reaction volume (mL), D_f is the dilution factor, 10^6 is the concentration correction factor (μmol/mol), t is the reaction time (min), ε is the molar absorption coefficient (26 000 M⁻¹·cm⁻¹), d is the optical path (1 cm) and V_s is the final volume of the sample (mL).
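As an illustration, the activity relation above maps directly onto a small function; the input values below are invented for demonstration and are not measurements from this study:

```python
def lox_activity(delta_A, V_t, D_f, t, V_s, epsilon=26000.0, d=1.0):
    """LOX activity U in umol/(min*L): U = (dA*V_t*D_f*10^6)/(t*eps*d*V_s)."""
    return (delta_A * V_t * D_f * 1e6) / (t * epsilon * d * V_s)

# Hypothetical run: absorbance rise of 0.13 over 270 s (4.5 min) in the
# 3.0 mL reaction described above, using 0.2 mL of 10-fold diluted extract.
print(f"{lox_activity(delta_A=0.13, V_t=3.0, D_f=10, t=4.5, V_s=0.2):.1f}")
```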
Production of pea milk analogues
All samples of pea milk analogues were prepared at m(pea):V(water)=1:10 for comparison. The suspension was exposed to the above pretreatments, then filtered through a <100 µm sieve and heated at about 80 °C for starch gelatinisation. The starch was hydrolysed with commercial α-amylase enzyme (LT-300; Spezyme, Dupont, DE, USA) according to the instructions (1 µL enzyme solution per g sample). The mixture was then homogenised (T25 Digital; IKA) at 3276×g for 5 min and was sterilised in a screw-capped glass bottle (1 L) at 121.1 °C for 5 min using an autoclave (HV-110L; Hirayama, Tokyo, Japan).
Physicochemical analysis
The viscosity of the final pea milk analogues (after sterilisation) was measured at 20 °C using a viscometer (LVDV-II+Pro; Brookfield, Toronto, Canada) equipped with an SC4-18 spindle rotating at a shear rate of 264 s⁻¹. The colour of the final pea milk analogues was measured according to the CIE L*a*b* system using a cylindrical cuvette (cell holder CR-A503, tube cell CR-A504; Minolta, Osaka, Japan) and a colorimeter (CR-400; Minolta). Whiteness was calculated according to Milovanovic et al. (29). A digital pH meter (S20; Mettler Toledo, Columbus, OH, USA) was used for pH measurements. Titratable acidity was determined according to Nielsen (30) and the results were expressed as mass fraction of lactic acid equivalents in %. The yield was determined according to Moscoso Ospina et al. (31) and calculated as a mass fraction of sterilised pea milk analogue in its initial wet mass.
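The exact whiteness expression used is the one given in Milovanovic et al. (29); a commonly used CIE L*a*b* whiteness index, sketched below as an assumption that may or may not match that reference exactly, takes the distance-to-ideal-white form:

```python
import math

def whiteness_index(L: float, a: float, b: float) -> float:
    # 100 minus the colour distance from ideal white (L*=100, a*=0, b*=0);
    # a common whiteness index, assumed here rather than taken from ref. (29).
    return 100 - math.sqrt((100 - L) ** 2 + a ** 2 + b ** 2)

print(f"{whiteness_index(L=75.2, a=-1.1, b=9.8):.1f}")  # made-up readings
```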
Consumer acceptance test
The effect of the pretreatments on the sensory appeal of the pea milk analogues was evaluated using a consumer acceptance test according to Meilgaard et al. (32). The participants (approx. 60 % female and 40 % male) were predominantly university staff and students (N=58) aged from 21 to 53. A 9-point hedonic scale (1=dislike extremely, 2=dislike very much, 3=dislike moderately, 4=dislike slightly, 5=neither like nor dislike, 6=like slightly, 7=like moderately, 8=like very much, 9=like extremely) was used for the evaluation. The samples of pea milk analogues were coded with random three-digit numbers and served to panellists in plastic cups (~20 mL) at room temperature and under daylight. Drinking water was served between samples to cleanse the palate.
Descriptive sensory analysis
The sensory attributes of the pea milk analogues were evaluated using a descriptive sensory analysis according to Meilgaard et al. (32). Seven trained panellists (5 females, 2 males) aged between 27 and 54 developed potential sensory terms by tasting different types of commercial plant-based milk analogues in several rounds. The definitions and references of the developed descriptive terms are given in Table 1. Each type of milk analogue was assessed in duplicate for the sensory attributes using a 15-point scale (0 represents no attribute and 15 indicates a strong presence of the attribute).
The samples of pea milk analogues were coded with random three-digit numbers and served to panellists in plastic cups (~30 mL) at room temperature. Unsalted crackers and drinking water were provided between samples to cleanse the palate.
Gas chromatography-mass spectrometry analysis
The volatile compounds of pea milk analogues were extracted with the headspace solid-phase microextraction (HS-SPME) method and identified with gas chromatography-mass spectrometry (GC-MS). Briefly, 5 mL of sample, 1 g of NaCl and 10 μL of internal standard (10 μL of 2-methyl-3-heptanone in 5 mL methanol) were mixed in a 40-mL amber vial capped with a PTFE/silicone septum (Supelco, Bellefonte, PA, USA). The content was incubated in a water bath at 50 °C for 30 min. Then, an SPME fibre (Carboxen/DVB/PDMS, 50/30 μm, 2 cm; Supelco) was inserted into the vial and incubated under the same conditions for another 30 min to absorb volatile compounds. At the end of that period, the SPME fibre was injected into the GC-MS (HP 6890 GC and 7895C mass selective detector; Agilent, Santa Clara, CA, USA) in splitless mode. An HP-INNOwax column (60 m×0.25 mm i.d., 0.25 μm film thickness; J&W Scientific, Agilent) was used for the separation of volatile compounds. Helium was used as carrier gas at a flow rate of 1 mL/min. The GC oven temperature was initially set at 40 °C for 1 min, then ramped up to 250 °C at a rate of 4 °C per min, with a final hold time of 10 min. The MS was operated at ionization energy of 70 eV, interface temperature of 280 °C, mass range from 35 to 350 m/z and scan rate of 4.45 scan/s. National Institute of Standards and Technology (NIST) (33) and Wiley Registry of Mass Spectral Data libraries (34) were used for the identification of volatile compounds (based on >70 match score). Retention indices were calculated according to Van den Dool and Kratz (35) using an n-alkane series (C7-C23) (Sigma-Aldrich, Merck) as external references.
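For reference, the Van den Dool and Kratz linear retention index interpolates an analyte's retention time between the two bracketing n-alkanes. A short sketch follows, with placeholder retention times rather than values measured in this work:

```python
def linear_retention_index(t_x, t_n, t_n1, n):
    """Van den Dool-Kratz index for temperature-programmed GC.
    t_x: analyte retention time; t_n, t_n1: retention times of the n-alkanes
    eluting just before and after it; n: carbon number of the earlier alkane."""
    return 100 * (n + (t_x - t_n) / (t_n1 - t_n))

# Placeholder example: a compound eluting between C8 (12.4 min) and C9 (15.1 min).
print(f"{linear_retention_index(t_x=13.2, t_n=12.4, t_n1=15.1, n=8):.0f}")
```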
Gas chromatography-olfactometry analysis
Aroma-active compounds of pea milk analogues were extracted with the HS-SPME method as described above, except that no internal standard was added. The SPME fibre was then injected into the GC system (HP 6890 GC; Agilent) equipped with an olfactory detection port. A DB-5 column (30 m×0.32 mm i.d., 0.25 μm film thickness; J&W Scientific, Agilent) was used for the identification of aroma-active compounds. Helium with a flow rate of 1.7 mL/min was used as a carrier gas. The GC oven temperature was initially set at 40 °C for 3 min, then ramped up to 200 °C at a rate of 10 °C per min, with a final hold time of 10 min. Intensities of aroma-active compounds were determined with a 10-point scale (0=no intensity, 10=strong intensity). Odour descriptions were compared with: (i) the n-alkane series (C7-C23) (Sigma-Aldrich, Merck), which were injected under the same chromatographic conditions and the retention indices of each compound were matched to the NIST database (33) and literature, (ii) data obtained with GC-MS, and (iii) authentic standard compounds which were analysed under the same chromatographic conditions.
Statistical analysis
The data were evaluated using Minitab v. 21.4.2 (36), SPSS v. 27.0.1.0 (37) and NCSS v. 11 (38) statistical software. Parametric data were assessed with analysis of variance (one-way ANOVA) and multiple comparisons were made with Tukey's test (p<0.05). Non-parametric data were assessed with the Kruskal-Wallis test and multiple comparisons were made with Dunn's test (p<0.05). All data were expressed as mean value±standard error. The mean values are of three replicates, except for the GC-O analyses, which were conducted twice.
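As a rough illustration of the same pipeline outside Minitab/SPSS/NCSS, the Python sketch below runs one-way ANOVA followed by Tukey's test on a parametric variable and the Kruskal-Wallis test followed by Dunn's test on a sensory score. The column names and toy data are hypothetical, and Dunn's test comes from the third-party scikit-posthocs package.

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd
import scikit_posthocs as sp  # third-party package providing Dunn's test

# Hypothetical measurements for the three pretreatments
df = pd.DataFrame({
    "sample": ["DPMA"]*3 + ["BPMA"]*3 + ["VPMA"]*3,
    "viscosity": [2.53, 2.60, 2.55, 3.20, 3.25, 3.18, 2.90, 2.95, 2.88],
    "astringent": [6, 7, 6, 7, 8, 7, 3, 4, 3],
})

# Parametric: one-way ANOVA, then Tukey's HSD at p < 0.05
groups = [g["viscosity"].values for _, g in df.groupby("sample")]
print(stats.f_oneway(*groups))
print(pairwise_tukeyhsd(df["viscosity"], df["sample"], alpha=0.05))

# Non-parametric: Kruskal-Wallis, then Dunn's test at p < 0.05
scores = [g["astringent"].values for _, g in df.groupby("sample")]
print(stats.kruskal(*scores))
print(sp.posthoc_dunn(df, val_col="astringent", group_col="sample"))
```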
LOX activity
It is widely acknowledged that the volatile compounds responsible for inducing off-flavours primarily result from LOX enzyme activity, which catalyses the oxidation of unsaturated fatty acids in the presence of oxygen (17). Additionally, the LOX enzyme is associated with quality loss as it leads to discolouration, pigment degradation and loss of essential fatty acids (16). In this regard, the inactivation of the LOX enzyme appears to be crucial for pea processing. The effect of blanching on LOX activity as a function of process time is shown in Fig. 1. It was determined that LOX was completely inactivated after 3 min of blanching. In addition, it was observed that LOX activity increased in the early stages (0-60 s) of blanching and thereafter showed a decreasing trend (Fig. 1). This is most likely due to inhomogeneous heat transfer; in other words, different regions of the grain reached the temperature at which the enzyme is inactivated at different times. Similar results were found by Gökmen et al. (39), who reported complete inactivation after blanching at 80 °C for 2 min.
Physicochemical properties of pea milk analogues
The physicochemical properties of the pea milk analogues are shown in Table 2. Viscosity is a critical physical parameter used in quality control related to mouthfeel. During the preliminary assessments, it was observed that the viscosity of the pea milk analogues was primarily correlated with the mass fraction of solids and the hydrolysis of starch. It was not possible to obtain a final product with a drinkable viscosity after sterilisation if starch hydrolysis was not performed. The viscosity of the pea milk analogues, which were prepared at the same solid mass fraction (10 %), ranged between 2.53 and 3.25 mPa·s. The viscosity of both whole and semi-skimmed cow's milk samples from various brands, measured using the same method, ranged between 1.9 and 2.1 mPa·s. Similar viscosities for semi-skimmed (1.56 mPa·s) and whole cow's milk (2.00 mPa·s) were reported by Nikmaram and Keener (40). Jeske et al. (41) evaluated the physicochemical properties of 17 commercial plant-based milk analogues and found that their viscosity varied widely between 2.21 and 47.80 mPa·s. It is worth mentioning that the viscosity of the final product can be significantly modified by the hydrolysis of raw materials with a high content of starch.
The pH and titratable acidity expressed as lactic acid of the unformulated pea milk analogue were in the range of 6.84-6.86 and 0.06-0.08 %, respectively (Table 2). Similar pH and titratable acidity values were reported in other studies on plant-based milk analogues (42). On average, the pH and titratable acidity expressed as lactic acid of commercial cow's milk samples were 6.5 and 0.16 %, respectively.
The yield of the pea milk analogues ranged between 72.2 and 87.2 %, with dry milling resulting in a significantly higher yield than wet milling (p<0.05) (Table 2). Previous studies have reported much lower yield values, in the range of 50-60 % (43). The difference in yield values may be attributed to different processes, particularly the filtration and milling of the raw material, as well as differences in calculation methods.
Colour is a sensory attribute that significantly affects consumer preference. L* and whiteness values of pea milk analogues were quite low compared to cow's milk. The L* value of the pea milk analogue ranged between 43.46 and 47.89 (Table 2), while the L* value of commercial cow's milk samples was between 76 and 79 (data not shown). The darker colour of the pea milk analogue was attributed to the chlorophyll degradation and non-enzymatic browning reactions that may occur during sterilisation. Similarly, studies have reported that the colour of soy milk that was heat-treated at increased temperatures is adversely affected by Maillard reactions. Additionally, the browning index of soy milk has been observed to increase with longer holding times at high temperatures (44). The lower L* value in BPMA, which involves a dehulling step, suggests that the pigments are not concentrated in the hulls of peas, unlike other pulses such as lentils, faba beans and mung beans (45). It is also important to note that ingredients added during the formulation step of PBMA can have a significant effect on the colour of the final product. For instance, the addition of oil and homogenisation of the mixture can result in a significant increase in the L* value (data not shown). The calculated whiteness value followed exactly the same trend as the L* value (Table 2). Negative a* values, indicating greenness, and positive b* values, indicating yellowness, were observed in this study (Table 2), and the results were similar to those of the commercial cow's milk samples. On the other hand, Oliveira et al. (46) reported a decrease in L* and an increase in a* and b* values when increasing concentrations of pea protein isolate were added to skimmed cow's milk.
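The paper does not state which whiteness formula was used; a common choice in food colour work is the whiteness index WI = 100 − sqrt((100 − L*)² + a*² + b*²), sketched below with illustrative L*a*b* values of the magnitude reported here. Both the formula choice and the inputs are assumptions, not taken from the study.

```python
import math

def whiteness_index(L, a, b):
    """Whiteness index commonly used for milk-like products (assumed formula).

    WI = 100 - sqrt((100 - L*)^2 + a*^2 + b*^2)
    """
    return 100.0 - math.sqrt((100.0 - L) ** 2 + a ** 2 + b ** 2)

# Illustrative values: pea milk analogue (L* ~ 45) vs cow's milk (L* ~ 78)
print(round(whiteness_index(45.0, -1.5, 12.0), 1))  # ~43.7: low whiteness, darker product
print(round(whiteness_index(78.0, -1.0, 8.0), 1))   # ~76.6: markedly whiter
```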
Consumer acceptance of pea milk analogue
The results of the consumer acceptance test for pea milk analogues are shown in Table 3. In consumer acceptance tests, food products are usually presented in their final form in which they would be consumed. However, the pea milk analogues were produced and presented in unformulated form to eliminate the masking effect of ingredients such as sugar and flavourings. Therefore, it is important to emphasise that these results apply to unformulated samples. Additionally, the addition of ingredients, especially sugar during the formulation stage, significantly increases consumer acceptance. Despite being unformulated, all samples received scores above 5 (meaning neither like nor dislike) on a 9-point hedonic scale (Table 3). The participants could not detect any significant difference between the samples subjected to different pretreatments in terms of appearance and consistency (p>0.05). However, VPMA received the highest scores for aroma/flavour and overall acceptability, which can be attributed to the volatilisation of undesirable off-flavours in a water bath at 50 °C and their subsequent elimination under vacuum. The vacuum treatment was carried out on a laboratory scale, suggesting that more efficient results can be achieved with vacuum systems on an industrial scale. Vacuum treatment has also been described as an effective strategy for removing beany flavour from soy milk (47). While no statistically significant difference was found between the consumer scores, DPMA received the lowest overall acceptance score on average, which was very close to that of BPMA (Table 3). Therefore, it can be hypothesised that blanching and dehulling pretreatments did not have a positive effect on the overall sensory perception of the pea milk analogue. In other words, the inactivation of LOX did not provide any additional benefit in terms of increasing consumer appeal. Similarly, Murat et al. (48) reported that off-flavours can occur even when LOX is inactivated. On the other hand, it is also worth noting that consumer acceptance tests are highly subjective and may not be reproducible when applied to a different or much larger consumer community.
Descriptive sensory analysis of pea milk analogues
The results of the descriptive sensory analysis of pea milk analogues are shown in Fig. 2. The panellists developed fifteen flavour descriptors, namely astringent, pea-like, cooked, sulphureous, nutty, earthy, hay-like, boiled corn, polish, dirty wet towel, metallic, green, fermented dough, medicinal and wet cardboard. Among these, sweet, astringent, pea-like, cooked, hay-like, corn and green received relatively higher scores than the other descriptive terms (Fig. 2). Statistically significant differences were found in the scores of astringent, boiled corn, and green in relation to the pretreatments. Similar descriptive terms have been reported in previous studies on pea milk (49,50). Zhang et al. (49) found that "earthy" notes received the highest score in pea milk, followed by "grassy/green", "mushroom" and "sweet". Bi et al. (19) conducted a sensory evaluation of pea milk, in which trained panellists were instructed to list as many attributes as possible to describe the sensory profile. The researchers found that the five terms with the highest frequency among all defined attributes were raw beans, grassy, milk-like, earthy and fatty. Moreover, Trikusuma et al. (50) reported that the notes beany, potato, pasta and cooked green bean were the most frequent in pea protein beverage.
In the present study, it was found that vacuum pretreatment resulted in a significantly lower intensity of the astringent, boiled corn and green notes (p<0.05). In addition, the intensities of the sensory attributes pea-like, earthy, polish, dirty wet towel, metallic, fermented dough and wet cardboard were lower in VPMA (Fig. 2). The sensory descriptors mentioned above are primarily perceived as undesirable and are often associated with off-flavours. It can therefore be concluded that the results of the descriptive sensory analysis are consistent with those of the consumer acceptance test. On the other hand, the intensities of the "pea-like" and "green" notes were the highest in BPMA, which underwent blanching pretreatment to inactivate LOX (Fig. 2). This finding suggests that the off-flavour of peas is not solely due to LOX enzyme activity, as has been emphasised by other researchers (13).
GC-MS analysis of pea milk analogues
The volatile compounds of the pea milk analogues identified by GC-MS are listed in Table 4. Of the 21 detected compounds, nine (2-ethyl-furan, 1-pentanal, hexanal, butanoic acid/2-methylpropyl ester, 2-heptanone, (Z)-2-heptenal, thujone, benzaldehyde and 2-furanmethanol) were present in all samples. The identified volatiles belong to different groups such as aldehydes, alcohols, ketones, esters, furans and phenols. Most of these identified volatiles are formed as a result of oxidation, enzymatic activity and/or Maillard reactions in materials such as pea flour, pea protein isolates and pea milk (12,48,51).
In this study, the main volatiles found at relatively higher concentrations (>10 µg/L) were hexanal and 2-heptanone in DPMA; 2-ethyl-furan, 1-pentanal, hexanal, 2-heptanone, 2-pentyl-furan and 1-pentanol in BPMA; and 2-ethyl-furan, hexanal, 2-heptanone, 2-pentyl-furan and thujone in VPMA (Table 4). Similarly, Ma et al. (8) reported that pretreatments such as blanching and dehulling can significantly alter the content and type of volatile compounds. Most of these compounds are mainly derived from linoleic acid, the most abundant fatty acid in peas. The concentration and interaction of these compounds in the system significantly influence the sensory properties (12,52). Several studies suggest that hexanal is a key compound associated with off-flavours and that removing this compound from the material can improve its flavour (50,53). The hexanal content of BPMA, which was heat-treated to inactivate LOX, was higher than that of the pea milk analogues subjected to other pretreatments (Table 4). This result indicates that the formation of hexanal in pea milk analogues is not solely due to LOX activity, but may also result from other reaction pathways (48). Even the heat treatment itself, which is used to deactivate LOX, could possibly contribute to increased hexanal formation. Lin and Blank (54) found that hexanal is the major odour-active volatile degradation product of heated phospholipids. Similarly, Trikusuma et al. (50) reported an increase in the amounts of hexanal, 1-pentanol, 1-octen-3-ol, 2-heptanone and 2-pentyl-furan in pea protein beverages after ultra-high-temperature treatment. Moreover, Bi et al. (19) reported that although they found a significant correlation between hexanal content and LOX activity in pea milk, they only observed a 55 % reduction in hexanal content compared to a 90 % inhibition of LOX activity.
CONCLUSIONS
The results showed that the physicochemical properties of the pea milk analogues subjected to different pretreatments were generally similar, except for yield, which was higher in the samples treated with dry milling. Vacuum treatment reduced the green and pea-like notes in the descriptive sensory analysis. Additionally, vacuum-treated pea milk analogues received higher scores for aroma, flavour and overall acceptability in the consumer acceptance test. The concentration of certain volatile compounds believed to contribute to off-flavours, such as hexanal, 1-octen-3-ol and 1-pentanol, was increased in the pea milk analogues pretreated with blanching, alkaline soaking and dehulling. Although lipoxygenase (LOX) is known for its role in the production of off-flavours, the results suggest the existence of different mechanisms, as evidenced by the highest concentration of off-flavour markers in the pea milk analogues from blanched (LOX-inactivated) peas. Overall, the olfactometric intensities showed only minimal variations between the different pretreatments.
The results of the study show that the off-flavour in pea milk analogues cannot be explained by LOX activity alone. However, vacuum pretreatment proved to be an effective method for removing the off-flavour. Nevertheless, further research is needed to fully investigate the effectiveness of vacuum treatment in a more efficient and large-scale system.
Fig. 1. Lipoxygenase (LOX) activity as a function of the blanching time
Fig. 2. Descriptive sensory analysis results of the pea milk analogues. Values marked with different letters are significantly different (p<0.05). A 15-point scale was used, where 0 represents no attribute and 15 indicates a strong presence of the attribute. DPMA, BPMA and VPMA=pea milk analogues pretreated with dry milling, blanching and vacuum, respectively

CONFLICT OF INTEREST
None of the authors have any conflict of interest.

AUTHORS' CONTRIBUTION
A. E. Andaç contributed to the research by conducting formal analyses, curating data, interpreting data and reviewing the relevant literature. N. B. Tuncel made important contributions to the research, including the conceptualisation, supervision and design of the analysis. N. Y. Tuncel made a significant contribution to the research by conceptualising the study, designing the methods of analysis, supervising and actively participating in the writing and editing of the manuscript.

ORCID ID
A. E. Andaç: https://orcid.org/0000-0002-0898-066X
N. B. Tuncel: https://orcid.org/0000-0001-9885-5063
N. Y. Tuncel: https://orcid.org/0000-0003-2700-5840
Table 1. Definitions and references for the descriptive terms used in descriptive sensory analysis. *Reference numbers for the basic taste indicate their position on the 15-point hedonic scale (HS)
Table 2. Physicochemical properties of the pea milk analogues. a Results are expressed as mean value±standard error. Mean values followed by different letters in superscript within the same row are significantly different (p<0.05). DPMA, BPMA and VPMA=pea milk analogues pretreated with dry milling, blanching and vacuum, respectively; TA=titratable acidity
Table 4. Volatile profile of the pea milk analogue determined by gas chromatography-mass spectrometry analysis. Results are expressed as mean value±standard error, N=3. Mean values followed by different letters in superscript within the same row are significantly different (p<0.05). DPMA, BPMA and VPMA=pea milk analogues pretreated with dry milling, blanching and vacuum, respectively; RT=retention time, RI=retention index
Table 5. Aroma-active compounds of the pea milk analogue determined by gas chromatography-olfactometry analysis. Results represent the olfactory intensity on a 10-point scale, where 0=none or not perceptible intensity, and 10=extremely high intensity. O=olfactory identification, RI=retention indices matched to the NIST database (33) and literature, STD=authentic standard compounds which were analysed under the same chromatographic conditions, MS=mass spectrometry identification. DPMA, BPMA and VPMA=pea milk analogues pretreated with dry milling, blanching and vacuum, respectively | 2024-07-06T15:03:22.843Z | 2024-06-01T00:00:00.000 | {
"year": 2024,
"sha1": "c9436062aebc435303be579c0d2bb6c0f215c746",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ScienceParsePlus",
"pdf_hash": "695a5da624e3e8e9119731858ddd4bad38cd3f05",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
207808274 | pes2o/s2orc | v3-fos-license | Factors That Control the Formation of Dendrites and Other Morphologies on Lithium Metal Anodes
Lithium metal is a promising anode material for next-generation rechargeable batteries, but non-uniform electrodeposition of lithium is a significant barrier. These non-uniform deposits are often referred to as lithium “dendrites,” although their morphologies can vary. We have surveyed the literature on lithium electrodeposition through three classes of electrolytes: liquids, polymers and inorganic solids. We find that the non-uniform deposits can be grouped into six classes: whiskers, moss, dendrites, globules, trees, and cracks. These deposits were obtained in a variety of cell geometries using both unidirectional deposition and cell cycling. The main result of the study is a figure where the morphology of electrodeposited lithium is plotted as a function of two variables: shear modulus of the electrolyte and current density normalized by the limiting current density. We show that specific morphologies are confined to contiguous regions on this two-dimensional plot.
INTRODUCTION
There is growing interest in the nature of electrodeposition at lithium metal electrodes due to the current focus on increasing the energy density of rechargeable lithium batteries (Girishkumar et al., 2010;Balsara and Newman, 2013). However, many fundamental challenges must be addressed before lithium electrodes can be deployed in practical devices (Aurbach et al., 2000, 2002). One of the main challenges is the nucleation and growth of protrusions during battery charging (Selim and Bro, 1974;Besenhard and Eichinger, 1976;Epelboin, 2006), which limits the battery lifetime and compromises safety (Yamaki et al., 1998;Aurbach et al., 2002). These protrusions are often referred to as "lithium dendrites." Strictly, the word "dendrite" implies a branched structure; we propose to not use this term as many non-dendritic morphologies have been reported in the literature.
Lithium metal protrusions have been observed in electrodeposition and cycling experiments conducted in a wide array of electrolytes (Arakawa et al., 1993;Brissot et al., 1998;Ren et al., 2015). We focus on three classes of electrolytes: those based on organic liquids, organic polymers, and inorganic solids. The lithium ions are present in both liquid- and polymer-based electrolytes due to the addition of a suitable salt. In contrast, lithium ions are an integral part of the crystal structure of inorganic solid electrolytes. The passage of current results in salt concentration gradients in liquid- and polymer-based electrolytes (Chazalviel, 1990). On the other hand, these concentration gradients are absent when current flows through inorganic solid electrolytes. In principle, polymer electrolytes can also be single-ion conductors if the anion is covalently linked to the polymer chain (Bouchet et al., 2013). Compared to polymer electrolytes with added salt, there is little information about dendrite morphologies obtained in polymeric single-ion conductors (Cao et al., 2019;Dai et al., 2019). We have thus chosen to only discuss liquid and polymer electrolyte systems with added salt. Our objective is to identify the parameters that control the nature of lithium electrodeposition in liquid electrolytes, polymer electrolytes, and ceramic electrolytes.
The morphology of electrodeposited lithium is affected by many factors, such as current density, salt concentration where lithium is plating, tip radius of the protrusion, temperature, pressure (Yamaki et al., 1998;Gireaud et al., 2006), solid electrolyte interphase (SEI), and the ion transport and mechanical properties of the electrolyte (Barton and Bockris, 1961;Diggle et al., 1969;Jana and García, 2017). While we have focused on the anode and the electrolyte, it is well-known that spontaneous reactions between lithium metal and all known electrolytes result in the formation of an SEI layer (Peled, 1979), which plays a central role in stable cycling (Tarascon and Armand, 2001;Meyerson et al., 2019). We have also glossed over the fact that liquid electrolytes are often contained within porous separators that are necessary for battery operation. The observed morphologies are grouped in six different classes described in Table 1. Important features of the lithium morphology are summarized in Table 2.
Whiskers emanating from the anode represent the simplest morphology of lithium protrusions. These are generally long and thin structures, with widths of about 1 µm and lengths ranging from 10 to 100 µm (see first entry in Table 2). Panel a in Table 1 shows a scanning electron microscopy (SEM) image of whiskers. A schematic of whiskers is shown next to the SEM image in Panel b (Table 1). In this schematic, we represent the whiskers connected directly to the anode without an intervening SEI layer (Aurbach et al., 2002). We hypothesize that this must be the case; we are not aware of any direct support for our hypothesis. The whiskers are covered by an SEI layer (Peled, 1979;Aurbach et al., 1987) and surrounded by the electrolyte. Panel c in Table 1 shows an SEM image of mossy lithium and a schematic of this morphology is shown in Panel d (Table 1). This electrodeposited lithium presents solid interconnected pebbles with electrolyte filling gaps and pores. Panels e,f in Table 1 present lithium dendrites. Dendrites are thin-branched, fractal objects. Lithium globules are presented in Panels g,h (Table 1). These objects are found in confined regions, unlike whiskers, mosses and dendrites, which tend to form across the entire electrode. Globules are nucleated at an impurity particle in the electrode. Panels i,j in Table 1 show lithium trees emanating from the electrode. Unlike mosses that are irregular in shape, the lateral size of trees increases with increasing distance from the electrode. Panels k,l in Table 1 show lithium deposition through cracks in ceramic electrolytes. In this case, lithium protrusions grow through grain boundaries and result in cracking of the electrolyte. Typical sizes and aspect ratios of the protrusions described above are given in Table 2.

TABLE 1 | Name, experimental visualization (image), and schematic of the common morphologies of electrodeposited lithium: whiskers, moss, dendrites, globules, trees, and cracks. Images are reproduced with permission from (a) Steiger et al. (2014), (b) Qian et al. (2015), (c) Bai et al. (2016), (d) Harry et al. (2015), (e) Brissot et al. (1998), and (f) Cheng et al. (2017).
There have been many attempts to model the growth of metallic protrusions during electrodeposition from different electrolytes (Barton and Bockris, 1961;Diggle et al., 1969;Monroe and Newman, 2005;Voss and Tomkiewicz, 2006;Mayers et al., 2012). In the case of liquid- and polymer-based electrolytes, the passage of current results in depletion of salt at the cathode where lithium is being deposited. The current density, i, at which the salt concentration at the cathode approaches zero is defined as the limiting current density, i_L. The depletion of salt has been implicated in lithium protrusion nucleation and growth (Chazalviel, 1990;Bai et al., 2016). However, lithium protrusions have been observed to nucleate and grow at current densities far below the limiting current (Xu et al., 2014;Jana and García, 2017). The nucleation and growth of lithium protrusions is also affected by the modulus and other mechanical properties of the electrolyte (Monroe and Newman, 2003, 2005;Barai et al., 2017). The importance of limiting current has been recognized in the literature (Chazalviel, 1990;Bai et al., 2016;Barai et al., 2017;Maslyn et al., 2018). In this review, we demonstrate that the lithium protrusion morphologies obtained in different classes of electrolytes are mainly functions of two parameters: (1) current density normalized by the limiting current density and (2) modulus of the electrolyte. We present literature results for lithium protrusion morphology as a function of these two parameters on a "morphology diagram." Each morphology is roughly restricted to contiguous regions on this diagram.
METHODS
The classes of electrolytes covered in this review are given in Table 3. The first column in Table 3 lists the class of electrolyte. Table 3 also includes the orders of magnitude of relevant electrochemical properties: conductivity, κ, salt diffusion coefficient, D, and steady-state current ratio, ρ+. Conductivity is measured by ac impedance, the salt diffusion coefficient is measured by restricted diffusion, and the steady-state current ratio is measured in lithium-lithium symmetric cells.
The steady-state current fraction deserves some clarification. This approach for characterization of electrolytes was pioneered by Bruce, Vincent, and Evans (Bruce and Vincent, 1987;Evans et al., 1987). In this approach, a fixed dc potential is applied to the symmetric cell and the current is recorded as a function of time. At early times, the salt concentration in the electrolyte is uniform (as it is when the cell is at rest) and the current obtained under these circumstances is dictated by conductivity alone. We ignore the contribution from interfacial impedance in this description; this is discussed extensively in the literature (Evans et al., 1987). We refer to the current obtained in the absence of a concentration gradient as i_Ω. The passage of time results in the establishment of salt concentration gradients, which, in turn, lead to a reduction of current. The ratio of the final steady-state current obtained in such an experiment, i_SS, to i_Ω is defined as the steady-state current fraction ρ+ (Gray and Bruce, 1995;Galluzzo et al., 2019). In the literature this fraction is often called the transference number, t+. It is known, however, that ρ+ equals t+ only for the case of dilute electrolytes that are thermodynamically ideal (i.e., when the salt activity coefficient is unity). Since practical electrolytes are never dilute, it is our understanding that the transference number is different from ρ+ in all of the electrolytes covered in this study. Nevertheless, the steady-state current fraction is an important characteristic of electrolytes. The analysis presented in this paper makes extensive use of this characteristic.
DISCUSSION
The systems chosen for this study are listed in Table 4. Each class of electrolytes is presented using a different color background: blue for liquid electrolytes listed first, red for polymer electrolytes listed second, and yellow for inorganic solid electrolytes listed third. The liquid electrolytes are alkyl-carbonate-based systems used in lithium-ion batteries. The alkyl carbonates of interest include ethylene carbonate (EC), dimethyl carbonate (DMC), diethylene carbonate (DEC), and propylene carbonate (PC). The salts dissolved in these liquids include bis(trifluoromethanesulfonyl)imide lithium salt (LiTFSI), lithium perchlorate (LiClO4), and lithium hexafluorophosphate (LiPF6). The second class is polymer electrolytes, which are composed of a neutral polymer, such as poly(ethylene oxide) (PEO) or the block copolymer poly(styrene)-b-poly(ethylene oxide) (SEO). The salt dissolved in these polymers is usually LiTFSI or lithium bis(fluorosulfonyl)imide (LiFSI). The last class is inorganic solid electrolytes, which can be either classical crystalline solids with lithium ions in the lattice, such as Li7La3Zr2O12 (LLZO), or glassy solids, such as 80Li2S-20P2S5.
The importance of limiting current has already been discussed above. However, this parameter is seldom measured directly by experiment. We thus use a simple expression for the limiting current taken from Monroe and Newman (2003),

i_L = 2FDc_b / [(1 − ρ+)L],   (1)

where c_b is the salt concentration in the conducting phase, F is Faraday's constant, and L is the distance between the two electrodes. We have taken the liberty of replacing the transference number in equation (23.5) in Monroe and Newman (2003) by ρ+. Equation 1 only applies to electrolytes containing added salt. It does not apply to single-ion conductors.
In Table 4, we provide values for the parameters required to calculate i_L. For completeness we provide values for conductivity, though this parameter is not used in our analysis. The first entry in Table 4 is the classical lithium-ion battery electrolyte. The formation of lithium protrusions in this electrolyte was studied using two different kinds of cells: lithium-lithium symmetric cells and a half cell with carbon as the cathode. Protrusions were obtained after cycling the cells, indicated in the cycling/deposition column by 'C'. The electrochemical parameters were obtained from references (Valøen and Reimers, 2005;Dahbi et al., 2011), as indicated in the first entry in Table 4. In many of the entries in Table 4, electrochemical characterization data were obtained from different references, as is the case for the first entry. In such cases, the reference is provided below the parameter. In cases where electrochemical characterization data were presented along with lithium protrusion characterization, no references are provided next to the characterization data. The parameter L is the distance between the electrodes. In the case of composite cathodes like the carbon (graphite particles) used in the first entry, we ignore salt concentration gradients that occur within the electrolyte contained inside the pores of the composite cathode. In addition, when the cell has two different electrodes, one of the electrodes is always lithium metal and we are only concerned with electrodeposition of lithium on the lithium metal electrode. The second entry in Table 4 is similar to the first entry except that the cell was not cycled. The formation of lithium protrusions was studied after unidirectional electrodeposition, indicated in the cycling/deposition column by 'D'. In the third entry in Table 4, the copper cathode is electrochemically inactive. Lithium metal is deposited onto this cathode, which is similar to a lithium metal electrode after sufficient passage of current. Other entries related to liquid electrolytes in Table 4 include surfaces treated with specific chemicals [e.g., copper treated with lithium fluoride (LiF) and lithium treated with silicon carbide (SiC)]. A majority of the liquid electrolyte studies reported in Table 4 were conducted on lithium-lithium symmetric cells.
In many systems reported in Table 4, the parameters needed to estimate i_L were not reported in the lithium electrodeposition studies. We have relied on the literature to estimate parameters in these cases. The deposition experiments for liquid electrolytes were conducted in the vicinity of room temperature (20 °C to 30 °C). We have not accounted for temperature variation between different studies. The values of i_L reported in Table 4 vary from 0.06 to 380 mA.cm−2. While many parameters affect i_L, the wide range of i_L values is largely due to differences in L (see Equation 1). We posit that the differences in mechanical properties of liquid electrolytes are small, and therefore not relevant. We thus do not report the modulus of these systems in Table 4.
The work on lithium electrodeposition through polymers is restricted to PEO homopolymers and PEO-containing block copolymers. The electrodeposition experiments are conducted at elevated temperatures (e.g., 90 °C), due to poor ion transport in the vicinity of room temperature. The electrochemical parameters reported in Table 4 are applicable at the temperature at which the electrodeposition was performed. In polymer electrolytes the values of i_L range from 0.001 to 4 mA.cm−2. The in-phase shear modulus, G', of the polymers in the low-frequency limit is also reported in Table 4. At the temperatures of interest, PEO homopolymers are rubbery liquids and in the low-frequency limit G' is proportional to ω², where ω is the frequency. In other words, the shear modulus of PEO homopolymer is negligible at low frequencies. On the other hand, SEO block copolymers exhibit a frequency-independent G' in the low-frequency limit. This solid-like behavior is due to the presence of the glassy polystyrene domains. The values of G' reported for these systems are thus non-negligible.
The last set of entries in Table 4 pertains to inorganic solid electrolytes. In these systems Li+ is the only mobile ion and thus ρ+ = 1. The limiting current in these systems cannot be calculated using Equation (1) (Monroe and Newman, 2003). The shear moduli of these materials are four orders of magnitude larger than those of the polymeric solids in Table 4. In Table 4 we report the shear modulus, G', the ionic conductivity, κ, the diffusion coefficient, D, the steady-state current ratio, ρ+ (or best estimated transference number), the applied current density, i, the measured or calculated limiting current, i_L, the distance between the electrodes, L, the type of cell, whether the lithium was unidirectionally electrodeposited (D) or cycled (C), and the reference. The number next to the polymer electrolytes refers to the number-averaged molecular weight of the polymer in kg.mol−1.
There are many studies of lithium protrusion morphology in novel electrolytes such as ionic liquids and organic-inorganic composite electrolytes. We do not include them due to the unavailability of the necessary electrochemical characterization data.
Since theory suggests that the limiting current plays an important role (Chazalviel, 1990;Monroe and Newman, 2003;Bai et al., 2016;Barai et al., 2017;Maslyn et al., 2018), we define a normalized current density as the experimental current density divided by the limiting current density,

i_norm = i / i_L,   (2)

where i is the applied current density.
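A minimal numerical sketch of Equations 1 and 2 is given below; the input parameters are illustrative values of the order reported in Table 4 for a PEO-based electrolyte at 90 °C, not data from any specific entry.

```python
F = 96485.0  # Faraday's constant (C/mol)

def limiting_current(c_b_mol_m3, D_m2_s, rho_plus, L_m):
    """Limiting current density (A/m^2) from Equation 1: i_L = 2FDc_b / [(1 - rho+) L]."""
    return 2.0 * F * D_m2_s * c_b_mol_m3 / ((1.0 - rho_plus) * L_m)

def normalized_current(i_A_m2, i_L_A_m2):
    """Equation 2: i_norm = i / i_L."""
    return i_A_m2 / i_L_A_m2

# Illustrative PEO/LiTFSI-like parameters (assumed values)
c_b = 1000.0   # mol/m^3 (~1 M salt concentration)
D = 1e-12      # m^2/s   salt diffusion coefficient
rho = 0.2      # steady-state current fraction, i_SS / i_Omega
L = 100e-6     # m       distance between the electrodes

i_L = limiting_current(c_b, D, rho, L)      # ~2.4 A/m^2 = 0.24 mA/cm^2
print(i_L, normalized_current(1.0, i_L))    # applied i = 1 A/m^2 -> i_norm ~ 0.41
```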
In Figure 1, we present results of the lithium electrodeposition experiments that were listed in Table 4. The modulus of the electrolytes is given on the abscissa and the normalized current density is given on the ordinate. A red dashed line indicates where the applied current density to the cell equals the limiting current. All of the protrusion morphologies listed in Table 1 occupy contiguous regions in Figure 1 and are shaded in different colors with the corresponding cartoon for visual clarity. The vertical dashed lines in the figure distinguish the three different kinds of electrolytes considered in this review (Table 3): (1) Liquids. In liquid electrolytes with negligible moduli, whiskers (green) are obtained in the regime i_norm = 10^−7.
At this low normalized current density, the driving force for lithium ion reduction is not sufficient to induce branching in the projecting structures. It is important to note that in this particular study, lithium electrodeposition was carried out without a separator. Increasing i_norm in liquid electrolytes results in the formation of mossy lithium (blue). This morphology is obtained in the range 10^−3 < i_norm < 7 × 10^−2. In most studies mossy lithium was observed in coin cells where the electrolyte is contained in a porous separator. The spring in the coin cell does exert pressure on the cell components, including the interface between lithium metal and the porous separator. It is possible that the mossy lithium seen at low values of i_norm is compacted whiskers due to the pressure exerted on them by the separator. Some evidence for this is presented below. Increasing i_norm to 0.08 results in coexistence of moss and whiskers (hatched green and blue). Further increase of i_norm to 0.3 results in dendrites that grow on top of the mossy deposit (hatched gray and blue). The cross-over from mossy to dendritic deposits occurs over a range of i_norm values, from 0.08 to 0.3. In this regime we see two isolated pockets: one where moss and whiskers coexist and one where only moss is observed (see Figure 1). There are no examples in Table 4 where lithium dendrites are seen to grow directly from the planar anode.
(2) Polymers. Low-modulus polymers (G' < 10 Pa) exhibit behavior similar to liquids at low values of i_norm. At i_norm = 0.001, whiskers are seen in a cell that does not contain a separator. Note that at this value of i_norm, mossy lithium deposits have been observed in liquid cells with a separator (see Figure 1). This suggests that some of the mossy deposits seen in liquid electrolytes may be due to the pressure exerted by the separator. Increasing i_norm to 0.01 results in the formation of moss, which is seen in the range between 0.01 and 0.04. At higher normalized current densities, coexisting moss/dendrites are seen. With higher-modulus polymers (G' = 10^3 Pa) and higher normalized current densities, coexisting moss/whiskers are seen for 0.02 < i_norm < 0.05. At higher normalized current densities, i_norm ≥ 0.2, trees are observed (orange). Increasing the modulus of polymers to 10^8 Pa results in stable lithium deposition at i_norm = 0.005. While the currents used under these conditions are too low for practical applications, it is important to note that whiskers are obtained at the same normalized current density in both liquids and polymers with G' < 10 Pa.
Increasing i_norm to 0.01 results in the formation of globular protrusions. This morphology is seen up to i_norm = 0.2. (3) Inorganic solids. The three inorganic solid electrolytes in Table 4 include crystalline and glassy solids. In all cases failure due to the passage of current induces cracking (yellow) in the electrolyte. While lithium protrusions grow through grain boundaries in crystals, it is postulated that they grow through mechanically weak portions of glasses. In a recent study it was shown that grain boundaries in crystalline lithium-ion conductors exhibit higher electronic conductivity than the bulk crystal (Han et al., 2019). This is one explanation for the observation of the growth of lithium protrusions in grain boundaries.
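To make the regime boundaries quoted above concrete, the sketch below encodes them as a rough lookup from (G', i_norm) to the dominant morphology. The cutoffs are approximate readings of Figure 1 and the preceding text, not exact phase boundaries, and the intermediate coexistence regimes are simplified.

```python
def morphology(G_pa, i_norm):
    """Very rough regime lookup based on the ranges quoted in the text.

    G_pa   : low-frequency shear modulus of the electrolyte (Pa)
    i_norm : applied current density / limiting current density
    """
    if G_pa > 1e10:                      # inorganic solids (moduli ~4 orders above polymers)
        return "cracks"
    if G_pa >= 1e8:                      # stiff block copolymers
        return "stable deposition" if i_norm <= 0.005 else "globules"  # globules up to ~0.2
    if G_pa >= 1e3:                      # intermediate-modulus polymers
        return "moss/whiskers" if i_norm < 0.05 else "trees"           # trees at i_norm >= 0.2
    # liquids and low-modulus polymers (G' < 10 Pa)
    if i_norm < 1e-3:
        return "whiskers"
    if i_norm < 0.08:
        return "moss"
    return "moss + dendrites"            # cross-over reported for i_norm ~ 0.08-0.3

print(morphology(G_pa=1e-3, i_norm=1e-7))   # liquid, very low current -> whiskers
print(morphology(G_pa=1e8, i_norm=0.1))     # stiff polymer -> globules
print(morphology(G_pa=1e11, i_norm=0.5))    # ceramic -> cracks
```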
CONCLUSION
We have surveyed the literature on electrodeposition of lithium through a variety of electrolytes: liquids, polymers, and inorganic solids. We have focused on the morphology of lithium protrusions that often emerged in these experiments. We show that these morphologies are governed by two parameters: shear modulus of the electrolyte and current density normalized by the limiting current density. The main result of this work is Figure 1 where we plot the lithium protrusion morphology as a function of these two parameters. The different morphologies appear in contiguous regions in this figure, analogous to a phase diagram.
AUTHOR CONTRIBUTIONS
LF, GS, and JM compiled information from the literature. LF, GS, JM, and NB wrote the manuscript. | 2019-11-01T13:07:55.319Z | 2019-11-01T00:00:00.000 | {
"year": 2019,
"sha1": "b1e727418cf6bfebc03e2b59ee674253d4a97e96",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fenrg.2019.00115/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "e9f730223507acb78876781b146efc3dc794740d",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
247364662 | pes2o/s2orc | v3-fos-license | Impact of next-generation hormonal agents on treatment patterns among patients with metastatic hormone-sensitive prostate cancer: a real-world study from the United States, five European countries and Japan
Background Until five years ago, the metastatic hormone-sensitive prostate cancer (mHSPC) treatment landscape was dominated by the use of androgen deprivation therapy (ADT) alone. However, novel hormonal agents (NHAs) and chemotherapy are now approved for male patients with mHSPC. This study aimed to understand the impact NHA approvals had on mHSPC real-world treatment patterns and to identify the key factors associated with NHA or chemotherapy (± ADT) usage vs ADT alone. Methods Data were collected from the Adelphi Prostate Cancer Disease Specific Programme (DSP)™, a point-in-time survey of physicians and their consulting patients conducted in the United States (US), five European countries (France, Germany, Italy, Spain, and the United Kingdom), and Japan between January and August 2020. Data were analysed using descriptive statistics for individual countries, regions, and all countries combined. Pairwise analyses were used to further investigate differences between treatment groups at global level. Results 336 physicians provided data on 1195 mHSPC patients. Globally, at data collection, the most common mHSPC regimen initiated first was ADT alone (47%), followed by NHAs (± ADT) (31%, of which 21% was abiraterone, 8% was enzalutamide, and 2% was apalutamide) and chemotherapy (± ADT) (19%). The highest rates of ADT alone usage were observed in Japan (78%) and Italy (66%), and the lowest in Spain (34%) and in the US (36%). Our results showed that clinical decision making was driven by patient fitness, compliance, tolerance of adverse events, and balance of impact on quality of life vs overall survival. Conclusions This real-world survey offered early insights into the evolving mHSPC treatment paradigm. It showed that in 2020, ADT alone remained the most common initial mHSPC therapy, suggesting that physicians may prefer using treatments which they are familiar and have experience with, despite clinical trial evidence of improved survival with NHAs or chemotherapy (± ADT) vs ADT alone. Results also indicated that physicians prescribed specific mHSPC treatments primarily based on the following criteria: patient preference, disease burden/severity, and the performance status and comorbidities of the patient. To fully appreciate the rapidly changing mHSPC treatment landscape and monitor NHA uptake, additional real-world studies are required.
Background
Prostate cancer remains one of the leading causes of death among men worldwide [1]. Up to one-third of patients develop metastatic prostate cancer at some point in the course of their disease [2], with metastatic castration-resistant prostate cancer (mCRPC) being associated with poor prognosis and high mortality [3].
One of the disease states that precedes mCRPC is known as metastatic hormone-sensitive prostate cancer (mHSPC) or metastatic castration-sensitive/hormone-naïve disease and encompasses a heterogeneous patient population with varying levels of disease biology, burden of disease, functional status, cancer-related symptoms, and outcomes [2]. De novo metastases are not uncommon; for example, they have been reported in 63% of patients in community oncology settings in the United States (US) [4]. This may be because prostate cancer screening is not routine and hence many patients end up presenting with later-stage, locally advanced or metastatic disease [5,6]. Patients with de novo mHSPC may have more aggressive disease and poorer outcomes than patients who develop metastases later in the course of the disease [7].
After years of little advancement, it is only in the past five years that the treatment landscape for mHSPC has experienced important developments [2,3,8,9]. For seven decades, androgen deprivation therapy (ADT) alone, which was first introduced in the 1940s, was standard of care (SOC) for patients with mHSPC. It was not until 2015 that the therapeutic armamentarium began to develop, when docetaxel chemotherapy (CHAARTED [10]) demonstrated a significant overall survival (OS) benefit compared with ADT alone in mHSPC. Since then, the mHSPC treatment space has been rapidly evolving, with the introduction of novel hormonal agents (NHAs) abiraterone (STAMPEDE [11]; LATITUDE [12]), enzalutamide (ENZAMET [13]), and apalutamide (TITAN [14]). While the former two were originally approved and indicated for use in patients with mCRPC, all of them demonstrated superior OS benefit when added to ADT vs ADT alone or in combination with placebo or nonsteroidal antiandrogen [2,3,8].
All agents are now included in updated guidelines from the European Association of Urology [26], the European Society for Medical Oncology [27], the National Comprehensive Cancer Network® (NCCN®) [28], and the American Urological Association/American Society for Radiation Oncology/Society of Urologic Oncology [29]. As the mHSPC treatment landscape continues to evolve, the optimal treatment strategy is likely to vary by disease burden, patient demographics and clinical characteristics, as well as cost, the latter being particularly relevant in low- and middle-income countries [9]. As NHAs have been approved only relatively recently, there are currently few real-world published data assessing NHA use in the mHSPC space [30]; these are restricted to the US, and largely based on data collected prior to NHA approvals for mHSPC. They indicate that, despite the emergence of docetaxel or NHA combination therapies, a large proportion of men continue to be treated with ADT alone [30][31][32][33].

Keywords: Metastatic hormone-sensitive prostate cancer, Metastatic castration-sensitive prostate cancer, Novel hormonal agents, Treatment patterns, Real-world evidence

Approval dates and indications for mHSPC in the US (FDA), EU (EMA), and Japan (MHLW):
Docetaxel | EU (EMA) [25] | Sept 2019 | In combination with ADT, with or without prednisone or prednisolone, indicated for the treatment of patients with mHSPC
Docetaxel | Japan | N/A | N/A
Abiraterone | US (FDA) [17,22] | Feb 2018 | In combination with prednisone for the treatment of high-risk patients with mHSPC
Abiraterone | EU (EMA) [15] | Nov 2017 | In combination with prednisone or prednisolone for the treatment of newly diagnosed high-risk mHSPC in adult men, in combination with androgen deprivation therapy (ADT)
Abiraterone | Japan (MHLW) [23] | Feb 2018 | For the treatment of hormone-naïve prostate cancer (HNPC) with high-risk prognostic factors
Apalutamide | US (FDA) [19] | Sept 2019 | For the treatment of patients with mHSPC
Apalutamide | EU (EMA) [16] | Jan 2020 | Adult men with mHSPC in combination with ADT
Apalutamide | Japan (MHLW) [24] | May 2020 | For the treatment of men with prostate cancer with distant metastases
Enzalutamide | US (FDA) [18] | Dec 2019 | For the treatment of patients with mHSPC
Enzalutamide | EU (EMA) [21] | May 2021 | For the treatment of patients with mHSPC
Enzalutamide | Japan (MHLW) [20] | May 2020 | For the treatment of prostate cancer patients with distant metastasis
To the best of our knowledge, the present survey represents the first real-world investigation to evaluate mHSPC treatment patterns in different regions of the world post NHA approval. Its main objectives were to understand the impact of the approval of NHAs in the mHSPC space by describing real-world treatment patterns (initial regimens), patient demographics, and clinical characteristics among patients with mHSPC in the US, five European countries (EU5: France, Germany, Italy, Spain, and the United Kingdom [UK]), and Japan, and to identify the key factors associated with NHA or chemotherapy (± ADT) usage vs ADT alone.
Study design
Data were drawn from the Adelphi Prostate Cancer Disease Specific Programme (DSP)™, conducted in the US, EU5, and Japan between January and August 2020. DSPs are large, point-in-time surveys of physicians and their patients presenting in a real-world clinical setting, whose methodology has been previously published and validated [34][35][36].
Participants
Physicians were identified by local fieldwork agents using physician panels and publicly available lists, and invited to participate if they had a specialty in medical oncology or urology, or were specialist surgeons; had personal responsibility for prescribing decisions; were seeing four or more patients (two or more patients in Japan) with metastatic prostate cancer per month, two of whom had to be diagnosed with mHSPC (one patient in Japan); and agreed to adhere to all survey rules and regulations.
Data collection
Participating physicians completed detailed electronic patient record forms (PRFs) for the next four (three in Japan) consecutive consulting patients with mHSPC, reflective of real-world clinical practice. The PRFs collected detailed information about patient demographics, clinical characteristics, and patient management, including treatment history at the time of data collection. A list of definitions and derived variables used in the interpretation of these study data is given in Table 2.
To be included, patients had to meet the following eligibility criteria at the time of data collection: being mHSPC diagnosed aged 18 years or older; receiving systemic drug treatment for their mHSPC; having never participated in a clinical trial; receiving any line of therapy for mHSPC treatment (i.e., initial or subsequent treatment).
Variables
The following variables of interest were collected or derived: patient demographics and clinical characteristics (age, employment status, Eastern Cooperative Oncology Group [ECOG] performance status, disease status, risk status, disease volume, sites of metastases, mean number of bone metastases, family history of prostate cancer, prostate-specific antigen [PSA], haemoglobin and alkaline phosphatase levels); initial treatment regimens for mHSPC; physician-reported drivers of initial mHSPC treatment choice (key clinical reasons for treatment choice).
Analysis
Descriptive statistics for demographics, clinical characteristics, and treatment patterns data were reported at individual country level, as well as at regional level (i.e., US, aggregated EU5 data, and Japan), and for all countries combined (i.e., aggregated global data). Chi-squared and ANOVA tests were used to test across all groups.
To further investigate differences between treatment groups following descriptive analysis of treatment patterns, pairwise analyses (t-tests or Chi-squared tests) were conducted: NHA (± ADT) vs chemotherapy (± ADT), NHA (± ADT) vs ADT alone, and chemotherapy (± ADT) vs ADT alone. No adjustments for multiplicity were made. For clinical reasons, we specifically focused on the NHA (± ADT) vs ADT alone, and chemotherapy (± ADT) vs ADT alone treatment groups; these data are presented in detail in the text and in tables and figures. In addition, NHA (± ADT) vs chemotherapy (± ADT) results can also be found in Table 4. For the pairwise analyses, this article presented aggregated global data and findings were interpreted by treatment type at global level. Analyses were performed on a complete case basis using Stata 16.1 [37].
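As an outside-the-paper illustration (the authors used Stata 16.1), the Python sketch below mirrors the pairwise strategy: t-tests for a continuous variable and chi-squared tests for a categorical one across the three treatment-group pairs, with no multiplicity adjustment. The variable names and toy data are hypothetical.

```python
import pandas as pd
from scipy import stats

# Hypothetical patient-level data
df = pd.DataFrame({
    "treatment": ["NHA"]*4 + ["Chemo"]*4 + ["ADT"]*4,
    "age":       [68, 70, 66, 71, 64, 67, 65, 69, 76, 78, 74, 80],
    "high_risk": [1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0],
})

pairs = [("NHA", "Chemo"), ("NHA", "ADT"), ("Chemo", "ADT")]
for g1, g2 in pairs:
    a, b = df[df.treatment == g1], df[df.treatment == g2]
    t, p_t = stats.ttest_ind(a["age"], b["age"])             # continuous variable
    table = pd.crosstab(
        df[df.treatment.isin([g1, g2])]["treatment"],
        df[df.treatment.isin([g1, g2])]["high_risk"],
    )
    chi2, p_c, _, _ = stats.chi2_contingency(table)           # categorical variable
    print(f"{g1} vs {g2}: age p={p_t:.3f}, high-risk p={p_c:.3f}")
```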
Results
Globally, 336 physicians participated in the survey: 226 in the EU5 (France: 51; Germany: 50; Italy: 45; Spain: 45; UK: 35), 58 in the US, and 52 in Japan. Almost three-quarters (70.8%; n = 238) of physicians at the global level were medical/clinical/radio-oncologists, while 28.3% (n = 95) were urologists, and 0.9% (n = 3) were prostate/specialist cancer surgeons. The same pattern (i.e., oncologists as the majority specialty) was observed at the EU5 and individual country level, except for Japan, where almost all physicians were urologists (92.3%; n = 48). At a global level, similar percentages of physicians worked primarily in an academic/cancer centre (49.7%) or a community setting (50.3%), and a similar pattern was observed in the US, Japan, and France. More physicians in Spain and the UK were based in academic/cancer centres compared with community settings (Spain: 84.4% vs 15.6%; UK: 80.0% vs 20.0%). The opposite was observed in Germany and Italy, where more physicians were community based (Germany: 80.0% vs 20.0%; Italy: 62.2% vs 37.8%) (data not shown).
Discussion
The main objective of the present real-world survey was to understand the impact of the approval of NHAs in the mHSPC setting in the US, EU5, and Japan. To the best of our knowledge, this was the first real-world survey evaluating NHA use in the mHSPC setting in different regions of the world. We found that globally, at the time of data collection, the most common mHSPC regimen initiated was still ADT alone (47%), followed by NHA (± ADT) (31%) and chemotherapy (± ADT) (19%). At the time of data collection this approval was still relatively recent (i.e., September 2019). The highest rates of ADT alone usage were observed in Japan and Italy, and the lowest in Spain and in the US. Differences in ADT alone usage between these regions/countries should be considered in the context of integration into treatment guidelines and insurance coverage, although other factors (e.g., patient choice, physician awareness, preference and/or specialty, drug cost/reimbursement) may have also played a role.

Fig. 1. Initial mHSPC treatment received at time of data collection, split by regions. Note: individual data labels that were < 3% are not shown. ADT: androgen deprivation therapy; EU5: France, Germany, Italy, Spain, and the United Kingdom; mHSPC: metastatic hormone-sensitive prostate cancer; NHA: novel hormonal agents (abiraterone, enzalutamide, apalutamide, darolutamide); UK: United Kingdom; US: United States. † Other combinations included 'Other NHAs' (used in 1.7% of patients overall across treatment lines), 'Other chemotherapy' (0.4%), 'Other combinations including NHA' (1.5%), 'Chemotherapy combination' (0.3%), and 'Any other treatment combinations' (1%).
Ng and colleagues [2] have recently provided a useful overview of the potential decision-making factors influencing choice of first line treatment for mHSPC (in the absence of head-to-head clinical trial data for NHAs in the mHSPC setting), including patient and disease factors, as well as drug licensing and reimbursement. Importantly, clinical trial populations diverge from patients in real-world practice, with trial populations including more de novo patients, whereas in the real-world many patients are likely to have received radical treatment prior to developing metachronous metastases. In addition, patients in the clinical setting tend to be older, less physically fit and with more comorbidities, and hence treatment decisions might be complicated by competing risks [2]. For this subset of patients who are older, less fit, and with more comorbidities, ADT alone remains a reasonable option, although treatment intensification with NHAs or chemotherapy now represents a new SOC in the management of mHSPC treatment in many developed countries, with docetaxel generally being reserved for patients with high-volume disease.
NHAs have been used in the mCRPC setting since the approval of abiraterone in 2011 [38]. These agents were then approved for men with mHSPC as early as November 2017 in Europe (abiraterone); although our international survey took place in 2020, some delay in the change in practice patterns following these approvals and their inclusion in guideline recommendations may naturally be expected. There might be an association between this delay and the results of this study. It is likely that NHA use in the mHSPC setting, and in the treatment of metastatic prostate cancer overall, will increase over time. Such a trend was recently confirmed by Ke and colleagues [31]. Furthermore, our findings suggested that this might already be the case in the US, where the most common mHSPC regimen initiated was NHA (± ADT) (41%), closely followed by ADT alone (36%), and chemotherapy (± ADT) (12%). Previous real-world investigations in the US, evaluating care in mHSPC patients who initiated treatment between 2014 and 2019, reported that overall (i.e., over the entire period), ADT alone was the most common mHSPC regimen initiated, with rates ranging from 47 to 63%. In contrast, NHA use over the same period was low (5-14%) [30][31][32][33], although this finding is unsurprising given that NHAs in the mHSPC setting did not receive US Food and Drug Administration approval until 2018-2019. Indeed, Ke et al. [31] reported that mHSPC patients in their 2017-2018 cohort were less often receiving ADT alone (43% vs 52%) and more often abiraterone (10% vs 4%) as the initial regimen compared with the 2015-2016 cohort. Nevertheless, Swami et al. [32] recently pointed out that in 2018-2019, most men with mHSPC in the Optum health insurance claims database still received ADT alone, including those with visceral metastases (55%); the equivalent rate for NHA + ADT for the same group was 17%. Likewise, George et al. [4] reported that even in 2019, over half of mHSPC patients treated in real-world settings (oncology practices in the ConcertAI Oncology Dataset) did not initiate therapy now known to significantly improve survival (NHA + ADT or NHA + docetaxel) over ADT alone. Importantly, and in contrast with the present survey using patient data at a point in time in 2020, none of these previous investigations assessed the rate of NHA use in 2020; instead they used broad data ranges from as early as 2014, which extended to periods prior to NHA approval for mHSPC in the US.
Although ADT alone still dominates the mHSPC treatment space outside the US, the use of NHAs is expected to increase in other countries/regions in the coming months and years. The speed of uptake of these new therapies in real-world settings, however, may be influenced by patient and disease factors, drug licensing, cost/reimbursement issues and physician awareness and education, as well as other local, regional, and national factors [2,4,39]. Fallara and colleagues [39], who evaluated three nationwide healthcare registries in Sweden, recently reported that uptake of the new indication for abiraterone in men with de novo mHSPC was low (12%) within 27 months after approval of the subsidized use of this agent, which indicates that even with subsidies uptake could be low in some countries or regions. The present survey found that NHA (± ADT) use differed across the EU5, with Spain, France, and Germany having the highest use, and Italy and the UK the lowest.
The second objective of this survey was to identify the key factors associated with NHA or chemotherapy (± ADT) usage compared with ADT alone, and we observed some similarities as well as differences in clinical decision making, based on factors such as patient fitness, compliance, patient preference, and tolerance of adverse events (AEs). Globally, physicians in our survey prescribed NHA or chemotherapy (± ADT) to younger patients who were able to tolerate more aggressive treatment, with the goal of extending life. Physicians reserved ADT alone for older patients who may have compliance issues and who were intolerant of AEs and, most importantly, whose goal was to maintain current QoL.
Likewise, Swami and colleagues [32] found that US patients in the Optum health insurance claims database who received chemotherapy (docetaxel) + ADT or NHA + ADT were younger (mean 68 and 73 years, respectively) than patients who received ADT alone (mean 75 years), although, unlike in the present survey, a greater proportion of patients with more aggressive disease (i.e., visceral metastases) received ADT alone (55%) vs chemotherapy + ADT (9%) or NHA + ADT (17%). Younger age in chemotherapy + ADT-treated men with mHSPC was also observed by Tagawa et al. [33], who further found that patients treated with NHA (abiraterone) + ADT or chemotherapy (docetaxel) + ADT were more likely to have metastatic disease in lymph nodes (as also observed in our survey) and other sites at treatment initiation.
Fallara et al.'s [39] real-world investigation in Sweden indicated low adherence to the restriction that only men with high-risk mHSPC and men not suitable for docetaxel should receive abiraterone. By contrast, we found that, globally, NHA (± ADT) and chemotherapy (± ADT), rather than ADT alone, were more commonly used in mHSPC with high-risk status, high disease volume, and distant metastases (largely in line with approved indications for abiraterone in the US, EU, and Japan [17,22,23], and for apalutamide and enzalutamide in Japan [20,24]), although we did not assess specific NHA use, as the approval timelines for specific NHAs vary, from late 2017 to 2020 in some countries/regions.
The mHSPC treatment landscape has changed considerably over the past few years and continues to evolve [8].
In the future, increased NHA use in mHSPC patients may impact treatment patterns in the mCRPC setting, too. NHA rechallenge may become the frontline treatment option in mCRPC in countries where this is an acceptable option (e.g., Germany [40] and Japan [41]). However, in other countries, such as the UK [42] and France [43], it remains to be seen how the treatment patterns will evolve, since NHA rechallenge is not an approved treatment option there and therefore chemotherapy (docetaxel) may be the first-line treatment of choice in the mCRPC setting. Future studies are warranted to assess how treatment patterns will evolve within the metastatic prostate cancer setting.
Several limitations should be considered in the interpretation of our findings. The DSP was not based on a true random sample of physicians or patients. Although the selection of participating physicians was based on minimal inclusion criteria, participation was influenced by their willingness to complete the survey. Physicians were asked to provide data for a consecutive series of patients to avoid selection bias, but no formal patient selection verification procedures were in place. In addition, we assessed the perceived key clinical reasons for treatment choice, but other reasons may exist; moreover, the perceived key reasons may not apply uniformly to all physicians, and the patients participating in the survey may not reflect the general mHSPC population.
Another limitation of this analysis was that the Bonferroni correction was not applied to the results, which could have inflated the family-wise type 1 error rate. Therefore, care should be taken when interpreting the statistical significance of the results. Furthermore, recall bias, a common limitation of surveys, may also have affected physicians' responses to the questionnaires. On the other hand, data for these analyses were collected at the time of each patient's appointment, which was expected to reduce the likelihood of recall bias.
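For illustration, a Bonferroni adjustment of the kind the authors mention works as follows; this is a minimal Python sketch with invented p-values, not data from this survey:

# With m comparisons tested at family-wise level alpha, each individual
# test is assessed at alpha / m. The p-values below are purely illustrative.
alpha = 0.05
p_values = [0.001, 0.012, 0.030, 0.049]
adjusted_alpha = alpha / len(p_values)  # 0.0125
significant = [p <= adjusted_alpha for p in p_values]
print(adjusted_alpha, significant)  # 0.0125 [True, True, False, False]

Without the correction, all four illustrative comparisons would be declared significant at alpha = 0.05; with it, only the two smallest p-values survive.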
Finally, it is important to acknowledge that only developed countries were surveyed and therefore the findings of this study may not be generalisable to developing countries, which may face different challenges in the treatment of prostate cancer patients.
Despite these limitations, real-world studies play an important role in identifying areas of concern that are not usually addressed in randomised controlled trials (RCTs). Compared with RCT populations, real-world studies include more heterogeneous samples, which are more reflective of real-world clinical practice. As such, real-world data can complement clinical trial evidence and provide insight into the effectiveness of interventions in patients commonly seen in clinical practice.
Conclusions
Until five years ago, the mHSPC treatment space was dominated by the use of ADT alone. However, novel agents have been approved since late 2017. The present survey found that, globally, at the time of data collection (January-August 2020), ADT alone was still the most common mHSPC treatment regimen initiated first, suggesting that physicians may prefer using treatments with which they are familiar and experienced (although many other factors may impact prescribing practice), despite clinical trial evidence of improved survival with NHA or chemotherapy (± ADT) vs ADT alone. This survey also indicated that physicians prescribed different mHSPC treatments based on specific criteria, including patient preference, disease burden/severity, and patient fitness. Lastly, our survey offered an early look at the evolving mHSPC treatment paradigm; it may be that physicians need to better understand the benefit:risk ratios of NHA (± ADT) over ADT alone before they begin to use these newer therapies routinely. To fully appreciate the rapidly changing mHSPC treatment landscape and to monitor NHA uptake specifically, additional real-world studies are required. Future research could also evaluate the impact of NHA use in the mHSPC setting on treatment patterns in mCRPC patients.
Funding
Data collection was undertaken by Adelphi Real World as part of an independent survey, entitled the Adelphi Prostate Cancer DSP. Merck & Co., Inc., Kenilworth, NJ, USA and AstraZeneca UK Limited did not influence the original survey through either contribution to the design of questionnaires or data collection. The analysis described here used data from the Adelphi Prostate Cancer DSP. The DSP is a wholly owned Adelphi product. Merck & Co., Inc., Kenilworth, NJ, USA and AstraZeneca UK Limited are two of multiple subscribers to the DSP.
Availability of data and materials
The data reported in this study were derived from an independent survey (the Adelphi Real World Prostate Cancer IV DSP). All materials, data, and data analyses supporting the study are the intellectual property of Adelphi Real World. The data that support the findings of this study are available from Adelphi Real World, but restrictions apply to the availability of these data, which were used under license for the current study, and so they are not publicly available. Data are, however, available from the authors upon reasonable request and with the permission of Adelphi Real World. All requests for access should be addressed directly to Andrea Leith at andrea.leith@adelphigroup.com. | 2022-03-11T14:38:28.063Z | 2022-03-11T00:00:00.000 | {
"year": 2022,
"sha1": "bde76344634e429b542d0096459945ff4051ab6a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "bde76344634e429b542d0096459945ff4051ab6a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
6757661 | pes2o/s2orc | v3-fos-license | 4′-Chlorobiphenyl-4-yl 2,2,2-trichloroethyl sulfate
The title compound, C14H10Cl4O4S, is an intermediate in the synthesis of the PCB sulfate monoester of 4′-chlorobiphenyl-4-ol. Both the sulfate monoester and 4′-chlorobiphenyl-4-ol are metabolites of PCB 3 (4-chlorobiphenyl). There are two molecules with different conformations in the asymmetric unit. The solid state dihedral angles between the benzene rings are 18.52 (10) and 41.84 (16)° in the two molecules, whereas the dihedral angles between the least-squares plane of the sulfated benzene ring and O—S (Ar—C—O—S) are 66.2 (3) and 89.3 (3)°. The crystal was an inversion twin with a refined component fraction of 0.44 (7).
Data collection
Nonius KappaCCD diffractometer
Absorption correction: multi-scan (SCALEPACK; Otwinowski & Minor, 1997)
Tmin = 0.679, Tmax = 0.861
25566 measured reflections
6476 independent reflections
4862 reflections with I > 2σ(I)
Rint = 0.063

PCB congeners with a lower degree of chlorination are especially prone to undergo oxidative metabolism to hydroxylated PCBs (OH-PCBs) (Letcher et al., 2000). OH-PCBs can be further transformed to glucuronides (Tampal et al., 2002) or sulfates (Liu et al., 2006; Sacco & James, 2005). These PCB metabolites are more hydrophilic than PCBs and OH-PCBs and are expected to be more easily excreted. Despite the potential importance of sulfated PCB metabolites, PCB sulfate monoesters and analogous compounds have not been synthesized experimentally and their detailed molecular structures are unknown. Similarly, only a few structures of hydroxylated chlorobiphenyl derivatives (Rissanen et al., 1988a, 1988b; Lehmler et al., 2001, 2002; Desiraju et al., 1979; Vyas et al., 2006) and aryl monoesters of sulfuric acid (Brandao et al., 2005) have been reported.
Herein we report the crystal structure of the title compound, a trichloroethyl PCB sulfate diester intermediate of a putative sulfate metabolite of PCB 3 (4-chlorobiphenyl). The asymmetric unit of the crystal structure contains two molecules with different conformations (Fig. 1), an observation that highlights the flexibility of PCB derivatives that lack multiple ortho chlorine substituents. The dihedral angles between the two benzene rings in the biphenyl moiety are 18.52 (10)° and 41.84 (16)° for molecules A and B, respectively. The calculated dihedral angle of the title compound is 41.2° (Shaikh et al., 2008), which is comparable to that of molecule B, but significantly larger than that of molecule A. The dihedral angle formed by the least-squares plane of the sulfated benzene ring and O1-S1 (Ar-C4-O1-S1) was 66.2 (3)° and 89.3 (3)° for molecules A and B, respectively. These dihedral angles are larger than the calculated Ar-C4-O1-S1 dihedral angle of approximately 54° (calculated with AM1 as implemented by ArgusLab, Version 4.0.1). Overall, these deviations from the energetically most favorable conformation of the title compound are due to crystal packing effects, which allow the molecule to adopt an energetically unfavorable conformation to maximize intermolecular interactions, and thus the lattice energy, in the crystal.
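As background on how such inter-ring angles are obtained from atomic coordinates, the sketch below (Python with numpy; the hexagon coordinates are hypothetical, not those of the title compound) fits a least-squares plane to each ring by singular value decomposition and takes the angle between the plane normals:

import numpy as np

def plane_normal(coords):
    # Unit normal of the least-squares plane through a set of 3D points:
    # the right singular vector associated with the smallest singular value.
    centered = coords - coords.mean(axis=0)
    return np.linalg.svd(centered)[2][-1]

def interplanar_angle(ring1, ring2):
    # Angle between the two least-squares planes, folded into [0, 90] degrees
    cos = abs(float(np.dot(plane_normal(ring1), plane_normal(ring2))))
    return float(np.degrees(np.arccos(min(cos, 1.0))))

# Two hypothetical regular hexagons twisted by 40 degrees about a shared axis
theta = np.linspace(0.0, 2.0 * np.pi, 6, endpoint=False)
ring_a = np.column_stack([np.cos(theta), np.sin(theta), np.zeros(6)])
t = np.radians(40.0)
rot_x = np.array([[1.0, 0.0, 0.0],
                  [0.0, np.cos(t), -np.sin(t)],
                  [0.0, np.sin(t), np.cos(t)]])
ring_b = ring_a @ rot_x.T
print(interplanar_angle(ring_a, ring_b))  # ~40.0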
Experimental
The title compound was synthesized from 4′-chlorobiphenyl-4-ol by sulfation with 2,2,2-trichloroethyl chlorosulfate using 4-dimethylaminopyridine as catalyst (Liu et al., 2004). Crystals of the title compound suitable for crystal structure analysis were obtained from a methanolic solution by slowly evaporating the solvent.
Refinement
H atoms were found in difference Fourier maps and subsequently placed in idealized positions with constrained C—H distances of 0.99 Å (CH2) and 0.95 Å (aromatic C—H), with Uiso(H) values set to 1.2Ueq of the attached C atom.
The crystal was an inversion twin with a refined component fraction of 0.44 (7), i.e. essentially equal amounts of each component. | 2016-05-12T22:15:10.714Z | 2008-11-29T00:00:00.000 | {
"year": 2008,
"sha1": "fb31a3685c45463f156d50835d34a32c2e218b40",
"oa_license": "CCBY",
"oa_url": "http://journals.iucr.org/e/issues/2008/12/00/dn2403/dn2403.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "48446282d83189aaea50aa8a34d4b21d5423bcad",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
226571710 | pes2o/s2orc | v3-fos-license | Injuriousness of Blumeria graminis and Pyrenophora tritici-repentis in wheat crops and measures of its operation monitoring
Blumeria graminis and Pyrenophora tritici-repentis are harmful wheat pathogens that often require operational monitoring and control. The probability of mass lesion of crops by phytopathogens is determined primarily by the presence of the pathogens' infectious inoculum, favorable conditions for their development and spread, and the susceptibility of host plants. Protection strategy and tactics should be tailored to each particular field and to the conditions of the growing season. Effective protection of wheat from powdery mildew (75-87%) was provided by fungicides based on 2-3 active substances, especially the preparation Falcon (spiroxamine + tebuconazole + triadimenol). The biological fungicide Phytosporin-M (Bacillus subtilis) provided an average biological efficiency of 58% in years with moderate wheat lesion. Operational control of yellow leaf speckle of wheat is more advisable to carry out with preparations based on active substances such as "azoxystrobin + epoxyconazole" and "tebuconazole + propiconazole".
Introduction
Infectious plant diseases cause significant losses of crop yields. Yield losses of grain and leguminous crops from fungal diseases can reach 30% [1]. Fungicides, which suppress the growth and development of pathogens, are widely used to protect plants from these diseases. It should be borne in mind, however, that the intensive use of pesticides of a biocidal nature leads to chemical contamination of ecosystems, as well as to the emergence of pesticide-resistant forms of pathogens.
The causative agents of cereal diseases such as powdery mildew and pyrenophorosis (yellow leaf speckle) are harmful infections that can lead to yield losses of 5-10% under moderate development of the diseases and of up to 35-50% in years of epiphytotics [1,2].
All bread cereals and many forage and wild cereals are affected by powdery mildew. Blumeria graminis (DC.) Speer f. sp. tritici Marchal is a complex fungal species that includes specialized forms capable of infecting one or more species of cereal.
The distribution of powdery mildew is quite wide: Europe, Asia, Africa, America, Australia. In Russia, the disease is widespread, but it is especially harmful in the Ural and Volga-Vyatka regions, the North Caucasus, the Volga region, and the Central Chernozem region [3]. It is of economic significance in Belarus, Kazakhstan, Ukraine, the Baltic States, and Transcaucasia, as well as in other grain-growing regions of Eurasia [4]. Areas where both winter and spring cereals are cultivated are in a special zone of phytosanitary risk, as the fungus builds a highly efficient "food conveyor".
The harmfulness of powdery mildew is manifested primarily in the reduction of the assimilation surface and in the destruction of chlorophyll and other pigments [1,5,6]. Based on practical experience, it should be noted that agronomists often do not perceive infection by this pathogen as a threat to the crop, in contrast to rust, for example. As a result, timely protection measures are not taken, which leads to losses of wheat yield and a decrease in grain quality.
The powdery mildew fungus is able to feed only on living green plants: the fungus lives only as long as the host plant remains green. It releases no toxins and does not attempt to kill the plant quickly. A different relationship to the host plant is shown by the pyrenophorosis pathogen, which is a necrotroph and produces host-specific toxins [4,7,8]. These toxins induce symptoms of necrosis or chlorosis when interacting with their respective susceptibility genes [8].
Yellow leaf speckle is a relatively new wheat disease. The causative agent is the ascomycete fungus Pyrenophora tritici-repentis (Died.) Drechsler. In North America and Australia, it manifested itself at the level of epiphytotics in the 1970s, and in Europe (including Russia) in the 1980s. Epiphytotics of this disease are periodically observed in different countries of the world, and grain losses in susceptible varieties reach 65% [7-9].
It should be remembered that excessive attention to the creation of varieties resistant to one disease can lead to genetic vulnerability to other diseases, as was previously the case in Canada. In addition, the spread of yellow leaf speckle may also have been facilitated by modern reduced tillage, which leaves a large amount of plant residue on the soil surface that serves as a habitat for overwintering pseudoperithecia of P. tritici-repentis [8,10].
Operational control of infections is carried out through treatment with fungicides. The question of whether their use is justified is not easy to answer, as their application is an investment in an often unpredictable future. For protective measures to be justified, fungicide applications must be targeted, taking into account the severity of the phytosanitary situation, the spectrum of action of the preparation, and the prices of grain and pesticides. Numerous scientific data (our own and from the literature) show that a given reserve of infection does not always lead to mass development of the disease. Monitoring of phytopathogen development and of the weather conditions of the growing period is therefore important.
The aim of the research was to determine the level of development of powdery mildew and pyrenophorosis on spring wheat (Triticum aestivum L.) and to evaluate the effectiveness of fungicide preparations for the control of leaf phytopathogens.
Materials and Research Methods
The spring wheat variety Omskaya 36 was used. The precursor of spring wheat was pure early fallow. The soil of the experimental site is a leached medium-humus medium-clay chernozem. Treatment of crops with fungicides was carried out at the flag leaf emergence phase (f. 37 according to Zadoks) using a Solo 456 sprayer at a working solution consumption of 300 l/ha. The plot area was 20 m², with four replicates and systematic placement of the plots. To eliminate the influence of weeds, a background treatment of the experiment with a tank mix of the herbicides 2,4-D + tribenuron-methyl + fenoxaprop-P-ethyl (Ballerina 0.35 l/ha + Granat 15 g/ha + Ocelot 0.8 l/ha) was performed in the tillering phase.
Observations and assessments were carried out according to the methods generally accepted in the Russian Federation [11-13].
The experiment scheme included single-component fungicides (based on propiconazole, cyproconazole, and tebuconazole), combined preparations based on 2-3 active substances, and a biological fungicide. Field experiments showed that combinations of fungicidal agents such as azoxystrobin 240 + epoxyconazole 160 g/l, propiconazole 300 + tebuconazole 200 g/l, and methyl thiophanate 310 + epoxyconazole 187 g/l, as well as a tebuconazole-based preparation (Table 1), provided good biological efficiency (more than 80%) against pyrenophorosis. Weak control of Pyrenophora tritici-repentis was observed in the variants with single-component fungicides based on cyproconazole and propiconazole (38.0-45.6%) (Table 1). The prevalence of infection during the earing-flowering period of wheat was reduced (by up to 45%) only by the preparation based on azoxystrobin with epoxyconazole.
Primary signs of wheat lesion by powdery mildew were recorded in the stem elongation phase. The further rate of infection development was directly dependent on hydrothermal conditions. Multi-year evidence suggests that Blumeria graminis development was in a close positive relationship (r = 0.77-0.82) with precipitation and in a noticeable negative relationship (r = -0.65 to -0.70) with temperature in the periods from tillering to flag leaf emergence and from earing to flowering.
The biological efficiency of chemical fungicides was good (over 70%) in years with moderate to mass plant lesion. Under conditions of both epiphytotics and depression, the biological fungicide did not cope with its protective function (33-35% technical efficiency). In 2011 and 2015, the degree of plant lesion was characterized as moderate (10% at f. 51-61). During these years, the use of fungicides in wheat crops had a lower payback than under mass lesion. The biological effectiveness of the chemical fungicides studied was 77-79%, which is almost at the level observed in the years of mass development of infections. The biological preparation reduced the degree of plant lesion by 58% under moderate development of powdery mildew. In these years there were fewer dry spells in the second half of the growing season, and the bacteria colonized wheat leaves more effectively than under moisture scarcity.
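For reference, biological (technical) efficiency in such trials is conventionally the percentage reduction of disease severity relative to the untreated control (an Abbott-type formula). The short Python sketch below illustrates the calculation with invented severity values; it is an assumption that the authors used exactly this formula:

# Percentage reduction of disease severity in the treated variant
# relative to the untreated control
def biological_efficiency(severity_control, severity_treated):
    return 100.0 * (severity_control - severity_treated) / severity_control

# E.g. 10% severity in the control and 2.2% after treatment -> 78% efficiency
print(biological_efficiency(10.0, 2.2))  # 78.0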
During the years of mass development of aerogenic infections, timely fungicide treatments preserved a significant part of the grain yield and its quality. The correlation between yield and degree of disease lesion was strongly negative (r = -0.94) in the years of mass development of infections. Under moderate plant lesion, the strength of the correlation decreased (r = -0.85). The biological effectiveness of the fungicidal preparations against the total lesion by all leaf infections was good (66-68%) in the Titul 390 and Recrut variants and high (80-90%) for treatments with preparations based on 2-3 active substances. Phytosporin-M controlled leaf diseases by 40% on average over the years of research.
The yield of spring wheat in the experiment averaged 21.9 dt/ha over the years of research (from 10 dt/ha in the extremely arid years 2010 and 2012 to 39 dt/ha in the favorable year 2011), which is good productivity for growing seasons with a moisture availability of 175-200 mm.
Due to fungicide protection, 5-6% of the crop was retained during the depression years; under moderate development of leaf diseases, chemical fungicides preserved 18% of the yield, and the biopreparation provided a 9% increase in productivity relative to the control. During the epiphytotic years, chemical protection of crops preserved an average of 24% of the wheat yield, with multi-component preparations showing the best performance and stability of action.
Conclusions
Wheat powdery mildew and pyrenophorosis are harmful infections that often require operational control. The probability of mass lesion of crops by pathogens is determined primarily by the presence of the pathogens' infectious inoculum, favorable conditions for their development and spread, and the susceptibility of host plants. Protection strategy and tactics should be tailored to each particular field and to the conditions of the growing season.
Effective protection of wheat from powdery mildew (75-87%) was provided by fungicides based on 2-3 active substances, especially the Falcon preparation (spiroxamine + tebuconazole + triadimenol). The biofungicide provided medium biological efficiency in years with moderate wheat lesion.
Operational control of yellow leaf speckle is more advisable to carry out with preparations based on active substances such as azoxystrobin + epoxyconazole and propiconazole + tebuconazole. The biofungicide did not control this type of infection. | 2020-06-25T09:06:01.065Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "bee46769aba5a6d98e480f41065631867544812f",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2020/36/e3sconf_idsisa2020_04008.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "4fa06f778bfbbb5689b0ef5cc8bea680cc4f617b",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
235797094 | pes2o/s2orc | v3-fos-license | What is Causal Specificity About, and What is it Good for in Philosophy of Biology?
The concept of causal specificity is drawing considerable attention from philosophers of biology. It became the rationale for rejecting (and occasionally, accepting) a thesis of causal parity of developmental factors. This literature assumes that attributing specificity to causal relations is at least in principle a straightforward (if not systematic) task. However, the parity debate in philosophy of biology seems to be stuck at a point where it is not the biological details that will help move forward. In this paper, I take a step back to reexamine the very idea of causal specificity and its intended role in the parity dispute in philosophy of biology. I contend that the idea of causal specificity across variations as currently discussed in the literature is irreducibly twofold in nature: it is about two independent components that are not mutually entailed. I show this to be the source of prior complications with the notion of specificity itself that ultimately affect the purposes for which it is often invoked, notably to settle the parity dispute.
Introduction
Traditionally, the philosophy of causation has put the focus on the project of distinguishing causes from non-causes by clarifying what it is to be a cause. Recently, the complementary project of distinguishing among causes is receiving considerable attention as well (Woodward 2010). This latter task makes use of additional causal concepts that characterize differences in causal contributions, enabling comparisons among them. This is an important philosophical project for clear reasons. Often, one is not exclusively concerned with detecting causal relations, but with the characteristic features of particular causal contributions. Moreover, we often acknowledge a given effect to have multiple causes, so the question arises as to how they compare in terms of diverse characteristics. A theory of causality that merely tells us how to distinguish causes from non-causes would not help us answer that question. Analyses of the respects in which causal relations differ can be done on the basis of several properties (e.g. strength, proportionality, stability), but I will be concerned with one that has inspired interesting disputes in the philosophy of biology, namely causal specificity.
The notion of causal specificity has been spelled out in a number of ways; for Waters (2007), for instance, specificity is about the possibility that many different changes in the cause would produce many different changes in the effect. Philosophers of biology have invoked causal specificity to make the case that genetic and nongenetic developmental factors are not causally on a par, in other words, to argue against the causal parity thesis. The general rationale of specificity-based arguments is that it does not follow from the fact that there are multiple causes for a given effect that they are the same, ontologically speaking (i.e. qua causes). Thus, opponents of causal parity have invoked the concept of causal specificity to make the case that DNA is ontologically different. However, it has been replied that non-genetic factors can be highly specific as well.
Regardless of the stance taken with respect to the causal parity of developmental factors, there is a consensus view that specificity captures an interesting and distinct feature of causal relations. In this context, two things seem clear to me. First, the debate around causal parity in philosophy of biology is a bit stuck and, at this point, it is not more biological details that will help fix the situation (as, in fact, opposing conclusions have been drawn from biological details). Second, philosophers seem to assume that attributing specificity to causal relations is at least in principle a straightforward (if not systematic) task, and that causal specificity is a perfectly coherent, intuitive, well understood notion.
In this paper, I take a step back to reassess the value and limitations of the notion of specificity for the dispute. This is not carried out on the basis of further analysis of the much-discussed cases of DNA vs. the polymerase, or vs. alternative splicing. Rather, I point to prior difficulties with the notion of specificity itself and show that they ultimately affect the purposes for which specificity is often invoked, notably to settle the parity dispute. I argue that causal specificity, in all available accounts, is really about two distinct components, connectedness and repertoire, which, crucially, are not mutually entailed. Furthermore, I identify this dual nature as the source of the difficulties for attributions of causal specificity to any given causal relation. While my analysis has direct consequences for a particular debate in philosophy of biology, which are made explicit in the paper, it will be of broader interest in the philosophy of causation.
The paper is structured as follows. In Sect. 2, I present the causal parity thesis in philosophy of biology, clarifying the principles upon which it rests. I then outline the basic logic of the specificity arguments against causal parity, illustrating how specificity works there. In Sect. 3, I review the ways in which causal specificity has been spelled out in the literature. Next, I address the question as to whether specificity should be viewed as a quantitative or as a categorical property, and recommend the former option following some of the suggestions in the literature (Sect. 4). In Sect. 5, I raise a paradoxical reading of the so-called "switch-like" causal structures in order to reveal the dual nature of causal specificity and clarify its components and their mutual independence. This is identified as the source of the difficulties explained in Sect. 6 concerning the specific/nonspecific distinction and how to attribute causal specificity. The consequences of such difficulties for the parity dispute in philosophy of biology (the success of specificity-based arguments), and ways to deal with those difficulties, are examined in Sect. 7. Lastly, some conclusions in Sect. 8.
Causal Parity and the Specificity Argument
Parity claims in biology are mainly attributed to Developmental Systems Theory, a challenge to traditional dichotomous, gene-centric views of development and evolution (for a good overview, see Oyama et al. 2001). According to this tradition, biological phenomena can be accommodated along a series of dichotomous categories: nature/nurture, inherited/acquired, genetic/environmental; where the latter is spelled out in terms of a few others: information carriers/material support, replicators/interactors, controllers/controlled, instructive/permissive, etc. Developmental Systems Theory is an attempt to move beyond the traditional dichotomous view on the grounds that it fails to account for the complexity of biological phenomena.
The talk of parity claims as such is not so widespread. The literature typically mentions "causal parity" or "the causal parity thesis", and often fails to distinguish among sorts of parity (an exception is Stegmann 2012). At least three senses of parity are distinct enough to call for fairly independent analyses: causal, informational, and methodological (Ferreira Ruiz 2019). Here, we are only concerned with causal parity.
Advocates of causal parity (sometimes also called "symmetry") reject any principled, metaphysical, objective distinction between genetic and nongenetic causes (e.g. Oyama 1985; Griffiths and Gray 1994; Griffiths and Knight 1998; Griffiths and Stotz 2013). Causal parity claims are motivated by the causal complexity of biological phenomena and emphasize that such phenomena are typically brought about by a multiplicity of (interacting) causes. Taking RNA synthesis as an example, causal parity (minimally) highlights the fact that DNA is not causally sufficient to bring about the synthesis of RNA molecules; rather, a particular cellular milieu and a variety of proteins and other molecules are needed for synthesis to take place. Typically, parity advocates commit to the following tenets:

T1: Multiplicity of causal factors: Biological phenomena are attributable to an assembly of multiple causal factors.
T2: Insufficiency of single causes: No developmental factor alone is sufficient for the effect.
T3: Metaphysical homogeneity of causal roles: However causal roles may differ, they do not differ metaphysically.

T1 and T2 are thoroughly addressed in Oyama (1985). T3 is especially articulated in an attempt to counter objections that the idea of parity obscures valuable distinctions: "The 'strawman' parody of developmentalism says that all developmental causes are of equal importance. The real developmentalist position is that the empirical differences between the role of DNA and that of cytoplasmic gradients or host-imprinting events do not justify the metaphysical distinctions currently built upon them" (Griffiths and Knight 1998, p. 254; see also Griffiths and Gray 1994, p. 277).
At first glance, it is unclear what such "metaphysical distinctions" would amount to, but we get a better grasp of the causal parity thesis in the light of its criticisms, which invoke causal specificity.
Thus, let us now consider the rationale of anti-parity arguments. (For the ease of exposition, the meaning of specificity here will be bracketed until the next section). Causal specificity appears in the causal parity dispute in biology as a result of the way this thesis has been interpreted by its main critics. While virtually everyone would accept T1-T2 above (as do critics of causal parity), T3, on the other hand, is contentious.
Most replies to parity interpret T3 in connection with the problem of causal selection. This longstanding problem concerns the grounds for singling out one or a few factors as 'the cause(s)' of an effect while relegating others to 'background conditions' (Broadbent 2008; Franklin-Hall 2015; Ross 2018). Provided that causal explanation is typically selective in this sense, both in everyday causal judgement and in scientific practice (as we never cite every factor relevant to an effect), the question arises as to whether this practice of selecting causes responds to objective or only to pragmatic, interest-laden criteria. Here, some take it that the standard view in philosophy is the one that can be traced back to John Stuart Mill (Mill 1974 [1843], book III, ch. V). According to Mill, the way we select causes from among various conditions is interest-laden (p. 329).
Thus, T3 would express a Millean view that the only objective distinction is between causes and noncauses. Critics of causal parity reject such a Millean view, and contend that it does not follow from the fact that there are multiple causes that they are ontologically the same ("the fallacy of parity arguments" in Waters 2007).
Opponents of the causal parity thesis have invoked the concept of causal specificity to make the case that DNA is ontologically different. This means that the customary singling out of genetic factors does not merely follow from the needs and interests of particular investigations, but it (in addition) reflects some objective aspect of the world. Once an effect has been specified, the question about its causes is an ontological one. Similarly, once the causes of a given effect have been identified, the issue of their characteristics (here, whether they bear a specificity relation to the effect) is, too, an ontological one.
Specificity-based arguments against the causal parity thesis posit an (objective) difference that is either categorical, that is, whereby some factors (DNA, and perhaps some other) are causally specific with respect to (for example) RNA synthesis but others are not, in a clear-cut way; or settle for a quantitative type, such that DNA is causally more specific than other types of factors. Having presented the wider context of the discussions, we can now turn to the available formulations of specificity.
Specificity, a Few Ways
First, some working distinctions are in order. Indeed, because specificity is a key and pervasive concept in biology, we need to avoid easy misunderstandings. Very roughly, we can identify a biological sense and a philosophical sense of specificity. In the first case, we are talking about specificity as used by biologists to refer to distinct biological phenomena. Even when the biological uses of specificity can, as it happens, be further analyzed philosophically, we can conceive of them as distinct, at least in the sense in which an explicandum and its explicatum are distinct (Carnap 1950).
In the second case, we are talking about causal concepts, that is, concepts for features of causal relations that are the object of philosophical enquiry. This is the sense of specificity that features in the causal parity debate, and it has to do, in a nutshell, with the possibility that many different changes in the cause lead to many different changes in the effect. When causal relations have this property, they can be exploited in various ways, allowing a fine-grained control of what happens with the effect (this is why it is also referred to as "fine-grained influence").

In addressing the problem of causal selection, Kenneth Waters (2007) proposes that practices of causal selection in biology are often guided by the identification of the cause that actually made a difference, relative to a given population. However, he acknowledges that this alone fails to account for biologists' singling out of DNA in the context of protein synthesis, because factors other than DNA that are not similarly selected constitute actual difference makers too. The case requires, then, noting that not all actual difference makers are equal: some are causally specific. By this, he means that different changes in the sequence of nucleotides in DNA would change the linear sequence in RNA molecules in many different and very specific ways. Generalizing,
Specificity (Waters)
Many different changes in the cause produce many different changes in the effect.

This is contrasted with the role of the RNA polymerase, as another salient factor that participates in the same process and contributes to the same effect (RNA synthesis) as DNA. Waters contends that the polymerase lacks this specificity because it is not the case that many different changes in this enzyme will lead to many different changes in the RNA product. Rather, such changes would slow down or stop the RNA synthesis altogether.
Waters acknowledges that the idea resembles David Lewis' notion of influence. In Lewis' (2000) view, roughly, influence is a matter of there being "not too distant" alterations of the cause and the effect, which are connected counterfactually:

Specificity (Lewis)

C influences E iff there is a substantial range C1, C2,… of not too distant alterations of C, and a range E1, E2,… of alterations of E, at least some of which differ, such that if C1 had occurred, E1 would have occurred, and if C2 had occurred, E2 would have occurred, and so on.
A fundamental difference between the two concepts here is that Waters takes specificity to characterize certain causal relations, while Lewis' influence was aimed at characterizing causal relations simpliciter (and fails to do so according to a broad consensus in the philosophy of causality literature, see e.g. Schaffer 2001).
A different definition is provided by James Woodward (2010) within the interventionist framework. He claims to clarify Waters' notion of specificity by drawing on Lewis' notion of influence. The combination of both within the interventionist framework and terminology results in an idea of specificity as fine-grained control, which is presented intuitively with a radio analogy that compares its on/off switch to its tuning dial. There are many possible positions for the dial, many possible radio stations, and a relation holding between both that enables a fine-grained control over what is heard on the radio. I can intervene on the dial in many ways so as to tune various different stations. By contrast, the on/off switch, while causally relevant to whether a station is received, has little influence on which one is received. One cannot intervene on the on/off button in order to tune various different stations, but only to turn the radio either on or off. Woodward defines specificity as follows (slightly simplified):
Specificity (Woodward)
There are a number of different possible states of C (c1… cn), a number of different possible states of E (e1… em) and a mapping F from C to E such that for many states of C each such state has a unique image under F in E (that is, F is a function or close to it, so that the same state of C is not associated with different states of E, either on the same or different occasions), not too many different states of C are mapped onto the same state of E and most states of E are the image under F of some state of C. F should describe patterns of interventionist counterfactual dependency.
This definition is meant to allow for a better characterization of biological causes and, thus, a better comparison among them (Weber 2017a contains the most elaborated argument based on the notion above). Claims that DNA is a highly specific cause (or a dial-like cause) of protein synthesis mean that: "there are many possible states of the DNA sequence and many (although not all) variations in this sequence are systematically associated with different possible corresponding states of the linear sequences of the mRNA molecules. (…) Thus, varying the DNA sequence provides for a kind of fine-grained and specific control over which RNA molecules or proteins are synthesized" (p. 306). Claims that the polymerase is not specific mean that interventions on it do not provide fine-grained control. The role of the RNA polymerase is, by contrast, switch-like.
In the case of maximal specificity ('ideal'), the mapping is a 1-1 function (every value of the effect variable maps onto one value of the cause variable, and vice versa). Woodward notes that this might not be the rule in real-life cases, as they are typically not bijective. In such cases, he contends, we have more specificity the closer the mapping gets to a bijection (i.e., a one-to-one mapping). This is an explicit proposal that specificity is a property that comes in degrees (but Weber 2006 first suggested that this could be the next step). Waters' treatment of DNA and nongenetic causes seems to attribute either specificity or lack of specificity, and it can be odd to regard Lewis' influence as admitting of degrees, if the notion characterizes causation simpliciter (perhaps leading to the contentious idea of graded causation). In any case, Woodward's quantitative notion was taken seriously enough to motivate a measure of causal specificity.
In fact, Paul Griffiths et al. (2015) put forward a measure of causal specificity (in the sense of fine-grained control above) based on the formalism of Shannon's mathematical theory of information (1948). Their motivation is that the previous literature on causal specificity is mostly "qualitative", and that the notion had not been made "adequately precise". Thus, they claim, "a merely intuitive approach to causal specificity is unlikely to be helpful in settling disputes like this" (p. 531). The idea is that we can measure how much knowing the value set by an intervention on a causal variable reduces our uncertainty about the value of an effect variable. To know this, one needs to compare the entropy (or information) of the probability distribution of the values of the effect variable before and after intervention on the cause. The greater the difference in these entropies, the more uncertainty is reduced by intervening on the cause, and the more specific the relation is.
Specificity (measure)
The specificity of a causal variable is obtained by measuring how much mutual information interventions on that causal variable carry about the effect variable.

Three quantities are key in this framework: the entropy of the probability distribution of the effect values, H(E); the conditional entropy of E having set the value of the cause by intervention, denoted with a hat, H(E|Ĉ); and the mutual information between cause and effect having intervened on the cause, which obtains as I(Ĉ;E) = H(E) − H(E|Ĉ). In this way, causal specificity becomes identified with the mutual information between interventions on the cause and the effect: it tells us how much an intervention on a cause specifies an effect. This measure is used to show that alternative splicing factors bear a degree of specificity comparable to that of DNA with respect to RNA sequence. Notably, the same measure has been used to show (without denying the specificity of alternative splicing factors) that DNA still scores higher in specificity in this sense (Weber 2017b).
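To make the measure concrete, the following minimal Python sketch computes SPEC as the mutual information between interventions on a cause variable and an effect variable. This is my own illustration, not the authors' code; the function names and the toy one-to-one example are assumptions.

from math import log2

def entropy(dist):
    # Shannon entropy H (in bits) of a distribution given as {value: probability}
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def specificity(p_do_c, p_e_given_do_c):
    # SPEC = I(C-hat; E) = H(E) - H(E | C-hat), where p_do_c is the distribution
    # over interventions do(C = c) and p_e_given_do_c[c] is the distribution of
    # the effect E under the intervention do(C = c).
    p_e = {}
    for c, pc in p_do_c.items():
        for e, pe in p_e_given_do_c[c].items():
            p_e[e] = p_e.get(e, 0.0) + pc * pe
    h_e = entropy(p_e)                                   # H(E)
    h_e_given_c = sum(pc * entropy(p_e_given_do_c[c])    # H(E | C-hat)
                      for c, pc in p_do_c.items())
    return h_e - h_e_given_c

# Maximal ('ideal') specificity: a 1-1 mapping from four cause values to four
# effect values yields SPEC = log2(4) = 2 bits, the full entropy of the effect.
one_to_one = {c: {"e%d" % c: 1.0} for c in range(4)}
print(specificity({c: 0.25 for c in range(4)}, one_to_one))  # 2.0

In the deterministic one-to-one case, the conditional entropy vanishes and SPEC collapses to the entropy of the effect; the interesting cases are those where the mapping departs from a bijection.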
While the characterization of causal specificity has been refined and improved, we will see that crucial issues remain and have been overlooked. Having depicted the rationale of the specificity argument and reviewed formulations of specificity, a question arises as to what sort of distinctions the parity and anti-parity stances need or posit. This will prove relevant to the remainder of the paper, for reasons that will become clearer in the next section.
What Kind of Distinctions does Specificity Enable?
When we consider the question as to the sort of distinctions that are at play here (and as mentioned in passing in Sect. 3), two options come to mind: either specificity licenses categorical distinctions, or it enables distinctions in degrees. In the latter case, I will refer to specificity as a quantitative property, covering both views that yield comparative ascriptions ('A is more/less specific than B') and views that allow assigning a numerical value ('A scores 1 bit in specificity', e.g. following the measure above). This choice is generally relevant in the parity debate, as it bears on the strength of anti-parity claims. More importantly, it will prove relevant to this paper, as I will make the case that the difficulties with causal specificity that affect its use in parity arguments arise even for what seems to be the most plausible option, namely the quantitative distinctions.
Thus, is the specific/nonspecific distinction meant to be quantitative or categorical? What sort of specificity-based distinction is rejected by the causal parity thesis and defended by opponents? One option is to consider the relevant distinction to be categorical, this is, where causal parity requires showing that there are no categorical differences between biological factors, and non-parity requires showing that categorical distinctions are possible. Griffiths and Gray's claim above about there being "nothing that divides the resources into two fundamental kinds" seems to go along this line. There is no property of developmental resources on the basis of which we could draw a sharp distinction. The way Waters treat the DNA vs. polymerase situation seems to accept exactly these terms for the discussion: DNA is specific, the polymerase is not. This sounds like a categorical distinction. The radio analogy, in turn, reinforces this contrast: dials and switches are different kinds of things.
However, more recent contributions to the debate (from both parties) seem to have shifted towards a quantitative point of view. Note that quantitative here should cover both the measure and a comparative concept ("more/less specific than…"). Woodward's notion of specificity (and hence specificity-based distinctions) is explicitly quantitative, at least in a comparative sense: in real-life cases, where the mapping will typically not be bijective, we have more specificity the closer the function gets to a bijection. (I have doubts that the radio analogy, introduced by Woodward himself, is the best way to capture this notion of degrees of specificity. I turn to these issues in Sect. 7). The information-theoretic measure of causal specificity is another straightforward case where specificity is conceived as coming in degrees. In this case, specificity is not merely a comparative concept: we obtain numerical values for the degree of specificity. Now, what does the causal parity debate look like from such a quantitative angle? The point of causal parity would be to emphasize that different factors exhibit the same causal property, and to put less weight on the degree to which this is so. They would hold that specificity cannot elevate a factor to a special status, or single it out, either because differences in specificity are negligible (which is an empirical matter and depends on the case) or because differences in degree would not in general ground the sort of ontological claim that non-parity is supposed to be. Opponents of the causal parity thesis, on the other hand, would put the emphasis elsewhere. They would stress how various causal factors exhibit a property to varying degrees despite it being the same. Consequently, they would hold that specificity does single out certain factors, that differential degree is what matters, and that this differential degree allows for exactly the sort of ontological differences that the stance requires.
The recent discussion between Weber (2017a, b) and Griffiths et al. (2015) shows exactly this dialectic. The latter propose and apply the measure hoping to show that alternative splicing factors can be nearly as specific as DNA with respect to RNA synthesis. In turn, Weber (2017b) uses the same measure to show exactly the opposite: that even if other, non-genetic factors bear a specific relation to some effect, it will never exceed the degree of specificity characteristic of DNA. The discrepancies here stem from various decisions concerning the application of the measure to a concrete, particular case. Notably, it depends on the range of variation that should be considered, and on how the DNA causal variable is construed in the comparisons. As much as these decisions are relevant, they should not give the impression that this is the extent of the problem. On the contrary, problems arise even under full agreement as concerns such decisions.
Indeed, I will explain that the specific/nonspecific distinction made quantitative, while it rings more plausible than the categorical alternative, is nevertheless itself puzzling (that is, not merely as a consequence of particular applications to concrete cases). We can now turn to the twofold nature of causal specificity.
Two Components of Causal Specificity
As things stand now, there is some tacit consensus that we have a good grasp of the specific/nonspecific distinction, at least at the level of intuitions. There is also consensus that this notion captures an interesting and distinct causal feature, notably but not exclusively in biology. In fact, recent disagreements featured in the literature stem more from diverging assessments of concrete cases, and/or different ways to implement the concept/measure in those cases, than from ambiguities or issues with the notion itself. I will now articulate some issues of the latter kind, centered around the specific/nonspecific distinction.
The quantitative shift in the specificity debate, as reviewed above, favors the picture of a specificity gradient, ranging from minimum to maximum specificity. This idea seems prima facie unproblematic, as quantitative properties are ubiquitous. It might even seem more plausible than a qualitative/categorical view of specificity. So, the literature suggests a picture in which switches are to be placed at the minimum extreme of the gradient. However, on closer inspection, it is far from obvious that the switch-like cases (recall: two values for each variable, plus a bijective function relating them) should be placed there, at the minimum extreme of the specificity gradient, rather than at the opposite one. The reason why this happens is the very nature of specificity, as we will now see.
I submit that specificity, in every account, is a dual causal notion, that is, one about two components, which I will call repertoire and connectedness. Assuming the relevant relations are causal, and expressed here as terminologically neutrally as possible, these components impose the conditions that:

REPERTOIRE: many possibilities exist on each side (cause; effect), and
CONNECTEDNESS: the possibilities on both sides are connected in a particular, relevant way.

The formulations here are intendedly vague ("many", "in a particular or relevant way") for the sake of terminological neutrality, but we might as well turn to the Woodwardian terminology to express these ideas. REPERTOIRE refers to the number of different values that an effect variable and a cause variable can take. Note that CONNECTEDNESS is not simply about imposing a connection simpliciter (this would be trivial, as we are already dealing with causal relations). And it is also not about particular activities, actions, or the "special causal concepts" in an Anscombian sense (e.g. 'scratch', 'push', 'burn' (Anscombe 1971)). Rather, CONNECTEDNESS is about a further condition on the causal connection, irrespective of the type of action. In Woodwardian terms, it can be expressed as a condition on the function connecting values of the cause and effect variables (in particular, imposing that the function be bijective, or similar).
We can bring the radio example for a simple illustration. Suppose the relevant effect (what I wish to have a fine-grained control over) is the music for my car ride. (We must assume, for the sake of the example, that the following is a good causal description of the situation, e.g., that the level or grain of description is accepted as appropriate). Three scenarios can be compared.
In a first scenario, my tuning dial can take up about 20 positions and I have about 20 music options to choose from within my reach. This is a typical situation where each position on the dial corresponds to one radio station, so we have here a one-to-one mapping between values of the cause (dial positions) and values of the effect (music options). Now compare that situation to one where my dial has only four possible positions, and I am driving around a small village where I can only pick up four stations broadcasting music. The two scenarios show the same mapping of cause onto effect; the difference here lies in the REPERTOIRE of both causal relata. Consider now a third situation where we have again a 20-position dial. Suppose that five of these positions allow me to tune five different radio stations, transmitting various different kinds of music, but there is one program, "The best of Genesis", that is (for some reason) broadcast on 12 different stations. A country music program is, similarly, broadcast on three. There will be 12 different positions of the dial that would lead me to listen to exactly the same Genesis songs, three positions that will give me the same country music songs, and five remaining positions with which I can tune five different programs. In both the first and third scenarios, we have 20 possibilities for the dial. The difference between these two, then, corresponds to a difference in CONNECTEDNESS.
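The three scenarios can be run through the specificity helper sketched in Sect. 3 (a hypothetical illustration; the station and program labels are made up, and uniform interventions on the dial are assumed):

def uniform(values):
    # Uniform distribution over a collection of intervention values
    values = list(values)
    return {v: 1.0 / len(values) for v in values}

# Scenario 1: 20 dial positions, each deterministically tuning a distinct station.
s1 = specificity(uniform(range(20)),
                 {c: {"station%d" % c: 1.0} for c in range(20)})

# Scenario 2: 4 positions and 4 stations (still bijective, smaller REPERTOIRE).
s2 = specificity(uniform(range(4)),
                 {c: {"station%d" % c: 1.0} for c in range(4)})

# Scenario 3: 20 positions, but 12 tune "The best of Genesis", 3 tune the
# country program, and 5 tune distinct programs (weaker CONNECTEDNESS).
def program(c):
    if c < 12:
        return "Genesis"
    if c < 15:
        return "country"
    return "program%d" % c

s3 = specificity(uniform(range(20)),
                 {c: {program(c): 1.0} for c in range(20)})
print(round(s1, 2), round(s2, 2), round(s3, 2))  # 4.32 2.0 1.93

Scenario 2 loses specificity relative to scenario 1 purely through REPERTOIRE (2.0 vs. roughly 4.32 bits), whereas scenario 3 loses it purely through CONNECTEDNESS (roughly 1.93 bits from the same 20 dial positions).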
To be fair, there has been some recent recognition that causal relations might differ, in terms of their degree of specificity, in respects involving what I call REPERTOIRE and CONNECTEDNESS (see Stegmann 2014 and Weber 2017a).

On the one hand, Griffiths et al. (2015) state that "Fine-grained influence requires both that the repertoire of effects is large and that the state of the cause contains a great deal of information about the state of the effect." (p. 550). However, all that matters in this proposal is the entropy of the effect (not that of the cause). My own understanding of repertoire is not that the condition applies only to the effect but to both cause and effect. Thus, the distinction is not fully acknowledged in their work. (More on this in Sect. 6, where I discuss an issue with SPEC.) On the other hand, Bourrat (2019) makes claims about two "dimensions" of specificity, but in a different sense. What he has in mind is Woodward's distinction between one-cause-one-effect specificity and fine-grained influence. Because the information-theoretic measure captures only the latter, he argues, an additional measure is needed that would capture the former. My claim is different, as I am not concerned with one-cause-one-effect specificity. I contend that it is within the idea of fine-grained specificity (and slight variations thereof) that we can find significant ambiguity. However, he also claims that causal specificity amounts to both measures. This can be read in two very different ways. If it means that both INF and one-cause-one-effect are equally legitimate causal concepts, to which he also subscribes explicitly, I see no problems (although I do find it misleading to speak of "amounting to"). If, on the other hand, he means that there really is a single notion of causal specificity that is really a combination of two ideas, then having separate measures for each would become a problem unless there is also a way of articulating the two (especially on the conceptual level). For this reason, I am inclined to the former interpretation.

However, two things are missing in previous work on specificity. First, the distinction has not been directly addressed and, as a consequence, the question has not been posed whether such duality is essential to the notion, or simply a drawback of particular definitions that happen to be inadequate, but which could be amended. Secondly, previous recognition that specific relations might differ in two respects has not been followed by reflections on the problems this leads to. However evident the dual nature of specificity is (if evident at all), it has not been identified as problematic for the coherence or, at least, the operationalization of our causal property.
Indeed, an important fact concerning these two components has been overlooked, namely that they are quite independent. It is not difficult to observe that they are not mutually entailed:
• REPERTOIRE does not entail CONNECTEDNESS: the existence of many possibilities does not necessitate a particular type of connection (e.g. a bijective function);
• CONNECTEDNESS does not entail REPERTOIRE: the existence of a particular type of connection does not necessitate any particular number of possibilities.10
The only link between the two components is that this is how, as a matter of fact, we seem to reason about causal specificity, and to this extent the combination is constitutive of the idea of causal specificity. By focusing on only one component at a time, causal specificity would remain undefined.
Having identified the two (independent) components of specificity, the point I will make next is that this dual nature of causal specificity threatens the coherent attribution of specificity and raises issues that affect the purposes for which it is often invoked.
Attributing Causal Specificity
As we saw in Sect. 4, we can conceive of specificity arguments where specificity is either a categorical property or one that comes in degrees (either measurable or comparative). Spelling out the specific/nonspecific distinction as categorical would prove extremely difficult, and perhaps unnecessary, as we have clearer formulations of specificity that render it quantitative. Here, I will show that the attribution of causal specificity, and the comparison of degrees of specificity from a quantitative angle, is hampered by the independence of the two components of specificity (Fig. 1).
We can start by taking a closer look at the switch-like cases. Case A in Fig. 2 below corresponds to what is called a switch: two values for the causal variable (up; down), two values for the effect variable (off; on), and a bijective mapping from cause to effect. Because the two components of specificity are not mutually entailed, intuitions regarding the specificity we should ascribe to switch-like cases will vary depending on which component is given more weight (Fig. 3).
Indeed, switch-like cases can be considered either minimally specific, because there are very few possibilities, or maximally specific, because the function is bijective. This seems paradoxical, and perhaps even points to causal specificity being ill-defined. If a switch is an extreme case of a dial (rather than something categorically distinct), in which sense is a switch an extreme case? Arguably, these cases rank low in REPERTOIRE and high in CONNECTEDNESS. In the specificity debate, however, A-cases are introduced as contrasting with highly specific causal relations, that is, introduced as the opposite of a dial. But this must be acknowledged to work under the assumption that what matters more, if not exclusively (for the conclusion), is that they score low in REPERTOIRE. And the difficulty arises if we look at other types of situations, too. Consider case B: what should we make of it? We need to secure a place for A along a gradient if we are to maintain that, for example, case B is more specific than A (assuming this is an intuitive reaction).
Thus, the problem is not simply that there may not be room for a sharp, categorical difference between switches and dials. If this were the extent of the issue, we could perhaps find reasons to settle for differences in degree. But my diagnosis is, the way I see it, more severe: even spelling out the difference in degree becomes, conceptually speaking, a more elusive task than prima facie thought, as we now have degrees of two independent components of specificity which can pull in different directions. At this point, one might think that a measure of causal specificity would solve (or dissolve) these worries. As a measure of causal specificity, SPEC is the most unmistakable construal of specificity as a quantitative property. But more importantly, it is expected to serve as a simple device for establishing the exact degree of specificity of any given case, the (allegedly) least specific ones included. In fact, the measure can be applied to switch-like cases just the same. Far from any categorical difference between switches and dials, the measure yields a specificity value (a mutual information value, to be interpreted causally as specificity) in a common currency of bits, regardless of whether the cases under scrutiny were initially conceptualized as specific, nonspecific, more or less specific, dial-like, or switch-like. Yet, the claim that the quantitative approach of SPEC solves (or dissolves) the issues proves to be problematic, and the reason has to do, again, with the twofold nature of specificity.
In fact, if causal specificity is measured, as proposed by Griffiths and colleagues, by comparing the entropy of the probability distribution of the values of the effect variable before an intervention on the cause with the entropy of the probability distribution of the effect variable after the intervention on the cause has been performed (that is, H(E) − H(E|Ĉ)), then, as shown by Griffiths et al. (2015) themselves, the following two cases C and D become indistinguishable (and both also indistinguishable from A): they all yield a mutual information value of 1 bit, just like the switch case A. In case C we have an initial entropy of the effect of 1 bit, and because knowing the value of the cause leaves no remaining uncertainty, the mutual information between Ĉ and E amounts to 1 bit. In the second case, D, the initial entropy of the effect is 2 bits (more values) and our uncertainty after knowing the value of the cause is not nil (either one of two values, thus 1 bit), so the mutual information value is exactly the same (1 bit). Yet, intuitively, we may want to say that they are different in important respects.11 The number of possible values for the causal variable is greater in C than in A and D, while the number of possible values for the effect variable is greater in D. In addition, in A, any change in the cause leads to a change in the effect, but not every change in the cause leads to a change in the effect in C or D (e.g., the change from C = c_1 to C = c_2).
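A short calculation makes the arithmetic concrete. The following sketch (in Python) computes H(E) − H(E|Ĉ) for joint distributions chosen to match the descriptions of cases A, C and D above; the particular numbers of cause and effect values used for C and D are illustrative assumptions, since the figures themselves are not reproduced in the text. All three cases come out at exactly 1 bit.

```python
import numpy as np

def spec_bits(p_joint):
    """Mutual information I(C;E) = H(E) - H(E|C) in bits for a joint
    distribution p(c, e), with rows indexed by (intervened) cause values."""
    p_e = p_joint.sum(axis=0)
    h_e = -np.sum(p_e[p_e > 0] * np.log2(p_e[p_e > 0]))
    h_e_given_c = 0.0
    for row in p_joint:
        pc = row.sum()
        if pc > 0:
            cond = row / pc
            h_e_given_c -= pc * np.sum(cond[cond > 0] * np.log2(cond[cond > 0]))
    return h_e - h_e_given_c

# Case A: a switch -- two cause values mapped bijectively to two effects.
A = np.array([[0.5, 0.0],
              [0.0, 0.5]])
# Case C: four cause values funnelled two-to-one onto two effects.
C = np.array([[0.25, 0.0], [0.25, 0.0], [0.0, 0.25], [0.0, 0.25]])
# Case D: two cause values, each yielding one of two effects at random,
# so H(E) = 2 bits but H(E|C) = 1 bit.
D = np.array([[0.25, 0.25, 0.0, 0.0],
              [0.0, 0.0, 0.25, 0.25]])

for name, p in (("A", A), ("C", C), ("D", D)):
    print(name, spec_bits(p))   # 1.0 bit in every case
```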
In any case, the issue with the SPEC measure might be deeper than merely failing to account for the difference among presumably different cases. The quantity that would make the difference visible is the entropy of the source (Griffiths et al. 2015). However, this quantity plays no role in determining the specificity value. Recall the quantities relevant to SPEC from Sect. 3: the entropy of the probability distribution of the effect values, H(E); the conditional entropy of E having set the value of the cause by intervention, H(E|Ĉ); and the mutual information between cause and effect having intervened on the cause, which obtains as H(E) − H(E|Ĉ). The entropy of the source plays no role because our C here is always an intervened C, that is, set to some value c_i regardless of the possible repertoire we could have for the variable. All we need to measure specificity is the entropy of the probability distribution of the values of the effect variable before an intervention on the cause, H(E), and the entropy of the probability distribution of the effect variable having performed the intervention on the cause, H(E|Ĉ). Consequently, only Ĉ (but not C) is relevant to the SPEC measure.
But the full range of possible values for either variable should matter, according to previous notions of specificity. Hence, SPEC misrepresents REPERTOIRE, in that the full range of possible values for C is at the end of the day irrelevant and all that matters is Ĉ. For similar reasons, it also misrepresents CONNECTEDNESS, in that it focuses only on how one particular value of C maps with E value(s). This of course suggests that SPEC is not exactly a measure of causal specificity in any of the formulations in Sect. 4, but perhaps something different. It should be noted that others have discussed how best to implement the measure for particular comparisons (e.g. DNA and alternative splicing factors, Weber 2017b), but the measure itself has not been impugned or questioned as genuinely representing the philosophical idea of causal specificity.12 If my analysis is correct and the notion of specificity is indeed about CONNECTEDNESS and REPERTOIRE, then I believe that SPEC is not an adequate approach, as it distorts both components. In any case, assessing the adequacy of a measure for a given property requires, at the very least, that we have a good grasp of the property to begin with, and this is what I am most concerned with.13

Footnote 13: A reviewer suggested considering whether SPEC's emphasis on the probability distribution suggests a third component of specificity. This is an interesting suggestion, but I have some doubts about this way of looking at the role of probabilities here. I believe that while probabilities are relevant in general, they are not best viewed as a further component of specificity. I take it that the reviewer refers not to the conditional probability of obtaining an effect given a cause, but to the individual probabilities of the values of the cause/effect variables. For instance, the probability of me setting the tuning dial on position p_i, the probability of me setting the dial on position p_j, and so on. Now, I think that the characteristics of the causal structure as a whole are independent of individual probabilities in the sense above, at least if we take available definitions of specificity. The way to think about the different states here should rather be counterfactual: if I were to set the dial on position p_i, then radio station r_i would obtain. How unlikely it is for me to do that does not matter for characterizing the causal structure. It seems to me that the probability of occurrence of p_i which, suppose, is very low, would not by itself affect an ascription of specificity, to the extent that one concedes the truth of the conditional 'if I were to set the dial on position p_i, then I would tune radio station r_i'. In my view, individual probabilities are irrelevant to analyzing the causal specificity of a given causal structure, because specificity is not about any particular state but about the overall structure. However, this is not to deny that probabilities are relevant to the SPEC measure. Indeed, they are absolutely relevant to SPEC, insofar as they are essential to information theory and SPEC uses that formalism. But I believe that we should not draw the wrong conclusions from this fact, for the simple reason that the measure might be failing to capture the notion of specificity (as I have suggested). Thus, I believe that the key, distinct components of specificity are the two I have put forward.
Having identified and explained these issues with the very notion of causal specificity, we must take stock of the particular implications for causal parity. I discuss this next.
Implications for the Causal Parity Dispute in Biology
Over the last few sections I argued, contrary to what is assumed, that a quantitative view of specificity (either measurable or comparative) is not as intelligible and cannot be applied as systematically as would be desirable for certain contexts, the causal parity quarrel in philosophy of biology being a salient case in point. I attributed this problem to the twofold nature of causal specificity, as REPERTOIRE and CONNECTEDNESS are independent conditions. Such independence makes it particularly problematic to deal with both components simultaneously and coherently.
What does this mean for specificity-based arguments and the CP thesis? I see two possible reactions. One is to think that causal specificity is not a single property but a mixture of two distinct ones that are, in fact, not even predicated of the same items. In this case, REPERTOIRE is a property of cause/effect variables, one about the range of variation that is possible for them. CONNECTEDNESS becomes a property of the mappings between causes and effects, corresponding to different types of functions (and perhaps non-functions as well). Each property by itself is perfectly coherent, so splitting causal specificity into two distinct (genuine) properties offers a way around our issues. But splitting causal specificity into two independent properties means that we cannot compare causal relations from a single point of view. Parity and nonparity necessarily become relative to either independent property, something like "parity_REP" and "parity_CON". This seems acceptable, but it does complicate things. What should we conclude from cases where we have two competing causal factors, each ranking higher than the other in one of these two properties? In addition, it could be argued that neither REPERTOIRE nor CONNECTEDNESS alone captures the intuition behind the comparison of the roles of DNA and the polymerase in protein synthesis. It seems that systematic application comes at the cost of giving up the original intuition.
Alternatively, we may want to retain the original intuition that certain causes, like DNA, are peculiar in that many different changes in the DNA lead to many changes in the effect, that is, some combination of both REPERTOIRE and CONNECTEDNESS. It seems that we do capture some interesting feature of genetic causation by such remarks, and there is no prima facie reason to disregard it. But, in turn, we may need to give up any pretension of systematic application, that is, the pretension that a concept of causal specificity (or a measure thereof) will lead to a systematic procedure whereby any two causal relations can be clearly compared and ranked in terms of their specificity, if we wish to take both components into account simultaneously.
Let us consider the implications of the latter case: can we compare molecular causes from the point of view of specificity? This will sometimes be possible, for instance if we compare two cases that differ along one component only but score the same along the other, but we will not always have clear, compelling intuitions regarding particular cases whenever this is not so. Some comparisons might be more difficult than others, and will probably be influenced by the wider context and goals of the comparison. More important is another set of questions: Can specificity ground substantial philosophical claims about the ways causes contribute to an effect? Can specificity settle the parity quarrel one way or another? In which sense is it helpful to characterize DNA as a dial-like cause, and the polymerase as a switch-like one? In general, is it helpful to characterize biological causes as either dials or switches? I take it to be an inevitable consequence of this analysis that we should rethink the meaning and scope of specificity-based philosophical claims. Showing that DNA or RNA molecules are more specific kinds of factors than polymerases or alternative splicing agents (hence supporting a claim of nonparity) is not as straightforward as it seems, and this is not (simply) due to biological facts. This need not mean that characterizing biological causes (or otherwise) as specific, nonspecific, or relatively specific is not helpful: causal specificity still picks out some feature of certain types of causal relations; it just may not be the best argument in support or rejection of causal parity. What other causal concepts would do a better job in this respect is the topic of another paper.
Conclusions
This paper undertook a deeper and more encompassing exploration of causal specificity and the extent to which this notion can meet the goals that many have set for it. I first introduced the causal parity debate in philosophy of biology as the context of most discussions and elaborations on causal specificity, sketching the basic logic of specificity-based arguments against CP. Then, I reviewed several conceptualizations of specificity available in the literature. After raising the question as to what sort of property (categorical, quantitative) specificity is supposed to be, I laid out the rationale for a quantitative point of view. I argued, contrary to what seems to be assumed, that a quantitative view of specificity (either measurable or simply comparative) is not as intelligible and cannot be applied as systematically as would be desirable. I attributed this problem to the twofold nature of causal specificity, as this property is about two logically independent conditions, REPERTOIRE and CONNECTEDNESS, which are not easily dealt with in tandem. The view in this paper is that, as long as such duality within the notion of causal specificity is not properly acknowledged, we cannot expect to arrive at anything but mixed and conflated claims that will not be particularly profitable. I also contended that these are not minor issues, as they inevitably affect the very purposes for which specificity is invoked: in general, to compare/distinguish amongst causal factors; in particular, to settle the causal parity dispute in philosophy of biology.
"year": 2021,
"sha1": "490e038b4dbc814fdac67b3f43df3d79f89b6ba8",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10441-021-09419-x.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "9e5c3b9a6a2defd23f8baaf6c44d9a8b58dc242e",
"s2fieldsofstudy": [
"Philosophy",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Modelling substorm chorus events in terms of dispersive azimuthal drift
Abstract. The Substorm Chorus Event (SCE) is a radio phenomenon observed on the ground after the onset of the substorm expansion phase. It consists of a band of VLF chorus with rising upper and lower cutoff frequencies. These emissions are thought to result from Doppler-shifted cyclotron resonance between whistler mode waves and energetic electrons which drift into a ground station's field of view from an injection site around midnight. The increasing frequency of the emission envelope has been attributed to the combined effects of energy dispersion due to gradient and curvature drifts, and the modification of resonance conditions and variation of the half-gyrofrequency cutoff resulting from the radial component of the E×B drift. A model is presented which accounts for the observed features of the SCE in terms of the growth rate of whistler mode waves due to anisotropy in the electron distribution. This model provides an explanation for the increasing frequency of the SCE lower cutoff, as well as reproducing the general frequency-time signature of the event. In addition, the results place some restrictions on the injected particle source distribution which might lead to a SCE.
Introduction
The substorm chorus event is a recognised VLF signature of the substorm expansion phase (Smith et al., 1996, 1999). Although chorus emissions are commonly observed in the post-midnight sector in association with substorm activity (Tsurutani and Smith, 1974), the SCE is distinctive as it consists of a band of chorus with ascending upper and lower cutoff frequencies. The duration of a SCE is usually ∼10 min to ∼1 h. The frequency of the leading edge of the emission envelope increases at a rate of between 20 and 1000 Hz/min (Smith et al., 1996), but typically around 150 Hz/min (Smith et al., 2002). These events are commonly observed between midnight and dawn, following energetic particle injection into the nightside inner magnetosphere. The correlation of these two phenomena allows the substorm onset time to be estimated from the SCE epoch with an uncertainty of around 10 min (Smith et al., 1999).

Correspondence to: A. B. Collier (colliera@ukzn.ac.za)
Observations of SCE have been documented at a number of high-latitude stations including Eights [L=3.8] (Carpenter et al., 1971), Roberval [L=4.0] (Carpenter et al., 1975), SANAE-III [L=4.1] (Hughes, 1995; Collier and Hughes, 2004), Siple [L=4.2] (Carpenter et al., 1975; Park et al., 1981), Halley [L=4.4] (Smith et al., 1996, 1999, 2002), and Byrd [L=7.1] (Carpenter et al., 1971). Events have also been recorded simultaneously at conjugate stations (Carpenter et al., 1975) and observed by satellites near geosynchronous orbit (Isenberg et al., 1982). Observations made at SANAE-IV [L=4.3] are presented in Sect. 2. Smith et al. (1996) observed 243 SCE in 327 days at Halley during 1992, noting that the frequency of occurrence, around 250 yr⁻¹ on average (Smith et al., 1999), was greater at geomagnetically disturbed (Kp > 3−) than quiet times. Substorm chorus events appear to be observed less often at SANAE-IV (fewer than 50 during 2002), although this may simply be due to the compressed format of the Halley VELOX data (Smith, 1995) facilitating their identification. This disparity is, however, consistent with the lower frequency of occurrence of all VLF phenomena as one goes east from Halley to SANAE-IV, which might be ascribed to the effects of the South Atlantic Anomaly. The reduced magnetic field strength in this vicinity results in the lowering of trapped particle mirror points, leading to enhanced loss through precipitation, thereby producing a diminished trapped electron population east of the anomaly.
A SCE results from the collaboration of three processes: the injection of energetic plasma into the inner magnetosphere, the eastward drift of electrons from the injection region to a ground station's field of view, and the cyclotron resonant interaction of these electrons with whistler mode waves.
The impulsive injection of energetic particles into the near-Earth magnetotail routinely occurs at the beginning of the substorm expansion phase (e.g. McIlwain, 1974; Thomsen et al., 2001) in conjunction with heightened geomagnetic activity on the ground and in space. Two possible scenarios are generally considered to account for plasma injection: either the particles are energised in situ or they are transported from a tailward region and are energised en route. In the latter situation, injections may result from radial E×B drift in the inductive electric field arising from dipolarisation of the Earth's magnetic field (Li et al., 2003).
Injection events have been detected directly by geosynchronous satellites (e.g. Birn et al., 1998; Thomsen et al., 2001) and identified in energetic neutral atom images (Henderson et al., 1997). As observed at geosynchronous orbit, injections consist of flux enhancements (by a few orders of magnitude) of particles with maximum energies up to hundreds of keV, and average energies of a few keV (Parks et al., 1980). If a satellite is located within the injection region, then the enhancement is observed simultaneously for all energies concerned and the event is "dispersionless". If, however, the injected particles only reach the satellite after drifting out of the injection region, then they become dispersed, those particles with higher energies arriving first. The location and extent of the injection region have been examined by Friedel et al. (1996), who determined that the injected particles may be distributed over a range ±5 h around magnetic midnight and can extend inward to L=4.3.
The satellite observation of an injection event is reliant upon the particles being either injected directly into or drifting through the volume of space accessible to the satellite. The SCE, however, reflects the presence of injected particles within a ground station's field of view, a much larger region, and these ground-based data are thus complementary to satellite observations. The motion of an electron, and indeed any charged particle in the magnetosphere, may be understood in terms of the conservation of the three adiabatic invariants associated with each of its periodic motions: gyration, bounce and drift (Roederer, 1970). Since the gyration and bounce of the particle occur over time scales much shorter than that associated with the development of a SCE, the motion which is of principal interest in this context is drift, which transports the particle through local time. Conservation of the invariants corresponding to gyration and bounce is nevertheless still essential to the dynamics. The drift shell, a surface generated by the motion of a particle's guiding centre, is characterised by its L value and the magnetic field strength at the particle's mirror point (McIlwain, 1961). Azimuthal velocity around the drift shell is influenced by the presence of electric fields and the spatial gradient and curvature of the magnetic field.
The presence of a convection electric field directed from dawn to dusk and a corotation electric field orientated radially earthward produces an energy-independent E×B drift, which has both radial and azimuthal components. In the midnight-dawn quadrant this drift is directed radially earthward and azimuthally eastward.
The injected electrons are also subject to an energy-dependent drift as a result of the spatial gradient and curvature of the Earth's magnetic field. This drift carries them eastward, the more energetic particles and those located further from the Earth having a higher angular drift velocity (Roederer, 1970).
Subsequent to their injection, the electrons are thus transported eastward into a ground station's field of view, where they may resonantly interact with whistler mode waves.
VLF chorus is a whistler mode phenomenon encountered both in space and on the ground, consisting of numerous discrete emission elements. Particle injections are regularly correlated with enhanced chorus activity at geosynchronous orbit (Isenberg et al., 1982). Chorus is thought to be generated by the transfer of energy from anisotropic hot electrons to waves by the cyclotron resonance interaction (Sazhin and Hayakawa, 1992). Such wave-particle interactions play a role in the acceleration of electrons to relativistic energies (Meredith et al., 2001; Horne and Thorne, 2003) and induce the precipitation of energetic electrons (Horne and Thorne, 2003).
Doppler-shifted cyclotron resonance between whistler mode waves and counter-streaming electrons is described by (e.g. Hargreaves, 1992)

W∥ = (B²/2μ₀n_e)(f_B/f)(1 − f/f_B)³,  (1)

where W∥ is the component of the electron's kinetic energy associated with motion parallel to the magnetic field, B is the magnetic field strength, f_B is the electron gyrofrequency, f is the frequency of the waves, μ₀ is the permeability of free space, and n_e is the electron number density. The interaction represented by Eq. (1) is a transverse resonance where the wave frequency is Doppler-shifted up to the electron gyrofrequency. The derivation of this relationship proceeds from the assumption that the waves are propagating parallel to the magnetic field. In general, however, the waves may possess an arbitrary wave normal angle, which, for resonance at a given frequency, leads to larger W∥ as the direction of propagation becomes more oblique. For a predetermined population of electrons this results in a shift to higher resonance frequencies for larger wave normal angles. Within the wave generation region field-aligned propagation is favoured as this produces the greatest instability (Nunn et al., 1997).
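As a numerical check on Eq. (1), the following sketch (a minimal Python implementation, assuming an equatorial dipole field) evaluates the resonant parallel energy; for L = 5 with n_e = 4.5×10⁶ m⁻³ and f = 3 kHz it returns roughly 12 keV, the figure used below to justify neglecting the convection field.

```python
import numpy as np

MU0 = 4e-7 * np.pi     # permeability of free space [H/m]
QE = 1.602e-19         # elementary charge [C]
ME = 9.109e-31         # electron mass [kg]

def resonant_energy(f, B, n_e):
    """Parallel resonant energy [J] from Eq. (1) for a whistler of
    frequency f [Hz] in field B [T] and plasma of density n_e [m^-3]."""
    f_B = QE * B / (2 * np.pi * ME)            # electron gyrofrequency
    return (B**2 / (2 * MU0 * n_e)) * (f_B / f) * (1 - f / f_B)**3

B_eq = 30.1e-6 / 5.0**3                        # equatorial dipole field at L = 5
W = resonant_energy(3e3, B_eq, 4.5e6)
print(W / QE / 1e3, "keV")                     # ~12 keV
```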
Observational evidence suggests that chorus originates near the geomagnetic equator (Parrot et al., 2003), although recent observations of Inan (2004) indicate that the chorus source region is in rapid motion. Since the energy of gyroresonant electrons is minimised at the equator and the electron distribution is generally dominated by low energy particles, Helliwell (1965) concluded that the equatorial plane is the region in which the number of particles available for gyroresonance is maximised and is thus the most likely location for cyclotron resonance instability. Liemohn's (1967) results, however, indicate that the off-equatorial growth rate can become quite significant, especially at higher frequencies. Regions of lower magnetic field strength and larger electron number density are also advantageous (Rycroft, 1972).
The penetration of whistler mode waves through the ionospheric boundary requires that the wave normal angle lie within the transmission cone (Helliwell, 1965) and consequently chorus must propagate to the ground via ducts of enhanced ionisation. The wave vectors of ducted chorus are nearly aligned with the ambient magnetic field (Parrot et al., 2003). Chorus emissions with oblique wave normals have, however, also been observed in the vicinity of the equatorial plane (Hayakawa et al., 1984), but at frequencies above half the equatorial gyrofrequency and therefore not propagating in ducted mode.
The maximum frequency for trapping of a whistler mode wave in a channel of enhanced ionisation is half the local electron gyrofrequency, f_B/2 (Helliwell, 1965). This fact may determine the cutoff of chorus emissions at the upper edge of the SCE envelope (Smirnova, 1984). Earthward motion of electrons results in an increasing half-gyrofrequency cutoff as the particles move into regions of higher magnetic field strength. The earthward motion of electrons also produces increasing resonance frequencies, which follow from Eq. (1) as a consequence of elevated magnetic field strength and background number densities.
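A sketch of this half-gyrofrequency cutoff for equatorial dipole field values (an idealisation, since the real field departs from a dipole at these L) shows how earthward motion of the source raises the upper cutoff over the range reported for the events in Sect. 2:

```python
import numpy as np

QE = 1.602e-19   # elementary charge [C]
ME = 9.109e-31   # electron mass [kg]

for L in (6.0, 5.0, 4.4):
    B = 30.1e-6 / L**3                       # equatorial dipole field [T]
    f_B = QE * B / (2 * np.pi * ME)          # electron gyrofrequency [Hz]
    print(f"L = {L}: f_B/2 = {f_B / 2:.0f} Hz")
# ~2.0 kHz at L = 6.0 rising to ~4.9 kHz at L = 4.4, consistent with the
# rising upper cutoffs of the events in Fig. 1.
```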
Wave amplification via the cyclotron resonance instability is associated with an anisotropic pitch angle distribution (Kennel and Petschek, 1966). Specifically, a pancake-shaped distribution, where large pitch angles are predominant, is unstable, while a distribution in which particle motion is primarily parallel to the magnetic field is not. Although there is a tendency for the injected plasma to be anisotropic (Birn et al., 1997) as a result of inward radial motion subject to the conservation of the first and second adiabatic invariants (Southwood and Kivelson, 1975), the subsequent dispersive drift motion of the injected electrons also leads to anisotropy in the portion of the population within a station's field of view.
The form of the resonance condition Eq. (1) implies that particles with large parallel velocities are resonant with low frequency waves and vice versa. The most energetic electrons, which arrive first within a terrestrial station's whistler mode field of view, are thus resonant with low frequency waves. Progressively higher resonant frequencies are generated as lower energy particles drift around into the field of view. This, in part, accounts for the increasing frequencies characteristic of the SCE. The frequency dispersion arising from this mechanism depends on the proximity of the station to the injection region: stations located at later local times should observe greater dispersion. Smith et al. (1996) argued that the absence of a clear relationship between frequency dispersion and MLT in their data indicated that dispersive azimuthal drift was not the governing process in a SCE. The recent study of Abel et al. (2002) also suggests that little direct evidence exists supporting electron energy dispersion as the dominant process controlling the frequency evolution of the SCE. Neither of these publications, however, calls into question whether the dispersion mechanism plays some role in the generation of the SCE phenomenon. In fact, Smith et al. (1996) propose that the prevailing mechanism may be energy dispersion in the case of events with large frequency sweep rates and radial drift for more gradually rising events.
In this study we examine the contribution of energy dispersion in generating the frequency-time structure of the SCE by modelling the azimuthal drift of injected particles, including the action of the corotation electric field. In order to isolate the effect of the gradient-curvature drift, the convection electric field is neglected and the particles are assumed to move along paths of constant L in the inner magnetosphere. Although the convection field is undoubtedly of import to the dynamics, an objective of this study is to demonstrate that the inward radial E×B drift is not a prerequisite for SCE occurrence and that energy dispersion is a sufficient mechanism. The neglect of the convection field may be justified as follows: for L=5 resonance at 3 kHz requires electrons with energies W∥ of roughly 12 keV, which experience a gradient-curvature drift of about 5 km/s. In a convection field of 0.25 mV/m, representative of mildly disturbed geomagnetic conditions (Kivelson, 1976), the E×B drift speed at this L is only ∼1 km/s, significantly less than the gradient-curvature drift speed. In addition, only a portion of the E×B drift is directed inward, while the balance contributes to the azimuthal particle drift, reducing the delay and duration of a SCE.
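The two drift speeds quoted above can be reproduced with a few lines (a sketch assuming equatorially mirroring electrons in a dipole field, so the pitch angle factor is unity):

```python
QE = 1.602e-19   # elementary charge [C]
B0 = 30.1e-6     # equatorial surface field [T]
RE = 6.371e6     # Earth radius [m]

L = 5.0
W = 12e3 * QE                                  # 12 keV in joules

# Gradient-curvature drift speed for an equatorially mirroring electron
# in a dipole field: v = 3 L^2 W / (q B0 RE).
v_gc = 3 * L**2 * W / (QE * B0 * RE)
# E x B drift speed in a 0.25 mV/m convection field at L = 5.
v_exb = 0.25e-3 / (B0 / L**3)

print(v_gc, v_exb)                             # ~4.7e3 m/s versus ~1.0e3 m/s
```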
Observations
Examples of substorm chorus events observed at SANAE-IV [71°40′ S 2°50′ W, L=4.3] are presented as spectrograms in Fig. 1. A selection of events is included to illustrate the variation in duration and frequency range. The broad-band VLF data were recorded remotely using the Digital VLF Recording and Analysis System (DVRAS) described by Collier and Hughes (2002). The antenna system consists of two orthogonal rhombic loops with area 58 m² aligned geographic north-south and east-west. The data were digitised at a sampling rate of 20 kHz.
The events on days 008, 123 and 213 commence at frequencies less than 2 kHz and proceed up to a maximum of around 6 kHz. In the interpretation of Smirnova (1984) the increase in the upper cutoff frequency over this range indicates motion of the generation region from L=6.0 to 4.4. The events on days 017 and 020 are confined to lower frequencies, suggesting that the source region remains at larger L. Furthermore, the short duration and poor dispersion of the event on day 017 indicate that the station was located close to or within the injection region and that the injected particles were not initially distributed over a very large range of local times.
The event on day 020 is particularly interesting as there is clear evidence of three drift echoes. An isolated injection of particles was detected by geosynchronous satellite 1990-095 at 03:50 UT. A weak SCE was subsequently recorded at SANAE-IV, starting just before 04:00 UT, followed by three stronger events at intervals of roughly 1 h measured at 500 Hz. Each of the echoes was observed to have a progressively larger frequency dispersion. Resonance at 500 Hz for L between 5 and 6 is achieved by particles with W∥ in the range 100 to 300 keV. Typical drift periods for such particles are given in Table 1.
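Indicative drift periods for this population can be estimated from the bounce-averaged dipole drift rate employed in the model below; the sketch here (nonrelativistic, so only indicative at 300 keV) gives periods of roughly 25 to 85 min, bracketing the observed ∼1 h echo spacing.

```python
import numpy as np

QE = 1.602e-19   # elementary charge [C]
B0 = 30.1e-6     # equatorial surface field [T]
RE = 6.371e6     # Earth radius [m]

def drift_period_min(W_keV, L, alpha_deg=90.0):
    """Azimuthal drift period [min] in a dipole field, using the
    bounce-averaged drift speed 3*L^2*W*G/(q*B0*RE) with the approximate
    pitch angle factor G = 0.7 + 0.3*sin(alpha); relativistic corrections
    are neglected."""
    W = W_keV * 1e3 * QE
    G = 0.7 + 0.3 * np.sin(np.radians(alpha_deg))
    v_d = 3 * L**2 * W * G / (QE * B0 * RE)
    return 2 * np.pi * L * RE / v_d / 60.0

for L in (5.0, 6.0):
    for W in (100, 200, 300):
        print(L, W, round(drift_period_min(W, L)))   # ~24 to ~85 min
```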
The range of frequencies observed on the ground is characteristic of the conditions prevailing in the magnetospheric source region. Extensive propagation in the Earth-ionosphere waveguide should, however, result in waveguide cutoffs (Helliwell, 1965) becoming apparent. The event of day 213 has a definite reduction in emission intensity at frequencies below ∼2 kHz, consistent with waveguide cutoff under an ionosphere at an altitude of 75 km. This cutoff is not evident in the other SCE and one may thus conclude that these enter the waveguide in closer proximity to the station.
Each of the events presented in Fig. 1 is accompanied by particle injections detected at geosynchronous orbit. Electron flux data from Synchronous Orbit Particle Analyser (SOPA) instruments on board LANL geosynchronous satellites are displayed in Fig. 3, where it is evident that injections result in a dramatic increase in the flux of particles with energies of tens to hundreds of keV. Satellite 1990-095 is located closest to the meridian of SANAE-IV, on a field line 9° west of the station, and should thus detect particle injections before they manifest themselves as SCE at SANAE-IV.
In Fig. 3a an injection of electrons is recorded on satellite 1990-095 at 02:13 UT, when the satellite is ideally situated in the vicinity of local midnight. This injection is a "near-dispersionless substorm onset" (Friedel et al., 1996) as particles of all energies arrive at the satellite almost simultaneously. The duration of the injection is short, comparable to the time scale of the magnetic field dipolarisation, but an elevated electron flux is sustained for a longer period as a result of the drift of electrons injected to the west of the satellite. The rapid flux enhancement is followed within 7 min by the onset of chorus emissions at SANAE-IV.
The SOPA data for day 123 in Fig. 3b indicate a minor injection at 03:16 UT followed by a more significant one at 03:42 UT. Some time later the injected particles are detected by satellites LANL-97A, 1994-084 and 1991-080, respectively, where the dispersive effects of the gradient-curvature drift are apparent. Careful examination of the VLF data in Fig. 1d reveals that chorus activity starts at around 03:00 UT and a second band of emissions begins at about 03:40 UT. In both cases the onset of the VLF activity precedes the geosynchronous injection, a somewhat surprising result, suggesting that this injection was centred in the midnight-dawn quadrant or that the VLF activity was initiated at L>6.6.
The bulk of the SCE emissions are generated by particles with energies lower than those detected by the SOPA instrument. The Magnetospheric Plasma Analyser (MPA) data in Fig. 4 reflect the low energy component of the injected particles, indicating a substantial increase in the flux of electrons with energies of a few to tens of keV. The SOPA and MPA data also illustrate the existence of upper and lower energy cutoffs in the spectrum of injected electrons (Reeves, 1998).
The onset of the substorm expansion phase is reflected in magnetic perturbations observed both on the ground and in orbit. The substorm AE index is a measure of the strength of the auroral electrojet and, as such, provides an indicator of substorm activity. Although neither provisional nor final AE data are presently available for 2002, quicklook data indicate substorm signatures in the AE and related indices at the appropriate times for all the events in Fig. 1.
The results of Smith et al.'s (1999, 2002) superposed epoch analyses register negative and positive bays, respectively, in the H and D magnetic field components recorded by the Halley magnetometer in association with substorm onset and coincident with the observation of a SCE. Halley is normally somewhat equatorward of the auroral oval around midnight, and negative H bays are consistent with a westward horizontal electrojet poleward of a Southern Hemisphere station at sub-auroral latitudes. The SCEs considered by Smith et al. (1999) were selected with the intent of study-

Data presented in Fig. 5 represent H and D magnetic field variations at Halley for the periods corresponding to the events in Fig. 1. In each case H and D bays of the correct sense were discernible at a time indicating their association with the corresponding SCE, providing additional evidence that the SCE is consequent to substorm onset.
The cyclotron wave-particle interaction results in a decrease in the pitch angle of the resonant electrons and may lead to their precipitation (Foster and Rosenberg, 1976; Rosenberg et al., 1981). Electron precipitation can be monitored using a riometer, which indicates the degree of absorption of cosmic VHF radio noise due to charged particle density variations in the ionosphere. Data from the Nordic chain of riometers for 2002 day 123 are presented in Fig. 6. The degree of absorption recorded by these instruments increases significantly just after 03:00 UT, which corresponds to the onset of the SCE in Fig. 1d. The precipitation is most evident for stations located outside the plasmapause, but is still discernible at JYV (Jyväskylä), which lies on a field line within the plasmasphere, and OUL (Oulu), which is situated close to the plasmapause. The fact that enhanced precipitation is observed at stations located over the range L=3.7 to 6.0 indicates that wave-particle interactions occur over the corresponding region in the magnetosphere.
It should be stated that the data in Fig. 6 constitute a particularly fine example. Such a well-defined precipitation enhancement is not always observed in conjunction with a SCE, or it is detected at only a subset of the riometer stations.
Model
Consider a simple model based on the geometry illustrated in Fig. 7, with the injection of energetic particles taking place over a range of local time, δψ, centred on midnight and an observer located at local time ψ₀ at the moment of injection. The observer may be situated either in the magnetosphere or at a terrestrial station, where, in the latter case, propagation to the observer is assumed to take place along ducts of augmented plasma density. In practice, the injected particles would be distributed over a range of L; however, this model considers azimuthal drift alone and so we confine our attention to those particles originating on a particular L shell.
In addition, for the reasons cited in Sect. 1 we further constrain the model to examine only the distribution of particles in the equatorial plane.
The drift motion of the injected particles is calculated in a dipole magnetic field, where the assumption of a dipolar geometry is not unreasonable since particle injection accompanies relaxation of the field. The effects of a convection electric field are neglected but corotation drift, arising from the azimuthally symmetric corotation electric field, is implicitly taken into account as the particle trajectories are calculated in the Earth's rotating frame of reference. Although the electron dynamics are treated here in a simplified model of the magnetosphere, results for more realistic field geometries are to be found elsewhere (e.g. Reeves et al., 1991).
Suppose that the source of particles at local time ψ and time t is described by

S(ψ, W, α, t) = Λ(ψ) F(W, α) δ(t),  (2)

where Λ describes the local time distribution of the source and F its energy and pitch angle dependence. The time impulse in Eq. (2) is justified by the fact that the duration of the injection event is short compared with the time scale of the evolution of the SCE. However, in principle, the distribution function resulting from a source with arbitrary temporal structure may be derived from that ensuing from an impulsive source by convolution.
The particle population is assumed to consist of a background low-temperature thermal component with density n_c and an injected hot component with density n_h. The total number density in the injection region is n_e = n_c + n_h, where n_h ≪ n_c and consequently n_e ≈ n_c (Liemohn, 1967). The injection intensity may be estimated using the LANL flux data, from which n_h ∼ 10⁵ m⁻³ is appropriate. Only a small fraction of the injected hot plasma density is present at the observer at any given time and it is assumed that only the non-thermal particles participate in the resonant interactions.
The functional forms which should be adopted for Λ and F are somewhat uncertain. Experimental evidence suggests that the region into which particles are injected is centred on midnight (Birn et al., 1997; Thomsen et al., 2001); however, little appears to be known about the variation of the injection intensity as a function of local time. Selecting δψ=45° produces an injection region spanning three hours of local time, consistent with the results of Reeves et al. (1991), although the extent of this region may vary appreciably (Reeves et al., 1992). The simplest assumption is that the particles are uniformly distributed:

Λ₁(ψ) = Π(ψ/δψ),  (4)

where Π is the rectangle function (Bracewell, 1965). Another option is to introduce a peak in the local time distribution of the source:

Λ₂(ψ) = Π(ψ/δψ) cos^m(πψ/δψ).  (5)

Setting m=2 provides a peaked distribution which goes smoothly to zero at its extremities. The flux signatures expected from injections conforming to Λ₁ and Λ₂ are given in Fig. 8, where it is apparent that the former is more consistent with the observations in Fig. 3. The kinetic distribution of the injected plasma may be represented by a superposition of Maxwellians with different characteristic temperatures (Parks et al., 1980), and consequently we consider a source spectrum described by a bi-Maxwellian distribution

F(W, α) = k₁ exp(−W⊥/T⊥ − W∥/T∥),  (6)

where k₁ = √(m³/(2π)³T⊥²T∥), characterised by temperatures T⊥ and T∥, both of which have units of energy.
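In code, the two local time distributions and the bi-Maxwellian spectrum translate directly; this is a sketch, with k₁ taken in the standard bi-Maxwellian normalisation.

```python
import numpy as np

ME = 9.109e-31   # electron mass [kg]

def rect(x):
    """Rectangle function: 1 for |x| < 1/2, else 0 (Bracewell, 1965)."""
    return np.where(np.abs(x) < 0.5, 1.0, 0.0)

def lambda1(psi, dpsi):
    """Uniform local time distribution, Eq. (4); psi in radians."""
    return rect(psi / dpsi)

def lambda2(psi, dpsi, m=2):
    """Peaked local time distribution, Eq. (5)."""
    return rect(psi / dpsi) * np.cos(np.pi * psi / dpsi)**m

def bi_maxwellian(W, alpha, T_perp, T_par):
    """Bi-Maxwellian source spectrum, Eq. (6); energies in joules,
    pitch angle alpha in radians."""
    k1 = np.sqrt(ME**3 / ((2 * np.pi)**3 * T_perp**2 * T_par))
    W_perp = W * np.sin(alpha)**2
    W_par = W * np.cos(alpha)**2
    return k1 * np.exp(-W_perp / T_perp - W_par / T_par)
```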
Measurements used to derive phase space densities indicate that a power law function of energy may be a more appropriate representation, especially at larger energies (e.g. Birn et al., 1997). The generalised Lorentzian distribution (Vasyliūnas, 1968)

F(W) = k₂ [1 + W/(κW₀)]^−(κ+1),  (7)

where κ is the spectral index and W₀ is related to the effective temperature by W₀ = (κ−3/2)T, may be used to model a plasma with a non-Maxwellian high-energy tail. The normalisation factor has the form k₂ = (m/2πκW₀)^(3/2) Γ(κ+1)/Γ(κ−1/2). Equation (7) may be modified by introducing a factor dependent on pitch angle, sin^2p α. The data of Maeda and Lin (1981) suggest that values of p up to 1 are reasonable.
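A corresponding sketch of Eq. (7) follows; the normalisation k₂ used here is the standard Vasyliūnas form and should be regarded as an assumption, as is the treatment of the sin^2p α factor, which is applied without renormalisation.

```python
import numpy as np
from scipy.special import gamma as Gamma

ME = 9.109e-31   # electron mass [kg]

def lorentzian(W, alpha, kappa, W0, p=0):
    """Generalised Lorentzian (kappa) distribution, Eq. (7), with an
    optional sin^2p(alpha) pitch angle factor. W and W0 in joules."""
    # Standard Vasyliunas normalisation (assumed, not taken from the paper).
    k2 = (ME / (2 * np.pi * kappa * W0))**1.5 \
         * Gamma(kappa + 1) / Gamma(kappa - 0.5)
    return k2 * np.sin(alpha)**(2 * p) * (1 + W / (kappa * W0))**(-(kappa + 1))
```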
In the absence of a dawn-dusk electric field, the injected electrons undergo eastward gradient-curvature drift along paths of constant L. Particles with energy W and pitch angle α acquire a bounce-averaged equatorial azimuthal drift velocity of (Roederer, 1970)

⟨v_d⟩ = (3L²W/qB₀R_E) G(α),  (8)

where R_E is the Earth's radius, B₀ = 30.1 µT, q is the charge of an electron and, to reasonable approximation,

G(α) ≈ 0.7 + 0.3 sin α.  (9)

A better approximation for G(α), accurate for small α, may be found in Ejiri (1978); however, the simpler form Eq. (9) is employed for the calculations presented here. The phase space density of particles at the observer, g(W, α, t), can be derived from Eq. (2): following Jentsch (1976, Eq. 5), one has at time t

g(W, α, t) = F(W, α) Λ(ψ₀ − ω_d t),  (10)

where ω_d = ⟨v_d⟩/LR_E is the angular drift velocity. The contribution of particles that have made multiple orbits around the Earth may be consolidated in Eq. (10) by taking the argument of Λ modulo 2π.
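Equations (8) to (10) amount to a drift mapping: the density at the observer is the source spectrum weighted by the local time distribution, evaluated at the longitude from which a particle of given (W, α) must have started. A minimal sketch:

```python
import numpy as np

QE = 1.602e-19   # elementary charge [C]
B0 = 30.1e-6     # equatorial surface field [T]
RE = 6.371e6     # Earth radius [m]

def G(alpha):
    """Pitch angle factor, Eq. (9)."""
    return 0.7 + 0.3 * np.sin(alpha)

def omega_d(W, alpha, L):
    """Angular drift velocity [rad/s]: Eq. (8) divided by L*RE."""
    return 3 * L * W * G(alpha) / (QE * B0 * RE**2)

def g_observer(W, alpha, t, psi0, L, F, Lam):
    """Phase space density at the observer, Eq. (10): source spectrum F
    weighted by Lam at the drift-mapped source longitude, wrapped into
    (-pi, pi] so that multiple orbits are consolidated."""
    psi_src = np.mod(psi0 - omega_d(W, alpha, L) * t + np.pi, 2 * np.pi) - np.pi
    return F(W, alpha) * Lam(psi_src)
```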
The growth rate of whistler mode waves of frequency f in resonance with electrons having parallel velocity v∥ = v_R was found by Kennel and Petschek (1966) to be

γ = π f_B (1 − f/f_B)² η [A − 1/(f_B/f − 1)],  (11)

where the anisotropy A, Eq. (12), in the form given by Etcheto et al. (1973), is an integral measure of the pitch angle gradient of the distribution g along the resonance curve, and η, Eq. (13), describes the relative number of particles in resonance. In Eqs. (12) and (13) g is evaluated subject to the condition W = W∥ sec²α, where W∥ is the resonant energy determined by Eq. (1). As the resonant energy is related to the wave frequency through the resonance condition, both A and η are also functions of frequency. These quantities are calculated on the observer's meridian and the longitudinal extent of the observer's field of view is ignored. Under conditions of azimuthal symmetry, a finite field of view could be accounted for by appropriately broadening the injection region.
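The frequency-dependent part of Eq. (11) is simple to evaluate once A(f) and η(f) have been obtained from g; the sketch below takes those two quantities as inputs rather than evaluating the integral definitions of Eqs. (12) and (13).

```python
import numpy as np

def growth_rate(f, f_B, eta, A):
    """Whistler growth rate, Eq. (11): positive only when the anisotropy A
    exceeds the critical value A_c = 1/(f_B/f - 1) of Eq. (14)."""
    A_c = 1.0 / (f_B / f - 1.0)
    return np.pi * f_B * (1.0 - f / f_B)**2 * eta * (A - A_c)
```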
The assumption n_h ≪ n_c allows for the use of n_c instead of n_e in the calculation of η, which is computationally expedient, as one does not have to determine the total density of the hot plasma component at the observer. However, the validity of this supposition is questionable, as with increasing L the density of the two populations may become comparable.
Instability of whistler mode waves depends on the anisotropy exceeding a critical value,

A_c = 1/(f_B/f − 1).  (14)

Consequently, for a distribution with A=1, only waves with frequencies less than half the gyrofrequency are unstable. Since A_c increases as f → f_B, progressively more anisotropic distributions are required to produce instability at frequencies approaching the electron gyrofrequency. Spacecraft observations of chorus emissions above half the local electron gyrofrequency (e.g. Meredith et al., 2001) therefore either imply that they were triggered by a highly anisotropic population of electrons or that they originated in an off-equatorial region of higher magnetic field strength.
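The half-gyrofrequency threshold for an A = 1 distribution follows immediately from Eq. (14):

```python
f_B = 6.7e3                                # equatorial gyrofrequency at L = 5 [Hz]
for f in (0.30 * f_B, 0.49 * f_B, 0.51 * f_B):
    A_c = 1.0 / (f_B / f - 1.0)
    print(f / f_B, round(A_c, 3), A_c < 1.0)   # unstable for A = 1 only if f < f_B/2
```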
An isotropic distribution attenuates waves at all frequencies, but there always exists a range of frequencies which are amplified by a distribution with A>0 (Liemohn, 1967). If the distribution function is a bi-Maxwellian, then the anisotropy is A = T⊥/T∥ − 1. If the angular dependence of the density function is of the form sin^2p α, then the anisotropy is simply A = p. However, since in the model considered here the distribution function at the observer at a given time is determined by those particles that drift from the source region to the observer in the time interval elapsed since injection, neither of these simple cases emerges.
If the distribution function, g(W, α, t), is independent of pitch angle, or isotropic, then the anisotropy is identically zero. The function Λ enters into Eq. (10) as a weighting factor which depends on the azimuthal drift velocity and time. Although the drift velocity is a function of α, it serves only to determine the location in the source region from which an electron originates, and hence the applicable weight. In order to obtain a non-isotropic distribution at the observer, ∂g/∂α ≠ 0, it is necessary that either ∂F(W, α)/∂α ≠ 0 or dΛ/dψ ≠ 0 within the source region. The fact that the distribution at the observation point evolves with time as particles drift around from the injection region implies that if the source distribution is not isotropic or the source particles are not distributed uniformly in local time then the observed anisotropy is non-zero and also varies with time.
In principle, the calculated growth rate and a knowledge of the background embryonic wave amplitudes, combined with processes limiting the exponential wave growth, could be used to estimate the absolute wave amplitudes resulting from this interaction. Kennel and Petschek (1966) showed that the resonant particle distribution is driven towards a state of marginal stability as a result of its interaction with the waves. Modification of the electron population arising from wave amplification would lead to a reduction in the pitch angles of interacting particles, resulting in those particles close to the loss cone being precipitated into the upper atmosphere. For the purposes of this model the wave energy density is assumed to be significantly smaller than that of the particles and, consequently, action of the waves on the resonant particle distribution is neglected. The validity of this approximation is somewhat uncertain as there is compelling evidence (enhanced precipitation in riometer data) that the particle distribution is indeed modified during the interaction. Although imperfect, we adopt this approximation as it results in substantial computational simplifications.
Results
The whistler mode field of view for satellite observations is restricted and therefore, using either models or observations of the plasma density and magnetic field strength, one is able to derive a unique mapping between the observed wave frequency and the resonant electron energy (e.g. Abel et al., 2002). In contrast, ground stations are able to detect waves originating over a large range of L and this introduces ambiguity into the relationship between the observed frequency and the resonant electron energy.
The whistler mode field of view for a ground station at L≈4 may be estimated as around 30° in longitude and L roughly between 3 and 7 (Carpenter, 1966; Smith et al., 1996). The majority of the results quoted here are for injections at L=5. This particular value was selected as it falls in the middle of the aforementioned range of L and, furthermore, represents the average location of Mauk and McIlwain's (1974) injection boundary for moderate Kp. Although the simulations of Li et al. (1998) indicate that the injection boundary model is not necessary to explain dispersionless injections, it serves here as an approximation to the locus of the injected particles.
The radial electron density profile used is the empirical model of Carpenter and Anderson (1992). For mildly disturbed magnetospheric conditions, representative of those prevailing during the events in Fig. 1, one has in the morning sector n_c = 4.5×10⁶ m⁻³ at L=5, while the plasmapause is located at L ≈ 4.3. The injected plasma is presumed to have n_h = 10⁵ m⁻³.
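For reference, the plasmatrough branch of the Carpenter and Anderson (1992) model reproduces the quoted density; the sketch below keeps only the leading MLT-dependent term of the trough formula (dropping the small refilling correction, an assumption of this sketch):

```python
def trough_density(L, mlt):
    """Plasmatrough electron density [m^-3], leading term of the
    Carpenter and Anderson (1992) model for 0 <= mlt < 6."""
    return (5800.0 + 300.0 * mlt) * L**(-4.5) * 1e6   # cm^-3 converted to m^-3

print(trough_density(5.0, 2.0))   # ~4.6e6 m^-3, close to the quoted 4.5e6
```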
The shaded regions in Fig. 9 represent the relationship between W⊥ and W∥ for electrons that have the drift rate required to reach the observer at a selection of times after the moment of injection. Only electrons which have not made an entire orbit around the Earth are considered. The higher and lower energy borders of the shaded regions correspond to electrons injected at the two extremities of the source region, the upper boundary being associated with those particles originating at the edge most remote from the observer. The asymmetry between W⊥ and W∥ for the observed population is accounted for by the fact that for smaller pitch angles a higher energy is required to achieve the same drift rate. The form of these regions places an upper bound on W∥ (indicated by the arrows) for a given t, which, in turn, places a lower bound on the resonant frequency. As time proceeds the distribution of particles at the observer shrinks towards the origin. This implies an upper cutoff in W∥ that decreases with time, which leads to an increasing lower cutoff in the resonance frequency.
This lower frequency cutoff determines the latest time at which amplification at a given frequency ceases. However, if the anisotropy falls below the critical level before that, then wave generation will terminate sooner. In practice, the anisotropy becomes sub-critical before η → 0.
It is informative to explore the influence that the form of the source distribution function exerts on the magnitude and evolution of the anisotropy at the observer. The integrands in the numerator and denominator of Eq. (12) are plotted as a function of α for two different source models in Fig. 10, reflecting the variation in these functions along the dashed line in Fig. 9. The shaded rectangles represent the range of α for which the functions are non-zero at the indicated time, taking into account the width of the source and the travel time from the source region to the observer. Within the range of pitch angles present at the observer at a given instant, the larger α correspond to particles arriving from the furthest part of the injection region. This is due to the fact that for a given value of W∥, both W and G(α), and hence the azimuthal drift velocity, increase with α.
For Λ = Λ₁ one must have ∂F/∂α ≠ 0 to achieve non-zero anisotropy. However, if Λ is a non-uniform function of ψ then finite anisotropies are found at the observer even if F is independent of α. In Fig. 10 the bell-shaped curves for the denominator reflect the effect of the non-uniform Λ = Λ₂ distribution of the source in azimuth. This local time distribution introduces a bipolar shape to the gradient curves at the observer, arising primarily from the derivative of Λ₂ with respect to ψ. Since both source distributions considered are isotropic, the asymmetry between the two lobes of the bipolar curve originates not only from the trigonometric factors but also from the dependence of F on W. Specifically, the distribution of particles from a Maxwellian injection declines far more swiftly with increasing α along a line of constant W∥ than that resulting from a Lorentzian injection. Consequently, the difference in the area under the two lobes of the curve representing the numerator of Eq. (12) is larger in the case of the Maxwellian and the resulting anisotropy is accordingly greater. This is exemplified by the data plotted in Fig. 11, which illustrate that the anisotropy produced by a Maxwellian injection is initially very large but is reduced rapidly, while that resulting from a Lorentzian injection is smaller but dwindles more gradually with time.

Fig. 10. Integrands in the numerator and denominator of Eq. (12) evaluated along the dashed line in Fig. 9, for W∥ = 20 keV with Λ = Λ₂. The dashed curves represent the variation of the source density function while solid curves reflect the relationship for the density function at the observer at selected times. A narrow source region, δψ = 10°, is employed to prevent overlap of the curves for different times.
In fact, the isotropic Lorentzian produces an anisotropy which is always below the critical value and consequently does not lead to wave growth. However, setting p=1 for the Lorentzian distribution enhances the proportion of particles with larger pitch angles and results in an anisotropy which is initially in excess of A_c.
Maxwellian source
Consider first a bi-Maxwellian source distribution. The growth rate as a function of time for a range of resonance frequencies at an observer located in the midnight-dawn quadrant, ψ₀ = 45°, is plotted in Fig. 12a. Since in this case the injection is uniformly distributed over δψ, it is mandatory that the source be inherently anisotropic (T⊥ ≠ T∥). The uppermost panels are greyed out as these lie above half the equatorial electron gyrofrequency and are consequently not observed on the ground. The peaks in growth rate at successively higher frequencies occur at progressively later times, in agreement with the observed SCE behaviour. In Fig. 12b an isotropic Maxwellian source (T⊥ = T∥) is assumed, but the distribution in azimuth is taken to be non-uniform: Λ = Λ₂ with m=2. The fact that the source is isotropic causes the anisotropy to approach its critical value more rapidly and the growth rate peaks have shorter duration. This illustrates the fact that even for an isotropic population of injected plasma, an anisotropic distribution may be observed at later local times due to energy dispersion if the plasma is injected non-uniformly in azimuth.
The general shape of the curves in Fig. 12 may be understood with reference to Fig. 13: the initial rise is due to an increase in η, resulting from the presence of electrons with progressively lower energies and hence higher densities (closer to the peak in the source distribution).The onset of enhanced emissions at a given frequency is thus a consequence of the rapid increase in the number of resonant particles.This should be contrasted with the results of Collier and Hughes (2004) where the upper edge of the envelope resulted from the f B /2 cutoff.The subsequent decrease in γ results from the last term in Eq. ( 11), which diminishes as the anisotropy approaches its critical value.
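This rise-and-fall behaviour can be mimicked with a toy calculation. The sketch below assumes only the schematic Kennel-Petschek dependence γ ∝ η(A − A_c); the time profiles chosen for η and A are hypothetical and do not reproduce Eq. (11) or the source-dependent quantities of the model.

```python
import numpy as np

# Toy illustration of the rise-and-fall shape of the growth rate curves.
# Assumes the schematic Kennel-Petschek dependence gamma ~ eta * (A - A_c);
# eta(t) and A(t) below are hypothetical profiles, not the model's Eq. (11).
t = np.linspace(0.0, 120.0, 500)           # minutes after injection
f_ratio = 0.3                              # wave frequency / equatorial gyrofrequency
A_c = f_ratio / (1.0 - f_ratio)            # critical anisotropy, A_c = 1/(f_B/f - 1)

eta = 1.0 / (1.0 + np.exp(-(t - 30.0) / 5.0))   # resonant fraction rises as slower, denser particles arrive
A = 2.0 * np.exp(-t / 40.0)                      # anisotropy relaxes with time

gamma = eta * (A - A_c)                    # arbitrary units

t_peak = t[np.argmax(gamma)]
t_cutoff = t[np.argmax(A < A_c)]
print(f"A_c = {A_c:.2f}; peak growth near t = {t_peak:.0f} min; "
      f"growth ceases once A < A_c (t > {t_cutoff:.0f} min)")
```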
A substantial variation in peak growth rate, amounting to several orders of magnitude, exists over the range of frequencies plotted in Fig. 12 due to the large difference in the density of the more energetic particles responsible for the lower frequency emissions and the lower energy particles associated with the higher frequencies. The range in γ for the frequencies considered may be reduced by selecting higher temperatures for the source distribution. This, however, also results in the duration of the events declining dramatically.
One should note that following the growth rate peaks in Fig. 12, large negative excursions occur which are truncated by the axes of the plots. These arise as the distribution of particles at the observation point shifts to lower energies, the anisotropy approaches, then drops below, the critical value, A_c, and the final term in Eq. (11) becomes negative. However, the lower energy particles are also drawn from closer to the peak in the injected particle distribution and consequently lead to large values of η and hence γ≪0. Wave growth is only expected for A>A_c: once the anisotropy drops below this critical value, amplification of VLF waves ceases and damping may occur.
The trend of peak delay times increasing with frequency is reversed at higher frequencies. The origin of this effect is as follows: initially, the anisotropy at all W is large. As time progresses the anisotropies decrease, with those corresponding to resonance at higher frequencies declining least rapidly. However, the critical anisotropy, which starts to increase rapidly with frequency for f/f_B ≳ 0.5, is larger at higher frequencies and consequently, above a certain frequency, which depends on both L and the particulars of the source, the difference A−A_c is reduced more rapidly with time.
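A short numerical illustration of how quickly the critical anisotropy grows with frequency, using the standard Kennel-Petschek marginal-stability expression A_c = 1/(f_B/f − 1); this is for orientation only and is not taken from the equations of this paper.

```python
import numpy as np

# Critical anisotropy for whistler-mode cyclotron instability (Kennel & Petschek, 1966):
# A_c = 1 / (f_B / f - 1), which diverges as f approaches the gyrofrequency f_B.
f_over_fB = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7])
A_c = 1.0 / (1.0 / f_over_fB - 1.0)

for r, a in zip(f_over_fB, A_c):
    print(f"f/f_B = {r:.1f}  ->  A_c = {a:.2f}")
# A_c climbs from ~0.11 at f/f_B = 0.1 to 1.0 at 0.5 and ~2.33 at 0.7,
# i.e. it grows rapidly above about half the gyrofrequency.
```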
Lorentzian source
We proceed next to a Lorentzian source distribution with T=2 keV and p=1. The selection of larger or smaller T effects a shift in the dominant emission frequency to lower or higher values respectively. Although the inherent anisotropy of this distribution, A=1, is large, it is not atypical of plasma in the magnetosphere (Hargreaves, 1992). In Fig. 14 the normalised growth rate is plotted as a function of frequency and time for a selection of L. These growth rate spectrograms are most reminiscent of the observed events, exhibiting a band of emissions with well-defined upper and lower cutoffs, and a progression towards higher frequencies. The duration of the period of wave amplification and the range of frequencies concerned are consistent with observations. Comparison of the growth rates calculated here with the peak normalised growth rate of 1.4×10⁻² determined from GEOS 2 data by Cornilleau-Wehrlin et al. (1985) reveals that this model produces relatively small values of γ, yet these scale directly with the choice of n_h. Smith et al. (1999) observe that the amplitude of the signal begins to rise above the ambient level simultaneously in all VELOX channels, with the rate of increase diminishing at higher frequencies. This could be consistent with wave amplification within the injection region (Smith et al., 1999) but may also be accounted for by the rapid drift of the most energetic particles. The latter leads to the simulated growth rate rising above zero immediately following injection at all frequencies for which A>A_c.
A far greater uniformity in peak growth rates is evident across the range of frequencies in Fig. 14 as compared with the results for Maxwellian injections. This originates from the augmented high energy tail in the Lorentzian distribution, which admits more electrons in resonance with low frequency waves. Source distributions with smaller κ produce more substantial wave growth at lower frequencies, while as κ→∞ the Lorentzian distribution approaches a Maxwellian and the amplification of lower frequencies is curtailed.
In Fig. 14 it is apparent that a frequency exists above which there is no wave amplification. This frequency corresponds to that W for which the anisotropy is always sub-critical. The fact that this effective upper cutoff frequency falls at approximately half the gyrofrequency is entirely coincidental. Modification of the degree of anisotropy in the source distribution leads to variation in this cutoff frequency and in the duration of the emissions. In particular, increasing p results in electrons with large pitch angles forming a more substantial proportion of the population, and consequently, a larger range of frequencies exists for which the anisotropy is above the critical threshold and a more persistent event occurs because A>A_c for a longer period.
Since the azimuthal drift rate is proportional to L, it is moderately surprising that the duration of the events appears to increase with L. This may be understood as follows: for a fixed frequency the resonant energy decreases more rapidly than L⁻¹, and hence the drift rate, proportional to both W and L, of particles resonant with a particular frequency wave declines with increasing L. The modelled frequency sweep rate therefore varies inversely with L. The range of df/dt observed by Smith et al. (1996) could thus be accounted for by particles drifting on different L shells.
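The scaling argument can be made concrete with a rough dipole estimate. The sketch below uses the textbook gradient-curvature drift period for equatorially mirroring, non-relativistic electrons; it neglects the pitch-angle factor and the corotation included in Table 1, so the values are indicative only.

```python
import numpy as np

# Rough gradient-curvature drift periods for equatorially mirroring electrons
# in a dipole field: omega_d = 3 L W / (q B0 Re^2), T_d = 2 pi / omega_d.
# Neglects pitch-angle dependence and corotation, so values are indicative only.
Q = 1.602e-19        # elementary charge [C]
B0 = 3.11e-5         # equatorial field at Earth's surface [T]
RE = 6.371e6         # Earth radius [m]

def drift_period_min(L, W_keV):
    W = W_keV * 1e3 * Q                        # kinetic energy [J]
    omega_d = 3.0 * L * W / (Q * B0 * RE**2)   # azimuthal drift rate [rad/s]
    return 2.0 * np.pi / omega_d / 60.0        # drift period [min]

for L in (4, 5, 6):
    for W_keV in (10, 20, 40):
        print(f"L = {L}, W = {W_keV:3d} keV: T_d ~ {drift_period_min(L, W_keV):6.0f} min")
# Because the drift rate scales as W*L, the frequency sweep rate seen by an observer
# depends on how the resonant energy itself varies with L.
```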
Observer local time
The variation in the anisotropy, η and growth rate with the station local time is plotted in Fig. 15. At a fixed ψ0 the anisotropy is initially maximised; it decreases with time until it reaches a minimum, from which it recovers rather rapidly. The initial decrease is effected by electrons that have drifted directly from the source to the observer. However, as this population approaches cutoff, those electrons which have completed a full orbit around the Earth before returning to the observer begin to exert a larger influence. As time proceeds, the peaks in the curves for η shift to later local times as particles with the specified W drift eastwards. The peaks become progressively less sharp as these particles encompass a range of α (and hence W) and dispersive drift therefore spreads them out in time. The broader growth rate peaks predicted at later local times imply that events of longer duration should be expected for larger ψ0. This effect is also demonstrated in Figs. 14b, 16a and b. A natural consequence of a model based on energy dispersion is that for observers located at later local times, the peaks at higher frequencies should occur with greater delays, suggesting that SCE with lower slopes should be anticipated at stations located further from midnight. Although this effect, noted by Hughes (1995), is not clearly evident in the more extensive data of Smith et al. (1996, Fig. 11), it is indeed produced in the current model. Failure to clearly discern this trend experimentally may be attributable to variability in the factors characterising individual substorms.
The extent of the injection region does not prove to be a particularly significant factor in determining the duration of the modelled SCE, although a small extension of the emission period is apparent for injections distributed over a larger range of local times. The event duration is, however, sensitive to the local time of the observer, with longer events observed at later local times.
Conclusion
A simple model has been presented which examines the variation of the electron population on an observer's meridian resulting from the gradient-curvature and corotation drift of plasma injected around midnight. No dawn-dusk electric field was considered, nor were the effects of the atmospheric loss cone or other loss mechanisms taken into account. Subject to these conditions the anisotropy and whistler mode growth rate resulting from the cyclotron resonance interaction were calculated using the formulation of Kennel and Petschek (1966).
As noted by Cornilleau-Wehrlin et al. (1985), an analysis of this sort accounts for waves of a continuous broadband nature but does not describe the detailed frequency-time structure of chorus emissions, which consist of numerous short duration rising or falling tones. However, examination of the data presented in Fig. 1 over much shorter time scales reveals that the SCE may be composed of either discrete elements, consistent with the denomination of this event type, or broad-band emissions with very little structure. The model considered here is most applicable in the latter case. Collier and Hughes (2004) described SCE simulations based on a particle tracking approach, tracing the motion of injected electrons subject to E×B and gradient-curvature drifts. The effect of the radial motion of the injected population resulting from a dawn-dusk convection electric field was investigated. The conclusions of Smith et al. (1996) and Abel et al. (2002) regarding the limited contribution of dispersive drift to the SCE mechanism were supported by the results of Collier and Hughes (2004), where it was determined that a relatively large convection electric field was required to produce results resembling the observations. However, Collier and Hughes (2004) treated the action of each electron individually and a realistic assessment of the anisotropy and growth rate, which depend on the whole ensemble of particles, was not performed.
The model considered here neglects the convection electric field and treats only the azimuthal motion of the electrons. Furthermore, whereas Collier and Hughes (2004) only crudely accounted for the instability conditions, the present calculations accurately determine the growth rate arising from the anisotropy of the relevant portion of the electron population. The results presented here, in which a qualitative similarity between the modelled growth rate spectrograms and VLF observations is immediately apparent, suggest that it is possible to produce emissions with the characteristics of a SCE in the absence of a convection electric field, indicating that dispersive drift may indeed contribute substantially to the SCE mechanism.
A process has emerged which accounts for the rising frequency of the lower boundary of the SCE envelope. This transpires from a cutoff in the maximum W for electrons at the observer which decreases with time, and a consequent increase in the minimum possible resonance frequency.
The quoted results suggest certain characteristics of an injected electron population which would favour the generation of a SCE. If the injected particles are distributed uniformly over a segment of local time, then the population must be inherently anisotropic. A source distribution with a significant high energy tail produces more uniform growth rates over a range of frequencies. However, in the case of such a non-thermal distribution, it appears necessary to assume that the injected population is anisotropic in order for the dispersive drift to achieve an anisotropy greater than the critical threshold for a suitable frequency spectrum. In view of the fact that the spectrum of injected particles exhibits both upper and lower cutoffs, neither the Maxwellian nor Lorentzian description is entirely appropriate and further investigation should focus on a better characterisation of the injected population.
Fig. 1. Broad-band VLF data from SANAE-IV with examples of substorm chorus events. Each panel is a spectrogram representing the intensity of the signal as a function of frequency and time. Data were recorded in synoptic mode, with only one in every five minutes sampled.
Substorm chorus events are also apparent in the Halley [75°35′ S, 26°45′ W, L=4.4] VELOX data, examples of which are given in Fig. 2. Halley is located around 1.5 h west of SANAE-IV, which may account for the differences in the frequency sweep rate and the duration of events observed at both stations.
Fig. 3. Electron flux data from SOPA instruments on LANL geosynchronous satellites for SCE presented in Figs. 1b and d. The five traces represent different energy channels; from top to bottom they are 50-75, 75-105, 105-150, 150-225 and 225-315 keV. In the upper right corner of each panel is an inset indicating the location of the satellite at approximately the moment of injection (the arrow points sunward).
Fig. 4. Low energy MPA electron data from spacecraft 1990-095 for periods corresponding to the SCE presented in Figs. 1b and d.
Fig. 5. Variation of the H and D magnetic field components recorded by the Halley fluxgate magnetometer for the events displayed in Fig. 1.
Fig. 6. Data from the Nordic riometer chain for 2002 day 123, indicating enhanced electron precipitation associated with the SCE in Fig. 1d.
Fig. 7. Geometry of the model: injection over a range of local time, δψ, centred on midnight (green curve) and an observer (open box) located at local time ψ0 at the moment of injection. The distribution of the injection intensity in local time is represented by the shaded region.
Fig. 8. Simulated flux signatures at geosynchronous orbit, three (solid line) and six (dashed line) hours east of midnight. Injections distributed in local time according to Λ1 and Λ2. Energy channels are the same as in Fig. 3.
Li et al. (2003) used κ=1.8 while Birn et al. (1998) had κ=2.5; both set T=0.5 keV. The results quoted here are not especially sensitive to κ, and consequently an intermediate value of κ=2 is employed.
Fig. 9. Components of the particle energies at the observation point. The shaded regions represent the range of W⊥ and W∥ occupied by particles for t = 20, 30, 40, 60 and 80 min, δψ = 10° and ψ0 = 45° at L = 5. The arrows indicate the cutoffs in W at each of these times. Dotted lines are curves of constant pitch angle.
Fig. 10. Integrands in the numerator and denominator of Eq. (12) evaluated along the dashed line in Fig. 9, for W = 20 keV with Λ = Λ2. The dashed curves represent the variation of the source density function while solid curves reflect the relationship for the density function at the observer at selected times. A narrow source region, δψ = 10°, is employed to prevent overlap of the curves for different times.
Fig. 12. Normalised growth rate, γ/γmax, for an observer at ψ0 = 45° and L = 5. The upper two panels are shaded to indicate that these represent frequencies above f_B/2, which are not observable on the ground. The plots indicate that at each frequency an interval of wave growth occurs; these intervals arise with greater delay at higher frequencies.
Fig. 13. Factors contributing to the shape of the growth rate curves for the source distribution applicable to Fig. 12a at W = 20 keV. The initial rise in γ is associated with the increase in η as more particles with the specified W drift into the station's field of view. The subsequent decrease in γ occurs as the anisotropy of these particles approaches its critical value.
Fig. 15. Variation of (a) growth rate, (b) η and (c) anisotropy with the local time of the observer for electrons with W = 40 keV and δψ = 45° at L = 5.
Fig. 16. Normalised growth rate for an observer (a) at dawn and (b) in the morning sector on L = 5, for the source described in Fig. 14.
Figure 15a indicates that the largest growth rates are predicted for stations located at the earliest local times. This is confirmed by a comparison of Figs. 14b, 16a and b, which are each separated by 3 h of local time, and illustrate a systematic reduction in peak growth rate. The reduced growth rate is brought about by the dispersion of the particles, with the population becoming spread out in local time and fewer particles being present at a given instant for an observer located further east of midnight.
Table 1. Drift periods (min) in the corotating frame for particles with equatorial pitch angle α = 45° in a dipole field.
"year": 2004,
"sha1": "dd8a342bc251baf55e590f8a6faff6f05549f3ff",
"oa_license": "CCBY",
"oa_url": "https://angeo.copernicus.org/articles/22/4311/2004/angeo-22-4311-2004.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "dd8a342bc251baf55e590f8a6faff6f05549f3ff",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
Evaluating Evaluation Metrics: A Framework for Analyzing NLG Evaluation Metrics using Measurement Theory
We address a fundamental challenge in Natural Language Generation (NLG) model evaluation -- the design and evaluation of evaluation metrics. Recognizing the limitations of existing automatic metrics and the noise introduced by how current human evaluation is conducted, we propose MetricEval, a framework informed by measurement theory, the foundation of educational test design, for conceptualizing and evaluating the reliability and validity of NLG evaluation metrics. The framework formalizes the sources of measurement error and offers statistical tools for evaluating evaluation metrics based on empirical data. With our framework, one can quantify the uncertainty of the metrics to better interpret the results. To exemplify the use of our framework in practice, we analyzed a set of evaluation metrics for summarization and identified issues related to a conflated validity structure in human evaluation and to reliability in LLM-based metrics. Through MetricEval, we aim to promote the design, evaluation, and interpretation of valid and reliable metrics to advance robust and effective NLG models.
Introduction
Evaluation metrics provide quantitative assessments to guide model development, benchmark scientific progress, and inform generalizability across tasks and domains (Novikova et al., 2017). Effective evaluation metrics can extract valuable signals and robust evidence from model outputs that describe model capability, diagnose model failures, and compare the strengths and weaknesses of different models, allowing for more informed decision-making in real-world deployment (Zhou et al., 2022). Conversely, problematic evaluation metrics can mislead model diagnoses, development, and deployment, resulting in downstream harms to individuals and society (Yeo and Chen, 2020; Sheng et al., 2021).
Figure 1: METRICEVAL: A framework that conceptualizes and operationalizes four main components of metric evaluation, in terms of reliability and validity.
Designing effective evaluation metrics for natural language generation (NLG) tasks has long been challenging due to the complex nature of language, the open-endedness of tasks, and the multifaceted and context-dependent definition of language quality (Nema and Khapra, 2018; Zhou et al., 2022; Gehrmann et al., 2022; Sai et al., 2022). Most recently, the NLG evaluation challenge has been further exacerbated by the emergence of "general-purpose" large language models (LLMs), demanding evaluation methods that capture model utility for diverse downstream use cases. To address these challenges, researchers and practitioners have developed various types of NLG evaluation metrics, including word-based metrics (e.g., ROUGE, BLEU), embedding-based metrics (e.g., BERTScore, MoverScore), and end-to-end metrics (e.g., BLEURT, G-Eval).
With this increasingly rich set of evaluation metrics being pursued, we must understand how good each metric is. While researchers have pointed out shortcomings of popular metrics (e.g., ROUGE), such as their inability to capture semantic meaning, insensitivity to perturbations, and failure to reflect real-world performance (Sai et al., 2021; Liu et al., 2016; Reiter, 2018; Celikyilmaz et al., 2020; Kauchak and Barzilay, 2006), there is a lack of principled approaches to evaluate NLG evaluation metrics, and, to begin with, a lack of a clear definition of what makes a metric good.
Some prior works have attempted to evaluate the quality of NLG evaluation metrics by their correlations with human judgments (Sai et al., 2021; Fabbri et al., 2021; Liu et al., 2016), which are deemed the gold standard for quality assessment. However, correlation with human preferences gives limited quality signals. More problematically, human evaluation data collection itself currently suffers from validation, standardization, consistency, and reproducibility issues (Clark et al., 2021; Howcroft et al., 2020; Belz et al., 2021; Khashabi et al., 2021). These issues subsequently undermine its validity as the foundation for evaluating automatic metrics.
In this paper, we introduce METRICEVAL, a theoretical framework to define the desiderata of and assess evaluation metrics by drawing from measurement theory in educational and psychological testing. Based on two core concepts in measurement theory that define a "good" metric in testing individual capabilities, reliability and validity, our framework conceptualizes and operationalizes four key desiderata: Metric Stability, Metric Consistency, Metric Construct Validity, and Metric Concurrent Validity. We further propose a set of statistical tools to quantify these desiderata to systematically evaluate evaluation metrics. METRICEVAL enables quantifying the standard error of each metric on a specific task, which allows meaningful interpretation of the evaluation results. We demonstrate the utility of METRICEVAL with a case study of evaluating 16 metrics, including LLM-based metrics, on a summarization task.
This paper offers three contributions:
• Introduce and transfer metric evaluation desiderata and methods from measurement theory in educational and psychological testing to NLG evaluation.
• Propose METRICEVAL, a theory-driven framework with a set of statistical tools for systematically analyzing and evaluating NLG metrics.
• A case study demonstrating how to apply our framework and identify issues of evaluation metrics for a summarization task.
Measurement Theory
Originating from educational and psychological testing, measurement theory aims to inform evaluation processes that devise a coherent numerical representation of individual capabilities, for instance, evaluating a person's language proficiency through essay responses to questions. Scores on these tests have direct consequences for high-stakes decisions, such as school admissions.
Key to measurement theory is the distinction between the observed score on a test, e.g., the score of an examinee's essay in a language proficiency exam, and the true score on the general construct (Cronbach and Meehl, 1955) that the test is theorized to measure, e.g., language proficiency. The gap between the observed and true scores is referred to as measurement error (Allen and Yen, 2001). Measurement theory defines two sources of measurement error, random and systematic. Random measurement errors are fluctuations specific to a time, place, examinee, and exam question that are transient and balance out to 0 over repeated measures, and they have direct consequences on the reliability of the evaluation process. Systematic errors, on the other hand, are persistent shifts across one or more time, place, examinee, or exam questions, and they have direct consequences on test validity by producing observed scores with systematic deviations from the true score that the test purports to measure (e.g., a downward bias if the rubric on a language proficiency exam looks for specialized knowledge about a certain subject).
By evaluating and identifying the source of measurement errors, the test designer could iteratively improve their test design by adding or removing test items or changing their rubric. The results can also help the evaluator interpret a test score with caution. For example, we can quantify the uncertainty as the standard error for meaningful comparison: when the difference between two models' metric scores on a benchmark does not afford conclusions about significant differences, the evaluator may consider narrowing the confidence interval by averaging scores from repeated measurements.
In short, as a safeguard to the trustworthiness of tests, measurement theory offers a conceptual framework for how the validity and reliability of a test should be formalized, evaluated, and optimized to reduce measurement error with the aid of statistical methods and tools.
Transferring Measurement Theory to the Context of NLG
There are obvious analogies between the measurement of human capability and the evaluation of NLG models. First, NLG evaluation is often performed via benchmarking. A benchmark data set is similar to an educational test consisting of a tailored collection of test examples, where question-specific scores are calculated based on predefined evaluation metrics (similar to scoring rubrics for human testing) and are aggregated into an overall score for the model. Second, when evaluating a model, we similarly hope to derive scores based on a candidate model's observed performance on a benchmark, so as to (1) draw inferences about unobservable capability in a specific domain (e.g., summarization) and (2) provide guidance on the model's expected behavior in future tasks of the domain. Similar to educational testing, the score is often interpreted and used beyond its nominal meaning, i.e., implying the model's general performance beyond the particular benchmark dataset. As more models are considered "general-purpose" models, there is an increasing need for measuring an NLG model's unobservable capability.
The conceptual and statistical tools provided by measurement theory can be transferred to assist in evaluating NLG metrics, specifically to quantify and identify different sources of measurement error. Not only can these tools help the community systematically assess the shortcomings of evaluation metrics and identify misleading ones, but they also guide the interpretation of their evaluation results, as well as the re-design of existing metrics and the development of new ones. In the next section, we elaborate on how we transfer these tools from measurement theory to a framework that defines and assesses the reliability and validity of NLG evaluation metrics, and how they may help us interpret and improve NLG evaluation.
Metric Evaluation Framework
In this section, we introduce METRICEVAL and its components that define and assess different aspects of the "goodness" of NLG evaluation metrics, inspired by the core concepts of reliability and validity in measurement theory (see Fig. 1). METRICEVAL aims to evaluate and compare the reliability and validity of the metrics. For example, to evaluate a summarization model, one can apply reference-based metrics (e.g., ROUGE, BertScore), reference-free metrics (e.g., SUPERT), or human ratings on specific output quality aspects (e.g., coherence or relevance) on the same benchmark to draw inferences about model capability. For the remainder of this section, we will illustrate our framework with this running example of evaluating summarization models with diverse metrics.
It is important to note that the quality of evaluation results also depends on the chosen dataset and reference (for reference-based metrics), which, in NLG evaluation, are concerned with benchmark design. Measurement errors may cascade from those components to the observed score. In this work, we focus on the metric aspect and answer questions such as: given a CNN/Daily Mail benchmark, does using ROUGE, BertScore, or human ratings offer reliable and valid evaluation results? This is an important question given the far-reaching impact that prevalent benchmarks can have on the output of the research community.
Reliability
The reliability of a metric is the extent to which the result is subject to random measurement error and thus (in)consistent across repeated measures, such as different (sub-)datasets within a benchmark or different raters scoring the model's output in human evaluation. Suppose two NLG models are scored on their performance based on Metric-A on a summarization benchmark. Researchers and practitioners often use the scores to draw inferences about the models' (relative) performances. When the two models are reported to differ in their scores (e.g., Metric-A = .39 vs. .42), a natural question is how much this reflects actual differences (true signal) versus fluctuations due to random measurement error (noise). If Metric-A is unreliable, the measurement error may mislead the comparison.
Sources of random measurement error that impair the reliability of a metric may include:
• Non-deterministic algorithms of some metrics may produce score variations on the same model outputs.
• The subsets of data points (e.g., different genres of articles) included in the benchmark.
• For human evaluations, the variability across raters, resulting from their subjectivity, inconsistency, errors, and so on.
In classical test theory (Spearman, 1904), the observed metric score of a model (X) is equal to the sum of the true score (T) and error (E), i.e., X = T + E, where E is assumed to be independent of T and to fluctuate around 0 with variance σ²_E. The goal of evaluating a metric's reliability is hence to quantify the expected amount of uncertainty in the observed score due to random measurement error, known as the standard error of measurement (σ_E).
Empirical estimation of σ_E is done via the reliability coefficient of a metric, denoted ρ²_XT ∈ [0, 1]. Formally, the reliability coefficient is defined as the proportion of variance in the observed score explained by the variance in the true score across NLG models rather than error, or equivalently, the squared correlation between X and T:
ρ²_XT = σ²_T / σ²_X = 1 − σ²_E / σ²_X.  (1)
Metrics with higher reliability coefficients are more desirable. However, in reality, neither T nor E is observed. The reliability coefficient in Equ. 1 cannot be directly computed and is statistically approximated via several possible estimators. METRICEVAL proposes to estimate the reliability coefficient from both Metric Stability and Metric Consistency. They reflect different reliability issues that can arise in different types of metrics, as we elaborate below. By quantifying and identifying reliability issues, metric developers can improve the scoring algorithms, and metric users can make more informed decisions in choosing metrics, interpret performance differences, and adopt mitigation strategies, e.g., increasing the test set size to mitigate consistency issues (Spearman, 1910).
Metric Stability
Metric Stability refers to how a metric score may fluctuate when evaluated again on the same model output. While we would expect perfect stability (i.e., σ_E = 0) for deterministic metrics, such as ROUGE-1 (Lin, 2004), the stochastic nature of some metrics (e.g., G-Eval (Liu et al., 2023)) may produce undesirable fluctuations when evaluating the same model outputs. As we see increasing use of automatic evaluation metrics with built-in stochasticity (e.g., LLM-based metrics), the stability of an evaluation metric in producing consistent scores on an output from one replication to another will be increasingly relevant.
We propose to quantify metric stability via the test-retest reliability coefficient: on the output generated by N models, we compute the metric score on the same output twice for each model. Across different models, the Pearson correlation between the two sets of scores is the test-retest reliability coefficient. One can show that this correlation is an estimate of the reliability coefficient ρ²_XT as defined in Equ. 1. This is because, for the two metric scores of a model, X₁ = T₁ + E₁ and X₂ = T₂ + E₂, the correlation in the observed scores, ρ_X₁X₂, is algebraically equivalent to ρ²_X₁T₁ under the assumption that each model's true score does not change (i.e., T₁ = T₂) and that the expected fluctuation in metric evaluation remains the same (i.e., σ_E₁ = σ_E₂) across the two evaluations (see derivations in Allen and Yen, 2001).
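As a minimal illustration of this procedure, the sketch below computes the test-retest coefficient from two hypothetical per-model score vectors; the numbers are invented for illustration only.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-model average scores from two runs of a stochastic metric
# (e.g., an LLM-based evaluator) on identical model outputs.
run_1 = np.array([0.41, 0.39, 0.52, 0.47, 0.35, 0.44, 0.50, 0.38])
run_2 = np.array([0.43, 0.38, 0.50, 0.48, 0.36, 0.42, 0.51, 0.40])

# Test-retest reliability: Pearson correlation across models between the two runs,
# which estimates the reliability coefficient under the assumptions stated above.
stability, _ = pearsonr(run_1, run_2)
print(f"Metric stability (test-retest reliability) = {stability:.3f}")
```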
Metric Consistency
Metric Consistency describes how the metric score fluctuates within a benchmark dataset, i.e., across data points. If the metric score computed on each individual data point (e.g., summarization of a specific news article) deviates substantially from the average score across the benchmark dataset (e.g., across 100 news articles), the metric score is less reliable, in that it is more sensitive to perturbations in the specific data points employed in the benchmark dataset. In this case, for a specific model, the average metric score on any subset of tasks (e.g., 50 out of 100 news articles) is expected to be sensitive to the choice of included examples, and a good proportion of the difference across two evaluated models' average scores would also be attributed to this noise. Drawing from the estimation of internal consistency reliability in measurement theory, the estimation of metric consistency depends on the degree to which scores from different subsets of the benchmark dataset agree with one another.
The coefficient α (Cronbach, 1951) provides a measure of Metric Consistency. Let J denote the total number of data points in the test dataset, Y_j the observed score (of a model) on the jth data point alone (e.g., the jth news article), and X = Σ_{j=1}^{J} Y_j the overall score of the model on the full test set. Then α provides a lower bound to the true reliability of X, i.e.,
ρ²_XT ≥ α = (J / (J − 1)) (1 − Σ_{j=1}^{J} σ²_{Y_j} / σ²_X),  (2)
where σ²_{Y_j} is the variance of Y_j across models. Equality holds when all the individual data point scores (Y_j s) have equal correlations with the true score (T), which may be violated in practice, leading to the underestimation of true reliability via the coefficient α formula.
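A minimal sketch of the coefficient α computation of Equ. 2, assuming a hypothetical score matrix with one row per evaluated model and one column per benchmark data point; the data are simulated for illustration.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Coefficient alpha for a (num_models x num_data_points) score matrix.

    alpha = J/(J-1) * (1 - sum_j var(Y_j) / var(X)),
    where Y_j is the per-data-point score and X = sum_j Y_j is the total score.
    """
    J = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)        # variance of each data point's score across models
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of the summed (test-level) score
    return J / (J - 1) * (1.0 - item_vars.sum() / total_var)

# Hypothetical scores for 6 models on 5 benchmark data points:
# a shared per-model quality signal plus per-data-point noise.
rng = np.random.default_rng(0)
true_quality = rng.normal(size=(6, 1))
scores = true_quality + 0.3 * rng.normal(size=(6, 5))
print(f"Metric consistency (coefficient alpha) = {cronbach_alpha(scores):.3f}")
```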
Validity
Validity is another core component of METRICEVAL. Metrics with low validity lead to systematic measurement errors that deviate the observed score from the true score that the test purports to measure. In other words, benchmarking is valid only when the metric scores can inform their intended interpretations (e.g., model capability) and uses (e.g., predicting models' real-world behavior).
Our framework is theoretically grounded in Messick's unified theory of test validity (e.g., Messick, 1995), under which the emphasis is given to the validation of inferences drawn from the test score, rather than the validation of the test itself. Different types of validities should be recognized as possible ways to gather supporting evidence for intended inferences (interpretations and uses) from the metric score. Our framework conceptualizes two types of a metric's validity, concurrent validity and construct validity (e.g., Allen and Yen, 2001), which can be applied in different situations, i.e., when a validated reference criterion is available or not, as we elaborate below.
Metric Concurrent Validity
Metric Concurrent Validity relies on another validated metric as the reference criterion. This type of validity is most relevant when evaluating a metric as an alternative to existing ones that may be expensive or infeasible to acquire in practice. For example, evaluations by trained human experts are often challenging at a large scale, motivating the development of automatic alternatives. One can conclude that an automatic metric is a valid proxy if it has high concurrent validity using the expert evaluation results as the reference criterion.
When both the target evaluation metric (X, e.g., a new automatic metric) and the reference criterion (Y, e.g., expert evaluation) are continuous, a straightforward way to quantify concurrent validity is via their Pearson correlation, ρ_XY, often referred to as the (criterion-related) validity coefficient. One should note that measurement error in either X or Y is expected to attenuate this correlation (Spearman, 1910): at the population level, ρ_XY is bounded above by the square root of the product of the two scores' (X and Y) reliabilities. This again highlights the importance of safeguarding the reliability of the evaluation metric, as a noisy metric with low reliability is expected to yield poor predictive power on the criterion of interest.
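The sketch below illustrates this calculation under assumed values: hypothetical per-model scores for the candidate metric and the criterion, plus assumed reliability estimates for the attenuation ceiling.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-model scores for a candidate metric (X) and a validated
# reference criterion (Y), e.g., trained expert evaluation.
metric_scores    = np.array([0.42, 0.39, 0.55, 0.47, 0.33, 0.50, 0.44, 0.37])
criterion_scores = np.array([3.1, 2.8, 4.2, 3.6, 2.5, 3.9, 3.3, 2.9])

validity, _ = pearsonr(metric_scores, criterion_scores)

# Attenuation bound (Spearman, 1910): the observed correlation cannot exceed
# sqrt(reliability_X * reliability_Y). The reliabilities here are assumed values.
rel_x, rel_y = 0.92, 0.85
upper_bound = np.sqrt(rel_x * rel_y)

print(f"Concurrent validity coefficient = {validity:.3f} "
      f"(attenuation ceiling ~ {upper_bound:.3f})")
```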
Metric Construct Validity
Construct validity, a term coined by Cronbach and Meehl (1955), refers to the degree to which the observed behaviors on the test (e.g., test scores) can reasonably reflect the intended construct (e.g., language proficiency). This notion is directly applicable to evaluation metrics that are explicitly constructed to assess specific aspects of a model's performance or output quality, e.g., human evaluation (or automatic metrics, if developed specifically) on summarization coherence, fluency, consistency, etc. However, even for metrics of which the intended construct is not explicitly defined, it is still necessary to understand what underlying dimensions of model capabilities they actually capture.
It is important to note that the underlying construct is often latent and not directly observable, making it difficult to assess its relation with the measure directly. Measurement theory therefore provides statistical tools to assess the construct validity of a measure through its relation with other observable variables (e.g., other tests purported to reflect the same or different constructs). We consider three such aspects of validity based on the measurement literature:
• Metric Convergent Validity: Whether metrics of identical or related construct(s) are indeed related. For example, for the same aspect of summarization quality (e.g., coherence), scores provided by different evaluation methods (e.g., by different raters) should be highly correlated.
• Metric Divergent Validity: Whether metrics of unrelated constructs are indeed unrelated. For example, for distinct aspects of summarization quality (e.g., coherence and relevance), scores provided by the same method (e.g., by the same rater) should show substantially lower correlations than those for the same quality across methods. Low divergent validity could indicate method bias: e.g., the observed score depends greatly on the rater's subjective tendency rather than the model's performance on the rated dimension.
• Metric Factorial Validity: Whether the observed metric scores align with the theory about unobserved factors underlying the scores. For example, if scores on multiple evaluation metrics exhibit high correlations, this might suggest the presence of a common underlying factor causing these scores to move in unison.
We introduce two statistical tools to evaluate these aspects of construct validity. Specifically, Metric Convergent Validity and Metric Divergent Validity can be evaluated through the analysis of a multitrait-multimethod (MTMM) table, and Metric Factorial Validity can be evaluated via factor analysis. Note that these validity evaluation methods will only inform if there is an underlying construct or how many of them are being captured. Defining what these constructs are will require further conceptualization and theorizing.
The MTMM table presents a way to scrutinize whether observed metric scores act in concert with theory on what they intend to measure, when two or more constructs are measured using two or more methods (Campbell and Fiske, 1959). For example, when evaluating a summarization model, researchers may ask several raters to rate the generated outputs on four "traits" (aspects of output quality), e.g., coherence, consistency, fluency, and relevance. In this case, the MTMM table allows examining whether, across different raters (evaluation methods), the raters' scores indeed appear to characterize the model's performance on four distinct constructs. By convention, an MTMM table reports the pairwise correlations of the observed metric scores across raters and traits on the off-diagonals and the reliability coefficients of each score on the diagonals. The analysis of an MTMM table is exemplified in Sec. 4.1.2.
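A rough sketch of how the correlational part of an MTMM table can be assembled, assuming a hypothetical table of ratings with one column per (rater, dimension) pair; the reliability diagonal and significance testing are omitted.

```python
import numpy as np
import pandas as pd

# Hypothetical per-model ratings; each column is one (rater, dimension) combination.
rng = np.random.default_rng(1)
n_models = 20
coherence = rng.normal(size=n_models)   # latent "true" coherence of each model
fluency = rng.normal(size=n_models)     # latent "true" fluency of each model

ratings = pd.DataFrame({
    "rater1_coherence": coherence + 0.3 * rng.normal(size=n_models),
    "rater2_coherence": coherence + 0.3 * rng.normal(size=n_models),
    "rater1_fluency":   fluency + 0.3 * rng.normal(size=n_models),
    "rater2_fluency":   fluency + 0.3 * rng.normal(size=n_models),
})

# Off-diagonal part of an MTMM table: pairwise Kendall correlations across all
# (method, trait) columns. Same-trait / different-rater entries should be high
# (convergent validity); different-trait entries should be lower (divergent validity).
mtmm = ratings.corr(method="kendall").round(2)
print(mtmm)
```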
Factor Analysis examines Metric Factorial Validity (e.g., Thurstone, 1947) when the observed metric scores are assumed to measure a smaller number of unobserved factors. For example, if scores from multiple evaluation metrics exhibit high correlations, this might suggest the presence of a common underlying factor causing these scores to move in unison. Under a factor analysis model, the distribution of the observed score on an indicator X_j, such as a particular evaluation metric, is a function of a linear combination of the model's factor scores on K ≥ 1 general latent factors (f_1, …, f_K) and the unique score U_j on indicator j unexplained by the latent factors, including measurement error, i.e.,
X_j = f(λ_j1 f_1 + … + λ_jK f_K + U_j),  (3)
where f(·) can be the identity function for normally distributed observed scores, but when scores are ordinal (e.g., expert ratings on a 5-point scale) or skewed, we suggest adopting an ordinal factor model (Muthén, 1984) in which f(·) is a step function that evaluates whether the linear term in Equ. 3 exceeds specific thresholds for each score category on the latent continuum. Factor analysis can be exploratory or confirmatory. In the latter, select loadings (λ_jk s) are constrained to 0 to represent the theorized nomological network, e.g., an expert rating on consistency loads on no other dimensions. By establishing Metric Factorial Validity through factor analysis, we could further develop more effective metrics by answering the following questions:
• Fit indices: For confirmatory factor analysis, how well does the theorized factorial structure align with the observed data?
• Factor scores: What is an NLG model's factor score on a particular dimension?
• Factor loadings: How much does a specific factor affect an observed metric score?
• Residual correlation: For different evaluation metrics, are the residuals (unexplained score variation by the common factors) correlated, which may suggest additional dimensions?
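As a simplified stand-in for the ordinal, confirmatory model described above (which would typically be fitted with structural-equation-modelling software), the sketch below runs an exploratory factor analysis with scikit-learn on simulated metric scores to illustrate loadings, factor scores, and residual variances.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical matrix of observed scores: rows are summaries (or models),
# columns are evaluation metrics / expert-rated dimensions.
rng = np.random.default_rng(2)
n_samples, n_factors = 300, 2
latent = rng.normal(size=(n_samples, n_factors))               # unobserved factors
loadings = np.array([[0.9, 0.1],    # metric 1 loads mainly on factor 1
                     [0.8, 0.2],    # metric 2 loads mainly on factor 1
                     [0.1, 0.9],    # metric 3 loads mainly on factor 2
                     [0.2, 0.8]])   # metric 4 loads mainly on factor 2
observed = latent @ loadings.T + 0.3 * rng.normal(size=(n_samples, 4))

# Exploratory factor analysis: recover loadings (up to rotation/sign),
# residual variances, and per-sample factor scores.
fa = FactorAnalysis(n_components=n_factors, random_state=0).fit(observed)
print("Estimated loadings (metrics x factors):")
print(np.round(fa.components_.T, 2))
print("Residual (unique) variances:", np.round(fa.noise_variance_, 2))
factor_scores = fa.transform(observed)   # scores on the latent factors
```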
Case Study
To illustrate how to apply METRICEVAL to evaluate NLG evaluation metrics, in this section we ran a case study on evaluating summarization metrics. As noted earlier, our evaluation focuses on the metrics, and the results should be interpreted as dependent on the benchmark dataset used. We leave it for future research to explore the generalizability of the results across different benchmark datasets.
Summarization Metric Evaluation
We analyzed the SummEval dataset (Fabbri et al., 2021), a benchmark for summarization tasks. The benchmark contains 1700 summaries generated by 17 models on the CNN/Daily Mail dataset. In this dataset, each generated summary was rated by three experts, who provided 5-point-scale ratings on four dimensions: Coherence, Consistency, Fluency, and Relevance. We ran 16 types of popular automatic metrics, including rule-based metrics, embedding-based metrics, end-to-end metrics, and LLM-based metrics that are reference-based or reference-free; see Appx. A.1. Since the score distributions of many evaluation metrics were skewed, we normalized automatic evaluation scores for the subsequent analyses; see Appx. A.2.
Metric Stability and Consistency
To evaluate an automatic metric's stability, we computed the metric score twice for each model's output on each data point, calculated two sets of average scores for each model, and reported the correlation between the two sets of scores on the 17 models. A metric's consistency was evaluated via the coefficient α in Equ. 2. Fig. 2 presents the Metric Stability and Metric Consistency estimates of selected metrics (full results in Fig. 5 in the Appendix). Most metrics achieved high stability. Metrics with non-deterministic algorithms, such as the LLM-based metric G-Eval, displayed higher levels of measurement error in terms of Metric Stability, although compared to G-Eval with GPT-3.5, G-Eval with GPT-4 yields higher stability. For Metric Consistency, within the ROUGE family a longer n-gram makes the metric less reliable and more prone to potential data perturbations in the test dataset. Therefore, to mitigate measurement error, for less stable LLM-based metrics the metric user should consider aggregating scores over multiple runs, and for less consistent metrics such as ROUGE-4 the evaluator should consider using a larger test dataset. To illustrate, Fig. 3 shows the relationship between G-Eval metric consistency and test dataset size. Conventionally, a reliability coefficient above .9 indicates good reliability. The metric stability and consistency estimates can help approximate the standard error of measurement of an average test dataset metric score (X), by observing from Equ. 1 that σ_E = σ_X √(1 − ρ²_XT). For example, the sample standard deviation in the average test dataset METEOR score was .38 and the metric consistency estimate was .966, translating to an expected measurement error due to score variability across the 100 data points of .38 × √(1 − .966) ≈ .07. A METEOR score difference between two models of less than .07 would thus be of limited interest, as the difference is smaller than the expected amount of fluctuation in the score due to measurement error.
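The standard-error calculation in the METEOR example generalizes directly; a minimal sketch, using the numbers quoted above and two hypothetical model scores for the comparison step:

```python
import numpy as np

def standard_error_of_measurement(score_sd: float, reliability: float) -> float:
    """sigma_E = sigma_X * sqrt(1 - reliability), following Equ. (1)."""
    return score_sd * np.sqrt(1.0 - reliability)

# Values from the METEOR example in the text: sd of test-set-average scores = .38,
# metric consistency (coefficient alpha) = .966.
sem = standard_error_of_measurement(score_sd=0.38, reliability=0.966)
print(f"Standard error of measurement ~ {sem:.3f}")

# A difference between two models smaller than roughly this amount is within the
# expected fluctuation due to measurement error and should not be over-interpreted.
# The two model scores below are hypothetical.
model_a, model_b = 0.312, 0.355
gap = abs(model_a - model_b)
print(f"Score gap = {gap:.3f}; "
      f"{'within' if gap < sem else 'larger than'} the expected measurement error")
```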
Metric Construct Validity
We begin by evaluating the construct validity of the expert ratings in the SummEval dataset. These evaluations were conducted in a confirmatory manner, assuming that the four ratings provided by each expert on a summarization output's Coherence, Consistency, Fluency, and Relevance indeed measure four distinct dimensions. Tab. 1 presents the MTMM table for the three experts' ratings on the four dimensions. Metric Convergent Validity can be examined by inspecting the bolded entries: inter-rater agreements based on Kendall's τ on the same dimension were high (.40-.81) in general but lower for Fluency (.40-.63). Italic entries can inform the evaluation of Metric Divergent Validity: overall, an expert's ratings on different dimensions showed lower correlations than ratings by different experts on the same dimension, with the exception of Coherence and Relevance, which sometimes showed correlations (underscored, .65-.71) nearly as high as those on ratings for Coherence (or Relevance) across raters (.69-.81). This may suggest that, although the expert raters were asked to separately rate Coherence and Relevance, they might inherently be rating the summarization outputs on the same underlying characteristic.
Confirmatory factor analysis was further conducted (see Appx. A.4) to test the observed conflated validity structure indicated by the MTMM analysis. The results show that the four-factor model fitted the observed data adequately well (Comparative Fit Index = .999, Tucker-Lewis Index = .999, Root Mean Square Error of Approximation = .047 < .05), supporting the theorized loading structure, i.e., the experts indeed rated on four factors. However, the estimated factor correlations suggested high correlations between dimensions, especially between Coherence and Relevance. This result supports a conflated validity structure.
Since Coherence and Relevance are distinct by definition (Fabbri et al., 2021), the conflated validity structure indicates potential issues in the expert rating process. Such issues may cascade if new automatic evaluation metrics were trained or validated on these expert ratings. In this case, several remedial steps are advised, including (1) revisiting the dimensions' conceptual distinctiveness and, if needed, revising the theoretical framework; (2) reviewing the human-annotation guidelines; and (3) examining the test set to assess its ability to distinguish model performance across dimensions. If these actions are insufficient, we recommend that the community consider alternative conceptualizations of summarization quality, as suggested in recent works (Liu et al., 2022; Clark et al., 2023).
Metric Concurrent Validity
For each automatic evaluation metric, we evaluated its concurrent validity: i.e., should the metric score be used to predict the expert rating on a dimension as a more cost-efficient alternative? Different from prior studies (Fabbri et al., 2021), we report Kendall's τ between each model's metric scores and the factor scores (instead of the raw means) based on expert ratings on the four dimensions. The metric concurrent validity coefficients are presented in Fig. 4 (see full results in Appx. Fig. 8).
Although BARTScore and G-Eval were sensitive to detecting quality signals in all four expert-rated dimensions, the lack of variance in the validity coefficients surfaces another issue: their limited capability to distinguish different dimensions. For example, although G-EVAL-4-COH is designed for Coherence, it strongly correlated with Fluency (.75) and Relevance (.75). As a reference, its correlation with Coherence was .65, and the correlation between G-EVAL-4-FLU (designed to assess Fluency) and Fluency was only .60. This echoes the conflation observed in the expert ratings, where correlations among different dimensions (underscored italic entries) often exceed the correlations on the same rated dimension across methods (bolded). On the contrary, SummaQA reacts only to Consistency, which makes it a more desirable metric for Consistency even though its correlation with the expert rating was slightly lower than that of G-EVAL-4-CON. This might guide the refinement and disambiguation of prompts for LLM-based evaluations, in search of prompts that correlate strongly with the target dimension but are less confounded by the other dimensions.
Summary of the Case Study
Our findings indicate that metrics based on LLMs exhibited lower stability, some below the conventional threshold of .9. In the ROUGE metric family, an increase in n-gram length was associated with decreased metric consistency, heightening susceptibility to benchmark task perturbations. Both the MTMM and factor analyses identified a conflation between expert ratings of Coherence and Relevance. Lastly, while BARTScore and G-Eval demonstrated high agreement with expert-rated dimensions, the lack of variability in metric concurrent validity suggested a lack of differentiation between theorized dimensions.
Related Work
NLG evaluation metrics undergo validation through various methods. The most widely used method is examining correlation with human judgments (Sai et al., 2021; Fabbri et al., 2021; Liu et al., 2016). Beyond correlation, Ni'mah et al. (2023) proposed a comprehensive framework checklist, aiming to verify the faithfulness of automatic metrics to human preferences at both the aspect and system levels. Fomicheva and Specia (2019) analyzed the local dependency between metric and human judgments and looked into the consistency of human evaluation. However, the inconsistency and subjectivity of human judgment, in addition to the non-transparent and non-standardized annotation process (Sai et al., 2021; Liu et al., 2016; Reiter, 2018; Celikyilmaz et al., 2020; Kauchak and Barzilay, 2006; Sai et al., 2022), create a shaky foundation. Another approach to evaluating metrics is data perturbation and resampling (Caglayan et al., 2020; Sai et al., 2021; Deutsch et al., 2021). Such methods can diagnose a metric's consistency and robustness across out-of-distribution datasets.
In addition, researchers have conducted qualitative analyses (Zhang et al., 2019; Tao et al., 2018; Hanna and Bojar, 2021). Although qualitative analyses provide in-depth insights, they are not scalable or cost-efficient. Closely aligned with our efforts, Von Däniken et al. (2022) introduced a theoretical framework to examine the reliability of binary metrics.
Conclusion
Evaluation metrics inform model capability and guide model development. Drawing from the core concepts of reliability and validity in measurement theory, we present METRICEVAL, a framework that conceptualizes and operationalizes four key desiderata for NLG metrics. With a collection of statistical tools, METRICEVAL offers the community an effective and principled way to analyze, evaluate, and understand NLG evaluation metrics.
Limitation
Evaluating evaluation metrics for NLG models should not be treated as a single-shot task. Instead, as suggested in Messick's unified theory of validity (Messick, 1995), it is essential to continuously gather cumulative evidence of validity to ensure the ongoing effectiveness and reliability of the metrics. The process of accumulating validity evidence is an iterative and dynamic endeavor that aligns with the evolving landscape of NLG models and their applications. Future studies are necessary to collect other types of evidence, such as a metric's ability to predict users' preferences, to continuously evaluate the effectiveness of an NLG metric.
Measurement errors may surface and accumulate at every stage of the evaluation process, including benchmark design, data collection, etc. To perform the analysis of evaluation metrics, we have to assume the reliability and validity of the other parts of the evaluation process. Therefore, the results of the case study should be interpreted as dependent on the benchmark used, e.g., the CNN/Daily Mail dataset. Further work is required to study the generalizability of the results across different benchmarks.
Our framework does not aim to provide comprehensive coverage of all sources of measurement error in NLG evaluation metrics. For example, we did not discuss predictive validity in our framework despite its importance in educational and psychological testing. We encourage researchers and practitioners to extend our framework to other types of reliability and validity and to build datasets that support more comprehensive analyses, e.g., a dataset with models' real-world performance, to deepen our knowledge of NLG metric evaluation.
A Appendix
A.1 Metrics
Our selection of evaluation methods includes popular metrics for NLG tasks, covering both reference-based and reference-free metrics. Compared to the original SummEval dataset, we additionally selected end-to-end metrics and recent LLM-based metrics.
ROUGE (Lin, 2004) evaluates the generated summary by comparing the number of overlapping word sequences (n-grams) with a set of reference summaries.
S3 (Peyrard et al., 2017) is a model-based metric that combines existing evaluation metrics like ROUGE, JS-divergence, and ROUGE-WE. It utilizes these metrics as input features to predict the evaluation score.
BertScore (Zhang et al., 2019) calculates similarity scores by aligning the generated and reference summaries at the token level. Token alignments are determined greedily to maximize the cosine similarity between contextualized token embeddings from BERT (Devlin et al., 2018).
MoverScore (Zhao et al., 2019) quantifies the semantic distance between a summary and a reference text by utilizing the Word Mover's Distance (Kusner et al., 2015).This distance measure operates over n-gram embeddings obtained from BERT representations.
SummaQA (Scialom et al., 2019) utilizes a BERT-based question-answering model to respond to cloze-style questions using generated summaries. This metric provides both the F1 overlap score and the confidence of the QA model.
BLANC (Vasilyev et al., 2020) is a reference-less metric that assesses the performance improvement of a pre-trained language model when provided with a document summary while performing language understanding tasks on the original document's text.
SUPERT (Gao et al., 2020) is a reference-less metric that measures the semantic similarity between model outputs and pseudo-reference summaries generated by extracting significant sentences from the source documents using soft token alignment techniques.
BLEU (Papineni et al., 2002) is a metric that focuses on precision at the corpus level. It calculates the n-gram overlap between a candidate utterance and a reference utterance while incorporating a penalty for brevity.
METEOR (Banerjee and Lavie, 2005) determines an alignment between candidate and reference sentences by mapping unigrams in the generated summary to 0 or 1 unigrams in the reference, taking into account stemming, synonyms, and paraphrases.
CIDer (Vedantam et al., 2015) calculates the co-occurrence of 1-4-gram units between the candidate and reference texts, giving less weight to common n-grams and computing the cosine similarity between the n-grams of the candidate and reference texts.
BARTScore (Yuan et al., 2021) evaluates text directly based on the probability of being generated from or generating other outputs. It addresses the modeling challenge using a pre-trained sequence-to-sequence (seq2seq) model called BART (Lewis et al., 2019).
BLEURT (Sellam et al., 2020) is a BERT-based metric that can model human judgments with a few thousand training examples, which may introduce some bias.
G-Eval (Liu et al., 2023) is a framework that leverages LLMs with Chain-of-Thought (CoT) prompting (Wei et al., 2022) to evaluate the quality of generated text. The generated outputs are assessed using a set of prompts along with the generated CoT.
Data Statistics
Grusky et al. (2018) define three measures of dataset extractiveness: extractive fragment coverage, density, and compression ratio. Extractive fragment coverage quantifies the percentage of words in the summary that are derived from the source article, indicating the degree to which the summary is a derivative of the original text. Density represents the average length of the extractive fragment to which each summary word belongs. Compression ratio measures the word ratio between the articles and their summaries.
A.2 Metric Normalization
Initial exploratory analysis revealed that the score distributions of many evaluation metrics were skewed. We thus normalized each automatic evaluation score and subsequently worked with the normalized scores, which approximately followed a N(0, 1) distribution and are more appropriate for correlational analysis and linear models.
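The exact transformation is not recoverable from our copy of the text; one standard choice that yields approximately N(0, 1) scores from skewed data is the rank-based inverse normal transform, sketched below under that assumption.

```python
# Sketch: rank-based inverse normal transform mapping skewed metric scores
# to an approximately N(0, 1) distribution (assumed transform; the paper's
# exact normalization is not specified in this copy).
import numpy as np
from scipy.stats import norm, rankdata

def inverse_normal_transform(x: np.ndarray) -> np.ndarray:
    ranks = rankdata(x)                 # average ranks handle ties
    quantiles = (ranks - 0.5) / len(x)  # map ranks into the open interval (0, 1)
    return norm.ppf(quantiles)          # standard normal quantiles

raw_scores = np.random.default_rng(1).exponential(size=500)  # skewed raw scores
z = inverse_normal_transform(raw_scores)
print(round(z.mean(), 3), round(z.std(), 3))                 # approximately 0 and 1
```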
Fig. 5 presents the Metric Stability estimates of all automatic evaluations and the Metric Consistency estimates of all expert and automatic metrics.
A.4 Confirmatory Factor Analysis on Expert Ratings
Confirmatory factor analysis was further conducted on the 12 (3×4) expert ratings, assuming that each rating loads only on the corresponding dimension. Given that the expert ratings were highly skewed (see Fig. 6 in Appx.), an ordinal factor model (Muthén, 1984) was fitted. Judging from commonly used fit indices (Comparative Fit Index = .999, Tucker-Lewis Index = .999, Root Mean Square Error of Approximation = .047 < .05), the four-factor model fitted the observed data adequately well, supporting the theorized loading structure, i.e., that experts rated on four factors. Tab. 3 in Appx. reports the estimated factor loadings and thresholds of each expert rating, assuming that the four latent factors each have a mean of 0 and an SD of 1. Loadings were generally high (adopting the ≥ .4 convention) but varied across experts and dimensions (generally lower for Relevance). Rater differences were also found in their leniency: for instance, expert 3 was more likely than experts 1 and 2 to provide a rating of 5 (with lower threshold estimates for score 5) on output Consistency, but less likely to do so (with higher threshold estimates) on Coherence and Relevance. The estimated factor correlations suggested high correlations between dimensions, especially between Coherence and Relevance.
A.5 Multitrait-Multimethod Table for G-EVAL and expert-based metrics
Table 2 presents the multitrait-multimethod table on the four dimensions, Coherence, Consistency, Fluency, and Relevance, for three separate rating methods: expert-based ratings (average factor score across three raters), G-EVAL-3.5, and G-EVAL-4.
A.6 Residual Analysis
We further performed principal component analysis on the residuals of the automatic evaluations, which capture the variance left unexplained by the four dimensions' factor scores. A plot of the first two principal components is shown in Fig. 7. Here, visual clusters of evaluation metrics are found, suggesting that select metrics likely tapped on common additional dimensions. The unexplained residual variance may guide future investigation on discovering other quality signals in summarization tasks.
Notes to Table 2: Entries in bold are the correlations of ratings on the same dimension by different methods. Entries in italic are the correlations of the ratings on different dimensions using the same method. Except for reliability coefficients, entries over .7 are underscored. While the cutoff is arbitrary, underscoring is more desirable for bolded entries (indicating good convergent validity) and less so for italic entries (indicating method bias, i.e., worse divergent validity).
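A minimal sketch of the residual analysis described above, with illustrative shapes and synthetic data (the real analysis uses the benchmark's metric matrix and the four expert factor scores):

```python
# Sketch: PCA on metric residuals after regressing out the four expert
# factor scores; each metric becomes one point in the PC1-PC2 plane.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
factors = rng.normal(size=(1600, 4))   # Coherence, Consistency, Fluency, Relevance
metrics = factors @ rng.normal(size=(4, 20)) + rng.normal(scale=0.5, size=(1600, 20))

residuals = metrics - LinearRegression().fit(factors, metrics).predict(factors)
pcs = PCA(n_components=2).fit_transform(residuals.T)  # one row per metric
print(pcs.shape)                                      # (20, 2): coordinates for the cluster plot
```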
Figure 2: Estimated Metric Stability and Metric Consistency of popular NLG Metrics.
Figure 3: Estimated Metric Consistency for G-EVAL Metrics by number of data points in the evaluation benchmark.
Figure 4: Concurrent validity coefficients of the selected metrics in predicting the four expert-rated dimensions' factor scores. Values are based on Kendall's τ.
Figure 5: Metric stability and consistency estimates for all expert- and metric-based scores.
Table 1: Multitrait-Multimethod table of expert ratings. Diagonal entries are metric consistency coefficients between 0 and 1. Entries in bold are the correlations of ratings on the same dimension by different experts. Entries in italic are the correlations on different dimensions by the same expert. Underscored entries are the coherence and relevance rating correlations by the same expert, which showed strong correlations. | 2023-05-25T01:16:19.909Z | 2023-05-24T00:00:00.000 | {
"year": 2023,
"sha1": "a30d5f2f10cef8af4efd4f929dfe2ce90c8b3010",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "a30d5f2f10cef8af4efd4f929dfe2ce90c8b3010",
"s2fieldsofstudy": [
"Computer Science",
"Education"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
211483119 | pes2o/s2orc | v3-fos-license | In-Situ Analysis of Ultrashort Pulsed Laser Ablation with Pulse Bursts
Laser ablation using ultrashort pulsed (USP) laser sources enables contact-free materials processing with very high precision and negligible thermal influence on the processed workpiece. However, the achieved productivity is still too low for industrial applications in many cases. To increase the throughput of USP laser processes, three approaches are followed: high repetition rates up to several MHz using fast beam deflection techniques, pulse bursts, or parallel processing with multiple laser foci. In the case of high repetition rates and pulse bursts, the temporal and spatial distance of consecutive pulses becomes so small that heat accumulation and shielding effects of plasma and particles affect the processing quality and achieved efficiency significantly. In order to gain better insight into these still barely investigated effects, we apply in situ imaging with an ICCD camera during ablation of copper and stainless steel with pulse bursts to evaluate the excitation of process luminescence as well as its relation to ablation efficiency and shielding effects.
Introduction
Ultrashort pulse (USP) laser processing with pulse durations in the range of picoseconds to femtoseconds enables better surface quality, higher precision, and less thermal impact compared to materials processing using longer laser pulses [1]. However, despite the high quality, USP processes are still not established for broad industrial application due to the lack of achievable productivity [2][3][4][5][6][7]. In order to improve the processing speed and throughput, the utilization of high power USP laser systems with average powers of hundreds of watts is one major research topic in the field of USP processing. Since the well-known and often reported optimum fluence of a few J/cm² [6,8,9] for metal processing limits the usable pulse energy, the high average power must be exploited by other approaches. The most promising techniques for this purpose are the usage of high repetition rates [2] up to several MHz using fast beam deflection techniques, pulse bursts, or parallel processing with multiple laser foci [6,10]. In this context, pulse bursts are trains of laser pulses with several rapidly following laser pulses, which are usually generated at the oscillator frequency of the laser source, typically in the range between 40 and 80 MHz. When high repetition rates and pulse bursts are applied, the temporal and spatial distance of consecutive pulses becomes so small that heat accumulation and shielding effects of plasma and particles affect the processing quality and achieved efficiency significantly [7]. The effect of heat accumulation refers to the rise of the cumulated residual thermal load of the individual pulses, which does not contribute to the ablation process and remains in the material. For high repetition rates, this residual heat is relevant for the processing because there is insufficient time for the heat to spread into the workpiece. Heat accumulation can lead to a better surface finish through a thin molten surface layer [10], but can also nullify the advantages of highly precise USP laser processing. Shielding effects, on the other hand, prevent the laser pulses from reaching the surface with their full pulse energy and therefore reduce the achieved ablation rate. These effects are attributed to the interaction of the incident laser beam with the ablation plumes of previous pulses, in which the radiation is reflected, scattered, or absorbed by particles, vapor, or plasma originating from the ablated matter.
In order to gain better insight into these still barely investigated shielding effects, we present in-situ analysis during ablation using pulse bursts with a varied number of pulses at high average laser powers of up to 200 W for different materials. The intensity of the process radiation is compared to the ablation efficiency observed for the associated process parameters. Furthermore, the propagation speed of the luminescent ablation plume front is evaluated and the half-life period of the luminous matter is calculated. In addition, two spatial areas of luminous matter accumulation are identified: a surface-near, fast propagating accumulation of matter and a long-lasting accumulation which aggregates at a certain distance from the treated surface.
Experimental setup and procedure
For this study, the laser radiation of an Amphos 400 high power USP laser was focused with an f-theta objective with a focal length of 160 mm, achieving a 50 µm spot (1/e² of the intensity) on the samples to be investigated. The USP laser system emits laser radiation at a central wavelength of λ = 1030 nm with variable pulse durations between τ = 2 and 20 ps. The repetition rate is adjustable between frep = 0.4 and 40 MHz at a constant maximum average power of P = 400 W. Furthermore, the Amphos 400 can be used in a burst mode, which makes it possible to emit pulse bursts with 1 to 10 pulses each instead of single pulses. The time separation between the single pulses within one burst is 25 ns, corresponding to the oscillator frequency of 40 MHz, and the averaged energy distribution of the pulses in the bursts (measured with a Thorlabs DET10A/M, a fast, amplified photodiode, Fig. 1) is shown in Fig. 2.
With this setup, parameter studies were carried out on copper (CW024A) and stainless steel (1.4301) samples with a varied number of pulses per burst and varied fluences to find suitable process parameters that enable good surface quality in surface ablation processes. Rectangular cavities were fabricated and the ablation efficiency was calculated from the obtained ablation depth. Optimum parameters giving a good compromise between ablation efficiency and surface quality for all applied numbers of pulses per burst were used for the in-situ experiments (see Table 1).
For the in-situ experiments, a shadowgraphy setup (Fig. 3) with a 4 Picos ICCD camera from Stanford Computer Optics was used and synchronized to the laser system with a Quantum Composers 9530 pulse generator. A high power LED (2.1 W) at 450 nm served as an optional backlight illumination to align the samples in front of the objective lens. The ICCD sensor with an internal upstream microchannel plate is capable of exposure times down to 200 ps; in the experiments presented here, the exposure time was set to 1 ns. With this setup, images of the process luminescence were taken; the background illumination was used only to align the sample and is outshined by the high intensity of the process luminescence. Owing to the shadowgraphic setup, a bandpass filter is integrated in the objective lens, so that only 450 ± 8 nm of the process luminescence spectrum reached the ICCD detector.
Experimental results
The fluences used for copper and steel in this work were determined in previous studies in which the number of pulses per burst and the single pulse fluence were varied while fabricating 3 mm × 3 mm cavities in a multi-pass process (parameters in Table 1). The number of layers was set so that a cavity depth of roughly 100 µm was reached. The actual depth was subsequently measured by laser scanning microscopy and used to calculate the ablation efficiency η with

$\eta = \frac{d_{\mathrm{ablation}} \cdot v_{\mathrm{scan}} \cdot \delta y}{n_{\mathrm{layers}} \cdot \overline{P}}$    (1)

where $d_{\mathrm{ablation}}$ is the ablation depth, $v_{\mathrm{scan}}$ is the scan speed, $\delta y$ is the line pitch, $n_{\mathrm{layers}}$ is the number of passes, and $\overline{P}$ is the average output power. The determined optimum single pulse fluences are 5 J/cm² for copper and 1.5 J/cm² for stainless steel; they were selected to optimize the optical surface quality, the average surface roughness Sa, and the ablation efficiency for all numbers of pulses per burst investigated (see Fig. 4 and Fig. 5).
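A minimal numerical sketch of Equation (1) as reconstructed above; the parameter values are illustrative, not the actual process parameters of Table 1.

```python
# Sketch: ablation efficiency from the measured cavity depth, Eq. (1).
# Units: depth and pitch in mm; scan speed in mm/min; power in W
# -> efficiency in mm^3/(min*W). All values below are illustrative.
def ablation_efficiency(d_ablation, v_scan, pitch, n_layers, p_avg):
    return d_ablation * v_scan * pitch / (n_layers * p_avg)

eta = ablation_efficiency(d_ablation=0.100, v_scan=60_000.0,
                          pitch=0.010, n_layers=50, p_avg=50.0)
print(f"eta = {eta:.3f} mm^3/(min*W)")
```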
For copper, the fluence used (5.0 J/cm²) is approximately 15 times, and for steel (1.5 J/cm²) 21 times, the single pulse threshold fluence of the respective material reported by Neuenschwander et al. [4] for a pulse duration of 2 ps. The referenced values of Neuenschwander et al. [4] are in the range of Fth = 0.05-0.07 J/cm² for steel and Fth = 0.30-0.35 J/cm² for copper. The fluence used is therefore more than double the reported optimum fluence for single pulse ablation of these materials, which would be ~7.4 (e²) times the threshold fluence. This increase can probably be attributed to shielding effects that reduce the effective fluence reaching the sample surface, shifting the fluence of optimum ablation efficiency to higher values. Moreover, as already mentioned, the processing fluences were chosen not only regarding the efficiency but also to optimize the surface quality of all burst processes, which is why the most efficient fluence for single pulse ablation was not chosen exactly. Further, it should be mentioned that the term fluence in this work always refers to the irradiated single pulse peak fluence $F_0 = \frac{2 E_p}{\pi w_0^2}$, which was determined based on the average power measured with a thermal power head. In Fig. 6, the ablation efficiency as a function of the number of pulses per burst is shown for the two sample materials, stainless steel and copper. For stainless steel, the ablation efficiency decreases with increasing number of pulses per burst. This decrease can probably be attributed to shielding effects that reduce the effective fluence of the later pulses in a burst train; it is consistent with results obtained by Kramer et al. [5] and Neuenschwander et al. [11], and with the previously mentioned shift of the optimum fluence. For copper, however, a different behavior of the ablation efficiency is observed. Here, a low efficiency is measured for even numbers of pulses per burst and a significantly higher efficiency for all investigated odd numbers of pulses, a finding that is also reported by Neuenschwander et al. [11] and Jäggi et al. [3]. A possible explanation for this behavior is that the first and all other odd-numbered pulses ablate material that shields the even-numbered pulses. These pulses therefore ablate less material, which reduces the shielding effect for the subsequent pulse. As a consequence, the following (odd-numbered) pulses reach the surface without being shielded and ablate material efficiently.
While evaluating the ablation efficiency of bursts, it should be noted that the ablation rate (mm³/min) nevertheless increases with the number of pulses per burst, as the applied average power increases. Therefore, with well-adjusted burst parameters, a higher throughput compared with single pulse processes can be established in many cases of application, which is the reason why bursts are used in the first place [10]. In order to examine the shielding of the single pulses of a burst, the ablated volume per pulse is calculated. On the assumption that the first n−1 pulses of a burst with n pulses behave like a burst with n−1 pulses, Equation (2) can be used for this calculation:

$\eta_{\mathrm{pulse},n} = n\,\eta_{\mathrm{burst},n} - \sum_{i=1}^{n-1} \eta_{\mathrm{pulse},i}, \qquad V_{\mathrm{pulse},n} = \frac{\eta_{\mathrm{pulse},n}\,\overline{P}_{\mathrm{burst},n}}{n\, f_{\mathrm{rep}}}$    (2)

where n is the number of pulses per burst, $\eta_{\mathrm{burst},n}$ is the ablation efficiency of the burst with n pulses, $\eta_{\mathrm{pulse},i}$ is the ablation efficiency of the i-th pulse of a burst, which is calculated iteratively from the burst efficiencies obtained with Equation (1), $\overline{P}_{\mathrm{burst},n}$ is the applied average power, and $f_{\mathrm{rep}}$ is the repetition rate of the bursts. Additionally, it has to be taken into account that Equation (2) is based on an equal energy distribution among the pulses in the burst, which is almost the case for the laser system used (see Fig. 2). In Fig. 2, a loss of energy of the last pulse of each burst can be noted. Since the fluence in the experiments is set via the average power, this results in an underestimation of the first n−1 pulses and an overestimation of the n-th pulse of an n-burst. This energy deviation was calculated, compared to the determined fluence-dependent efficiency for single pulse ablation of the materials, and compensated by correction factors for further calculation.
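The recursion implied by Equation (2) can be implemented directly; the sketch below assumes equal pulse energies within a burst, and the numbers are illustrative.

```python
# Sketch: iterative per-pulse ablated volumes from measured burst
# efficiencies, Eq. (2) as reconstructed above (equal-energy assumption).
def per_pulse_volumes(eta_burst, p_burst, f_rep):
    """eta_burst[n-1]: efficiency of an n-pulse burst (mm^3/(min*W)),
    p_burst[n-1]: average power of the n-pulse burst (W),
    f_rep: burst repetition rate (1/min). Returns per-pulse volumes (mm^3)."""
    eta_pulse, volumes = [], []
    for n, (eta_n, p_n) in enumerate(zip(eta_burst, p_burst), start=1):
        eta_i = n * eta_n - sum(eta_pulse)         # efficiency of the n-th pulse
        eta_pulse.append(eta_i)
        volumes.append(eta_i * p_n / (n * f_rep))  # volume removed by the n-th pulse
    return volumes

# Illustrative: a weakly ablating second pulse shows up as a small volume.
print(per_pulse_volumes([0.20, 0.12, 0.15], [25.0, 50.0, 75.0], f_rep=6e7))
```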
The ablation volumes calculated in this way are shown as a function of the pulse number in Fig. 7 for stainless steel and Fig. 8 for copper. Additionally, the integrated intensity of the process luminescence 24 ns after the related pulse is plotted. For this purpose, the backgrounds taken before each image without the laser process were subtracted from the images; the intensity of the images processed in this way was then integrated and averaged over the three images taken for each point in time. The intensities of the ablation products determined this way lie above the intensity of the background illumination, and therefore this material is called luminous matter in the following. The 40 MHz seed frequency of the laser, with which the pulses are emitted, results in a temporal distance of 25 ns between the pulses. The intensity plotted in Fig. 7 and Fig. 8 therefore shows the value 24 ns after the related pulse, which is also the value 1 ns before the subsequent pulse arrives. It can thus be assumed that, if the luminous matter shields the following pulse by absorption, scattering, or reflection, the intensity count should correspond to the extent of shielding for the upcoming pulse. This assumption is based on the hypothesis that, regardless of whether the luminous matter consists of plasma, vapor, particles, or a mixture of these, the intensity should correlate with its density, and therefore a high intensity before a pulse should indicate a pronounced shielding effect. For both materials, the intensity 24 ns after the pulse rises steadily without significant anomalies, while the ablated volume per pulse alternates with the number of pulses: the even-numbered pulses ablate less than the odd-numbered ones. For stainless steel, the fluctuation is less pronounced than for copper, so the burst efficiency in Fig. 6 is barely affected for steel. Nevertheless, by evaluating the ablated volume per pulse, a similar trend for burst ablation as for copper can be identified. This correlates with the observations of Sailer et al. [12] for double pulse ablation of steel. Thus, the unconventional behavior known for the ablation of copper with an even number of pulses within a pulse train can be identified for the processing of steel as well. However, the effect is less pronounced and is visible only if the ablation efficiency is calculated for the single pulses within a burst. For copper, on the other hand, this effect is so strong that the calculated ablated volume becomes negative for even-numbered pulses and increases for the second and third pulse compared with the first pulse. Carefully considered, the negative values may hint at redeposition of ablated material, as is already suspected in the technical literature [6,13]. This would also reflect the conspicuous behavior of the ablation efficiency for copper in Fig. 6. The increase of the ablated volume per pulse, on the other hand, could be explained by a higher absorption of a molten copper layer on the surface [11], which could overcompensate the ablation loss of the even-numbered pulses and be the reason for the highest ablation efficiency being measured for five pulses per burst on copper (see Fig. 6). In addition, an increase in the evaluated intensity should be observed if shielding by absorption took place, which is not the case either. Another finding that suggests that there is no direct excitation of the luminous matter by the incident laser radiation is shown in Fig. 9, where the recorded images themselves are evaluated.
In Fig. 9, three images of a pulse burst with eight pulses are shown for three different times: one nanosecond before the fourth pulse arrives at the surface, the moment of the incoming pulse, and one nanosecond after the pulse has arrived. Two spatial areas with different behavior of the ablated matter can be identified: 1.) the long-distance luminous matter, which comes to a standstill at a height of roughly 200-400 µm and has a long lifetime; 2.) the surface-near luminous matter, which expands and propagates fast from the surface. It becomes evident that no change in the intensity of the long-distance luminous matter is observed while the incident laser radiation of the fourth pulse reaches the sample surface. This implies no direct excitation by absorption of the radiation in the ablated matter, which would probably lead to emission of black body radiation from the plasma or from hot particles/vapor, as observed by A. Semerok [14] for plasma shielding. Nevertheless, shielding obviously takes place, because the ablation is reduced for both materials. It must therefore be assumed that the shielding is performed by non-luminous matter, which cannot be seen in these images. Despite the lack of direct excitation of the long-distance luminous matter, the intensity of this area increases over the duration of the burst. The reason for this is shown in Fig. 10, where four different times after the fourth pulse of an eight-pulse burst on copper are imaged. At 2, 12, 18, and 24 ns after the previous pulse, the expansion of the surface-near and long-distance luminous matter can be seen. The last image shows the beginning of the superposition of the two matter accumulations, where the intensity increases locally. It becomes apparent that the long-distance luminous matter consists of surface-near matter that expands and propagates to this height, which leads to an accumulation of this luminous matter and therefore indirectly excites this area. To gain a first insight into this indirect excitation effect, the front propagation velocity of the surface-near luminous matter was determined; it is shown in Fig. 11. For this evaluation, the time range from the arrival of the pulse to 10-20 ns after this moment was used, depending on how long the propagating front was clearly distinguishable before it merged with the long-distance luminous matter. For copper, the propagation velocity is lowest for the first pulse, increases until pulse number three, and then levels off at roughly the value of the second pulse. A similar behavior can be assumed for steel, where the overshoot of the third velocity is less pronounced. One possible explanation of this behavior is that the first three pulses see a different medium of propagation than the last pulses, due to the influence of the ablation process on the atmosphere above the area of ablation. This is because of the slow propagation of the long-distance luminous matter, which is still close to the surface for the first pulses of the burst, as can be seen in Fig. 12. For the last five pulses, the long-distance area has propagated far enough that these pulses see a less influenced medium of propagation, so the propagation velocity levels out to a constant value.
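The front velocities in Fig. 11 follow from a linear fit of the front position over time; a sketch with illustrative readings (1 µm/ns equals 1 km/s):

```python
# Sketch: front propagation velocity from ICCD frames via a linear fit of
# front height vs. time after pulse arrival (illustrative readings).
import numpy as np

t_ns = np.array([2.0, 4.0, 6.0, 8.0, 10.0])     # time after pulse arrival (ns)
z_um = np.array([8.0, 15.5, 24.0, 31.0, 39.5])  # front height above surface (um)

slope, intercept = np.polyfit(t_ns, z_um, 1)    # um/ns, numerically equal to km/s
print(f"front velocity = {slope:.2f} km/s")
```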
For single pulse ablation of copper with a pulse duration of τ = 50 fs, a wavelength of λ = 800 nm, and a pulse energy of Ep = 20 µJ, a similar behavior of comparable order of magnitude was observed, with a propagation velocity of 5.6 km/s in the first 15 ns, which then slowed down to approximately 1 km/s [14]. The reported slowdown could not be measured in this experiment, because the front of the expansion was no longer clearly discernible and, additionally, after 25 ns the next pulse generated new ablation products.
As mentioned above, Fig. 12 shows the front propagation of the long-distance luminous matter as a function of time after the first pulse of the burst train. As a guide for the eye, the points in time when the burst pulses arrive at the surface are marked with arrows. It can be seen that the front is pushed up a certain distance after a pulse has ablated material and settles to a slightly lower value just before the next pulse arrives. This behavior can be seen for all eight pulses until 50 ns after the last pulse, when the front starts to propagate away from the sample surface again. This motion can be attributed to the expansion of the long-distance luminous matter and levels off at roughly 400 µm for copper. For stainless steel, a similar but less pronounced behavior is observed, and the front comes to a halt at roughly 300 µm instead. This weaker pronouncement is probably due to the lower single pulse fluence used for steel. In Fig. 12, a clearly recognizable front of the long-distance luminous matter could be evaluated up to roughly 375 ns after the first pulse applied. To evaluate the lifetime of this ablated material, the image intensity from 90 µm above the sample surface upwards was integrated and plotted over the process time, as shown in Fig. 13. An exponential decay of the intensity is observed and was fitted, which allows the half-life of the process luminescence to be determined for both materials: 117 ns for copper and 120 ns for stainless steel. The intensity of the process luminescence therefore has almost the same half-life for both materials, even though the fluence applied for copper was roughly 3.3 times the fluence used for stainless steel. This indicates that the more pronounced shielding and redeposition effects for copper are not represented by the intensity and lifetime of the process luminescence.
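A sketch of the half-life evaluation: fit an exponential decay to the integrated intensity and convert the time constant to a half-life (synthetic data chosen so that the result lands near the values reported above).

```python
# Sketch: half-life of the process luminescence from an exponential fit
# to the integrated image intensity (synthetic data).
import numpy as np
from scipy.optimize import curve_fit

def decay(t, i0, tau):
    return i0 * np.exp(-t / tau)

t_ns = np.linspace(0.0, 400.0, 20)
intensity = decay(t_ns, 1.0, 170.0) \
    + np.random.default_rng(3).normal(scale=0.02, size=t_ns.size)

(i0, tau), _ = curve_fit(decay, t_ns, intensity, p0=(1.0, 100.0))
print(f"half-life = {tau * np.log(2):.0f} ns")  # tau * ln 2; cf. 117/120 ns above
```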
Conclusion and outlook
In order to gain better insight into the ablation and shielding effects during processing with ultrashort pulsed laser bursts, in-situ imaging was performed to examine the process luminescence. In contrast to copper, stainless steel shows only a weakly pronounced decrease of the ablation efficiency for an even number of pulses in the burst, whereas copper exhibits a significant drop of ablation efficiency for an even number of pulses. For a more precise evaluation, the ablated volume per pulse was calculated for both materials. For copper, a strongly pronounced fluctuation of the ablated volumes was observed, with even negative values for even-numbered pulses. For steel, an unexpected alternation in the ablated volume was found as well; however, the observed effect is not strong enough to have a significant effect on the ablation efficiency. This behavior is expected to be the result of shielding by ablation products of the previous pulse, which are removed or pushed to the surface by the even-numbered pulse, so that the next pulse is less shielded or not shielded at all. This leads to less ablation by the shielded even-numbered pulse, or even to redeposition of material, which correlates with the ablation volumes per pulse calculated in this work; both would decrease the overall burst efficiency. For copper this effect has already been reported [6,11], but the observation that the ablated volumes per pulse of steel during burst processing show a similar, though less pronounced, behavior is new. This could indicate that the behavior observed for copper is generic to metals rather than specific to copper.
For burst processing, two spatial areas can be identified in the process luminescence: the long-distance luminous matter regime with a long lifetime (120 ns half-life for eight pulses per burst) and a surface-near luminous matter which expands and propagates fast from the surface. However, there is no visible direct excitation of the long-distance luminous matter by the laser radiation. Only an indirect excitation takes place, in which the surface-near ablated matter propagates into the long-distance area.
Contrary to all expectations, it was found that the intensity of the process luminescence is not correlated with the shielding effects and the ablated volume per burst pulse. The shielding effect that obviously takes place is therefore not attributable to absorption in the process plasma, so the shielding of the pulse has to be attributed to reflection or scattering on plasma or particles. Absorption of the pulse in ablated particles that is still not strong enough to produce black body radiation is also conceivable.
Furthermore, it should be kept in mind that the drop of efficiency measured on processed samples is not necessarily influenced only by effects observable during a single pulse burst. Cumulative effects that only appear during a complete structuring process and are not present in a single burst can also play a role.
Further investigations are to be carried out to allow more precise statements. These include, among others, in situ shadowgraphic imaging concentrating on the particle plumes and further investigations of the process luminescence with neutral density filters and shorter exposure times to examine a broader spectrum. Moreover, an in situ investigation of a complete structuring process could provide further insight. | 2019-08-18T11:25:57.519Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "1119089bff36527b5ffc73bb23d556e319491569",
"oa_license": null,
"oa_url": "http://www.jlps.gr.jp/jlmn/assets/4ce2fcfe791f000564c5c92feba826fb.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9650a75406410475b24728805f0f2c215c100c6e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
234291841 | pes2o/s2orc | v3-fos-license | Partitioning and Surficial Segregation of Trace Elements in Iron Oxides in Hydrothermal Fluid Systems
Partitioning experiments were done by hydrothermal synthesis of crystals containing trace elements (TEs), with internal sampling of the fluid, at a temperature of 450 °C and a pressure of 1 kbar. The crystal phases obtained were magnetite, hematite, and Ni-spinel, which were studied using X-ray diffraction (XRD), X-ray electron probe microanalysis (EPMA), laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS), atomic absorption spectrometry (AAS), and atomic force microscopy (AFM). The solutions from the sampler's fluid probes were analysed by AAS for the TEs, which included elements of the iron group plus aluminium. The highest co-crystallisation coefficients of TE and Fe between mineral and fluid (D_TE/Fe) in magnetite were measured for V, Al, Ni, and Cr (in decreasing order, of the order of n units in value); a lower value was observed for Co (2 × 10⁻¹), and still lower values for Ti, Zn, and Mn (n × 10⁻²–10⁻³). In hematite, D_TE/Fe values were highest for Al and V (of the order of n units in value), while lower values characterised Ti, Cr, and Co (n × 10⁻¹–10⁻³), and the lowest values were exhibited by Cu, Mn, and Zn (n × 10⁻⁵). Copper was confirmed to be the most incompatible with all minerals studied; however, Cu had a high content on crystal surfaces. This surficial segregation contributes to the average TE concentration even when only a thin layer of the nonautonomous phase (NAP) is enriched in the element of interest. The accumulation of TEs on the surface of crystals increased the bulk content 1-2 orders of magnitude above the content of structurally bound elements, even in coarse crystals. The inverse problem, the evaluation of TE/Fe ratios in fluids involved in the formation of magnetite-containing deposits, revealed that the most abundant metal in fluids was Fe, followed by Mn, Zn, and Cu, which comprised 10 to 30% of the total iron content.
Introduction
Quantitative data on the partitioning of trace elements (TEs) constrain the composition of the minerals that crystallise from melts and hydrothermal fluids. Such data are important for the interpretation of mineral genesis (magmatic, sedimentary, hydrothermal, or metamorphic) and for the validation of hypotheses of primary mineral re-equilibration by post-crystallisation processes involving the action of meteoric water and/or hydrothermal fluids.
Magnetite is considered "an ideal indicator mineral" [1] due to its stability, wide variation in composition, and high density, allowing it to be separated from sediment components. Moreover, magnetite is a common and widespread mineral that can form in different types of rocks and under different physicochemical conditions [2]. It is well known that the composition of magnetite is highly susceptible to the parameters controlling mineral formation and alteration ([3] and references therein). Nevertheless, there is a deficiency of data on the partition coefficients ($D_p^{min/fl} = C_{TE}^{min}/C_{TE}^{fl}$) and co-crystallisation coefficients ($D_{TE/Fe}^{min/fl} = (C_{TE}^{min}/C_{Fe}^{min}) \times (C_{Fe}^{fl}/C_{TE}^{fl})$) in magnetite (and hematite) in hydrothermal fluid systems. Here C is the concentration; min and fl denote mineral and fluid, accordingly. The purpose of the work presented herein was to reduce this deficiency by adding new data.
Another problem discussed here is the surficial segregation of TEs and its effect on the partitioning of elements. The extent of surface accumulation of the REEs Ce, Eu, Er, and Yb was more than two orders of magnitude greater in the outermost layer of magnetite and hematite crystals obtained from an experimental hydrothermal system at 450 °C and 100 MPa pressure [17]. X-ray photoelectron spectroscopy (XPS) revealed an oxyhydroxide composition for the surficial nonautonomous phase (NAP) that accumulated the REEs, with Fe³⁺/Fe²⁺ ratios of 1.1 for both minerals, while the O²⁻/OH⁻ ratio was 1.5 for magnetite and approximately 4 for hematite [17]. The tendency for TEs to accumulate near the surface of magnetite particles was reported for Cu, Mn, and Cd [18]. The surface enrichment of hydrothermal magnetite with TEs is possibly due to the accumulation of TE-containing nanoparticles (NPs) formed from supersaturated hydrothermal fluids, entrapped as the surface of the magnetite crystals grows [19]. The interfacial fluid also plays an active role in the formation of supersaturated solid solutions of TEs during crystal growth, which was proposed as a mechanism to explain the formation of Al-rich lamellae and zinc spinel NPs in host magnetite [20]. The effect of surface accumulation and segregation of TEs is important in the interpretation of experimental and analytical data, especially data obtained using bulk analytical techniques on small particles with high specific surface area.
Background
Under the crystallisation of an isomorphous mixture (BE,TE)X, where X is an anion, TE is a trace (or minor) element, and BE is the basic element cation of the mineral, the exchange reaction BE(s) + TE(aq) = TE(s) + BE(aq) occurs, where s and aq denote the solid and aqueous (fluid) phases, accordingly. The effective bulk coefficient of TE and BE co-crystallisation is

$D_{TE/BE} = \frac{C_{TE}^{s}\, C_{BE}^{aq}}{C_{BE}^{s}\, C_{TE}^{aq}}$    (1)

where C is the concentration of the element. Hereinafter, we deal only with bulk contents of elements rather than with their concentrations in various chemical forms (complexes). Strictly speaking, the process of co-crystallisation depends on activities rather than concentrations, and the bulk co-crystallisation coefficient is not a true constant but depends on the compositions of the aqueous and solid phases. However, according to the model of a complex solvent, in which the bulk electrolyte dominates over the precipitated components, the following expression is valid:

$D_{TE/BE} = D^{o}_{TE/BE}\,\frac{F_{BE}}{F_{TE}}\,\frac{f_{BE}}{f_{TE}}$    (2)

Here $D^{o}_{TE/BE}$ is the "ideal" coefficient dependent only on the solubilities of the pure phases (the solid solution end members), F is a Fronius function of the element complexation in aqueous solution, and f is the activity coefficient of the component in solid solution [21]. $D^{o}_{TE/BE}$ is the thermodynamic constant of the interphase ion exchange reaction, independent of the presence of other components and phases in the system. It is proportional to the ratio of the activity products $L_{TEX}/L_{BEX}$, which in turn is related to the solubilities (S) of TEX and BEX. The F and f ratios in Equation (2) are often invariant for elements that are chemically similar and demonstrate the same chemical behaviour in the same solution [21][22][23].
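A minimal numerical sketch of Equation (1); the concentrations are illustrative, not measured values.

```python
# Sketch: bulk co-crystallisation coefficient D_TE/BE from bulk element
# concentrations in the solid and in the trapped fluid, Eq. (1) above.
def co_crystallisation_coefficient(c_te_s, c_be_s, c_te_aq, c_be_aq):
    return (c_te_s / c_be_s) * (c_be_aq / c_te_aq)

# Illustrative concentrations (e.g., ug/g), not measured data:
d = co_crystallisation_coefficient(c_te_s=500.0, c_be_s=7.2e5,
                                   c_te_aq=30.0, c_be_aq=9000.0)
print(f"D_TE/BE = {d:.2f}")
```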
These theoretical considerations were supported by studies of the hydrothermal crystallisation of sulphide minerals (galena, sphalerite) in the presence of admixtures of Cd, Mn, and Fe [22,24,25]. The experiments in the PbS-CdS-hydrothermal fluid system at temperatures of 300-430 °C and a pressure of 1 kbar confirmed the obtained relations between the bulk coefficient of co-crystallisation and the solubilities of the end members PbS and CdS [24]. The stability of this coefficient under various physicochemical conditions was demonstrated, as well as the possibility of estimating the composition of natural fluids using the co-crystallisation coefficients. These conclusions were supported by the example of Mn and Fe co-crystallisation with sphalerite in various mineralising solutions at 400 and 500 °C and 1 and 1.5 kbar [22]. The stability of the D_Mn/Fe value allowed the proposal that the bulk co-crystallisation coefficient under the conditions of natural hydrothermal processes does not vary significantly over a wide range of physicochemical parameters and solution compositions.
With respect to the iron oxide minerals, the data on TE co-crystallisation under hydrothermal conditions are severely limited [16,26,27].
The main purpose of this work is to obtain data on the partitioning of the main TEs of magnetite and hematite, to estimate D_TE/Fe values, and to derive geochemical implications allowing the minerals' composition to be used as a quantitative indicator of TE concentrations in ore-forming fluids. The acquisition of reliable data on the distribution of elements in the mineral-hydrothermal fluid system requires that the effect of surficial TE accumulation be taken into account, and this effect is also considered in the present paper.
Experimental Procedure
Standard techniques of hydrothermal thermogradient synthesis of iron oxides in the presence of the TEs were applied, using stainless steel (200 cm³) autoclaves equipped with passivated titanium alloy (VT−8) inserts with a volume of ~50 cm³ each. An internal sampling method using perforated titanium traps was employed to obtain data on the composition of the high-temperature fluid phase [17]. The temperature in the zone of crystal growth was 450 °C and the pressure was 1 kbar. These values of temperature and pressure were high enough for near-equilibrium crystal growth and for trapping fluid in the quantities required for analysis [28]. The liquid-to-solid ratio in the experiments was ~5. The full duration of the experiments was 24 days: the first 4 days under an isothermal regime to homogenise the batch material and ensure near-equilibrium conditions, followed by 20 days of thermogradient recrystallisation with a temperature drop of 15 °C along the exterior wall of the autoclave. The actual temperature gradient inside the reaction vessel for this configuration was no more than 0.1 °C/cm [29]. The experiments were terminated by quenching the autoclave under cold running water with a temperature drop of 5 °C/s. After unsealing the insert, the solution was immediately extracted from the sampler, which was then rinsed with aqua regia to dissolve any remaining precipitates. The cleaning solution was subsequently combined with the first solution extracted, and a special chemical medium was created to analyse the elements using atomic absorption spectrometry (AAS). The conditions were not equal in the different experiments, and we did not attempt in this work to estimate the reproducibility of data in parallel experiments. However, according to our experience with Au, the reproducibility of the trapped fluid composition for trace elements is better than 30 rel.% [30]. The pH measured in the solutions from the samplers varied over the range of 7.2-8.1. The occurrence of fine-grained hematite as a quench phase (see below) suggests that the experimental conditions were close to the magnetite-hematite equilibrium (log fO₂ = −21.6, fO₂ in bar). The batch was made up of high-purity (reagent grade) reagents and comprised two parts. The basic components were the oxidised (Fe₂O₃) and reduced (FeO or metallic Fe) forms of iron in a molar ratio of 1 or 2, or iron solely in the oxidised form. The TEs were introduced to the batch as metal oxides (Table 1). The Fe component of the batch weighed 5 g, and each TE oxide amounted to 0.1 or 0.25 wt% of the batch weight. A significant part of the batch (~10-30%) remained after the experiments. The crystals formed in the upper part of the insert yielded up to 600 mg. Ammonium chloride (NH₄Cl) solutions of 5 and 10% were reported to be the most effective mineralising solutions for growing iron oxide crystals [17]. This solute is also significant for natural fluid systems [24,31,32].
X-ray Electron Probe Microanalysis (EPMA)
Iron oxide crystals of magnetite, hematite, and Ni-spinel were mounted in well-polished epoxy pellets after they were washed with distilled water and ethanol, and analysed using EPMA with a Superprobe JXA−8200 (JEOL Ltd., Tokyo, Japan) microprobe supplied with energy dispersive and wavelength-dispersive spectrometers (WDS) at the Vinogradov Institute of Geochemistry of SB RAS (Irkutsk, Russia). Quantitative WDS analyses were conducted at an accelerating voltage of 20 kV, a beam current of 20 nA, a beam diameter of 1 µm, and a counting time of 10 s for major elements and 20 s or 30 s for TEs. The background counts were 5 s or 10 s long for major elements and 15 s long for TEs. Matrix corrections and analysed element contents were calculated using the ZAF (atomic number, absorption, and fluorescence) approach, applying the quantitative analysis software for the Superprobe JXA−8200 (V01.42, JEOL Ltd., Tokyo, Japan). Standardisation was performed using well-characterised minerals (hematite Fe₂O₃ for Fe, spinel MgAl₂O₄ for Al, chalcopyrite CuFeS₂ for Cu, sphalerite ZnS for Zn, and rutile TiO₂ for Ti), alloys (FeNiCo for Ni and Co), and oxides (V₂O₅ for V, Cr₂O₃ for Cr, and MnO for Mn) as standard samples. Measurements were made at 10 to 20 points on each grain, depending on its homogeneity. Reliable estimates of the TE content by EPMA were possible for Co, Ni, Al, V, and Mn (minimum detection limit (MDL) ≈ 0.1 wt%). TE concentrations and their standard deviations are presented in Appendix A (Table A1).
X-ray Diffraction (XRD)
XRD was used mainly for phase analysis, i.e., to identify the phase composition of the minerals produced in the experiments (magnetite, hematite, magnetite + hematite, and Ni-spinel) from powder diffraction patterns. Unit cell edges were measured with a D8 ADVANCE diffractometer (Bruker, Germany) using CuKα radiation at the Vinogradov Institute of Geochemistry of SB RAS. Phases were identified using the PDF−2 powder diffraction database (ICDD PDF−2, Release 2007). Phase composition and unit cell edges were refined using the program TOPAS 4 (User's Manual, Bruker AXS, 2008, Karlsruhe, Germany). Uncertainties in unit cell edges were at the level of ±(1-3) × 10⁻⁴ nm; the sensitivity to the presence of admixed phases was 0.5 wt%.
Atomic Absorption Spectrometry (AAS)
The AAS method was used to analyse the trapped solutions [17,26,27]. As a bulk method, AAS played a supporting role to EPMA and LA-ICP-MS in the study of the solid phases obtained, because of the phenomenon of surficial TE accumulation [17]. AAS measurements were performed on Perkin-Elmer instruments (Model 403 and Analyst 800, The Perkin Elmer Corp., Norwalk, CT, USA) at the Vinogradov Institute of Geochemistry of SB RAS. Elements were determined with a precision of ±5-10 rel.%. Element concentrations were calculated by external calibration using standard solutions prepared in-house from analytically pure substances.
Laser Ablation Inductively Coupled Plasma Mass Spectrometry (LA-ICP-MS)
(1) Bulk Crystals
For this analysis, we used the same crystals mounted in epoxy pellets that were analysed by EPMA. Measurements were performed on an Agilent 7500ce unit with a quadrupole mass analyser (Agilent Technologies, Santa Clara, CA, USA) using a New Wave Research UP−213 laser ablation platform at the Limnological Institute of SB RAS (Irkutsk, Russia). The parameters of the LA-ICP-MS experiment were: plasma power of 1400 W, carrier gas flow rate of 1 L/min, plasma-forming gas flow rate of 15 L/min, cooling gas at 1 L/min, laser energy at 80%, frequency of 10 Hz, and a laser spot diameter of 55 µm. Dwell times per isotope/element were 0.15 s, and the acquisition time was 14 s. Measurements were made at 15 to 16 points in several (3 to 4) grains from each sample. The contents of Ni and Co determined by EPMA in each grain of each analysed sample were used as internal standards. Due to its high content in iron oxides, Fe is less sensitive to compositional variations and thus less convenient for studying the correlation of LA-ICP-MS and EPMA data when local areas of crystals are analysed.
(2) Outermost Crystal Layers
Only crystals of high quality with smooth faces were used to study the TE distribution in the outermost layers by sequential laser removal in combination with ICP-MS analysis performed at the Vinogradov Institute of Geochemistry of SB RAS (see [33] for details). The requirements for the unpolished crystals studied were fulfilled exactly for the magnetite crystals, while the hematite crystals were often smooth-faced but their face areas were not large enough to analyse. The instrument system comprised a quadrupole mass spectrometer (Perkin Elmer NexION 300D) and a laser ablation platform (NWR−213). The plasma power was 1400 W, the carrier gas was Ar delivered at a rate of 0.8 L/min, and the rates of the auxiliary flows (plasma/cool and auxiliary gas) were 18 and 1 L/min, respectively. Due to the high scan rate, 63 scans per minute were recorded, which allowed us to clearly discriminate variations in material flux with time. The YAG:Nd laser platform (wavelength = 213 nm) was optimised for aerosol transportation using pure He over the shortest distance to the mass spectrometer torch at a flow rate of 0.6 L/min. The laser power was set at 20%, the frequency at 10 Hz, and the laser spot diameter at 100 µm. The laser beam passed along a straight-line profile on each sample six times at a rate of 200 µm/s, so that the first and last spots occurred outside of the sample. Inasmuch as the end points of the profiles were outside the sample, we were able to clearly distinguish between the mass spectrometry data of all six ablation passes on each sample and to eliminate the effect of deep laser ablation of the material at the beginning of the profiles caused by the extra time needed to accurately position the platform. Calibration of the elements to be analysed was carried out using the NIST 612 standard together with an in-house standard sample of highly homogeneous, hydrothermally synthesised magnetite crystals. Unfortunately, it proved difficult to standardise the depth due to the high surface roughness along the laser track and material sputtering along the ablation groove; the data obtained were therefore rated as semiquantitative. However, the data obtained in [33] for hydrothermal magnetite crystal surfaces under similar analytical conditions allowed us to estimate the depth of the groove after one laser pass as ~1.5 µm.
Atomic Force Microscopy (AFM)
The surface morphology of the crystals was investigated in contact mode with an SMM−2000 scanning multi-microscope (Zelenograd, Russia) at the Vinogradov Institute of Geochemistry of SB RAS. The microscope's software made it possible to analyse roughness and other surface characteristics and to determine the height and shape of nano-sized objects on the crystal surfaces. Standard Si₃N₄ cantilevers (Veeco, Park Scientific, USA) with a tip-rounding radius of 10 nm were used. The SMM−2000 microscope is a certified measurement tool (no. 980080025). According to the certificate, the maximum resolution is 2.5 nm in the X-Y plane and 1.1 nm in the Z direction.
Characteristics of Phases Obtained
Sufficiently large crystals (up to 3 mm) of magnetite and smaller crystals (up to 1 mm) of Ni-spinel and hematite were obtained. In two experiments, magnetite and hematite were obtained together (Table 1). Magnetite crystals were usually octahedral (Figure 1); Ni-spinel demonstrated better development of {110} and {100} faces combined with {111}; and hematite crystals formed hexagonal prisms combined with well-developed pyramids of different indexes, usually assembled into compact-grained aggregates (Figure 2). TE oxides were found neither in the crystals nor in the residual dispersed batch material. The quench phase consisted mainly of brick-coloured hematite crystals and aggregates ~10-40 µm in size. Patches of quench phase were rarely observed on the surfaces of the single crystals obtained, because the solution was isolated from the growth zone upon quenching below ~380 °C, the critical point of the NH₄Cl solution.
[34]. The content of other TEs was negligible (≤0.1 at.%); therefore, both phases are solid solutions of magnetite and trevorite (NiFe₂O₄) close to the Ni end-member.
Figure caption fragment: ... (Table 1). Note the homogeneity of the elements' distribution inside a crystal.
Figure caption fragment: ... (Table 1). Fine growth zoning in the hematite crystal is clearly seen.
Table A1 in Appendix A presents the results of the EPMA, LA-ICP-MS, and AAS analyses of the crystals obtained in the hydrothermal experiments (see Table 1 for conditions, phases synthesised, and Fe content in the trapped fluids). The atomic ratios of TE/Fe in minerals and fluids and the partition and co-crystallisation coefficients were calculated for the most reliable data with a minimum standard deviation of the mean (Table A1). Only limited data sets were obtained for hematite and Ni-spinel; thus, these results are considered preliminary and require further experimental validation to substantiate the initial findings.
Partition and Co-Crystallisation Coefficients
For the majority of elements, we noticed reasonable agreement among the different analytical methods, including the bulk AAS: discrepancies did not exceed ±30%. However, some elements demonstrated a lack of correlation between the AAS data and the data obtained by other methods. This was dramatically evident for Cu; based on AAS results, the Cu content in magnetite and hematite crystals ranged from 32 to 3500 µg/g, whereas EPMA results were below the MDL and LA-ICP-MS only detected 2.7-17.1 µg/g Cu (Table A1). Contents elevated in comparison to LA-ICP-MS were also observed by AAS for Zn and Cr in hematite. One possible reason for this lack of correlation between methods is the enrichment of TEs on the crystal surfaces (see Sections 4.3 and 5.3). The partition and co-crystallisation coefficients are shown in Table 2.
Data obtained in this work were supplemented with results for Mn [26] and for Cr and V [27] on partitioning in magnetite-fluid systems, obtained previously under the same temperature and pressure parameters and a similar composition of mineralising solution. The co-crystallisation coefficients are presented in Table 2 and in Figure 5.
Notes to Table 2: * Errors were calculated as ε = t_{αn} × S_x/√n at a confidence level of 0.9 for more than 3 experiments (n > 3) and at a confidence level of 0.8 for n = 3. For n = 2 and for highly discrepant values at n = 3 (Table A1), the mean is given; t_{αn} is the Student's coefficient, and S_x is the root-mean-square deviation. ** Data for magnetite obtained under similar conditions were taken from [26] (Mn) and [27] (Cr and V).
Variations in the co-crystallisation and partition coefficients were sufficiently large, especially for Ni-spinel and hematite, for which only two and three experiments were available, respectively. The data for magnetite were more representative. In most cases, the coefficient of variation is lower for D^{min/fl}_{TE/Fe} than for the partition coefficient D^{min/fl}_p (Table 3), demonstrating better reproducibility of the co-crystallisation coefficient compared to the partition coefficient. These data may be explained relying on Equation (2) (Section 2): F_TE/F_Fe can be tolerant to changes of physicochemical conditions owing to the chemical similarity of the elements (TE and Fe), whereas F_TE by itself can vary over a wider range, changing the D^{min/fl}_p value.
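The co-crystallisation coefficient and the error bound in the Table 2 footnote are simple to compute. Below is a minimal Python sketch; the definition of D^{min/fl}_{TE/Fe} as a ratio of TE/Fe atomic ratios follows the convention implied by the text (Equation (2) itself appears in Section 2, which is not shown here), so this definition, the function names, and the example values are assumptions:

```python
import numpy as np
from scipy import stats

def cocryst_coefficient(te_fe_mineral, te_fe_fluid):
    """Co-crystallisation coefficient D_{TE/Fe}^{min/fl}, taken here as the
    ratio of the TE/Fe atomic ratio in the mineral to that in the fluid
    (assumed reading of Equation (2) of the paper)."""
    return te_fe_mineral / te_fe_fluid

def mean_error(values, confidence=0.9):
    """Error of the mean, eps = t_{alpha,n} * S_x / sqrt(n), as in the
    Table 2 footnote. Assumption: two-sided Student coefficient with
    n - 1 degrees of freedom."""
    values = np.asarray(values, dtype=float)
    n = len(values)
    s = values.std(ddof=1)  # root-mean-square deviation S_x
    t = stats.t.ppf(0.5 + confidence / 2.0, df=n - 1)
    return t * s / np.sqrt(n)

# Hypothetical replicate D values from four experiments
d_values = [6.1, 7.0, 5.4, 6.9]
print(f"{np.mean(d_values):.1f} +/- {mean_error(d_values, 0.9):.1f}")
```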
The elements under investigation were divided into four groups, shown in different colours in Figure 5. The highest values, D^{min/fl}_{TE/Fe} ≈ (0.n-n), characterised Ti in hematite, V in magnetite and hematite, Cr in magnetite, Ni in magnetite and Ni-spinel, and Al in all three phases. The second group, with D^{min/fl}_{TE/Fe} ≈ (0.0n-0.n), included Ti, Mn, and Co in magnetite, Cr and Ni in hematite, and Co in Ni-spinel.
Surficial Crystal Enrichment with TE
The results of layer-by-layer LA-ICP-MS analysis of magnetite crystals over six laser passes are shown in Figures 6 and 7. Crystals from Experiments 1 and 5 (Tables 1 and A1) with smooth faces and low mean surface roughness (~20 nm from AFM data) were used. The first one or two passes showed markedly higher TE contents than those in the volume (Figures 6 and 7). This was not as relevant for elements with relatively high contents (1-2% each of Ni and Co) but was clearly essential for minor elements (≤0.1% of TE). The highest concentrations of TEs were associated mostly with the first layer removed, the thickness of which was estimated at 1-1.5 µm (see also [17,33]). The second laser pass partly re-ablated material spattered and redeposited after the first pass. Therefore, the scale of the effect observed was comparable to the size parameters of the NAP on the surface of hydrothermal magnetite crystals (~300 nm [30]).
Figures 6 and 7 (crystals from Experiments 1 and 5; Tables 1 and A1) show that the first two laser passes detect distinctly higher TE contents with respect to the volume concentration (dotted line), although the uncertainty for TEs in the two most surficial lines is relatively high (at a level of ±30% rel.). Figures 8 and 9 show the fractal character of the distribution of objects in the surficial layer of magnetite crystals (Tables 1 and A1). The heights of surficial objects relative to the surface area of minimal roughness rarely exceeded 100 nm (Figure 9). Figure 8 shows relatively uniform nonautonomous phase (NAP) coverage and two populations of surficial submicron-sized objects (Table 1). Figure 9 (atomic force microscopic image of the surface of the magnetite crystal from Experiment 5, Table 1) shows surface relief along the blue line with a level difference of ~120 nm.
Comparison with Previous Experimental Data
Ilton and Eugster [16] studied Mn, Zn, Cu, and Cd co-crystallisation in magnetite at 600-800 °C and 2 kbar pressure. Although only two experimental points were available for Cu, the result at 650 °C (D^{Mt/aq}_{Cu/Fe} = 1.3 × 10^{-5}) was surprisingly close to that obtained in this study at 450 °C (D^{Mt/aq}_{Cu/Fe} = 1.9 × 10^{-5}, Table 2). Ilton and Eugster obtained temperature dependences for Mn and Zn, which give D^{Mt/aq}_{Mn/Fe} = 3.7 × 10^{-3} and D^{Mt/aq}_{Zn/Fe} = 1.8 × 10^{-3} when extrapolated to 450 °C using the expressions given in [16]. The co-crystallisation coefficient for Zn conforms ideally to the one obtained here, that is, (1.7 ± 0.8) × 10^{-3} (Table 2), whereas a small offset was observed for Mn (7.6 × 10^{-3} in this work (Table A1) versus the extrapolated 3.7 × 10^{-3}).
Although in the general case the co-crystallisation coefficient depends on the composition of the multi-component system, in some special cases the effect of system complexity is negligible. This occurs when D does not change significantly owing to the chemical similarity of the elements co-crystallised, unmixing of coexisting solid phases (saturation in different kinds of anions, such as sulphide and oxide), and the validity of the model of a complex solvent [21,24,25]. The most reliable data may be obtained from the study of "pair" co-crystallisation coefficients of chemically similar elements, such as D_{Ni/Co}, D_{V/Cr}, D_{Mn/Zn}, D_{Mg/Zn}, etc. In addition, we partially confirmed the conclusion of [27] that the co-crystallisation coefficient is less variable than the partition coefficient D_p (Table 3) and, therefore, is preferred for the analysis of element partitioning in fluid-mineral systems. However, Cu and Zn present an exception to this rule (Table 3). Moreover, our data eliminate doubts about the application of distribution coefficients in the presence of components such as Ti, Al, and Cr [16], because in our experiments all elements were introduced into the system simultaneously.
In the present work, we dealt with the partitioning of the majority of the main discriminator elements for magnetite, which include Mg, Al, Ti, V, Cr, Mn, Co, Ni, Zn, and Ga, according to [14]. TE partitioning in magnetite in hydrothermal systems differs significantly from that in igneous magnetite. Among the elements studied in this work, only Al was clearly incompatible in igneous magnetite (D_p ≈ 0.1), whereas the other elements (except Cu, for which no data were presented) were compatible in magnetite and had D_p values from ~1 to 100 [14]. In contrast, our results revealed the compatibility of Al and the incompatibility of Zn and Mn in magnetite in a hydrothermal system (Table 2). The behaviour of Co, Ni, Cr, and V in the hydrothermal system did not contradict the distribution constants in magnetite-silicate melt systems.
TE Partitioning in Magnetite and Hematite: Implications for Natural Fluid Composition
Despite the lower reliability of the data obtained for hematite as compared to magnetite (Tables 2 and A1), it is essential to define the differences between them with respect to the partitioning of common TEs and the compositions of the fluids coexisting in equilibrium with both minerals. If crystallised from one and the same hydrothermal solution, magnetite and hematite might have similar contents of elements such as Al, Cu, and V, whereas hematite might be enriched in Ti and depleted in Co, Ni, Zn, Mn, and Cr as compared to magnetite (Table 2). Therefore, it does not necessarily follow that magnetite and hematite from the same ores with similar trace element contents were derived from the same ore fluids [12]. TE/Fe ratios in the fluid phase were calculated using the data presented in [12] and in Table 2 of the present work. The fluid coexisting with magnetite was enriched in Ti by a factor of 25-30 and highly depleted in Cr, Mn, Co, Ni, and Zn (by 1-3 orders of magnitude) relative to the fluid coexisting with hematite. This is impossible to explain under the assumption of one and the same fluid with a given Fe content. Therefore, it is hardly possible that magnetite and hematite having similar TE compositions were formed simultaneously in equilibrium with the same fluid phase. The data presented in [12] show that fluids equilibrated with magnetite and hematite differed in composition with respect to the majority of TEs (except possibly Al and V).
An analysis of the composition of the fluids involved in the formation of the Yuleken porphyry Cu-Mo deposit in northwestern China [15] was performed because the temperatures during the hydrothermal stages of ore formation (450-400 °C) are close to the temperature used in the present study. Average TE/Fe ratios calculated for fluids at three stages of hydrothermal activity are presented in Table 4. Changes in fluid composition appeared to accompany the transition from the early to the late potassic stage, as distinguished by the appearance of sulphides: Ti/Fe and Co/Fe ratios decreased, while Cu/Fe increased appreciably. It is interesting to note that Mn, Cu, and Zn were the most abundant metals in the fluids after Fe; they contributed ~10 to 30% of the iron content. The second most abundant TE was Ti, contributing about 5% of the Fe content, followed by V and Al at (4-5) × 10^{-2}%, and then Co (2 × 10^{-2}%), Cr (2 × 10^{-3}%), and Ni (2 × 10^{-4}%). However, this study does not intend to be exhaustive in application to different porphyry deposits, where magnetite stability and composition depend on several factors: changes in intensive parameters, late fluid overprinting, etc. Calculations performed for other deposits of magmatic-hydrothermal genesis [12,13] fully support the prevalence of Mn and Zn (Cu was rarely detected) and the intermediate position of Ti, whereas other TEs can vary in the interval (n × 10^{-1} to n × 10^{-4}%) of the Fe content. The observation that skarn fluids in equilibrium with magnetite containing minor Mn can have Mn/Fe ratios greater than one [16] should be supplemented with Zn/Fe ratios, because Zn has a co-crystallisation coefficient D^{Mt/aq}_{TE/Fe} even lower than that of Mn (Table 2).
Surficial Effect on TE Accumulation
The surficial effect is important and must be considered, especially in the study of dispersed mineral systems in both experimental and natural environments (sedimentary, diagenetic, seafloor Mn-Fe nodules, crusts, and so on). The XPS data revealed two valence states of iron (Fe^{3+} and Fe^{2+}), as in the magnetite volume structure but in a different proportion (1:1 instead of 2:1, respectively); together with the presence of hydroxyl ions and cation vacancies, this supports the hypothesis of the formation of a surficial NAP of oxyhydroxide composition [17,33]. The incorporation of metal cations such as Cr and Ni increased the amount of surface hydroxyls owing to the stronger Lewis acid activity of the cations substituting for iron [34].
The functional hydroxyl groups enhanced metal adsorption in the surficial crystal layer. The increased accumulation potential of the NAP was due to the presence of hydroxyl ions, unsaturated chemical bonds, and structural disorder (including metal and oxygen vacancies), which weakened the crystal-chemical control of element incorporation [29]. For instance, surficial goethite-like phases of oxyhydroxide composition may form, with structural incorporation of iron-group elements plus aluminium [35].
In this work, we rarely encountered discrepancies between data collected with bulk and local methods because we used relatively large crystals with a low specific surface area. However, the example of Cu showed that, even for crystals as large as 1-2 mm, surficial accumulation may increase the measured bulk Cu content by up to 1-2 orders of magnitude (Table A1). Copper contents in the volume of the crystals were determined to be ~3-17 µg/g. The same level of TE concentrations was observed for Zn and Cr in hematite, and these cases also demonstrated an excess of bulk over local content of up to an order of magnitude or more. Figures 6 and 7 leave no doubt that Cu and Ti are highly subject to the TE accumulation effect due to the presence of NAP, which might be the reason for the inconsistency of the data on Cu distribution in hydrothermal magnetite (see Section 1).
As the AFM study showed, TE surface segregation provides a significant contribution to the average concentration of microelements even when the TE-enriched surficial NAP is thin. The crystal surface contains nano-objects of different sizes, with the smaller surficial objects repeating the morphological features of the coarser ones (Figures 8 and 9). This fractality is important for the absorption of TEs by the NAP because it implies an increase in the real surface area compared to the topological surface. The data obtained show that one needs to be very careful with results acquired through bulk analyses when the crystals are small enough and their size is not controlled [36].
Conclusions
We present the first data on the partitioning of several discriminator elements for magnetite in mineral-hydrothermal fluid systems at 450 °C and 1 kbar. The three mineral phases studied were magnetite, Ni-spinel, and hematite, the last two supported by a limited data set. The element series Ni, Co, Al, Cr, and V and Ti, Ni, Al, Cr, and V are shown to be compatible in magnetite and hematite, respectively. On the other hand, Zn, Mn, and Cu are incompatible in magnetite, and Co, Zn, Mn, and Cu are incompatible in hematite. The highest values of the co-crystallisation coefficient, D_{TE/Fe} ≈ (0.n-n), characterised Ti in hematite, V in magnetite and hematite, Cr in magnetite, Ni in magnetite and Ni-spinel, and Al in all three phases. The second group, with D_{TE/Fe} ≈ (0.0n-0.n), includes Ti, Mn, and Co in magnetite, Cr and Ni in hematite, and Co in Ni-spinel. The third group is represented by Ti and Zn in Ni-spinel, Co in hematite, and Zn in magnetite, characterised by co-crystallisation coefficients on the order of 10^{-3}. The lowest values (n × 10^{-5}) are distinctive features of Mn and Zn in hematite and of Cu in all three phases. According to the co-crystallisation coefficients, magnetite and hematite crystallised from the same hydrothermal solution may display similar contents of Al, Cu, and V, whereas hematite must be enriched in Ti and depleted in Co, Ni, Zn, Mn, and Cr as compared to magnetite. Magnetite oxidation and its transformation into hematite under hydrothermal conditions will cause the release of Co, Ni, Zn, Mn, and Cr, will not affect the behaviour of Cu, Al, and V, and will facilitate Ti absorption by the solid phase.
A fair amount of data on the composition of magnetite and hematite of various origins has been published; nevertheless, these data were of limited use owing to the difficulties in adapting them to reconstruct the composition of ore-forming fluids. Using the partitioning data obtained in this study, the proportions of elements in the fluid involved in the formation of magnetite-containing deposits were evaluated. For the porphyry Cu-Mo deposit, Fe was the most abundant metal component in the fluids, followed by Mn, Cu, and Zn, which comprised ~10 to 30% of the iron content. A less abundant TE was Ti, which comprised ~5% of the Fe content, followed by V and Al at (4-5) × 10^{-2}%, and then Co (2 × 10^{-2}%), Cr (2 × 10^{-3}%), and Ni (2 × 10^{-4}%). The calculations performed to determine the fluid composition of magmatic-hydrothermal systems support the prevalence of Mn and Zn (and probably Cu) and the intermediate ratios of Ti, whereas other TEs can vary in the interval n × 10^{-1} to n × 10^{-4}% of the Fe content.
The surficial segregation of TEs contributes to the average concentration of microelements even when the TE-enriched surficial NAP layer is thin (~100 nm). At ppm and sub-ppm TE contents, the surficial accumulation effect increased the total TE content by up to 1-2 orders of magnitude above the concentrations inside the crystal, even for coarse crystals.
Further development of this work involves the estimation of absolute TE concentrations over a wide range of temperature and salt composition of fluids. For this purpose, we plan to use magnetite solubility data [37] and physicochemical modelling [38]. Insights gained from this study can be applied to various hydrothermal systems using modern databases on thermodynamic properties of solid substances (magnetite and its solid solutions), hydrothermal solutions, and gas phases. This approach can help to solve the problem of using the magnetite composition as a quantitative indicator of TE concentrations in ore-forming fluids.
Data Availability Statement:
The data presented in this study are contained within the article and available in the references listed.
Acknowledgments:
We thank Ekaterina Kaneva for the X-ray measurements. Cooperation with the scientists at the Shared Research Center 'Isotope-geochemical research' of the Vinogradov Institute of Geochemistry of SB RAS and at 'Ultramicroanalysis' of the Limnological Institute of SB RAS is greatly appreciated. The authors would like to thank two anonymous reviewers for their deep insight into the problem, constructive criticism, and useful comments.
Conflicts of Interest:
The authors declare no conflict of interest. Table A1. Compositions of the obtained crystals and trapped fluids and the calculated partition and co-crystallisation coefficients of trace elements in the mineral-hydrothermal fluid system at 450 °C and 1 kbar. a AAS: atomic absorption spectrometry (bulk analysis); EPMA: X-ray electron probe microanalysis; LA-ICP-MS: laser ablation-inductively coupled plasma-mass spectrometry; a dash indicates absence of data caused by an insufficient quantity of material for analysis or a TE concentration below the minimum detection limit (MDL). Values taken for the partitioning calculations in cases where several methods gave results are shown in bold. Errors were calculated as ε = t_{αn} × S_x/√n at a confidence level of 0.95, where t_{αn} is the Student's coefficient for n degrees of freedom and S_x is the root-mean-square deviation. b The accuracy of the method is estimated as ±10% rel. c Binary association Mt + Hm. | 2021-05-11T00:04:22.590Z | 2021-01-10T00:00:00.000 | {
"year": 2021,
"sha1": "3ab004fe7e10ca8b7f76e879cdb39b22621e9472",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-163X/11/1/57/pdf?version=1610358416",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "2160bc9fa9c9afc19956e32b2b00c70d3c485443",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
195246541 | pes2o/s2orc | v3-fos-license | Deriving and validating biomarkers associated with autism spectrum disorders from a large-scale resting-state database
Resting-state functional magnetic resonance imaging (R-fMRI) has been used to investigate brain activity related to autism spectrum disorder (ASD). In this study, we applied information from a large-scale dataset, the Autism Brain Imaging Data Exchange (ABIDE), to clinical applications. We recruited 21 patients with ASD and 23 individuals with neurotypical development (TD). We applied ASD biomarkers derived from the ABIDE datasets and subsequently investigated the relationship between the MRI biomarkers and indicators from clinical screening questionnaires: the Social Responsiveness Scale (SRS) and the Swanson, Nolan, and Pelham Questionnaire IV. The results indicated that the biomarkers generated from the default mode and executive control networks differed significantly between the participants with ASD and TD. In particular, the biomarkers derived from the default mode network were negatively correlated with the raw scores and model factors of the SRS. In summary, this study transferred the efforts of the global autism research community to clinical applications and identified connectivity-based biomarkers in ASD.
cingulate cortex/precuneus [6][7][8] . Studies using R-fMRI have reported that brain regions within the default mode network (DMN) overlap with brain regions associated with theory of mind (ToM). Thus, using R-fMRI to measure FC may provide quantitative insights into the social-cognitive ability of patients with ASD [9][10][11][12] . For example, Assaf et al. demonstrated that FC values of specific brain regions were highly correlated with SRS scales in a group of patients with ASD. Weng et al. determined that FC strength between the posterior cingulate cortex and the temporal lobe was negatively correlated with social impairment based on the Autism Diagnostic Interview-Revised (ADI-R). These studies have highlighted the potential applications of R-fMRI for investigating the complex cognitive function underlying ASD.
Since 2012, the Autism Brain Imaging Data Exchange (ABIDE) initiative has collected more than 2000 R-fMRI datasets from patients with ASD and individuals with neurotypical development (TD) across international laboratories 13 . ABIDE allows researchers to investigate the brain mechanisms underlying ASD and to identify ASD-related biomarkers through R-fMRI 14 . This study applied information from the ABIDE initiative to the investigation of a local cohort. We derived R-fMRI biomarkers from ABIDE and computed the corresponding metrics for the local datasets. We then analyzed and assessed the performance of the biomarkers and the relationships between social responsiveness and functional brain networks in patients with ASD.
Methods and Materials
ABIDE: R-fMRI datasets.
This study included two databases, namely the ABIDE and the Kaohsiung Medical University Hospital (KMUH) databases. Table 1 lists the details. This study included 1112 ABIDE I datasets from 17 sites and 983 ABIDE II datasets from 16 sites. The ABIDE datasets were obtained online 15 . In total, 2095 ABIDE datasets were used (ASD: 1001 and TD: 1094; 5-64 years). These datasets are anonymous and in accordance with HIPAA guidelines.
ABIDE: Preprocessing and RSN10 networks.
The procedure for preprocessing the R-fMRI datasets and generating brain FC networks is displayed in Fig. 1(a). Anatomical 3D volumes and R-fMRI 4D volumes were processed in the FMRIB Software Library (FSL) environment. The anatomic volumes were preprocessed using FSL-BET for brain extraction and subsequently normalized to Montreal Neurological Institute (MNI) coordinates. For the R-fMRI volumes, timing inconsistencies and temporal image shifts were corrected using the slice timing and image realignment functions in FSL. Subsequently, the volumes were registered to the preprocessed anatomic volumes using FSL-BBreg and normalized to the MNI space using the nonlinear registration tool FSL-FNIRT. The voxel size was resampled to 2 × 2 × 2 mm^3, and the volumes were smoothed using a Gaussian filter with a full width at half maximum of 6 × 6 × 6 mm^3. The subsequent signal processing involved applying a temporal bandpass filter (0.01-0.08 Hz) to the R-fMRI volumes and regressing out 24 motion parameters obtained from the realignment procedure 16 and the five principal components with the highest variance estimated from voxel time series of the white matter and cerebrospinal fluid using CompCor 17 . We subsequently applied dual-regression analysis 18 using the FSL general linear model (GLM) and the 10 brain resting-state networks (RSN10), "PNAS_Smith09_rsn10.nii.gz," provided by Smith et al. 19,20 as a reference. The dual-regression analysis was used to assess the FC of each voxel, estimated from the GLM parameters normalized by the residual within-subject noise 18,21 . The procedure generated 10 whole-brain RSN maps for each dataset. The networks (RSN1 to RSN10) correspond respectively to the primary visual, occipital pole, lateral visual, default mode (DMN), cerebellum, sensorimotor, auditory, executive control (ECN), right frontoparietal, and left frontoparietal networks.
Figure 1. Analysis procedure for obtaining the masks of the 30 biomarkers from the ABIDE datasets: (a) producing 10 resting-state networks using a 3D T1 volume and a 4D R-fMRI volume; (b) using a two-sample t-test to obtain 30 masks.
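At its core, dual regression is two ordinary least-squares steps per subject. The following is a minimal numpy sketch of that two-stage idea, for illustration only: the paper itself uses FSL's GLM tooling, and the plain least-squares form below (without the residual-noise variance normalization mentioned above) is a simplifying assumption:

```python
import numpy as np

def dual_regression(data_2d, templates_2d):
    """Two-stage dual regression (simplified sketch).

    data_2d      : (n_timepoints, n_voxels) subject R-fMRI data
    templates_2d : (n_networks, n_voxels) group RSN spatial maps (e.g., RSN10)

    Stage 1: regress the spatial templates against each volume to get
             one time series per network.
    Stage 2: regress those time series against each voxel's signal to get
             subject-specific spatial maps.
    """
    # Stage 1: spatial regression -> network time series (n_timepoints, n_networks)
    ts = data_2d @ np.linalg.pinv(templates_2d)
    # Standardize the time series so stage-2 betas are comparable across networks
    ts = (ts - ts.mean(axis=0)) / ts.std(axis=0)
    # Stage 2: temporal regression -> subject maps (n_networks, n_voxels)
    maps = np.linalg.pinv(ts) @ data_2d
    return ts, maps

# Toy example: 150 volumes, 1000 voxels, 10 networks
rng = np.random.default_rng(0)
data = rng.standard_normal((150, 1000))
templates = rng.standard_normal((10, 1000))
ts, subject_maps = dual_regression(data, templates)
print(subject_maps.shape)  # (10, 1000)
```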
ABIDE: Procedures deriving 30 biomarkers.
The block diagrams of the generation of the 30 R-fMRI biomarkers, R1 to R10 and A1 to A20, are displayed in Fig. 1(b). The quantities of the biomarkers were calculated based on 30 masks. The masks for R1 to R10, termed R-masks, were generated by identifying voxels with Z values higher than 4 in the RSN10 template (PNAS_Smith09_rsn10.nii.gz) provided by Smith et al., and the masks for A1 to A20, termed A-masks, were created on the basis of the group difference of the ABIDE RSN10 maps. The total number of RSN10 maps from the ABIDE datasets was 2095 (ASD: 1001 and TD: 1094). We performed a two-sample t-test on the ABIDE RSN10 FC maps with threshold-free cluster enhancement using FSL-randomise with 5000 permutations for multiple comparisons. Subsequently, we identified the voxels satisfying two criteria: (1) FC values significantly different [family-wise error (FWE)-corrected p < 0.05] between the ASD and TD groups and (2) location inside the corresponding R-masks; these voxels formed the A1-A20 masks (ASD > TD: A1 to A10 and ASD < TD: A11 to A20). We subsequently calculated the averaged FC values of the RSN10 maps within the masks to generate 30 biomarkers for each participant, referred to as R1 to R10 and A1 to A20 hereinafter. The procedure is illustrated in Fig. 2.
KMUH: R-fMRI datasets. For the local cohort, 44 individuals (ASD: 21 and TD: 23; 12-22 years) were recruited from the active follow-up psychiatric clinic at KMUH and from the community. Both groups of participants were between 12 and 22 years old and had scores of >70 on either the full-scale Wechsler Adult Intelligence Scale or the full-scale Wechsler Intelligence Scale for Children, Fourth Edition. The participants in the ASD group had been diagnosed with autistic disorder on the basis of the DSM, Fourth Edition, Text Revision symptom criteria in their early childhood in accordance with the Autism Diagnostic Observation Schedule 22 ; their ASD diagnoses were confirmed using the DSM-5 before they were enrolled into this study. This study was approved by the Institutional Review Board of Kaohsiung Medical University and Kaohsiung Medical University Hospital. Informed consent was obtained from the participants' parents and the participants themselves in accordance with the guidelines of the Institutional Committee on Clinical Investigation. The participants underwent imaging experiments performed on a 3.0 T whole-body MRI system (Siemens, Skyra, Germany), equipped with a 32-channel head coil, at Kaohsiung Veterans General Hospital. We obtained brain structural images and R-fMRI images using a three-dimensional (3D) magnetization-prepared rapid gradient-echo (MP-RAGE) sequence and a gradient-echo echo planar imaging (EPI) sequence, respectively. The imaging parameters for 3D MP-RAGE were TR = 2000 ms, TE = 2.07 ms, FOV = 256 mm, flip angle = 9°, sagittal slices = 160, matrix size = 256 × 256, voxel size = 1 × 1 × 1 mm^3, and TI = 900 ms. The imaging parameters for EPI were TR = 2300 ms, TE = 30 ms, FOV = 194 mm, slice thickness = 3 mm, axial slices = 40, measurements = 150, in-plane resolution = 3.03 × 3.03 mm^2, and matrix size = 64 × 64. The total scan time of EPI was approximately 5 min.
KMUH: Social Responsiveness Scale and Swanson, Nolan, and Pelham Questionnaire IV.
The parents of the participants from KMUH completed the Chinese versions of the SRS and the Swanson, Nolan, and Pelham Questionnaire (SNAP-IV). The SRS is a 65-item scale that measures the severity of autism spectrum symptoms as they occur in natural social settings 5 . We obtained the Chinese version of the SRS from the developer under a license for academic use. The psychometric properties of the Chinese version of the SRS were validated by Taiwanese researchers 23 . The total raw SRS score and the five subscores reflecting the factors in the model (viz., social awareness, social cognition, social communication, social motivation, and autistic mannerisms) were derived for analysis. The SNAP-IV comprises 26 items regarding the symptoms of inattention, hyperactivity/impulsivity, and oppositional defiant disorder (ODD). The Chinese version of the SNAP-IV is a reliable, valid instrument for rating the symptoms of inattention, hyperactivity/impulsivity, and ODD in both clinical and community settings; its psychometric properties for Taiwanese populations have been validated 24 . Three SNAP-IV scores (viz., inattention, hyperactivity/impulsivity, and ODD) for each participant were derived for analysis. Table 2 presents the average SRS and SNAP-IV scores for the KMUH datasets.
Statistical analysis. The total numbers of RSN10 maps from the ABIDE and KMUH datasets were 2095 (ASD: 1001 and TD: 1094) and 44 (ASD: 21 and TD: 23), respectively. We calculated the biomarkers for each RSN10 dataset and subsequently obtained two matrices (ABIDE: 2095 × 30 and KMUH: 44 × 30) for further statistical analysis. The differences in the biomarkers between the ASD and TD groups were assessed using the t-test. For the KMUH datasets, we performed a correlation analysis to investigate the relationships between the 30 biomarkers and the SRS and SNAP scores, and a receiver operating characteristic analysis to evaluate the classification performance of the biomarkers.
Figure 3(a) shows representative slices of the RSN10 templates (Z > 4) generated from PNAS_Smith09_rsn10.nii.gz. Figure 4 displays the masks of the 30 biomarkers. The R-masks were produced using the RSN10 template (Z > 4), and the A-masks were derived from the group analysis of the ABIDE RSN10 FC maps (FWE-corrected p < 0.05, two-sample t-test). The volumes of the masks are listed in Table 3. The volumes of the A3, A4, and A18 masks, which ranged from 34 to 44 mL, were the three largest among the A-masks; they were generated from the lateral visual, default mode, and ECN networks, respectively. Table 3 lists the mean and standard deviation of the 30 biomarkers for the ABIDE and KMUH datasets. The significance of the difference between the ASD and TD groups was assessed using the t-test. For the ABIDE datasets, the R-biomarkers derived from the RSN10 template provided by Smith et al. (viz., R3, R4, R5, R6, and R8) were significantly different between the ASD and TD groups (p < 0.05). All 20 A-biomarkers differed significantly between the ASD and TD groups (p < 0.01); however, we considered these results strongly biased and excluded them from Table 3. The statistics regarding the ABIDE A-biomarkers were likely overfitted because the A-masks were derived from the differences between the groups in the ABIDE datasets.
For the KMUH datasets, five biomarkers (viz., R9, A4, A14, A15, and A18) differed significantly between the ASD and TD groups (p < 0.05, t-test). Of these five biomarkers, A4 and A14 were both derived from the DMN, and R9, A15, and A18 were obtained from the right frontoparietal, cerebellum, and ECN networks, respectively. Table 4 lists Pearson's correlation coefficients between the 30 R-fMRI biomarkers and the nine SRS and SNAP questionnaire metrics in the KMUH datasets. Figure 5 displays the color-coded matrix based on Table 4. The DMN-derived biomarker A4 (TD > ASD) and all five SRS metrics were negatively correlated (r = −0.333 to −0.420). In particular, the false discovery rate adjusted p values were statistically significant (adjusted p < 0.05) in five cases (viz., A4 versus SRS total, awareness, cognition, social communication, and autistic mannerism).
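A minimal sketch of this correlation screen (Pearson correlations between each biomarker and each questionnaire metric, with Benjamini-Hochberg false discovery rate adjustment) using scipy and statsmodels; the array shapes mirror the 44 × 30 and 44 × 9 matrices described above, but the random data and variable names are placeholders:

```python
import numpy as np
from scipy.stats import pearsonr
from statsmodels.stats.multitest import multipletests

# biomarkers: (44 subjects, 30 biomarkers); scores: (44 subjects, 9 metrics)
rng = np.random.default_rng(1)
biomarkers = rng.standard_normal((44, 30))
scores = rng.standard_normal((44, 9))

r = np.zeros((30, 9))
p = np.zeros((30, 9))
for i in range(30):
    for j in range(9):
        r[i, j], p[i, j] = pearsonr(biomarkers[:, i], scores[:, j])

# Benjamini-Hochberg FDR adjustment across all tests
reject, p_adj, _, _ = multipletests(p.ravel(), alpha=0.05, method="fdr_bh")
p_adj = p_adj.reshape(p.shape)
print("significant (biomarker, metric) pairs:", np.argwhere(p_adj < 0.05))
```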
Results
A4, A15, and A18, obtained from the DMN, cerebellum, and ECN networks, respectively, exhibited significant relationships according to both the difference tests and the correlation analyses. We calculated the receiver operating characteristic (ROC) curves for distinguishing ASD using the biomarkers of the three networks (R4, R5, R8, A4, A15, and A18). Figure 6 displays the ROC curves obtained using these six biomarkers. The areas under the curve (AUCs) were (0.590, 0.745, p < 0.05), (0.588, 0.646), and (0.646, 0.677) for (R4, A4), (R5, A15), and (R8, A18), respectively. Figure 7 presents the masks of the three networks using different colors to highlight the R-masks and the A-masks. Although R4 and A4 were both derived from the DMN, the AUC of A4 was significantly higher than that of R4 (p < 0.05) 25 . The results indicate that A4, derived from the ABIDE datasets, was an effective indicator for classifying ASD, and that the FC of the brain regions in the A4 mask was correlated with cognitive impairments in patients with ASD.
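A minimal sketch of the kind of ROC analysis reported above, scoring a single biomarker as a classifier of group membership with scikit-learn; the synthetic biomarker values are placeholders, and negating A4 (which is lower in ASD) so that larger scores indicate ASD is an assumption of this sketch:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(2)
# 0 = TD, 1 = ASD; 23 TD and 21 ASD subjects as in the KMUH cohort
labels = np.array([0] * 23 + [1] * 21)
# A4 was lower in ASD (TD > ASD), so negate it to score "risk of ASD"
a4 = np.concatenate([rng.normal(1.0, 0.5, 23), rng.normal(0.4, 0.5, 21)])

auc = roc_auc_score(labels, -a4)
fpr, tpr, thresholds = roc_curve(labels, -a4)
print(f"AUC for (negated) A4: {auc:.3f}")
```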
Discussion
Early in the development of this study, we collected R-fMRI datasets to create the KMUH cohort and explored the brain regions associated with the symptoms of ASD. We reviewed the literature, implemented pipelines to reconstruct the RSN10 maps, and used the two-sample t-test to compare the ASD and TD datasets (n = 44). The results indicated that no brain voxels were statistically significant (FWE-corrected p < 0.05). Meanwhile, ABIDE commenced its open-science project to provide a large-scale R-fMRI ASD database. We subsequently sought approaches to transfer ABIDE information to clinical applications involving a local cohort. We ultimately formulated an approach to extract biomarkers significantly different between ASD and TD from the ABIDE database and then validate the performance of the biomarkers using the KMUH cohort. Finally, we used correlation analysis to examine potential relationships between the biomarkers and social behaviors estimated based on clinical screening questionnaires.
We systematically analyzed the 10 RSNs of the brain in the KMUH dataset by using the RSN10 template. Although more RSNs have been reported in the literature, the RSN10 networks have been consistently reported regardless of variations in acquisition protocols and analysis methods. The benefits of the analysis based on the RSN10 template are multifold. The template is publicly available; thus, researchers can compare results based on it. Smith et al. additionally mapped RSN10 onto behavioral domains on the basis of 7342 BrainMap activation images 26 . The mapping aids the interpretation of RSN10 components; for example, based on the behavioral mapping of RSN10, the biomarker A8 could be associated with action-inhibition, cognition, emotion, and perception-somesthesis-pain. Finally, RSN10 is now widely used in the R-fMRI research community. Although this study analyzed data using lab-made pipelines based on FSL, we found that the method for producing RSN10 maps was similar to that offered by the functions of the open-source analysis project, the Configurable Pipeline for the Analysis of Connectomes (C-PAC) 27 . The pipelines, as well as the 30 masks of this study, are available 28 . The open-science materials and tools, including the RSN10 template, C-PAC, ABIDE, and our pipeline, can be used to replicate the methods of this study.
Table 3. The characteristics of the biomarkers in the ABIDE and KMUH datasets. * Significant difference between the biomarkers of TD and ASD (p < 0.05). ** Significant difference between the biomarkers of TD and ASD (p < 0.01).
From the statistical results, we identified three sets of biomarkers that may be involved in the symptoms of ASD: (ABIDE, FC difference: R3, R4, R5, R6, and R8), (KMUH, FC difference: R9, A4, A14, A15, and A18), and (KMUH, FC-behavioral score correlation: A4). We observed that three networks, the DMN (R4, A4), cerebellum (R5, A15), and ECN (R8, A18), were frequently present in the three sets. The results suggest that these three networks could be the major resting-state networks associated with ASD symptoms. The ABIDE-derived features A4 (DMN), A15 (cerebellum network), and A18 (ECN) reached statistical significance in the difference tests on the KMUH dataset. The AUC results of A4, A15, and A18 were higher than those of R4, R5, and R8. The higher accuracy of the three A-biomarkers implies that the three R-biomarkers were not as sensitive as the three A-biomarkers for identifying patients with ASD, and that the brain regions indicated by the three masks may be the primary source of ASD.
Default mode network: A4. The results of this study indicate that the FC of the DMN in the ASD group was weaker than that of the TD group. These results are in agreement with those of previous investigations 9,[29][30][31] . The A4 mask includes several brain regions: the mPFC, posterior cingulate cortex, left occipital cortex, and right MTG. The levels of social awareness, social cognition, social communication, social motivation, and autistic mannerisms from the SRS are all negatively correlated with the FC strength of the A4 mask. This finding is consistent with that of Assaf et al., who suggested that FC strength among the mPFC/anterior cingulate cortex (ACC), precuneus, and DMN correlated negatively with the SRS; in particular, weak FC strength of the ACC was correlated with higher levels of autistic mannerisms 9 .
Cerebellum network: A15. The A15 biomarker of the cerebellum network was higher in the ASD group than in the TD group. The results suggest the cerebellum's potential role in social-cognition behaviors, consistent with the findings of previous investigations. In large-scale fMRI studies on social cognition and the cerebellum, Van Overwalle et al. found robust clusters associated with social-cognitive studies, and their FC analysis identified the crucial role of the cerebellum in social mentalizing [32][33][34] . Previous fMRI studies have revealed that cerebellum activation in patients with ASD differs from that in TD individuals 35 ; patients with ASD were reported to show increased activation of the cerebellothalamic network in a visually guided saccade experiment, and Allen et al. identified increased and widespread activation of the cerebellum in patients with ASD compared with TD controls.
Executive control network: A18. The ECN covers parts of the medial-frontal lobe, including the ACC, dorsolateral prefrontal cortex, superior frontal lobe, and frontal pole 19,38 . Our results indicated that the FC values of the ECN in the ASD group were higher than those in the TD group. The derived A18 mask covers the ACC, lateral frontal gyrus, and frontal pole. The findings of the correlation analysis indicated that A18 strength was positively correlated with the levels of hyperactivity/impulsivity and inattention behavioral problems. This network is related to several cognition paradigms, such as action-inhibition, cognition, emotion, and perception-somesthesis-pain 19 . The executive control function of attention engages more complex mental operations when monitoring and resolving conflicts among surrounding stimuli. Fan et al. suggested that attentional deficits contribute to the abnormalities of neuropathology in ASD and hypothesized that the attentional network system plays a primary role in the pathophysiology of ASD 39 . Keehn et al. indicated that the orienting network is impaired in children with ASD 40 , and the orienting deficit may be partly explained by the ECN.
In summary, this study established an approach for applying information from the large-scale ABIDE database to clinical investigations of local cohorts. We obtained FC biomarkers associated with ASD; they were related to the DMN, the cerebellum network, and the ECN. The results indicated that the social responsiveness of the participants was significantly correlated with the biomarkers related to the DMN.
Data Availability
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request. | 2019-06-22T13:41:37.891Z | 2019-06-21T00:00:00.000 | {
"year": 2019,
"sha1": "5272473e553fa230ccd36a1bad273d9216b86abc",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-019-45465-9.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c43ac563e1fade611d199da449b6754f7a6f4dda",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
156888897 | pes2o/s2orc | v3-fos-license | Value-at-Risk: The Effect of Autoregression in a Quantile Process
Value-at-Risk (VaR) is an institutional measure of risk favored by financial regulators. VaR may be interpreted as a quantile of future portfolio values conditional on the information available, where the most common quantile used is 95%. Here we demonstrate Conditional Autoregressive Value at Risk (CAViaR), first introduced by Engle and Manganelli (2001). CAViaR suggests that negative/positive returns are not i.i.d. and that there is significant autocorrelation. The model is tested using data from 1986-1999 and 1999-2009 for GM, IBM, XOM, and SPX, and then validated via the dynamic quantile test. Results suggest that the tails (upper/lower quantiles) of a distribution of returns behave differently than the core.
Introduction
Several recent financial disasters have made clear the necessity for a diverse set of risk management tools. Traditional models of risk management often rely on trivial probabilistic tools and often fail to relax key assumptions of the underlying statistics. An effective tool for risk management should be a measure of uncertainty robust to a large set of situations. Moreover, it should be suitable for its users, adaptive to complex situations, and compatible with various sample sizes. Risk management is not simply a tool to establish the upper bound on a loss, but also a preventative measure that should lead to the development of an informed decision-making process.
Perhaps the best-known tool for risk management amongst finance practitioners is Value at Risk (VaR). Conceptually, VaR measures the supremum of a portfolio's loss at a particular level of confidence. Consider a portfolio of unitary value with an annual standard deviation of 15%. The 95% daily VaR is simply the product of the total value, a quantile multiplier, and the standard deviation: $1 × 2.35 × 0.15 = $0.3525. A 95% daily VaR of 0.3525 means that, if our day were hypothetically conducted an infinite number of times, the loss of our one-dollar portfolio would be greater than 0.3525 with probability 0.05.
We say with 95% confidence that, on a given day, the maximum loss implied by VaR is 0.3525 for the one-dollar portfolio.
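A minimal sketch of this i.i.d.-normal ("variance-covariance") style of VaR calculation. The multiplier is taken here as the standard normal quantile for the chosen confidence level, and the annual volatility is scaled to daily by √252; both are conventional assumptions rather than details fixed by the example above:

```python
from scipy.stats import norm

def parametric_var(value, sigma_annual, confidence=0.95, trading_days=252):
    """One-day VaR under i.i.d. normal returns (variance-covariance method).

    Assumes the daily sigma is the annual sigma divided by sqrt(trading_days)
    and the multiplier is the standard normal quantile z_confidence.
    """
    sigma_daily = sigma_annual / trading_days ** 0.5
    z = norm.ppf(confidence)          # ~1.645 for 95%
    return value * z * sigma_daily

print(parametric_var(1.0, 0.15))      # daily 95% VaR of a $1 portfolio
```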
A more rigorous definition of VaR is a particular quantile of future portfolio values, conditional on current information. In particular, we say that P(y_t < VaR_t | Ω_{t−1}) = α, where y_t is the time-t return, Ω_{t−1} is the set of available information (in a weak sense), and α is the confidence level or probability. The immediate considerations for a functional model include a closed-form representation, a set of well-defined intermediary parameters, and a test to validate the proposed model. In advance of our model proposition(s), we will review and evaluate existing models for VaR.
The remainder of the paper is organized as follows. Section II will introduce and evaluate existing models for Value at Risk, all of which will guide us in constructing CAViaR. Section III will cover the notion of Conditional Autoregression, the understanding of which is critical to realistic non-i.i.d processes. Section IV will introduce various methods of testing quantile regression, which will enable us to compare the set of well-known models with ours. Section V will focus on the empirical test of CAViaR on IBM, GM, and SPX time series data. We will conclude with section VI.
Existent Models
VaR has become a quintessential tool for portfolio management because it enables funds to estimate the cost of risk and allocate it efficiently. Moreover, a growing number of regulatory committees now require institutions to monitor and report VaR frequently. Such a measure discourages excessive leverage and increases transparency of the "worst-case scenario". While VaR is widely adopted, methods of estimation vary across both markets and firms. For ease and convenience of terminology, we will refer to all institutions and funds concerned with monitoring VaR as "holders".
One component that often varies between different VaR models is the method by which the distribution of portfolio returns is estimated. A rudimentary example was introduced at the beginning of the paper, in which returns were assumed to be independently and identically distributed (i.i.d.). As we will see, however, returns almost never follow a martingale process, but rather are Markov. A portfolio's performance on any given day almost always affects the performance on subsequent days. Thus, the probability of observing a specific return or variance as an event depends on the probability of observing the same event one period prior. The calculation of returns falls within two categories: 1. Factor Models: Here, a universe of assets is studied for its factors, all of which are correlated.
Thus, the time variation in the risk of the portfolio is derived from the volatility of the correlations. A well-known example is the Carhart four-factor extension of the Fama-French model. The approach, however, assumes that negative returns behave the same as non-negative returns. Perhaps an even more alarming assumption of such models is homoscedasticity between returns per unit of risk.
A well known example is the Fama-French four-factor model. The approach, however, assumes that negative returns follow the same approach as non-negative returns. Perhaps an even more alarming assumption by such models is the homoscedasticity between returns per unit risk.
2. Portfolio Models:
Here, VaR is constructed instantaneously using statistical inference on past portfolios. Then, quantiles are forecast via several approaches, including Generalized Autoregressive Conditional Heteroscedasticity (GARCH), exponential smoothing, etc., most of which incorrectly assume normality. Moreover, this set of models assumes that, after a certain amount of time, a particular historical return has probability zero of recurring.
We can deduce without an empirical demonstration that portfolio models will underestimate VaR after some time T, and that factor models will fail to account for autoregression. Interestingly, the last decade has motivated the introduction of extreme quantile estimation and the notion of asymptotic tail distributions. Many of these models, however, are only representative of especially low (< 1%) quantiles and do not relax the weak i.i.d. assumption.
Conditional Autoregressive Value-at-Risk
We now address many of the concerns above with CAViaR. In particular, we will study the asymptotic distribution, account for autocorrelation, and do so under various regimes. Suppose that there exists an observable vector of returns, {y_t}_{t=1}^{T}. We denote by θ the probability associated with the Value-at-Risk. Letting x_t be a vector of time-t observable variables (i.e., returns) and β be a vector of unknown parameters, a Conditional Autoregressive VaR model may take the following form:
f_t(β) = β_0 + Σ_{i=1}^{q} β_i f_{t−i}(β) + Σ_{j=1}^{r} β_{q+j} l(x_{t−j})
Interpretation: the quantile of portfolio returns at time t is a function of not only past returns but also past quantiles of returns; that is, f_t(β) ≡ f_t(x_{t−1}, β_θ) denotes the time-t θ-quantile. The function l in the third term links the set of available information at t − j to the quantile of returns at t. The autoregressive terms β_i f_{t−i}(β) create a smooth path between time-oriented quantiles. The first term, β_0, is simply a constant. The example provided at the beginning of the paper, which does not account for autoregression, would simply remain a constant: f_t(β) = β_0. Now that we have developed an understanding of the basic form of a CAViaR model, we will explore a few examples.
Adaptive Model
In general, an adaptive model follows the form
f_t(β_1) = f_{t−1}(β_1) + β_1 {[1 + exp(G[y_{t−1} − f_{t−1}(β_1)])]^{−1} − θ},
where G is some large positive number; as G → ∞, the bracketed term converges to the step function I(y_{t−1} ≤ f_{t−1}(β_1)) − θ. The adaptive model successfully accounts for increases in expected VaR. Whereas the traditional model would change only with a change in portfolio value, the adaptive model increases the Value-at-Risk by one unit whenever it is exceeded. Moreover, it decreases VaR by one unit if the initial estimate proved too high. It is clear that such a conditional adjustment, in the form of a step function, provides a more accurate myopic estimation. However, because all changes are of magnitude one, the adaptive model overlooks large deviations in returns upwards or downwards. For example, consider a state in which the portfolio halved in value on three consecutive days. While the portfolio has been left at an eighth of its value, the VaR implied by the adaptive model only increased by three units from its value at t = 0.
Symmetric Absolute Value
The SAV model takes the form
f_t(β) = β_1 + β_2 f_{t−1}(β) + β_3 |y_{t−1}|.
The SAV is an autoregressive model in which a change in returns, regardless of direction, results in a change in VaR. What is particularly useful about this model is its ability to generalize movement in portfolio value. However, it is because of this very feature that SAV should not be used as a primary tool for measurement. Consider a series of large deviations in portfolio value, alternating upwards and downwards. While the long-run change in value is zero, the VaR implied by SAV would be unrealistically high. Similarly, a series of small deviations would imply an unrealistically low VaR.
Asymmetric Slope
The AS model takes the form
f_t(β) = β_1 + β_2 f_{t−1}(β) + β_3 (y_{t−1})^+ + β_4 (y_{t−1})^−,
where (y)^+ = max(y, 0) and (y)^− = −min(y, 0). The AS model is intended to capture the asymmetric leverage effect. Specifically, it was designed to detect the tendency for volatility to be greater following a negative return than following a positive return of equal magnitude. The model relies on the magnitude of the error, rather than the squared error as in GARCH.
Indirect GARCH(1,1)
The Indirect GARCH model takes the form
f_t(β) = (β_1 + β_2 f_{t−1}^2(β) + β_3 y_{t−1}^2)^{1/2}.
While the GARCH model is estimated by maximum likelihood, Indirect GARCH is estimated via quantile regression.
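All four specifications are one-step recursions in f_t, so they share a common implementation skeleton. Below is a minimal Python sketch of the SAV recursion (the other regimes differ only in the update line). The sign convention (f_t is the lower θ-quantile itself, typically negative) and the initialization at the empirical quantile of an initial window are assumptions, chosen as common conventions:

```python
import numpy as np

def caviar_sav(y, beta, theta=0.05, init_window=30):
    """Symmetric Absolute Value CAViaR recursion:
    f_t = b1 + b2 * f_{t-1} + b3 * |y_{t-1}|.

    y    : array of returns
    beta : (b1, b2, b3)
    Returns the path of conditional theta-quantiles f_t (negative values
    correspond to losses under this sign convention).
    """
    b1, b2, b3 = beta
    f = np.empty(len(y), dtype=float)
    # Assumption: seed the recursion with the empirical quantile
    # of the first `init_window` returns.
    f[0] = np.quantile(y[:init_window], theta)
    for t in range(1, len(y)):
        f[t] = b1 + b2 * f[t - 1] + b3 * abs(y[t - 1])
    return f

y = np.random.default_rng(3).standard_normal(500) * 0.01
f = caviar_sav(y, beta=(-0.001, 0.9, -0.3), theta=0.05)
print(f[-5:])
```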
Regression Quantiles
Thus far, we have understood the general form of a Conditional Autoregressive Value at Risk model and have seen several possible forms. The primary difference between any pair of CAViaR models is the organization and treatment of the regressive parameters β_i. In the case of SAV, we were interested in a β reflecting the magnitude of a change in portfolio returns, whereas in AS we were interested only in the extreme ends of a series of returns. However, how are the underlying parameters actually measured? Koenker and Bassett (1978) introduced the notion of a sample quantile in a linear regression. Consider a sample of observations y_1, ..., y_T generated by the linear model
y_t = x_t′β_0 + ε_{θt}, with Q_θ(ε_{θt} | x_t) = 0,
where x_t is a length-p vector of regressors (i.e., returns) and Q_θ(ε_{θt} | x_t) is the θ-quantile of ε_{θt} conditional on x_t. Consider the linear representation of an adaptive process: f_t(β) = x_t′β. The θ regression quantile is the β̂ satisfying the objective
min_β (1/T) Σ_{t=1}^{T} [θ − I(y_t < f_t(β))] [y_t − f_t(β)].
Qualitatively, we adjust β until the fitted quantile is exceeded, or "hit", exactly the right fraction of the time; accounting for the set of available information, this condition is satisfied when the expected value of the hit indicators, net of θ, is 0.
Theorem 1 (Consistency). For the generalized model (8), β̂ → β_0 in probability, where β̂ solves the objective above.
Proof. Please see Appendix.
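A minimal sketch of this regression-quantile ("pinball") objective for the linear specification f_t(β) = x_t′β, minimized with a derivative-free simplex routine in the spirit of the optimization methodology described later; the synthetic data and starting values are placeholders:

```python
import numpy as np
from scipy.optimize import minimize

def rq_criterion(beta, X, y, theta=0.05):
    """Regression quantile objective:
    (1/T) * sum_t [theta - I(y_t < f_t)] * (y_t - f_t),
    with the linear specification f_t = x_t' beta."""
    u = y - X @ beta
    return np.mean((theta - (u < 0)) * u)

rng = np.random.default_rng(4)
X = np.column_stack([np.ones(300), rng.standard_normal(300)])
y = X @ np.array([0.0, 1.0]) + rng.standard_normal(300)
res = minimize(rq_criterion, x0=np.zeros(2), args=(X, y, 0.05),
               method="Nelder-Mead")
print(res.x, res.fun)
```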
Testing Quantile Models
While the expanded set of models now accounts for the autocorrelation of returns as well as large deviations, the models must be tested. Given a new observation, the model requires that
P(y_t < f_t(β) | Ω_{t−1}) = θ.
If such a condition holds for the entirety of the time series, then the model is proven valid. As shown by Christoffersen (1998), testing this condition is equivalent to testing the independence of the indicator variables I(y_t < f_t(β)), which under a correct model are i.i.d. Bernoulli(θ). While this provides a natural test of forecasting models, it does not fully assess the validity of the quantiles. To test conditional quantile models, we introduce a representative indicator variable that changes with the quantile itself. Define a sequence of independent random variables {z_t}_{t=1}^{T} such that z_t = I(y_t < f_t(β_0)), with P(z_t = 1) = θ. Expressing positive or negative autocorrelated returns in terms of z_t indeed accounts for the probability of exceeding a quantile. However, whilst the unconditional probabilities are uncorrelated, the conditional probabilities of a hit may still depend on one another. Because these tests evaluate the lower bound of the VaR in the weakest sense, we work towards defining a dynamic quantile. Let
Hit_t(β_0) ≡ I(y_t < f_t(β_0)) − θ.
Hit_t(β_0) assumes a value of (1 − θ) for underestimations of VaR and −θ otherwise. Notice that the expected value is zero and that there should be no autocorrelation between successive hits.
Suppose we wish to test the significance of an entire set of data along several β simultaneously. Then a joint statistic of the form
DQ = Hit′ X (X′X)^{−1} X′ Hit / [θ(1 − θ)]
may be used, where X is a matrix of instruments whose columns are binary indicators conditioned on available information (for example, lagged hits). Such a test would be run both in-sample and out-of-sample. We will define and prove the conditions of each.
Theorem 3 (In-Sample Dynamic Quantile Test). If the assumptions made (see Appendix) are valid, the following holds:
DQ_IS = Hit′(β̂) X(β̂) (M_T M_T′)^{−1} X′(β̂) Hit(β̂) / [θ(1 − θ)] → χ²_q in distribution,
where M_T is the difference between X(β̂) and a function of the gradient of f(β̂).
Proof. Proof left as exercise for reader
Essentially, the DQ test above tests whether or not the test statistic follows a normal distribution in the sense of an identity matrix, and whether the set of all dynamic quantiles in-sample follow a chi-squared distribution. We now shift our focus to a test statistic for dynamic quantiles out-of-sample.
Theorem 4 (Out-of-Sample Dynamic Quantile Test). Let T_R denote the number of in-sample observations and N_R denote the number of out-of-sample observations. Then, as N_R → ∞,
DQ_OOS = Hit′ X (X′X)^{−1} X′ Hit / [θ(1 − θ)] → χ²_q in distribution.
Proof. Proof left as exercise for reader.
Use of the dynamic quantile tests allows for an estimation of the independence of the "hits". The ideal quantile test would be one in which all hits (y_t < VaR_t) are independent. Regulators would then be able to choose between different measures of VaR when evaluating a portfolio.
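A minimal sketch of the out-of-sample DQ statistic in the form above, using a constant, lagged hits, and the current VaR forecast as instruments; this particular instrument set and lag count are assumptions (a common choice in the CAViaR literature), and the naive constant forecast is only for illustration:

```python
import numpy as np
from scipy.stats import chi2

def dq_test(y, var_forecast, theta=0.05, n_lags=4):
    """Out-of-sample dynamic quantile test.
    Hit_t = I(y_t < VaR_t) - theta; form
    DQ = Hit' X (X'X)^{-1} X' Hit / (theta * (1 - theta)) ~ chi2(q)."""
    hit = (y < var_forecast).astype(float) - theta
    T = len(hit)
    # Instruments: constant, n_lags lagged hits, and the current forecast
    rows = []
    for t in range(n_lags, T):
        rows.append([1.0, *hit[t - n_lags:t][::-1], var_forecast[t]])
    X = np.array(rows)
    h = hit[n_lags:]
    proj = X @ np.linalg.solve(X.T @ X, X.T @ h)
    dq = (h @ proj) / (theta * (1 - theta))
    q = X.shape[1]
    return dq, 1 - chi2.cdf(dq, df=q)

rng = np.random.default_rng(5)
y = rng.standard_normal(500) * 0.01
var_f = np.full(500, np.quantile(y, 0.05))   # naive constant forecast
print(dq_test(y, var_f))
```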
Empirical Results
The historical series of portfolio returns were studied under four different regimes of CAViaR. A total of four series (GM, IBM, XOM, and SPX) were examined over the periods 1986-1999 and 1999-2009.
Optimization Methodology
Using a random number generator, n vectors were generated, each with uniform distribution in [0, 1].
The regression quantile criterion was computed for each, and the m ≤ n vectors with the lowest values of the criterion were selected as initial values for optimization. For each of the four regimes, we first used the simplex algorithm. Following the approximation, we used a robust quasi-Newton method to determine new optimal parameters to feed back into the simplex algorithm. This process was repeated until convergence, and the tolerance for the regression quantile was set to 10^{-10}.
Simplex Algorithm
The algorithm operates on linear programs in standard form to determine the existence of a feasible solution. The first step of the algorithm (Phase 1) involves the identification of an extreme point as an initial guess.
Either a basic feasible solution is found, or the feasible region is shown to be empty. In the second step of the algorithm (Phase 2), the basic feasible solution from Phase 1 is used as the starting point, and either an optimal basic feasible solution is found, or the problem is unbounded, with the objective decreasing without limit along a ray of the feasible region.
Quasi-Newton Method
QN methods are used to locate roots, local maxima, or local minima when the Hessian is unavailable at each step. Rather, the Hessian approximation is updated by analyzing gradient vectors. In general, a second-order approximation is used to find a function minimum. For a gradient ∇f and a Hessian approximation B, the Taylor series is

$$f(x_k + \Delta x) \approx f(x_k) + \nabla f(x_k)^{\top}\Delta x + \tfrac{1}{2}\,\Delta x^{\top} B\,\Delta x,$$

and the gradient of this approximation is

$$\nabla f(x_k + \Delta x) \approx \nabla f(x_k) + B\,\Delta x.$$

We seek the updated Hessian

$$B_{k+1} = \operatorname*{argmin}_{B}\; \lVert B - B_k \rVert_V \quad \text{subject to the secant condition } B_{k+1}\,\Delta x_k = \nabla f_{k+1} - \nabla f_k,$$

where V is a positive definite matrix defining the norm.
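For concreteness, imposing the secant condition with one standard choice of the weighting V yields the widely used BFGS member of this update family; the formulas below are the textbook forms, restated here rather than taken from the paper.

```latex
% Secant condition from matching gradients of the quadratic model:
B_{k+1}\, s_k = y_k, \qquad s_k = x_{k+1} - x_k, \qquad y_k = \nabla f_{k+1} - \nabla f_k.

% BFGS update (rank-two correction satisfying the secant condition):
B_{k+1} = B_k + \frac{y_k y_k^{\top}}{y_k^{\top} s_k}
              - \frac{B_k s_k s_k^{\top} B_k}{s_k^{\top} B_k s_k}.
```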
CAViaR Results
We now review results for Conditional Autoregressive Value-at-Risk, the methodology introduced by Engle and Manganelli (2004).
Interpretation
As previously discussed, the arrival of information creates uncertainty, and increases VaR. While this is true in both the conditional and unconditional case, it is emphasized in the former. It is well-known that the period between 1986 and 1999 was volatile, and was affected by events such as LTCM and Global crises in Russia and Asia. If VaR truly increases with the arrival of information, then it is sensible to see the peak in 1987, where all positive beta assets faced an increase in systemic risk with the market crash. The adaptive regime adjusts for changes in VaR, so the momentary increase in 1987 was given less weight.
It is also interesting to note that the autoregressive models capture non-systemic risk. While the market crash in 1987 affected all positive-beta assets, we also see less severe increases in VaR during 1999. With high probability, this is due to an idiosyncratic event: a total inventory recall by GM. Such a recall likely created uncertainty in expected cash flows, thus increasing the periodic volatility of the stock. The persistent volatility was exponentially weighted in the adaptive regime, resulting in a large increase towards the end of the period of study.
Interpretation
Information in the 2000's was much more readily available than it was during 1980-2000. Consequently, a rapid digestion and reflection of information in asset prices may have resulted in more dynamic expectations. We see immediately that conditional VaR is much lower, indicating that the arrival of information did not induce as much uncertainty as it did in the decade prior.
We are aware from the previous iteration that VaR increased in 1999 across all adaptive regimes. From this prior, it is sensible to see high VaR from the very first year of data, given that time is continuous.
While the increased natural filtration of information suggests lower VaR, we infer sources based on backward-looking bias. Aside from the vehicle recall in 1999, the market faced a "mini-crash" in the early 2000's. This is better known as the "bubble burst". Further, the financial crisis in the 2008-2009 period resulted in an increase in conditional VaR. It is interesting to note the remarkable similarity across the four regimes, indicating an increase in the mean reversion coefficient of volatility.
Broader Interpretation
The arrival of news results in an expansion of information available. If the market is truly efficient, asset prices will reflect the expectations of those exposed to this information (Fama, 1997). However, expectations are not always dynamic, and integration of information may not be continuous. Consequently, volatility, a form of uncertainty, on a particular day will be autoregressive, and depend on volatility from previous days. It becomes necessary to use autoregression when calculating the expected loss within a p−quantile.
We demonstrate several adaptive models, including Symmetric Absolute Value, Asymmetric Slope, and Indirect GARCH (1,1). We recognize that while all of these models satisfy the requirements of autoregression, they differ in their treatment of β t , the autoregressive parameter. When evaluating Value-at-Risk, conditional on information, we must carefully choose the model, and understand the underlying assumptions.
It is well known that for a monotonically increasing cumulative distribution function, an increase in p will cover a larger portion of the distribution of risk. An immediate consequence of this is that the 99% VaR exceeds the 95%. We re-confirm the notion. We show that conditioning on the arrival of information may increase or decrease VaR, depending on the change in expectation. We contrast conditional against unconditional VaR, and autoregressive against independent VaR. In both contrasts, the former is a more proper treatment.
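A quick empirical confirmation of this monotonicity on simulated heavy-tailed returns (Student-t draws are an arbitrary stand-in for the portfolio series, not the data studied in the paper):

```python
import numpy as np

# Deeper quantiles give larger losses: the 99% VaR (1% lower quantile of
# returns) exceeds the 95% VaR in magnitude.
returns = np.random.default_rng(1).standard_t(df=5, size=10_000) * 0.01
var95 = -np.quantile(returns, 0.05)
var99 = -np.quantile(returns, 0.01)
assert var99 > var95
print(f"95% VaR = {var95:.4f}, 99% VaR = {var99:.4f}")
```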
Conclusion
We estimated four CAViaR regimes on the historical return series. Regressive parameters (β_i) were estimated by minimizing the regression quantile loss function, and the models were tested via dynamic quantiles.
The worst performing method was the adaptive method, which failed to detect poor returns in 1999.
The best performing method was Asymmetric Slope, which captured the larger effect of negative returns versus positive returns on VaR. Symmetric Absolute Value (SAV), which does not segregate positive and negative returns, could not capture this asymmetry; the asymmetry is also reflected in the 1% SPX News curves, which study the effect of news-driven returns on the S&P 500.
Standard, non-conditional quantile regressions were studied for XOM, a large-cap stock similar to GM.
Without autoregressive properties, 95 % VaR was underestimated relative to the CAViaR case. Such a result motivates the use of multiple risk measurement tools, each carrying different treatment of (β i ) and underlying assumptions.
The most typical use of VaR involves determining the expected loss with 95 % certainty. While useful as a bare approximation, a more proper analysis of risk should be carried across various quantiles and multiple distributions. Moreover, the loss function for a given day should be clustered within time frames. We have re-confirmed that volatility is autoregressive and conditional on the arrival of information. Treating value at risk independently will almost surely underestimate maximum losses during periods of high volatility and overestimate during periods of low volatility. The use of multiple tools may be beneficial to financial institutions, the individuals they may represent, and the health of market participants as a whole.
Extensions
Future applications include smaller time frames, different sets of stocks, an extension to the multivariate case, and quantile sensitivity. It is also worth investigating the implied volatility of out-of-the-money options calculated under the conditional and non-conditional regime. We expect the skew to carry more weight in the conditional regime to account for disaster risk.
There also exists potential to improve the methodology used within the paper, namely:
• Cost regularization of autoregressive parameters
• Local inference of Hessian matrices to employ interior-point methods for optimization
Consistency
Proof. Let $Q_T(\beta) = T^{-1}\sum_{t=1}^{T} q_t(\beta)$, where

$$q_t(\beta) = \big[\theta - I\big(y_t < f_t(\beta)\big)\big]\,\big[y_t - f_t(\beta)\big].$$

Claim: E[q_t(β)] exists and is finite for every β; this can be checked from the moment bounds in the assumptions below. Because f is continuous in β (on a complete probability space), q_t(β) must be continuous in β, and E[Q_T(β)] is uniquely minimized at β_0 for T sufficiently large. We have h_t(λ|Ω_t) > h_t > 0 whenever |λ| < h_t. Hence, for 0 < τ < h_1, the expected criterion strictly exceeds its value at β_0 outside a τ-neighborhood of β_0. Taking the unconditional expectation and applying a uniform law of large numbers yields consistency.
Consistency Assumptions
1. to the information set and E(F(Ω_t)^3) ≤ F_0 < ∞, for some constant F_0.
2. E(|F(Ω_t)|^4) ≤ F_1 < ∞ for all t and for some constant F_1, where F(Ω_t) has been defined under asymptotic normality.
3. The difference between the DQ test and the representative A_T, D_T converges in probability to 0.
| 2016-03-05T14:33:01.000Z | 2016-03-05T00:00:00.000 | {
"year": 2016,
"sha1": "eb4d8e5b0c5ecf6456f73063d8250ce096d26688",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "eb4d8e5b0c5ecf6456f73063d8250ce096d26688",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
260970778 | pes2o/s2orc | v3-fos-license | Constructing multiple active sites in iron oxide catalysts for improving carbonylation reactions
Surface engineering is a promising strategy to improve the catalytic activities of heterogeneous catalysts. Nevertheless, few studies have been devoted to investigating the differences in catalytic behavior among the multiple metal active sites triggered by surface imperfections. Herein, an oxygen-vacancy-rich Fe2O3 catalyst is demonstrated, with different Fe sites around one oxygen vacancy, and it exhibits significant catalytic performance for the carbonylation of various aryl halides and amines/alcohols with CO. The developed catalytic system displays excellent activity, selectivity, and reusability for the synthesis of carbonylated chemicals, including drugs and chiral molecules, via aminocarbonylation and alkoxycarbonylation. Combined characterizations disclose the formation of oxygen vacancies. Control experiments and density functional theory calculations demonstrate that the selective combination of the three Fe sites is vital to improving the catalytic performance, with the sites catalyzing the elementary steps of PhI activation, CO insertion and C-N/C-O coupling respectively, endowing a combinatorial-site catalyst for multistep reactions.
Shujuan Liu 1,3, Teng Li 1,3, Feng Shi 1, Haiying Ma 1,2, Bin Wang 1, Xingchao Dai 1 & Xinjiang Cui 1
Due to the economic and environmental advantages that non-noble metals present, the development of heterogeneous catalysts based on earth-abundant transition metals is of substantial importance in catalysis [1][2][3]. However, compared with noble metal catalysts, which generally catalyze reactions efficiently under mild conditions 4,5, low activity and selectivity have been a drag on the exploration of non-noble metal catalysts (NMCs) [6][7][8]. To achieve outstanding catalytic performance, several strategies, such as constructing multi-metallic nanoparticles 9,10, doping heteroatoms [11][12][13], and creating metal-support interactions [14][15][16], have been widely applied. Recently, Beller described a protocol to synthesize catalysts by immobilizing metal complexes on solid supports and subsequently pyrolyzing them under an inert atmosphere, enhancing the catalytic performance of NMCs in different reactions [17][18][19]. In addition, single-atom catalysts have attracted considerable attention for the synthesis of NMCs with exclusive catalytic properties 20,21. Despite numerous achievements in NMC synthesis, it remains of great significance to create active and selective NMCs.
Surface engineering is considered another effective strategy to regulate the surface charge distribution, optimize the active sites, and further extend the functionalities of NMCs [22][23][24]. Surface imperfections such as oxygen vacancies (O_vac) are ubiquitous in metal oxides and can modify the physical and chemical properties of oxide materials significantly, influencing the catalytic performance dramatically in various reactions [25][26][27]. The O_vac on oxide supports can not only tune the interaction of the metal and support but also serve directly as active sites together with the metal active centers in the catalytic cycle, finely regulating the catalytic activity and selectivity 28. The creation of O_vac on TiO2 benefits the formation of an atomic interface between isolated Pt atoms and surface Ti3+, which facilitates electron transfer between single Pt atoms and Ti3+ sites, thereby enhancing photocatalytic hydrogen production 29. Surface O_vac on Cu/CeO2 is beneficial for forming anti-sintering active sites through the synergistic effect with neighboring copper clusters, promoting the catalytic efficiency of the RWGS reaction and stabilizing the catalyst even at high operating temperature 30. By means of the O_vac on Ni@TiO2-x, the electron density of surface Ni atoms increases through electronic migration from the TiO2-x support to the Ni atoms, forming the active architecture Niδ−-Ov-Ti3+ and promoting the catalytic performance in the WGS reaction 31. In the presence of O_vac, grain refinement and spinel/perovskite heterostructure formation take place for perovskite oxides, leading to enhanced oxygen evolution reaction activity 32.
Although these achievements in the fabrication and catalytic application of O_vac have been reported, most attention has been paid to the electronic influence between an O_vac and the adjacent single metal site. However, the formation of one oxygen vacancy normally changes the micro-environment of multiple metal sites. It is known that subtle variations in the active-site architecture can affect catalytic performance significantly and even change the reaction pathway; thereby, different elementary steps of a multistep reaction might be selectively determined by one of multiple metal sites. Thus, it is highly interesting to study the relationship between the catalytic reactivity and the multiple metal sites.
Herein, active Fe sites with different catalytic behavior are constructed by fabricating O_vac on Fe2O3, which serves as an ideal active catalyst for the multistep carbonylation reaction of aryl halides and amines/alcohols with CO. The experimental results show that Fe2O3-O_vac exhibits preeminent activity, selectivity, and durability in the carbonylation of various aryl halides and nucleophiles, including amines, alcohols, and drug and chiral-molecule derivatives. DFT calculations indicate that three different Fe sites (denoted as Fe1, Fe2 and Fe3) of Fe2O3-O_vac are formed accompanying the formation of O_vac (Fig. 1, Supplementary Fig. 1 and Fig. 2a, b). Importantly, the selective combination of these three Fe sites catalyzes the different elementary reactions of PhI activation (Fe1 and Fe3 in Fig. 1b), CO insertion (Fe1 and Fe2 in Fig. 1c), and C-N coupling (Fe3, Fe1 and Fe2 in Fig. 1d), respectively. The catalytic activity and selectivity of the carbonylation are significantly enhanced by the "combinatorial site catalysis" of these three Fe sites on Fe2O3-O_vac. This work reveals the catalytic differences of the multiple metal sites around O_vac; improved catalytic performance is achieved by their combinatorial catalysis, providing a concept for future rational catalyst design and activity enhancement of NMCs.
Synthesis and characterization of catalysts
A series of Fe2O3 catalysts with different vacancy contents were synthesized via NaBH4 reduction of hydrothermally prepared Fe2O3 33-36, denoted as 0.5 Fe2O3-O_vac, 1.0 Fe2O3-O_vac and 2.0 Fe2O3-O_vac (0.5, 1.0 and 2.0 represent the molar ratio of NaBH4:Fe2O3), as schematically illustrated in Fig. 2a. As shown in Supplementary Table 1, ICP-OES analysis of the starting material (FeCl3) and the Fe2O3-O_vac catalyst revealed that the content of other metals such as Cu, Ni and Pd was below the limit of detection.
The X-ray diffraction (XRD) patterns (Supplementary Fig. 3) showed that the Fe2O3 samples formed with a highly crystalline structure in which the (104) plane was dominant, well indexed to the standard XRD pattern of hematite Fe2O3 (PDF #: 890599) 37. 57Fe Mössbauer spectroscopy is a very useful technique to explore the local magnetic behavior as well as the oxidation state of Fe atoms; the transmission Mössbauer spectra at room temperature for Fe2O3, 0.5 Fe2O3-O_vac and 1.0 Fe2O3-O_vac (Supplementary Fig. 4) exhibited only the sextet of hematite Fe2O3, excluding the formation of iron nanoparticles (Supplementary Table 3) 38, consistent with the XRD analysis. As shown in Fig. 2b-d, TEM images revealed the cubic structure of the Fe2O3 samples with ~400 nm diameter. High-resolution TEM (HR-TEM) images in Fig. 2e-g provided information on the structure of the Fe2O3 samples, where a lattice fringe spacing of 0.280 nm was mainly observed, corresponding to the (104) lattice of Fe2O3 37. Notably, the (104) lattice was detected for Fe2O3, 0.5 Fe2O3-O_vac and 1.0 Fe2O3-O_vac, and no change in morphology was observed after reduction by NaBH4. However, on increasing the molar ratio of NaBH4:Fe2O3 to 2, the resulting 2.0 Fe2O3-O_vac catalyst exhibited an obvious disruption of morphology and surface lattice, probably due to over-reduction (Supplementary Fig. 5). X-ray photoelectron spectroscopy (XPS) was employed to study the composition of surface oxygen species and the charge distribution. All XPS spectra were charge corrected and referenced to adventitious carbon (284.8 eV). The survey spectra of the three Fe2O3 samples, shown in Supplementary Fig. 6, clearly revealed the coexistence of Fe, O, and C elements. The density of oxygen vacancies on these catalysts can also be deduced from the O 1s spectra. As shown in Fig. 3a, three peaks can be deconvoluted from the O 1s profiles. The peaks at 529.8, 531.4 and 533.0 eV can be ascribed to surface lattice oxygen (O_L), surface O_vac, and other weakly bound oxygen species such as adsorbed molecular water and hydroxyl groups (O_OH) 39. The density of O_vac can be estimated as O_vac/(O_vac + O_L + O_OH). As shown in Fig. 3a and Supplementary Table 3, the density of O_vac followed the order 1.0 Fe2O3-O_vac (26%) > 0.5 Fe2O3-O_vac (18%) > Fe2O3 (13%), suggesting that the content of oxygen vacancies gradually increased with larger NaBH4 amounts. Note that 2.0 Fe2O3-O_vac showed a lower content of O_vac, probably due to the damage of the morphology and surface lattice caused by over-reduction (Supplementary Fig. 7). The content of surface hydroxyl groups was also analyzed (Supplementary Table 4); the contents for Fe2O3, 0.5 Fe2O3-O_vac and 1.0 Fe2O3-O_vac were almost identical, indicating that the catalytic efficiency was not affected by surface hydroxyl groups. Moreover, two characteristic binding-energy peaks accompanied by broad satellite peaks were observed in the Fe 2p spectrum at 710.9 eV (Fe 2p3/2) and 724.6 eV (Fe 2p1/2), respectively 40 (Supplementary Fig. 8). Additionally, the Fe 2p3/2 peaks in Fe2O3-O_vac were slightly shifted to lower binding energies after the introduction of O_vac, which suggested that the valence state of Fe was partly decreased. These results indicated that the formation of O_vac increased the electron density of surface Fe in Fe2O3-O_vac.
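The vacancy-density estimate used above is a simple ratio of fitted O 1s peak areas; a minimal sketch, with hypothetical areas chosen to reproduce the reported 26%:

```python
def ovac_density(a_lattice, a_vacancy, a_hydroxyl):
    """Surface O-vacancy density from fitted O 1s XPS peak areas:
    O_vac / (O_vac + O_L + O_OH)."""
    return a_vacancy / (a_vacancy + a_lattice + a_hydroxyl)

# Illustrative (hypothetical) peak areas for 1.0 Fe2O3-O_vac:
print(round(ovac_density(a_lattice=60.0, a_vacancy=26.0, a_hydroxyl=14.0), 2))  # 0.26
```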
The existence of O_vac in crystals could be qualitatively determined by electron paramagnetic resonance (EPR). Generally, a higher peak intensity in the EPR spectrum represents a higher concentration of O_vac 41 (Fig. 3b). All samples displayed EPR signals at a g-value of 2.003, indicating the trapping of electrons at O_vac. Interestingly, an O_vac EPR signal was observed even for pristine Fe2O3, suggesting that some O_vac were created during the hydrothermal treatment. The normalized EPR signal intensities were found to increase in the order Fe2O3 < 0.5 Fe2O3-O_vac < 1.0 Fe2O3-O_vac; the significantly improved EPR intensity of 1.0 Fe2O3-O_vac indicated the formation of abundant O_vac via NaBH4 reduction. To further investigate the change of O_vac concentration in the Fe2O3 samples, they were studied by thermogravimetric analysis (TGA) 42. As shown in Fig. 3c, the total weight loss increased in the order 1.0 Fe2O3-O_vac (0.64%) < 0.5 Fe2O3-O_vac (1.67%) < Fe2O3 (2.41%), confirming the presence of more O_vac in 1.0 Fe2O3-O_vac because of its lower weight loss. The weight losses observed in the H2-TGA analysis (Supplementary Fig. 10b and Supplementary Table 5) indicated that more labile surface oxygen atoms were removed during the NaBH4 reduction, thereby generating more O_vac. Raman spectra also revealed the typical peaks of hematite Fe2O3 for all samples (Fig. 3d) 30. The peak located at 221.7 cm−1 was assigned to the Fe-O symmetric stretching vibration (A1g mode), and the two peaks at about 287.9 and 408.8 cm−1 were attributed to the Fe-O symmetric bending vibrations (Eg mode). Compared with Fe2O3 and 0.5 Fe2O3-O_vac, 1.0 Fe2O3-O_vac showed red-shifted and broadened peaks, demonstrating the formation of O_vac; large amounts of O_vac caused a more disrupted Fe-O lattice. O2-TPD was further conducted to study the change of O_vac, because O2 released in the low-temperature region (<400 °C) corresponds to labile oxygen species 44. The amount of oxygen species desorbed from the catalyst surface below 400 °C increased in the order Fe2O3 < 0.5 Fe2O3-O_vac < 1.0 Fe2O3-O_vac (Supplementary Fig. 10a), indicating that 1.0 Fe2O3-O_vac possessed the highest O_vac concentration, consistent with the XPS, EPR, TGA and Raman results. H2-TPR experiments showed that 1.0 Fe2O3-O_vac consumed less H2 than Fe2O3 and 0.5 Fe2O3-O_vac (Supplementary Fig.).
XAFS spectroscopy was utilized to probe detailed structural information such as the coordination environment 45. Figure 3e shows the Fe K-edge X-ray absorption near-edge structure (XANES) spectra of the Fe2O3-O_vac samples compared with FeO, Fe3O4 and Fe2O3 as references. The absorption threshold for the Fe2O3-O_vac samples was close to that of Fe2O3; the oxidation states of 0.5 Fe2O3-O_vac and 1.0 Fe2O3-O_vac were fitted, and the average valence states calculated by the area-integration method 46 were approximately +2.99 and +2.96, corresponding to stoichiometries of Fe2O2.99 and Fe2O2.96, in agreement with the ICP-OES results. Compared with the reference samples, a major Fe K-edge peak centered at around 1.5 Å was found for 1.0 Fe2O3-O_vac (Fig. 3f), which can be assigned to Fe-O coordination; no characteristic peaks of Fe-Fe coordination were detected, indicating the absence of metallic Fe species. Further extended X-ray absorption fine structure (EXAFS) fitting revealed that the Fe-O coordination numbers of Fe2O3, 0.5 Fe2O3-O_vac and 1.0 Fe2O3-O_vac were 6.0, 5.8 and 5.4, respectively, decreasing with increased NaBH4 dosage (Supplementary Table 5). The changes in the Fe-O coordination number indicate the creation of O_vac in Fe2O3 by the reduction treatment, with more O_vac formed for the 1.0 Fe2O3-O_vac catalyst.
Catalytic performance of the aminocarbonylation of iodobenzene
Carbonylation of aryl halides over transition-metal-based catalysts is well known as a direct route to synthesize carbonyl compounds such as carboxylic acids, amides, esters, and ketones [47][48][49][50]. Although many studies have focused on the development of various Pd-based catalysts for amide and ester production (Supplementary Table 7), non-noble-metal heterogeneous systems that efficiently catalyze aminocarbonylation and alkoxycarbonylation have rarely been reported so far.
The catalytic performance of the prepared catalysts was examined using the carbonylation of iodobenzene and morpholine as the benchmark reaction (Table 1). Triethylamine was added to neutralize the hydrogen halide formed during the reaction 51. Using pristine Fe2O3, low catalytic activity with 15% yield was attained (entry 1). Surprisingly, the NaBH4 reduction treatment led to a great increase in the catalytic performance, with 50% yield obtained over 0.5 Fe2O3-O_vac (entry 2). In the presence of 1.0 Fe2O3-O_vac, extremely high activity was obtained with 97% yield (entry 3). However, 2.0 Fe2O3-O_vac led to quite low activity with only 22% yield (entry 4); the poor performance was attributed to the disruption and collapse of the surface active sites caused by over-reduction. These results indicated that the amount of O_vac was vital to the catalytic activity. Moreover, in control experiments with iron nanoparticles and with a physical mixture of iron nanoparticles and the 1.0 Fe2O3-O_vac catalyst, much lower yields of amide were obtained upon adding iron nanoparticles (entries 2-4, Supplementary Table 2). To exclude the effect of Cu, Ni and Pd in the catalytic performance, Fe2O3-O_vac catalysts containing 250 ppm of Cu, Ni and Pd were prepared and their catalytic activities were studied. As shown in Supplementary Table 2, the addition of Cu and Ni to the Fe2O3-O_vac catalyst decreased the catalytic activity, while Pd exhibited a negligible effect (entries 5-7). For comparison, 1.0 CuO-O_vac, 1.0 V2O5-O_vac and 1.0 ZrO2-O_vac were also examined, and lower yields were obtained (entries 5-7). Various solvents, including methanol (CH3OH), acetonitrile (MeCN), tetrahydrofuran (THF), toluene and n-octane, were tested (entries 8-12), but no improved yields were obtained. The CO pressure was also investigated: the yield increased gradually with CO pressure below 1 MPa but declined dramatically at CO pressures higher than 2 MPa, probably because favorable CO adsorption at high pressure inhibits the activation of PhI (entries 13-16). In addition, a low reaction temperature was unfavorable for the catalytic conversion (entry 17).
To further elucidate the surface-vacancy-mediated catalysis, 1.0 Fe2O3-O_vac was treated at 450 °C in air for 5 h. After calcination, the obtained 1.0 Fe2O3-O_vac−450 catalyst was immediately used for the aminocarbonylation of iodobenzene under identical conditions, and only 13% yield was obtained (Table 1, entry 18). The O 1s spectra showed that the surface O_vac fraction was effectively reduced from 26% in 1.0 Fe2O3-O_vac to 14% in 1.0 Fe2O3-O_vac−450 (Supplementary Fig. 11), and the TGA curve of 1.0 Fe2O3-O_vac−450 coincided exactly with that of pristine Fe2O3 (Supplementary Fig. 12). Based on these results, the decreased catalytic activity could only be attributed to the decreased concentration of surface O_vac after annealing at 450 °C in air.
The quantitative connection between the content of O_vac and the catalytic performance was investigated by plotting the yield of 3a against the content of O_vac for the Fe2O3 samples, which showed a linear relationship (Fig. 4a). The reusability of 1.0 Fe2O3-O_vac was also evaluated (Fig. 4b): after recycling, the XRD patterns remained basically unchanged (Supplementary Fig. 13), and the Raman characteristic peaks showed no shifts of the Fe-O stretching vibration bands (Supplementary Fig. 14). Besides, the content of O_vac in 1.0 Fe2O3-O_vac after recycling was close to that of the fresh catalyst (Supplementary Fig. 15). These results revealed the solid structural stability of 1.0 Fe2O3-O_vac during the reactions. Quasi in situ 1H NMR spectra were used to monitor the aminocarbonylation reaction process. As shown in Fig. 4c, the signals corresponding to the -C6H5- group (Ha in Fig. 4c) of iodobenzene decreased over time, while the peaks corresponding to the -C6H5- group (Hb in Fig. 4c) of 3a increased simultaneously.
In situ DRIFTS analysis was performed to uncover the dynamic catalytic behavior of 1.0 Fe2O3-O_vac and the possible reaction intermediates in the aminocarbonylation of iodobenzene (Fig. 4d). The characteristic peak of the PhCO* group, formed during the CO-insertion elementary reaction, appeared immediately at 1572 cm−1 52 at 50 °C, reached maximum intensity at about 120 °C, and then gradually declined until it disappeared. Simultaneously, the signal intensity of the -C=O- stretching vibration of the target amide, located at 1613 cm−1, intensified, showing the generation of 3a as the reaction proceeded. These studies confirmed the formation of the PhCO* intermediate and its in situ transformation into the desired product during the reaction over 1.0 Fe2O3-O_vac. Control experiments were conducted to probe the possibility of a radical mechanism (Supplementary Table 8). Under the standard conditions, 99% yield of amide (3a) was obtained upon adding 2,2,6,6-tetramethylpiperidine-1-oxyl (TEMPO, a radical scavenger) in an equivalent molar amount to morpholine, excluding a radical mechanism. We therefore propose that the reaction proceeds through three elementary steps: PhI activation (step I), CO insertion (step II) and C-N coupling (step III) (Fig. 5a) 49.
DFT calculations
Density functional simulations were performed to elucidate the activity of 1.0 Fe2O3-O_vac. Assisted by the formation of O_vac, electrons were redistributed to the surrounding Fe atoms, whose Bader charge was reduced from 1.40 to 1.26 for Fe1 and Fe3, and from 1.69 to 1.29 for Fe2 (Supplementary Fig. 1). In addition, the coordination environment also changed after removal of surface oxygen, as shown in Supplementary Fig. 1. The influence of these electronic and coordination changes on the catalytic mechanism was studied in detail by DFT calculations; the whole potential energy surface (PES) for 3a synthesis is depicted in Fig. 5b.
On Fe2O3-O_vac, iodobenzene was adsorbed at the surface with an adsorption energy of −1.02 eV (IM1) (Supplementary Fig. 2c). PhI activation (step I) occurred on Fe2O3-O_vac with an energy barrier of 0.54 eV (TS1) and a reaction energy of −0.55 eV (Supplementary Fig. 16). Hydroxyl groups, which might exist on the surface, had little effect on the energy barrier (0.55 eV on the hydroxylated surface) but increased the reaction energy (−0.06 eV) of PhI dissociation, indicating that surface hydroxyl groups were not beneficial for C-I bond cleavage (IM1' → TS1' → IM2', Supplementary Fig. 16). Therefore, PhI activation preferred to occur at vacant sites without surface hydroxyl groups. After C-I cleavage, the phenyl group of iodobenzene spontaneously moved to the Fe1 site, forming a Fe1-C intermediate (IM2). Afterwards, CO insertion (step II) was triggered by CO adsorption on the Fe2 site with an adsorption energy of −0.66 eV (IM3) (Supplementary Fig. 17). Subsequently, CO was inserted into the Fe1-C bond to generate the acyl intermediate (PhCO*) (IM4), confirmed by the DRIFTS measurement (Fig. 4d), with an energy barrier of 1.04 eV (TS2), exergonic by 0.75 eV (Supplementary Fig. 18). Note that the formed PhCO* was adsorbed across Fe1 and Fe2, with its C and O bonded to the Fe1 and Fe2 atoms, respectively. This simulated structure (IM4) was beneficial for amide formation (step III), which started with the adsorption of morpholine (HNR*) on the Fe3 site (IM5). Next, iodine was displaced by NR* with the formation of HI; the formed NR* attacked the C site of PhCO* with a barrier of 0.77 eV (TS3), strongly exothermic by 1.29 eV (Supplementary Fig. 18), leading to the desired amide product 3a (IM6).
Clearly, C-I bond activation was the highest point on the PES with respect to IM1, indicating that step I determined the apparent barrier of the reaction. Since the energy barrier for PhI activation on Fe2O3-O_vac (0.54 eV) was lower than that on bare Fe2O3 (0.66 eV, Supplementary Fig. 16), the reaction was more likely to be triggered on Fe2O3-O_vac. Further studies revealed that the change of the apparent barrier for C-I scission was caused by the different adsorption geometries: iodobenzene adsorbed parallel to Fe2O3-O_vac, completely different from the tilted adsorption on bare Fe2O3 (Supplementary Fig. 2c, d). Differential charge density plots for these two adsorbed states (IM1 and IM1'', Fig. 5c, d) before C-I cleavage were explored. Obviously, electrons accumulated in the zone between the phenyl group and the flawed surface in IM1, but no such phenomenon was observed in IM1'', indicating a strong interaction between the phenyl group and the adsorbent. This configuration of IM1 favored electron transfer from Fe1 to the C6H5 group, stabilizing the phenyl motif during PhI activation, as shown in Fig. 5e. To clarify the correspondence between C-I bond activation and the surface electron-transfer ability, the d-band center was computed (Fig. 5f, g). Notably, Fe1 and Fe3 of Fe2O3-O_vac displayed a more positive value (−1.42 eV) than bare Fe2O3 (−2.81 eV). A d-band center closer to the Fermi level indicates preferable charge donation from the active site to the adsorbate, explaining the easier C-I bond activation by Fe2O3-O_vac 53. Homogeneous complexes, in which transition metals are generally in lower oxidation states, initiate the reaction through the rate-determining activation of aryl halides 54,55. However, the CO migratory insertion (TS2) was the rate-determining step in our 1.0 Fe2O3-O_vac system, different from the homogeneous transition-metal process. DFT calculations and experimental results confirmed that the "combinatorial site catalysis" of the three O_vac-induced Fe sites covered the three different elementary steps of the aminocarbonylation of iodobenzene, endowing a significant improvement of the catalytic performance.
Reaction system scoping
The scope and limitations of the aminocarbonylation of various aryl halides and amines into the corresponding amides were explored over 1.0 Fe2O3-O_vac. As shown in Fig. 6, the aminocarbonylation of aryl iodides was investigated using various amines as starting materials. Cyclic and aliphatic acyclic secondary amines with steric hindrance were well tolerated, and 91-97% yields were obtained (3a-3d). In addition to secondary amines, both aliphatic and cyclic primary amines were well tolerated and converted to the desired amides in 82-99% yields (3e-3m). Interestingly, oleylamine was successfully transformed to the corresponding amide (3n) in 85% yield with preservation of the unsaturated C=C bond. Moreover, amide 3o was obtained in 83% yield using benzylamine as the substrate. The carbonylation of aromatic amines with different functional groups proceeded successfully to afford the desired amides in 91-99% yields (3p-3r). Furthermore, different aryl halides were tested in the aminocarbonylation with morpholine. Generally, both electron-donating and electron-withdrawing groups on the aryl halides were well tolerated under identical conditions, affording the desired products in 71-99% yields. Compared with iodobenzene, 97% yield was achieved with the p-Me-substituted aryl iodide, while the yield decreased to 84% with the p-OMe substituent (3s and 3t). As expected, 1-naphthyl iodide also underwent this transformation smoothly, giving the desired product in 99% yield (3u). Importantly, bromobenzenes with p-Cl, p-F and p-NO2 substituents were also selectively converted to the corresponding products in good to excellent yields (71-87%) after prolonging the reaction time (3v-3aa).
Encouraged by the outstanding catalytic results on the aminocarbonylation of aryl halides with amines, we subsequently studied the alkoxycarbonylation of aryl halides with alcohols using the 1.0 Fe2O3-O_vac system. As shown in Fig. 7, a series of linear and branched primary alcohols were successfully converted with iodobenzene to the corresponding benzoates in excellent yields (5a-5l). When secondary alcohols, such as 2-butanol, 2-pentanol, cyclohexanol, cyclooctanol and cyclododecanol, were used as substrates, the transformations proceeded efficiently with 87-99% yields (5m-5q). Note that the alkoxycarbonylation of aryl iodides with phenols was also catalyzed, providing phenyl benzoate in 99% yield (5r). Afterwards, the alkoxycarbonylation of various aryl halides was investigated using ethanol as the starting material. Aryl halides bearing either electron-withdrawing or electron-donating groups were well tolerated, affording the corresponding benzoates in 73-96% yields (5s-5z). 1-Naphthyl iodide also gave 90% yield under the 1.0 Fe2O3-O_vac system (5u). Remarkably, 1-fluoro-4-bromobenzene was converted into the desired product in 85% yield with retention of the F substituent (5z).
Discussion
In summary, a series of Fe2O3-O_vac catalysts with varying amounts of O_vac were synthesized and applied to the carbonylation of aryl halides and amines/alcohols with CO. The optimal 1.0 Fe2O3-O_vac system displayed excellent activity and selectivity for the synthesis of carbonylated chemicals (60 examples), including drug- and chirality-related molecules, via aminocarbonylation and alkoxycarbonylation. Characterizations (XPS, EPR, TGA, Raman and XAFS) revealed the formation of O_vac and the variation of Fe sites on Fe2O3-O_vac triggered by O_vac. The experimental studies and DFT calculations verified that the catalytic performance of the carbonylation was significantly improved because the selective combination of the three Fe sites efficiently catalyzes the elementary steps of PhI activation, CO insertion and C-N/C-O coupling. This work provides a concept for designing NMC catalysts and for studying the origin of improved catalytic performance in multistep reactions.
General considerations
All solvents and chemicals, unless otherwise noted, were obtained commercially, and were used as received without further purification.All glassware was dried before using.Analytical thin layer chromatography was performed using pre-coated Jiangyou silica gel HSGF254 (0.2 mm ± 0.03 mm).Flash chromatography was performed using silica gel 60, 0.063-0.2mm, 200-300 mesh (Jiangyou, Yantai) with the indicated solvent system.
Characterizations
GC-MS analysis was generally performed on an Agilent 5977A MSD GC-MS.
Fourier transform infrared (FT-IR) spectra 57 were recorded with a Bruker VERTEX 70 FTIR spectrometer.
The in situ DRIFTS 58 of samples were analyzed by a Bruker VERTEX 70 FTIR spectrometer used for the identification of IR absorbance in the mid-IR region (400-4000 cm−1). It was equipped with a liquid-nitrogen-cooled MCT detector and a low-volume gas cell (8.7 mL) with a 123 mm path length and KBr windows. The sample was pretreated at 30 °C for 10 min under CO (5 mL/min) and heated to 180 °C.
Raman spectroscopy 57 (LabRAM HR Evolution Raman spectrometer) was performed by employing a 532 nm laser beam.
The liquid nuclear magnetic resonance spectra (NMR) were recorded on a Bruker AvanceTM III 400 MHz in deuterated chloroform unless otherwise noted.Data are reported in parts per million (ppm) as follows: chemical shift, multiplicity (s = singlet, d = doublet, t = triplet, q = quartet, quint = quintet, m = multiplet, dd = doublet of doublet and br = broad signal), coupling constant in Hz and integration 59 .
XRD measurements 60 were conducted on a STADIP automated transmission diffractometer (STOE) equipped with an incident-beam curved germanium monochromator selecting CuKα1 radiation and a 6° position-sensitive detector (step size: 0.014°, step time: 25.05 s). The XRD patterns were scanned in the 2θ range of 10-90°.
XPS measurements 60 were carried out by a VG ESCALAB 210 instrument equipped with a dual Mg/Al anode X-ray source, a hemispherical capacitor analyzer, and a 5 keV Ar + ion gun.All spectra were recorded by using AlKa (1361 eV) radiation.The electron binding energy was referenced to the C1s peak at 284.8 eV.
The thermal properties 60 of Fe 2 O 3 catalysts were evaluated using a METTLER TOLEDO simultaneous thermal analyzer over the temperature range from 30 to 800 °C under nitrogen atmosphere (20 mL/min) with a heating rate of 5 °C/min.In H 2 -TGA analyses, ∼5 mg of catalyst was used, and the change in weight was recorded in the temperature range of 30-850 °C at a heating rate of 5 °C min -1 under 5% H 2 /Ar flow of 20 mL min −1 .
High-resolution TEM analysis 60 was carried out on a JEM 2010 operating at 200 keV. The catalyst samples after pretreatment were dispersed in ethanol, and the solution was mixed ultrasonically at room temperature. Part of the solution was dropped onto the grid for the TEM measurements.
EXAFS experiments 61 were performed at the Beijing Synchrotron Radiation Facility (BSRF) in Beijing Institute of High Energy Physics, Chinese Academy of Sciences with a storage ring energy of 2.5 GeV and a beam current between 150 and 250 mA.The Fe K-edge absorbance of powder catalysts was measured in transmission geometry at room temperature.EXAFS data analysis was carried out using ifeffit analysis programs (http://cars9.uchicago.edu/ifeffit/).Radial distribution functions were obtained by Fourier-transformed k3-weighted Χ function.
EPR spectra 61 were recorded at room temperature on a Bruker cw spectrometer EMX-PLUS (X-band, ν ≈ 9.8 GHz) with a microwave power of 20 mW, a modulation frequency of 100 kHz and modulation amplitude of up to 1 G, the usage of sample was 10 mg.
Mössbauer measurements 38 were performed using a conventional constant acceleration type spectrometer in transmission geometry in the temperature range from 6 to 300 K. Absorbers were prepared in powder form (10 mg of natural Fe cm −2 ).The γ-ray source is a commercial 25 mCi 57 Co in a palladium matrix.The driver velocity was calibrated using sodium nitroprusside powder and all isomer shifts were quoted relative to the α-Fe foil at room temperature.
The element type and content of the catalyst were determined by inductively coupled plasma optical emission spectrometry (ICP-OES) 58 .Preparation of test sample: 20 mg sample was dissolved with a mixture of concentrated nitric acid and hydrochloric acid, and heated until the sample was completely dissolved, and then the clarified transparent solution was quantitatively transferred to a volumetric flask.
O2-TPD was performed on a chemisorption analyzer (TP-5080D, Tianjin Xianquan Industrial and Trading Co., Ltd.) equipped with a thermal conductivity detector (TCD) 57. The weighed sample (100 mg) was pretreated at 300 °C for 1 h under He (40 mL/min) and cooled to 30 °C. O2 gas (30 mL/min) was then introduced instead of He at this temperature for 1 h to ensure saturation adsorption of O2. The sample was then purged with He (40 mL/min) for 1 h until the signal returned to the baseline, as monitored by the TCD. The O2 desorption curve was acquired by heating the sample from 30 to 600 °C at 10 °C/min under He at a flow rate of 40 mL/min. H2-TPR was performed on the same type of chemisorption analyzer equipped with a TCD 57. The weighed sample (10 mg) was pretreated at 300 °C for 1 h under He (40 mL/min) and cooled to 30 °C. H2/N2 gas (H2: 5 wt%, 30 mL/min) was introduced instead of He for 1 h until the signal returned to the baseline, as monitored by the TCD. The H2 reduction curve was acquired by heating the sample from 30 to 800 °C at 10 °C/min under H2/N2 at a flow rate of 30 mL/min.
Synthesis of Fe2O3. 500 mg anhydrous ferric chloride and 2 g CTAB (cetyltrimethylammonium bromide) were dissolved in 60 mL deionized water. After the mixture became a clear solution, it was transferred into a 100 mL Teflon-lined stainless-steel autoclave, heated to 120 °C for 24 h, and then cooled to room temperature naturally. The resulting product was collected by filtration, washed several times with deionized water and absolute ethanol, then dispersed in absolute ethanol and dried at 80 °C in air overnight. The sample was labeled Fe2O3.
DFT calculations
Spin-polarized density functional theory calculations were performed in this work as implemented in the Vienna Ab initio Simulation Package (VASP) 62. The projector augmented wave (PAW) method 63 was used to describe the electron-ion interaction. A Hubbard U (U_eff = 4 eV) was used to treat the strongly correlated electrons of the localized Fe 3d orbital 64,65. The electron exchange and correlation energies were calculated within the generalized gradient approximation (GGA) using the Perdew-Burke-Ernzerhof (PBE) functional 66,67. To ensure that the energy difference is less than 10−4 eV and the force per atom is less than 0.03 eV/Å, Gaussian smearing (0.02 eV) was used, and the Gamma k-point was adopted for Brillouin-zone sampling. The kinetic energy cutoff was set to 500 eV, and dispersion correction was considered using the DFT-D3 method with Becke-Johnson damping 68. The vacuum layer between the periodically repeated slabs was set to 20 Å.
The adsorption energy (E_ads) of an adsorbate (X) is obtained from E_ads = E_X/slab − E_slab − E_X, where E_X/slab is the total energy after adsorption, E_slab is the total energy of the clean surface, and E_X is the total energy of the free adsorbate (X) in a 20 × 15 × 15 cubic box; therefore, the more negative E_ads, the stronger the interaction between the adsorbate and the surface, and the opposite of E_ads is regarded as the desorption energy E_des. For reactions, the climbing-image nudged elastic band (CI-NEB) method 69 was adopted to search for transition states (TS), and vibrational frequency analysis was performed to verify each authentic transition state with only one imaginary frequency. The reaction barrier (E_a) is defined as E_a = E_TS − E_IS and the reaction energy (E_r) as E_r = E_FS − E_IS, where E_IS, E_FS and E_TS are the total energies of the initial, final and transition states, respectively. Zero-point energy (ZPE) correction was included in all energies.
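These energy definitions amount to simple bookkeeping over ZPE-corrected total energies; a minimal sketch, where the numerical totals are hypothetical, chosen only to mimic the 0.54 eV barrier and −0.55 eV reaction energy reported for step I:

```python
def adsorption_energy(e_x_slab, e_slab, e_x):
    """E_ads = E_X/slab - E_slab - E_X (more negative = stronger binding);
    the desorption energy is -E_ads."""
    return e_x_slab - e_slab - e_x

def barrier_and_reaction_energy(e_is, e_ts, e_fs):
    """E_a = E_TS - E_IS and E_r = E_FS - E_IS, as defined in the text
    (all inputs assumed ZPE-corrected total energies in eV)."""
    return e_ts - e_is, e_fs - e_is

# Hypothetical totals illustrating a 0.54 eV barrier and a -0.55 eV step:
print(barrier_and_reaction_energy(e_is=-100.00, e_ts=-99.46, e_fs=-100.55))
```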
Two models were adopted to clarify the activities of the flawed and normal Fe2O3(104) surfaces: Fe2O3-O_vac and Fe2O3 (Supplementary Fig. 2a, b). During the simulations, the bottom 32 Fe and 48 O atoms were fixed.
General procedure for the aminocarbonylation of aryl halides
Typical procedure for the carbonylation of aryl iodides with amines and CO: a mixture of aryl iodide (1.0 mmol), amine (1.5 mmol), catalyst (80 mg), Et3N (2.0 mmol) and dioxane (2 mL) was added to a glass tube placed in an 80 mL autoclave. The autoclave was then purged and charged with CO (1.0 MPa). The reaction mixture was stirred at 160 °C for 24 h. After the reaction finished, the autoclave was cooled to room temperature and the pressure was carefully released. Subsequently, the reaction mixture was diluted with 5 mL of methanol for analysis by GC-MS. The crude reaction mixture was concentrated by rotary evaporation and purified by column chromatography on silica gel to give the desired products.
General procedure for the alkoxycarbonylation of aryl halides
Typical procedure for the carbonylation of aryl iodides with alcohols and CO: a mixture of aryl iodide (1.0 mmol), alcohol (3 mL), catalyst (80 mg) and Et3N (2.0 mmol) was added to a glass tube placed in an 80 mL autoclave. The autoclave was then purged and charged with CO (1.0 MPa). The reaction mixture was stirred at 160 °C for 48 h. After the reaction finished, the autoclave was cooled to room temperature and the pressure was carefully released. Subsequently, the crude reaction mixture was concentrated by rotary evaporation and purified by column chromatography on silica gel to give the desired products.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Fig. 4 | Control experiments. (a) The linear relationship between the content of O_vac and yield; (b) recycling experiment; (c) quasi in situ 1H NMR spectra of the aminocarbonylation of iodobenzene; (d) in situ DRIFT spectra for the aminocarbonylation of iodobenzene over 1.0 Fe2O3-O_vac.
Fig. 5 | DFT calculations. (a) The reaction pathway for the aminocarbonylation of iodobenzene by 1.0 Fe2O3-O_vac; (b) energy profiles of PhI aminocarbonylation over the Fe2O3-O_vac surface; (c, d) differential charge density plots (isosurface value 0.008 eV/Å3; cyan, charge depletion; yellow, charge accumulation) of PhI on the Fe2O3-O_vac and Fe2O3 surfaces (top view, left; side view, right); (e) charge transfer to the phenyl group at Fe1; (f, g) density of states of the Fe1 and Fe3 atoms on the Fe2O3-O_vac and Fe2O3 surfaces (black line, spin-up; red line, spin-down; the d-band center value presented is the average of the spin-up and spin-down d-band centers). Note: Fe2O3 and Fe2O3-O_vac represent the Fe2O3(104) and Fe2O3(104)-O_vac surfaces.
Synthesis of Fe2O3-O_vac. 320 mg Fe2O3 was added to a Schlenk tube, followed by exchange with Ar; then 20 mL of a H2O/EtOH mixture (V(H2O)/V(EtOH) = 1/4) was added with 10 min of magnetic stirring at room temperature. Different amounts of NaBH4 were then added into the Schlenk tube and maintained for 20 min. The product was washed with absolute ethanol three times and dried in vacuum at 75 °C for 6 h to obtain a series of Fe2O3-O_vac samples with different amounts of oxygen vacancies, denoted as 0.5 Fe2O3-O_vac, 1.0 Fe2O3-O_vac and 2.0 Fe2O3-O_vac.
Table 1 | Catalyst screening and reaction conditions optimization a
| 2023-08-19T06:16:38.921Z | 2023-08-17T00:00:00.000 | {
"year": 2023,
"sha1": "8a482a1551ca36b8d8779b5ee683f1a5aa75d391",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41467-023-40640-z.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a58eb80f9b81d50a5e9ca6e3609849dfd6879bb2",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
207756751 | pes2o/s2orc | v3-fos-license | DeepOPF: A Deep Neural Network Approach for Security-Constrained DC Optimal Power Flow
We develop DeepOPF as a Deep Neural Network (DNN) approach for solving security-constrained direct current optimal power flow (SC-DCOPF) problems, which are critical for reliable and cost-effective power system operation. DeepOPF is inspired by the observation that solving the SC-DCOPF problem for a given power network is equivalent to depicting a high-dimensional mapping between load inputs and generation and phase-angle outputs. We first construct and train a DNN to learn the mapping between the load inputs and the generations. We then directly compute the phase angles from the generations and loads by using the (linearized) power flow equations. Such a two-step procedure significantly reduces the dimension of the mapping to learn, subsequently cutting down the size of the DNN and the amount of training data/time needed. We further characterize a condition that allows us to tune the size of our neural network according to the desired approximation accuracy of the load-to-generation mapping. Simulation results of IEEE test cases show that DeepOPF always generates feasible solutions with negligible optimality loss, while speeding up the computing time by up to 400x as compared to a state-of-the-art solver.
I. INTRODUCTION
The "deep learning revolution" largely enlightened by the October 2012 ImageNet victory [1] has transformed various industries in human society, including artificial intelligence, health care, online advertising, transportation, and robotics. As the most widely-used and mature model in deep learning, Deep Neural Network (DNN) [2] demonstrates superb performance in complex engineering tasks such as recommendation [3], bio-informatics [4], mastering difficult game like Go [5], and human pose estimation [6]. The capability of approximating continuous mappings and the desirable scalability make DNN a favorable choice in the arsenal of solving large-scale optimization and decision problems in engineering systems. In this paper, we apply DNN to power systems for solving the essential security-constrained direct current optimal power flow (SC-DCOPF) problem in power system operation.
All authors are with the Department of Information Engineering, The Chinese University of Hong Kong.

The OPF problem, first posed by Carpentier in 1962 in [7], is to minimize an objective function, such as the cost of power generation, subject to all physical, operational, and technical constraints, by optimizing the dispatch and transmission decisions. These constraints include Kirchhoff's laws, operating limits of generators, voltage levels, and loading limits of transmission lines [8]. The OPF problem is central to power system operations as it underpins various applications including economic dispatch, unit commitment, stability and reliability assessment, and demand response. While OPF with a full AC power flow formulation (AC-OPF) is most accurate,
it is a non-convex problem and its complexity obscures practicability. Meanwhile, based on linearized power flows, DC-OPF is a convex problem admitting a wide variety of applications, including electricity market clearing and power transmission management. See e.g., [9], [10] for a survey.
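To make the linearized formulation concrete, the sketch below solves a toy 3-bus DC-OPF as a linear program with SciPy; the network data (costs, susceptances, limits, bounds) are invented for illustration and are not from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Toy 3-bus DC-OPF (all data hypothetical): generators at buses 0 and 1,
# a 1.0 p.u. load at bus 2, identical line susceptances b = 10 p.u.
d = np.array([0.0, 0.0, 1.0])             # nodal loads (p.u.)
lines = [(0, 1), (0, 2), (1, 2)]          # (from, to) pairs
b_line, f_max = 10.0, 0.6                 # susceptance / flow limit

# Decision vector x = [p0, p1, th1, th2]; bus 0 is the slack (th0 = 0).
cost = np.array([10.0, 20.0, 0.0, 0.0])   # generation costs only

# Nodal balance B @ theta = p - d, written as A_eq @ x = -d.
A_eq = np.array([
    [-1.0,  0.0, -10.0, -10.0],   # bus 0: -p0 - 10*th1 - 10*th2 = -d0
    [ 0.0, -1.0,  20.0, -10.0],   # bus 1
    [ 0.0,  0.0, -10.0,  20.0],   # bus 2 (no generator)
])

# Line-flow limits |b * (th_i - th_j)| <= f_max as two inequalities each.
A_ub, b_ub = [], []
for i, j in lines:
    row = np.zeros(4)
    if i > 0:
        row[1 + i] += b_line
    if j > 0:
        row[1 + j] -= b_line
    A_ub += [row, -row]
    b_ub += [f_max, f_max]

res = linprog(cost, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=A_eq, b_eq=-d,
              bounds=[(0.0, 0.8), (0.0, 0.8), (None, None), (None, None)])
print(res.x[:2], res.fun)   # expected dispatch ~[0.8, 0.2], cost ~12.0
```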
The SC-DCOPF problem, a variant of DC-OPF, is critical for reliable power system operation against contingencies caused by equipment failure [11]. It considers not only constraints under normal operation, but also additional steady-state security constraints for each possible contingency [12]. 1 While SC-DCOPF is important for reliable power system operation, solving it incurs excessive computational complexity, limiting its applicability in large-scale power networks [13].
To this end, we propose a machine learning approach for solving the SC-DCOPF problem efficiently. Our approach is inspired by the following observations.
• Given a power network, solving the SC-DCOPF problem is equivalent to depicting a high-dimensional mapping between load inputs and generation and voltage outputs.
• In practice, the SC-DCOPF problem is usually solved repeatedly for the same power network, e.g., every 5 minutes, with different load inputs at different time epochs.
As such, it is conceivable to leverage the universal approximation capability of deep feed-forward neural networks [14], [15], to learn the input-to-output mapping for a given power network, and then apply the mapping to obtain operating decisions once load inputs are given (e.g., once every 5 minutes). Specifically, we develop DeepOPF as a DNN-based solution for the SC-DCOPF problem. As compared to conventional approaches based on interior-point methods [16], DeepOPF excels in (i) reducing computing time and (ii) scaling well with the problem size. These salient features are particularly appealing for solving (large-scale) SC-DCOPF problems, which are central to secure power system operation with contingency in consideration. Note that the complexity of constructing and training a DNN model is minor if amortized over the many problem instances (e.g., one per every 5 minutes) that can be solved using the same model. In more detail, our contributions are summarized as follows.

First, after reviewing the SC-DCOPF problem in Sec. III, we describe DeepOPF as a DNN framework for solving the SC-DCOPF problem in Sec. IV. In DeepOPF, we first construct and train a DNN to learn the mapping between the load inputs and the generations. We then directly compute the phase angles from the generations and loads by using the (linearized) power flow equations, as sketched below. Such a two-step procedure significantly reduces the dimension of the mapping to learn, subsequently cutting down the size of the DNN and the amount of training data/time needed. We also design a post-processing procedure to ensure the feasibility of the final solution.

1 There are two types of SC-DCOPF problems, namely the preventive SC-DCOPF problem and the corrective SC-DCOPF problem. In the preventive SC-DCOPF problem, the system operating decisions cannot change once they are determined, so they need to guarantee feasibility under both the pre- and post-contingency constraints. For the corrective SC-DCOPF problem, the system operator has a short time (e.g., 5 minutes) [12] to adjust the operating points after the occurrence of each contingency. Our DeepOPF approach is applicable to both problems. We focus on the preventive SC-DCOPF problem in this paper for easy illustration.
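Returning to the two-step procedure above, the second step is a single linear solve; a minimal sketch, assuming the full nodal susceptance matrix B is available and the network is connected (the function name and interface are hypothetical):

```python
import numpy as np

def recover_angles(B, p_gen, p_load, slack=0):
    """Second step of DeepOPF: with the DNN-predicted generations p_gen and
    given loads p_load, solve the linearized (DC) power flow equations
    B @ theta = p_gen - p_load for the phase angles, fixing the slack bus
    angle to zero (B is the full nodal susceptance matrix)."""
    inj = np.asarray(p_gen) - np.asarray(p_load)
    keep = [i for i in range(len(inj)) if i != slack]
    theta_red = np.linalg.solve(np.asarray(B)[np.ix_(keep, keep)], inj[keep])
    return np.insert(theta_red, slack, 0.0)
```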
Then in Sec. V, we derive a condition suggesting that the approximation error of the neural network in DeepOPF decreases exponentially in the number of layers and polynomially in the number of neurons per layer. This allows us to systematically tune the size of the neural network in DeepOPF according to the pre-specified performance guarantee. We also derive the computational complexity of DeepOPF.
Finally, we carry out simulations and summarize the results in Sec. VI. Simulation results of IEEE test cases show that DeepOPF always generates feasible solutions with negligible optimality loss, while speeding up the computing time by up to 400x as compared to a state-of-the-art solver. The results also highlight a trade-off between the prediction accuracy and running time of DeepOPF.
Due to the space limitation, all proofs are in the supplementary material.
II. RELATED WORK
Existing studies on solving SC-OPF mainly focus on three lines of approaches. The first is numerical iteration algorithms, where the SC-OPF problem is first approximated as an optimization problem, e.g., quadratic programming [17] or linear programming [18], and numerical iteration solvers like interior-point methods [19] are applied to obtain the optimal solutions. However, the time complexity of these numerical-iteration-based algorithms can be substantial for large-scale power systems due to the excessive number of constraints arising from the different contingencies. See [12] for a survey on the numerical iteration algorithms for SC-OPF.
The second is heuristic algorithms based on computational intelligence techniques, including evolutionary programming and swarm optimization. For instance, a particle swarm optimization method with reconstruction operators was proposed in [20] for solving the SC-OPF problem, where the reconstruction operators and an external penalty are adapted to handle the constraints and improve the quality of the final solution. However, there are two drawbacks to this kind of method. First, there is no performance guarantee on either the optimality or the feasibility. Second, the method may still incur high computational complexity.
The third is learning-based methods. Existing studies focus on integrating learning techniques (e.g., neural networks, decision trees) into conventional algorithms to facilitate the process of solving SC-OPF problems. For instance, [21] applies a neural network to learn the system security boundaries as an explicit function to be used in the OPF formulation.
In [22], [23], decision trees are used to derive tractable rules from large data sets of operating points, which can efficiently represent the feasible region and identify possible solutions. However, the proposed heuristic schemes are still iteration based and may still incur a significant amount of running time for large-scale instances.
Recently, there have been some works on learning the active constraint set so as to reduce the size of the OPF problems to solve [24], [25]. Determining the active constraint set, however, is highly non-trivial for SC-OPF problems. With incorrect active constraint sets, the approach may generate infeasible solutions, and it is not clear how to derive a feasible solution at the end. In addition, [26] proposes neural-network/decision-tree based methods to directly obtain a solution for AC-OPF problems, but these methods cannot guarantee the feasibility of the solutions.
Different from existing studies, our DeepOPF uses neural networks to learn the mapping between the load inputs and the generation and voltage outputs, so as to directly obtain solutions for the SC-DCOPF problem with a feasibility guarantee. As compared to our previous effort in [27], this paper studies the more challenging SC-DCOPF problem and, more importantly, characterizes a useful condition that allows us to design the neural network according to the pre-specified performance guarantee of the obtained solution.
III. SECURITY-CONSTRAINED DCOPF PROBLEM
We study the widely-studied (N − 1) SC-DCOPF problem, which considers contingencies due to the outage of any single transmission line in the power system. The objective is to minimize the total generation cost subject to the generator operation limits, the power balance equation, and the transmission line capacity constraints under all contingencies [28]. Assuming the power network remains connected upon contingency, the SC-DCOPF problem is formulated as follows. Here N_bus is the number of buses, N_gen is the number of generators, and n_c is the number of contingency cases (c = 0 denotes the case without any contingency). P_G = [P_Gi, i = 1, ..., N_bus] is the generator output vector, P_min and P_max denote the generation limits, and B_c is the admittance matrix for the c-th contingency, which is an N_bus × N_bus matrix. In the above SC-DCOPF formulation, the first set of constraints describes the generation limits. The second set of constraints are the power flow equations with contingencies taken into account. The third set of constraints captures the line transmission capacity limits for both pre-contingency and post-contingency cases. In the objective, C_i(P_Gi) is the cost function for the generator at the i-th bus, commonly modeled as a quadratic function [31]: C_i(P_Gi) = λ_1i · P_Gi² + λ_2i · P_Gi + λ_3i, where λ_1i, λ_2i, and λ_3i are the model parameters and can be obtained from measured data of the heat rate curve [28]. While the SC-DCOPF problem is important for reliable power system operation and it is a convex (quadratic) problem with efficient solvers, solving it for large-scale power networks in practice still incurs excessive running time, limiting its practicability [13]. Next, we address this issue by proposing a neural network approach to solve the SC-DCOPF problem in a fraction of the time used by existing solvers.
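To make the formulation concrete, the following is a minimal sketch of an (N − 1) SC-DCOPF-style quadratic program on a toy 3-bus network, written with cvxpy; every piece of network data (line susceptances, capacities, loads, cost coefficients) is invented for illustration and is not taken from the paper or the IEEE test cases.

```python
import numpy as np
import cvxpy as cp

# Toy 3-bus triangle network; all values are illustrative placeholders.
lines = [(0, 1), (1, 2), (0, 2)]
b = {line: 1.0 for line in lines}           # line susceptances (p.u.)
line_cap = 1.2                              # identical flow limit for every line
n_bus = 3
P_D = np.array([0.0, 0.0, 1.0])             # 1 p.u. load at bus 2
gen_bus = [0, 1]                            # generator locations
P_min, P_max = np.zeros(2), np.array([1.0, 1.0])
lam1, lam2 = np.array([0.10, 0.12]), np.array([5.0, 6.0])  # quadratic cost coeffs

def bus_matrix(active_lines):
    """Assemble the DC bus susceptance matrix for the given set of lines."""
    B = np.zeros((n_bus, n_bus))
    for i, j in active_lines:
        B[i, i] += b[(i, j)]; B[j, j] += b[(i, j)]
        B[i, j] -= b[(i, j)]; B[j, i] -= b[(i, j)]
    return B

G = np.zeros((n_bus, len(gen_bus)))         # maps generator outputs onto buses
for k, bus in enumerate(gen_bus):
    G[bus, k] = 1.0

P_G = cp.Variable(len(gen_bus))
cost = cp.sum(cp.multiply(lam1, cp.square(P_G)) + cp.multiply(lam2, P_G))
cons = [P_G >= P_min, P_G <= P_max, cp.sum(P_G) == P_D.sum()]  # balance (lossless)

# c = 0 keeps all lines; c >= 1 removes the (c-1)-th line ((N-1) contingencies).
for c in range(len(lines) + 1):
    active = lines if c == 0 else lines[:c - 1] + lines[c:]
    theta = cp.Variable(n_bus)              # per-contingency phase angles
    cons += [bus_matrix(active) @ theta == G @ P_G - P_D, theta[0] == 0]
    for i, j in active:
        cons.append(cp.abs(b[(i, j)] * (theta[i] - theta[j])) <= line_cap)

prob = cp.Problem(cp.Minimize(cost), cons)
prob.solve()
print("dispatch:", P_G.value, "cost:", prob.value)
```

Even on this tiny example, one set of power-flow and line-limit constraints is added per contingency, which is exactly why the constraint count, and hence the solver time, grows quickly with the network size.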
A. A Neural-Network Framework for OPF
We outline a general predict-and-reconstruct framework for solving OPF in Fig. 1. Specifically, we exploit the dependency induced by the equality constraints among the decision variables in the OPF formulation. Given the load inputs, the learning model (e.g., DNN) is applied only to predict a set of independent variables. The remaining variables are then determined by leveraging the (power flow) equality constraints. This way, we not only reduce the number of variables to be predicted, but also ensure that the equality constraints are satisfied, which is usually difficult in generic learning-based approaches. In this paper, we materialize the general framework to develop DeepOPF for solving the SC-DCOPF problem and obtain strong theoretical and empirical results.
B. Overview of DeepOPF
The framework of DeepOPF is shown in Fig. 2, which is divided into the training and inference stages. We first construct and train a DNN to learn the mapping between the load inputs and the generations. We then directly compute the voltages from the generations and loads by using the (linearized) power flow equations.
We discuss the process of constructing and training the DNN model in the following subsections. In particular, we discuss the preparation of the training in Sec. IV-C, the variable prediction and reconstruction in Sec. IV-D, and the design and training of DNN in Sec. IV-E.
In the inference stage, we directly apply DeepOPF to solve the SC-DCOPF problem with given load inputs. This is different from recent learning-based approaches for solving OPF where machine learning only helps to facilitate existing solvers, e.g., by identifying the active constraints [24]. We describe a post-processing step to ensure the feasibility of the obtained solutions in Sec. IV-F.
C. Load Sampling and Pre-processing
We sample the loads uniformly at random within [(1 − x) · P_Di, (1 + x) · P_Di], where P_Di is the default power load at the i-th bus and x is the percentage of the sample range, e.g., 10%. Each sampled load profile is then fed into the traditional quadratic programming solver [32] to generate the optimal solutions. Uniform sampling is applied to avoid the over-fitting issue, which is common in generic DNN approaches. After that, the training data is normalized (using the statistical mean and standard deviation) to improve training efficiency.
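A minimal sketch of this data-preparation step in Python is given below, assuming the default loads and sampling range shown; the variable names and numbers are ours, and the call to the QP solver that produces the reference solutions is only indicated in a comment.

```python
import numpy as np

rng = np.random.default_rng(0)
P_D_default = np.array([0.3, 0.5, 0.2, 0.4])   # default bus loads (illustrative)
x = 0.10                                        # +/-10% sampling range
n_samples = 50_000

low, high = (1 - x) * P_D_default, (1 + x) * P_D_default
loads = rng.uniform(low, high, size=(n_samples, P_D_default.size))

# Each sampled load vector would be passed to a quadratic programming solver
# to obtain the reference optimal generations; only normalization is shown here.
mean, std = loads.mean(axis=0), loads.std(axis=0)
loads_normalized = (loads - mean) / std         # z-score normalized DNN inputs
```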
D. Generation Prediction and Phase Angle Reconstruction
We express P_Gi as P_Gi = P_Gi^min + α_i · (P_Gi^max − P_Gi^min), for i = 1, ..., N_gen, where α_i ∈ [0, 1] is a scaling factor. Instead of predicting the generations with their diverse value ranges, we predict the scaling factors α_i ∈ [0, 1] and recover P_G. This simplifies the DNN output layer design to be discussed later. Note that the generation of the slack bus is obtained by subtracting the generations of the other buses from the total load. Once we obtain P_G, we directly compute the phase angles by a useful property of the admittance matrices [33], [34]. We first obtain an (N_bus − 1) × (N_bus − 1) matrix B̃_c by eliminating the row and column corresponding to the slack bus from the admittance matrix B_c for each contingency c = 0, ..., n_c. It is well understood that B̃_c is a full-rank matrix [28], [35]. Then we compute the (N_bus − 1)-dimensional phase angle vector θ̃_c as θ̃_c = B̃_c^{−1}(P̃_G − P̃_D), where P̃_G and P̃_D stand for the (N_bus − 1)-dimensional generation and load vectors for the buses excluding the slack bus, respectively. At the end, we output the N_bus-dimensional phase angle vector θ_c (c = 0, ..., n_c) by inserting a constant representing the phase angle of the slack bus into θ̃_c. Again, there are two advantages of this approach. On one hand, we use the property of the admittance matrix to reduce the number of variables to predict by our neural network, cutting down the size of our DNN model and the amount of training data/time needed. On the other hand, the equality constraints involving the generations and the phase angles are satisfied automatically, which is difficult to handle in alternative learning-based approaches.
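The following numpy sketch walks through this recover-and-reconstruct step on a toy 3-bus case; the admittance matrix, generation limits, and the predicted scaling factor are illustrative values chosen by us, not data from the paper.

```python
import numpy as np

P_D = np.array([0.2, 0.4, 0.5])                  # bus loads (illustrative)
P1_min, P1_max = 0.0, 1.0                        # limits of the non-slack generator
alpha_hat = 0.6                                  # DNN-predicted scaling factor
P_G1 = P1_min + alpha_hat * (P1_max - P1_min)    # recovered generation at bus 1
P_G0 = P_D.sum() - P_G1                          # slack-bus generation from balance
P_G = np.array([P_G0, P_G1, 0.0])

B_c = np.array([[ 2.0, -1.0, -1.0],              # toy admittance matrix for case c
                [-1.0,  2.0, -1.0],
                [-1.0, -1.0,  2.0]])
slack = 0
keep = [i for i in range(3) if i != slack]
B_red = B_c[np.ix_(keep, keep)]                  # full-rank reduced matrix
theta_red = np.linalg.solve(B_red, (P_G - P_D)[keep])
theta = np.insert(theta_red, slack, 0.0)         # slack-bus angle fixed to 0
```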
E. The DNN Model
The core of DeepOPF is the DNN model, which is applied to approximate the load-to-generation mapping, given a power network. The DNN model is established based on the multi-layer feed-forward neural network structure, which consists of a typical three-level network architecture: one input layer, several hidden layers, and one output layer. More specifically, the applied DNN model is defined as: h_i = σ(W_i h_{i−1} + b_i), i = 1, ..., L − 1, and α̂ = h_L = σ′(W_L h_{L−1} + b_L), where h_0 denotes the input vector of the network, h_i is the output vector of the i-th hidden layer, h_L is the output vector (of the output layer), and α̂ is the generated scaling factor vector for the generators. The matrices W_i, bias vectors b_i, and activation functions σ(·) and σ′(·) are subject to design.
1) The architecture: In the DNN model, h_0 represents the normalized load data, which is the input of the network. After that, features are learned from the input vector h_0 by several fully connected hidden layers. The i-th hidden layer models the interactions between features by introducing a connection weight matrix W_i and a bias vector b_i. The activation function σ(·) further introduces non-linearity into the hidden layers. In our DNN model, we adopt the widely-used Rectified Linear Unit (ReLU) as the activation function of the hidden layers, which helps accelerate convergence and alleviate the vanishing gradient problem [1]. In addition, the Sigmoid function [2], σ′(x) = 1/(1 + e^{−x}), is applied on the output layer to project the outputs of the network to (0, 1).
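As a concrete illustration of this architecture, below is a minimal PyTorch sketch of a fully connected network with ReLU hidden layers and a Sigmoid output layer; the class name, layer sizes, and batch dimensions are our own illustrative choices rather than the configuration used in the paper.

```python
import torch
import torch.nn as nn

class DeepOPFNet(nn.Module):
    """Feed-forward net mapping normalized loads to scaling factors in (0, 1)."""
    def __init__(self, n_load, n_gen, hidden_sizes=(64, 32)):
        super().__init__()
        layers, prev = [], n_load
        for h in hidden_sizes:
            layers += [nn.Linear(prev, h), nn.ReLU()]   # hidden layers with ReLU
            prev = h
        layers += [nn.Linear(prev, n_gen), nn.Sigmoid()]  # output in (0, 1)
        self.net = nn.Sequential(*layers)

    def forward(self, load):          # load: (batch, n_load) normalized loads
        return self.net(load)         # alpha_hat: (batch, n_gen)

model = DeepOPFNet(n_load=30, n_gen=6)
alpha_hat = model(torch.rand(16, 30))   # toy forward pass on a random batch
```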
2) The loss function: After constructing the DNN model, we need to design the corresponding loss function to guide the training. Since there exists a linear correspondence between P_G and θ_c, c = 0, ..., n_c, there is no need to introduce a loss term for the phase angles. The difference between the generated solution and the actual solution of P_Gi is expressed by the mean square error between each element of the generated scaling factors α̂_i and the optimal scaling factors α_i: L_pred = (1/N_gen) Σ_{i=1}^{N_gen} (α̂_i − α_i)², where N_gen represents the number of generators. Meanwhile, we introduce a penalty term related to the inequality constraints into the loss function. We first introduce an n_a × n matrix A_c for each contingency, where n_a is the number of adjacent bus pairs. Each row in A_c corresponds to an adjacent bus pair. Given an adjacent bus pair (i, j) under the c-th contingency, suppose the power flows from the i-th bus to the j-th bus; the corresponding entries a_i and a_j of that row of A_c are then defined, for c = 0, ..., n_c, so that A_c θ̃_c gives the line flows. Based on (5) and (7), the capacity constraints for the transmission lines in (1) can be expressed as bounds on (A_c θ̃_c)_k, where (A_c θ̃_c)_k represents the k-th element of A_c θ̃_c. Note that θ̃_c is the phase angle vector generated based on (5) and the discussion below it, and it is computed from P_G and P_D.
We can then calculate the penalty value for each (A_c θ̃_c)_k and add the average penalty value into the loss function for training. The penalty term capturing the feasibility of the generated solutions, denoted L_pen, is defined as the average of these penalty values over all lines and contingencies. Thus, for each item in the training data set, the loss function consists of two parts: the difference between the generated solution and the reference solution, and the penalty upon the solution being infeasible. The total loss can be expressed as a weighted sum of the two parts: L_total = w_1 · L_pred + w_2 · L_pen, where w_1 and w_2 are positive weighting factors for balancing the influence of each term in the training phase.
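The snippet below sketches one plausible way to compute such a two-part loss in PyTorch, assuming the line flows A_c θ̃_c have already been assembled into a tensor; the exact penalty shape used in the paper is not reproduced here (a clamped violation is used as a stand-in), and all tensor names are ours.

```python
import torch

def deepopf_loss(alpha_hat, alpha_ref, flows, flow_max, w1=1.0, w2=1.0):
    # alpha_hat, alpha_ref: (batch, n_gen) predicted / reference scaling factors
    # flows: (batch, n_branch) line flows reconstructed from the prediction
    mse = torch.mean((alpha_hat - alpha_ref) ** 2)               # prediction term
    violation = torch.clamp(torch.abs(flows) - flow_max, min=0.0)  # 0 if feasible
    penalty = torch.mean(violation)                              # feasibility term
    return w1 * mse + w2 * penalty
```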
3) The training process: In general, the training process can be regarded as minimizing the average value of the loss function over the given training data by tuning the parameters of the DNN model: min_{W_i, b_i} (1/N_train) Σ_{k=1}^{N_train} L_total,k, where W_i and b_i, i = 1, ..., N_hid, represent the connection weight matrix and bias vector for layer i, N_train is the amount of training data, and L_total,k is the loss of the k-th item in the training set. We apply the widely-used optimization technique in deep learning, stochastic gradient descent (SGD) [2], in the training stage, which is effective for large-scale data sets and economizes on the computational cost at every iteration by evaluating the gradient on a subset of the summands at every step.
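A minimal mini-batch SGD training loop consistent with the above could look as follows; it reuses the DeepOPFNet sketch from Sec. IV-E, uses randomly generated placeholder tensors in place of the real training data, and omits the feasibility penalty for brevity.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

loads = torch.rand(1024, 30)            # normalized load inputs (placeholder)
alphas = torch.rand(1024, 6)            # reference scaling factors (placeholder)
loader = DataLoader(TensorDataset(loads, alphas), batch_size=64, shuffle=True)

model = DeepOPFNet(n_load=30, n_gen=6)  # class defined in the sketch above
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

for epoch in range(200):                # epoch count matches the experiments
    for batch_load, batch_alpha in loader:
        alpha_hat = model(batch_load)
        loss = torch.mean((alpha_hat - batch_alpha) ** 2)  # penalty term omitted
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```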
F. Post-Processing
After obtaining a solution including the generations and phase angles, we check its feasibility by examining whether the constraints on the generation limits and the line transmission limits are satisfied. We output the solution if it passes the feasibility test. Otherwise, we solve a quadratic programming problem that projects the infeasible solution onto the constraint set, i.e., finds the feasible point closest to it, and output the projected (and thus feasible) solution.
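A minimal sketch of such a projection with cvxpy is given below; only the generation limits and the power balance are included, the numbers are invented, and in the full scheme the per-contingency line-flow limits would appear as further constraints.

```python
import numpy as np
import cvxpy as cp

P_G_pred = np.array([0.9, 1.15])              # possibly infeasible DNN output
P_min, P_max = np.zeros(2), np.array([1.0, 1.0])
total_load = 1.6

P_G = cp.Variable(2)
objective = cp.Minimize(cp.sum_squares(P_G - P_G_pred))   # closest feasible point
constraints = [P_G >= P_min, P_G <= P_max, cp.sum(P_G) == total_load]
cp.Problem(objective, constraints).solve()
P_G_feasible = P_G.value
```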
A. Approximation Error of the Load-to-Generation Mapping
Given a power network, the SC-DCOPF problem is a quadratic programming problem with linear constraints. We denote the mapping between the load input P_D and the optimal generation P_G as f*(·). Following the common practice in deep-learning analysis (e.g., [36], [37], [38]) and without loss of generality, we focus on the case of one-dimensional output in the following analysis, i.e., f*(·) is a scalar. Assuming the load input domain is compact, which usually holds in practice, f*(·) has certain properties. Lemma 1. The function f*(·) is piece-wise linear. Moreover, it is Lipschitz-continuous; that is, there exists a constant Λ > 0, such that for any x_1, x_2 in the domain of f*(·), |f*(x_1) − f*(x_2)| ≤ Λ‖x_1 − x_2‖. Define f(·) as the mapping between P_D and the generation obtained by DeepOPF by using a neural network with depth N_hid and maximum number of neurons per layer N_n. Again we study the case of one-dimensional output. As f(·) is generated from a neural network with ReLU activation functions, it is also piece-wise linear [39].
Before we proceed, we present a result on the approximation error between two scalar function classes, which can be of independent interest.
Essentially, the lemma gives a lower bound on the worst-case error of using a linear function to best approximate a two-segment piece-wise linear function. By generalizing Lemma 2 to multi-input functions, we study the approximation error between f*(·) and f(·).
where d is the diameter of the load input domain D.
The theorem characterizes a lower bound on the worst-case error of using neural networks to approximate load-to-generation mappings in SC-DCOPF problems. The bound is linear in d, which captures the size of the load input domain, and Λ, which captures the "curveness" of the mapping to learn. Meanwhile, interestingly, the approximation error bound decreases exponentially in the number of layers while polynomially in the number of neurons per layer. This suggests the benefits of using "deep" neural networks in mapping approximation, similar to the observations in [36], [37], [38]. A useful corollary suggested by Theorem 3 is the following.
Corollary 4. The following gives a condition on the neural network parameters, such that it is possible to approximate even the most difficult load-to-generation mapping with a Lipschitz constant Λ up to an error of ε > 0.
where d is the diameter of the input domain D.
The condition in (15) allows us to tune the "size" of the neural network according to the preferred approximation accuracy. If (15) is not satisfied, then there may exist a difficult mapping for which even the smallest possible approximation error exceeds ε.
B. Computational Complexity
The computational complexity of conventional approaches is related to the scale of the SC-DCOPF problem. For example, the computational complexity of the interior point method based approach for convex quadratic programming is O(L²n⁴) measured as the number of arithmetic operations [40], where L is the number of input bits and n is the number of variables. Plugging in the parameters of the SC-DCOPF problem, this computational complexity turns out to be O(N_bus⁶). The computational complexity of DeepOPF mainly consists of two parts: the calculation as the input data passes through the DNN model and the post-processing. For the post-processing, its computational complexity may be negligible in practice as the DNN model barely generates infeasible solutions, as seen in Sec. VI. Thus, the computational complexity of DeepOPF is dominated by the calculation with respect to the DNN model. It can be evaluated by the method in [41].
Specifically, recall that the number of buses and the number of contingencies are N_bus and n_c, respectively. The input and the output of the DNN model have N_in and N_out dimensions, and the DNN model has N_hid hidden layers, each with at most N_n neurons. Once we finish training the DNN model, the complexity of generating solutions by using DeepOPF is characterized in the following proposition.
Proposition 5. The computational complexity (measured as the number of arithmetic operations) to generate the generations to the SC-DCOPF problem by using DeepOPF is dominated by the matrix–vector products of the DNN layers, and thus grows as N_hid · N_n² for the hidden layers. From empirical experience, we set N_n to be on the same order as N_bus and set N_hid to be a small constant. Thus the complexity of our DeepOPF is O(N_bus²), significantly smaller than that of the interior point method. Our simulation results in the next section corroborate this observation.
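As a quick sanity check on how the two estimates diverge, the following back-of-the-envelope snippet compares N_bus⁶ with N_bus² for the test-case sizes used later; constant factors are ignored, so the numbers only illustrate how the gap scales, not actual running times.

```python
# Ratio of the two asymptotic estimates: it grows as N_bus^4.
for n_bus in (30, 118, 300):
    interior_point_ops = n_bus ** 6   # O(N_bus^6) estimate for interior point
    deepopf_ops = n_bus ** 2          # O(N_bus^2) estimate for DNN inference
    print(n_bus, interior_point_ops // deepopf_ops)
```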
C. Trade-off between Accuracy and Complexity
The results in Theorem 3 and Proposition 5 suggest a tradeoff between accuracy and complexity. In particular, we can tune the number of hidden layers N hid and the number of neurons per layer N n to trade between the approximation accuracy and computational complexity of the DNN approach. It appears desirable to design multi-layer neural networks in DeepOPF as increasing N hid may reduce the approximation error exponentially, but only increase the complexity linearly.
VI. NUMERICAL EXPERIMENTS
A. Experiment Setup 1) Simulation environment: The experiments are conducted on CentOS 7.6 on a quad-core (i7-3770 @ 3.40 GHz) CPU workstation with 16 GB RAM.
2) Test case: We consider four IEEE standard cases [42]: the IEEE 30-/57-/118-/300-bus test systems, representing small-scale, medium-scale, and large-scale power networks for the SC-DCOPF problem. Their illustrations are in [43], [44] and their parameters are shown in Table I.
3) Training data: In the training stage, the load data is sampled uniformly at random within [90%, 110%] of the default value on each bus. Then we obtain the solution of the SC-DCOPF problem by Gurobi [32]. Gurobi is based on the traditional interior point method [45]. We sample 50,000 training data for each test case.
4) The implementation of the DNN model: We design the DNN model based on the PyTorch platform and apply the stochastic gradient descent method [2] to train the neural network. In addition, the number of epochs is set to 200 and the batch size is 64. We set the weighting factors in the loss function in (10) to be w_1 = w_2 = 1, based on empirical experience. The remaining parameters are shown in Table I, including the number of hidden layers and the number of neurons in each layer for each test case. We illustrate the detailed architecture of our DNN model for the IEEE case30 in Fig. 3.
5) Evaluation Metrics:
We will compare the performance of DeepOPF and the state-of-the-art Gurobi solver using the following metrics, averaged over 10,000 instances. The first is the percentage of feasible solutions obtained by each approach (for DeepOPF, we only count the feasible solutions before post-processing). The second is the objective cost obtained by each approach. The third is the running time, i.e., the average computation time for obtaining solutions for the instances. We then compute the speedup as the ratio between the running times of the Gurobi solver and DeepOPF.
B. Performance Evaluation
The simulation results for the test cases are shown in Table II and we have several observations. First, as compared to the Gurobi solver, our DeepOPF approach speeds up the computing time by up to three orders of magnitude. The speedup is increasingly significant as the test cases get larger, suggesting that our DeepOPF approach is more efficient for large-scale power networks. Second, the percentage of feasible solutions obtained by DeepOPF is 100% before post-processing, which implies that DeepOPF barely generates infeasible solutions and can find feasible solutions through the mapping. Third, the cost difference between the DeepOPF solution and the reference Gurobi solution is negligible, which means each dimension of the generated solution closely matches that of the optimal solution.
To further understand the performance of DeepOPF, we plot the empirical cumulative distributions of the speedup and the optimality loss for the IEEE 118-bus test case in Fig. 4(a). We also evaluate DeepOPF with neural networks of different sizes and summarize the results in Table III. In alignment with our theoretical analysis in Sec. V-A, increasing the depth and the size of the neural network improves the optimality-loss performance, at the (minor) cost of longer computing time.
VII. CONCLUSION
We develop DeepOPF for solving the SC-DCOPF problem. DeepOPF is inspired by the observation that solving SC-DCOPF for a given power network is equivalent to learning a high-dimensional mapping between the load inputs and the dispatch and transmission decisions. DeepOPF employs a DNN to learn such a mapping. With the learned mapping, it first obtains the generations from the load inputs and then directly computes the phase angles from the generations and loads. We characterize the approximation capability and computational complexity of DeepOPF. Simulation results also show that DeepOPF scales well in the problem size and speeds up the computing time by up to 400x as compared to conventional approaches. Future directions include extending DeepOPF to the AC-OPF setting and exploring joint learning-based and optimization-based algorithm design.

APPENDIX A PROOF OF LEMMA 1

Proof. We now show that the considered piece-wise linear one-dimensional output function f*(·) is Lipschitz-continuous on the input domain D, which can be partitioned into r different convex polyhedral regions R_i, i = 1, ..., r. The mapping f*(·) is piece-wise linear and can be defined as follows: f*(x) = a_i x + b_i for x ∈ R_i, where x ∈ R^{n×1}, a_i ∈ R^{1×n}, i = 1, ..., r, and b_i ∈ R, i = 1, ..., r. Then, for any x_1, x_2 in the domain, we have |f*(x_1) − f*(x_2)| ≤ max_i ‖a_i‖ · ‖x_1 − x_2‖. Thus, let Λ = max{‖a_1‖, ..., ‖a_r‖}. We have |f*(x_1) − f*(x_2)| ≤ Λ‖x_1 − x_2‖. Therefore, f*(·) is Lipschitz-continuous.
APPENDIX B PROOF OF LEMMA 2
Proof. We can derive the lower bound on the worst-case L∞-based approximation error as follows. Suppose we want to find a function g(·) belonging to the linear scalar function class G to approximate a function belonging to the two-segment piece-wise linear function class H with a Lipschitz constant Λ > 0, over an interval [−µ, µ] (µ > 0). An illustration is shown in Fig. 5. Let g(x) = a · x + b, for x ∈ [−µ, µ], and let ĥ ∈ H be the two-segment function with ĥ(0) = Λµ and ĥ(−µ) = ĥ(µ) = 0, i.e., ĥ(x) = Λ(µ − |x|). Then, we can obtain the lower bound for the L∞-based approximation error between ĥ(·) and g(·) by a case analysis on the intercept b.
• If b ≤ Λµ/2, then sup_x |ĥ(x) − g(x)| ≥ |ĥ(0) − g(0)| = |Λµ − b| ≥ Λ · µ/2.
• Otherwise Λµ/2 < b. If a > 0, we can consider the point x = µ: |ĥ(µ) − g(µ)| = aµ + b > Λ · µ/2. Otherwise a ≤ 0, and we can consider the point x = −µ and obtain the same result.
Thus overall, for the worst-case L∞-based approximation error we have sup_{x∈[−µ,µ]} |ĥ(x) − g(x)| ≥ Λ · µ/2.

APPENDIX C PROOF OF THEOREM 3

Proof. We can characterize the lower bound on the worst-case error of using neural networks to approximate load-to-generation mappings in SC-DCOPF problems as follows.
Let G be the family of piece-wise linear functions generated by a neural network with depth N_hid and maximum number of neurons per layer N_n, on the load input domain D with diameter d. The maximal number of segments any function belonging to G can have is denoted by n. Let H be the class of all possible f*(·) with a Lipschitz constant Λ > 0. Let f̂ ∈ H be a function comprising 2n linear segments of equal length, with breakpoints a_0, a_1, ..., a_2n satisfying ‖a_i − a_{i+1}‖ = d/(2n). According to Lemma 2, on each interval [a_i, a_{i+2}], i = 0, ..., 2n − 2, any function in G that is linear on that interval incurs an approximation error of at least Λ/2 · d/(2n). Since any g ∈ G has at most n segments, it must be linear on at least one such interval. Thus, the worst-case approximation error is at least Λ/2 · d/(2n), and the bound in Theorem 3 follows by bounding n in terms of N_hid and N_n. | 2019-10-30T07:15:32.000Z | 2019-10-30T00:00:00.000 | {
"year": 2019,
"sha1": "8bab7d2ba1189c46f1440e940ca7dad4af39d744",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1910.14448",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "aaf857d85e35402b8a040a03e79598005e6a8e7d",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
} |
213387302 | pes2o/s2orc | v3-fos-license | Understanding of the Modeling Method in Additive Manufacturing
With the development of additive manufacturing, how to improve the efficiency and accuracy of manufacturing and prediction has attracted more and more attention. This paper focuses on three basic modeling methods, summarized by the way each approaches the problem: the empirical method, the analytical method, and the numerical method. These methods are used differently depending on the practical circumstances. Besides, due to the improvement of computer computing power, machine learning and digital twins have also been applied to the study of additive manufacturing. Machine learning performs well in the prediction and optimization of process parameters, but its need for large amounts of data increases the experimental cost. The digital twin performs well in monitoring the condition of equipment. In addition, it can replace expensive and time-consuming physical experiments with inexpensive and efficient digital experiments, which can provide data for analysis. However, because of insufficient research, its application is still limited.
Introduction
The empirical method conducts an investigation relying on experiments rather than theory. It focuses on observations and measurements rather than on understanding the underlying principles. The empirical method consists of a series of experiments, and the results are summarized from the experimental data. Further experiments are then required to verify and improve the previous findings.
The main advantage of the empirical method, compared with other modeling methods, is that it requires the least effort to explore the physical laws governing the additive manufacturing process. It is a good choice for processes whose physical laws are difficult to explore or too complex. However, its disadvantages are also obvious: in most cases, the results only apply to the specific case of a particular process, and a large number of experiments are expensive and time-consuming.
Examples
Empirical methods are usually used to find out the relationship between two or several parameters in order to optimize the process.
In vat photo polymerization, Wang et al. [5] use the least-squares method to find out the relationship between the post-cure shrinkage and process parameters. Lan et al. [6] do experimental research on dimensional accuracy of parts and the associated parameters. Karalekas et al. [7] study the shrinkage characteristics of stereolithography built square laminate plates using an acrylic-based photopolymer.
In powder bed fusion, Raghunath et al. [8] investigate the relationship between shrinkage and the various process parameters namely laser power, beam speed, hatch spacing, part bed temperature and scan length.
In material extrusion, Anitha et al. [9] explore the effect of process parameter like layer thickness on design quality. Sood et al. [10] focus on the effect of some important parameters such as layer thickness, part build orientation, raster angle, raster width and air gap on the compressive stress of test specimen.
In sheet lamination, Kechagias and John [11] use typical test part and carry out matrix experiments based on Taguchi design to find out the influence of different process parameters (layer thickness, heater temperature, platform retract, heater speed, laser speed, feeder speed and platform speed) on the roughness of vertical surfaces along Z-axis on ZX-plane of parts.
From these studies, we can see that most of the research focuses on exploring the relationship between two or several parameters in order to optimize the process or the part quality.
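To make the flavor of such empirical fits concrete, the snippet below shows a generic least-squares fit of a measured response to a single process parameter in Python; the data points, parameter names, and the linear model form are invented for illustration and do not come from any of the studies cited above.

```python
import numpy as np

layer_thickness = np.array([0.05, 0.10, 0.15, 0.20, 0.25])   # mm (hypothetical)
shrinkage = np.array([0.21, 0.34, 0.43, 0.58, 0.66])          # %  (hypothetical)

# Fit shrinkage ~ a * thickness + b by ordinary least squares.
A = np.vstack([layer_thickness, np.ones_like(layer_thickness)]).T
(a, b), residuals, *_ = np.linalg.lstsq(A, shrinkage, rcond=None)
print(f"shrinkage ~ {a:.2f} * thickness + {b:.2f}")
```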
Introduction
The analytical method derives exact functional relations, or forms theorems from existing theories, to establish a mathematical model. Its principles are clear and every step has theoretical support. The analytical model is the output of a mathematical analysis of the process that considers the laws of physics and the related physical processes.
The main advantage of the analytical method is that the results obtained can be easily transferred to other related processes. Besides, it does not require spending much time and money on experiments. However, sometimes the physical laws of a process are too complex or unclear, which makes it difficult to establish an analytical model.
Examples
Analytical methods usually use formulas and models derived from theoretical calculations to control and predict manufacturing process.
Ramanath et al. [12] focus on the study of the melt flow behavior of Poly-ε-caprolactone (PCL) as a representative biomaterial. Mathematical model is established, and the melt flow behavior is studied by changing the inlet filament velocity and the outlet nozzle diameter and angle. Muller [13] introduces a process modeling and a system control to manufacture Functionally Graded Materials parts with a direct laser deposition system. This work enables to choose an adapted manufacturing strategy and control process parameters to obtain the required material distribution and the required geometry. Strano et al. [14] propose a mathematical model to accurately predict surface roughness, which takes into account the staircase effect and the existence of surface particles. This model is conducive to improving the surface quality of parts.
Introduction
The numerical method is based on physical laws and uses step-by-step numerical procedures to obtain useful results. It processes data and solves problems by means of computers or computational models. The purpose of numerical analysis is to design and analyze computational methods that obtain approximate but accurate results for a problem. It is often used in situations where the process is complex and exact solutions are difficult to obtain.
The main advantage of numerical method is that it has a wide range of applications. There are many kinds of analytical software based on numerical methods, which can be used to solve problems faster and more accurately.
Examples
Numerical method is usually a method for solving problems after modeling. Tiebing Chen and Yuwen Zhang [15] use numerical simulation to study the effect of process parameters on the process of stratified sintering in selective laser sintering. Sachs et al. [16] use numerical simulation to develop a mathematical model for binder flight trajectory. Podshivalov et al. [17] apply numerical methods to the generation of microscale scaffolds.
One of the widely used numerical methods is the finite element method. Finite element analysis uses a mathematical approximation to simulate real physical systems (geometry and load cases). With simple, interacting elements, a finite number of unknowns can be used to approximate a real system with infinitely many unknowns. Since accurate solutions are difficult to obtain for most practical engineering problems, finite element analysis, with its high computational accuracy and adaptability to complex shapes, is an effective engineering analysis tool.
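To illustrate the idea of replacing a continuous field with a finite number of nodal unknowns, here is a minimal one-dimensional steady-state heat-conduction finite-element sketch in Python; the element formulation is the standard linear one, and all physical values (conductivity, heat source, boundary temperatures) are invented and unrelated to any specific additive manufacturing process.

```python
import numpy as np

n_el, L, k, q = 10, 0.01, 20.0, 1e6   # elements, length [m], conductivity, heat source
n_nodes = n_el + 1
h = L / n_el                           # element length

K = np.zeros((n_nodes, n_nodes))       # global conductance (stiffness) matrix
F = np.zeros(n_nodes)                  # global heat-source (load) vector
ke = (k / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])   # linear element matrix
fe = q * h / 2.0 * np.ones(2)                          # element load vector
for e in range(n_el):                  # assemble element contributions
    idx = [e, e + 1]
    K[np.ix_(idx, idx)] += ke
    F[idx] += fe

# Dirichlet boundary conditions: fixed temperatures at both ends.
T_left, T_right = 300.0, 400.0
free = np.arange(1, n_nodes - 1)
F_free = F[free] - K[free, 0] * T_left - K[free, -1] * T_right
T = np.empty(n_nodes)
T[0], T[-1] = T_left, T_right
T[free] = np.linalg.solve(K[np.ix_(free, free)], F_free)   # nodal temperatures
```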
Finite element analysis can be used to solve one-dimensional, two-dimensional, and three-dimensional engineering problems. Nelson et al. [18] use a one-dimensional finite element model to describe heat transfer in the powder bed fusion process. Singh et al. [19] use a two-dimensional finite element model to measure the evolution of the temperature distribution and density of bisphenol-A polycarbonate in the selective laser sintering process with time, and determine the important technological parameters affecting the final density of laser sintered parts and their relationship. Bugeda et al. [20] establish a three-dimensional finite element model of single-track sintering in the selective laser sintering process, which considers both the thermal phenomena and the sintering phenomena involved in the process. Dong et al. [21] establish a transient three-dimensional finite element model to study the phase transition in the selective laser sintering process, which also considers the thermal and sintering phenomena during sintering. António and Vilar [22] propose a model coupling finite element heat transfer calculations, phase transformation kinetics, and microstructure-property relations in Ti-6Al-4V, and use the model to obtain processing maps of deposition parameters related to the microstructure and properties of the parts. It can be seen from the above studies that the applications of finite element analysis are mainly focused on the thermodynamic study of the powder bed fusion process.
Machine learning and digital twin in additive manufacturing
In addition to the traditional methods mentioned above, due to the improvement of computer computing power, the modeling method of additive manufacturing based on machine learning and digital twin provides a direction for future research.
Machine Learning
Because machine learning can optimize and predict parameters using only the relationship between inputs and outputs, without knowing the internal rules of the system, it is suitable for predicting and optimizing the process parameters of additive manufacturing in cases where it is difficult to explore the internal physical laws of the process. In addition, compared with traditional methods, the application of machine learning in additive manufacturing is conducive to improving efficiency and accuracy. Therefore, in recent years, machine learning has been widely used in the optimization and prediction of process parameters in additive manufacturing. Lee et al. [23] build a neural network model to find out the influence of process input parameters on the dimensional accuracy of parts and predict the dimensional accuracy of parts. Rong-Ji et al. [24] use a genetic algorithm and a back propagation neural network algorithm to determine the optimal process parameters for producing parts with higher accuracy. Munguía et al. [25] propose an estimator based on an artificial neural network to estimate the build time and related costs (labor, machine costs, and daily expenses) of the selective laser sintering process. Padhye and Deb [26] consider the minimization of surface roughness and construction time in the selective laser sintering process as a multi-objective optimization, using the non-dominated sorting genetic algorithm (NSGA-II) and the multi-objective particle swarm optimizer (MOPSO). Garg and Lam [27] apply genetic programming, support vector regression, and artificial neural networks to formulate laser power-based open-porosity models. Garg and Lam [28] also use the computational intelligence (CI) approach of multi-gene genetic programming (MGGP) to formulate a model for predicting open porosity in the selective laser sintering process. Stoyanov and Bailey [29] put forward state-space model identification, minimizing the prediction error in the space of the model structure, to predict and monitor trends and quality online. Machine learning can also be combined with model-based methods. Salonitis et al. [30] propose a method for the design optimization of lattice components towards weight minimization, which combines finite element analysis and evolutionary computation.
However, machine learning still has limitations. It is expensive to carry out a large number of experiments in additive manufacturing, while machine learning needs a lot of data for training. Therefore, the sample size in most machine learning experiments is insufficient. This problem needs to be solved in the future to further improve accuracy.
Digital Twin
Digital twin is a simulation process that makes full use of physical model, sensor update, operation history and other data. The purpose of digital twin is to evaluate and predict the condition of equipment.
In additive manufacturing, the digital twin can be used to assess the health of the system and to evaluate and predict process parameters. After full experimental verification, such efficient and cheap digital experiments, with high accuracy and stability, can replace expensive and time-consuming physical experiments. Some attempts have been made in this field. Knapp et al. [31] design a digital twin model to predict cooling rates, temperature gradients, solidification rates, SDAS, and micro-hardness values. Debroy et al. [32] study the current status and needs of the first generation of digital twins for additive manufacturing. From the above research, we can find that although the digital twin performs well in prediction and evaluation, its application is still limited because the current research on it is insufficient.
Conclusion
In this paper, three basic modeling methods for additive manufacturing are introduced: empirical method, analytical method, and numerical method. We introduce the main idea and significant applications of these three methods respectively. These methods are used in different situations. Besides, due to the improvement of computer computing power, recently machine learning and digital twin have also been applied to the study of additive manufacturing. With the development of modeling methods for additive manufacturing, the technology of additive manufacturing will surely make rapid progress. | 2020-01-09T09:10:59.529Z | 2020-01-07T00:00:00.000 | {
"year": 2020,
"sha1": "c9134dcc50b4c6e12d37af9f2824b715897671e1",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/711/1/012017",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "9a240d01c5669930c1b8a5bc1e19676973e4701c",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
261415810 | pes2o/s2orc | v3-fos-license | Structural basis of IRGB10 oligomerization by GTP hydrolysis
Immunity-related GTPase B10 (IRGB10) is a crucial member of the interferon (IFN)-inducible GTPases and plays a vital role in host defense mechanisms. Following infection, IRGB10 is induced by IFNs and functions by liberating pathogenic ligands to activate the inflammasome through direct disruption of the pathogen membrane. Despite extensive investigation into the significance of the cell-autonomous immune response, the precise molecular mechanism underlying IRGB10–mediated microbial membrane disruption remains elusive. Herein, we present two structures of different forms of IRGB10, the nucleotide-free and GppNHp-bound forms. Based on these structures, we identified that IRGB10 exists as a monomer in nucleotide-free and GTP binding states. Additionally, we identified that GTP hydrolysis is critical for dimer formation and further oligomerization of IRGB10. Building upon these observations, we propose a mechanistic model to elucidate the working mechanism of IRGB10 during pathogen membrane disruption.
The IRG family, also called p47 GTPases, comprises IFN-inducible GTPases, which are involved in the early immune response. In mice, a total of 23 genes (IRGA 1-8, IRGB 1-10, IRGC, IRGD, IRGM 1-3) have been identified as the IRG family, while only a single full-length IRGC and a truncated IRGM have been identified as the human IRG family (8,17). Similar to other GTPases, the IRG family possesses a GTPase domain containing a highly conserved P-loop to which GTP binds. The IRG family is divided into two classes, the GKS class and the GMS class, according to the P-loop sequence (18). The GKS class of the IRG family contains a conserved G-x(4)-GKS pattern in the P-loop, while the GMS class contains a G-x(4)-GMS sequence pattern in the P-loop. All IRG family members except IRGM (GMS class) belong to the GKS class (17,18). The IRG family is known to contribute to cell-autonomous immune responses against invasion by various pathogens (19,20).
Although their detailed working mechanisms are unclear, several studies on IRGB10, an IRG family member, have indicated that the IRG family mediates pathogen membrane disruption in collaboration with the GBP family, which is critical for the host defense mechanism (20).During this pathogen membrane disruption stage, pathogenic products, such as DNA and lipopolysaccharide (LPS), are released from the pathogen and induce the formation of inflammasomes to further promote the host immune response (20).In the case of IRGA6 and IRGB6, IRGA6 directly binds to the pathogen membrane using N-terminal myristoylation, whereas IRGB6 is not involved in the membrane disruption.However, it remains unclear whether other IRG family proteins can also directly interact with pathogens and contribute to pathogen membrane disruption similar to IRGB10 and IRGA6 (20-24).The various IRG families may have their own action mechanism for the immune system.
Among the IRG family, the structures of IRGA6 (25), IRGB6 (24), and IRGB10 (26) have been elucidated, with several studies revealing that they share similar structures comprising two distinct domains, a helical domain and a GTPase domain. The IRG family usually forms a unique head-to-head dimer, as well as a further oligomer, during pathogen membrane disruption (26,27). To form head-to-head dimers, IRGA6 uses the P-loop and switch I region of the GTPase domain, whereas IRGB10 uses one of the helices of the GTPase domain (26,27). Without clear experimental data, we previously suggested a structural model of pathogen membrane disruption by IRGB10 using the elucidated GDP-bound dimeric IRGB10 structure (26). Additionally, we speculated that the structure of IRGB10 is altered by GTP hydrolysis, similar to that of other GTPase proteins, such as Atlastin1, which is structurally related to the IRG families. We also speculated that GTP hydrolysis and the presence or absence of nucleotides impact the function of IRGB10. Although these assumptions were made based on the GDP-bound structure of IRGB10 in our previous study, several unanswered questions remain regarding the functional mechanism of IRGB10. First, how does nucleotide binding affect the structure and function of IRGB10? Second, is GTP hydrolysis critical for the oligomerization of IRGB10? Lastly, how can IRGB10 make pores in the pathogen membrane? To answer these questions, in this study, we elucidated two more IRGB10 structures, including the nucleotide-free and GppNHp-bound forms. Additionally, we reveal that GTP hydrolysis is critical for dimer formation and further oligomerization of IRGB10. Based on the current structural, biochemical, and biophysical studies, we provide a model of IRGB10-mediated pore formation on pathogen membranes in a step-by-step manner.
Expression and purification of GDP-bound IRGB10
The purification details of GDP-bound IRGB10 were introduced in a previous study (26).Briefly, the plasmid containing the IRGB10 gene was transformed into Escherichia coli BL21 (DE3) competent cells.Subsequently, the cells were coated onto plates containing Luria-Bertani (LB) agar and incubated overnight at 37°C.A single colony was inoculated into 5-10 mL of LB medium, transferred to 1 L of LB medium, and incubated at 37°C until the optical density (OD) reached ~0.7.Subsequently, 0.5 mM isopropyl b-D-thiogalactopyranoside was added to the medium to induce protein expression, and the cells were incubated overnight at 20°C.After overnight incubation, cells expressing IRGB10 were collected by centrifugation and suspended in 16 mL of lysis buffer (20 mM Tris-HCl pH 8.0, 500 mM NaCl, and 5 mM imidazole).Subsequently, the cells were disrupted by sonication on ice.The cell lysates were centrifuged at 10,000 g for 30 min at 4°C to remove the cell debris, and the supernatant was incubated with nickel-nitrilotriacetic acid (Ni-NTA) affinity resin (Qiagen, Hilden, Germany).After incubation, the supernatant was loaded onto a gravity-flow column (Bio-Rad, Hercules, CA, USA) and the resin was washed with 50 mL of washing buffer (20 mM Tris-HCl pH 8.0, 500 mM NaCl, and 25 mM imidazole) to remove impurities.The target protein was eluted from the resin in the column using elution buffer (20 mM Tris-HCl pH 8.0, 500 mM NaCl, and 250 mM imidazole).The eluted protein was further purified with size-exclusion chromatography (SEC) using SEC buffer (20 mM Tris-HCl pH 8.0, and 150 mM NaCl).The target protein was eluted at around 13 mL, concentrated to 10-12 mg/mL, and stored for structural and biochemical studies.
Expression and purification of nucleotide-free IRGB10
The same IRGB10 expression clone that was used for the expression and purification of GDP-bound IRGB10 was used for the expression and purification of nucleotide-free IRGB10.The expression in E. coli and affinity chromatography was performed using the same method as that used for the purification of GDPbound IRGB10.During the washing step, the resin was washed with 30 mL of the first washing buffer (20 mM Tris-HCl pH 8.0, 500 mM NaCl), before transferring the washed Ni-NTA resin to 50 mL of the second washing buffer (20 mM Tris-HCl pH 8.0, 1.5 M NaCl) and incubating for 30 min at room temperature.Subsequently, the incubated Ni-NTA resin was reloaded into a gravity column and washed again with 30 mL of the third washing buffer (20 mM Tris-HCl pH 8.0, 500 mM NaCl, 25 mM imidazole).The target protein was eluted using 3 mL elution buffer applied onto the column, and the eluted proteins were loaded onto the SEC column.A Superdex 200 Increase 10/300 GL column (GE Healthcare, Chicago, USA), which had been pre-equilibrated with the SEC buffer, was used in the SEC experiment.The absence of nucleotides was checked by UV absorbance (A260/A280), as outlined in a previous study (28).
Multi-angle light scattering
The molar masses of nucleotide-free IRGB10, GppNHp-bound IRGB10, and K81A mutant IRGB10 were determined by multiangle light scattering (MALS).The purified target protein was injected into a Superdex 200 HR 10/30 gel-filtration column (GE Healthcare) that had been pre-equilibrated in buffer containing 20 mM Tris-HCl pH 8.0 and 150 mM NaCl.The chromatography system was coupled to a MALS detector (mini-DAWN TREOS) and a refractive index detector (Optilab DSP) (Wyatt Technology).The data were collected every 0.5 s at a flow rate of 0.4 mL/min and then analyzed using the ASTRA program.
Crystallization and data collection
Crystallization of nucleotide-free IRGB10 was performed at 20°C using the hanging drop vapor diffusion method.Initial crystals were screened using a crystallization screening kit from molecular Dimensions, Hampton Research.The crystals were grown on plates by equilibrating a mixed drop of 1 mL protein solution (8-9 mg/mL protein in SEC buffer) and 1 mL reservoir solution containing 0.1 M Tris-HCl pH 7.0, 2.0 M (NH 4 ) 2 SO 4, and 0.2 M Li 2 SO 4 against 0.3 mL reservoir solution.The crystallization conditions were further optimized by experimenting with various concentrations and pH values of (NH 4 ) 2 SO 4 .The optimized crystals appeared in the presence of 0.1 M Tris-HCl pH 7.2, 1.8 M (NH 4 ) 2 SO 4 , and 0.2 M Li 2 SO 4 .
Crystallization of the GppNHp-bound IRGB10 was performed at 20°C using the hanging drop vapor diffusion method.Just before crystallization, 10 mM GppNHp and 2 mM MgCl 2 were added to 11 mg/mL nucleotide-free IRGB10 protein sample and incubated for 20 min.After incubation, the mixture was screened using a crystallization screening kit.Initial crystals were grown on a reservoir solution containing 0.1 M Tris-HCl pH 8.5, 20% (w/v) polyethylene glycol (PEG), 2,000 monomethyl ether (MME), and 0.2 M trimethylamine n-oxide.The diffraction data sets were collected at the BL-5C beamline of Pohang Accelerator Laboratory (PAL) (Pohang, Republic of Korea).Data processing and scaling were conducted using the HKL2000 package.
Structure determination and analysis
The structures of nucleotide-free and GppNHp-bound IRGB10 were determined by the molecular-replacement (MR) phasing method using the Phaser program in the PHENIX program (29).The previously solved IRGB10 GDP-bound structure (PDB ID: 7C3K) was used as the search model.Model building and refinement were conducted by COOT (30) and Refmac5 (31), respectively.Water molecules were added using the ARP/wARP function in Refmac5.The geometry was inspected using PROCHECK and was found to be acceptable.The quality of the model was confirmed using MolProbity (32).All structure figures were created using PyMOL (33).
Oligomerization measurement
Oligomerization of IRGB10 was assessed using turbidity measurements (34,35). Assembly of the IRGB10 oligomer was determined by measuring the absorbance at 350 nm UV using a Nanophotometer NP80 (IMPLEN, Munich, Germany) at 37°C. Purified proteins were concentrated to 100 µM ~ 500 µM and placed in quartz cuvettes. Only the protein was placed in the cuvettes before starting the measurement. After 500 s, 10 µL of the GTP and MgCl2 mixture was added. After finishing the measurement, the protein samples were centrifuged at 10,000 g for 10 min at 4°C to remove aggregates. The remaining solution was loaded onto a SEC column, which had been pre-equilibrated with buffer containing 20 mM Tris-HCl pH 8.0 and 150 mM NaCl, to determine the dimer form of IRGB10.
Native-PAGE
Self-oligomerization of IRGB10 due to GTPase activity was monitored by native-PAGE using a Phast system (GE Healthcare).Pre-made 8%-25% acrylamide gradient gels (GE Healthcare) were used for this experiment.The shifted bands on the gel were stained with Coomassie Brilliant Blue.Purified nucleotide-free IRGB10 was mixed and incubated with different concentrations of GTP and MgCl 2 mixtures at 37°C for 30 min, before loading the mixture onto native gels.
Circular dichroism measurements
A tentative structural change of IRGB10 caused by GTPase activity was detected using CD measurements.A J-1500 spectropolarimeter at the Korea Basic Science Institute (Osong, South Korea) was used for the CD experiment.The spectra were obtained from 200 to 260 nm at 25°C in a 1-mm pathlength quartz cuvette using a bandwidth of 1.0 nm, a 100 nm/min speed, and a 5-s response time.Three scans were accumulated and averaged.The concentration of nucleotide-free IRGB10 and K81A mutant IRGB10 in the SEC buffer was 0.3-0.4mg/mL.Next, 2 mM GTP and 0.2 mM MgCl 2 mixture was added to the protein to generate a nucleotide-free IRGB10 + GTP sample.The mixture was incubated at 25°C for 30 min just before injecting the sample into the spectropolarimeter.
Accession codes
The atomic coordinates and structure factors of nucleotide-free and GppNHp-bound IRGB10 were deposited in the Protein Data bank under accession numbers 8JQY and 8JQZ, respectively.
Nucleotide-free IRGB10 is a monomer in solution
Many GTPases, including the GTPase domain-containing dynamin family, function appropriately by altering their structure and stoichiometry depending on their GTP/GDP binding state and GTPase activity (36, 37). To reveal the accurate working mechanism of IRGB10 in the process of pathogen membrane disruption, whose function might be dependent on the state of nucleotide binding and hydrolysis capacity, we attempted to solve the structures of the nucleotide-free IRGB10 and IRGB10/GTP complexes. As we observed that endogenous GDP in E. coli was automatically incorporated into IRGB10 during the purification step, we used an additional high-salt washing step during the affinity chromatography step, which has been used previously to remove nucleotides from binding proteins, to obtain nucleotide-free IRGB10 (38). The absence of nucleotides was checked by UV absorbance (A260/A280), as has been outlined previously (28). This experiment showed that the absorbance value of nucleotide-free IRGB10 was 0.67~0.64, while that for GDP-bound IRGB10 was 1.02~1.36, indicating that GDP was washed out during the purification step (Supplementary Figure 1). Next, purified nucleotide-free IRGB10 was applied to SEC, with a GDP-bound IRGB10 sample used for size control. Comparison of the SEC profiles indicated that the main elution peak of nucleotide-free IRGB10 moved to the monomer size position, although the overall generated peaks on the SEC profiles were similar (Figure 1A). The molecular size of nucleotide-free IRGB10 was accurately determined by MALS, which was then used to calculate the absolute molecular mass of the protein particle. The results of MALS showed that the molecular weight of the tentative monomeric peak from nucleotide-free IRGB10 was 53.65 kDa (± 0.7%), whereas the molecular mass of the dimeric GDP-bound IRGB10 was 102.72 kDa (± 1.8%) (Figure 1B). These results indicated that the dimeric GDP-bound IRGB10 became monomeric when IRGB10 lost its GDP. Purified nucleotide-free IRGB10 was successfully crystallized, which allowed the structure of the monomeric nucleotide-free IRGB10 to be solved. The crystallographic data and refinement statistics are summarized in Table 1. Unlike dimeric GDP-bound IRGB10, a single molecule of IRGB10 was detected in the crystallographic asymmetric unit (ASU). The overall structure of monomeric nucleotide-free IRGB10 was almost identical to that of GDP-bound IRGB10, which was composed of a helical domain, formed by the N-terminal and C-terminal regions, and a GTPase domain, which is a typical domain composition of the IRG family (Figures 1C, D). The GTPase domain consisted of six β-sheets (S1-S6) and six α-helices (H4-H9), while the helical domain consisted of eleven α-helices, including H1~H3 from the N-terminus region and H10~H17 from the C-terminus region. The model of nucleotide-free IRGB10 was constructed from residue 16 to residue 406. The LEH residues at the C-terminus, which were from the plasmid construct, were included in the final model. The electron density of the N-terminus residues and several loops, including switches I and II in the GTPase domain, was not visible in the model (Figure 1C). These parts of the structure could not be constructed due to poor electron density. The unconstructed N-terminus and several loops in the GTPase domains around these structures were also observed in structural studies of IRGA6 and dimeric GDP-bound IRGB10; this indicates that the N-terminus loop, containing around 13-15 residues, and several loops, including switches I and II at the GTPase domain, are
extremely flexible and unstructured regions (25).As we removed GDP from IRGB10 during the purification step and found that nucleotide-free IRGB10 became a monomer in solution, using the current structure, we first sought to investigate whether GDP or GTP is in the GTPase domain.The electron density search revealed no traceable electron density for GDP in the typical nucleotide-binding site of the GTPase domain (Figure 1E).
Next, we compared the structure of nucleotide-free IRGB10 to that of GDP-bound IRGB10 (PDB ID: 7C3K) to analyze any structural changes that might occur upon the loss of nucleotide in IRGB10. Pair-wise structural alignments between nucleotide-free and GDP-bound IRGB10 showed that the overall structures were similar to each other, with an RMSD between the two structures of 1.3 Å (Figure 1F). However, close-up analysis showed that the H2 and H3 helices, formed by the N-terminal part of IRGB10, were displaced from the positions of the H2 and H3 regions of GDP-bound IRGB10 (Figure 1G). In contrast, the last part of the helical domain, which is built by the C-terminal part of IRGB10, was identical to that in GDP-bound IRGB10, indicating that nucleotide binding or GTP hydrolysis causes only a slight structural alteration of IRGB10. Although the GTPase domains of the two structures were almost identical, the positions of several loops were not. The structures of switches I and II, both of which are critical for GTPase activity, were unconstructed in nucleotide-free IRGB10, while only switch I was unconstructed in GDP-bound IRGB10 (Figure 1H). Interestingly, the P-loop, which is critical for nucleotide binding and GTP hydrolysis, was well constructed in both structures, indicating that the formation and proper positioning of the P-loop are independent of nucleotide binding, contrary to what we argued in a previous structural study of GDP-bound IRGB10 (26).
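The superposition-and-RMSD comparison described above can be reproduced in outline with standard structural-biology tooling. The sketch below uses Biopython's Bio.PDB module; the file names, chain identifier, and the naive equal-length Cα pairing are illustrative assumptions (a real comparison would pair structurally equivalent residues, for example after sequence alignment), so this is a minimal sketch rather than the exact procedure used in the study.

```python
# Minimal sketch: superpose two IRGB10 models on Calpha atoms and report the RMSD.
# File names and chain ID are hypothetical placeholders.
from Bio.PDB import PDBParser, Superimposer

parser = PDBParser(QUIET=True)
ref = parser.get_structure("nt_free", "irgb10_nucleotide_free.pdb")   # reference model
mob = parser.get_structure("gdp_bound", "irgb10_gdp_bound_7c3k.pdb")  # model to be moved

def ca_atoms(structure, chain_id="A"):
    """Collect Calpha atoms from one chain, skipping residues without a CA atom."""
    return [res["CA"] for res in structure[0][chain_id] if "CA" in res]

ref_ca, mob_ca = ca_atoms(ref), ca_atoms(mob)
n = min(len(ref_ca), len(mob_ca))  # crude trim; assumes matching residue order

sup = Superimposer()
sup.set_atoms(ref_ca[:n], mob_ca[:n])   # least-squares fit of mobile onto reference
sup.apply(mob.get_atoms())              # apply rotation/translation to the mobile model
print(f"Calpha RMSD after superposition: {sup.rms:.2f} A")
```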
GTP hydrolysis causes dimerization and further oligomerization of IRGB10
Indeed, as we observed that nucleotide binding affects the stoichiometry of IRGB10, we also investigated the effect of GTPase activity and GTP binding on the oligomeric and structural changes of IRGB10. To accomplish this, we first performed a turbidity assay that has been used previously to analyze the oligomerization of IRGA6 (34). The oligomerization of IRGB10 was detected by monitoring UV absorbance at 350 nm. After adding the GTP/MgCl2 mixture to GDP-bound IRGB10, no increase in UV absorbance was detected for 1200 s (Figure 2A). However, when the GTP/MgCl2 mixture was added to nucleotide-free IRGB10, a considerable increase in UV absorbance was detected 600 s after GTP addition (Figure 2B). This UV absorbance was not detected when a non-hydrolyzable GTP analog (GppNHp) was supplied to nucleotide-free IRGB10 (Supplementary Figure 2). The results of these turbidity assays indicated that GTP hydrolysis caused the oligomerization of IRGB10. Moreover, visible IRGB10 oligomeric particles were detected in the tube containing nucleotide-free IRGB10 following GTP addition. After removing those oligomeric particles by centrifugation, the solution was loaded onto SEC to determine the remnants in the solution. As GTPases hydrolyze GTP to GDP, we speculated that the dimeric form of GDP-bound IRGB10 would be observed if the GDP product was incorporated into IRGB10 after hydrolysis. As expected, the SEC profile showed that the remaining IRGB10 after GTP hydrolysis was of dimeric size and eluted around the 13-14 position where dimeric GDP-bound IRGB10 was eluted (Figure 2C). When the non-hydrolyzable GTP analog GppNHp was added to nucleotide-free IRGB10, no oligomeric particles were observed in the tube and the SEC profile showed a monomeric size (Figure 2C), indicating that GTP hydrolysis is critical for the dimerization and further oligomerization of IRGB10. The effect of GDP addition on nucleotide-free IRGB10 was also assessed by performing the same turbidity assay (Supplementary Figure 3). As shown in Supplementary Figure 3, GDP addition did not produce an oligomeric peak, although slight absorbance was detected around 200 s after the addition of GDP/MgCl2. This indicated that GTP hydrolysis is a critical step for IRGB10 oligomerization. The GTP hydrolysis-mediated oligomerization of IRGB10 was confirmed by native PAGE, another assay for detecting oligomerization. As shown in Figure 2D, a newly formed oligomeric band was detected upon the addition of GTP, indicating that GTP addition caused IRGB10 oligomerization, consistent with the turbidity assay. Finally, we attempted to confirm whether GTP hydrolysis of IRGB10 is essential for dimer formation and further oligomer formation by constructing mutants that cannot hydrolyze GTP. To perform this experiment, K81, a catalytically important residue identified in a previous study (19), was mutated to alanine to produce the K81A mutant, which is the GTP-locked form of IRGB10. Using this GTP-locked form of IRGB10, we performed a turbidity assay and SEC-MALS. Unlike wild-type IRGB10, K81A did not produce visible oligomeric particles following the addition of GTP. Additionally, K81A did not produce a dimeric peak on the SEC profile when GTP was added (Figure 2E). All three SEC samples, including nucleotide-free K81A, K81A with GTP, and K81A with GppNHp, eluted at the monomer position in SEC-MALS (Figures 2E, F). In addition, K81A did not produce a dimeric peak in the presence of GDP (Supplementary Figure 4). These additional experiments confirmed
that GTP hydrolysis is essential for dimer formation and further oligomerization of IRGB10, which may be critical for pathogen membrane disruption. Finally, we elucidated the role of the dimer protein-protein interaction (PPI) of IRGB10 in its oligomerization. To evaluate this, we used the dimer PPI-disrupting mutant D185R, which was identified in our previous study as a PPI-interfering mutant. The turbidity assay showed that the nucleotide-free D185R mutant failed to produce an oligomeric peak after the addition of GTP/MgCl2 (Supplementary Figure 5). Based on this experiment, we concluded that dimerization is the seed for further oligomerization of IRGB10.
GppNHp-bound IRGB10 is a monomer in solution
The mimetic structure of the GTP-bound form of IRGB10 was also solved using GppNHp, a non-hydrolyzable GTP analog. The overall structure and the numbers of α-helices and β-sheets were similar to those of the previously described nucleotide-free and GDP-bound IRGB10 structures. We detected a clear electron density map corresponding to GppNHp at the nucleotide-binding site in the GTPase domain (Figure 3A). The P-loop was stably fixed in GppNHp-bound IRGB10, while switches I and II remained unstructured (Figure 3A). Additionally, the crystallographic asymmetric unit comprised two identical IRGB10 molecules, molecules A and B (Figure 3B). As the stoichiometric and structural changes of the IRG family of GTPases are critical for understanding their working mechanism, we next analyzed the stoichiometry of GppNHp-bound IRGB10 in solution using MALS. The experimentally calculated molecular weight of GppNHp-bound IRGB10 was 48.93 kDa (± 3.941%), indicating that GppNHp-bound IRGB10 is a monomer in solution (Figure 3C) and, therefore, that GTP-bound IRGB10 remains a monomer in the absence of GTP hydrolysis.
To comprehend any structural change caused by nucleotide binding and GTPase activity, we next compared the GppNHp-bound structure with the nucleotide-free (Figure 3D) and GDP-bound (Figure 3E) IRGB10 structures by structural superposition analysis. The results of this structural comparison indicated that the overall GppNHp-bound structure is almost identical to that of the nucleotide-free and GDP-bound forms of IRGB10, with RMSDs of 0.8 Å and 1.2 Å, respectively. However, upon closer examination of the helical domain, the locations of several helices were not identical. Indeed, H2 and H3 of the GppNHp-bound form were tilted by approximately 5° compared to those of the nucleotide-free form of IRGB10 (Figure 3F). Moreover, H13 and H14 of the GppNHp-bound form were tilted by approximately 8° compared to those of the GDP-bound form of IRGB10 (Figure 3G). The largest structural alteration was detected in the nucleotide binding site (Figure 3H). Although no structural changes were detected when the GppNHp-bound structure was compared to the nucleotide-free form (Figure 3I), distinct movements of H4 and the H4-connecting loop were detected when the GppNHp-bound structure was superposed with the GDP-bound form (Figure 3J). Moreover, by conducting a structural comparison of the GTPase domain of GppNHp-bound IRGB10 with the GDP-bound form, we found that the loops of the GppNHp-bound form were very flexible and unfixed, and the position of H4, connected to the switch I loop in the GDP-bound form, was different from that of the GppNHp-bound form (Figures 3H, J). The P-loop structures, which are important for nucleotide binding, were identical in all three structures. Finally, we evaluated far-UV circular dichroism (CD) spectra to examine possible structural changes of IRGB10 during GTP hydrolysis. The results of this experiment showed that the spectrum patterns were different when nucleotide-free IRGB10 was treated with GTP (Figure 3K). The nucleotide-free IRGB10-alone sample produced a typical CD spectrum pattern of α-helical proteins, exhibiting two pronounced minima at 208 nm and 222 nm and a maximum at 200 nm. This pattern was not observed when GTP was provided. Moreover, these changes in the CD pattern were not observed when the GTPase activity-deficient K81A mutant was treated with GTP. In addition, the addition of GDP or GppNHp also produced the typical CD spectrum pattern of wild-type IRGB10 (Supplementary Figure 6). These CD experiments indicate that GTP hydrolysis might lead to structural changes in IRGB10.
Discussion
Given the importance of the field of study and understanding the mechanism underlying membrane disruption, several structures of the IRG family, including IRGA6, IRGB6, and IRGB10, have been revealed so far.Despite this, the functionally important filament-like structures of the IRG family, which are formed for membrane disruption, remain to be elucidated.To better understand the working mechanism of the IRG family, we initially solved the structure of the dimeric GDP-bound form of IRGB10 (26).Although GDP was not included in the protein sample preparation steps, endogenous bacterial GDP was incorporated in the GTPase domain of IRGB10.As the IRG family has a higher affinity for GDP than GTP, the natural production of GDP-bound IRGB10 was not extraordinary (25, 39).We established a method for purification of the nucleotide-free form of IRGB10 and revealed the structures of the nucleotide-free and GppNHp-bound forms of IRGB10 to establish the structural basis of membrane pore formation.Our results showed that IRGB10 existed as a monomer in the nucleotide-free state and became a dimeric form through GTP hydrolysis.During GTPase activation, the GTPase domain was flexible, and several helices underwent structural changes.Following GTP addition, visible IRGB10 oligomeric particles were detected in the tube containing nucleotide-free IRGB10, which may be aggregates that can be formed due to the absence of membrane.After GTP hydrolysis, IRGB10 is supposed to work on the membrane; however, due to the absence of a membrane or binding partner such as GBP5, oligomeric IRGB10 became aggregated in solution.After removing all of the higher oligomeric particles (or aggregates), the remaining IRGB10 was detected as a dimer in solution, suggesting that the dimeric form is the main functional building block used by the IRG family for membrane disruption of pathogens.
Structural comparison of the three structures of IRGB10, namely the nucleotide-free, GDP-bound, and GppNHp-bound forms, indicated that the structure of the monomeric nucleotide-free form was almost identical to that of the monomeric GppNHp-bound IRGB10. However, the structure of IRGB10 changed if it experienced GTP hydrolysis. We expected huge structural changes in the helical domain of IRGB10; however, only limited structural changes were observed at both the helical domain and the GTPase domain in our study. In a previous study, although the bacterial dynamin-like protein (BDLP), a member of the IRG-like GTPase dynamin family in bacteria, had a closed conformation in the crystal structures of the nucleotide-free and GDP-bound states (36), this dynamin-like GTPase underwent huge structural changes at the helical domain when GTP was hydrolyzed. This structural change, induced by forming the extended helical domain, conferred on BDLP the capability to wrap the membrane by further oligomerization in the presence of a lipid membrane, as evidenced by cryo-EM structural analysis (40). Assuming that IRGB10 works in a manner similar to that of BDLP, GTP hydrolysis-mediated power generation, structural changes to the extended helical domain using the generated power, and further oligomerization-mediated membrane disruption may occur, which may be achieved only in the presence of a phospholipid membrane. The possibility of a huge structural change of IRGB10 during GTP hydrolysis was indicated by our CD experiments. Although a dramatic change in the CD profile was detected when IRGB10 was incubated with GTP, this change might not be due to structural changes but rather to the oligomerization of IRGB10 induced by GTP addition. This should be investigated further in the near future. Taken together, based on the results of our structural, biochemical, and biophysical studies, we propose a model of IRGB10-mediated pathogen membrane pore formation (Figure 4). Initially, IRGB10 without nucleotide forms an inactive monomeric conformation. Once GTP is loaded into the GTPase domain of IRGB10, a minimal structural change, especially at the helical domain, occurs to prepare IRGB10 for action. During the GTP-hydrolysis step, IRGB10 may undergo huge structural changes, which may be critical for the membrane association of IRGB10, dimerization, and further oligomerization for pore formation (Figure 4). As we cannot capture the moment at which the structural changes of IRGB10 are induced, the types of structural changes that occur during GTP hydrolysis remain an open question.
FIGURE 1 Structure of nucleotide-free IRGB10. (A) Profiles of the size-exclusion chromatography (SEC) of GDP-bound IRGB10 (black line) and nucleotide-free IRGB10 (red line). The shifted peak is indicated by the black arrow. (B) Multi-angle light scattering (MALS) profiles derived from the SEC peak of nucleotide-free IRGB10 (left panel) and GDP-bound IRGB10 (right panel). The red line indicates the experimental molecular mass. (C) Overall structure of nucleotide-free IRGB10. The rainbow-colored cartoon representation of monomeric nucleotide-free IRGB10 is shown. The chain from the N- to C-terminus is colored blue to red. Helices and sheets are labeled with H and S, respectively. The missing N-terminal loop is indicated by the blue dotted line. (D) The domain boundary and overall structure of IRGB10. The relative positions of the helical domain and the GTPase domain are shown in the bar diagram at the top. (E) Close-up view of the nucleotide binding pocket in the GTPase domain of IRGB10. The 2Fo-Fc electron density map contoured at the 1σ level is indicated by the blue mesh. (F) Structural comparison of nucleotide-free IRGB10 (mixed green and yellow) with GDP-bound IRGB10 (magenta) by structural superposition. (G) Close-up view of the helical domains from panel (F). The structurally misaligned region is indicated by the black arrow. (H) Close-up view of the GTPase domain from panel (F). Missing, unconstructed loops in the model are indicated by dotted lines.
FIGURE 2 Dimerization and further oligomerization of IRGB10 by GTP hydrolysis. (A, B) Assembly of the IRGB10 oligomer as measured by turbidity changes. Turbidity changes of solutions containing nucleotide-free IRGB10 were measured upon addition of water as a control (A) and GTP/MgCl2 (B). (C) SEC profiles of nucleotide-free (Nt-free) IRGB10 (black line), GTP-added IRGB10 (red line), GDP-added IRGB10 (yellow line), and GppNHp-added IRGB10 (blue line). (D) Native-PAGE of IRGB10 incubated with various concentrations of GTP in the presence or absence of MgCl2. The concentrations of GTP incubated with IRGB10 are indicated. (E) SEC profiles of the K81A mutant IRGB10. (F) MALS profile derived from the SEC peak of the K81A mutant IRGB10. The red line indicates the experimental molecular mass.
FIGURE 3 Structure of GppNHp-bound IRGB10. (A) Overall structure of GppNHp-bound IRGB10. A close-up view of the nucleotide binding pocket in the GTPase domain of IRGB10 is shown in the right panel. The missing, unconstructed switch I and II loops are indicated by red dotted lines. The 2Fo-Fc electron density map contoured at the 1σ level around GppNHp is indicated by blue mesh. (B) A cartoon representation of the two GppNHp-bound IRGB10 molecules present in an asymmetric unit. (C) Multi-angle light scattering (MALS) profiles derived from the SEC peak of GppNHp-bound IRGB10. The red line indicates the experimental molecular mass. (D) Structural comparison of GppNHp-bound IRGB10 (metal blue) with nucleotide-free IRGB10 (mixed green and yellow) by structural superposition. (E) Structural comparison of GppNHp-bound IRGB10 (metal blue) with GDP-bound IRGB10 (magenta) by structural superposition. Two structurally misaligned regions are indicated by black circles. (F) Close-up view of the helical domains from panel (D). The structurally misaligned region is indicated by a black arrow. (G) Close-up view of the helical domains from panel (E). The structurally misaligned region is indicated by a black arrow. (H) Structural comparison of the GTPase domains of GppNHp-bound IRGB10 (metal blue) with GDP-bound IRGB10 (magenta) and nucleotide-free IRGB10 (mixed green and yellow) by structural superposition. (I) Close-up view of the nucleotide pocket from panel (H) showing GppNHp-bound IRGB10 and nucleotide-free IRGB10. (J) Close-up view of the nucleotide pocket from panel (H) showing GppNHp-bound IRGB10 and GDP-bound IRGB10. The structurally misaligned region and H4 and the H4-connecting loop are indicated. (K) Circular dichroism spectra of nucleotide-free (Nt-free) IRGB10 (black line), Nt-free IRGB10 provided with GTP (red line), and Nt-free K81A mutant IRGB10 provided with GTP (blue line).
FIGURE 4 Putative model of nucleotide- and hydrolysis-mediated membrane pore formation by IRGB10. The blue lines indicate the N-terminus loops where myristoylation occurs.
TABLE 1
Data collection and refinement statistics. | 2023-09-01T15:06:18.582Z | 2023-08-29T00:00:00.000 | {
"year": 2023,
"sha1": "66660a7040188775ceb49d75d19a75f96d2dcf1c",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2023.1254415/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dc37b28ea0141d2d3ec1739a6fa41bd76cd73586",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
133974172 | pes2o/s2orc | v3-fos-license | Effects of Increasing Aridity on Ambient Dust and Public Health in the U.S. Southwest Under Climate Change
Abstract The U.S. Southwest is projected to experience increasing aridity due to climate change. We quantify the resulting impacts on ambient dust levels and public health using methods consistent with the Environmental Protection Agency's Climate Change Impacts and Risk Analysis framework. We first demonstrate that U.S. Southwest fine (PM2.5) and coarse (PM2.5‐10) dust levels are strongly sensitive to variability in the 2‐month Standardized Precipitation‐Evapotranspiration Index across southwestern North America. We then estimate potential changes in dust levels through 2099 by applying the observed sensitivities to downscaled meteorological output projected by six climate models following an intermediate (Representative Concentration Pathway 4.5, RCP4.5) and a high (RCP8.5) greenhouse gas concentration scenario. By 2080–2099 under RCP8.5 relative to 1986–2005 in the U.S. Southwest: (1) Fine dust levels could increase by 57%, and fine dust‐attributable all‐cause mortality and hospitalizations could increase by 230% and 360%, respectively; (2) coarse dust levels could increase by 38%, and coarse dust‐attributable cardiovascular mortality and asthma emergency department visits could increase by 210% and 88%, respectively; (3) climate‐driven changes in dust concentrations can account for 34–47% of these health impacts, with the rest due to increases in population and baseline incidence rates; and (4) economic damages of the health impacts could total $47 billion per year additional to the 1986–2005 value of $13 billion per year. Compared to national‐scale climate impacts projected for other U.S. sectors using the Climate Change Impacts and Risk Analysis framework, dust‐related mortality ranks fourth behind extreme temperature‐related mortality, labor productivity decline, and coastal property loss.
Besides model performance, our selection criteria included model independence and broader usage by the scientific community. The CMIP5 models vary in their ability to resolve certain climate system processes, including those most relevant to the United States. In addition, while over 60 different CMIP5 models are available, a number of the models share computer code or are parametrized in similar ways. Recent studies [Sanderson et al. 2015a, 2015b; McSweeney et al. 2015] provide analysis of both model skill at the global scale and independence of underlying code. Ultimately, we apply equal weight to each of the results derived using the six models, but we also provide the model-specific results to facilitate analysts who wish to employ other weighting criteria (using, for example, Eyring et al. 2019).
With insufficient resources to conduct a country-specific weighting analysis based on skill and independence, a qualitative consideration of these metrics is still valuable. For purposes of this project, the six GCMs selected were developed by different, well-known modeling groups whose models are frequently used in the literature. In addition, three of the GCMs (CCSM4, GISS-E2-R, and GFDL-CM3) are developed by domestically-based modeling groups (NCAR, NASA, and GFDL, respectively). There is some expectation that modeling teams may pay closer attention to the regional climate in the region where the team is based, and that therefore domestically-based modeling groups might have comparatively greater skill for purposes of impacts analysis in the United States.
S3. Sensitivity analysis of the value of a statistical life (VSL) and total valuation estimates to alternative economic growth and income elasticity inputs
As outlined in the main text, we estimate the economic value of projected health burdens based on federal guidance and valuation functions included in the BenMAP-CE model. For mortality endpoints, we use a base VSL of $7.9 million ($2008) based on 1990 incomes. To create a VSL in $2015 and based on 2015 incomes, the standard value was adjusted for inflation and income growth. The resulting value, $10.0 million for 2015 ($2015), was adjusted to future years, and to our "current climate" base year of 2010, to reflect the impact of income growth on individual willingness-to-pay to reduce mortality risk over time, by assuming an elasticity of VSL to GDP per capita of 0.4. The income elasticity is based on empirical evidence indicating that VSL grows by about 0.4% for each 1% increase in GDP per capita; the specific value provided in BenMAP-CE (0.4) reflects a literature review completed in the mid-1990s.
Recent literature provides a basis for projecting GDP per capita through our full simulation period (through 2100), and for potentially updating the income elasticity. A recent literature review suggests that income elasticity values as high as 1.0 might be more consistent with emerging literature on the topic (see, for example, Robinson et al. 2018); the implication is that VSL would grow proportionately with GDP per capita.
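As a rough illustration of the adjustment described above, the sketch below applies the common power-law form VSL_t = VSL_0 × (GDP-per-capita ratio)^elasticity; the GDP-per-capita figures are hypothetical placeholders, and the exact adjustment implemented in BenMAP-CE may differ in detail.

```python
# Minimal sketch of income-growth adjustment of the value of a statistical life (VSL),
# assuming the common power-law form; inputs below are illustrative, not the study's data.
def project_vsl(vsl_base, gdp_pc_base, gdp_pc_future, income_elasticity=0.4):
    return vsl_base * (gdp_pc_future / gdp_pc_base) ** income_elasticity

vsl_2015 = 10.0e6        # $10.0 million (2015$, 2015 incomes), from the text
gdp_pc_base = 56_000     # hypothetical GDP per capita in the base year
gdp_pc_2090 = 130_000    # hypothetical projected GDP per capita in 2090

central = project_vsl(vsl_2015, gdp_pc_base, gdp_pc_2090, income_elasticity=0.4)
sensitivity = project_vsl(vsl_2015, gdp_pc_base, gdp_pc_2090, income_elasticity=1.0)
print(f"Central (elasticity 0.4): ${central/1e6:.1f}M; sensitivity (elasticity 1.0): ${sensitivity/1e6:.1f}M")
```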
The results of these sensitivity tests are presented in Table S7. The first row shows the VSL used in the estimates presented in the main text. Using a value of 1.0 for income elasticity yields higher estimates, ranging from 16% higher in 2030 to 87% higher in 2090 than the VSL estimates used in the main text.
Figure caption: 1986−2005 regional and seasonal mean SPEI02.
Figure caption: As in Figure S4 but for coarse dust. The health endpoints considered are cardiovascular mortality (m.C) and asthma ED visits (aed).
Table S1. Drought classification based on the Standardized Precipitation Evapotranspiration Index (SPEI). Sources: Dai et al., 2011; Liu et al., 2014; Törnros and Menzel, 2014.
SPEI values and drought/flood classification:
SPEI ≤ −2: Extremely dry
−2 < SPEI ≤ −1.5: Severely dry
−1.5 < SPEI ≤ −1: Moderately dry
−1 < SPEI ≤ 0: Mild drought
0 < SPEI ≤ 1: Near normal wet
1 < SPEI ≤ 1.5: Moderately wet
1.5 < SPEI ≤ 2: Very wet
SPEI > 2: Extremely wet
Table S2. Reference and projected population and total annual incidence. The population is the total projected population across the region for each health endpoint's associated age range. Total incidence is calculated using the projected county-level incidence rates multiplied by projected population disaggregated into 5-year age bins. Values are expressed in thousands and rounded to two significant figures. *Cardiovascular mortality endpoint approximated from cardiopulmonary incidence (note that when rounded to 2 s.f., the incidence appears to be the same for the 30-99 and 0-99 year age groups).
Table S5. Percent of incidence attributable to each age group by health endpoint and 20-year era. The percentages are calculated by dividing the estimated incidence associated with each age group by the total annual incidence found in Table S2.
Table S6. Annual economic damages (millions USD 2015$) associated with the health burdens in Table 2. The historical reference value is estimated using 2010 population and baseline incidence rates combined with 1988-2005 dust concentrations. Values shown for future scenarios at 20-year intervals are the excess cost relative to the reference value. "AQ-constant" projections are due to the effects of changing population and baseline incidence rates. RCP projections are due to the combined effects of changing dust concentrations, population, and baseline incidence rates. For each health endpoint and 20-year era, the total cost is equal to the sum of the reference and excess costs projected for each future scenario. Values in parentheses represent the range of variability in the CMIP5 model ensemble for a given RCP scenario. Values are rounded to two significant figures.
Pollutant
Health endpoint Age (years) | 2019-04-27T13:13:40.761Z | 2019-05-01T00:00:00.000 | {
"year": 2019,
"sha1": "eeca575ad3a8c63226b15e3c2ad7090e4d25bdfe",
"oa_license": "CCBYNCND",
"oa_url": "https://agupubs.onlinelibrary.wiley.com/doi/pdfdirect/10.1029/2019GH000187",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ba3a59873436e8baf079013d1359e728a08edce1",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Environmental Science"
]
} |
236290829 | pes2o/s2orc | v3-fos-license | Conjunctival sac bacterial culture of patients using levofloxacin eye drops before cataract surgery: a real-world, retrospective study
Background The use of antibiotics preoperatively is effective in decreasing the incidence of ocular bacterial infections but may lead to high resistance rates, especially in patients with multiple clinical risk factors. This study systematically analyzed real-world data (RWD) of patients to reveal the association between clinical factors and conjunctival sac bacterial load and to offer prophylaxis suggestions. Methods We retrieved RWD of patients using levofloxacin eye drops (5 mL: 24.4 mg, 4 times a day for 3 days) preoperatively. Retrieved data included information on the conjunctival sac bacterial culture, sex, presence of hypertension and diabetes mellitus (DM), and history of hospital-based surgeries. Data were analyzed using SPSS 24.0. Results RWD of 15,415 cases (patients) were retrieved. Among these patients, 5,866 (38.1%) were males and 9,549 (61.9%) females. 5,960 (38.7%) patients had a history of hypertension, and 3,493 (22.7%) patients had a history of DM. 7,555 (49.0%) patients had a history of hospital-based operations. There were 274 (1.8%) positive bacterial cultures. Male patients with hypertension and DM may be at increased risk of having positive bacterial cultures (P < 0.05). Staphylococcus epidermidis (n = 56, 20.4%), Kocuria rosea (n = 37, 13.5%), and Micrococcus luteus (n = 32, 11.7%) were the top 3 isolated strains. Most bacterial strains were resistant to various antibiotics except rifampin, and 82.5% (33 of 40 isolates) of Staphylococcus epidermidis isolates had multidrug antibiotic resistance. The numbers of culture-positive Staphylococcus epidermidis isolates in the male group and non-DM group were greater than those in the female and DM groups, respectively. Micrococcus luteus (n = 11, 8.8%) was found less frequently in the non-hypertension group than in the hypertension group. Conclusion Male sex and the presence of hypertension and DM are risk factors for greater conjunctival sac bacterial loads. We offer a prophylactic suggestion based on the combined use of levofloxacin and rifampin. However, this approach may aggravate the risk of multidrug resistance.
(10%), and Streptococcus spp. (9%) are the major pathogens responsible for endophthalmitis cases after cataract surgery. Without effective preoperative examination and prevention, the bacteria mentioned above may lead to endophthalmitis, a devastating eye infection which can cause irreversible blindness in the infected eye within hours or days of symptom onset [5].
The use of antibiotics is an effective strategy to significantly decrease the incidence of ocular bacterial infections (positive swabs). Among all kinds of antibiotics, levofloxacin (which belongs to quinolones and fluoroquinolones) has been proved to have well-established efficacy and tolerability in the treatment of external ocular infections caused by both Gram-positive and Gram-negative bacteria [6][7][8][9][10][11][12][13]. However, with the widespread use of antibiotics, the resistance rate of bacteria towards antibiotics (including levofloxacin) has gradually increased, which has become a severe threat to public health [14][15][16][17][18]. It becomes even worse with a concomitant decline in the development of novel antibiotics and the emergence of multidrug-resistant strains [19,20]. Moreover, patient-related risk factors such as older age, sex (male), the presence of hypertension and/or diabetes mellitus (DM), and a history of hospital-based surgery may be associated with increasing bacterial load and the emergence of multidrug-resistant bacteria [1]. However, the species and characteristics of multidrug-resistant bacteria in human conjunctival sac have not been systematically summarized.
According to The Food and Drug Administration (FDA), real-world data (RWD) is defined as all data relating to patient health status and/or the delivery of health care, routinely collected from a variety of sources. Moreover, real-world evidence (RWE) is the clinical evidence regarding the usage and potential benefits or risks of a medical product, derived from the analysis of RWD [21]. By studying RWE, clinicians can optimize currently available therapies or develop new prophylactic strategies [22]. It provides support for us to further study the characteristics of levofloxacin resistant bacteria in conjunctival sac.
In the current study, we searched the related literature and reviewed the results of conjunctival sac bacterial cultures of patients that had used Cravit (levofloxacin eye drops, Santen Pharmaceutical Co., Ltd) for antibiotic prophylactic therapy before cataract surgery. With the exception of data from the literature, all RWD were collected in Peking University Third Hospital from 2016 to 2019. By calculating the positive rate, analyzing positive strains and their drug sensitivity, as well as classifying results by clinical factors that may affect the positive rate of cultures, we revealed the association between different clinical factors and the conjunctival sac bacterial load. Further, by analyzing the results we confirmed the necessity for antibiotic use before cataract surgeries and offered prophylaxis suggestions and references.
Ethical approval and consent to participate
All participants provided written informed consent, consistent with the tenets of the Declaration of Helsinki. Peking University Third Hospital Medical Ethics Committee approved all procedures carried out in this study, including the procedure of accessing the clinical/personal patient data used in our research (approval number: M2019432).
Data screening and selection
We included all medical records and related literature data and obtained RWD including basic patient information and conjunctival sac bacterial culture information of patients that had used Cravit (levofloxacin eye drops 5 mL: 24.4 mg, Santen Pharmaceutical Co., Ltd) for antibiotic prophylactic therapy before cataract surgeries. Literature on prophylactic therapy using other antibiotics or povidone-iodine (PVI) was also reviewed and summarized for comparison. For medical records, we restricted the inclusion criteria to patients with cataracts that had visited Peking University Third Hospital from 2016 to 2019. For published literature, the keywords used were "antibiotics", "prophylactic therapy", and "cataract surgery". We restricted the inclusion criteria to observational cohort studies only. The timing of publication was restricted to the last 10 years (2009–2019). Any study published prior to the last 10 years was considered outdated and was excluded. Moreover, studies that lacked information regarding age, sex, and previous medical history of patients and were not focused on the conjunctival sac bacterial culture of patients undergoing antibiotic prophylactic therapy were excluded. Publications were also excluded if the concentration of levofloxacin used was different from that in the current study. All relevant literature not included was summarized and compared with our study on the clinical effects of antibiotics and bacterial resistance to them.
Data extraction
After screening medical records and publications, we extracted detailed data including the preoperative conjunctival sac bacterial culture of patients using Cravit, patient sex, presence of hypertension and/or DM, and history of hospital-based surgeries. All conjunctival sac bacterial culture samples were collected and isolated only from patients who had come for cataract surgery and had used Cravit preoperatively, 4 times a day for 3 days, from 2016 to 2019. For patients who underwent bilateral operations, we only conducted the cataract surgery on one eye at a time. The interval between the left eye operation and right eye operation of each patient was more than one month. The medical records of the first-eye surgeries were retrieved. Patients were asked to only use topical antibiotics on the eyes that were to be operated on. The isolates were all collected from the conjunctival sac of patients just before the operation and were identified using the Vitek-2 automated systems (bioMerieux, France). Antimicrobial susceptibility testing (AST) for tobramycin, ceftriaxone, erythromycin, vancomycin, levofloxacin, ofloxacin, and rifampin was performed using the Kirby-Bauer (K-B) disk diffusion method according to the Clinical and Laboratory Standards Institute (CLSI) guideline.
Data analysis
Data on basic patient information, results of conjunctival sac bacterial culture, and antimicrobial susceptibility testing were collected and recorded using Excel (Microsoft Office 2019; Microsoft Corporation, Redmond, WA, USA). All statistical analyses were conducted using SPSS 24.0 (International Business Machines Corp.). Considering the data frame, distribution, and sample sizes of our results, multiple statistical approaches were applied in our studies. Comparison of the incidence of each clinical factor between culture-positive and culture-negative groups was performed using the chi-square test. Binary logistic regression analysis was also used to explore the association between clinical factors and the positive culture of conjunctival sac bacteria. The Kruskal-Wallis H test was conducted to analyze the results of the K-B test. Due to the small sample sizes for some strains, the Kruskal-Wallis H test was only conducted on strains with 6 or more isolated samples with K-B test results. Culture-positive patients were divided into two groups according to the clinical factors that were associated with culture results. The presence of various bacteria and their AST results were compared using the chi-square test and the Mann-Whitney U test. Notably, because multiple factors were examined without a clear pre-defined hypothesis, multiple-testing correction was incorporated into the final P value threshold (P < 0.001). Statistical significance of other tests was defined as P < 0.05.
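A minimal sketch of this analysis pipeline in Python (rather than SPSS) is shown below; the data files and column names are hypothetical placeholders, and the code is intended only to illustrate the tests named above (chi-square, binary logistic regression, Kruskal-Wallis H, and Mann-Whitney U), not the study's actual scripts.

```python
# Sketch of the statistical workflow; file and column names are hypothetical placeholders.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

df = pd.read_csv("conjunctival_cultures.csv")  # one row per patient, 0/1 indicator columns

# Chi-square test: culture positivity vs. one clinical factor (e.g., sex)
chi2, p_chi2, dof, _ = stats.chi2_contingency(pd.crosstab(df["male"], df["culture_positive"]))

# Binary logistic regression on the four clinical factors; exponentiated coefficients are odds ratios
X = sm.add_constant(df[["male", "hypertension", "dm", "prior_surgery"]])
logit_fit = sm.Logit(df["culture_positive"], X).fit(disp=False)
odds_ratios = np.exp(logit_fit.params)

# Kruskal-Wallis H test across K-B zone diameters grouped by strain (>= 6 isolates per strain)
kb = pd.read_csv("kb_zone_diameters.csv")  # columns: strain, antibiotic, zone_mm, hypertension (hypothetical)
groups = [g["zone_mm"].values for _, g in kb.groupby("strain") if len(g) >= 6]
h_stat, p_kw = stats.kruskal(*groups)

# Mann-Whitney U test comparing zone diameters between two patient subgroups
u_stat, p_mw = stats.mannwhitneyu(kb.loc[kb["hypertension"] == 1, "zone_mm"],
                                  kb.loc[kb["hypertension"] == 0, "zone_mm"])
```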
Patients and clinical factors
RWD of 15,415 cases, including conjunctival sac bacterial cultures, were retrieved. Because the concentration of levofloxacin used in the published literature was different from that in the medical records of our study, and there was a lack of information regarding age, sex, and previous medical history of patients, all RWD were retrieved from the medical records of patients from Peking University Third Hospital. Clinical factors that may affect conjunctival sac bacterial load of patients before cataract surgery are shown in Table 1. Among the total cases, there were 5,866 (38.1%) males and 9,549 (61.9%) females. There were 5,960 (38.7%) patients with a history of hypertension and 3,493 (22.7%) patients with a history of DM. The number of patients with a history of one or more hospital-based operations was 7,555 (49.0%). There were 169 (1.1%) patients who had undergone bilateral operations and only the medical records of the first-eye surgeries were retrieved.
There were 274 culture samples that were positive, suggesting that these patients had a greater conjunctival sac bacterial load. The positive rate was 1.8%. Among them, there were 37 samples that eventually led to postoperative endophthalmitis (0.2% of all samples). Male patients (n = 137, 2.3%) and patients with a history of hypertension (n = 149, 2.5%) or DM (n = 88, 2.5%) were at an increased risk of having positive bacterial cultures (P < 0.05), but the history of hospital-based surgeries may have had no influence (P > 0.05). In addition, the results of the binary logistic regression analysis are shown in Table 2, and the logistic model was statistically significant (χ2(4) = 52.686, P < 0.001). Among the 4 independent variables included in the model, sex and the presence of hypertension and DM were statistically significant (P < 0.05). The risk of a positive conjunctival sac bacterial culture in males was 1.677 times higher than that in females. The risk in patients with hypertension was 1.844 times higher than that in patients without hypertension. The risk in diabetic patients was 1.385 times higher than that in non-diabetic patients. There were only three patients who had undergone bilateral operations, and the interval between the left eye operation and right eye operation of each patient, as previously stated, was more than one month (Table 3).
Culture identification
The top 10 species of culture-positive samples and the number of culture-positive samples of each species are shown in Fig. 1A. Of all the 274 positive culture samples, Staphylococcus epidermidis (n = 56, 20.4%), Kocuria rosea (n = 37, 13.5%), and Micrococcus luteus (n = 32, 11.7%) were the three most frequently isolated strains, accounting for 45.6% of culture-confirmed cases. Furthermore, there were 19 positive samples in total that led to postoperative endophthalmitis for the three most common isolates (10 Staphylococcus epidermidis samples, 6 Kocuria rosea samples and 3 Micrococcus luteus samples). The percentages of postoperative endophthalmitis for the three most common isolates are shown as pie graphs in Fig. 1B.
Antimicrobial susceptibility testing
Among the 274 culture-positive samples, information on antimicrobial susceptibility testing using the K-B test was recorded for 234 (85.4%) samples and is summarized in Table 4. For Staphylococcus epidermidis, Kocuria rosea, Kocuria kristinae, Kocuria varians, Micrococcus luteus, Micrococcus lylae, Moraxella spp., Brevundimonas diminuta, inactive biochemical spectra, and unidentifiable bacterial groups, there were statistically significant differences in resistance to different antimicrobial agents (P < 0.05). The zone diameters of rifampin in the K-B test were the largest, which means all these bacteria were most sensitive to rifampin.
It should be noted that the majority of Staphylococcus epidermidis isolates (33 of 40, 82.5%) had multidrug resistance to 3 or more kinds of antimicrobial agents. Specifically, 22.5% (9 of 40 isolates) were resistant to 3 kinds, 40.0% (16 of 40) to 4 kinds, and 20.0% (8 of 40) to 5 kinds. The UpSet view of multidrug resistance of Staphylococcus epidermidis is shown in Fig. 2.
Subgroup classified by clinical factors
Sex
Among the 274 culture-positive samples, 50.0% (n = 137) were from male patients, and the rest (n = 137) were from females. For male culture-positive patients, Staphylococcus epidermidis (n = 31, 22.6%), Kocuria rosea (n = 18, 13.1%), Kocuria kristinae (n = 13, 9.5%), Micrococcus luteus (n = 11, 8.0%), and Kocuria varians (n = 7, 5.1%) were the 5 strains with the highest positive rates, accounting for 58.4% of culture-confirmed cases. For female culture-positive samples, Staphylococcus epidermidis (n = 25, 18.2%) was still the most prevalent culture-positive strain, followed by Micrococcus luteus (n = 21, 15.3%) and Kocuria rosea (n = 18, 13.1%). These 3 strains accounted for 46.7% of the culture-confirmed cases. It should be noted that the number of Staphylococcus epidermidis-positive isolates in the male patient group (n = 31, 22.6%) was greater than that in the female patient group (n = 25, 18.2%), and there was a significant difference between the two groups (χ2 = 7.139, P < 0.05). There was no significant difference in K-B results for various antimicrobial agents between the male and female patients.
Table 3 Summary of conjunctival sac bacteria of the patients who had undergone bilateral operations
Hypertension
Patients with hypertension had more positive culture results than those without hypertension (P < 0.05). There was a statistically significant difference between the two groups (χ2 = 9.829, P < 0.05).
As for the K-B test results, the median zone diameter of Staphylococcus epidermidis for ofloxacin in the hypertension group (0 mm) was smaller than that in the non-hypertension group (9 mm), and there was a significant difference between the two groups (P < 0.05). However, this could be related to the use of levofloxacin preoperatively and requires careful analysis.
Comprehensive analysis of related clinical factors
After comprehensive analysis of all related clinical factors, we identified 27 (9.9% of all positive samples) male patients with both hypertension and diabetes mellitus. Staphylococcus epidermidis was the most detected strain (n = 9, 33.3%). The proportion of Staphylococcus epidermidis was highest in the male group (22.6%), the hypertension group (18.1%), and the DM group (25.0%). There were significant differences for various antimicrobial agents in the K-B test (P < 0.05), and the zone diameters of rifampin were largest of all the antimicrobial agents (median zone diameter was 32 mm). As shown in the Fig. 3, the median zone diameter of rifampin in samples from males with hypertension and DM (32 mm) was larger than that in the male group (28 mm), hypertension group (30 mm), and DM group (29 mm). There were no significant differences between groups (P > 0.05).
Discussion
This study systematically retrieved RWD of 15,415 cases of patients that had used levofloxacin eye drops preoperatively. Data were retrieved from published literature from the last 10 years and from patients that had come to Peking University Third Hospital from 2016 to 2019. Our literature search identified several studies on conjunctival swab culture in cataract patients preoperatively without the use of antibiotic eye drops (Table 6). According to those results, the positive rate of bacterial cultures of the conjunctival sac in cataract patients preoperatively without using antibiotic drops ranged from 48.3% to 74.0% [23-26]. In the current study, the results revealed that after topically applying levofloxacin preoperatively, the positive rate of bacterial cultures from the conjunctival sac was 1.8%, which is indicative of the strong antimicrobial effect of levofloxacin when applied before cataract surgery. However, it should be noted that even when levofloxacin had been used four times a day for 3 days, the possibility of a positive conjunctival sac bacterial culture still remained. Due to residual bacteria in the conjunctival sac, culture-positive patients were still at risk of endophthalmitis and other infectious diseases. Historically, the incidence of post-cataract surgery endophthalmitis ranges from 0.03% to 0.70%, and it can lead to serious consequences [27,28]. As shown in Table 7, several major pathogens have been isolated from the conjunctival sac of patients with post-cataract surgery endophthalmitis [5,18,29-38]. Among them, Gram-positive bacteria are the major pathogens, and Coagulase-negative Staphylococci are the most frequently isolated strains [5,18,32,33,36-38]. According to Egrilmez et al., Coagulase-negative Staphylococci show resistance rates of more than 30% for fluoroquinolones and methicillin [39]. In addition to endophthalmitis, they can also cause other infectious diseases, including bacterial keratitis. Without effective antibiotic prophylactic therapy, patients may be at risk of potentially vision-threatening infection.
Table 7 Summary of major pathogens involved in post-cataract surgery endophthalmitis
Year of Publication Major Pathogen
According to our results, Staphylococcus epidermidis, Kocuria rosea, and Micrococcus luteus were the 3 strains with the highest culture-positive rates after usage of levofloxacin eye drops for 3 days preoperatively. All of these bacteria belong to the Micrococcaceae family and are commensals, which can be found on human skin, mucous membranes, and the conjunctival sac [40,41]. They can cause opportunistic infections, requiring considerable attention [42]. Staphylococcus epidermidis is considered non-pathogenic. However, patients with a compromised immune system are often at risk of being infected. Characteristically, infections caused by Staphylococcus epidermidis are often chronic, which contrasts the acute infections caused by Staphylococcus aureus [43]. The pathogenesis of Staphylococcus epidermidis infection usually involves the formation of biofilms and phenol-soluble modulins which can kill human red and white blood cells [44][45][46]. It has been reported that Staphylococcus epidermidis cause biofilm growth on intravenous catheters and medical prostheses [47]. Thus, patients with Staphylococcus epidermidis are at risk of infection after implantation of intraocular lenses during cataract surgery. Besides, Kocuria rosea and Micrococcus luteus can also cause infectious disease in immunocompromised hosts. It has been reported that Kocuria rosea can cause meningitis, canaliculitis, endocarditis, and descending necrotizing mediastinitis [48][49][50][51][52][53][54]. As an opportunistic pathogen, Micrococcus luteus can also cause serious infections, such as endocarditis and brain abscess [55,56]. Our study shows that patients with certain clinical factors (male, the presence of hypertension or diabetes mellitus) are at risk of having a greater conjunctival sac bacterial load, which has been confirmed in previous studies [57][58][59][60]. These factors are often present in patients, which may lead to immunocompromised hosts and resulting ocular opportunistic infections caused by the above-mentioned bacteria [1]. It is therefore suggested that ophthalmologists pay more attention to patients with any of these three clinical factors. As for the antibiotic resistance of conjunctival sac bacteria, we found that the resistance of Staphylococcus epidermidis against ofloxacin in the hypertension group was stronger than in the non-hypertension group (P < 0.05). However, the result cannot explain a direct relationship between hypertension and antibiotic resistance of bacteria and how these relate to the preoperative use of levofloxacin. Levofloxacin, a fluoroquinolone, is an isomer of ofloxacin [61]. By using levofloxacin preoperatively, ofloxacin-sensitive bacteria were widely eliminated in patients, and the ratio of ofloxacin-resistant bacteria in patient conjunctival sacs was relatively increased. This may have influenced the results of the current study.
The fact that there still were culture-positive samples after three days of antibiotic prophylactic treatment with levofloxacin shows that, in addition to a high conjunctival sac bacterial load, another possible reason could be the drug resistance of these bacteria. With the widespread use of antibiotics, antimicrobial resistance rates have gradually increased [19,20]. In the current study, several kinds of bacterial strains were reported as resistant to antimicrobial agents, especially to levofloxacin and ofloxacin. Among them, several Staphylococcus epidermidis isolates had multidrug resistance to antimicrobial agents. It is commonly believed that antimicrobial resistance is higher in Staphylococcus epidermidis than in other Coagulase-negative Staphylococcus spp. [62]. The resistance of these bacterial strains against levofloxacin has been confirmed in several studies and has raised questions regarding the use of particular antimicrobial agents for routine prophylaxis [14][15][16][17][18].
In order to further decrease the conjunctival sac bacterial load through antibiotic prophylactic therapy, we need to carefully consider combinations of other effective antimicrobial agents. Our study suggests that rifampin would be a good choice for better topical prophylactic therapy, since most bacteria were sensitive to that agent. Rifampin belongs to rifamycins and has activity against several types of bacteria. Rubio et al. pointed out that 83.9% of conjunctival sac bacteria were sensitive to rifampin. Rifampin was the most effective for the eradication of the whole, predominantly Gram-positive, flora [63]. According to Chojnacki et al., the rifampin plus polymyxin B-trimethoprim combination demonstrated synergistic antimicrobial activity towards ocular clinical Staphylococcus aureus and Pseudomonas aeruginosa isolates, a low spontaneous resistance frequency, and in vitro bactericidal kinetics and antibiofilm activities equal to or exceeding those of moxifloxacin [64]. Compared to literature on the clinical effects of other antibiotics (Table 8), our study revealed a higher sensitivity of conjunctival sac bacteria towards rifampin [7,[65][66][67][68][69][70][71][72][73][74][75]. Further, there was not enough evidence for side effects of the topical application of rifampin at low concentrations.
Although rifampin is a good choice for combination therapy, it may lead to multidrug resistance and more severe consequences, including fever, headache, orange tears, skin redness or rash (allergic reaction) and other symptoms. Usage of multiple antimicrobial agents can effectively reduce bacterial load in the conjunctival sac. However, more resistant strains can also develop as a result of combined treatment. Therefore, simply adding more antimicrobial agents is an unsustainable strategy for improving antibiotic prophylactic therapy. Furthermore, patients may be at a greater risk of infectious diseases, and the proportion of antibiotic abuse may be higher due to clinical factors. The bacterial flora of the ocular surface may have already been multidrugresistant in these patients. Thus, local application of multiple antibiotics may aggravate the risk of multidrug resistance. Alternatively, we advocate a variety of other methods for decreasing the conjunctival sac bacterial load without using more antibiotics. Usage of povidone iodine (PVI) for irrigation during operation can reduce the bacterial burden in the conjunctival sac and has been proven as effective [76]. According to available literature (Table 9), the irrigation with high concentrations of PVI (5%-10%) can effectively decrease the conjunctival bacterial flora. PVI (5%) solution does not increase antimicrobial resistance and has no adverse effects. Lowconcentration PVI (0.05%) irrigation of the conjunctival sac for 30 s can achieve a low bacterial contamination rate and reduce damage to the ocular surface. Levofloxacin can enhance the effectiveness of conjunctival sac irrigation with PVI solution [9,[77][78][79][80][81][82][83][84][85][86][87][88]. Compared to the preoperative use of topical antibiotics, the use of PVI can achieve the same degree of elimination of conjunctival sac bacteria. However, appropriate PVI concentration and irrigation duration should be precisely controlled, or it may cause damage to the ocular surface. Preoperative topical antibiotic treatment could be used as an additional method for further elimination of conjunctival sac bacteria.
We must admit that our study still has limitations. Due to a lack of information, some statistical analyses could not be conducted, and we may not be able to provide unexpected results. Not all clinical factors related to conjunctival sac bacterial load were analyzed in our study due to missing data, including age, history of cancer, and screening for infectious diseases. These factors cannot be ignored and would have to be investigated in a follow-up study. However, this limited result can still draw attention to the drug resistance of conjunctival sac bacteria and provide suggestions for preventive treatment.
Conclusions
Male sex and the presence of hypertension and diabetes mellitus are clinical risk factors for a greater conjunctival sac bacterial load. In order to decrease the conjunctival sac bacterial load for the prevention of possible infections, we offer a prophylaxis suggestion based on RWD, namely the combined use of levofloxacin and rifampin. However, such combined therapy may aggravate the risk of multidrug resistance. Therefore, alternative approaches should also be considered.
"year": 2022,
"sha1": "362085f18acce68175b2fdccf1941efff2d555b9",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "49f2e2ddf5c19f891683683463ff1da7f1a91d7a",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55778464 | pes2o/s2orc | v3-fos-license | An assessment of the small hydro potential of Opeki River, southwestern Nigeria
Nigeria faces an acute shortage of electricity supply and large rural populations have no access to electricity. In this work, the small hydro potential of Opeki River in southwestern Nigeria was assessed. Mean daily flow records for seven years were used to establish a flow duration curve (FDC) for the river and a medium range of heads was evaluated. Conventional power equations were adopted and modified to determine rated output (Pk), annual optimal operation period (To) and to derive power duration curve (PDC) for a proposed plant at the site of interest. The plant’s annual energy production (Ek) and capacity factor (C) were projected from the PDC. At a net head of 46.5 m, an assessment at average potential power (Pave) with a single Kaplan turbine obtained values of Pk, To, Ek and C as 8.8 MW, 148 days, 50,018 MWh, and 65.1% respectively. Assessment results showed that small hydro electric power generation from Opeki River would improve electricity supply to nearby, off grid rural communities.
Introduction
Sub-Saharan Africa (SSA) has an average electricity access level of 14%, as compared to 98.4% in North Africa, 60% in South Asia, 74% in Latin America and 72% in the Middle East [1]. Most countries in Sub-Saharan Africa, including Nigeria, face an acute shortage of rural electricity. Nigeria's rural electricity access level is 28% [2]. Rural electrification in Nigeria has been mainly carried out through grid extension [3]. Rural electrification through grid extension is technically challenging and very expensive due to the remoteness and sparseness of rural communities and the cost of maintaining long distribution feeders [4]. As a consequence, rural electrification in Nigeria has been notoriously slow [3,5]. The technically exploitable small hydro potential of Nigeria is high but underutilized [5,6]. Approximately two-thirds of Nigeria lies in the watershed of the Niger River and Benue River and their tributaries [6]. Several rivers of the watershed, including Cross River (southeastern Nigeria) and the Ogun, Osun and Oyan Rivers (southwestern Nigeria), flow directly southwards into the Atlantic Ocean [6]. Because many off-grid, rural communities in Nigeria are in the proximity of streams and rivers, small hydro power has the capacity to increase the electricity access levels of these communities. In spite of the aforementioned, numerous impediments have limited the development of small hydro, including the unavailability of relevant data and the lack of a comprehensive national inventory of potential small hydro sites [3,5]. Therefore, in this paper, an assessment of the small hydro potential of Opeki River, a tributary of Ogun River [7], was carried out.
The definition of a small hydro project varies from one region to another, but a generating capacity not exceeding 10 MW is generally accepted as the upper limit of what can be termed a small hydro plant in Nigeria [3,6]. Small hydro electricity generation is marginal from an economic viewpoint; hence, a large dam usually renders the project economically unattractive [8,9]. Since most small hydro power plants are run-of-river developments with no significant water storage facility, the negative environmental impacts, including ecological disruption, flooding and social conflicts associated with large-scale hydro projects, are drastically minimized [9]. On the other hand, with the absence of a significant water storage facility, a small hydro plant's power output fluctuates with the hydrological cycle of the river [9,10]. Therefore, a reliable assessment of the available small hydro resource cannot be achieved without an evaluation of the hydraulic turbine's response to the annual variability of river flow. These evaluations define the energy available at a site of interest. The extensive calculations required by these evaluations necessitated the development and implementation of the algorithm in this work using the Visual Basic programming language.
Measuring Head
Head in a run-of-river small hydro plant is relatively constant. It is defined by the loss of elevation of the river over its stretch between the water surface at the proposed intake and the river level at the point where the water will be returned [9,11]. The gross head is estimated by on-site measurements or from topographical maps; in this work, head measurements were carried out using both methods. The actual head seen by a turbine, termed the net head, will be slightly less than the gross head due to losses incurred when transferring the water into and away from the turbine via water conveyance structures. The net head was calculated using (1) [9,10].
Modeling Stream Flow
Having established the site as topographically suitable for small hydro power development, a firm knowledge of the river's flow regime, as depicted by a flow duration curve (FDC), is required. The FDC is a curve with probability of exceedance (%) on the x-axis and flow rate (m³/s) on the y-axis [8,12]. The FDC depicts the annual variability of flow in a river and was used to verify the availability of an adequate water supply for power generation [12,13]. An approximation of the area of the region under the FDC provided the average yield of the stream, hence the average flow rate (Q_ave) for the multi-year period [13,14]. Development of the flow duration curve from daily flow records was achieved using a spreadsheet computer application.
Let Q_i represent the flow values constituting the primary flow duration curve. A minimum non-usable flow must bypass the small hydro plant in order to meet environmental regulations and irrigation requirements downstream and to account for leakages that may occur at the point of diversion [11,12]. This minimum flow, also known as the residual flow (Q_r), was subtracted from all values of primary flow. Hence, the residual flow effectively shifts the primary FDC downwards, reducing the volume of flow available to the turbine and creating a secondary FDC consisting of the flows available for power generation (Q_j). The available flow values were thus calculated using (2) [8,9]:
Q_j = Q_i − Q_r (2)
i, j = {0, 1, 2, 3, …, n}; n = number of equally spaced intervals on the FDC; Q_i = flow values of the primary FDC (m³/s); Q_j = flow values of the secondary FDC (m³/s); Q_r = residual flow (m³/s). "i" and "j" are subscripts indicating the exceedance probability of a flow value on the primary FDC and secondary FDC respectively.
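The construction of the primary and secondary FDCs described above reduces to a percentile calculation followed by the subtraction in (2). The short Python sketch below is illustrative only; the function and variable names are ours and do not come from the authors' Visual Basic implementation.

import numpy as np

def flow_duration_curve(daily_flows, n_intervals=100):
    # Exceedance probabilities 0, 1, ..., 100 % and the flows (m^3/s)
    # equalled or exceeded at each probability.
    probs = np.arange(0, n_intervals + 1)
    flows = np.percentile(np.asarray(daily_flows, dtype=float), 100 - probs)
    return probs, flows

def secondary_fdc(primary_flows, residual_flow):
    # Equation (2): Q_j = Q_i - Q_r, floored at zero.
    return np.maximum(np.asarray(primary_flows, dtype=float) - residual_flow, 0.0)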
Power Output of a Turbine
In hydro power plants, the energy of flowing water is converted to torque by the turbine. This torque drives the shaft of the turbine, which in turn rotates the alternator to produce electricity [14]. The power available to the turbine is conventionally calculated using the power equation in (3) [8,14]:
P = ρ g Q H_n η_o (3)
where ρ = density of water (1,000 kg/m³), g = acceleration due to gravity (9.8 m/s²), Q = flow rate (m³/s), H_n = net head (m) and η_o = overall efficiency of the system.
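As a quick plausibility check, (3) can be evaluated directly. The snippet below is a sketch; the 0.90 overall efficiency is an assumed value, not one reported by the authors. With the average available flow of 21.4 m³/s and the 46.5 m net head quoted elsewhere in the paper, it returns roughly the 8.8 MW rated output later reported for the Kaplan option.

def hydro_power_kw(flow_m3s, net_head_m, overall_efficiency, rho=1000.0, g=9.8):
    # Equation (3): P = rho * g * Q * H_n * eta_o, returned in kW.
    return rho * g * flow_m3s * net_head_m * overall_efficiency / 1000.0

print(hydro_power_kw(21.4, 46.5, 0.90))   # ~8777 kW, i.e. about 8.8 MW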
Establishing the Power Duration Curve
When (3) is expanded to accommodate the distinct efficiencies and losses of the small hydro system, and Q_k is taken as the plant's rated flow, P_k(j) is the power output of the small hydro plant due to the available flows (Q_j) relative to the plant's rated flow (Q_k). P_k(j) was evaluated using (4) [10], with
j, k = {0, 1, 2, 3, …, n}; n = number of equally spaced intervals on the FDC; Q_j = min(Q_j, Q_k), where "j" and "k" are subscripts indicating the exceedance probability of a flow value on the secondary FDC. When Q_j = Q_k, P_k(j) is the plant's rated output (P_k). In (4):
η_t = turbine relative efficiency, obtained from Fig. 1
η_g = generator efficiency (typically 93-98%)
ζ_t = transformer losses (typically 1-3%)
ζ_p = parasitic electricity losses (typically 1-4%)
ρ = density of water (1,000 kg/m³)
g = acceleration due to gravity (9.8 m/s²)
Q_j = available flows for power generation (m³/s)
Q_k = plant's rated flow (m³/s)
H_g = gross head (m)
H_h = hydraulic head losses (m), adjusted over the range of available flows using (5), where ζ_h = conduit head percentage loss (typically 3-8%)
H_w = tail water head losses (m), adjusted over the range of available flows and defined only for Q_j > Q_k, where h_w = maximum tail water level (m) and Q_max = maximum river flow from the primary FDC (m³/s).
The power outputs obtained from (4) were used to establish the power duration curve (PDC) for the proposed small hydro plant.
The minimum potential power, P_min, is the plant's rated output when the minimum annual flow rate is taken as the plant's rated flow [4]. The average potential power, P_ave, is the plant's rated output when the average annual flow rate is taken as the plant's rated flow [4].
Calculating Turbine Relative Efficiency Curves
The relative efficiency of a hydraulic turbine describes the turbine's efficiency not only at its design flow but also at reduced flows (part-flow efficiency) [9,11]. Even though efficiency guarantees are usually provided by turbine manufacturers, extensive studies of Kaplan, Propeller, Francis, Crossflow, Pelton and Turgo turbines have established formulae for calculating relative efficiencies under varying flow conditions [10,11]. These formulae, described in detail in [10], were used to derive efficiency curves for the various turbines considered in this work. These efficiency curves are shown in Fig. 1.
Figure 1. Hydraulic Turbines' Efficiency Curves.
Calculating Annual Optimum Operation Period
The plant's annual optimum operation period is an estimation of the number of days in a year that the small hydro plant can deliver its rated output. The annual optimum operation period was calculated using (7), where T_o = optimum operation period (number of days), t_d = approximate number of days in a year (365 days) and Pr(Q_k) = exceedance probability of the plant's rated flow (%).
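Equation (7) is not reproduced legibly in the extracted text, but the description above implies that it is simply the product of t_d and the exceedance probability of the rated flow. The sketch below encodes that reading and should be treated as an assumption; with a rated flow exceeded about 40.5% of the time it reproduces the 148-day operation period reported later.

def optimum_operation_period(exceedance_prob_percent, days_per_year=365):
    # Assumed form of Equation (7): T_o = t_d * Pr(Q_k) / 100.
    return days_per_year * exceedance_prob_percent / 100.0

print(optimum_operation_period(40.5))   # ~148 days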
Calculating Annual Maximum Reduction in Rated Output
The annual maximum reduction in the plant's rated output gives an indication of the quantity of power required to complement the small hydro plant during periods of reduced flows. The maximum reduction in the plant's rated output was calculated from the PDC using (8):
P_r = P_k − P_f (8)
where P_r = annual maximum reduction in rated output (kW) and P_f = plant's firm output (kW). The small hydro plant's firm output (P_f) is the power output that the plant can reliably provide throughout the year and is calculated from (4) when j = 100, i.e. Q_j = Q_100.
Estimating Annual Energy Production
The annual energy produced by the small hydro plant was calculated by approximating the area of the region under the PDC. To achieve this, trapezoidal integration was employed. In order to numerically implement the trapezoidal rule, a domain discretized into "n" equally spaced intervals, such that "n" represents the number of percentage exceedance intervals on the power duration curve with "n+1" flow values, was considered. The approximation of the integral is given in (9) [11,15]:
∫ f(x) dx ≈ (h/2)[f(x_0) + 2f(x_1) + … + 2f(x_(n−1)) + f(x_n)] (9)
Equation (9) was modified to calculate the small hydro plant's annual energy production using (10), where E = the annual energy produced by the plant (kWh), P_k(j) = the power outputs from (4) (kW), A = plant's annual availability (typically 85-98%), t_y = approximate number of hours in a year (8,760 hrs) and h = percentage spacing of the intervals on the PDC (1%).
Calculating Annual Capacity Factor
The small hydro plant's capacity factor is the ratio of the plant's actual energy output over a period of time to its potential energy output if it had operated at rated output for the entire time. The plant's annual capacity factor was calculated using (11) [9]:
C = E / (P_k × t_y) (11)
where C = plant capacity factor, E = annual projected energy production from (10) (kWh), P_k = plant's rated output (kW) and t_y = approximate number of hours in a year (8,760 hrs).
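Equations (9)-(11) can be combined into a short numerical sketch. The exact placement of the availability factor A in (10) is not legible in the extracted text, so treating the annual energy as A times the trapezoidal mean of the PDC times t_y is an assumption; with the reported 8.8 MW rated output and 65.1% capacity factor it gives an annual energy close to the 50,018 MWh quoted for the Kaplan option.

import numpy as np

def annual_energy_mwh(pdc_outputs_kw, availability=0.95, hours_per_year=8760.0):
    # Equations (9)-(10): trapezoidal integration of the PDC sampled at
    # equal exceedance intervals (e.g. 101 values for j = 0 ... 100),
    # scaled by the plant's annual availability.
    p = np.asarray(pdc_outputs_kw, dtype=float)
    mean_power_kw = np.trapz(p, dx=1.0 / (p.size - 1))
    return availability * mean_power_kw * hours_per_year / 1000.0

def capacity_factor(energy_mwh, rated_output_kw, hours_per_year=8760.0):
    # Equation (11): C = E / (P_k * t_y).
    return energy_mwh * 1000.0 / (rated_output_kw * hours_per_year)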
Turbine Application Range Charts
Charts have been developed to aid the selection of suitable turbines for varying site conditions of flow and head. Typical turbine application range charts are shown in Fig. 2 and Fig. 3. Suitable turbines are those for which a given net head and rated flow plot within the operational envelopes. Although some variation exists between envelopes of the same turbine for charts produced by different manufacturers; these charts serve as an important guide during the prefeasibility stage of small hydro assessment [11].
Power and Energy Assessment
In the absence of long-term records, short-term records (seven years) of average daily flows from a gauging station at Abidogun village were acquired from the Ogun-Osun River Basin Development Authority (OORBDA). These records were used to develop a flow duration curve for Opeki River. In order to ensure that a residual flow (Q_r) equal to 50% of the minimum flow (2.97 m³/s) was sustained annually as prescribed, (2) was applied to obtain a secondary flow duration curve comprising the flows available to the turbine for power generation. The primary and secondary flow duration curves are shown in Fig. 4. The average flow of Opeki River for the multi-year period (less the residual flow) is 21.4 m³/s.
Figure 4. Primary and Secondary Flow Duration Curves for Opeki River.
Taking the average annual flow (Q_ave) as the rated flow allowed an estimation of the average potential power (P_ave). The small hydro plant's rated output (P_k) is obtained from (4) when Q_k = Q_j. Therefore, when (4) was used with the appropriate turbine relative efficiencies derived from the turbine efficiency curves in Fig. 1 for Kaplan, Propeller, Francis, Crossflow, Pelton and Turgo turbines, the plant rated output using Q_ave was calculated for each turbine type. Subsequently, by applying (7) to the exceedance probability associated with Q_ave, the annual optimum operation period for the small hydro plant was estimated.
Again, when (4) was used with the appropriate turbine relative efficiencies along with the available flows constituting the secondary FDC, the variation in turbine efficiency and the change in plant output as available flows deviate from the plant's rated flow were computed for Kaplan, Propeller, Francis, Crossflow, Pelton and Turgo turbines. These results were plotted to form turbine efficiency curves and power duration curves. The turbine efficiency curve (TEC) shows the variation in turbine efficiency as available flows deviate from the rated flow of a turbine. The TEC in Fig. 5 is plotted for a single Kaplan turbine for the proposed plant. The power duration curve (PDC) shows the drop in the small hydro plant's rated output as available flows fall below the plant's rated flow; hence the PDC describes the small hydro plant's ability to sustain output at reduced flows.
The PDC in Fig. 6 is plotted for the proposed small hydro plant at average potential power with a single Kaplan turbine. Since a turbine will only accept flows equal to or less than its rated flow, when the available flow exceeds the turbine's rated flow, the excess flow bypasses the turbine and the rated flow constitutes the flow used by the turbine.
Figure 5. TEC for a single Kaplan turbine at average potential power.
Figure 6. PDC for the proposed small hydro plant at average potential power.
With the power duration curve developed, (8) was used to estimate the annual maximum reduction in the small hydro plant's rated output. The annual energy produced by the small hydro plant was estimated by approximating the area of the region under the power duration curve using (10). Upon estimation of the plant's annual energy production, the small hydro plant's annual capacity factor was calculated using (11). Results of the aforementioned assessment at average potential power for all turbines are summarized in Table 1.
Using the turbine application range chart in Fig. 2, Kaplan and Francis turbines were considered appropriate for the proposed small hydro plant at average potential power, and the plant's expected optimum operation period is 148 days annually. With a single Kaplan turbine installed, the plant is estimated to have a rated output of 8.8 MW and is projected to produce 50018 MWh of energy at a 65.1% capacity factor, as shown in Table 1. Consequently, there is an annual energy deficit of 23305 MWh. The Kaplan turbine is expected to attain an efficiency of 91.8% at rated flow, which falls to 0% as a result of reduced flows during the dry season, as shown in Fig. 5. The proposed plant is not highly dependable, as shown in Fig. 6; therefore, a standby capacity of 8.8 MW must be made available from a central grid or an independent source to compensate for the total loss of generation during the dry season.
Alternatively, with a single Francis turbine installed; the small hydro plant is estimated to have a rated output of 8.5 MW and projected to produce 46607 MWh of energy at 62.9% capacity factor as shown in Table 1. Consequently, there is an annual energy deficit of 25557 MWh. The Francis turbine is expected to attain an efficiency of 88.5 % at rated flow which falls to 16.3% as a result of reduced flows during the dry season. The plant is not highly dependable; hence, a standby capacity of 8.3 MW must be made available from a central grid or an independent source to avoid severe power shortages during the dry season.
Power and energy estimates were made for a medium range of heads at average potential power. These results are shown in Table 2. Taking the minimum annual flow (Q_min) as the rated flow allowed an estimation of the minimum potential power (P_min). The minimum annual flow rate of Opeki River obtained from the secondary FDC is 2.97 m³/s at Q_100. A summary of assessment results for all turbines at minimum potential power is shown in Table 3. Using the turbine application range chart in Fig. 2, Francis and Propeller turbines are considered appropriate for the proposed small hydro plant at minimum potential power. A Francis turbine is also considered suitable by the turbine application range chart in Fig. 3. At minimum potential power, the rated flow is available throughout the year; therefore, the proposed plant's expected annual optimum operation period is the full 365 days.
With a single Francis turbine installed, the plant is estimated to have a rated output of 1.16 MW and projected to produce 9948 MWh of energy at 97.8% capacity factor as shown in Table 3. The Francis turbine is expected to maintain an efficiency of 87.3 %. Hence, the annual energy deficit of 214 MWh is mainly due to the plant's 98% availability.
Alternatively, with a single Propeller turbine installed, the plant is expected to have a rated output of 1.2 MW and projected to produce 10350 MWh of energy at 97.8% capacity factor as shown in Table 3. The Propeller turbine is expected to maintain an efficiency of 90.9 %. Hence, the annual energy deficit of 188 MWh is mainly due to the plant's 98% availability.
Power and energy estimates were made for a medium range of heads at minimum potential power. These results are shown in Table 4.
Conclusion
Assessment results show that small hydro electric power generation from Opeki River can contribute in no small measure to improving electricity supply to nearby rural communities since the electrical energy demands of these communities are modest. Widespread development of small hydro power can contribute immensely to improving rural electricity access levels throughout Nigeria. | 2019-04-13T13:07:21.127Z | 2014-06-27T00:00:00.000 | {
"year": 2014,
"sha1": "b408a35fdaa1e8df10ff0f2668e279698b3d0001",
"oa_license": null,
"oa_url": "https://article.sciencepublishinggroup.com/pdf/10.11648.j.sjee.20140203.12.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "378f19fec9217f5dd6a2693079a998d04bff8e65",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
} |
268006134 | pes2o/s2orc | v3-fos-license | Streaming Consumers: Series Versus Videos, What Distinguishes Them?
In recent years, countless studies have sought to explain and characterise video streaming consumption. Its facet of excessive consumption, the so-called binge-watching, has been privileged in an attempt to define and quantify, if possible, the phenomenon. In this work, we intend to analyse streaming video consumers, aiming to identify the factors that differentiate two specific types of consumers: TV series streamers and medium/short-duration video streamers. The data was obtained using a structured questionnaire, shared online between 16 June and 16 August 2022, to individuals residing in Portugal, aged between 18 and 64, with 496 valid responses. Our sample consists of 67.3% women, and 83.5% of the participants are of Portuguese nationality. About 75% of the participants assumed themselves as consumers of TV series, and about 25% were medium/short-duration video consumers. This study used several statistical techniques, including descriptive statistics, chi-square independence test and logistic regression. The factors that differentiate these two groups of consumers are gender, age group, environment where they live, type of platforms used, the device they usually watch, with whom they typically watch and the system recommendations.
Introduction
The advancement and massification of information and new communication technologies have allowed for a new way of consuming media entertainment in recent years.The popularisation of access to personal devices, with particular emphasis on mobile phones and computers, along with a growing and stable Internet connection, has provided a new form of online consumption.Viewers can now access their favourite programs anywhere, anytime and as often as they like.Thus, the consumer now has perfect control not only of what they consume but particularly how they consume it.Hence, it became possible to consume multiple TV series episodes (or another type of program) simultaneously for as long as one wants without being dependent on the weekly TV schedule and successive advertising interruptions.According to some authors (e.g., Shim & Kim, 2018), this new pattern of streaming video consumption, despite greater efficiency in choosing and controlling what is consumed, can lead to an excess, the well-known phenomenon of binge-watching, which has been much studied in recent years by various subject areas (Merikivi et al., 2018;Pittman & Sheehan, 2015;Steiner & Xu, 2020;Sung et al., 2018).This type of phenomenon originates in the development of various 'on-demand streaming' platforms, notably Netflix, Hulu, HBO GO, Amazon Prime, Disney+, Crunchyroll and Apple TV.In 2013, Netflix created new ways of consuming TV programs in which viewers could choose extensively from the diverse content offered (Starosta & Izydorczyk, 2020).Recently, according to data from Netflix, around 208 million consumers subscribed to this paid platform in 2020 Netflix (2021).These numbers have continued to grow, especially during the quarantine and lockdown of the economy following the pandemic (Rahman & Arif, 2021).
This new behaviour towards video streaming has aroused increasing interest in the profile of this type of consumer, whether excessive or more recreational, and in their real motivations.
We also emphasise that several studies are currently being carried out regarding the profile and behaviour of consumers of TV series such as those on Netflix and HBO, that is, of programs in which each episode has a longer duration (e.g., Martínez-Sánchez et al., 2021; Nagaraj et al., 2021), and other studies analyse the profile and behaviour of consumers of short/medium duration videos, such as YouTube, TikTok and Reels (e.g., Blagojević, n.d.; Chen, 2016; Iqbal, 2023). As referred to by Yoo et al. (2020), like binge-watching, short-watching is also a representative way of consuming media. In congruence with the continuing growth of subscription video-on-demand (SVoD) services such as Netflix and Hulu, studies on binge-watching appear to receive more attention than research conducted on short-watching. In this way, it is essential to differentiate between the two types of programs/series and to know what distinguishes their consumer profiles.
In recent years, several definitions have emerged to distinguish videos in terms of their length.For Newman (2010), short-watching is similar to the snack culture: consuming media in short periods to suit hectic lives.In the study conducted by Bytyci (2014), it was concluded that the mean length for short videos was about 2.9 minutes, whereas for long-form videos, it was about 30.7 minutes.According to Kong (2018), 'Short video' refers to video content shorter than 5 minutes distributed via digital media platforms.Short video features include low-cost production, highly spreadable content and blurry boundaries between producers and consumers.Lew (2021) states that the concept of 'medium-length videos' or 'mid-form videos' was formally introduced in October 2020 by Watermelon Video's CEO Ren Lifeng and is now trending on all major video platforms in China.Here, a distinction is made between the three categories of video according to their length: Short-form, whose duration is up to 1 minute (e.g., TikTok videos); mid-form, where video duration can be anything between 1 and 30 minutes, but a length of 5-15 minutes is more commonly seen (e.g., YouTube videos); and long-form, whose duration is more than 30 minutes (e.g., Netflix and Disney+ videos).According to Bonacci (2022), short-form videos are typically under 10 minutes long, while long-form videos exceed that 10 minutes.
Thus, it is clear that there is no unanimous definition in the literature as to the duration of the videos under review. Therefore, in our study, we made the distinction between TV series, considering in this case videos with a longer duration, such as Netflix, HBO or Disney+ content, and videos of medium/short duration, such as YouTube, TikTok or Reels videos.
In this setting, the main objective of this study is to identify the main factors that distinguish the consumer of streaming TV series from the consumer of short/ medium duration videos.These factors include some socio-demographic characteristics, the devices and platforms used and factors that can be considered motivational, such as the recommendations made by the system and relaxation.To this end, our main research question is: RQ: What are the main factors distinguishing a consumer of streaming series and a consumer of medium/short duration videos?
To answer the question, streaming consumers were asked about the type of product they usually consume: TV series (e.g., Netflix, HBO and Disney+) or medium/short-length videos (e.g., YouTube, TikTok and Reels).This variable is the dichotomous dependent variable of interest.
The article is organised as follows. In the second section, a review of the literature is carried out. The following section describes the methodology used, including the questionnaire and measures, the procedures adopted in data collection and the statistical techniques used to respond to the study's objectives. The fourth section presents the results obtained, namely the sample characterisation and the validation of the hypotheses and research question. Finally, the conclusions are drawn, and the limitations of this study are mentioned.
Literature Review
Numerous studies have sought to explain and characterise video streaming consumption in recent years. Its facet of excessive consumption, the so-called binge-watching, has been privileged, seeking to define and quantify, if possible, the phenomenon. One of the first proposals was put forward by McNamara (2012), who suggested a minimum consumption in duration and number of episodes. The problematic application of this concept and the fact that most studies used a self-reported variable led to the need to find a more operational criterion, giving rise to a metric of 'three or more episodes at once'. The studies by Pittman and Sheehan (2015), Schweidel and Moe (2016), Trouleau et al. (2016) and Merikivi et al. (2018) contributed to shaping this criterion. More recently, Anghelcev et al. (2022) suggested the more realistic hypothesis of considering different consumption levels, differentiating non-binge-watchers from regular binge-watchers or heavy binge-watchers. Most of these works were mainly concerned with investigating the main factors at the origin of the binge-watching phenomenon, such as personality traits, other psychological factors or even motivation, in which entertainment, relaxation and fantasy and fandom phenomena are highlighted.
For a more systematic review of the issue, see, for example, Flayelle et al. (2020) and Starosta and Izydorczyk (2020).Other authors go a little further, exploring the potential addictive nature of the phenomenon, seeking to identify risk consumers, which would be of particular importance for the prevention of mental disorders or social problems, as well as allowing for a better understanding of this excessive consumption behaviour.This is the main objective of the work by Ort et al. (2021) when concluding that although consumers of TV series reveal, in general, low levels of a behaviour, identified by the authors, as problematic, this trend could be reversed as the frequency of binge-watching increases.However, the results of the study suggest that consumers identified as 'regular' cannot be characterised by problematic or even addictive behaviour.
A reference should be made to the study by Lüders et al. (2021), which, in a broader sense, seeks to analyse the profile of the streaming consumer, choosing the particular cases of music (Spotify) and TV (Netflix). The analysis was carried out based on the Norwegian reality, aiming to assess the extent to which the two types of consumers differ: in terms of TV consumption, it is younger men with higher income who seem to be more frequent streaming consumers, whereas concerning streaming music consumption, age and gender are both explanatory and significant factors that lead to more frequent consumption compared to non-streaming users.
The demographic analysis by Nagaraj et al. (2021) found that the younger generation was more willing to subscribe to SVoD services.In contrast, education was negatively related to willingness to subscribe.As one's educational qualifications increased, the familiarity and awareness of technology and innovations also increased.With the increasing need for convenience, interactive features and widespread global content, over-the-top (OTT) services are highly appealing to the educated class of society.Also, the educated working class gets entertainment on their devices, saving time in their already busy schedules.Occupation also emerged as one of the important demographic factors in determining willingness to subscribe.Consumers from public sector jobs have more leisure time when compared to private sector employees, so they show greater willingness to subscribe to SVoD services.Although it is a common perception that males and females have different viewing patterns and preferences, when it comes to the choice of OTT service subscription and platform, the study found no significant influence of gender in predicting willingness to subscribe.
Despite some studies concluding that binge-watching is gender-neutral (Moore, 2015), differences appear in preferred programs-women tend to consume more dramas and comedies, while men often choose other types of content, such as science fiction (Chang, 2020).In addition, it is also important to understand the reason, beyond the more apparent socio-economic reason, why some consumers subscribe to one or more paid platforms (as is the case with Netflix) and others prefer free platforms (as is the case with YouTube).There is, therefore, a place to identify different segments of consumers either by socio-demographic characteristics or differences in the programs chosen or still which device they choose.Thus, considering the above, we propose the following research hypotheses: H 1 : There is an association between socio-demographic characteristics and the type of programs consumers watch.H 2 : There is an association between the type of platforms where they watch and the type of programs that individuals watch.Another aspect to remember is the type of device used to watch the series and whether the kind of series that individuals watch is associated with loneliness or social interaction.In this sense, the way they do it and with whom they watch their favourite programs are additional characteristics that distinguish the consumers.D'heer and Courtois ( 2016) state in their study that an increasingly saturated media environment potentially alters how viewers engage with televisual media and each other.In this respect, they address how mobile devices, such as tablets, have entered our living rooms and altered TV's social uses and practices.They conclude that the use of mobile Internet devices in addition to the TV is integrated into our everyday TV viewing behaviour.They also report that although family members may all be watching programs in the same living room but on different devices, their conversation diminishes.
According to Rahman and Arif (2021), their study's results indicate that most respondents use smartphones for binge-watching on Netflix.The most popular device for marathon Netflix usage among the respondents is the smartphone (65.7%), followed by laptops (45.7%) and desktop computers (43.8%).This indicates that most of the respondents prefer portability.Some surveys suggest that excessive video streaming consumption is usually a more solitary behaviour and that it may, in some cases, be associated with specific personality traits and psychological factors (Wagner, 2016;Wheeler, 2015).
A study by Evens et al. (2021) reveals that nowadays, individuals not only have more choice and control over which audiovisual content they access, when (time-shifting) and where (place-shifting), but they can also switch between services (platform-shifting) and select the most appropriate screen (device-shifting) to play back that content. Depending on the socio-spatial context, consumers may, for example, prefer to watch a drama series individually on a mobile phone while commuting on public transport but would cast the same series to a bigger screen while being together with friends or family. Yoo et al. (2020) analyse the addiction effects of both binge-watching and short-watching. They considered several motivation variables and concluded that social interaction has a significantly positive effect on attitudes towards short-watching. Thus, we propose the following research hypotheses: H 3 : There is an association between the type of device used and the type of program individuals watch. H 4 : There is an association between who individuals watch with and the type of program that individuals watch.
Since we would like to know if the consumer profile is different for the TV series and medium/short length video consumers, we would like to know if the motivations like recommendations, relaxation or behaviour patterns are associated with the two kinds of programs.Several studies analysed the motivations related to video streaming consumption.
The excessive consumption of video streaming is analysed by Hasan et al. (2018), considering the impact of the recommendation system.This question is somewhat innovative since it brings to the investigation the role that the platform recommendation system can have in excessive consumption.Thus, the authors consider three significant factors likely to explain excessive consumption: motivation and psychological.And the use of the recommendation system (on sites such as YouTube and Netflix).The latter refers to a type of service included in Internet applications, which may lead to excessive use of the application in question.Despite some similarities of this service between platforms, attention is drawn to the effectiveness and quality of recommendations, which may vary from individual to individual and from platform to platform.
It is important to mention the role of recommendations, either by friends (e.g., Anghelcev et al., 2022) or through reviews and readings about series (e.g., Forte et al., 2021), which can end up influencing decisively content choices and the intensity of that consumption.
According to Limov (2020), users will rely on Netflix's recommendation system to find foreign content more frequently than other common discovery sources, like their peers or reviewers.Evens et al. (2023) state that algorithmic platforms may direct viewers towards popular series based on their viewing behaviour, almost reducing that choice.This might not be the case for the affordances categorised under the 'Modality' umbrella, which instead seem to describe the technology.
Concerning motivation, and among several possible explanations, entertainment and relaxation stand out, or even, on a smaller scale, the attempt to escape everyday problems or other negative emotions.In the results obtained in the study of Castro et al. (2021), relaxation was the main motivation for watching Netflix at the end of the day.Ort et al. (2021) conclude that relaxation and entertainment are unproblematic and rather recreational motives regardless of binge-watching frequency.Also, Khan (2017) concludes that the strongest predictor for liking and disliking YouTube videos was the relaxing entertainment motive; commenting and uploading were strongly predicted by the social interaction motive, and the information-giving motive anticipated sharing.The results of the study by Yoo et al. (2020) demonstrate that media audiences show different motivations towards the tendency of binge-watching and short-watching.As Camilleri and Falzon (2021) concluded, their study's research participants sought emotional gratification from the streaming technologies.As referred by the authors, they probably allowed them to relax in their free time.Other theoretical underpinnings reported that individuals use certain technologies to distract themselves into a better mood.Thus, we propose the following research hypotheses: H 5 : Recommendations are perceived differently by individuals who watch TV series and those who watch medium/short-duration videos.
H 6 : Relaxation is perceived equally by individuals who watch TV series and those who watch medium/short-duration videos.H 7 : The consumption pattern is perceived equally by individuals who watch TV series and those who watch medium/short-duration videos.
Questionnaire and Participants
This study collected cross-sectional data using a structured questionnaire on the Google Forms platform. A pre-test with ten individuals had previously been carried out, which led us to correct some questions. The questionnaire consists of three sections: (a) socio-demographic data (seven questions); (b) type of video streaming, platforms, devices and with whom they watch (four questions) and (c) motivation (recommendations, relaxation) and consumption pattern (eight questions). All questions are multiple choice, and there are no open-ended questions. The questionnaire is written in Portuguese and was shared on social networks and Facebook video-streaming consumer groups between 16 June and 16 August 2022. The questionnaire takes approximately 5 minutes to complete. In the third section of the questionnaire (the questions are shown in Tables 1 and 2), we used a 5-point Likert scale (1: Strongly Disagree, 2: Disagree, 3: Neutral, 4: Agree, 5: Strongly Agree).
Measures Socio-demographic Data
In the first section of the questionnaire, questions were asked regarding gender, age groups, educational qualifications, professional situation (occupation), marital status, the environment where they live, and nationality.
Platforms, Devices and Streaming Consumers
In this section of the questionnaire, we start with the question about the kind of program we usually see: TV series (e.g., Netflix, HBO and Disney+) or medium/ short length videos (e.g., YouTube, TikTok and Reels).Regarding the type of platforms used, it was asked whether they usually watch on paid or free platforms (Nagaraj et al., 2021), as well as the type of device most frequently used (Hasan et al., 2018).In this section of the questionnaire, we also have a question to find out with whom they usually watched the video streaming.
Reasons and Pattern of Consumption
Concerning the reasons that lead consumers to watch TV series or medium/shortlength videos, questions were asked about the recommendations made by the system and suggestions of reviews (Forte et al., 2021;Hasan et al., 2018).Another reason is considered as the relaxation factor, based on the work of Ort et al. (2021) but also analysed in other works, such as Hasan et al. (2018).The questions are shown in Table 1.
To analyse the consumption pattern of this type of consumer, the questions shown in Table 2 were posed, which are based on the work carried out by Pena (2015), Shim and Kim (2018), Forte et al. (2021) and Ort et al. (2021).
Data Analysis
In general terms, this study is essentially quantitative and descriptive.A questionnaire was prepared to carry out this study, and a non-probabilistic sampling technique was used.In this study, we characterised the sample using descriptive statistics measures.
To test the research hypotheses, from H 1 to H 4 , chi-square independence tests were performed.The parametric t-tests were used to test hypotheses H 5 , H 6 and H 7 , having previously used Cronbach's alpha coefficient to analyse the internal consistency of the items used in the considered factors.
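The authors ran these tests in IBM SPSS 27. For readers who prefer an open-source route, the same chi-square test with Cramer's V, the independent-samples t-test and Cronbach's alpha can be sketched as below; this is an illustrative reimplementation with assumed variable names, not the authors' code.

import numpy as np
from scipy import stats

def chi_square_with_cramers_v(contingency_table):
    # Chi-square test of independence plus Cramer's V as the effect size.
    table = np.asarray(contingency_table, dtype=float)
    chi2, p_value, dof, _ = stats.chi2_contingency(table)
    v = np.sqrt(chi2 / (table.sum() * (min(table.shape) - 1)))
    return chi2, p_value, dof, v

def cronbach_alpha(item_scores):
    # item_scores: (n_respondents, n_items) matrix of Likert responses.
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    return k / (k - 1) * (1.0 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

# Independent-samples t-test between the two consumer groups, e.g.:
# stats.ttest_ind(recommendation_scores_series, recommendation_scores_videos)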
The research question, RQ, was answered using the logistic regression model, whose dependent variable, Y, assumes the value 1 if the individual watches mostly TV series and 0 if the individual watches mostly medium/short duration videos.Lee and Monsam (2018) used the logistic regression model to assess whether there is any statistical significance between Millennials' desires to switch from traditional television services to OTT.Laban et al. (2020) used this model to explain the strategy (implicit as the reference group) according to the type of production (non-Netflix originals as the reference group) and genre (drama as the reference group).Nagaraj et al. (2021) also used this model to analyse the availability of subscribing to new OTT services or continuing with the same services, considering as independent variables socio-demographic factors such as age, education, gender, income, occupation and household structure.For all the statistical analysis of the data, the statistical software IBM SPSS 27 was used.
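A logistic regression of the same form can also be fitted outside SPSS. The sketch below uses statsmodels with hypothetical column names; the stepwise forward selection used by the authors is not reproduced, and the categorical predictors are simply dummy-coded.

import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_streaming_logit(df: pd.DataFrame):
    # Y = 1 for TV series streamers, 0 for medium/short duration video streamers.
    predictors = ["gender", "age_group", "residence_zone", "platform_type",
                  "device", "watch_with", "recommendations"]
    X = sm.add_constant(pd.get_dummies(df[predictors], drop_first=True).astype(float))
    model = sm.Logit(df["watches_tv_series"].astype(float), X).fit()
    print(model.summary())           # coefficients, Wald statistics, p values
    print(np.exp(model.params))      # odds ratios
    return model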
Sample Characterisation
Table 3 presents the characterisation of the sample obtained, showing the absolute and relative frequencies (in percentage) of the results. In our sample, we have a total of 496 valid responses. Regarding the complete sample, 67.3% of the individuals are female, the majority have a bachelor's degree or higher, 55.2% are full-time workers, 56.9% are single and 80.6% live in urban areas. Our sample comprises individuals aged between 18 and 64; only 5.8% belong to the age group between 55 and 64. It should be noted that around 58.9% of users use paid platforms, 46.0% watch programs on television and 75.2% of individuals watch programs alone.
Regarding the two subsamples analysed in this study, we highlight: (a) 64.3% of individuals who watch TV series are workers, while 57.7% of individuals who watch medium/short duration videos are students; (b) among individuals who watch TV series, 76.7% do so using paid platforms, while 95.1% of individuals who watch medium/short duration videos use free platforms; (c) 60.1% of individuals who watch TV series use television for this purpose, while 74.0% of individuals who watch medium/short duration videos use their mobile phone or tablet; (d) among individuals who watch medium/short duration videos, 95.1% watch alone, compared with 68.6% of individuals who watch TV series.
Figures 1-4 show the distribution of consumers of TV series and short/medium duration videos by gender, age group, platform type and device type, respectively.
Hypothesis Validation
Table 4 presents the results obtained by performing the chi-square test and the respective Cramer's V contingency coefficient, and the latter is used to measure the intensity of the existing association between the variables.The results aim to test hypotheses H 1 , H 2 , H 3 and H 4 .By analysing the results in Table 4, we can conclude that there is an association between the type of programs that individuals watch and the following variables: gender, age, occupation, marital status, residence zone, type of platform, device they watch on and with whom they watch with.
The education level and residence zone variables were not statistically significantly associated with the type of programs individuals watch (p value > .05). Regarding the intensity of the association, we found that it is greatest for the platform, device and age class variables. Finally, we can conclude that hypothesis H 1 is partially validated, and hypotheses H 2 , H 3 and H 4 are validated.
To test hypotheses H 5 , H 6 and H 7 , parametric t-tests were performed for the equality of two means.The results are shown in Table 5.Previously, Cronbach's alpha coefficient was calculated to analyse the internal consistency of the items that constitute the factors.Based on (Hair, 2010) values, we can say that the factors recommendations (3 items) and relaxation (2 items) have moderate consistency with alpha values of 0.609 and 0.673, respectively, and the factor consumption pattern (3 items) has good internal consistency, with an alpha value of 0.769.By analysing the results obtained, we can see only one statistically significant difference in the two groups of individuals' perceptions regarding the recommendations.To better illustrate these differences, Figures 5 and 6 were elaborated.Thus, hypotheses H 5 , H 6 and H 7 are validated.
Finally, a multiple logistic regression model was estimated using the stepwise forward variable selection method to answer our research question, considering the probability of entry equal to 0.05 and removal equal to 0.10.
Table 6 presents the results of the estimated logistic regression model, which shows the estimated coefficients, B, the estimated standard errors, EP, the Wald statistic values, degrees of freedom and respective p values, the odds ratios (OR), as well as the 95% confidence intervals for the ORs.Overall, we can conclude that the variables that help to distinguish the two types of programs that individuals watch are gender, age group, residence zone, type of platforms, type of device, who they watch with, as well as recommendations.
To assess the significance of the estimated model, we used the likelihood ratio test (whose null hypothesis states that the estimated model is not significant). The observed value of the test statistic is G² = 344.172 (p value = .000), so we can conclude that the estimated model is statistically significant. The value of the Hosmer-Lemeshow goodness-of-fit statistic was also calculated, χ²_HL = 8.519 (p value = .384 > .05), so we do not reject the null hypothesis that the model fits the structure of our data. McFadden's pseudo-R² is 0.619, indicating a good fit.
In addition to the statistics of fit quality, we present the classification Table 7, which indicates the observed and estimated values of the dependent variable under study.By analysing the results of Table 7, we can conclude that the estimated model correctly classifies 90.3% of the cases, with a specificity of 82.1% and a sensitivity of 93.0%.The previous values were obtained using a comparison value equal to 0.5.
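The classification summary in Table 7 (accuracy, sensitivity and specificity at the 0.5 cutoff) follows directly from the predicted probabilities. A minimal sketch, with series streamers coded as the positive class, is shown below.

import numpy as np

def classification_summary(y_true, y_prob, cutoff=0.5):
    y_true = np.asarray(y_true).astype(bool)
    y_pred = np.asarray(y_prob) >= cutoff
    tp = np.sum(y_pred & y_true); tn = np.sum(~y_pred & ~y_true)
    fp = np.sum(y_pred & ~y_true); fn = np.sum(~y_pred & y_true)
    accuracy = (tp + tn) / y_true.size
    sensitivity = tp / (tp + fn)     # correctly classified series streamers
    specificity = tn / (tn + fp)     # correctly classified video streamers
    return accuracy, sensitivity, specificity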
As previously stated, the estimated logistic regression model shows that seven independent variables significantly distinguish TV series viewers from viewers who watch medium/short videos: gender, age class, residence zone, platforms, devices, who they watch with and recommendations.
Conclusions
The main objective of this study was to analyse the differences in consumption patterns between two consumer groups: TV series streamers and short/medium duration video streamers.For this purpose, a sample of 496 respondents aged between 18 and 64 was used.
The differences appear right away in the socio-demographic characteristics of the two groups-if, on the one hand, the TV series is watched mainly by a female and working audience, on the other hand, the medium/short duration video consumers are essentially students, with some gender neutrality.As for the way they watch, there seems to be a common pattern between the two types of consumers since both watch, as a rule, alone.Moreover, there is a clear distinction in the type of platform used: TV series streamers watch mostly on paid platforms, while video consumers do so almost entirely on free platforms.
Regarding motivation, there seems to be some homogeneity between the two groups, with relaxation being the main reason that leads them to stream.Regarding the most used device, TV series consumers watch mainly through TV (this result does not agree with the results of Rahman & Arif, 2021), while consumers of medium/short duration videos use mainly smartphones.Finally, we highlight the role played by the recommendation system, which is particularly relevant for TV series consumers.
Streaming entertainment consumption is not only a growing trend but has been undergoing rapid change, strongly enhanced by the confinement of the population due to the COVID-19 pandemic.This market seems to be undergoing rapid change, making it essential to know the profile of the target consumer of this entertainment industry.Until now, there has been an almost exclusive focus on studying excessive consumption, describing the factors that cause it and which groups are most vulnerable, to assess risks and contribute to preventing possible adverse effects on mental health, social life, etc.
As some authors have mentioned (Flayelle et al., 2020), it is essential to know what moves and explains the behaviour of most streamers, and that will be only frequent or regular consumers without additional characteristics.Knowing these differentiating characteristics (social, demographic, motivational, lifestyles, etc.) will provide information for the industry to adapt content and the logic of streaming itself to a constantly changing audience.Some affinity may even be discovered as to the motivation behind the programs consumed (music, series, videos, etc.) that the platforms have not yet explored (see the work of Lüders et al., 2021).How content is accessed, be it the computer, the smartphone or even the traditional TV set, is also crucial for 'building' new audiences (Burroughs, 2019).Recommendations, whether through interpersonal relationships or through the system of the platform itself, assume an important role.This issue has been given very little consideration in the literature and deserves more attention.
Our study revealed that only the system recommendation was significant, something that is not unrelated to the more 'solitary' profile with which the streamer watches the programs.This trend may frustrate any attempt to influence the diversification of programs that the individual watches since, according to some authors, the consumers may close themselves in a kind of 'bubble' since the contents recommended by the system are related to the individual history consumed (Gutzeit et al., 2021).Finally, according to the results of our study, the streamer of short/medium duration videos is young (male or female), and the streaming industry cannot ignore users of free platforms.
In sum, people now have more choices and seem to be dividing their digital entertainment options more evenly based on the kinds of value they offer.For many, digital media is entertaining and can offer utility, foster community and support emotional needs.For more people, the digital and physical are likely becoming equally real and meaningful.Generation Z is the first generation to grow up with smartphones, social media and always-on access to the internet.Their brains and behaviours are being shaped equally by the physical and digital worlds, further invoking the nascent metaverse.They may hold the keys to the future of media and entertainment.
This study thus aims to contribute to the literature on video streaming consumption since we are not aware of any studies that make this distinction between the two types of consumption to date.
Limitations
In this work, a non-probability sampling technique was used, so using a stratified sampling technique, namely by gender, age group or region of the country, is a way to enrich this study and confirm the results obtained.Another limitation of this study was that consumers were not asked about their behaviour regarding streaming consumption before, during and after the confinement period.This is a suggestion for future work.In this study, we used a multiple logistic regression model to distinguish the factors that could be associated with the consumption of TV series to medium/short duration videos, and perhaps it would be interesting for future work to use a model hedonic-motivation system adoption model (HMSAM) to study consumption of the two types of programs separately.
Figure 3. Streamers by Type of Platform.
Figure 4. Streamers by Type of Device.
Table 1. Questions to Assess Consumer's Motivations.
Table 2. Questions to Assess the Consumption Pattern (items adapted from Ort et al., 2021 and Forte et al., 2021, e.g., 'one more episode or one more video, and then I'll turn it off' when watching series or videos).
Table 4. Results of the Chi-square Test.
Table 5. Results of the t-Test for Independent Samples. | 2024-02-27T17:23:40.331Z | 2023-12-30T00:00:00.000 | {
"year": 2024,
"sha1": "45acb8774c1656ff92f76b26fd021bf1d996bbdf",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/09760911231214155",
"oa_status": "HYBRID",
"pdf_src": "Sage",
"pdf_hash": "661d1469be47ed1da727e6ad501a2b2d037197cd",
"s2fieldsofstudy": [
"Business",
"Sociology"
],
"extfieldsofstudy": []
} |
1369362 | pes2o/s2orc | v3-fos-license | Chaotic Method for Generating q-Gaussian Random Variables
This study proposes a pseudo random number generator of q-Gaussian random variables for a range of q values, -infinity<q<3, based on deterministic chaotic map dynamics. Our method consists of chaotic maps on the unit circle and map dynamics based on the piecewise linear map. We perform the q-Gaussian random number generator for several values of q and conduct both Kolmogorov-Smirnov (KS) and Anderson-Darling (AD) tests. The q-Gaussian samples generated by our proposed method pass the KS test at more than 5% significance level for values of q ranging from -1.0 to 2.7, while they pass the AD test at more than 5% significance level for q ranging from -1 to 2.4.
I. INTRODUCTION
The q-Gaussian distributions have been studied in a wide variety of fields from natural sciences to social sciences. They have been applied in thermodynamics, biology, economics, and quantum mechanics. The generating mechanism is still an open question, but several mechanisms that have been shown to produce q-Gaussian distributions are known, such as multiplicative noise, weakly chaotic dynamics, correlated
II. REVIEW OF THE GENERALIZED BOX-MULLER METHOD
The zero-mean normal q-Gaussian distribution parameterized by q is described as where B(a, b) is the beta function, which is defined as For q < 1 symmetric distributions with compact support ranging from − 2 1−q to 2 1−q appear. Specifically, the normalized Wigner distribution is obtained at q = −1. In the case of 1 < q < 3, Equation 1 has heavy-tails and g(x; q) ≈ const.|x| ν−1 , where ν = (3 − q)/(q − 1) > 0 is related to the degree of freedom of the Student's t-distribution. ν is coincident with the tail index of the complementary cumulative distribution of g(x; q). This also gives an existence condition in the heavy-tail regime of the q-Gaussian distribution.
Firstly, let us start our discussion from the GBMM proposed by Thistleton et al. [13]. To introduce their method to generate q-Gaussian random variable, we define a q-analog of both exponential and logarithmic function. Definition 1. Suppose the one-dimensional ordinary differential equation The solution is given as We call the solution h(w) q-exponential function. Obviously, one has ln q (w) = w 1−q − 1 1 − q (w > 0), May 1, 2014 DRAFT which we call the q-logarithmic function. Clearly, we get lim q→1 w 1−q − 1 1 − q = ln w.
Definition 3. The GBMM [13] is given by the transformations in Equation 8 from i.i.d. uniform random variables u_1 and u_2 ranging from 0 to 1.
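Equation 8 itself is not legible in the extracted text. The sketch below therefore assumes the standard GBMM transform of Thistleton et al., z = sqrt(−2 ln_{q'}(u_1)) cos(2π u_2), where the index q' used inside the q-logarithm is (1 + q_target)/(3 − q_target); this is the inverse of the relation q = (q' + 1)/(3 − q') quoted later in the proof of Proposition 2, so the output follows a q-Gaussian with the target index. Treat it as an illustration rather than the authors' exact Equation 8.

import numpy as np

def gbmm_q_gaussian(q_target, size, rng=None):
    # Generalized Box-Muller sketch (assumed standard form, see lead-in).
    rng = np.random.default_rng() if rng is None else rng
    u1 = rng.random(size)
    u2 = rng.random(size)
    q_int = (1.0 + q_target) / (3.0 - q_target)      # index used inside ln_q
    if np.isclose(q_int, 1.0):                        # q_target = 1: ordinary Box-Muller
        lnq = np.log(u1)
    else:
        lnq = (np.power(u1, 1.0 - q_int) - 1.0) / (1.0 - q_int)
    return np.sqrt(-2.0 * lnq) * np.cos(2.0 * np.pi * u2)

# Example: 10^5 deviates in the heavy-tailed regime
# samples = gbmm_q_gaussian(1.5, 100_000)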
Proposition 1. The joint probability density of x and y in Equation 8 is given by Equation 9. Proof of Proposition 1. From Equation 8, we obtain Equation 10, where we used the equality in Equation 11. Note that Equation 10 is recognized as a two-dimensional q-normal distribution with r = 2 − 1/q and D = 2. This is properly parameterized with each marginal q-variance equal to one.
Proof of Proposition 2. Integrating Equation 9 in terms of y, we obtain Equation 13. In the case of q = 1, we obviously recover the standard Gaussian. In the case of 1 < q < 3, using the equality among the beta and gamma functions, B(a, b) = Γ(a)Γ(b)/Γ(a + b), where the gamma function is defined as Γ(a) = ∫₀^∞ t^(a−1) e^(−t) dt, Equation 15 can be rewritten; setting q = (q′ + 1)/(3 − q′), we obtain the marginal density in terms of q′. In the case of q < 1, the joint density p_{X,Y}(x, y) has a compact support; similarly to the case of 1 < q < 3, setting q = (q′ + 1)/(3 − q′), we obtain the marginal density, where |x| ≤ √((3 − q′)/(1 − q′)). Figure 1 shows the distribution of Equation 9 for several cases of q. The distribution is rotationally symmetric. The marginal distribution in terms of ξ is also equivalent to Equation 13.
They proved that for d ≥ 2 the ergodic invariant measure of the map dynamics w_{n+1} = P_d(w_n) has the explicit invariant density µ_W(w) = 1/(π√(1 − w²)). More generally, let us extend the Chebyshev polynomial to a two-dimensional case as in [21]. Definition 4. P_d(w) and Q_d(w, v) are given as the real and imaginary parts of the binomial expansion of (w + iv)^d, where the equality w² + v² = 1 is necessary in order to obtain P_d(w) from this expansion. Here, in this definition we used the Euler equality e^{iθ} = cos θ + i sin θ. This P_d(w) is the Chebyshev polynomial of degree d. The first few polynomials can be written out explicitly.
Definition 5.
For d ≥ 2, we define the map dynamics on the unit circle w_n² + v_n² = 1 through P_d and Q_d, i.e. w_{n+1} = P_d(w_n) and v_{n+1} = Q_d(w_n, v_n). The set of variables (w_n, v_n) is uniformly distributed on the unit circle if we set an initial condition (w_0, v_0) on the unit circle; we set v_0 as an arbitrary value in (0, 1) and w_0 such that w_0² + v_0² = 1. Proof of Lemma 1. In addition to P_d(w) = cos(dθ), we introduce Q_d(w, v) = sin(dθ), where w = cos θ and v = sin θ, 0 ≤ θ ≤ 2π. From the equality given in Equation 25, the angle θ_n of w_n + iv_n follows the map dynamics θ_{n+1} = dθ_n (mod 2π), which is ergodic and has the ergodic density function [12] p_Θ(θ) = 1/(2π). Transforming the orthogonal coordinates (w, v) into the polar coordinates (a, θ) by w = a cos θ and v = a sin θ, we have p_A(a) = δ(a − 1). Since one has a = √(w² + v²), ∂w/∂a = cos θ, ∂w/∂θ = −a sin θ, ∂v/∂a = sin θ, and ∂v/∂θ = a cos θ, the Jacobian matrix can be written down explicitly. Therefore, the joint density of the ergodic invariant measure of w and v can be described in terms of w and v (Equation 32). Integrating Equation 32 with respect to v and w, we respectively obtain the marginal densities (Equation 36). Definition 6. As an alternative method for generating q-Gaussian random variables, we propose chaotic maps based on the map dynamics given in Equations 37 and 38, together with the transformation in Equation 40, where T_l(u) is an l-th order piecewise linear map defined in Equation 42. For example, in the case of l = 2, Equation 42 gives the tent map; in the case of l = 3, Equation 42 gives the corresponding three-branch map. The number of iterations c is an integer greater than or equal to 1. The order l of the piecewise linear map is an integer greater than or equal to 2. By using the product among z_n, w_n, and v_n, we can also obtain two-dimensional deterministic dynamics. The random seed of this pseudo-random generator is given by the initial conditions. Note that the factor 2 in front of the q-exponential function in Equation 43 should be replaced with a value smaller than and close to 2, such as 1.99999, to correct for rounding errors in actual numerical computation.
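Because the display equations defining the maps are not reproduced in this extraction, the following Python sketch is a plausible reconstruction from the surrounding description rather than the authors' exact Equations 37-45 or the C code of Appendix A. It restricts itself to the l = 2 (tent map) case: (w, v) are advanced by the degree-d Chebyshev dynamics on the unit circle, u is advanced by c tent-map iterations (with the slope kept just below 2, following the rounding-error remark above), z = √(−2 ln_q(u)), and the output is the product ξ = w z.

```python
import numpy as np

ln_q = lambda u, q: (u**(1.0 - q) - 1.0) / (1.0 - q)   # q-logarithm (assumes q != 1)

def tent(u, slope=1.99999):
    """Tent map on (0, 1); slope just below 2 keeps the float orbit off u = 0."""
    return slope * u if u < 0.5 else slope * (1.0 - u)

def chaotic_qgauss(q, n, d=8, c=1, v0=0.3, u0=0.1234):
    """Sketch of the proposed chaotic generator (l = 2 case).

    Output xi_n = w_n * z_n should be marginally q'-Gaussian with
    q' = (3q - 1) / (q + 1).
    """
    w, v, u = np.sqrt(1.0 - v0**2), v0, u0      # initial point on the unit circle
    xi = np.empty(n)
    for i in range(n):
        zc = complex(w, v) ** d                 # (w + iv)^d multiplies the angle by d
        w, v = zc.real, zc.imag
        norm = np.hypot(w, v)                   # re-normalise against round-off drift
        w, v = w / norm, v / norm
        for _ in range(c):
            u = tent(u)
        xi[i] = w * np.sqrt(-2.0 * ln_q(u, q))
    return xi

samples = chaotic_qgauss(q=5/3, n=10_000)       # target q' = 1.5
```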
Proof of Lemma 3. The density of the ergodic invariant measure [12] of the piecewise linear map follows the uniform distribution p_U(u) = 1 (0 < u < 1), independently of l and c. Since we obtain du/dz = u^q √(−2 ln_q(u)) from the transformation in Equation 40, the density of z follows; in this derivation, we used the equality introduced in Equation 11. The resulting density is the q-Gaussian distribution, which is the same as Equation 9. Proof of Theorem 1. By using Equation 28 and Equation 46, the joint density p_{Ξ,H}(ξ, η) of the ergodic invariant measure in terms of ξ and η follows. Theorem 2. The marginal density of ξ is a one-dimensional q-Gaussian distribution with q′ = (3q − 1)/(q + 1). Hence, sequences ξ_n generated from the maps introduced above are random numbers sampled from a q′-Gaussian distribution, where q′ = (3q − 1)/(q + 1).
IV. NUMERICAL SIMULATION
Figure 2 shows sample paths for several values of q′. As shown in these figures, the paths appear to change from a trapped random walk to a Lévy-like walk as q increases. Figure 3 shows that … holds at z = √(−2 ln_q(1/2)). Since one has the derivative of the map explicitly, the Lyapunov exponent of z_n, defined as λ = lim_{N→∞} (1/N) Σ_n log |f′(z_n)|, is computable. Here, h_KS is the Kolmogorov-Sinai entropy. The relation λ = h_KS holds in the one-dimensional case by the Pesin identity. Independently of the initial conditions (v_0, z_0) and the parameter q, it is numerically confirmed that the Lyapunov exponent λ approaches log(2) at l = 2 and c = 1. This is consistent with the theoretical value for a chaotic map that is conjugate, through a diffeomorphism g, to the tent map. More generally, the Lyapunov exponent λ approaches c log(l) in the general case of f_{l,c}. This iterated map is deterministic; however, the auto-correlation function of the product variable ξ = wz decays to 0 for m ≥ 1 from the orthogonality of the Chebyshev polynomials. Obviously, the expectation value of ξ is 0. Since, due to the independence of w and z, the correlation factorizes, we obtain the auto-correlation of ξ (for 1 < q < 5/3).
Note that C(0) is not finite for 5/3 < q < 3, since the variance of the q-Gaussian distribution is not finite for 5/3 < q < 2 and is undefined for 2 < q < 3. In this derivation, we used the permutability and the orthogonality of the Chebyshev polynomials. In the same way, it can be proved that the auto-correlation function of the product variable η = vz also decays to 0 for m ≥ 1.
The cumulative distribution of ξ generated by Equations 37, 38, and 45, defined in the usual way, can be expressed in closed form (Equation 60), where β(x; a, b) is the regularized incomplete beta function and erfc(x) is the complementary error function, erfc(x) = (2/√π) ∫_x^∞ e^(−t²) dt. We compare the cumulative distributions of ξ_n obtained from Equations 37, 38, and 45 with Equation 60. Since we normally generate q-Gaussian random variables for a given q′, for practical usage we need the inverse relation between q and q′: q = (q′ + 1)/(3 − q′). Figure 4 shows the empirical complementary cumulative distributions of ξ, computed from 10,000 samples for (w_0, z_0) = (0.1, 1.0). Comparing the empirical distribution with the theoretical one, we found that they are very close for each parameter q′.
We conducted the Kolmogorov-Smirnov (KS) and the Anderson-Darling (AD) tests in order to verify whether the empirical distributions of sequences generated by our proposed method converge to the q-Gaussian distributions. It is known that the Anderson-Darling test is suitable for checking the goodness-of-fit for heavy-tailed distributions [22]. Assuming M samples ξ_1, . . . , ξ_M, the test statistics are weighted sup-norm distances between the empirical and theoretical distributions, where F_M(ξ_n) is the empirical cumulative distribution function and ψ(u) is a weight function. In the case of ψ(u) = 1, Z gives a KS test statistic, and in the case of ψ(u) = 1/(u(1 − u)), Z gives an AD test statistic. Table I shows the best p-values of both KS and AD tests for several q values at d = 8, l = 2, and c = 1. The p-value of the KS test is greater than 0.1 for q < 2.7. Therefore, the null hypothesis that the sequences are samples from the theoretical distribution is not rejected at the 5% significance level for q values from −1 to 2.6 in the KS test. The degree of freedom ν goes to 0 as q approaches 3.
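The weighted statistic just described can be computed directly from its definition. The sketch below does so for a sample and a user-supplied theoretical CDF; it follows the description above (ψ = 1 for the KS case, ψ(u) = 1/(u(1 − u)) for the AD-type weighting), but the exact normalization and p-value machinery of the paper, and the q-Gaussian CDF of Equation 60, are not reproduced here (a standard normal CDF is used only as a stand-in).

```python
import numpy as np
from scipy.stats import norm

def sup_statistic(samples, cdf, weight=None):
    """Weighted sup-norm distance between the empirical CDF F_M and a target CDF F."""
    x = np.sort(np.asarray(samples, dtype=float))
    m = x.size
    f = cdf(x)
    f_hi = np.arange(1, m + 1) / m          # empirical CDF at each order statistic
    f_lo = np.arange(0, m) / m              # empirical CDF just below it
    diff = np.maximum(np.abs(f_hi - f), np.abs(f_lo - f))
    if weight is not None:                  # e.g. the AD-type weight 1 / (u (1 - u))
        diff = diff * weight(np.clip(f, 1e-12, 1.0 - 1e-12))
    return diff.max()

x = np.random.default_rng(0).normal(size=5000)     # stand-in for generated samples
z_ks = sup_statistic(x, norm.cdf)
z_ad = sup_statistic(x, norm.cdf, weight=lambda u: 1.0 / (u * (1.0 - u)))
```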
For q > 2.7 (ν < 0.17), both the proposed procedure and the GBMM fail to work, since the degree of freedom ν is very small. The p-value of the AD test is greater than 0.1 for q < 2.4. Since the AD test is sensitive to tail events, the null hypothesis is not rejected only up to a smaller value of q than for the KS test. Table II shows the p-values of both KS and AD tests for several q values at d = 6, l = 2, and c = 6. The tendency of the p-values is very similar to that at d = 8, l = 2, and c = 1. The KS test passes at the 5% significance level for q values ranging from −1 to 2.6. The same is true for −1 ≤ q < 2.4 in the case of the AD test.
While the GBMM [13] is based on a transformation of uniform random variables, our proposed method is a purely mechanical generation of the q-Gaussian distribution based on ergodic theory. Thus, no external random numbers are assumed for the generation of the q-Gaussian distribution. Its implementation is very simple, as shown in the example code in Appendix A. Figures 6 (d = 8, l = 2, and c = 1) and 5 (d = 6, l = 2, and c = 6) show the best p-values of (a) the KS test and (b) the AD test obtained from 10,000 samples in 100 trials with the proposed method and the GBMM for several q. The best p-values provided by the proposed method are the same as those provided by the GBMM in many cases.
V. CONCLUSION
We proposed a pseudo-random number generator of q-Gaussian random variables for a range of q values, −∞ < q < 3, based on deterministic map dynamics. Our method consists of an ergodic transformation on the unit circle and map dynamics based on the piecewise linear map. We conducted both KS and AD tests for random number sequences generated by the GBMM and by our proposed chaotic method for several values of q. The q-Gaussian samples passed the KS test at the 5% significance level for q < 2.7, and passed the AD test at the 5% significance level for q < 2.4.
APPENDIX A SOURCE CODE
We show C source code for our proposed method for d = 8, l = 2, and c = 1. The code is exhibited in order to demonstrate the algorithm and is not optimized for speed. The algorithm is implemented in four functions. The first two functions compute the q-exponential and q-logarithmic functions. The function setseed_qnormal(v0, z0) sets the two random seeds v0 and z0, and qnormal(q) calls the iterated map to generate q-Gaussian random variables by our proposed method. | 2013-01-09T14:25:17.000Z | 2012-05-08T00:00:00.000 | {
"year": 2013,
"sha1": "6a0478166567c01d0989de8a1f91ceb8a9cf3c0f",
"oa_license": "implied-oa",
"oa_url": "https://doi.org/10.1109/tit.2013.2241174",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "00fa5e8bb76fe564d1f1805312794f5d40b33fdd",
"s2fieldsofstudy": [
"Mathematics",
"Physics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science",
"Physics"
]
} |
269792223 | pes2o/s2orc | v3-fos-license | Ten-year trend analysis of malaria prevalence in Gindabarat district, West Shawa Zone, Oromia Regional State, Western Ethiopia
Background Malaria is a major public health concern in Ethiopia, where more than half of the population lives in malaria risk areas. While several studies have been conducted in different eco-epidemiological settings in Ethiopia, there is a notable scarcity of data on the prevalence of malaria in the Gindabarat district. Therefore, this study aimed to analyse the 10-year trend of malaria prevalence in Gindabarat district, West Shawa Zone of Oromia, Western Ethiopia. Methods A retrospective laboratory record review was conducted at Gindabarat General Hospital and Gindabarat District Health Office from September 2011 to August 2020. The retrieved data included the date of examination, age, sex and laboratory results of the blood smears, including the Plasmodium species identified. Data were summarized and presented in the form of tables, figures, and frequencies. The data were analysed using SPSS (version 25.0) and Microsoft Excel. Results Over the course of 10 years, a total of 11,478 blood smears were examined in the public health facilities in the district. Of the total blood smears examined, 1372 (11.95%) were microscopically confirmed malaria cases. Plasmodium falciparum, Plasmodium vivax and mixed infections (P. falciparum and P. vivax) accounted for 70.77%, 20.55% and 8.67% of the cases, respectively. Malaria prevalence was significantly higher among individuals aged ≥ 15 years (12.60%, χ2 = 13.6, df = 2, p = 0.001) and males (14.21%, χ2 = 59.7, df = 1, p = 0.001). The highest number of malaria cases was recorded from September to November. Conclusion Malaria remains a public health problem in the district. P. falciparum was the most predominant parasite species in the area. Malaria prevalence was significantly higher among individuals aged ≥ 15 years and males. There was a remarkable fluctuation in the number of malaria cases across different months and years. In the study area, malaria cases peaked in 2015 and 2017, decreased from 2017 to 2019, and then increased sharply in 2020. Moreover, this study showed that malaria cases were reported in all seasons and months, with the highest numbers observed from September to November. Strengthening malaria control activities is essential to further reduce the burden of malaria and pave the way for the anticipated elimination. Supplementary Information The online version contains supplementary material available at 10.1186/s12936-024-04975-2.
Background
Malaria, while preventable and treatable, is one of the world's deadliest diseases, and has a substantial influence on the public health and economic development of tropical countries. In 2022, an estimated 249 million malaria-related illnesses and 608,000 deaths were reported globally [1]. Sub-Saharan Africa bears a disproportionately high burden of the disease, and children under 5 years of age are particularly vulnerable. Over the last two decades, remarkable achievements have been obtained in malaria control globally. Several countries have been planning for malaria elimination, some of which have already eliminated malaria from their national territories in the past two decades [2]. Nonetheless, the successes obtained in the control of malaria are threatened by insecticide resistance in malaria vectors, anti-malarial resistance in malaria parasites and the occurrence of invasive malaria vectors [3][4][5].
Despite the considerable gains in the control of malaria, it remains a public health problem in Ethiopia. Approximately three-quarters of the land mass is favorable for malaria transmission, and more than half of the population lives in these areas. In most malaria risk areas of the country, malaria transmission is seasonal and peaks between September and December following the major rainy season, which spans from June to August. Areas below 1000 m above sea level, mainly the western lowlands, experience perennial high-burden malaria transmission. Almost all cases of malaria in Ethiopia are caused by Plasmodium falciparum and Plasmodium vivax, and there is remarkable spatiotemporal variation in the distribution of these species [6,7]. Malaria in Ethiopia is primarily transmitted by Anopheles arabiensis, while Anopheles pharoensis, Anopheles funestus and Anopheles nili play a secondary role. Of concern is the recent detection and spread of the invasive malaria vector Anopheles stephensi in Ethiopia [8,9] and in other Eastern African countries [10,11]. Anopheles stephensi is native to parts of Asia, and was found to have been naturally infected with Plasmodium and implicated in urban malaria outbreaks in eastern Ethiopia [12]. Malaria control in Ethiopia mainly involves the deployment of the core vector interventions (insecticide-treated nets [ITNs] and indoor residual spraying [IRS]) and passive case detection and treatment of cases.
Analysing the trend of malaria in a particular setting is essential for a better understanding of the effectiveness of control interventions and progress towards elimination. Ethiopia has set an ambitious goal of eliminating malaria by 2030. Nevertheless, there have been reports of malaria outbreaks in several areas of Ethiopia in recent years [13][14][15]. According to the 2023 World Health Organization (WHO) malaria report, the number of malaria cases in Ethiopia increased by 1.3 million in 2022 compared to the preceding year [1]. Despite favourable climatic conditions for malaria transmission in Gindabarat district and recent challenges related to disruption of the health care system due to the COVID-19 pandemic, the trend of malaria cases in the district is not known. The aim of this study was to describe the trend of malaria cases diagnosed at Gindabarat General Hospital and the district health office, West Shawa Zone, Western Ethiopia.
Study setting
The study was conducted in Gindabarat district, located in the West Shawa Zone, Oromia Regional State, Western Ethiopia. The district is 200 km from Addis Ababa (Fig. 1). Gindabarat District has an altitude range of 1500-3500 m above sea level and is characterized by a mean annual temperature ranging from 20 to 25 °C [16,17] and an average annual rainfall of 1150 mm. According to the projection of the Central Statistical Agency of Ethiopia carried out in 2015, the estimated population of the district was 104,595 people, 52,726 (50.4%) of whom were male and the remaining 51,869 (49.59%) female [18]. The district has 32 health posts, six health centres and one general hospital. Gindabarat General Hospital and the district health office were selected for this study because the health posts are almost all newly established institutions and most of the district population visits Gindabarat General Hospital and the district health office (health centres report monthly with a carbon copy that is used as a logbook at the district health office). Malaria is the most prevalent disease and is seasonal in the district, where both P. vivax and P. falciparum co-exist.
Study design
An institution-based retrospective cross-sectional study was conducted by reviewing the malaria case records from the registers of the district health office and Gindabarat General Hospital; the record review was carried out from 01 September to 30 December 2021.
Study population
The study population included all individuals with suspected malaria who had visited Gindabarat General Hospital and the district health office (monthly reports from the health centres of the district) from September 2011 to August 2020. The study covered the period from 2011 to 2020, capturing data on all confirmed malaria cases diagnosed and treated at the hospital that fulfilled the inclusion criteria.
Inclusion criteria
The analysis included data such as the number of malaria cases diagnosed by month and year, the malaria species identified, and sociodemographic data (age and sex), regardless of pregnancy or other infection status.
Exclusion criteria
Any data that did not meet the inclusion criteria were excluded. Data with incomplete information on any of the relevant variables were excluded. The data included year and month of the visit, sex, age, status of the blood film (positive or negative), and species of Plasmodium detected.
Sample size
All 11,478 malaria reports documented in the laboratory logbooks and reports of the health centres from September 2011 to August 2020 that met the inclusion criteria were taken for analysis.
Data collection and quality control
A well-organized checklist was used for collecting data on malaria cases and related information registered from 2011 to 2020. The data were retrieved from the laboratory logbooks of Gindabarat General Hospital and the district health office. The information contained on the checklist included the year and month of the visit, sex, age, status of the blood film (positive or negative), and species of Plasmodium found. An expert medical laboratory technologist collected the data. The WHO protocol was followed in the hospital, where microscopic blood film screening was performed as the gold standard to confirm the presence of Plasmodium parasites and identify the species. The national standard operating procedure was followed for examining blood films for malaria parasites. The microscopic examination was carried out by laboratory technologists or technicians who had received thorough training in malaria microscopy. Throughout the 2011-2020 study period, microscopy was the only method employed to identify Plasmodium species, because expertise and electricity were available in all health facilities (health centres and the district hospital). The data were gathered under supervision and, before analysis, were verified as complete. Prior to the study, the data collectors and the supervisor were trained for 2 days to ensure the quality of the data. They were trained on the data collection tools, variables of interest, rationale, objective and significance of the study. Similarly, the data entry clerks were trained on the same points. The whole process of data capture and data entry was supervised daily by the principal investigator to ensure the completeness and consistency of the data. Data that were not fully registered were not included in the analysis.
Data processing and analysis
To conduct the analysis, the obtained data were entered into Epidata version 3.1 and exported to the Statistical Package for the Social Sciences (SPSS) version 25. Some of the figures were also created using Microsoft Excel. Tables, figures, and frequencies were used to present the results. Descriptive statistics were used to ascertain the frequencies and percentages of overall malaria prevalence and of trends by year, season, Plasmodium species, sex, and age. The association between malaria burden and sex or age group was examined using a chi-square test. A P-value of 0.05 or lower indicated statistical significance.
Results
During the 10 years (2011-2020), a total of 11,478 blood films from malaria-suspected patients were examined at Gindabarat General Hospital and the district health office. The prevalence of malaria fluctuated during the 10 years of the study, with the minimum (8.4%) and maximum (13.5%) annual malaria prevalence reported in 2019 and 2017, respectively, as indicated below. The number of suspected malaria cases peaked between 2011/12 and 2018 in the district (Table 1).
Figure 2 shows that the numbers of P. falciparum, P. vivax and mixed infections increased after 2014. Generally, there was a fluctuation in the number of malaria cases throughout the study period, and the graphs of the yearly number of malaria cases exhibited a "V" shape from 2011 to 2015, 2015 to 2017, and 2017 to 2020 (Fig. 3).
The significance of the mean differences in malaria cases between years over the study timeline was checked using one-way ANOVA with a post hoc Tukey's test, as indicated by the output below. There was a statistically significant difference in means between years from 2011 to 2020 (F(9, 11468) = 2.698, P = 0.004). Generally, there is a statistically significant mean difference among years from 2011 to 2020, as illustrated in the ANOVA output (Table 2).
There were significant decreases in malaria cases from 2011 to 2019, from 2015 to 2019, from 2016 to 2019, and from 2017 to 2019. However, there was a significant increase from 2019 to 2020 (Table 3; Fig. 4).
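A generic way to reproduce the year-to-year analysis described above (one-way ANOVA followed by Tukey's HSD) is sketched below. The file and column names are placeholders, not the authors' analysis script; the layout assumes one row per examined blood film with a 0/1 positivity indicator, which is consistent with the error degrees of freedom reported above (11,468 ≈ 11,478 − 10).

```python
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical long-format data: one row per examined blood film,
# with the diagnosis `year` and a 0/1 `positive` indicator.
df = pd.read_csv("malaria_records.csv")      # placeholder file name

groups = [g["positive"].to_numpy() for _, g in df.groupby("year")]
F, p = f_oneway(*groups)
print(f"One-way ANOVA: F = {F:.3f}, p = {p:.3g}")

tukey = pairwise_tukeyhsd(endog=df["positive"], groups=df["year"], alpha=0.05)
print(tukey.summary())                        # pairwise year-to-year comparisons
```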
Distribution of malaria cases by sex
Of the 1372 confirmed malaria cases, 833 were among males (positivity rate 14.21%, 833/5862) and 539 among females (9.60%, 539/5616), giving a male-to-female ratio of 1.54. The distribution of Plasmodium species in relation to sex is shown in Fig. 5. There was a statistically significant association between malaria prevalence and sex (χ2 = 59.7, df = 1, P = 0.001) (Fig. 5).
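The sex comparison just reported can be reproduced directly from the stated counts; the short sketch below uses scipy, and the resulting statistic may differ marginally from the published χ2 = 59.7 depending on rounding and continuity-correction settings.

```python
import numpy as np
from scipy.stats import chi2_contingency

# 2x2 table built from the reported counts:
# males:   833 positive of 5862 examined; females: 539 positive of 5616 examined
table = np.array([[833, 5862 - 833],
                  [539, 5616 - 539]])
chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"chi-square = {chi2:.1f}, df = {dof}, p = {p:.2e}")
```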
Distribution of Plasmodium species by age
The distribution of parasite species in relation to age group is shown in Fig. 6. There was a statistically significant association between malaria prevalence and age group (χ2 = 13.6, df = 2, P = 0.001). Patients in the ≥ 15 years age group were most affected, with a positivity rate of 12.60% (1044/8284), followed by those in the 5-14 years age group and those under five years of age, with positivity rates of 10.65% (251/2356) and 9.20% (77/837), respectively. Concerning Plasmodium species, P. falciparum was the predominant species in all age groups and was most common in the ≥ 15 years age group, in which P. vivax was the second most common species; these accounted for 727 (52.98%) and 221 (16.1%) of the cases, respectively (Fig. 6).
Seasonal distribution of malaria
Malaria cases were reported in all seasons and months. The highest number of malaria cases was observed from September to November and the lowest from March to May. At the species level, P. falciparum accounted for the largest number of cases in all seasons, followed by P. vivax, while mixed infections (P. falciparum + P. vivax) were the least common in all seasons (Fig. 7).
Discussion
Malaria is a major public health concern in many regions of the world, especially in sub-Saharan Africa, and it is one of the most common forms of infection in Ethiopia. Patients in this region suffer greatly from malaria, which has high transmission rates and related morbidity and mortality. The slide positivity rate in this retrospective study was 11.95%. This finding is comparable with those of other studies conducted in Ethiopia, such as those of Arsi Negele (11.4%) [19], Kombolicha (7.52%) [20] and Northern Shoa (8.4%) [21]. In contrast, the prevalence in the present study was much greater than that in a study conducted in Saudi Arabia (0.1%) [22]. However, the current malaria prevalence is much lower than that reported in studies conducted in Wereta town (32.6%) [23], Kola Diba (39.6%) [24], the Welega zone (20.07%) [25], and the Omo Zone of Southern Ethiopia (41.5%) [26]. These differences in malaria prevalence might be due to differences in climatic conditions, the skill of laboratory personnel in identifying malaria, the types of malaria intervention activities in the areas, and the diagnostic techniques used for malaria. The number of malaria cases is increasing in various parts of Ethiopia as a result of malaria outbreaks or high malaria transmission settings.
The prevalence of malaria fluctuated annually, with the maximum and minimum numbers of cases recorded in 2017 and 2019, respectively. The decrease in malaria cases in 2019 might be attributable to increased community awareness of the use of ITNs and to improved environmental management. Ethiopia currently plans to eliminate malaria by 2030 in collaboration with different stakeholders, and over the past decade the community has shown a strong willingness to engage in malaria prevention and control. This result was in line with the globally decreasing burden of malaria, with an estimated 1.5 billion cases averted between 2000 and 2019; the majority of these (82%) were in the WHO African Region, which includes Ethiopia [27]. A notable drop in malaria has also been recorded in Ethiopia [28][29][30].
In this study, the prevalence of malaria was higher in males (14.21%, 833/5862) than in females (9.60%, 539/5616), which is comparable with earlier studies carried out in Eastern Wollega [25], Southwest Ethiopia [31] and South central Ethiopia [32]. This might be due to the occupation and lifestyle of males. Males are usually involved in irrigation activities, agricultural activities, and day labour, settings which might provide suitable mosquito breeding sites; in addition, males are usually engaged in outdoor activities at dusk and dawn, which may coincide with peak mosquito biting hours. These findings are in line with those of a study conducted in India [33]. Among the age groups, the prevalence of malaria was highest in the 15 years and above age group (12.60%), followed by the 5-14 years age group (10.65%). This was in agreement with the findings of a study conducted at the Kola Diba Health Centre [24]. However, in contrast to these findings, a study conducted in Wolaita Zone showed a high malaria positivity rate in 5-14 year-olds [34]. The reason why malaria particularly affects individuals aged 15 years and older in the Gindabarat district might be that these age groups are productive and, as a result, are actively involved in irrigation and agricultural activities; this can increase the exposure of these groups to Anopheles mosquito bites.
In the study area, the number of malaria cases peaked from September to November, which was in agreement with the findings in Dembia [35], Dembecha [36], Northwest Tigray [37], Ataye [21], Guba [38], Jimma [30], and Harari [39], followed by June to August. In seasonal transmission areas in Ethiopia, malaria cases usually peak from September to November following the major rainy season, which spans from June to August and creates a suitable environment for the breeding of Anopheles mosquitoes. The minor transmission period is a result of a small amount of rain from April to May. This pattern may be the result of advantageous local conditions, such as standing water and hot spots in the microenvironment after substantial rainfall followed by a dry season. These conditions foster an environment that is favourable for the growth of the mosquito population, the survival of the parasite in the mosquito, the biting rate, and the spread of malaria parasites.
There was a significant yearly fluctuation in the number of malaria cases throughout the study period. Four significant peaks were observed in 2011, 2015, 2017, and 2020. The findings of our study revealed a fluctuating trend in the occurrence of malaria in the study area over the course of 10 years. A significant decrease in the number of malaria cases occurred between 2011 and 2019, 2015 and 2019, and 2017 and 2019, with the minimum number of malaria cases reported in 2019 (8.4%). Malaria control interventions have generally intensified in Ethiopia over the last decade, with the goal of eliminating malaria by 2030. However, there was a significant increase in the number of malaria cases from 2019 to 2020, with the peak number of malaria cases reported in 2017 (13.5%). The significant increase in malaria cases from 2019 to 2020 might be due to the COVID-19 pandemic, which might have interrupted health services.
The mean number of malaria cases in the study area fell from 2011 to 2013, but increased in the following two years (2014 and 2015). Malaria cases peaked in 2015 and 2017, decreased from 2017 to 2019, and then increased sharply in 2020. The decrease in malaria incidence between 2011 and 2013 and between 2017 and 2019 may be related to community awareness-raising about the use of various insecticides and repellents (such as 'Buzz Off'), enhanced quality of indoor residual spraying operations, and improved environmental management.
Malaria cases have fluctuated in general during the last 10 years in the study area. Many factors, including host and vector characteristics, social and economic influences, and changes in healthcare infrastructure, could contribute to these fluctuations. Mosquito control measures, population immunity, government policy, availability of health facilities, and drug resistance, among other social, biological, and economic factors, all have significant impacts on malaria prevalence [40,41].
Limitations of the study
The primary limitation of this study was the inconsistency and incompleteness of the retrospective data because certain essential variables such as sociodemographic data (age and sex) were missing.
Conclusion and recommendation
The total malaria prevalence declined in the study area, indicating good progress toward meeting the 2030 malaria elimination goals. Malaria, however, remains a public health concern in the region, with a slide positivity rate of 11.9%. P. falciparum is the predominant species in the study area, indicating a shift from P. vivax to P. falciparum malaria, which puts the ongoing malaria elimination campaign at risk. The reproductive age group and males were more afflicted by the infection, which was more prevalent during the cultivation season, affecting the public health and economic development of the area. Therefore, malaria control and elimination programs should be strengthened to further reduce the burden of malaria, particularly among highly affected groups. There is also a need to intensify the prevention and control strategies for P. falciparum in this area.
Fig. 1. Map of the study area.
Fig. 2. Distribution of malaria cases by diagnosis years at Gindabarat General Hospital and district health office, Western Ethiopia from 2011 to 2020.
Fig. 3. Blood smear-positive rate of malaria in Gindabarat district, Western Ethiopia, from 2011 to 2020.
Fig. 5. Distribution of malaria infection by sex in Gindabarat district, Western Ethiopia, from 2011 to 2020.
Fig. 6.
Fig. 7. Distribution of Plasmodium species in different seasons in the Gindabarat district from 2011 to 2020, Western Ethiopia.
Table 1. Annual trends in total malaria cases in Gindabarat General Hospital and district health office, Western Ethiopia; from 2011 to 2020.
Table 2. One-way ANOVA post hoc Tukey's test, for significance checks of the mean difference in Gindabarat district, Western Ethiopia, from 2011 to 2020.
Table 3. Malaria case mean difference significance checks in Gindabarat district, Western Ethiopia from 2011 to 2020. a Indicates that the difference is significant at the 0.05 level | 2024-05-17T13:05:30.866Z | 2024-05-16T00:00:00.000 | {
"year": 2024,
"sha1": "c80580b604a512d7f9d6ed84dc05699510f46d42",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "dc14444f7d22bd53ad6d7cf0c0c158af40c0af1f",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235425818 | pes2o/s2orc | v3-fos-license | Major depressive disorders in young immigrants: A cohort study from primary healthcare settings in Sweden
Aims: Previous studies on major depressive disorder (MDD) among immigrants have reported mixed results. Using data from primary healthcare settings in Sweden, we compared the incidence of MDD among first- and second-generation immigrants aged 15–39 years with natives. Methods: This was a retrospective nationwide open cohort study. Eligible individuals were born 1965–1983, aged 15–39 years at baseline, and resided in Sweden for at least one year during the study period 2000–2015. We identified MDD cases through the Primary Care Registry (PCR). The follow-up for each individual started when they met the inclusion criteria and were registered in the PCR and ended at MDD diagnosis, death, emigration, moving to a county without PCR coverage, or the end of the study period, whichever came first. Results: The final sample included 1,341,676 natives and 785,860 immigrants. The MDD incidence rate per 1000 person-years ranged from 6.1 (95% confidence intervals: 6.1, 6.2) to 16.6 (95% confidence intervals: 16.2, 17.0) in native males and second-generation female immigrants with a foreign-born father, respectively. After adjusting for income, the MDD risk did not differ substantially between first-generation male and female immigrants and natives. However, male and female second-generation immigrants had a 16–29% higher adjusted risk of MDD than natives. Conclusions: This cohort study using primary healthcare data in Sweden, albeit incomplete, indicated that second-generation immigrants seem to be at a particularly high risk of MDDs. The underlying mechanisms need further investigation.
Introduction
Millions of people have left their home countries due to war, political conflicts and economic hardship over the past decades. Based on United Nations estimates, the global immigrant population reached 272 million in 2019 [1]. Good mental health can help shape more successful integration in immigrant groups. However, many immigrants have experienced stressful and traumatizing situations that could affect both their own and their offspring's mental health.
Previous research has shown that many first-and second-generation immigrants have an increased risk of developing mental disorders [2,3]. In Europe, young refugees and asylum-seekers had a reportedly higher risk of post-traumatic stress disorder (PTSD) [4,5], emotional and behavioral disorders and anxiety disorders. A recent global systematic review found a higher risk of mood disorders among first-and second-generation immigrants than native populations [6].
Major depressive disorder (MDD) is a common mood disorder and a leading cause of lost productivity and poor health-related quality of life. Between 1990 and 2017, the global age-adjusted incidence rates of MDD increased in most countries [7]. In 2017, MDD accounted for an estimated 2.5% (95% confidence interval (CI): 1.8, 3.2) of the lost disability-adjusted life years (DALYs) among those aged 15-49 years [8]. Persons living with MDD also have an increased risk of suicide [9], alcohol dependence and drug addiction [10]. Genetic vulnerability (e.g. family history of MDD) and environmental factors (e.g. childhood adverse experiences) and their interplay may increase the risk of MDD [11].
Systematic review studies on rates of MDD among immigrants have reported heterogeneous and mixed results [4,6,12,13]. Insufficient sample sizes, use of different definitions of immigrants and suboptimal evaluation of MDD may in part account for the observed heterogeneities. Also, in past studies, more severe forms of MDD, that is those requiring hospital care, have received more attention than cases where hospitalization is not required, although Sundquist et al. have indicated that nearly 80% of MDD cases are managed in primary care settings [14]. Data on the incidence of MDD among immigrant populations based on primary healthcare diagnoses are, however, limited. Representative samples and longitudinal data sources are needed to estimate the incidence of MDD in immigrant populations.
Over the past few decades, Sweden has received a large number of immigrants including refugees/asylum seekers and non-refugees (e.g. students, labour immigrants and immigrants coming to Sweden due to family ties). A large number of refugees in Sweden originate from the Middle East and Africa, while many non-refugee immigrants are from European countries. By 2018, approximately 2,540,000 Swedish residents (around 25% of the population) had an immigrant background including first-generation immigrants (born abroad) and second-generation immigrants (born in Sweden with one or two foreign-born parents).
All residents of Sweden, including immigrants and non-immigrants, have access to affordable healthcare services that are universal. Primary care services are available within a short distance in most neighborhoods and have a high coverage in the country. The majority of the Swedish population will therefore seek healthcare through public services. Sweden also has a tradition of nationwide individual-level data collection for administrative purposes. The availability of longitudinal databases and the large number of immigrants create a unique opportunity to study immigrant mental health with a high statistical power and at a reasonably low cost. A Swedish Primary Care Register (PCR), covering all primary healthcare visits for approximately 87% of the Swedish population, has recently become available for research purposes. To the best of our knowledge, this study is the first to use large-scale primary healthcare data to estimate incidence rates and risks of MDD among first- and second-generation immigrants compared to natives aged 15-39 years.
Methods
In this retrospective open cohort study, we used individual-level data from several nationwide registers in Sweden. The individual-level data across registers were linked using a unique personal identification number, which, to preserve confidentiality, was replaced with a serial number by Statistics Sweden. The study population consisted of 2,127,536 individuals born 1 January 1965 to 31 December 1983, who resided in Sweden for at least one year between 1 January 2000 and 31 December 2015. This study was part of a larger project which received ethical approval from the Regional Ethical Review Board in Lund, Sweden (Ethics approval No: 2012/795). The latest amendment was approved by the Swedish Ethical Review Authority (Ethics approval No: 2019-01588).
Data sources and variables
We used individual-level data from the Register of the Total Population (RTB), the Multi-Generation Register, the Cause of Death Register, and the Migration Register to identify eligible individuals. Furthermore, we used the Longitudinal Integration Database for Health Insurance and Labour Market Studies (LISA) as the source of data on income. Statistics Sweden (SCB) and the National Board of Health and Welfare (Socialstyrelsen) provided us with most of the data for this analysis. For the outcome MDD, clinical diagnoses were obtained from the PCR. The PCR was created from regional data sources from a majority of Swedish counties.
Immigrant background. We defined natives as individuals with both parents born in Sweden. The immigrant group were assigned to first-or secondgeneration immigrants. Individuals in the first-generation immigrant group were born abroad with both parents born abroad. Second-generation immigrants were born in Sweden and were divided into three subgroups based on parental country of birth: (a) both parents foreign-born; (b) Swedish-born father and foreign-born mother; and (c) Swedish-born mother and foreign-born father. The individuals were distributed as follows: 1,341,676 natives (63.0%), 550,304 (25.9%) first-generation immigrants (with two foreign-born parents), 82,550 (3.9%) secondgeneration immigrants with two foreign-born parents, and 153,006 (7.2%) second-generation immigrants with one foreign-born parent.
Income. We obtained individual disposable income from the Swedish Tax Agency for the individuals who were 18 years or older at baseline. This data has nationwide population coverage and is highly accurate. For individuals younger than 18 years, we used maternal income at baseline. Disposable income was standardized per year to make the values comparable over time. We categorized the income variable into quartiles (low, mid-low, mid-high and high income). Only 11,117 individuals had a missing income. These few individuals were considered very likely to have no income at all. Therefore, we assigned them to the low-income quartile.
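A small illustration of the income handling described above is given below. The file and column names, and the choice of a within-year z-score for the per-year standardisation, are assumptions made only for this sketch; the text states only that disposable income was standardised per year, categorised into quartiles, and that the few individuals with missing income were assigned to the low-income quartile.

```python
import pandas as pd

df = pd.read_csv("cohort_baseline.csv")      # hypothetical analysis file

# Standardise disposable income within each calendar year, then cut into quartiles.
df["income_std"] = df.groupby("baseline_year")["disposable_income"].transform(
    lambda s: (s - s.mean()) / s.std())
df["income_q"] = pd.qcut(df["income_std"], 4,
                         labels=["low", "mid-low", "mid-high", "high"])

# Records with missing income are assigned to the low-income quartile.
df["income_q"] = df["income_q"].fillna("low")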
MDD diagnosis. We identified individuals with MDD through the PCR. The PCR includes individual-level information on clinical diagnoses at visits to primary healthcare centers in Sweden. The PCR covered the majority of Swedish counties during the follow-up. The digitalization of patient records at primary healthcare centers started, however, in different years across counties. Figure 1 presents the population density in 2007 (mid-follow-up time) and the number of years each county contributed with data during the follow-up period. We used the 10th revision of the International Classification of Diseases (ICD-10), codes F32 and F33, to identify individuals with MDD.
Statistical analysis
Baseline was defined as the year the individual was registered in a county that was covered by the PCR. The follow-up ended at the time of an MDD registration, death, emigration, a move to a county without coverage in the PCR, or the end of the study period (31 December 2015), whichever came first. Age-adjusted incidence rates (IR) of MDD were estimated per 1000 person-years and by sex and immigrant group. Incidence was defined as the first registration of MDD during the study period. Incidence rate ratios (IRRs) (using Swedish natives with two Swedish-born parents as the reference group) were calculated for males and females separately. Model A included adjustment for birth year and Model B also controlled for income. For all estimates, 95% CIs were used. We also calculated period prevalence of MDD for the immigrant and non-immigrant groups.
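The rate and IRR estimation described above can be illustrated with a Poisson model using log person-years as an offset. The sketch below uses placeholder file and column names and, for brevity, omits the separate male/female models and the age adjustment of the incidence rates; it is a generic example, not the authors' analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mdd_cohort.csv")           # hypothetical analysis file

# Crude incidence rates per 1000 person-years by immigrant group
grp = df.groupby("immigrant_group")
print(1000 * grp["mdd"].sum() / grp["person_years"].sum())

# Model A: adjusted for birth year; Model B: additionally adjusted for income quartile
for formula in ("mdd ~ C(immigrant_group) + C(birth_year)",
                "mdd ~ C(immigrant_group) + C(birth_year) + C(income_q)"):
    fit = smf.poisson(formula, data=df,
                      offset=np.log(df["person_years"])).fit(disp=0)
    print(np.exp(fit.params.filter(like="immigrant_group")))   # IRRs (exponentiated coefficients)
```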
Results
Table I shows the characteristics of the study population by the covariates and the outcome. First-generation immigrants had the highest percentage of individuals who moved to a region without PCR coverage. The follow-up period ranged from 5.9 years (first-generation immigrants) to 9.3 years (second-generation immigrants with two foreign-born parents). Among the natives and the immigrant groups, 117,501 and 61,964 individuals had an MDD diagnosis, respectively. Table II shows the prevalence and incidence rates of MDD in the different subgroups; the highest incidence rate, 16.6 (95% CI: 16.2-17.0) per 1000 person-years, was observed in second-generation female immigrants with a foreign-born father. The age-adjusted incidence rates in males varied between 6.1 (95% CI: 6.1-6.2) in natives and 8.2 (95% CI: 7.9-8.5) in second-generation immigrants with a foreign-born father. The corresponding incidence rates in females were approximately twice as high in each subgroup. Table III shows the IRRs of MDD comparing the male and female immigrant subgroups with the native reference group after adjusting for birth year in Model A and, in Model B, additional adjustment for income. In Model A for males, immigrants had higher risks of MDD compared with natives, with IRRs ranging between 1.15 (95% CI: 1.13, 1.17) and 1.34 (95% CI: 1.29, 1.39) in first-generation immigrants and second-generation immigrants with a foreign-born father, respectively. After adjustment for income, the risks decreased slightly but remained significant in all subgroups. In Model A for females, immigrants also had higher risks of MDD compared with natives, with IRRs ranging between 1.06 (95% CI: 1.04, 1.08) and 1.25 (95% CI: 1.22, 1.28) in first-generation immigrants and second-generation immigrants with a foreign-born father, respectively, that is, the same subgroups as in males. After additional adjustment for income, the risks decreased slightly and no longer remained significant in first-generation female immigrants.
Discussion
We conducted a retrospective cohort study based on primary healthcare diagnoses among 2 million individuals aged 15-39 years in Sweden and compared the incidence rates of MDD between natives and first-and second-generation immigrants. After controlling for the effects of income on MDD, all male and female second-generation immigrant groups had a 16-29% higher risk of MDD than the native reference group. However, the MDD risk in male and female first-generation immigrants did not differ substantially from the reference group, after adjusting for income.
Previous research on MDD risk among first-generation immigrants is heterogeneous [13]. The first-generation immigrants in our study did not have a substantially different MDD risk from natives. This finding contrasted with the results of previous studies from other countries [15][16][17]. In Canada, a systematic review indicated a lower prevalence of mood disorders among first-generation immigrants than natives [15]. The review also highlighted a lack of longitudinal studies on mood disorders among immigrants in Canada. In Finland, a register-based study found lower rates of any psychiatric disorder among first-generation immigrants than natives [17]. A meta-analysis that included 25 surveys performed in Western countries indicated a lower prevalence of major depression in first-generation immigrants compared to the general population [4].
However, some research has suggested an elevated risk of mood disorders, such as depression, among first-generation immigrants [4,18]. A systematic review by Mindlis et al. indicated an approximately 25% higher risk of mood disorders in first-generation immigrants than natives [6]. A large multicenter survey, initiated by the World Health Organization (WHO) in France, showed a higher likelihood of mood disorders among first-generation immigrants [16]. A systematic review showed a two-fold prevalence of depression among first-generation refugees compared with labour immigrants [18]. Other studies were in line with ours, showing no substantially different depression risk compared with natives. For example, in the Netherlands, there were no differences between female immigrants from Turkey and Morocco and native controls in the risk of hospitalization for depressive disorders [19]. Foo et al. conducted a systematic review on first-generation immigrants and found a 10% lower but statistically non-significant odds of depression in this group compared with natives [12]. Limited data exist on the risk of MDD and other mood disorders among second-generation immigrants [6]. The second-generation immigrant subgroups in our study had an approximately 15-30% higher risk of MDD than natives. This finding is in partial agreement with previous research. Lau et al. showed, in a nationally representative survey, that second-generation Asian American women in the USA had higher rates of depression than their first-generation counterparts [20]. A register-based study from Sweden showed a higher risk of affective disorders in second-generation Finns [21]. However, other second-generation immigrant subgroups in the study did not have higher risks of affective disorders than natives. In a systematic review from 2017, a slightly higher risk of mood disorders among second-generation immigrants compared with natives was found, but it did not reach statistical significance [6]. In our study, the risk of MDD among second-generation immigrants did not vary substantially across the different subgroups. Our results partially supported the findings of a study from Denmark [22]. Cantor-Graae et al. used register-based data on hospital admissions for any psychiatric disorder in Denmark and found that second-generation immigrants with one foreign-born parent had a higher risk than natives [22]. However, second-generation immigrants with two foreign-born parents had a lower risk of hospital admission for any psychiatric disorder than natives. In contrast to our findings, the results of a nationally representative survey in the USA did not show a higher risk of depressive disorders among second-generation young adult immigrants compared with natives [23]. Another study from the USA found varying risks of mood disorders across second-generation immigrants from different ethnic groups [24].
However, it is important to keep in mind that the existing heterogeneities between different studies may be explained by differences in reason for immigration. For example, many refugees and asylum seekers may experience larger difficulties in learning the new language and securing an employment whereas labour immigrants may have better possibilities to achieve a successful integration in the host country. Heterogeneities in previous studies may also be related to different ways to measure the outcome. Our study was based on clinical diagnoses from primary healthcare settings, whereas most previous studies were based on surveys or hospital data.
MDD is a challenging psychiatric diagnosis with key symptoms ranging from a somewhat depressed mood to suicidal behavior and completed suicide [25]. Genetic studies have suggested a moderate heritability for MDD [26] whereas psychosocial risk factors seem to play significant roles in the development of the condition during the lifespan [25]. Immigrants experience substantial stress before, during and after migration. Although this study did not investigate causal mechanisms behind the differential risks of MDD among the immigrant groups, there are several potential explanations where only a few can be elaborated upon here. Firstly, economic and employment challenges are among the main determinants of mental health in immigrants. However, we controlled for income in our estimates and the results remained almost unchanged. It is possible that other workrelated factors, including employment stability and job satisfaction, represent potential determinants of immigrants' mental health. Immigrants may be at a higher risk of losing their jobs than natives. Poorer social networks, employment opportunities and economic challenges might hit the immigrants harder than the natives. In addition, later generations of immigrants may experience an even more pronounced psychosocial stress concerning discrimination, which puts them at higher risks of mental health issues [20]. Acculturative stress may also increase the risk of depressive disorders among immigrant populations.
It is possible that the risk of MDD among the first-generation immigrants in our study is underestimated due to a lower healthcare service utilization [24]. Castaneda et al. investigated potential disparities in immigrants' uptake of mental health and rehabilitation services using combined survey-and register-based data in Finland [27]. According to their results, among those with affective symptoms, immigrants had a lower uptake of mental health and rehabilitation services than native Finns. However, in a register-based study from Denmark, male and female immigrants had a higher risk of a first-time contact for mental disorders than natives [28] and all immigrant subgroups, except those originating from Asia and Sub-Saharan Africa, had a higher likelihood of seeking psychiatric services for affective disorders than natives.
As young second-generation immigrants tend to be better educated than their parents, they may be more likely to use psychiatric services. Another study showed that the longer time a first-generation immigrant had lived in the country, the more likely they were to use psychiatric services [29]. First-generation immigrants may also perceive depression differently than natives. A qualitative study from France found four main reasons why the immigrants did not seek help for their depression at primary health clinics: the reluctance to be treated (pharmacological treatment and/or psychotherapy), the preference to see a psychiatrist directly, providential healing and not believing that treatment is necessary. First-generation immigrants were less likely to talk to doctors about their depressive symptoms despite the majority of them appreciating more support. The immigrants may also be more likely to endorse self-help strategies to deal with their mental health issues. Perceived stigma towards psychiatric diagnoses is another barrier to service utilization [30].
Our study had some limitations. First, we only controlled for income at baseline. The impact of variations in income during the follow-up was not accounted for. Second, during the follow-up, some individuals moved to areas without coverage in the PCR. First-generation immigrants were more likely to move to regions without PCR coverage. This may have led to an underestimation of the risk of MDD in the first-generation immigrants in our study. However, considering that the differences in mobility between the subgroups were not large, it should not have changed the main conclusions of this study to a large extent. Third, we do not have any estimates of the validity of MDD diagnoses in primary healthcare settings. The MDD diagnoses in primary healthcare settings are, in most cases, given by physicians and, in some cases, by certified psychologists. Medical diagnoses in Sweden have, in general, high validity [31] and it could be assumed that this is the case for primary healthcare diagnoses as well. Finally, our study was explorative and we did not investigate the potential impact of differences in service utilization, medical treatment and psychotherapy, or risk factors of poor mental health that may differ between different immigrant groups (i.e. refugees v. labour immigrants, European v. non-European immigrants) and natives. Our study also had many strengths. The large sample size and the use of unique, almost nationwide primary care data increase the generalizability of our findings. The data from the majority of primary healthcare centers in Sweden were available over the 16 years of this study. In addition, more than one assessment is needed to identify an MDD diagnosis. Therefore, the data on clinical MDD diagnoses in our study may be more representative compared to surveys based on self-report at only one assessment. However, we did not have access to detailed information on diagnostic procedures and evaluations.
Conclusions
We investigated the risk of MDD among first- and second-generation immigrant subgroups based on almost nationwide clinical data from primary healthcare centers in Sweden. The first-generation immigrants had an MDD risk that was comparable with that in natives. However, the MDD risk among all second-generation immigrant subgroups was higher than in natives. These results call for further investigation of potential explanations for our findings, such as a lower healthcare utilization in the first generation and/or poorer integration in the second generation. | 2021-06-15T06:16:22.133Z | 2021-06-14T00:00:00.000 | {
"year": 2021,
"sha1": "1c3f9ebe7660ab4b11c11c7281b18724b464be3b",
"oa_license": "CCBY",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/14034948211019796",
"oa_status": "HYBRID",
"pdf_src": "Sage",
"pdf_hash": "264abd68ddb0e8cd73473615a0ec6ac133575239",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
210859406 | pes2o/s2orc | v3-fos-license | Explosive, continuous and frustrated synchronization transition in spiking Hodgkin-Huxley neuronal networks: the role of topology and synaptic interaction
Synchronization is an important collective phenomenon in interacting oscillatory agents. Many functional features of the brain are related to synchronization of neurons. The type of synchronization transition that may occur (explosive vs. continuous) has been the focus of intense attention in recent years, mostly in the context of phase oscillator models for which collective behavior is independent of the mean value of the natural frequency. However, synchronization properties of biologically motivated neural models depend on the firing frequencies. In this study we report a systematic study of gamma-band synchronization in spiking Hodgkin-Huxley neurons which interact via electrical or chemical synapses. We use various network models in order to define the connectivity matrix. We find that the underlying mechanisms and types of synchronization transitions in the gamma-band differ from those in the beta-band. In the gamma-band, network regularity suppresses the transition while randomness promotes a continuous transition. Heterogeneity in the underlying topology does not lead to any change in the order of the transition; however, a correlation between the number of synapses and the frequency of a neuron leads to explosive synchronization in heterogeneous networks with electrical synapses. Furthermore, small-world networks modeling a fine balance between clustering and randomness (as in the cortex) lead to explosive synchronization with electrical synapses, but a smooth transition in the case of chemical synapses. We also find that hierarchical modular networks, such as the connectome, lead to frustrated transitions. We explain our results based on various properties of the network, paying particular attention to the competition between clustering and long-range synapses.
Introduction
The phenomenon of phase transition is an important part of modern statistical physics with many applications in physical and biological systems [1,2,3,4].
It is generally believed that many naturally occurring systems self-organize to the edge of a phase transition point, which can lead to many functional advantages [5,6,7]. In the case of biological systems, such transitions are generally believed to be of a critical (continuous) nature associated with a critical point [8,9] or an extended critical regime [10,11]. The synchronization transition is an interesting phase transition that might occur in some important systems such as power grids [12], ecological systems [13], and seasonal epidemic spreading [14], and it can be either continuous or, more interestingly, explosive.
It is also interesting to note that the critical brain hypothesis [9] had originally assumed that the brain operates at the edge of an activity phase transition.
However, recent theoretical [15,16] as well as experimental [17] studies show that the criticality may be associated with a synchronization transition. This possibility provides a stronger motivation to study various types of synchronization transition that may occur in network models of biological neurons.
Phase synchronization also has a key role in large-scale integration, memory, vision and other cognitive tasks performed by the human brain [18,19,20,21].
In a healthy brain, synchrony must occur at a moderate level. Excessive synchronization leads to brain disorders like epilepsy or Parkinson's disease, while schizophrenia and autism are related to a deficit of synchronization among neurons [22]. Thus a healthy brain is thought to function at the edge of a synchronization transition between order and randomness [23,16,15]. From this perspective, a slight increase in neural interactions might lead to a synchronization transition in local neural circuits. The type of resultant transition (continuous, explosive or frustrated) is therefore important. For example, when the emerging transition is a continuous one, a small change in the interaction strength changes the amount of synchronization only slightly. But if the emerging transition is an explosive one, a small increase in the interaction strength may result in a sudden emergence of global order in the neural circuit. Explosive synchronization has functional advantages if it occurs during a fast response, but it also has disadvantages if it occurs, for example, during an epileptic seizure.
Recently, we have provided a systematic study of beta-band synchronization transitions in network models of Izhikevich neurons [24] and showed that, contrary to the case of simple phase oscillators, biologically meaningful models of neural dynamics exhibit synchronization transitions which depend on the average firing frequency of neurons [24]. This difference is rooted in the fact that phase oscillator dynamics has a single time-scale (the mean value of the natural frequencies) which can be re-scaled without having any significant influence on the dynamics of the network [25], while biologically plausible neural dynamics typically has more than one time-scale, e.g. the refractory period. The frequency-dependent behavior can arise when one of these time-scales depends on a changing parameter while the other one does not, thus leading to a changing ratio of the various time-scales [24]. In fact it was shown that the patterns of transition changed significantly when one increased the average frequency to the gamma-band (>30 Hz).
Gamma-band oscillations are also an important class of rhythms appearing during a broad range of brain activities [26], and have received a great deal of attention. Gamma-band oscillations have been observed in several cortical areas, as well as subcortical structures [27]. In sensory cortex, gamma power increases with sensory drive [28], cognitive tasks including feature binding [29], visual grouping [30], stimulus selection [31,32] and attention [33]. In higher cortex, gamma power is the dominant rhythm during working memory [34] and learning [35]. Also, it has been reported that irregular gamma waves are observed in pathologies such as Alzheimer's disease [36].
Our purpose here is to provide a systematic study of a biologically motivated neuronal network. We therefore propose to study the synchronization transition in the gamma-band and examine the effect of the synaptic interaction (chemical vs. electrical synapses) as well as the topology of the network on the ensuing transition type. However, Izhikevich neurons have a tendency to burst, as opposed to spike, when one increases the input in order to increase the frequency. Furthermore, increasing the interaction strength in a network of Izhikevich neurons also leads to bursting behavior. On the other hand, Hodgkin-Huxley (HH) neurons have a large stable spiking range at gamma frequencies [37]. We therefore use network models of HH neurons in the gamma band in order to study the synchronization patterns which emerge.
Although synchronization of HH (or HH-type) neurons has been extensively studied before, e.g. in [38,39,40,41,42,43,44,45,46], a systematic study of (the order of) the synchronization transition has not been performed to the best of our knowledge. In fact, many such studies employ phase oscillator models such as the Kuramoto model [47,48]. Here, our emphasis is to ascertain the type of phase transition (e.g. continuous vs. explosive) that may occur in a collection of HH neurons and how that may depend on the synaptic interaction and/or the underlying structure (network) [49]. Surprisingly, we find that one- and two-dimensional lattice networks of spiking HH neurons exhibit no transition. Instead they exhibit quasiperiodic partial synchronization as a result of strong clustering, which does not lead to global order due to the lack of long-range interactions. Random network structures like Erdos-Renyi (ER) and scale-free (SF) networks exhibit continuous transitions with either electrical or chemical synapses, with no significant difference between SF and ER structures.
However, a small-world network with high clustering coefficient and long-range interactions exhibits an explosive (first-order) transition to synchronization when neurons interact via electrical synapses, but a continuous (second-order) transition when they interact via chemical synapses. Furthermore, we consider the role of heterogeneity by introducing a correlation between the frequency and the degree of a given neuron. We find that while heterogeneity (in degree or frequency) does not change the order of the continuous transition, a correlation between the two can lead to explosive synchronization with electrical synapses, but not with chemical synapses. Finally, we show that hierarchical modular (HM) networks with both types of synapses exhibit frustrated synchronization in an intermediate regime between the disordered and ordered phases of the system. Some of the structures studied here have been studied in the beta-band and will consequently be compared and contrasted. However, the case of correlated heterogeneity as well as HM networks are included only in the current study; their counterparts in the beta-band were not studied in ref. [24]. Consequently, such results can be compared with those of phase oscillators independent of frequency.
In the following section, we describe the model we use for our study. In Section (3), we describe our simulation details including the numerical methods used. Extensive results of our numerical study are presented in Section (4), and we close the paper with some concluding remarks in Section (5).
Model
Consider N Hodgkin-Huxley neurons on an arbitrary network. The electrical activity of the i-th neuron of the network is described by a set of four nonlinear coupled ordinary differential equations, Eqs. (1)-(4) [37], for i = 1, 2, ..., N.
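The explicit Eqs. (1)-(4) did not survive text extraction. As a hedged sketch only, the standard Hodgkin-Huxley form consistent with the variable and parameter description below (with the external and synaptic currents entering the voltage equation) reads

$$C_m \frac{dv_i}{dt} = -G_{Na}\, m_i^3 h_i (v_i - V_{Na}) - G_K\, n_i^4 (v_i - V_K) - G_L (v_i - V_L) + I_i^{DC} + I_i^{syn},$$
$$\frac{dx_i}{dt} = \alpha_x(v_i)(1 - x_i) - \beta_x(v_i)\, x_i, \qquad x \in \{m, h, n\},$$

where the exact notation and equation numbering of the original may differ.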
Here v_i is the membrane potential, m_i and h_i are the activation and inactivation variables of the sodium current, and n_i is the activation variable of the potassium current [37]. The α and β functions are the so-called rate variables of the HH neuron for each type of ionic current and depend on the instantaneous membrane potential [37]. We use C_m = 1.0, G_Na = 120 and G_K = 36 for the constant parameters, together with the standard leak conductance and reversal potentials [38]. I_i^DC is an external current which differs from one neuron to another and determines the dynamical properties of the uncoupled HH neurons. It has been shown that for I^DC > 9.8 µA/cm^2 a stable limit cycle is the global attractor for a single HH neuron [50]. We choose the values of I_i^DC randomly from a Poisson distribution with mean value 10.0 µA/cm^2. Therefore, the intrinsic firing rates are non-identical and most of the neurons spike regularly with gamma rhythms [26]. Here, we set the mean intrinsic firing rate to f ≈ 75 Hz, unless otherwise stated.
The term I_i^syn in Eq. (1) represents the synaptic current received by post-synaptic neuron i. The functional form of this current depends on the synaptic type: for a gap junction or electrical synapse the synaptic current is given by Eq. (5) [51], and if the synapse is chemical then Eq. (6) applies [51], where D_i is the in-degree of node i and g_ji is the strength of the synapse from pre-synaptic neuron j to post-synaptic neuron i. Here we assume that all existing synapses have the same strength, viz. g_ji = g a_ji, where g is the electrical conductance of the synapse and a_ji is the element of the adjacency matrix of the underlying network.
Also in Eq. (6), t_j is the instant of the last spike of pre-synaptic neuron j, τ_s and τ_f are the slow and fast synaptic decay constants, and V_0 is the reversal potential of the synapse, which is set equal to zero since we assume that all synapses in our circuit are excitatory. In this study we take τ_s = 1.7 and τ_f = 0.2, which are values obtained from experimental data [51]. From the functional form of these synaptic currents, one can expect that they might have different effects on the synchronization of neuronal networks. For example, electrical synaptic currents depend on the phase difference of the connected neurons, increasing in strength for unsynchronized neurons, while chemical synapses affect post-synaptic neurons regardless of the phase difference, decaying in strength as a function of time after the pre-synaptic firing time t_j. Therefore, for a given value of the coupling strength g, one would expect electrical synapses to provide more synchronization than chemical synapses.
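Eqs. (5) and (6) were likewise lost in extraction. As a hedged sketch, a commonly used pair of expressions consistent with the description above (in-degree normalization, reversal potential V_0, and a double-exponential time course with decay constants τ_s and τ_f) is

$$I_i^{syn} = \frac{g}{D_i} \sum_{j} a_{ji}\,(v_j - v_i) \quad \text{(electrical)},$$
$$I_i^{syn} = \frac{g}{D_i} \sum_{j} a_{ji}\left[e^{-(t - t_j)/\tau_s} - e^{-(t - t_j)/\tau_f}\right](V_0 - v_i) \quad \text{(chemical)};$$

the paper's exact prefactors and normalization may differ.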
In order to quantify the amount of phase synchronization in a neural population, we assign an instantaneous phase to each neuron as in [52] (Eq. (7)), where t_i^m is the instant of the m-th spike of neuron i. We then define a global instantaneous order parameter S(t) (Eq. (8)). The global order parameter S is the long-time average of S(t) at the stationary state, viz. S = ⟨S(t)⟩_t, and measures the collective phase synchronization in the oscillations of the membrane potentials of all neurons. S is bounded between 0.5 and 1: if neurons spike out of phase then S ≈ 0.5, whereas if they spike completely in phase S ≈ 1; for states with partial synchrony 0.5 < S < 1.
Along with the order parameter S, we have also calculated the more commonly used Kuramoto order parameter [47] (Eq. (9)), with R = ⟨R(t)⟩_t, where 0 ≤ R ≤ 1. R = 0 indicates asynchronous, while R = 1 indicates completely synchronous, oscillations. Essentially the same results are obtained for R as those obtained for S. However, from a statistical point of view, R(t) represents an average over N data points while S(t) represents an average over N(N − 1)/2 data points, which results in better statistics given our limited system sizes. We also define a generalized susceptibility as the relative root-mean-square fluctuations of the given order parameter, Eqs. (10) and (11). Such generalized susceptibilities are very useful tools for studying phase transitions in general, since critical systems are expected to exhibit maximal fluctuations at the critical point, diverging in the thermodynamic limit N → ∞.
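Since the explicit Eqs. (7)-(11) are not reproduced here, the following Python sketch shows one standard way of computing these quantities from recorded spike times: a linearly interpolated spike phase, the instantaneous Kuramoto order parameter R(t), and a relative rms-fluctuation susceptibility. The linear phase interpolation and the √N prefactor are common conventions and are assumptions, not necessarily the exact definitions used in the paper.

```python
import numpy as np

def phases_from_spikes(spike_times, t_grid):
    """Linearly interpolated spike phase: between the m-th and (m+1)-th spike of
    neuron i, phi_i(t) = 2*pi*m + 2*pi*(t - t_m)/(t_{m+1} - t_m).  `spike_times`
    is a list of per-neuron spike-time arrays; the phase is NaN outside the
    interval covered by a neuron's spikes."""
    phases = np.full((len(spike_times), len(t_grid)), np.nan)
    for i, spikes in enumerate(spike_times):
        s = np.asarray(spikes)
        idx = np.searchsorted(s, t_grid, side="right") - 1
        valid = (idx >= 0) & (idx < len(s) - 1)
        m = idx[valid]
        t = t_grid[valid]
        phases[i, valid] = 2 * np.pi * m + 2 * np.pi * (t - s[m]) / (s[m + 1] - s[m])
    return phases

def kuramoto_R(phases):
    """Instantaneous Kuramoto order parameter R(t) = |<exp(i*phi_j(t))>_j|,
    averaged over the neurons whose phase is defined at each time."""
    c = np.nanmean(np.cos(phases), axis=0)
    s = np.nanmean(np.sin(phases), axis=0)
    return np.sqrt(c ** 2 + s ** 2)

def susceptibility(order_param_t, N):
    """Relative rms fluctuations of the order parameter times sqrt(N)
    (one common convention for a generalized susceptibility)."""
    r = np.asarray(order_param_t)
    return np.sqrt(N) * np.std(r) / np.mean(r)
```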
Methods
We have scrutinized the transition to phase synchronization in networks of N HH neurons interacting via two different synaptic types. We start by providing a detailed description of our procedure. We first determine the network topology by specifying the elements of its adjacency matrix. These elements are either zero or one depending on whether the nodes are unconnected or connected, respectively. The links in our networks are symmetric, and the synapses are not plastic in this study. The strength of the synapses is set by the parameter g explained in the previous section. After constructing each network, the synaptic type is determined: if the synapses are electrical, we use Eq. (5) to describe the synaptic currents, while if the neurons interact via chemical synapses, Eq. (6) is used. Next, we fix the values of I_i^DC and set the parameter g equal to zero. We then integrate Eqs. (1)-(4) using the fourth-order Runge-Kutta method with a fixed time step ∆t = 10^-3 ms. Typically, much larger time steps are used in simulations of HH neurons, see for example [40,41]. However, since long relaxation times were required in our studies (particularly near the transition points), we choose such a short time step in order to avoid the accumulation of errors. Using this small time step, we are able to specify t_i^m, the instant of the m-th spike of each neuron, with an accuracy of 10^-3 ms. Finally, we obtain the phases of all neurons and calculate S(t) and R(t) at every time instant (Eqs. (8) and (9)). We allow the dynamics to progress for a long transient time (of the order of 10^6 time steps) until the fluctuations in S(t) or R(t) reach a stationary state.
After reaching the stationary state, we run our simulation for another 2×10^4 ms (2×10^7 time steps) and evaluate the order parameters S and R by averaging S(t) and R(t) over this second interval. We next increase the value of g slightly (keeping all other conditions fixed) and repeat the whole process to evaluate S and R again. In this manner we obtain the dependence of the order parameter on the coupling strength g, for each network topology and each of the above-mentioned synaptic types. The initial conditions of integration are random for g = 0 and the system is evolved quasi-statically for larger g values. The synchronization diagrams reported here are the results of averaging over five network realizations as well as other stochastic parameters. Our results are reported typically for N ≈ 500, but the limited system size does not seem to be an issue in the results to be presented, as essentially the same results were obtained when we changed the system size within the range of our computational limits.
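As an illustration of this quasi-static protocol, the sketch below outlines the forward sweep in g. The function `integrate_hh_network` is a hypothetical placeholder for the fourth-order Runge-Kutta integrator of Eqs. (1)-(4) (it is not a library routine), and `phases_from_spikes` and `kuramoto_R` are the helper functions defined in the previous snippet.

```python
import numpy as np

DT = 1e-3            # ms, fixed RK4 time step
T_TRANSIENT = 1.0e3  # ms discarded as transient (~1e6 steps)
T_MEASURE = 2.0e4    # ms used for time averaging (~2e7 steps)

def forward_sweep(adjacency, g_values, integrate_hh_network):
    """Quasi-static forward sweep: the final state at one value of g is used as
    the initial condition for the next, slightly larger, value.

    `integrate_hh_network(adjacency, g, state, t_total, dt)` is assumed to
    return (final_state, spike_times), where spike_times is a list of arrays,
    one per neuron; passing state=None means random initial conditions.
    """
    state = None
    order_parameter = []
    for g in g_values:
        # relax to the stationary state; the transient is discarded
        state, _ = integrate_hh_network(adjacency, g, state, T_TRANSIENT, DT)
        # measure over the stationary window
        state, spikes = integrate_hh_network(adjacency, g, state, T_MEASURE, DT)
        t_grid = np.arange(0.0, T_MEASURE, 1.0)   # sample phases every 1 ms
        R_t = kuramoto_R(phases_from_spikes(spikes, t_grid))
        order_parameter.append(np.nanmean(R_t))
    return np.array(order_parameter)
```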
Regular networks
The first structures that we consider are regular networks: a one-dimensional ring of size N = 500 with z = 50, as well as a two-dimensional lattice of side L (N = L × L) with L = 22 and z = 16, where z is the coordination number of the network. The results are shown in Figs. 1(a-d). It is observed that increasing g does not lead to a transition in either case. This is somewhat surprising, as one would expect a transition to synchrony for large g. We have therefore investigated the raster plots for this system for different g values. Such raster plots for the one-dimensional ring with electrical synapses, for four values of g, are shown in Figs. 1(e-h). Raster plots for rings with chemical synapses are qualitatively the same as Figs. 1(e-h). It is realized that imposing a small interaction among neurons in a regular ring leads to the formation of correlated regions. This is not unexpected, since regular rings have a high clustering coefficient [25]. Increasing g slightly regulates the phase of the neurons on a local level. Since there are no long-range synapses in the system, a further increase of g cannot eliminate the phase lags among neurons belonging to faraway areas of the network, but instead results in the emergence of so-called quasiperiodic partial synchronization. Quasiperiodic partial synchronization denotes the state of a population of interacting oscillators in which the system settles into a nontrivial dynamical regime where the oscillators display quasiperiodic dynamics while a collective observable of the system oscillates periodically [53,54,55]. A relevant collective observable for a neural network is the network activity A(t). In Fig. 1(i) we have plotted A(t) for a regular ring of HH neurons for g = 0.00, when neurons are fully asynchronous (orange curve), and also for g = 0.80, where the network is in a quasiperiodic partial synchronization state (green curve). It is seen that when neurons spike out of order, A(t) fluctuates irregularly. But when the network is in a state of quasiperiodic partial synchronization, neurons spike quasiperiodically and A(t) oscillates periodically. This state emerges in the rings from g ≈ 0.20 and remains robust when g is increased further, as the raster plots remain qualitatively the same from g = 0.20 to g = 1.00 (or even for larger values of g, which are not shown here).
For the sake of comparison, in Fig. 1(j) we show the R − g plot for the 1D ring with electrical synapses for three different system sizes N. The coordination number in each system is set to z = 0.1N. In light of these plots we find that the synchronization diagram remains unaltered upon increasing the system size. Also, comparing Figs. 1(a) and 1(j), one verifies the equivalence of the results obtained based on the order parameters R and S, except for the more refined statistics resulting from S. Moreover, the generalized susceptibilities κ_R and κ_S for the same systems as in Fig. 1(j) are illustrated in Figs. 1(k) and 1(l), respectively. It is observed that increasing g does not lead to any distinctive peak in κ_R or κ_S, confirming that no phase transition occurs in these systems. Generalized susceptibilities for the other regular networks studied here are qualitatively similar to Figs. 1(k) and 1(l) (not shown).
We note that one might suspect that the lack of transition observed in the 1D lattice is due to its low-dimensional structure, similar to the lack of a phase transition in, for example, the 1D Ising model. That is why we have also performed simulations for the 2D L × L lattice, whose main results are shown in Figs. 1(c) and 1(d) and which again show no transition. We note that the raster plots of the 2D system are qualitatively the same as in the 1D case, with less coherence (due to smaller clustering), and that the network oscillations A(t) are also very similar to the 1D case (not shown).
ER and SF networks
We next consider random networks with the small-world effect but with much smaller clustering compared to regular networks. We consider the ER network, which has a homogeneous random structure, as well as the SF network, which has a heterogeneous random structure [25]. Such networks are constructed using a configurational model [25]. The network size is N = 500, with z = 50 for the ER network and z = 25 for the SF network. The degree distribution function of the SF network is P(k) ∼ k^(−γ) with γ = 2.2. The results for the synchronization transition of HH neurons with electrical and chemical synaptic currents, on ER and SF networks, are shown in Figs. 2(a-d). It is observed that the system with both types of synapses exhibits a continuous transition from asynchrony to synchrony. Raster plots of spikes for the ER network with electrical synapses are illustrated in Figs. 2(e-h). It is evident that, since the clustering coefficient is significantly reduced due to randomness (as compared to regular networks), neuronal clusters do not appear in the system. However, the presence of a significant number of long-range connections regulates neural activity in this random network when g is increased above a certain threshold. Rasters of spikes for the other transitions are qualitatively similar to those of Figs. 2(e-h) (not shown). Looking at the value where the transition occurs, g_t, for a given network, one concludes that electrical synapses are more conducive to synchronization than chemical synapses, i.e. g_t is about an order of magnitude smaller for electrical synapses. This makes sense, as electrical synapses are known to be stronger than chemical synapses. On the other hand, the strong similarity between the results for ER and SF networks for a given synaptic type, including their corresponding values at the transition, leads one to conclude that structural heterogeneity (the SF network) is not an important factor in influencing the type and shape of the transition curves (S − g plots).
In Fig. 2(i), we also show R − g plots of HH neurons with electrical synapses on ER networks with three system sizes N, to be compared with the S − g plot in Fig. 2(a). Here, z = 0.1N for each system size. Furthermore, the variations of the generalized susceptibilities κ_R and κ_S upon increasing g for the systems of Fig. 2(i) are plotted in Figs. 2(j) and 2(k), respectively. It is observed that both κ_R and κ_S show a specific peak at the transition point which grows with increasing system size. The behavior of the generalized susceptibilities further corroborates our order parameter results, which indicate that our model does not show a synchronization transition for low-dimensional systems (Fig. 1), but exhibits a definitive and continuous transition in a high-dimensional structure such as complex networks (Fig. 2).
Regarding the results associated with Figs. 1 and 2, we can conclude two important points: (i) the main results are unaltered upon increasing the system size N, and (ii) the synchronization diagrams exhibit qualitatively the same behavior whether we employ R or S, except for the more refined statistics provided by S, which enables us to determine the transition point clearly. Therefore, for the rest of this paper we report the results only based on the order parameter S and for our largest available system size (N ≈ 500).
Small-world networks
After considering regular networks with high clustering but large average distance on one hand, and highly random networks with a strong small-world effect but negligible clustering on the other hand, we are interested in networks that have a high clustering coefficient as well as the small-world property. Therefore, we constructed Watts-Strogatz (WS) networks [25] with N = 500 and z = 50 by randomly rewiring two percent of the links of a regular ring. This low rewiring probability (p = 0.02) allows the system to keep its large clustering coefficient while developing a significantly low average distance (i.e. the small-world effect). The resulting S − g curves for WS networks with electrical and chemical synapses are shown in Figs. 3(a) and 3(b), respectively. Interestingly, we observe a discontinuous (explosive) transition for the case of electrical synapses, while a continuous transition is observed for chemical synapses. As we will discuss shortly, this type of explosive synchronization is different in its mechanism from those seen for phase oscillators in heterogeneous networks such as [56,57]. The synchronization transition is accompanied by a hysteresis loop if a backward sweep in g is performed from the highly synchronized state. Therefore, as seen in Fig. 3(a), not only is the transition explosive, the value of S is also history-dependent. This is to be contrasted with the case of chemical synapses in the WS network, where increasing g leads to a continuous transition from asynchrony to synchrony in neural spiking, as is clear in Fig. 3(b). Therefore, one- and two-dimensional regular networks produced no transition, while random networks produced a continuous transition. However, small-world networks, which lie somewhere between randomness and regularity, exhibit explosive synchronization (electrical) as well as a continuous transition (chemical).
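For reference, the network topologies used in this and the preceding section can be generated with a standard graph library. The sketch below uses networkx with the parameter values quoted in the text; the degree bounds of the scale-free construction are illustrative assumptions rather than values taken from the paper.

```python
import networkx as nx
import numpy as np

N = 500

# Watts-Strogatz small-world network: ring with z = 50 neighbours per node,
# 2% of the links rewired at random (p = 0.02), as described in the text.
ws = nx.watts_strogatz_graph(N, k=50, p=0.02)

# Erdos-Renyi network with mean degree z = 50 (homogeneous random structure).
er = nx.gnm_random_graph(N, N * 50 // 2)

# Scale-free network via the configuration model with P(k) ~ k^(-2.2);
# the degree bounds are illustrative, not taken from the paper.
rng = np.random.default_rng(0)
degrees = np.clip(rng.zipf(2.2, size=N), 5, 100)
if degrees.sum() % 2:                 # the configuration model needs an even degree sum
    degrees[0] += 1
sf = nx.Graph(nx.configuration_model(degrees.tolist()))
sf.remove_edges_from(nx.selfloop_edges(sf))

# Adjacency matrix a_ji entering the synaptic currents of Eqs. (5)-(6).
A = nx.to_numpy_array(ws)
```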
The fact that the transition type in the WS network depends on the interaction type is an interesting result and may be important from the point of view of neuroscience, since it has been reported that brain networks at the microscopic level are similar to WS networks [58]. To elucidate the effect of topology and the underlying reason for the different orders of phase transition, we display the raster plots of HH neurons with electrical synapses on a WS network in Figs. 3(c-f) for the evolution of the system in the forward direction. Here, the combined effect of clustering and long-range interaction leads to explosive synchronization. As in the case of regular rings, the effect of clustering initially leads to correlated regions which are nevertheless not perfectly synchronized across faraway regions of the network. Note the similarity between Fig. 1(g) and Fig. 3(d), both of which lead to S = 0.5 and no net synchronization. However, as the effect of long-range links is important in the case of the WS network, increasing g will eventually lead to interactions among various parts of the network, which eventually leads to global order in the system and therefore a phase transition, which was absent in models without long-range interaction. But why do we observe an explosive transition here?
Correlated heterogeneity
It might have been expected that heterogeneity in the network structure, as in the SF network in Fig. 2(b), would have led to a different transition pattern when compared to the homogeneous random ER network. We note that the importance of the role of heterogeneity in neural networks has attracted much attention in the recent literature. An important observation regarding synchronization in the Kuramoto model was that structural heterogeneity was not sufficient to lead to a different transition pattern; rather, a correlation between the frequency and the degree of a node was the key element leading to explosive synchronization in SF networks [56]. This means that the high-frequency nodes in a network are also the highly connected nodes, while the low-frequency nodes are sparsely connected. We note that the range of frequencies in spiking HH neurons is relatively limited. However, one may attempt to construct a heterogeneous distribution even in this limited range. We have therefore studied an SF network of size N = 500 with γ = 2.2 and k_min = 7, k_max = 47, z = 15. We have also produced the same distribution of frequencies with f_min = 70 Hz and f_max = 110 Hz, and have studied both the correlated (f ∝ k) and the uncorrelated assignment of these frequencies. The results are shown in Fig. 4. As is seen, the correlated case leads to explosive synchronization along with hysteresis in the case of electrical synapses, but a smooth transition in the case of chemical synapses. We also show the same results for the uncorrelated case, which indicates that the explosive synchronization is in fact due to the correlation between the two heterogeneous distributions of degree and frequency, in the case of electrical synapses.
It is interesting to note that the mechanism for explosive synchronization is very different here than that observed in WS network for electrical synapses.
There, it was the combined effect of local order, which is achieved for low synaptic weight g, and long-range order, which sets in for large values of the synaptic weight, that leads to sudden order and explosive synchronization in the system; see the raster plots in Fig. 3. Here, in the case of correlated heterogeneity, the system is still essentially in a completely disordered phase just before the explosive synchronization occurs. See the g = 0.227 and g = 0.228 raster plots in Fig. 4, which are just before and after the explosive synchronization transition point. This indicates that the system truly goes through a sudden change from the disordered to the ordered phase. The cause of such explosive synchronization can easily be understood by looking at the ordered raster plots. One sees that in the synchronous phase the entire system is oscillating at a frequency of f ≈ 110 Hz, which is exactly the frequency of the only hub in the system. This indicates the essential role of the hub in this explosive synchronization. The entire network must adjust to the hub, and once this happens an explosive synchronization occurs. This mechanism is very similar to what happens in the case of the well-known explosive synchronization in the Kuramoto model [56]. However, we emphasize that the explosive synchronization we have observed only occurs for the stronger electrical synapses, and we did not observe any explosive synchronization for chemical synapses in the range of parameters studied here. We finally note that this explosive synchronization occurs at much higher values of g when compared to the WS case and also exhibits a smaller hysteresis loop. A frustrated, intermediate regime between order and disorder has previously been reported for the Kuramoto model on hierarchical modular networks [59]. It is now believed that such an intermediate regime is a manifestation of the HM structure of the underlying network. As can be deduced from the raster plots in Figs. 5(d-i), the HM structure leads to synchronization within various modules which are themselves out of phase with other modules, leading to relative synchrony (small S) which nevertheless fluctuates as various modules go in and out of phase with each other as we change the value of g. For example, for electrical synapses there is more synchronization for g = 0.019 than for g = 0.020, as can clearly be seen from the corresponding raster plots. Therefore, we observe the same type of frustrated synchronization patterns as in the Kuramoto model, regardless of the synaptic interaction. We also note that, looking at the values of the transition point g_t, one sees a strong similarity with the fully random SF and ER networks (see Fig. 2). This indicates that the onset of synchronization here is also dictated by long-range links. But, once synchronization sets in, it is the strong clustering within various modules that dictates the synchronization pattern for a range of g, before global order sets in for large enough g.
Concluding remarks
In a previous work, we studied beta-band synchronization in network models of spiking neurons. There, we showed that the type of synchronization transition occurring in a neural network depends on the firing rates of constituent neurons [24]. In this paper we have reported a systematic study of synchronization transition in network models of spiking neurons in gamma-band. We employed HH neurons with electrical and chemical synapses. Our focus has been to characterize the combined effect of synaptic type and topological features on the type of synchronization transitions that may occur. The mechanisms and patterns of synchronization transitions we obtained here for gamma-rhythms are distinctly different from those we obtained for beta-rhythms in ref. [24]. For example, in the beta-band in a one-dimensional lattice, we found a continuous transition for the electrical synapses, while here for the gamma-band we observed no transition for any synaptic type in one or two dimensions. Furthermore, here we found smooth transitions for SF and ER networks in the gamma-band, while previously we had observed explosive synchronization on such networks with electrical synapses. On the other hand, here we observe explosive synchronization for the WS networks while in the beta-band we only saw a smooth transition for such networks. We also observed explosive synchronization in the case of SF networks with correlated heterogeneity which was not studied in the previous study for the beta-band.
The underlying mechanism leading to explosive synchronization in the beta-band was rooted in the formation of anti-phase groups of neurons for intermediate values of g and their sudden combination at a transition point [24]. This is distinctly different from the mechanism that led to explosive synchronization in the WS network of HH neurons, or from the mechanism that resulted in the abrupt transition in the SF network of HH neurons with correlated heterogeneity. However, these three mechanisms of explosive synchronization have a common aspect.
They all occur through electrical synapses. We have not observed explosive synchronization in the case of chemical synapses. Our results highlight the fact that electrical synapses are more conducive to synchronization and can in fact lead to entirely different transition patterns. This is contrary to other studies that have concluded similar synchronization behavior for electrical and chemical synapses [38].
We also note that it is interesting that our regular one- and two-dimensional lattices did not exhibit a transition, which is what one would expect from the study of the Kuramoto model, as it, too, does not exhibit a transition in low-dimensional systems [48]. However, the mechanism for such behavior is different, as we do observe a considerable amount of order in our system, with quasiperiodic oscillations. We once again emphasize the key role of frequency as well as synaptic interactions in such studies: in the beta-band, in one dimension, we had previously observed a continuous transition for electrical synapses while no transition was seen for chemical synapses.
This brings us to emphasize the difference in the type of transitions we observed in WS networks. Electrical synapses, which are strong and fast, lead to explosive synchronization while the slower and weaker chemical synapses lead to a smooth transition. This is particularly interesting as neuronal networks are argued to be on the verge of a phase transition. This could, for example, be related to the fact that electrical synapses are useful in fast involuntary motor responses where a strong and fast collective action is desired, while a smooth transition with chemical synapses could be understood in terms of cortical neurons where too much synchronization is deemed to be pathological [22]. Furthermore, we investigated the role of correlated heterogeneity and found that in the case of electrical synapses one observes explosive synchronization while with chemical synapses a smooth transition occurs. This result could be interesting from two aspects. First, it shows that, unlike what is generally believed, correlated heterogeneity does not always lead to explosive synchronization, as chemical synapses showed a smooth transition. Secondly, it highlights the distinctly different type of explosive synchronization that may occur with electrical synapses. In the WS network, explosive synchronization occurred after the system gained a high degree of local order, but in the case of correlated heterogeneity, explosive synchronization was dictated by the role of the hub, with no sign of order in the system just before the transition occurred.
We have also considered hierarchical modular networks, which result in an intermediate regime between order and disorder. Such behavior has previously been shown to occur in the Kuramoto model [59], and our results indicate that such a frustrated transition is a more general property of neuronal systems on HM networks; furthermore, it is independent of the type of synaptic interaction.
We note that our choice of spiking HH neurons naturally limited our range of frequencies to the gamma band. However, we observed the same type of synchronization patterns when we increased the natural spiking frequency of the HH neurons to the high gamma band, up to f ≈ 110 Hz (not shown here).
We previously observed that the synchronization patterns changed in the case of Izhikevich neurons when one increased the average frequency from the beta to the gamma band [24]. This was shown to be due to the dependence of the refractory period on the frequency of the neurons. However, it seems that for high enough frequencies the change in the refractory period becomes negligible due to the short spiking intervals.
The roles of the refractory period, conduction/axonal delays, and synaptic plasticity in synchronization patterns are all potentially interesting avenues for future studies [60]. Evidence for robust collective oscillations at the edge of chaos, where scale-invariant activity emerges in neural networks, has been reported in recent experimental and theoretical studies [23,16,15]. Investigation of such coexistence at the edge of the continuous and explosive transitions obtained in the current study would be interesting as well, although such an investigation will be computationally expensive. One might also consider the role of external oscillatory input in such studies, which has recently been shown to introduce critical oscillations in certain models of excitable nodes [61]. Lastly, the role of noise was absent in our studies. Noisy dynamics may add some important features to the collective dynamics, including important effects on the nature of the phase transition [62].
"year": 2020,
"sha1": "1c221175ad3c1ee9dc5c51dc7b373def2aff42dc",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2001.07783",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "326331fddddeb1f762cbea41cda450bd2035df70",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Physics",
"Mathematics",
"Computer Science"
]
} |
54038326 | pes2o/s2orc | v3-fos-license | Kernel Local Linear Discriminate Method for Dimensionality Reduction and Its Application in Machinery Fault Diagnosis
Dimensionality reduction is a crucial task in machinery fault diagnosis. Recently, as a popular dimensionality reduction technology, manifold learning has been successfully used in many fields. However, most of these technologies are not suitable for the task, because they are unsupervised in nature and fail to discover the discriminate structure in the data. To overcome these weaknesses, the kernel local linear discriminate (KLLD) algorithm is proposed. The KLLD algorithm is a novel algorithm which combines the advantages of neighborhood preserving projections (NPP), Floyd, the maximum margin criterion (MMC), and the kernel trick. KLLD has four advantages. First of all, KLLD is a supervised dimension reduction method that can overcome the out-of-sample problem. Secondly, the short-circuit problem can be avoided. Thirdly, the KLLD algorithm can use the between-class scatter matrix and the within-class scatter matrix more efficiently. Lastly, the kernel trick is included in the KLLD algorithm to find more precise solutions. The main feature of the proposed method is that it attempts both to preserve the intrinsic neighborhood geometry of the data and to extract the discriminate information. Experiments have been performed to evaluate the new method. The results show that KLLD has more benefits than traditional methods.
Introduction
With information collection technology becoming more and more advanced, a huge amount of data is produced during the running of mechanical equipment. The sensitive information which reflects the running status of the equipment is submerged in a large amount of redundant data. Effective dimensionality reduction can solve this problem. Dimensionality reduction is one of the key technologies for equipment condition monitoring and fault diagnosis. The nonlinear and nonstationary vibration signals generated by the rolling bearing [1,2] make the original high-dimensional feature space, which consists of the statistical characteristics of the signal, inseparable. Traditional linear dimensionality reduction methods such as PCA and ICA not only rest on the assumption of a globally linear structure of the data but also use a linear transformation matrix to find the best low-dimensional projection. The classification information plays an important role; however, when the original high-dimensional feature space possesses a nonlinear structure, the classification information is difficult to obtain by linear methods. KPCA is a traditional nonlinear dimensionality reduction method, which achieves dimensionality reduction by discarding the relatively small projections in a higher-dimensional linear space. In addition, KPCA aims to find the principal components with the largest variance, which may cause the loss of useful discriminate information [3].
Manifold learning is a data-driven approach that can reveal the underlying nature of complex data structures, providing a new approach for the analysis of the intrinsic dimension based on the data distribution. Manifold learning has produced a series of research achievements in feature extraction [4][5][6]. Manifold learning methods fall broadly into two categories [7], which have different advantages and disadvantages: global (Isomap [8]) and local (locally linear embedding [9]). In [10], the authors point out that, for discriminate analysis, the local structure is usually more important than the global structure when there are not enough samples. As for local manifold learning, locally linear embedding (LLE) is an algorithm which has many advantages such as a global optimal solution and fast calculation. Furthermore, its minimum-reconstruction-error weights keep the local neighborhood geometric properties of the data unchanged under scaling and rotation. Therefore the LLE algorithm has been applied to fault feature extraction [11][12][13][14][15].
NPP [16] is one of the manifold learning methods; its central idea is based on LLE, with a linear transform matrix introduced. NPP has been successfully applied to dimension reduction of the famous "Swiss roll" and "S-curve" datasets. The algorithm assumes that the structure of the data is locally linear. However, when the data manifold has large curvature, manifold learning methods suffer from the short-circuit problem.
To address these issues, a fault feature extraction method named KLLD is proposed in this paper. The method is applied to dimensionality reduction of both the iris dataset and the rolling bear original feature dataset constructed from wavelet packet energies. Its effectiveness is verified by comparison with conventional analysis methods.
The rest of this paper is organized as follows. In Section 2, we briefly review the LLE, NPP, Floyd, and MMC algorithms. In Section 3, firstly, the KLLD algorithm proposed in this paper is derived; secondly, the short-circuit problem is introduced and the Floyd algorithm is employed to overcome this drawback; lastly, based on the LLD, KLLD, and Floyd algorithms, the calculation steps of LLD and KLLD are designed. In Section 4, we design a KLLD experiment process for the dimension reduction of the iris and rolling bear datasets; then we apply the KPCA, NPP, LLD, and KLLD algorithms to dataset dimension reduction. Conclusions are made and several issues for future study are addressed in Section 5.

Basic Principle

Locally Linear Embedding. For the given D-dimensional real-valued vectors x_i ∈ X (i = 1, ..., N), assume that each point and its K nearest neighbors lie in a locally linear patch, so that x_i can be reconstructed from its neighbors with weighting coefficients W_ij (i = 1, ..., N; j = 1, ..., K). W is selected by minimizing the reconstruction cost function (1).
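Equation (1) referred to above did not survive extraction. In the standard LLE formulation the reconstruction cost has the form

$$\varepsilon(W) = \sum_{i=1}^{N} \Big\| x_i - \sum_{j} W_{ij}\, x_j \Big\|^2, \qquad \sum_{j} W_{ij} = 1,$$

where the sum over j runs over the K nearest neighbors of x_i; the paper's exact notation may differ.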
In the LLE algorithm the low-dimensional points are obtained from the high-dimensional data as follows. The matrix M of formula (2) is formed from the reconstruction weights, an eigenvalue decomposition of M is performed, and the bottom d + 1 eigenvectors corresponding to its smallest d + 1 eigenvalues are identified. Y is then the matrix constructed from these eigenvectors after discarding the eigenvector corresponding to the smallest eigenvalue. The derivation can be found in the literature [9]. In summary, we state LLE as the following algorithm.
Step 1. Select the K nearest neighbors of each point by the nearest neighbor algorithm.

Step 2. Compute the reconstruction weights W by minimizing the cost function (1).

Step 3. Map to the embedding coordinates Y by minimizing (2).
Neighborhood Preserving Projections

Let Y = A^T X.
Using (3), the embedding cost can be transformed into (4), where M is obtained by solving (2). The Lagrange extremum method is then applied, and it follows that the projection matrix A consists of the generalized eigenvectors of ((XX^T)^(-1)(XMX^T)).

Floyd

Step 1. Initialization: compute the initial distance matrix D^0. An entry d^0_ij is set to ∞ when no connection between points i and j exists; otherwise it equals the direct distance (with d^0_ii = 0).

Step 2. For k = 1, 2, ..., N, iterate using (7) to calculate the shortest path between each pair of points in D^k.

Maximum Margin Criterion. LDA (linear discriminate analysis) is a popular linear feature extractor. The key step is to find a transform matrix Q ∈ R^(D×d), where D is the number of dimensions of the original dataset X and d is the dimension of Y, that maximizes the Fisher criterion (8) [17], where S_b represents the between-class scatter matrix and S_w represents the within-class scatter matrix; c is the number of classes, m_i and p_i are the mean vector and a priori probability of class i, respectively, and m is the overall mean vector.
However, this criterion has drawbacks, because (8) cannot be applied when S_w is singular due to the small sample size problem. The MMC method has been proposed in the literature [17] to overcome these drawbacks. According to LDA, MMC can be represented as follows:
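Equations (8) and (9) were lost in extraction. In their usual forms, the Fisher criterion of LDA and the maximum margin criterion read approximately

$$J_{LDA}(Q) = \frac{\left|Q^{T} S_b\, Q\right|}{\left|Q^{T} S_w\, Q\right|}, \qquad J_{MMC}(Q) = \operatorname{tr}\!\left(Q^{T}(S_b - S_w)\, Q\right),$$

with $S_b = \sum_i p_i (m_i - m)(m_i - m)^T$ the between-class scatter and $S_w$ the corresponding within-class scatter; the exact variants used in the paper may differ.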
Local Linear Discriminate Algorithm.
According to the LLT-STA derivation process in the literature [18], the basic idea of the LLD proposed in this paper is that if the linear transform matrix A of NPP (see (5)) also satisfies (10), the ability to discriminate different classes of data will be greatly improved. This problem can be expressed as a multi-objective optimization problem (11), which can be changed into a constrained optimization problem. The Lagrange multiplier method is used to solve this problem and is further deduced as follows: Equation (14) is converted into (15), from which we find that the projection matrix A consists of the eigenvectors of the resulting matrix.
Kernel Local Linear Discriminate Algorithm.
Suppose that φ is a nonlinear mapping to some feature space F; (15) can then be changed into (16). To find the local linear discrimination information in the feature space F, we need to express (16) in terms of dot products of the input patterns; we then replace the dot products with a kernel function. From the theory of reproducing kernels we know that any solution A ∈ F must lie in the span of all training samples in F. Therefore, the expansion of A can be written as in (17). Combining (16) and (17) and then multiplying both sides of (16) by [φ(X)]^T, we obtain (18). We set K(x_i, x_j) = (φ(x_i) · φ(x_j)), and then (18) can be rewritten as (19), where N_i is the number of samples of the i-th class, N is the total number of samples, and D is the number of sample dimensions. Taking the polynomial kernel function K(x_i, x_j) = (x_i · x_j + 1)^d as an example, the kernel matrix entries can be written as K_{i,j} = (x_i · x_j + 1)^d. Similar to Section 3.1, (19) can be considered as a generalized eigenvalue decomposition problem.
Short-Circuit Problem.
The traditional Euclidean distance method has many advantages: it is perceptually intuitive and easy to understand and calculate. However, the Euclidean method can easily lead to the short-circuit problem [19] when the high-dimensional space possesses large hypersurface curvature. The short-circuit problem refers to the fact that a point's neighborhood is mixed with points of different classes, with the result that discrimination information cannot be extracted effectively. The distribution of two classes of data in a two-dimensional space is shown in Figure 1.
Figure 1 shows two types of points, round and square. Under Euclidean distances, round12's five closest neighbors are {round11, round13, square2, square3, square4}. This phenomenon leads to distortion of the data after dimensionality reduction in the low-dimensional space. In order to overcome this drawback, we create a connection graph: points of different classes are not connected, and we take the distance between unconnected points to be infinity. In this way round12's five nearest neighbor points are {round9, round10, round11, round13, round14}. Therefore, the Floyd algorithm of Section 2.3 is used to find the distances between points after establishing the connection graph in the high-dimensional sample space. Using the Floyd algorithm to find the correct nearest neighbor points in the LLD algorithm can effectively avoid the problem of mixing different classes of data samples.
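A minimal Python sketch of this procedure is given below: building the connection graph (same-class points connected only when they are closer than the threshold ε, as described above) and the Floyd-Warshall all-pairs shortest-path iteration of Section 2.3. Variable names are illustrative.

```python
import numpy as np

def initial_distance_matrix(X, labels, eps):
    """Build D0 for the connection graph: points are connected only if they are
    closer than eps and belong to the same class; unconnected pairs get an
    infinite distance.  X is (n_samples, n_features), labels is (n_samples,)."""
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    connected = (dist <= eps) & (labels[:, None] == labels[None, :])
    D0 = np.where(connected, dist, np.inf)
    np.fill_diagonal(D0, 0.0)
    return D0

def floyd_shortest_paths(D0):
    """Floyd-Warshall all-pairs shortest paths on the connection graph."""
    D = np.array(D0, dtype=float)
    for k in range(D.shape[0]):
        # relax every pair (i, j) through the intermediate point k
        D = np.minimum(D, D[:, k:k + 1] + D[k:k + 1, :])
    return D
```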
Steps of LLD Calculation.
According to the analysis in Sections 3.1, 3.2, and 3.3, we state LLD as the following algorithm.
Input. The original-space D × N matrix X, the number of nearest neighbor points K, the connection distance ε, and the low-dimensional embedding dimension d (d < D).

Output. The d × N low-dimensional matrix Y.

Step 2. Set the connection threshold ε, determine the connected point pairs, and construct a weighted graph similar to Figure 1.

Step 3. Calculate the distances between x_i and the remaining points, and select the nearest neighbors of x_i under the minimum-distance condition determined according to Section 2.3.

Step 4. Construct the reconstruction weighting matrix W, which is calculated according to formula (1).

Step 5. Calculate the matrix A according to formula (15): find the d + 1 smallest eigenvalues of the resulting matrix and take the eigenvectors corresponding to the 2nd to (d + 1)-th smallest eigenvalues as the projection matrix A.

Step 6. Calculate Y by Y = A^T X.
Steps of KLLD Calculation.
Besides the input and output of LLD, the kernel parameter should also be included in KLLD's input. There are 3 calculation steps.
Step 1. Construct the weighting matrix W as in Steps 1 to 4 of Section 3.4, and then calculate M according to formula (2).

Step 2. Calculate the expansion coefficients according to formula (19); then A can be obtained from (17).

Step 3. Calculate Y by Y = A^T φ(X), evaluated through the kernel matrix.
Application Analysis
Iris Dataset Dimensionality Reduction. We evaluated the performance of the new approach on the iris plants database. I. setosa, I. versicolor, and I. virginica are included in this dataset. Sepal length, sepal width, petal length, and petal width are the characteristics of the plant samples. The number of samples of each plant is 50. We divided them into two equal parts named dataset1 and dataset2, so there were 25 plant samples in each class of each new dataset. KPCA, NPP, and LLD were also used in this section to demonstrate the advantage of the proposed dimensionality reduction method. A polynomial kernel function with parameter value 25 was employed in KLLD and KPCA. The number of nearest neighbor points, K = 14, was used in the NPP, LLD, and KLLD methods. The results of iris dataset dimension reduction are shown in Figure 2. The KPCA and NPP methods can hardly discriminate the three types of plants, as shown in Figures 2(a)-2(d), because both of them reduce dimensionality with the aim of describing the data; that is to say, they keep descriptive information rather than the discriminate structure during dataset dimension reduction. Figures 2(e)-2(h) show the results of the LLD and KLLD methods. Points representing different plants overlap in Figures 2(e)-2(f), because LLD is a linear method. As shown in Figures 2(g)-2(h), the KLLD method can discriminate the different kinds of plants properly. In order to investigate the accuracy of classification, SVM [20] is used as the classifier. Table 1 shows the accuracy of SVM classification, with dataset1 as the training dataset and dataset2 as the testing dataset. Table 1 illustrates that the KLLD method obtains the highest classification accuracy.
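The classification protocol described here (a 25/25 per-class split of the iris data followed by an SVM) can be sketched with scikit-learn as below. The dimensionality reduction step is left as a placeholder, since KLLD is not a library routine, and the SVM kernel choice is an illustrative assumption.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# dataset1 (training) / dataset2 (testing): the first/last 25 samples of each class.
train_idx = np.hstack([np.where(y == c)[0][:25] for c in range(3)])
test_idx = np.hstack([np.where(y == c)[0][25:] for c in range(3)])

# Placeholder for the dimensionality reduction step (KLLD / KPCA / NPP / LLD);
# here the raw four-dimensional features are used directly.
X_train, X_test = X[train_idx], X[test_idx]

clf = SVC(kernel="rbf").fit(X_train, y[train_idx])
print("classification accuracy:", clf.score(X_test, y[test_idx]))
```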
Rolling Bear Fault Datasets Dimensionality Reduction Experiment

KLLD Experiment Process Design. The calculation steps are designed as follows.
Step 1. Collect the vibration signal of the rolling bear.

Step 2. Use the wavelet packet energy to construct the original feature datasets.

Step 3. Obtain the projection matrix A by KLLD.

Step 4. Obtain the dimension reduction result by Y = A^T φ(X).
The KLLD dimensionality reduction process for rolling bearing fault datasets is shown in Figure 3.
Wavelet Packet Energy Original Feature Construction.
Under normal and fault operating conditions, time-domain waveform signals are shown in Figure 4.
The time-domain waveform characteristic of a bearing inner race fault is a typical shock component. The waveform of a normal rolling bearing is stable, with little fluctuation in amplitude. The waveform of a rolling element fault includes random single-impact components, while the time-domain waveform of a bearing outer race fault is very similar to the inner race fault waveform. It is hard to grasp the features of the different fault conditions of the rolling bear only from the time-domain waveform. Wavelet packet analysis is a precise method for signal analysis and is widely used in bearing fault diagnosis at present, so we use this method to construct the original features. Typical wavelet packet energies for the different faults are shown in Figure 5. We can find that the signals of the different rolling bear faults, processed by the wavelet packet decomposition, have significantly different energy amplitudes in different frequency bands.
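A wavelet packet energy feature vector of the kind used here can be computed, for example, with PyWavelets. The wavelet family and decomposition depth below are illustrative assumptions; they are not specified in the extracted text.

```python
import numpy as np
import pywt

def wavelet_packet_energy(signal, wavelet="db4", level=3):
    """Normalized energy of each terminal node of a wavelet packet
    decomposition; with level=3 this yields an 8-dimensional feature vector."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")        # frequency-ordered bands
    energies = np.array([np.sum(np.square(node.data)) for node in nodes])
    return energies / energies.sum()

# Example: one row of the original feature dataset from a vibration segment.
# vibration = np.loadtxt("bearing_segment.txt")      # hypothetical data file
# features = wavelet_packet_energy(vibration)
```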
Wavelet Packet Energy Original Feature Construction.
To verify the validity of the KLLD method, the experiment was performed on the Electrical Engineering Laboratory rolling bear vibration database of Case Western Reserve University. We selected the bearing model SKF6203, with a running speed of 1730 rpm, under normal, inner race fault, ball fault, and outer race fault conditions. The signals were processed by wavelet packet decomposition and two original feature datasets, named dataset1 and dataset2, were constructed. Both of the datasets have 40 points. Table 2 shows the original features of dataset1.
Parameter Settings.
According to [21], the optimal embedding dimension of a manifold learning algorithm can be calculated by formula (20), in which the optimal embedding dimension is determined by the number of categories. This study considers four operational states of the rolling bear, so the low-dimensional space is 3 by (20). In addition, the distribution of the data can be clearly illustrated in a 3-dimensional space. The parameter K is the number of nearest neighbors. It is one of the most important parameters in manifold learning algorithms, because if the number of nearest neighbors is too large the small-scale structure of the manifold could be eliminated and the whole manifold would be smoothed, while if the number of nearest neighbors is too small, the continuous manifold may be divided into disjoint submanifolds. The residual variance can be defined as 1 − ρ²(D_X, D_Y), where D_X and D_Y represent the Euclidean distance matrices of the points in X and Y, respectively, and ρ represents the standard linear correlation coefficient. The smaller the value of the residual variance, the better the high-dimensional dataset can be embedded into the low dimension [9]. The optimal value of K can be found by minimizing the residual variance. LLE is the basic version of NPP, LLD, and KLLD, and it can be used to determine the optimal K for the datasets. Dataset1 is the input X of the LLE algorithm, the value of K is varied from 2 to 39, and the results are illustrated in Figure 6. Thus K = 10 is the optimal number of neighbors.
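The residual-variance criterion for choosing K can be reproduced with plain LLE as a stand-in for NPP/LLD/KLLD, for instance with scikit-learn:

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.manifold import LocallyLinearEmbedding

def residual_variance(X, Y):
    """Residual variance 1 - rho^2 between the pairwise Euclidean distances
    of the high-dimensional data X and the embedded data Y."""
    dx, dy = pdist(X), pdist(Y)
    rho = np.corrcoef(dx, dy)[0, 1]
    return 1.0 - rho ** 2

def choose_k(X, k_values, n_components=3):
    """Return the neighborhood size with the smallest residual variance."""
    scores = {}
    for k in k_values:
        Y = LocallyLinearEmbedding(n_neighbors=k, n_components=n_components).fit_transform(X)
        scores[k] = residual_variance(X, Y)
    return min(scores, key=scores.get), scores
```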
From Table 3 we can find that the distances between different classes are numerically large while those within the same class are small. In order to guarantee that the number of neighbors in the same class is 10, the connection distance is set to ε = 0.441 according to Table 3. The number of neighbors is K = 10. A polynomial kernel function is used in KLLD and KPCA, with parameter value 35. Dataset2 is processed in the same way as dataset1. The KLLD algorithm is used to calculate dataset1 and dataset2. The within-class distance in the low dimension is calculated and shown in Table 4, in order to evaluate the effectiveness of the KLLD method precisely. It shows that the proposed method has better clustering ability. To quantitatively evaluate the separability of the method, the ratio of the between-class average distance to the within-class average distance is calculated. We can find in Table 5 that for dataset1 the KLLD ratio is 4.38 × 10^11 while the other methods are below 517.7, and for dataset2 the KLLD ratio is 186.4 while the other methods are below 120.8. SVM has been used to calculate the classification accuracy of the low-dimensional datasets. The results are shown in Table 6, which illustrates that the KLLD-SVM method can recognize each condition of the rolling bear vibration signal.
Conclusions
A novel dimension reduction algorithm for discrimination, called kernel local linear discriminant (KLLD), has been proposed in this paper. The most prominent property of KLLD is the complete preservation of both the discriminant and the local geometrical structures in the data, whereas traditional dimension reduction algorithms cannot properly preserve the discriminant structure. We first applied the algorithm to dimension reduction of the iris database; the experiment demonstrated that it can extract the different kinds of iris features and is suitable for classification. We then applied KLLD to machinery fault diagnosis: the original feature space of the rolling bearing dataset was constructed from wavelet packet energy, KLLD and other dimensionality reduction methods were applied to this feature space, and finally SVM was used for classification. The experiments show that our method has excellent clustering and dimension reduction capability.
Figure 3: Scheme of KLLD dimension reduction for the rolling bearing dataset.
4.2.5. Calculation and Discussion. KPCA, NPP, and LLD algorithms are also used to analyze the effectiveness of KLLD. Table 2 is the input matrix.
Figure 7 shows the resulting distributions in three-dimensional space. As shown in Figures 7(a)-7(b), the inner race fault and ball fault overlap with each other; KPCA can hardly distinguish the different classes of points during dimension reduction. Figures 7(c)-7(d) illustrate that normal and inner race fault can be distinguished; however, ball fault and outer race fault show some slight aliasing, especially in dataset2. The same phenomenon can be found in Figures 7(e)-7(f). The results of KLLD are shown directly in the remaining panels of Figure 7: the KLLD algorithm can distinguish the different classes of the rolling bearing dataset. Figure 7 suggests that the discriminating sensitive characteristics of the different states are retained.
Table 1: Comparison of four dimensionality reduction methods for iris dataset classification.
Table 2: Original features constructed by wavelet packet energy spectra of the rolling bearing dataset1.
Table 3: Distances between the points of the rolling bearing dataset1.
Table 4: Comparison of within-class distances in the low-dimensional space using four dimensionality reduction methods.
Table 5: Ratio of between-class distance to within-class distance using four dimensionality reduction methods.
Table 6: Comparison of four dimensionality reduction methods for classification. | 2018-11-28T10:49:59.349Z | 2014-02-27T00:00:00.000 | {
"year": 2014,
"sha1": "995b803d6739e84e672831d7ad47fd77f5caf538",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/sv/2014/283750.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "995b803d6739e84e672831d7ad47fd77f5caf538",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
257038067 | pes2o/s2orc | v3-fos-license | BBT-Fin: Comprehensive Construction of Chinese Financial Domain Pre-trained Language Model, Corpus and Benchmark
To advance Chinese financial natural language processing (NLP), we introduce BBT-FinT5, a new Chinese financial pre-training language model based on the T5 model. To support this effort, we have built BBT-FinCorpus, a large-scale financial corpus with approximately 300GB of raw text from four different sources. In general domain NLP, comprehensive benchmarks like GLUE and SuperGLUE have driven significant advancements in language model pre-training by enabling head-to-head comparisons among models. Drawing inspiration from these benchmarks, we propose BBT-CFLEB, a Chinese Financial Language understanding and generation Evaluation Benchmark, which includes six datasets covering both understanding and generation tasks. Our aim is to facilitate research in the development of NLP within the Chinese financial domain. Our model, corpus and benchmark are released at https://github.com/ssymmetry/BBT-FinCUGE-Applications. Our work belongs to the Big Bang Transformer (BBT), a large-scale pre-trained language model project.
Introduction
Pre-trained language models (PLMs), such as BERT (Devlin et al., 2018) and T5 (Raffel et al., 2019), have led to great performance boosts across many NLP tasks. Despite their excellent performance on a large number of NLP tasks, PLMs often degrade when applied to domain-specific texts, which differ significantly from general text in word usage, syntax, and writing style (Gururangan et al., 2020; Gu et al., 2021). To address this issue, Gururangan et al. (2020) proposed that continuing to pre-train a general PLM on target-domain corpora and task-relevant texts can effectively improve its performance on domain-specific tasks, while Gu et al. (2021) further suggested that pre-training domain-specific PLMs from scratch on a sufficiently large corpus can achieve even better domain-specific performance. Inspired by these studies, domain-specific pre-trained language models have emerged in several domains, such as BioBERT (Peng et al., 2019a) and PubMedBERT (Gu et al., 2021) in biomedicine, which have been used for practical tasks like entity and relation extraction.
We collected all existing NLP competition tasks and academic datasets related to finance on the Chinese internet and summarized them in Table 2, revealing a growing demand for NLP capabilities in finance, particularly in information extraction and sentiment analysis. To meet these demands and raise the overall level of Chinese financial NLP, several companies have already developed and released Chinese financial pre-trained language models, such as FinBERT (Hou et al., 2020) and Mengzi-BERT-base-fin. However, these models are based on the BERT-base model, share a single architecture type, and have a parameter count (around 110 million) that is outdated and unable to meet the increasing demand for NLP capabilities in this field. Therefore, we propose FinT5, the largest Chinese financial pre-trained language model to date, based on the T5 architecture, with 220 million parameters for the base version and 1 billion for the large version.
Furthermore, NLP tasks in the financial industry focus primarily on information extraction, requiring models with strong capabilities for understanding and memorizing entity knowledge. Although studies have shown that PLMs pre-trained on large-scale corpora already possess some of these capabilities, shortcomings remain. To address this issue, many studies have used knowledge-enhanced pre-training methods to improve PLMs' understanding and memorization of entity knowledge. However, these methods mostly target BERT-like models and lack strategies designed for T5 models. To improve T5's performance on financial NLP tasks, we propose a concise knowledge-enhanced pre-training method based on the T5 model's text-to-text paradigm.
In addition, another challenge for Chinese financial NLP is the lack of corpora. The scale and diversity of corpora play an essential role in language model pre-training (Xu et al., 2020; Raffel et al., 2019; Gao et al., 2020). However, existing Chinese financial corpora are small in scale, poor in diversity, and not open, as shown in Table 1. To solve this problem, we first need to determine the text types that a qualified Chinese financial corpus should cover. To this end, we collected almost all existing Chinese financial NLP tasks and summarized their text sources, as shown in Table 2. Based on the source distribution of these tasks, we determined the range of text types to collect. As a result, we collect and release a large-scale Chinese financial corpus named BBT-FinCorpus, with about 300 GB of raw text drawn from four different sources, enhancing its diversity and covering most text sources of Chinese financial NLP tasks.
The widespread use of benchmark evaluations is a key driving force that has greatly improved and rapidly iterated PLMs. These evaluations use a single score to assess model performance across multiple tasks, enabling direct and comprehensive comparisons between pre-trained language models. Existing English PLMs use the general benchmark evaluations GLUE (Wang et al., 2018) and SuperGLUE, while the general benchmark evaluation for Chinese PLMs is CLUE (Xu et al., 2020). Almost all PLMs participate in these evaluations to compare their performance with other models. However, there is no publicly available benchmark for Chinese financial NLP, which makes it difficult to compare existing pre-trained language models on different task sets and hinders the rapid improvement of PLM performance in the Chinese financial domain.
To address this issue and promote research in the financial domain, we propose CFLEB, the Chinese Financial Language Understanding and Generation Evaluation Benchmark, consisting of six datasets covering language understanding and generation tasks. These datasets encompass a diverse range of text genres, dataset sizes, and levels of difficulty, and more importantly, emphasize challenges that arise in real-world scenarios.
Our contributions are summarized as follows:
• We introduce BBT-FinT5, a state-of-the-art financial Chinese PLM with large-scale parameters and knowledge-enhanced pre-training.
• We provide BBT-FinCorpus, a comprehensive and diverse financial Chinese corpus.
• We propose BBT-CFLEB, a benchmark for evaluating Chinese language understanding and generation in the financial domain.
2 Related Work
Domain-specific PLMs and Corpora
PLMs have achieved state-of-the-art performance in many NLP tasks (Devlin et al., 2018; Raffel et al., 2019). However, when applied to domain-specific tasks, models pre-trained on general corpora often produce unsatisfactory results due to the difference in word distribution between general and specific domains (Gururangan et al., 2020; Gu et al., 2021). To better adapt a language model to a target domain, pre-training on the corpus of the target domain has been proposed (Gururangan et al., 2020). For domains with abundant unlabeled text, such as biomedicine, pre-training from scratch yields substantial gains over continual pre-training of general-domain language models (Gu et al., 2021). Consequently, many domain-specific PLMs have been proposed and pre-trained on their respective corpora.
In the field of financial NLP, domain-specific pre-trained language models (PLMs) have demonstrated their superiority over general-domain PLMs. For instance, Araci (2019) and Yang et al. (2020) pre-trained BERT on English financial news and communications, respectively, and outperformed competitive baselines on financial sentiment analysis tasks. In the context of Chinese financial NLP, Hou et al. (2020) pre-trained BERT on Chinese financial news, analysis reports, company announcements, and encyclopedias, and evaluated it on news classification, sentiment analysis, and named entity recognition tasks. Furthermore, the Chinese PLM Mengzi was further pre-trained on a 20GB financial corpus, demonstrating its effectiveness on multiple downstream tasks. Table 1 summarizes the characteristics of typical PLMs and their corpora in the financial domain. It can be observed that both the scale of our model and that of our corpus exceed existing works.
Knowledge Enhanced Pre-training
Although PLMs can acquire rich linguistic knowledge from pre-training on large-scale corpora, many studies have shown that PLMs still have shortcomings in entity knowledge understanding and memory, as the distribution of entity knowledge in unfiltered corpora is sparse and long-tailed. Therefore, PLMs can benefit from knowledge-enhanced pre-training methods that strengthen entity knowledge understanding and memory.
For example, Ernie (Sun et al., 2019) is designed to learn language representation enhanced by knowledge masking strategies, which includes entity-level masking and phrase-level masking. The disadvantage of this approach is that it can only help the model better learn existing entity knowledge from the corpus, without addressing the issues of sparse and long-tailed distribution of entity knowledge in the corpus.
Ernie 3.0, introduced by Sun et al. (2021), incorporates the universal knowledge-text prediction (UKTP) task. This task involves a pair of triples from a knowledge graph and their corresponding sentences from an encyclopedia, where either the relation in the triple or the words in the sentence are randomly masked. In order to predict the relation in the triple, the model must identify the head and tail entities mentioned in the sentence, and determine the semantic relationship between them.
The limitation of this approach is that it only masks the relation in the triple and not the entities, which can hinder the learning of entity representations. Moreover, distant supervision has a certain amount of noise, which means that the relation in the triple may not necessarily appear in the sentence (Smirnova and Cudré-Mauroux, 2018). Therefore, masking only the relation and predicting it can have a strong negative impact on the model. Although the above methods have made some progress, they are all designed for BERT-like models.
To our knowledge, there is currently a gap in knowledge enhancement pre-training methods available for T5-like models.
Domain-specific NLP Benchmarks
Various domain-specific NLP benchmarks have been proposed to compare the ability of different methods in modeling text from specific domains in a fair manner. The BLUE benchmark (Peng et al., 2019b) evaluates the ability of models in biomedical text mining through five tasks. The BLURB benchmark (Gu et al., 2021) further focuses on clinical domains by removing two unrelated tasks and includes a wider range of biomedical applications. Despite these efforts, a comprehensive set of benchmark tasks for training, evaluating, and analyzing financial PLMs is still largely unexplored. Currently, the FLUE (Shah et al., 2022) is the only benchmark for the financial domain, consisting of five tasks specifically designed for English financial text. However, we are the first to construct a comprehensive set of benchmarks for Chinese financial text, covering a range of language understanding and generation tasks that differ from previous works.
The Corpus: BBT-FinCorpus
We build FinCorpus, the largest corpus in the Chinese financial domain, in order to obtain a superior pre-trained language model. Section 3.1 covers how we decided on the corpus contents. We then collected, filtered, and organized the data to obtain FinCorpus, as elaborated in Sections 3.2 and 3.3.
Coverage Confirmation of the Corpus
We believe that, since the purpose of domain pretraining is to help models better understand domain texts and perform domain tasks more effectively, it is essential to observe the text distribution of domain tasks to determine the coverage of the corpus. The domain corpus should cover the text sources of domain tasks as much as possible to enhance the model's understanding of the tasks. To this end, we first collected almost all Chinese financial NLP task datasets available on the Chinese internet in recent years, including several datasets used in this study, and their text sources, as shown in Table 2.
It can be seen that the text sources of these financial NLP datasets are mainly concentrated in financial news, company announcements, research reports, and social media. For financial news, we chose the largest financial news websites on the Chinese Internet for crawling, namely Sina Finance, Tencent Finance, Phoenix Finance, 36Kr, and Huxiu.
Crawling and Filtering of the Corpus
We used a proxy-based distributed crawler to collect public web pages and filtered them using a series of rules (Raffel et al., 2019; Yuan et al., 2021).
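A minimal sketch of such rule-based filtering is given below, in the spirit of the cleaning heuristics of the cited works; the concrete thresholds and the exact rule set used for BBT-FinCorpus are not specified here, so every value in the sketch is an assumption.

```python
import re

TERMINAL_PUNCT = tuple('。！？…"”.!?')

def clean_page(text, min_chars=200):
    """Keep only sentence-like lines and drop pages that end up too short."""
    lines = [re.sub(r"\s+", " ", ln).strip() for ln in text.splitlines()]
    lines = [ln for ln in lines if ln.endswith(TERMINAL_PUNCT)]
    cleaned = "\n".join(lines)
    return cleaned if len(cleaned) >= min_chars else None
```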
Description of the Corpus
After crawling, cleaning, and processing, we obtained FinCorpus, a large-scale Chinese financial domain corpus that contains four types of language material:
• Corporate announcements. These are the announcements released by all listed companies in China over the past twenty years. The original data is in PDF format, with a total size of about 2TB. Using a PDF parser, we converted the PDF files into text files, resulting in a total size of 105GB.
• Research reports. These are research reports issued by investment institutions such as securities firms and investment banks on macroeconomic issues, sectors, industries, and individual stocks, analyzing the current status and future development trends of the research object. The original data is in PDF format, with a total size of about 1TB. After conversion, the total size of the resulting text files is about 11GB.
• Financial news. These are the financial news articles from the past five years crawled from websites including Sina Finance, Tencent Finance, Phoenix Finance, 36Kr, and Huxiu. After cleaning, the total size of the resulting text files is about 20GB.
• Social media. These are the posts from all stockholders and bloggers published on stock bar and Xueqiu website over the past twenty years. After cleaning, the total size of the resulting text is about 120GB.
The corpus from the above four sources covers essentially all text types common in Chinese financial NLP.
The Large PLM: BBT-FinT5
To raise the baseline performance of Chinese financial NLP and foster the growth of the open-source community in this domain, we introduce the FinT5 model. Its architecture and pre-training task are consistent with the T5 model (Raffel et al., 2019), and it is pre-trained on BBT-FinCorpus (see Section 3). We chose this architecture for its robust performance on many general benchmarks and for its compatibility with both understanding and generation tasks through the text-to-text paradigm, which facilitates transfer learning. Our experiments demonstrate that FinT5 significantly outperforms T5 trained on a general corpus.
In this section, we first describe the architecture and pre-training task of the T5 model. Then we outline the pre-training acceleration method based on DeepSpeed, and finally introduce the knowledge enhancement pre-training method that we propose for the T5 model, which is based on triple masking.
Pre-training Model Architecture and Task
Raffel et al. (2019) model all NLP tasks in a text-to-text format, which enables the use of a unified network architecture, training approach, and loss function for all NLP tasks and promotes transfer learning in NLP. Building upon this, they conducted a series of comparative experiments and chose to develop a large-scale PLM, T5, based on an encoder-decoder architecture and pre-trained with a masked language modeling (MLM) objective. Specifically, T5 adopts the span masking method proposed by SpanBERT (Joshi et al., 2020), randomly masking contiguous spans covering 15% of the tokens in a sentence rather than independent tokens.
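A simplified, illustrative implementation of this span-corruption objective is sketched below (this is our sketch, not the released training code); the sentinel tokens follow the standard T5 convention, while the span-length heuristic is an assumption.

```python
import random

def span_corrupt(tokens, corrupt_rate=0.15, mean_span_len=3, seed=0):
    """Mask ~corrupt_rate of tokens as contiguous spans, T5 sentinel style."""
    rng = random.Random(seed)
    n_mask = max(1, int(len(tokens) * corrupt_rate))
    masked = set()
    while len(masked) < n_mask:                      # pick spans until enough tokens are masked
        start = rng.randrange(len(tokens))
        for i in range(start, min(start + mean_span_len, len(tokens))):
            masked.add(i)
    source, target, sid, i = [], [], 0, 0
    while i < len(tokens):
        if i in masked:
            sentinel = f"<extra_id_{sid}>"
            sid += 1
            source.append(sentinel)
            target.append(sentinel)
            while i < len(tokens) and i in masked:   # copy the masked run to the target
                target.append(tokens[i])
                i += 1
        else:
            source.append(tokens[i])
            i += 1
    return source, target

src, tgt = span_corrupt("中国 人民 银行 今日 发布 金融 统计 数据 报告".split())
```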
Pre-training Acceleration
We use the optimizer state parallelism and gradient parallelism implemented by DeepSpeed (Rasley et al., 2020) to accelerate the pre-training process. In particular, we found that using the BFLOAT16 (Kalamkar et al., 2019) half-precision floating-point format for optimization effectively solves the gradient overflow problem that occurs when training with the FP16 half-precision format, without the need to repeatedly adjust gradient scaling coefficients and other hyperparameters. Kalamkar et al. (2019) pointed out that, in the training of deep neural networks, the value range (i.e., exponent range) of the floating-point numbers used to represent the parameters is more important for training stability and performance than their mantissa precision. Therefore, the BFLOAT16 format uses the same eight-bit exponent as FP32, representing the same exponent range, at the cost of having three fewer mantissa bits than FP16. Extensive experiments have shown that this trade-off makes BFLOAT16 as fast and memory-efficient as FP16 while providing training stability and performance close to FP32.
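The sketch below shows what such a DeepSpeed configuration might look like, with ZeRO stage 2 (optimizer-state and gradient partitioning) and BFLOAT16 enabled; the batch-size values are placeholders, not the hyperparameters actually used for FinT5.

```python
ds_config = {
    "train_micro_batch_size_per_gpu": 16,   # placeholder, not the real value
    "gradient_accumulation_steps": 8,       # placeholder, not the real value
    "zero_optimization": {"stage": 2},      # partition optimizer states and gradients
    "bf16": {"enabled": True},              # BFLOAT16 instead of FP16 loss scaling
}
# engine, optimizer, _, _ = deepspeed.initialize(model=model, config=ds_config, ...)
```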
Knowledge Enhancement Pre-training Method Based on Triple Masking
We propose a knowledge enhancement pre-training method based on triple masking (KETM). First, for each triple in the knowledge graph, we use the distant supervision algorithm to obtain sentences corresponding to it. Specifically, for a knowledge triple (head entity, relation, tail entity), if there is a sentence in the encyclopedia that contains both the head and tail entities, we consider this sentence to contain the knowledge described by this triple.
Next, for a sentence and the triple it contains, we concatenate the triple at the beginning of the sentence. In the triple part we randomly mask one element, and in the sentence part we randomly mask a random-length span covering about 15% of the tokens. Finally, we feed the masked triple and sentence into the model and require it to predict the masked elements, as shown in Figure 1. The model is trained to fill in the masked element of the triple based on the two unmasked elements and the partially masked sentence, which helps it better understand and memorize entity-related knowledge.
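A minimal sketch of building one such KETM training example is given below; it is an assumption-based illustration rather than the released implementation, and the sentinel-token names simply follow the usual T5 convention.

```python
import random

def build_ketm_example(triple, sentence_tokens, seed=0):
    """Prepend a (head, relation, tail) triple to its sentence and mask one
    triple element plus one random-length span of the sentence."""
    rng = random.Random(seed)
    parts = list(triple)                       # [head, relation, tail]
    slot = rng.randrange(3)                    # mask exactly one triple element
    triple_answer = parts[slot]
    parts[slot] = "<extra_id_0>"
    span_len = max(1, int(0.15 * len(sentence_tokens)))
    start = rng.randrange(max(1, len(sentence_tokens) - span_len))
    span_answer = sentence_tokens[start:start + span_len]
    masked_sentence = (sentence_tokens[:start] + ["<extra_id_1>"]
                       + sentence_tokens[start + span_len:])
    source = " ".join(parts + masked_sentence)
    target = " ".join(["<extra_id_0>", triple_answer, "<extra_id_1>"] + span_answer)
    return source, target
```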
The Benchmark: BBT-CFLEB
In this section, we first describe the method used for selecting tasks for the benchmark. We then introduce the selected tasks and the three leaderboards, each of which is composed of different tasks.
Task Selection
We propose that for domain-specific NLP evaluation benchmarks, special attention should be paid to their practicality, especially for the financially valuable field, to better reflect the model's ability in practice. Therefore, we use a practicality score to measure the practicality of the tasks we collect. Specifically, we invited financial experts to evaluate the practicality of each task and gave a low, medium, or high practicality rating, only selecting tasks with a high practicality rating as candidate tasks. In addition, we only kept tasks with a clear open-source statement as candidate tasks. Finally, we selected six tasks for BBT-CFLEB in Table 2.
Task Introduction
CFLEB includes six tasks in total, consisting of two language generation tasks and four language understanding tasks. These tasks are as follows:
• FinNL, a financial news classification dataset.
Given financial news articles, the model needs to classify them into up to 15 possible categories, with evaluation measured by F1-Score. The training set contains 8,000 articles, the validation set contains 1,000 articles, and the test set contains 1,000 articles.
• FinNA, a financial news summarization dataset. Given financial news articles, the model needs to generate a summary, with evaluation measured by Rouge (Lin, 2004). The training set contains 24,000 articles, the validation set contains 3,000 articles, and the test set contains 3,000 articles.
• FinRE, a financial news relation extraction dataset. Given financial news articles and head-tail entity pairs, the model needs to classify the relation between entity pairs into up to 44 categories, including the null relation, with evaluation measured by F1-Score. The training set contains 7,454 articles, the validation set contains 1,489 articles, and the test set contains 3,727 articles.
• FinFE, a financial social media text sentiment classification dataset. Given financial social media text, the model needs to classify the sentiment of the text into negative-neutralpositive categories, with evaluation measured by accuracy. The training set contains 8,000 articles, the validation set contains 1,000 articles, and the test set contains 1,000 articles.
• FinQA, a financial news and announcement event question-answering dataset derived from DuEE-fin (Han et al., 2022). Given a financial news or announcement text and a question related to an event mentioned in the text, the model needs to generate an answer to the question based on the text, with evaluation measured by F1-Score. The training set contains 16,000 articles, the validation set contains 2,000 articles, and the test set contains 2,000 articles.
• FinNSP, a financial negative news and its subject determination dataset. Given financial news or social media text and entities mentioned in the text, the model needs to determine if the text contains negative news related to any entity and identify which entity is the subject of the negative news, with evaluation measured by F1-Score. The training set contains 4,800 articles, the validation set contains 600 articles, and the test set contains 600 articles.
Leaderboard Introduction
We have organized the tasks into multiple leaderboards according to different ability requirements (Xu et al., 2020), so that researchers can observe the model's ability rankings from different perspectives. The leaderboards of CFLEB are as follows:
• Overall leaderboard: includes all six tasks.
• Generation ability leaderboard: includes two language generation tasks, FinNA and FinQA.
Experiments
In this section, we first introduce the basic experimental settings, including the PLMs involved in the comparison and the processing format of the tasks in the evaluation benchmark. We then conduct extensive experiments and comparative analyses to validate the effectiveness of the proposed model and method.
Pre-trained Language Models
The models participating in the comparative experiments of this section include:
• GPT2-base (Zhao et al., 2019). A Chinese GPT2 released by Zhao et al. (2019), pre-trained on the general corpus CLUECorpusSmall (Xu et al., 2020).
• FinBERT (Hou et al., 2020). A Chinese BERT for the financial domain released by Hou et al. (2020).
• Mengzi-BERT-base-fin. A Chinese BERT for the financial domain.
• FinT5-large. Our proposed Chinese pre-trained language model for the financial domain, with about 1 billion parameters in total; the pre-training hyperparameters are the same as those of T5-base.
Fine-tuning
For generative models (GPT, T5), we evaluated on all six datasets by modeling every task in the text-to-text format. For BERT-based models, we evaluated on the four language understanding tasks, FinNL, FinRE, FinFE, and FinNSP, using BERT with an additional classification layer for each task.
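For concreteness, the sketch below shows one possible way of casting understanding tasks into the text-to-text format for fine-tuning; the prompt strings are illustrative assumptions, not the benchmark's official prompts.

```python
TASK_PROMPTS = {
    "FinFE": "情感分类：",   # sentiment classification
    "FinNL": "新闻分类：",   # news classification
    "FinNA": "摘要生成：",   # news summarization
}

def to_text_to_text(task, text, label=""):
    """Cast one example into (source, target) strings for T5 fine-tuning."""
    return TASK_PROMPTS[task] + text, label

src, tgt = to_text_to_text("FinFE", "公司三季度净利润大幅增长。", "正面")
```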
Experiment 1: Comparison of Pre-trained Model Architectures
For the two models in the general domain, GPT2-base and T5-base, the pre-training corpora, hyperparameters, and training volume are all the same, but their average scores differ significantly, with T5-base substantially outperforming GPT2-base, as shown in Table 4. This difference is mainly due to the differences in architecture, parameter size, and pre-training method between the T5 and GPT models. This result supports our choice of the T5 architecture.
Experiment 2: Effectiveness of Domain Pre-training
As shown in Table 4, the comparison between the FinT5-base model and the T5-base model indicates that the FinT5-base model pre-trained on FinCorpus significantly outperforms the T5-base model with the same parameter size, demonstrating the effectiveness of domain pre-training and the effectiveness of FinCorpus.
Experiment 3: Superiority Compared to Existing Models in the Domain
As shown in Table 4, in the four language understanding tasks evaluated with FinBERT and Mengzi-BERT-base-fin, FinT5-base significantly outperformed both models, demonstrating the superiority of FinT5 over existing models in the domain.
Experiment 4: Effectiveness of KETM
As shown in Table 4, by comparing FinT5-base-ke with FinT5-base, it can be seen that the knowledge-enhanced pre-training method significantly improves the model's performance on tasks such as relation extraction and news summarization, without significantly compromising the performance on other tasks, thus proving the effectiveness of the KETM method.
Experiment 5: Effectiveness of Parameter Scaling Up
As shown in Table 4, the performance comparison between the FinT5-base and FinT5-large models indicates that the FinT5-large model with one billion parameters performs significantly better than the FinT5-base model, demonstrating the effectiveness of parameter scaling up.
Conclusion
In this article, we introduced three new contributions to the domain of NLP in the context of Chinese finance. We created the largest open-source corpus for this domain, called FinCorpus, which contains a diverse collection of around 300GB of text from four sources. Our FinT5 model is the largest pre-trained language model for the Chinese financial domain, with one billion parameters. To enhance our pre-training method, we developed a unique knowledge-based approach called KETM, | 2023-02-21T02:15:59.206Z | 2023-02-18T00:00:00.000 | {
"year": 2023,
"sha1": "aafae4730b1add0b3e243e011db9ac87428f83cd",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "aafae4730b1add0b3e243e011db9ac87428f83cd",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
38159664 | pes2o/s2orc | v3-fos-license | Galactic Punctuated Equilibrium: How to Undermine Carter's Anthropic Argument in Astrobiology
We investigate a new strategy which can defeat the (in)famous Carter's "anthropic" argument against extraterrestrial life and intelligence. In contrast to those already considered by Wilson, Livio, and others, the present approach is based on relaxing hidden uniformitarian assumptions, considering instead a dynamical succession of evolutionary regimes governed by both global (Galaxy-wide) and local (planet- or planetary system-limited) regulation mechanisms. This is in accordance with recent developments in both astrophysics and evolutionary biology. Notably, our increased understanding of the nature of supernovae and gamma-ray bursts, as well as of strong coupling between the Solar System and the Galaxy on one hand, and the theories of "punctuated equilibria" of Eldredge and Gould and "macroevolutionary regimes" of Jablonski, Valentine, et al. on the other, are in full accordance with the regulation-mechanism picture. The application of this particular strategy highlights the limits of application of Carter's argument, and indicates that in the real universe its applicability conditions are not satisfied. We conclude that drawing far-reaching conclusions about the scarcity of extraterrestrial intelligence and the prospects of our efforts to detect it on the basis of this argument is unwarranted.
1 Strictly speaking, this is just half of Carter's argument in his fascinating 1983 paper. The rest concerns the issue of the number of "crucial" (or "critical") steps in the emergence of intelligent observers on Earth and possibly elsewhere. Although later commentators, like Barrow and Tipler (1986) or Wilson (1994), devoted much attention to this "anthropic" prediction, it lies outside the scope of the present paper. Without going into details, it is straightforward to see how undermining the former argument leads to a substantial erosion of the claims of the latter.
...mention CA approvingly, noticing that (i) it applies to noogenesis (origin of intelligence) rather than biogenesis (origin of life); and (ii) that any finding suggestive of a long future duration of the biosphere will undermine it.
However, it has also been criticized. The two most pertinent criticisms so far are those of Wilson (1994) and Livio (1999), the former mainly from the logico-methodological and the latter from the physical point of view. We shall use elements of their criticisms here, but the bulk of the present remarks have not been presented in the literature so far. Wilson's criticism is mostly methodological; he argues that Carter is wrong in restricting the range of possible relations of t_b and t_* to the three cases listed above, as well as that, on the face of the argument, the fact that we appeared on Earth significantly before the end of the Sun's Main Sequence lifetime is highly unlikely. In addition, Wilson points out that the role of anthropic reasoning in CA is very minor, almost trivial. In several places, Wilson vaguely alludes to some of the empirical inadequacies of CA (see below), but refrains from investigating them further; in the next sections, we shall attempt that task.
The crucial assumption of CA is that there is no a priori reason for a correlation between t_* and t_b. Livio (1999, 2005) has pointed out that this is the main weakness of the argument; processes which induce correlations between the two timescales, like the oxygenation of the atmosphere on terrestrial planets, undermine it. Notably, if stellar UV radiation prevents the appearance of land life due to high absorption by nucleic acids and proteins, then it is critical that a sufficient ozone layer is built up before land life appears. This, in turn, might induce a correlation between the astrophysical and biological timescales, since various stellar masses (and thus various lifetimes) will generate different amounts of UV radiation and dictate various rates of oxygenation of the atmospheres of hypothetical planets in their respective habitable zones. With the fall of this essential assumption of independence of the two timescales, CA is doomed as well.
We would like hereby to express an even more radical criticism of Carter's argument, based on questioning its basic premises, and to show that the reasoning behind it is inherently flawed, at least without additional assumptions of rather questionable validity. The main idea is quite simple: Carter's argument relies on the assumption that there are fixed (or at least well-defined) and roughly known timescales for at least the astrophysical processes. In addition, it is assumed that the relevant biological timescale is well-defined, albeit unknown, as well. We reject these assumptions, and intend to show that there is sufficient physical justification to propose alternatives. These alternatives are more complicated than Carter's simplification, but this is the necessary price to pay to be in accordance with the tremendous achievements of modern astrophysics and astrobiology during the last decade or so. (In addition, Carter requires that the relevant timescales are independent, but this was criticized by Livio and others, and is only a part of the present argument.) These alternatives encompass (i) external physical forcings acting on local biospheres all over the Galaxy, and (ii) elements of complex evolution, namely quasi-periodicity, stochasticity, change of macroevolutionary regimes, and secular evolution with cosmological time. In other words, the core element of CA, the belief that "the probability of intelligence increases monotonically with time" (Barrow and Tipler 1986; see §3 below), is (A) just a case of special pleading, and (B) likely to be wrong on empirical grounds.
In a very limited form, this central argument has been sketched in Dragićević and Ćirković (2003). As Einstein memorably used to say: "Everything should be made as simple as possible, but not simpler." [emphasis by the present authors] In particular, we believe that CA violates the second part of this important methodological guideline by failing to take into account timescale correlations induced by both the secular evolution of the Galaxy and sudden catastrophic events; this stands in full accordance with Whitehead's maxim quoted above, when applied to recent and current debates on SETI and related projects. In addition, the issue is a microcosm of several traditional questions in the philosophy of science in general, and the philosophy of biology in particular: inevitability vs. contingency, gradualism vs. catastrophism, local vs. global influences on the biosphere, the position of intelligent observers on the "tree of life", and some others recur in our study of CA and related topics.
It is sometimes stated that CA offers an example of the scientific nature of anthropic reasoning, by virtue of its falsifiability: it offers a prediction that our current astrobiological and SETI efforts will fail and that we shall not discover extraterrestrial intelligent beings in the Milky Way. Formally speaking, this does indeed make the hypothesis scientific, but it can be argued that in this case the meaning of falsifiability is stretched beyond its reasonable usage. For instance, the statement "There are no intelligent alien species in the Galaxy" presumes our capacity to always discriminate between intelligent and non-intelligent aliens with certainty, which can hardly be taken for granted (cf. Raup 1992; Lem 1976, 1987). Even if it were, the timescales for this kind of falsification are quite outstanding; in fact they are at least equal to the often-cited Fermi-Hart timescale for visiting (or colonizing) all stars in the Milky Way. Even the most ardent Popperian should pause when faced with such a remote prospect of falsification, especially when it is not necessary to doubt the anthropic reasoning itself in order to contest a specific argument using many auxiliary assumptions.
It is important to emphasize that we do not intend to make a case for the existence of ETIs in the Milky Way. That is a quite distinct (and, arguably, much more formidable) task. Our aim is simply to show how a particular anti-ETI argument, strengthened, unfortunately, by endless uncritical repetition in both the research and the popular literature, can be undermined. Only insofar as our lack of credence in the existence of Galactic ETIs is based on CA can it be said that our study offers indirect support for ETI plausibility. There are, however, other anti-ETI arguments, notably the Tsiolkovsky-Fermi-Viewing-Hart-Tipler argument, usually known simply as Fermi's paradox (for the best reviews see Brin 1983; Webb 2002), which are beyond the scope of the present study and which could, in principle, support ETI skepticism even if CA is dismantled. In our view, the entire problem of the existence or absence of extraterrestrial life and intelligence remains completely open.
Are there well-defined timescales?
There are many cases in everyday life, as well as in science, where apparently independent quantities are of similar or even the same order of magnitude. In an amusing example in his classic textbook, Peebles (1993, p. 366) jovially notes the coincidence between the Eddington limit on the luminosity of a star per unit mass and Peebles' own luminosity per unit mass. Although we are today virtually certain that the cosmic microwave background is of primordial origin, this does not invalidate the famous coincidence noted by Sir Fred Hoyle (1994; see also Lightman and Brawer 1992) that the quantity of helium in the universe is almost exactly such that its synthesis through the fusion of hydrogen in toto would produce about the same amount of energy as contained in CMB photons. The timescale for reading this paper is of the same order of magnitude as the variability timescale of the ultraluminous X-ray source M74 X-1 (and, indeed, most of the microquasars in the local universe!). It would be preposterous and epistemologically naive to assume that in each instance of similar timescales the phenomena have to be causally linked. Such coincidences are so ubiquitous in our complex universe that entire pseudo-sciences have long ago arisen around some of them (measurements of the Great Pyramid of Egypt, for instance); they are a reflection of humanity's psychological need for finding causal links and explanations even where all there is are, in G. Udny Yule's famous term, "nonsense correlations". By far the most correlations in the world are of this noncausal nature. In Peebles' words, "for practical purposes it is only an accident of essentially unrelated numbers." But CA is an example of exactly the opposite extreme: denial of the possibility of such coincidences actually occurring, contrary to what both the history of science and our everyday experience tell us. According to Carter, even if they are observed to occur, as is allegedly the case in the Solar System, this must not reflect anything deeper than a consequence of our restricted viewpoint. Thus, Carter ignores some sound advice of Agatha Christie's famous Miss Marple: "Any coincidence is always worth noticing... you can throw it away later if it is only a coincidence." In effect, CA spectacularly underestimates the vast complexity and intricacy of Nature.
Even if there is no causal link between t_b and t_*, it would be erroneous to reject the t_b ~ t_* case as Carter does. How many orders of magnitude does this region possess? Are there different external constraints on the timescales, precluding them from having values in the entire (0, +∞) range? This has also been criticized by Wilson (1994), though without appeal to physical reality; such an appeal, as we argue here, makes the case against Carter's thesis significantly stronger. We shall now argue that, due to the oversimplification, there are additional timescales which make t_b ~ t_* the most interesting case. It then becomes an additional benefit that such a choice would make the Earth truly unexceptional and thus in good agreement with the Copernican principle.
Let us first redefine the astrophysical timescale t_* as the timescale of continuous habitability of a terrestrial planet in the Milky Way galaxy. The difference may sound pedantic, but it is in fact crucial once we recognize that (astro)physical processes other than the evolution of its parent star can influence the habitability of a planet. In particular, the need to abandon the "closed box" astrobiological picture of Earth (and terrestrial planets in general) is emphasized in a number of recent studies from different points of view. Most pertinently, Lineweaver et al. (2004) investigate the concept of the Galactic Habitable Zone (henceforth GHZ), introduced by Gonzalez et al. (2001), comprising the stars in the Milky Way potentially possessing habitable planets with complex life (for a fine review, see Gonzalez 2005). In both astrobiology and the Earth sciences, such a paradigm shift toward an interconnected, complex view of our planet has already been present for quite some time in both empirical and theoretical work (e.g., Clube and Napier 1990; Cockell 1998; Burgess and Zuber 2000; Lenton and von Bloh 2001; Franck et al. 2000, 2001; Carslaw et al. 2002; Iyudin 2002; Gies and Helsel 2005; Chyba and Hand 2005).
(It is important to understand here that the very talk about habitable zones makes the assumption of independent events at best suspicious. Habitable zones are defined as spatio-temporal regions where conditions for life arise due to correlated processes. As far as prospects for SETI are concerned, the relevant zone is the GHZ, which occurs as a consequence of roughly understood processes of chemical and dynamical evolution of the Milky Way and its stellar populations. Even more telling in this respect is the concept of the Cosmic Habitable Age (CHA), introduced by Gonzalez (2005). Insofar as habitable zones are an unavoidable part of the modern astrobiological discourse, any argument based on the independent development of biospheres automatically loses force.)

Before we analyze the particulars of these external influences and the consequent timescale forcing, we wish to emphasize that the very idea of Carter that Main Sequence stellar lifetimes are the only relevant (astro)physical timescales is already a dangerous simplification. There are some rather uncontroversial counterexamples, in contrast to some of the ideas considered below in more detail, in both the past and future history of our planet. For instance, the "Snowball Earth" episodes occurring at least twice in the geological past (Kirschvink et al. 2000; Hoffman et al. 1998) represent global catastrophes which may have annihilated all life except for the small habitats around marine volcanoes and hydrothermal vents. It is entirely plausible that similar episodes of severe global glaciation could have annihilated all life on Earth-analogs elsewhere, so that the "astrobiological clock" gets a complete reset, possibly even without any external causative agent, but due to an unfortunate combination of the movement of continental plates and Milankovitch cycles. Similarly, it seems clear that the geophysical processes governing the carbon-silicate cycle are sustainable for a time shorter than the Main Sequence timescales on at least a fraction of potentially habitable terrestrial planets in the Milky Way (e.g., Lindsay and Brasier 2002; Gerstell and Yung 2003; Ward and Brownlee 2002). This was not known at the time of Carter's 1983 article. Any such large-scale trends make CA a posteriori less appealing, since they induce further correlations and have their own quasi-deterministic timescales, thus undermining the independence assumption.

4 As kindly pointed out to us by Prof. David Grinspoon, the first suggestion of anything even remotely similar to the GHZ was given by the great author and philosopher Stanislaw Lem in his One Human Minute (Lem 1986). Lem obviously foreshadowed and inspired much of the contemporary research in astrobiology, including the present study (esp. Lem 1987).

5 A minor additional argument to the same effect may come from the panspermia hypotheses which, although quite speculative and uncertain, have experienced a recent resurgence. Thus, Napier (2004), as well as Wallis and Wickramasinghe (2004), have constructed working panspermia models in agreement with all known astrophysics.

Figure 1. Schematic presentation of possible relationships of the two independent timescales. With t_b we denote the median of the biological timescales on different planets of the GHZ. We may assume that P_X corresponds to some measure of the extinction probability in the most general sense.
While cases (a) and (b) correspond to the situation envisaged by Carter's rendition of anthropic reasoning (in particular, case (a) is the situation encapsulated by CA), we argue that these situations are unjustified simplifications.

6 Parenthetically, this refutes the claim that we have even a single data point of quite unambiguous meaning. This, in our view, underlines the hubris of those who use this (uncertain) point to grandiloquently conclude that we are the only intelligent species in the Galaxy.

7 However, for a view ascribing even the "Snowball" glaciations to our astrophysical environment, see Pavlov et al. (2005).
Physical reality corresponds to cases at least as complex as cases (c) and (d), where we have the environment monotonically becoming either more hostile or friendlier to life. In these cases, obviously, we have to take into account another timescale, which describes the rate of increase or decrease of the extinction events. This is related to the important issue of biotic feedback. Another consequence of discoveries in the Earth sciences and astrobiology in the last decade or so is that the existence of life on Earth tends to make it more habitable both complexity- and time-wise. Simple lifeforms induce changes in the environment conducive to the appearance of more complex lifeforms, and, even more pertinently from the present point of view, the existence of both simple and complex life tends to increase the timespan of the habitable Earth beyond the bounds set by the Main Sequence evolution of the Sun (Lenton and von Bloh 2001). This not only sheds some new light on the controversial Gaia hypothesis, but also shows that the probability of observing an inhabited planet within a given sample at a particular time is not a linear function of the probability of biogenesis, as one would naively expect (and as is assumed in CA). This does not yet represent an argument against CA, since the latter takes the total lifetime of a star on the Main Sequence as the ultimate limit on the timespan of the biosphere, which remains true irrespective of the feedback. However, it does much to weaken the spirit, if not the letter, of CA, since it shows that the probability of finding life at a particular place cannot be a linear function of time ceteris paribus. (We shall return to this point below.) We do not need to emphasize that the biological timescales are still very poorly understood. There are some claims that the timescale of biological evolution on Earth is fairly typical. Russell (1983, 1995) claims that the appearance of intelligent beings occurs on average about 3 Gyr after the initial stages of planetary formation. Much shorter timescales have been proposed: McKay argues that plate tectonics actually delayed the appearance of complex lifeforms on Earth by keeping the level of oxygen low for a long time. According to that idea, the duration of the Precambrian could be as low as 10^8 yrs on planets without plate tectonics, such as Mars (McKay 1996). This would, in turn, significantly accelerate the emergence of sufficient complexity as a precondition for intelligence.
It is in this astrobiological key that we can reiterate the part of Wilson's (1994) criticism of CA contained in the following passage: At first glance, the claim that t_e should not differ from a given value of t̄ seems to be equivalent to the claim that t̄ should not differ from a given value of t_e. But these claims are fundamentally different. The reason the latter one is invalid is that t̄, insofar as it represents the time that evolution is intrinsically most likely to require, is a probabilistic or statistical quantity. Our knowledge of the value of such a quantity cannot be significantly enhanced by the evidence of a single case, especially the nonrandomly chosen one of our own evolution. Only if we were to become aware of a large number of actual cases of extraterrestrial evolution and their corresponding timescales, or if we were to advance our knowledge of the timescales governing various evolutionary mechanisms, could we provide a reasonable estimate of t̄, and perhaps eliminate values of t̄ much less than τ_0. But given only t_e ~ τ_0, we cannot on the basis of this single evidential sample conclude much at all about t̄. We certainly cannot eliminate, as Carter thinks we can, the possibility that t̄ << τ_0.
Hidden uniformitarianism
Consider the different situations described in Figure 1. With obvious simplification, we imagine the extinction probability of lifeforms as generally very low, except for short "spikes" which may correspond either to recurring events (similar to the mass extinction episodes in Earth's history) or to single adversary events (for instance, the end of stellar evolution), i.e. everything which is subsumed in Carter's astrophysical timescale t_*. Now, if there is a well-defined biological timescale, CA can be represented as a choice between the cases shown in (a) and (b). Clearly, CA suggests that we should accept case (a), in which the probability of the appearance of life on an average terrestrial planet in the Galaxy is minuscule. However, what about cases (c) and (d)? It is clear that in these cases the governing timescale is the one associated with the increase or decrease in the frequency of the extinction spikes. It follows that (1) "Carter's criterion" for the relationship between the biological and astrophysical timescales is time-dependent and not universal; and (2) we may need additional timescales, linked to all astrophysical processes which can cut short or impede biological evolution. The problem is that, in order to accept a picture like (c) or (d), we need to abandon one of the most cherished prejudices of the nineteenth and most of the twentieth century, which is uniformitarianism of rate (or gradualism). It is not only that the habitability of an astrobiological site is not constant in time, but also that the frequency of important events ("extinction spikes") changes with the evolution of the Galactic system. We shall argue below that the situation shown in Fig. 1.d is the best model for astrobiology and that in such a framework CA fails. As we have said, there is a host of recent indications that the Solar System is in fact an open system, strongly interacting with its Galactic environment (e.g., Rampino and Stothers 1985; Rampino 1997; Leitch and Vasisht 1998; Shaviv 2002; Melott et al. 2004; Pavlov et al. 2005; Gies and Helsel 2005). Interactions induce correlations; correlations ruin arguments based on independence assumptions and coincidences. Why is that simple fact so widely shunned in favor of a prejudice representing essentially a return to the outdated nineteenth-century Lyellian gradualism? Barrow and Tipler (1986) succinctly state this critical uniformitarian assumption, for which we need a specific label:

THESIS (*): "the probability of intelligence increases monotonically with time" (p. 559, our emphasis)

We deny this assumption, for reasons to be discussed below. It is important to understand that (*) is the central plank of CA; with it gone, the whole edifice crumbles.
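As a purely illustrative toy calculation (our own sketch, not a model taken from the literature), the snippet below shows how strongly the chance of completing a fixed "biological" interval t_b free of resetting spikes depends on when that interval starts, once the spike rate is allowed to decay with Galactic time as in case (d) of Figure 1; all rates and timescales are arbitrary assumptions.

```python
import numpy as np

def survival_probability(t_start, t_b=4.0, decay=3.0, initial_rate=2.0):
    """P(no resetting spike during [t_start, t_start + t_b]) for a Poisson
    process whose rate (per Gyr) decays exponentially with Galactic time."""
    expected_hits = initial_rate * decay * (
        np.exp(-t_start / decay) - np.exp(-(t_start + t_b) / decay))
    return np.exp(-expected_hits)

for start in (0.0, 4.0, 8.0):                 # Gyr after the onset of the regime
    print(start, round(float(survival_probability(start)), 4))
```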
First, (*) encapsulates an anthropocentrism unwarranted even in the simplest local case of the biological evolution on Earth. It is by no means clear (and many evolutionary biologists have denied it) that the history of the terrestrial biosphere represents anything even remotely describable as a "monotonic" approach to intelligence. Even if we take the most merciful interpretation of "monotonically", which allows for the ever-present paleontological stasis, there is simply no indication in the history of life that intelligence is inherently more probable today than, say, in the middle of the Cretaceous or 10^8 years from now. (Quite contrary to the seeming intention of Barrow and Tipler, it is anthropic reasoning which tells us that we should not invoke our presence to argue for the thesis (*); it is a trivial observation that our discussing the subject matter shows that intelligence exists now, while it does not tell us anything about its intrinsic probability. Moreover, the same fact suggests that the intelligence posing these questions is not very old, at least compared to geological or astrophysical timescales, since it is hardly conceivable that any significantly older intelligent species would not have much better insight into the nature of intelligence. Anthropic reasoning, when properly applied, is not just disteleological, but actively anti-teleological; see, e.g., Bostrom 2002.) Second, the problem is not that the probability of intelligence increases with time ceteris paribus. It is quite clear that the probability of observing any particular physically possible phenomenon at least once increases with cosmic time; the very statistical nature of our world ensures that. Of course, we need to take into account our cosmological knowledge: if the lifetime of our universe is finite (as was believed in the now mostly discredited recollapsing "Big Crunch" models), then most physically allowed configurations of matter will simply have no time to arise accidentally due to statistical fluctuations. Contrariwise, if time is infinite and the world is finite and stationary on large scales (as was commonly thought in the time of Boltzmann, say), then any configuration of matter in accordance with the general conservation laws will be achieved countless times, no matter how a priori improbable. But this is entirely different from the claim that we have a monotonic "ascent" towards intelligence under very specific (and in the cosmological context very atypical) conditions required for biogenesis and biological evolution. Such a monotonic approach entails some specific causal reason, since both the spatial and temporal scales we are considering here are many hundreds of orders of magnitude smaller than those required for the random assembly of even the simplest living systems. However, such a causative agent has not been found, and is most likely to join the outdated (vitalism) and/or compromised new-age notions (morphogenetic field). In Stephen Jay Gould's (1984) words: "the failure to find a clear 'vector of progress' in life's history... [is] the most puzzling fact of the fossil record." And if that is true for a single, by astrobiological measure, physically stable and uniform terrestrial biosphere, we have grounds for accepting it a fortiori for the set of (actual or potential) biospheres comprising the GHZ.

Figure 2. Is mean really the message?
It is a notorious truism that the timescale of measurement dictates the amount of information we can get about evolving phenomena: the longer our measurement lasts, the less useful information we get, due to the averaging inherent in any measurement. In the context of CA we can regard noogenesis as an extremely slow type of "measurement": thus, it is likely that the physical conditions all over the GHZ vary substantially on shorter timescales, which precludes getting information on the basis of a single astrophysical timescale. Therefore, the thesis (*) represents a particularly illuminating example of what biologists came to call the chain-of-being fallacy: the quasi-Victorian idea that the history of the biosphere is a steady, linear progression through more and more complex forms, culminating in whiskey-sipping, golf-playing, white-clad and well-armed gentlemen (occasionally subjugating uncouth barbarians overseas). As shown by Gould in the very first chapter of Wonderful Life, this iconography has been conventionally employed in support of various scientifically wrong, but socially comforting, ideological positions (Gould 1989). Although fierce debates on the issue of "progress", or large-scale evolutionary trends in general, continue to this day (e.g., Dawkins 1989; Dennett 1995; Gould 1996, 2002; Shanahan 1999, 2001; Knoll and Bambach 2000; Carroll 2001; Conway Morris 1998), both sides do agree that the chain-of-being picture is untenable. No serious biologist will today defend the idea that humankind is the pre-determined pinnacle of Nature. (Sadly, this realignment has not been followed by the popular press, especially the part with its own ideological axe to grind.)

10 This indeterminism should not be confused with the erroneous view that biological (or at least Darwinian) evolution proceeds randomly in the metaphysical sense. The (in)famous "Boeing 747" argument of Sir Fred Hoyle has been often cited and misused by creationists and other pseudoscientists as the "proof" of intelligent design or some similar ideologically concocted scheme. Fortunately enough, the argument is demonstrably wrong; it has been refuted many times, notably by Dawkins (1989) and Dennett (1995).
Ironically, distinguished biologists who have opposed SETI, like Mayr and Simpson, devoted a large part of their professional careers to debunking the chain-of-being fallacy! Notably, the adaptationist paradigm, of which Mayr is one of the founding fathers, even hesitates to ascribe any particular importance to intelligence, or to proclaim it different from any other trait in nature. Within the framework of adaptationism, there is no a priori difference between intelligence and, say, the spiral form of the shell of Nautilus. Now, just imagine rephrasing (*) in the following form:
THESIS (*''):
"the probability of a spiral shell with a pitch angle between 23° and 25° increases monotonically with time".
In our view, (*'') is almost obvious nonsense; why should we then, if we discard sentimentality, anthropocentrism and possible extrascientific agendas, give better treatment to (*)? If, to use Stephen Jay Gould's famous metaphor, "the tape was rewound" to the time of the Cambrian Explosion, it would be highly unlikely for humans to re-appear after sufficient time has elapsed (Gould 1989). Gould forcefully argued in several books and papers (Gould 1985, 1987, 1989, 1996) that the very notion of "progress" of the terrestrial biosphere is highly suspicious, culture-laden and with very slim empirical support (if any). How much more pretentious and vacuous does it sound when applied to the immense diversity that other Galactic environments may present! If the thesis (*) is an interpretation of "progress" or "ascent", the same classical criticisms apply. McKay (1996) playfully suggests that dinosaurs could in fact have developed not only intelligence, but a spacefaring civilization as well, without our noticing it at present! Strong erosion in the course of 65 Myr following the giant impact could easily obliterate all traces of such a prediluvian culture. Similarly, if humans were to go extinct soon (perhaps as a result of runaway climatic catastrophe, nuclear winter, or a misuse of biotechnology or nanotechnology 11), in a few million years all traces of human civilization would have been obliterated (except for the satellites in stable orbits and a couple of long-range space probes). Would the next intelligent species, if it ever arises subsequently, have the same cultural predilection for (*) as we have?
Now, if we conclude that (*) is unjustified for the simplest local model of biological evolution on a single planet, how likely is it that (*) will apply to a large set of habitable planets comprising GHZ? Even if the planets were isolated, "closed-box" idealizations (which is unrealistic), the uniform behavior implied by (*) is as probable as the sudden motion of molecules of homogeneous air in a room into a 1 m^3 volume in a corner. As Boltzmann, Zermelo, Culverwell, and others already knew in the nineteenth century, such conspiratorial behavior is highly unlikely on statistical grounds, without going into the detailed physics of thermodynamical systems (e.g., Steckline 1983). Per analogiam, without knowing any details of the particular astrobiological development in each habitat, we might argue that the uniformity of evolution expressed by (*) is improbable. Of course, our discussion here and elsewhere pertains only to the evolution of hypothetical biospheres comprising GHZ, which are of interest to SETI studies; if we take into account other galaxies, clusters, etc., the situation may be both quantitatively and qualitatively different.

11 An option pertinent for obvious reasons, and very relevant for contemporary astrobiologists (e.g., Bostrom and Ćirković 2007).
When we reject hidden uniformitarianism, even the t_b << t_* case of Carter's dilemma is not to be rejected so lightly. An obvious counterexample in this respect is the much-debated "impact frustration" of early lifeforms (Raup and Valentine 1983; Maher and Stevenson 1988; Oberbeck and Fogleman 1989). It is entirely conceivable that early terrestrial life appeared independently several times, only to be destroyed by catastrophic impacts during the epoch of the so-called late heavy bombardment. Only when the frequency of impacts decreased sufficiently (perhaps after the end of the late heavy bombardment) were early lifeforms capable of spreading, diversifying and evolving in order to produce the subsequent rich and complex terrestrial biosphere. If this was so, we need to distinguish t'_b, by which we denote the timescale for biogenesis, from the "true" t_b (i.e., the noogenesis timescale); while we still cannot infer anything about the latter, if for any reason there were an upper limit to t'_b (related perhaps to the chemical evolution of Earth's atmosphere or surface and the increase of Solar radiation flux), it could be perfectly conceivable that life could "miss the last train" due to the impact interruptions. This scenario is highly instructive, since it shows (A) strong coupling between lifeforms and the physical environment, and (B) timescale forcing, through which a physical timescale (the interval between major impacts) actually becomes the only relevant quantity. While we cannot treat this topic in detail here, this example of a new development in astrobiology bolsters the conclusion that uniformitarianism and continuous habitability of a planet are just convenient and oversimplifying myths.
A plausible alternative: global extinction mechanisms
If we accept that CA is at least severely limited by the non-uniformitarian history of life, it is natural to ask for more details about the major non-uniformities which impede its "monotonic" progress. 12 Fortunately, modern astrophysical research offers much in this respect. An important paper of Annis (1999) opened a new vista by introducing (though not quite explicitly) the notion of a global regulation mechanism, that is, a dynamical process preventing or impeding the uniform emergence and development of life all over the Galaxy. 13 In Annis' model, which he dubbed the phase-transition model for reasons to be explained shortly, the role of such global Galactic regulation is played by gamma-ray bursts (henceforth GRBs 14), colossal explosions caused either by the terminal collapse of supermassive objects ("hypernovae") or by mergers of binary neutron stars. GRBs, observed since the late 1960s, have been known for more than a decade to be of cosmological origin. Astrobiological and ecological consequences of GRBs and related phenomena have been investigated recently in several studies (Thorsett 1995; Dar 1997; Scalo and Wheeler 2002; Thomas et al. 2005). To give just a flavor of the results, let us mention that Dar (1997) has calculated that the terminal collapse of the famous supermassive object Eta Carinae could deposit in the upper atmosphere of Earth energy equivalent to the simultaneous explosion of a 1-kiloton nuclear bomb per km^2 all over the hemisphere facing the hypernova! According to the calculations of Scalo and Wheeler (2002), a Galactic GRB can be lethal for eukaryotes up to the huge distance of 14 kpc. Thus, this "zone of lethality" for advanced lifeforms is bound to comprise the entire GHZ whenever a GRB occurs within the inner 10 kpc of the Galaxy. Annis suggested that GRBs could cause mass extinctions of life all over the Galaxy (or GHZ), preventing or arresting the emergence of complex life forms. Thus, there is only a very small probability that a particular planetary biosphere could have evolved intelligent beings in our past. However, since the regulation mechanism exhibits secular evolution, with the rate of catastrophic events decreasing with time, at some point the astrobiological evolution of the Galaxy will experience a change of regime. When the rate of catastrophic events is high, there is a sort of quasi-equilibrium state between the natural tendency of life to spread, diversify, and complexify, and the rate of destruction and extinctions. When the rate becomes lower than some threshold value, intelligent and spacefaring species can arise in the interval between two extinctions and make themselves immune (presumably through technological means) to further extinctions.

12 We do not presume any special understanding of evolutionary biology here; the formulation is applicable to a case of a stable, only very slowly changing biological environment (a single macroevolutionary regime in terms of Jablonski 1986, 1989) in which Barrow and Tipler's metaphor of a "monotonic approach to intelligence" might perhaps work. Such a world is not the real world, but its juxtaposition with the real world can teach us some important lessons.
It is important to understand that the GRB mechanism is just one of several possible physical processes for "resetting astrobiological clocks". Any catastrophic mechanism operating (1) on sufficiently large scales, and (2) exhibiting secular evolution can play a similar role. There is no dearth of such mechanisms; some of the bolder ideas proposed in the literature are cometary impacts caused by "Galactic tides" (Asher et al. 1994; Rampino 1997), neutrino irradiation (Collar 1996), clumpy cold dark matter (Abbas and Abbas 1998), or climate changes induced by spiral-arm crossings (Leitch and Vasisht 1998; Shaviv 2002). Moreover, all these effects are cumulative: the total risk function of the global regulation is the sum of the risk functions of the individual catastrophic mechanisms. The secular evolution of all of these collectively determines whether and when the conditions for the astrobiological phase transition of the Galaxy will be satisfied. Of course, if GRBs are the most important physical mechanism of extinction, as Annis suggested, then their distribution function will dominate the global risk function and force the phase transition.
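Written out explicitly, the cumulative character of the regulation amounts to summing the per-mechanism risk functions; the notation below (hazard functions λ_i(t) and a critical threshold λ_crit) is introduced here purely for illustration and does not appear in the cited sources.

```latex
% lambda_i(t): risk (hazard) function of the i-th catastrophic mechanism
% (GRBs, Galactic tides, spiral-arm crossings, ...); notation ours.
\lambda_{\mathrm{tot}}(t) \;=\; \sum_i \lambda_i(t), \qquad
\text{phase transition once } \lambda_{\mathrm{tot}}(t) \lesssim \lambda_{\mathrm{crit}} .
```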
One should also note that there is another sort of global regulation mechanism which we shall not discuss here, but which features prominently in the SETI-related discourse: an intentional global regulation imposed by whichever intelligent community first achieves Kardashev's Type III status (utilizing resources on the pan-Galactic basis). This is the astrobiological background of scenarios such as the "Zoo hypothesis" (Ball 1973) or the "Interdict hypothesis" (Fogg 1987). The same regulatory effect could be achieved without directly controlling the Galactic resources by releasing destructive von Neumann probes (see the disturbing comments on this scenario in the classical review of Brin 1983). It is very difficult to assess the real value of such scenarios while lacking deeper theoretical ideas about the capacity of advanced technological civilizations (possibly postbiological in nature; see Dick 2003;Ćirković and Bradbury 2006). It seems reasonable to assume a gradual switch of regimes from natural to artificial astrobiological regulation. All these ideas present special cases of the general global regulation hypothesis, but are too speculative to be seriously considered at present.
GRB regulation has an important correlation property: the rhythm of biological extinctions should be synchronized (up to transport timescales of ~10^4 yr for γ-rays and high-energy cosmic rays) in at least part of the histories of all potentially habitable planets. In fact, a bold hypothesis has been put forward recently by Melott et al. (2004) that a known terrestrial mass extinction episode, one of the "Big Five" (the late-Ordovician extinction, ca. 440 Myr before present), corresponds to a Galactic GRB event.
It is intuitively clear that such correlated behavior undermines Carter's argument. With a set of modest additional assumptions it is possible to show this quantitatively. For instance, in Figure 3 we show the results of toy numerical experiments performed in order to see how timescale forcing arises in simplified evolving systems. This presents a simple realization of the astrobiological regulation model of Annis (1999). GRBs are taken to be random events occurring with exponentially decreasing frequency with the fixed characteristic timescale t_γ = 5 Gyr, in accordance with cosmological observations (e.g., Bromm and Loeb 2002), and biological timescales for noogenesis are randomly sampled from a log-uniform distribution between 10^8 yr (the minimum suggested by McKay 1996) and 10^16 yr (the total lifetime of the Galaxy as a well-defined entity; Adams & Laughlin 1997). For simplicity it has been assumed that the age of the Galaxy is exactly 12 Gyr and that all planets are of the same age. It is taken that the chain of events leading to life and intelligence can be cut by a catastrophic event at any planet in our toy-model Galaxy with probability Q, and its astrobiological clock reset. The toy model counts only planets achieving noogenesis at least once and it does not take into account any subsequent destructive processes, either natural or intelligence-caused (like nuclear or biotech self-destruction). The probability Q can, in the first approximation, be regarded as the geometrical probability of an average habitable planet being in the "lethal zone" of a GRB, and more complex effects dealing with the physics and ecology of the extinction mechanism can be subsumed in it.
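To make the setup concrete, a minimal Python sketch of one possible implementation of such a toy model is given below. It is our own reconstruction for illustration only, not the code used to produce Figure 3; the total number of GRB events and the number of planets are free parameters that the text does not specify, while the other choices (coeval planets, a 12 Gyr Galaxy, t_γ = 5 Gyr, log-uniform noogenesis timescales, and immunity once noogenesis is achieved) follow the assumptions stated above.

```python
import numpy as np

rng = np.random.default_rng(0)

T_GAL    = 12.0      # assumed age of the Galaxy in Gyr (all planets taken to be coeval, as in the text)
T_GAMMA  = 5.0       # characteristic decay timescale of the GRB frequency, Gyr
N_GRB    = 40        # total number of Galaxy-wide GRB events -- an assumption, not given in the text
N_PLANET = 100_000   # number of habitable planets in the toy GHZ -- also an assumption

def grb_epochs(n):
    """Sample GRB epochs on [0, T_GAL] with frequency decreasing as exp(-t / T_GAMMA)."""
    u = rng.random(n)
    t = -T_GAMMA * np.log(1.0 - u * (1.0 - np.exp(-T_GAL / T_GAMMA)))  # inverse-transform sampling
    return np.sort(t)

def run(Q, n_planet=N_PLANET):
    """Return the epochs (in Gyr) at which planets achieve noogenesis, at most once per planet."""
    events = grb_epochs(N_GRB)
    # log-uniform noogenesis timescales between 1e8 and 1e16 yr, converted to Gyr
    t_noo = 10.0 ** rng.uniform(8.0, 16.0, n_planet) / 1.0e9
    last_reset = np.zeros(n_planet)       # epoch of the last astrobiological clock reset
    done = np.full(n_planet, np.inf)      # epoch of noogenesis; inf means "not (yet) achieved"
    for t_grb in events:
        # planets finishing before this GRB are counted and assumed immune afterwards
        finished = np.isinf(done) & (last_reset + t_noo <= t_grb)
        done[finished] = (last_reset + t_noo)[finished]
        # each remaining planet is hit with probability Q and has its clock reset
        hit = np.isinf(done) & (rng.random(n_planet) < Q)
        last_reset[hit] = t_grb
    # planets finishing after the last GRB but before the present epoch
    finished = np.isinf(done) & (last_reset + t_noo <= T_GAL)
    done[finished] = (last_reset + t_noo)[finished]
    return done[np.isfinite(done)]

for Q in (0.05, 0.5, 0.95):
    epochs = run(Q)
    print(f"Q = {Q:.2f}: fraction of planets achieving noogenesis = {epochs.size / N_PLANET:.3f}")
```

Plotting a histogram of the returned noogenesis epochs for large Q reproduces the qualitative step-like succession of regimes discussed in the next paragraph.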
While we shall present more detailed analysis and interpretation of these and similar numerical experiments in a forthcoming work (Vukotić and Ćirković 2007), some conclusions invariably support our criticism of CA and are worth mentioning here. The system exhibits a systematic shift of behavior as we move from small values of Q (gradualism) to large values (catastrophism). At large Q, we have a step-like succession of astrobiological regimes, governed by external timescale forcing. In each regime, it is obvious that the ages of inhabited planets are not independent and uncorrelated, just the contrary, as we expected from the considerations above. In other words, neocatastrophism removes, ironically enough, the basic tacit assumption of CA. If the agents of extinction are correlated over the spatial scale of GHZ, timescale forcing undermines Carter's reasoning in a natural way. As Heraclitus fancied 25 centuries ago, (astrophysical) thunderbolt may indeed steer all things (astrobiological).
(It is important to emphasize that in the simulation above we have neglected all sources of correlations between the life-bearing sites barring the GRB regulation. Some processes, like panspermia, either natural or directed, certainly deserve to be taken into account, and we shall discuss them in detail in a forthcoming study. However, all these processes will only strengthen the correlations and thus further decrease our confidence in Carter's reasoning.)
Conclusions
We conclude that it is too early to draw sceptical conclusions about the abundance of extraterrestrial life and intelligence from our single data point via the "anthropic" argument of Carter (1983). In addition to other deficiencies of the argument pointed out in the literature, we emphasize that a picture in which regulation mechanisms reset local astrobiological clocks (which, consequently, tick rather unevenly) offers a way to reconcile our astrophysical knowledge with the idea of multiple habitats of life and intelligence in the Galaxy. In other words, Earth may be rare in time, not in space! Quite contrary to the conventional wisdom, we should not be surprised if we encounter many "Earths" throughout the Galaxy at this particular moment in time, at stages of evolution of their biospheres similar to the one reached at Earth. The unsupported assumption of gradualism is identified as the main source of confusion and unwarranted SETI skepticism (for a related discussion in the practical context of the Drake equation see Ćirković 2004b). This pertains to the Milky Way galaxy, where communication times are short enough to make the entire effort worthwhile (and to bring other factors, such as Fermi's paradox, into play). If we take into account progressively larger ensembles it will be possible sooner or later to find the monotonic behaviour criticized above, but this is largely formal and irrelevant for practical SETI.
The astrobiological picture presented here can be understood by means of a loose analogy with the much-discussed theory of punctuated equilibrium in evolutionary biology. Seeking to explain the evident stop-start nature of the fossil record, Eldredge and Gould (1972) proposed the theory of punctuated equilibria (for the detailed elaboration and synthetic view see Gould 2002). According to this theory, species tend to remain stable for long periods of time ("stasis"). The equilibrium is punctuated by abrupt changes in which existing species are suddenly (on a geological timescale) replaced. The astrobiological analogy of paleontological stasis can be found in Fig. 3c, where we perceive long periods (of ~1 Gyr duration) with the same number of inhabited planets before a sudden change. This feature is in itself antithetical to the spirit of CA; as emphasized by Gould (1989): "Hence, a good deal more than half the history of life is a story of prokaryotic cells alone, and only the last one-sixth of life's time on earth has included multicellular animals. Such delays and long lead times strongly suggest contingency and a vast realm of unrealized possibilities. If prokaryotes had to advance toward eukaryotic complexity, they certainly took their time about it." This is directly opposite to the thesis (*) and its monotonic ascent toward a (perceived) noble goal. This is intimately linked to the issue of the existence or otherwise of a well-defined biological timescale, universal for all habitable planets in the Milky Way. Obviously, in order to discuss this issue we have first to establish to what degree the observed timescales on Earth (the one for the appearance of life, or the one for the rise of complex metazoans, or the one for the emergence of intelligent species) are a consequence of deterministic or just contingent processes, and how big a role chance has played in their values (Carroll 2001). Gould's "paradox of the first tier" points in the same direction: "...mass extinctions are sufficiently frequent, intense, and different in impact to undo and reset any pattern that might accumulate during normal times." (Gould 1985) Here we add another aspect to this "enlightened" view of catastrophes: not only do they provide the pump of evolution by enabling innovative overthrows of entire faunas, but, in the astrobiological context, they could provide us with correlations on the basis of which we could meaningfully consider the astrobiological evolution of the Galaxy; indirectly, they offer weak support for our current and future SETI efforts.
This undermining of Carter's argument is entirely in accordance with the well-known tendency in the history of science and the human culture in general: overcoming of the sense of privilege surrounding the Solar System, Earth, terrestrial life, and humanity. This "Copernican" tendency comes as a consequence of the astrophysical discourse, not as a sacred dogma to be preserved at all costs. Furthermore, a natural generalization of the history of the terrestrial biosphere to the case of the Milky Way from the astrobiological point of view entails the acceptance of a sort of Galactic (neo)catastrophism (or a Galactic punctuated equilibrium!). It immediately undermines CA, since there is no fixed, unique, reified timescale in the core of the argument.
There are several reasons of a partly non-scientific nature for the strong impression CA leaves in many quarters. As we have seen, it is tempting to subsume all complicated astrophysics and planetary science into a single timescale, although this degree of simplification is, in fact, unwarranted; this applies to biology even more forcefully. There is an unfortunate tendency in the philosophy of science to downplay radical scientific theories and underestimate our present level of understanding; sometimes it is motivated by healthy reasons of skepticism, but often-the case of Mach's fierce "philosophical" opposition to Boltzmann's atomism comes to mind-it actually represents a conservative backlash, impeding recognition of new ideas. In addition, the textbook account of the defeat of catastrophism and the misleading philosophical legacy imparted to it in the mid-nineteenth century lead all too often to half-conscious neglect of any temporal markers in investigation of natural phenomena other than the beginning and the end. Finally, CA offers an emotionally satisfying, but nevertheless false sense of "strength in numbers".
A particularly dangerous form of "quick and dirty" generalization is embodied in the thesis (*) of monotonic ascent toward intelligence criticized above. Naive chain-of-being anthropocentrism (or intelligence-centrism) surrounding the reasoning of CA-proponents is starkly manifested here. In our view, neither is there a physical basis (uniformitarianism of environmental conditions) for it, nor is there clear biological justification (since the adaptive value of intelligence is still an unknown quantity). Thus, CA represents more wishful thinking coupled with intellectual inertia when faced with abandoning gradualism and the closed-box assumption than a serious scientific argument.
The tremendous progress in astrobiology (e.g. Chyba and Hand 2005) clearly demonstrates that the oversimplifications inherent in CA are no longer tenable. To retain them means to reject all we have achieved in the last couple of decades on establishing concrete physical and chemical conditions for emergence of life in the cosmic context. If CA is largely, as Carter himself admitted, an argument from ignorance, then any decrease in our ignorance ought to prompt its reassessment and reevaluation. We are fortunate enough to live in this exciting epoch of truly wonderful results in this field, from cosmology and orbital observatories down to biochemical labs and paleontology museums. That it is also the epoch in which CA can be effectively undermined is by no means a coincidence.
All in all, we have no a priori (or even anthropic-based) reason to reject the existence of extraterrestrial intelligence in the Milky Way. Geocentrism stays defeated and the road for serious SETI studies is as open as ever. | 2018-04-03T05:39:13.830Z | 2009-06-30T00:00:00.000 | {
"year": 2009,
"sha1": "8285bc9961a179816128f28ed2c9b69ad132bb88",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0912.4980",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "8285bc9961a179816128f28ed2c9b69ad132bb88",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
259626516 | pes2o/s2orc | v3-fos-license | ENHANCING THE CONNECTION BETWEEN PEOPLE AND HERITAGE THROUGH SOCIAL MEDIA IN CHINA
Although numerous participatory methods and assessments have been tested in the western context, the community participation practice is still limited in China, a developing country. In addition, due to the diversity of cultural, historical, economic, and social backgrounds, Chinese community participation differs from similar international approaches. Hence it is critical to explore how engaging Chinese people with their own heritage properties can foster stewardship, multi-stakeholder connection, and support policy and strategy making. It is very challenging to preserve the character of historic urban landscapes because they change over time in a rapidly developing urban context. Research on revealing people's stories and preferences concerning the Chinese historic urban landscape in their visiting and living experience is needed to address this concern directly. The study applies a novel approach to explore public attitudes based on the fusion of social media data, land use data and other information. It examines the spatial patterns of public responses towards the government-led urban heritage conservation projects in the historic city centre of Harbin, China, over sixteen months (2021-2022). The article concludes that social media plays an important role in accessing broader communities, monitoring people's preferences, and observing heritage attributes and values for inclusive urban heritage management.
INTRODUCTION
The rich architectural heritage and historic city centres across China have a strong potential to support urban regeneration and inclusive heritage management. However, numerous heritage sites remain severely deprived, neglected, or inactive due to the lack of public participation (Deng et al., 2015; Liang et al., 2022; Verdini et al., 2017). According to UNESCO's Recommendation on the Historic Urban Landscape (HUL), using Information and Communication Technologies is explicitly recommended for holistic urban heritage conservation (van der Hoeven, 2019). The research explores how social media can contribute to implementing HUL and further facilitate urban heritage conservation by enhancing public participation.
Except for the introduction and conclusion, the main body of this paper is structured in five parts: literature review, problem statement, research method, findings and discussion. Against the research background and case study analysis, we discuss and explore the essential components of the social-media data assessment framework, aiming to answer the following four questions: (1) What are the spatial patterns of how the public responded to urban conservation projects? (2) To what extent does the response vary between people from different backgrounds (such as gender and between local and non-local residents)? (3) Which areas of architectural and urban heritage value impact people's daily lives (such as home-based activities and travel-related activities)? (4) What policy lessons can be learnt from interpreting the results of this research in order to enhance community engagement in urban heritage conservation?
RESEARCH BACKGROUND
By attempting to encompass both the tangible and intangible components of historic urban areas, the HUL Recommendation provides an integrated approach to urban heritage conservation (Bandarin and Oers, 2014). It aims to protect historic urban landscapes from being fragmented and degraded by uncontrolled and rapid urban development. The Recommendation also pointed out that rampant urban transformation can undermine the identity of a place, although it acknowledges the economic and sociocultural benefits that come along with urbanisation (Bonfantini, 2016). In addition, the Recommendation seeks to raise awareness of the social, cultural and economic value of urban heritage in this context of urbanisation (Oers and Roders, 2013). While recognising that cities are dynamic, HUL defines urban change and development as no longer being opposed to the maintenance of the historic urban landscape but as part of it that needs to be managed (Bandarin and Oers, 2012).
Along with the widely recognized vital role of the local community in international practices, in 2012 the UNESCO Operational Guidelines for the Implementation of the World Heritage Convention encouraged involving broader communities worldwide in identifying, managing, and promoting heritage properties throughout the conservation process (UNESCO, 2012). Heritage professionals are trying to involve as many citizens as possible in decision-making processes related to heritage management and to make participatory activities inclusive. Recently, the identification and engagement of local and minority groups in the decision-making of heritage conservation plans have been further highlighted by ICOMOS as an issue of concern (ICOMOS, 2020). It is worth mentioning that social media has become an essential tool for connecting people with their cultural heritage and contributing to relevant practices and research (Deng et al., 2015; Nummi, 2018).
Social media has become one of the most popular spaces to express people's opinions and discuss heritage values as a window to nonexpert perceptions (van der Hoeven, 2020). It can be used to share information about cultural heritage events and activities, promote local cultural heritage sites, or provide a space for community members to share their personal stories and experiences related to cultural heritage (Giaccardi, 2012;Liang et al., 2021). It can also be used to crowdsource information, resources, and ideas from community members, allowing for more collaborative and inclusive decision-making in cultural heritage initiatives (Bai et al., 2022;Ginzarly et al., 2019).
Social media enhances the connection between people and heritage by making it more accessible (Arrigoni et al., 2019). The convenience of accessing heritage information online has made it possible for people to learn more about their cultural heritage, even if they are not physically located near a heritage site (Ch'ng et al., 2020). Social media platforms offer virtual tours of heritage sites, allowing people to experience these locations remotely and providing interactive experiences such as virtual reality simulations and augmented reality experiences (Han et al., 2020). This accessibility has encouraged more people to engage with their cultural heritage and has contributed to the preservation of heritage sites by raising awareness.
Social media allows people to connect with others with similar interests and form communities around their cultural heritage (Hood and Reid, 2018). By sharing their experiences at heritage sites, posting photos and videos, and engaging in discussions, participants contribute to preserving cultural heritage by raising awareness and encouraging others to engage with it (Psomadaki et al., 2019). People can also make donations through social media, sign up for volunteer programs, and participate in events that preserve heritage sites (Beel et al., 2017;Foa, 2019). This level of participation has helped create a sense of community among heritage supporters and forges a deeper connection with heritage (Cotterill et al., 2016).
Despite the growing interest in social media for cultural heritage conservation, too little attention has been paid to assessing public attitudes in responding to ongoing construction projects in the historic city centre. Thus, the article explores how to use social media to investigate participants' preferences and their understanding of heritage attributes and values in conserving historic urban landscapes.
CASE STUDY AS PROBLEM STATEMENT
After being colonized by Russia for several decades, the city center of Harbin, especially the Daowai district, developed a unique urban landscape style assembled from numerous 'Chinese Baroque' buildings, a hybrid of Baroque façades and the traditional Chinese quadrangle (Jin et al., 2018). Today, a small part of the historic area has been renovated and gradually gentrified through the government-led urban regeneration project that began in the 2010s (see Figure 1). At the same time, many buildings remain unattended and in decay (see Figure 2). Without fully participating in the heritage conservation process, many local people were forced to relocate from the houses they had lived in since birth, which caused tension and widespread discontent (Zhang, 2021). In particular, hard-to-reach residents from minority groups, distinguished by ethnicity, religion, disability, and similar factors, were largely ignored in the heritage management process. Decision-making processes related to heritage management should be inclusive and involve the participation of broader communities. Research on revealing people's stories and preferences concerning the Chinese historic urban landscape in their visiting and living experience is needed to address this concern directly. The Chinese Baroque district in Harbin represents an important chapter in China's history, reflecting the cultural exchange between China and the West during the early 20th century. The district is a unique blend of Chinese and Western architectural styles, showcasing the creativity and ingenuity of the architects and builders of the time. The district is not only a testament to the rich cultural heritage of Harbin but also a symbol of China's progress and development over the past century. However, despite its significance, the Chinese Baroque district in Harbin is facing numerous challenges in terms of heritage conservation. One of the major challenges is the rapid pace of urban development, which has resulted in the destruction of many heritage buildings. Additionally, the district has suffered from neglect and a lack of investment in conservation efforts, which has led to the deterioration of many buildings. To address these challenges, local authorities in Harbin have taken several measures to preserve the Chinese Baroque district. One of the key measures has been creating a heritage conservation plan, which outlines the steps to safeguard the Chinese Baroque community and its heritage buildings. The program includes a range of measures, including restoring heritage buildings, creating public spaces, and promoting cultural tourism. Another important measure has been the establishment of a heritage trust, which will provide financial support for the preservation of heritage buildings in the district. The trust will be funded by private donations and will collaborate with local authorities to ensure that the heritage buildings are preserved for future generations. Establishing the heritage trust is important in ensuring the long-term sustainability of heritage conservation efforts in the Chinese Baroque district.
Local authorities have also taken steps to promote cultural tourism in the district by developing a range of tourism-related activities and programs. The provincial government hopes to raise the awareness of the local community by promoting cultural tourism and generating income that can be used to support heritage conservation efforts. On May 7, 2022, the Daowai District Government of Harbin held the operation launching ceremony of the first and second phases of the renovation project of the Chinese Baroque Historic District. The third-round reconstruction project has been approved by the relevant departments and is scheduled to start construction on August 5, 2022 and be completed by December 31, 2023.
METHOD
The study proposes to make use of social media, which makes it possible for people to access and learn about cultural heritage from a distance, in order to reach a wider group of people. Among the various Chinese social media platforms, the study chooses to collect and analyse the information, comments and geo-location data of users from one of the most popular platforms, Weibo. This study used a web crawler tool to obtain data, focusing on geolocated posts with the following settings.
The keyword search string consisted of the location qualifier 'Lao Daowai' or 'Chinese Baroque' plus a number of synonyms in the Chinese context, such as 'Historic Buildings', related to urban heritage. In order to better observe people's attitudes towards upcoming renovation projects, the data was collected between the end of the second phase of the Chinese Baroque Historic District renovation project (7 May 2021) and the start of the third phase (5 August 2022). The spatial domain of the sampling points, i.e., the real-time geolocation of the Weibo posts, was limited to the Harbin city area at the time the posts were collected. The final available data were obtained from 130 posts after a series of data cleaning steps, such as data de-duplication, content determination and checking for missing information. The step following data cleaning is data analysis, which mainly consists of two parts: user information (including gender, account IP, etc.) and content identification of the published Weibo posts.
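A minimal Python sketch of this filtering and cleaning pipeline is given below. It is illustrative only: the paper does not publish its crawler code, so the column names, the bounding box used to approximate the Harbin city area, and the Chinese keyword renderings are our own assumptions.

```python
import pandas as pd

# Hypothetical schema; the actual crawler output format is not given in the paper.
KEYWORDS = ["老道外", "中华巴洛克", "历史建筑"]   # 'Lao Daowai', 'Chinese Baroque', 'Historic Buildings'
START, END = "2021-05-07", "2022-08-05"          # study window stated in the text
HARBIN_BBOX = (125.6, 45.0, 127.7, 46.4)         # rough lon/lat bounding box for Harbin (illustrative only)

def clean_posts(raw: pd.DataFrame) -> pd.DataFrame:
    """Filter geolocated Weibo posts by keyword, time window and location, then de-duplicate."""
    df = raw.copy()
    df["created_at"] = pd.to_datetime(df["created_at"])
    # 1. keyword filter on the post text
    has_kw = df["text"].fillna("").apply(lambda s: any(k in s for k in KEYWORDS))
    # 2. time-window filter
    in_time = df["created_at"].between(START, END)
    # 3. spatial filter: keep posts geolocated inside the Harbin city area
    lon_min, lat_min, lon_max, lat_max = HARBIN_BBOX
    in_space = df["lon"].between(lon_min, lon_max) & df["lat"].between(lat_min, lat_max)
    df = df[has_kw & in_time & in_space]
    # 4. de-duplication and removal of records missing essential fields
    df = df.drop_duplicates(subset=["user_id", "text"]).dropna(subset=["text", "lon", "lat"])
    return df.reset_index(drop=True)
```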
As one of the pioneering trials, this study identified the content manually rather than using Machine Learning tools. This was mainly due to the small amount of data obtained, which was just over 100 items. Subsequent research and practical applications should encourage the use of Artificial Intelligence tools, such as Natural Language Processing, for content recognition over large text collections. The content semantic analysis step determines the user's emotional disposition (i.e., positive, neutral, or negative), whether the post's content relates to home activities or travel, and extracts keywords related to heritage values and attributes, such as architecture, street, humanities, and history. These keywords were further divided into attribute and value categories in accordance with the HUL recommendation: heritage attributes refer to the actual conservation of intangible and tangible heritage, while values refer to the motivation for conservation (van der Hoeven, 2019).
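The labelling itself was done manually, as stated above; purely to illustrate the coding frame, the sketch below shows how the same scheme could be applied automatically. The keyword lists mirror the attribute/value grouping used later in the paper, and the Chinese terms are our own illustrative choices, not a published dictionary.

```python
# Rule-based sketch of the coding frame; sentiment is left to manual coding or an NLP model.
ATTRIBUTE_TERMS = {"建筑": "architecture", "街道": "street", "景观": "landscape"}
VALUE_TERMS     = {"历史": "history", "记忆": "collective memory", "政策": "policy"}
TRAVEL_TERMS    = ["打卡", "游记"]          # 'check in', 'travelogue'

def tag_post(text: str) -> dict:
    """Assign activity type and heritage attribute/value tags to one Weibo post."""
    attribute_tags = [v for k, v in ATTRIBUTE_TERMS.items() if k in text]
    value_tags = [v for k, v in VALUE_TERMS.items() if k in text]
    return {
        "activity": "travel" if any(t in text for t in TRAVEL_TERMS) else "home",
        "attribute_tags": attribute_tags,
        "value_tags": value_tags,
        "n_tags": len(attribute_tags) + len(value_tags),  # the paper assigns zero to four tags per post
    }
```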
The final step is to visualize the processed data in ArcGIS to explore the spatial pattern of user responses to the Chinese Baroque cultural heritage renovation project. The geographic information data of Harbin City for this study was obtained from the revised draft of the Harbin City Master Plan (2011-2020). The spatial patterns were visualised by assigning users' positive and negative attitudes, and heritage values and attributes, to two contrasting colour groups, respectively.
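The study performed this mapping step in ArcGIS; the sketch below shows an equivalent open-source rendering with GeoPandas, using a hypothetical boundary file and a few made-up example records purely to illustrate the workflow.

```python
import pandas as pd
import geopandas as gpd
import matplotlib.pyplot as plt

# Tiny illustrative records; real coordinates and sentiment labels come from the cleaned, tagged posts.
posts = pd.DataFrame({
    "lon": [126.64, 126.63, 126.66],
    "lat": [45.78, 45.79, 45.77],
    "sentiment": ["positive", "neutral", "negative"],
})
gdf = gpd.GeoDataFrame(posts, geometry=gpd.points_from_xy(posts["lon"], posts["lat"]), crs="EPSG:4326")

harbin = gpd.read_file("harbin_city_boundary.shp")   # hypothetical layer digitised from the Master Plan (2011-2020)

fig, ax = plt.subplots(figsize=(8, 8))
harbin.plot(ax=ax, color="whitesmoke", edgecolor="grey")
colors = {"positive": "red", "neutral": "lightgrey", "negative": "blue"}
gdf.plot(ax=ax, color=list(gdf["sentiment"].map(colors)), markersize=30)
ax.set_title("User sentiment towards the Chinese Baroque renovation project")
plt.show()
```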
User information statistics
After collating the 130 user portraits obtained, this study classified users into three categories: gender, account IP address, and geolocation (see Table 1). Table 1 shows that the number of male users was approximately equal to the number of female users, with no significant bias. Identifying IP addresses allowed us to determine that locals accounted for approximately 43% of the total, showing that both locals and tourists have a strong willingness to express themselves. In this step, information missing because users did not wish to share address information or had applied their own privacy settings is displayed as N/A. Due to legal restrictions on the protection of personal data privacy, this paper only examines and analyses user information that is publicly available. The second most popular post came from the local government, promoting the heritage value of Harbin's historic district, and received over a hundred likes and retweets. This shows that urban heritage conservation is a relatively hot topic among Chinese social media users, generating widespread interest. It is important to stress that the analysed data lacked information on the demographic attributes of social media users, such as age, education level, and occupational status. As such, future studies should focus on enhancing methodological comparisons and devising innovative approaches that incorporate the demographic features of social media users. In particular, such investigations should aim to establish connections between the study data and the characteristics of the underlying population to underscore any potential population bias and ensure greater representativeness of the sample (Ginzarly et al., 2019).
Content semantic identification of obtained Weibo posts
The results of the manual content analysis and label recognition are shown in Table 2. The statistics show that more than half of the people are neutral towards the preservation of Chinese baroque urban heritage. In addition, the number of users who have a clear preference with a positive sentiment is about four times higher than the number of users who have a negative sentiment. Overall, there are far more users who are optimistic about the Harbin China Baroque regeneration project than those who are not. The spatial pattern of citizens' responses is visualised and analysed in section 5.3 in conjunction with geographic information data on Harbin City. Perhaps influenced by the epidemic quarantine policy, slightly fewer users posted textual content related to tourism activities than those who stayed at home. Posts containing words such as "check in" or "travelogue" were classified as travel-related, while other tweets with no obvious characteristics were classified as stay-at-home. By tagging each text message with multiple keywords, we can see that users' focus on the target heritage sites is more on aspects of architecture and collective memory. It is worth noting that due to the diversity of textual information, each tweet was given between zero and four tags based on its content. The data shown as N/A in Table 2 are tweets that contain only the place names of Chinese baroque or similar.
It is imperative to acknowledge that our semantic analysis is confined solely to the textual content of the descriptions. Nevertheless, Weibo furnishes supplementary information, including tags and topics, that can potentially unveil additional insights into users' comments regarding the images they upload and the heritage values that underpin different situations. Moreover, it is worth emphasizing that social media data analysis outcomes do not necessarily align with the broader perceptions of the public sphere. This is because Weibo or any social media user cannot be considered as an accurate representation of civil society at large.
Spatial distribution of user sentiment towards Chinese Baroque urban heritage conservation projects
The study conducted a sentiment analysis of Weibo content by evaluating and assessing its semantics on a per-article basis, after which a corresponding label (positive, neutral, negative) was assigned to each post. The resulting label assignment dataset, along with latitude and longitude information, was then imported into ArcGIS software and combined with the geographic information of Harbin city for further computing processing. The resultant output is shown in Figure 3, with the red areas indicating significant positive emotions and the blue representing the significant negative emotions. Street with only a dozen items. In the interim, a significant proportion of users provided unequivocally positive feedback regarding the three aforementioned areas. However, unfavorable comments predominated in the remaining peripheral areas. Users from outside Harbin's central historic district and even from suburban counties seem to have a rather critical attitude towards urban heritage conservation (see Figure 4).
Spatial patterns of user preference for heritage attributes and values
The present study employs an analysis of data categorization of heritage attributes and values, building upon the findings of the keyword statistics discussed in section 5.2 and the theoretical framework of the HUL recommendation. Specifically, the keywords "architecture," "street," and "landscape" were grouped under the category of architectural attributes, while "history," "collective memory," and "policy" were grouped under the category of heritage values. The remaining keywords were identified as falling somewhere in between these two categories. Overall, a total of 71 data items were found to pertain to heritage attributes, while 56 were identified as relating to heritage values. Figure 5 and Figure 6 depict the spatial distribution of users' preferences for heritage following the application of geographic information visualization and analysis. The results of the study revealed a correlation between cultural and natural attributes within the city, as well as tangible and intangible attributes. Users located in the central historic district demonstrated a pronounced interest in the complex social values attached to heritage (see Figure 5), while those focused on the tangible heritage aspect tended to come from the urban periphery (see Figure 6). The findings further suggest that static spaces and dynamic cultural and spiritual needs, as well as urban ecology, remain relatively disparate. This may be attributed to the fact that urban heritage regeneration policies, such as the Chinese Baroque regeneration project and related regulations, are often treated in isolation from other aspects of the city and do not account for the urban environment. The complex relationship between semantics, space and the urban environment needs to be explored comprehensively in subsequent studies.
DISCUSSION
Involving local communities in China's urban heritage conservation projects is seen as a critical component of successful conservation efforts, and is essential for ensuring that cultural heritage is preserved and protected for future generations. The data procured from social media platforms furnishes insights from numerous participants residing in the city and enables access to a wider community that transcends spatial boundaries. Nevertheless, it is crucial to supplement the insights gleaned from social media with other conventional survey methods, such as questionnaires, interviews, and workshops, to yield more comprehensive findings. The outcomes generated by disparate survey methods may either correspond or diverge, but a synthesized appraisal can offer a more nuanced understanding of the relationship between multiple stakeholders and urban heritage assets. However, the extent to which the written texts are genuine personal beliefs, rather than a collection of biased ideas published for the purpose of self-promotion, is still under debate.
Social Media for inclusive decision-making in heritage conservation process
Social media presents an opportunity to promote more inclusive decision-making in the heritage conservation process. By enabling heritage stakeholders and the broader public to provide feedback, share information, and express their opinions, social media can ensure that a diverse range of perspectives is considered when making decisions about heritage conservation.
The primary advantage of utilizing social media for inclusive decision-making is its capacity to involve a wider range of stakeholders. This includes individuals who may be unable to attend physical meetings or events and those who may have previously felt excluded from traditional decision-making processes. By providing an accessible and inclusive platform for engagement, social media can ensure that all stakeholders' perspectives and concerns are taken into account.
Real-time feedback is another advantage of social media, which can facilitate a more dynamic and responsive decision-making process. This can enhance trust and encourage greater collaboration among stakeholders as their views and opinions are effectively incorporated into the decision-making process. To promote transparency and build community support, organisations should involve communities in the decision-making process, while also being transparent about their conservation goals and methods.
Inclusive decision-making processes related to heritage management should be established to ensure the participation of hard-to-reach residents. This may involve establishing formal or informal committees or working groups or including community representatives in existing decision-making bodies. In this way, social media can be leveraged to promote greater collaboration and engagement among diverse stakeholders, leading to more effective and inclusive heritage conservation decision-making.
Social media for empowering local communities by raising awareness and education
The use of social media is a promising approach to empowering local communities in heritage conservation by raising awareness and providing education. The dissemination of information and knowledge through social media can effectively engage and empower local communities to play an active role in preserving and promoting their cultural heritage.
The potential to reach a wide and diverse audience is a key advantage of using social media to empower local communities. The provision of accessible and engaging content on social media can facilitate a better understanding of heritage conservation issues, generate support for heritage conservation initiatives, and inspire local communities to take an active role in the process.
Moreover, social media can be used to promote education and capacity building initiatives that are relevant to heritage conservation. This includes online resources, workshops, and training programs to equip local communities with the necessary knowledge and skills to effectively participate in heritage conservation efforts. Through this, local communities can develop a sense of ownership and responsibility for heritage and make informed and meaningful contributions to the decision-making process.
In addition, social media can facilitate communication and collaboration between local communities and heritage organisations. The use of social media as a forum for sharing information, ideas, and concerns can foster trust and effective partnerships between stakeholders. This can result in greater cooperation and coordination in heritage conservation efforts and ensure that local communities are actively involved in the decision-making process.
CONCLUSION
Combining social sciences, urban planning, and heritage conservation knowledge with digital platforms for collecting data on the multiple values held by the public for the architectural heritage site in Harbin is the essential innovative part of this research. The project engages individuals and communities with their cultural heritage by implementing and evaluating the big-data approaches created through the project. It is one of the first trials to test the use of social media to collect information on participants' perceptions and relationships with heritage conservation in China.
The article provides evidence for, and supports replication and upscaling of, the implemented community-led regeneration strategies and plans. It explores the potential for social media data to contribute to heritage conservation by better understanding people's preferences and heritage values, and how to collect the stories of hidden members of communities. It also establishes an open-data-based assessment framework for preference analysis and determines the parameters for evaluating the heritage value of the Chinese Baroque district. The study quantifies stakeholders' attitudes in the Chinese Baroque district, Harbin, China over a sixteen-month timeline to fully capture the temporal and spatial dynamics of urban heritage conservation events and changes. Suggestions will also be made for a more inclusive future in the planning and management of both this area of Chinese Baroque and other similar areas of historical interest.
As an echo of UNESCO's Recommendation on the historic urban landscape (https://whc.unesco.org/en/hul), the study contributes an innovative way to connect various stakeholders to cultural heritage at the regional level through the social media platform. The research outputs can help support decision-making by offering several essential indicators from online platforms, such as stakeholders' preferences, to experts and policymakers. Further, this research could be used by authorities and managers to popularize bottom-up collaboration in historic city areas across China and to enhance community engagement in the everyday heritage conservation process. The strategic significance of this project is to draw global attention to the inclusiveness and sustainability of heritage conservation, striving for a balance among various and sometimes conflicting needs in economically less developed regions. | 2023-07-11T18:04:54.800Z | 2023-06-23T00:00:00.000 | {
"year": 2023,
"sha1": "9151ec46dd66338a27dadffbf0ad58391637bbea",
"oa_license": "CCBY",
"oa_url": "https://isprs-annals.copernicus.org/articles/X-M-1-2023/165/2023/isprs-annals-X-M-1-2023-165-2023.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "351815c0eb34c911eaa091337b16baa58121286c",
"s2fieldsofstudy": [
"History",
"Sociology",
"Environmental Science"
],
"extfieldsofstudy": []
} |
13217882 | pes2o/s2orc | v3-fos-license | CSE1L/CAS, the cellular apoptosis susceptibility protein, enhances invasion and metastasis but not proliferation of cancer cells
Background: The cellular apoptosis susceptibility (CAS) protein is regarded as a proliferation-associated protein that is linked with tumour proliferation because it associates with microtubules and functions in the mitotic spindle checkpoint. However, there is no actual experimental study showing that CAS (or CSE1 and CSE1L) can increase the proliferation of cancer cells. A previous pathological study reported that CAS was strongly positively stained in all of the metastatic melanomas examined. Thus, CAS may regulate the invasion and metastasis of cancers. CAS is highly expressed in cancers; if CAS is associated with cancer proliferation, then increased CAS expression should be able to increase the proliferation of cancer cells. We studied whether increased CAS expression can increase cancer cell proliferation and whether CAS regulates the invasion of cancer cells. Methods: We enhanced or reduced CAS expression by transfecting CAS or anti-CAS expression vectors into human MCF-7 breast cancer cells. The proliferation of the cells was determined by trypan blue exclusion assay and flow cytometry analysis. Invasion of cancer cells was determined by a Matrigel-based invasion assay. Results: Our studies showed that increased CAS expression was unable to enhance cancer cell proliferation. Immunofluorescence showed that CAS was distributed in cytoplasmic areas near the cell membrane and cell protrusions. CAS was localized in cytoplasmic vesicles, and immunogold electron microscopy showed that CAS was located in the vesicle membrane. CAS overexpression enhanced matrix metalloproteinase-2 (MMP-2) secretion and cancer cell invasion. Animal experiments showed that CAS reduction inhibited the metastasis of B16-F10 melanoma cells by 56% in C57BL/6 mice. Conclusion: Our results indicate that CAS increases the invasion but not the proliferation of cancer cells. Thus, CAS plus ECM-degradation proteinases may be used as markers for predicting the progression of tumour metastasis.
CAS is the human homologue of the yeast chromosome segregation gene, CSE1 [9]. CAS is associated with microtubules and mitotic spindles, the cellular organelles for mitotic cell division; hence CAS is speculated to play a role in cell proliferation and is regarded as a proliferation-associated protein [1,10]. Consequently, many pathological studies demonstrated that the expression of CAS in tumors is related to tumor proliferation in cancer development [2-5], although there is no actual experimental study showing that CAS can increase cancer proliferation. CAS is highly expressed in cancers; if CAS regulates cancer proliferation during cancer development, then increased CAS expression in cancer cells should be able to increase the proliferation of cancer cells. Instead, our recent study showed that increased CAS expression in human HT-29 colorectal cancer cells inhibited rather than stimulated the proliferation of HT-29 cells [11].
Metastatic tumours secrete extracellular matrix (ECM)-degradation proteinases to degrade the ECM during invasion. MMP-2 is an ECM-degradation proteinase that is secreted from invasive cancer cells and plays an important role in the regulation of tumor metastasis [12,13]. Tumour cells with strong secretion activities can enhance the secretion of ECM-degradation proteinases and thus enhance tumour metastasis [14,15]. Experiments showed that MMP production can be regulated at the level of secretion [16]. Thus, to increase their invasion and metastasis ability, metastatic tumour cells may develop a strong secretory ability to enhance MMP secretion.
CAS was identified in a study of an antisense DNA fragment that is capable of conferring cell resistance to apoptosis induced by bacterial toxins and tumor necrosis factors [17]. CAS also regulates apoptosis induced by cypermethrin [18], interferon-γ [19], and chemotherapeutic drugs including doxorubicin, 5-fluorouracil, tamoxifen, and cisplatin [20]. Pathological studies showed that the expression of CAS was positively related to high stage and high grade of cancers, as well as worse outcomes of the patients [1-8]. Notably, a pathological study reported that CAS was strongly positively stained in all of the metastatic melanomas examined (n = 23) [2]. Thus, CAS may regulate the invasion and metastasis of cancers.
Tumor metastases are the main characteristics of high-grade cancers and are also the main causes of cancer-related mortality. Our recent study also showed that CAS transfection was unable to increase the proliferation of HT-29 cancer cells. Thus, we speculate that CAS regulates the invasion and metastasis but not the proliferation of cancers. We report here that CAS enhances invasion and metastasis but not proliferation of cancer cells.
Vectors
We isolated total cellular RNA from HT-29 cells with the Trizol reagent (Invitrogen, Carlsbad, CA, USA). Reverse transcription was carried out using the 1st-strand cDNA synthesis kit (Clontech Laboratories, Palo Alto, CA, USA). The reverse transcription reaction mixture (20 μl), containing 1 μg of DNase-treated total RNA, 20 pmol oligo(dT)18 primer, 50 mM Tris-HCl pH 8.3, 75 mM KCl, 3 mM MgCl2, 0.5 mM each dNTP, 1 unit RNase inhibitor, and 200 units/μg RNA of MMLV reverse transcriptase, was incubated at 42°C for 1 hour. PCR reactions were performed in a 50-μl reaction mixture containing 5 μl of the reverse transcription reaction mixture, 100 ng of each primer, 0.3 mM Tris-HCl pH 8.0, 1.5 mM KCl, 1 μM EDTA, 1% glycerol, 0.2 mM each dNTP, and 1 μl of 50× Advantage 2 polymerase mix (Clontech). The primers used to amplify CAS cDNA were 5'-TATAGCAATGGAACTCAGCGATGC (sense) and 5'-AGTTTAAAGCAGTGTCACACTGGC (antisense). The DNA was amplified in a GeneAmp PCR System 9700 (Perkin-Elmer, Norwalk, CT, USA) for 35 cycles using the following parameters: 94°C for 30 seconds, 65°C for 30 seconds, and 72°C for 200 seconds, with a final extension step at 72°C for 10 minutes. The amplified products were resolved on a 1% agarose gel with ethidium bromide. The DNA was eluted and cloned into the pGEM-T vector (Promega Corporation, Madison, WI, USA), and subsequently cloned into the pcDNA3.1 eukaryotic expression vector (Invitrogen) to obtain the pcDNA-CAS vector. The pcDNA-CAS vector was cut with Apa I and Hind III, and the 516-bp CAS fragment (bp 1 to 516) was cloned into the Apa I and Hind III sites of pcDNA3.1 in the antisense orientation to obtain the pcDNA-anti-CAS vector. The identities of the DNA sequences were confirmed by DNA sequencing.
Cells and DNA transfections
MCF-7 breast cancer cells, 293 embryonic kidney cells, and B16-F10 mouse melanoma cells were obtained from the American Type Culture Collection (Manassas, VA, USA). Cells were cultured as previously described [21]. Cells were transfected with vectors using the Lipofectamine Plus reagent (Invitrogen). Transfected cells were selected with a high concentration of G418 for 3 weeks. Multiple drug-resistant colonies (> 100) were pooled and amplified in mass culture. The transfected cells were maintained in media containing 200 μg/ml G418. For the experiments, cells were cultured in media without G418 and the media were refreshed every 4 days, as MCF-CAS cells are relatively prone to apoptosis in long-term culture when not supplied with fresh media.
Immunoblotting
Cells were washed with PBS and harvested by scraping. The harvested cells were washed with PBS and lysed in RIPA buffer (25 mM Tris-HCl [pH 7.2], 0.1% SDS, 0.1% Triton X-100, 1% sodium deoxycholate, 150 mM NaCl, 1 mM EDTA, 1 mM sodium orthovanadate, 1 mM phenylmethylsulfonyl fluoride, 10 μg/ml aprotinin, and 5 μg/ml leupeptin). Protein concentrations were determined with a BCA protein assay kit (Pierce, Rockford, IL, USA). Fifty micrograms of each protein sample was loaded onto an SDS-polyacrylamide gel. Proteins were transferred to nitrocellulose membranes (Amersham Pharmacia, Buckinghamshire, UK). The membranes were blocked overnight at 4°C with blocking buffer (1% BSA, 50 mM Tris-HCl, pH 7.6, 150 mM NaCl, 0.1% Tween-20). The blots were incubated for 1 hour at room temperature (RT) with primary antibodies, followed by incubation with horseradish peroxidase-conjugated secondary antibodies for 1 hour. Protein levels were detected by enhanced chemiluminescence with an ECL Western blotting detection system (Amersham Pharmacia).
Cell proliferation assay
Equal numbers of cells (1 × 10⁴ cells/dish) were seeded on 100-mm culture dishes. The media were refreshed every three days. Cell numbers were counted by trypan blue exclusion assay every 24 hours after seeding. For each time point, three plates of cells were counted, and each plate was counted only once.
Flow cytometry analysis
Cells were harvested by 0.1% trypsin-EDTA digestion, washed with PBS containing 0.1% glucose, and fixed in 70% ethanol at 4°C for 16 hours. Cells were stained for 30 minutes with a propidium iodide (PI) staining solution containing 100 μg/ml PI, 100 μg/ml RNase A, and 0.1% glucose. PI fluorescence was measured with a BD FACSCanto flow cytometer (BD Biosciences, Bedford, MA, USA). A minimum of 10,000 cells per treatment was analyzed.
The proportion of cells in each cell cycle phase was expressed as a percentage of the total number of living cells. The proportion of sub-G1 cells in each treatment was expressed as a percentage of the total number of cells.
Immunofluorescence
Cells grown on coverslips (12 × 12 mm) were cytospun at 1000 rpm for 10 minutes. Cells were washed with PBS, fixed with 4% paraformaldehyde, permeabilized with 0.1% Triton X-100 in 4% paraformaldehyde, and blocked with PBS containing 0.1% BSA and 0.5% Tween-20. Cells were incubated with primary antibodies, washed with PBS, and then incubated with goat anti-mouse (or anti-rabbit) IgG secondary antibodies coupled to Alexa Fluor 488 (or 568). Coverslips were examined with a Zeiss Axiovert 200 M inverted fluorescence microscope. Experiments were carried out on duplicate coverslips in three independent experiments, and five random fields were imaged per coverslip.
Immunogold electron microscopy
Cells were washed with PBS and fixed in a mixture of 0.5% glutaraldehyde and 2% paraformaldehyde in Hepes buffer (pH 6.8) for 15 minutes, and then in 2% paraformaldehyde in Hepes buffer (pH 6.8) at 4°C for 14 days. Samples were dehydrated with 80% ethanol and infiltrated with increasing concentrations of Lowicryl HM20 resin (Polysciences, Tokyo, Japan). Polymerization of Lowicryl HM20 was performed by UV irradiation (wavelength peak at 360 nm) for 24 hours. Ultrathin sections were cut and mounted on nickel grids coated with 2% Neoprene (Ohken, Tokyo, Japan). After being immersed in 100% ethanol for 3 minutes, samples were soaked in 0.01 M EDTA (pH 7.2) at 65°C for 24 hours. The samples were washed with PBS three times (5 minutes/wash) and blocked with PBS containing 1% BSA and 0.1% Tween-20 for 15 minutes. The samples were incubated with a mixture of primary antibodies diluted in PBS (1:30) for 1 hour, washed with PBS three times (5 minutes/wash), reacted with 12-nm gold-labeled secondary antibodies, and washed again with PBS three times (5 minutes/wash). The samples were stained with uranyl acetate and examined on a Hitachi H-7000 transmission electron microscope.
MMP-2 secretion analysis
Equal numbers of cells were seeded on 100-mm culture dishes. Because serum contains a high level of endogenous MMP-2 that may interfere with the secretion assay, cells were grown to confluence and then cultured in media without serum for 36 hours. The conditioned media were harvested and the cell numbers were determined. Cell-number-standardized conditioned media were resolved by 10% SDS-PAGE, and the levels of MMP-2 in the media were analyzed by immunoblotting with anti-MMP-2 antibodies.
Matrigel-based invasion assay
Polyvinylpyrrolidone-free polycarbonate filters with 8-μm pore size (Costar, Cambridge, MA, USA) were soaked in matrigel (BD Biosciences) (1:10 in DMEM for transfected B16-F10 cells and 1:50 in DMEM for transfected MCF-7 cells) at 4°C for 36 hours and then incubated at 37°C for 2 hours. The filters were washed four times with DMEM and placed in the microchemotaxis chambers. The cells were treated with 0.1% trypsin-EDTA, resuspended in DMEM containing 10% FBS, and then washed with serum-free DMEM. Cells (1 × 10⁵) were finally suspended in DMEM (200 μl) and placed in the upper compartment of the chemotaxis chambers. Culture medium (300 μl) containing 20% FBS was placed in the lower compartment of the chemotaxis chamber to serve as a source of chemoattractants. After incubation in the cell culture incubator for 10 hours (transfected B16-F10 cells) or 24 hours (transfected MCF-7 cells), the cells on the upper surface of the filter were completely wiped away with a cotton swab. The cells on the lower surface of the filter were fixed in methanol, stained with Liu's A and Liu's B reagents, and counted under a microscope. Cells that had invaded into the microchemotaxis chambers were also counted. For each replicate, the tumour cells in 10 randomly selected fields were counted and the counts were averaged.
Animal metastasis experiment
C57BL/6 mice aged 6-7 weeks (National Laboratory Animal Center, Taipei, Taiwan) were housed in an animal holding room under standard conditions (22°C; 50% humidity; 12-hour light/dark cycle). Each C57BL/6 mouse was injected in the tail vein with viable B16-EV cells or B16-anti-CAS cells (5 × 10⁴ cells in 50 μl DMEM/mouse). Each experimental group included 11 B16-EV cell-injected mice and 11 B16-anti-CAS cell-injected mice, and a total of 66 mice were used in the experiment. Twenty-five days after injection, the mice were sacrificed and necropsied. The numbers of tumours in the lungs were counted by macrography and micrography. Mouse care and experimental procedures followed the guidelines of the Animal Care Committee of Academia Sinica, Taiwan.
Statistical analysis
All values are expressed as mean ± standard deviation (SD). Statistical differences were analyzed by two-tailed Student's t-test. An α-level of 0.05 was used to determine statistical significance.
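For readers who want to reproduce this style of analysis, the comparisons reported below amount to summarizing each group as mean ± SD and applying a two-sample, two-tailed Student's t-test at α = 0.05. The short Python sketch that follows is only an illustrative reconstruction with hypothetical numbers; it is not the authors' analysis code, and the example values of invaded cells per field are invented for demonstration.

# Minimal sketch of the reported analysis: mean +/- SD per group and a
# two-tailed Student's t-test at alpha = 0.05 (hypothetical example data).
import numpy as np
from scipy import stats

ALPHA = 0.05  # significance threshold used in the paper

group_a = np.array([44.0, 47.5, 45.2, 46.1])  # hypothetical treatment group (cells/field)
group_b = np.array([20.3, 18.9, 19.7, 19.5])  # hypothetical control group (cells/field)

for name, values in (("group A", group_a), ("group B", group_b)):
    # Values are summarized as mean +/- sample SD (ddof=1).
    print(f"{name}: mean = {values.mean():.1f}, SD = {values.std(ddof=1):.1f}")

# Two-tailed Student's t-test assuming equal variances, as in the classic test.
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=True)
print(f"t = {t_stat:.2f}, two-tailed p = {p_value:.4f}, significant = {p_value < ALPHA}")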
Results
Increased CAS expression is unable to enhance the proliferation of MCF-7 cancer cells
MCF-7 cells were separately transfected with the pcDNA3.1 control vector (EV), the pcDNA-CAS vector (CAS), and the pcDNA-anti-CAS vector (anti-CAS) to obtain MCF-EV, MCF-CAS, and MCF-anti-CAS cells, respectively (Fig. 1a). For reducing cellular CAS levels, the antisense DNA method is better suited to this study than techniques such as gene knockout or siRNA. Because CAS is essential for cell survival, knockout of the CAS gene or extreme CAS reduction may affect cell survival [22]. Moreover, CAS was identified in a study of an antisense DNA fragment capable of conferring cell resistance to apoptosis induced by bacterial toxins [17]; hence, reduction of cellular CAS levels by an antisense DNA fragment against CAS is sufficient to obtain the cellular effect of CAS reduction. We assayed the proliferation of MCF-EV, MCF-CAS, and MCF-anti-CAS cells to study the effect of CAS expression on the proliferation of MCF-7 cancer cells. CAS is a cellular apoptosis susceptibility protein, so high CAS expression may be toxic to cells. In routine cell culture, we noted that the growth of MCF-CAS cells was obviously slower than that of MCF-EV and MCF-anti-CAS cells, probably due to the cytotoxicity of CAS. CAS is essential for cell survival, and most MCF-7 cells died after being transfected with the anti-CAS vector (data not shown). However, the growth rate of the established stable cell line, MCF-anti-CAS, was similar to that of the MCF-EV cells (Fig. 1b). On the other hand, the cell proliferation assays showed that the growth rate of MCF-CAS cells was indeed slower than that of MCF-EV and MCF-anti-CAS cells (Fig. 1b). We speculated that the cytotoxicity of CAS might account for the inhibition of proliferation in MCF-CAS cells. Flow cytometry analyses were therefore performed to study whether CAS overexpression induces cytotoxicity in MCF-7 cells and reduces cell proliferation. MCF-EV and MCF-CAS cells were cultured for 24 or 96 hours, and FACS analyses based on DNA content were used to determine the cell cycle distribution. The phase distributions were 56.0% G1, 22.4% S, and 21.6% G2/M for MCF-EV cells cultured for 24 hours; 55.2% G1, 20.4% S, and 24.4% G2/M for MCF-CAS cells cultured for 24 hours; 57.0% G1, 16.2% S, and 26.8% G2/M for MCF-EV cells cultured for 96 hours; and 58.2% G1, 11.3% S, and 30.5% G2/M for MCF-CAS cells cultured for 96 hours. The percentages of cells in the sub-G1 phase were 1.0% for MCF-EV cells cultured for 24 hours, 1.2% for MCF-CAS cells cultured for 24 hours, 2.0% for MCF-EV cells cultured for 96 hours, and 6.4% for MCF-CAS cells cultured for 96 hours. The sub-G1 fraction in a DNA histogram determined by flow cytometry is considered to represent apoptotic cells. The results showed that CAS overexpression (i.e. MCF-CAS cells) increased the percentage of cells in the sub-G1 fraction (Fig. 1c) and decreased the percentage of MCF-7 cells in S phase (Fig. 1c). Thus, CAS overexpression induces cytotoxicity in MCF-7 cells and reduces cell proliferation.
CAS is located in cytoplasmic vesicles near the cell membrane and cell protrusions
Immunofluorescence with anti-CAS monoclonal antibodies showed punctate staining of CAS in the cytoplasm around perinuclear areas and areas near the cell membrane, as well as in the cell protrusions of MCF-7 and 293 cells (Fig. 2a). CAS binds strongly to importin-α, a nuclear-transport receptor [23]; thus, CAS around perinuclear areas may mainly be associated with the importin-α complex. The punctate staining of CAS in cytoplasmic areas near the cell membrane and in cell protrusions indicates that CAS may be located in cytoplasmic vesicles. We therefore studied the location of CAS by electron microscopy. Immunogold electron microscopy showed that CAS (12-nm gold, arrows) was located in vesicles, mainly in the vesicle membrane (Fig. 2b).
CAS colocalizes with MMP-2
Our data suggest that CAS is located in cytoplasmic vesicles. Cytoplasmic vesicles play an important role in regulating exocytosis [24]. Thus, CAS may regulate the secretion of tumor cells and thereby the invasion and metastasis of cancer cells. MMP-2 plays an important role in regulating tumor metastasis. Our double-staining immunofluorescence studies showed that, although not all CAS was colocalized with MMP-2, a substantial fraction of CAS did colocalize with MMP-2 in MCF-7 and 293 cells (Fig. 3).
CAS regulates MMP-2 distribution
The effects of CAS expression on MMP-2 distribution were studied by double-staining immunofluorescence. In MCF-EV cells, colocalization of MMP-2 with CAS was mainly observed in cytoplasmic areas near the cell membrane and only occasionally in cell protrusions (Fig. 4a). In MCF-CAS cells, much of the MMP-2 was efficiently translocated to cell protrusions, where it was clearly colocalized with CAS (Fig. 4a). In a high-resolution image, CAS was found to colocalize with cytoplasmic vesicles and MMP-2 (Fig. 4b).
CAS regulates MMP-2 secretion and invasion of cancer cells
MCF-EV, MCF-CAS, and MCF-anti-CAS cells were grown to confluence and then cultured in media without serum for 36 hours. The conditioned media were harvested and subjected to immunoblotting with anti-MMP-2 antibodies. The results showed that MMP-2 secretion was enhanced by increased CAS expression and reduced by CAS reduction (Fig. 5a). The effects of CAS expression on the invasion of MCF-7 cancer cells were assayed by matrigel-based invasion assays. Increased CAS expression enhanced the invasion of MCF-7 cells by 236.8% (P = 0.0024), whereas reduced CAS expression inhibited the invasion of MCF-7 cells by 57.9% (P = 0.0098). The average numbers of invaded cells were 45.7 ± 4.1, 19.6 ± 6.5, and 8.2 ± 2.1 cells/field for MCF-CAS, MCF-EV, and MCF-anti-CAS cells, respectively (Fig. 5b). The secretion of MMP-2 from another tumor cell line, B16-F10 melanoma cells, was also studied. B16-F10 cells were separately transfected with the EV, CAS, and anti-CAS vectors to obtain B16-EV, B16-CAS, and B16-anti-CAS cells, respectively (Fig. 5c). MMP-2 secretion assays again showed that increased CAS expression enhanced, and reduced CAS expression decreased, MMP-2 secretion from B16-F10 melanoma cells (Fig. 5d). Matrigel-based invasion assays showed that increased CAS expression enhanced the invasion of B16-F10 cells by 249.2% (P = 0.0019), and reduced CAS expression inhibited the invasion of B16-F10 cells by 75.7% (P = 0.0073). The average numbers of invaded cells were 89.5 ± 10.7, 35.9 ± 9.4, and 8.7 ± 2.2 cells/field for B16-CAS, B16-EV, and B16-anti-CAS cells, respectively (Fig. 5e). Thus, the matrigel-based invasion assays indicate that CAS regulates the invasion of cancer cells.
CAS regulates the metastasis of B16-F10 melanoma cells
Experimental animal tumour metastasis assays were performed to study the effect of CAS expression on the metastasis of B16-F10 melanoma cells. B16-F10 cells are highly metastatic in C57BL/6 mice, so we studied whether CAS reduction can reduce the metastasis of B16-F10 cells in these mice. The experiments showed that reduced CAS expression decreased the pulmonary metastasis of B16-F10 cells by 56% in C57BL/6 mice (P = 0.0107). The average numbers of lung tumours were 32.7 ± 6.5 tumours/mouse (average tumour diameter 2.6 ± 1.8 mm) for mice injected with B16-EV cells and 14.3 ± 4.6 tumours/mouse (average tumour diameter 2.5 ± 1.5 mm) for mice injected with B16-anti-CAS cells (Fig. 6). Eleven B16-EV cell-injected mice and six B16-anti-CAS cell-injected mice died three weeks after injection. Thus, anti-CAS transfection reduced the mortality of mice injected with B16-F10 cells, probably because it reduces the metastatic ability of the B16-F10 cells. The results of the animal tumour metastasis experiment indicate that CAS regulates the metastasis of cancers.
Discussion
CAS is regarded as a proliferation-associated protein that is linked to tumour proliferation because it associates with microtubules and functions in the mitotic spindle checkpoint [1][2][3][4][5][10]. Our data, however, showed that CAS overexpression in human MCF-7 cancer cells did not enhance but rather reduced the proliferation of MCF-7 cells. The involvement of CAS in the proliferation of cancer cells is supported by a study showing that reduction of cellular CAS protein, by transfection of antisense cDNA against CAS in HeLa cells, perturbed progression from G2 to G1 in the cell cycle (i.e. retarded the transition from G2) [25]. CAS may be necessary for the M-phase mitotic spindle checkpoint in cell cycle progression, but it is quite unlikely that tumour cells highly expressing CAS thereby increase tumour proliferation. The rate-limiting step for cell proliferation lies mainly at the G1-S transition of the cell cycle rather than at the mitotic phase [26,27]. Moreover, because CAS is associated with the mitotic spindle and regulates the mitotic spindle checkpoint, CAS may halt the progression of mitosis until cells are truly ready to divide. The p53 protein also plays a role in activating cell cycle checkpoints, and activation of p53 can stop cell cycle progression at those checkpoints [28,29]. The involvement of CAS in the proliferation of cancer cells is also supported by a pathological study showing that the expression of the Ki-67 proliferation marker in lymphomas was significantly positively correlated with CAS [3]; however, the same study reported that a significant fraction of CAS-positive normal and malignant lymphocytes were Ki-67 negative [3]. In tumours, various oncogenes are activated and various anti-oncogenes are inactivated [30,31]; these activated oncogenes and inactivated anti-oncogenes may stimulate the proliferation of tumours that highly express CAS. Thus, a positive correlation between CAS and Ki-67 expression in tumours is not sufficient to conclude that CAS is related to tumour proliferation. As an apoptosis susceptibility protein, high expression of CAS can render cells susceptible to apoptosis. Our flow cytometry cell cycle study showed that high CAS expression in cancer cells can induce cytotoxicity and decrease proliferation (Fig. 1).
Formation of cell polarity can stimulate cell-cell adhesion, inhibit tumour migration, and decrease cell proliferation [32]. We have reported that CAS stimulated the polarity of HT-29 cancer cells and thus inhibited the migration of HT-29 cells [11,33]. These results appear to contradict our present report that CAS enhances the metastasis of cancer cells. However, HT-29 is an unusual cell line in that it readily forms polarity in in vitro culture [34]. In another study, we observed that CAS was unable to stimulate the polarity of other cancer cell lines, including B16-F10, MCF-7, Colon 205, Hep G2, and SK-Hep-1 cells (data not shown). CAS was also unable to increase the proliferation of these cells (data not shown). Thus, the ability of CAS to promote polarity formation and inhibit migration seems to be a special phenomenon restricted to the HT-29 cell line, and enhancement of invasion and metastasis appears to be the true role of CAS in tumour development.
Although MMPs were previously believed to be synthesized and rapidly secreted, later experiments showed that MMPs are stored in secretory vesicles and are rapidly secreted in response to angiogenic stimuli [35]. Thus, regulation of the secretion of ECM-degradation proteinases from tumour cells may also play an important role in regulating tumour metastasis. Our data showed that CAS was localized in vesicles and that CAS overexpression enhanced the secretion of MMP-2 and the invasion and metastasis of tumour cells. Pathological studies have also shown that CAS is highly expressed in metastatic tumours and that CAS expression correlates positively with high cancer stage, high cancer grade, and worse outcome of the cancer patients. Taken together, CAS may play an important role in regulating tumour metastasis, and CAS together with ECM-degradation proteinases may be used as markers for predicting the advance of tumour metastasis.
Figure 6 Animal tumour metastasis experiments show that CAS regulates the metastasis of B16-F10 melanoma cells. Eleven B16-EV cell-injected mice and six B16-anti-CAS cell-injected mice died three weeks after injection and were therefore excluded from the statistics. Six mice (three B16-EV cell-injected and three B16-anti-CAS cell-injected) did not develop lung tumours and were also excluded. The results show that reduced CAS expression inhibited the pulmonary tumour metastasis of B16-F10 melanoma cells in C57BL/6 mice.
Background Telepsychology is increasingly being implemented in mental health care. We conducted a scoping review of the best available research evidence regarding the availability, efficacy and clinical utility of telepsychology in DBT. The review was performed using the PRISMA-ScR guidelines. Our aim was to help DBT therapists make empirically supported decisions about the use of telepsychology during and after the current pandemic and to anticipate the changing digital needs of patients and clinicians. Methods A search was conducted in PubMed, Embase, PsycARTICLES and Web of Science. Search terms for telepsychology were included and combined with search terms related to DBT. Results Our search and selection procedures resulted in 41 articles containing information on phone consultation, smartphone applications, internet-delivered skills training, videoconferencing, virtual reality and computer- or video-assisted interventions in DBT. Conclusions The majority of research on telepsychology in DBT has focused on the treatment mode of between-session contact. However, more trials using sophisticated empirical methodologies are needed. Quantitative data on the efficacy and utility of online and blended alternatives to standard (i.e. face-to-face) individual therapy, skills training and the therapist consultation team were scarce. The studies that we found were designed to evaluate feasibility and usability. A permanent shift to videoconferencing or online training is therefore not warranted as long as face-to-face contact is an option. In all, there is an urgent need to compare standard DBT to online or blended DBT. Smartphone apps and virtual reality (VR) are experienced as acceptable facilitators of access to and implementation of DBT skills. In addition, we have to move forward on telepsychology applications by consulting our patients, younger peers and experts in adjacent fields if we want DBT to remain effective and relevant in the digital age. Supplementary Information The online version contains supplementary material available at 10.1186/s40479-021-00165-7.
Background
Telepsychology, i.e. the provision of psychological services using telecommunication or digital technologies (e.g. the internet, telephone applications and virtual reality), is on the rise [2][3][4][5][6]. COVID-19 has accelerated this trend [7]. However, a scoping review of the efficacy and clinical utility of telepsychology in DBT is lacking. The aim of the current review is to help DBT therapists make empirically supported decisions about the use of telepsychology during and after the current pandemic and to anticipate the changing digital needs of patients and clinicians.
Standard DBT is an empirically supported, cognitive behavioral treatment for adults and adolescents suffering from the chronic suicidal and self-harming behavior characteristic of borderline personality disorder (BPD) [8,9]. DBT is a comprehensive program that consists of four primary modes of treatment delivery to address the multiple needs of suicidal patients [9][10][11]. In individual therapy, patients figure out how they can take realistic steps toward a life worth living and how to stay motivated. In skills training, the focus is on acquiring skills from trainers and peers. Consultation outside of office hours, preferably by the individual therapist, facilitates generalization of skills to the situations where patients need them most, e.g. during suicidal crises. Even though this mode is often called "phone consultation", DBT teams have used all kinds of technology to stay in touch with clients [12]. The therapist consultation team, lastly, helps individual therapists and skills trainers to deliver adherent treatment and remain motivated throughout the process [9,11].
Although primarily designed for suicidal and self-harming behavior, DBT remains effective when tailored to fit the needs of other clinical populations, age groups, or treatment settings [13,14]. A significant part of this customizability relates to the underlying treatment rationale, Marsha Linehan's biosocial theory. This theory explains maladaptive behavior, including suicidal behavior and non-suicidal self-injury (NSSI), as a manifestation of pervasive emotion dysregulation or as a way of coping with it [9,15]. Emotion regulation capacity is a transdiagnostic and dimensional construct that is assumed to play a key role in a broad range of mental illnesses [15][16][17][18]. The DBT skills training in particular is evolving from a treatment mode for suicidal patients with BPD into a transdiagnostic intervention [9]. Telepsychology is increasingly being used in DBT supervision and training for professionals [19][20][21][22][23][24][25] and in mental health care in general [26,27]. Arguments both in favor of and against such an evolution can be made. On the one hand, evidence suggests that the efficacy of telepsychology is comparable to face-to-face care in diverse populations and settings [28][29][30]. On the other hand, studies of telepsychology have primarily focused on short-term effects [31][32][33]. In addition, there is insufficient research to consider any telepsychology intervention evidence-based for suicidal ideation, self-harm or BPD [34,35]. However, a comprehensive overview of research regarding the use of telepsychology in DBT is lacking.
We conducted a scoping review to fill this research gap [36]. Since we are interested in the use of telepsychology in DBT in all of its capacities and welcome the evolution toward dimensional theoretical constructs in psychiatry, we did not restrict ourselves to one specific diagnostic category. The overarching aim was to document the best available research evidence regarding the efficacy and clinical utility of telepsychology in DBT. In doing so, we used the definitions of the American Psychological Association (APA) [37,38]. Treatment efficacy referred to the scientific evaluation of whether a treatment works. Clinical utility comprised the applicability, feasibility, and usefulness of the intervention in specific situations, as well as the generalizability of an intervention whose efficacy had been established [37,38]. To obtain a detailed overview, we reviewed all types of research evidence (i.e. clinical opinion, observation, consensus among recognized experts, systematized clinical observation, quasi-experiments, randomized controlled trials) that could help answer our research questions. At the same time, we took the differences in methodological quality into account when interpreting the results [39,40]. Within our overarching aim, we had three specific research questions (RQ): RQ1: What do we know about the efficacy and clinical utility of telepsychology in standard DBT, i.e. using telecom for between-session contact, in support of skills generalization?
RQ2: To what extent is telepsychology equivalent or superior to face-to-face contact in other modes of DBT treatment, i.e. individual therapy, skills training, consultation team?
RQ3: Does the addition of telepsychology to standard DBT modes, -strategies, -procedures and skills increase the efficacy or clinical utility of DBT?
Criteria and identification of studies
A search was conducted in PubMed, Embase, PsycARTICLES and Web of Science (WoS) up to 9 March 2021, following the PRISMA-ScR guidelines [36]. Search terms for telepsychology were included and combined with search terms related to DBT. Search terms (Additional file 1) and syntax (Additional file 2) were modified as necessary for each database. Only 1) English, French, German and Dutch manuscripts from 2) peer-reviewed sources that 3) provided quantitative or qualitative research evidence 4) on the efficacy or clinical utility of 5) the use of digital technology 6) in DBT treatment or 7) the implementation of telepsychology in DBT treatment were considered. To identify other potentially relevant studies we 1) crosschecked the reference lists and citing articles of identified studies, 2) checked the references of The Oxford Handbook of DBT [13], and 3) screened the references of Phone Coaching in Dialectical Behavior Therapy [12] (Fig. 1).
Selection of studies
All records from the database searches were imported into EndNote. This collection was deduplicated electronically and manually. Titles and abstracts were screened for eligibility by two review authors independently (HvL, RS). Records that clearly did not fulfil the criteria were excluded. The remaining references were retrieved in full text and assessed for eligibility by the same two review authors (HvL, RS) (Additional file 3). Conflicting decisions were discussed until agreement was reached. If necessary, we consulted two other authors (UW, LvdB).
For the randomized controlled trials (RCTs) and pre-post studies, detailed descriptions of the design, participants, experimental and comparator interventions, measures, and statistical significance of changes in the primary outcome variables can be found in Table 1. If an effect was significant, we also reported Cohen's d or calculated it ourselves, following the formulas and procedures described in Lakens et al. [41]. In the results section, we go over the main findings per research question and per technology.
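As background for readers who want to check such effect-size calculations: for a between-groups comparison, Cohen's d_s divides the mean difference by the pooled standard deviation and can also be recovered from a reported t-value, while d_av is one of the within-subject variants Lakens describes for pre-post designs. The formulas below are a minimal sketch of those standard definitions from Lakens (2013); they are not necessarily the exact variant applied to every study in Table 1, which depends on which statistics each primary study reported.

\[
d_s = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{\dfrac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}}
    = t \sqrt{\frac{1}{n_1} + \frac{1}{n_2}},
\qquad
d_{av} = \frac{\bar{X}_{\mathrm{pre}} - \bar{X}_{\mathrm{post}}}{\left(s_{\mathrm{pre}} + s_{\mathrm{post}}\right)/2}
\]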
Sample of studies
The original database search yielded 804 records. Crosschecking reference lists and screening The Oxford Handbook of DBT and Phone Coaching in Dialectical Behavior Therapy resulted in 10 additional records. This sample of 814 records contained 196 duplicates. After screening titles and abstracts, 83 references were selected as potentially eligible, of which 41 clearly met the inclusion criteria (Fig. 1). A detailed overview of the 83 full-text articles that we assessed, and the reasons why specific articles were excluded, can be found in Additional file 3.
Study characteristics
In total, 41 studies were selected. The majority of the studies consist of observational data, secondary analyses, focus groups and surveys, patients' opinions and expert opinions. Eighteen of the forty-one studies focused on the frequency, efficacy and utility of phone consultation. Nine studies focused on the added value of smartphone applications. Nine studies focused on internet-delivered skills training. Four studies focused on virtual reality in DBT. One study focused on a computer program and one study focused on learning a DBT skill by means of video. All studies were in the initial stage of clinical research (i.e. feasibility, acceptance, usability).
Efficacy and clinical utility of telecom for between-session contact, in support of skills generalization (RQ1)
Telephone
Looking first at treatment efficacy, we did not find any RCTs or pre-post studies about the added value of phone consultation in DBT. However, there are articles that contain data on phone consultation frequency and associations with treatment outcome. Chalker et al. [52] investigated the association between phone calls, satisfaction (patients and therapists), and treatment outcome (patients) in a standard DBT program for adults diagnosed with BPD. More frequent between-session contact was significantly associated with client satisfaction, therapist satisfaction, treatment retention and a decrease in psychosocial problems. There was no association between violation of personal limits regarding phone calls and any of the outcome measures. Oliveira and Rizvi [53] studied the frequency of phone calls and text messaging in a sample of patients with BPD who were participating in 6 months of standard DBT. Results showed that therapists received an average of 2.55 phone calls a month. Limbrunner et al. [54] investigated DBT phone consultation to reduce eating disorder-related urges in DBT for adults with eating disorders. Results indicated that the number of calls ranged from 0 to 4 per day. The duration of phone consultations ranged from 1 to 30 min, with an average of 6 min. The other studies focus on the clinical utility of phone consultation. Linehan formulated most of her expert opinions regarding phone consultation in the DBT treatment manual [10]. In later articles [55,56] she highlighted aspects of this mode of treatment and substantiated the assumption that DBT reduces the contingency between between-session contact and suicidal behavior. Linehan emphasizes the importance of timing, i.e. calling before self-destructive behavior takes place, and content, i.e. focused on skills in a 'matter of fact' tone. She also asserts that the individual therapist is best placed to provide phone consultation, since this is the only person who can repair relationship ruptures between sessions, knows the patient's learning history and current skills, and can discuss the chain of events that led up to the consultation. At the same time, Linehan accentuates the necessity of teaching the patient skillful ways of asking for help (i.e. "making the therapist want to talk to them on the phone"), because it can be life-saving for chronically suicidal patients. This requires firm but flexible use of personal limits that may vary by patient, time and context.
In response to the first RCT of Linehan et al. [57] on the effectiveness of DBT, R.E. Hoffman wrote a letter to the editor [58] in which several comments and questions about the trial were posed, including the comment that 24/7 telephone consultation is not feasible in the treatment-as-usual (TAU) condition. Linehan and Heard acknowledged that the availability of therapists had probably been greater in DBT than in TAU. However, the actual number of telephone calls per month did not differ. Linehan and Heard also pointed out that there was no correlation between the number of phone calls and the number of parasuicide episodes in DBT, in contrast to TAU, where such a correlation was found.
Implementation studies and expert opinions provide additional insight into the clinical utility of telephone consultation in DBT. Chugani and Landes [59] conducted a survey among clinicians to investigate the implementation of DBT in college counseling centers. A frequently reported barrier to implementing the standard DBT program was the unwillingness of individual therapists to offer phone consultation. Flynn et al. [60] reported similar findings in a survey exploring challenges experienced by clinical sites implementing DBT in community settings. Common issues concerned therapists' reluctance, lack of management support and questions regarding clinical responsibility. Landes et al. [61,62] made a step-by-step inventory of the implementation process of DBT in the Veterans Health Administration (VHA) healthcare system. The barriers to implementing DBT in a routine setting that were rated as 'unable to overcome' were all related to phone consultation. In Landes et al. [63], the authors identify four specific challenges, and solutions, with regard to phone coaching: 1) 'tools', such as a work telephone, a laptop that gives access to patients' electronic medical records, and organizational policies and procedures; 2) compensation for after-hours phone coaching; 3) willingness of clinicians to provide phone coaching; and 4) consistent program and leadership support.
Manning [64], Koons [65] and Ben-Porath [66][67][68][69] describe how to carry out phone consultations. They emphasize the importance of explaining the essence and workings of phone consultation to the patient in detail, resulting in an agreement. When this is done carefully beforehand, patients will be more aware of the contingencies of problem behavior. Based on clinical experience, myths about the expected disastrous impact of phone consultation on the therapist's life and professional career (among others: being called every night, burnout, inadvertently reinforcing maladaptive behavior and thus increasing suicide risk, the risk of being sued) are confronted. At the same time, common errors (among others: failure to orient patients, errors concerning contingency management, using phone consultation for needs other than skills generalization, contact recovery or encouragement, and setting limits when they are not crossed) and the need for support, validation and problem solving in the therapist consultation team are discussed. Finally, Steinberg et al. [70] emphasize the importance of making informed decisions concerning parental involvement in phone consultation; they therefore developed a decision tree to assist in determining when parental involvement with adolescents is necessary.
Videoconferencing
Chu et al. [71] describe two case studies of the DBT school refusal (DBT-SR) program.
In this program, DBT strategies, drawn from DBT for adolescents (DBT-A), were used to target emotional and behavioural dysregulation (including internalizing problems such as anxiety and depression) in youth who refuse to go to school.
Internet
We found two controlled trials on the efficacy of internet-delivered DBT skills training. Lungu [44] developed and evaluated the effectiveness of a computerized, transdiagnostic DBT skills training for emotion regulation (iDBT-ER) for adults with a wide range of psychopathology. This online intervention consisted of eight 1-hour weekly sessions: the first two sessions focused on mindfulness skills, followed by six sessions on emotion regulation skills. Every session followed the same structure: an overview of the session material, mindfulness practice, homework review, teaching new skills (using videos), practicing new skills (a variety of online assignments), assigning new homework and anticipating potential obstacles. Results of this pilot were compared to a matched historical control group who received face-to-face DBT skills training (DBT-ST). Participants in the iDBT-ER reported progress on all of the primary outcomes, namely emotion dysregulation, psychopathology (anxiety, depression), general distress, DBT skills practice and mindfulness skills practice. Compared to the historical control group, pre-post effect sizes were similar for skills practice. Pre-post effect sizes were lower for iDBT-ER for anxiety, depression and general distress. Compared to DBT-ST, iDBT-ER showed a strong and significant increase in self-reported mindfulness (see Table 1). Wilks et al. [50,51,73,74] performed an RCT to evaluate the feasibility, acceptability and efficacy of an eight-session internet-delivered DBT skills training intervention (iDBT-ST) for suicidal adults who engage in heavy episodic drinking. Each session lasted approximately 30-50 min and included 2-3 new DBT skills. Each skill was introduced via a short video, after which participants engaged in interactive and guided practice. At the end of each session, participants selected a homework exercise. Participants received DBT worksheets and were encouraged via daily emails and/or text messages. One third of all participants completed the training. No clinical differences were found between drop-outs and completers of iDBT-ST (see Table 1). Results showed that technical problems posed barriers to treatment feasibility and completion. However, over the four-month study period, an immediate and significant reduction in suicidal ideation, alcohol use (quantity and frequency) and emotion dysregulation was found compared to the waiting-list controls, with large effect sizes for suicidal ideation and alcohol consumption.
Videoconferencing
Salamin et al. [42] compared the diary card data of seven patients suffering from borderline personality disorder during two periods: the 8 weeks prior to confinement (i.e., the set of measures introduced to slow the spread of COVID-19) and 8 weeks during confinement. From 16 March 2020 to 26 April 2020, individual therapy, DBT skills training and consultation team meetings were only possible by means of phone consultation or videoconferencing. Moreover, DBT skills training was limited to 45 min (instead of 2 h) per week and was provided individually. Using a multilevel approach, the authors found a significant decrease in self-reported binge-eating, fear, shame/guilt and tension even after the switch to videoconferencing. At the same time, problem behaviours including suicidal behaviour, NSSI and anger outbursts, and experiences including sadness, anger, happiness, emptiness and suicidal ideation, did not change significantly in this period. Self-reported distress increased significantly (see Table 1). Lopez et al. [43] performed a pilot study comparing group cohesion between patients who participated in a DBT skills training group via video teleconferencing (VTC) and an in-person DBT group. The primary diagnosis of the patients was depression, but patients with bipolar and anxiety disorders were also included. Results show that the relationship with the facilitator and patients' sense of their learning capacity did not differ between the two groups. There was a significant difference in member interaction and group cohesion between the two groups (Table 1). The VTC group found it harder to connect with each other in the virtual environment. Compared to the in-person DBT group, the VTC group had significantly better attendance, although participants reported that attending the group via telehealth would not have been their first choice. Treatment via VTC was preferable to no treatment at all.
Concerning clinical utility, one survey study and one expert opinion were found. Lakeman and Crighton [75] conducted a survey among clinicians to explore the impact of the COVID-19 measures on various DBT programmes for patients with BPD and the obstacles to engaging with patients and colleagues via online platforms. Results show that the primary obstacles to providing DBT via online platforms were service- and clinician-centred. Few clinicians expressed confidence in being able to adapt to online DBT. Clinicians had no experience with online platforms and some did not have access to the internet or to privacy in their home environment. The authors concluded that clinicians need to be supported through education, supervision and coaching in the use of telehealth interventions.
O'Hayer [76] highlighted challenges and opportunities of comprehensive DBT for BPD via the online videoconferencing platform Zoom. Challenges in the skills training were that patients tended to get distracted, feel ashamed or disconnect. Opportunities included using the chat function to communicate with participants, spontaneous 'virtual tours' during the break and the ability to choose a screen name. Challenges for individual treatment were less engagement and increased concerns about the therapist being distracted. Overall, patients reported that they felt less connected.
Video
Waltz et al. [77] performed an RCT to evaluate the feasibility of a psychoeducational video for teaching adults with BPD who were naïve to DBT a novel DBT skill. In the experimental condition, participants viewed the experimental video first and watched the control video a week later (designed to control for time, attention and repeated testing). In the control group the order was reversed. Follow-up was 1 week later. In the experimental video, Marsha Linehan taught 'opposite action', part of the emotion regulation module of DBT [78]. Twenty percent of the participants dropped out. All remaining participants used the opposite action skill one or more times in the week after watching the experimental video. At follow-up, a significant reduction in painful emotions was reported, as well as an increase in knowledge, an increase in expectations of a positive outcome, and high satisfaction after watching the opposite action video (Table 1).
Efficacy and clinical utility of adding telepsychology to standard DBT modes, -strategies and -procedures (RQ3)
Mobile (web) application
Four trials provided data on the efficacy and clinical utility of DBT mobile applications. Rodante et al. [46] investigated the acceptability and preliminary effectiveness of an interactive mobile-health application in 18 adults who had participated in DBT for at least a month and showed suicidal behaviours or NSSI. The app, named CALMA, provided DBT skills and evidence-based tools to prevent suicide. There were four modalities: "out of crisis", "I need help", "problem-solving" and "emergency". The emergency modality provides users with their emergency contacts and the possibility to share their location with others. It is automatically activated if the app detects that distress does not decrease after three attempts to use skills. Bayesian analysis showed a high probability of decreased suicidal ideation (p = .966), suicidal plans (p = .849), suicidal gestures (p = .760), thoughts about NSSI (p = .909) and NSSI (p = .826) in the group where CALMA was added to DBT. The authors reported a high probability of a greater decrease in suicidal ideation and NSSI in DBT + CALMA compared to DBT only. However, all intervals for these comparisons included zero. The app also showed good acceptability among users.
Rizvi et al. [47] explored the feasibility of a DBT Coach smartphone app for the emotion regulation skill 'opposite action'. Adults diagnosed with BPD and a comorbid substance use disorder, already in standard DBT treatment, received a smartphone with the app for 10-14 days, which they could use whenever needed. The app consisted of instructions on how to apply the opposite action skill, and mindfulness. Each app use started and ended with a rating of emotional intensity and urges to use drugs on a 0-10 Likert scale. The app was used on average 15 times over the course of 13 days. Participants reported that use of the app strengthened their knowledge of, and self-efficacy in executing, the skill. The intensity of emotions and the urge to use substances decreased significantly after usage, and a decrease in depression, psychological distress and overall substance use was found over the course of the trial (see Table 1). An extended version of the DBT Coach was evaluated by Rizvi, Hughes and Thomas [48]. The app was used by adults diagnosed with BPD who had a recent history of NSSI, as an add-on to 6 months of standard DBT. The app included skills from mindfulness, emotion regulation, interpersonal effectiveness and distress tolerance. Results showed a reduction in distress and in urges to self-harm directly after using the app. Frequency of use did not correlate with treatment outcomes, except for the frequency of NSSI episodes, where higher use was associated with fewer episodes (see Table 1). Over 90% of the participants reported that the app was easy to understand and use, and that they would use the app if it were available outside the trial.
Schroeder et al. [79] developed "Pocket Skills", a mobile web app that uses texts, videos and images of Marsha Linehan to teach DBT skills (i.e. basics, mindfulness, emotion regulation, distress tolerance and addiction). A conversational agent promotes engagement with the app and use of the skills. Pocket Skills also gives day-by-day access to a DBT diary card. Participants were adults diagnosed with depression, generalized anxiety disorder (GAD), BPD, post-traumatic stress disorder (PTSD) or bipolar disorder. Schroeder et al. conducted a 4-week field study in which participants were randomized into two groups. The experimental group received semi-personalized text messages each morning, such as 'One of your mindfulness goals is to reduce pain, tension and stress! Keep practicing mindfulness skills!'. The other group received non-personalized text messages. Depression, anxiety and dysfunctional coping decreased, and the use of DBT skills increased (Table 1). Participants who received daily semi-personalised messages practiced more skills, which resulted in faster improvements. However, we found no information about the randomization process or the number of participants in each group. The app was rated 'very usable'. Exit-survey data confirmed that Pocket Skills helped patients stay engaged in DBT and practice their skills.
Four articles contained descriptions of clinical utility based on opinions and qualitative data on DBT applications for smartphones and tablets. Austin et al. [80] evaluated a smartphone application developed to support DBT. Participants who were receiving DBT used the app. Overall, there was a positive perception of the app's efficacy and usability. Helweg-Joergensen et al. [49] developed a smartphone application, called mDiary, as an adjunct to DBT for completing DBT diary cards. Adults enrolled in active DBT treatment used the app for at least 4 weeks. The diary card data could be consulted by therapists online. The authors concluded that the mDiary app was an acceptable and relevant innovation for both patients and therapists, although patients experienced better usability than therapists did. Cristol [81] describes the perspective of one BPD patient in DBT on how using technology in DBT can be validating and helpful in discovering typical responses to common stressors. This patient tried several mobile applications and found 'Daylio' the most useful. Using this app helped the patient to minimize stigma, as using a smartphone is a common habit, in contrast to filling out diary cards with paper and pencil. Washburn and Parrish [45] described their experience with the DBT Self-Help mobile application, which gives access to key DBT skill sets such as mindfulness, distress tolerance, emotion regulation and interpersonal effectiveness. They recommended the application for patients already enrolled in a DBT program.
Virtual reality
Looking first at treatment efficacy, Navarro-Haro et al. [82] evaluated virtual reality Dialectical Behavior Therapy (VR DBT) mindfulness skills training in a pre-post study with patients with GAD. Patients were randomly assigned to a mindfulness-based intervention (MBI) with or without 10 min of VR DBT mindfulness skills training. The MBI consisted of seven modules, once a week. Following the first six sessions of the MBI, half of the participants also took part in six individual 10-min sessions of VR DBT mindfulness skills training. During the VR DBT mindfulness sessions, participants wore a headset and floated down a 3D computer-generated river while listening to one of three mindfulness skills training audio tracks: 'observing sound', 'observing visuals' and 'wise mind'. Both groups showed a significant decrease in GAD symptoms (see Table 1). The MBI plus VR DBT group retained significantly more participants than the MBI-only group. Additional pre-post analyses showed that the MBI plus VR DBT group improved on the non-judging facet of mindfulness, on the interference subscale of the Emotion Regulation Scale, and in the state of relaxation measured after all the VR DBT sessions.
Three studies concern the clinical utility of virtual reality. First, Navarro-Haro et al. [83] investigated the clinical utility of virtual reality (VR) by using immersive VR to facilitate mindfulness skills training in DBT, as described above. They report a case study of a 32-year-old woman diagnosed with BPD and substance use disorder who received standard DBT. Key measurements were administered before and after each VR DBT mindfulness skills training session; urges to commit suicide, self-harm, quit therapy and use substances, as well as negative emotions measured by the diary card, were all reduced after each VR mindfulness session. Gomez et al. [84] used the VR DBT mindfulness skills training in a case study of a 21-year-old male with severe skin burns covering one third of his body. The primary assessment consisted of measurements of post-traumatic stress symptoms, mindfulness acceptance and positive and negative emotions before and after each VR DBT skills training session. Results show that the patient accepted the VR DBT and wanted to continue using mindfulness. There was a small reduction in PTSD symptoms after four VR DBT sessions; the reduction in negative emotions was most pronounced after the first VR DBT session, negative emotions decreased even further after the second session, and they stayed near zero after the third and fourth sessions. Positive emotions were very high after the VR DBT sessions. The same virtual-reality-enhanced DBT mindfulness skills training was used by Flores et al. [85], who describe a case study investigating its feasibility for two patients with spinal cord injury. The primary assessment consisted of measurements of depression, anxiety and positive and negative emotions before and after each VR DBT skills training session. Results showed that the patients not only accepted VR as part of their treatment but also liked using it. Both patients showed a reduction in ratings of depression and nervousness/anxiety and reported being less emotionally upset after the VR DBT skills training. Patient 1 also showed a reduction in negative emotions, whereas the negative emotions of patient 2 increased directly after the VR DBT mindfulness skills session.
Computer program
Görg et al. [86] examined the acceptance and feasibility of the computer program MORPHEUS, part of a Dialectical Behavior Therapy for Post-Traumatic Stress Disorder (DBT-PTSD) residential treatment, which allows computer-assisted in sensu exposure and self-management exercises during the treatment of PTSD. MORPHEUS can be used to record, and listen back to, in sensu exposure sessions. While playing back the recorded sessions, patients monitor their level of distress and state dissociation. If the level of state dissociation is too high, MORPHEUS offers one of 15 different skills to help patients regulate themselves. Patients received a 12-week multicomponent residential treatment based on the principles of DBT. All participants were diagnosed with PTSD and used MORPHEUS as often as required by the standard DBT-PTSD protocol, that is, at least 2 to 5 times a week. Results show that patients found the skills helpful for blocking dissociation, wanted to use the program again in therapy and would recommend it to a friend. However, patients tended to use their own DBT skills during exposure rather than the digital skills offered by MORPHEUS.
Discussion
We conducted a scoping review to help DBT-therapists make empirically supported decisions about the use of telepsychology during and after the current pandemic and to anticipate the changing digital needs of patients and clinicians.
Our first focus was the efficacy and clinical utility of telepsychology in standard DBT, i.e. using telecommunication for between-session contact in support of skills generalization. The literature provides valuable information about using telephone and videoconferencing to support this mode of treatment. Although the telephone was the first technology used to provide between-session coaching, quantitative information on the efficacy and utility of phone coaching by the individual therapist remains limited. We found data about the frequency of out-of-session contact [52-54], the percentages of calls made for different purposes in telephone consultation [53,54], and associations between out-of-session contact and both decreased drop-out and greater change in psychological symptoms [52]. However, an RCT on the added value of out-of-office availability is missing. This is striking, given the function telephone consultation has, the fact that this treatment mode is a key barrier to the implementation of DBT [59-63], and that experts describe how out-of-office availability places a substantial strain on DBT teams. An RCT in suicidal BPD patients comparing 24/7 access to between-session coaching provided by the individual therapist versus another service/application versus no between-session coaching is delicate. However, a trial performed by Nadort et al. on the added value of telephone availability in schema-focused therapy (SFT) for BPD suggests that it is feasible [87]. An alternative would be to build on the previous work of Oliviera and Rizvi [53] and collect more fine-grained data on between-session contact in ongoing and future DBT trials. As there is a strong increase in research into DBT [87], a meta-analysis on the subject could be within reach.
Our second focus was to identify research about telepsychology in DBT modes of treatment that are usually provided face-to-face (i.e. individual therapy, skills training or consultation team). Quantitative studies on individual therapy or DBT skills training by means of videoconferencing (i.e. synchronous communication with trainers and peers) or blending face-to-face individual therapy with interactive online modules (i.e. asynchronous communication with trainers) are scarce, and still in an early phase of clinical research: evaluating feasibility, acceptance and usability. We did not find RCTs testing the hypothesis that online or blended DBT is superior to, or at least as effective as, standard face-to-face DBT. At the same time, we observe a steady increase in online or blended care in clinical practice, with the coronavirus pandemic as a catalyst [7,26,27].
In the absence of sufficient evidence, we think it is advisable to return to face-to-face contact as soon as possible and to remain aware of selection bias, confirmation bias and technology optimism. Cautioning against bias is not the same as stating that online or blended DBT is not efficacious or useless. Despite the concerns and challenges that were reported (i.e. technical issues, difficulties in achieving connectedness and group cohesion [43,74,75]), we found no reports of serious adverse events or loss of learning capacity in teams that switched to videoconferencing platforms during the pandemic [42,74,75]. Treatment via videoconferencing was preferable to no treatment at all [74]. In addition, the results of pre-post studies about acquiring DBT skills by means of internet-delivered modules [44,50,51,73] and the psychoeducational video [76] are promising. The next step is to test the efficacy of online or blended DBT in trials with more sophisticated methodologies, and to understand what works for whom, why and in what circumstances. Recent RCTs performed in the adjacent field of telepsychology in DBT trainings and supervision for clinicians can be a source of inspiration [19-24].
The third and last focus was to review the efficacy or clinical utility of adding telepsychology to standard DBT modes, strategies, procedures and skills. We did not find a trial that investigated the added value of standard DBT plus telepsychology in comparison to standard DBT alone. However, smartphone apps are experienced as an acceptable facilitator of access to and implementation of DBT skills, and led to a decrease in a broad range of psychopathology [46-48,78-80]. One patient described how usage of an application could help to minimize stigma, as the use of a smartphone is experienced as a common habit [72]. One advantage of mobile applications is that daily personal (text) messages can be added to support using the app. Individuals who received such daily semi-personalised messages practiced more skills than those who did not, which resulted in faster improvement [48]. Future applications could extend current functionalities, for example by creating tailored and gamified content, easily accessible at the right place and time [88]. It is also worth investigating to what extent advanced mobile applications could be an assistive technology for DBT therapists, especially in the implementation of out-of-office availability. Furthermore, we could add the use of wearables to passively monitor patients' state and become even more effective in orienting patients to DBT skills when they need them the most [89,90]. In line with the positive experiences of using mobile applications, the preliminary results of using VR to facilitate mindfulness skills training are positive [45,82-84]. The usage of VR helped prevent drop-out in GAD patients [45] and two case studies showed that patients liked using VR [83,84].
In our introduction, we stated that it is important to anticipate the needs of future patients and clinicians [90-93]. In performing our scoping review we could not help but wonder whether we aren't always a couple of steps behind our youngest patients. For example, we know from clinical practice that patients in DBT skills training groups stay in touch via mobile apps and social media, to share information and to support each other. There are podcasts, youtubers and influencers that discuss DBT skills. These phenomena may have a larger impact on the acquisition and generalization of DBT skills than we think (for better or for worse). At the same time, we know that people build identities and friendships online, and that conflicts, ostracism, bullying and abuse are increasingly taking place in virtual environments. Maybe it is time to add worksheets about 'digital technology skills' to the next DBT skills manual? The point here is that, more than ever, we think it is wise to consult with our patients, younger peers and experts in adjacent fields if we want to remain accessible, effective and relevant in a digital age.
Conclusion
A shift towards videoconferencing and online trainings is justifiable if it is the only way to get an evidence-based treatment like DBT to patients who need it. However, current research evidence does not support a permanent shift towards online or blended DBT. It is pivotal and timely to increase efforts to investigate the efficacy of online/blended DBT compared to standard face-to-face DBT. In addition, we need to gain insight into the benefits of out-of-office availability (e.g. phone consultation) as a standard module of DBT for suicidal patients. Lastly, other technologies should continue to be explored: smartphone applications, virtual reality, social media platforms, podcasts, semi-automated online communication and more all hold promise for assessment, skill acquisition and generalization. We need to move forward on this, to improve both the range and effectiveness of existing approaches, to address the high demand for professional support and to anticipate the needs of clinicians and patients with emotion regulation disorders.
"year": 2021,
"sha1": "b08dbb40d259ab49ba5fb874492c7ae6a376bf5b",
"oa_license": "CCBY",
"oa_url": "https://bpded.biomedcentral.com/track/pdf/10.1186/s40479-021-00165-7",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b08dbb40d259ab49ba5fb874492c7ae6a376bf5b",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The new complement inhibitor CRIg/FH ameliorates lupus nephritis in lupus-prone MRL/lpr mice
Background The aberrant activation of the complement system is critically involved in lupus nephropathy. A recent study showed that a complement C3 inhibitor was effective in the treatment of lupus nephropathy. In this study, we investigated the effect of a novel complement C3 inhibitor, CRIg/FH, on lupus nephropathy in MRL/lpr lupus mice. Methods We treated MRL/lpr female mice with a dose escalation of CRIg/FH (10, 5 and 2 mg/kg) by intraperitoneal injection twice weekly from 12 weeks of age. In addition, MRL/lpr mice treated with intraperitoneal injection of normal saline or oral prednisone, along with C57BL/6J healthy mice, were maintained to serve as controls. We started 8-h urine collection weekly to screen for proteinuria by measuring urine urea/creatinine levels. Serum samples were collected at weeks 16 and 20 to measure levels of urea nitrogen, creatinine and immunological markers (C3, C4, A-ds-DNA) before the mice were sacrificed at 20 weeks of age to collect kidneys for histopathological examination. Results Overt skin lesions were observed in MRL/lpr mice treated with normal saline, whereas no skin lesions were observed in CRIg/FH-treated MRL/lpr mice. No overt proteinuria was observed in MRL/lpr mice treated with CRIg/FH. Serum creatinine and BUN levels in MRL/lpr mice receiving the highest CRIg/FH dose (10 mg/kg twice weekly) remained significantly lower than those in prednisone-treated MRL/lpr mice at 20 weeks of age. In addition, CRIg/FH treatment in MRL/lpr mice resulted in significantly elevated serum C3 and C4 levels compared to prednisone treatment at both 16 and 20 weeks. Furthermore, our study identified that the serum level of A-ds-DNA was also significantly lower with CRIg/FH treatment than with prednisone treatment in MRL/lpr mice. Renal pathology confirmed that kidneys from CRIg/FH-treated MRL/lpr mice suffered less from nephritis and complement deposition. Conclusion Our results showed that the complement inhibitor CRIg/FH can protect MRL/lpr mice from lupus nephropathy by preserving renal function and reducing glomerular complement activation. Our findings support the positive effect of complement inhibitors in the treatment of lupus nephropathy.
Yu Shi and Wen Yao contributed equally to this work.
Background
Lupus nephropathy (LN) is a common but severe manifestation of systemic lupus erythematosus (SLE), with significant morbidity and mortality [1]. The pathogenesis of LN is initiated by abnormal activation of the complement system triggered by the nephrotic deposition of circulating immune complexes (CIC) formed by autoantibodies in SLE [2]. The activation of the complement system in SLE is characterized by the consumption of complement proteins [3-5]. The degree of reduction in serum levels of C1q, C2, C3 and C4, which are components of the classical complement pathway, is associated with the occurrence, development and prognosis of LN [6]. In addition, activation of the alternative pathway is another important factor exaggerating complement activation [7]. This evidence indicates that both the classical and alternative pathways are involved in the pathogenesis of LN. The aberrantly activated complement leads to complement deposition in the glomeruli, thereafter induces proliferation of glomerular mesangial cells, and finally results in overt nephropathy [8].
The classical treatment of SLE involves the use of nonsteroidal anti-inflammatory drugs, antimalarial drugs and glucocorticoids. However, the broad effect of these systemic immunosuppressants restricts their clinical use owing to the difficulty of balancing treatment effects against adverse reactions. In recent years, complement-targeted therapy (complement inhibitors) has shown promising results in the treatment of both SLE and LN [9,10]. An example of the first generation of complement inhibitors is eculizumab, a monoclonal antibody that prevents complement C5 from activation through its cleavage into C5a and C5b [11]. In addition, recent studies revealed that inhibition at an earlier level of the complement cascade, complement C3, also has potential therapeutic effects in LN [12]. Nevertheless, concerns regarding C5 or C3 inhibition have often been raised because of their systemic suppression of complement and the consequent increased risk of both severe and opportunistic infections [13].
CRIg/FH is a novel complement inhibitor: a fusion protein combining the extracellular domain of CRIg and the alternative pathway inhibitory domain of factor H (FH) [14]. The design of CRIg/FH is based on its ability to bind the C3 digestion products C3b/iC3b/C3c to avoid unnecessary cell-surface deposition, while the FH domain facilitates factor I-mediated C3b degradation, thereby inhibiting activation and amplification of the alternative pathway [15,16]. A recent study showed that CRIg/FH can inhibit Thy-1 antibody-mediated complement activation in a rat model of mesangioproliferative glomerulonephropathy (MPGN) and thereby protect glomerular mesangial cells from complement-mediated damage and proliferative lesions [14]. This evidence suggested that CRIg/FH could be effective in the treatment of LN, in which the activation of the complement system is critically involved. Hence, in this study we conducted a mouse study with dose-escalation treatment of CRIg/FH in the widely used MRL/lpr lupus mice to investigate the effect of CRIg/FH on lupus nephropathy.
Mice and treatment
Forty female MRL/lpr mice (weighing 37.7 ± 1.6 g at 8 weeks old) from Shanghai Slack Laboratory Animal Co., Ltd. were maintained in the Experimental Animal Science Department of Fudan University under SPF conditions and a 12-h light-dark cycle with free access to a standard diet and tap water. At 12 weeks of age, the MRL/lpr mice were randomly assigned to treatment groups (n = 8 in each group) receiving twice-weekly intraperitoneal injections of CRIg/FH [14] in a dose-escalation design (10, 5 and 2 mg/kg per injection). The remaining MRL/lpr mice were randomly divided into control groups and received intraperitoneal injections of normal saline (NS) twice a week (group NS, n = 8) or daily oral gavage of prednisone (18.2 mg/kg/d, group Pred, n = 8). In addition, eight C57BL/6J mice from Shanghai Slack Laboratory Animal Co., Ltd. were maintained under the same conditions and left untreated to serve as normal controls.
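For illustration, the random assignment of the 40 MRL/lpr mice into the five groups described above could be implemented along the following lines. This is a minimal sketch, not the authors' actual procedure; the mouse identifiers and random seed are assumptions.

```python
import numpy as np

# Hypothetical randomization of 40 MRL/lpr mice into 5 groups of 8,
# mirroring the design described above (seed and labels are illustrative).
rng = np.random.default_rng(seed=2016)

mice = [f"MRL-lpr-{i:02d}" for i in range(1, 41)]          # 40 female MRL/lpr mice
groups = ["CRIg/FH 10 mg/kg", "CRIg/FH 5 mg/kg",
          "CRIg/FH 2 mg/kg", "NS", "Prednisone"]           # n = 8 per group

shuffled = rng.permutation(mice)
assignment = {g: list(shuffled[i * 8:(i + 1) * 8]) for i, g in enumerate(groups)}

for group, members in assignment.items():
    print(group, members)
```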
The mice were monitored for skin lesions, lymphadenopathy, proteinuria, serum creatinine and urea nitrogen, and levels of serum C3, C4 and A-ds-DNA. The study was approved by the institutional review board of Children's Hospital of Fudan University ([2016]172).
Urine and serum biochemistry, and serum immunology
During the experiment, 24-h urine samples were collected weekly starting at 12 weeks of age. Proteinuria was detected by measuring the ratio between protein (RenjieBio, Shanghai) and creatinine (Yaoyunbio, Shanghai) in urine samples using ELISA. Blood samples (200 μl) were drawn through the tail vein at 16 weeks of age and at 20 weeks of age (time of sacrifice). Serum levels of creatinine, urea nitrogen, C3, C4 and A-ds-DNA were detected by ELISA (Yaoyunbio, Shanghai).
Histopathological analysis
At 20 weeks of age, all mice were sacrificed to collect kidneys after euthanasia by intraperitoneal injection of pentobarbital (200 mg/kg). The kidneys were formalin-fixed, paraffin-embedded and sliced at 4 μm thickness for HE staining. Immunohistochemistry for C3d, membrane attack complex (MAC) and C1q was conducted using antibodies against C3d (R&D Systems; AF2655), C5b-9 (Abcam; ab55811) and C1q (Abcam; ab71089). In addition, fluorescent-dye-conjugated antibodies against mouse IgG (Invitrogen; A11029) and mouse IgM mu chain (Abcam; ab150121) were used for immunofluorescence detection of IgG and IgM deposition in the kidney. Renal pathology indicators were scored independently by two experienced renal pathologists (HL and GL), based on the average score according to the activity index of renal tissue in lupus nephropathy [17]. The intensity of immunostaining was reported by GL and assessed by HL in a blinded manner.
Statistical analysis
Statistical analyses were performed with SPSS version 19.0 software. Continuous variables are presented as mean ± standard deviation. Differences between groups were determined using one-way ANOVA, and a p value less than 0.05 was considered statistically significant.
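The one-way ANOVA comparison described above could be reproduced outside SPSS as sketched below, assuming SciPy is available; the serum creatinine values are invented placeholders, not study data.

```python
from scipy import stats

# Illustrative one-way ANOVA across treatment groups, mirroring the analysis
# described above; the values below are hypothetical, not measured data.
crig_10 = [21.3, 19.8, 22.5, 20.1, 21.9, 20.7, 22.0, 21.1]   # hypothetical units
ns      = [35.2, 33.8, 36.9, 34.5, 35.7, 36.1]
pred    = [28.4, 27.9, 29.6, 28.8, 27.5, 29.1, 28.2]

f_stat, p_value = stats.f_oneway(crig_10, ns, pred)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant difference
```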
General conditions
The normal saline (NS)-treated MRL/lpr mice began to develop skin lesions at 16 weeks of age, which were overt by 20 weeks of age (Fig. 1). In contrast, MRL/lpr mice treated with 10 mg/kg CRIg/FH twice weekly presented with less hair loss. Before 20 weeks of age, two mice died in the normal saline-treated group and one in the oral prednisone-treated group.
CRIg/FH reduces proteinuria and protects renal function in MRL/lpr mice
Proteinuria in both NS-treated and prednisone-treated MRL/lpr mice was significantly elevated after 16 weeks of age (Fig. 2a), whereas urine protein levels in MRL/lpr mice treated with CRIg/FH were still not significantly elevated by 20 weeks of age. After 18 weeks of age, proteinuria in all CRIg/FH dosage groups was significantly lower than in NS-treated or prednisone-treated mice (P < 0.05, Fig. 2a).
Treatment with CRIg/FH significantly reduced serum creatinine and urea nitrogen levels in MRL/lpr mice. At 16 weeks of age, serum creatinine levels in all CRIg/FH-treated MRL/lpr mice were significantly lower than in either NS- or prednisone-treated MRL/lpr mice (P < 0.05, Fig. 2b). At 20 weeks, serum creatinine levels in the CRIg/FH treatment groups remained significantly lower than in NS-treated MRL/lpr mice (P < 0.05). However, compared with prednisone-treated MRL/lpr mice, a significant decrease in serum creatinine was only observed with the highest CRIg/FH dose (10 mg/kg twice weekly, P < 0.05, Fig. 2b).
Similarly, serum urea nitrogen levels in MRL/lpr mice treated with CRIg/FH were all significantly lower than in NS-treated MRL/lpr mice at 16 weeks of age (P < 0.05, Fig. 2b). In addition, serum urea nitrogen levels with the higher CRIg/FH dosages (10 and 5 mg/kg twice weekly) were significantly lower than in prednisone-treated MRL/lpr mice (P < 0.05, Fig. 2b). At 20 weeks of age, the effect of CRIg/FH in maintaining low serum urea nitrogen levels in MRL/lpr mice was observed at the higher dosages (10 and 5 mg/kg twice weekly) but not in the lowest-dose (2 mg/kg) group (Fig. 2b).
CRIg/FH blocks complement activation in MRL/lpr mice
At 16 weeks, the levels of serum complement C3 and C4 in MRL/lpr mice treated with CRIg/FH at all doses were significantly higher than in NS-treated MRL/lpr mice (Fig. 3). At 20 weeks of age, the difference in C3 was maintained only between MRL/lpr mice treated with the highest dose of CRIg/FH (10 mg/kg twice weekly) and NS-treated MRL/lpr mice (Fig. 3). Nevertheless, the levels of serum C3 and C4 in CRIg/FH-treated mice were significantly lower than in C57BL healthy mice at both 16 and 20 weeks of age (Fig. 3). Serum C4 levels were significantly lower in MRL/lpr mice treated with the higher CRIg/FH doses (10 and 5 mg/kg twice weekly) than in either NS- or prednisone-treated MRL/lpr mice at both weeks 16 and 20. The difference in C4 was not significant between the lowest dose of CRIg/FH (2 mg/kg twice weekly) and prednisone-treated MRL/lpr mice (Fig. 3).
Similarly, the activation of autoimmunity was significantly lower in MRL/lpr mice treated with CRIg/FH. At 16 weeks, the serum A-ds-DNA level in MRL/lpr mice treated with CRIg/FH at all doses was significantly lower than in NS-treated or prednisone-treated MRL/lpr mice (Fig. 3). This effect persisted at the 20-week measurement (Fig. 3). Nevertheless, the level of serum A-ds-DNA was still significantly higher in CRIg/FH-treated MRL/lpr mice than in normal control mice at both 16 and 20 weeks (Fig. 3).
CRIg/FH improves lupus nephropathy in MRL/lpr mice
The H&E staining of mouse kidneys at 20 weeks of age showed that MRL/lpr mice treated with NS had significantly more nephropathy than those treated with CRIg/FH (Fig. 4a). Among MRL/lpr mice treated with the highest dose of CRIg/FH (10 mg/kg twice weekly), seven mice were categorized as LN II and one as LN III, whereas all mice treated with NS were scored as LN IV-G. The activity index in MRL/lpr mice treated with CRIg/FH was significantly lower than that in NS-treated lupus mice (Table 1). The activity index of MRL/lpr mice treated with the highest dose of CRIg/FH was significantly lower than that of prednisone-treated MRL/lpr mice (P < 0.05). Nevertheless, the activity index was not statistically different between the lowest dose of CRIg/FH (2 mg/kg twice weekly) and prednisone treatment. The chronicity index of MRL/lpr mice treated with the higher dosages of CRIg/FH (10 and 5 mg/kg twice weekly) was the same as that in prednisone-treated MRL/lpr mice.
[Fig. 2 Proteinuria and renal function in MRL/lpr mice. (a) Urine protein/creatinine ratio of studied mice; (b) levels of serum creatinine and blood urea nitrogen at 16 and 20 weeks of age. *P < 0.05 between CRIg/FH-treated mice (black dots) and each control group (vertical bars). NS = normal saline; Pred = prednisone; Scr = serum creatinine; BUN = blood urea nitrogen. CRIg/FH was administered at the indicated dose twice a week.]
The deposition of complement MAC, C1q and C3d in all CRIg/FH treatment groups was significantly reduced compared with NS-treated mice. Nevertheless, the deposition of immunoglobulins IgM and IgG was not significantly improved in the CRIg/FH treatment groups (Fig. 4b, Table 2).
Discussion
In this study, we explored the effect of a C3 complement inhibitor, CRIg/FH, on lupus nephropathy using the widely used MRL/lpr lupus mice. Our results showed that CRIg/FH was able to protect MRL/lpr mice from nephropathy, as shown by a lower level of proteinuria, improved renal function and serum immunological markers, and milder nephritis. The complement system actively participates in the pathogenesis of lupus nephropathy, which is initiated by the nephrotic deposition of abnormal circulating immune complexes (CIC) formed by autoantibodies against free nucleosomes [18]. After CIC induce the complement cascade through the classical pathway, the proliferation and activation of glomerular mesangial cells are triggered, leading to glomerular disease [19]. The initial activation and the further exaggerated complement activation via the alternative pathway together contribute to the pathogenesis of overt nephritis in lupus [20]. A monoclonal antibody against C5 was found to effectively improve lupus nephropathy in both lupus mice [21] and patients with lupus nephropathy presenting low serum complement levels [22]. C5 blockers, which act effectively against the formation of MAC, are currently available and widely used in practice to treat autoimmune diseases [23]. Nevertheless, the complement-inhibiting effect of C5 blockers is often extensive and systemic, resulting in an elevated risk of immunosuppression and opportunistic infections [24]. Although a recent study showed that a C3 blocker effectively improved proteinuria and preserved renal function in lupus mice [12], the potency of C3 blockers to broadly suppress complement activation, despite targeting earlier phases of the cascade, still raises similar concerns [12]. CRIg/FH is a fusion protein combining the extracellular domain of CRIg (the C3b/iC3b/C3c binding domain) and the alternative pathway inhibitory domain of factor H (SCR1-5) [14]. The combination of the two domains was shown not only to bind the C3 digestion products C3b/iC3b/C3c and subsequently reduce activation of the classical pathway, but also to inhibit activation of the alternative pathway [15,16]. The effect of CRIg/FH against nephritis induced by complement overactivation was first recognized in a rat model of mesangioproliferative glomerulonephropathy (MPGN) [14]. The ability of CRIg/FH to inhibit the classical pathway, remove C3b/iC3b deposition from cell surfaces and protect glomerular mesangial cells raised the possibility of a treatment effect in other complement activation-related nephritides [14]. Additional evidence showed that the binding between CRIg/FH and the C3b subunit on the surface of macrophages was able to position the FH domains to engage and inhibit the alternative pathway [16,25,26].
[Fig. 4 Immunohistological findings for renal tissues from MRL/lpr mice treated with CRIg/FH at 10 mg/kg twice weekly or normal saline (NS). NS-treated mice showed significant glomerular atrophy, basement membrane thickening and rupture, mesangial area widening, increased mesangial matrix, mesangial cell proliferation and deposition of immunopathogenic components; CRIg/FH-treated MRL/lpr mice showed less renal damage and immunological deposition. NS = normal saline, Pred = prednisone, MAC = membrane attack complex. CRIg/FH was administered at the indicated dose twice a week.]
The effect of CRIg/FH on lupus nephritis was confirmed by renal pathology and immunofluorescence of the lupus mice.
CRIg/FH was able to maintain normal glomerular morphology and reduce local C3 and C4 levels, although not CIC. Interestingly, our results showed that the A-ds-DNA titre decreased in lupus mice treated with CRIg/FH. Although complement inhibitors do not have a direct effect on the adaptive immune system, there are preliminary data showing that complement inhibition contributed to relief of lupus disease activity in both human subjects and the MRL/lpr mouse model [27,28]. Although the detailed mechanism remains unexplored, this suggests that the clinical value of the drug in treating SLE may extend beyond the complement system.
Conclusions
Our study showed that the complement inhibitor CRIg/FH can effectively treat lupus nephropathy in the classical MRL/lpr model. The protective effects against lupus nephropathy are manifested in many aspects, including reduced proteinuria, reduced serum creatinine and urea nitrogen levels, and reduced tubular inflammation. In addition to directly blocking the complement pathway, whether CRIg/FH can improve kidney inflammation and alleviate the proliferation of mesangial cells through other pathways needs to be further clarified. Further experiments are needed to confirm the relationship between complement and the acquired immune system in SLE.
[Table note: Data are presented as the number of mice scored by intensity of immunological staining, comparing CRIg/FH (10 mg/kg twice weekly) and normal saline (NS) treated MRL/lpr mice. Kidneys from the two NS-treated MRL/lpr mice that died earlier were also examined.]
"year": 2019,
"sha1": "3ccc2805dcb2645c0aa12db6d6dd9420a772add4",
"oa_license": "CCBY",
"oa_url": "https://bmcnephrol.biomedcentral.com/track/pdf/10.1186/s12882-019-1599-0",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3fef9695946b0c8be92ced88abc834de510de4dd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Reliability of pictorial Longshi Scale for informal caregivers to evaluate the functional independence and disability
Abstract Aim The pictorial Longshi Scale was designed to assess patients' functional ability in the Chinese context and is gradually being used by some informal caregivers. However, its reliability compared with healthcare professionals has not been examined. Design A multi-centre cross-sectional study conducted in 24 Chinese hospitals. Methods We recruited dyads of patients undergoing rehabilitation treatment and their informal caregivers. Informal caregivers and healthcare professionals evaluated patients' functional ability using the Longshi Scale according to three levels (bedridden, domestic and community). The Kappa coefficient and McNemar-Bowker test were used to examine the consistency and accuracy between the two parallel assessments. Results This study involved 947 dyads of patients (mean age: 64.86 ± 12.94 years) and informal caregivers (mean age: 46.07 ± 11.72 years). Most patients were male (66.3%), while most caregivers were female (60.7%). Over 70% of patients and caregivers had a secondary-school education or lower. Around 90% of caregivers were relatives (spouse, 42.8%; offspring, 20.7%; siblings, 13.3%; parent, 12.0%) of the patients. The agreement in sub-levels of the Longshi Scale between caregivers and healthcare professionals ranged from 73% to 89%, and the corresponding Kappa coefficients ranged from 0.504 to 0.786. Caregivers were more likely than healthcare professionals to assign fewer patients to the bedridden group and more to the domestic group. The subgroup analysis by education level indicated that the difference in assigning patients to the three degrees of functional disability was only significant among caregivers with a primary-school education, and non-significant among those with a secondary-school education or higher. Conclusion The evaluation outcomes of functional ability using the Longshi Scale are similar between informal caregivers and healthcare professionals. However, informal caregivers' education level is a dominant factor affecting assessment accuracy compared with healthcare professionals. Informal caregivers with a secondary-school education or higher can be supported to evaluate patients' functional ability independently.
Keywords: functional ability, healthcare professional, informal caregiver, Longshi Scale
| INTRODUCTION
Approximately 15% of adults globally suffer from some kind of disability (Bethge et al., 2014), a proportion that is projected to increase annually (Lee et al., 2021). Disability is caused by a range of factors, including trauma, ageing and acute or chronic diseases (Shakespeare & Officer, 2011).
Accurately assessing functional independence and disability facilitates the design of rehabilitation strategies and service guidelines, further promoting the recovery of patients' functional ability (Liu et al., 2022).
Currently, functional ability is mainly evaluated by healthcare professionals using specialized scales, which are time-consuming and laden with technical terminology. Moreover, the assessment outcomes are challenging for patients and their families to understand (Prodinger et al., 2017). A simple and reliable tool for assessing functional ability, usable by patients and their informal caregivers, is warranted.
Informal caregivers are defined as "individuals who provide ongoing care and assistance, without pay, for family members and friends in need of support due to physical, cognitive, or mental conditions" (Madara Marasinghe, 2016). They play essential roles in the rehabilitation setting, supporting the rehabilitation and subsequent discharge of patients (Young et al., 2014). Family care, the most common subtype of informal care, accounts for 80% of total care in Europe (Verbakel et al., 2017). Likewise, Asian cultural norms encouraging families to care for their elders also substantially increase the rate of family care (Ansah et al., 2016). Accumulating evidence has highlighted the significance of family care in supplementing the deficiency of professional care and enhancing the quality of long-term care (Ansah et al., 2016; Wang et al., 2017).
| BACKGROUND
Some studies have shown that family members, non-professional healthcare workers and social workers can observe functional impairments and disease symptoms in older people (Ranhoff, 1997; Wang et al., 2019), effectively improving early disease diagnosis and related treatments. In China, functional disability assessment at different stages of recovery remains inconsistent.
Patients' functional disabilities can be evaluated accurately by professionals when they are admitted to the hospital.
At the same time, continuous assessment after discharge is unavailable during family care (Wang et al., 2019) because of the lack of specialist physicians and assessment tools suitable for non-professionals (Bethge et al., 2014). Therefore, a family-friendly functional disability assessment tool requiring no specialized training is needed. Current tools for assessing functional disability are designed using written language, including the Barthel Index scale (Liu et al., 2022), the Functional Independence Measure scale (FIM) (Prodinger et al., 2017), the modified Rankin Scale (mRS) (Banks & Marotta, 2007), and the World Health Organization (WHO) disability assessment scale (Chen et al., 2020). These scales contain many specific medical terms and require participants to report functional limitations verbally, making them less feasible for people with illiteracy, language barriers or even dementia. Pictorial scales have shown more feasibility and wider application than text-based scales in these population groups in the medical field (Akena et al., 2018; Hadjistavropoulos et al., 2014; Theou et al., 2019; Tomlinson et al., 2010).
| Research question
In 2013, our team developed the pictorial Longshi Scale (Figure 1) based on a survey of 1,862 people with functional disabilities in China (Wang et al., 2019). To our knowledge, this is the first pictorial scale for assessing functional ability. The reliability and validity of the Longshi Scale have been assessed among therapists, interns and personal health aides, with an intraclass correlation coefficient >0.8 indicating good intra- and inter-rater reliability (Wang et al., 2019). However, the reliability of the Longshi Scale among informal caregivers remains unassessed. Additionally, one study demonstrated that the education level of evaluators might influence assessment outcomes. Therefore, we aimed to verify the reliability of the Longshi Scale among informal caregivers and further explored the influence of education level on assessment outcomes.
| Study design and setting
This multi-centre cross-sectional study was conducted in the rehabilitation departments of 24 hospitals located in 11 cities across China from 11 to 31 December 2020. Initially, this study was designed to assess the accuracy and time consumption of informal caregivers using the Longshi Scale and Barthel Index scale to evaluate patients' functional independence, compared with professional healthcare workers. The basic demographic information and Longshi Scale scores were then used for analysis.
| Sample size
In the original protocol, we planned to include 744 subjects, while a total of 1,006 eligible subjects were initially recruited. After evaluation against the inclusion and exclusion criteria, 947 subjects were included for analysis in this study. However, considering that the purpose of this study was to explore the reliability of informal caregivers using the Longshi Scale to assess patients' functional independence and disability, we recalculated the sample size using the following formula.
n = μα² × π(1 − π) / δ²,
where π is the proportion of adults with disability (π = 15%) (Wang et al., 2019); μα is the critical value of the two-sided test for the chosen type I error probability (μα = 2.580); and δ is the allowable error (δ = 0.05). We added 20% to account for non-response and incomplete study instruments. Thus, we calculated that the minimum sample size was 408 for this study.
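The arithmetic behind the stated minimum sample size can be verified directly, as in the short sketch below (the formula form is reconstructed from the definitions given above).

```python
from math import ceil

# Reproducing the sample-size arithmetic stated above.
pi    = 0.15    # proportion of adults with disability
mu_a  = 2.580   # critical value of the two-sided test
delta = 0.05    # allowable error

n = mu_a**2 * pi * (1 - pi) / delta**2   # basic sample-size formula for a proportion
n_adjusted = ceil(n) * 1.2               # add 20% for non-response/incomplete data

print(ceil(n), ceil(n_adjusted))         # 340, 408
```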
| Participants
In this study, we recruited participants via posters at the nurse stations and in the wards of the participating hospitals. Nurses contacted the patients and their caregivers in the wards according to bed number. Qualified participants were invited to participate in the study.
A total of 1,006 consecutive inpatients were enrolled according to the inclusion criteria. For the purpose of this study, we only selected adult (>18 years) patients who had functional disabilities after diagnosis of cerebral haemorrhage, stroke, spinal cord injury, post-operative brain tumour or brain trauma. Those who suffered from mental illness, serious cognitive dysfunction or an inability to understand the images shown in the Longshi Scale were excluded. Each patient's cognitive status was assessed by a nurse using the Mini-Mental State Examination (MMSE) before the disability evaluation. We excluded patients suffering from mental illness or cognitive dysfunction (MMSE < 27) because the Longshi Scale is a pictorial scale and patients with MMSE scores < 27 had difficulty recognizing its pictorial-based items. Additionally, patients who were simultaneously participating in other clinical studies were excluded. Informal caregivers were selected according to the patients they cared for. The inclusion criteria for informal caregivers were adults who took care of their patients without any pay, including patients' family members, relatives, friends or colleagues. Those who were hired as formal caregivers were excluded.
Non-Mandarin-speaking caregivers were also excluded because of a lack of translation services. The participants came from 24 hospitals in 11 cities of China, each of which might have its own dialect, and the professional evaluators and nurses who contacted the participants were unlikely to speak every dialect. To ensure smooth communication among researchers, evaluators and patients, we only included Mandarin-speaking participants. Each informal caregiver was asked to independently assess their patient using the Longshi Scale. Written informed consent was obtained from the patient and caregiver dyads.
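To make the eligibility rules above concrete, they could be encoded as simple predicate functions, as sketched below. This is a hypothetical helper: the function names, field names and diagnosis set are assumptions introduced for illustration, not part of the study protocol.

```python
# Hypothetical screening helpers encoding the inclusion/exclusion criteria above.
ELIGIBLE_DIAGNOSES = {"cerebral haemorrhage", "stroke", "spinal cord injury",
                      "post-operative brain tumour", "brain trauma"}

def patient_is_eligible(age, diagnosis, mmse, in_other_trial):
    """Adult with a qualifying diagnosis, MMSE >= 27, not in another trial."""
    return (age > 18
            and diagnosis in ELIGIBLE_DIAGNOSES
            and mmse >= 27
            and not in_other_trial)

def caregiver_is_eligible(age, is_paid, speaks_mandarin):
    """Adult, unpaid (informal) caregiver who speaks Mandarin."""
    return age >= 18 and not is_paid and speaks_mandarin

print(patient_is_eligible(56, "stroke", 28, False))                   # True
print(caregiver_is_eligible(46, is_paid=True, speaks_mandarin=True))  # False (formal caregiver)
```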
Eighteen healthcare professionals were willing to participate.
They were randomly divided into six groups, each including one therapist and two interns. Before the assessment, all healthcare professionals were given a brief, half-day training session on how to score the Longshi Scale accurately. Each healthcare professional interviewed one patient at a time, and each patient received a Longshi Scale evaluation from one of the three healthcare professionals in a group. The assessments by healthcare professionals and informal caregivers were both conducted on the same day.
| Data collection
The sociodemographic information of the patients and informal caregivers was collected by nurses using online questionnaires after the informed consent forms were signed. Then, functional independence and disability were assessed using the Longshi Scale by healthcare professionals and informal caregivers, respectively. All data were recorded and uploaded to the Mike website using a dedicated account. Mike is an online form-production website that can collect, store and manage data (MikeCRM Co., Ltd., https://www.mikecrm.com/). First, one nurse logged into the pre-registered Mike account and created the electronic forms online, including two basic information forms and the Longshi Scale. Each electronic form generated a unique link or identification code. Second, the nurses used the link or identification code to collect the basic information of the patients and informal caregivers separately.
Thereafter, healthcare professionals and informal caregivers collected the patients' Longshi Scale scores face-to-face using another link or identification code. Once data collection was completed, the records could not be changed. Finally, all data were reviewed and checked by the study assistants on the data management platform of the Mike website. Any record with missing information was excluded from the study.
| Instruments
The Longshi Scale assessment was divided into three steps. First, all patients were allocated to the bedridden, domestic or community group, depending on their ability to move out of bed, move outdoors and return indoors. Second, patients in each group were evaluated using a 3-point Likert subscale (form): (1) the bedridden group subscale (Form 1, including bladder and bowel management, feeding and leisure activities); (2) the domestic group subscale (Form 2, including toileting, grooming and housework); and (3) the community group subscale (Form 3, including community mobility, shopping and social participation). Third, we calculated the total score of each subscale (minimum independence = 3 and maximum independence = 9) (Figure 2).
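The three-step scoring logic above can be summarized in a short sketch; the data structure and function names below are assumptions for illustration, while the group names, item lists and score range follow the text.

```python
# Minimal sketch of the three-step Longshi Scale scoring described above.
SUBSCALE_ITEMS = {
    "bedridden": ["bladder and bowel management", "feeding", "leisure activities"],
    "domestic":  ["toileting", "grooming", "housework"],
    "community": ["community mobility", "shopping", "social participation"],
}

def classify_group(can_leave_bed, can_go_outdoors_and_return):
    # Step 1: allocate the patient to one of the three groups.
    if not can_leave_bed:
        return "bedridden"
    return "community" if can_go_outdoors_and_return else "domestic"

def longshi_score(group, item_ratings):
    # Steps 2-3: rate each item on a 3-point Likert scale (1-3) and sum them.
    assert all(1 <= r <= 3 for r in item_ratings.values())
    return sum(item_ratings[item] for item in SUBSCALE_ITEMS[group])

group = classify_group(can_leave_bed=True, can_go_outdoors_and_return=False)
total = longshi_score(group, {"toileting": 2, "grooming": 1, "housework": 1})
print(group, total)   # domestic 4 (range: 3 = minimum, 9 = maximum independence)
```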
| Analysis
Statistical analyses were conducted using SPSS (version 22.0; IBM Corp., Armonk, NY, USA). Demographic characteristics were presented as counts. They included age, sex (male or female), marital status (married, unmarried, divorced or widowed), ethnicity (Han or minority), living pattern (living alone, living with family, living with a tender, living in a nursing institution, or other), annual household income (less than 50,000; 50,000-100,000; 100,000-150,000; or more than 150,000 yuan) and degree of education (primary and lower, high school, college and higher). Religion and retirement were each coded as "yes" or "no."
Descriptive statistics (i.e. frequency, percentage, mean and standard deviation) were calculated. The Kolmogorov-Smirnov test was used to examine the normal distribution of the data. The chi-square test or McNemar-Bowker test was used to compare nominal variables across the three groups (bedridden, domestic and community). The Kruskal-Wallis test was used to compare differences in age among the groups. The Mann-Whitney test was performed to determine statistical differences between the Longshi Scale scores of the three groups. Scatter plots were used to compare the mean differences in Longshi Scale sum scores between healthcare professionals and informal caregivers; the closer the scatter points lie to the mean-difference line, the better the consistency. The level of significance was set at <0.05.
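The core agreement analyses described here and in the next paragraph could be reproduced outside SPSS roughly as sketched below, assuming scikit-learn and statsmodels are available. The rating vectors and the off-diagonal cell counts in the 3 × 3 table are invented for illustration (only the row and column totals echo group sizes of the kind reported later); this is not the study's actual data.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.contingency_tables import SquareTable

# Weighted Cohen's kappa on ordinal item ratings (1-3); ratings are invented.
caregiver    = [1, 2, 2, 3, 1, 2, 3, 3, 1, 2]
professional = [1, 2, 3, 3, 1, 2, 3, 2, 1, 2]
kappa = cohen_kappa_score(caregiver, professional, weights="linear")
print(f"weighted kappa = {kappa:.3f}")   # 0.61-0.80 would indicate good agreement

# McNemar-Bowker symmetry test on a 3x3 group-assignment table
# (rows: professional; columns: caregiver); cell counts are hypothetical.
table = np.array([[560,   6,   3],
                  [ 28, 205,   8],
                  [  3,  10, 124]])
result = SquareTable(table).symmetry(method="bowker")
print(f"Bowker statistic = {result.statistic:.3f}, p = {result.pvalue:.3f}")
```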
The healthcare professionals' scores served as reference standards. As a special type of correlation coefficient, Cohen's kappa statistic (κ) was used as a standardized measure of agreement. All items were scored on an ordinal scale with more than two alternatives, so the weighted kappa coefficient was used. The degree of agreement evaluated by the κ coefficient at the item level has the following standard definitions: poor (κ = 0.00-0.20), fair (κ = 0.21-0.40), moderate (κ = 0.41-0.60), good (κ = 0.61-0.80) and very good (κ = 0.81-1.00) (Wang et al., 2019). The marginal homogeneity test was used to examine asymmetry bias.
[Figure 1 Longshi Scale for assessing the activities of daily living]
5 | RESULTS
| Characteristics of participants
The sociodemographic characteristics of informal caregivers and their patients are summarized in Table 1. A total of 1,006 patients were invited to participate in the study. Of these, 59 were excluded because of refusal to participate (n = 18), missing data (n = 26) or duplicate data (n = 15). A total of 947 eligible patients and their caregivers were included in this study. Among all the patients, 419 (44.2%), 298 (31.5%) and 230 (24.3%) were classified into the bedridden, domestic and community groups, respectively. The mean ages in the bedridden, domestic and community groups were 65.70 ± 13.588, 65.42 ± 12.987 and 62.55 ± 11.372 years, respectively. The majority of the patients were male (n = 628, 66.3%), of Han ethnicity (n = 946, 99.9%), had a secondary-school educational level (n = 544, 57.4%), had retired (n = 556, 58.7%), and had a family annual income of 50,000-100,000 yuan (n = 446, 47.1%).
The mean age of informal caregivers was 46.07 ± 11.715 years.
| Scores of Longshi Scale in healthcare professionals and informal caregivers
The Longshi Scale scores were normally distributed for both healthcare professionals and informal caregivers (Kolmogorov-Smirnov = 5.018, p = 0.000 vs Kolmogorov-Smirnov = 5.049, p = 0.000). The scores of the Longshi Scale items in the bedridden, domestic and community groups are presented as boxplots in Figure 3.
For healthcare professionals, the mean scores of the three items were 5.14 ± 1.946, 4.77 ± 1.421 and 7.55 ± 1.959, respectively.
For informal caregivers, the mean scores of the three items were 5.00 ± 1.936, 5.37 ± 1.656 and 7.18 ± 2.055, respectively. The mean score of each item was also compared, and there were no differences between healthcare professionals and informal caregivers for any item in the three groups (p > 0.05).
| Reliability analysis
All of the Longshi Scale items had kappa coefficients above 0.50, illustrating at least moderate agreement between healthcare professionals' and informal caregivers' scores (Table 2). For the "community mobility" and "shopping" items, the kappa coefficients were higher than 0.70, indicating good agreement, and the agreement rates between healthcare professionals and informal caregivers were 86.4% and 89.3%, respectively. However, for the "bladder and bowel management," "entertainment," "toileting," "grooming and bathing" and "housework" items, the kappa coefficients were lower than 0.60, indicating moderate agreement, with agreement rates of 73.6%, 74.5%, 73.9%, 72.7% and 85.7%, respectively.
According to the evaluation results of informal caregivers, there were 591, 221 and 135 patients in the bedridden, domestic and community groups, respectively; according to the professionals' evaluation, there were 569, 241 and 137 patients in these groups. No statistically significant difference existed between informal caregivers and healthcare professionals without education stratification of informal caregivers (McNemar-Bowker = 7.413, p > 0.05). Considering that education level was an important factor influencing evaluation results, we conducted a subgroup analysis by education level of informal caregivers, using healthcare professionals as reference standards. The results showed no statistically significant difference in the secondary-school-or-above groups (McNemar-Bowker between 0.707 and 4.714, p > 0.05). However, in the primary-school group, the accuracy of the Longshi Scale evaluation differed significantly between healthcare professionals and informal caregivers (McNemar-Bowker = 8.759, p = 0.013). The results are shown in Table 3. Figure 4 shows the difference in Longshi Scale sum scores between the healthcare professionals and informal caregivers. The mean differences were 0.14 ± 1.57, −0.59 ± 1.65 and 0.37 ± 1.96 in the bedridden, domestic and community groups, respectively. No significant bias existed in the sum scores, as the informal caregivers scored only slightly higher than the healthcare professionals (5.41 ± 2.027 vs 5.39 ± 2.036, p > 0.05). The scatter plots showed 6, 3 and 1 data points outside the range (mean ± 2SD) in the bedridden, domestic and community groups, respectively. These findings imply good agreement on Longshi Scale scores between healthcare professionals and informal caregivers.
[Figure 2 Flow chart of assessment using the Longshi Scale. The first step is to assess whether subjects belong to the bedridden, domestic or community group according to whether they can transfer out of bed or outdoors and return. Each subject is then further evaluated using the corresponding form (subscale) of the Longshi Scale. Finally, the total score of each form (subscale) is calculated.]
| DISCUSSION
There are approximately 42 million people with disabilities in China, most of whom live in rural areas (Ansah et al., 2021). A large number of healthcare professionals are needed for the identification of high-risk groups, disability evaluation and nursing care (Bai et al., 2021).
However, the limited nursing staff and medical resources in some poverty-stricken areas prevent people with disability from being assessed and cared for (Qiao et al., 2022). Informal caregivers represent the most abundant personnel resource for looking after people with disabilities (Ranhoff, 1997). Training them to make skilled assessments of functional independence would help lighten the burden on nursing staff and ease the effects of unbalanced medical resources. So far, the existing functional independence and disability scales are word-based, with some potential limitations, such as too many evaluation items, difficulty of use for non-professionals, time consumption and the need for more than two healthcare professionals to evaluate, which takes up extensive medical resources (Wang et al., 2019). The Longshi Scale, by contrast, is a pictorial scale that can be used by non-professionals, and it takes no more than a minute to finish the evaluation, which greatly reduces the time and medical resources required. The results from this study indicate moderate or good agreement between healthcare professionals' and informal caregivers' scores on the Longshi Scale items and their sum scores. A few disagreements can be explained by within-patient variability due to day-to-day variation (Wang et al., 2019).
[Figure 3 Scores of the Longshi Scale in healthcare professionals and informal caregivers, shown for the bedridden (a), domestic (b) and community (c) groups; item mean scores were compared using t-tests with significance set at <0.05, and no item differed significantly between raters.]
[Table 2 Agreement rate of Longshi Scale scoring between healthcare professionals and informal caregivers; the professionals' scores served as reference standards. MNB (McNemar-Bowker test) was used to compare differences between healthcare professionals and informal caregivers with different education levels.]
The items of grooming and bathing demonstrated the poorest agreement (kappa 0.504), which might be related to variation in personal hygiene.
Ordinarily, informal caregivers may be able to observe functional decline in their patients (Blanco et al., 2020). Presumably, other informal caregivers, such as home helpers, could also be trained to score activities of daily living (ADL) reliably and evaluate functional independence accurately after a short introductory course (Tam & Schmitter-Edgecombe, 2019). In the current study, the informal caregivers worked in teams together with the healthcare professionals, so the scores may have been biased; although they were instructed not to communicate about the scores, confounding bias was unavoidable. The consistency of ADL assessment between informal caregivers and healthcare professionals suggests that double-track assessment might be a potential approach to address the insufficient supply of professional care. A previous study indicated that Barthel Index assessment by a physician based on patient interviews was not reliable (Liu et al., 2020; MacIsaac et al., 2017; Wang et al., 2019). The results of this study indicate that ADL assessment by informal caregivers is a better method for detecting declines in functioning than a doctor's interview about ADL tasks, particularly among stroke survivors in the subacute and recovery stages.
The results obtained from the informal caregivers in this study are not suitable for the assessment of patients suffering from mental illness, because their scores might be biased (Ranhoff, 1997;Zhao et al., 2021).
In addition, our results implied moderate or good agreement on the Longshi Scale evaluation between healthcare professionals and informal caregivers, especially for patients with disability in the community and domestic groups. This might be associated with the ceiling effect of the Longshi Scale. Similarly, the Barthel Index scale is known to have a ceiling effect that makes it insensitive to slight functional impairments in previously well-functioning patients (Sarker et al., 2012). Although a significant ceiling effect was found in the bedridden group in our previous study, the internal consistency of all three groups was acceptable for group comparison (Wang et al., 2019). However, the Barthel Index scale quantifies ADL on an ordinal, hierarchical scale ranging from 0 to 100, which limits interpretation of numeric changes in the total score: for informal caregivers, it is difficult to understand how much score change is significant. A distinct feature of the Longshi Scale is its categorization and scoring system, which facilitates informal caregivers' understanding of patients' functional independence. Moreover, a pictorial scale may allow a much simpler and more inclusive assessment across all populations, especially for people with aphasia and reading difficulties (Quinn et al., 2011). In this study, we found that, except for caregivers with only a primary-school education, there were no statistically significant differences between the two groups. Future research should focus on interventions that enable reliable Longshi Scale assessments by informal caregivers with a low level of education.
[Figure 4 Difference in Longshi Scale sum scores between healthcare professionals and informal caregivers. Average differences were 0.14 ± 1.57, −0.59 ± 1.65 and 0.37 ± 1.96 in the bedridden, domestic and community groups, respectively; outliers were defined as points outside mean ± 2SD, with 6, 3 and 1 such points in the three groups.]
This study included 947 pairs of informal caregivers and patients from 24 clinical settings in 11 cities of China. To our knowledge, this study is the first to address the reliability of the pictorial Longshi Scale for informal caregivers evaluating the functional independence and disability of inpatients. We believe these findings provide insights into disability evaluation and medical resource allocation in impoverished areas. Healthcare strategies for functional disability may integrate healthcare professionals with informal caregivers to improve the effectiveness of rehabilitation.
| LIMITATIONS
The interpretation of these results must also consider the following limitations. First, the cross-sectional design of this study precluded the identification of causal relationships with functional independence. Second, the sampling method was non-random, and the included hospitals were organizations collaborating with the authors' departments. Although this inherent bias is unavoidable, our study covered 24 hospitals in 11 cities to ensure generalizability. Moreover, our study only selected patients aged over 18 years, which may make the findings inapplicable to populations under 18 years of age. Finally, most variables were measured by self-report; thus, we invited experienced investigators to assess functional disability and combined their assessments with medical records to reduce recall bias as much as possible.
| CONCLUSION
There is good or moderate agreement between healthcare professionals and informal caregivers on Longshi Scale evaluation.
However, informal caregivers' education level is a dominant factor affecting assessment accuracy relative to health professionals. Informal caregivers with a secondary-school education or higher can be supported to evaluate patients' functional ability independently.
ACKNOWLEDGEMENTS
We would like to thank Miss Chunli Cai and Wanqi Fu for offering technical assistance in data collection and participant recruitment.
We would also like to thank all the healthcare professionals, informal caregivers and patients who participated in this study.
CONFLICT OF INTEREST
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
DATA AVAILABILITY STATEMENT
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
ETHICAL APPROVAL
The study protocol was approved by the Medical Ethics Committees of the Shenzhen Second People's Hospital (project identification code: 20201105004). Written informed consent was obtained from all patients and their informal caregivers, who agreed to participate in the study. | 2022-01-28T16:03:49.554Z | 2022-01-26T00:00:00.000 | {
"year": 2022,
"sha1": "c0d076bdf2b0acb5252332c71765a1dd6e9a9b37",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-1180495/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "Wiley",
"pdf_hash": "776285af674d6412932e872f92c03a27805444b0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
15358853 | pes2o/s2orc | v3-fos-license | Identifying dynamical systems with bifurcations from noisy partial observation
Dynamical systems are used to model a variety of phenomena in which the bifurcation structure is a fundamental characteristic. Here we propose a statistical machine-learning approach to derive low-dimensional models that automatically integrate information in noisy time-series data from partial observations. The method is tested using artificial data generated from two cell-cycle control system models that exhibit different bifurcations, and the learned systems are shown to robustly inherit the bifurcation structure.
Various phenomena ranging from climate change to chemical reactions have been modeled extensively by dynamical systems [1,2], and the relevance of dynamical systems to modeling biological phenomena is being increasingly recognized [3,4]. Recent advances in experimental techniques such as live-cell imaging that clarifies molecular activities at high spatiotemporal resolutions [5][6][7] have accompanied this recognition. However, noise, partial observation, and a low controllability are still challenges for measuring biological systems in that both the system dynamics and measurement processes are highly stochastic, only a few components in a system are observable, and only a small number of experimental conditions can be examined. These difficulties have led to models being constructed from experimental observations in a way that is often ad-hoc and semi-quantitative at best because instructive criteria and practical methods have not yet been established for deriving the model equations by systematically integrating the information in the experimental data.
To model complex systems such as cellular processes, a full description of all the system's details is often impractical and not informative. Instead, a reduced description that preserves the essential features of the system is more useful for comprehension; i.e., models described by low-dimensional dynamical systems are sufficient for explaining experimental observations. In particular, the bifurcation structure is a fundamental feature of dynamical systems since it characterizes the qualitative changes of the dynamics. Thus, identification of low-dimensional model systems that inherit the original bifurcation structure is a crucial step in understanding the dynamics.
Here we propose a statistical machine-learning approach to automatically derive the low-dimensional model equations from single-cell time-series data obtained at a few conditions (i.e., bifurcation parameter values; Fig. 1). Techniques for learning nonlinear dynamical systems from time-series data have been employed for chaos [8,9], spatiotemporal patterns [10][11][12], and multi-stable systems [13]. Only a few studies have applied the technique to biological data [14,15]. In a similar manner to some of those studies [14,15], we employ a statistical technique to deal with the noisy and partial time-series data. However, rather than aiming to fit the model parameters to the observation, we obtain the low-dimensional model equations that inherit bifurcation structure of the full system to capture the basic nature of the observed system. The performance of the method is demonstrated by using artificial data.
We introduce a nonlinear state space model composed of state and observation equations that describe the system dynamics and observation process, respectively. We consider a system that is modeled by D-dimensional stochastic differential equations, and d components in the model can be simultaneously observed. The state equations are discretized in time by the Euler-Maruyama scheme [16]. We write the time evolution of the ith variable at a time point t, $x_i^t$ ($i = 1, \ldots, D$), as
$$x_i^{t+1} = x_i^t + f_i(\{x_j^t\}, s)\,\Delta t + \sigma_i \sqrt{\Delta t}\,\xi_i^t,$$
where $\Delta t$ is an integration time, $\sigma_i$ is the intensity of the system noise, and s is a bifurcation parameter. The system noise $\xi_i^t$ is sampled from a standard normal distribution. To achieve efficient learning, the function $f_i$ is considered to be expressed by a summation of linearly independent functions as $f_i(\{x_j^t\}, s) = \sum_{n=1}^{N_i} k_i^n f_i^n(\{x_j\}, s)$, where $N_i$ is the number of parameters $\{k_i^n\}$ and functions $\{f_i^n\}$. Since our aim is to reproduce the bifurcation structures of systems subject to unknown equations, we adopt a polynomial basis for the $\{f_i^n\}$, rather than biochemically realistic functions like the Michaelis-Menten equation.
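To make the discretized model concrete, here is a minimal simulation sketch in Python for a two-variable state with a cubic polynomial drift; the coefficient layout, noise levels, and function names (`monomials`, `simulate`) are our own illustrative choices rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def monomials(x, degree):
    """All monomials x1^m * x2^n with m + n <= degree for a two-variable state."""
    x1, x2 = x
    return np.array([x1**m * x2**n
                     for m in range(degree + 1)
                     for n in range(degree + 1 - m)])

def simulate(k, s, x0, dt=0.01, n_steps=5000, sigma=(0.01, 0.01), degree=3):
    """Euler-Maruyama integration of
        x_i^{t+1} = x_i^t + f_i({x_j^t}, s) dt + sigma_i * sqrt(dt) * xi_i^t,
    where f_i is a degree-M polynomial in (x1, x2).  The constant term of the
    Cyclin equation (component 2) is set to the synthesis rate s."""
    k = np.array(k, dtype=float)      # shape (2, n_basis), n_basis = (M+1)(M+2)/2
    k[1, 0] = s                       # constant monomial of the Cyclin equation
    x = np.array(x0, dtype=float)
    traj = np.empty((n_steps + 1, 2))
    traj[0] = x
    sig = np.asarray(sigma)
    for t in range(n_steps):
        f = k @ monomials(x, degree)
        x = x + f * dt + sig * np.sqrt(dt) * rng.standard_normal(2)
        traj[t + 1] = x
    return traj
```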
The observation value of the ith component at a time point r, $y_i^r$ ($i = 1, \ldots, d$), is written as
$$y_i^r = g_i(\{x_j^r\}) + \eta_i \varphi_i^r,$$
where $\eta_i$ is an observation noise intensity, and $\varphi_i^r$ is sampled from a standard normal distribution. In general, the set of observed time points is a part of the entire set of time points in the numerical integration. Hereafter, θ indicates the parameters to be estimated, i.e., $\theta = (\{k_i^n\}, \{\sigma_i\}, \{\eta_i\})$. The learning of dynamical systems is formulated as a maximum likelihood (ML) estimation, which is summarized below (further details are given in section 1 of the Supplemental Material). The likelihood is given by the conditional probability of the observed time series Y as a function of the model parameters θ. However, a straightforward maximization of the likelihood p(Y|θ) is difficult because it requires the intractable summation of p(Y|X, θ)p(X|θ) with respect to the time series of the state variables X. Thus, we employ the EM algorithm to maximize the log likelihood of a model by a two-step iterative method that alternately estimates the states and parameters [17]. In the first step, the E step, the posterior distribution of the time series of a state, p(X|Y, θ), is estimated based on the tentative parameter set $\theta_{\rm old}$. In the second step, the M step, the expectation value of log p(X, Y|θ) is calculated as
$$Q(\theta, \theta_{\rm old}) = \mathbb{E}_{X|Y,\theta_{\rm old}}\left[\log p(X, Y|\theta)\right],$$
and the parameter estimate is updated as
$$\theta_{\rm new} = \arg\max_{\theta} Q(\theta, \theta_{\rm old}).$$
In this step, the optimization problem is reduced to linear simultaneous equations and thus can be solved easily. However, the problem in the E step is still analytically unsolvable because the probability distribution of the time series is necessary. This calculation requires a state estimation at all time points, including the points at which measurements are not conducted. We therefore obtain a numerical approximation of p(X|Y, θ) using a particle filter algorithm that performs state estimation for nonlinear models using a Monte-Carlo method [18,19]. The particle filter (a numerical extension of the Kalman filter) approximates a general non-Gaussian state distribution as a set of particles representing samples from the distribution and evaluates the log likelihood of the models. Since the use of the particle filter introduces stochasticity into the learning algorithm, a slight modification of the M step is required to ensure convergence of the learning [20]. The optimization function in eq. (4) is replaced by $Q'_I(\theta) = (1-\alpha_I)\,Q'_{I-1}(\theta) + \alpha_I\, Q(\theta, \theta_{\rm old})$, where I is the iteration index, and $\{\alpha_I\}$ is a non-increasing sequence of positive numbers converging to zero.
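As a rough illustration of the E-step machinery, the sketch below implements a plain bootstrap particle filter that returns a Monte-Carlo estimate of log p(Y|θ) for a candidate model; the resampling-at-every-observation scheme and the interface (`step`, scalar `eta`) are simplifications of ours, not the authors' particle smoother.

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter_loglik(y_obs, obs_times, step, eta, x0_mean, x0_std,
                           n_particles=1000):
    """Bootstrap particle filter estimate of log p(Y | theta).

    y_obs      : (n_obs, d) observed values
    obs_times  : increasing integer time indices of the observations
    step(x)    : propagates an (n_particles, D) array by one noisy model step
    eta        : observation-noise intensity (scalar, Gaussian observation model)
    x0_mean/std: Gaussian initial-state parameters (length D)
    """
    d = y_obs.shape[1]
    x = x0_mean + x0_std * rng.standard_normal((n_particles, len(x0_mean)))
    loglik = 0.0
    obs_iter = iter(zip(obs_times, y_obs))
    next_t, next_y = next(obs_iter)
    for t in range(obs_times[-1] + 1):
        if t > 0:
            x = step(x)                       # system dynamics + system noise
        if t == next_t:
            resid = x[:, :d] - next_y         # only the first d components observed
            logw = (-0.5 * np.sum((resid / eta) ** 2, axis=1)
                    - d * np.log(eta) - 0.5 * d * np.log(2.0 * np.pi))
            m = logw.max()
            w = np.exp(logw - m)
            loglik += m + np.log(w.mean())    # running log-likelihood estimate
            # resample (simplest variant: at every observation time)
            idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
            x = x[idx]
            try:
                next_t, next_y = next(obs_iter)
            except StopIteration:
                break
    return loglik
```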
To validate the method, we apply it to artificial data generated from models of a eukaryotic cell cycle control system since this system provides an illustrative example of cellular dynamics composed of many molecular components [21][22][23][24][25]. The cell cycle is a fundamental biological process characterized by repeated events underlying cell division and growth in which key proteins, Cyclin and Cyclin-dependent kinases, change their concentration periodically and activate various cellular functions such as DNA synthesis.
Two molecular circuit models of the cell-cycle control system in Xenopus embryos are adopted as the data generators: that proposed by Tyson and co-workers (the Tyson model) [21,22], and that proposed by Ferrell and co-workers (the Ferrell model) [23,24]. Although both models show an oscillation onset as the synthesis rate of Cyclin increases, they differ in the type of bifurcation at the onset; the Tyson model exhibits a saddle-node bifurcation on an invariant circle (SNIC), while the Ferrell model exhibits a supercritical Hopf bifurcation. We investigate whether the proposed learning procedure reproduces the correct bifurcation types of each model.
Both data generators are composed of 9 variables including Cdc2, Cyclin, and other regulatory proteins. The time-series data is generated by a numerical calculation of these models as nonlinear Langevin equations at a few parameter values (see the section 2 in the Supplemental Material for the model equations, and obtained time-series data). We simulate noisy observation by adding Gaussian noise to each observation value. Artificial data are prepared for three Cyclin synthesis rates across the bifurcation point, and for each bifurcation parameter value, three independent time series are prepared in which the oscillation exhibits a large fluctuation in amplitude and period among the samples.
Considering a polynomial of degree M, we take the functions $f_i(\{x_j\}, s)$ in the system equations to be learned as polynomials of degree M in the state variables. The observation equations are expressed simply as $y_i^r = x_i^r + \eta_i \varphi_i^r$. We consider the active Cdc2 and Cyclin concentrations to be observable variables since their levels have been observed in previous experiments [23]. Accordingly, $y_1$ ($x_1$) and $y_2$ ($x_2$) represent the observed (true) concentrations of active Cdc2 and Cyclin, respectively. The other variables $x_i$ ($i > 2$) represent the true concentrations of unobservable components. In the system, the Cyclin synthesis rate, s, is a bifurcation parameter. We take the constant term in the equation for Cyclin to be the synthesis rate, i.e., $k_2^1 = s$. Note that the observed orbit in the active Cdc2-Cyclin plane exhibits no intersection (see Fig. S1 in the Supplemental Material), suggesting that the two variables are sufficient to abstract the original high-dimensional dynamics.
The simplest polynomial form required for reproducing the observed dynamics is determined by starting with linear equations composed of active Cdc2 and Cyclin (system dimension D = 2 and polynomial order M = 1) and increasing D and M one at a time. It turns out that D = 2 is sufficient for reproducing a given time-series data set, as shown below. The polynomial order M is determined by minimizing information criteria, which optimize the balance between the goodness of fit and the model complexity [26,27]. The Akaike information criterion (AIC) and Bayesian information criterion (BIC) are evaluated from the log likelihood, the number of parameters, and the data size for each model (Fig. 2). Both the AIC and BIC show a decrease from M = 1 to 3, but an increase or insignificant decrease at M = 4. Therefore, we analyze models with D = 2 and M = 3 (see section 3 of the Supplemental Material for the learned parameter values and detailed settings of the learning algorithm).
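For concreteness, the information criteria used for this model selection can be computed as below; the parameter counts and the placeholder log-likelihood values are purely illustrative and are not taken from the paper.

```python
import numpy as np

def aic_bic(loglik, n_params, n_data):
    """AIC and BIC from a maximized log-likelihood."""
    return (-2.0 * loglik + 2.0 * n_params,
            -2.0 * loglik + n_params * np.log(n_data))

# For a two-variable model of polynomial order M there are (M+1)(M+2)/2 monomials
# per equation, hence 2*(M+1)(M+2)/2 dynamical coefficients, plus noise and
# initial-condition parameters (lumped into `extra` here as an assumption).
extra = 4
n_data = 9 * 500            # e.g., 3 parameter values x 3 runs x 500 observations (illustrative)
for M, ll in zip(range(1, 5), [-5200.0, -4800.0, -4650.0, -4646.0]):   # placeholder log-likelihoods
    n_params = 2 * (M + 1) * (M + 2) // 2 + extra
    print(M, aic_bic(ll, n_params, n_data))
```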
To check whether the learning procedure can capture the bifurcation of the original data generator system, we compare the bifurcation diagrams of the learned systems with those of the data generators. Figures 3(a) and (b) show bifurcation diagrams against Cyclin synthesis rate s (red lines) for the learned systems in the Tyson and Ferrell models, respectively. The bifurcation diagrams for the corresponding noiseless data generators are shown by the gray lines. Although the data for the learning are given only at three bifurcation parameter points (indicated by the broken lines), the learned systems have quantitatively similar diagrams to those of the corresponding data generators. The sudden appearance of a limit cycle with finite amplitude is reproduced for the Tyson model, while the gradual increase in amplitude from the bifurcation point is reproduced for the Ferrell model. These features are characteristics of the SNIC and supercritical Hopf bifurcation. Nullclines of the learned systems in the vicinity of the bifurcation points are shown in Figs.
3(c) and (d) for the Tyson model and in Figs. 3(e) and (f) for the Ferrell model.
The results confirm the onset of SNIC and supercritical Hopf bifurcation, respectively. Thus, each learned system inherits the bifurcation type of the original model through the learning procedure in spite of noisy and partial observations.
When the learning is conducted using the data at two of the three bifurcation parameter points, the learned systems still exhibit the correct bifurcation types, although the points of oscillation onset and the amplitudes are biased (Figs. 3(g) and (h)). Note that identification of the bifurcation is possible even when using data from only one side of the bifurcation point (as indicated by the green lines). These results indicate the interesting possibility that the learning procedure can predict the type of bifurcation that will occur from data taken only before the bifurcation point.
We also show here how the high-dimensional phase space structures of the original data generators are mapped onto lower-dimensional surfaces in the learned systems. Reduced two-variable models are derived by adiabatic elimination, following a procedure similar to that of Novak and Tyson [28] (see section 4 of the Supplemental Material for the detailed procedure and reduced model equations). Like the learned systems, the reduced models are composed of active Cdc2 and total Cyclin. Figure 4 shows the nullclines of the learned systems (the solid orange and purple lines) and the reduced models (the broken lines). In both the Tyson and Ferrell models, the learned-system and reduced-model nullclines for active Cdc2 have a similar N-shaped form (orange lines), indicating the existence of positive feedback in the molecular circuits. In contrast, those for the total Cyclin disagree quite significantly. To check the consistency of the nullclines and dynamics, Fig. 4 also shows a noisy time series from the data generators (blue points) and the orbit of the learned system (red lines). The nullclines of the learned systems are consistent with the dynamics in the data, but those of the reduced models are not. This failure arises because the dynamics of a component mediating the inhibition from active Cdc2 to Cyclin is not fast enough to allow the adiabatic approximation. Higher-order contributions beyond the adiabatic elimination performed here should be included, which requires complicated technical work. Nevertheless, the learning process automatically reproduces the appropriate low-dimensional dynamics and estimates the bifurcation structures without knowledge of the detailed high-dimensional model systems.
Gathering biological data is complicated by intrinsic and observation noise, partial observation, and a small number of possible experimental conditions. We have outlined here a machine-learning procedure based on likelihood maximization that makes use of all the information in the time-series data, including that in the noise. By using synthetic data that share the difficulties found in actual biological data, we demonstrated that the procedure could derive low-dimensional model equations that reproduced the obtained time-series data and captured the bifurcation types of the original systems. These results support the conjecture that the learning procedure will be able to construct reliable low-dimensional models for real time-series data of active Cdc2 and Cyclin levels in future studies. Being able to identify the model systems and bifurcation types will provide a useful method for elucidating both the molecular interactions in the circuit and the biological functions of the dynamics. Further, since such dynamics and bifurcations are found widely among various biological processes, the method is expected to be applicable to various cell systems with cell-imaging data.
We note that the proposed procedure can be interpreted as a reduction method from high- to low-dimensional systems, like the adiabatic approximation. In particular, in the vicinity of bifurcation points, systems are usually reduced to normal forms represented by low-dimensional differential equations with low-order polynomial forms [29]. However, unlike analytical reduction methods that require the original high-dimensional equations, the present learning procedure uses only the time-series data. This is especially advantageous for studying cell dynamics that involve complex molecular interactions. On the other hand, since the learning method provides less of a theoretical basis for interpreting the obtained equations, it should be complemented by an analytical procedure.
In essence, the proposed method performs quantitative inference of the phase space structures of dynamical systems. Therefore, not only the bifurcation structure but also other properties of the dynamical system can be analyzed using the same theoretical groundwork developed here. The detection of phase sensitivity from noisy data in studies of biological clocks [30] would provide an interesting application. In addition, the method is flexible enough to be combined with other machine-learning techniques; it was recently shown that compressive sensing exhibits a high performance for learning dynamical systems [9]. These possible extensions will further improve our method depending on the situation and experimental setup. In summary, the proposed method will be an efficient way to capture the essential features of cellular dynamics by mediating between dynamical system modeling and experimental observations.

We would like to thank K. Kamino, N. Saito, and S. Sawai for illuminating comments and stimulating discussions. This work was supported by the Grant-in-Aid MEXT/JSPS (No. 24115503).

Supplemental Material

We first introduce the state space model composed of the state equations and observation equations describing the system dynamics and observation process, respectively. Let us consider D-dimensional stochastic differential equations that describe a system, and d components in the system that are observed simultaneously. In the model, the state variable $x_i$ ($i = 1, \ldots, D$) evolves under the function $f_i(\{x_j\}, s)$, where s represents an input to the system, and the observation value $y_i$ is obtained through the function $g_i(\{x_j\})$. By discretizing the dynamics in time with the Euler-Maruyama scheme [1], we can write the state space model as
$$x_i^{t+1} = x_i^t + f_i(\{x_j^t\}, s)\,\Delta t + \sigma_i \sqrt{\Delta t}\,\xi_i^t, \qquad y_i^r = g_i(\{x_j^r\}) + \eta_i\,\varphi_i^r,$$
where $t (\in T)$ and $r (\in R)$ are time points, $\Delta t$ is an integration time, and $\sigma_i$ and $\eta_i$ are the noise intensities in the dynamics and observation, respectively. Both $\xi_i^t$ and $\varphi_i^r$ are sampled from a standard normal distribution. In general, the set of observed time points R is a subset of the entire time point set T (i.e., R ⊆ T) for the numerical integration. We assume that the function $f_i$ can be expressed by the summation of linearly independent functions $f_i^n$ ($n = 1, \ldots, N_i$) as
$$f_i(\{x_j\}, s) = \sum_{n=1}^{N_i} k_i^n f_i^n(\{x_j\}, s).$$
Here, $\{k_i^n\}$ are the coefficients to be estimated. Let us consider that A time-series data sets are given. The learning procedure estimates the parameters $\{k_i^n\}$, $\{\sigma_i\}$, $\{\eta_i\}$, and all the true states $\{x_i^t\}$ for each time-series set. In our method, the initial condition for the ith component in the ath time-series set is assumed to obey a Gaussian distribution parameterized by the mean $\mu_{i,a}$ and the variance $V_{i,a}$. Distributions at other points are automatically estimated by the particle filter algorithm explained below. Then, the parameters to be estimated are $\theta = (\{k_i^n\}, \{\sigma_i\}, \{\eta_i\}, \{\mu_{i,a}\}, \{V_{i,a}\})$.
SAEM algorithm
Our aim is to find the model parameters θ by maximizing the log likelihood function
$$L(\theta) = \log p(Y|\theta).$$
Here, $Y (= \{Y_a\},\ a = 1, \ldots, A)$ denotes the data sets of the A time series, and $X (= \{X_a\},\ a = 1, \ldots, A)$ denotes the entire time series of estimated states. We employ an EM algorithm that maximizes log p(X, Y|θ) (the complete-data log likelihood function), which is equivalent to maximizing the likelihood in eq. (4) [2]. By iterating two steps known as the E and M steps, the states X and the parameters θ are estimated alternately. Since our implementation of the E step includes a Monte-Carlo method as described below, the stochastic approximation EM (SAEM) algorithm is adopted [3]. The SAEM procedure is described as follows.

1. Set initial values of the parameter vector θ.

2. (E step) Estimate the posterior distribution of the states, p(X|Y, θ), with the particle smoother described below, and evaluate $Q(\theta, \theta_{\rm old}) = \mathbb{E}_{X|Y,\theta_{\rm old}}[\log p(X, Y|\theta)]$.

3. (M step) Rename θ as $\theta_{\rm old}$, and update the parameter vector as
$$\theta = \arg\max_{\theta} Q'_I(\theta), \qquad Q'_I(\theta) = (1-\alpha_I)\,Q'_{I-1}(\theta) + \alpha_I\, Q(\theta, \theta_{\rm old}),$$
where I is the iteration index and $\{\alpha_I\}$ is a non-increasing sequence of positive values converging to zero. The details of the E and M steps are described in the following sections.
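A bare-bones skeleton of this iteration is sketched below; the burn-in/decay schedule for α_I and the sufficient-statistics interface are common SAEM choices that we assume for illustration, not details taken from the paper.

```python
def saem(theta0, e_step, m_step, n_iter=200, n_burn=100):
    """Stochastic-approximation EM skeleton.

    e_step(theta) -> dict of sufficient statistics of Q(theta, theta_old),
                     estimated with a particle smoother (Monte-Carlo E step).
    m_step(stats) -> parameters maximizing the averaged surrogate Q'_I.
    The averaging implements Q'_I = (1 - alpha_I) Q'_{I-1} + alpha_I Q.
    """
    theta, stats_avg = theta0, None
    for I in range(1, n_iter + 1):
        alpha = 1.0 if I <= n_burn else 1.0 / (I - n_burn)   # assumed schedule
        stats = e_step(theta)
        if stats_avg is None:
            stats_avg = stats
        else:
            stats_avg = {key: (1.0 - alpha) * stats_avg[key] + alpha * stats[key]
                         for key in stats}
        theta = m_step(stats_avg)
    return theta
```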
E step
Since different time-series data are independent stochastic variables, we can write $\log p(X|Y, \theta) = \sum_a \log p(X_a|Y_a, \theta)$.
Then, each $\log p(X_a|Y_a, \theta)$ is evaluated by using a particle filter algorithm that approximates the non-Gaussian distribution of the state $x_i^t$ as a collection of many particles, each of which represents a sample from the distribution [4,5]. Specifically, the algorithm required here is called a particle smoother. For the ath time series, let $x_{i,a}^{t,p}$ denote the pth particle representing $x_i^t$, and let $y_{i,a}^t$ denote an observed value at time t. The procedure of the particle smoother is described as follows.
1. Sample P initial particles $x_{i,a}^{0,p}$ from the Gaussian initial distribution with mean $\mu_{i,a}$ and variance $V_{i,a}$.

2. Propagate each particle forward with the state equation and, at each observed time point r, weight the particles by the observation likelihood, $w_a^p \propto l_a^{r,p}$, where $l_a^{r,p} = \prod_{i=1}^{d} p\!\left(y_{i,a}^r \,\middle|\, \{x_{j,a}^{r,p}\}\right)$.
• If $P_{\rm eff} = 1/\sum_p (w_a^p)^2 < P_{\rm thres}$ (i.e., if the effective number of particles falls below a threshold value), resample the particles according to the new weights. Note that the history of particles $(x_{i,a}^{0,p}, x_{i,a}^{1,p}, \ldots, x_{i,a}^{r-1,p})$ is resampled in parallel.
3. Finish when all data points have been passed ($t = \max(R)$), and estimate the log likelihood as
$$\log L_a(\theta) = \sum_{r\in R} \log\!\left(\frac{1}{P}\sum_{p} l_a^{r,p}\right).$$
The smoothed posterior is approximated by the ensemble of resampled particle histories, $p(X_a|Y_a, \theta) \approx \frac{1}{P}\sum_p \delta(X_a - X_a^p)$, where $X_a^p$ indicates a sample path $(\{x_{i,a}^{t,p}\},\ i = 1, \ldots, D,\ t \in T)$. On the basis of this approximation, we calculate the average of the complete-data log likelihood as
$$Q(\theta, \theta_{\rm old}) \approx \sum_{a=1}^{A} \frac{1}{P}\sum_{p=1}^{P} \log p(X_a^p, Y_a|\theta).$$
M step
At the Ith iteration, the parameter-value update is performed by finding the θ for which $\frac{d}{d\theta} Q'_I(\theta) = 0$. We describe the case of $\frac{d}{d\theta} Q(\theta, \theta_{\rm old}) = 0$ for simplicity, although the optimization problem can be solved generally by the same method. The following example demonstrates the determination of the parameters of the system dynamics ($k_i^n$) and the strength of the system noise ($\sigma_i$) in detail.
First, by differentiating the complete-data log likelihood with respect to $k_l^m$, we obtain a set of linear equations, where $\Delta x_{l,a}^{t,p} = (x_{l,a}^{t+1,p} - x_{l,a}^{t,p})/\Delta t$. By defining the vectors $b_l$ and $k_l$ and a matrix $A_l$ from these equations, the system dynamics parameters are determined by solving the linear system $A_l k_l = b_l$.
Next, using the new $k_i^n$ calculated above, we obtain the updated system-noise intensity $\sigma_i$ from the residuals of the state equation. The other parameters are estimated in the same manner. Only for the variance of the initial condition, $V_{i,a}$, do we define a minimum value $V_{\rm min}$ to avoid an unnaturally small value resulting from a problem called sample impoverishment [6].
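In practice, the M-step update for one equation reduces to solving a small linear system; a sketch with our own array layout (flattened smoothed samples) is given below.

```python
import numpy as np

def m_step_component(Phi, dx, dt):
    """Closed-form M step for one model equation.

    Phi[n, m] : m-th basis function f^m evaluated on the n-th smoothed sample
    dx[n]     : matching finite difference (x^{t+1} - x^t) / dt
    Returns the coefficient vector k (normal equations A k = b with
    A = Phi^T Phi and b = Phi^T dx) and the updated system-noise intensity,
    estimated from the residuals (Var[x^{t+1} - x^t - f dt] = sigma^2 dt).
    """
    A = Phi.T @ Phi
    b = Phi.T @ dx
    k = np.linalg.solve(A, b)
    resid = dx - Phi @ k                 # residual "velocities", variance sigma^2 / dt
    sigma = np.sqrt(dt * np.mean(resid ** 2))
    return k, sigma
```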
Data preparation
The model equations and parameter values used in the present study for the data generators are described here. Each data generator, the Tyson model and the Ferrell model, is numerically integrated with white Gaussian noise by using a stochastic Runge-Kutta II (SRKII) algorithm [7]. To prevent negative values of the chemical concentrations as a result of noise, each variable is reset to a small positive value ε (= 0.001) whenever its value falls below ε. We confirmed that the results of the present study are stable so long as ε is sufficiently small.
Ferrell model
The Ferrell model is described in Tsai et al. [9].
Artificial measurement process
For both the Tyson and Ferrell models, the artificial measurement process is implemented as
$$y_1^r = x_1^r + \eta_1 \varphi_1^r, \qquad y_2^r = x_2^r + \eta_2 \varphi_2^r.$$
Here, $\eta_{1,2}$ is the observation noise intensity, and $\varphi_{1,2}$ is sampled from a standard normal distribution.
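A minimal version of this measurement step, with illustrative sampling interval and noise levels, could look as follows.

```python
import numpy as np

rng = np.random.default_rng(2)

def observe(traj, every=20, eta=0.05):
    """Keep only the two observable components (active Cdc2 and Cyclin),
    subsample the trajectory, and add Gaussian observation noise."""
    obs_times = np.arange(0, traj.shape[0], every)
    y = traj[obs_times, :2] + eta * rng.standard_normal((len(obs_times), 2))
    return obs_times, y
```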
Parameter values
The parameters used in the Tyson and Ferrell models are listed in Tables S1 and S2, respectively. In the Tyson model, all the parameter values are the same as those in the original literature [8]. In the Ferrell model, all the parameter values are the same as those in Pomerening et al. [10] except for "factor," which is set as in [9]. Figure S1 shows noiseless orbits of the data generators. We note that each orbit exhibits no intersection, indicating that the two observable variables are sufficient to abstract the original high-dimensional dynamics.
Data set
The data used for the learning are shown in supplemental Fig. S2.
Settings and results of learning
The parameters used in the learning algorithm are listed in Table S3. The learned parameters of the third-order polynomial model used in the main text are shown in Table S4.
Model reduction
We reduce the Tyson and Ferrell models to two-dimensional systems by the same procedure as described in [11]. Denoting the non-dimensionalized active Cdc2 and total Cyclin levels as u and v, respectively, the reduced models are written as follows. The reduced Tyson model is expressed in terms of $f_{\rm Cdc25}$, $f_{\rm Wee}$, and $f_{\rm APC}$, which are the functions corresponding to the adiabatic solutions of eqs.
(32), (33), and (34), respectively. This reduction procedure includes the determination of the level of Cdc2-Cyclin-tp (i.e., the value of u) from the sum [Cdc2-Cyclin] + [Cdc2-Cyclin-tp] (see Appendix A in [11]). This is based on a detailed-balance assumption for the phosphorylation reaction between the two molecular species. However, in the Ferrell model, the absence of this reaction makes the original reduction procedure inapplicable. We therefore simply assume [Cdc2-Cyclin-tp] ∼ [Cdc2-Cyclin] + [Cdc2-Cyclin-tp], because the ratio of [Cdc2-Cyclin] to the sum is observed to be small throughout the dynamics within the parameter region we consider. Consequently, the reduced Ferrell model is written in the same form, where $f_{\rm Cdc25}$, $f_{\rm Wee}$, and $f_{\rm APC}$ are derived from the adiabatic approximation in the same manner as in the Tyson model. Parameter | 2012-08-23T02:41:36.000Z | 2012-08-23T00:00:00.000 | {
"year": 2012,
"sha1": "4f379f60aa23db3cdd8ad971c8c27465affb58b8",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1208.4660",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "4f379f60aa23db3cdd8ad971c8c27465affb58b8",
"s2fieldsofstudy": [
"Biology",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine",
"Mathematics",
"Biology"
]
} |
115175255 | pes2o/s2orc | v3-fos-license | Non-perturbative Quantum Dynamics of the Order Parameter in the Pairing Model
We consider quantum dynamics of the order parameter in the discrete pairing model (Richardson model) in thermodynamic equilibrium. The integrable Richardson Hamiltonian is represented as a direct sum of Hamiltonians acting in different Hilbert spaces of single-particle and paired/empty states. This allows us to factorize the full thermodynamic partition function into a combination of simple terms associated with real spins on singly-occupied states and the partition function of the quantum XY-model for Anderson pseudospins associated with the paired/empty states. Using coherent-state path-integral, we calculate the effects of superconducting phase fluctuations exactly. The contribution of superconducting amplitude fluctuations to the partition function in the broken-symmetry phase is shown to follow from the Bogoliubov-de Gennes equations in imaginary time. These equations in turn allow several interesting mappings, e.g., they are shown to be in a one-to-one correspondence with the one-dimensional Schr\"odinger equation in supersymmetric Quantum Mechanics. However, the most practically useful approach to calculate functional determinants is found to be via an analytical continuation of the quantum order parameter to real time, \Delta(\tau ->it), such that the problem maps onto that of a driven two-level system. The contribution of a particular dynamic order parameter to the partition function is shown to correspond to the sum of the Berry phase and dynamic phase accumulated by the pseudospin. We also examine a family of exact solutions for two-level-system dynamics on a class of elliptic functions and suggest a compact expression to estimate the functional determinants on such trajectories. The possibility of having quantum soliton solutions co-existing with classical BCS mean-field is discussed.
I. INTRODUCTION
The concept of spontaneous symmetry breaking is one of the cornerstones of modern physics: Most phase transitions we know are associated with the appearance of a non-zero local order parameter that represents a broken symmetry and leads to a state that has a lower symmetry than that of the underlying Hamiltonian. In elementary particle physics, the Anderson-Higgs mechanism is the most promising scenario to explain the appearance of finite masses for elementary particles, including gauge bosons. The canonical model to explain the origin of the broken symmetry phenomenon usually involves a Lagrangian for a boson field, ∆, that has quadratic and quartic terms and that can be symbolically represented as follows: L[∆] = α |∆| 2 +β |∆| 4 +c |D∆| 2 , where D corresponds to a gauge-invariant derivative and α, β, and c are constants. In the context of elementary particle physics, it defines a Mexicanhat model for the Higgs boson, which is a minimal renormalizable field theory that produces symmetry breaking "by design." In solid state physics, such a Lagrangian is associated with the Ginzburg-Landau functional for a fluctuating order parameter near a phase transition and in many cases it can actually be derived from a more general microscopic Hamiltonian (which is typically an interacting fermion model, such that the order-parameter field is associated with a composite, rather than canonical boson).
Such a microscopic derivation was first accomplished by Gor'kov, 1 who starting from the BCS Hamiltonian obtained the Ginzburg-Landau functional for a superconductor and found explicitly the Ginzburg-Landau coefficients in terms of microscopic parameters (i.e., electron mass, electron density, interaction strength, and concentration of impurities). The general framework for a derivation of this type now appears in excellent textbooks 2 and can be briefly summarized as follows: One starts with an interacting electron model that has a "desired" phase transition (e.g., electrons with attraction for superconductivity): The partition function of the model can be expressed in terms a path integral of the corresponding imaginary-time (Grassmann) action, which includes a quartic term describing interactions. This term in path integral can be decoupled via an auxiliary Hubbard-Stratonovich boson field, ∆(x) ≡ ∆(τ, r). Then, the fermionic component of the action becomes Gaussian and the fermions can be integrated out to produce an effective action S eff [∆(x)], which can be formally expressed as a non-linear functional determinant [see, e.g., Eq. (2.6) in Sec. II]. The Hubbard-Stratonovich field, ∆(x), describes a fluctuating in space and imaginary time, τ , order parameter and the appearance of a non-zero expectation value, ∆, of this field below a certain transition temperature, T c , is associated with a broken symmetry phase. In the vicinity of T c , the relevant trajectories of ∆(x) are assumed to be such that its imaginary-time dependence is unimportant [that is, ∆(τ, r) is assumed to be independent of τ ], ∆(r) is in some sense small, and it is also assumed to be weakly fluctuating in space (long-wavelength approximation). Hence, the action can be related to the free energy by simply writing F [∆(r)] = T c S [∆(r)], and expanded in a Taylor series, which yields the Ginzburg-Landau theory, with the quadratic coefficient α ∝ (T − T c ).
The derivation of the Ginzburg-Landau theory outlined above is justified only near a classical phase transition. Below T c , the assumptions about ∆ being small and τ -independent break down (if the relevant interaction constant, g, is not small they may break down even "earlier"). However, it is exactly the low-temperature phase, including the ground state that we associate with a spontaneously broken symmetry. This picture is based on the very reasonable assumption that the relevant "trajectories" of the order-parameter field, ∆(x), at low temperatures are located near the classical saddle-point ∆ ≡ const, which becomes the only possible trajectory at T = 0 and therefore represents an exact solution. This assumption is equivalent to stating that the effective action, S eff [∆(x)], has one and only one minimum which occurs in a single "point" in the space of all allowed functions, |∆(τ, r)| (modulo the overall phase). We reiterate that there is no good reason to expect that the simplified form of the Ginzburg-Landau action remains reliable at low temperatures. In fact, if we "insist" on the canonical Ginzburg-Landau form and attempt to derive the corresponding coefficients in the expansion, we shall find that the coefficient of the quartic term generally diverges as T → 0. 3,4 Hence, we have to work with the full functional determinant in S eff [∆(x)], which is a complicated non-linear functional and we know little about its properties apart from its behavior in a tiny sub-space of constant functions. To the best of author's knowledge, there is no model (associated with breaking of continuous symmetry below T c > 0), where such functional determinants have been explicitly calculated beyond the classical mean-field analysis.
The objectives of this work are to bring up the general problem of non-perturbative quantum dynamics in broken-symmetry phases and to construct a general framework to calculate functional determinants that appear in the non-linear effective action for quantum trajectories of the order parameter in the pairing model. The latter is a seemingly hopeless goal, but we show that one can obtain exact results in certain cases and, based on those results, formulate a more general Ansatz that is expected to be useful for a large class of quantum trajectories. To address this and other related questions, we employ the Richardson pairing model, [5][6][7] which is an interacting fermion model that has a paired ground state built-in. In fact, it is "almost" the mean-field BCS model in the sense that the corresponding order parameter does not have any real-space dependence and so all such fluctuations 8 have been eliminated. However, the model still retains quantum dynamics of ∆(τ). The Richardson model is integrable and there exists an exact Bethe-Ansatz solution, 5,7 which determines the exact eigenstates and spectrum of the model in sectors with a fixed number of single-particle excitations and Cooper pairs. However, this algebraic Bethe-Ansatz solution does not appear to be very helpful in calculating the thermodynamic partition function in the grand-canonical ensemble, and we use an alternative method, which is based on the coherent-state path-integral representation of Anderson pseudospins, 9 describing the BCS sector of the model. We use a mapping of the equilibrium problem in imaginary time onto that of nonequilibrium superconductivity, and take advantage of the exact non-equilibrium solutions obtained recently in a series of amazing papers by Levitov et al. 10 and Yuzbashyan et al. 11,12 By analyzing a certain family of exact results, we propose a general closed expression to estimate the corresponding functional determinant, which is not always exact but is expected to be quantitatively reliable for a large class of elliptic functions and their limits.
Our paper is structured as follows: In Sec. II, we present the canonical Richardson model and formulate in more technical details the key questions within the conventional Grassmann path integral/Hubbard-Stratonovich approach. The questions involve studying various aspects of fluctuation physics and they are addressed in the rest of the manuscript using a variety of techniques: In Sec. III, we derive combinatorially an exact expression for the thermodynamic partition function of a generalized Richardson model in terms of a "spin partition function" associated with single-particle states and an "Anderson pseudospin partition function" associated with the paired/empty states. The generalized Richardson model includes the canonical Richardson model (reduced BCS Hamiltonian) as a particular case, and in this limit, the spin part of the partition function becomes trivial, so that the problem reduces to the problem of calculating contributions of Anderson pseudospins to the partition function. Sec. IV formulates a coherent-state path integral for Anderson pseudospins to calculate the functional determinants of interest. It is shown that by introducing a single Hubbard-Stratonovich field one can represent the full thermodynamic partition function as a product of terms local in parameter space. The contribution of each such local term to the partition function follows from the Bogoliubov-de Gennes equation in imaginary time. In Sec. V, we study phase fluctuations within the path integral formalism and obtain an exact expression for the partition function in terms of a sum of phase winding numbers. Sec. VI is the main part of the paper, which addresses the question of (possible) fluctuations of the amplitude of the order parameter, assuming that the phase fluctuations are completely suppressed. Sec. VI contains several parts: In Sec. VI A, the symmetry properties of the imaginary-time Bogoliubov-de Gennes equations are discussed and it is shown that the full density matrix solution satisfying the proper initial condition, ρ(τ → 0) =1, can be constructed from a particular spinor solution satisfying arbitrary initial conditions. In Sec. VI B, we show that the general problem of solving imaginary-time Bogoliubov-de Gennes equations in the presence of a quantum-fluctuating order-parameter field is equivalent to that of a one-dimensional supersymmetric Schrödinger equation, with "superpotentials" determined uniquely by ∆(τ ). Therefore, the cases where these two problems are solvable are shown to be closely related. Sec. VI C 1 derives an exact expression for the full density matrix, ρ(τ ), corresponding to a non-trivial dynamic order parameter, representing the soliton of Ref. [10] analytically-continued to imaginary time. The resulting functional determinant is found to be surprisingly simple and is equivalent to that of a Fermi gas. Sec. VI C suggests that the simplification of the functional determinant observed in Sec. VI C 1 is not accidental but has a natural explanation: It is argued that the effective action associated with a given quantum-fluctuating, ∆(τ ), is given by the sum of the dynamical phase and Berry phase accumulated by a two-level-system driven by a time-dependent magnetic field determined by the analytically continued order parameter, ∆(τ → it). This conjecture is verified to work well on a large class of functions, where ∆(τ + it) is an elliptic function with two primitive periods along the τ and it-axes. 
A general expression for the corresponding effective action is presented in Sec. VI D, and the possible implications of the results obtained for non-perturbative quantum dynamics of the superconducting order parameter are discussed.
II. THE RICHARDSON PAIRING MODEL AND KEY QUESTIONS
Let us consider spin-1/2 fermions, described by the creation/annihilation operators, $\hat{c}^\dagger_{l,s}$ and $\hat{c}_{l,s}$, labeled by the spin index s = ±1 and the index l ∈ L, where L is a set of allowed single-particle states. It can be a discrete, possibly finite, set (associated for example with localized levels in a mesoscopic superconducting grain [13][14][15]) or a continuum of momentum states in a system with open boundary conditions (such that $|l, s\rangle$ and $|l, -s\rangle$ are a pair of time-reversed states). We will refer to the states l as "sites." We perform some formal mathematical manipulations assuming that L is discrete and finite, but this is without loss of generality, as the assumption does not preclude us from taking the proper limit at any stage of the calculation. The canonical Richardson Hamiltonian (or equivalently the reduced BCS Hamiltonian) describing an s-wave superconductor has the form
$$\hat{H} = \sum_{l\in L,\, s} \xi_l\, \hat{c}^\dagger_{l,s}\hat{c}_{l,s} - \frac{g}{2V_L} \sum_{l_1, l_2\in L} \hat{c}^\dagger_{l_1,+}\hat{c}^\dagger_{l_1,-}\hat{c}_{l_2,-}\hat{c}_{l_2,+}, \qquad (2.1)$$
where $V_L$ is either the number of sites in L, if the set L is discrete, or, if L represents a continuum spectrum, $V_L$ is a volume (in this case, the sums are to be replaced with integrals over momenta, k ≡ l). In what follows, we will also use the notation $\tilde{g} = g/(2V_L)$.
To formulate the main questions, let us first follow the conventional method of treating Hamiltonian (2.1) and represent the partition function as a Grassmann path integral [Eq. (2.2)], introduce the Hubbard-Stratonovich field, ∆(τ), to decouple the interaction term in the Grassmann action, integrate out the fermions from the resulting quadratic theory, and arrive at the standard effective action expressed in terms of the order-parameter field [Eq. (2.3)], where $\hat{\tau}$ are two-by-two Pauli matrices in the Nambu space and the determinant is to be evaluated over both the time variable and the Nambu space. To trace over the Nambu space, one can use the block-determinant identity
$$\det\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \det(A)\,\det\!\left(D - C A^{-1} B\right),$$
which is valid for any matrices/operators A, B, C, and D, provided that A and D are invertible. Applying this identity to Eq. (2.3), we find
$$Z = Z_0 \int D^2\Delta(\tau)\; e^{-S_{\rm eff}[\Delta]},$$
where the effective action is given by
$$S_{\rm eff}[\Delta] = \frac{1}{\tilde{g}}\int_0^\beta |\Delta(\tau)|^2\, d\tau - \sum_{l\in L} \ln\det\left(1 - G^+_l \cdot \Delta^* \cdot G^-_l \cdot \Delta\right), \qquad (2.6)$$
and $Z_0$ is the partition function of a non-interacting Fermi gas given by Eq. (3.18) below. Here $G^\pm_l = (\partial_\tau \pm \xi_l)^{-1}$ are Green functions, whose explicit form in the τ-representation is easy to obtain.
Calculating formally the first variation of the effective action $S_{\rm eff}[\Delta]$ with respect to ∆(τ) leads to the mean-field equation for an extremum $\Delta_{\rm MF}(\tau)$ of the functional, which generally has a complicated operator form [Eq. (2.7)], where the right-hand side is to be understood as the kernel of the corresponding operator in the τ-representation, i.e., a kernel, K(τ, τ′), defines an operator by its action on an arbitrary β-periodic function, f(τ), as follows: $\hat{K}\cdot f(\tau) = \int_0^\beta K(\tau, \tau')\, f(\tau')\, d\tau'$. Equation (2.7) can be cast into the more friendly form of an integral equation, but it would still remain too complicated for a systematic analysis. We do however know that there exists a solution to this equation, which is a constant that in the classical BCS model is given by $\Delta_{\rm BCS}\sim \omega_0\, e^{-1/(\nu g)}$ (here we have to assume that L is momentum space and $\sum_{l\in L}\,\cdot\, = V_L\,\nu\int d\xi_l\,\cdot$, where ν is the density of states at the Fermi level, and $\omega_0$ is the usual high-energy cut-off that regularizes the Cooper logarithm). One can verify explicitly that $\Delta_{\rm BCS}$ is indeed a true minimum [i.e., it is not only a minimum on a tiny subset of constant functions, but a true minimum on the space of allowed functions, ∆(τ) = ∆(τ + β)], but there are still a few important questions that remain: (i) Does the classical BCS mean-field result represent the only minimum at 0 ≤ T < $T_c$, or may there exist quantum non-perturbative trajectories of ∆(τ) that would give contributions energetically comparable to the classical mean-field (or better)?
(ii) A related key technical question is whether it is possible to calculate the functional determinant, $\det\left(1 - G^+_l \cdot \Delta^* \cdot G^-_l \cdot \Delta\right)$, for "trajectories" of the order parameter with non-trivial quantum dynamics. (iii) What are the effects of quantum fluctuations 16,17 of the modulus and/or phase of the order parameter on thermodynamics (e.g., the energy of the ground state)? We will address these questions to some extent in the following sections using an alternative method, namely the path-integral formalism for Anderson pseudospins.
III. FACTORIZATION OF THE GENERALIZED RICHARDSON HAMILTONIAN
The Richardson Hamiltonian (2.1) is known to be integrable 5,7 and its integrability is due to the existence of an infinite number of conservation laws at two levels of the problem: First, the Hamiltonian commutes with the z-component of the spin on any site and therefore the Hilbert sub-spaces associated with the singly-occupied states and the paired/empty states are separated and can be studied independently. 18 After this factorization, the Hamiltonian for paired states reduces to a pseudospin Hamiltonian (expressed in terms of Anderson pseudospins). As Richardson discovered, the pseudospin Hamiltonian amazingly has an infinite number of conservation laws as well, and this allowed him to construct an exact Bethe-Ansatz solution to the corresponding spin problem in a given sector (with a fixed total pseudospin), and in particular, find a set of coupled algebraic equations determining the energy spectrum in the sector. The Richardson equations are exact and therefore include correctly all quantum fluctuation effects, but this exactness also makes it difficult to use the solution for practical purposes and to interpret its physical meaning, because the solution mixes up fluctuations of the order parameter of different types. In addition, the Richardson equations are still too complicated to allow a further analytic treatment and, most importantly, they address different pseudospin sectors independently. For this reason, we do not use the results of the algebraic Bethe-Ansatz approach to calculate thermodynamic properties of the model, but we nevertheless find it very useful to perform the first, simpler step in Richardson's solution, i.e., to factorize the Hilbert space into single-particle and paired/empty states. It turns out that this factorization is allowed for a more general Hamiltonian than (2.1), and in the interest of generality and future work, we present this procedure for such a more general model, which we dub the generalized Richardson model [e.g., Eq. (3.4) below represents a generalized Ising-Richardson model].
Let us define the density, spin, and Cooper pair operators on each site as follows: $\hat{\rho}_l = \sum_s \hat{c}^\dagger_{l,s}\hat{c}_{l,s}$ is the density operator, $\hat{S}_l$ is the spin, with $\hat{S}^z_l = \frac{1}{2}\sum_{s=\pm} s\,\hat{c}^\dagger_{l,s}\hat{c}_{l,s}$ being its z-component, and $\hat{P}^\dagger_l = \hat{c}^\dagger_{l,+}\hat{c}^\dagger_{l,-}$ is the Cooper pair creation operator. Clearly $\hat{P}_l \equiv (\hat{P}^\dagger_l)^\dagger = \hat{c}_{l,-}\hat{c}_{l,+}$. Let us now use these operators to express the generalized Ising-Richardson model, Eq. (3.4), where $\xi_l$ describes the single-particle energy eigenvalues/spectrum, $B_l$ is an applied magnetic field in the z-direction, $\tilde{g}$ is an interaction in the BCS channel, and $\tilde{J}$ is an Ising-type spin interaction. We reiterate that the special case of Hamiltonian (3.4) with $\tilde{J} = B = 0$ and $\tilde{g}_{l_1,l_2} \equiv \tilde{g} = {\rm const}$ yields the canonical s-wave Richardson pairing model (2.1) that we actually study in the rest of the paper. However, the more general Hamiltonian (3.4) has the same "local" in L conservation laws, since it commutes with the z-component of the spin on any site, $[\hat{H}, \hat{S}^z_l] = 0$. This allows us to define projectors $\hat{P}_{1,2}[L_{1,2}]$ onto singly-occupied and paired/empty states for arbitrary subsets $L_{1,2} \subset L$ [Eqs. (3.6) and (3.7)]. Note also that $\hat{P}^2_{1,2}[L_{1,2}] = \hat{P}_{1,2}[L_{1,2}]$. By convention we shall denote the projectors on a single site (i.e., if the corresponding subset consists of a single element) by $\hat{P}_{1,2}(l)$. Obviously, for those single-site projectors we have
$$\hat{P}_1(l) + \hat{P}_2(l) = \hat{1}. \qquad (3.8)$$
This resolution of unity allows us to represent the Hamiltonian (3.4) as a sum of Hamiltonians acting in different "sectors" of the Hilbert space [Eq. (3.9)]. Each term in this sum represents two Hamiltonians acting on single-particle states in $L_1$ and paired/empty states in $L_2$. The corresponding spin and pairing Hamiltonians are given by Eqs. (3.10) and (3.11). Now, one can follow Anderson and check that the operators $\hat{P}^\dagger_l$, $\hat{P}_l$, and $(\hat{\rho}_l - 1)$, when constrained by the projector on empty/paired states, form a closed su(2) algebra on each site (here and below, we use the symbol su(2) for the Lie algebra and SU(2) for the Lie group); in other words, the operators are Anderson pseudospins. One can therefore drop the projectors and replace the operators with Pauli matrices (since $\hat{P}^2_l = 0$, we have to use the two-dimensional representation): $\hat{P}^\dagger_l = \hat{\tau}^+_l$, $\hat{P}_l = \hat{\tau}^-_l$, and $\hat{\rho}_l = \hat{\tau}^z_l + 1$. Similarly, one can remove the projector in Eq. (3.10) and simply replace $\hat{\rho}_l$ with one, since each site in $L_1$ is guaranteed to be singly-occupied by construction. This leads to the decomposition of the Hamiltonian (3.4) given in Eqs. (3.12)-(3.14). Since the Hamiltonian (3.4) does not have operators that connect different partitions of L, the total partition function is given by a combination of the products of the partition functions corresponding to the Ising and XY-models on different sets [Eq. (3.15)]. Note that the factorization of the Hilbert space into single-particle and paired/empty states, which led us to Eq. (3.15), does not require that $\hat{S}^z_l$ be locally conserved, but requires only that the spin and pseudospin sectors can be uncoupled via the projectors (3.6) and (3.7), which is a much weaker requirement. This implies that this construction may be applied to even more general Hamiltonians of type (3.4), which include quantum interaction terms for real spin. This avenue will be explored elsewhere, 19 but here we instead focus on the much simpler canonical Richardson pairing Hamiltonian (2.1), where there are no interactions for real spins ($\tilde{J}_{l_1,l_2} \equiv 0$), nor are there magnetic fields ($B_l = 0$), and hence the partition function associated with the single-particle states is simply $Z_{\rm spin}[L_1] = \prod_{l\in L_1} 2e^{-\beta\xi_l}$, so that the full partition function of the pairing model simplifies to
$$Z = \sum_{L_1\oplus L_2 = L} \left(\prod_{l\in L_1} 2e^{-\beta\xi_l}\right) Z_{\rm BCS}[L_2], \qquad (3.16)$$
where the pairing Hamiltonian entering $Z_{\rm BCS}$ is defined on the subset $L_2 \subset L$ and $\tilde{g}_{l_1,l_2} \equiv g/(2V_L)$.
We will use this decomposition (3.16) in the remainder of the paper.
To run a simple sanity check on the result obtained, we consider the non-interacting case with g = 0, i.e., the Fermi gas. Eq. (3.14) is then the Hamiltonian of non-interacting pseudospins in magnetic fields, $b_l = (0, 0, \xi_l)$, and the partition function is given by Eq. (3.17). Since the partition function involves products of terms that are "local" in L, and all possible decompositions are to be considered, we can equivalently rewrite Eq. (3.17) as Eq. (3.18), which is indeed the partition function of a non-interacting Fermi gas of spin-1/2 particles.
IV. PATH INTEGRAL FOR ANDERSON PSEUDOSPINS
In Sec. III, we showed that the full partition function of the Richardson model is given by Eq. (4.1), where $Z_{\rm BCS}$ is the partition function of the XY-Hamiltonian with infinite-range interactions [here, we subtract a constant from the Hamiltonian $\hat{H}'_{\rm BCS}$ given by Eq. (3.14) and set $\tilde{g}_{l_1,l_2} \equiv \tilde{g}$], Eq. (4.2). To calculate the partition function, we employ the coherent-state spin path-integral formalism and write it in the form of Eq. (4.3), where $n_l = (\sin\theta_l\cos\phi_l, \sin\theta_l\sin\phi_l, \cos\theta_l)$ is a vector constrained to move on a unit sphere. We now perform the Hubbard-Stratonovich decoupling of the interaction term in the spin path integral, which allows us to write the full partition function in the form of Eq. (4.4), where $z_l$ is a "local" path integral [Eq. (4.5)], which depends on a realization of the "global" Hubbard-Stratonovich field ∆(τ). Note that in Eq. (4.4) the explicit factorization of the terms into single-particle and paired/empty states is no longer necessary due to the "locality" of the "dynamic partition function," $z_l$, after the Hubbard-Stratonovich decomposition. The contribution of the single-particle terms is simply given by the factor of two in Eq. (4.4).
To treat the path integral (4.5), we note that it can be "generated" as a solution to the following differential equation for a "density matrix," $\hat{\rho}_l$:
$$\partial_\tau \hat{\rho}_l(\tau) = -\hat{h}_l(\tau)\,\hat{\rho}_l(\tau), \qquad \hat{\rho}_l(0) = \hat{1}, \qquad (4.6)$$
where $\hat{h}_l(\tau) = \xi_l\hat{\tau}^z + {\rm Re}\,\Delta(\tau)\,\hat{\tau}^x - {\rm Im}\,\Delta(\tau)\,\hat{\tau}^y$. The trace of the two-by-two "density matrix" evaluated at τ = β gives the desired partition function, $z_l = {\rm Tr}\,\hat{\rho}_l(\beta)$. This relation can be proven by writing a formal solution to Eq. (4.6) as a τ-ordered exponential and then expressing it as a path integral to reproduce exactly (4.5).
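A direct numerical check of this construction is straightforward: integrate the imaginary-time equation for the two-by-two density matrix and take the trace at τ = β. The stepper below is a minimal sketch (the discretization choices are ours), and the final line verifies it against the constant mean-field result z_l = 2 cosh(E_l β) quoted below.

```python
import numpy as np
from scipy.linalg import expm

tau_x = np.array([[0, 1], [1, 0]], dtype=complex)
tau_y = np.array([[0, -1j], [1j, 0]], dtype=complex)
tau_z = np.array([[1, 0], [0, -1]], dtype=complex)

def z_l(xi, delta_of_tau, beta, n_steps=4000):
    """z_l = Tr rho_l(beta) from d(rho)/d(tau) = -h_l(tau) rho, rho(0) = 1,
    with h_l = xi tau_z + Re[Delta] tau_x - Im[Delta] tau_y and an arbitrary
    user-supplied trajectory Delta(tau)."""
    d_tau = beta / n_steps
    rho = np.eye(2, dtype=complex)
    for n in range(n_steps):
        delta = complex(delta_of_tau((n + 0.5) * d_tau))   # midpoint value of Delta
        h = xi * tau_z + delta.real * tau_x - delta.imag * tau_y
        rho = expm(-h * d_tau) @ rho
    return np.trace(rho).real

xi, delta0, beta = 0.3, 1.0, 5.0
print(z_l(xi, lambda t: delta0, beta), 2 * np.cosh(np.hypot(xi, delta0) * beta))
```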
To verify that the formulas obtained so far are consistent with what is known, let us consider the case of the classical mean field, where the order parameter is taken to be a constant, $\Delta^{\rm BCS}_{\rm MF}(\tau) \equiv \Delta_{\rm BCS} = {\rm const}$. In this case the solution to Eq. (4.6) is given by $\hat{\rho}^{(0)}_l(\beta) = \exp(-\hat{h}_l\beta)$. Since $\hat{h}_l = \xi_l\hat{\tau}^z + {\rm Re}\,\Delta\,\hat{\tau}^x - {\rm Im}\,\Delta\,\hat{\tau}^y$, we can write $\hat{h}_l = E_l\, (n_l\cdot\hat{\tau})$ with $|n_l| = 1$, so that $E_l = \sqrt{\xi_l^2 + |\Delta|^2}$ is the familiar quasiparticle spectrum of BCS theory, which in the pseudospin language translates into an effective magnetic field experienced by a pseudospin. Calculating the trace, we recover the partition function of a spin-1/2 in a magnetic field of magnitude $|b_l| = E_l$: $z^{(0)}_l = 2\cosh(E_l\beta)$. Now returning to Eq. (4.4) and noticing that $2 + 2\cosh(E_l\beta) = \left[2\cosh\left(E_l\beta/2\right)\right]^2$, we can write the classical mean-field contribution to the partition function as
$$Z^{\rm BCS}_{\rm MF} = \int d^2\Delta\, \exp\!\left(-\frac{\beta|\Delta|^2}{\tilde{g}} + 2\sum_{l\in L}\ln\left[2\cosh\left(\frac{E_l\beta}{2}\right)\right]\right), \qquad (4.9)$$
where we recall that $\tilde{g} = g/(2V_L)$. Varying the action with respect to ∆, we indeed recover the familiar BCS self-consistency equation
$$\frac{1}{\tilde{g}} = \sum_{l\in L} \frac{\tanh(\beta E_l/2)}{2E_l}. \qquad (4.10)$$
We note that even though the classic BCS equation follows from the Richardson Hamiltonian, this zero-dimensional model does not have a true (classical) phase transition. In particular, if we calculate the Riemann integral over ∆ that appears within the classical mean-field approximation in Eq. (4.9), the resulting function $Z^{\rm BCS}_{\rm MF}(T)$ will be continuous in the vicinity of a nominal $T_c$ (e.g., one can expand the free energy into a Taylor series and obtain a zero-dimensional Landau theory, which leads to a continuous partition function expressed in terms of the error function; see Ref. [16] for details). If the underlying physical model is higher-dimensional, then a phase transition is anticipated, 16 and we can interpret the temperature at which a derivative of the partition function over T has the sharpest slope as the temperature where the phase transition occurs. However, it is not only the partition function itself that is of primary interest, but also the trajectories that provide the main contributions to it. In the weak-coupling limit, the transition point can in turn be identified (in the leading approximation with respect to g) with the point where a non-trivial solution to the self-consistency equation (4.10) first appears, but in strong coupling this is not necessarily so. We note that one can use the simple BCS result (4.9) for estimates of $T_c(g)$ by examining the partition function as explained above (i.e., looking for a temperature where the slope of its second derivative is the sharpest). However, of course this procedure is not quantitatively reliable, as it neglects superconducting fluctuations in real space (which are classical fluctuations for the purpose of determining $T_c$) that have been excluded from the Richardson model from the outset. In what follows, we will not address the very interesting question of determining $T_c$ in strong coupling, but instead will focus on the effects of quantum dynamics of the order parameter.
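The self-consistency equation (4.10) is easy to solve numerically for a discrete set of levels; the sketch below does so by bracketing the root, with level spacing, coupling, and temperature values chosen purely for illustration.

```python
import numpy as np
from scipy.optimize import brentq

def gap(levels, g_tilde, beta):
    """Solve 1/g_tilde = sum_l tanh(beta*E_l/2) / (2*E_l), with
    E_l = sqrt(xi_l^2 + Delta^2), for the constant order parameter Delta."""
    def f(delta):
        E = np.sqrt(levels**2 + delta**2)
        return np.sum(np.tanh(0.5 * beta * E) / (2.0 * E)) - 1.0 / g_tilde
    if f(1e-12) <= 0.0:          # no non-trivial solution at this temperature
        return 0.0
    return brentq(f, 1e-12, 100.0 * np.max(np.abs(levels)) + 1.0)

levels = np.linspace(-1.0, 1.0, 101)     # equally spaced levels around the Fermi energy
for beta in (5.0, 20.0, 80.0):
    print(beta, gap(levels, g_tilde=0.05, beta=beta))
```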
V. PHASE FLUCTUATIONS
Let us now express the order parameter in Eq. (4.4) explicitly as a product of a time-dependent amplitude part and a phase factor, $\Delta(\tau) = \Delta_0(\tau)\, e^{i\gamma(\tau)}$. To proceed further, we note that the first term (the factor of two) in the product in Eq. (4.4) originates from single-particle states, which are free (real) spins, and as such this factor of two is nothing but the partition function of a free spin-1/2. In the path-integral language, it can be "generated" by an action that contains just a Wess-Zumino term and no Hamiltonian; i.e., we can use the "representation of the factor of two" given in Eq. (5.1). Such Wess-Zumino terms appear in the factors $z_l$ in Eqs. (4.4) and (4.5) as well, and we get Eq. (5.2). We note here that the first term in the product, which is equal to one within our conventional Richardson model (2.1), will have a more complicated form in the generalized Richardson Hamiltonian (3.4), where it should be related to the partition function of an Ising model for real spins on singly-occupied sites. We now perform the following change of variables ("gauge transformation"): $\phi_l \to \phi_l(\tau) + \gamma(\tau)$. The dependence of the action on the overall phase of the order parameter disappears from the last term in the product in Eq. (5.2) and appears only in the Wess-Zumino term. The corresponding γ-dependent part of the action is given by Eq. (5.3). We can now evaluate the path integral over γ(τ), following Ref. [20] and keeping in mind the periodic boundary conditions for ∆(τ) and n(τ), so that $\gamma(\beta) - \gamma(0) = 2\pi q$, with q ∈ Z.
Therefore, we obtain Eq. (5.4). Hence, the phase fluctuations of the order parameter constrain the sum Σ_l η_l(τ) to be equal to a constant at all times. The resulting sum over q can be rewritten as an inverse discrete Fourier transform, and the result of the path integration in (5.4) is Eq. (5.5). The partition function for Anderson pseudospins (5.2) then reads as in Eq. (5.6), where we limited the sum over N to positive values only because (1 + η_l) ≥ 0 (if the set L is finite we can restrict the sum to N ≤ V_L), and the path integral over the order-parameter field includes only the dynamics of the modulus. In Eq. (5.6), S_{∆_0} is the "bare action" for the order-parameter field, S_WZ is the sum of all Wess-Zumino terms for the pseudospins, and the interacting part of the effective action reads as in Eq. (5.7). We see that the effect of phase fluctuations of ∆(τ) is to separate the partition function into "sectors," where the total projection of the z-component of the Anderson pseudospins is a constant integer at all times. This analogy can be made more explicit if we imagine the associated real-time pseudospin dynamics, governed by the Bloch equation, Ṁ_l = b_l × M_l, where the effective magnetic field is determined by b_l(t) = (∆_0(it), 0, ξ_l) [here ∆_0(it) is the modulus of the order parameter properly analytically continued to real times, τ → it]. The δ-functions in Eq. (5.6) demand that the real-time dynamics of individual pseudospins must be correlated in such a way that they pin the "total pseudospin moment," Σ_l M^z_l(t), to a constant. Note that these constraints imposed by the phase fluctuations are in addition to the constraint that may be imposed by any mean-field treatment of the remaining path integral over the amplitude ∆_0. From this, one can see that our ability or lack thereof to satisfy a certain mean field (in a mesoscopic integrable system 13,14), e.g., a constant amplitude such as in the classic BCS mean field, is determined by the initial conditions for the pseudospins.
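The pinning of the total pseudospin projection can be made tangible with a small numerical experiment. The sketch below integrates the Bloch equations with a transverse field generated self-consistently from the pseudospins themselves (the normalization of the coupling g is hypothetical); with such a field, Σ_l M^z_l is conserved identically, which is the real-time counterpart of the δ-function constraints in Eq. (5.6).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Real-time Bloch dynamics dM_l/dt = b_l x M_l with a self-consistent
# transverse field, b_l = (Dx(t), Dy(t), xi_l), where
# Dx + i*Dy = g * sum_l (M_l^x + i*M_l^y).  The normalization of g is a
# hypothetical choice; the point is only that sum_l M_l^z is conserved.
rng = np.random.default_rng(0)
n = 12
xi = np.linspace(-1, 1, n)
g = 0.5 / n

def rhs(t, y):
    M = y.reshape(n, 3)
    dx = g * M[:, 0].sum()
    dy = g * M[:, 1].sum()
    b = np.column_stack([np.full(n, dx), np.full(n, dy), xi])
    return np.cross(b, M).ravel()

M0 = rng.normal(size=(n, 3))
M0 /= np.linalg.norm(M0, axis=1, keepdims=True)
sol = solve_ivp(rhs, (0, 50), M0.ravel(), rtol=1e-9, atol=1e-9,
                dense_output=True)
for t in [0.0, 25.0, 50.0]:
    Mz = sol.sol(t).reshape(n, 3)[:, 2].sum()
    print(f"t = {t:4.1f}   total M^z = {Mz:+.9f}")
```

The conservation is exact here because the transverse field is built from the collective components: the z-component of Σ_l b_l × M_l cancels identically.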
The physical meaning of all these results can be clarified if we first consider the subset of paired/empty states and recall that the density operator on a site l, Eq. (3.1), of the original model is given by ρ̂_l = 1 + τ̂^z_l for the paired/empty states l ∈ L_2 ⊂ L, so that the Anderson "spin-up" corresponds to the existence of a Cooper pair and a "spin-down" to an empty site. Therefore the operator corresponding to the total number of Cooper pairs is given by N̂_CP = Σ_{l∈L_2} (1 + τ̂^z_l)/2, and the time-dependent field in the path-integral formalism corresponding to this operator is given by N_CP(τ) = Σ_{l∈L_2} [1 + η_l(τ)]/2. From Eqs. (5.3) and (5.9), we see that the action that includes the phase of the order parameter can be written as S_γ = i ∫ dτ γ̇(τ) N_CP(τ); (5.10) i.e., we recover the fact that in the absence of gapless excitations, the phase of a Bose field operator and the number of bosons (Cooper pairs in our case) are canonically conjugate operators, satisfying therefore the Heisenberg uncertainty principle. However, the effective action (5.7) may contain contributions from single-particle states as well [they are associated with the factor of one in the sum in Eq. (5.7)], and if we allow such states, i.e., if L_1 ≠ ∅, then the canonical conjugate to the phase, γ̂, the way it is defined above, will also have a contribution from the real spins on singly-occupied sites. The meaning of the field [1 + η_l(τ)]/2 is different for those singly-occupied states and relates to the z-component of the actual magnetic moment of a site. This suggests an interesting relation for the full phase action (5.3), which now includes contributions from both paired/empty states and single-particle states: S_γ = i ∫ dτ γ̇(τ) [N_tot(τ)/2 + S^z_tot(τ)], (5.11) where N_tot(τ) and S^z_tot(τ) are the fields corresponding to the total number of particles and the total magnetic moment of the system. If single-particle states are completely gapped out, as is usually assumed, then all particles are bound in Cooper pairs, the total magnetic moment is identically zero, and we recover the familiar conclusion summarized by Eq. (5.10). But in general, the Hamiltonian version of Eq. (5.11) will be a Heisenberg uncertainty relation/commutator, which involves both the superconducting part (Anderson pseudospins) and a magnetic part (real spins): [γ̂, N̂_tot/2 + Ŝ^z_tot] = i 1̂. We reiterate here that while our model is "biased" towards a superconducting state and has no magnetic interactions for real spins, a more general Richardson Hamiltonian [see, e.g., Eq. (3.4)] may have non-trivial magnetic interactions [see, e.g., Eq. (3.13)], which in principle may lead to a magnetic phase transition that would compete with superconductivity; cf. Refs. [21], [22], [23], and [24].
Both N_tot and S^z_tot are certainly good quantum numbers and are separately conserved. Hence, the phase γ fluctuates strongly (via the Heisenberg uncertainty principle), and since we treated these fluctuations exactly in Eq. (5.6), the δ-function constraints there effectively enforce these underlying global conservation laws. An important question is whether we actually need to enforce them to describe a realistic superconductor. The classic description of an s-wave superconducting ground state requires no gapless excitations (L_1 = ∅), and hence the phase γ̂ is identified with the phase of a Cooper-pair superfluid with broken gauge symmetry (that is, γ does not fluctuate). Per the same Heisenberg uncertainty principle, we must then require that either N̂_CP or Ŝ^z_tot or both fluctuate strongly (in a closed system, it must be both, because the only way in which N_CP can change is by breaking Cooper pairs into single-particle excitations).
Another, more technical way to argue in favor of the same conclusion is to consider a Richardson model, or a more general (non-integrable) physical Hamiltonian from which it descends, weakly coupled to a bath and/or to a noisy magnetic field. Then we are allowed to weakly break some of the constraints associated with the global conservation laws. This can be done by "softening" the δ-functions in Eq. (5.6): e.g., we can represent each δ-function as a narrow Gaussian and then allow the Gaussian a finite width, which would be equivalent to introducing a charging-energy-like term in the action, δS_γ ∝ ∫ γ̇²(τ) dτ, that penalizes phase fluctuations. Both these arguments suggest that to describe a realistic superconductor in the actual broken-symmetry phase, we have to suppress phase fluctuations, which can be accomplished by dropping the S_γ term and the resulting constraints in the partition function (5.6). This however brings up the question of whether the low-temperature state with broken gauge symmetry will allow fluctuations of the amplitude of the order parameter and, if yes, whether they are purely mesoscopic or may involve more serious non-perturbative solutions.
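The equivalence between softening a δ-function constraint and adding a quadratic penalty for the conjugate variable rests on an elementary Gaussian identity, which the following snippet verifies numerically (all numbers are illustrative):

```python
import numpy as np

# Softening a delta-function constraint into a narrow Gaussian is, after
# integrating over the conjugate variable, equivalent to a quadratic
# "charging-energy-like" penalty.  Numerical check of the identity
#   int dg exp(-g^2/(2 s^2)) exp(i g x) = sqrt(2 pi) s exp(-s^2 x^2 / 2),
# with s the Gaussian width (purely illustrative numbers below).
s, x = 0.7, 1.3
gam = np.linspace(-40, 40, 200001)
lhs = np.trapz(np.exp(-gam**2 / (2 * s**2)) * np.exp(1j * gam * x), gam)
rhs = np.sqrt(2 * np.pi) * s * np.exp(-(s * x) ** 2 / 2)
print(lhs.real, rhs)   # the two numbers agree
```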
VI. AMPLITUDE FLUCTUATIONS
A. Bogoliubov-de Gennes Equations in Imaginary Time
We now consider the amplitude fluctuations, assuming that the phase fluctuations are suppressed. As was shown in Sec. IV, the partition function originating from a non-trivial fluctuating order-parameter field is given by the trace of the density matrix, z_l[∆_0(τ)] = Tr ρ̂(β), which now is the solution to the following Bogoliubov-de Gennes equation in imaginary time with a real, but generally time-dependent, ∆_0(τ): ∂_τ ρ̂(τ) = −[ξ_l τ̂^z + ∆_0(τ) τ̂^x] ρ̂(τ), with ρ̂(0) = 1̂. (6.1) What is required at this stage is to find a general expression for z_l[∆_0(τ)] as a functional of the order parameter and to perform a variational analysis on the resulting effective action. This is equivalent to calculating the functional determinant in Eq. (2.5), which appears within a more conventional treatment. This is a difficult problem, which is intimately related to the problem of the dynamics of a two-level system in a time-dependent magnetic field (the generalized Landau-Zener problem), described via non-linear differential equations that have known analytic solutions only in a few special cases. While determining the exact dynamics of pseudospins under an arbitrary perturbation ∆_0(τ) may not be possible, one can still get further insight by taking advantage of the recent progress in understanding non-equilibrium BCS superconductivity 12 and the problem of dissipation due to externally driven two-level systems,25 where exact solutions can be obtained for a wide class of external perturbations associated with elliptic functions. Below, we explore solutions to Eq. (6.1) in some special cases and generalize the results to express the functional determinant that arises within this class of dependencies in a compact form. However, let us start with a general analysis of the imaginary-time Bogoliubov-de Gennes equations (6.1). Let us assume first that ∆_0(τ) = ∆_0(−τ), i.e., that it is an even function, which may occur "naturally" or via a periodic continuation from the physical imaginary-time interval [0, β] [all conclusions below can be generalized easily to the case where ∆_0(τ) = ∆_0(2τ_0 − τ)]. Let us also consider a Nambu spinor χ(τ) = (u(τ), v(τ))^T and look for a solution to the corresponding spinor equations (6.2) without specifying initial conditions. We also require that ∆_0(0) = ∆_0(β), since it is a field that arises from a path integral in imaginary time. The corresponding function may have a "natural" period commensurate with β or a single "accidental" period, and in the latter case we shall periodically continue the function ∆_0(τ) defined on τ ∈ [0, β], such that it satisfies ∆_0(τ) = ∆_0(τ + β), ∀τ.
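Before turning to the general analysis, the constant-∆ case can be checked directly by integrating the 2×2 imaginary-time evolution numerically. The sign convention ∂_τ ρ̂ = −ĥ ρ̂ and the explicit form of ĥ used below follow the reconstruction above and are assumptions that may differ from the paper's conventions by inessential factors:

```python
import numpy as np
from scipy.linalg import expm

# Imaginary-time Bogoliubov-de Gennes evolution of a single pseudospin,
# assuming d rho/d tau = -h(tau) rho with h = xi*tau_z + Delta_0(tau)*tau_x
# (real order parameter).  For constant Delta_0, the trace at tau = beta
# should give the BCS value 2 cosh(E beta), E = sqrt(xi^2 + Delta^2).
tz = np.diag([1.0, -1.0])
tx = np.array([[0.0, 1.0], [1.0, 0.0]])

def trace_rho(Delta_of_tau, xi, beta, steps=4000):
    rho = np.eye(2)
    d = beta / steps
    for k in range(steps):
        tau = (k + 0.5) * d                   # midpoint stepping
        h = xi * tz + Delta_of_tau(tau) * tx
        rho = expm(-h * d) @ rho
    return np.trace(rho)

xi, Delta, beta = 0.3, 0.7, 5.0
print(trace_rho(lambda t: Delta, xi, beta))
print(2 * np.cosh(np.hypot(xi, Delta) * beta))
```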
B. Bogoliubov-de Gennes Equations and Supersymmetric Quantum Mechanics
We see that if we know any particular solution to Eq. (6.2) with arbitrary initial conditions, the problem of calculating the functional determinant is solved. However, it is of course the main challenge to find a particular solution. To shed light on the complexity of this general problem and to obtain a further interesting insight, we now take a detour to point out a direct connection between the solvability of Bogoliubov-de Gennes equations (6.2) and supersymmetric Quantum Mechanics.
Let us introduce the following functions: the ratios R_+(τ) = v(τ)/u(τ) and R_−(τ) = u(τ)/v(τ), together with an auxiliary function p(τ) constructed from the spinor components.
From Eqs. (6.2) we find the equations of motion (6.8) for R_±(τ); the function p(τ) is then expressed in terms of the other two, which are related by R_+(τ)R_−(τ) ≡ 1, up to a constant of integration p_0 that can be set to one, p_0 = 1, since we are looking for an arbitrary solution. We see that Eqs. (6.8) for R_±(τ) represent a rather general Riccati equation, which has been studied for some 300 years and which has a known analytic solution only in a limited number of cases. Let us however proceed further and simplify the form of these equations by introducing the following new variables: x(τ) = ∫_0^τ ∆_0(s) ds, W = ξ/∆_0, and r_±(τ) = R_±(τ) ± W(τ). (6.10) We now assume also that ∆_0(τ) does not change sign (which in fact is a requirement if phase fluctuations have been eliminated, since a change of sign of the order parameter should be incorporated into its phase dynamics). In this case, we can unambiguously determine τ(x) and treat all functions involved as functions of x. We arrive at Eqs. (6.11), which are Riccati-type equations as well, but they now have a form reminiscent of equations appearing in the context of supersymmetric Quantum Mechanics. To see the connection, we recall that a generic Riccati equation can always be reduced to a form (6.12) such that a particular solution to Eq. (6.12) is written explicitly as f_0(x) = σ/φ_+, and therefore the question of finding an analytic solution to a generic Riccati equation (6.12) reduces to that of finding the functions φ(x) and σ(x) explicitly. In our case (6.11), we see that σ_±(x) = φ'_±(x), while the equations for φ_±(x) have the form Ĥ_± φ_±(x) = [−d²/dx² + V_±(x)] φ_±(x) = −φ_±(x), (6.13) where L = ∫_0^β ∆_0(s) ds is the period of the potentials, V_±(x) = W²(x) ± W'(x), and W(x) is defined in Eq. (6.10). We see that the operators Ĥ_± in the right-hand side of Eq. (6.13) are Schrödinger operators associated with the two superpotentials V_±(x), which have the canonical form of those in supersymmetric Quantum Mechanics 26 and which in our case are determined by the underlying dynamics of the order parameter! Furthermore, since ∆_0(τ) has been periodically continued, Eqs. (6.13) are actually Schrödinger equations in a periodic superpotential. Even though for our purposes what is really needed is the "wave function" associated with just one (negative-energy) state, E = −1, we can easily examine whether the supersymmetric Schrödinger equations admit zero modes (they do not). For this we can follow Ref. [27] and notice that in a periodic potential the wave functions are Bloch-Floquet states, which for a zero mode, if it were to exist, would translate into the condition φ_{0,±}(x + L) = e^{±ν} φ_{0,±}(x), with ν = ∫_0^L W(x) dx = βξ. Since the real factor ν is non-zero (apart from the state with ξ = 0), there are no zero modes, the Witten index is zero, and therefore the supersymmetry is broken for our conventional s-wave superconductor. Admittedly, the significance of this fact is unclear (at least for the topologically trivial superconductors studied here), but what may be important is the fact that the existence of analytic solutions to the underlying Bogoliubov-de Gennes equations (6.1) should be related to the existence of solvable supersymmetric potentials and vice versa. Another interesting approach that could potentially lead to progress would be to study quasiclassical solutions to Eqs. (6.13), where the WKB method is known to work very well (it is exact in many notable cases).
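To make the change of variables concrete, the following sketch evaluates the superpotential and the partner potentials for a hypothetical smooth, sign-definite, β-periodic profile ∆_0(τ), using the definitions x(τ) = ∫_0^τ ∆_0(s) ds and W = ξ/∆_0 reconstructed above (these should be treated as assumptions inferred from the surrounding text):

```python
import numpy as np
from scipy.integrate import quad

# SUSY-QM mapping for a sample positive, beta-periodic order parameter.
# With x(tau) = int_0^tau Delta_0 ds and W = xi/Delta_0, one has
# int_0^L W(x) dx = int_0^beta xi d tau = beta*xi, i.e. nu = beta*xi.
xi, beta = 0.5, 4.0
Delta0 = lambda tau: 0.8 + 0.3 * np.cos(2 * np.pi * tau / beta)

L = quad(Delta0, 0, beta)[0]               # period of the superpotential
nu = beta * xi                             # = int_0^L W(x) dx
print("L =", L, "   nu =", nu)

# Partner potentials V_pm(x) = W^2 +- dW/dx, evaluated parametrically in
# tau via dW/dx = (dW/dtau)/Delta_0(tau).
taus = np.linspace(0, beta, 9)
W = xi / Delta0(taus)
dW_dtau = xi * 0.3 * (2 * np.pi / beta) \
          * np.sin(2 * np.pi * taus / beta) / Delta0(taus) ** 2
print(np.round(W**2 + dW_dtau / Delta0(taus), 4))   # V_plus
print(np.round(W**2 - dW_dtau / Delta0(taus), 4))   # V_minus
```

Since ν = βξ ≠ 0 for any ξ ≠ 0, the Bloch-Floquet factor e^{±ν} is never unity, consistent with the absence of zero modes noted above.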
We shall however leave these questions for future work and explore below another means to treat the Bogoliubov-de Gennes equations to calculate the functional determinant of interest.
C. Exact Solutions

1. A Solvable Case with Non-Trivial Quantum Dynamics of the Order Parameter
Up to this point, we have considered general properties of the Bogoliubov-de Gennes equations (6.1) and found just one explicit solution, corresponding to the "trivial" case of a constant order parameter, thereby recovering classic BCS theory in this language. For further progress, it is desirable to examine the properties of some other exact solutions in less trivial cases, but as noted above the number of known solvable cases is quite limited. Fortunately, some additional insight comes from recent progress in the closely related problem of non-equilibrium BCS superconductivity. There is a whole class of new solutions that have recently been obtained that not only admit an exact analytic treatment of the pseudospin real-time dynamics, but also, amazingly, satisfy the mean-field self-consistency constraint (for some specific real-time dynamics). Even though, as we shall see below, these solutions for ∆_0(τ) are not at all optimal for minimizing the imaginary-time action in equilibrium, let us nevertheless examine some associated exact solutions for the density matrix. We present below the simplest such solution, which is the imaginary-time version of the Ansatz proposed by Levitov et al.10 That is, let us seek the function R_+(τ) [see Eqs. (6.7) and (6.8)] in the form R_+(τ) = 2ξ f(τ) − ḟ(τ), (6.14) where, following Ref. [10], we identify f(τ) = ∆_0^{−1}(τ), so that f satisfies Eq. (6.15), which is solved by ∆_0(τ) = ω / cos[ω(τ − τ_0)], (6.16) where ω and τ_0 are arbitrary constants for the purpose of satisfying Eq. (6.15). However, we also have to satisfy the periodicity requirement for the order parameter, ∆_0(0) = ∆_0(β), which leads to two possibilities: (i) if ω = 2πn/β, with n ∈ Z, the solution (6.16) is "naturally periodic," with the period commensurate with β; (ii) if ω ≠ 2πn/β, but τ_0 = β/2, it is an "accidentally periodic" solution. As we shall see below, for the purpose of minimizing the imaginary-time action the latter "accidental" periodicity is much preferable, while the naturally periodic solution is not even allowed in the case of (6.16). In either case, the Ansatz (6.14) immediately leads to an explicit solution for R_+(τ) = v(τ)/u(τ) and for the density matrix (6.19), where we introduced ǫ = 2ξ/ω. One can explicitly verify that ρ̂(τ) given by Eq. (6.19) indeed satisfies Eq. (6.1) together with the initial condition ρ̂(0) = 1̂. The solution (6.19) looks complicated, but for the purpose of calculating the functional determinant (or, equivalently, the "partition function" z_l[∆_0(τ)]), we do not need its full form but just its trace at τ = β. Calculating this trace, we find an interesting result for this particular choice of the order parameter: z_l[ω/cos(ω(τ − τ_0))] = Tr ρ̂(β) = 2 cosh(ξ_l β), (6.20) which, as we see, does not depend on ∆_0(τ) at all (neither on the frequency nor on τ_0) and is equivalent to a pseudospin not subject to any time-dependent ∆_0(τ) whatsoever. This is a very curious result indeed, because it suggests that the functional determinant may have a much simpler form than the actual "density matrix" used as a tool to calculate it.
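The ∆_0-independence of the trace can be tested numerically. The sketch below integrates the imaginary-time evolution for the soliton (6.16) with τ_0 = β/2 at several frequencies and compares the result with the free-pseudospin value 2 cosh(ξβ); the sign convention of the evolution is an assumption, but for the even profile used here the trace of a unit-determinant monodromy is insensitive to it.

```python
import numpy as np
from scipy.linalg import expm

# Numerical test of the result (6.20): for the imaginary-time soliton
# Delta_0(tau) = omega/cos(omega*(tau - beta/2)) with omega*beta < pi,
# the trace of the 2x2 evolution should equal 2 cosh(xi*beta),
# independent of omega, i.e. the same as for a free pseudospin.
tz = np.diag([1.0, -1.0])
tx = np.array([[0.0, 1.0], [1.0, 0.0]])
xi, beta = 0.4, 3.0

def z(Delta_of_tau, steps=8000):
    rho, d = np.eye(2), beta / steps
    for k in range(steps):
        h = xi * tz + Delta_of_tau((k + 0.5) * d) * tx
        rho = expm(-h * d) @ rho
    return np.trace(rho)

for omega in [0.2, 0.5, 0.9 * np.pi / beta]:
    soliton = lambda t, w=omega: w / np.cos(w * (t - beta / 2))
    print(f"omega = {omega:.3f}   z = {z(soliton):.6f}")
print("free pseudospin: 2 cosh(xi*beta) =", 2 * np.cosh(xi * beta))
```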
To complete the analysis of this non-trivial fluctuation, let us examine the action evaluated for this particular "trajectory" of ∆_0(τ), with z_l given by (6.20). We notice that the first term diverges for the trajectories with "natural periodicity," because ∆_0(τ) then changes sign, which is not allowed if the phase fluctuations have been eliminated; in any case, such trajectories do not contribute to the partition function at all. If however τ_0 = β/2 and ωβ < π (i.e., the order parameter is positive ∀τ ∈ [0, β]), we find immediately that the contribution to the action is S[ω/cos(ω(τ − β/2))] = (2ω/g) tan(ωβ/2) + S_FG, (6.21) where the last term is the action of a non-interacting Fermi gas [cf. Eq. (6.20)] and the first one is the energy cost of having a fluctuating order parameter of the form (6.16). Since there is no way to compensate this energy cost by adjusting the "negative-energy" term associated with z_l, we conclude that the chosen "trajectory" of ∆_0(τ) is a low-probability event in a low-temperature superconducting state. Note that since ω must be kept smaller than πT, instantons of this type will completely die out in the ground state, but they may appear as classical excitations at higher temperatures, including even in the normal state.
2. Functional Determinant on a Class of Elliptic Functions
Sec. VI C 1 shows that while a tour-de-force derivation of the density matrix for a non-trivial fluctuating order parameter is not impossible, it is generally quite complicated, even though the actual result for the functional determinant may look very simple. To understand the origin of the "mysterious" simplification of the complicated density matrix (6.19) to the very simple-looking trace (6.20), we will rely on the recent work of Yuzbashyan 12 and related work of Dzero and the author.25 Let us consider a particular quantum trajectory of the order parameter ∆_0(τ) and analytically continue this function from τ ∈ R to complex values z = τ + it ∈ C. One can formulate a sensible general framework in terms of a z-dependent S-matrix, Ŝ(z), but we will consider here only its analytically continued form on the real-time axis (which is equivalent to a Feynman-Wick rotation at T = 0). Let us now use this analytical continuation to relate the Bogoliubov-de Gennes equation (6.1) for the density matrix in imaginary time to the corresponding Schrödinger equation (6.22) for the S-matrix in real time t, where ∆_0(it) and ĥ_l(it) are symbolic notations for the dynamic order parameter and the Hamiltonian properly analytically continued to real times, respectively. The Hamiltonian, ĥ_l(it) = Re ∆_0(it) τ̂^x + Im ∆_0(it) τ̂^y + ξ_l τ̂^z = (b_l · τ̂)/2, belongs to the two-dimensional representation of the Lie algebra su(2), while the unitary S-matrix belongs to the two-dimensional representation of the SU(2) group. Note that there is no need to specify the dimensionality of the matrix representation of the operators in Eq. (6.22), which can be viewed as an equation of motion in the abstract group, i.e., ĥ_l(it) ∈ su(2) ∼ so(3) and Ŝ(t) ∈ SU(2), ∀t. We can also write an associated Schrödinger equation (6.23) for the spinor wave function. Just as in Sec. VI A, we can argue that if we know a particular solution to Eq. (6.23) that satisfies an arbitrary initial condition, we can always construct another linearly independent solution with the help of a time-reversal operation, which now reads (−iτ̂^y) Ψ*(t) (iτ̂^y), and hence the full S-matrix can be constructed using one particular solution to (6.23). Let us note here that a key motivation for studying the analytically continued form of the Bogoliubov-de Gennes Eqs. (6.1) and (6.2) is that if we know the solution to Eq. (6.22), or just a particular solution to Eq. (6.23), which has the familiar form of a Schrödinger equation, we should be able to analytically continue the result back to imaginary time [so that in some sense Ŝ(−iτ) → ρ̂(τ)] and therefore calculate the functional determinant. Now let us narrow down the class of functions considered, from arbitrary order parameters (analytically continued from the values τ ∈ [0, β] to complex arguments), ∆_0(z), to those that are periodic along the it-axis, i.e., ∆_0(z) = ∆_0(z + iβ_t), where β_t ∈ R is the corresponding period. Let us also assume that the original fluctuation is a meromorphic function that is periodic along the τ-axis as well, i.e., ∆_0(z) = ∆_0(z + β_τ), either due to a natural periodicity with the period commensurate with the inverse temperature, β_τ = β/n, or with some other period unrelated to the fact that the relation ∆_0(0) = ∆_0(β) must hold (see, e.g., the previous Sec. VI C 1, where depending on the parameters both cases can be realized). This narrower class of functions represents elliptic functions,28 with primitive periods β_τ and iβ_t, which are arbitrary constants at this point.
If ∆_0(z) is an elliptic function with the primitive periods (β_τ, iβ_t) as defined above, then the Schrödinger equation (6.22) describes a spin-1/2 under a perturbation periodic in time, and so let us look for a solution of (6.22) in a Bloch-Floquet-type form, Ŝ(t) = Ŝ_p,+(t) e^{Et} + Ŝ_p,−(t) e^{−Et}, where Ŝ_p,±(t) = Ŝ_p,±(t + β_t) is a periodic 2×2 matrix and E is a constant. On the other hand, we could have used the same Floquet argument for the original Bogoliubov-de Gennes equation to argue that the "density matrix" may be written in a similar form, ρ̂(τ) = ρ̂_p,+(τ) e^{Eτ} + ρ̂_p,−(τ) e^{−Eτ}, where now ρ̂_p,±(τ) = ρ̂_p,±(τ + β_τ) is a "periodic part of the density matrix" [cf. Eq. (6.19)], and E is the same as before. These arguments suggest a generalization in the form of a solution Ŝ(z) = Ŝ_p,+(z) e^{Ez} + Ŝ_p,−(z) e^{−Ez}, where z = τ + it and Ŝ_p,±(z) is an "elliptic matrix function" of a complex argument, z ∈ C, such that Ŝ_p,±(z) = Ŝ_p,±(z + nβ_τ + imβ_t), ∀n, m ∈ Z.
To get a more useful expression for the "partition function" z, let us now focus on a dependence ∆_0(it) slow enough that no level crossings take place. Consider a pseudospin, described by the spinor Ψ_l(t) of Eq. (6.23), evolving from an initial state that is an eigenstate of the Hamiltonian at t = 0; e.g., we can take it to represent a pseudospin moment opposite to the "initial magnetic field," b_l(0) = (∆_0(0), 0, ξ), which lies in the XZ-plane. The adiabaticity assumption immediately tells us that the quantum-mechanical phase "collected" after the completion of a single cycle, t: 0 → β_t, of the magnetic field b(t) is given by the expression in the exponential below: Ψ_l(β_t) = Ψ_l(0) e^{−i(γ_Berry + γ_dyn)}, (6.25) where γ_Berry is the Berry phase, determined by the flux through the area A_b swept by b(t) over the cycle t: 0 → β_t [Eq. (6.26)], and γ_dyn is the dynamical phase given by Eq. (6.27), whose integrand can easily be recognized as the "instantaneous eigenenergy" of the corresponding spin Hamiltonian (which in turn represents the energy of an excitation in a superconductor subject to such a fluctuation). Note that we could have taken the other initial condition, corresponding to a pseudospin pointing along the "initial magnetic field," which would have evolved into a state with a dynamical phase that is the complex conjugate of (iγ_dyn) above. We can recall now that since we are interested in the S-matrix modulo its periodic part, we can construct the remainder out of the two phase factors, which therefore gives exactly the desired (Eβ_t) that appears in Eq. (6.24).
If we now make a further simplifying assumption and consider a fluctuation ∆_0(τ − τ_0) described by an even function of its argument (see the previous Sec. VI C 1), such that the analytically continued ∆_0(it) is also real-valued, we immediately find that the effective "magnetic field" simplifies to b_l(t) = (∆_0(it), 0, ξ). Therefore, the area swept by any such dependence in the parameter space is zero, and the Berry phase (6.26) vanishes identically as well. Note that this conclusion would also hold if we assumed the order parameter to be an odd function of (τ − τ_0), such that it leads to a purely imaginary ∆_0(it) (let us recall that phase fluctuations in imaginary time have been eliminated). Under these assumptions, we can identify the factor arising from the "non-periodic" part of the S-matrix/"density matrix" with the dynamical phase, (Eβ_t) = γ_dyn, to obtain the result (6.28) for the functional determinant, where we have restored the index l that parameterizes the sites of the original Richardson model.
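Assuming a reconstructed form of (6.28), z_l ≈ 2 cosh[(β/β_t) γ_dyn,l], together with γ_dyn,l = ∫_0^{β_t} sqrt(ξ_l² + ∆_0(it)²) dt for a real drive (both forms are inferred from the surrounding discussion and should be treated as assumptions), the determinant for a given β_t-periodic drive can be evaluated in a few lines; the drive below is purely illustrative:

```python
import numpy as np
from scipy.integrate import quad

# Adiabatic evaluation of the functional determinant for a real,
# periodic analytically-continued drive.  Both the form of z_l and the
# drive are illustrative assumptions, not formulas quoted verbatim.
xi, beta, beta_t = 0.4, 6.0, 2.0
drive = lambda t: 0.6 + 0.2 * np.cos(2 * np.pi * t / beta_t)  # hypothetical

gamma_dyn = quad(lambda t: np.hypot(xi, drive(t)), 0, beta_t)[0]
z = 2 * np.cosh(beta / beta_t * gamma_dyn)
print("gamma_dyn =", gamma_dyn, "   z_l =", z)
```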
3. The Adiabaticity Requirement
Eq. (6.28) appeared after a chain of rather general arguments, which however included a number of additional assumptions. Let us reiterate these assumptions: we have assumed that ∆_0(τ) can be analytically continued from τ ∈ [0, β] ⊂ R to the complex plane C, and that the resulting function ∆_0(z) is an elliptic function with two primitive periods (β_τ, iβ_t) such that β/β_τ ∈ Z. We have further assumed that there exists τ_0 ∈ R such that the order parameter is either an odd or an even function of its argument, ∆(τ − τ_0), which ensures that the Berry phase vanishes. Finally, we have assumed that the time dependence of the analytically continued order parameter, ∆_0(it), is "slow enough" that no level crossings take place.
The last assumption is the most restrictive, and one may wonder about the accuracy and domain of applicability of the conjecture (6.28), in particular about the meaning of "slow enough" in the adiabaticity assumption. This question was addressed by Gangopadhyay, Dzero, and the author in Ref. [25] in the context of two-level-system dynamics in superconducting qubits. Mathematically, Ref. [25] presented an extended class of exact solutions associated with elliptic functions describing the driving field (which represent a generalization of the anomalous solitons discussed in the amazing paper of Yuzbashyan, Ref. [12]). Here we reiterate only the key facts relevant to our paper: the functional dependencies of ∆_0(it) given by Eq. (6.29) admit exact explicit solutions of the associated Eqs. (6.23) and (6.22), where the "magnetic field" in the cross product is exactly as in Sec. VI C 2: b_l(t) = (∆_0(it), 0, ξ_l), with ∆_0(it) given by Eq. (6.29). Therefore, the equations of motion (6.23) in the SU(2) group have been reduced to the equations of motion (6.30) on a sphere, S². Let us recall that S² = SU(2)/U(1); therefore Eq. (6.30) carries less direct information than the original Schrödinger equation. It turns out that the "missing part" is exactly the sought-after overall time-dependent U(1) phase of the wave function, which we expect to reduce to the sum of the Berry phase and the dynamical phase discussed in the previous Sec. VI C 2 [in the case of the dependence (6.29), the Berry phase is zero]. Using the Ansatz proposed by Yuzbashyan in Ref. [12], one can find 25 the solutions M(t) to Eqs. (6.30), expressed in terms of elliptic functions with the same periodicities as the elliptic function (6.29), in accordance with the suggested generalization of the Floquet argument to elliptic functions discussed in the previous Sec. VI C 2. Using these exact solutions, one can construct the full S-matrix describing the motion in SU(2). This can be done by parameterizing the components of the spinor in Eq. (6.23) in terms of amplitudes a_{↑/↓}(t), a relative phase θ(t), and a common phase γ(t). One can see that while the amplitudes and the relative phase are directly related to the "instantaneous" direction of the Bloch vector, a_{↑/↓}(t) = 1 ± 2M^z(t) and θ(t) = arctan[M^y(t)/M^x(t)], the common phase γ(t) depends on the trajectory in a non-local way, and to determine it one has to go back to the Schrödinger equation (6.23). This indeed can be done, and the phase γ can be found (this part has to be done numerically for generic parameters). The main conclusion of this analysis is that if Ω_a is small, this exact phase is essentially indistinguishable from the dynamical phase described by Eq. (6.27) [however, for any non-zero Ω_a, Eq. (6.28) is not exact]. As Ω_a increases up to a critical value, Ω_a^(cr), level crossings start taking place, and the absolute value of the quantal phase is suppressed compared to the adiabatic result. In all cases considered, the adiabatic quantal phase is either equal to or larger than the exact phase, and therefore it can be viewed as an estimate from above. Ref. [25] also indicates that for all Ω_a < Ω_a^(cr) the adiabaticity condition is satisfied, and in this case the compact expression (6.28) for the functional determinant can be used. This discussion was initially motivated by the "paradox" found in the fully solvable case described in Sec. VI C 1. We remind the reader that the contribution to the action of a pseudospin moving in the presence of the non-trivial fluctuating ∆_0(τ) described by Eq.
(6.16) turned out to be completely independent of the parameters of this fluctuation and was found to be identical to the corresponding contribution expected in a Fermi gas, i.e., in the absence of any order parameter whatsoever, ∆(τ) ≡ 0. Another part of the "paradox" was that the full solution for the "density matrix," ρ̂(τ), was very cumbersome (6.19), and the simplification occurred only at the final stage of calculating its trace, which led us to Eq. (6.20) for Tr ρ̂_l(β) = 2 cosh(ξ_l β).
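The elliptic drives discussed above can be evaluated and propagated with standard tools. The sketch below uses a hypothetical stand-in of the form D_0 + A sn(Ωt, m) for the drive of Eq. (6.29) (whose exact form is not reproduced here) and integrates the Bloch equation (6.30); the conserved spin length provides a basic consistency check.

```python
import numpy as np
from scipy.special import ellipj
from scipy.integrate import solve_ivp

# Pseudospin dynamics under an elliptic drive.  The functional form of
# Delta_0(it) below is a hypothetical stand-in, used only to show how
# such drives are evaluated and fed into dM/dt = b x M,
# b = (Delta_0(it), 0, xi).
xi, D0, A, Omega, m = 0.5, 0.8, 0.3, 1.1, 0.6

def delta(t):
    sn, cn, dn, ph = ellipj(Omega * t, m)
    return D0 + A * sn

def rhs(t, M):
    b = np.array([delta(t), 0.0, xi])
    return np.cross(b, M)

sol = solve_ivp(rhs, (0, 40), [0.0, 0.0, -1.0], rtol=1e-9, atol=1e-9)
print("|M| drift:", abs(np.linalg.norm(sol.y[:, -1]) - 1.0))
print("final M:", sol.y[:, -1])
```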
These paradoxes can now be resolved with the help of Eqs. (6.28) and (6.29). One can consider various limits of the function (6.29); in particular, the limit Ω_a → 0 leads to the expression (6.31) for the dynamic order parameter considered previously by Yuzbashyan 12 and Levitov et al.10 On the other hand, the limit κ → 1 gives sn(u, 1) = tanh u, and Eq. (6.29) reproduces the anomalous soliton of Ref. [12]. If we now take both limits, i.e., κ → 1 and Ω_a → 0, we find ∆_0(it) = ω/cosh(ωt). (6.32) An analytical continuation of this function to imaginary time yields ∆_0(τ) = ω/cos(ωτ), which is exactly the soliton studied in Sec. VI C 1. Note that this soliton is not an elliptic function but a circular function, because it has only one incommensurate period in the τ-"direction." However, it is a limiting case of a proper elliptic function, with its period β_t taken to infinity. Therefore, per the arguments of Sec. VI C 2 and using Eq. (6.28), we are to write the "partition function" as in Eq. (6.33). Now it is easy to see the origin of the paradoxical result (6.20): since cosh^{−2}(ωt) decays exponentially with increasing t, the soliton term in Eq. (6.33) vanishes in the (β_t → ∞) limit and does not contribute to the integral. We therefore recover the correct result (6.20)! Now that the origin of the result (6.20) in the exactly solvable case is understood, we can use the exact solution to get another insight into the range of applicability of Eq. (6.28). The list of assumptions for the validity of Eq. (6.28) includes that β be a natural rather than an accidental period of ∆_0(τ) [enforcing this periodicity via a periodic repetition of ∆_0(τ) from τ ∈ [0, β] → R would not necessarily work, because the resulting periodically continued function may not have the desired analytical properties, and any arguments based on them would become unreliable]. Hence, we do not expect Eq. (6.28) to work in the accidentally periodic cases, but the exactly solvable example (6.16), with τ_0 = β/2 and ∀ω ∈ R, shows that at least in this particular case Eq. (6.28) does work correctly. It remains unclear at this stage whether this result is an artefact of the particular dependence (6.16) or rather an indication that Eq. (6.28) applies to a wider class of elliptic functions and their limits with accidental β-periodicity.
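The collapse of the soliton contribution in the (β_t → ∞) limit can be verified directly, using the reconstructed form of the determinant discussed above (an assumption, as before):

```python
import numpy as np
from scipy.integrate import quad

# For Delta_0(it) = omega/cosh(omega*t), the time-averaged instantaneous
# energy tends to the bare xi as the period beta_t grows, so the
# conjectured 2 cosh[(beta/beta_t)*gamma_dyn] collapses to 2 cosh(xi*beta).
xi, omega, beta = 0.4, 0.7, 5.0

def energy(t):
    s = omega / np.cosh(min(omega * t, 700.0))   # avoid cosh overflow
    return np.hypot(xi, s)

for beta_t in [5, 20, 100, 500]:
    gamma = quad(energy, 0, beta_t)[0]
    print(f"beta_t = {beta_t:5.0f}   (beta/beta_t)*gamma = "
          f"{beta / beta_t * gamma:.6f}")
print("xi*beta =", xi * beta)
```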
D. Contribution of Elliptic Trajectories to the Partition Function
Now let us summarize our findings (conjectures) and present the contribution to the partition function of those specific elliptic trajectories, ∆_0(τ) → ∆_0(z) ∈ Ell, for which our expression for the functional determinant applies [Eq. (6.34)], where, just as before, ∆_0(it) corresponds to an analytically continued order parameter that is an elliptic function with the primitive periods (β_τ, iβ_t), or a limit of such an elliptic function. In Eq. (6.34) we have also included the Berry-phase contribution, which in general should be present, but whenever ∆_0(it) is either purely real or purely imaginary, the Berry phase vanishes identically.
We have already verified that the action in Eq. (6.34) reproduces the exact results in certain exactly solvable cases. Note here that the classic BCS result (4.9) is certainly reproduced exactly as well, because a constant function represents a trivial elliptic function, ∆_0(τ) ≡ ∆_0(it) ≡ ∆ = const, and so we can use Eq. (6.34) and, after an integration, recover the correct "partition function" of a spin in a constant magnetic field, z_l^(0) = 2 cosh(E_l β), which corresponds to the BCS mean field. One can also argue, based on (6.34), that the classical mean field is indeed a true minimum on the space of these elliptic functions, Ell. Let us consider an order parameter ∆_0(τ) = ∆ + δ∆(τ), where δ∆(τ) is in some sense small, and expand the action in Eq. (6.34) assuming that δ∆(τ) does not induce a Berry phase; the corresponding correction to the relevant part of the action follows (we consider the low-temperature limit, β → ∞). One may wonder whether one can use the analytical properties of elliptic functions to bring the integral that appears in Eq. (6.34) to the τ-axis in a similar way, i.e., to the form ∫_0^β sqrt(ξ_l² + |∆_0(τ)|²) dτ. We know, however, that this substitution cannot generally be correct, because some non-trivial solutions that we have analyzed manifestly contradict this assumption. However, we have proven above that for all relevant fluctuations in the immediate "vicinity" of the classic BCS mean field the substitution above would work. One can explicitly verify that an interesting property of this (generally incorrect) substitution is that the variational analysis of the corresponding functional S̃, i.e., the constraint δS̃/δf_0 = 0, immediately selects the classical mean field f_0(τ) ≡ ∆_BCS = const as the only saddle point. Hence, one can use the expression for S̃ above to determine the contributions to the partition function due to Gaussian quantum fluctuations in the vicinity of the BCS mean field (for simplicity, we consider the low-temperature limit only). The result is not unexpected and is quite "boring," taking the form (6.36) for the usual BCS superconductor (i.e., when the parameter space L is momentum space), where F_BCS is the energy of the classical mean-field BCS state, T = β^{−1}, and V is the actual physical volume; hence the contribution of these mesoscopic fluctuations to observables in a bulk system is negligible. It is alluring to attempt a variational analysis of the action in Eq. (6.34) to see if there could exist other saddle points apart from the classical mean field. However, the variational analysis would be problematic, because the action in Eq. (6.34) contains "apples and oranges," that is, two functionals of different functions, ∆_0(τ) and ∆_0(it), which are related in a non-trivial way via an analytical continuation. However, to make the case that non-linear soliton contributions are important (we should distinguish here between instantons, which are trajectories that connect classical minima,29 and the, at this stage fictitious, new minima, which we dub solitons), one does not necessarily need to find true quantum minima; finding any quantum trajectory that corresponds to an energy smaller than the mean-field one would suffice. Since the first term is always positive and the second one is always negative, we are to look for ways to minimize ∆_1 and maximize ∆_2. In the classical BCS mean field, ∆_1 = ∆_2 and there is no room for any additional variation, but in the functional (6.34) such additional variations are in principle allowed.
One can check that the analytical continuation of the solitons of the non-equilibrium BCS problem with natural periodicity does not produce a "good" solution, at least at T = 0. If, on the other hand, we take Eq. (6.34) as a given functional and consider various trial functions without attempting to prove that they actually fall within the formal domain of validity of the Ansatz, we immediately find a variety of dependencies that do "better than the classical mean field" in terms of energetics. However, these "results" should be taken with a grain of salt, because there is no way to determine the actual range of validity of (6.34) beyond the dependencies associated with known integrable spin dynamics, and the most natural explanation for any accidental solution obtained within a trial-and-error analysis of (6.34) is that it is probably beyond the applicability of the method. On the other hand, there appears to exist no proof that such solitons are impossible. Looking at the rich structure of the functional determinant, it appears conceivable that there exist trajectories in the huge functional space spanned by ∆(τ) that do not just collapse into the mesoscopic term (6.36), but instead provide more noticeable contributions to the action. A numerical analysis of some non-linear solutions, guided by the analytical result (6.34), will be published elsewhere.
VII. SUMMARY
This paper presents an analysis of non-perturbative fluctuation phenomena in the pairing model. The key step of this analysis is a decomposition of the partition function of the Richardson model into spin and pseudospin terms. It is shown that such factorization is possible for a generalized Richardson model that includes both BCS and spin interactions. Even though we have not presented here a theory to describe both types of non-trivial interactions on an equal footing, the development of such an extension is straightforward 19 and would lead to a two-order-parameter theory expressed in terms of two "global" Hubbard-Stratonovich fields. 21 The analysis of phase fluctuations presented here indicates that these interactions will be competing and that such competition can be enforced via a commutation relation between the density and spin density and the overall phase. However, the present paper has focused on the analysis of a simpler canonical Richardson model that has no magnetic interactions. Even though the spin sector of this Richardson model is trivial, the existence of this (single-particle) sector is important for the possible existence of any non-trivial fluctuations of the amplitude of the order parameter in the low-temperature phase.
The main technical part of the paper involves a calculation of functional determinants that appear in the non-linear effective action expressed in terms of the Hubbard-Stratonovich field. We have shown that the Anderson pseudospin language and in particular its coherentstate path-integral representation lead to practically useful and physically intuitive insights into the structure of the functional determinants for non-trivial quantum trajectories. Therefore, this approach may be much preferable to the conventional Grassmann path integral method. We have shown that a functional determinant is given by the trace of a density matrix that satisfies the Bogoliubov-de Gennes equations in imaginary time. This leads to a differential equation of the Riccati type, which is directly related to the supersymmetric Schrödinger equation with superpotentials determined by the imaginary-time dynamics of the order parameter, ∆(τ ). Let us note here that a particularly promising direction for further research could be to use the WKB-method to treat the relevant differential equations.
In Secs. VI C 2 and VI D, we proposed an explicit, compact expression for the functional determinant for a certain large class of elliptic functions and the arguments that led us to the conjecture (6.34) involved an analytical continuation of the Bogoliubov-de Gennes equations in imaginary time to the real-time axis (or more generally to the complex plane, z = τ + it), such that the problem could be mapped onto that of a two-level system in a time-dependent magnetic field determined by quantum dynamics. This is a known, very complicated problem, but we have taken advantage of some recent exact results and our recent work on an extension of these results to analyze a family of exact solutions that are associated with elliptic functions. These results have led us to Eq. (6.34), which provides a useful intuition for the effective action of the model and suggests that the functional determinant, that is often treated as a thing-in-itself, can actually be calculated and is related in a very straightforward way to the dynamical and Berry phases of a pseudospin moving in a "magnetic field," determined by the quantum dynamics of a fluctuation. Let us reiterate however that a formal justification of our solution applies only to adiabatic dependencies on the specific class of elliptic functions with the periods along the imaginary and real-time axes.
An important open question is whether the considerations presented in this paper can be generalized to other types of functions ∆(τ) that are not associated with any elliptic functions leading to integrable pseudospin dynamics. A particularly promising avenue here could be to use the reverse-engineering approach for constructing exact solutions described in Refs. [25] and [30], which effectively implies a change of variables from the Hubbard-Stratonovich field, ∆(τ), to the generators, Φ(t), that govern the dynamics of the S-matrix, Ŝ(t) = exp[−(i/2) Φ(t) · σ̂], satisfying the proper Bogoliubov-de Gennes equations. It would also be interesting to see whether chaotic, rather than integrable, dynamics 31 can be realized under any circumstances in this model. Quite generally, such dynamics, if at all possible, are not expected to lead to energetically favorable contributions to the action, because the "trivial" term that assigns an energy penalty to any non-zero order-parameter configuration corresponds to the average of |∆|², while the second, non-trivial term that favors superconductivity contains contributions from different sites; if the dynamics exhibit "chaotic behavior" in the parameter space L, the signs in the second term would fluctuate strongly from site to site and are expected to average out to zero instead of lowering the corresponding energy. This argument supports the approach of using regular elliptic trajectories that describe a synchronized collective behavior of the pseudospins. Another open question relates to the role of the Berry phase in the functional determinant (6.34). All exact solutions we have analyzed [that are sensible for describing thermodynamics, where the constraint ∆(0) = ∆(β) must be imposed] have a trivial (zero) pseudospin Berry phase. This, however, represents a limitation of our ability to solve Eqs. (6.1), rather than an indication that Berry-phase terms are unimportant.
Finally, we reiterate the main question posed in this paper and the arguments of the last Sec. VI D, which suggest that non-perturbative soliton trajectories that co-exist with the classical mean field are not impossible; in fact, the rich general structure of the functional determinant suggests that the construction of such quantum fluctuations may be possible at least in some modification of the model (which may involve interactions for the real spins). Generally, the right question to ask is whether there exists any fermion model that exhibits breaking of a continuous symmetry and whose low-temperature phase allows non-perturbative soliton solutions for a component of the Hubbard-Stratonovich field that is normally considered "massive." In other words, can the non-linear effective action for the Hubbard-Stratonovich field develop any other minima apart from the classical mean field? A proof that no such solutions exist would confirm the fundamentals of classical spontaneous symmetry breaking and would mathematically imply that there is no need to study complicated non-linear actions at T = 0 [such as the non-linear effective action in Eq. (2.6)] and that in an infinite system they should cross over to a functional delta-function of the type e^{−S[∆(τ)]} ∝ δ(|∆(τ)| − ∆_MF); cf. Eq. (6.36). On the other hand, even a single example of an order-parameter trajectory that is energetically favorable relative to the classical mean field would seriously question this fundamental conjecture. We know that any such trajectory, if at all possible, cannot be anywhere near the classical mean field (in the functional space of allowed fluctuations), but the possibility of a non-perturbative solution not adiabatically connected to the mean field has certainly not been ruled out.
"year": 2010,
"sha1": "fa74ee833fa30a3b4d9ffdb88986957ebf292f23",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1003.2237",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "fa74ee833fa30a3b4d9ffdb88986957ebf292f23",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
52493817 | pes2o/s2orc | v3-fos-license | Actinides , accelerators and erosion
Fallout isotopes can be used as artificial tracers of soil erosion and sediment accumulation. The most commonly used isotope to date has been 137Cs. Concentrations of 137Cs are, however, significantly lower in the Southern Hemisphere, and furthermore have now declined to 35% of original values due to radioactive decay. As a consequence the future utility of 137Cs is limited in Australia, with many erosion applications becoming untenable within the next 20 years, and there is a need to replace it with another tracer. Plutonium could fill this role, and has the advantages that there were six times as many atoms of 239+240Pu as of 137Cs in fallout, and any loss to decay has been negligible due to the long half-lives of the plutonium isotopes. Uranium-236 is another long-lived fallout isotope with significant potential for exploitation as a tracer of soil and sediment movement. Uranium is expected to be more mobile in soils than plutonium (or caesium), and hence the 236U/Pu ratio will vary with soil depth, and so could provide an independent measure of the amount of soil loss. In this paper we discuss accelerator-based ultra-sensitive measurements of plutonium and 236U isotopes and their advantages over 137Cs as tracers of soil erosion and sediment movement.
Introduction
Soil erosion in Australia is recognized as a major ongoing issue, and its mitigation by sustainable land management practices will be an ongoing process into the future. The effects of climate, for instance, will directly impact soil stability, and in traditional agricultural areas will need to be managed alongside the increasing pressure to provide enough food for our expanding population. The need to produce more food will require changes in farming operations and is also likely to force expansion of agriculture into areas not currently used. It is likely that this will be into areas with soils of poorer structure and less resistance to erosion. The ability to estimate decadal-scale erosion losses in association with these changes in land management practice will therefore be important in the coming years.
Radio-isotopic tracers, particularly fallout 137Cs, have been used for many years to study soil and sediment movement, and in conjunction with modelling, provide a means to assess the effectiveness of individual management practices. Assessing the effects of land management on soil erosion is however a difficult process, because of the spatial and time scales involved. Significant reliance is therefore placed on modelling, and isotopic tracers provide a valuable tool for testing and developing such models. The total 137Cs activity dispersed by the atmospheric nuclear weapons tests of the 1950s and 1960s was of the order of 1000 PBq [1]. This, and the ability to determine the 137Cs concentration readily by counting the 662 keV γ-ray emitted when the 137Cs (t1/2 = 30 a) decays, have allowed significant use of 137Cs as a tracer of soil and sediment transport that has occurred in the time since fallout deposition. In the Southern Hemisphere, however, fallout levels were significantly lower compared to those in the north, and this, combined with the steady decline in 137Cs activity as a result of radioactive decay, is beginning to limit the analytical precision of the measurements. As a consequence, many 137Cs erosion applications will become untenable within the next ~20 years. This will constrain the future utility of fallout Cs as a tracer, and there is a real need for an equivalent replacement.
In addition to 137Cs, the plutonium isotopes 239,240Pu were also distributed around the globe as a result of the nuclear weapons tests. The total released activity was an order of magnitude less than that of Cs, and plutonium fallout from the tests was not widely monitored; however, the historical fallout pattern is believed to show a structure similar to that of caesium [2]. In terms of its suitability as a replacement for 137Cs, there is growing evidence that Pu and Cs display similar particle-reactive behaviour in terrestrial environments [3][4][5]. Furthermore, the long half-lives of 239Pu (t1/2 = 24,110 a) and 240Pu (t1/2 = 6561 a) have resulted in the radioactive decay of only a negligible fraction of the deposited inventory. The technique of Accelerator Mass Spectrometry (AMS) counts atoms directly, rather than measuring their radioactive decay. This is of note, because the nuclear tests yielded over six times as many atoms of 239+240Pu as 137Cs atoms. AMS plutonium measurements also confer a number of advantages over caesium measurements [6], including reduced counting times, the near absence of background interference and improved statistical precision. A further significant advantage is that the 240Pu/239Pu ratio from local nuclear weapons test sites can differ from the global average. This can permit assessment of the significance of any "local" contribution to the inventory [7]. This information cannot be gained from 137Cs measurements alone. In addition, the AMS technique also offers the opportunity to investigate a new tracer complementary to plutonium for the assessment of soil loss and movement: fallout 236U.
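The numbers behind these statements are easy to reproduce. The sketch below uses the half-lives quoted in the text; the nominal deposition date is an assumption, and the activity-to-atoms conversion N = A · t1/2 / ln 2 shows why the long-lived Pu isotopes dominate the fallout atom inventory:

```python
import numpy as np

# Half-lives from the text; the deposition date is a nominal assumption.
yr = 3.156e7                          # seconds per year
t_half = {"Cs137": 30.0, "Pu239": 24110.0, "Pu240": 6561.0}

# Decay of 137Cs since a nominal mid-1960s deposition:
t = 2012 - 1965
print("137Cs remaining:",
      np.exp(-np.log(2) * t / t_half["Cs137"]))   # ~0.34, i.e. ~35%

# Atoms per becquerel scale with the half-life, N = A * t_half / ln 2:
for iso, th in t_half.items():
    print(iso, "atoms per Bq:", th * yr / np.log(2))
```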
Plutonium measurements
Figure 1 shows a schematic representation of the AMS system at the Australian National University (ANU) as configured for Pu measurements. Soil and sediment samples are prepared at the ANU for AMS analysis based on the techniques described in [4]. This entails addition of a 242Pu spike to the homogenised soil or sediment material; the plutonium is then leached from the sample with hot nitric acid and purified using ion exchange columns. The extracted material is then dispersed in an iron-oxide matrix and pressed into AMS sample holders.
Plutonium isotopes are currently measured with the AMS system using a "slow-cycling" technique, wherein the isotopes 242Pu, 240Pu and 239Pu are injected as PuO− ions into the 14UD accelerator in turn by adjusting the magnetic field in the mass-analysing (Inflection) magnet (figure 1). Details of the AMS system and measurement methodology relevant to actinide measurements are given in [6,8]. The 242Pu, 240Pu and 239Pu isotopes are normally measured for 1, 3 and 2 minutes, respectively, with the sequence being repeated as many times as is necessary, with three loops being typical. The isotopic ratios 239Pu/242Pu, 240Pu/242Pu and 240Pu/239Pu are determined from the data, and the 239Pu and 240Pu concentrations in the original sample material are deduced from the known amount of added spike. A typical 239Pu spectrum recorded with the ANU AMS system is shown in figure 2. In soils the concentrations of fallout isotopes vary with depth as a consequence of bioturbation, mechanical processes that move soil grains down the profile, and also possibly as a result of movement in solution. The shape of the depth profile is determined by factors such as soil type, organic matter content, porosity and by the chemical properties of the isotopic tracer (i.e. by how well the tracer binds to the soil particles). Caesium-137 and the plutonium isotopes bind tightly to soil particles, and there is good evidence that the soil depth profiles of both elements peak at or near the surface, at approximately the same depths (figure 3) [5]. There is also evidence that their concentrations can be well correlated in soils and sediments collected from different land-use types [e.g. 4,9]. The similar behaviour of Pu and Cs in soils, combined with the superior statistical precision of the AMS measurements, should permit Pu measurements to be referenced against existing 137Cs soil inventory data. Quantification of the effects of recent land use change, through such referenced data sets, could provide a means to assess changes in soil redistribution associated with modern changes in land management practices, and will also provide new data sets with which to test and validate soil and sediment re-distribution models.
Uranium-236
Comparison of 137Cs depth profiles from eroded sites with those from undisturbed reference sites has long been used to deduce the depth of soil material that has been lost to erosion. This technique assumes either that the soil characteristics at the reference site match those at the site under investigation, or that any differences in the characteristics are not significant factors in the loss of soil. The use of Pu isotopes in place of 137Cs is subject to the same assumptions.
Bomb-produced 236U also has a long half-life (t1/2 = 23.42 Ma), and potential as a tracer complementary to plutonium for soil loss and sediment movement studies. Uranium is expected to be more mobile in soils than plutonium, particularly in acidic soils [10]. This difference is likely to give rise to differently shaped depth profiles: 236U concentration depth profiles should peak at a greater depth than those for plutonium. This may get around a limitation of using Pu alone, namely that the loss of only a thin layer of surface soil can result in the loss of much of the Pu, which is concentrated in the top few cm of the soil. Determination of the actual amount of soil loss is then sensitive to the assumed depth profile of Pu close to the surface, and is a significant source of uncertainty. If the 236U/Pu ratio varies with depth, the ratio in the surface soil at an eroded site could provide a semi-independent measure of the amount of soil loss, as illustrated in the sketch below. The ratio could also provide a means to check how well the sampling and reference site soil characteristics match. Furthermore, the measurement of this ratio in transported sediment could also provide valuable information on the average depth from which sediment has been derived by the combination of surface wash and gullying processes.
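A toy calculation makes the proposed gauge explicit. Both depth profiles below are taken as simple exponentials, with relaxation depths and normalization that are entirely hypothetical; the point is only that the surface 236U/Pu ratio then shifts monotonically with the eroded depth d:

```python
import numpy as np

# Toy model of the 236U/Pu erosion gauge: exponential concentration
# profiles c(z) ~ exp(-z/h), with uranium penetrating deeper than
# plutonium.  Relaxation depths and normalization are hypothetical.
h_Pu, h_U = 2.0, 6.0        # cm, hypothetical relaxation depths
ratio0 = 1.0                # surface 236U/Pu ratio before erosion

def surface_ratio(d):
    # after losing a layer of thickness d, the new surface sits at z = d
    return ratio0 * np.exp(-d / h_U) / np.exp(-d / h_Pu)

for d in [0.0, 0.5, 1.0, 2.0, 4.0]:
    print(f"eroded depth d = {d:3.1f} cm   "
          f"surface 236U/Pu = {surface_ratio(d):.3f}")
```

In this picture the ratio increases with eroded depth, because the Pu concentration falls off faster than that of 236U, so a measured surface ratio maps onto an estimate of d that is independent of the absolute inventory.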
AMS is by far the most sensitive technique with which to measure fallout 236U in environmental samples, and the only technique currently capable of measuring 236U at the requisite sensitivity [11]. Furthermore, extraction and measurement of 236U and the Pu isotopes from the same environmental material has been demonstrated at the levels necessary [12,13]. There is, however, very little data in the literature regarding concentrations of fallout 236U in soils, and even less regarding the use of 236U as an environmental tracer, although first attempts have been reported recently [14,15].
Summary
In an era where Australian agriculture is undergoing changes in traditional management practices, and expansion into new areas likely to be more susceptible to erosion, AMS actinide measurements can provide a long-term solution to the declining sensitivity of 137Cs measurements used for the assessment of soil loss and sediment movement. The superior precision of the AMS plutonium measurements, compared to 137Cs gamma-spectroscopy data, and the opportunity for new methods of erosion analysis using the new fallout isotope 236U, will allow for continued improvement and validation of soil and sediment re-distribution models.
It is noteworthy that the 240Pu/239Pu ratio, routinely determined by the AMS analysis, can indicate the significance of any contribution to the fallout inventory arising from "local" nuclear weapons tests, a factor that cannot be taken into account from 137Cs measurements alone. This could be important in the Australian context, as candidate areas for the expansion of agriculture potentially fall within regions influenced by fallout from nuclear weapons tests carried out in Australia.
There is also significant potential to make use of the extensive database of 137Cs results that already exists, via data sets referenced to AMS plutonium measurements. Such data could be used to assess the efficacy of changes in land management methods focused on minimising soil loss.
Fig. 1. Essential features of the ANU AMS system for the measurement of Pu.
Figure 1 shows a schematic representation of the AMS system at the Australian National University (ANU) as configured for Pu measurements. Soil and sediment samples are prepared at the ANU for AMS analysis based on the techniques described in [4]. This entails addition of a 242 Pu spike to the homogenised soil or sediment material, and the plutonium is then leached from the sample with hot nitric acid and purified using ion exchange columns. The extracted material is then dispersed in an iron-oxide matrix and pressed into AMS sample holders. Plutonium isotopes are currently measured with the AMS system using a "slow-cycling" technique wherein the isotopes 242 Pu, 240 Pu and 239 Pu are injected as PuO- ions into the 14UD accelerator in turn by adjusting the magnetic field in the mass-analysing (Inflection) magnet (figure 1). Details of the AMS system and measurement methodology relevant to actinide measurements are given in [6,8]. The 242 Pu, 240 Pu and 239 Pu isotopes are normally measured for 1, 3 and 2 minutes, respectively, with the sequence being repeated as many times as is necessary, with three loops being typical. The isotopic ratios 239 Pu/ 242 Pu, 240 Pu/ 242 Pu and 240 Pu/ 239 Pu are determined from the data, and 239 Pu and 240 Pu concentrations in the original sample material deduced from the known amount of added spike. A typical 239 Pu spectrum recorded with the ANU AMS system is shown in figure 2.
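The isotope-dilution arithmetic implied in the last step can be sketched as follows; the spike amount, measured atom ratios, and sample mass below are hypothetical placeholders rather than values from an actual measurement.

# Hypothetical isotope-dilution calculation: the 239Pu concentration follows from
# the measured 239Pu/242Pu atom ratio and the known number of 242Pu spike atoms added.
AVOGADRO = 6.02214076e23

sample_mass_g = 20.0     # mass of homogenised soil leached (hypothetical)
spike_242pu_pg = 5.0     # added 242Pu spike, picograms (hypothetical)
ratio_239_242 = 0.8      # measured 239Pu/242Pu atom ratio (hypothetical)
ratio_240_239 = 0.18     # measured 240Pu/239Pu atom ratio (hypothetical)

spike_atoms = spike_242pu_pg * 1e-12 / 242.0 * AVOGADRO   # atoms of 242Pu added
atoms_239 = ratio_239_242 * spike_atoms                    # atoms of 239Pu in the sample
conc_239_atoms_per_g = atoms_239 / sample_mass_g
conc_240_atoms_per_g = ratio_240_239 * conc_239_atoms_per_g

print(f"239Pu: {conc_239_atoms_per_g:.3e} atoms/g, 240Pu: {conc_240_atoms_per_g:.3e} atoms/g")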
Fig. 2. Typical 239 Pu spectrum obtained from a soil sample. The ~20 g soil sample was collected from the top 0–5 cm of the soil profile and yielded a 239 Pu peak of over 600 counts in a 2 minute collection period. In soils the concentrations of fallout isotopes vary with depth as a consequence of bioturbation, mechanical | 2017-09-27T10:05:54.162Z | 2012-10-01T00:00:00.000 | {
"year": 2012,
"sha1": "bc62930679b4fa8bfbee4ee149efa42b523d8570",
"oa_license": "CCBY",
"oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2012/17/epjconf_hias2012_01002.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1e7ed89004ab881cfb4ca20fcf3972baf8e39819",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
199662391 | pes2o/s2orc | v3-fos-license | Incretin Hormones: The Link between Glycemic Index and Cardiometabolic Diseases
This review aimed to describe the potential mechanisms by which incretin hormones could mediate the relationship between glycemic index and cardiometabolic diseases. A body of evidence from many studies suggests that low glycemic index (GI) diets reduce the risk for type 2 diabetes and coronary heart disease. In fact, despite the extensive literature on this topic, the mechanisms underlying the unfavorable effects of high-GI foods on health remain not well defined. The postprandial and hormonal milieu could play a key role in the relationship between GI and cardiovascular risk. Incretin hormones, glucagon-like peptide-1 (GLP-1) and glucose-dependent insulinotropic polypeptide (GIP), are important regulators of postprandial homeostasis by amplifying insulin secretory responses. The response of GIP and GLP-1 to GI has been studied in depth, notably in several studies on isomaltulose, which has been taken as an ideal model to investigate the kinetics of incretin secretion in response to foods' GI. In addition, extrapancreatic effects of these incretin hormones were also recently observed. Emerging from this have been exciting effects on several targets, such as body weight regulation, lipid metabolism, white adipose tissue, cardiovascular system, kidney, and liver, which may importantly affect the health status.
Introduction
High-carbohydrate diets, especially from refined sources such as white rice and white bread, starches, and added sugars, are associated with an increased risk of cardiovascular events and all-cause mortality [1].
It has become clear that not all carbohydrates are the same and that the post-meal rise in glucose levels is mainly influenced by carbohydrate quality and other food-related compounds rather than by carbohydrate quantity per se.
The glycemic index (GI) is a measure of the blood glucose increase elicited by foods, computed from the incremental area under the postprandial plasma glucose curve of a test food, expressed as a percentage of that of an equal amount (typically 50 g, at times 25 g) of a reference carbohydrate (e.g., glucose or white bread) [2]. Even if its relevance continues to be an object of debate, GI represents a property of the food itself, precisely defined by the International Organization for Standardization (ISO) method 26642:2010, and a methodology sufficiently valid and reproducible for discriminating foods based on their glucose response [2].
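As a rough illustration of this definition, the snippet below computes a GI value from incremental areas under the curve (iAUC) using simple trapezoidal integration of above-baseline glucose excursions. The time points and glucose values are invented, and clipping negative increments to zero is a simplification of the full ISO calculation rules.

# Illustrative GI computation from postprandial glucose curves (placeholder data).
import numpy as np

def iauc(times_min, glucose_mmol_l):
    # Incremental AUC: area above the fasting (t = 0) baseline; negative increments ignored.
    glucose = np.asarray(glucose_mmol_l, dtype=float)
    incr = np.clip(glucose - glucose[0], 0.0, None)
    return np.trapz(incr, times_min)

t = [0, 15, 30, 45, 60, 90, 120]                  # minutes after the 50 g carbohydrate load
test_food = [5.0, 6.2, 7.0, 6.8, 6.3, 5.6, 5.1]   # hypothetical test-food glucose response
reference = [5.0, 7.5, 8.6, 8.0, 7.2, 6.0, 5.2]   # hypothetical glucose-reference response

gi = 100.0 * iauc(t, test_food) / iauc(t, reference)
print(f"GI = {gi:.0f}")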
Quality of carbohydrates as defined by GI can markedly affect the health status. In particular, consumption of high-GI foods has been described to increase the risk for noncommunicable diseases such as obesity, type 2 diabetes, and cardiovascular diseases [2,3]. Several prospective cohort studies suggest that low GI diets reduce the risk for type 2 diabetes and coronary heart disease [4][5][6].
Despite the extensive literature on this issue, the mechanisms underlying the unfavorable effects of high-GI foods on health still remain not well defined. The main role has historically been attributed to postprandial hyperglycemia, along with the related hyperinsulinemia [7]. However, the postprandial metabolic and hormonal milieu is a very complex pathophysiological state, as many factors beyond blood glucose and insulin levels may be implicated. In this context, insulin secretory responses initiated by post-meal hyperglycaemia are notably amplified by incretin hormones, a phenomenon called the "incretin effect" and attributed to the release of incretin hormones from specialized entero-endocrine cells elicited by absorption of oral glucose but not by intravenous glucose administration. Thus, the incretin hormones, glucagon-like peptide-1 (GLP-1) and glucose-dependent insulinotropic polypeptide (GIP), may be important regulators of postprandial homeostasis [8].
The relationship between GI and health status is not clear, but several observations support a role for inflammation as well as oxidative stress in the effects of high-GI foods. In particular, after a three-month reduction diet with a low glycemic index, an increased level of IGF-I with a cardioprotective effect was observed [9]. Moreover, a three-month intervention using a low glycemic index diet decreases inflammation by increasing the concentration of uric acid and the activity of glutathione peroxidase [10].
Since rates of glucose absorption, closely related to the GI of foods, trigger different patterns of incretin response, and a variety of pancreatic and extrapancreatic effects have been described for these gut-derived hormones, a role for GIP and GLP-1 in mediating GI effects on health status may reasonably be supposed. In this light, this review aimed to describe the potential mechanisms by which incretin hormones could mediate the relationship between glycemic index and cardiometabolic diseases.
Pancreatic Effects of Incretin Hormones
The majority of GIP-producing K cells are found in the proximal small intestine and the duodenum, while GLP-1-producing L cells are much more dispersed, with a gradient from a low density in the duodenum to a higher density in the ileum, but also in the colon and rectum [11].
In the fasting state, healthy human subjects have basal plasma concentrations for both incretins in the low picomolar range (10⁻¹² mol L⁻¹). These start to rise a few minutes after a meal, reaching a peak after approximately 1 h, and returning to basal levels after several hours [11]. Nutrients stimulating incretin secretion are glucose and other carbohydrates (including sucrose and starch), triglycerides, and some amino acids [12], whereas proteins are a comparatively weak stimulus.
Upon release, GIP and GLP-1 bind to specific G-protein coupled receptors present on β-cells, where they exert additive effects on the stimulation of insulin secretion in a glucose-dependent manner [13]. Incretin hormones always require a permissive degree of hyperglycaemia to exert their insulinotropic action. Activation of protein kinase A elicited by the binding of incretins to their respective receptors cannot initiate the release of preformed insulin secretory granules from β-cells without the closing of potassium channels, depolarization, and calcium ion influx, as determined by hyperglycemia. For the same reason, incretins cannot provoke episodes of hypoglycemia. For GLP-1, the absolute glycemic threshold below which insulin secretion cannot be stimulated, even at supra-physiological concentrations, has been identified as approximately 66 mg/dL [11]. This cut-off appears safe, in particular in non-diabetic people, as these subjects generally show hypoglycemic symptoms for serum glucose values below 50 mg/dL. Incretin release is related to a meal by definition and depends on rates of nutrient entry into the small intestine to reach K and L cells [14]. Because of the more proximal location of GIP-producing K cells, the necessary gastric delivery, both in terms of food volume and glycemic content, is lower for GIP than GLP-1, even if studies in the isolated perfused proximal small intestine suggest that both incretin responses to nutrients are elicited simultaneously [15].
Apart from insulin stimulation, GLP-1 reduces glucose concentrations by inhibiting α-cell glucagon secretion at all glucose levels and beyond the inhibition caused by blood glucose lowering alone [16]. GIP contributes equally with GLP-1 to the incretin effect but, unlike GLP-1, does not affect glucagon secretion.
In β-cells, GLP-1 not only enhances glucose-dependent insulin secretion and insulin synthesis but, as described in animal studies, also stimulates β-cell proliferation and inhibits apoptosis, thus preserving β-cell function [17]. Whether this eventual regulation of β-cell mass in humans translates into a beneficial impact on diabetes progression remains to be elucidated.
Therefore, the pancreatic effects of GLP-1 and GIP are fundamentally similar, with the exception of the inhibitory effect on glucagon secretion performed only by GLP-1.
Response of GIP and GLP-1 to Food Intake
Many studies have documented the response of incretin hormones to bread, reporting conflicting results. Fiber-enriched [18] and cereal-based [19] breads have been reported to reduce GIP secretion compared with white bread. A study [20] evaluating the five most common breads consumed in Spain found that GIP release was higher after intake of wholemeal rather than white bread. Regarding particle size, the GLP-1 response was much lower after bread prepared with flour and 85% broken wheat kernels than after a control bread made from wheat flour combined with wheat bran to obtain a similar fiber content [21].
Hartvigsen et al. [22] reported that only rye bread with kernels may decrease GIP and GLP-1 postprandial secretion. Other authors have described no differential effect of white wheat or whole-grain breads on either incretin [23,24].
This wide variability of response probably depends on the fact that carbohydrates exert a different effect if ingested alone or within a food matrix such as bread. Many factors, including manufacturing conditions, type of cereals, starch structure, bread particle size, and inclusion of other ingredients, may affect the glycemic and incretin responses. In general, the glycemic response of a food is altered in the presence of other foods depending on the amount and source of carbohydrate and the amounts and types of fat and protein added. Moreover, other factors may influence entero-hormone release. Short chain fatty acids, produced during fermentation of unabsorbed carbohydrates or fiber by the intestinal microbiota, may stimulate GLP-1 secretion [25]. Bile acids are also able to potentiate GLP-1 release by activating their receptor TGR5 [26][27][28].
Isomaltulose and sucrose are degraded respectively by isomaltase and sucrase, disaccharide-degrading enzymes located in small intestinal villi. In healthy subjects, isomaltulose is completely hydrolyzed but at a much lower rate than sucrose, thus determining slower rates of blood glucose and insulin increase as compared with sucrose [32,33]. The resulting monosaccharides (glucose and fructose) are completely absorbed by the small intestine [34,35] and do not reach the large intestine, where alteration of the microbiota might affect postprandial glucose responses [36]. Consequently, isomaltulose releases the same amount of energy as sucrose, as confirmed in animal studies [32].
Based on these intrinsic characteristics, isomaltulose represents an ideal model to investigate the kinetics of incretin secretion in response to foods' GI.
Studies on rats report that isomaltulose ingestion is characterized by reduced postprandial insulin and GIP release [37] and ileal administration of isomaltulose in anesthetized animals triggers a greater response of GIP than jejunal administration [38].
In the first human study [39] on the kinetics of incretin secretion elicited by isomaltulose, both plasma glucose and insulin levels were significantly lower, and total GIP secretion dramatically smaller, after isomaltulose than after sucrose loading. In contrast, GLP-1 levels observed at later time-points were significantly higher with isomaltulose than sucrose, as were glucose and insulin levels at 120 min.
In a randomized, double-blind, crossover study on type 2 diabetic patients [40], postprandial glucose metabolism was characterized by using a combination of euglycemic-hyperinsulinemic clamp and labeled oral isomaltulose or sucrose load. Consistent with the previous report [39], absorption of isomaltulose was prolonged by 50 min with respect to sucrose. Mean plasma concentrations of insulin, C-peptide, glucagon, and GIP were 10-23% lower. In contrast, GLP-1 was 64% higher after isomaltulose ingestion. Similarly, in another study [41] on type 2 diabetic participants, the incremental area under the curve of GIP was substantially reduced by 40% and that of GLP-1 was remarkably and significantly 6.3-fold higher following isomaltulose than sucrose intake.
Keller et al. [42] reported that sugar sweetened beverages with isomaltulose (containing 50% fructose) led to significantly higher GLP-1 release compared to maltodextrin-sucrose intake (containing only 12.5% fructose), although GLP-1 response has been shown to be lower after pure fructose intake when compared with an equicaloric glucose load [43]. This result likely depends on the slower degradation of the α-1,6 glycosidic bond of isomaltulose and a higher proportion of glucose reaching the distal part of the ileum.
Therefore, in light of the above, low-GI carbohydrates are characterized by low postprandial endogenous GIP levels and increased GLP-1 concentrations. This most likely happens because slowly digested low-GI carbohydrates bypass the upper intestinal K-cells producing GIP and reach the most distant L-cells producing GLP-1.
Extrapancreatic Effects of Incretins
The GIP and GLP-1 receptors are widely expressed in multiple tissues and cell types [44], and a large body of evidence describes a plethora of pleiotropic activities for incretins outside the islets of Langerhans. Most of these effects have been reported in animal experiments, and their relevance in humans has not always been ascertained [45].
Body Weight Regulation
GLP-1 plays a physiological role in the control of body weight by reducing appetite and enhancing satiety. The exact mechanism of these effects is complex and not completely understood. Evidence has accumulated to support roles for both central and peripheral GLP-1 in the regulation of energy balance.
Intravenous GLP-1 infusion decreases gastric emptying rate by means of afferent-mediated vagal central mechanisms [46]. The effect has been observed both in healthy human subjects [47] and in patients with type 2 diabetes [48] in a dose-dependent manner.
However, rather than primarily lowering gastrointestinal motor activity, GLP-1 mainly reduces appetite by affecting the function of the brain's regulating centers [49]. The GLP-1 receptor is expressed in various brain areas consistent with the regulation of appetite and satiety, and intracerebroventricularly administered GLP-1 strongly reduces short-term food intake in rats [50]. The afferent vagal nerve system is the more likely mediator of this central effect of GLP-1, as total subdiaphragmatic vagotomy attenuates the reduction in food intake induced by peripheral GLP-1 administration in rodents [51]. The central activity of peripheral GLP-1 could be mediated by afferent vagal nerve termini adjacent to L-cells and/or in the hepatoportal region [52].
In addition to effects on energy intake, GLP-1 may contribute to negative energy balance by increasing energy expenditure, as intracerebroventricular injection of GLP-1 increases thermogenesis from interscapular brown adipose tissue in mice [53].
Lipid Metabolism
Fat ingestion is a physiologically strong stimulator of GLP-1 release in humans and rodents [54]. On the other hand, GLP-1 infusion improves postprandial lipidemia [55], most likely as a result of delayed gastric emptying and insulin-mediated inhibition of lipolysis. In addition, a reduced Apolipoprotein B48 (Apo-B48) synthesis seems to be implicated. Apo-B48 is the primary protein component of chylomicrons (CM); it is specifically distributed in small intestine-derived CM. Its bloodstream concentration during fasting is usually quite low [56]. In rats, intravenously administered GLP-1 inhibits Apo-B48 production, resulting in decreased release of triglycerides into the circulation after lipid-containing meals [57]. Exendin-4, a long-acting GLP-1 analogue, directly inhibits the synthesis of Apo-B48 in hamster enterocytes [58]. In this sense, postprandial hyperlipidemia, postprandial hyperglycemia, metabolic syndrome, and myocardial infarction may eventually be monitored and assessed by analyzing the Apo-B48 protein.
GLP-1 robustly stimulates lipolysis in adipocytes isolated from abdominal fat of morbidly obese subjects [61], but not in subcutaneous abdominal fat of healthy volunteers [62]. Treatment with a GLP-1-producing adenovirus reduces fat mass, proinflammatory M1 macrophages and inflammatory cytokines in ob/ob mice, thus suggesting an anti-inflammatory action of GLP-1 in adipose tissue [63].
Cardiovascular System
The GLP-1R has been found in various cardiovascular tissues, and many studies indicate that GLP-1 has a host of protective effects at this level, independently of nutrient homeostasis [63,64].
In particular, endothelial dysfunction is believed to be an important link between the postprandial state, atherosclerosis, and cardiovascular disease. Even if postprandial vasodilatation is mediated by insulin-induced release of nitric oxide [65], it has been demonstrated that GLP-1 per se has direct beneficial effects on endothelium-dependent vasodilatation, particularly in the postprandial state [66]. GLP-1 has been shown to increase NO availability in a wide range of vascular beds [67] and to inhibit endothelin-1 production [68]. In vitro, GLP-1 induces endothelium-dependent vasodilation in preconstricted pulmonary arteries [69] and inhibits TNF-alpha-mediated PAI-1 induction in vascular endothelial cells, improving cell dysfunction [70]. Administration of GLP-1 improves endothelial function in salt-sensitive hypertensive rats [71]. Of great relevance, pharmacological levels of GLP-1 improve endothelial function in healthy individuals [72] as well as in type 2 diabetic patients with stable coronary artery disease [73].
GLP-1 or GLP-1 receptor agonists have demonstrated multiple beneficial actions on the heart. In rats, GLP-1 protects myocardium from ischemia [74] and improves the cardiac function in animals with congestive heart failure [75]. In humans, GLP-1 attenuates ischemic left ventricular dysfunction during stress echocardiography in patients with coronary artery disease [76] and improves left ventricular function in some studies of heart failure subjects [77].
In patients with type 2 diabetes [78], long-term treatment with GLP-1 analogs reduces blood pressure, an effect observed well before significant weight loss and potentially mediated by GLP-1's natriuretic effect and/or by improved endothelial function.
Kidney
Intravenous GLP-1 infusion increases natriuresis in rats and humans [82], possibly via increased atrial natriuretic peptide secretion from the heart [83] or via increased expression of the Na+/H+ exchanger in renal tubules [84]. Since the GLP-1 receptor is expressed in the brush border microvilli of proximal renal tubules and glomerular endothelial cells, a direct modulation of Na+ handling by the kidney cannot be excluded.
White Adipose Tissue
Animal and human studies report a physiological role for GIP in the nutrient uptake into adipose tissues and, therefore, in the pathogenesis of obesity [85,86].
GIP may increase fat storage directly by binding to its receptors on adipocytes and indirectly by potentiating insulin secretion, which notoriously induces a switch from lipolysis to lipogenesis in the adipose tissue.
The fundamental support for the role of GIP in obesity comes from studies on GIP receptor knockout mice, which, unlike control animals, are protected from obesity and insulin resistance in response to high-fat or high-GI diets [87]. Moreover, in a mouse model with partially reduced GIP secretion, high-fat-diet-induced obesity was alleviated and the degree of insulin resistance lessened, accompanied by higher fat oxidation and energy expenditure [88].
There is also evidence that GIP induces, in both mouse and human fat cells, the expression and release of inflammatory cytokines, with possible repercussions on insulin resistance [89][90][91].
Fatty Liver
Liraglutide, a GLP-1 analog, may improve liver fibrosis in non-alcoholic fatty liver disease [92]. On the contrary, increased postprandial release of GIP has been linked to unfavorable effects in this condition. In fact, in patients with NASH, the GIP response correlated directly with hepatic steatosis, postprandial resistin, and the free fatty acid (FFA) increase [93].
Vascular Responses
There is strong evidence that GIP increases splanchnic perfusion after a meal to enhance blood supply to the gut and optimize nutrient delivery to the liver [94,95]. GIP may also be involved in non-splanchnic arterial regulation, likely by producing endothelin-1 and nitric oxide, as suggested by studies using cultured human endothelial cells [96].
In mouse arteries, GIP induces the expression of the proatherogenic cytokine osteopontin, a key player in the pathogenesis of vascular disease, through the local release of endothelin-1 and activation of CREB (a transcription factor participating in the regulation of osteopontin expression) [97].
Additional support for an unfavorable vascular role for GIP comes from a large-scale genome-wide association meta-analysis reporting the correlation of a variant in the GIP gene with myocardial infarction [98].
Conclusions
Plenty of evidence indicates that cardiometabolic health is affected by the quality of carbohydrate present in foods. As indicated by several studies exploring the incretin response to isomaltulose, low-GI carbohydrates are characterized by low postprandial endogenous GIP levels and increased GLP-1 concentrations. This is probably due to the different timing of stimulation of K-cells and L-cells, depending on the different GI of the carbohydrates.
Based on their opposite extrapancreatic effects, GIP and GLP-1 may behave as the "yin and yang" and could represent the causal link between GI and health status.
Currently, there is a lack of clinical trials investigating the impact of low-GI foods on body weight, glucose homeostasis and cardiovascular risk [3]. Moreover, the observed effects in intervention studies, when present, are generally of small magnitude.
However, a potential role in health outcomes is suggested by the positive experience with alpha-glucosidase inhibitors [99,100], which convert meals into low-GI meals by shifting sucrose absorption from the jejunum to the ileum, thus enhancing GLP-1 secretion [101,102].
In conclusion, it is reasonable to assume that incretin hormones play a crucial role in the attainment of a wide range of health benefits associated with the quality of ingested carbohydrates.
Currently, there are several physiopathological suggestions, but few strong clinical data, supporting this intriguing hypothesis, which, however, in light of recent important clinical evidence on the cardiometabolic protection effects of GLP1 receptor agonists, certainly deserve the development of ad hoc clinical trials. | 2019-08-16T13:04:03.631Z | 2019-08-01T00:00:00.000 | {
"year": 2019,
"sha1": "1e0b5e9c3f30bc67946a42268e7291e125a1d583",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6643/11/8/1878/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ee905c98661d9399e4db889ae4a1e0d60ba862b2",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
244779252 | pes2o/s2orc | v3-fos-license | Lactic Acid Transport Mediated by Aquaporin-9: Implications on the Pathophysiology of Preeclampsia
Aquaporin-9 (AQP9) expression is significantly increased in preeclamptic placentas. Since feto-maternal water transfer is not altered in preeclampsia, the main role of AQP9 in human placenta is unclear. Given that AQP9 is also a metabolite channel, we aimed to evaluate the participation of AQP9 in lactate transfer across the human placenta. Explants from normal term placentas were cultured in low glucose medium with or without L-lactic acid and in the presence and absence of AQP9 blockers (0.3 mM HgCl2 or 0.5 mM Phloretin). Cell viability was assessed by 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide assay and lactate dehydrogenase release. Apoptotic indexes were analyzed by Bax/Bcl-2 ratio and Terminal Deoxynucleotidyltransferase-Mediated dUTP Nick-End Labeling assay. Heavy/large and light/small mitochondrial subpopulations were obtained by differential centrifugation, and AQP9 expression was detected by Western blot. We found that apoptosis was induced when placental explants were cultured in low glucose medium while the addition of L-lactic acid prevented cell death. In this condition, AQP9 blocking increased the apoptotic indexes. We also confirmed the presence of two mitochondrial subpopulations which exhibit different morphologic and metabolic states. Western blot revealed AQP9 expression only in the heavy/large mitochondrial subpopulation. This is the first report that shows that AQP9 is expressed in the heavy/large mitochondrial subpopulation of trophoblasts. Thus, AQP9 may mediate not only the lactic acid entrance into the cytosol but also into the mitochondria. Consequently, its lack of functionality in preeclamptic placentas may impair lactic acid utilization by the placenta, adversely affecting the survival of the trophoblast cells and enhancing the systemic endothelial dysfunction.
INTRODUCTION
The normal growth and development of the fetus are sustained by the placenta. This ephemeral organ is more than just a selective barrier between the mother and the fetus. It is also a metabolically dynamic interface that uses part of the nutrient uptake to promote its own cellular growth (Vaughan and Fowden, 2016). In this context, emerging evidence shows that the placenta may act as a sensor, detecting the availability of nutrients in the maternal circulation and adapting its metabolism to support fetal development (Díaz et al., 2014; Vaughan and Fowden, 2016). Glucose is the primary substrate needed to meet the fetus and the placenta energy requirements (Baumann et al., 2002; Hay, 2006). The transfer of glucose across the placenta is mediated by specific glucose transporters (GLUTs), GLUT1 being the most abundant isoform expressed at term (Illsley, 2000; Baumann et al., 2002). Besides, the level of lactic acid in fetal circulation is higher than in maternal circulation, suggesting that lactic acid could also serve as fuel for the fetus (Vaughan and Fowden, 2016).
Lactic acid exists in two isomeric forms: D-lactic acid and L-lactic acid. However, mammalian cells can only metabolize the L-lactate stereoisomer. Transcellular transfer of lactate is facilitated by a family of transmembrane proteins known as monocarboxylate transport system (MCT) that functions as a proton symport and is stereoselective for L-lactate (Halestrap, 2013;Iwanaga and Kishimoto, 2015). In the brain, it was reported that the transfer of monocarboxylates, such as lactate, may also be facilitated by aquaporin-9 (AQP9) (Badaut, 2010;Tescarollo et al., 2014). It was also found that lactate permeability increases with acidification suggesting that AQP9 may play a role as a channel for the protonated lactic acid form (Rambow et al., 2014). In addition, recent research proposes that lactate can also cross the mitochondrial membranes. In the mitochondria, lactate may be metabolized to pyruvate by the mitochondrial lactate dehydrogenase (LDH), leading to the formation of NADH. Thus, the production of NADH could scavenge reactive oxygen species (ROS) and protect cells from ROS-induced damage (Miki et al., 2013).
AQP9 belongs to a family of integral membrane proteins whose primary role is to facilitate transcellular water fluxes in response to osmotic gradients. In addition to water, AQP9 is also permeable to urea, glycerol, and monocarboxylic acids, like lactic acid, but it is impermeable to cyclic sugars such as D-glucose (Tsukaguchi et al., 1998). Unlike MCTs, AQP9 can only transport the protonated form of monocarboxylates (Tsukaguchi et al., 1998; Rothert et al., 2017).
In normal human term placenta, MCT1 and MCT4 are localized on the basal membrane and the apical microvillus membrane of the syncytiotrophoblast cells (Settle et al., 2004;Nagai et al., 2010). MCT4 has a low affinity for lactate, playing a role in lactate export under conditions of high intracellular lactate (Halestrap, 2012). On the other hand, AQP9 is expressed in the apical membrane of the syncytiotrophoblast (Damiano et al., 2001) and the plasma membrane of the cytotrophoblast cells (Wang et al., 2004). However, at term, the cytotrophoblast layer is discontinuous and it does not restrict the transfer between the mother and the fetus.
In several placental disorders, such as preeclampsia, alterations in the formation of the syncytiotrophoblast may change the normal expression and function of many transport proteins and negatively impact the transfer of essential molecules, such as glucose, proteins, and oxygen (Brett et al., 2014).
In this regard, GLUT1 expression and function are downregulated in placentas from preeclamptic women (Lüscher et al., 2017), suggesting a reduction in glucose transport across the placenta. Additionally, a significant decrease was found in aerobic glycolysis in preeclamptic placentas (El-Bacha et al., 2019;Hu et al., 2021). Consequently, the trophoblast cells and the fetus might be driven to use an alternative source of energy like lactate.
Previously, we found that the molecular expression of AQP9 significantly increased in placentas from preeclamptic pregnant women (Damiano et al., 2006). However, functional experiments showed that water and monocarboxylate transport mediated by AQP9 were dramatically reduced (Damiano et al., 2006). Notwithstanding this, there is no evidence of alterations in the transcellular water transport between the mother and the fetus, suggesting that the main role of AQP9 in the human placenta is not related to water transport (Szpilbarg et al., 2018).
Given that AQP9 is also a metabolite channel, we proposed that this protein could be involved in placental energy metabolism. As a result, alterations in AQP9 may enhance syncytiotrophoblast stress, negatively affecting the survival of the cells. This feature may accelerate the release of apoptotic syncytial aggregates into maternal circulation potentially causing the damage of the endothelial cells.
However, the participation of AQP9 in the lactate transfer across the placenta has not yet been investigated.
Tissue Collection
This study was approved by the local ethics committee of the Hospital Nacional Dr. Prof. Alejandro Posadas and the Facultad de Farmacia y Bioquímica, Universidad de Buenos Aires, Argentina [EXP-UBA: 45449/2017 Res(CD) No 2168/2017], and written consent was obtained from patients before the collection of the samples. Full-term normal placentas (n = 16) were obtained after cesarean section. All placentas were collected from healthy pregnant women who carried on an uncomplicated pregnancy and gave birth to a newborn without anomalies. Women who carried on multiple pregnancies, and those who had underlying maternal conditions, such as chronic kidney disease, chronic hypertension, liver disease, collagen vascular disease, diabetes, major fetal abnormalities, cardiovascular disease, and cancer, that could adversely affect the pregnancy were excluded. The clinical characteristics of the pregnant women are shown in Table 1.
Tissue Culture Conditions
The placentas were placed with the maternal side facing up and arbitrarily divided into four quadrants. Cotyledon fragments were isolated from different areas of each placenta midway between the chorionic and basal plate, using sterile dissection. After that, the decidua and basal plate were removed completely, and the placental tissue was thoroughly washed with saline solution to eliminate blood. Villous tissue was further dissected into explants of ∼50 mg and cultured as we previously described (Castro-Parodi et al., 2013). Briefly, explants were preincubated for 30 min in a serum-free medium to allow the tissue to recover from the isolation processes. Then, explants were placed into 24-well plates with low glucose Dulbecco's modified Eagle's medium (DMEM, Life Technologies, Inc. BLR, Grand Island, NY, United States) and 100 IU/ml penicillin, 100 mg/ml streptomycin, 32 mg/ml gentamicin, and cultured at 37°C for 18 h. This medium contained 5 mM glucose and 1 mM sodium pyruvate, and is hereafter referred to as the low glucose medium. In some wells, this medium was supplemented with (a) 20 mM glucose (control situation), (b) 10 mM D-Lactic acid (Sigma-Aldrich Corp., San Luis, MO, United States), or (c) 10 mM L-Lactic acid (Sigma-Aldrich Corp., San Luis, MO, United States). D-Lactic acid is a stereoisomer of L-Lactic acid that is not metabolized by mammalian cells. In all situations, osmolarity was adjusted by adding D-mannitol (Sigma-Aldrich Corp., San Luis, MO, United States). In all the experimental conditions, explants were cultured in the presence and absence of 0.3 mM HgCl 2 (Sigma-Aldrich, San Louis, MO, United States), a nonselective inhibitor of AQPs, 0.5 mM Phloretin (Sigma-Aldrich, St. Louis, MO, United States) for specific blocking of AQP9 (Inuyama et al., 2002; Haddoub et al., 2009), and 50 mM alpha-cyano-4-hydroxycinnamic acid (CHC, Sigma-Aldrich, St. Louis, MO, United States) (Inuyama et al., 2002), a nonspecific inhibitor of MCTs. HgCl 2 stock solution was prepared in PBS, while Phloretin and CHC were diluted in DMSO. Vehicle controls were performed and no changes were observed compared with the untreated control (data not shown).
Experiments were conducted independently in triplicates and repeated at least three times.
The protein expression of AQP9 was tested by Western blot in the experimental conditions (Castro-Parodi et al., 2013).
MTT Incorporation
Viability was assessed by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide (MTT, Sigma-Aldrich Corp., San Luis, MO, United States) assay as described previously (Castro-Parodi et al., 2013). After treatments, explants were incubated with 0.5 mg/ml MTT for 2 h at 37°C. After this time, each explant was placed in another well containing 1 ml methanol to extract the formazan. Optical density was measured at 595 nm and values were relativized to the amount of total protein (Castro-Parodi et al., 2013).
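A minimal sketch of the kind of normalization described here (formazan absorbance per milligram of total protein, expressed relative to the control condition) is shown below; all readings are invented placeholder numbers, not study data.

# Hypothetical MTT normalization: OD per mg protein, relative to the control condition.
import numpy as np

conditions = ["control", "low glucose", "low glucose + D-lactate", "low glucose + L-lactate"]
od_595 = np.array([0.82, 0.45, 0.47, 0.79])       # formazan absorbance per explant (invented)
protein_mg = np.array([1.10, 1.05, 0.98, 1.02])   # total protein per explant (invented)

viability = od_595 / protein_mg                   # OD normalized to total protein
relative = 100.0 * viability / viability[0]       # percent of the control condition
for name, value in zip(conditions, relative):
    print(f"{name}: {value:.1f}% of control")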
LDH Release
The release of the cytosolic enzyme LDH in the extracellular environment due to the disruption of the plasma membrane may reflect that cells are dying by necrosis (Chan et al., 2013). LDH release was quantified in the culture medium using the colorimetric method described by Chan and coworkers (Chan et al., 2013). Briefly, LDH catalyzes the oxidation of lactate into pyruvate with the formation of NADH from NAD + . Then, NADH is used in the conversion of the tetrazolium salt, 2-p-iodophenyl-3-p-nitrophenyl tetrazolium chloride, into a red formazan product. This reaction is catalyzed by the enzyme diaphorase. Formazan concentrations are directly proportional to the concentration of LDH. The optical density of the formazan product was measured at 492 nm and values were relativized to the amount of total protein.
The antibodies were detected using horseradish peroxidase-linked goat anti-rabbit IgG (Jackson ImmunoResearch Laboratories Inc., West Grove, PA, United States; 1:10,000) or anti-mouse IgG (Jackson ImmunoResearch Laboratories Inc., West Grove, PA, United States; 1:10,000) and visualized by chemiluminescence using the Enhanced Chemiluminescence Western Blotting Analysis System (ECL plus, Amersham Pharmacia Biotech Ltd., Pittsburgh, PA, United States) according to the manufacturer's instructions. Images were acquired using the ImageQuant LAS 500 chemiluminescence CCD camera (GE Healthcare, CA, United States) and the bands were quantified with the ImageJ 1.45s software package.
Detection of Bax and Bcl-2
Isolation of Mitochondrial Fractions and Detection of AQP9
The mitochondrial fraction was obtained from fresh placental tissue as previously described (Bustamante et al., 2014). Briefly, villous tissue was dissected free and homogenized in MSHE buffer (210 mM mannitol, 70 mM sucrose, 1 mM EDTA, and 5 mM Hepes, pH 7.4). The homogenate was centrifuged at 1,500 g for 10 min. The supernatant was recovered and centrifuged at 11,000 g for 10 min to sediment the total mitochondrial fraction. The supernatant was ultracentrifuged at 100,000 g for 60 min, resulting in a pellet designated as the microsomal fraction.
To separate the two subpopulations of mitochondria based on their sedimentation velocity, the total mitochondrial fraction was centrifuged again at 4,000 g for 15 min. The obtained pellet corresponds to the "heavy/large" mitochondrial fraction. Subsequently, the supernatant was centrifuged at 12,000 g for 15 min, yielding a pellet described as the "light/small" mitochondrial fraction (Bustamante et al., 2014). All centrifugations were carried out at 4°C. All the pellets were resuspended in MSHE buffer and protein concentration was determined as described above. Cytosolic and microsomal contaminations were assessed by determination of the specific activities of lactic dehydrogenase and the antimycin A-insensitive nicotinamide adenine dinucleotide (NADH)-dependent cytochrome C reductase (Ramírez-Vélez et al., 2013; Bustamante et al., 2014).
Proteins were resolved by SDS-PAGE on a 12% gel and electrophoretically transferred to a nitrocellulose membrane. The membrane was probed with a polyclonal anti-AQP9 antibody (Alpha Diagnostic International Inc., San Antonio, TX, United States; 1:1,000) followed by incubation with a goat anti-rabbit immunoglobulin G (IgG; Jackson ImmunoResearch Laboratories Inc., West Grove, PA, United States; 1:10,000) conjugated to peroxidase. Immunoreactivity was detected using the ECL plus system (Amersham Pharmacia Biotech Ltd., Pittsburgh, PA, United States) as previously described. To confirm equal loading, each membrane was also stained with Ponceau S as a general protein marker (Lanoix et al., 2012; Szpilbarg and Damiano, 2017).
Characterization of Placental Mitochondrial Populations
The mitochondrial morphology of each subpopulation was analyzed by flow cytometry using a three-color FACScan cytometer equipped with a 15-mW air-cooled λ = 488-nm argon laser (Becton Dickinson, Franklin Lakes, NJ, United States) (Mattiasson et al., 2003). Mitochondrial size was determined from the forward angle light scatter (FSC) detected by a photodiode, with the response collected at an E-00 setting with a logarithmic amplification gain of 5.39, and mitochondrial structure was evaluated from the light scattered in the perpendicular direction (SSC), detected by a photomultiplier tube at a voltage of 578 with a linear amplification gain adjusted to 4.3 (Bustamante et al., 2014).
To study the mitochondrial transmembrane potential (ΔΨm), heavy and light mitochondrial fractions were loaded with 30 nM of the potentiometric probe 3,3′-dihexyloxacarbocyanine iodide (DiOC6, Thermo Fisher Scientific, Waltham, MA, United States) and evaluated by flow cytometry. The isolated mitochondrial fractions were treated either with 200 μM Ca 2+, to analyze calcium intake handling by mitochondria, with 5 μM carbonyl cyanide 4-(trifluoromethoxy)-phenylhydrazone (FCCP, Sigma-Aldrich, San Louis, MO, United States), as a positive control, or with PBS (control) for an additional 5 min and immediately acquired by the cytometer. The working solutions of the probes were diluted in PBS. Fluorescence response was analyzed and differences were quantified in five independent experiments (Bustamante et al., 2014).
Transmission Electron Microscopy
Each mitochondrial fraction was washed with PBS and fixed in 2.5% glutaraldehyde in PBS for 4 h at 4°C. After washing twice, both fractions were fixed in 1% osmium tetroxide in PBS for 60 min at 4°C.
Subsequently, dehydration of the samples was carried out using increasing concentrations of alcohol followed by acetone. Then, they were embedded in a water-soluble epoxy resin, Durcupan (Sigma-Aldrich, San Louis, MO, United States), at 60°C for 72 h to promote polymerization. Once polymerized, 0.5 μm semi-thin sections were cut using an ultramicrotome (Reichert Jung Ultracut E). The sections were mounted on slides, stained with toluidine blue, and observed under the light microscope.
The sections obtained with the ultramicrotome were mounted on copper grids and contrasted with uranyl acetate and lead citrate (Reynolds, 1963). Finally, they were observed under a transmission electron microscope (TEM, Zeiss 109) equipped with a digital camera (Gatan 1,000 W). The analysis of each mitochondrial fraction was based on mitochondrial inner membrane topology (Sun et al., 2007).
Statistical Analysis
The statistical analysis was conducted by GraphPad Prism 7.02 software (GraphPad Software, Inc. La Jolla, CA). All values were expressed as means ± SEM. The significance of the results was analyzed by Student's t-test, one-way ANOVA followed
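For illustration only, the snippet below runs the two tests named above (Student's t-test and one-way ANOVA) with SciPy on placeholder measurements; it is not the analysis performed in the study, and the group values are invented.

# Hedged sketch of a two-group t-test and a one-way ANOVA on invented viability data.
import numpy as np
from scipy import stats

control = np.array([100.0, 97.0, 104.0, 99.0])
low_glucose = np.array([68.0, 72.0, 65.0, 70.0])
low_glucose_l_lactate = np.array([96.0, 92.0, 101.0, 95.0])

t_stat, p_ttest = stats.ttest_ind(control, low_glucose)
f_stat, p_anova = stats.f_oneway(control, low_glucose, low_glucose_l_lactate)
print(f"t-test p = {p_ttest:.4f}; one-way ANOVA p = {p_anova:.4f}")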
RESULTS
Effect of the Availability of Glucose and Lactate on the Viability of Trophoblast Cells Before and After the Blocking of AQP9
In order to evaluate the use of lactate as a glucose substitute, normal placental explants were cultured in (a) 25 mM glucose medium (control condition), (b) low glucose medium, (c) low glucose medium with D-Lactic acid, and (d) low glucose medium with L-Lactic acid.
In all the tested conditions, the protein expression of AQP9 did not change ( Figure 1A).
Explant viability, evaluated by MTT incorporation, decreased significantly in low glucose medium even in the presence of D-Lactic acid compared to those explants cultured in the control condition. Interestingly, when explants were cultured in low glucose medium supplemented with L-Lactic acid, MTT levels were similar to control ( Figure 1B).
Even more, in this situation, the inhibition of MCTs did not affect cell viability, suggesting that L-lactic acid passes through another transport protein. To investigate the contribution of AQP9 to lactic acid transfer, we used HgCl 2 and Phloretin to block AQP9. We found that in explants cultured in low glucose medium supplemented with L-lactic acid, blocking with HgCl 2 and Phloretin significantly enhanced cell death. However, no difference was found between the two inhibitors. In the other situations, cell death was not modified before or after the blocking of AQP9 ( Figure 1B).
In addition, LDH release was analyzed to determine membrane integrity. Membrane leakage is usually associated with cell death by necrosis or late apoptosis. In all the situations tested, the release of LDH into the culture medium did not change, suggesting that cell death is not due to disruption of the plasma membrane ( Figure 1C).
Effect of the Availability of Glucose and Lactate on the Apoptosis of Trophoblast Cells Before and After the Blocking of AQP9
To confirm the mechanism of cell death and the participation of AQP9 in the survival of the trophoblast cells, Bax and Bcl-2 expressions and the number of apoptotic nuclei were evaluated. In accordance with the MTT incorporation results, in explants cultured in both low glucose medium and low glucose medium supplemented with D-lactate, the Bax/Bcl-2 ratio increased significantly compared to control and remained unaffected after the blocking of AQP9 (Figure 2). The addition of L-lactic acid to the low glucose medium prevented the increase in the Bax/Bcl-2 ratio (Figure 2). However, in this case, the inhibition of AQP9 resulted in an increased expression of the pro-apoptotic protein Bax and consequently in the Bax/Bcl-2 ratio (Figure 2).
Regarding the TUNEL assay, we observed the same behavior. In explants cultured in low glucose medium, the number of TUNEL positive cells was significantly higher than in those cultured in control condition while the addition of L-lactic acid abrogated the apoptosis. As expected, in the medium supplemented with L-lactic acid, the blocking of AQP9 gave rise to an increase in the number of apoptotic nuclei (Figure 3).
Isolation of Mitochondrial Fractions and Detection of AQP9
To investigate the subcellular localization of AQP9, total mitochondria fraction, and the heavy/large and light/small mitochondria subpopulations that coexist in the villous trophoblasts were isolated by differential centrifugation and characterized by flow cytometry and transmission electron microscopy (TEM). Cytosolic and microsomal contaminations were less than 1.8 and 2.3%, respectively.
In concordance with previous reports, we found that heavy/large particles showed high FSC while the light/small particles presented low FSC, both with similar SSC characteristics, suggesting that despite the different sizes of the particles, the internal complexity was similar. The ΔΨm measurements revealed that the light/small mitochondrial fraction was depolarized, with a level of polarization lower than the heavy/large mitochondrial fraction ( Table 2). In the presence of the uncoupler FCCP and 200 μM Ca 2+, both fractions showed a decrease in their ΔΨm.
TEM analysis of both mitochondrial fractions confirmed the presence of two phenotypes. Representative TEM micrographs are shown in Figure 4A. The analysis of the mitochondrial phenotype was based mainly on the inner membrane topology. The heavy fraction shows dense staining of the inner matrix with an intact outer membrane showing lamellar cristae. The light mitochondrial fraction exhibits an inner membrane enclosing separate vesicular matrix compartments or cristae and frequently swollen mitochondria with expanded matrix space, lack of staining of the matrix, and fragmented or disorganized phenotype.
Then, we explored the expression of AQP9 in trophoblast mitochondria. We showed evidence for the first time that this protein was found in the mitochondria fraction isolated from villous trophoblast cells ( Figure 4B). As we previously reported, a band of 30 kDa corresponding to the AQP9 protein was also found in the microsomal fraction and no band was detected in the nuclear fraction (data not shown). We also analyzed the expression of AQP9 in the heavy and light mitochondrial subpopulations. We found that AQP9 is only present in the heavy mitochondria fraction (Figure 4C).
DISCUSSION
Previous studies have widely changed the conception that lactate is only a waste metabolic product of cell glycolytic metabolism (Gladden, 2004; Goodwin et al., 2015; Baltazar et al., 2020). In this regard, it is well accepted that glucose is metabolized by the placenta to generate lactic acid, which is the key fuel for fetal growth (Baumann et al., 2002; Hay, 2005, 2006). In sheep, it was reported that lactate produced by the placenta represents almost 25% of fetal oxidative metabolism (Burd et al., 1975). Moreover, an association was found between reduced placental lactate transport to the fetus and fetal growth restriction (Settle et al., 2006). Nevertheless, it was not explored whether the placenta can use lactate to substitute glucose as an energy substrate when its availability in the maternal blood is reduced. On the other hand, Miki and coworkers have reported that brain AQP9 can work with MCTs to transport lactate and speculated that it could have a role in energy metabolism and/or as a ROS scavenger (Miki et al., 2013; Akashi et al., 2015). Previously, we found that AQP9 expressed in the human placenta may not be only involved in water movement and homeostasis (Castro Parodi et al., 2011; Castro-Parodi et al., 2013). However, the role of AQP9 in the human placenta is still unknown.
In this work, we found that cell death was induced when placental explants were cultured in low glucose medium. There was no evidence of disruption of the plasma membrane, so cell death may take place by apoptosis. In this condition, the addition of L-lactic acid prevented cell death, and interestingly, the inhibition of MCTs did not affect cell viability revealing that another transport protein may be facilitating L-Lactic acid entry into the cell. On the other hand, the blocking of AQP9 led to an increase in both the pro-apoptotic protein Bax and the number of TUNEL positive nuclei in low glucose conditions. Therefore, our findings suggest that trophoblasts can use L-lactic acid as an alternative source of energy when glucose availability is reduced by an AQP9-mediated mechanism.
It is well established that mitochondria orchestrate the process of life-and-death decisions of the cell (Can et al., 2014;Javadov et al., 2020;Marín et al., 2020).
In many tissues, it was proposed that lactate can enter the mitochondria and be metabolized to pyruvate by the mitochondrial LDH, whereas NAD + is reduced to NADH (Gladden, 2004; Passarella et al., 2014; Goodwin et al., 2015). Thus, NADH generated in the mitochondria can be re-oxidized to NAD + by the electron transport chain, while pyruvate can enter the tricarboxylic acid (TCA) cycle, which allows maintenance of the mitochondrial energy homeostatic cycle (Schurr and Gozal, 2012). Besides, NADH may act as a ROS scavenger. Thus, any alteration in NADH production may give rise to ROS accumulation, triggering cell damage and finally leading to cell death (Miki et al., 2013). In this regard, there is considerable evidence that ROS promotes the apoptotic death of villous trophoblasts (Szpilbarg et al., 2016, 2018; Marín et al., 2020). In the brain, it was reported that AQP9 also localizes in mitochondria (Amiry-Moghaddam et al., 2005), suggesting that mitochondrial AQP9 may function as a monocarboxylate channel working with MCT to transport lactate (Miki et al., 2013; Akashi et al., 2015).
In human placenta, it is well documented that as cytotrophoblast cells differentiate into syncytiotrophoblast cells, trophoblast mitochondria undergo morphological and functional modifications. Previous reports showed that after in vitro fusion experiments, an accumulation of numerous small mitochondria was observed in the syncytial cells (Martinez et al., 1997). Thus, the "heavy" mitochondria fraction may be related to the cytotrophoblast while the "light" fraction may be linked to the syncytiotrophoblast (Martinez et al., 1997;Bustamante et al., 2014;Fisher et al., 2020). However, both mitochondria subpopulations may coexist in the syncytiotrophoblast.
In this context, we isolated both fractions and explored the expression of AQP9 in trophoblast mitochondria. According to previous work, we confirmed that the light/small mitochondria subpopulation is less polarized than the heavy one (Bustamante et al., 2014). Furthermore, the electron microscopy images showed well-defined differences not only in the mitochondrial morphology of each subpopulation but also in the inner membrane topology. Our results also revealed that AQP9 is present in the villous trophoblast mitochondria. Even more, this is the first report that shows evidence that this protein was only observed in the large/heavy mitochondria subpopulation.
Bustamante and coworkers have reported that the "heavy" fraction showed a better respiratory function, lower hydrogen peroxide production, lower mitochondrial P450, and higher cardiolipin concentration than the "light" fraction. In addition, they demonstrated that the "heavy" fraction expressed significant protein levels of p53, Bax, and cytochrome c compared with the "light" fraction (Bustamante et al., 2014). Based on these data, they suggested that the reduced oxygen consumption capacity, observed in the light fraction, may be related to a decrease in ATP production (Bustamante et al., 2014). Besides generating ATP, mitochondria also serve as local calcium (Ca 2+ ) buffers that tightly regulate intracellular Ca 2+ levels (Haché et al., 2011). In this way, the electrochemical potential across mitochondria's inner membrane is used to sequester Ca 2+ . Thus, a lower ΔΨm in the small/light fraction may reflect that calcium ions are dissipated more slowly across the inner mitochondrial membrane into the mitochondrial matrix, affecting the speed of electron transfer via the oxidative phosphorylation complexes and the citric acid cycle activity (Bertero and Maack, 2018).
All these differences suggest that both mitochondria fractions could be involved in different cellular processes. In this regard, Fisher and coworkers have recently proposed that the heavy mitochondrial subpopulation may participate in the physiological apoptotic mechanisms required for the normal differentiation and turnover of villous trophoblast cells (Fisher et al., 2020). Meanwhile, the light fraction may execute necrosis or autophagy (Fisher et al., 2020).
The evidence presented here supports the idea that in trophoblast cells, AQP9 may function as a lactate transporter together with MCTs. Since we found that AQP9 localized not only in the apical membrane (Damiano et al., 2001) but also in the mitochondria of the villous trophoblast cells, this protein may facilitate not only the lactic acid entrance into the cytosol but also into the mitochondria (Figure 5A). Along with this, we found that in a reduced glucose medium A B C FIGURE 4 | Expression of AQP9 in trophoblast mitochondria. (A) Representative transmission electron microscope images of heavy/large and light/small mitochondria found in the isolated subpopulations. Individual mitochondria were falsely colored to aid in identification. The scale bar represents 100 nm and the magnification is 50,000 × (n = 4 placentas).
(B) AQP9 expression in microsomal and mitochondrial fractions. Representative immunoblot revealed that AQP9 is expressed in both mitochondrial and microsomal fractions isolated from villous trophoblast cells.
(C) AQP9 expression in heavy/large mitochondria subpopulation. Representative immunoblot showed the expression of AQP9 only in the heavy/large fraction (n = 12 placentas).
A B
FIGURE 5 | Schematic representation of lactate use by trophoblast cells. In normal pregnancies, trophoblast cells use glucose as a fuel to support the placenta's cellular growth (A). The excess of lactate due to fetal metabolism may be driven into the maternal circulation by monocarboxylate transports system (MCTs) and AQP9. When the availability of glucose is reduced (B), trophoblast cells can use lactate as a carbon source substitute (dot lines). The uptake of lactate into the cytosol may be facilitated by AQP9 and MCTs localized in the plasma membrane. In the cytosol, lactate can be metabolized into pyruvate or it can pass across the external and inner mitochondria membranes by MCTs or AQP9. In the mitochondria matrix, lactate may be oxidated to pyruvate by a mitochondrial LDH while NAD + is reduced to NADH. The produced NADH may act as a reactive oxygen species scavenger. In preeclamptic placentas, AQP9 is not functional altering the use of lactate by the trophoblast cells. This may affect the mitochondria function, leading to the activation of the mitochondrial pathway of apoptosis. OMM, Outer Mitochondria membrane; IMM, Inner Mitochondria membrane; ECT, Electron transport chain; TCA cycle, tricarboxylic acid cycle; and MPC 1/2, Mitochondrial pyruvate carrier 1 and 2.
Along with this, we found that in a reduced glucose medium supplemented with L-lactic acid, lactic acid cannot enter the cell when AQP9 is blocked, impairing mitochondrial function and resulting in the activation of the mitochondrial pathway of apoptosis. Therefore, it is possible that the ability of the villous trophoblast cells to better respond to stress may be related to their content of heavy/large mitochondria with a functional AQP9. It is well accepted that preeclampsia is usually associated with intermittent placental perfusion. Consequently, fluctuations in O2 tension may enhance placental oxidative stress, which has a critical role in exacerbating villous trophoblast apoptosis (Hung et al., 2002; Hung and Burton, 2006; Marín et al., 2020). Considering the reduced GLUT1 expression and the decreased aerobic glycolysis observed in preeclamptic placentas, the concentrations of lactate in the placenta and the maternal blood might be expected to increase. Although several reports have shown that plasma lactate levels are high in preeclampsia (Peguero et al., 2019), lactate concentrations are low in the placentas from preeclamptic women, suggesting that lactate cannot pass across the cell membrane of the trophoblasts.
Accordingly, we speculated that the increased oxidative stress observed in preeclampsia may impair AQP9 function as a lactate transporter. In this scenario, the lack of functionality of AQP9 may impair the lactic acid utilization by the placenta, promoting more accumulation of ROS and adversely affecting the survival of the trophoblast cells. This stress in the trophoblast cells may enhance the shedding of apoptotic aggregates into maternal circulation resulting in the systemic endothelial dysfunction that characterizes the maternal syndrome. Therefore, a non-functional AQP9 might be involved in the pathogenesis of preeclampsia.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Hospital Nacional Dr. Prof. Alejandro Posadas and Facultad de Farmacia y Bioquímica, Universidad de Buenos Aires.
AUTHOR CONTRIBUTIONS
YM, LA, and JR carried out the experimental work and analysis of data. AC provided the placental tissues and discussed the results. NS and JB carried out data analysis and discussion and critically reviewed the manuscript. AD designed the study and wrote the manuscript. All authors contributed to the final version of the manuscript. | 2021-12-02T14:44:52.872Z | 2021-12-02T00:00:00.000 | {
"year": 2021,
"sha1": "d374b67331c94f5cc54688bb2f5d8cfba5365976",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fphys.2021.774095/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d374b67331c94f5cc54688bb2f5d8cfba5365976",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
146044103 | pes2o/s2orc | v3-fos-license | Algorithm of Medical Image Fusion based on Laplasse Pyramid and PCA
The paper first expounds the principles and methods of the Gaussian pyramid, the Laplacian pyramid, and the principal component transform. It then explains in detail how an image is decomposed with the Laplacian pyramid, after which image fusion is carried out with principal component analysis for the low-frequency (top-level) part and an average-gradient method for the high-frequency parts. Finally, the inverse Laplacian pyramid transform is used to obtain the final fused image. The algorithm is tested on medical images; the result is not very ideal, but it is more effective than simple PCA or Laplacian pyramid image fusion alone.
Introduction
An image pyramid is a multi-scale representation of an image. It is a conceptually simple but effective structure for interpreting images at multiple resolutions. The idea of the pyramid method is to decompose each source image to be fused into a multi-scale pyramid image sequence: lower-resolution images sit in the upper levels, higher-resolution images in the lower levels, and each upper image is 1/4 of the size of the level below it. The levels are numbered 0, 1, 2, ..., N. The pyramids are combined level by level with certain fusion rules to obtain a synthetic pyramid, and the synthetic pyramid is then reconstructed according to the inverse of the pyramid generation process to obtain the fused image.
Image Decomposition Based on the Gaussian Pyramid (GP)
The Gaussian pyramid is a technique used in image processing, computer vision and signal processing. It is obtained by repeated Gaussian smoothing and subsampling: the (k+1)-th level is produced from the k-th level by smoothing followed by subsampling. The pyramid corresponds to a series of low-pass filters whose cutoff decreases from one level to the next by a factor of 2, so the Gaussian pyramid can span a very large frequency range. The source image G_0 is taken as the zeroth (bottom) level of the Gaussian pyramid. The original input image is filtered with a Gaussian low-pass filter and subsampled by discarding every other row and column, which gives the first level of the Gaussian pyramid. Low-pass filtering and downsampling of the first-level image give the next level, and the process is repeated to form the higher levels. Thus the current level of the Gaussian pyramid is generated from the previous image by Gaussian low-pass filtering followed by 2:1 subsampling of rows and columns, and the size of the current level is 1/4 of the size of the previous one.
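As a small illustration of this construction, the following sketch builds a Gaussian pyramid for a 2D grayscale image (our own illustrative code, not the implementation used in the paper; the function name gaussian_pyramid and the SciPy filter with sigma=1.0 are assumptions, not the paper's exact kernel):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(image, levels):
    """Build a Gaussian pyramid for a 2D grayscale image: each new level is the
    previous one smoothed with a Gaussian low-pass filter and subsampled by 2,
    so it has 1/4 of the previous number of pixels."""
    pyramid = [np.asarray(image, dtype=float)]             # G_0: the source image (bottom level)
    for _ in range(levels):
        blurred = gaussian_filter(pyramid[-1], sigma=1.0)  # Gaussian low-pass filtering
        pyramid.append(blurred[::2, ::2])                  # keep every other row and column
    return pyramid
```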
Image Reconstruction Based on the Laplacian Pyramid (LP)
The Laplacian pyramid is built on the basis of the Gaussian pyramid; it records the high-frequency details lost through the convolution and downsampling operations used to generate the Gaussian pyramid. It is composed of a series of images obtained by subtracting, from each Gaussian level, the enlarged version of the next (coarser) level, and it can be interpreted as an inverse form of the Gaussian pyramid construction. (1) An enlarged image G_l^* is obtained from G_l by interpolation: zeros are inserted in the even rows and columns and the result is filtered with the Gaussian kernel, so that G_l^* has the same size as G_{l-1}. In this step G_l is magnified to G_l^*. Although the enlarged image G_l^* of the level-l image G_l has the same size as the level-(l-1) image G_{l-1}, the two are actually different: as formula (2) shows, G_{l-1} contains more detail than G_l^*.
Each image of the Laplacian pyramid can therefore be obtained by subtracting two adjacent (size-matched) images of the Gaussian pyramid: LP_l = G_l - G_{l+1}^* for 0 <= l < N, and LP_N = G_N. (3) In formula (3), N is the top level of the Laplacian pyramid and LP_l is its l-th image; the images LP_0, LP_1, ..., LP_N make up the Laplacian pyramid. In particular, LP_0 is the difference between the Gaussian pyramid image G_0 and its next-level image G_1 after interpolation and enlargement, G_1^*.
Formula (4) is obtained as the inverse process of formula (3): for the inverse operation of the Laplacian pyramid, the detail images are added back after upsampling to restore the corresponding Gaussian pyramid and finally the source image, G_N = LP_N and G_l = LP_l + G_{l+1}^* for 0 <= l < N. (4) After the fusion of the Laplacian pyramids, the fused image is therefore reconstructed from the top level down to the bottom.
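A minimal sketch of the decomposition (3) and the reconstruction (4), continuing the gaussian_pyramid helper sketched above (again our own illustrative code; zoom-based interpolation followed by Gaussian smoothing stands in for the paper's insert-zeros-and-filter step):

```python
from scipy.ndimage import zoom, gaussian_filter

def expand(image, target_shape):
    """Enlarge an image to target_shape (the G_l^* step: interpolate, then low-pass filter)."""
    factors = (target_shape[0] / image.shape[0], target_shape[1] / image.shape[1])
    return gaussian_filter(zoom(image, factors, order=1), sigma=1.0)

def laplacian_pyramid(gauss_pyr):
    """LP_l = G_l - expand(G_{l+1}) for l < N; the top level LP_N = G_N is kept as is."""
    lap = [gauss_pyr[l] - expand(gauss_pyr[l + 1], gauss_pyr[l].shape)
           for l in range(len(gauss_pyr) - 1)]
    lap.append(gauss_pyr[-1])
    return lap

def reconstruct(lap_pyr):
    """Inverse transform (4): start at the top and add back the detail images."""
    image = lap_pyr[-1]
    for detail in reversed(lap_pyr[:-1]):
        image = detail + expand(image, detail.shape)
    return image
```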
Image Fusion Based on Principal Component Analysis (PCA)
PCA is an optimal orthogonal transformation determined by the characteristics of the data. In statistics, principal component analysis is a multivariate statistical method. The PCA transform aims to convert many indicators into a few comprehensive indicators using the idea of dimensionality reduction. PCA can often extract the most important elements and structure from overly "rich" data, remove noise and redundancy, reduce the complexity of the original data, and reveal the simple structure hidden behind complex data.
The goal of the PCA method is to find r (r < n) new variables that reflect the main features of the data, compressing the original data matrix and reducing the dimension of the feature vectors while summarizing the most important characteristics with as few dimensions as possible. Each new variable is a linear combination of the original variables, reflects their combined effect, and has a concrete practical meaning. These r new variables, called "principal components", capture to a large extent the effect of the original n variables, and they are mutually uncorrelated and orthogonal. Through principal component analysis the data space is compressed, and the characteristics of multivariate data are expressed intuitively in a low-dimensional space.
(1) From the original image data matrix X, compute its covariance matrix C. (2) Obtain the eigenvalues and eigenvectors of the covariance matrix and form the transformation matrix. The characteristic equation is (C - λI)U = 0, (6) where I is the identity matrix and U is an eigenvector.
(3) Calculate the transformation matrix T = U^T. Here U is the matrix composed of the eigenvectors, and U is an orthogonal matrix, that is, U satisfies U^T U = U U^T = I. (7) (4) Applying the transformation matrix T, i.e. Y = TX, gives the specific expression of the PCA transformation. (8)
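Steps (1)-(4) can be sketched as follows for the image-fusion use of PCA, where each source image becomes one row of the data matrix and the fusion weights are read off the dominant eigenvector (illustrative code; normalizing the weights to sum to one is our assumption):

```python
import numpy as np

def pca_weights(images):
    """Return one fusion weight per source image, taken from the principal
    component of the covariance matrix of the flattened images."""
    X = np.stack([np.asarray(img, dtype=float).ravel() for img in images])  # rows = images
    C = np.cov(X)                                    # covariance matrix of the rows
    eigvals, eigvecs = np.linalg.eigh(C)             # eigen-decomposition (C is symmetric)
    principal = eigvecs[:, np.argmax(eigvals)]       # eigenvector of the largest eigenvalue
    return np.abs(principal) / np.abs(principal).sum()
```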
Image Fusion of the Laplacian Pyramid Based on PCA
Assume that LA_l and LB_l are the level-l images of the Laplacian pyramid decompositions of the source images A and B, and denote the fused result by LF_l. When l = N, LA_N and LB_N are the top-level images obtained from the source images A and B after the Laplacian pyramid decomposition. Since PCA can combine the most important information of the two components, principal component analysis is used to fuse the top-level images. The algorithm is as follows: there are N source images; each image is regarded as a one-dimensional vector and recorded as x_k, k = 1, 2, ..., N.
(1) From the source images, construct the data matrix X = (x_1, x_2, ..., x_N)^T; (2) calculate the covariance matrix C of the data matrix X. For the fusion of the other levels, when 0 <= l < N, for each level-l image of the Laplacian pyramid decomposition we first calculate, around every pixel, the regional average gradient over an M×N window centred on that pixel (M and N are odd numbers with M >= 3 and N >= 3).
In formula (11), I_x and I_y are the first differences of the pixel f(x, y) in the x and y directions respectively, as given in formula (12). Therefore, for every pixel LA_l(i, j) and LB_l(i, j) in the level-l images we obtain the corresponding regional average gradients GA(i, j) and GB(i, j). Because the average gradient reflects small details and texture changes in the image, it also reflects the sharpness of the image: generally speaking, the larger the average gradient, the richer the detail and the clearer the image. The fusion result at each level therefore keeps the coefficient with the larger regional average gradient, LF_l(i, j) = LA_l(i, j) if GA(i, j) >= GB(i, j) and LF_l(i, j) = LB_l(i, j) otherwise. (13) After obtaining the fused pyramid images at all levels, LF_0, LF_1, ..., LF_N, the final fused image is reconstructed with formula (4).
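Putting the pieces together, a hedged sketch of the fusion rule follows: the top (coarsest) level is fused with the PCA weights, and every other level keeps, pixel by pixel, the coefficient whose neighbourhood has the larger regional average gradient (the window size, the gradient operator and the tie-break are our assumptions where the paper's formulas are not shown):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def average_gradient(image, size=3):
    """Regional average gradient over a size x size window (size odd, >= 3)."""
    ix = np.gradient(image, axis=1)                  # first difference in the x direction
    iy = np.gradient(image, axis=0)                  # first difference in the y direction
    g = np.sqrt((ix ** 2 + iy ** 2) / 2.0)
    return uniform_filter(g, size=size)              # local mean of the gradient magnitude

def fuse_pyramids(lap_a, lap_b, pca_w):
    """Fuse two Laplacian pyramids: PCA weights on the top level, average-gradient
    selection on all other levels."""
    fused = []
    for la, lb in zip(lap_a[:-1], lap_b[:-1]):
        ga, gb = average_gradient(la), average_gradient(lb)
        fused.append(np.where(ga >= gb, la, lb))     # keep the sharper coefficient
    fused.append(pca_w[0] * lap_a[-1] + pca_w[1] * lap_b[-1])   # fused top level
    return fused
```

The fused pyramid can then be passed to the reconstruct() sketch above to obtain the final fused image.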
Experimental Results
Medical MRI images are used as sample images, and the effectiveness of the method is evaluated in MATLAB 7.0. The experimental results are shown in Figures 1, 2, 3, 4 and 5.
Conclusion
This paper introduces the basic principle, idea and algorithm steps of Laplacian pyramid image fusion based on PCA, and carries out a simulation experiment in MATLAB 7.0 with medical images as the data. The experimental results show that the fused images are not very ideal and are visibly blurred, but the method is better than the simple PCA image fusion and the plain Laplacian pyramid image fusion algorithms. At the same time, the study of this algorithm lays a solid foundation for later research on wavelet-transform-based image fusion algorithms. | 2019-05-07T13:10:35.590Z | 2019-04-10T00:00:00.000 | {
"year": 2019,
"sha1": "ef89f2d6cd569ec61ec5c7d7063b6c2d6add951b",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/490/4/042030",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "b8d612bdd8967d9d443df9a17bfc31436aa02324",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Physics",
"Computer Science"
]
} |
216119429 | pes2o/s2orc | v3-fos-license | Perceived Climate Change Impacts and Adaptation Strategy of Indigenous Community (Chepangs) in Rural Mid-hills of Nepal
Climate change is projected to increase in vulnerable areas of the world, and marginalized communities residing in rural areas are more vulnerable to the change. The perceptions of climate change and adaptation strategies made by such communities are important considerations in the design of adaptation strategies by policy-makers. We examined the most marginalized indigenous group "Chepang" communities' perceptions towards this change, variability, and their attitudes to adaptations and adapted coping measures in mid-hills of Nepal. We interviewed 155 individuals from two Chepang communities, namely, Shaktikhor and Siddhi in Chitwan district of Nepal. We also analyzed biophysical data to assess the variability. The findings showed that the Chepang community has experienced significant impacts of climate change and variability. They attributed crop disease, insect infestation, human health problem, and weather-related disaster as the impacts of climate change. Strategies they have adopted in response to the change are the use of intense fertilizers in farmland, hybrid seeds cultivation, crop diversification, etc. Local level and national level adaptation policies need to be designed and implemented as soon as possible to help climate vulnerable communities like Chepangs to cope against the impacts of climate change.
Introduction
Although Nepal's contribution to global greenhouse gas emissions is very low, it is considered to be at extremely high risk from climate change (Gentle, Maraseni 2012; Godar Chhetri 2012). Moreover, Nepal is recognized as the fourth most climate-vulnerable nation in the present world (Gentle et al. 2014). Categorized as a least developed country, Nepal comprises many agriculture- and resource-dependent communities, including the Chepang, Rai, Tamang, etc., in its rural hills, which are highly vulnerable to the adverse impacts of climate change (Piya et al. 2016).
More than 80% of Nepalese communities depend on agriculture for their livelihood and fall under poverty. Although the Chepang community was earlier considered a nomadic indigenous community dependent on hunting and gathering, they currently depend on a rain-fed agricultural system and forest resources for their subsistence and livelihood (Piya et al. 2016). In developing nations such as Nepal, marginalized communities dependent on natural resources for their livelihood are facing more impacts of climate change and are susceptible to climatic vulnerability, and at the same time these communities are deprived of adaptation practices because of a multitude of barriers, of which cultural barriers are a major one (Adger et al. 2013). Chepangs are thus grouped as highly marginalized indigenous communities facing multiple impacts of climate change and lacking knowledge of adaptation strategies to cope with the changing climate (Piya et al. 2013). A large number of Chepang communities reside in the mid-hills of Central Nepal. Although residents from diverse communities are facing detrimental impacts of climate change, Piya et al. (2013) reported that Chepang communities are considered among the most climate-vulnerable communities in Nepal and are facing major impacts like unpredictable droughts and erratic rainfall that have led to the reduction of agricultural products. For instance, maize and millet production in their communities is dramatically fluctuating and declining. Similarly, they have been victims of sharply rising temperatures, unexpected changes in seasons, significant reductions in agricultural productivity, water shortages, etc., which have further increased the impacts of climate change on them, because they are agriculture dependent and reside in a rural mountain landscape (Halbrendt et al. 2014; Piya et al. 2013, 2016). Moreover, indigenous, marginal and socially excluded communities like the Chepangs, with low physical and social assets and low adaptive capacity to cope with the changing climate, are facing remarkable impacts of climate change (Pandey et al. 2016; Piya et al. 2016).
In response to these increasing impacts of climate change, Nepal has been successful in preparing various environmental policies and has tried to implement them. Different adaptation policies such as the Climate Change Policy, the National Adaptation Programme of Action (NAPA), Local Adaptation Plans for Action (LAPA) and the Forest Policy have been formulated and implemented in different timeframes and have brought some changes in rural adaptation initiatives (Dhungana et al. 2018; Gauli, Upadhaya 2014; Gentle et al. 2018; Ghimire et al. 2019). However, indigenous communities like the Chepangs still lack even minimal knowledge about climate change adaptation (Halbrendt et al. 2014; Piya et al. 2013). They are adopting some adaptation-like practices without a clear knowledge of what they are doing, and these are considered autonomous and short-term practices. Such practices could turn out to be maladaptive in the near future. Lack of education, poor social networks, poor economic status, settlement in geographically poor landscapes, lack of access to information and low adaptive capacity are the prime reasons that they are unable to adopt adaptation strategies (Halbrendt et al. 2014; Piya et al. 2013). There is thus a lack of scientific information on the impacts of climate change and on the adaptation strategies that the indigenous and marginalized Chepang community is adopting in response to the changing climate. This study looked into the perception of the Chepang community towards climatic variability, the impacts of climate change in their community, and the adaptation practices adopted by the Chepangs in response to climate change at present. The information from this study feeds into the formulation of national climate change policy, as it provides a better idea of the climate change impacts and adaptation strategies that marginalized indigenous communities such as the Chepangs and others could adopt based upon their traditional ecological knowledge. The study adopted a descriptive and exploratory research design. Problems identified in the field were analysed and described based on their functions and conditions. There has been limited research into climate change impacts on the Chepang community, and this research tries to explore these problems.
Chepangs: the Study Community
The Chepangs are one of the many indigenous nationalities of Nepal with a population of 52,237 (0.23% of the total population of Nepal) (CBS 2011). Majority of this community live in mid-hills in Chitwan, Makawanpur, Dhading and Gorkha districts of Nepal. The Chepang community has been categorized as a highly marginalized indigenous nationality by the government of Nepal. Although Chepangs depend on agriculture, their major livelihoods rely on forest resources to a large extent for wild edibles, fodder and fuel. As Chepangs' livelihoods are mainly based on natural resources, they are likely to suffer from the impacts of climate change such as variability in rainfall, drying up of water resources, etc. Piya et al. (2016) mentioned that Chepang's vulnerability is further compounded by limited access to information and by spatial isolation, and suggested that studies investigating vulnerability to climate change should focus on marginalized communities because they are the most vulnerable and least able to cope with the adverse impacts. To draw the attention of the government and stakeholders to these issues, it is necessary to gather information through studies based on the livelihoods of these vulnerable communities. With this in mind, we purposively selected Chepang community for this study.
Study Area
Chitwan district lies in Siwalik range of central development region of Nepal. The geographical position of the district lies between latitude 27°21΄N to 27°46΄N and longitude 83°55΄E to 84°48΄ E. The district is characterized by the fragile land topography with Churiya hills and altitude ranging from 141m to 1945 m above mean sea level. Chepangs mainly reside in Kaule, Korak, Siddhi, Ayodhyapuri and Shaktikhor VDCs of Chitwan district (CBS 2011). Therefore, Shaktikhor and Siddhi, which are known as pocket villages of the Chepangs are selected as study area (Figure 1).
Data Sources
The study was based on primary data collected through a household survey. More than 5% of households, constituting a total of 155 households (79 from Shaktikhor and 76 from Siddhi VDC), were randomly selected as sample households for interview. The survey contained a set of questionnaires that mainly focused on capturing the perception of the selected households of climate change, its impacts and adaptation, as well as their coping strategies to address the impacts.
In addition to the household survey, we conducted key informant surveys and focus group discussions to gather in-depth information and to understand the situation better. Community leaders, Community Forest Users' Group secretaries, local teachers, ward committee members and members of local Community Based Organizations (CBOs) were selected as key informants. We conducted two focus group discussions, one in each VDC, which included elite representatives of the community (15 individuals in each discussion). The focus group discussions were guided by a set of questionnaires. In addition, we let the discussion run freely so that we would not miss any information that was not covered by the questionnaire. The discussions were further helpful to enrich and verify the information received through the household survey and the key informant interviews.
To understand the impacts of climate change in the area, different indicators of impacts, such as agricultural production, crop disease, insect infestation, animal disease, physical loss, human health, forest fire, weather-related disaster, water resource depletion and invasive species were taken. Respondents were asked to explain the current situation of each sector comparing it with the situation in the past ten years. The household survey used a five-point Likert scale, where one denoted "highly increased" and five denoted "highly decreased" to quantify people's perceptions about the temporal difference on the conditions. Responses were analyzed by computing weighted mean depending upon their preference.
This study also used raw daily data on minimum and maximum temperature and precipitation for the Rampur station, obtained from the Department of Hydrology and Meteorology, Kathmandu, Nepal, for the 30 years from 1987 to 2016. The data were corrected to manage missing values. The trends of temporal variation of temperature and precipitation were analyzed using linear regression. The linear trend between time and the climatic data is given by the equation below: y = a + bx, where y is temperature or precipitation, x is time (year), and "a" and "b" are constants.
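A minimal sketch of such a trend fit is given below (illustrative only; the numbers are made up and are not the Rampur station data):

```python
import numpy as np

def linear_trend(years, values):
    """Least-squares fit values = a + b*years; returns (a, b), with b the trend per year."""
    b, a = np.polyfit(years, values, deg=1)          # polyfit returns [slope, intercept]
    return a, b

years = np.arange(1987, 2017)                        # the 30-year window used in the study
temps = 24.5 + 0.02 * (years - 1987) + np.random.normal(0.0, 0.3, years.size)  # synthetic series
print(linear_trend(years, temps))
```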
Results and Discussions
Among the 155 surveyed households, females represented 54% of respondents, and the majority of the respondents were aged between 35 and 55 years. The education status of the respondents ranged from illiterate to bachelor level, with illiterate being dominant (Table 1). The significant representation of the adult age group, the wide range of education and the dominance of females gave us a good opportunity to understand the impacts of and adaptation strategies to climate change in the study area, as the majority of household work is carried out by female members in the area. The reason behind the high representation of females in our household survey was that the majority of the males were not at home during the survey. In the Chepang community, it is common for females to work at home and for males to do outside activities for livelihoods. It was found that the majority of the Chepangs were not familiar with the term "climate change"; only 30% of respondents were familiar with it. These few people within the Chepang community who are aware of climate change are the major source of information for wider communication. Although only these few people knew the term "climate change", the majority of the Chepang community were found to have observed changes in temperature and rainfall patterns over the years. Similar results were reported by Piya et al. (2012).
Variability in Temperature and Precipitation: What Chepangs Perceived
The meteorological data showed variability in the temperature trend (Figure 2). Further analysis revealed an increase in summer temperature and a decrease in winter temperature. Similarly, 77% of respondents perceived that they had experienced an increase in summer temperature, whereas 23% observed no change from before. Similarly, more than 46% of respondents observed cooler winters compared with earlier years (Figure 3). Piya et al. (2012) also found that 47.5% of Chepang respondents in the mid-hills of Nepal reported a rise in summer temperature. More than 84% of respondents reported that they had experienced variability in rainfall, while the rest did not observe any change. The majority of them (74%) observed an increase in rainfall. They also reported that summer rainfall had increased (Figure 4). Our results agree with the result reported by Poudel, Shaw (2016), who found an increase in summer rainfall in Lamjung district. This perceived rainfall variability was verified by the observed meteorological data, which showed that the average annual precipitation increased by 2.11 mm over the 30-year interval (1987-2016) (Figure 5). Although the majority of the respondents observed variability in annual rainfall, there was great variation in responses regarding the starting time of monsoon rainfall. Nearly half (46%) of the respondents observed late rainfall, while 34% observed no change, a few (13%) observed early rainfall and very few (7%) observed variation in the rainfall starting period. Similar results on the variation in rainfall and respondents' perceptions were reported by Tiwari et al. (2010). Respondents' perception of rainfall variability depended a lot on how they were affected by the rainfall for agriculture.
Perceived Impacts of Climate Change
The study showed that the Chepang community perceived an increase in human disease as the major impact of climate change, followed by animal disease, weather-related disasters, physical loss, a decrease in agricultural production, an increase in crop disease/insect infestation, a decrease in water resources and a decrease in forest fire (Table 2). Agriculture was the primary source of livelihood for more than 89% of respondents. Paddy, maize, millet, buckwheat, black gram and mustard were the major crops grown by the Chepang communities in the study area. Agriculture is mainly rain-fed, and because of this they are more vulnerable to climate change. However, 59% of respondents observed a slight increase in agricultural production compared with 10 years ago. This might be because of the use of hybrid seeds, improved technology, and changes in agricultural practices by present-day Chepangs compared with 10 years back, when the majority of them used to farm on steep slopes.
Similarly, Chepangs perceived early flowering of the tree crop Diploknema butyracea (Chiuri); 67% noticed flowering about a month earlier. Malla (2013) reported similar results, where early flowering was attributed to an increase in temperature. Respondents also perceived an increase in invasive species. Mooney and Hobbs (2000) reported that climate change influences all invasive species by affecting their spread into new habitats. However, forest fire occurrence was observed to have decreased compared with 10 years ago. This decrease in fire occurrence was attributed to the successful establishment of the community forestry program and awareness among people.
A decrease in water availability as an impact of climate change was perceived by the majority of the respondents, with this indicator obtaining a weighted mean of 3.62 (Table 2). The respondents reported a decrease in the availability of drinking water in comparison with 10 years before and highlighted the problem of fetching water from far away. They also reported the drying up of existing springs and water taps. Similarly, Piya et al. (2013, 2016) and Gentle et al. (2014) reported that climate change impacts such as reduction in agricultural productivity, increased death of livestock due to increased incidence of diseases, increases in pests and diseases of agricultural crops, drying up of water resources used for drinking and irrigation, excessive increases in invasive species and frequently occurring disasters like landslides, erosion and floods are the major impacts prevailing in different parts of the nation.
Climatic hazards were perceived as major impacts of climate change by the majority of respondents. We grouped climatic hazards into floods, thunderstorms, hailstorms, intense rainstorms, droughts and landslides. 94% of respondents reported an increase in climatic hazards; among these, landslides were the top one, followed by drought, flood, hailstorm and intense rainstorm. These hazards might not be entirely due to climate change but rather long-term events with complex causes, as explained by Khatri et al. (2016). Road construction, the establishment of poultry farms and quarrying on sloping land might have triggered landslides directly, with climate change also playing a crucial role in their intensification. It was reported that these hazards affected the livelihoods of the local Chepangs, specifically through the loss of agricultural land.
Local Adaptation Strategy
The majority (83%) of households are carrying out local adaptation strategies at the household level to reduce the impacts of climate change. Gentle et al. (2018) and Khanal et al. (2018) also reported that most of the adaptive strategies to reduce the loss of agricultural productivity were locally designed and implemented at the household level as short-term strategies. The major adaptation strategies were classified as the plantation of cash crops, shifts to other income generation activities (IGAs), use of hybrid seeds, irrigation, intense fertilizer use and the practice of soil conservation activities (Figure 6). 38% of respondents reported that they cultivated fruits and vegetables instead of crops, considering the possibility of selling these in the market; however, the level of cultivation was at subsistence level. It was also found that 31% of respondents shifted from agricultural production to other income generation activities, such as daily wage labour, business and foreign employment. More than 48% of respondents used intense fertilizers to increase agricultural production. A total of 69% of respondents adopted soil conservation activities to reduce soil loss from agricultural land. Those activities varied from biological measures to engineering structures, carried out with the support of local development bodies and NGOs; plantation activities, however, were done at the individual level. Apart from these adaptation strategies, the Chepangs also reported that they had been buying food from the market and gathering wild plants and tubers from forests to support their livelihoods. The results are consistent with Gentle, Maraseni (2012), Piya et al. (2013), Gentle et al. (2018) and Khanal et al. (2018) in that vulnerable communities in different parts of Nepal have adopted different forms of adaptation strategies, such as the use of fertilizers, crop diversification, income diversification, collection of remittances and the adoption of soil conservation strategies including both engineering and bioengineering measures, to cope with the adverse impacts of climate change and sustain their livelihoods. | 2020-04-09T09:11:31.830Z | 2019-11-30T00:00:00.000 | {
"year": 2019,
"sha1": "8a4b08de5413ededbf885a2a20128cf68071980b",
"oa_license": null,
"oa_url": "https://www.nepjol.info/index.php/forestry/article/download/28353/23282",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "dd67862d0fb86de698a3d26d1e0e49f17c6ee751",
"s2fieldsofstudy": [
"Environmental Science",
"Sociology"
],
"extfieldsofstudy": [
"Geography"
]
} |
15484821 | pes2o/s2orc | v3-fos-license | Integrability of $q$-oscillator lattice model
A simple formulation of an exactly integrable $q$-oscillator model on a two-dimensional lattice (in 2+1 dimensional space-time) is given. Its interpretation in terms of the 2d quantum inverse scattering method and nested Bethe Ansatz equations is discussed.
The vertex index j in Fig. 1 stands for the j-th component of the tensor power $\mathcal{H}^{\otimes NM}$. In this paper we will assume the Fock space representation of the q-oscillator (Spectrum(h) = 0, 1, 2, ...). In the limit q → 1 the model becomes the completely inhomogeneous free-fermion six-vertex model [2].
The partition function Z is a polynomial in the two parameters λ and µ (recall, $\nu^2 = -q^{-1}\lambda\mu$); its operator-valued coefficients belong to $\mathcal{H}^{\otimes NM}$. The main result of this letter is the commutativity of Z, Eq. (2). Therefore, Z(λ, µ) is the layer-to-layer transfer matrix of a quantum mechanical model in wholly discrete 2+1 dimensional space-time. Turn now to the proof of Eq. (2). It is convenient to combine the weights of Fig. 1 into a six-vertex-type matrix $L_{\alpha\beta}$ acting in the product of two-dimensional vector spaces $V_\alpha \otimes V_\beta$, $V \equiv \mathbb{C}^2$. Let the lines of the lattice be labeled by the indices $\alpha_n$ and $\beta_m$, $1 \le n \le N$ and $1 \le m \le M$, as shown in Fig. 2. The vertices of the lattice are labeled by j = (n, m), and the "partition function" Z may then be written in the form (4). It is known [4] that the commutativity of layer-to-layer transfer matrices follows from a tetrahedron equation. In particular, the commutativity (2) follows from the relation (5), where the matrix elements of $M(\mathcal{H}_0; \xi)$ belong to an additional copy $\mathcal{H}_0$ of the q-oscillator, Eq. (6). Equation (5) may be verified directly (in the auxiliary spaces $V_\alpha \otimes \cdots \otimes V_{\beta'}$ it is just a 16×16 matrix equation), and the commutativity (2) may be derived by the repeated use of (5) for the forms (4).
The layer-to-layer transfer matrix Z(λ, µ) may be interpreted in terms of the two-dimensional quantum inverse scattering method and quantum groups ([3] and e.g. [5, 6]). The transfer matrix (4) is the 2d transfer matrix, Z(λ, µ) = Trace of a product of Lax operators, Eq. (7). Due to the six-vertex structure of (3), the Lax operator (7) has a block-diagonal form (in the combinatorial formulation of Fig. 1 this means the conservation of the number of bold edges on the left and right of the $\alpha_n$-th column of Fig. 2), where $L_{\omega_m}$ is the Lax operator for the m-th fundamental representation $\pi_{\omega_m}$ of $U_q(\widehat{sl}_M)$ in the auxiliary space. In the quantum space, the Lax operator (7) acts in $\mathcal{F}^{\otimes M}$, where $\mathcal{F}$ is a representation space of the q-oscillator $\mathcal{H}$. The Lax operator (7) has the central element $q^{J_n} = q^{h_{n,1}+h_{n,2}+\cdots+h_{n,M}}$. (9)
For the Fock space representation of the q-oscillators, $\mathcal{F}^{\otimes M} = \bigoplus_{J=0}^{\infty} \pi_{J\omega_1}$ is the direct sum of the rank-J symmetric tensor representations of $U_q(\widehat{sl}_M)$. The R-matrix for the Lax operators (7) follows from (5). The matrix elements of M (6) depend on two extra parameters $\lambda_0, \mu_0$; they produce the corresponding decomposition. The definition of the "partition function" Z was initially N ↔ M invariant. In particular, the product $\mathrm{Trace}_{V_\beta}\prod_n L_{\alpha_n\beta}(\mathcal{H}_n;\lambda,\mu)$, alternative to (7), is the Lax operator for $U_q(\widehat{sl}_N)$ with the spectral parameter µ, while M becomes the length of the $U_q(\widehat{sl}_N)$ chain. The central elements of the $U_q(\widehat{sl}_N)$ L-operators are (cf. (9)) $q^{K_m} = q^{h_{1,m}+h_{2,m}+\cdots+h_{N,m}}$. (12)
Since both sets of occupation numbers $J_n$ and $K_m$ are integrals of motion, their eigenvalues define a sub-sector of the q-oscillator model. For example, if M = 2 and the lattice is interpreted as the $U_q(\widehat{sl}_2)$ chain of length N, the choice $J_n = 1$ gives us the six-vertex model (in general, $\pi_{J\omega_1}$ is the spin-J/2 representation of $U_q(sl_2)$), whereas $K_1$ and $K_2$ stand for the numbers of spins up and spins down.
We would like to conclude the letter by announcing the universal form of the nested Bethe Ansatz equations. Let u, v be an additional auxiliary Weyl pair, $uv = q^2 vu$, serving the following notations: $Q|u = Q(u)$, $Q|u|u = uQ(u)$, $Q|v|u = vQ(q^2 u)$. (14)
Let now (15) be defined (cf. (13), where the last-but-one expression is related to the $U_q(\widehat{sl}_M)$ Bethe Ansatz). Then the nested Bethe Ansatz equation for the $U_q(\widehat{sl}_M)$ chain (with the notations (14) taken into account) is Eq. (16). In this letter we considered a rectangular lattice with homogeneous λ, µ. The results of this paper may be generalized to the case of a lattice of any shape with an inhomogeneous set of $\lambda_j, \mu_j$. This, as well as the 3d-invariant derivation of (16), is the subject of forthcoming papers. | 2014-10-01T00:00:00.000Z | 2005-09-26T00:00:00.000 | {
"year": 2005,
"sha1": "ed107db79fb5c6e938dbf9ed8c59f9cf22ec1fa6",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/nlin/0509043",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "face1916b5c6f6c9023ab9757ecdcd008226ef61",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
1916549 | pes2o/s2orc | v3-fos-license | Meshes that trap random subspaces
In our recent work \cite{StojnicCSetam09,StojnicUpper10} we considered solving under-determined systems of linear equations with sparse solutions. In a large dimensional and statistical context we proved results related to the performance of a polynomial $\ell_1$-optimization technique when used for solving such systems. As one of the tools we used a probabilistic result of Gordon \cite{Gordon88}. In this paper we revisit this classic result in its core form and show how it can be reused to, in a sense, prove its own optimality.
Introduction
We start by looking back at the problem that we considered in a series of recent work [30,32,33]. It essentially boils down to finding sparse solutions of under-determined systems of linear equations. In more precise mathematical language, we would like to find a k-sparse x such that Ax = y, (1) where A is an m × n (m < n) matrix and y is an m × 1 vector (here and in the rest of the paper, under a k-sparse vector we assume a vector that has at most k nonzero components). Of course, the assumption will be that such an x exists. To make writing in the rest of the paper easier, we will assume the so-called linear regime, i.e. we will assume that k = βn and that the number of equations is m = αn, where α and β are constants independent of n (more on the non-linear regime, i.e. on the regime when m is larger than linearly proportional to k, can be found in e.g. [9,16,17]).
A particularly popular, polynomial-time way of attacking (1) is the $\ell_1$-optimization $\min_x \|x\|_1$ subject to Ax = y. (2) Due to its popularity, the literature on the use of the above algorithm is rapidly growing. We below restrict our attention to two, in our mind the most influential, works that relate to (2).
The first one is [6] where the authors were able to show that if α and n are given, A is given and satisfies the restricted isometry property (RIP) (more on this property the interested reader can find in e.g. [1,3,5,6,24]), then any unknown vector x with no more than k = βn (where β is a constant dependent on α and explicitly calculated in [6]) non-zero elements can be recovered by solving (2). As expected, this assumes that y was in fact generated by that x and given to us.
However, the RIP is only a sufficient condition for ℓ 1 -optimization to produce the k-sparse solution of (1). Instead of characterizing A through the RIP condition, in [11,12] Donoho looked at its geometric properties/potential. Namely, in [11,12] Donoho considered the polytope obtained by projecting the regular n-dimensional cross-polytope C n p by A. He then established that the solution of (2) will be the k-sparse solution of (1) if and only if AC n p is centrally k-neighborly (for the definitions of neighborliness, details of Donoho's approach, and related results the interested reader can consult now already classic references [11][12][13][14]). In a nutshell, using the results of [2,4,22,23,34], it is shown in [12], that if A is a random m × n ortho-projector matrix then with overwhelming probability AC n p is centrally k-neighborly (as usual, under overwhelming probability we in this paper assume a probability that is no more than a number exponentially decaying in n away from 1). Miraculously, [11,12] provided a precise characterization of m and k (in a large dimensional context) for which this happens.
In a series of our own work (see, e.g. [32,33]) we then created an alternative probabilistic approach which was capable of providing the precise characterization of the relation between m and k that guarantees success/failure of (2) when used for finding the k-sparse solution of (1). The approach was a combination of geometric and purely probabilistic ideas and used a bunch of tools from classical probability theory (most notably a couple of results of Gordon from [18] that we will revisit in this paper). The following theorem summarizes the results we obtained in e.g. [32,33].
Theorem 1. (Exact threshold) Let
A be an m × n matrix in (1) with i.i.d. standard normal components. Let the unknown x in (1) be k-sparse. Further, let the location and signs of nonzero elements of x be arbitrarily chosen but fixed. Let k, m, n be large and let α = m/n and $\beta_w$ = k/n be constants independent of m and n. Let erfinv be the inverse of the standard error function associated with a zero-mean unit variance Gaussian random variable. Further, let all ε's below be arbitrarily small constants.
1. Let $\hat{\theta}_w$ ($\beta_w \le \hat{\theta}_w \le 1$) be the solution of (3). If α and $\beta_w$ further satisfy (4) then with overwhelming probability the solution of (2) is the k-sparse x from (1).
2. Let $\hat{\theta}_w$ ($\beta_w \le \hat{\theta}_w \le 1$) be the solution of (5). If on the other hand α and $\beta_w$ satisfy (6), then with overwhelming probability there will be a k-sparse x (from a set of x's with fixed locations and signs of nonzero components) that satisfies (1) and is not the solution of (2).
Proof. The first part was established in [33] and the second one was established in [30]. An alternative way of establishing the same set of results was also presented in [29].
We below provide a more informal interpretation of what was established by the above theorem. Assume the setup of the above theorem, and let $\alpha_w$ and $\beta_w$ satisfy the following fundamental characterization of the $\ell_1$ performance, given in (7). Then: 1. If α > $\alpha_w$ then with overwhelming probability the solution of (2) is the k-sparse x from (1).
2. If α < α w then with overwhelming probability there will be a k-sparse x (from a set of x's with fixed locations and signs of nonzero components) that satisfies (1) and is not the solution of (2).
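As a small numerical illustration of the recovery problem behind this characterization, the sketch below (our own illustrative code, not the authors' implementation; SciPy's default LP solver is assumed) solves (2) as a linear program for a random instance of (1); for pairs (α, β_w) comfortably below the threshold, the recovered vector matches the planted k-sparse x with overwhelming probability.

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(A, y):
    """Solve min ||x||_1 subject to Ax = y by writing x = u - v with u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

n, m, k = 200, 100, 20                       # alpha = m/n = 0.5, beta_w = k/n = 0.1
A = np.random.randn(m, n)                    # i.i.d. standard normal matrix
x = np.zeros(n); x[:k] = np.random.randn(k)  # a k-sparse unknown with fixed support
print(np.linalg.norm(l1_recover(A, A @ x) - x))   # close to zero on successful recovery
```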
As mentioned above, to establish the result given in (7) we used a couple of classic probabilistic results from [18]. In the following section we will recall these results and see how they can be reconnected and in a way optimized.
We organize the rest of the paper in the following way. In Section 2 we introduce and briefly discuss the two theorems from [18] that we plan to revisit in this paper, while in Section 3 we create the mechanism for optimizing the second of the theorems in certain scenarios. Finally, in Sections 4 and 5 we discuss obtained results.
Key theorems
In this section we introduce the above mentioned theorems that will be of key importance in our subsequent considerations.
First we recall the following result from [18], which relates to statistical properties of certain Gaussian processes.
Theorem 2. ([18]) Let $X_{ij}$ and $Y_{ij}$, 1 ≤ i ≤ n, 1 ≤ j ≤ m, be two centered Gaussian processes which satisfy the following inequalities for all choices of indices: $E(X_{ij}^2) = E(Y_{ij}^2)$, $E(X_{ij}X_{ik}) \ge E(Y_{ij}Y_{ik})$, and $E(X_{ij}X_{lk}) \le E(Y_{ij}Y_{lk})$ for i ≠ l. Then $P(\cap_i \cup_j (Y_{ij} \ge \lambda_{ij})) \ge P(\cap_i \cup_j (X_{ij} \ge \lambda_{ij}))$.
Based on the above theorem, Gordon then went further and proved a more specific type of result, now widely known as the "Escape through a mesh" theorem. The result essentially looks at a particular class of Gaussian processes and connects them with the geometry of random subspaces and their intersections with given fixed subsets of high-dimensional unit spheres.
Theorem 3. ([18] Escape through a mesh) Let S be a subset of the unit Euclidean sphere $S^{n-1}$ in $R^n$. Let Y be a random (n − m)-dimensional subspace of $R^n$, distributed uniformly in the Grassmanian with respect to the Haar measure. Let $w_D(S) = E \sup_{w \in S} (g^T w)$, where g is a random column vector in $R^n$ with i.i.d. standard normal components, and assume that $w_D(S) < \sqrt{m}$. Then $P(Y \cap S = \emptyset) > 1 - 3.5 e^{-(\sqrt{m} - w_D(S))^2/18}$.
Remark: Gordon's original constant 3.5 was substituted by 2.5 in [25]. Neither constant is the subject of our detailed considerations. However, we do mention in passing that, to the best of our knowledge, it is an open problem to determine the exact value of this constant, as well as to improve and ultimately determine the exact value of the somewhat high constant 18.
In a more informal language, what Theorem 3 manages to create is a route to connect the location of a low-dimensional random subspace with respect to a given body and a seemingly simple quantity w(S). Then as long as one can get a handle of w(S) and dimensions are large enough one can get a pretty good feeling if a random flat will hit or miss the given body S. There are a couple of restrictions, though. What we call a body in an informal way is not really a body but rather a subset of the unit n-dimensional sphere and what we call a random flat is not really "just" a random flat but actually a subspace chosen uniformly randomly from the Grassmanian (one can think of it as a uniformly random choice among all subspaces of dimension n − m). We believe that it is easier to get a real feeling of the power of Gordon's results if one for a moment leaves technicalities out of the picture and instead views things in a more informal way.
Along the same lines, in our opinion, to fully understand the miraculous importance of Theorem 3 it is maybe a good starting point to have a firm hold of understanding of the original geometric question that it answers. The question is incredibly simple: there is a set S which is a subset of sphere S n−1 in R n . One then generates a uniformly random subspace (as we said above, in this paper, when we talk about uniformly random spaces/subspaces we of course view such a randomness in a Grassmanian sense) of dimension say n − m (where of course m ≥ 0) and wonders how likely is that such a subspace will intersect with S. One simple example that could help visualizing these high-dimensional geometric concepts would be to take n = 3 and look at a spherical cap of the sphere S 2 in R 3 . Then one can chose say n − m = 1 and basically wonder how likely is that a random line through the origin would intersect such a spherical cap. Of course when S is a spherical cap the answer is simple and can be obtained through a simple geometric consideration as the ratio of the spherical cap's area and the area of the entire unit sphere. On the other hand, geometrically speaking, it is immediately clear how much harder the question becomes if S is not a spherical cap and n and n − m are large.
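For the spherical-cap example the question can also be checked numerically. The sketch below (our own illustrative code; the cap is taken around the first coordinate axis, and all names are hypothetical) estimates the probability that a Haar-random (n − m)-dimensional subspace touches the cap, using the fact that the largest first coordinate over unit vectors of a subspace equals the norm of the projection of e_1 onto that subspace; it also estimates the corresponding quantity w_D(S), which can then be compared with √m.

```python
import numpy as np

def cap_hit_probability(n, m, t, trials=2000):
    """Fraction of Haar-random (n - m)-dimensional subspaces of R^n that intersect
    the cap S = {w : ||w||_2 = 1, w_1 >= t}."""
    e1 = np.zeros(n); e1[0] = 1.0
    hits = 0
    for _ in range(trials):
        Q, _ = np.linalg.qr(np.random.randn(n, n - m))   # orthonormal basis of a random subspace
        if np.linalg.norm(Q.T @ e1) >= t:                # subspace touches the cap
            hits += 1
    return hits / trials

def cap_gaussian_width(n, t, trials=20000):
    """Monte Carlo estimate of w_D(S) = E sup_{w in S} (g^T w) for the same cap;
    the supremum is ||g|| if g/||g|| already lies in the cap, and is attained on
    the cap boundary otherwise."""
    vals = np.empty(trials)
    for i in range(trials):
        g = np.random.randn(n)
        if g[0] >= t * np.linalg.norm(g):
            vals[i] = np.linalg.norm(g)
        else:
            vals[i] = t * g[0] + np.sqrt(1 - t ** 2) * np.linalg.norm(g[1:])
    return vals.mean()

n, m, t = 50, 30, 0.8
print(cap_hit_probability(n, m, t), cap_gaussian_width(n, t), np.sqrt(m))
```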
If one then looks back at the original question, which, as we discussed above, is purely geometrical, it seems almost unbelievable (at least a priori) that it can be transformed to a purely analytical problem.
The incredible contribution of Theorem 3 is exactly in its success in creating such a transformation and in effectively connecting this geometric question on one side and the properties of Gaussian processes on the other side. The idea of moving everything to the analysis terrain is great on its own; however, what is more astonishing is that often one can actually accomplish it. Still, when one moves to the analysis terrain there are several questions one should be able to tackle (the problem may just seem a bit easier when transferred to the analysis terrain, but nobody guarantees that it is actually easy!). The two questions that we found most pressing are: 1) Can one get a handle on $w_D(S)$ for any S? 2) Roughly speaking, the theorem only specifies what will happen if $w_D(S) < \sqrt{m}$. Is there a definite answer as to what will happen if $w_D(S) \ge \sqrt{m}$?
When it comes to answering the first question it doesn't seem that the answer would be yes. Still, experience says that for many "practical" sets S one can actually handle w D (S) (see, e.g. [31,33]). Even if computing the exact value of w D (S) may not be feasible there are possible alternatives. For example, one can try to bound w D (S) and in a way provide at least some kind of answer to the original geometric question. On the other hand, when it comes to the second question one could envision two possible scenarios.
Assuming that the answer to question 1) is no, one then may start looking at particular sets S and then wonder which are the sets S so that w D (S) can be handled. Then the first scenario would be to look at those S for which w D (S) would not be computable. Then even if one can give a definite answer to question 2) the whole concept would appear as a raw theory without final analytical concreteness. The second scenario would on the other hand relate to those S for which w D (S) can be computed. This scenario is actually probably the first next direction for possible further studies of Theorem 3. In the following section we look at this very same scenario and observe that for certain S one can actually provide a definite answer to question 2.
Revisiting Escape through a mesh theorem
In the first part of this section we will look at a couple of technical details that relate to quantities from Theorem 3. We first revisit $w_D(S)$. As stated in Theorem 3, $w_D(S)$ is given as $w_D(S) = E \sup_{w \in S}(g^T w)$. (8) To be a bit more specific, we will assume that the set S can be described through a functional equation, i.e. we will say that S = {w ∈ $S^{n-1}$ : f(w) < 0}. (11) We will then accordingly replace w(S) by the corresponding quantity $w_D(f)$, obtained by maximizing $g^T w$ subject to f(w) < 0 and $\|w\|_2 = 1$. (12)
Deterministic view
Clearly, to gain complete control over $w_D(f)$ (and basically $w_D(S)$) one ultimately has to consider its random origin. However, before going through the randomness of the problem we will try to provide more information about $w_D(f)$ (and a couple of other deterministic quantities that will be introduced below) on a deterministic level. Along these lines, to distinguish between the deterministic and random portions of $w_D(f)$ we will introduce the quantity w(f, g) in (13); then clearly $w_D(f) = E w(f, g)$. As mentioned above, for the time being we will focus on w(f, g). Also, to make the presentation easier, we will assume that the sup in (8), (10), (12), and (13) can be replaced with a max (also, for all other occasions in the paper where a sup may appear, we will assume that the scenarios are such that a max can replace it). Then (13) can be rewritten as (15). Transforming (15) a bit further, and using a Lagrangian multiplier, one can move the constraint on f(w) into the objective, from which the corresponding representations easily follow. We will now leave the deterministic portion of the analysis of w(S) (or, to be more precise, the analysis of w(f, g)) for a moment and switch to the consideration of a seemingly different optimization problem. Namely, we will consider the deterministic optimization problem (20) and through it introduce a new quantity τ(f, A). This quantity will in a way be an "almost" counterpart to w(f, g). At this point the purpose of introducing such a quantity may not be clear; however, as we progress further it will become more apparent what its meaning is and why we introduced it. Here we only mention roughly that τ(f, A) can be thought of as an indicator of whether the subspace of w's with Aw = 0 and the unit sphere $\|w\|_2 = 1$ have an intersection that is also contained in S. Namely, if τ(f, A) < 0 then indeed there is a w such that Aw = 0, $\|w\|_2 = 1$, and f(w) < 0. However, by the definition of S from (11), such a w is actually in S. On the other hand, if τ(f, A) ≥ 0 there is no w such that Aw = 0, $\|w\|_2 = 1$, and f(w) < 0, and automatically the intersection of the subspace Aw = 0 and the unit sphere $\|w\|_2 = 1$ misses the set S. Going back to (20) and using again the Lagrangian multipliers, one can move the subspace constraint into the objective and obtain τ(f, A) as a min-max expression. Now, we will assume that the structure of the set S is determined by a function f(w) for which (22) also holds. In fact, as will be clear from the subsequent analysis, the property that we will mostly utilize is actually the sign of τ(f, A). Having that in mind, one can actually relax requirement (22) a bit to the sign condition sign(τ(f, A)) = sign(min(·)), (23). Clearly, (22) or (23) will not hold for every f(w) and every A; however, we will assume that there are f(w) and A for which they do hold. Rearranging (22) a bit we obtain (24), and rearranging (23) a bit we obtain (25). At this point one should note that, while the quantities w(f, g) and τ(f, A) are random, so far they have been treated as deterministic. In other words, we viewed them as functions of a fixed pair (g, A). Moreover, they are in good enough shape that we can switch to the probabilistic portion of their analysis. The probabilistic portion will essentially determine the typical behavior of these two quantities when the components of g and A are i.i.d. standard normals.
Probabilistic view
To obtain a probabilistic view on quantities w(f, g) and τ (f, A) we will invoke the results of Theorem 2. We will do so through the following lemma which is slightly modified Lemma 3.1 from [18] (Lemma 3.1 is a direct consequence of Theorem 2 and the backbone of the escape through a mesh theorem). (−ν T Aw+ ν 2 g−ζ w,ν ) ≥ 0) ≥ P ( min
Lemma 1. Let
Proof. The proof is exactly the same as is the one of Lemma 3.1 in [18].
Let $\zeta_{w,\nu} = \epsilon_5^{(g)}\sqrt{n}\,\|\nu\|_2 + f(w)$, with $\epsilon_5^{(g)} > 0$ being an arbitrarily small constant independent of n. We will first look at the right-hand side of the inequality in (26) and the corresponding probability of interest. After pulling out $\|\nu\|_2$, replacing $\|\nu\|_2$ with a scalar $\frac{1}{\lambda_\nu}$, and solving the minimization over different ν with a fixed $\|\nu\|_2$, one obtains (28), where $\epsilon_1^{(m)} > 0$ is an arbitrarily small constant and $\epsilon_2^{(m)}$ is a constant dependent on $\epsilon_1^{(m)}$ but independent of n. From (28) one then obtains the required estimate. We now look at the left-hand side of the inequality in (26).
(This assumption can be avoided; however, in the interest of maintaining as simple a presentation as possible, we will state it.) Moreover, let (36) hold. Then from (35) we have (37). Finally, if all the assumptions we made indeed hold, in other words if (22) (or (23)), (36), and (37) hold, then for large n one has with overwhelming probability that the random subspace of w's with Aw = 0 will intersect the set S on the unit sphere.
We are now in position to state the following theorem which in a way complements Theorem 3.
Theorem 4. (Trapped in a mesh)
Let m and n be large, with m < n but proportional to n. Let S be a subset of the unit Euclidean sphere $S^{n-1}$ in $R^n$. Moreover, let S be such that it can be characterized through a function f(w) as in (11), i.e. S = {w ∈ $S^{n-1}$ : f(w) < 0}. Let Y be a random (n − m)-dimensional subspace of $R^n$, distributed uniformly in the Grassmanian with respect to the Haar measure; for example, let Y = {w ∈ $R^n$ : Aw = 0}, where A is an m × n matrix of i.i.d. standard normals. Let g be an m × 1 vector of i.i.d. standard normals. Further, let $\xi_D$ be the deterministic quantity introduced above. Assume that f(w) is such that (22) (or (23)) and (36) hold. 1) Let $\epsilon_1$ and $\epsilon_2$ be arbitrarily small constants and let m be such that the condition (44) holds. Then $\lim_{n\to\infty} P(Y \cap S \ne \emptyset) = 1$.
2) On the other hand, let m be such that the condition (45) holds. Then $\lim_{n\to\infty} P(Y \cap S = \emptyset) = 1$.
Proof. The first part follows from the discussion presented above. For the second part, we first observe the relation that follows from (19) and (33); then a combination of (48) and the condition given in (45) is enough to apply Theorem 3 and obtain (46).
In essence the above theorem provides a characterization of sets S for which one can determine in a sense an optimal maximal/minimal dimension of the missing/intersecting subspace Aw. Of course the result of the previous theorem will be useful as long as one is able to handle (compute) ξ D . Also, one should note that there are numerous other ways that can be used to present the main results obtained above. We chose the way given in the above theorem in order to be as close as possible to the original formulation given in Theorem 3 and at the same time to maintain a presentation that would in a way hint what the main ideas behind the entire mechanism are. For example, among many alternative formulations, the following two are probably even more natural than the version presented in the above theorem. First, instead of trying to formulate results along the lines of Theorem 3 one can formulate probabilistic results based on (38) and the corresponding ones that can be obtained in analogous way for w(f, g). We skip this exercise but do mention that in the absence of Theorem 3 such a presentation would be our preferable one. Second, instead of relying on quantity ξ D one can rely on the original w D (S). Since this modification is relatively simple we will provide a brief sketch of it below. We also do mention that this modification will in the end produce results that are visually more similar to the ones given in the original formulation in Theorem 3. However, to achieve a mere similarity one is in a way forced to remodel formulations given in Theorem 4 which in our view contain a bit of a flavor as to how the entire mechanism works. That way one ultimately produces a visual analogue to Theorem 3 but at the expense of losing a bit of the hint as to what the core of the presented concept is. Still, we do believe that it is convenient to have such a formulation handy and we therefore present it below.
As in the previous subsection, we will make the assumption that w(f, g) concentrates around w D (S) = w D (f ) = Ew(f, g) (which is a bit easier to ensure than the concentration of ξ(f, g); a way for doing so can be deduced from [18]) and that w D (S) = w D (f ) ∼ √ n, i.e.
(The assumption can also be avoided; as mentioned above, one way to do so even for a fairly general f is to follow the presentation of [18]; however as was the case in the previous subsection, in the interest of maintaining as simple a presentation as possible we will simply assume (54)). Moreover, let Then from (53) we have Finally, if all assumptions we made indeed hold then In other words, if (22) (or (23)), (54), and (55) hold then for large n one has with overwhelming probability that the random subspace of w's, Aw, will intersect set S on the unit sphere.
We are now in a position to state the following theorem, which is an alternative formulation of Theorem 4 and, like Theorem 4, in a way complements Theorem 3.
Theorem 5. (Trapped in a mesh -alternative)
Let m and n be large and m < n but proportional to n. Let S be a subset of the unit Euclidean sphere S n−1 in R n . Moreover, let S be such that it can be characterized through a function f (w) in the following way Let Y be a random (n − m)-dimensional subspace of R n , distributed uniformly in the Grassmannian with respect to the Haar measure. For example, let where A is an m × n matrix of i.i.d. standard normals. Let g be an m × 1 vector of i.i.d. standard normals.
2) On the other hand, let m be such that Proof. The first part follows from the discussion presented above. The second part follows from Theorem 3 and parts of its proof given in [18].
Visually speaking, Theorem 5 may seem as a more natural complement to Theorem 3. It is probably even a bit simpler than the formulation given in Theorem 4. On the other hand, formulation in Theorem 4 is still our preferable one. In a way, it contains a bit of a description of what really is the key to success of the entire mechanism. If one is to give only the second portion of these theorems we do believe that then Theorem 5 is a more suitable choice (of course, by no surprise that is exactly what was done in [18]).
Comments
As far as understanding of the above theorems goes, there are several comments that we believe are in order. Below are some of them.
1. As one compares the statements of Theorems 4 and 5 on one side and the statement of Theorem 3 on the other it is clear that the concentration results are stated differently. In fact, not only are they stated differently they are also way inferior in Theorems 4 and 5. We did mention right after Theorem 3 that determining concentrating constants is to the best of our knowledge an open problem even in the original formulation given in Theorem 3. The same remains true for both of our theorems. The difference though is that while constants in Theorem 3 are most likely not the best possible ones, they are, when compared to generic ǫ's (given in our theorems), much better. We do mention that in this paper our major concern was a general type of result that relates to relation between w D (S) (ξ D ) and m rather than a precise concentration analysis. Still, it would be of great importance if one could provide a way more precise analysis and determine ultimate optimality of concentrating constants as well. Our ǫ's can relatively easily be translated into concrete numbers. However, determining their optimal values is actually what requires a more careful approach. In fact, quite possibly, one may end up obtaining the optimal constants which are very large (simply, because one would have to encompass the entire family of sets S; such is the standard set by the generality of some of results presented in Theorems 3, 4, and 5!). This is partially the reason why we haven't stated any specific constants but rather left such a problem to be solved on individual case basis.
2. Another important question that may arise based on our presentation is which of many alternative formulations would be the best possible. Answering such a question seems rather hard. Our experience is that when the mechanism works then typically everything (every quantity of interest) concentrates and if one is then fine with ignoring specifics of concentrations then essentially all formulations are fine.
3. The results presented above will not hold for all sets S. The question then remains whether one can determine the class of sets S for which they will hold (such a subclass is determined by the two above theorems).
4. How hard is it for a function to actually satisfy the assumptions that we have made? This is again a very generic question and it seems that it is better to form a class of functions for which they do hold, instead of trying to exclude those for which they do not.
5. How limiting/general are our descriptions of set S? In reality the description of set S that we assumed is rather simple. We basically assumed that the entire set can be characterized through a functional inequality. However, our assumption was made mostly for exposition purposes. The entire mechanism would go through as well even if set S was characterized by an arbitrary number, say L, of functional inequalities, i.e., f (l) (w) ≤ 0, l = 1, 2, . . . , L.
Discussion -how all of it actually works
While the results presented in the previous section may seem a bit dry they are actually quite powerful. However, to really get a feeling how powerful they are one would have to convince himself/herself that there are scenarios when they can be used. While conceptually we discovered an array of sets S for which subspace dimension results of Theorem 3 eventually through Theorems 4 and 5 become optimal we believe that it is easier to grasp the concept on small examples. Of course that is the reason why in the first part of the paper we briefly presented a problem that we were able to attack to full optimality using the mechanisms formulated in Theorems 4 and 5. Below we will briefly sketch how the results presented in Section 1 actually fit into the context of the machinery presented in the previous section. Before doing so we just provide a small example that shows how the entire machinery can be modified a bit if function f (w) is of a special type.
Homogeneous f (w)
When function f (w) is homogeneous one can actually change a bit the presentation described above. In fact the presentation can be changed in many other scenarios as well; however we selected this one just to give a flavor as to what are possible options. Another reason is that sketching how the results given in Section 1 fit into what was presented above will be a bit easier. Now, let f (w) be a homogeneous function. Namely, let f (w) be such that for any a > 0 and a d > 0. Then we say that function f (w) is positive homogeneous of degree d. Then for all practical purposes one can redefine τ (f, A) from (20) in the following way Proceeding then as in Section 3.1 one can write and assume that the structure of set S is determined by a function f (w) for which it also holds If (as in Section 3.1) one instead focuses only on the sign of τ (h) (f, A) one can relax a bit requirement (68) to After rearranging (68) a bit we have and after rearranging (69) a bit we have Now one can repeat all the derivations from Section 3.2 with τ (h) (f, A) instead of τ (f, A). As a final result one would wind up with the theorems that are exactly the same as Theorems 4 and 5. The only difference is that the assumptions on f (w) would be those from (68) (or (69)) instead of those from (22) (or (23)). This is a bit convenient since it essentially boils down to a duality over a convex set. Of course, everything we mentioned in this subsection remains true for any function for which the sign of τ (f, A) from (20) does not change if one relaxes the sphere condition to the ball condition.
An example of set S where everything works
In this subsection we sketch how the results presented in Section 1 fit into the framework given in Section 3.2. We recall first that the problem that we were interested in in Section 1 is essentially the following: for a given n-dimensional k-sparse vector x̃ (with say the last n − k components being zero) can one estimate the dimension of matrices A in (1) such that the solution of (2) is actually k-sparse. In fact let us be a bit more specific. Let us look at a k-sparse vector x̃ (given the statistical structure that will later on be assumed on A, one can, without a loss of generality, set x̃ i = 0, i = k + 1, k + 2, . . . , n). Now, the question of interest is: given A and Ax̃ (where A is an m × n matrix and is typically called the measurement matrix) can one find x such that Ax = Ax̃.
To make sure that we maintain consistency we do emphasize that Ax̃ in (72) is what y in Section 1 is (in other words, although we did not state it anywhere in Section 1, y was essentially implied to be constructed as the product of matrix A and a k-sparse vector x̃). As we have mentioned in Section 1 a popular way to attack the above problem is to solve (2), i.e. the following optimization problem While the original problem (72) is NP-hard in the worst case, the optimization problem in (73) is clearly solvable in polynomial time. Let x̂ be the solution of (73). The question then is how often (if ever) x̂ = x̃. The line of thought first goes through the recognition that x̂ will be the k-sparse x̃ only if there is no w such that x = x̃ + w, where w is in the null space of A and satisfies (see, e.g. [30,32,33]) If one then defines set S on the unit sphere S n−1 based on this parametrization of non-favorable w's one effectively obtains If one then defines f (w) as then clearly we have which fits into the description of S given in (11). Moreover, S and ultimately f will indeed satisfy all assumptions that we have made. Namely, f (w) from (76) is positive homogeneous of degree 1 and the duality in (68) and (69) will easily hold. Also, let (as in Theorems 4 and 5) Now, if one looks at all w's from the null-space of A, i.e. at set Y , one can then connect the intersection of sets Y and S with x̂ being equal or not to x̃. Namely, if Y ∩ S = ∅ then x̂ = x̃ and if Y ∩ S ≠ ∅ then there will be an x̃ such that x̃ i = 0, i = k + 1, k + 2, . . . , n and x̂ ≠ x̃. Now, if one views the problem in a random context with matrix A being an m × n matrix of i.i.d. standard normals, then one can for a given ratio k/n determine the critical value of the ratio m/n, m w /n = w D (S) 2 /n, so that for m/n > m w /n = w D (S) 2 /n with overwhelming probability x̂ = x̃ for all x̃ such that x̃ i = 0, i = k + 1, k + 2, . . . , n. On the other hand, for m/n < m w /n = w D (S) 2 /n with overwhelming probability there is an x̃ such that x̃ i = 0, i = k + 1, k + 2, . . . , n and x̂ ≠ x̃ (in fact to be more in alignment with our theorems, instead of with overwhelming probability we should say with a probability that goes to one as n → ∞).
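As a purely illustrative aside, the ℓ1-optimization problem (73) can be posed as a linear program and solved numerically. The sketch below (with dimensions, sparsity and the random measurement matrix chosen arbitrarily by us, not taken from the text above) generates a k-sparse x̃, forms y = Ax̃ and checks whether the ℓ1 minimizer x̂ coincides with x̃.

# Minimal sketch of the l1-optimization (basis pursuit) problem described above:
# recover a k-sparse vector x_tilde from y = A x_tilde by minimizing ||x||_1
# subject to A x = y.  The dimensions, sparsity and random seed are arbitrary
# illustrative choices.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 60, 40, 5                      # ambient dim, number of measurements, sparsity

x_tilde = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_tilde[support] = rng.standard_normal(k)

A = rng.standard_normal((m, n))          # i.i.d. standard normal measurement matrix
y = A @ x_tilde

# Standard LP reformulation: x = p - q with p, q >= 0, minimize sum(p) + sum(q)
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n), method="highs")

x_hat = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(x_hat - x_tilde))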
Of course, what we presented above is just how critical m w can be connected to w D (S). In a way that solves only a half of the problem. The second half is to actually determine w D (S). That relates to question 1) that we mentioned in the short discussion after Theorem 3. On the other hand, our main concern in here is question 2) from the very same discussion and along the same lines details related to handling w D (S) go beyond the scope of this paper. However, we do mention in passing that computing w D (S) was one of the problems of interest in [30,33] and the results obtained there are actually those presented in Theorem 1.
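Computing w D (S) for the specific set S above is, as noted, beyond our scope here, but the type of quantity involved can be illustrated numerically. For the toy set of unit-norm vectors with at most k nonzero entries (not the set S of this section), the supremum of g^T w over the set has a closed form for each sample of g, so the Gaussian-width-type quantity E sup over the set of g^T w can be estimated by Monte Carlo, as in the sketch below.

# Illustrative Monte Carlo estimate of a Gaussian-width-type quantity
# w(S) = E sup_{w in S} <g, w> for the simple set S of unit-norm vectors with
# at most k nonzero entries (NOT the set S defined in this section; just a toy
# set for which the supremum is available in closed form per sample).
import numpy as np

rng = np.random.default_rng(1)
n, k, trials = 200, 10, 2000

vals = np.empty(trials)
for t in range(trials):
    g = rng.standard_normal(n)
    # sup over unit-norm k-sparse w of <g, w> = l2-norm of the k largest |g_i|
    top_k = np.sort(np.abs(g))[-k:]
    vals[t] = np.sqrt(np.sum(top_k ** 2))

w_est = vals.mean()
print("estimated width             :", w_est)
print("squared width divided by n  :", w_est ** 2 / n)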
Also, what we presented in this section is a simple way how one can interpret the entire mechanism from previous sections when it comes to a particular set S. The interpretation given above is related to a rather simple set S. A more complicated version of S where everything also works can be found in e.g. [28,31].
Conclusion
In this paper we revisited a couple of classic probability results from [18]. These results relate to the geometry of the intersection of random subspaces and subsets of the unit sphere in R n and properties of Gaussian processes. Namely, in [18], the likelihood of having a random subspace of R n of dimension n − m intersect a given set S on the unit sphere was connected to a quantity describing set S called the Gaussian width. Moreover, it was shown that m can go (roughly speaking) as low as the squared Gaussian width without having any significant likelihood of the random (n − m)-dimensional subspace intersecting set S. In this paper we provided a characterization of a class of sets S for which, if m goes lower than the squared Gaussian width of S, then it is highly likely that the (n − m)-dimensional subspace will intersect set S. In a way we provided a partial complement to the results of [18].
Also, to give a bit more flavor to a rather dry presentation of high dimensional geometry we gave a fairly detailed presentation of how the results that we created can in fact be utilized. We chose an example that deals with solving under-determined systems of linear equations with sparse solutions. It turns out that when the systems are random and gaussian the success of a technique called ℓ 1 -optimization when used to solve them can be connected to the problem of random subspaces intersecting given set S on the unit sphere. We described how such a connection can be established and then provided a sketch as to how the main results of this paper actually work when such a connection is established.
While we presented only one specific example to give a flavor of how everything practically works, the overall methodology is way more powerful. There are various other instances where we were able to successfully employ the majority of the ideas presented here. Moreover, the mechanisms presented here are in fact a subcase of a much larger concept. In this paper though our focus was on particular geometric results established in [18] and how one can complement them. On the other hand, when viewed outside the scope of the results presented in [18] our methodology admits consideration of substantially more general concepts. This goes way beyond the particular problems that we considered in this paper and we will present it elsewhere.
Finally, it is quite likely that Gordon's original results that we revisited here were only a tool towards much higher mathematical goals. Among them would immediately be a better version of the Dvoretzky theorem already established in Gordon's original work. Our results can then be used to complement all of such results where Gordon's estimates turned out to be of use. Of course, revisiting all of these takes a substantial effort that goes way beyond what we planned to present here. Here we only focused on the heart of the idea, which essentially boils down to a simple reuse (with a little bit of our own recognition that duality theory can be quite powerful) of Gordon's mechanism to prove its own optimality. | 2013-03-28T21:39:26.000Z | 2013-03-28T00:00:00.000 | {
"year": 2013,
"sha1": "abf1c944779963bd3298a8f3a7ed8798f31ee1d1",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "abf1c944779963bd3298a8f3a7ed8798f31ee1d1",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
269197481 | pes2o/s2orc | v3-fos-license | Matrix transformations of sequences and applications in Fourier analysis
In the presented paper we consider a sequence together with its Nörlund mean and a generalized mean derived from a matrix transformation. Furthermore, sufficient conditions on the matrix are found under which convergence of the Nörlund means implies convergence of the generalized matrix means. The results drawn make it possible to transfer well-known theorems in the theory of Fourier series to more general sequences of operators.
Introduction
The theory of convergence of Fourier series has a long history. Kolmogoroff [1,2] solved the problem posed by Luzin, proving that there exists an integrable function whose Fourier series with respect to trigonometric systems is divergent at all points. A similar problem was solved for Walsh systems by Stein [3], and for Vilenkin systems by Schipp [4]. The purpose of the new sequence obtained by matrix transformations is to avoid "bad" properties. A new sequence may have a "good" property, such as being convergent almost everywhere for every integrable function. The situation is even more complicated for functions of two variables. It is known that even continuous functions do not provide convergence almost everywhere. For trigonometric systems see Fefferman [5], and for Walsh-Paley systems see Getsadze [6]. The partial sums for double Fourier series can be considered in different ways, for example, rectangular partial sums, spherical partial sums, triangular partial sums, quadratic partial sums, and so on. Rectangular partial sums are represented as tensor products of one-dimensional partial sums, and therefore the new sequence obtained as a result of the matrix transformation can be represented as a tensor product of one-dimensional sequences. Almost everywhere convergence of arithmetical means of rectangular partial sums of Fourier series has been studied by Marcinkiewicz and Zygmund [7] for trigonometric systems and by Móricz, Schipp, and Wade [8], Simon [9], Gát [10] for Walsh systems. Moreover, a more general theorem was recently proved by Gát and Karagulyan [11]. The situation is different for quadratic partial sums or triangular partial sums. In particular, in the case of trigonometric systems, the problem of the almost everywhere convergence of arithmetic means of quadratic and triangular partial sums for every integrable function was solved by Zhizhiashvili [12] and Herriot [13], respectively. In the case of Walsh systems, see Weisz [14,15], Gát [16], Goginava [17], as well as for the case of bounded Vilenkin systems [18]. Our goal is to prove these theorems for sequences that can be obtained by more general matrix transformations. Our study is based on the method presented in Hardy's monograph [19]. In more detail: the connections between the various ( , ) methods were explored in Hardy's monograph [19]. In other words, the necessary and sufficient conditions have been established so that convergence of a sequence by one method leads to convergence by another method. Studying the issues in this way is important, because it can be used in the study of various questions. One method's properties, for instance, can be applied to other methods. Note that recently several papers have been published related to the matrix transformations of sequences [20][21][22][23][24][25][26][27][28][29][30].
Matrix transformations of sequence
Let ℙ be the set of positive integers, ℕ := ℙ ∪ {0}. The set of all integers will be denoted by ℤ. The following notations will also be used below: We say that the transformation is regular if it carries every convergent sequence into a sequence converging to the same limit. In order for it to be regular, it is necessary and sufficient that (see [31, Ch. 3] or [19, p. 43-57]): (a) the row sums of the matrix tend to 1 as n → ∞; (b) the entries in each fixed column tend to 0 as n → ∞; (c) the sums of the absolute values of the entries in each row are bounded by a constant which is independent of n.
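As a concrete illustration of the regularity conditions (a)-(c), the small script below checks them numerically for the arithmetic-mean matrix with entries 1/(n + 1) in row n (the entry notation a_{n,k} used in the comments is ours, since the original symbols are not reproduced here).

# Numerical sanity check of the regularity conditions (a)-(c) for the
# arithmetic-mean (Cesaro) matrix a_{n,k} = 1/(n+1), 0 <= k <= n.
import numpy as np

N = 500
A = np.zeros((N, N))
for n in range(N):
    A[n, : n + 1] = 1.0 / (n + 1)

row_sums = A.sum(axis=1)                 # condition (a): row sums tend to 1
col_decay = A[:, 3]                      # condition (b): a_{n,3} -> 0 as n grows
abs_row_sums = np.abs(A).sum(axis=1)     # condition (c): uniformly bounded

print("last row sum           :", row_sums[-1])
print("a_{n,3} for large n    :", col_decay[-1])
print("max of sum_k |a_{n,k}| :", abs_row_sums.max())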
Let us define the following matrices.
Consider some examples: Example 1.The arithmetical means is denoted by The (, )-means is defined as follows: where We set ∶= { ∶ ≥ 0} as a sequence of non-negative numbers.We define the th Nörlund means of the sequence We will say that the ∶= { ∶ ≥ 0} sequence is convergent to by method ( , ) It can be written as follows .
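Since the displayed definitions above are not reproduced here, the sketch below spells out the standard Nörlund mean t_n = (1/P_n) Σ_{k=0}^{n} p_{n−k} s_k, with P_n = p_0 + ... + p_n, and evaluates it for an illustrative choice of weights and sequence chosen by us: with p_k ≡ 1 one recovers the arithmetic (C, 1) means, and the partial sums of 1 − 1 + 1 − ... are averaged to 1/2 even though they do not converge.

# Illustrative computation of Noerlund means t_n = (1/P_n) * sum_{k=0}^{n} p_{n-k} s_k.
# Here s_k are the partial sums of the series 1 - 1 + 1 - ..., i.e. s_k = 1, 0, 1, 0, ...
# With p_k = 1 (arithmetic means) the Noerlund means converge to 1/2 even though
# the sequence s_k itself does not converge.  The choices are purely illustrative.
import numpy as np

N = 2000
s = np.array([1.0 if k % 2 == 0 else 0.0 for k in range(N)])
p = np.ones(N)                           # p_k = 1 gives the arithmetic (C,1) means

P = np.cumsum(p)                         # P_n = p_0 + ... + p_n
t = np.array([np.dot(p[: n + 1][::-1], s[: n + 1]) / P[n] for n in range(N)])

print("s_N for large N   :", s[-1])      # oscillates between 1 and 0
print("Noerlund mean t_N :", t[-1])      # close to 1/2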
Note that for the non-negative weighted Nörlund means condition (2) is equivalent to the regularity. The aim of the presented paper is to establish the connections between the ( , ) and methods.
The following theorem of Hardy will be used to prove our theorem (see [19, p. 68-70]).
Theorem H. Let method ( , ) be regular and where 0 = 1, < 0 ( > 0) and Hence, the series ( is fixed) has also a positive radius of convergence and consequently, since we conclude the convergence of series From ( 4) and ( 8) we can write Now, we assume that = .Then we obtain () .prove the theorem, it is sufficient to prove that , ∶= −, satisfies the conditions () − ().Let's say = 1, ∈ ℕ, then it is easy to see that () = 1 and () () = 1 and therefore from equality ( 9) property (a) is obtained.Now, we prove property (b).For this it is sufficient to prove that lim →∞ −, = 0 for each fixed .Indeed, from (5) we can write Hence, property (b) holds.It remains to prove property (c).First of all we note that from (4) we can write From which we get that We can write ) .
Trigonometric Fourier series
We denote by L¹(I), I := [0, 1), the set of all 1-periodic, Lebesgue measurable functions with finite L¹ norm. Then, using Theorem 1, we get the following theorem. Next, we introduce an orthonormal system which is called the Vilenkin system. At first we define the complex valued, generalized Rademacher functions in this way. Now we define the Vilenkin system as follows.
Specifically, we call this system the Walsh-Paley one if ≡ 2.
For the Vilenkin system, the Dirichlet and Fejér kernels are defined as follows, respectively.
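The kernel formulas themselves are not reproduced here. As an illustrative sketch in the Walsh-Paley case (the system with m ≡ 2 mentioned above), the following code builds the Walsh-Paley functions on a dyadic grid and computes the Fejér (arithmetic) means of the Walsh-Fourier partial sums of a simple integrable test function. The test function, grid size, the convention σ_N f = (1/N) Σ_{n=1}^{N} S_n f and the evaluation point are our own choices.

# Sketch: Walsh-Paley functions and Fejer (arithmetic) means of a Walsh-Fourier
# series, evaluated on a dyadic grid.  All concrete choices are illustrative.
import numpy as np

J = 12
M = 2 ** J                                   # dyadic grid of size 2^J
x = np.arange(M) / M
bits = (np.arange(M)[:, None] >> np.arange(J)[None, ::-1]) & 1   # dyadic digits of x = i/M

def walsh(n):
    """Walsh-Paley function w_n on the grid: (-1)**(sum_j n_j x_j)."""
    n_bits = (n >> np.arange(J)) & 1
    return (-1.0) ** (bits @ n_bits)

f = (x < 1.0 / 3.0).astype(float)            # an integrable test function

N = 256
W = np.array([walsh(n) for n in range(N)])   # rows: w_0, ..., w_{N-1} on the grid
coeffs = W @ f / M                           # (approximate) Walsh-Fourier coefficients

S = np.cumsum(coeffs[:, None] * W, axis=0)   # S[n] = partial sum S_{n+1} f on the grid
sigma = S.mean(axis=0)                       # Fejer mean sigma_N f

i = int(0.7 * M)                             # a point away from the jump of f
print("f(0.7)         :", f[i])
print("sigma_N f(0.7) :", sigma[i])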
Vilenkin-Lebesgue points
Weisz introduced the one-dimensional Walsh-Lebesgue point in [34]: where G 2 is the Walsh group. Weisz proved in [34] that a.e. point of G 2 is a Walsh-Lebesgue point of an integrable function f. Moreover, the Fejér means of the Walsh-Fourier series of f ∈ L¹(G 2 ) converge to f at each Walsh-Lebesgue point.
In [35] the set of convergence of Vilenkin-Fejér means is characterized, and the following operator is introduced. Pál and Simon [36] proved that the one-dimensional Fejér means of the Vilenkin-Fourier series of an integrable function on a bounded Vilenkin group converge a.e. to the function. The first author and Gogoladze [35] generalized this theorem and proved that the following is true.
Theorem GG. The Vilenkin-Fejér means of every integrable function f converge at every Vilenkin-Lebesgue point, and almost every point is a Vilenkin-Lebesgue point.
In this section, our goal is to generalize this statement to more general sequences of operators. First, Theorem GG will be generalized to the Cesàro means, and then, using Theorem 1, Theorem GG will be extended to a more general sequence of operators (see Theorem 4 below).
for all Vilenkin-Lebesgue points of f.
We note, that Theorem 4 is also new for Walsh-Paley systems.
Unbounded Vilenkin group 𝐺 𝑚
We point out that little is known in the literature about unbounded Vilenkin groups. For example, the analogue of Carleson's theorem is unknown, as is the almost everywhere convergence of arithmetic means. The almost everywhere convergence of partial sums of subsequences was studied by Young Wo-Sang [42,43]. In 1999 Gát [44] proved the following theorem.
Zhizhiashvili [12] improved this result and proved that the condition (22) can be omitted. Moreover, the following theorem was proved.
holds almost everywhere on 2 .
Combining Theorem 1 and Theorem Zh, we conclude the following.
Remark 1. Let us note that the problems of almost everywhere convergence of Nörlund means of quadratic partial sums of two-dimensional trigonometric Fourier series were studied by Herriot [13].
The next theorem has been proved by Herriot [48]. Using Theorem 1, the following theorem can be proven. | 2024-04-18T15:18:34.608Z | 2024-04-01T00:00:00.000 | {
"year": 2024,
"sha1": "9f7392a01269ade81c8acfad536891071de4e059",
"oa_license": "CCBYNC",
"oa_url": "http://www.cell.com/article/S2405844024056160/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "53e8d9f83266fe97053933aa3e8344d4ce6f2191",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": []
} |
5513095 | pes2o/s2orc | v3-fos-license | NURSING EDUCATION AS ADULT EDUCATION a philosophical standpoint
The author takes the standpoint that if nurses in South Africa are to meet the challenges of the future, a change in fundamental views on nursing education is essential. One of the views that must change is the notion that basic nursing education is pedagogy rather than andragogy: the student nurse must be regarded as an adult learner engaged in adult education. This will create new possibilities for self-directed and learner-centred teaching. The fruit of this will be good patient care by nurses for whom learning is a continuous process.
DEFINING ADULT EDUCATION
There are probably as many definitions of adult education as there are authors on the subject. In most cases the definition reflects the particular bias of the author. In its opening statement on the subject, The Encyclopedia for Educational Research (1 p. 30) notes that adult education may be defined in many ways.
These many definitions may range from very broad to very narrow perspectives. The most frequently mentioned characteristics of adult education in the literature were reviewed in one study. These are that adult education is characterised as being: • voluntary on the part of the learner; • part-time; • under organised auspices; • for persons beyond school age (1 p. 30). Each characteristic has its own problems, depending upon one's vantage point. The voluntary nature of adult education is probably the least questionable.
When the term adult education was first used, it was applied to the educational activities associated with what is today known as remedial adult education. Its target population was those adults who had received little or no basic school education. Later, other meanings were applied. Each reflected the needs of the individual adults participating in the education process.
One such restricting application emphasises only the organised activities and programmes concerned with the education of adults (2 p. 15). In this sense it embraces the whole complex of educational institutions, professional and semi-professional bodies and voluntary organisations which organise adult education programmes. It ignores the informal unorganised education undertaken by individual adults who perceive their own needs.
In its broadest application, the term adult education may be defined as a means of social adjustment and an educational movement (1 p. 30). This allows us to come to grips with the very essence of its nature. When viewed in this way, education for adults is a means of adjusting to a changing world and is a by-product of the scientific age. Therefore, as an educational movement it is based on the needs for social adjustment by Modern Man.
Margaret Mead (2 p. 68) put it succinctly when she noted: A great deal of what needs to be taught to adults today was unknown when they were young. Continuing education throughout life has become a necessity in almost every field, from housekeeping to atomic physics . . . In the advanced countries the impact of change is emphasising the need for continuing education throughout life and forcing a fresh assessment of the whole educational system in the light of this concept.
This quotation serves to highlight the philosophy underlying adult education. Education is a life-long process which does not halt at the end of one's schooling. This is especially true of Man in the twentieth century. In order to adjust to his rapidly changing environment he must constantly learn new skills and gain knowledge. Adult education should then be viewed as one part of the process of education.
EDUCATION A CONTINUOUS PROCESS
The first of three principles (3 p. 58) that any adult education programme should bear in mind is that education is not complete when a man or a woman leaves school and goes to work. The second is that it is a continuous process which goes on throughout life and affects all aspects of life. This includes the growth of the individual, with all facets of his development: aesthetic, intellectual, physical and vocational.
The third principle of adult education is that most adults can and want to learn, but their capacity to study is lost through disuse. Therefore, it is important to provide opportunities for the educational process to continue uninterrupted so that their learning skills are not lost.
The preceding discussion has brought our attention to the fact that adult education is merely part of the continuum of education. It brings into question the present education system which does not emphasise sufficiently to young people that they have not completed their education when they leave school or even finish vocational training. They must be taught, as part of this early education, that they are being prepared for further life-long study.
If education is viewed as a continuous process throughout life, there is a need not only for a change in approach and methods in education, but also for a complete re-appraisal of methods, approaches and curricula in primary, secondary and tertiary education. (2 p. 68).
From this vantage point it becomes less easy to distinguish adult and child education. The division lies in the present education system rather than the process of education. The methods of adult education then become the methods of education for children.
Having established this philosophical viewpoint of adult education it must be linked to nursing education. Nursing education suffers from the same erroneous division into child and adult education.
NURSING EDUCATION AS ADULT EDUCATION
Basic nursing education lays the foundations for the effective practice of nursing and a basis for advanced nursing education (4 p. 6). In most cases basic nursing has as its target population students who are relatively recent school-leavers, the majority of whom are below the age of 21 years and therefore legally minors (5 p. 10).
This view is made more explicit in the following quotation of a statement made by a nurse educator (6 p. 139):
Basic nursing education is concerned with the instruction of the adolescent, that is, it is part of pedagogy, as it deals with the adolescent on the way to adulthood, and specially to professional adulthood. It is education after the child has completed the period of secondary education and therefore falls into the category of tertiary education.
The student nurses who are the consumers of basic nursing education are therefore considered to be less than adults from both a legal and from a nurse educator standpoint. At best they are considered adolescents; at worst, children.
Yet in the practical ward situation student nurses are required to take very adult responsibilities for patient care. These responsibilities may involve, in the most extreme cases, life and death decisions. During their entire lives some adults may never be asked to make similar decisions. In their private lives student nurses are able to vote, drive, marry and have children during this same period. All of which are very adult responsibilities.
The language used in the preceding paragraphs may be considered by some to be rather emotive, but it serves to highlight the ambiguous situation in which basic nursing education places both its student nurses and its educators. It certainly would be more congruent with the patient care expected of student nurses to consider them as adults.
Nursing, like other systems in society, is being subjected to the same pressures of a rapidly developing technological society.
In the next decade nursing will be faced with some of its greatest and most exciting challenges. With the trend towards mass medical care and the changing patterns of health services, the nurse of tomorrow will have to accept unprecedented responsibilities. Minor modifications of existing nursing systems will be inadequate to meet new situations and demands in a rapidly changing society. Fundamental rethinking will be necessary. (p. 7)
A fundamental rethinking of nursing education would, it is believed, consider student nurses undergoing basic nursing education as adults. It would certainly relieve the ambiguity of the current situation. Should this occur then the nursing education system, in its fullest extent, would reap the benefits. The focus would change from basic nursing education being part of pedagogy to being part of andragogy.
SELF-DIRECTEDNESS
Nursing education would, unlike other educational systems in South Africa, then be moving toward the concept that adult education is merely part of a life-long process of education. It would not emphasise artificial differentiations into adult and child education and would instil a sense of self-directedness, essential to nursing in modern times. This concept of self-directedness is lacking in traditional nursing education: Traditionally, most nurses have not been exposed to the concept of personal responsibility for their own continuing education. Such a concept may be seen as a requirement for survival in a society changing as rapidly as ours. Learning how to learn becomes a very significant aspect of such a concept (8 p. 50). The development of personal responsibility and self-directedness surely lies in the nature of adult learning. Knowles (9 p. 70) has outlined these principles and they are paraphrased and summarised here. Adult learners: • respond best to a non-threatening learning environment where there is a good student-teacher relationship • want to assess themselves against a relevant standard to determine their education needs • want to select their own learning experiences (to be self-directing) • prefer a problem-oriented approach • want to apply their new knowledge and skills immediately • want to know they are progressing • want to contribute (from their own reservoir of knowledge and skills) to help others learn. Fabb (10 p. 46) notes that:
Instead of practising pedagogy, which is the art and science of teaching children, teachers of medicine should now be seeking to practice what Knowles refers to as andragogy (andra = grown up or adult), the art and science of helping adults to learn. By applying the principles (of adult learning), teachers can facilitate the growth and development of the learners with whom they work, and in the process, as co-learners, grow and develop themselves.
Seen in this way, adult education is the process which most helps the learners how to learn. It gives them the equipment to become self-directed learners involved in the life-long process of education.
Teachers who work with adult learners should emphasise the principles of adult education. They should work to decrease the dependency of child learners on their teachers by: • creating a comfortable, non-threatening learning environment • providing assessment opportunities to help learners diagnose their educational needs • helping the learners plan the sequence of experiences which will meet their educational needs and produce the desired learning • creating conditions that will motivate the learner to learn • selecting, with the learners, the most effective methods of producing the desired learning • providing, with the help of the learners, the human and material resources necessary to produce the desired learning • helping the learners measure the outcome of their learning experiences (10 p. 37-45).
It is not proposed that basic nursing education becomes self-directed learning from the first day of the education programme. It would be unrealistic to expect beginning student nurses to have any meaningful self-directedness, given the current educational systems. Self-directed learning would be introduced gradually until, by the final year, self-directedness was fully accomplished.
The self-directed approach, like the learner-centred approach, is based on: • identification of the major gaps between actual and specified criterion performance of skills, knowledge and attitudes, that is, determining educational needs • awareness of the setting or system in which the learning is to take place • selection of those educational objectives that have a high priority for a particular student's level • selection and organisation of learning activities that will produce and maintain the desired behaviour • evaluation of the extent to which the students have met their educational needs and the educational objectives (11 p. 114). It can be seen that this approach can be adapted to any level of nursing education, from the most basic to the most advanced. What will vary is the amount of direction and guidance necessary for the learners.
At the basic education level the educational objectives, based on knowledge, skills and attitudes necessary for professional registration, need to be clearly stated.
Once these objectives are clearly stated then the beginning student can focus on his or her approaches to achieving them. Knox (12 p. 72) has stated the potential benefits in organising continuing professional education on a self-directed approach: • it provides the basis for articulation between various disciplines and between basic and continuing professional education • it provides for planning of regional development of health manpower so that each person is encouraged to meet that region's particular health needs • it encourages maximum utilisation of the available resources • it facilitates the efforts of health professionals to increase their competence and improve patient care and health maintenance.
These benefits could be enhanced by introducing the adult education, self-directed approach at the basic nursing education level. On this subject Knox has noted: During the past decade or so, many professional schools have modified their preparatory education curriculum and instructional methods in ways that have increased the likelihood that graduates will continue their education. In some instances, these modifications were deliberately made so that a basic objective of preparatory education would be the development of a questioning approach that would encourage and facilitate lifelong learning. (12 p. 72)
Implementation
A final, but crucial question remains: How would present nursing education implement fundamental rethinking of basic nursing education?
In the first instance it must be recognised that change is often threatening and slow in producing results. It is for this reason that the author believes these changes, which encompass the principles of adult learning, should be introduced by innovative nurse educators at the universities; this phase has already begun at a number of universities in South Africa. First to the Diploma in Nursing Education students and then later to nursing students on degree courses. In this way generations of students' approach to learning will be gradually, but fundamentally, influenced.
If South African nurses are to meet the challenges of the future, a fundamental rethinking of nursing education is necessary. Part of this fundamental rethinking involves the switch from pedagogy to andragogy, the recognition that the student nurse is an adult learner involved in adult education. This will open up new vistas in self-directed, learner-centred learning, which will reap its greatest benefits in improved patient care by life-long learners. | 2017-04-10T23:49:10.311Z | 1983-09-27T00:00:00.000 | {
"year": 1983,
"sha1": "a8a2f4db70d4bed64c20a579e829fd36ad1ae86d",
"oa_license": "CCBY",
"oa_url": "https://curationis.org.za/index.php/curationis/article/download/517/454",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a8a2f4db70d4bed64c20a579e829fd36ad1ae86d",
"s2fieldsofstudy": [
"Education",
"Medicine",
"Philosophy"
],
"extfieldsofstudy": [
"Sociology",
"Medicine"
]
} |
115155181 | pes2o/s2orc | v3-fos-license | The inverse problem of the calculus of variations for systems of second-order partial differential equations in the plane
A complete solution to the multiplier version of the inverse problem of the calculus of variations is given for a class of hyperbolic systems of second-order partial differential equations in two independent variables. The necessary and sufficient algebraic and differential conditions for the existence of a variational multiplier are derived. It is shown that the number of independent variational multipliers is determined by the nullity of a completely algebraic system of equations associated to the given system of partial differential equations. An algorithm for solving the inverse problem is demonstrated on several examples. Systems of second-order partial differential equations in two independent and dependent variables are studied and systems which have more than one variational formulation are classified up to contact equivalence.
Introduction
In this paper we examine the inverse problem of the calculus of variations for systems of partial differential equations (1.1) u α xy = f α (x, y, u γ , u γ x , u γ y ), α, γ = 1, 2, . . . , m, in m dependent variables u α and two independent variables x and y. Systems (1.1) arise in various contexts such as symmetry reduction in general relativity, non-linear σ-models, and generalized Toda lattices (see [11], [13], [20], [21], and [24]). For systems (1.1) we give a complete solution to the variational multiplier version of the inverse problem. In particular, we give an explicit algorithm for determining the number of possible inequivalent Lagrangians for a given system (1.1).
The variational multiplier problem is stated as follows: Given a system of differential equations F α (x i , u γ , u γ i , u γ ij , u γ ijk , . . . ) = 0, does there exist a Lagrangian L(x i , u γ , u γ i , u γ ij , . . . ) and functions M γ α , with det (M γ α ) = 0, such that (1.2) E α (L) = M γ α F γ , where E is the Euler-Lagrange operator. If such an M α γ exists, then it is referred to as a variational multiplier. If (1.2) is satisfied, the equations F α = 0 are equivalent to a system of Euler-Lagrange equations in the sense that solutions of the Euler-Lagrange equations for L are solutions to F α = 0 and conversely solutions to F α = 0 are solutions for E α (L) = 0.
Also of importance is the number of Lagrangians for a particular system. It is well known that two Lagrangians L and L ′ have identical Euler-Lagrange expressions if and only if L and L ′ differ by a total divergence, at least locally. Therefore we consider two Lagrangians to be equivalent if they differ by a total divergence. We note that it is possible to have a system of differential equations which have two or more inequivalent Lagrangians (and thus more than one multiplier).
The variational multiplier problem is of some interest in theoretical physics. It is widely accepted that fundamental physical theories can be derived from an action principle, or Lagrangian. For a given theory it is important to determine whether the action is unique. Examples with multiple Lagrangians exist in Newtonian Mechanics and the SU(2) chiral model ([14], [15]). The simplest case of the variational multiplier problem is for a scalar second-order ordinary differential equation u xx = F (x, u, u x ). It has been shown by several authors, including Darboux [10], that any scalar second-order ODE is variational and the most general multiplier (and Lagrangian) depends on an arbitrary function of two variables. The multiplier problem for systems of second-order ordinary differential equations has been studied by many authors including Douglas [9], Anderson and Thompson [4], Thompson, Crampin, Sarlet, Prince and Martinez (see [7], [8], [22], and [23]). Our solution to the multiplier problem for systems (1.1) is partially based on ideas developed in Anderson and Thompson's paper [4]. In that paper Anderson and Thompson used the variational bicomplex to derive a system of algebraic and differential conditions, with components of the multiplier matrix as unknowns, for the existence of a multiplier. They showed that there is a one-to-one correspondence between variational multipliers for the given system and certain cohomology classes in the variational bicomplex associated to the given system of differential equations. While a significant amount of research has been done on the variational multiplier problem for second-order ODE systems, a complete solution remains elusive in the sense that there is no general closed form characterization, in terms of invariants for the system, for determining the existence and degree of uniqueness of multipliers for the given system.
The variational multiplier problem for higher order scalar ordinary differential equations has been studied by Fels [12] and Juráš [17]. Fels [12] obtained a complete solution to the variational multiplier problem for fourth-order scalar ordinary differential equations u xxxx = F (x, u, u x , u xx , u xxx ). Using Cartan's method of equivalence, he was able to produce two differential invariants whose vanishing completely characterizes the existence of a variational multiplier. Unlike the second-order case, the multiplier is unique up to a constant multiple. In [17], Juráš obtained a similar solution for sixth and eighth-order scalar ordinary differential equations, although the differential invariants are increasingly complicated for higher order systems.
For partial differential equations, there are fewer papers on the variational multiplier problem. Anderson and Duchamp [2] studied the variational multiplier problem for scalar second-order quasilinear partial differential equations F = 0 where F = A ij (x k , u, u k )u ij + B(x k , u, u k ). Anderson and Duchamp [2] proved that if det(A ij ) = 0, then F has a variational multiplier if and only if a certain 1-form χ is closed. Moreover, χ is expressed explicitly in terms of F and its derivatives and the multiplier, if it exists, is unique up to a constant multiple. They also showed that second-order scalar evolution equations are never variational. Juráš [16] examined the inverse problem for a scalar hyperbolic second-order partial differential equations in two independent variables. He was able to show that an equation is variational if and only if two particular differential invariants H and K are identically equal. Moreover, the multiplier is unique up to a constant. Juráš' result is equivalent to the Anderson-Duchamp result if the equation is quasilinear. However, his results also apply to hyperbolic Monge-Ampere equations.
In this paper we study the variational multiplier problem for f -Gordon systems We remark that we are working under the assumption that the functions f α are C ∞ on some open set U ⊂ R 2 × R 3m . In the following section we outline the complete solution to the multiplier version of the inverse problem for systems (1.3). In particular, we give an algorithm for determining the number of variation multipliers (and Lagrangians) for a given system (1.3). We show that the most general multiplier depends on finitely many constants determined by the dimension of the nullspace of a certain matrix depending on the functions f α and their derivatives. In Section 3 we demonstrate our algorithm for solving the inverse problem on several examples. In Section 4 we classify all systems (1.3) in two dependent variables that admit two or more inequivalent Lagrangians. In Sections 5 and 6, we turn our attention to proving two technical propositions stated in Section 2. In particular, we define the variational bicomplex for systems (1.3) and show there is a one-to-one correspondence between the Lagrangians for systems (1.3) and special classes of 3-forms ω with dω = 0. We then derive necessary and sufficient algebraic and differential conditions for the existence of a variational multiplier. In the appendix we prove a general theorem on the existence of solutions to systems of combined differential and algebraic equations that allows us to determine the number of variational multipliers from purely algebraic data. This work is an extension of a result established in my PhD dissertation. I wish to thank my advisor Ian Anderson for suggesting this problem and an uncountable number of helpful discussions on this subject.
Main Results
In this section we state our solution to the variational multiplier problem for systems (1.1). Our solution depends on two propositions which we will prove in Section 6. The first proposition gives us a general normal form for variational systems (1.1) and states that the Lagrangian associated to a particular variational multiplier is unique up to modification by a total divergence. Proposition 2.1. If there exists a first-order Lagrangian L(x, y, u γ , u γ x , u γ y ) and a non-degenerate variational multiplier M αβ (x, y, u γ , u γ x , u γ y ) such that Moreover, ifL(x, y, u γ , u γ x , u γ y ) is another Lagrangian associated to the variational multiplier M αβ It follows immediately from Proposition (2.1) that we may restrict ourselves to systems of the form We remark that the condition that M αβ is symmetric and has no first-order derivative dependence also follows from a result established by Henneaux in [14].
Of paramount importance in our study of the inverse problem are three quantities H γ α , K γ α , and S γ αβ defined by (2.3c) In the formulas (2.3) and throughout this paper we are adopting the Einstein summation convention where repeated indices are summed upon. We remark that the quantities H γ α , K γ α , and S γ αβ are relative invariants under the pseudo-group of local contact transformations that preserve systems (1.1). The functions H α γ and K α γ are natural extensions of the generalized Laplace invariants defined in [3] for scalar hyperbolic equations. For systems (2.2) it follows from (2.3) that the functions H γ α and K γ α have no second-order derivative dependence and S γ αβ has no firstorder derivative dependence, that is H γ α = H γ α (x, y, u τ , u τ x , u τ y ), K γ α = K γ α (x, y, u τ , u τ x , u τ y ), and S αβ = S γ αβ (x, y, u τ ), where H γ α and K γ α are quadratic in the variables u β x and u β y . Our second proposition characterizes the algebraic and differential conditions on a variational multiplier for a system (2.2).
where Ω σ α = C σ ατ du τ + A σ α dy + B σ α dx. Remark 2.3. In Section 6, we will derive the algebraic and differential conditions (2.4) and give an algorithm for constructing the first-order Lagrangian L associated to a variational multiplier M αβ satisfying (2.4). The algebraic conditions (2.4a) include the integrability conditions for d 2 M αβ = 0 for (2.4b) and genuine algebraic constraints not arising from the integrability conditions. We will show that the set of all solutions to (2.4) is a finite dimensional real vector space with the dimension determined by the nullity of a completely algebraic system of equations. In particular, for any system of the form (2.2), the solutions to (2.4) are completely determined. However, in some cases there are non-trivial solutions M αβ to (2.4) that do not satisfy det M αβ = 0. In practice, we first determine a basis for the solutions to (2.4) and then determine if a non-degenerate variational multiplier can be constructed from a linear combination of the basis solutions.
We are now ready to state our solution to the inverse problem which says that the variational multipliers M αβ satisfying (2.4) are completely characterized by the solutions to an algebraic system Φ αβ a M αβ = 0, where Φ depends on the given system u α xy = f α .
Theorem 2.4. Let u α xy = f α (x, y, u γ , u γ x , u γ y ) be a system of m partial differential equations where f α ∈ C ∞ (U ) for some open set U ⊂ R 2 ×R 3m . Then there is a matrix Φ, with Rank(Φ) ≤ m(m + 1)/2, depending on the functions H γ α , K γ α , and S γ αβ and their derivatives, whose nullspace completely determines the number of linearly independent solutions to (2.4). Specifically, if r = Rank Φ(z) is constant for all points z in an open set V ⊂ U, then at every point z 0 ∈ V there is a neighborhood W ⊂ V of z 0 where the set of solutions to (2.4) is an m(m + 1)/2 − r dimensional vector space over R.
Proof. The algebraic system for a multiplier M αβ is constructed as follows. As a consequence of proposition (2.1), the functions M αβ have no 1-jet dependence and H γ α and K γ α are quadratic in u ǫ x and u τ y . We can decompose the algebraic condition M αγ H γ β = M βγ K γ α into several linear algebraic systems on the components of the multiplier M αβ , with the coefficients depending only on x, y, and u. We then express all algebraic conditions (2.4a) on M αγ as a single system of linear equations where Φ 0 may be viewed as k 0 × m(m + 1)/2 matrix. Differentiating (2.5) and substituting from (2.4b), we get the algebraic condition a Ω α σ M αβ = 0. Equation (2.6) represents a system of purely algebraic conditions on M αβ . We then express the combined algebraic systems (2.5) and (2.6) as We can proceed inductively to define a system of equations If given a point z 0 ∈ U, we see that after k ≤ m(m + 1)/2 differentiations, the rank of the matrices Φ i (z 0 ) must stabilize at some 0 ≤ l ≤ m(m + 1)/2. More precisely, at the point z 0 we have . . , C s αγ }, where s = m(m + 1)/2 − l, be a basis for the set of solutions to the system of linear equations then a general result on systems of algebraic-differential equations, which we prove in the appendix, states that there exists a neighborhood W ⊂ V of z 0 and a linearly independent collection of functions {M 1 αβ , M 2 αβ , . . . , M s αβ } such that for all z ∈ W, and i = 1, 2, . . . , s. Moreover, we claim that if a given collection of functions M αβ satisfies the algebraic and differential conditions (2.9), then M αβ can be expressed as a linear combination Indeed, if M αβ satisfies the algebraic condition (Φ k ) αβ a (z)M αβ (z) = 0 for all z ∈ W, it follows that M αβ can be uniquely expressed as If M αβ satisfies the differential condition (2.4b), then it follows from (2.9) and (2.11) that M i αβ dc i = 0. Since the functions M i αβ are pointwise linearly independent in some neighborhood of z 0 , we deduce that dc i = 0, which in turn implies that c i ∈ R for i = 1, 2, . . . , s. This establishes (2.10) and completes the proof of the theorem.
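Once the stacked algebraic system has been assembled for a concrete system u α xy = f α, the count of independent multipliers in Theorem 2.4 is simply the nullity of Φ. The snippet below only illustrates this final, purely numerical step; the matrix Φ used here is a random stand-in and does not come from an actual system.

# Reading off the nullity of a stacked constraint matrix Phi.  Phi here is a
# random stand-in (NOT computed from a system u_xy = f); the point is only how
# the number of independent multiplier candidates follows from its rank.
import numpy as np

rng = np.random.default_rng(2)
m = 3
unknowns = m * (m + 1) // 2              # independent entries of a symmetric M

Phi = rng.standard_normal((4, unknowns))
Phi = np.vstack([Phi, Phi[0] + Phi[1]])  # one dependent row, so rank < number of rows

s = np.linalg.svd(Phi, compute_uv=False)
rank = int(np.sum(s > 1e-10 * s[0]))
print("rank of Phi               :", rank)
print("number of independent M's :", unknowns - rank)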
Examples
In this section we demonstrate our algorithm for solving the variational multiplier problem on several examples. For each example we calculate the invariants H γ α , K γ α , and S γ αβ using (2.3) and we explicitly list the initial algebraic conditions (2.4a) on a multiplier M αβ . We then determine the differential condition (2.4b) and differentiate the algebraic conditions to uncover any additional algebraic constraints. In each case we find the most general Lagrangian and multiplier for the given system. Example 3.1. For our first example, consider the system We will show that (3.1) admits two Lagrangians. In this case S α βγ = 0, so that the only nontrivial algebraic condition from (2.4) is and it follows from (3.2) that the only algebraic constraint is M 11 = M 22 . Since the differential condition is dM αβ = 0, differentiating M 11 = M 22 produces no additional algebraic conditions. The most general multiplier in this case is The Lagrangian corresponding to the multiplier M is given by A routine calculation shows that the Euler-Lagrange equations for L are Example 3.2. Consider the system The only difference between this system and the one in the first example is the x in the second equation. We will show that (3.3) admits a unique Lagrangian. The initial algebraic conditions (2.4) on the multiplier M αβ reduce to The differential condition is again dM αβ = 0. Differentiating Example 3.3. For our third example, we again we make a slight change on the first example (3.1) to get a system that is not variational. Let In this case the algebraic and differential conditions (2.4) on the multiplier M αβ are given by Example 3.4. Consider the system are the components of a symmetric connection on an m-dimensional manifold M. We will show that (3.7) is variational if and only if Γ is a metric connection. Since Γ is symmetric we have S γ αβ = 0 and the Laplace invariants for (3.7) are given by H γ α = R γ ǫ ασ u σ x u ǫ y and K γ α = R γ σ αǫ u σ x u ǫ y , where R α ǫ γβ are the components of the curvature tensor associated to the connection Γ. Using properties of the curvature tensor, we can show that the conditions (2.4) on the multiplier are We see immediately from the differential condition in (3.8) that any multiplier M αβ must satisfy ∂M αβ /∂x = 0 and ∂M αβ /∂y = 0. Consequently, M αβ = M αβ (u ǫ ). It follows that the differential condition on the multiplier simplifies to ∇ γ M αβ = 0, where ∇ γ denotes covariant differentiation with respect to u γ . This proves that Γ is the Levi-Cevita connection for the metric M αβ . Subsequent differentiations of the algebraic condition (3.8) imply that M αβ must satisfy is a (locally) symmetric space, then the algebraic condition (3.8) involving the curvature completely determines the number of linearly independent metrics for the given connection.
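Since the displayed formulas for the systems and Lagrangians of Examples 3.1-3.4 are not reproduced here, the following sympy sketch uses a made-up first-order Lagrangian (it is not the Lagrangian of Example 3.1) purely to illustrate the multiplier form E α (L) = M αβ (u β xy − f β ): the Euler-Lagrange expressions of a first-order Lagrangian in two independent variables are linear in the mixed second derivatives, and the coefficients are the entries of the multiplier.

# Hedged illustration: Euler-Lagrange expressions of a hypothetical first-order
# Lagrangian (not taken from the paper) have the multiplier form M_ab*(u^b_xy - f^b).
import sympy as sp
from sympy.calculus.euler import euler_equations

x, y = sp.symbols("x y")
u = sp.Function("u")(x, y)
v = sp.Function("v")(x, y)

# Hypothetical first-order Lagrangian
L = u.diff(x) * v.diff(y) + u.diff(y) * v.diff(x) - 2 * u * v

eqs = [sp.simplify(eq.lhs) for eq in euler_equations(L, [u, v], [x, y])]
for e in eqs:
    print(e)
# The printed expressions are -2*v - 2*Derivative(v(x, y), x, y) and
# -2*u - 2*Derivative(u(x, y), x, y), i.e. the system u_xy = -u, v_xy = -v
# with the constant multiplier M = -2 * [[0, 1], [1, 0]].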
Example 3.5. Consider the system of differential equations (3.9) u α xy + C α βγ u β x u γ y = 0, where C α βγ ∈ R are the structure constants of an m dimensional Lie algebra g. The constants C α βγ are skew symmetric in the lower indices and satisfy the Jacobi identity. Using (2.3) we calculate the invariants
Consequently, the algebraic conditions (2.4) can be summarized as
so that dM αβ = 0 as a result of (3.10b). It is easy to check that (3.10b) implies (3.10a). It follows that there is a Lagrangian for (3.9) with multiplier M αβ if and only if M αβ is constant and M αβ satisfies equation (3.10b). Moreover, the Lagrangian is given by We remark that (3.10b) is exactly the same as the condition for the existence of a bi-invariant symmetric bilinear form for a Lie algebra g with structure constants C α βγ . If g is semi-simple, the Killing form provides us with a non-degenerate solution to (3.10b). Consequently, (3.9) is variational whenever g is semi-simple. Moreover, the number of solutions to (3.10b) is equal to the dimension of the Lie algebra cohomology space H 3 (g). If g is simple, then dim H 3 (g) = 1 and the Killing form determines the only non-degenerate solution to (3.10b) up to a scalar multiple (See [19], Theorems 11.1, 11.2). We remark that semi-simplicity is not a necessary condition for (3.10b) to hold, as there are solvable Lie algebras which also admit bi-invariant bilinear forms. For example, consider the solvable 4-dimensional Lie algebra g consisting of real matrices of the form It is easy to check that the bilinear map M : g × g → R defined by is non-degenerate and bi-invariant for all µ ≠ 0 and all λ ∈ R.
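For the reader's convenience, the bi-invariance condition referred to above can be written out in index form. Since equation (3.10b) itself is not reproduced in the text, the display below is only the standard form of that condition, and the index conventions may differ from the paper's:

```latex
% Bi-invariance (ad-invariance) of a symmetric bilinear form M on g:
% M([e_alpha, e_beta], e_gamma) + M(e_beta, [e_alpha, e_gamma]) = 0.
C^{\sigma}{}_{\alpha\beta}\, M_{\sigma\gamma} \;+\; C^{\sigma}{}_{\alpha\gamma}\, M_{\beta\sigma} \;=\; 0,
\qquad M_{\alpha\beta} = M_{\beta\alpha}.
```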
Classification of Variational Systems in Two Dependent Variables
In this section we establish a result characterizing the variational systems (4.1) u α xy = f α (x, y, u γ , u γ x , u γ y ), of two equations and two dependent variables that admit multiple Lagrangians. In order to proceed, we need to make precise two concepts that are paramount to our discussion. We first define what it means for a system to have multiple Lagrangians. Then we review the notion of contact equivalence of two systems of differential equations (4.1).
We say that a system of differential equations (4.1) admits k Lagrangians if there exists a set of linearly independent Lagrangians {L 1 , L 2 , . . . , L k } and a set of linearly independent variational multipliers {M 1 αβ , . . . , M k αβ }, such that E α (L i ) = M i αβ (u β xy − f β ) for i = 1, . . . , k. We say that two f -Gordon systems (4.1) are contact equivalent if there exists a local diffeomorphism where J 2 (R 2 , R m ) denotes the second-order jet-bundle of local sections s : R 2 → R m , such that Φ * (ū ᾱ xȳ −f α ) = Q α γ (u γ xy − f γ ), and Φ * C ⊂ C, where C is the ideal generated by the 1-forms du α − u α x dx − u α y dy, du α x − u α xx dx − u α xy dy, du α y − u α xy dx − u α yy dy. It was shown in [5] that any contact equivalence Φ of two systems of the form (4.1) is the prolongation of a fiber preserving transformation up to an interchange x ↔ y of the independent variables.
We have the following theorem that completely characterizes the variational f -Gordon systems (4.1) in two dependent variables which admit two or more inequivalent Lagrangians.
1) R admits three Lagrangians if and only if R is contact equivalent to a system u xy = λ(x, y)u, v xy = λ(x, y)v.
2) R admits two Lagrangians if and only if R is contact equivalent to a system where W satisfies one of W uu + W vv = 0, W uu = W vv , or W vv = 0. Lemma 4.3. A system of partial differential equations u α xy = f α (x, y, u γ , u γ x , u γ y ) is contact equivalent to a system u α xy = g α (x, y, u γ ) if and only if H = K and S α βγ = 0. If the number of dependent variables m > 1 and H = K = λI, then λ = λ(x, y) and the given system is contact equivalent to a system u α xy = λ(x, y)u α . Proof of Theorem 4.1. We first show that if a system (4.3) admits multiple Lagrangians, then (4.3) satisfies the hypotheses of Lemma (4.3) and is contact equivalent to a system of the form According to Proposition (2.2), if (4.3) admits a first-order Lagrangian L with a symmetric multiplier M αβ , then M αβ satisfies the algebraic conditions and S α βγ are given by (2.3). Multiplying the second equation of (4.5) by M αβ yields If their are only two dependent variables, then equation (4.6) implies that S 1 12 = S 2 12 = 0. Since S α βγ = −S α γβ , we have S α βγ = 0 for all α, β, γ = 1, 2. Then the algebraic conditions (4.5) can be expressed as If (4.1) admits 2 or more inequivalent Lagrangians, then there are least two linearly independent solutions to (4.7) and it follows that the rank of A is at most one. Moreover, the rank of A is one or less if and only if H = K and is rank zero if and only if H = K = λI. From Lemma (4.3), we deduce that Rank A ≤ 1 if and only if (4.3) is contact equivalent to (4.4). We will complete the proof of the theorem by analyzing the algebraic conditions (4.7) for systems of the form (4.4). A calculation of H and K for (4.4) reveals that In this case the algebraic conditions (4.7) for the existence of a multiplier M αβ simplify to The differential condition (2.4b) reduces to dM αβ = 0. Consequently, solving the variational multiplier problem for (4.4) is equivalent to determining all constant solutions M αβ to the equation (4.8).
There are 3 linearly independent solutions to (4.8) if and only if H = K = λI. In this case (4.4) is contact equivalent to a system (4.9) u xy = λ(x, y) u, v xy = λ(x, y) v.
The most general Lagrangian for (4.9) is a linear combination of the Lagrangians L 1 = u x u y + λu 2 , L 2 = u x v y + λuv, and L 3 = v x v y + λv 2 .
We analyze the case where the rank of A is exactly one and we assume there are two nondegenerate, linearly independent, constant solutions M 1 αβ and M 2 αβ to (4.8). We claim there exists an indefinite multiplier M = (M αβ ) that satisfies (4.8). If one of det M 1 < 0 or det M 2 < 0, then we are done, so we assume that det M 1 > 0 and det M 2 > 0. If then it follows that ac > 0 and pr > 0. We claim there is a scalar µ such that det(M 1 −µM 2 ) < 0. Indeed, if then the discriminant ∆ can be expressed as It follows from (4.10) and (4.12) that ∆ ≥ 0 with equality holding if and only if M 1 = t M 2 for some t ∈ R. Consequently, the polynomial (4.11) has two real roots and there exists µ ∈ R such that det(M 1 − µM 2 ) < 0. Now we have established that if there are two independent solutions to (4.8), with at least one of the solutions non-degenerate, then there is an indefinite multiplier M αβ .
If we make a linear change of variables u α = T α γū γ , then a direct calculation of the Euler-Lagrange equations for the Lagrangian L and the transformed LagrangianL verifies that the corresponding variational multipliers transform according to the rule (4.13) M αβ = T σ α T τ βM στ . After a linear change of variables u α → T α γ u γ , we may then assume that the indefinite multiplier M αβ is of the form (4.14) M = 0 1 1 0 .
Substituting (4.14) into (4.8) implies that G v − F u = 0. As a consequence of the de Rham theorem, we see there exists a smooth function W (x, y, u, v) such that There is a second multiplier N, independent of M, satisfying (4.8). We may assume, possibly after subtracting a scalar multiple of M, that According to (4.13), a linear transformation u → λu, v → λ −1 v preserves M and transforms N as We may then assume, possibly after a scaling N → kN, that N has the form where ε = 0, 1, or −1. Taking into account that (4.15) holds, substituting N into (4.8) implies that W vv − εW uu = 0.
If ε = −1, then W satisfies Laplace's equation W uu + W vv = 0 and there exists a function Z(x, y, u, v) such that Z u = W v = F and −Z v = W u = G. The most general Lagrangian in this case is given by If ε = 0, then W vv = 0 and (4.4) can be expressed The most general Lagrangian for (4.16) is If ε = 1, then W satisfies the wave equation and W = W 1 (u + v) + W 2 (u − v). The most general Lagrangian in the case is This establishes the second statement of the theorem.
Proof of Lemma 4.3. If we assume that two systemsūxȳ =f α and u xy = f α are contact equivalent with the change of coordinates given by (4.2), then the formulas (2.3) and a tedious application of the chain rule will verify that the transformation rules for H, K, and S are where (∂u α /∂ū γ ) · (∂ū γ /∂u β ) = δ α β . It follows from (4.17) that the conditions H = K and S = 0 are invariant with respect to transformation (4.2).
For a system u α xy = g α (x, y, u γ ), using (2.3) we see that From (4.17) and (4.18), we deduce that any system u α xy = f α (x, y, u γ , u γ x , u γ y ) that is contact equivalent to u α xy = g α (x, y, u γ ) must necessarily have H = K and S α βγ = 0. We now prove that any system (4.19) u α xy = f α (x, y, u γ , u γ x , u γ y ) with the property that H = K and S α βγ = 0 is equivalent to a system u α xy = g α (x, y, u γ ). It follows from (2.3 Consequently, equation (4.19) simplifies to an equation of the form and G α are functions of x, y, and u ǫ . Moreover, S α βγ = 0 if and only if C α βγ = C α γβ . With a judicious choice of coordinates, we will now eliminate the quadratic terms of (4.20). If we let u α = g α (x, y,ū γ ), then (4.20) transforms as (4.21) ∂g α ∂ū βū β xy + ∂g α ∂ū β ∂ū γ − C α ǫτ (x, y, g τ ) We see thatC α βγ = 0 whenever g α satisfies We differentiate (4.22) with respect toū δ , and after substituting from (4.22) and skew-symmetrizing over β and δ, we obtain the integrability conditions on (4.22) On the other hand a calculation of H and K for (4.20) yields As a consequence of (4.24), the integrability conditions (4.23) are satisfied whenever H = K.
The system of partial differential equations (4.22) then satisfies the Frobenius condition and we deduce that there exists, at least locally, a non-degenerate collection of functions g α satisfying (4.22).
The Variational Bicomplex For Systems of PDE
In this section we introduce some basic definitions and results used in our solution to the variational multiplier problem in Section 6, including infinite jet bundles and variational bicomplexes. As we are only interested in applications to our study of the variational multiplier problem, our discussion will be of a rather brief nature. For a detailed and intrinsic construction of variational bicomplex, we refer the reader to [1], [3], [18], and [25].
Let π k : J k (E) → R n denote the bundle of k-jets of local sections of the trivial bundle E = R n × R m . Local coordinates for J k (R n , R m ) are given by . . i k ≤ n and 1 ≤ α ≤ m. There are natural projections π l k : J l (E) → J k (E) for l ≥ k. The infinite jet bundle over E, J ∞ (E), is defined as the inverse limit of the sequence of finite jet bundles {J k (E) | k = 0, 1, 2, . . . }, along with the projections π ∞ k : J ∞ (E) → J k (E) and π ∞ : J ∞ (E) → R n . The contact ideal C(J ∞ (E)) is generated by the 1-forms The full exterior algebra Ω * (J ∞ (E)) of differential forms on J ∞ (E) is generated by the 1-forms dx i , θ α , θ α i , θ α ij , . . . . There is a bi-grading of the differential forms on J ∞ (E), where Ω r,s (J ∞ (E)) is the C ∞ (J ∞ (E))-module generated by differential forms of the type Since d 2 = 0, it follows that d 2 The local coordinate expressions for the horizontal and vertical derivatives of a smooth functions f ∈ C ∞ (J ∞ (E)) and 1-forms dx i and θ α I are given by where D i denotes total differentiation with respect to x i . The free variational bicomplex is defined to be the double complex {Ω r,s J ∞ (E), d H , d V } s≥0; r=0,1,...,n .
In our solution to the variational multiplier problem for the f -Gordon systems we investigate the existence of certain cohomology classes in the constrained variational bicomplex associated to a system of partial differential equations. To construct the constrained variational bicomplex associated to (5.2), we begin with a trivial bundle π : R 2 × R m → R 2 and consider the second-order jet bundle J 2 (R 2 , R m ) with coordinates given by (x, y, u α , u α x , u α y , u α xx , u α xy , u α yy ), α = 1, . . . , m. An f -Gordon system (5.2) defines a (5m + 2)-dimensional submanifold R 2 ι → J 2 (E) called the equation manifold of (5.2). We define the first prolongation of R 2 as the 7m + 2-dimensional submanifold R 3 ι → J 3 (E) defined by (5.2) and u α xxy = D x f α and u α xxy = D y f α . Further differentiations of (5.2) will yield submanifolds R k ι → J k (E). For convenience we define R 0 = E and R 1 = J 1 (E). We define the infinite prolonged equation manifold R ∞ to be the inverse limit of the sequence {R k | k = 0, 1, 2, . . . }, along with the natural projections π ∞ M : R ∞ → M and π ∞ k : R ∞ → R k . We remark that there is a unique map ι ∞ : type (1, s) conservation law or form-valued conservation law. In the following section, we will show that the solution to the variational multiplier problem is closely related to the existence of non-trivial classes [ω] ∈ H 1,2 (R ∞ ). Since systems of the form (5.2) are of Cauchy-Kovaleskaya type, it follows from a general result of Vinogradov [25] that the horizontal cohomology spaces H 0,s (R ∞ ) satisfy We remark that (5.5) was also established in [5] by constructing a coframe adapted to systems (5.2).
Derivation of Necessary and Sufficient Conditions for the Existence of a Variational Multiplier
Our first result states that the problem of determining all Lagrangians and variational multipliers for an f -Gordon system is equivalent to determining all d closed forms ω ∈ Ω 1,2 (R ∞ ) of a certain type. We also give a description of the general form of possible Lagrangians for an f -Gordon system. Finally, we show that a variational multiplier has no one-jet dependence. Proposition (6.1) along with Corollaries (6.2) and (6.3) will suffice to establish Proposition (2.1), which was stated without proof in Section 2.
Theorem 6.1. For a system of differential equations u α xy = f α (x, y, u γ , u γ x , u γ y ) the following statements are equivalent.
(i) There exists a type (1, 2) form (ii) There exists a first-order multiplier M αβ (x, y, u γ , u γ x , u γ y ) and a first-order Lagrangian L(x, y, u γ , u γ x , u γ y ) such that E α (L) = M αβ (u β xy − f β ). (iii) There exists a multiplier M αβ = M αβ (x, y, u γ ) and a Lagrangian L = −R αβ (x, y, u γ )u α x u β y + Q α (x, y, u γ )u α x + P α (x, y, u γ )u α y + N (x, y, u γ ) such that E α (L) = M αβ u β xy − f β . Proof. We first show that (i) implies (iii). Suppose that ω is given by (6.1) and that dω = 0. on R ∞ . It follows immediately that d H ω = d V ω = 0. A routine calculation using (5.4) shows that if d V ω = 0, then R αβ = R αβ (x, y, u ǫ ) and ω is of the form If we define ρ 0 ∈ Ω 1,1 (R ∞ ) by ρ 0 = −R αβ u β x θ α ∧ dx + R βα u β y θ α ∧ dy, then using(5.4) we see that Since the functions S 0 αβ and T 0 αβ have no 1-jet dependence, we may choose ρ 1 to be of the form We now define ρ = ρ 0 + ρ 1 so that We will now show that λ = L dx ∧ dy, where L is of the form given in statement (iii). Computing d H ρ on J ∞ (E) yields When restricted to R ∞ , where R (αβ) = (R αβ + R βα )/2 and g α is given by (6.2). Since λ = L dx ∧ dy and d V λ = d H ρ on R ∞ , we see that L must satisfy It follows that there exists a function N (x, y, u ǫ ) such that (6.3) L = − R αβ u β x u α y + Q α u α x + P α u α y + N (x, y, u ǫ ) . If we apply the Euler-Lagrange operator E(λ) = E α (L) θ α ∧ dx ∧ dy on J ∞ (E), then Since d H ρ = d V λ when restricted to the equation manifold R ∞ , we deduce that implies ι * E(λ) = 0. On the other hand, a direct computation of the Euler-Lagrange equations for (6.3) gives us E α (L) = 2R (αβ) u β xy + g α + ∂L ∂u α . Since ι * E α (L) = 0 for all α, we get g α + ∂L/∂u α = −2R (αβ) f β , which implies We now prove that statement (iii) implies (i). Assume that there is a variational multiplier M αβ (x, y, u γ ), a first-order Lagrangian λ = L dx ∧ dy, where L is of the form (6.3), and (6.4) E α (L) = M αβ (u β xy − f β ). Note that (6.4) explicitly determines that the multiplier is M αβ = R αβ + R βα . By the first variational formula (See [1], Corollary 5.3), we have E(λ) Since ι * E(λ) = 0, it follows that ι * d H η = d V λ. Then define ω = d V η and the resulting calculation of dω on the equation manifold is Moreover, using (5.4) to compute d V η will verify that ω is of the form (6.1). We have proven that (i) is equivalent to (iii), and clearly (iii) implies (ii). Assume that (ii) holds and there is a first-order Lagrangian L(x, y, u γ , u γ x , u γ y ) and a variational multiplier M αβ (x, y, u γ , u γ x , u γ y ) for the system u β xy = f β . We will show that this implies (iii). Calculating the Euler-Lagrange equations for an arbitrary first-order Lagrangian, we find that where G α is a first-order function. On the other hand, we are assuming where M αβ is a non-degenerate first-order multiplier. Comparing (6.5) and (6.6), we deduce that It follows that L is of the form given in statement (iii) and the fact that M αβ has no onejet dependence follows immediately from a calculation of the Euler-Lagrange equations for a Lagrangian satisfying (6.7).
The following corollary gives a description of the general form of an f -Gordon system that is variational. Corollary (6.3), along with Theorem (6.1) and Corollary (6.2), establishes Proposition (2.1). Corollary 6.3. If u β xy = f β (x, y, u γ , u γ x , u γ y ) has a nonsingular variational multiplier M αβ , then . Proof. According to Theorem (6.1), we may assume that there is a Lagrangian L of the form (6.3) and a variational multiplier M αβ (x, y, u), with det M αβ ≠ 0 and . On the other hand, the Euler-Lagrange equations for (6.3) are explicitly given by Clearly, M αβ = (R αβ +R βα ) and multiplying equations (6.8) and (6.9) by M αγ , where M αγ M γβ = δ α β , yields the desired result.
To simplify our calculations, we define ω ′ = ω− 1 4 d H (R αβ − R βα ) θ α ∧θ β . A routine calculation using (5.4) yields (6.14) , where M αβ = (R αβ + R βα )/2. Without loss of generality we may assume that T αβ and V αβ are skew-symmetric. Using (5.5) and the exactness of the columns of the variational bicomplex, it can be shown that dω = 0 if and only if d H ω ′ = 0 and d V ω ′ is d H exact. We will later show that if d H ω ′ = 0 for (6.14), then d V ω ′ is necessarily d H exact.
We now show that d V ω ′ is d H exact whenever d H ω ′ = 0. We claim that d H ζ = d V ω ′ for (6.23) ζ = 1 6 M ασ S σ βγ θ α ∧ θ β ∧ θ γ From (6.10), (6.14), and (6.19), we deduce that ω ′ can be expressed as Remark A.2. For the more geometrically inclined reader we note that Proposition (A.1) is a special case of the following result: For an m + n dimensional manifold M let I ⊂ Ω 1 (M ) denote an exterior differential system of rank m and let q be a regular value of a smooth map F : M → Q. Assume for all p ∈ Σ = F −1 (q) we have I ⊥ p ⊂ Ker (F * : T p M → T q Q) and that I |Σ is a Frobenius system. Then for all p 0 ∈ Σ ⊂ M there is a unique maximal n-dimensional integral manifold φ : N → M through p 0 such that φ(N ) ⊂ Σ.
Proof. Since we are only constructing local solutions to (A.1) and (A.2), we will assume that Rank{B a α (x)} = k for all x ∈ R n . Assume that no additional algebraic constraints are created if we differentiate (A.2) with respect to x i and substitute from (A.1). This guarantees the existence of functions G a ci (x) such that The system of differential equations (A.1) are associated with the C ∞ distribution ∆ on R n × R m generated by the vector fields (A.4) X i = ∂ ∂x i + A γ σi z σ ∂ ∂z γ , i = 1, 2, . . . , n. We then define the C ∞ function F : R n × R m → R l by F (x, z) = B a γ (x)z γ and let Σ = F −1 (0). As a consequence of (A.3) we see that Rank{DF (x,z) } = Rank {B a γ (x)} = k for all (x, z) ∈ Σ. It follows immediately that Σ ⊂ R n × R m is regular submanifold of dimension m + n − k. In addition, we can use (A.3) to show that X i (B a γ z γ ) vanishes identically for all (x, z) ∈ Σ. We conclude that for any p ∈ Σ and i = 1, 2, . . . , m, | 2009-10-15T16:50:46.000Z | 2009-10-15T00:00:00.000 | {
"year": 2009,
"sha1": "9fd6449a4b6737d1f78fecdbdb50db92f837d07a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "9fd6449a4b6737d1f78fecdbdb50db92f837d07a",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
7419665 | pes2o/s2orc | v3-fos-license | Predicting Presynaptic and Postsynaptic Neurotoxins by Developing Feature Selection Technique
Presynaptic and postsynaptic neurotoxins are proteins which act at the presynaptic and postsynaptic membrane. Correctly predicting presynaptic and postsynaptic neurotoxins will provide important clues for drug-target discovery and drug design. In this study, we developed a theoretical method to discriminate presynaptic neurotoxins from postsynaptic neurotoxins. A strict and objective benchmark dataset was constructed to train and test our proposed model. The dipeptide composition was used to formulate neurotoxin samples. The analysis of variance (ANOVA) was proposed to find out the optimal feature set which can produce the maximum accuracy. In the jackknife cross-validation test, the overall accuracy of 94.9% was achieved. We believe that the proposed model will provide important information to study neurotoxins.
Introduction
Neurotoxins act typically against channels to block or enhance synaptic transmission. According to the mechanism of action, neurotoxins can be classified as presynaptic type and postsynaptic type [1]. The function of presynaptic neurotoxins is to act at the presynaptic membrane [2]. They usually block neuromuscular transmission and inhibit the neurotransmitter release due to their specific enzymatic activities [3]. Postsynaptic neurotoxins can bind to the postsynaptic membrane and acetylcholine receptors [4]. Thus, the study of presynaptic and postsynaptic neurotoxin will give us important clues for drug-target discovery and drug design.
The function and structure of neurotoxins can be correctly measured by biochemical experiments; however, it is time-consuming and costly. The availability of huge amounts of proteins generated in postgenomic age provides us with an important opportunity to design various computational methods for timely and precisely predicting protein functions. Thus, it is important to develop machine learning approach to predict presynaptic and postsynaptic neurotoxins. Recently, Yang and Li developed an increment of diversity-based method to identify presynaptic neurotoxin and postsynaptic neurotoxin [5]. The benchmark dataset including 78 presynaptic neurotoxins and 69 postsynaptic neurotoxins was downloaded from Animal Toxin Database (ATDB) [6]. The overall accuracy was 90.39% in jackknife cross-validation, which is far from satisfactory. Subsequently, Song proposed using bilayer support vector machine (SVM) to improve prediction accuracy based on a new benchmark dataset [7]. Although the overall accuracy was dramatically improved, the sequence identity of the dataset was so high that the results were overestimated.
To overcome the shortcoming mentioned above, in this study, we developed a new method based on feature selection technique to predict presynaptic neurotoxins and postsynaptic neurotoxins. In the following, we will introduce how to construct a new benchmark dataset, to formulate neurotoxin samples using peptide sequences, and to obtain the expected result produced by best feature subset.
Benchmark Dataset Construction.
A high-quality benchmark dataset is fundamental for building a reliable and accurate model. The Universal Protein Resource (UniProt) provides the scientific community with a single, centralized, authoritative resource for protein sequences and functional information [8]. Thus, we downloaded presynaptic and postsynaptic neurotoxins from UniProt. Ambiguous information can reduce the quality of the benchmark dataset and make the prediction model unreliable. Thus, we excluded protein sequences that contain ambiguous residues (such as "X," "B," and "Z") or that are fragments of other proteins. Highly similar sequences in the benchmark dataset would lead to overestimation of the results. Thus, the CD-HIT program was used to remove highly similar sequences by setting the sequence identity cutoff to 80% [9]. According to the above screening procedure, the final benchmark dataset included 256 neurotoxin samples and can be formulated as S = Pre ∪ Pro, where the subset Pre contains 91 presynaptic neurotoxins and the subset Pro contains 165 postsynaptic neurotoxins.
The Dipeptide Composition.
One of the most important steps in the prediction problem is to formulate neurotoxin sequences with an effective mathematical expression. Generally, we may formulate a neurotoxin by its entire residue sequence as P = R1 R2 R3 ... RL, where Ri denotes the i-th residue of the neurotoxin P and the subscript L is the number of residues of the neurotoxin P. We may use straightforward and intuitive tools, such as BLAST or FASTA, to find similar sequences. However, these tools are only suitable for query sequences that have highly similar counterparts in the search dataset. If there are no similar sequences in the training dataset, they cannot work well.
A machine learning approach can overcome this problem and correctly identify presynaptic and postsynaptic neurotoxins. To do so, we must convert neurotoxin sequences into discrete vectors. The simplest method used to represent a neurotoxin is its residue composition, a 20-dimension vector. However, the sequence order information would then be completely lost, which limits the prediction quality [10][11][12][13]. Thus, the dipeptide composition was used in this study. Accordingly, each neurotoxin sample in our benchmark dataset can be expressed as a 400-dimension vector P = [f1, f2, ..., f400]T, where fi (i = 1, 2, ..., 400) is the occurrence frequency of the i-th dipeptide (AA, AC, ..., YY, formed from the single-letter codes of the 20 native amino acids) and is calculated by fi = ni / Σj nj, where ni denotes the number of occurrences of the i-th dipeptide in the neurotoxin P.
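A short sketch of how such a 400-dimension dipeptide composition vector can be computed is given below (this is a hypothetical helper, not code from the paper; the dipeptide ordering AA, AC, ..., YY is an arbitrary but fixed convention):

```python
# Sketch: 400-dimensional dipeptide composition of a protein sequence.
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"                                # 20 native amino acids
DIPEPTIDES = ["".join(p) for p in product(AMINO_ACIDS, repeat=2)]   # 400 ordered pairs

def dipeptide_composition(sequence):
    """Return the occurrence frequencies of the 400 dipeptides in `sequence`."""
    counts = {dp: 0 for dp in DIPEPTIDES}
    for i in range(len(sequence) - 1):
        dp = sequence[i:i + 2]
        if dp in counts:                     # skip pairs containing ambiguous residues
            counts[dp] += 1
    total = max(sum(counts.values()), 1)     # a length-L sequence has L - 1 dipeptides
    return [counts[dp] / total for dp in DIPEPTIDES]

# Example with a toy sequence (not a real neurotoxin)
vec = dipeptide_composition("MKTIIALSYIFCLVFA")
print(len(vec), sum(vec))                    # 400, 1.0
```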
Support Vector Machine.
SVM is a very popular machine learning method and has been widely used in bioinformatics [7,[14][15][16][17][18]. The basic idea of SVM is to transform the input vector into a high-dimension Hilbert space and to determine a separating hyperplane in this space. In this study, we used the LibSVM package 3.18 (http://www.csie.ntu.edu.tw/∼cjlin/libsvm/) to implement SVM. Because it is more suitable for nonlinear classification, the radial basis function (RBF), defined as K(xi, xj) = exp(−γ‖xi − xj‖²), was used as the kernel function. In the SVM model construction, a grid search strategy with a cross-validation test was used to optimize the regularization parameter C and the kernel parameter γ.
Performance Evaluation.
In this study, we used jackknife cross-validation to test the prediction. In the jackknife cross-validation test, each protein sample in the dataset is in turn singled out as an independent test sample, and all the rule parameters are calculated from the remaining proteins without including the one being identified. The performance of our proposed method was estimated by the following three indexes, sensitivity (Sn), specificity (Sp), and overall accuracy (Acc), which can be expressed as Sn = 1 − N(Pre→Pro)/N(Pre), Sp = 1 − N(Pro→Pre)/N(Pro), and Acc = 1 − [N(Pre→Pro) + N(Pro→Pre)]/[N(Pre) + N(Pro)], where N(Pre) and N(Pro) are the total numbers of presynaptic and postsynaptic neurotoxins, N(Pre→Pro) is the number of presynaptic neurotoxins incorrectly predicted as postsynaptic neurotoxins, and N(Pro→Pre) is the number of postsynaptic neurotoxins incorrectly predicted as presynaptic neurotoxins.
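A minimal sketch of the jackknife (leave-one-out) evaluation with an RBF-kernel SVM and the three indexes above is given below (scikit-learn's SVC wraps LIBSVM; the C and gamma values shown are placeholders, not the parameters tuned in the paper):

```python
# Sketch: leave-one-out (jackknife) test of an RBF-kernel SVM, reporting
# Sn, Sp and Acc. X is an (n_samples, n_features) array; y holds 1 for
# presynaptic and 0 for postsynaptic neurotoxins.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut

def jackknife_metrics(X, y, C=8.0, gamma=0.125):       # placeholder parameters
    pred = np.empty_like(y)
    for train_idx, test_idx in LeaveOneOut().split(X):
        clf = SVC(C=C, gamma=gamma, kernel="rbf")
        clf.fit(X[train_idx], y[train_idx])
        pred[test_idx] = clf.predict(X[test_idx])
    n_pre, n_pro = np.sum(y == 1), np.sum(y == 0)
    pre_as_pro = np.sum((y == 1) & (pred == 0))         # presynaptic predicted as postsynaptic
    pro_as_pre = np.sum((y == 0) & (pred == 1))         # postsynaptic predicted as presynaptic
    sn = 1 - pre_as_pro / n_pre
    sp = 1 - pro_as_pre / n_pro
    acc = 1 - (pre_as_pro + pro_as_pre) / (n_pre + n_pro)
    return sn, sp, acc
```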
Results and Discussion
Many published papers have demonstrated that the optimized features could improve predictive accuracy [19][20][21][22][23][24][25]. For high-dimension data, some features are noise or redundant information which has negative contribution to the prediction. Thus, it is very important to develop a feature selection technique to exclude the garbage information. The current study will introduce a new feature selection technique based on the principle of analysis of variance (ANOVA).
For each feature, two quantities can be defined: the sum of squares between groups, SS_B(i), and the sum of squares within groups, SS_W(i). Here f_i(g, j) denotes the frequency of the i-th feature of the j-th sample in the g-th group (g = Pre or Pro), and n_g denotes the number of samples in the g-th group (g = Pre or Pro). SS_B(i) measures how far the group means of the i-th feature deviate from the overall mean, and SS_W(i) measures how far the individual samples deviate from their own group mean. If the sample values within groups are close to each other, SS_W(i) will be small; if the sample means are close between the two groups, SS_B(i) will be small. Then the sample variance between groups s_B²(i) and the sample variance within groups s_W²(i) are given by s_B²(i) = SS_B(i)/d_B and s_W²(i) = SS_W(i)/d_W, where d_B and d_W are the corresponding degrees of freedom in statistics. In this study, d_B = 1 and d_W = n_Pre + n_Pro − 2 = 254, respectively.
According to statistical theory, the ratio between s_B²(i) and s_W²(i) obeys an F sampling distribution with (d_B, d_W) degrees of freedom under the null hypothesis. Thus, we used the ratio F(i) = s_B²(i)/s_W²(i) to measure the contribution of each feature. F(i) reveals how strongly the i-th feature is related to the group variable. Accordingly, the 400 dipeptides in (3) were ranked according to their F(i). Subsequently, the incremental feature selection (IFS) strategy was used to find an optimal feature subset, as sketched below. In the IFS procedure, we first examined the performance of the best feature, i.e. the one with the highest F(i), by cross-validation. Subsequently, the feature with the second highest F(i) was added to form a new feature subset, which was also inputted into the SVM, and the accuracy was calculated. This process was repeated until all 400 feature subsets were examined. By setting the number of features as the abscissa and the Acc as the ordinate, the IFS curves were plotted in Figure 1. From the figure, we observed that, in the jackknife cross-validation, the maximum Acc of 94.9% is obtained with the top 190 features, which are regarded as the optimal feature subset.
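A sketch of the ANOVA ranking and the IFS loop described above follows (scikit-learn's f_classif computes the same one-way ANOVA F-ratio; the inner evaluation is left as a callable, for instance the jackknife routine sketched earlier):

```python
# Sketch: rank features by the one-way ANOVA F-ratio and run incremental
# feature selection (IFS), keeping the subset with the best accuracy.
import numpy as np
from sklearn.feature_selection import f_classif

def ifs_best_subset(X, y, evaluate):
    """`evaluate(X_subset, y)` should return the overall accuracy (e.g. from a
    jackknife test); returns (best_accuracy, indices of the selected features)."""
    F, _ = f_classif(X, y)                 # F(i) = s_B^2(i) / s_W^2(i)
    order = np.argsort(F)[::-1]            # features ranked by decreasing F
    best_acc, best_k = 0.0, 0
    for k in range(1, X.shape[1] + 1):     # add one feature at a time
        acc = evaluate(X[:, order[:k]], y)
        if acc > best_acc:
            best_acc, best_k = acc, k
    return best_acc, order[:best_k]
```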
It is important to compare the performance of different methods. However, a strict comparison is not feasible because the benchmark datasets are different. Thus, we made a rough comparison and recorded the results in Table 1. Yang and Li proposed an ID-based method to predict presynaptic and postsynaptic neurotoxins on a benchmark dataset with a sequence identity of <80% [5]; on this basis, our method outperforms Yang's method. Song developed a bilayer support vector machine to improve the accuracy [7]. We noticed that the sequence identity of that benchmark dataset reaches 90%, which results in an overestimation of the method's performance. Thus, our proposed model is more objective and realistic.
Conclusions
The knowledge for neurotoxin is conductive to the development of drug design and drug-target discovery. Thus, the aim of the study is to develop a computational method to predict presynaptic and postsynaptic neurotoxins. A new feature selection technique was proposed to optimize features and to improve prediction accuracy. The feature selection technique can also be used in other bioinformatics fields. | 2018-04-03T06:06:48.061Z | 2017-02-12T00:00:00.000 | {
"year": 2017,
"sha1": "249ebd7892f8af13ce53f98078fb94ca0153f779",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/bmri/2017/3267325.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "22da1c61a79de8ad82dbeb7be04c08e781473058",
"s2fieldsofstudy": [
"Biology",
"Computer Science",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
212824943 | pes2o/s2orc | v3-fos-license | Analysis of accuracy parameters of ANN backpropagation algorithm through training and testing of hydro-climatology data based on GUI MATLAB
The authors have developed a MATLAB GUI to simplify the process of predicting hydro-climatology data using the ANN Back Propagation method. Five data series were used for training, testing, and prediction. The data, i.e. rainfall, air humidity, duration of sunshine, temperature, and wind speed, were taken from the last ten years with an input matrix of size m x n. Each data series was trained 21 times using combinations of the activation functions (logsig, tansig, and purelin) and training methods (traingda, traingdx, and trainrp). The training results showed that the logsig function and trainrp on each layer are the best combination for training, testing, and prediction, with an accuracy of 99.71%. This result was obtained with parameter settings of 1000 epochs, a learning rate of 0.7, a goal error of 0.0001, and a training step of 1.
Introduction
Forecasting is an activity to estimate what will happen in the future by using conditions or data in the past [1] [2]. Forecasting is widely used in almost all agencies or government institutions to determine policies that must be taken based on previous data or facts. Therefore, various forecasting methods are used according to the types of data available. The output of forecasting can be in the form of predictive data in the future or a mathematical model constructed with the method so that it is easier to see the patterns that occur [3].
Forecasting is very important for preparing for and overcoming various problems that may occur in the future. A key requirement of any forecasting method is that the predictive results must be truly accurate. Forecasting with multiple data series, with a weight on each network input, is highly recommended as an effort to reduce improper forecasting results [4]. Today many forecasting methods with a minimal error rate are known. Their input data, however, are still single series, meaning that they are unable to simulate multiple data. Therefore, the Artificial Neural Network (ANN) Back Propagation adopts a network with multiple inputs [5]. The prediction results obtained are very good because each data series is treated through training and testing, with a weight on each neuron (network), before the predicted output is generated.
However, it is necessary to experiment with combinations of activation functions and training methods when training and testing various types of data. Therefore, the research team aimed to evaluate the combinations of training methods and activation functions available in ANN Back Propagation for hydro-climatology data simulations, in order to obtain a truly reliable and accurate network for future experiments with other data. In MATLAB, the NNTool is available for forecasting. However, it still has some shortcomings in terms of attributes or accuracy parameters [6]. Because of that, the initial step conducted by the team in this research was to develop a MATLAB Graphical User Interface (GUI) with various attributes according to the ANN Back Propagation algorithm, to make it easier for the team or users to simulate data in a large number of cases.
Method
This section focuses on two main discussions, namely data and accuracy parameters. Data used for training and testing are (1) hydrological data (rainfall), and (2) climatological data including wind speed, air humidity, duration of sunlight, and temperature. The data taken from 2008-2017 was sourced from the Central Statistics Agency of West Nusa Tenggara Province.
The accuracy parameters used in this forecasting consist of the Mean Absolute Deviation (MAD), Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Root Mean Squared Error (RMSE). The formulas are as follows: MAD = (1/n) Σ|Xt − Ft|, MSE = (1/n) Σ(Xt − Ft)², MAPE = (100/n) Σ|Xt − Ft|/Xt, and RMSE = √MSE, where Xt is the actual data in period t, Ft is the forecast value in period t, n is the amount of data, and t is the time index of the series used [5] [7].
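The four accuracy measures can be computed directly; a small NumPy sketch is shown below (an editorial illustration, not code taken from the GUI):

```python
# Sketch: forecast accuracy measures used in this study.
import numpy as np

def forecast_errors(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    e = actual - forecast
    mad = np.mean(np.abs(e))                      # Mean Absolute Deviation
    mse = np.mean(e ** 2)                         # Mean Squared Error
    mape = np.mean(np.abs(e / actual)) * 100.0    # Mean Absolute Percentage Error (%)
    rmse = np.sqrt(mse)                           # Root Mean Squared Error
    return mad, mse, mape, rmse
```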
The process of training, testing, and predicting hydro-climatology data using ANN Back Propagation is presented further in the following flowchart. Based on Figure 1, it can be seen that training and testing are carried out 21 times using data from 2008-2016 to predict the 2017 data. In addition, the prediction phase of the 2008-2017 data training and testing was used to predict the 2018 and 2019 data.
Network Construction
In the construction phase of the ANN Back Propagation network, the amount of data determines the size of the input matrix. In the training and testing phase, the data used cover 9 (nine) years and each year consists of 12 (twelve) months, making the size of the input matrix 9 x 12 = 108 data points, while the prediction phase uses 10 (ten) years of data, making the input matrix 10 x 12 = 120 data points. Because we use 2 (two) hidden layers, hidden layer 1 contains 10 neurons and hidden layer 2 contains 5 neurons. The ANN Back Propagation network obtained in this case is shown in Figure 2.
Training & Testing
The architecture design of the ANN Back Propagation is carried out to determine the best architecture with certain parameter settings through training and testing of the previously split data. The architectural parameters used in this study are as follows: Based on the results of the training and testing of the five hydro-climatological data series, each with 21 trials, the results with the lowest error rate are shown in Table 1. The results in Table 1 were obtained from trial no. 3 (three) of the 21 (twenty-one) experiments carried out. All five data series produce the same result in training and testing for the activation function and training method: the best activation function for all layers is logsig, while the best training method is trainrp.
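For readers without the GUI, a rough equivalent of the selected configuration can be sketched with scikit-learn. This is only an approximation: MLPRegressor offers the logistic activation and the listed epoch, learning-rate, and goal-error settings, but it has no resilient-backpropagation (trainrp) solver, so plain gradient descent is used here, and X_train / y_train are placeholder arrays:

```python
# Sketch: two hidden layers (10 and 5 neurons) with logistic (logsig) activation,
# a maximum of 1000 epochs, learning rate 0.7, and error goal 1e-4.
from sklearn.neural_network import MLPRegressor

model = MLPRegressor(hidden_layer_sizes=(10, 5),
                     activation="logistic",   # counterpart of MATLAB's logsig
                     solver="sgd",            # no Rprop (trainrp) in scikit-learn
                     learning_rate_init=0.7,
                     max_iter=1000,
                     tol=1e-4)
# model.fit(X_train, y_train)
# y_pred = model.predict(X_test)
```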
Prediction
The prediction of the 2018 hydro-climatology data was conducted after finding the training method and activation function with the highest level of accuracy. The prediction results are presented in Table 2. The output in the form of the actual and forecast data in Figure 3, Figure 4, Figure 5, and Figure 7 was obtained by setting the maximum number of epochs to 1000. However, training for the rainfall data finished at epoch 522 with a gradient of 0.110, the wind speed data at epoch 587 with a gradient of 0.247, the air humidity data at epoch 349 with a gradient of 0.000444, the duration of sunlight data at epoch 469 with a gradient of 0.346, and the temperature data at epoch 450 with a gradient of 0.000960.
Conclusion
The activation function, the training method, and the number of neurons in each layer of the ANN Back Propagation greatly determine the outcome of the prediction. This can be seen from the 21 experiments that were carried out with combinations of the three components. The logsig activation function and the trainrp training method are the most accurate combination, producing the smallest error.
"year": 2020,
"sha1": "9dfd226159b808bf09c9fe60512ff22462a184d4",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/413/1/012008",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "85c7efa0df4520cdb9d40042c2c366ac3c22e289",
"s2fieldsofstudy": [
"Computer Science",
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science",
"Physics"
]
} |
24719764 | pes2o/s2orc | v3-fos-license | The 28-kDa Protein Whose Phosphorylation Is Induced by Protein Kinase C Activators in MCF-7 Cells Belongs to the Family of Low Molecular Mass Heat Shock Proteins and Is the Estrogen-regulated 24-kDa Protein*
Department of Medicine, Division of Oncology, University of Texas Health Science Center, San Antonio, Texas 78284
We have previously reported the presence of a 28-kDa protein in human mammary adenocarcinoma MCF-7 cells, whose phosphorylation by phorbol ester 12-0-tetradecanoylphorbol-13-acetate (TPA) and permeant diacylglycerol 1,2-dioctanoyl-sn-glycerol was correlated to growth arrest induced by the protein kinase C (PKC) activators. We now investigate the possible identity of this protein with the estrogenregulated "24-kDa" protein shown as related to the mammalian heat shock protein 27 (Fuqua, S. A. W., Blum-Salingaros, M., and McGuire, W. L. (1989) Cancer Res 49,[4126][4127][4128][4129]. 32P-Labeled 28-kDa protein from TPA-treated MCF-7 cells was immunoprecipitated with a 24-kDa-specific monoclonal antibody. Immunoblots from cell extracts fractionated by two-dimensional isoelectric focusing/SDS-polyacrylamide gel electrophoresis demonstrated that TPA induced the conversion of a 28-kDa isoform "a" (PI 6.7) to a more acidic isoform "b" (PI 6.2). Two-dimensional gel analysis of [SH]leucine-labeled MCF-7 cell extracts demonstrated that conversely to TPA, which induced only phosphorylation of 28-kDa protein, heat shock induced both synthesis (increase of isoform a) and phosphorylation (conversion of isoforms a to b) of the protein. 32P labeling of MCF-7 cells allowed demonstration of the presence of an extra phosphoisoform "c" (PI 5.9) upon TPA as well as heat shock treatment. When cells were pretreated with the bisindolylmaleimide GF109203X, a selective inhibitor of PKC, the heat shock-induced phosphorylation was unchanged, while the TPA effect was almost abolished, suggesting that the heat shockactivated protein kinase was very likely different from PKC. However, peptide mapping of the 28-kDa phos-* This work was supported by Institut National de la Santi et de la Recherche Midicale and by Association de la Recherche contre le Cancer, France. A preliminary report of this study was presented at the 8th International Conference on Second Messengers and Phosphoproteins, Glasgow, Scotland, August 3-8, 1992. The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement'' in accordance with 18 U.S.C. Section 1734 solely to indicate this fact. phoprotein suggested identical sites of phosphorylation upon TPA and heat shock stimulation. Partial amino acid sequencing of the 28-kDa protein revealed identity with both the 24-kDa protein and the mammalian HSP27. The fact that estrogens and PKC, respectively, regulate expression and phosphorylation of this 24128-kDa protein strongly argues for its key role in MCF-7 cell proliferation and differentiation.
The nucleotide sequence(s) reported in this paper has been submitted to the GenBank™/EMBL Data Bank.
Heat shock proteins (HSPs)¹ consist of a number of highly conserved proteins that are synthesized by all pro- and eucaryotic organisms in response to environmental stress including hyperthermia (1-3). Although these proteins are thought to play primarily a protective role in cells subjected to high temperature and other stresses, several lines of evidence suggest that they are involved in a number of other cell functions (2,3). High molecular weight HSPs including HSP90, HSP70, and HSP60 have been studied in the greatest detail and demonstrated as molecular "chaperones" in protein-protein interactions (1, 3). For example, HSP90 has been shown to be associated with steroid receptors (4) or with tyrosine kinases encoded by oncogenes (5). HSP70 and HSP60 have been implicated in protein folding, unfolding, oligomerization, and translocation (2,3). The low molecular weight HSPs are much less understood. Like the other families of heat shock proteins, they are involved in thermotolerance (6) and very likely in other cell functions, including cell growth and differentiation, for the following reasons. (i) Although their synthesis is stress-induced, they have been shown as constitutive proteins that are expressed at specific stages of development at normal temperatures (1). (ii) They are phosphorylated in response to a wide variety of stimuli including growth factors (7,8).
Protein kinase C (PKC) is believed to play a key role in transmembrane signaling leading to cell differentiation and proliferation (9). Characterization and identification of endogenous proteins phosphorylated by PKC activators have received particular attention. We have previously demonstrated the presence of a 28-kDa protein in the human mammary adenocarcinoma cell line MCF-7 (10), whose phosphorylation by TPA and DiC8 was closely correlated to growth arrest induced by the PKC activators (11). (¹The abbreviations used are: HSP, heat shock protein; PKC, protein kinase C; TPA, 12-O-tetradecanoylphorbol-13-acetate; DiC8, 1,2-dioctanoyl-sn-glycerol; IEF, isoelectric focusing; PAGE, polyacrylamide gel electrophoresis; PBS, phosphate-buffered saline; HPLC, high pressure liquid chromatography.) We further brought evidence indicating that this 28-kDa protein was very likely a member of the low molecular weight HSP family (12). Indeed, when proteins phosphorylated upon MCF-7 cell exposure to TPA and those synthesized after heat shock were fractionated on SDS-PAGE, the 32P- and [3H]leucine-labeled 28-kDa proteins showed the same electrophoretic mobility. Moreover, heat shock treatment of cells induced a clear-cut increase of 32P incorporation into the 28-kDa protein.
In the meantime, McGuire and co-workers demonstrated that a 27/28-kDa estrogen-regulated protein from MCF-7 cells, originally termed "24-kDa" protein (13) and more recently "stress-responsive protein 27" (14), also belonged to the low molecular weight HSP family (14). Indeed, the carboxyl-terminal amino acid sequence deduced from the partial cDNA encoding this protein showed striking homology with both the low molecular weight HSPs of Drosophila and mammalian α-crystallin, now also reported as a HSP (15). Furthermore, the truncated cDNA of the 24-kDa protein was identical to the 3'-region of the human HSP27 cloned from a genomic library (16). The 24-kDa mRNA was significantly induced both by estrogen and by heat shock treatment of MCF-7 cells. It was tempting to anticipate that the 28-kDa protein phosphorylated by PKC activators was similar to the 24-kDa estrogen-regulated protein. However, such a hypothesis was strikingly provocative since the former is believed to mediate cell growth arrest while the latter is supposed to be a marker for cell proliferation.
In this report, we further characterize the MCF-7 cell 28-kDa phosphoprotein as a HSP, and we bring evidence that this protein is the estrogen-regulated 24-kDa protein (or stress-responsive protein 27).
For protein phosphorylation studies, subconfluent cultures (0.5-1 X 10' cells/35-mm dish) were washed in phosphate-free Krebs-Ringer buffer containing 20 mM Hepes pH 7.3, 0.1% bovine serum albumin, and 0.2% glucose, then incubated in 1 ml of the same fresh buffer containing 50 pCi of [32P]phosphoric acid neutralized with 0.1 M Tris base. Incubation was performed either at 37 "C for 1 h followed by 1 h at 42 "C (heat shock-induced protein phosphorylation) or at 37 "C for 2 h, the phorbol ester TPA (100 ng/ml) being added the last 20 min (TPA-induced protein phosphorylation).
For protein synthesis studies, subconfluent cultures (0.5-1 X lo6 cells/35-mm dish) were or were not submitted to heat shock (1 h at 42 "C) and then incubated for 1 h at 37 "C in 1 ml of leucine-free RPMI 1640 medium containing 50 pCi of [3H]leucine. When indicated, TPA (100 ng/ml) was added the last 20 min.
Alternatively, blots were incubated with the polyclonal anti-HSP27 peptide antibody (dilution 1/200). In this case, the rabbit anti-mouse IgG step was omitted.
Immunoprecipitation-32P-Labeled MCF-7 cells were incubated for 20 min in the absence or in the presence of 100 ng/ml TPA. After cell washing in cold PBS, cells were rapidly harvested in PBS, and cell pellets were homogenized with Dounce in 0.1 ml of 20 mM Tris-HCI, pH 7.4, 10% (v/v) glycerol, 1 mM EDTA, 10 pg/ml leupeptin, and 5 mM (3-mercaptoethanol. Homogenates were centrifuged for 1 h at 105,000 X g and cytosols incubated for 2 h at 20 "C with protein A-Sepharose CL-4B previously coupled to rabbit anti-mouse IgG (incubation for 2 h at 20 "C; antibody dilution, 1/10) and 24-kDaspecific monoclonal antibody (incubation for 2 h at 20 "C; dilution, 1/50). The antigen-antibody complexes were then extracted using 0.1 ml of the electrophoresis sample buffer, heated at 60 "C for 10 min, and analyzed by monodimensional SDS-PAGE.
Peptide Mapping of 28-kDa Protein Isoforrm-One-dimensional peptide mapping was carried out according to Cleveland et al. (19) using protease V8 from S. aureus. After 32P-labeled MCF-7 cells were exposed to TPA or heat shock and fractionated on two-dimensional IEF/SDS-PAGE, the respective phosphoisoforms b and c of 28-kDa protein were excised from the gels and directly loaded on 4.5-15% SDS-PAGE and then overlaid with protease V8 (200 ng). Digestion proceeded in the stacking gel during the subsequent electrophoresis.
For two-dimensional peptide analysis, pieces of gel containing the required b and c phosphoisoforms were excised, minced, and incubated overnight with 12.5 pg of diphenylcarbamyl chloride-treated trypsin in 0.5 ml of 50 mM ammonium bicarbonate, pH 8. Lyophilized phosphopeptides were dissolved in 5% acetic acid and then applied to thin-layer cellulose plates and electrophoresed (500 V for 25 min) using a pH 4.4 buffer containing 15% acetone, 2% pyridine, 4% acetic acid, and 79% water. Ascending chromatography was performed in 37.5% butanol, 7.5% acetic acid, 25% pyridine, and 30% water.
Polyclonal Anti-HSP27 Peptide Antibody-An oligopeptide corresponding to the carboxyl-terminal end of human HSP27 (residues 184-193: TFESRAQLGG) was synthesized by the standard solid phase method using an Applied Biosystems 430A peptide synthesizer. The following side chain protecting groups were used on the tbutoxycarbonyl amino acids: tosyl (Arg), benzyl ether (Ser, Thr), and benzyl ester (Glu). Cleavage of the peptide from the resin and removal of side chain protecting groups were performed using the HF method. Peptide purity was checked by reverse phase HPLC analysis using an RP300 C8 column with a linear acetonitrile-0.1% trifluoroacetic acid gradient. Molecular mass was confirmed by fast atom bombardment mass spectrometry (MH+, 1065.4) using a ZAB-H5 double focusing spectrometer (VG analytical, Manchester, UK). The peptide (5 mg) was coupled to keyhole limpet hemocyanin (5 mg) in 5 ml of 0.1 M NaHC03, pH 8.6, and 0.05% glutaraldehyde, and the mixture was dialyzed against 0.1 M NaCl for 24 h. After mixing with complete Freund's adjuvant, the conjugate was then injected subcutaneously into rabbits (0.25 mg of peptide). After 4 weeks, animals were boosted every 2 weeks for 6 weeks with the same amount of peptide in Identification of 28-kDa Phosphoprotein in MCF-7 Cells incomplete Freund's adjuvant, and antisera were then collected.
Purification and Internal Sequencing of the 28-kDa Protein-The 28-kDa isoform a, isolated from two-dimensional IEF/SDS-PAGE, was digested in the gel matrix with porcine trypsin, and the resulting peptides were separated on a narrow bore C18 Altex column (25 X 0.2 cm, Beckman Instruments). Selected peptides were submitted to automatic amino-terminal sequencing on an Applied Biosystems sequenator (model 470) coupled to a phenylthiohydantoin-derivative analyzer.
RESULTS
Immunodetection of the 28-kDa Phosphoprotein in Cells-Previous data indicated that the 28-kDa protein phosphorylated in MCF-7 cells under TPA or DiC8 stimulation was very likely a member of the low molecular weight HSP family (12). We wondered whether this protein could be related to the estrogen-regulated 24-kDa protein reported by McGuire and co-workers (14) and further demonstrated as homologous to the mammalian HSP27. To assess the possible identity of the respective 28- and 24-kDa proteins, we performed immunoprecipitation studies with a 24-kDa-specific monoclonal antibody (C11) following stimulation of 32P-labeled MCF-7 cells with the PKC activator TPA. Immunoprecipitates were fractionated on SDS-12% PAGE, and the gels were submitted to autoradiography. Fig. 1A shows that the C11 antibody immunoprecipitated the 28-kDa protein phosphorylated upon TPA stimulation of cells. The specificity of this immunoprecipitation was assessed by using a normal mouse serum instead of the 24-kDa-specific antibody. Immunodetection of the 28-kDa protein was also performed after Western blotting of unlabeled MCF-7 cell extracts fractionated by SDS-PAGE (Fig. 1B) or by two-dimensional IEF/SDS-PAGE (Fig. 1C). While TPA stimulation of cells did not increase the amount of the specifically recognized 28-kDa protein (Fig. 1B), it clearly induced its phosphorylation, leading to the conversion of the isoform a (PI = 6.7) to the more acidic isoform b (PI = 6.2).
Two-dimensional IEF/SDS-PAGE Analysis of [3H]Leucine- and 32P-Labeled 28-kDa Protein upon TPA or Heat Shock Treatment of MCF-7 Cells-Previous data suggested that heat shock treatment of MCF-7 cells could induce both synthesis and phosphorylation of the 28-kDa protein (12). To further characterize this phenomenon and to compare it to the TPA effect, we performed two-dimensional IEF/SDS-PAGE analysis of [3H]leucine- and 32P-labeled MCF-7 cells following TPA and heat shock treatment (Fig. 2). [Legend to Fig. 1: cells were incubated in the absence (Cont) or in the presence of 100 ng/ml TPA. Immunoprecipitates obtained from 32P-labeled cells with antibody C11 or normal mouse serum (NMS) were subjected to SDS-PAGE followed by autoradiography (A). Western blots from unlabeled cells fractionated by SDS-PAGE (B) or two-dimensional IEF/SDS-PAGE (C) were probed with antibody C11. Only portions of the respective autoradiographs corresponding to the recognized 28-kDa protein or isoforms a and b are shown.]
["]Leucine labeling showed that TPA induced phosphorylation, i.e. conversion of isoform a (PI = 6.7) to isoform b (PI = 6.2), but not synthesis of the 28-kDa protein (no increase of isoforms a + b from TPA-treated cells uersus isoform a from control cells) while heat shock induced both phosphorylation (appearance of isoform b) and synthesis (increase of isoform a) of the 28-kDa protein. The sensitivity of :12P labeling was allowed to demonstrate the presence of two phosphoisoforms, b (PI = 6.2) and c (PI = 5.9), upon TPA as well as heat shock treatment. A small amount of phosphoprotein b was visible in the control confirming the two-dimensional pattern observed in Fig. 1C where isoform b was weakly present in the control.
To investigate the nature of the protein kinase involved in the heat shock-induced phosphorylation of the 28-kDa protein, we studied the protein phosphorylation pattern observed when cells were pretreated with staurosporine, a compound that is believed to be a potent PKC inhibitor. As shown in Fig. 3, in such staurosporine-treated cells, the TPA-induced 28-kDa protein phosphorylation was markedly inhibited with a total disappearance of the phosphoisoform c and a marked reduction of the 32P labeling of isoform b. Staurosporine also decreased, although to a lesser extent, the heat shock-induced 28-kDa protein phosphorylation. As staurosporine has been recently reported to inhibit protein kinases other than PKC (20,21), we performed identical studies in the presence of the bisindolylmaleimide GF109203X, a potent and more selective inhibitor of PKC (22). Fig. 4 shows that, in GF109203X-treated cells, the effect of TPA on 28-kDa protein phosphorylation was almost abolished while heat shock-induced phosphorylation was unchanged, suggesting that the heat shock-activated protein kinase is very likely different from PKC. Such a hypothesis is further reinforced by the fact that TPA but not heat shock induced the phosphorylation of an 80-kDa/PI 4.5 protein, the selectivity of GF109203X being demonstrated by the disappearance of this PKC-specific protein phosphorylation in cells treated with this compound.
To investigate whether TPA and heat shock induced phosphorylation of the 28-kDa protein at the same sites, protease V8 peptide maps were performed from the individual b and c phosphoisoforms obtained from two-dimensional IEF/SDS-PAGE. Fig. 5A shows identical phosphopeptide maps for both isoforms b and c of the 28-kDa protein phosphorylated upon TPA as well as heat shock treatment of MCF-7 cells.
To confirm this finding, we performed two-dimensional peptide analysis following trypsin digestion of the b and c 28-kDa isoforms upon phosphorylation by TPA and heat shock. Fig. 5B again shows similar patterns, with two major spots observed in each case. Subcellular Localization of the 28-kDa Protein upon TPA and Heat Shock Treatment of MCF-7 Cells-Low molecular weight HSPs have been previously shown to translocate from the cytosol to the nuclear compartment during heat shock exposure of cells (23-25). To further compare the effects of TPA and heat shock on the 28-kDa protein, we investigated the cellular localization of the protein upon the two distinct treatments. Fig. 6 illustrates an immunodetection of the 28-kDa protein in the respective cytosolic and nuclear fractions using a polyclonal antibody raised against a synthetic peptide derived from the HSP27 amino acid sequence. While heat shock induced the expected redistribution of the 28-kDa protein from cytosol to nuclear pellet, TPA did not change the initial cytoplasmic localization of the protein. The heat shock-induced translocation of the 28-kDa protein concerned, at least in part, the phosphorylated protein, as the 32P labeling of the 28-kDa protein followed the redistribution pattern observed when the total amount of the protein was measured (not shown).
Partial Sequencing of the 28-kDa Protein-Attempts to obtain protein sequence information from the 28-kDa isoform a on two-dimensional gels transferred to polyvinylidene difluoride Immobilon membranes were unsuccessful, very probably because the amino terminus of the protein was blocked. Thus, to obtain internal sequence information, we digested the protein in the gel matrix after isolation on two-dimensional gels. The resulting peptides were separated on HPLC and three peaks were selected for sequencing. The material eluting in two of those peaks gave unique sequences corresponding to peptides 13-20 and 38-46 of human HSP27 (16). The material eluting in the third peak was a mixture of three peptides. The deduced sequences could be assigned to peptides 97-110, 141-154, and 172-186 of HSP27. In other words, the peptide sequences obtained were found to be identical to the amino acid sequences of the 24-kDa protein (stress-responsive protein 27) and human HSP27 (Fig. 7). [Figure legend fragment: The horizontal arrow indicates the direction of electrophoresis to the cathode, while the vertical arrow indicates the ascending chromatography.]
DISCUSSION
Growth arrest of MCF-7 cells by the PKC activators TPA and diC8 has been correlated previously with the phosphorylation of a 28-kDa endogenous protein (11). In the present study, we have definitively identified this protein as a member of the low molecular weight HSP family. [3H]Leucine labeling of MCF-7 cells (Fig. 2) demonstrated that heat shock induced both synthesis (increase of isoform a) and phosphorylation (appearance of isoform b) of the 28-kDa protein, while TPA caused only its phosphorylation (conversion of isoform a to isoform b). 32P labeling of MCF-7 cells allowed us to detect a second phosphoisoform of the 28-kDa protein, isoform c in addition to isoform b, confirming the capability of both TPA and heat shock to induce the phosphorylation of the 28-kDa protein.
In several but not all experiments, the labeling of isoform c was more pronounced after heat shock than upon TPA exposure (Figs. 3 and 4). However, it is difficult to draw conclusions about this phenomenon because of the distinct procedures used. 32P labeling of cells was either performed for 1 h at 37 °C followed by 1 h at 42 °C (heat shock-induced phosphorylation) or for 2 h at 37 °C, TPA being added during the last 20 min (TPA-induced phosphorylation).
[Fig. 6 legend fragment: (bottom) Cells were homogenized with a Dounce in 20 mM Tris-HCl, pH 7.4, 10% glycerol, 1 mM EDTA, 10 µg/ml leupeptin, and 5 mM β-mercaptoethanol. 2,000 × g pellets (P) and 105,000 × g supernatants (S) were subjected to SDS-PAGE fractionation. Western blots were probed with the polyclonal antibody raised against the HSP27 peptide.]
When 32P labeling of cells was carried out for 2 h at 37 °C following 1 h of heat shock at 42 °C, isoform c was much less evident (not shown), suggesting that this isoform is likely subject to rapid dephosphorylation or degradation. Whether the poor labeling of isoform c following TPA treatment of cells reflects stimulation by the phorbol ester of a specific phosphatase or protease for 28-kDa c remains to be established. Alternatively, the heat shock-dependent phosphorylation of the 28-kDa protein might involve a protein kinase distinct from PKC, inducing a more pronounced degree of phosphorylation of the 28-kDa protein with the appearance of isoform c. In any case, the stoichiometry of phosphorylation of the 28-kDa protein was difficult to assess upon heat shock treatment, as both synthesis and phosphorylation occurred. In the case of TPA stimulation, 30-50% of the [3H]leucine-labeled isoform a appeared converted to phosphoisoform b (Fig. 2). This was confirmed by immunological quantification of unphosphorylated (isoform a) and phosphorylated (isoform b) 28-kDa protein (Fig. 1C). Experiments using the bisindolylmaleimide GF109203X strongly suggest that the protein kinase activated by heat shock is different from PKC. While the TPA-induced phosphorylation of the 28-kDa protein was almost abolished in cells pretreated with this compound, no change was observed in the degree of 28-kDa phosphorylation caused by heat shock (Fig. 4). Conversely to staurosporine, which has been shown to inhibit other protein kinases in addition to PKC (20,21), GF109203X has been demonstrated to be a selective PKC inhibitor (22). Our data confirmed these findings, as staurosporine partly inhibited the heat shock-induced 28-kDa protein phosphorylation (Fig. 3). The activation by heat shock of a protein kinase distinct from PKC was further demonstrated by the fact that TPA but not heat shock induced the phosphorylation of an 80-kDa/pI 4.5 protein, which is likely related to the MARCKS protein shown to be a specific PKC substrate in various cell systems (26-28). GF109203X totally abolished this TPA-induced phosphorylation (Fig. 4). However, our results indicated that the respective kinases involved in 28-kDa phosphorylation, i.e. PKC and the heat shock-dependent kinase, phosphorylate the protein at similar sites, as the phosphopeptide maps of the 28-kDa isoforms b and c obtained upon TPA as well as heat shock treatment were identical (Fig. 5, A and B).
[Fig. 7 legend: Comparison of the amino acid sequence of the peptides obtained from the 28-kDa protein with the 24-kDa protein and human HSP27. Shown is the amino acid sequence of human HSP27 cloned from a human genomic library (16). The underline represents the peptides sequenced from the MCF-7 cell 28-kDa protein (isoform a), while the overline corresponds to the amino acid sequence deduced from the partial cDNA encoding the 24-kDa protein (14).]
As two-dimensional peptide analysis following trypsin digestion of b showed two major spots, it is tempting to postulate the presence of two phosphorylation sites in this isoform. The identity of the b and c peptide maps is rather intriguing, as c is supposed to contain an additional phosphate with respect to b. Such a result could suggest that the putative extra phosphorylation site in c is very close to one of the sites phosphorylated in b. This point needs further investigation.
The other important feature of our study is the demonstration that the 28-kDa phosphoprotein is the estrogen-regulated 24-kDa protein (or stress-responsive protein 27). First, the specific monoclonal antibody C11 raised against this 24-kDa protein recognized the 28-kDa protein phosphorylated upon MCF-7 cell stimulation by TPA (Fig. 1). There was a striking coincidence in the respective pI values of the unphosphorylated a and phosphorylated b isoforms of the immunodetected 28-kDa protein when compared with those found for the 32P- and [3H]leucine-labeled protein (Figs. 2-4).
Second, the 28-kDa protein was also recognized by a specific polyclonal antibody raised against a synthetic peptide derived from the carboxyl-terminal amino acid sequence of HSP27. McGuire and co-workers have previously reported that the 91 carboxyl-terminal amino acids deduced from the partial cDNA encoding the 24-kDa protein show total homology with HSP27 previously cloned from a human genomic library (16).
Finally, our partial sequencing of the 28-kDa protein demonstrated identity between the peptides sequenced and the corresponding sequences of both HSP27 and the 24-kDa protein. Although discrete differences in the whole amino acid sequences of the 28- and 24-kDa proteins cannot be totally ruled out, it is more probable that both proteins are in fact the same molecule.
Our study also demonstrates that the 28-kDa protein is susceptible to nuclear targeting under heat shock treatment of MCF-7 cells, this subcellular redistribution concerning at least in part the phosphorylated form of the protein. This finding confirms similar results obtained in other cell systems showing translocation of low molecular weight HSPs from the cytosol to the nuclear compartment upon heat shock (23-25). Whether this phenomenon has a physiological significance remains to be established. However, the fact that TPA did not induce a similar targeting of the 28-kDa protein might suggest different cellular functions of the protein depending on the stimuli. Such a multifunctionality of this HSP is further indicated by the fact that the 24/28-kDa protein is induced both by heat shock (this paper and Ref. 14) and by estrogen (14). Moreover, recent immunological evidence (29) suggests that the 24/28-kDa protein from breast cancer might be related to an estrogen-associated protein previously reported in human myometrium (30) and so far not identified. Thus, the function of the 24/28-kDa protein might depend not only on its cellular expression and subcellular localization but also on its possible implication in the estrogen receptor machinery. Finally, one can reasonably postulate that the state of phosphorylation of the protein might account for its cellular function. Indeed, it is tempting to hypothesize that the unphosphorylated form of the protein may be related to the stimulation of cell proliferation, while its phosphorylated forms might, on the contrary, relate to cell growth arrest. In any case, the different levels of regulation of this 24/28-kDa protein, i.e. synthesis, phosphorylation, subcellular localization, and possible association with the estrogen receptor, strongly argue for its key role in MCF-7 cell function. | 2018-04-03T05:24:16.963Z | 1993-07-15T00:00:00.000 | {
"year": 1993,
"sha1": "5649d6bca47bb068305d2cd90d6eaca2c17ba597",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/s0021-9258(18)82451-6",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "06c0272b49f8f85c573b73c98cc90d9f6cafc033",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
118621170 | pes2o/s2orc | v3-fos-license | Nonradial modes in RR Lyrae stars from the OGLE Collection of Variable Stars
The Optical Gravitational Lensing Experiment (OGLE) is a great source of top-quality photometry of classical pulsators. The collection of variable stars from the fourth part of the project contains more than 38 000 RR Lyrae stars. These stars pulsate mostly in the radial fundamental mode (RRab), in the radial first overtone (RRc), or in both modes simultaneously (RRd). Analysis of the OGLE data allowed us to detect additional non-radial modes in RRc and in RRd stars. We have found more than 260 double-mode stars with a characteristic period ratio of the additional (shorter) period to the first-overtone period around 0.61, increasing the number of known stars of this type by a factor of 10. Stars from the OGLE sample form three nearly parallel sequences in the Petersen diagram. Some stars show more than one non-radial mode simultaneously. These modes belong to different sequences.
Introduction
RR Lyrae stars are classical pulsating stars. They are known to pulsate mostly in the radial fundamental mode (RRab) or in the first overtone (RRc). Among RR Lyrae stars there are also double-mode pulsators, which pulsate in the fundamental mode and first overtone simultaneously (RRd, green asterisks in Fig. 1) or in the fundamental mode and second overtone (red triangles in Fig. 1). The latter group was discovered mostly thanks to excellent space observations. Observations also revealed a group of RR Lyrae stars in which an additional non-radial mode is excited. These stars pulsate in the radial first overtone (RRc or RRd stars) and in an additional mode with a shorter period, P_X. The period ratio of the additional mode to the first overtone is in the range 0.60-0.64 (blue circles in Fig. 1). The most typical value is around 0.61. Such a period ratio cannot correspond to two radial modes. Hence, the additional mode (the 0.61 mode in the following) must be non-radial. This type of pulsation was discovered in 23 RR Lyrae stars, based both on ground and on space observations (for a summary see Moskalik et al. 2015), and recently in 18 RR Lyrae stars from M3 (Jurcsik et al. 2015).
Data Analysis
The 0.61 mode always has a very low amplitude, in the millimagnitude regime. Almost all stars observed with space telescopes showed this non-radial mode (Chadid 2012; Molnár et al. 2015; Moskalik et al. 2015; Szabó et al. 2014). The low amplitude of this mode makes it very difficult to detect in ground-based observations. We decided to search for the 0.61 mode in RR Lyrae stars of the Galactic bulge observed by the OGLE project (Udalski et al. 2015). Although the quality of ground-based photometry is lower than that of space-based photometry, the number of observed stars is much higher in ground observations. For the analysis we chose all stars available from OGLE-III (Soszyński et al. 2011) pulsating in the first overtone. The input sample consists of 4989 RRc stars and 91 RRd stars. RRc stars were analysed with an automatic method using dedicated software (for details see Netzel et al. 2015a), and the 91 RRd stars were analysed manually. This resulted in the detection of 147 stars with the 0.61 mode (3% of the sample).
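As a minimal illustration of this kind of search (not the dedicated pipeline of Netzel et al. 2015a, and with synthetic placeholder data standing in for the OGLE photometry), one can locate the dominant first-overtone frequency with a Lomb-Scargle periodogram, prewhiten it, and inspect the residual spectrum in the band corresponding to period ratios 0.60-0.64:

import numpy as np
from astropy.timeseries import LombScargle

def prewhiten(t, mag, freq):
    # least-squares fit and removal of a sinusoid of known frequency
    X = np.column_stack([np.sin(2 * np.pi * freq * t),
                         np.cos(2 * np.pi * freq * t),
                         np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(X, mag, rcond=None)
    return mag - X @ coef

# synthetic stand-in: epochs (days) and I-band magnitudes of an RRc star,
# with a weak 0.61-mode signal injected for the demonstration
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 2000.0, 1000))
mag = (15.0 + 0.25 * np.sin(2 * np.pi * t / 0.35)
       + 0.005 * np.sin(2 * np.pi * t / (0.61 * 0.35))
       + rng.normal(0.0, 0.01, t.size))

# step 1: dominant (first-overtone) frequency
grid = np.linspace(1.0, 10.0, 200_000)            # cycles per day
f1o = grid[np.argmax(LombScargle(t, mag).power(grid))]

# step 2: prewhiten and search the residuals where P_X / P_1O = 0.60-0.64,
# i.e. frequencies f1o/0.64 to f1o/0.60 (a signal at 1/2 f_X would be
# checked the same way)
resid = prewhiten(t, mag, f1o)
band = grid[(grid > f1o / 0.64) & (grid < f1o / 0.60)]
fX = band[np.argmax(LombScargle(t, resid).power(band))]
print(f"P_1O = {1.0 / f1o:.5f} d, period ratio P_X/P_1O = {f1o / fX:.4f}")

In practice a signal-to-noise criterion on the residual peak would decide whether the detection is significant; the sketch simply reports the highest residual peak in the band.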
In OGLE-IV there are more than 10 000 RRc stars. Because our analysis was focused on a search for low-amplitude signals, we decided to choose only the most frequently observed stars. These are located in OGLE fields 501 and 505 (see the positions of the observational fields in fig. 15 in Udalski et al. 2015). The input sample consists of 485 RRc stars, which were analysed manually. The rest of the stars observed during the fourth phase will be analysed automatically. We detected 131 RRc stars with the 0.61 mode, of which 115 are new discoveries (Netzel et al. 2015b). With the best-quality data the 0.61 mode is found in 27% of stars. We also detected another group of double-mode radial-non-radial RR Lyrae stars (magenta crosses in Fig. 1, Netzel et al. 2015c).
Results
The Petersen diagram for multi-mode RR Lyrae stars is presented in Fig. 1. All known 0.61 stars are marked with blue circles. Thanks to the OGLE data we now know 303 0.61 stars altogether, compared to 41 previously known. With such a numerous sample we can see, for the first time, three sequences in the Petersen diagram. The lowest sequence, around period ratio 0.61, is the most populated. The highest sequence is less populated and is located around period ratio 0.63. Between these two, there is a third sequence, least populated but well separated from the other two; it is clustered around period ratio 0.62.
Figure 2. Frequency spectra centered at the frequency range characteristic of the additional mode (top panels) and its 1/2 subharmonic (bottom panels) for a sample of 0.61 stars. Directly underneath a signal with frequency f (top panel) is located its subharmonic with frequency 1/2 f (bottom panel). The period ratio of the additional mode (top panels) to the first overtone is indicated above each top panel.
The structures we detect in the power spectra of these stars are very broad (see fig. 3 in Netzel et al. 2015b). In the time domain this corresponds to variability of the amplitude and phase of the signal. This behaviour of the 0.61 mode is confirmed by other studies, both from ground-based (e.g. Jurcsik et al. 2015) and space-based observations (e.g. Moskalik et al. 2015; Molnár et al. 2015; Szabó et al. 2014). The most exciting result is the discovery of stars which have three signals in the power spectrum corresponding to the three sequences in the Petersen diagram. Power spectra of six such stars are presented in fig. 5 in Netzel et al. (2015b).
Another common property of 0.61 stars is the occurrence of subharmonics of the additional mode, at both 1/2 f_X and 3/2 f_X. We detected a signal at the subharmonic frequency in 26 stars, which constitute 20% of the OGLE-IV 0.61 stars. Additional modes and their subharmonics are shown in Fig. 2 for selected stars (see also fig. 11 in Netzel et al. 2015b). The structures of the subharmonics are very complex. They appear in the power spectra as wide bands of excess power. Before our analysis of the OGLE data, subharmonics had been detected only in space observations. Typically, the presence of a subharmonic frequency indicates period doubling of the parent mode (see e.g. Smolec et al. 2012). However, there is another possibility, proposed by Dziembowski (these proceedings): the signal around 1/2 f_X can be a real mode present in the star, while the signal at f_X is its harmonic. The majority of signals in the subharmonic frequency range, 75%, correspond to the 0.63 sequence. In Fig. 3 we present the Petersen diagram for 0.61 stars from the OGLE data. With red diamonds we mark stars for which a signal at the subharmonic is detected. The majority of these stars occupy the highest sequence. This finding supports the model proposed by Dziembowski (these proceedings). | 2016-01-29T13:27:37.000Z | 2016-01-29T00:00:00.000 | {
"year": 2016,
"sha1": "f9b2f9cfb3bd1b526879895ce320518346830703",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "f9b2f9cfb3bd1b526879895ce320518346830703",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
298752 | pes2o/s2orc | v3-fos-license | Single-cycle radio-frequency pulse generation by an optoelectronic oscillator
We demonstrate experimentally passive mode-locking of an optoelectronic oscillator which generates a single-cycle radio-frequency pulse train. The measured pulse-to-pulse jitter was less than 5 ppm of the round-trip duration. The pulse waveform was repeated each round-trip. This result indicates that the relative phase between the pulse envelope and the carrier wave is autonomously locked. The results demonstrate, for the first time, that single-cycle pulses can be directly generated by a passive mode-locked oscillator. The passive mode-locked optoelectronic oscillator is important for developing novel radars and radio-frequency pulsed sources, and it enables studying directly the physics of single-cycle pulse generation. © 2011 Optical Society of America
OCIS codes: (230.0250) Optoelectronics; (140.4050) Mode-locked lasers; (230.4910) Oscillators; (320.5550) Pulses.
References and links
1. A. J. DeMaria, D. A. Stetsen, and H. Heyman, "Experimental study of mode-locked Ruby laser," Appl. Phys. Lett. 8, 22 (1966).
2. C. V. Shank and E. P. Ippen, "Subpicosecond kilowatt pulses from a mode-locked cw dye laser," Appl. Phys. Lett. 24, 373-375 (1974).
3. S. Namiki, X. Yu, and H. A. Haus, "Observation of nearly quantum-limited timing jitter in an all-fiber ring laser," J. Opt. Soc. Am. B 13, 2817-2823 (1996).
4. H. A. Haus, "Theory of mode locking with a fast saturable absorber," J. Appl. Phys. 46, 3049-3058 (1975).
5. U. Morgner, F. X. Kärtner, S. H. Cho, Y. Chen, H. A. Haus, J. G. Fujimoto, E. P. Ippen, V. Scheuer, G. Angelow, and T. Tschudi, "Sub-two-cycle pulses from a Kerr-lens mode-locked Ti:sapphire laser," Opt. Lett. 24, 411-413 (1999).
6. D. H. Sutter, G. Steinmeyer, L. Gallmann, N. Matuschek, F. Morier-Genoud, U. Keller, V. Scheuer, G. Angelow, and T. Tschudi, "Semiconductor saturable-absorber mirror-assisted Kerr-lens mode-locked Ti:sapphire laser producing pulses in the two-cycle regime," Opt. Lett. 24, 631-633 (1999).
7. S. Rausch, T. Binhammer, A. Harth, F. X. Kärtner, and U. Morgner, "Controlled waveforms on the single-cycle scale from a femtosecond oscillator," Opt. Express 16, 17410-17419 (2008).
8. M. Y. Shverdin, D. R. Walker, D. D. Yavuz, G. Y. Yin, and S. E. Harris, "Generation of a single-cycle optical pulse," Phys. Rev. Lett. 94, 033904 (2005).
9. E. Goulielmakis, M. Schultze, M. Hofstetter, V. S. Yakovlev, J. Gagnon, M. Uiberacker, A. L. Aquila, E. M. Gullikson, D. T. Attwood, R. Kienberger, F. Krausz, and U. Kleineberg, "Single-cycle nonlinear optics," Science 320, 1614-1617 (2008).
10. G. Krauss, S. Lohss, T. Hanke, A. Sell, S. Eggert, R. Huber, and A. Leitenstorfer, "Synthesis of a single cycle of light with compact erbium-doped fibre technology," Nat. Photonics 4, 33-36 (2010).
11. X. S. Yao and L. Maleki, "Optoelectronic microwave oscillator," J. Opt. Soc. Am. B 13, 1725-1735 (1996).
12. N. Yu, E. Salik, and L. Maleki, "Ultralow-noise mode-locked laser with coupled optoelectronic oscillator configuration," Opt. Lett. 15, 1231-1233 (1995).
13. J. Lasri, A. Bilenca, D. Dahan, V. Sidorov, G. Eisenstein, D. Ritter, and K. Yvind, "Self-starting hybrid optoelectronic oscillator generating ultra low jitter 10-GHz optical pulses and low phase noise electrical signals," IEEE Photon. Technol. Lett. 14, 1004-1006 (2002).
14. Y. K. Chembo, A. Hmima, P. Lacourt, L. Larger, and J. M. Dudley, "Generation of ultralow jitter optical pulses using optoelectronic oscillators with time-lens soliton-assisted compression," J. Lightwave Technol. 27, 5160-5167 (2009).
15. J. Lasri, P. Devgan, R. Tang, and P. Kumar, "Self-starting optoelectronic oscillator for generating ultra-low-jitter high-rate (10 GHz or higher) optical pulses," Opt. Express 11, 1430-1435 (2003).
16. A. F. Kardo-Sysoev, "New power semiconductor devices for generation of nano- and subnanosecond pulses," in Ultra-Wideband Radar Technology, J. D. Taylor, ed. (CRC, 2001), ch. 9.
17. M. H. Khan, H. Shen, Y. Xuan, L. Zhao, S. Xiao, D. E. Leaird, A. M. Weiner, and M. Qi, "Ultrabroad-bandwidth arbitrary radiofrequency waveform generation with a silicon photonic chip-based spectral shaper," Nat. Photonics 4, 117-122 (2010).
18. C. C. Cutler, "The regenerative pulse generator," Proc. IRE 43, 140-148 (1955).
19. D. J. Jones, S. A. Diddams, J. K. Ranka, A. Stentz, R. S. Windeler, J. L. Hall, and S. T. Cundiff, "Carrier-envelope phase control of femtosecond mode-locked lasers and direct optical frequency synthesis," Science 288, 635-639 (2000).
20. J. Yao, F. Zeng, and Q. Wang, "Photonic generation of ultrawideband signals," J. Lightwave Technol. 25, 3219-3235 (2007).
21. J. Li, Y. Liang, and K. Kin-Yip Wong, "Millimeter-wave UWB signal generation via frequency up-conversion using fiber optical parametric amplifier," IEEE Photon. Technol. Lett. 21, 1172-1174 (2009).
22. F. Zhang, J. Wu, S. Fu, K. Xu, Y. Li, X. Hong, P. Shum, and J. Lin, "Simultaneous multi-channel CMW-band and MMW-band UWB monocycle pulse generation using FWM effect in a highly nonlinear photonic crystal fiber," Opt. Express 17, 15870-15875 (2010).
23. H. A. Haus and A. Mecozzi, "Noise of mode-locked lasers," IEEE J. Quantum Electron. 29, 983-996 (1993).
24. M. E. Grein, H. A. Haus, Y. Chen, and E. P. Ippen, "Quantum-limited timing jitter in actively modelocked lasers," IEEE J. Quantum Electron. 40, 1458-1470 (2004).
25. V. S. Grigoryan, C. R. Menyuk, and R.-M. Mu, "Calculation of timing and amplitude jitter in dispersion-managed optical fiber communications using linearization," J. Lightwave Technol. 17, 1347-1356 (1999).
26. M. I. Skolnik, Introduction to Radar Systems, 2nd ed. (McGraw-Hill, 1981), pp. 553-560.
Introduction
Passive mode-locking in lasers is used to generate ultrashort pulse trains [1,2] with a timing jitter that can be close to its quantum-limit value [3]. Ultrashort pulses that are generated by passive mode-locking are obtained by inserting a fast saturable absorber into a laser cavity [2,4]. The transmission of such an absorber increases as the intensity of the light increases. Therefore, the absorber promotes the laser to generate short intense pulses with a broad spectrum instead of generating a continuous wave signal with a low peak power. From the frequency domain point of view, the saturable absorber locks the phases of the laser modes to obtain short pulses. The shortest pulse duration that was demonstrated in passive mode-locked lasers was limited to a few cycles of the carrier wave [5-7]. To generate single-cycle optical pulses there is a need to utilize techniques that are based on coherent control of four-wave mixing [8], nonlinear optics [9], or combining laser sources [10].
Optoelectronic oscillators (OEOs) are hybrid devices in which the signal propagates alternately in optical and in electronic components [11]. Due to the low loss in optical fibers, fibers are utilized as a long delay line that increases the quality factor of the OEO. As a result, OEOs can generate continuous wave signals at frequencies up to tens of GHz with extremely low phase noise [11]. Coupled OEOs generate ultra-low-jitter optical pulses, which propagate through an all-optical path that contains an electro-optic modulator that is fed by an electrical continuous wave [12]. Short optical pulses can also be obtained by soliton-assisted compression of sinusoidally modulated prepulses generated by an OEO [13,14] or by using an electro-absorption modulator [15]. In all of those works a narrowband electrical filter is used to eliminate most of the cavity modes.
Generating a low-jitter single-cycle radio-frequency (RF) pulse train with a high-frequency carrier is important for ultra-wideband radars [16] and for arbitrary waveform generation [17]. To obtain short pulses from a self-sustained oscillator, several cavity modes should be locked, and hence the cavity length of the oscillator should be longer than the pulse carrier wavelength. In pioneering work, passive mode-locking of an electronic oscillator was demonstrated [18]. The saturable absorber was implemented by using an expander based on a tube. The effect of the difference between the group and the phase velocities on short pulses was studied. Optoelectronic oscillators offer significant advantages compared with electronic oscillators that generate short RF pulses. The bandwidth of electro-optical systems is significantly wider than that of electronic systems. Therefore, optoelectronic oscillators enable shortening the generated pulses, increasing the carrier frequency, and increasing the pulse bandwidth, as required in modern ultra-wideband radars [16]. The loss of optical fibers is significantly smaller compared with electronic transmission lines. Therefore, optoelectronic oscillators enable decreasing the repetition rate of the pulse train while maintaining low jitter, as required in radar applications.
In this paper, we demonstrate experimentally the generation of a low-jitter single-cycle pulse train with a carrier frequency in the RF region by using passive mode-locking of an OEO. It is the first time that single-cycle pulses are generated directly by a passive mode-locked oscillator. It is also the first time that passive mode-locking is demonstrated in an OEO. In this device pulses are amplified by an RF amplifier, as in electronic oscillators. The insertion of a 200 m long fiber into the cavity enables mode-locking, since it increases the cavity length without adding a significant loss. The long cavity enables the simultaneous oscillation of several modes, as required in the mode-locking technique. The mode-locking of the OEO enables a low timing jitter of less than 5 ppm of the round-trip duration. An autonomous carrier-envelope phase locking is obtained, and hence the pulse waveform is repeated each round-trip. In lasers, such locking requires adding an external feedback that controls the cavity length [19].
The oscillator described in this paper opens new opportunities to explore new physical effects and to study directly the basic limitations of single-cycle mode-locked oscillators. For example, mode-locked OEOs can be used to find the conditions on the cavity dispersion that allow the generation of single-cycle pulses and allow autonomous locking of the group and the phase velocities. In ultrashort-pulse lasers the measurement of the optical pulses gives only an indirect result for the electric field and also requires many pulses. Therefore, it cannot be implemented in real time. The passively mode-locked OEO reported in this paper is based on effects similar to those used to generate ultrashort optical pulses. However, the RF pulse waveform along the cavity can be measured directly. The use of RF components in OEOs also enables tailoring the oscillator dispersion. We note that the generation of ultra-wideband RF pulses and single-cycle pulses has been demonstrated by using optical systems that are based on the combination of a nonlinear effect and an optical filter [20-22]; however, the noise obtained in such systems is higher than the noise obtained in passive mode-locked devices, where the noise can be close to its quantum-limit value [3].
Experimental Setup
Figure 1 describes our experimental setup. Light from a semiconductor laser with an optical power of P_0 = 14 dBm at a wavelength of 1550 nm is fed into an electro-optic Mach-Zehnder modulator (MZM) with DC and AC half-voltages of v_π,DC = 6 V and v_π,AC = 5.5 V, respectively, an insertion loss of α = 6 dB, and an extinction ratio of about (1 + η)/(1 − η) = 20 dB. The bias voltage was set to v_B ≈ 10 V, such that low-voltage signals at the RF port are attenuated. The maximum attenuation was obtained for v_B = −1 and 11 V. The modulated light power at the output of the MZM, P_mod(t), is related to the signal at the RF input of the MZM, v_in(t), by Eq. (1) below [11], where v_P = 8 V. The modulated light is coupled through an optical coupler to tap out 10% of the optical signal for measurements. The remaining 90% of the optical signal is sent through a long fiber with a length of approximately 200 m, and is then detected by using a photodetector with a voltage bandwidth of 15 GHz. The output electrical signal is amplified by an RF amplifier with a 19 dB gain, followed by a saturable amplifier with a maximal gain of 13.7 dB that is described in detail in the next paragraph. The output of the amplifier is fed back into the RF port of the MZM through an RF coupler. The coupler was used to tap out −18.7 dB of the RF signal power to measure the signal both with a real-time scope and with an RF spectrum analyzer. By using a network analyzer we measured that the coupler adds a 90° phase shift to the tapped signal with respect to the signal that is fed to the modulator input.
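Eq. (1) itself did not survive extraction. A generic Mach-Zehnder transfer function of the kind used in Ref. [11], written here as a plausible sketch rather than the authors' exact expression (the sign convention and the precise interplay of v_P with v_π,DC and v_π,AC cannot be recovered from the surviving text), is:

P_{\mathrm{mod}}(t) = \frac{\alpha P_0}{2}\left\{1 - \eta\,\sin\!\left[\pi\,\frac{v_{\mathrm{in}}(t) + v_B}{v_P}\right]\right\}. \qquad (1)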
The inset in Fig. 1 describes schematically the slow-saturable RF amplifier: an RF signal is fed into a variable-voltage attenuator (VVA) and is then amplified by using an RF amplifier with a 13.7 dB gain and a maximal output power of 1.6 W. About 0.1% of the RF power at the output of the RF amplifier is tapped out and detected by an RF detector. The relation between the tapped power, P_t, and the voltage at the output of the RF detector is v_out = a·P_t(dBm) + b, where a = 0.04 V/dBm, b = 2.5 V, and the tapped power, P_t, is given in dBm. The rise time of the detector is about 40 ns. The output voltage is filtered by a low-pass filter (LPF) with a cutoff frequency of 100 kHz, and is amplified by using an operational amplifier such that v_agc = c·v̄_out + d, where v̄_out is the voltage at the output of the LPF, c = 4.4, d = 1.5 V, and v_agc is the automatic gain control voltage. The voltage v_agc is fed back into the control port of the VVA to set its attenuation. The attenuation of the VVA (in dB) varies approximately linearly between 0-5 dB as a function of v_agc in the region of 0-2.2 V. The response time of the LPF should be longer than the round-trip time, about 1 µs, so that the gain saturation depends on the average RF power of the signal. A higher average RF power at the input of the saturable RF amplifier results in a higher attenuation due to the VVA and, consequently, a lower total amplification. Thus, the saturation of the RF amplifiers depends on the average signal power, and it changes over a time scale that is about 10 times longer than the round-trip duration. The bandwidth of the pulses was mainly determined by the bandwidth of the saturable RF amplifier, which was about 550 MHz (full-width-at-half-maximum) around a central frequency of 600 MHz. The bandwidth of the other RF components is considerably wider (about 5 GHz). We used a network analyzer to measure the frequency response of the saturable RF amplifier. The gain spectrum, G(f), normalized to the maximal gain, G_max = 13.7 dB, is shown in Fig. 2(a). The measured phase response of the saturable RF amplifier between 200 MHz and 1100 MHz equals ϕ(f) = −2πf·τ_D + ψ(f), where τ_D ≈ 10 ns is an average delay that is added by the amplifier, and |ψ(f)| ≪ 2π. The other components in the cavity add a delay that is approximately equal to the delay of the optical fiber, τ_F ≈ 938 ns. The phase and the group velocities along one round-trip can be calculated by Eq. (2) below, where L ≈ 200 m is the length of the optical fiber. Figure 2(b) shows a comparison between the phase velocity and the group velocity, where the two velocities are normalized by v_0 = 2.11 · 10^8 m/s. The frequency dependence of the relative difference between the phase and the group velocities has an oscillatory behavior, with a maximal difference of about ±0.05% and a period of about 60 MHz. The high-frequency oscillation of the group velocity over a frequency octave of 440-880 MHz allows the locking of the relative phase between the pulse envelope and the carrier phase, as obtained in the experiments and as also obtained in our theoretical model that will be published elsewhere. The locking of the relative phase between the pulse envelope and the carrier phase is promoted by the fast response of the modulator, as discussed below (see Fig. 3).
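The expressions referred to as Eq. (2) were likewise lost in extraction. From the measured round-trip phase −2πf(τ_F + τ_D) + ψ(f) accumulated over the fiber length L, a natural reconstruction, consistent with the quoted normalization v_0 = L/(τ_F + τ_D) ≈ 2.11 · 10^8 m/s for L = 200 m and τ_F + τ_D ≈ 948 ns, is:

v_{\mathrm{phase}}(f) = \frac{L}{\tau_F + \tau_D - \psi(f)/(2\pi f)}, \qquad
v_g(f) = \frac{L}{\tau_F + \tau_D - \dfrac{1}{2\pi}\dfrac{d\psi}{df}}. \qquad (2)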
The bias voltage of the modulator is set such that its transmission increases as the input voltage increases, as shown in Fig. 3(a). The figure also shows that the modulator attenuates low-amplitude peaks in the input waveform. Therefore, the modulator is a fast saturable absorber with a time response that is significantly shorter than the pulse duration. The gain saturation of the RF amplifiers occurs over a time scale that is about three to four orders of magnitude longer than the pulse duration. Therefore, the gain saturation of the RF amplifiers approximately depends on the average power. The combination of the modulator and the slow saturation of the RF amplifier promotes the generation of single-cycle pulses. Such short pulses are transmitted efficiently through the modulator due to their high peak voltage. At the same time, a single-cycle pulse that propagates in the cavity has a very low average power. As a result, the RF amplifier is nearly unsaturated and its amplification is almost maximal. The carrier frequency and the bandwidth of the pulses are mainly determined by the central frequency and the bandwidth of the saturable RF amplifier. The pulse must contain a carrier frequency, since low-frequency components of the pulse cannot propagate inside the cavity because they are blocked by the RF amplifiers (as shown in Fig. 2(a)). Therefore, the time average of the pulse field must be equal to zero. When the gain is high enough, a bunch of pulses propagates in the cavity. By controlling the laser power and the bias voltage of the modulator we could control the loop gain and obtain a single-cycle pulse. For example, for v_B = 10.5 V, a bunch of about 50 single-cycle pulses was generated and the attenuation of the VVA was equal to 4 dB. When the bias voltage was gradually decreased to 10 V, a single-cycle pulse was generated. In this case, the control voltage of the attenuator was equal to 1.3 V, the attenuation of the VVA was 3 dB, the gain of the saturable amplifier was 10.7 dB, and the total gain between the waveform at the input of the modulator and the waveform at the detector output was about 30 dB. The long fiber and the mode-locking of the pulses enable a very low jitter.
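The discrimination between a high-peak single-cycle pulse and a low-amplitude background can be illustrated numerically. Since the paper's exact Eq. (1) calibration is not available here, the sketch below assumes a representative Mach-Zehnder transfer biased near a transmission null; the voltage scale and pulse parameters are illustrative only.

import numpy as np

eta = (10**(20 / 10) - 1) / (10**(20 / 10) + 1)   # from the 20 dB extinction ratio
v_P = 8.0                                          # volts (assumed scale parameter)

def mzm_transmission(v_in):
    # representative transfer, biased near a transmission null at v_in = 0:
    # low-voltage ripples see strong attenuation, large swings are transmitted
    return 0.5 * (1.0 - eta * np.cos(np.pi * v_in / v_P))

t = np.linspace(-5e-9, 5e-9, 4001)                 # 10 ns window
f0, fwhm = 650e6, 1.5e-9                           # carrier and envelope FWHM
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
pulse = 8.0 * np.exp(-t**2 / (2 * sigma**2)) * np.cos(2 * np.pi * f0 * t)
ripple = 0.5 * np.cos(2 * np.pi * f0 * t)          # low-amplitude background

print(f"peak transmission of pulse : {mzm_transmission(pulse).max():.3f}")
print(f"peak transmission of ripple: {mzm_transmission(ripple).max():.3f}")

With these assumed numbers the 8 V pulse peak is transmitted at nearly unit normalized transmission while the 0.5 V ripple is attenuated by more than an order of magnitude, which is the fast-saturable-absorber action described in the text.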
Experimental Results
Figure 4 shows the single-cycle pulse train that was measured by a real-time oscilloscope and a spectrum analyzer. The single-cycle RF pulse has an envelope with a full-duration-at-half-maximum of 1.5 ns and a carrier wave with a period of 1.5 ns. The carrier frequency is about 650 MHz. The measured spectrum that is described in Fig. 4(c) has a 5-dB bandwidth of 440 MHz between 440-880 MHz. Thus, the ratio between the highest and the lowest frequency of the pulse spectrum is greater than two, and the spectrum's 5-dB bandwidth spans a frequency octave. We note that the voltage shown in Fig. 4 is the voltage at the output port of the RF coupler. The voltage at the modulator input, which is shown in Fig. 3(b), is about 7.2 times higher than the voltage shown in Fig. 4 and is also 90° phase-shifted. The pulse envelope propagates at the group velocity while the carrier wave propagates at the phase velocity. To obtain repetitiveness between the waveforms of adjacent ultrashort pulses there is a need to lock the relative phase between the pulse envelope and the carrier wave. In the frequency domain this means that each Fourier component is an integer multiple of the inverse of
the time between adjacent pulses [19]. In case the group and the phase velocities are not the same, the pulse shape changes from one round-trip to another [18]. By using real-time and sampling oscilloscopes we verified that the shape of the electrical pulse in the mode-locked OEO is repeated every round-trip without a need to control the cavity length. Hence, the carrier phase and the envelope phase are locked autonomously. Locking of the carrier and the envelope phases in lasers requires adding an external feedback that controls the cavity length [19]. In the passively mode-locked OEO the locking is obtained without controlling the cavity length, since the response time of the modulator is an order of magnitude shorter than the carrier period, and hence a change in the pulse waveform from one round-trip to the following results in a significant increase in the loss. Furthermore, the relative difference between the measured phase and group velocities varies with a high-frequency period over the entire bandwidth and with an amplitude of less than 0.05%, as shown in Fig. 2(b). The rapid change of the group velocity over the pulse bandwidth, and the relatively small difference between the phase and the group velocities, allow the locking of the velocities, as obtained in the experiments.
The width of the pulse envelope, a(t), can be approximately extracted from the measured waveform v(t) = a(t) exp(2πi f_0 t)/2 + c.c., where f_0 is the carrier frequency. The Fourier transform of the waveform equals V(f) = A(f − f_0)/2 + A*(−f − f_0)/2, where A(f) is the Fourier transform of a(t). One part of the spectrum is located in the positive frequency region, V_p(f), and the other part is located in the negative frequency region, V_n(f). In a single-cycle pulse the spectrum in the positive frequency region V_p(f) contains not only components of A(f − f_0)/2, but also components from A*(−f_0 − f)/2. However, if the overlap between the negative and positive frequency components is small, we can assume that V_p(f) ≈ A(f − f_0)/2. Then, the spectrum of the envelope can be obtained by A(f) ≈ 2V_p(f + f_0). By applying an inverse Fourier transform to A(f) the envelope a(t) is obtained. Figure 5 shows the extracted envelope ±|a(t)|. The figure shows that the measured signal and the signal that is calculated from the envelope are similar but not identical, as expected when the bandwidth of the signal envelope is comparable with the carrier frequency. The full-duration-at-half-maximum of the envelope equals 1.5 ns, compared to the 1.5 ns period of the carrier. The time derivative of the envelope argument varies by less than 50 MHz over the time interval where |a(t)|²/max(|a(t)|²) > 0.1.
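A minimal numerical version of this envelope-extraction procedure, with a synthetic waveform standing in for the measured v(t) (the sampling rate and pulse parameters are placeholders), is:

import numpy as np

fs = 40e9                                    # sample rate, Hz
t = np.arange(-10e-9, 10e-9, 1 / fs)
f0, fwhm = 650e6, 1.5e-9                     # carrier; envelope FWHM
sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
v = np.exp(-t**2 / (2 * sigma**2)) * np.cos(2 * np.pi * f0 * t)

# keep only the positive-frequency part V_p(f), then A(f) ~ 2 V_p(f + f0),
# i.e. shift the positive-frequency spectrum down by the carrier frequency
V = np.fft.fft(v)
f = np.fft.fftfreq(v.size, 1 / fs)
Vp = np.where(f > 0, V, 0.0 + 0.0j)
a = 2 * np.fft.ifft(Vp) * np.exp(-2j * np.pi * f0 * t)   # complex envelope a(t)

env = np.abs(a)
above = t[env >= env.max() / 2]
print(f"extracted envelope FWHM ~ {(above[-1] - above[0]) * 1e9:.2f} ns")

This is the analytic-signal construction; for a single-cycle pulse the residual overlap between positive- and negative-frequency components is exactly why the extracted envelope matches the measured waveform only approximately.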
Pulse to Pulse Jitter
The jitter and the stability of the pulse repetition rate of the device are determined by the noise that is added in each round-trip. By using a sampling oscilloscope, the measured pulse-to-pulse jitter of the pulse train was found to be less than 5 ps, which is approximately 5 ppm of the pulse repetition period of 948.5 ns. The jitter measurement was limited by the oscilloscope accuracy.
Since we do not stabilize the system, the long-term stability is mainly determined by environmental changes in the fiber. The stability of the pulse repetition rate over a long time was measured by using a counter. The gate time of the counter, which determines the duration of each frequency measurement, was set to 4 seconds. The measurements were collected over a time period of about 45 minutes. The average pulse repetition rate was equal to 1,054,301 Hz and the rate change was less than 1.5 Hz. The frequency deviations from one measurement to the next had a normal distribution with a standard deviation of σ_f = 0.13 Hz. The repetition rate deviations from one measurement to the next had cross-correlation values that were less than 0.1, which implies that different measurements were not correlated.
We calculated the pulse-to-pulse jitter in our system due to additive white Gaussian noise. We describe the waveform of one of the pulses in the presence of noise by v(t) = f(t) + n(t), where v(t) is the voltage of the waveform at the output of the amplifier, f(t) is the corresponding unperturbed waveform in the absence of noise, and n(t) is a real noise that is added to the pulse waveform in each round-trip. We assume that the added noise is a real Gaussian noise with a time average <n(t)> = 0 and a correlation at the output of the RF amplifiers <n(t)n(t')> = (G ρ_N R / 2) δ(t − t'), where G is the amplification, ρ_N is the effective power spectral density of the noise (one-sided) at the input of the amplifier, R is the load impedance, and δ(t) is the Dirac delta function. The jitter due to the noise can be calculated as performed in lasers [23,24] or in optical communication systems [25]. Due to the small effect of dispersion on the RF pulses, the main source of the jitter in the mode-locked OEO is the direct contribution of the noise to the change in the central pulse time. We define the central pulse time of one of the unperturbed pulses as

T_p = (1/E_0) ∫ t f^2(t) dt,  (2)

where

E_0 = ∫ f^2(t) dt  (3)

is the energy of the pulse waveform. We define a time coordinate t' = t − T_p with respect to the central time of the unperturbed pulse T_p. In the presence of noise, the central pulse time becomes

T_p' = [∫ t' v^2(t') dt'] / [∫ v^2(t') dt'],  (4)

where τ is the round-trip time and t' ∈ [−τ/2, τ/2). In deriving Eq. (4) we neglect the effect of the pulse energy change due to the noise on the jitter. Keeping terms up to first order in n(t), the deviation in the central pulse time in the presence of noise equals

δT ≈ (2/E_0) ∫ t' f(t') n(t') dt'.  (5)

The random variable δT changes from one round-trip to the other. The standard deviation of δT is defined as the jitter. Since the added noise is a white Gaussian noise that is delta-correlated in time, the deviation of the central pulse time, δT, is normally distributed with a standard deviation of

σ_τ = (1/E_0) [2 G ρ_N R ∫ t'^2 f^2(t') dt']^(1/2).  (6)

We estimated the minimal theoretical pulse-to-pulse jitter in our system. We assume that the power spectral density of the noise, ρ_N, is dominated by two unavoidable noise sources: the thermal noise of the RF amplifiers, ρ_th = NF · k_B T_amb, and shot noise, ρ_SN = 2 q_e I_PD R, such that ρ_N = ρ_th + ρ_SN, where k_B is the Boltzmann constant, T_amb is the ambient temperature, NF is the noise factor of the RF amplifiers, q_e is the electron charge, and I_PD is the photocurrent. In our system R = 50 Ω, I_PD = 4 mA, and G = 32 dB. In the case of an ideal RF amplifier, NF = 1, and for T_amb = 300 K, the spectral noise density equals ρ_N = 7 · 10^−20 W/Hz. Therefore, the resulting timing jitter calculated by using Eq. (6) equals σ_τ = 0.6 ps.
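The noise-density and jitter estimates can be checked numerically. The constants below are the ones quoted in the text, but the pulse shape (Gaussian envelope, 1.5 ns FWHM, 650 MHz carrier) and the 10 V peak voltage are assumptions, and Eq. (6) is itself a reconstruction, so the result should only reproduce the quoted 0.6 ps to within its order of magnitude:

import numpy as np

kB, qe = 1.380649e-23, 1.602176634e-19
NF, T_amb = 1.0, 300.0                       # ideal amplifier, kelvin
I_PD, R, G = 4e-3, 50.0, 10 ** (32 / 10)     # photocurrent, load, 32 dB gain

rho_N = NF * kB * T_amb + 2 * qe * I_PD * R  # thermal + shot noise, W/Hz
print(f"rho_N = {rho_N:.1e} W/Hz")           # ~7e-20, as quoted in the text

# assumed pulse f(t): Gaussian envelope, 1.5 ns FWHM, 650 MHz carrier, 10 V peak
t = np.arange(-10e-9, 10e-9, 1e-12)
sigma = 1.5e-9 / (2 * np.sqrt(2 * np.log(2)))
f = 10.0 * np.exp(-t**2 / (2 * sigma**2)) * np.cos(2 * np.pi * 650e6 * t)

E0 = np.trapz(f**2, t)                       # pulse "energy", Eq. (3)
tc = np.trapz(t * f**2, t) / E0              # central pulse time, Eq. (2)
var = 2 * G * rho_N * R * np.trapz((t - tc)**2 * f**2, t) / E0**2   # Eq. (6)
print(f"sigma_tau = {np.sqrt(var) * 1e12:.2f} ps")   # sub-picosecond

With these assumptions the computed jitter comes out at a few tenths of a picosecond, the same sub-picosecond scale as the 0.6 ps quoted above.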
Conclusions
Single-cycle pulses are the shortest pulses that can be obtained for a given carrier frequency. We have demonstrated the generation of single-cycle RF pulses by passive mode-locking of an OEO. Our measurements indicate that an autonomous locking of the carrier phase with respect to the envelope phase is achieved, so that the pulse waveform is preserved in each round-trip. The measured pulse train has a low pulse-to-pulse jitter, less than 5 ppm of the round-trip duration. The method described here enables generating a single-cycle RF pulse train with a low repetition rate and a low jitter, which could not be generated until now by electronic systems. The carrier frequency of the OEO reported in this paper is 650 MHz. However, the method is directly scalable to higher frequencies, and it is limited today only by the maximum frequency of optoelectronic components, which is of the order of tens of GHz.
The low-jitter pulses that are generated by the mode-locked OEO are important for many radar applications, such as ultra-wideband radars [16], and bistatic or multistatic radars, in which the transmitting and the receiving antennas are separated [26]. In such radars the synchronization between the transmitting and receiving antennas can be dramatically improved by using a low-jitter pulse source. Ultra-low-jitter short pulses can also enable the development of novel radars. Ultra-wideband pulses are required to improve the spatial resolution of radars, and single-cycle pulses are the shortest pulses that can be obtained for a given carrier frequency. Doppler radars transmit signals with a long duration and a low phase noise to accurately measure the velocity of moving objects. Very short pulses with a low jitter, as generated by the system described in this paper, can be used to develop novel radars that will be able to accurately measure both range and velocity. Low-jitter single-cycle pulses are also important for generating RF pulses with an arbitrary waveform, due to their ultra-wide bandwidth.
Fig. 1. Schematic description of the experimental setup. Light from a continuous wave semiconductor laser is fed into an electro-optic Mach-Zehnder modulator (MZM). The modulated light is sent through a 200 m long fiber and is then detected by using a fast photodetector (PD). The detector output is amplified by a non-saturable RF amplifier that is connected to a saturable amplifier. The amplifier output is fed back through a coupler into the RF port of the MZM to close the loop. The inset describes schematically the saturable RF amplifier: an RF signal is fed into a variable-voltage attenuator (VVA) and is then amplified by using an RF amplifier. The RF power at the output of the amplifier is tapped out by an RF detector and is filtered by a low-pass filter (LPF) with a cutoff frequency of 100 kHz. This signal controls the attenuation of the VVA.
Fig. 2. (a) Gain spectrum of the saturable RF amplifier, normalized to the maximal gain G_max = 13.7 dB. (b) Comparison between the phase velocity v_phase (blue) and the group velocity v_g (red) in one round-trip, normalized to v_0 = 2.11 · 10^8 m/s. The relative difference between the phase velocity and the group velocity has an oscillatory structure in the frequency domain, with a maximal amplitude of about ±0.05% and a period of about 60 MHz. The high-frequency oscillation of the group velocity over a frequency octave of 440-880 MHz allows autonomous locking of the relative phase between the pulse envelope and the carrier wave, as obtained in the experiments.
Fig. 3. (a) The transmission curve of the MZM calculated by using Eq. (1) for a bias voltage v_B = 10.7 V. (b) Waveform at the RF port of the modulator. The waveform was obtained by measuring the pulse at the output port of the coupler by using a real-time oscilloscope, adding 18.7 dB, and shifting the phase of the waveform by 90°. (c) Normalized optical power at the output of the MZM, P_mod(t)/(αP_0) (defined in Eq. (1)), measured by using a 10% optical coupler connected to the output port of the MZM and measuring the optical signal by using a sampling oscilloscope with an average of 256 samples (green line). The optical waveform is compared to that calculated by multiplying the waveform at the input of the MZM by its transfer curve (red line).
Fig. 4. Measurement of the single-cycle pulse train by using a real-time oscilloscope (a-b) and by using a spectrum analyzer (c-d). (a) Single-cycle pulse waveform with a carrier period of 1.5 ns that corresponds to a carrier frequency of 650 MHz. (b) Single-cycle pulse train with a period of 948.5 ns that corresponds to a repetition rate of 1.0543 MHz. (c) Envelope of the spectrum measured with a resolution bandwidth RBW = 1 MHz. (d) Oscillating modes around a frequency of 649 MHz, measured with a resolution bandwidth RBW = 10 kHz. The mode spacing of 1.0543 MHz corresponds to the time period of the pulse train. The voltage at the modulator input is 7.2 times higher than the voltage shown in the figure.
Fig. 5. Single-cycle pulse waveform as measured by a real-time oscilloscope (yellow circles) and by a sampling oscilloscope with an averaging of 256 samples (red solid line). The waveform has a carrier period of 1.5 ns and its extracted envelope norm, ±|a(t)| (black dashed line), has a full-duration-at-half-maximum of 1.5 ns. The signal that is calculated from the envelope is shown for comparison (green solid line). | 2017-10-17T16:14:32.535Z | 2011-08-29T00:00:00.000 | {
"year": 2011,
"sha1": "a4779df2bf046aa9fedc7049848136e8e8af7e66",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1364/oe.19.017599",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "a4779df2bf046aa9fedc7049848136e8e8af7e66",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
221865359 | pes2o/s2orc | v3-fos-license | Gender, Physical Self-Perception and Overall Physical Fitness in Secondary School Students: A Multiple Mediation Model
Background: Physical self-perception is often related to better perceived physical fitness in adolescents. Moreover, it is an important social cognitive perspective for providing suitable mental health in this population. However, this relationship is unequal between boys and girls. Physical fitness is a marker of health in the young population. The aims of the present study were the following: (1) to compare physical self-perception and self-reported overall physical fitness (OPF) between boys and girls (gender) and across body mass index (BMI) status, and (2) to determine the mediating role of all physical self-perception subscales (except physical condition) and BMI status in the link between gender and OPF in adolescent students. Methods: This cross-sectional study consisted of 85 adolescent secondary school students between 12 and 17 years of age; 41 were boys (Mage = 14.6, SD = 1.7) and 44 were girls (Mage = 14.4, SD = 1.6). Adolescent participants completed all clinical characteristics by body composition measures (age, body weight, body height, and BMI). Physical self-perception was assessed by the physical self-perception profile (PSPP), whereas the international fitness scale (IFIS) was used to predict the self-reported OPF of the adolescents in the present study. Results: Gender (boys and girls) differed significantly in all PSPP subscales and OPF, whereas BMI status (underweight = 19 students, normal weight = 53 students, overweight/obese = 13 students) showed significant differences in all clinical characteristics, physical condition (PSPP), and OPF. A multiple mediation analysis was performed using bias-corrected bootstrap. This multiple mediation analysis revealed that all PSPP subscales were significant mediators between gender and OPF: attractive body (p = 0.013), sport competence (p = 0.009), physical strength (p = 0.002), and self-confidence (p = 0.002). The total direct effect of gender on OPF was significant (p = 0.002). Moreover, the multiple mediation analysis estimated a completely standardized indirect effect of X on Y for attractive body (effect = 0.109), sport competence (effect = 0.066), physical strength (effect = 0.130), and self-confidence (effect = 0.193). Conclusions: These findings contribute to understanding the link between gender and OPF in adolescent students and the mediating role of physical self-perception in this relationship. In addition, strategies focused on improving self-confidence and physical self-perception are necessary for female adolescent students, because boys showed better physical self-perception in all PSPP subscales. Girls are a risk group because they report low physical self-confidence, with the associated feelings of insecurity and psychological disorders. Thus, personal physical self-perception must be considered as an important social cognitive perspective for providing suitable mental health in children and adolescents.
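As a rough methodological sketch of the bootstrap multiple-mediation procedure described above (gender X, four PSPP mediators M, OPF Y), the fragment below fits the a- and b-paths by ordinary least squares and bootstraps the indirect effects. The data frame, column names, and the simple percentile interval (the bias-correction step is omitted) are placeholders; the published analysis may have used PROCESS-style software with additional covariates.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

def indirect_effects(df, x, mediators, y):
    # a*b indirect effect of x on y through each mediator,
    # with all mediators entered jointly in the outcome model
    fit_y = sm.OLS(df[y], sm.add_constant(df[[x] + mediators])).fit()
    effects = {}
    for m in mediators:
        a = sm.OLS(df[m], sm.add_constant(df[x])).fit().params[x]
        effects[m] = a * fit_y.params[m]
    return effects

# placeholder data standing in for the n = 85 sample
n = 85
meds = ["attractive_body", "sport_competence", "physical_strength",
        "self_confidence"]
df = pd.DataFrame({"gender": rng.integers(0, 2, n).astype(float)})
for m in meds:
    df[m] = 2.0 * df["gender"] + rng.normal(size=n)
df["OPF"] = df[meds].sum(axis=1) + rng.normal(size=n)

boot = [indirect_effects(df.sample(n, replace=True), "gender", meds, "OPF")
        for _ in range(2000)]
for m in meds:
    vals = np.array([b[m] for b in boot])
    lo, hi = np.percentile(vals, [2.5, 97.5])     # percentile CI
    print(f"{m}: indirect = {vals.mean():.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")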
Introduction
The practice of physical activity (PA) and sport helps students, during physical education (PE) lessons and in non-school time, to acquire better feelings of personal satisfaction [1], PA motivation [2], and better self-perception of physical fitness (PF) [3]. Self-perception is a reflection of the student's view of their capacity to meet the physical demands of PA and sports [1]. According to Fox et al. [4], physical self-perception is a key element in the pursuit of mental health and well-being. On the other hand, self-perception of PF is considered to be multidimensional, composed of perceived PF and athletic competence in sport activities and in activities with a component of muscular strength, flexibility, or cardiorespiratory fitness [5]. Moreover, perception of one's own PF is sensitive to variations in PA levels and psychological self-satisfaction (e.g., body image, satisfaction during PA practice, insecurity, etc.) [5,6]. Self-reported PF in children and adolescents is useful for establishing possible cardiovascular disease risk and diverse levels of PF in this population [3,7]. Thus, personal physical self-perception assessed by a validated instrument could be considered an important social cognitive perspective for providing suitable mental health in children and adolescents [4,7]. Moreover, self-perception of physical fitness in youth is related to a positive identity and fewer behavioral disorders when they practice PA [8].
Adolescent students' behaviors during a class or in their leisure time may vary depending on a number of factors, such as sport motivation, acceptance of themselves in sport activities, or satisfaction with physical competence as reflected in physical self-perception [9]. Accurate physical self-perception is considered to strongly influence adolescents' motivation and the regulation of their behavior, which is the reason for the proliferation of studies in this regard [10,11].
Physical self-concept is of particular importance because it is based on the relationship between the individual's personal beliefs and their subsequent behavior.
Fox and Corbin [12] included five physical competences regarding physical self-perception in their instrument, the physical self-perception profile (PSPP), with the purpose of analyzing the effects of the relationship between physical self-perception and participation in PA and sport activities: physical condition, attractive body, sport competence, physical strength, and self-confidence.
In this sense, research conducted with young participants has found that a better perception of one's physical condition is associated with the practice of regular physical exercise and sport in this population [5]. In addition, PF is associated with higher motivation towards PA and better well-being, especially when educational activities are combined, such as physical and nutritional education or the promotion of coeducation [10]. However, previous studies have shown that boys perceive a better physical condition than girls because of their higher satisfaction with everything related to body perception and PA inside and outside the school context [1].
The next PSPP physical competence is the attractive body measure. This PSPP subscale is controversial because body representation has very often been identified with psychiatric disorders and dissatisfaction in several types of population [13,14], specifically in girls [9,15]. Adolescence is characterized by a gradual progression from puberty to adulthood, with biological, psychological, social and cognitive changes that vary according to gender and age, and in which attitudes toward self-image are very significant [16]. Body image has generally been assessed through self-reported body size judgement [17], and girls are often more critical and demanding than boys about their body image due to social pressure and the associated obsession with perfection [18]. Sport activities offer adolescents multiple possibilities to improve personal and interpersonal skills, and therefore better sport competence [1]. On the other hand, girls' perceptions of social competence in sport practice are not as high as boys' [19], and this fact must be taken into account in gender comparative studies.
Another PSPP subscale is physical strength, which is strongly associated with general self-concept, happiness and life satisfaction [20][21][22]. In this line, physical strength self-perception has been shown to be associated with general fitness in the younger population [23], particularly among boys, owing to their preoccupation with displaying muscularity [24]. Self-confidence and social identity with respect to adolescents' interest in sport participation may vary over time, and it is important to begin contact with sport activities at an early age to establish a natural relationship and strong self-confidence in the young practitioner [19]. Self-confidence in boys is normally higher than in girls because they like to be more active in sports and in every activity of their daily life [25].
Given the literature cited above on the importance of the physical self-concept in youth, we highlight that physical self-perception [26] and body mass index (BMI) status [27] are markers of health in this population. Therefore, both self-perception through the PSPP subscales and BMI status can play an important role as mediators between gender and overall physical fitness (OPF) in adolescents. Regarding BMI status in secondary school students aged 11-14 years, daily lifestyle and several aspects of PA practice, such as interest in sport, frequency, or aptitude, are related to different BMI statuses such as overweight [28]. Differences in BMI status between adolescent boys and girls are difficult to appreciate due to the continuous body composition changes during this period [25]. PF can be measured objectively or subjectively. There are a multitude of objective PF measures based on physical tests for adolescents, such as the ALPHA-Fitness test (Assessing Levels of Physical Activity) [29], the HELENA study (Healthy Lifestyle in Europe by Nutrition in Adolescence) [30], and muscular strength tests [31]. On the other hand, it is possible to use subjective questionnaires to measure PF, such as the international fitness scale (IFIS) [3,7] or self-reported cardiorespiratory fitness [32]. However, objective measurements are always more expensive and difficult to perform with participants than subjective measurements such as questionnaires or written tests. The latter is readily achieved with the IFIS [7], a self-report measure for youth that identifies the level of physical fitness according to five components: OPF, cardiorespiratory fitness, muscular strength, speed/agility and flexibility.
Taking into account all of the above, the main aims of this study are (1) to analyze the difference of physical self-perception and self-reported OPF between boys and girls (gender) and BMI status (underweight, normal weight, overweight/obese), and (2) to determine the mediating role of all PSPP categories (except physical condition) and BMI status in the link between gender and OPF in adolescent students ( Figure 1).
Participants
A cross-sectional design with an observational and descriptive perspective was used in this study. The sample selection process was non-probabilistic and convenient. The total sample was composed of 85 adolescent students with ages ranging between 12 and 17 years (Mage = 14.5, SD = 1.6). According to the profile of the participants, 41 were boys (Mage = 14.6, SD = 1.7) and 44 were girls (Mage = 14.4, SD = 1.6). Both student groups were recruited from a secondary school in a village in Granada province, Andalusia region. This educational context was representative of that town because it was the only secondary school in the village. A total of 140 students from four classrooms were invited to participate, but only 85 accepted to be part of this study (60.7% of the total). There were no dropouts, and all participants finished the study.
An inclusion criterion for adolescent students in this study was having no limiting illness, as assessed by bioelectrical impedance analysis, and no history of neuropsychological impairment that could affect the results of the experiment. All variables obtained were subjective (questionnaires) except for the clinical characteristics. All participants were recruited for the study through the center's guidance advisor and read and signed an informed consent statement before taking part in the study. The participants were fully debriefed about the purpose of the study at the end of the experiments. The wide age range and the number of participants are explained by the non-obligatory nature of the study.
Considering a statistical power of 80% (z), a type I error (alpha) of 0.05, a response distribution of 50% (r) and a reference population of young people of secondary school age (12-17 years) (N = 184) in the village, the sample size of the present study was in the recommended range. The following formulas were used [33]:
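A standard finite-population sample-size calculation consistent with the parameters named above, shown here as a hedged reconstruction rather than necessarily the exact expression of reference [33], is:

\[
n_0 = \frac{z^{2}\, r\,(1 - r)}{e^{2}},
\qquad
n = \frac{n_0}{1 + \dfrac{n_0 - 1}{N}}
\]

where z is the standard normal quantile for the chosen confidence level, r = 0.50 is the response distribution, e = 0.05 is the margin of error, and N = 184 is the reference population.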
Research Design
The present study treats physical self-perception as a daily condition in the life of adolescents. Thus, we analyze the impact of gender (boys and girls) on self-perceived OPF and the results of this relationship. However, it is necessary to highlight more specific physical self-perceptions (mediators) that help us better understand this relationship between gender and OPF (Figure 1). These physical self-perceptions are attractive body, sport competence, physical strength and self-confidence (all PSPP subscales). In addition, BMI status is not a physical self-perception but a health marker that affects the association between male or female adolescents and OPF [28]. Therefore, we decided to include BMI status as the fifth mediator between gender and OPF.
Measurements
Clinical characteristics such as body weight (kg) and body mass index (BMI, kg/m2) were measured by bioelectrical impedance analysis with a Tanita SC 330s. In order to define groups according to the BMI reference criteria of the World Health Organization (WHO) (https://gateway.euro.who.int/en/indicators/mn_survey_19-cut-off-for-bmi-according-to-who-standards/) [34], the BMI statuses (underweight, normal weight, overweight/obese) were calculated and related to the clinical characteristics, the physical self-perception profile, and self-reported physical fitness. Body height (cm) was measured using a stadiometer (Seca 22, Hamburg, Germany). In addition, all participants completed the physical self-perception profile (PSPP) questionnaire and a self-reported physical fitness assessment by IFIS.
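As a concrete illustration of the BMI grouping described above, the minimal Python sketch below computes BMI from the impedance-scale weight and stadiometer height and assigns a status label. The numeric cut-offs passed in are hypothetical placeholders, since the study applies the age- and sex-specific WHO reference criteria rather than fixed thresholds.

def bmi(weight_kg, height_cm):
    """Body mass index in kg/m^2 from body weight (kg) and height (cm)."""
    return weight_kg / (height_cm / 100.0) ** 2

def bmi_status(value, low, high):
    """Map a BMI value onto the three study groups.

    low/high should come from the WHO BMI-for-age reference for the
    student's age and sex; the values used in the example call below
    are placeholders, not WHO cut-offs.
    """
    if value < low:
        return "underweight"
    if value < high:
        return "normal weight"
    return "overweight/obese"

# Illustrative call with made-up measurements and cut-offs:
print(bmi_status(bmi(55.0, 165.0), low=17.0, high=23.5))  # -> "normal weight"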
The physical self-perception profile (PSPP) [12] was used in its Spanish version [9] to measure the physical self-confidence of the adolescent students in this study. The PSPP examines the students' PA and sports practice in their daily life, as well as their sports habits in their leisure time, aiming to better understand their physical self-concept. The more positive the perception of physical capacity, the more likely levels of participation in PA are to increase in children and young people [35]. A total of 30 items make up this questionnaire, grouped into five subscales [9,12]: physical condition (physically active, physical performance, etc.), attractive body (confidence in body image, maintaining an attractive body), sport competence (ability to learn sports, sportsmanship, sport confidence), physical strength (confidence in one's own strength in diverse physical situations, muscular improvement), and self-confidence (satisfaction with one's physical condition and physical fitness). The possible Likert-scale answers ranged from "totally disagree" (1) to "totally agree" (4). The reliability of the PSPP Spanish version was highly significant (α between 0.69 and 0.89) and the internal consistency ranged from 0.70 (physical strength) to 0.80 (sport competence) [9].
The international fitness scale (IFIS) [7] was used to assess the self-reported physical fitness of the adolescents in the present study. This simple self-administered scale assessed the physical fitness of the participants in a short time; adolescents should be able to answer easy questions about their physical fitness. The instrument is composed of five questions: OPF, cardiorespiratory fitness, muscular strength, speed/agility and flexibility. However, we focused only on OPF, with the intention of knowing the perception of the adolescent students in this study. The possible Likert-scale answers ranged from "very poor" (1) to "very good" (5), and the questions always invited the participants to compare their self-reported physical fitness with that of their peers. The Cohen's kappa coefficient of test-retest for this question was significant (p < 0.001). The reliability of the IFIS was highly significant (α between 0.74 and 0.82) [36] and the internal consistency ranged from 0.58 (cardiorespiratory fitness) to 0.65 (OPF) [7].
Procedure
All the participants were given specific information about the study (the main aim, the expected duration of the questionnaire sessions and the procedures). In addition, participants' parents and those responsible for the secondary school center were informed about the nature and objective of the study: body composition and psychometric variable measurement, the anonymity of all responses, and the non-identification of the adolescent student participants. All adolescents in the present study were given two days to complete the measurement protocol during physical education classes. On the first day, they completed the anthropometric measurements in class-list order. On the following day, they filled in the questionnaires on physical self-perception (PSPP) and self-reported OPF. The instruments measuring the different variables were administered in the classroom by the researchers themselves without the teacher present. The researchers told the participants to answer the distributed questionnaires sincerely. Teachers could obtain the results on request.
The participants of the present study were selected through the responsible secondary school and the physical education teacher, who read and signed an informed consent statement before the study began. The participants' parents received information about the main aims of the investigation, based on the document approved by the Bioethics Committee of the University of Granada (563/CEIH/2018), and signed an informed consent form. All adolescent participants in this study were treated according to the American Psychological Association (APA) guidelines in order to ensure the anonymity of the students' responses.
Statistical Analyses
The normal distribution of the data was analyzed using the Kolmogorov-Smirnov test. The variables studied in the present research showed a non-parametric distribution. The means and standard deviations of the participants' clinical characteristics (gender, BMI status, age, body weight, and body height), PSPP subdomains (physical condition, attractive body, sport competence, physical strength, and self-confidence) and self-reported OPF by IFIS were computed for the student participants.
Comparisons of clinical characteristics, PSPP subdomains and IFIS physical fitness categories between boys and girls (gender) were performed with the Mann-Whitney U test, whereas differences in clinical characteristics, PSPP subdomains, and OPF by IFIS across BMI statuses (underweight, normal weight and overweight/obese) were assessed with the Kruskal-Wallis test. Pairwise comparisons were performed with Bonferroni's adjustment. The magnitude of the differences in the various outcomes across gender and BMI status categories was quantified using effect sizes [37].
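To make the test choices concrete, the following Python/SciPy sketch reproduces the logic of the gender and BMI-status comparisons for a single outcome (OPF). All data arrays are synthetic placeholders, and the z-based effect size shown is one common convention that is only assumed to match the effect size measure of reference [37].

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic OPF scores mimicking the study's group sizes (41 boys, 44 girls;
# 19 underweight, 53 normal weight, 13 overweight/obese).
opf_boys = rng.normal(3.8, 0.6, 41)
opf_girls = rng.normal(3.3, 0.6, 44)
opf_under = rng.normal(3.2, 0.6, 19)
opf_normal = rng.normal(3.6, 0.6, 53)
opf_over = rng.normal(3.1, 0.6, 13)

# Gender comparison: Mann-Whitney U test.
u, p_gender = stats.mannwhitneyu(opf_boys, opf_girls, alternative="two-sided")

# r = |z| / sqrt(N) effect size from the normal approximation of U
# (no tie correction; shown as an assumption).
n1, n2 = len(opf_boys), len(opf_girls)
z = (u - n1 * n2 / 2) / np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
r_effect = abs(z) / np.sqrt(n1 + n2)

# BMI-status comparison (three groups): Kruskal-Wallis H test; pairwise
# Mann-Whitney tests with Bonferroni's adjustment would follow a rejection.
h, p_bmi = stats.kruskal(opf_under, opf_normal, opf_over)

print(f"Mann-Whitney U = {u:.1f}, p = {p_gender:.3f}, r = {r_effect:.2f}")
print(f"Kruskal-Wallis H = {h:.2f}, p = {p_bmi:.3f}")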
The reliability of the data on the five variables included in the multiple mediation analysis (Figure 1) was assessed by Cronbach's alpha (α = 0.75).
The associations between categorical variables (age and BMI status) and continuous variables (PSPP dimensions and OPF by IFIS) were analyzed with Spearman's correlation coefficient. The correlation values for performance-based tests were interpreted as follows: weak or no relationship (r = 0 to 0.25), fair (r = 0.25 to 0.50), and moderate-to-good (r = 0.50 to 0.75) [38].
A mediation analysis is understood as a mechanism whereby a mediating variable transmits the effect of an independent variable to a dependent variable, based on linear regression models. Physical condition was not included with the other four mediator variables because this PSPP subscale measures a subjective perception similar to OPF, which could introduce collinearity and distort the consistency of the mediation analysis. Moreover, it is important to highlight that OPF is a general subscale that encompasses the remaining physical fitness subscales measured in the IFIS questionnaire. In order to assess whether the association between gender (independent variable) and OPF (dependent variable) was mediated by attractive body, sport competence, physical strength, self-confidence and BMI status, a multiple mediation analysis was fitted using bias-corrected bootstrapped mediation procedures [39]. Bootstrapping is a non-parametric resampling method that involves repeatedly drawing samples from the data by random sampling with replacement and estimating the indirect effect in each resampled dataset [40]. This multiple mediation analysis was performed using the PROCESS macro for SPSS (New York, USA), model 4 [39]. A bias-corrected bootstrap based on 5000 bootstrap samples with 95% confidence intervals (CIs) was used to test the statistical significance of the indirect and direct effects. If a confidence interval did not contain zero, the effect was considered significant. A statistical diagram of the indirect effect of X on Y through M and the direct effect of X on Y (c′) is shown in the hypothetical model of Figure 1.
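A minimal sketch of the bootstrap logic described above is given below for a single mediator, rather than the five parallel mediators fitted with PROCESS model 4. The data are synthetic placeholders, and a percentile confidence interval stands in for PROCESS's bias-corrected interval.

import numpy as np

rng = np.random.default_rng(1)
n = 85
gender = rng.integers(0, 2, n).astype(float)               # X: 0 = girl, 1 = boy
self_conf = 2.5 + 0.5 * gender + rng.normal(0, 0.5, n)     # M: hypothetical mediator
opf = 1.0 + 0.4 * gender + 0.6 * self_conf + rng.normal(0, 0.5, n)  # Y

def ols_coefs(y, predictors):
    """OLS coefficients of y on an intercept plus the given predictors."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def indirect_effect(x, m, y):
    a = ols_coefs(m, [x])[1]        # path a: X -> M
    b = ols_coefs(y, [x, m])[2]     # path b: M -> Y, controlling for X
    return a * b

# 5000 bootstrap resamples with replacement, as in the study.
boot = np.empty(5000)
for i in range(boot.size):
    idx = rng.integers(0, n, n)
    boot[i] = indirect_effect(gender[idx], self_conf[idx], opf[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(gender, self_conf, opf):.3f}, "
      f"95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
# An interval that excludes zero is read as a significant indirect effect.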
All statistical analyses were performed using the Statistical Package for Social Science (IBM SPSS Statistics for Windows 21.0. Armonk, NY, USA).
Results
Differences in the clinical characteristics of the study sample, PSPP domains, and physical fitness self-reported through IFIS are specified by gender and BMI status in Table 1. Boys showed higher values than girls in body height (p < 0.001), all PSPP domains (p < 0.05) and OPF (p < 0.01). According to BMI status, there were significant differences among the underweight, normal weight and overweight/obese groups in age and body weight (both p < 0.001), body height (p < 0.01), physical condition (PSPP domain; p < 0.01) and OPF (IFIS category; p < 0.01).
The multiple mediation estimated a completely standardized indirect effect of X on Y for attractive body (effect = 0.109), sport competence (effect = 0.066), physical strength (effect = 0.130), and self-confidence (effect = 0.193).
Discussion
The two established objectives of this study were (a) to analyze differences in physical self-perception and self-reported physical fitness between boys and girls (gender) and across BMI status, and (b) to determine the mediating role of all PSPP categories (except physical condition) and BMI status in the link between gender and OPF in adolescent students (Figure 1).
Clinical characteristics showed differences between boys and girls in body height, and it is important to highlight that adolescent growth in boys and girls has been discussed since 1976 [41]. The average body height in this study is in line with a multilevel longitudinal analysis of sex differences in height gain and growth, in which Japanese boys showed greater body height than girls; however, the girls' gain peaked approximately two years earlier than the boys' [42]. Body height is a specific indicator of puberty. It is related to the effects of this period and is generally associated with differences in body height growth between boys and girls [43,44], with a higher growth velocity for boys [45]. Moreover, prospective studies on prepubertal body composition have concluded that female pubertal development is intermittent [3-14,18-22,36]. Equally, age as an early pubertal marker could explain the relationship between body composition characteristics and pubertal development outcomes (increases in height, muscle, bone mass, etc.). Girls experience body weight changes, with a higher BMI, only after menarche. These changes cannot be considered determinants of earlier puberty onset. A higher prepubertal BMI may be associated with earlier menarche in girls [46], while there are not sufficient data on an increase in BMI after the onset of puberty in boys [47]. Li et al.'s study [46] concludes that further research is necessary to clarify whether a critical time window exists to explain an increase in body weight levels in early puberty.
In this study, an important result was that attractive body, sport competence, physical strength, and self-confidence (all PSPP subscales) were significant positive mediators in the link between gender and OPF. The adolescent students in the present study showed gender differences in those PSPP subscales. A multitude of reasons may be attributed to the observed differences in the physical self-concept of boys and girls. In this sense, it must be highlighted that girls normally show a less favorable relationship with the five PSPP subscales than boys, specifically in physical condition, sport competence, and attractive body [1,9,15,48]. Regarding the latter PSPP subscale, attractive body plays an important role in youth because the obsession with a perfect body is constant in their daily life. In fact, the fascination with beauty is common to all ages and sectors of society and is not only characteristic of young people [18]. The feeling of beauty and satisfaction with one's own body may accompany the growth and maturation of both boys and girls from early childhood. Youngsters undergo physical and cognitive changes just before the beginning of adolescence that influence their personal and social identity construction process [1].
Our findings coincide with Murcia and Cervelló's [9] statement that boys normally feel stronger physical self-confidence and perceive their attractiveness more highly than girls do. This fact could result from the effects of regular PA on the male attractive body and explain why boys behave with higher attractive-body self-confidence than girls. This is only a hypothesis, since PA level was not measured in the present study, although a number of studies confirm this relationship in adolescents [1,2,9,49,50]. Moreover, attractive body showed a positive mediation in the link between boys and OPF. A possible reason is that boys have a more favorable self-perception regarding their physical self-concept than girls [1,35]. In particular, girls very often report low general physical self-perceptions associated with a negative body image, social physique anxiety or depression [51]. Consequently, the female perception of OPF might be mediated by a low attractive-body concept of the self. However, attractive body may not be directly affected by the practice of PA [52], because changes in PA among adolescent girls are mainly predicted by perceived physical condition [49] rather than by cognitive variables such as body image self-perception. This means that we must be cautious with the findings on the mediation of attractive body between gender and OPF. Differences in biological characteristics between boys and girls must also be taken into account as a possible explanation of the attractive-body mediation on OPF, specifically in girls. Signs of puberty appear earlier in girls than in boys [46], with numerous physiological modifications such as breast and pubic hair development, changes in facial features, etc. [46,53]. Moreover, certain body symmetry and hormone signals are often perceived as attractive or unattractive among young children and adolescents [54]. Thus, girls could try to hide their physical changes in physical tasks such as exercise or sport and consequently show lower OPF because they feel less attractive.
Similarly, the sport competence perceived by the participants of this study follows a pattern similar to the attractive-body mediation analysis with respect to the relationship between gender and OPF. A previous study with Spanish students showed greater sport competence perception in boys than girls and an association of sport competence and physical strength with general fitness [50]. In another study with Spanish adolescents, male participants obtained higher scores in attractive body than female participants [9]. Those studies showed results similar to ours. However, the mediating role of sport competence has an impact on OPF for boys. The impact of sport on physical and perceived social competence might explain the improvement of physical health and athletic competence [19]. Therefore, the boys in this study could perceive a higher OPF score because their sport abilities, motivation, or satisfaction built a stronger physical self-concept than in girls, and therefore a greater self-perception of physical fitness in general. In contrast, girls could differ from boys in sport competence perception due to gender differences in the perception of social competence in PA practice and sport [19]. Regardless, it is not easy to discuss the different effects of gender on OPF perception through sport competence, because differences in sport self-perception between boys and girls may decrease in later adolescence [55].
Regarding the perception of strength in adolescents, the physical self-concept is always associated with physical strength perception in both boys and girls when an adequate physical fitness level exists [21]. However, we must be cautious, because adolescents begin to recognize different physical self-concepts as they grow up, and this is accompanied by a decline in, or underestimation of, self-esteem [56]. On the other hand, adolescents who practice PA regularly report higher self-confidence, autonomy, self-motivation and an overestimation of their physical self-concept, and this outcome also has to be interpreted with caution [57]. Muscularity and physical strength are typically linked with boys rather than girls [24]. Relatedly, boys normally perceive higher physical strength than girls and show a significant association between physical strength and general fitness [50]. On the contrary, adolescent girls reveal a weak association between physical strength perception and physical self-worth [25]. Likewise, the effect of physical strength on OPF is preceded by studies in which the correlation between physical strength and the general perception of physical fitness is significant [58], especially in male children and adolescents [50]. Given that our findings on the mediating role of physical strength follow the previously cited studies on the effects of gender on physical strength and of physical strength on OPF, we are in a position to affirm that boys experience greater physical strength perception because they perceive stronger physical self-confidence, and this produces higher self-reported OPF.
The intention to be physically active is important in adolescents, demonstrating the relevance of self-confidence and perceived physical fitness when there is a wish to practice PA or a sport [59]. Self-confidence is the most important predictor of physical exercise and fitness when the ego is empowered by an enhanced physical fitness concept [12]. In general, self-confidence has been shown to be an essential factor in mental health [60]. In particular, girls suffer lower self-confidence than boys, and these mental symptoms are related to psychological disorders [60]. That is why girls should be more active and practice PA with the aim of increasing their psychosocial well-being [61], preserving mental health, and strengthening self-confidence. Otherwise, when self-confidence perception is measured in male adolescents, it is normal to find greater levels of physical self-confidence [62] and social interaction [57]. Boys normally rate themselves with higher self-confidence than girls and report higher participation in PA in their daily life [63]. Thus, boys could be more active than girls, and the greater perceived self-confidence would therefore have an impact on OPF through a higher physical fitness perception. On the other hand, greater self-confidence is normally linked to different components of health-related physical fitness in both boys and girls, being an important pillar of a physically active life [64]. In general, OPF shows a positive relationship with cognitive factors in physically active students regardless of gender within the school context. In addition, cognitive factors may predict a better OPF in physical education settings or in any other context [65]. However, according to the authors of the construct validity and test-retest reliability study of the IFIS questionnaire in Spanish children [3], it is not strange to find differences between boys and girls in the main physical fitness components (cardiorespiratory fitness, muscular strength, speed/agility and flexibility). In accordance with the results discussed for the relationship between gender and OPF and its mediation by five variables (attractive body, sport competence, physical strength, self-confidence and BMI status), we can summarize the main factors in OPF differences between boys and girls as the perceived physical self-concept, satisfaction with one's own body, perceived social competence, and an adequate perceived physical fitness level.
Conclusions
The present study highlights the mediating role of four physical self-perception subscales (attractive body, sport competence, physical strength, and self-confidence) in the direct effect of gender (boys and girls) on the OPF perception of adolescent students. Boys perceived greater physical self-confidence in those four subscales, and greater OPF, than girls. Thus, we can confirm a high inequality between the genders among our secondary school participants. Considering the importance of the physical self-perceptions studied in the mediation of gender on OPF, strategies to improve the self-perception of adolescents should be considered, specifically in female adolescents. Moreover, the association between the regular practice of PA and greater physical self-perception is clear. As a consequence, higher levels of PA in adolescents are clearly related to greater OPF perception. Finally, it should be noted that girls are a risk group because they report low physical self-confidence, with its attendant insecurity feelings and psychological disorders. Strategies focused on improving self-confidence and physical self-perception are necessary in children and adolescent students.
Future research on adolescent students with characteristics similar to those evaluated in this study should focus on the improvement of physical self-perception through greater PA practice, specifically in female adolescents. Poor physical self-perception in attractive body, sport competence, physical strength, and self-confidence could have a negative influence on the mental and physical development of girls, and consequently on their perceived OPF.
There are some limitations in this study that could influence the interpretation of the main outcomes. The sample of secondary school students in our research is not large, and the results could be stronger with a larger sample. Moreover, an objectively measured OPF as the dependent variable would be necessary in further studies in which physical self-perception acts as a mediator with respect to gender or another independent variable.
Despite the limitations, the present study contributes to the understanding of the relationships between gender and OPF through attractive body, sport competence, physical strength, and self-confidence (physical self-perception) as mediators. | 2020-09-24T13:06:20.848Z | 2020-09-01T00:00:00.000 | {
"year": 2020,
"sha1": "569ef3ebe9b7389170732bda3e71c4aad63e1861",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/17/18/6871/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "67442ad348eed266586aa02204359a34611da862",
"s2fieldsofstudy": [
"Psychology",
"Education"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
231594803 | pes2o/s2orc | v3-fos-license | Thyrotoxic dilated cardiomyopathy: personal experience and case collection from the literature
Summary
The authors examine several reports of the literature concerning thyrotoxic dilated cardiomyopathy. In particular, it is pointed out that this clinical manifestation of hyperthyroidism is rare in readily diagnosed and properly treated hyperthyroidism. Case reports are analyzed comparatively. A case deriving from the direct experience of the authors is also presented.
Correspondence should be addressed to R De Vecchis; Email: devecchis.erre@virgilio.it
Learning points:
• Dilated cardiomyopathy has been reported as the initial presentation of hyperthyroidism in only 6% of patients, although <1% developed severe LV dysfunction.
• The clinical picture of thyrotoxic dilated cardiomyopathy can degenerate into overt cardiogenic shock, sometimes requiring the use of devices for mechanical circulatory assistance or extracorporeal membrane oxygenation.
• For thyrotoxic dilated cardiomyopathy, evidence-based pharmacologic measures valid for heart failure should always be supplemented by the administration of specific thyroid therapies such as thionamides (methimazole, carbimazole or propylthiouracil), whose relatively long latency of action should be supported by the i.v. administration of small doses of beta-blocker.
• In cases of cardiogenic shock, the administration of beta-blocker should be carried out only after the restoration of satisfactory blood pressure levels, with the prudent use of synthetic catecholamines if necessary.
The present article presents a collection of cases from the literature, plus one case from the authors' direct experience, all concerning thyrotoxic cardiomyopathy. The cases are characterized by an unusual presentation of hyperthyroidism with the clinical picture of thyrotoxic cardiomyopathy (1,2,3,4,5,6), although considerable polymorphism emerges from their comparison, owing to comorbid conditions capable of misleading the diagnostic work-up.
Case presentation
The report by Abbasi et al. (7) is centered on the case of a 34-year-old Hispanic male, diagnosed with Graves' disease three years earlier, who presented to the emergency room with complaints of generalized weakness, palpitations, chest pain and multiple episodes of nausea and vomiting. On presentation, the patient was tachycardic and had a precordial systolic flow murmur, while the ECG showed atrial flutter. In addition, cardiac troponin was 0.04 ng/mL and the echocardiogram revealed severely depressed left ventricular function with an ejection fraction of 26-30%. Of note, the thyroid function profile was markedly altered, with a steep rise in triiodothyronine and subnormal TSH: thyroid-stimulating hormone (TSH) 0.02 µIU/mL, free triiodothyronine (T3) 25.14 pg/mL and free thyroxine (T4) 5.23 ng/dL. In addition, physical examination showed upper and lower extremity weakness graded three out of five on the strength scale. The therapy included propranolol, along with propylthiouracil and hydrocortisone to prevent thyroid storm. Since blood work showed a potassium of 1.8 millimoles per liter (mmol/L), a central line was placed for rapid potassium repletion. Management was continued with methimazole plus propranolol. Physical exam showed an increase in upper and lower extremity muscle strength to five out of five. The patient was discharged on day nine on methimazole, propranolol and lisinopril, with an outpatient follow-up appointment. This report is very remarkable because it encompasses two life-threatening complications of Flajani-Graves-Basedow disease: the truly rare thyrotoxic periodic paralysis and thyrotoxic dilated cardiomyopathy. In the description by Meregildo Rodriguez et al. (8), a case of simultaneous presentation of decompensated thyrotoxicosis, diabetic ketoacidosis (DKA) and frank thyrotoxic dilated cardiomyopathy is reported. A patient presented to the emergency department with drowsiness, altered breathing (tachypnea and Kussmaul's breathing) and severe hypotension. He had a history of malaise, headache, fever, and generalized body pain during the preceding 6 days. Laboratory findings were serum glucose: 460 mg/dL, urea: 115 mg/dL, creatinine: 1.3 mg/dL, hemoglobin: 12.9 g/dL, hematocrit: 40%, platelets: 198,000/mm3, white blood cells: 10,100/mm3, pH: 6.99, TSH: 0.024 μIU/L, free-T4: 2.16 ng/dL (reference range (RR): 0.82-1.63 ng/dL), total-T3: 0.18 ng/mL (RR: 0.5-2.0 ng/mL), free-T3: 0.42 pg/mL (RR: 2.1-3.8 pg/mL). Echocardiography showed borderline pulmonary artery systolic pressure (35 mmHg) and severe LV systolic dysfunction (LV ejection fraction 35%), with left ventricular global hypokinesia and a restrictive mitral inflow pattern. Based on these results, normal saline, insulin infusion plus potassium chloride, sodium bicarbonate, norepinephrine, hydrocortisone 100 mg every 8 h, methimazole 20 mg every 8 h, and Lugol's solution 10 drops every 8 h were prescribed. Based on the physical examination, chest X-ray (CXR), and a progressive decrease in partial oxygen pressure compatible with acute lung edema, i.v. furosemide 20 mg every 12 h was administered for 2 days. On the 6th day of treatment, hydrocortisone and Lugol's solution were stopped, and methimazole was reduced by half. The patient was discharged with almost complete recovery. This report is suggestive of the patient's failure to adhere to previously prescribed insulin therapy, in conjunction with the triggering of a thyrotoxic crisis due to previously undiagnosed nodular goiter.
In fact, the clinical picture improved dramatically with the administration of methimazole, that is, a drug not previously prescribed to the patient because he had never received the correct diagnosis of 'toxic nodular goiter'.
In the report by Alam et al. (9), the case of a 65-year-old woman referred urgently from primary care with worsening breathlessness, tachyarrhythmia due to atrial fibrillation and a newly diagnosed left bundle branch block (LBBB) is described. She had a background of type 2 diabetes, asthma and hypertension. The initial ECG revealed atrial fibrillation with a fast ventricular rate on the background of LBBB. Echo findings were consistent with systolic impairment. Initial testing, including a thyroid function test, revealed hyperthyroidism. It became evident that this patient had thyrotoxic dilated cardiomyopathy. Chest X-ray: cardiomegaly, left-sided pleural effusion, prominent pulmonary hila; the image was suggestive of early pulmonary edema. Echocardiogram: a severely dilated left atrium with severe impairment of overall left ventricular systolic function (ejection fraction 24% using the biplane Simpson method); moderate tricuspid regurgitation and mild mitral regurgitation. The thyroid function test revealed: thyroid-stimulating hormone (TSH): <0.01 (0.35-3.50 mU/L), thyroxine (free T4): 28.5 (7.5-21.1 pmol/L), triiodothyronine (free T3): 8 (3.8-6.0 pmol/L). Thus, thyrotoxic dilated cardiomyopathy was diagnosed. The patient was started on antithyroid medication (carbimazole 20 mg once daily), a beta-blocker (bisoprolol 2.5 mg once daily), ramipril 2.5 mg once daily and i.v. furosemide 80 mg twice daily. As the patient developed bronchospasm, bisoprolol was later switched to ivabradine 2.5 mg twice daily, which was slowly uptitrated to 7.5 mg twice daily. The dose of i.v. furosemide was decreased and switched to bumetanide 1 mg once daily.
In the further course of hospitalization, the patient's condition improved over the next 3-4 days, with complete resolution of the fluid overload, and the heart rate slowed to 70 b.p.m.
She reverted to sinus rhythm, maintaining her heart rate at around 50-60 b.p.m.
Her repeat echocardiogram showed moderate to severe left ventricular (LV) impairment with a decrease in tricuspid and mitral regurgitation. Her ejection fraction had improved to 37% (biplane Simpson method).
The report by Allencherril et al. (10) describes the dramatic picture that occurs in the presence of thyrotoxic heart failure with severe hypotension. The patient described was already known to have Graves' disease; however, he had not taken the previously prescribed methimazole for the preceding 45 days. The clinical scenario was initially characterized by dyspnea and palpitations, with an electrocardiographic picture of 2:1 atrial flutter with a ventricular response of 160 b.p.m. Laboratory evaluation showed suppressed thyroid-stimulating hormone and markedly elevated free thyroxine (T4) and triiodothyronine (T3). Despite the rapid resumption of therapy with an antithyroid drug (propylthiouracil) and propranolol, worsening dyspnea arose, with an echocardiographic picture showing a severely depressed left ventricular ejection fraction (about 20%), together with left atrial dilation and functional mitral insufficiency. After two days of antithyroid and beta-blocker therapy, cardiac decompensation deteriorated further into a frank picture of cardiogenic shock, complicated after several hours by monomorphic ventricular tachycardia (VT). The VT was converted to sinus rhythm by means of transthoracic electric shock, but the subsequent administration of inotropic drugs, namely epinephrine, norepinephrine and vasopressin at congruous doses via i.v. infusion, was not sufficient to raise blood pressure to normal levels.
Thus, an adequate restoration of blood flow to the various organs and systems was not achieved. In particular, severe renal failure developed, along with the persistence of heart failure.
Therefore, it was necessary to resort to left ventricular mechanical assistance techniques such as the intra-aortic balloon pump (IABP) and venoarterial extracorporeal membrane oxygenation (VA-ECMO).
After 6 days of uninterrupted VA-ECMO therapy, a partial recovery of the LVEF was identified, reaching values of 35-39%. The subsequent evolution was a further increase in LVEF up to values of 45-49% on the 11th day. The patient was extubated 2 days after decannulation from ECMO. At discharge, he had returned to baseline cognitive status and functional capacity.
The case report derived from our direct observation refers to a 60-year-old male patient with a clinical picture of cardiogenic shock related to severe left ventricular dilation and dysfunction. The ECG showed atrial fibrillation at an average ventricular rate of 180 b.p.m. (Fig. 1). Chest radiography showed cardiomegaly and pulmonary edema (Fig. 2).
Transthoracic echocardiogram showed dilation of the left ventricle with severe impairment of global and segmental contractility (ejection fraction 15%) and slight mitral insufficiency.
Emergency blood chemistry tests showed a normal blood count, blood sugar: 128 mg/dL, azotemia: 37 mg/dL, GOT: 303 U/L, GPT: 356 U/L, Na: 139 mEq/L, K: 4.4 mEq/L, total bilirubinemia: 1.86 mg/dL; there was also a serious picture of respiratory and metabolic acidosis. The patient was intubated, mechanically ventilated and treated with bicarbonates, digoxin, furosemide, dobutamine and heparin. A hemodynamic assessment after 24 h, obtained by positioning a Swan-Ganz catheter in the right heart chambers, documented: cardiac output 7.6 L/min; pulmonary capillary pressure 11 mmHg; pulmonary pressure 40/24 mmHg, mean 29 mmHg. Coronary angiography documented a substantially undamaged coronary tree, that is, free from any hemodynamically significant stenosis (Fig. 3). The diagnosis of thyrotoxicosis, clinically suspected, was confirmed by the measurement of thyroid hormones: FT4 4.7 ng/dL (RR: 0.6-1.7 ng/dL); FT3 1.9 pg/mL (RR: 2.2-5.8 pg/mL); TSH <0.05 mU/mL (RR: 0.2-4 mU/mL); T4 125 ng/mL (RR: 40-120 ng/mL); T3 1.5 ng/mL (RR: 0.6-2 ng/mL). Thyroid antibodies were normal; moreover, the inflammation indexes were normal. Methimazole therapy, 10 mg/day in two administrations, was initiated and, under careful hemodynamic monitoring, small doses of i.v. propranolol were administered (0.05 mg/kg body weight). In the following days there was a progressive improvement in hemodynamic conditions. On the fifth day, atrial fibrillation turned into atrial flutter, while an echocardiographic examination documented a reduction in the size of the left ventricle and an ejection fraction of 35%. On the seventh day the patient was extubated. Subsequently (ninth day), the atrial flutter was treated with synchronized low-energy direct-current cardioversion (75 J), which resulted in conversion to sinus rhythm (Fig. 4). On the tenth day the patient was discharged in good general condition, in sinus rhythm and with complete regression of the prior pulmonary edema (Fig. 5), on therapy with methimazole, ACE inhibitors and furosemide, with a diagnosis of thyrotoxic dilated CMP during Plummer disease. At the check-ups carried out at 6 and 12 months, the patient was asymptomatic, in sinus rhythm and euthyroid, and the echocardiogram documented normal ventricular cavity sizes with restored segmental and overall wall motion of the left ventricle (LVEF: 52%).
Discussion
In the case reports of Abbasi et al. (7) and Allencherril et al. (10), a condition of hyperthyroidism was already known at the time of the onset of the thyrotoxic crisis. In the remaining three case reports, that of Meregildo Rodriguez et al. (8), that of Alam et al. (9) and our own, the condition of hyperthyroidism was unknown to the patient and the doctors at the time of hospitalization, so that thyrotoxic dilated CMP was the initial manifestation of hyperthyroidism (6).
In three of the five case reports, that of Meregildo Rodriguez et al. (8), that of Allencherril et al. (10) and that described by our team, there is a low-output syndrome requiring the use of i.v. inotropes: paradoxically, norepinephrine in the report by Meregildo Rodriguez et al. (8), dobutamine in our experience, and the association of epinephrine, norepinephrine and vasopressin in the report by Allencherril et al. (10).
Figure 3. Coronary angiograms documenting substantial integrity of the three major coronary branches explored with coronarography, performed immediately before the patient's clinical picture degenerated into shock.
The need for ECMO, described in the study by Allencherril et al. (10), is indicative of the huge severity of the multiorgan deterioration that thyrotoxicosis caused in this case. In fact, the patient described in that report, suffering from Graves' disease but having omitted to take the prescribed drugs, exhibited rapidly worsening systolic heart failure (left ventricular ejection fraction = 20%) from thyrotoxic dilated cardiomyopathy, to which acute renal failure and shock were added, the latter not responsive to sympathomimetic amines. Despite the severe upheaval of the acute phase, after weaning from ECMO, restoration of a normal left ventricular ejection fraction was achieved under the effect of the reintroduction of thionamides into the therapy.
In all the cases presented, there was a recovery of pump function after the introduction of antithyroid drugs (propylthiouracil, methimazole or carbimazole) into the patient's therapeutic scheme. The relatively rapid restoration of normal cardiac volumes is a typical aspect of the evolution of thyrotoxic dilated CMP when it is recognized and treated promptly (6,11,12,13). By contrast, in the majority of dilated cardiomyopathies of other origin, early recognition of the pathology and the choice of appropriate treatment do not per se warrant a favourable regression of cardiac remodeling, nor do they guarantee a restoration of pump function.
Finally, it must be considered that the cardiac decompensation of thyrotoxic cardiomyopathy, albeit benefitting from evidence-based therapy consisting of anti-RAAS drugs (namely ACE inhibitors, angiotensin receptor blockers or mineralocorticoid receptor antagonists) and/or beta-blockers, cannot be effectively antagonized without the paramount support of antithyroid drugs. In other words, no measure is sufficient to stop thyrotoxic cardiac decompensation if hormonal hyperactivity persists and is not corrected by highly specific measures, namely thyroid-suppressive drugs, radioiodine or thyroidectomy. In addition, in the presence of a low-output syndrome, beta-blockers should be added to the therapy only after adequately correcting the hypotension.
Overall, these observations are able to debunk the concept that thyrotoxic heart failure is always characterized by high cardiac output. In fact, this statement is superficial because it does not apply to cases of thyrotoxic dilated cardiomyopathy, that is, about 6% of all clinical presentations of hyperthyroidism, in which heart failure is characterized by a depressed left ventricular ejection fraction. Furthermore, although rare in the course of a thyrotoxic crisis during Graves' disease, the occurrence of a low cardiac output syndrome with cardiogenic shock has been described. In such cases the use of inotropic drugs is required and, in refractory forms, resort to devices supporting the circulation and lung ventilation, such as ECMO, has also been recommended.
Concluding remarks
Thyrotoxic dilated cardiomyopathy is part of the spectrum of clinical manifestations of hyperthyroidism. It represents the end stage of the remodeling and functional alterations of the left ventricle when hyperthyroidism is not diagnosed in time and/or not adequately treated. It is usually associated with a condition of heart failure with reduced left ventricular ejection fraction (HFREF) and low cardiac output. Thyrotoxic dilated cardiomyopathy seems to have greater room for improvement and recovery compared with the other varieties of dilated cardiomyopathy (idiopathic, post-ischemic, valvular, etc.). Therapy, alongside RAAS-inhibitor drugs, compulsorily includes the administration of beta-blockers and antithyroid drugs (methimazole, carbimazole, propylthiouracil).
Declaration of interest
The authors declare that there is no conflict of interest that could be perceived as prejudicing the impartiality of the research reported.
Patient's consent
Written informed consent has been obtained from the patient for publication of this case report. | 2021-01-07T09:09:12.674Z | 2020-12-24T00:00:00.000 | {
"year": 2020,
"sha1": "ef867ed0090baf3e2981d8cbc18b49f6bdc84694",
"oa_license": "CCBYNCND",
"oa_url": "https://edm.bioscientifica.com/downloadpdf/journals/edm/2020/1/EDM20-0068.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "671d3c281dcf4c40f203a902cfa201a9c7f51344",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
202221585 | pes2o/s2orc | v3-fos-license | Column simulation of Fe, Ti, V heap leaching from titanomagnetite ore
The results of research on the process of metal extraction from titanomagnetite ore are presented. The experiments were conducted in PVC columns. The most efficient metal recovery is achieved at a hydrofluoric acid concentration of 4 mol/L and an ammonium fluoride concentration of 0.42 mol/L; this represents approximately 10% for titanium, iron and vanadium. Selectivity for titanium is observed during percolation of a solution with a hydrofluoric acid concentration of 1 mol/L and an ammonium fluoride concentration of 2.5 mol/L.
Introduction
Deposits of titanomagnetite ores are considered one of the promising industrial sources of iron ore, vanadium, titanium and other valuable elements [1]. For example, the Chineisk complex deposit of vanadium-containing titanomagnetites (Northern Transbaikal area, Russia) is one of the largest deposits in the world in terms of ore reserves (approximately 30 billion tons) [2].
Hydrometallurgical technologies for processing titanomagnetite ores are divided into acid and alkaline methods. Acid processing methods are based on the dissolution of iron (II, III), vanadium and, partially, titanium (IV) and their transfer to the liquid phase, with subsequent separation operations (precipitation, extraction). In the hydrochloric acid decomposition of the ore, iron, vanadium and manganese pass into solution, while titanium and silicon remain in the precipitate. The degree of ore decomposition depends essentially on the concentration and consumption of reagents, the temperature conditions and the duration of the process. However, the known hydrometallurgical methods for processing titanomagnetite ores have disadvantages such as multiple processing stages and high energy costs. Therefore, the development of a cost-effective titanomagnetite ore processing technology is an important and urgent scientific and technical problem [1,[3][4][5]].
Heap leaching is well established as a relatively low-cost and low-energy method of extracting metals (Au, Cu, U) from low-grade ores, but it is practically unused for the processing of titanomagnetite ores [6].
Of particular interest are hydrometallurgical technologies for ore processing based on the action of solutions containing ammonium and fluoride ions on the mineral material. The basis of this ore processing is that the oxides of transition and non-transition elements, in contact with ammonium fluoride or with solutions of ammonium fluoride and hydrofluoric acid, form fluorometallates or oxofluorometallates favorable for further processing [7,8].
Leaching tests
The process of Ti, V and Fe leaching from the ore was simulated in PVC columns with a height of 1 m and a diameter of 100 mm. Four parallel experiments were conducted with different aqueous leach solutions of ammonium fluoride and hydrofluoric acid. A 15 kg charge of ore was loaded into each column. The volume of leach solution was 15 liters per column and the irrigation rate was 250 mL/h. The percolation process was carried out in circulation mode: the leach solutions were recycled four times, and the fifth cycle was carried out with freshly prepared solutions. Samples of the solutions were taken after each cycle to analyze the Ti, Fe and V content. The column experimental conditions are presented in table 1.
Use of analytical techniques
The titanium, vanadium and iron concentrations in solution, and the elemental composition of the ore, were determined by inductively coupled plasma mass spectrometry on an ICP-MS ELAN-9000 DRC-e spectrometer (Perkin Elmer, USA).
Characterization of the ore
According to the results of elemental analysis, the sample of titanomagnetite ore contains: Fe -55.00 wt %, Ti -6.84 wt %, V -0.48 wt %. Table 2 Thus, the titanomagnetite ore of the Chineisk deposit has a high titanium and vanadium content (V2O5 0.8-1.0 wt.%, TiO2 11-12 wt.%) and can be considered as a huge resource for obtaining vanadium iron and titanium dioxide. However, the pyrometallurgical processing of such ore will be difficult due to the high content of TiO2 (>4%). The relatively high content of some non-ferrous and rare metals makes it possible to extract them. As follows from the table 3 and diagram ( figure 1 (a)), the highest degree of extraction of metals from the ore is achieved in column No. 1, where the concentration of HF is 4 mol/L and NH4F is 0.5 mol/L. Thus, after 4 cycles of percolation, extraction to solution is 6.52 % Ti, 7.29 % V and 7.63 % Fe. After the fifth percolation cycle with a freshly prepared solution, these values are 10.90 %, 11.29 % and 9.99 % respectively. Therefore, ore is most efficiently processed using repeated percolation of solution No. 1 In column No. 2, when using a solution with a concentration of HF -1 mol/L and NH4F -2.5 mol/L, a low degree of metals extraction is observed, however, the content of titanium predominates in the enriched process solution ( figure 1 (b)). The degree of extraction in the solution is: Ti -1.26 %, V -0.17 % and Fe -0.12 %. Thus, with repeated percolation of solution No. 2, the method allows to achieve the selective extraction of titanium and the enrichment of titanomagnetite ore with iron and vanadium.
Conclusion
This study confirms the potential of heap leaching of titanomagnetite ore for the extraction of Ti, V and Fe. The titanomagnetite ore of the Chineisk deposit contains Fe 55.00 wt %, Ti 6.84 wt % and V 0.48 wt % and can be considered a large resource for producing vanadium, iron and titanium dioxide. The experimental results show that the most effective extraction of the ore components is achieved at a hydrofluoric acid concentration of 4 mol/L and an ammonium fluoride concentration of 0.5 mol/L, reaching approximately 10% for titanium, iron and vanadium. Selective extraction of titanium is observed during percolation of a solution containing 1 mol/L hydrofluoric acid and 2.5 mol/L ammonium fluoride. | 2019-09-11T02:02:53.032Z | 2019-08-23T00:00:00.000 | {
"year": 2019,
"sha1": "f372a26aefa0ba7d0297682f394d338aed3b094e",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/597/1/012008",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "438a10c269098255c1b85ab0b7ada6a26c76b7d2",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
133708468 | pes2o/s2orc | v3-fos-license | Role of Minerals Supplementation on Growth and Survival of Litopenaeus vannamei in Low Salinity Water
The culture of shrimp and other fish and crustaceans in low salinity water is a trend that continues to grow throughout the world. In 2011, aquaculture accounted for 52.5% of the world's fish food supply (FAO 2011). Most fish, crustacean and mollusc aquaculture production (61%) occurs in inland waters; in the same year, brackish water production accounted for 8%. In most locations throughout the world the primary candidate for shrimp culture in low salinity water is the Pacific white shrimp, Litopenaeus vannamei, which is native to the Pacific coast from northern Peru to Mexico. In 2011, worldwide L. vannamei production was close to 2.5 million tonnes, roughly 71% of
total shrimp and prawn production worldwide (FAO 2011). All indications are that the production of L. vannamei will continue to expand, particularly in countries such as China, Vietnam and Thailand. The Pacific white shrimp is a euryhaline species that can tolerate a wide range of salinities, 0.5-45 g L-1 (Menz and Blake 1980; Bray et al., 1994), and there are even some indications that it is capable of growing in waters of less than 0.5 g L-1 (Araneda et al., 2008; Cuvin-Aralar et al., 2009). The remarkable ability of L. vannamei to grow in less than ideal environments has made it the species of choice for culture in low salinity water (Dall et al., 1990; Rothlisberg, 1998; Alday-Sanz, 2010).
During the last few years, white spot disease (WSD) has spread worldwide and caused large-scale mortalities and severe damage to shrimp culture, particularly in Asia, leading to massive economic losses. Continuous outbreaks of WSSV in Penaeus monodon culture shattered shrimp farming in India, and farmers began looking seriously for alternative species. At this point the Coastal Aquaculture Authority of India (CAA) introduced a new species, Litopenaeus vannamei, into India. To overcome these disease problems, one proposed solution is to culture whiteleg shrimp in water with salinities lower than sea water. Several researchers (Van Wyk et al., 1999; McGraw et al., 2002; Sowers and Tomasso 2006a) have studied the growth of L. vannamei at different salinities, and attempts have likewise been made to study its growth at zero salinity.
At the same time, the CAA has been attentive to biosecurity and to the approval process for L. vannamei culture. The shrimp has been introduced and farmed in Asia since the mid-1990s, with production in mainland China being particularly significant; beginning in 1996, L. vannamei was introduced into Asia on a commercial scale. Total production of L. vannamei in Asia was approximately 316,000 mt in 2002.
It is now evident that L. vannamei is farmed and established in several countries in East, Southeast and South Asia and is playing a significant role in shrimp aquaculture production. Very limited research has been done on the culture and growth performance of L. vannamei at different stocking densities in brackish water ponds in India. Owing to its ability to grow and survive in low salinity environments, the Pacific white shrimp (Litopenaeus vannamei, Boone) has become the candidate of choice for low salinity culture.
Inland production of L. vannamei in low-salinity water is a growing industry in several regions of the world. Depending on their source, inland waters available for shrimp culture usually differ in salinity and ion composition (Boyd and Thunjai, 2003). Inland shrimp culture has been practiced for several years with tiger shrimp in Thailand (Flaherty and Vandergeest, 1998), and white shrimp have been cultivated inland in several regions of the United States (Davis et al., 2002). Factors such as the absence of white spot syndrome virus (WSSV) in water sources (Sanchez-Barajas et al., 2009), adequate environmental temperatures year-round, low equipment corrosion and proximity to large markets have permitted the establishment of several farms totalling 350 ha, which in 2007 produced over 1500 metric tons (Industria Acuícola, 2008), and have encouraged the expansion of culture in inland low salinity waters.
With the exception of osmoregulation, the maintenance of osmotic balance between body fluids and the surrounding water, the biochemical functions of minerals in aquatic species appear to be similar to those in terrestrial animals (Lovell, 1989). Freshwater species lose ions to the hypotonic environment and therefore suffer from hydration, whereas the reverse is true for marine species. Unlike terrestrial animals, which are primarily limited to dietary sources of minerals, aquatic animals may be able to utilize, to some extent, minerals dissolved in the water to meet physiological requirements. Calcium, copper, iron, magnesium, potassium, sodium, selenium and zinc are generally derived from the water to satisfy part of the physiological requirements of fish (National Research Council, 1993).
Since aquatic animals can obtain minerals from both the ambient water and the feed, dietary supplements of selected minerals could facilitate better survival and growth of shrimp held in low salinity conditions. Comparing the mineral profile of most low salinity well waters with that of low salinity water of oceanic origin shows that potassium and magnesium levels are much lower in low salinity well water. Marine species reared in seawater do not require dietary sources of magnesium and potassium, whereas freshwater species reared in freshwater do. Consequently, these minerals may be low in marine shrimp feeds fed to shrimp reared in low salinity water.
Research into, and recommendations on, the influence of minerals on growth and survival in low salinity water is therefore warranted. The ionic composition of saline water appears to be more important than salinity itself with regard to its effect on shrimp survival and growth, probably because most of these waters contain adequate levels of sodium and chloride to meet the shrimp's physiological requirement, while other ions are present at insufficient levels in the water, or possibly in the diet, to meet physiological requirements. Inland shrimp farmers quite often complain of a slow die-off of shrimp and report that shrimp are easily stressed by handling, temperature and low dissolved oxygen levels. The farmers often observe lethargic shrimp along the sides of nursery tanks and ponds, or stressed shrimp even after gentle handling. Stress is often characterized by whitening of the tail, cramping and possibly death. A probable reason for this response is ionic imbalance and depleted nutrient reserves caused by this unique environment.
The objectives of the present study were: (i) to evaluate the effect of supplementing the identified minerals (Na, K and Mg) through aqueous and dietary sources on the growth and survival of L. vannamei in low salinity water; and (ii) to determine which of the two supplementation routes, aqueous or dietary, gives better growth and survival of L. vannamei in low salinity water.
Site of the experiment
The experiment was conducted at Krishi Vigyan Kendra, Kampasagar, Nalgonda, under Professor Jayashankar Telangana State Agricultural University, for a period of 7 weeks.
Experimental animals and their acclimatization
Litopenaeus vannamei post larvae (1,000 individuals) were obtained from CP Hatchery, Nellore, which is authorized by the Coastal Aquaculture Authority (CAA), Chennai, to produce seed. Post larvae (PL10) were transported by road in plastic bags containing 15 ppt saline water and transferred to water of the same salinity in the wet lab. Acclimatization was carried out over 8 days, during which the salinity was lowered from 15 ppt to 3 ppt bore well water at an average rate of 4 ppt day-1 (Araneda et al., 2008). During this period the seed were fed the control diet. For transport, shrimp seed were packed in double plastic bags filled with oxygen and water in the ratio 3:1, at a density of 300 shrimp per bag. The number of shrimp seed to be packed in each oxygen-inflated polythene bag was calculated using the following formula (Jameson et al., 1995): N = (DO - 2) x V / (C x H), where DO is the dissolved oxygen content of the water (mg/l), V is the volume of water used for transport (L), C is the rate of oxygen consumption of the shrimp (ml/kg of shrimp) and H is the duration of transport (hours).
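A minimal sketch of this packing calculation is shown below. The formula and variable meanings follow the text above; the numeric inputs (oxygen level, bag volume, consumption rate, transport time) are illustrative assumptions, and the consumption rate is treated as a per-hour rate so that the arithmetic works out.

```python
# Hypothetical packing calculation after Jameson et al. (1995): N = (DO - 2) * V / (C * H)
# All input values below are illustrative assumptions, not data from the study.

def packing_capacity(do_mg_per_l, volume_l, consumption_per_kg_h, transport_h):
    """Packing capacity N of one transport bag."""
    usable_oxygen = (do_mg_per_l - 2.0) * volume_l   # oxygen available above the 2 mg/l floor
    demand_per_unit = consumption_per_kg_h * transport_h
    return usable_oxygen / demand_per_unit

# Example: 6 mg/l DO, 5 l of water, assumed consumption of 0.3 per kg per hour, 10 h trip
print(round(packing_capacity(6.0, 5.0, 0.3, 10.0), 1))
```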
Experimental design
The aquarium tanks used for the experiments measured 60 x 30 x 30 cm (Plate 1). Twenty-one aquaria were stacked on iron racks in a secure place with no direct sunlight, and all sides were covered with black paper to prevent algal growth. Water in the aquaria was aerated using air stones connected to an air compressor, and filters were used to filter the aquarium water. Underground water was taken into a tank, aerated for 48 hours and then used to fill the aquaria; salinity was checked before the water was added. The water was allowed to filter for 24 hours before the shrimp were introduced into the aquaria.
Ten shrimp with initial average weights of 0.15-0.18 g were introduced into each aquarium, and triplicates were maintained for each treatment (dietary supplementation of Na-10g, Na-20g, K-5g, K-10g, Mg-150mg and Mg-300mg, and aqueous supplementation of K-20mg, K-30mg, Mg-40mg and Mg-80mg), in addition to the control (Plate 2). A regular water exchange of 25% was carried out every day, and leftover feed, excreta and other debris were siphoned from the bottom of each tank without disturbing the shrimp.
Experimental feed preparation and Feeding
A formulated feed with 35% crude protein was used for feeding. Fishmeal, soybean meal, groundnut oil cake, maize and deoiled rice bran were the ingredients of the control feed, and the experimental diets were prepared with the same ingredients. In addition, the experimental diets contained the following mineral supplements, each prepared separately: 5 g or 10 g potassium (K+), 10 g or 20 g sodium, and 150 mg or 300 mg magnesium. A 1% vitamin mixture was added to the experimental diets. All ingredients (soybean meal, deoiled rice bran, maize, groundnut oil cake and vitamins) were obtained from local markets, and the ingredients and all experimental diets were analyzed for proximate composition (AOAC, 1995).
Each ingredient was procured in the required quantity, ground into powder and sieved. The ingredients were then mixed in the required proportions, water was added at the rate of 30 ml per 100 g of feed, and a dough was prepared. Maida (1%) was used as a binding agent. The dough was cooked for 20 minutes in a pressure cooker and then cooled, after which the 1% vitamin mixture was added. The homogeneous dough was pressed through a hand pelletizer (La Monferrina s.r.l., Italy) with a 1 mm sieve. The feed was dried in the shade and then in a hot air oven at 80-90 °C to reduce the moisture content to 10%, and stored in dry, airtight bottles in a dark, cool place.
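As a small worked example of the drying step described above, the sketch below estimates how much mass a batch of moist pellets must lose to reach the 10% target moisture content. The batch size and initial moisture level are illustrative assumptions, not values reported in the study.

```python
# Hypothetical drying calculation for pelleted feed.
# Assumed inputs: 1.3 kg of moist pellets at 35% moisture, dried down to 10% moisture.

def dried_mass(wet_mass_g, initial_moisture, target_moisture):
    """Mass remaining after drying, assuming only water is lost."""
    dry_matter = wet_mass_g * (1.0 - initial_moisture)
    return dry_matter / (1.0 - target_moisture)

wet = 1300.0                      # g of moist pellets (assumed)
final = dried_mass(wet, 0.35, 0.10)
print(f"Final mass: {final:.0f} g, water removed: {wet - final:.0f} g")
```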
Proximate composition
Proximate analysis of the feed was carried out following AOAC (1995).
Growth of L. vannamei fed with dietary minerals supplementation
Shrimp weight (g) and weekly weight increments were recorded for the different treatments. During the first week (day 7), the weight increment varied between 0.41±0.04 g for the control and 0.50±0.10 g for Na-10g, while the highest and lowest average weights were recorded in K-10g (0.68±0.02 g) and the control (0.57±0.12 g). On day 14, the highest increment of 0.54±0.12 g and the lowest of 0.41±0.11 g were recorded for K-10g and the control, respectively, and the highest and lowest average weights (1.14±0.07 g and 0.98±0.04 g) for Na-20g and the control. A similar trend continued on day 21: the highest and lowest weight increments were 0.51±0.11 g and 0.40±0.10 g for Na-20g and K-10g, while the highest and lowest average weights (1.65±0.09 g and 1.43±0.11 g) were recorded for Na-20g and the control; Na-10g and K-10g stood in second and third positions with average weights of 1.63±0.12 g and 1.62±0.05 g. On day 28, the highest and lowest weight increments were 0.57±0.07 g and 0.39±0.05 g for K-10g and the control, and the highest and lowest average weights (2.19±0.04 g and 1.82±0.07 g) were again for K-10g and the control; Na-20g and Mg-300mg took second and third positions with 2.18±0.07 g and 2.16±0.07 g. The trend continued on day 35, with the highest and lowest increments of 0.54±0.04 g and 0.46±0.07 g for K-10g and Mg-150mg, and the highest and lowest average weights (2.73±0.12 g and 2.30±0.05 g) for K-10g and the control; Mg-300mg and Na-10g stood second and third with 2.65±0.05 g and 2.63±0.14 g. On day 42, the highest increment of 0.59±0.09 g and the lowest of 0.36±0.14 g were recorded for K-5g and the control, with the highest and lowest average weights of 3.21±0.10 g (K-5g) and 2.66±0.03 g (control). On day 49, at the end of the experiment, the highest increment of 0.72±0.10 g and the lowest of 0.42±0.11 g were recorded for K-10g and the control, with final average weights of 3.92±0.06 g (K-10g, highest) and 3.08±0.07 g (control, lowest). Overall, K-5g recorded a total weight increment of 3.87±0.07 g over the 49-day experimental period, followed by Na-20g (3.71±0.08 g), Na-10g (3.70±0.04 g) and Mg-300mg (3.69±0.08 g) in second, third and fourth positions, respectively.
The growth data were subjected to analysis of variance (ANOVA) at the 5% level of significance. The F-value was significant among treatments, so pairwise comparisons between treatments were made using a two-way randomized block design (RBD) classification. Treatment K-10g was significantly superior to all other treatments, with Na-20g and K-5g in second and third positions, respectively. There was also a significant difference between culture periods.
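The text does not state the software used for the ANOVA, so the sketch below shows one possible way to run an analogous two-way analysis (treatment and sampling week as factors, as in a randomized block design) in Python with statsmodels. The data frame columns and most of the values are hypothetical placeholders.

```python
# Hypothetical two-way ANOVA (treatment x week), analogous to the RBD analysis described above.
# Column names and data are placeholders, not the study's raw data.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Long-format data: one row per tank per weekly sampling
df = pd.DataFrame({
    "treatment": ["control", "K-10g", "Na-20g"] * 4,
    "week":      [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "weight":    [0.57, 0.68, 0.66, 0.98, 1.10, 1.14, 1.43, 1.62, 1.65, 1.82, 2.19, 2.18],
})

model = smf.ols("weight ~ C(treatment) + C(week)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # F-tests for treatment and week effects
```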
Growth of L. vannamei supplied with aqueous minerals
Shrimp weight (g) and weekly weight increments were also recorded for the aqueous supplementation treatments. During the first week (day 7), the weight increment varied between 0.41±0.04 g for the control and 0.48±0.07 g for K-20mg, while the highest and lowest average weights were recorded in Mg-80mg (0.64±0.05 g) and the control (0.57±0.12 g). On day 14, the highest increment of 0.49±0.10 g and the lowest of 0.41±0.11 g were recorded for Mg-80mg and the control, respectively, and the highest and lowest average weights (1.13±0.14 g and 0.98±0.04 g) for Mg-80mg and the control. A similar trend continued on day 21: the highest and lowest weight increments were 0.47±0.09 g and 0.37±0.14 g for Mg-80mg and K-30mg, while the highest and lowest average weights (1.60±0.02 g and 1.43±0.04 g) were recorded for Mg-80mg and K-30mg; K-20mg and Mg-40mg stood in second and third positions with 1.56±0.04 g and 1.45±0.02 g. On day 28, the highest and lowest weight increments were 0.45±0.12 g and 0.39±0.02 g for Mg-80mg and K-30mg, and the highest and lowest average weights (2.09±0.07 g and 1.82±0.01 g) were recorded for K-20mg and K-30mg; Mg-80mg and Mg-40mg took second and third positions with 2.05±0.11 g and 1.86±0.07 g. The trend continued on day 35, with the highest and lowest increments of 0.53±0.07 g and 0.44±0.05 g for K-30mg and Mg-40mg, and the highest and lowest average weights (2.55±0.11 g and 2.30±0.05 g) for K-20mg and the control; Mg-80mg and K-30mg stood second and third with 2.51±0.12 g and 2.35±0.05 g. On day 42, the highest increment of 0.60±0.05 g and the lowest of 0.36±0.14 g were recorded for Mg-80mg and the control, with the highest and lowest average weights of 3.11±0.04 g (Mg-80mg) and 2.66±0.03 g (control). On day 49, at the end of the experiment, the highest increment of 0.56±0.12 g and the lowest of 0.42±0.11 g were recorded for K-20mg and the control, with final average weights of 3.65±0.07 g (Mg-80mg, highest) and 3.08±0.07 g (control, lowest). Overall, K-20mg recorded a total weight increment of 3.64±0.05 g over the 49-day experimental period, followed by Mg-40mg (3.30±0.05 g) and K-30mg (3.28±0.07 g) in second and third positions, respectively.
The growth data were subjected to analysis of variance (ANOVA) at the 5% level of significance. The F-value was significant among treatments, so pairwise comparisons between treatments were made using a two-way randomized block design (RBD) classification. Treatment Mg-80mg was significantly superior to all other treatments, with K-20mg and Mg-40mg in second and third positions, respectively. There was also a significant difference between culture periods.
Survival of L. vannamei fed with dietary minerals supplementation
Survival percentages of L. vannamei in the various experimental treatments are presented. Throughout the experiment, survival was lowest for the control, followed by the Mg-150mg, Mg-300mg, K-5g, Na-10g, Na-20g and K-10g treatments. By the final sampling (day 49), survival ranged from 50.0% (lowest) to 80.0% (highest).
The survival data were subjected to analysis of variance (ANOVA). The F-value was significant among treatments, so pairwise comparisons between treatments were made using a two-way RBD classification. Treatment K-10g showed the highest survival rate and was significantly different from all other treatments; the subsequent positions were occupied by Na-20g, K-5g, Mg-300mg, Na-10g and Mg-150mg, followed by the control. There was also a significant difference between experimental periods.
Survival of L. vannamei supplied with aqueous minerals
Survival percentages of L. vannamei in the various aqueous supplementation treatments are presented. Throughout the experiment, survival was lowest for the control, followed by the K-20mg, Mg-40mg, Mg-80mg and K-30mg treatments. By the final sampling (day 49), survival ranged from 50.0% (lowest) to 70.0% (highest).
The survival data were subjected to analysis of variance (ANOVA). The F-value was significant among treatments, so pairwise comparisons between treatments were made using a two-way RBD classification. Treatment K-30mg showed the highest survival rate and was significantly different from all other treatments; the subsequent positions were occupied by Mg-80mg, Mg-40mg and K-20mg, followed by the control. There was also a significant difference between experimental periods.
Specific growth Rates of L. vannamei fed with dietary minerals Supplementation
Specific growth rates of L. vannamei fed the different dietary mineral supplementation diets were calculated for all treatments at the end of the experimental period (49 days).
Specific growth rates of L. vannamei supplied with aqueous minerals supplementation
Specific growth rates of L. vannamei supplied with aqueous mineral supplements were likewise calculated for all treatments at the end of the experimental period (49 days).
The control group had the lowest specific growth rate, 6.03%, and the highest value, 6.50%, was recorded for K-20mg. Mg-80mg (6.38%) and Mg-40mg (6.17%) stood in second and third positions, followed by K-30mg (6.16%).
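The paper reports specific growth rates but does not state the formula used; the sketch below assumes the conventional definition, SGR = 100 x (ln Wf - ln Wi) / t, where Wi and Wf are initial and final weights and t is the duration in days. The weights used in the example are illustrative.

```python
# Specific growth rate, assuming the conventional formula (not stated in the paper):
#   SGR (% per day) = 100 * (ln(final weight) - ln(initial weight)) / days
import math

def sgr(initial_g, final_g, days):
    return 100.0 * (math.log(final_g) - math.log(initial_g)) / days

# Example: an assumed initial weight of 0.17 g growing to 3.65 g over 49 days
print(f"SGR = {sgr(0.17, 3.65, 49):.2f} % per day")
```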
Feed conversion ratio of L. vannamei fed with dietary minerals supplementation
The feed conversion ratios (FCR) of the different L. vannamei groups were calculated. The FCR observed during the experiment ranged from 0.20 (Mg-150mg) to 3.68 (control).
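The feed conversion ratio is not defined explicitly in the text; the sketch below assumes the usual definition, FCR = feed offered / wet weight gain. The feed and weight figures are made-up inputs for illustration only.

```python
# Feed conversion ratio, assuming the usual definition (not stated in the paper):
#   FCR = dry feed offered (g) / wet weight gained (g)

def fcr(feed_offered_g, weight_gain_g):
    return feed_offered_g / weight_gain_g

# Example: an assumed 95 g of feed offered per tank for a 37 g total biomass gain
print(f"FCR = {fcr(95.0, 37.0):.2f}")
```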
During the first sampling (day 7), the FCR ranged between 0.20 and 0.30; the highest values during this period were recorded for K-5g and Na-20g and the lowest for Mg-150mg.
Sampling on day 14 showed the highest value of 1.00 for Na-20g and the lowest of 0.82 for K-5g. On day 21, the highest value of 1.92 was observed for K-10g, while the lowest of 1.28 was recorded for Mg-150mg.
The sampling on day 28 recorded the highest FCR of 2.29 for the control and the lowest of 1.73 for Na-20g.
On day 35, the highest value of 2.41 was observed for the control and the lowest of 2.27 for K-10g. Sampling on day 42 recorded the highest value of 3.50 for the control and the lowest of 2.47 for K-5g. The last sampling, on day 49, recorded the highest FCR of 3.68 for the control and the lowest of 2.54 for K-5g.
The FCR data were subjected to analysis of variance (ANOVA). The F-value was significant among treatments, so pairwise comparisons between treatments were made using a two-way RBD classification.
The control showed a significantly higher FCR than the other treatments, with K-10g and Na-10g in second and third positions. There was also a significant difference between experimental periods.
Feed conversion ratio of L. vannamei with aqueous minerals Supplementation
The feed conversion ratios of the L. vannamei groups in the aqueous supplementation experiments were calculated.
The FCR observed during the experiment ranged from 0.21 (K-20mg) to 3.68 (control).
During the first sampling (day 7), the FCR ranged between 0.21 and 0.27; the highest value during this period was recorded for the control and the lowest for K-20mg. Sampling on day 14 showed the highest value of 0.98 for Mg-40mg and the lowest of 0.90 for K-20mg. On day 21, the highest value of 1.74 was observed for K-30mg, while the lowest of 1.51 was recorded for Mg-80mg. The sampling on day 28 recorded the highest FCR of 2.29 for the control and the lowest of 1.85 for K-20mg. On day 35, the highest value of 2.91 was observed for K-30mg and the lowest of 2.41 for the control. Sampling on day 42 recorded the highest value of 3.50 for the control and the lowest of 2.50 for Mg-80mg. The sampling on day 49 recorded the highest FCR of 3.68 for the control and the lowest of 2.88 for K-20mg.
The FCR data were subjected to analysis of variance (ANOVA) and are presented in Table 1; differences among treatments were found to be non-significant (Plates 3 and 4). As the production of shrimp in inland low salinity waters continues to expand, so does the need for cost-effective methods of increasing the availability of essential ions to the cultured organisms in order to ensure proper growth and survival. Traditional practices, such as the application of agricultural fertilizers (K-Mag and muriate of potash) or of commercial mineral mixtures directly to the water without knowing the demand of the shrimp, have proven effective at improving growth and survival (McNevin et al., 2004). However, the use of these minerals should be optimized based on the demand of the aquatic organism rather than simply dumping them into the pond; this may allow both a reduction in the level of mineral supplementation and a lower risk of mortality. The experiments in the present study were conducted at a salinity of 3 ppt, which is comparable with the salinity used by commercial shrimp farms where bore wells are the basic source of water. Maintenance of sodium, potassium and magnesium is necessary for proper physiological functioning, osmoregulation and body building, and these ions also act as activators of many enzymes involved in carbohydrate metabolism and protein synthesis (Davis et al., 2005).
Growth of L. vannamei in aqueous and dietary minerals supplementation
Dietary supplementation of NaCl has the potential to provide benefits for euryhaline species. In the present study, growth was enhanced (3.71 g) with the increase of sodium concentration (Na+ 20 g kg-1) in the diet. In two separate studies with juvenile red drum (Sciaenops ocellatus) reared in freshwater, growth and feed efficiency were improved when fish were fed a diet supplemented with sodium (Holsapple, 1990; Gatlin et al., 1992). Similar feed efficiency was observed at the 10 g and 20 g sodium levels in the diet, and up to 10 g kg-1 of supplementary sodium in the experimental diet improved the specific growth rate, as reported for Pacific whiteleg shrimp in the USA (Roy et al., 2007b). Potassium plays an important role in the membrane potential of aquatic animals, and the present trial showed a positive correlation between dietary potassium supplementation and growth. Shrimp offered the diet containing 10 g kg-1 K+ yielded significantly (p<0.05) greater weight gain (3.92 g) and specific growth rate (6.28%) than shrimp fed the control diet. Shiau and Hsieh (2001) reported that increasing dietary K+ increased growth in P. monodon, and Gong et al. (2004) demonstrated the impact of K+ in trials with and without mineral supplementation in L. vannamei. A similar trend of growth enhancement with dietary mineral supplementation was observed in several earlier studies on L. vannamei (Davis et al., 2005; Muylder et al., 2006; Roy et al., 2007b), and in a field trial a chelated K+ source improved growth in L. vannamei (Roy et al., 2007b). However, feed efficiency and feed conversion ratio (FCR) were reduced when K+ supplementation was increased from 5 g kg-1 to 10 g kg-1, possibly because higher K+ supplementation levels increase the osmolality and respiration rates of the animals and place them under stress.
Magnesium is a major constituent of the bones and skeletal parts of animals (Davis et al., 2005). The current study showed a significant (p<0.05) increase in weight gain (3.69 g) of L. vannamei with magnesium supplementation: magnesium at 300 mg kg-1 in the practical diet gave better growth than the control diet. A similar observation was made by Cheng et al. (2005) in L. vannamei, who recommended a dietary Mg2+ level of 2.60-3.46 g kg-1 for optimal growth of L. vannamei reared in low salinity water. However, Roy et al. (2007b) observed no significant improvement in growth with magnesium supplementation of a practical diet. In the present study, higher feed efficiency, lower FCR and higher specific growth rate were observed at the Mg2+ 150 mg kg-1 supplementation level than with the control diet. Growth of shrimp improved when the diet was supplemented with 0.3% magnesium (Kanazawa et al., 1984), and deletion of magnesium from a mineral-supplemented diet resulted in reduced tissue mineralization in P. vannamei (Davis et al., 1992).
Aquatic organisms obtain most of their required minerals from the surrounding water. In the present study, a significant (p<0.05) increase in growth, a lower FCR and a higher specific growth rate were recorded at an aqueous potassium supplementation of 20 mg l-1 compared with the control. Individual weight gain, specific growth rate and percent weight gain have been reported to increase with increasing potassium concentration in the aqueous source used in L. vannamei culture in low salinity waters (Roy et al., 2007a). However, in the present study weight gain, specific growth rate and feed efficiency did not increase when the potassium supplementation was raised from 20 mg l-1 to 30 mg l-1. Pragnell and Fotedar (2005) found that Penaeus latisulcatus reared in low salinity well water with 100% and 80% of the potassium concentration of sea water grew more slowly. The lack of further weight gain at the higher potassium concentration may be because, at higher aqueous potassium concentrations, tissue water content decreases and the concentration of free amino acids in the tissue increases; as this process progresses the animals undergo stress, which might have resulted in weight loss.
Addition of magnesium to the water from 40 mg l-1 to 80 mg l-1 increased growth and specific growth rate. Roy et al. (2007a) also noticed a similar growth increase in low salinity L. vannamei culture with magnesium additions from 10 mg l-1 to 160 mg l-1. Ahmad Ali (1999), in contrast, observed suppressed growth with dietary magnesium addition in Penaeus indicus and suggested that the magnesium requirement might be satisfied through absorption from the water. Feed efficiency, however, did not differ significantly with magnesium addition to the water.
Survival of L. vannamei in aqueous and dietary minerals supplementation
Minerals play a significant role in the survival of Pacific white shrimp in inland low salinity water culture. In the present study, the dietary sodium supplementation trials showed increased survival (80%) when sodium supplementation was raised from 10 g kg-1 to 20 g kg-1 of diet. Roy et al. (2007a) observed that L. vannamei survival increased from 81% to 92% with sodium supplementation of 20 g kg-1 of diet relative to the control diet. Pequeux (1995) reported that sodium and chloride ions play a significant role in the osmoregulation of shrimp and are essential for their survival in low salinity waters. In the present study, shrimp offered diets containing 10 g K+ kg-1 and 20 g Na+ kg-1 showed significantly (p<0.05) higher survival than shrimp fed the control diet. Our results are supported by Shiau and Hsieh (2001) in Penaeus monodon, Pragnell and Fotedar (2005) in Penaeus latisulcatus and Roy et al. (2007b) in L. vannamei. Survival of L. vannamei also increased with magnesium supplementation of the diet at 300 mg kg-1 relative to the control. Roy et al. (2007b) noticed a similar increase in survival with magnesium supplementation using coating agents, although they observed contrasting results in a trial without a coating agent. Ahamad Ali (1999) reported no significant effect of dietary magnesium supplementation on the survival of P. indicus. A number of studies have documented the correlation between potassium concentration and shrimp survival (Boyd et al., 2002; Davis et al., 2002; Saoud et al., 2003).
The results of the present study show that aqueous potassium supplementation is necessary for the survival of L. vannamei in low salinity water culture; the shrimp showed higher survival when potassium was added to the water at 30 mg l-1. Roy et al. (2007a) observed a similar increase in survival of L. vannamei with increasing K+ in the water, Zhu et al. (2004) observed that an improper Na/K ratio in low salinity water had a significant impact on the survival of L. vannamei, and Pragnell and Fotedar (2005) reported that potassium deficiency in low salinity water culture reduced the survival of P. latisulcatus.
The results also indicate that the addition of magnesium to the water enhanced the survival of L. vannamei cultured in low salinity water, with survival increasing as more magnesium was added. Our observations are in agreement with previous studies evaluating the impact of magnesium and other mineral supplementation on survival of L. vannamei in low salinity waters (Davis et al., 2005; Roy et al., 2007a). However, Roy et al. (2007a) observed increased survival of L. vannamei in low salinity water only up to 40 mg l-1 of magnesium added to the water; further addition resulted in decreased survival.
In conclusion, dietary supplementation of the identified minerals performed better than aqueous supplementation for enhancing the growth and survival of L. vannamei in low salinity water.
Further research on the role of these minerals in shrimp osmoregulation may help answer some of the questions concerning mineral availability, growth and sudden mortality of post larvae in L. vannamei culture in low salinity water.
Summary
The present experiment was conducted in the wet lab of the Department of Aquaculture, College of Fishery Science, Muthukur, SPSR Nellore district, to study "the effect of aqueous and dietary minerals supplementation on growth and survival of L. vannamei in low salinity water". Post larvae (PL10) of L. vannamei were brought to the wet laboratory from CP Hatchery by road in plastic bags containing 15 ppt saline water and transferred to water of the same salinity. Acclimatization was carried out over 8 days, during which the salinity was lowered from 15 ppt to 3 ppt bore well water at an average rate of 4 ppt day-1 (Araneda et al., 2008). After acclimatization to 3 ppt, the shrimp were transferred to the experimental tanks and the mineral supplementation trials began.
In the aqueous mineral supplementation treatments, potassium was added to the water at 20 mg l-1 and 30 mg l-1, and magnesium at 40 mg l-1 and 80 mg l-1. Dietary mineral supplementation was assessed by adding sodium, potassium and magnesium at different concentrations to the control diet during feed preparation. Shrimp in the control treatment were fed the control diet, and the aqueous and dietary supplementation treatments were compared for growth and survival of L. vannamei in low salinity water.
Sampling was done weekly over the 7-week duration of the experiment. Triplicates were maintained for all treatments and the control, and the results obtained were subjected to statistical analysis.
The results obtained in the present study on growth, survival, feed conversion ratio and specific growth rate of L. vannamei are summarized.
Important water quality parameters such as dissolved oxygen, temperature, pH, total alkalinity and total hardness were analysed.
The water quality parameters recorded were as follows: dissolved oxygen varied between 5.45 and 7.48 ppm, temperature ranged from 29.0 to 30.0 °C, pH ranged between 7.8 and 8.5, and total alkalinity was in the range of 240-280 mg l-1. The water quality parameters were similar for all treatment and control tanks throughout the experimental period.
Weekly sampling for 7 weeks was done to study growth, survival, FCR and SGR.
In the dietary mineral supplementation treatments, the highest growth performance of 3.92 g was recorded for the potassium supplementation treatment (K+ 10 g kg-1 diet) and the lowest for the control diet.
All the dietary mineral supplementation treatments recorded higher growth than the control (3.08 g).
The analysis of variance for growth performance showed significant differences among the treatments.
Shrimp fed the control diet showed the highest FCR, while the potassium supplementation treatment (K+ 5 g kg-1 diet) showed the lowest; all the dietary supplementation treatments showed better feed utilization efficiency than the control.
The analysis of variance for FCR showed significant differences among the dietary mineral supplementation treatments.
In the dietary mineral supplementation treatments, the highest SGR (6.50%) was observed in the magnesium supplementation treatment (Mg2+ 150 mg kg-1 diet) and the lowest in the control.
The highest survival of 80% and the lowest of 50% were recorded for shrimp fed the experimental diets with potassium and sodium supplementation (K+ 10 g kg-1 and Na+ 20 g kg-1 diet) and the control, respectively; all the dietary mineral supplementation treatments showed better survival than the control.
In the aqueous mineral supplementation treatments, the highest growth performance of 3.65 g was recorded for the magnesium addition to the water (Mg2+ 80 mg l-1) and the lowest in the control.
All the aqueous mineral supplementation treatments showed better growth performance than the control.
Analysis of variance for growth performance showed significant differences among the treatments.
Among the aqueous mineral supplementation treatments, the lowest FCR was recorded for the potassium addition to the water (K+ 20 mg l-1) and the highest for the control.
In the aqueous mineral supplementation treatments, the highest SGR (6.50%) was observed with potassium supplementation (K+ 20 mg l-1) and the lowest in the control.
The highest survival of 70% and the lowest of 50% were recorded in the potassium addition treatment (K+ 30 mg l-1) and the control, respectively. All the aqueous mineral supplementation treatments demonstrated better survival than the control.
Analysis of variance for survival showed significant differences among all treatments.
It can be concluded that dietary supplementation of the identified minerals performed better than aqueous supplementation for enhancing the growth and survival of L. vannamei in low salinity water culture.
Potassium inclusion at 10 g kg-1 of diet is identified as the optimal level of inclusion for L. vannamei culture in low salinity water. | 2019-04-27T13:12:38.141Z | 2018-12-10T00:00:00.000 | {
"year": 2018,
"sha1": "846dad048c0cdb849ae9dfb184d263ba0de550e4",
"oa_license": null,
"oa_url": "https://www.ijcmas.com/7-12-2018/K.%20Veeranjaneyulu%20and%20G.%20Krishnaveni.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "6d4083295bd6d650a8333604acb716756e92f7df",
"s2fieldsofstudy": [
"Biology",
"Engineering"
],
"extfieldsofstudy": [
"Biology"
]
} |
258040041 | pes2o/s2orc | v3-fos-license | Evaluating Counseling for Choice in Malawi: A Client-Centered Approach to Contraceptive Counseling
The authors found that clients counseled using the Counseling for Choice approach in Malawi reported better quality of contraceptive counseling and experience of care, as measured by metrics including person-centered care and quality of information exchange.
INTRODUCTION
A human rights-based approach to family planning (FP) programming addresses all levels of the health care system and the surrounding enabling environment to ensure the autonomy, agency, and satisfaction of FP clients. 1 Access to high-quality information and counseling, in addition to affordable, voluntary, and nondiscriminatory contraceptive services and products, is a critical "lever" to pull to achieve quality of care within this system. 1,2 Information exchange and interpersonal relations that occur between FP providers and clients have long been recognized as fundamental aspects of quality of care. 3,4 Beyond the objective to uphold the client's right to receive high-quality services, FP clients' perceptions of quality have also been found to be associated with better contraceptive use dynamics, including increased voluntary method uptake, method satisfaction, and continuation in some settings, although evidence is mixed. 5-7 Similarly, anticipatory side effects counseling (counseling that prepares women for contraceptive-induced bleeding changes and other side effects they may experience when using specific methods) has been shown to increase method satisfaction and decrease discontinuation. 8,9 However, evidence on structured counseling approaches that improve the quality of information sharing and interpersonal relations, as well as women's experiences using contraception, remains weak. 10 As a result, the quality of FP counseling remains poor globally. A recent analysis of Demographic and Health Survey data from 25 low- and middle-income countries found that the average country-level Method Information Index score was 34%, meaning that only one-third of current contraceptive users received counseling on more than 1 method, were told about side effects, and were told what to do if side effects occurred. 11 Despite overwhelming evidence that fear and experience of adverse side effects and health concerns are major drivers of contraceptive nonuse and method-related discontinuation among women who wish to avoid pregnancy, 12,13 counseling approaches widely used by FP providers in low- and middle-income countries lack an adequate focus on anticipatory side effects counseling. 14 Evidence-based approaches that focus on improving care across these domains (approaches tailored to the client's unique needs that improve information sharing and the client-provider relationship and strengthen anticipatory side effects counseling) are urgently needed to support informed method choice aligned with clients' preferences and to reduce negative contraceptive use experiences.
Counseling for Choice (C4C) is a new FP counseling approach developed by Population Services International, publicly available at https://www.psi.org/C4C. 15 C4C, which comprises a provider training curriculum and job aid, replaces traditional tiered-effectiveness counseling with structured counseling based on the method attributes most valued by the individual client. C4C also provides a guided structure for comprehensive anticipatory side effects counseling, with a particular focus on menstrual bleeding changes. We used a quasi-experimental study design to evaluate the impact of the C4C intervention on the quality of counseling received, measured by clients' experiences.
C4C APPROACH FOR FP COUNSELING
Contraceptive counseling has evolved as contraceptive approaches and tools have been iteratively developed and updated to improve quality of care. To counsel patients thoroughly on their choices, many clinicians use the autonomous approach to counseling, which involves providing information on all available, medically appropriate methods, with the patient subsequently deciding on a method with minimal provider input. 16 Another common approach is the tiered-effectiveness method: within an effectiveness framework, clinicians present the most effective options first, highlighting voluntary long-acting reversible contraceptive methods. 17 One approach that bridges the gap between the directive and autonomous approaches is the shared decision-making model, a method that recognizes the expertise of both the provider, who has comprehensive information about methods from a clinical perspective, and the clients, who best understand their own needs and preferences. 18 This and other common counseling approaches, such as Balanced Counseling Strategy Plus (BCS+), employ evidence-based best practices shown to improve quality of care and FP outcomes, such as increased uptake. 19 These tools are widely used in FP programs globally; however, research on the effectiveness of specific approaches and tools to improve person-centered care and impact contraceptive use dynamics is limited. 20 Among available counseling tools, the new C4C approach shares some common components with BCS+, the contraceptive counseling tool developed by Population Council and used across many low- and middle-income countries. 21 BCS+ also prioritizes the demedicalization of provider language during counseling, uses client-centered and shared decision-making approaches, and incorporates specific job aids. 22 From there, BCS+ and C4C diverge. BCS+ integrates tiered-effectiveness counseling into the approach, while C4C recognizes that clients may place a higher value on alternative method benefits, such as use on demand, a low frequency of required provider visits, or immediate return to fertility, and makes it easy for providers to compare contraceptives in relation to these other benefits. Where the algorithm, cards, and medical eligibility criteria information used by BCS+ are separate tools, C4C integrates the full suite of information and tools into a single, all-encompassing job aid. Unlike the BCS+ cards, the C4C job aid includes pages specifically meant to be viewed by lower-literacy clients. Finally, recognizing that experiencing side effects is a leading reason for method discontinuation, C4C builds anticipatory counseling on side effects, including bleeding changes, directly into the counseling dialogue.
C4C Intervention and Tools
Foundational to the C4C approach are 3 contraceptive counseling tenets: support the client to make an informed decision through clear and relevant information provision; provide high-quality, client-centered interpersonal care; and create a dialogue with clients about side effects, including what to expect and how to manage them. The C4C approach has 2 key components: a 3-day training for providers and the Choice Book job aid for providers to use during counseling (Box). The training provides multiple tools and techniques to improve the counseling interaction by creating a dialogue about what matters to the client rather than using the counseling session as a didactic or rote lecture to impart the provider's perspective (and potential bias) and a long list of facts. The Choice Book is a job aid for providers that includes both provider-facing and client-facing tools, including existing reference tools from the World Health Organization and other sources. Among its components are a counseling matrix (a tool illustrating which contraceptive options offer various contraceptive and lifestyle benefits), GATHER guidance (demonstrating how C4C aligns with the GATHER, "Greet, Ask, Help, Explain, and Return," approach 23), and benefit-specific pages (comparing each method option relative to whether it offers a particular benefit). Figure 1 shows an example book page illustrating how methods are compared across different attributes.
Study Design
We conducted a quasi-experimental evaluation with an intervention and concurrent comparison group in 50 public and 40 private facilities in 8 districts in Northern, Central, and Southern Malawi (Dwanga, Lilongwe, Mangochi, Mchinji, Mzuzu, Nkhata Bay, Nsanje, and Salima). Intervention facilities were sampled through stratified random sampling of a full roster of facilities offering FP services and counseling. Facilities were stratified first by district, then by public or private sector, and finally by client load. Our goal was, within the districts, to balance the number of public and private facilities with high, medium, and low client flows in the intervention and control groups. Of 30 public and 30 private facilities sampled, 25 and 20, respectively, consented to participate in the C4C intervention. We then selected matched comparison facilities based on FP client volumes and sector (private or public). Included facilities were primarily FP and reproductive health clinics, including franchises, and hospitals with FP and reproductive health services or wards. During the study period, providers in the comparison group continued using tools with which they were well versed and familiar, such as the flipchart approved by the Ministry of Health; the comparison group was not instructed to use a specific FP counseling tool or approach. Selected providers in intervention facilities received a 3-day training on the C4C approach using the Choice Book that would guide the counseling experience. This training included role-play and practice to achieve competency in the counseling approach, which was assessed via quizzes and observation by the lead trainer. Half of the providers who participated in the training were nurse midwife technicians, about one-quarter were medical assistants, and the remaining one-quarter were either clinical officers or nurse midwife assistants. A post hoc review of trainings that all providers in both groups had received in the past 3 years revealed little difference between comparison (standard-of-care) and intervention providers in terms of training received before the C4C intervention.
Study Population
Between October and December 2018, we enrolled clients seeking FP services at intervention or comparison facilities. All women of reproductive age (aged 18-49 years) seeking FP services, including those initiating contraception, switching methods, or continuing method use, were eligible to participate in in-person study procedures on the date of enrollment. No compensation was provided for participation in the study.
Data Collection
Data collection began 3 months after the training to allow providers time to become accustomed to the C4C approach. Participants completed 2 surveys on the date of enrollment: a pre-counseling survey before seeing a provider and a second post-counseling survey immediately after seeing a provider. Both the pre-and post-counseling surveys were administered in person in a private area of the clinic. The pre-counseling survey captured demographic information, contraceptive history, and acceptability of specific contraceptive side effects. The post-counseling survey collected information on the method chosen and reasons for selection (including reasons for selecting no method), content of information received during the counseling session, and satisfaction with the counseling experience.
Participants were asked in the post-counseling survey to identify their provider; in the final analysis sample, participants who visited an intervention facility but who received counseling from a provider not trained in C4C were excluded.
Ascertainment of Dependent Variables
We ascertained perceived quality of care using the validated 4-item Person-Centered Contraceptive Counseling (PCCC) scale, 25 which includes individual items on clients' perceptions of the respectfulness of care, whether they were allowed to voice their contraceptive method preferences, whether they felt their preferences were taken seriously, and whether they felt that they received adequate information to make a decision about a contraceptive method. Individual items are measured on a 5-point Likert scale (poor, fair, good, very good, or excellent). We report the items that comprise the PCCC scale individually and as a summative binary variable, equal to 1 if the highest rating ("excellent") was given for all 4 items and 0 otherwise, according to published scale scoring guidance. 26 Additional nonvalidated measures were developed to measure key C4C quality domains. For example, within the domain of information exchange and interpersonal relations, confidence using the chosen method was measured on a 5-point Likert scale (from "not at all" to "very confident"); in addition, binary (yes/no) variables were captured on whether the provider addressed all concerns about using contraception, whether the provider asked about prior contraceptive experience, whether the participant trusted the provider to keep the consultation private, and whether the provider helped make a plan for how to remember to use the method (among participants who selected short-term methods). In the side effects expectations and management domain, we captured 3 binary (yes/no) variables: whether the provider provided information on potential side effects, whether the provider helped plan to manage potential side effects, and whether the participant anticipated discontinuing her method immediately if she experienced side effects. A table in the Supplement provides further detail on how these variables are linked to our 3 quality of care domains of interest.
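As an illustration of how the summative binary PCCC variable described above could be constructed from survey responses, the short sketch below scores it in pandas. The column names and the coding of responses are hypothetical; the scoring rule (1 only if all 4 items are rated "excellent") follows the text.

```python
# Hypothetical scoring of the binary PCCC "top score" variable from 4 Likert items.
# Column names and response data are made up for illustration.
import pandas as pd

pccc_items = ["respectful_care", "could_voice_preferences",
              "preferences_taken_seriously", "enough_information"]

df = pd.DataFrame({
    "respectful_care":             ["excellent", "very good", "excellent"],
    "could_voice_preferences":     ["excellent", "excellent", "good"],
    "preferences_taken_seriously": ["excellent", "excellent", "excellent"],
    "enough_information":          ["excellent", "excellent", "excellent"],
})

# 1 only when every one of the 4 items received the highest rating
df["pccc_top_score"] = (df[pccc_items] == "excellent").all(axis=1).astype(int)
print(df["pccc_top_score"].tolist())   # [1, 0, 0]
```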
Statistical Analysis
To estimate the effect of C4C on the quality of care received, we compared participants at intervention versus comparison facilities by fitting multilevel mixed effects models with robust standard errors, with individuals nested within facilities. For Likert scale outcomes, we fit multilevel logistic regression models with random intercepts for health facilities to estimate odds ratios (ORs), interpreted as the odds that women in the intervention group gave the highest rating on the Likert scale relative to the odds for women in the comparison group. For binary outcome variables, we used analogous mixed effects logistic regression models. Adjusted models include covariates for age (specified as a continuous variable), marital status (modeled categorically as currently married, living with a man as if married, or not currently married or living with a male partner), highest level of educational attainment (none, primary, secondary, or higher), number of living children (none, 1-2, 3-4, or 5 or more), contraceptive method type received at consultation (including none, if no method was chosen after counseling), and facility sector (public or private). The analysis was conducted using Stata version 15.1.
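The models above were fit in Stata; as a rough illustration of the same idea in Python, the sketch below fits a facility-clustered logistic regression with statsmodels GEE. This is a population-averaged stand-in for the authors' multilevel random-intercept model, not a reproduction of it, and all variable names and data are hypothetical.

```python
# Hypothetical facility-clustered logistic regression for a binary quality outcome.
# GEE with an exchangeable working correlation approximates clustering by facility.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "excellent_all_4": rng.integers(0, 2, n),   # binary PCCC top-score outcome
    "intervention":    rng.integers(0, 2, n),   # 1 = C4C facility, 0 = comparison
    "age":             rng.integers(18, 50, n),
    "public_sector":   rng.integers(0, 2, n),
    "facility_id":     rng.integers(0, 40, n),  # clustering unit
})

model = smf.gee(
    "excellent_all_4 ~ intervention + age + public_sector",
    groups="facility_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
).fit()

print(np.exp(model.params["intervention"]))   # odds ratio for the intervention effect
```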
Ethical Approval
The study was approved by the Research Ethics Board of Population Services International in Washington, DC, and by the National Committee on Research in the Social Sciences and Humanities in Malawi. The district health management team and the head or owner of each participating facility gave permission for data collection at study sites. All participants in both intervention and comparison sites were briefed on the study objectives and the requirements of the consenting process and gave verbal informed consent before study procedures.
RESULTS
A total of 1,179 women were enrolled for the in-person study components (N=578 in the comparison group and N=601 in the intervention group).
In the full baseline sample, participants were evenly distributed across age groups, with a slightly higher proportion of women aged 18-24 years and a slightly lower proportion of women aged 35 years and older.
Client Satisfaction and Experience of Quality of Care
More women rated their overall counseling experience as poor in the comparison group (32%) compared to the intervention group (8%), while more women in the intervention group rated their experience as excellent (35%) compared to women in the comparison group (8%) (Figure 2). Receipt of care from C4C-trained providers was associated with statistically significant, positive odds of rating the provider as "excellent" (the highest score) on the 4 questions, including giving enough information to make the best decision about a method (aOR=5.14; 95% CI=2.72, 9.71). Participants in the intervention group had 4.6 times the odds of rating their provider as "excellent" on all 4 questions as the comparison group: 140 participants (23.3%) in the intervention group rated their provider as "excellent" on all 4 questions, relative to just 36 (6.2%) in the comparison group.
The person-centered contraceptive counseling measures described in Table 2 are related to aspects of both domains of information exchange and interpersonal relations. In addition to these validated measures, we measured other aspects of counseling related to these domains with the additional variables in Table 3. Participants had 6-fold odds (aOR=6.4; 95% CI=3.08, 13.4) of rating their provider as excellent in addressing all concerns about their contraceptive method relative to those in the comparison group (Table 3). They were also more likely to report that their provider asked about their previous contraceptive experiences than clients in the comparison group, with 448 (74.5%) reporting "yes" versus 219 (37.9%) in the comparison group (OR=6.76; 95% CI=3, 12.92). Clients choosing short-acting methods in the intervention group were more likely to report being helped to make a plan to use their method correctly.
Side Effects Expectations and Management
Women in the intervention group were more likely to report that the provider told them about possible side effects they might experience.
DISCUSSION
The novel C4C approach to FP counseling was specifically designed to address common issues with the quality of contraceptive counseling. The approach aims to support the client to make an informed decision about a method that aligns with their self-identified needs and individual preferences for specific method attributes. Clients counseled by C4C providers were more likely to report better care received, with more than 4 times as many reporting their experience as "excellent" overall. We find that the C4C approach improved clients' experience of care across multiple domains and measures of person-centered care, including information exchange, interpersonal relations, and anticipatory side effects counseling, relative to standard-of-care counseling provided in public and private participating health facilities in Malawi.

[Table notes] Abbreviations: aOR, adjusted odds ratio; CI, confidence interval; ICC, intraclass correlation; OR, odds ratio. a Estimated from multilevel mixed effects models including facility-level random effects. "Gave enough information" and "confidence to use chosen method" were modeled as ordinal variables. The "provider addressed all concerns" outcome was modeled as a binary variable. Odds ratios estimated from multilevel mixed effects model with ordinal or binary outcomes. Unadjusted odds ratios estimated from bivariate models. b Estimated from models that include demographic variables (age, marital status, and education), a categorical variable for number of living children (defined as none, 1-2, 3-4, or 5 or more), and variables for method chosen at provider visit and type of facility (public or private). c Estimated from unadjusted model with random intercepts for health facility but without fixed effects predictors. d Estimated variance for the random effects at the health facility level in the adjusted model; estimate has not been exponentiated. e Number of observations in the adjusted model; complete case analysis. f Significant at P<.01. g Significant at P<.05. h For modeled estimates, we use a binary version of the indicator that combines no and don't know responses in a single category (versus "yes"). i Asked only of women who received injectables, oral contraceptive pills, condoms, or emergency contraceptive pills.
The interpersonal relations quality of care domain in FP is critical to an overall high quality of care experience: a systematic review on the effects of person-centered quality of contraceptive care found that interventions to improve personcenteredness were consistently associated with improved client experience, perceptions of quality, and satisfaction. 27 C4C addresses this domain of quality by anchoring the counseling approach in the core elements of respect, dignity, and empathy and in care that is nondiscriminatory and responsive to unique client needs. Participants in the intervention group of our study consistently rated their providers more positively across indicators of this client-provider relationship, reporting that their providers respected them as a person, let them say what mattered to them, took their preferences seriously, and were trusted to keep their conversation confidential compared to those in the comparison group.
A principal tenet of the C4C approach is to enable informed decision-making through clear and relevant information provision, building on counseling approaches such as the World Health Organization's Decision-Making Tool for Family Planning Clients and Providers and the Balanced Counseling Strategy tool. 28 Participants who received the C4C intervention were more likely to report that they had enough information to select a method that fit their needs and had more confidence in their ability to use their chosen method than participants in the comparison group. Their providers were more likely to ask them about their previous contraceptive use and to address all of their concerns. This exchange of information is critical to ensuring that clients are well informed about contraceptive options that best suit them. It includes having appropriate information to prepare them for side effects they may experience with a chosen method, a factor that is directly correlated with contraceptive use experiences and method satisfaction over time. Clients counseled by C4C providers were more likely to report receiving this anticipatory side effects counseling and having discussed a plan with their provider for how to manage these side effects. Taken together, the findings from this evaluation suggest that the tailored counseling encouraged by the C4C approach, when compared to the standard of care, enables improved information exchange that helps clients make the best contraceptive choice for them. This is consistent with existing literature that describes improved client experiences when counseling includes clear information tailored to one's expressed needs and preferences. 29,30 While overall ratings of counseling received were significantly higher among women in the intervention group, the finding that even women in the intervention group continue to report some dissatisfaction with their counseling experience (14.7% reporting their experience as "fair" or "poor") indicates that more can be done to further improve counseling, even when using the C4C approach. This study adds to the growing evidence base on the impact of the quality of counseling on client experience. Several studies have found positive effects of interventions to improve client- or person-centeredness and quality of contraceptive counseling on contraceptive use dynamics, hypothesizing that improved perceptions of interpersonal connection with a provider during counseling, having enough information to make an informed choice, and feeling confident to understand and manage side effects may be associated with method initiation and improved method use experiences. 27,[31][32][33][34] However, evidence of the impact of counseling on contraceptive use dynamics is mixed. 21,35,36 While we do not look here at the impact of the C4C approach on method use over time, we do observe that women counseled using C4C were less likely to report that they would discontinue their method immediately if they experienced side effects, relative to those counseled using the standard approach. Although the difference was not statistically significant, this finding suggests that the C4C approach may support women to select methods with side effect profiles that are more tolerable for their preferences or to better prepare women for what they may expect in terms of side effects.
Exploring the impact of improved quality in counseling on contraceptive use dynamics and satisfaction with FP methods over time should be a priority for those in the field aiming to develop and use counseling approaches that truly meet client needs.
Strengths and Limitations
A primary strength of this study is its inclusion of a robust comparison group that allows for direct comparison of key areas of the counseling experience between women who were counseled by C4C-trained providers and women who were not, allowing for more direct conclusions to be drawn regarding the effect that the C4C approach may have on women's experiences with a provider.
There are also some limitations. Though unaware of the specific survey questions to clients or which clients would be surveyed, providers in the intervention group were aware that the new C4C approach on which they were trained would be studied, which may have affected adherence levels to the approach. The pre-counseling survey could have acted as an intervention itself or primed respondents to ask their provider about the topics being asked (e.g., about side effects). This may have improved the quality of counseling observed, but the effect would be expected to be nondifferential by treatment group since all participants received the same pre-counseling survey. Lastly, while it was not within the scope of our project to design a separate training for our comparison group, it is possible that improvements in quality of care could have been seen across some of the same indicators studied here regardless of the specific approach used; the act of simply retraining providers in principles of quality counseling could result in better counseling. Further research could explore the comparative impact of the C4C approach against training in other counseling approaches.
CONCLUSION
This study strengthens the evidence base for the utility and effectiveness of client-centered contraceptive counseling. Among FP clients in Malawi, we found that the C4C approach improved the perceived quality of care across multiple domains relative to standard counseling approaches. Counseling that focuses on supporting clients' fully informed choice in method selection, improving client-centeredness of the interaction, and strengthening the client's understanding of the potential side effects of their chosen method is a promising approach to improving contraceptive counseling and use experiences.

Acknowledgments: We are grateful for comments during review of the article. We thank Grace Jaworski for supporting data analysis. We also thank implementing partners with the U.S. Agency for International Development (USAID) Office of Population and Reproductive Health that, via various USAID Communities of Practice, provided invaluable feedback to Counseling for Choice and contributed to our collective efforts to strengthen approaches to contraceptive counseling.
Funding: This article was made possible by the support of the American People through the U.S. Agency for International Development under the Support for International Family Planning and Health Organizations: Sustainable Networks 2 (AID-OAA-A-14-00037) and MOMENTUM Private Health Delivery (7200AA20CA00007).
Disclaimer: The views and opinions expressed in this article are those of the authors and not necessarily the views and opinions of the U.S. Agency for International Development. The funder played no role in the study design; in the collection, analysis, and interpretation of data; in the writing of the report; or the decision to submit the article for publication.
Author contributions: AK conceptualized the study design and analysis, conducted the analysis, and contributed to the writing. CWR supported data analysis and contributed to the writing and interpretation of results. AK, KD, EL, and AA contributed to the writing and interpretation of results. AK and KD also provided technical support and program management during study implementation. IM and PM carried out the study, contributed to the interpretation of results, and provided critical review and feedback. All authors reviewed and approved this article.
Competing interests: None declared.
"year": 2023,
"sha1": "6b8b2aab9151f1e942c492b00d82e24af05fd814",
"oa_license": "CCBY",
"oa_url": "https://www.ghspjournal.org/content/ghsp/early/2023/04/10/GHSP-D-22-00319.full.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c4bc8bbd13a26aeb0ebc8ab8bcdfee21922f9e7f",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Listeriolysin O Regulates the Expression of Optineurin, an Autophagy Adaptor That Inhibits the Growth of Listeria monocytogenes
Autophagy, a well-established defense mechanism, enables the elimination of intracellular pathogens including Listeria monocytogenes. Host cell recognition results in ubiquitination of L. monocytogenes and interaction with autophagy adaptors p62/SQSTM1 and NDP52, which target bacteria to autophagosomes by binding to microtubule-associated protein 1 light chain 3 (LC3). Although studies have indicated that L. monocytogenes induces autophagy, the significance of this process in the infectious cycle and the mechanisms involved remain poorly understood. Here, we examined the role of the autophagy adaptor optineurin (OPTN), the phosphorylation of which by the TANK binding kinase 1 (TBK1) enhances its affinity for LC3 and promotes autophagosomal degradation, during L. monocytogenes infection. In LC3- and OPTN-depleted host cells, intracellular replicating L. monocytogenes increased, an effect not seen with a mutant lacking the pore-forming toxin listeriolysin O (LLO). LLO induced the production of OPTN. In host cells expressing an inactive TBK1, bacterial replication was also inhibited. Our studies have uncovered an OPTN-dependent pathway in which L. monocytogenes uses LLO to restrict bacterial growth. Hence, manipulation of autophagy by L. monocytogenes, either through induction or evasion, represents a key event in its intracellular life style and could lead to either cytosolic growth or persistence in intracellular vacuolar structures.
Introduction
Listeria monocytogenes is a Gram-positive, ubiquitously distributed, facultative intracellular pathogen that causes listeriosis, a lethal food-borne disease. Following invasion into host cells, the pathogen breaches single-membrane vacuolar compartments to escape into the cytosol using listeriolysin O (LLO) and/or its phospholipases [1,2]. Subsequently, cytosolic bacteria employ the surface protein actin-assembly inducing protein (ActA) to recruit components of the host-cell actin machinery to facilitate intracellular bacterial movement and cell-to-cell spread [1]. However, there is increasing evidence to suggest that a proportion of the bacteria modulate, via LLO, their vacuolar compartments to enable replication and propagation [3,4].
LC3 Is Essential for the Intracellular Growth Restriction of LLO-Producing L. monocytogenes
In HeLa cells, depletion of the autophagy factor LC3 resulted in a significant increase in intracellular replicating wt L. monocytogenes ( Figure 1A). These cells are, however, also permissive for the replication of LLO-negative mutants, which exited into the cytoplasm and formed actin tails ( Figure 1B). However, depletion of LC3 did not affect the intracellular numbers of LLO-negative L. monocytogenes ( Figure 1A).
LLO Upregulates OPTN in HeLa Cells
As LLO-producing L. monocytogenes were targeted by the autophagy factor LC3, we examined whether LLO regulates OPTN activity. To that purpose, HeLa cells were infected with L. monocytogenes wt and LLO-negative mutant. OPTN levels were determined by immunoblotting. As can be seen in Figure 2A, OPTN was upregulated in cells infected with L. monocytogenes wt but not L. monocytogenes ∆hly, indicating that LLO is required to regulate the expression of OPTN. To confirm this result, HeLa cells were treated with lipopolysaccharide (LPS)-free LLO, purified from L. innocua, and changes in OPTN expression were analyzed by immunoblotting. Indeed, LLO significantly induced the upregulation of OPTN ( Figure 2B).
OPTN Phosphorylation by TBK1 Is Essential for the Growth Restriction of L. monocytogenes
Phosphorylation of OPTN by TBK1 enhances its affinity for LC3 [21]. To elaborate the role of TBK1 in L. monocytogenes growth restriction, TBK1 was inhibited with a reversible inhibitor BX-795, prior to infection of HeLa cells with wt L. monocytogenes. Increased intracellular numbers of wt L. monocytogenes were observed in cells treated with BX-795, as compared to untreated control cells ( Figure 3A). The treatment of L. monocytogenes wt with BX-795 did not affect bacterial viability ( Figure S1).
Because TBK1 also phosphorylates other autophagy adaptors besides OPTN [22], we examined the role of phosphorylated OPTN in the growth restriction of wt L. monocytogenes in greater detail. Cells were co-transfected with plasmids encoding (1) OPTN and TBK1; (2) OPTN and a TBK1 variant with an inactive kinase (KM) domain; or (3) an empty vector. OPTN reduced intracellular wt L. monocytogenes growth in the presence of active TBK1, but this effect was absent with the inactive TBK1 variant ( Figure 3B). Thus, these results indicate that active TBK1 and phosphorylated OPTN are required to restrict the intracellular growth of L. monocytogenes.
The Reduction of OPTN Promotes the Growth of Wt L. monocytogenes in an LLO-Dependent Manner
To determine the involvement of LLO in OPTN-mediated growth restriction of L. monocytogenes, we reduced expression of optn with specific siRNA in HeLa cells, and subsequently infected them with wt L. monocytogenes and its isogenic LLO-negative mutant ∆hly. In OPTN-depleted cells, the intracellular numbers of wt L. monocytogenes were significantly increased. By contrast, OPTN depletion did not affect the intracellular growth of L. monocytogenes ∆hly ( Figure 4). This result implies that LLO production is essential for the growth restriction of L. monocytogenes by OPTN.
Discussion
Autophagy plays a crucial role in the clearance of intracellular L. monocytogenes [7,23]. Cytosolic Listeria are ubiquitinated and are subsequently detected by the autophagy adaptors SQSTM1 and NDP52, which target them to autophagosomes for degradation [17,18]. Current studies have focused on the question of how L. monocytogenes evades autophagic recognition and have provided insight that these bacteria use mimicry, i.e., coating themselves with components of the host cell cytoskeleton by means of ActA [17,[24][25][26]. Our results in this study reveal another aspect of autophagic recognition. Indeed, we report that the autophagy adaptor OPTN is upregulated in response to LLO treatment. Significantly, OPTN reduces the intracellular growth of wt L. monocytogenes, but not that of its isogenic LLO-negative mutant strain. Detailed analysis has indicated that TBK1-mediated phosphorylation of OPTN is a crucial event in the restriction of intracellular growth of wt L. monocytogenes.
Previous studies on autophagosomal degradation of L. monocytogenes have shown that cytoplasmic bacteria are targeted by the autophagosomal machinery [23]. Other reports have demonstrated that LLO is required for autophagy induction, and it was postulated that L. monocytogenes containing phagosomes damaged by LLO might be targeted by autophagy [6,7]. The data reported in this study, for the first time, provide evidence that LLO induces the upregulation of the autophagy adaptor OPTN. We used HeLa cells to determine the role of OPTN during L. monocytogenes infection. This cell line is particularly well-suited for this study, since expression of LLO is dispensable for bacterial vacuolar escape in these cells [2], as evidenced by the presence of cytoplasmic LLO-negative L. monocytogenes with actin tails.
Our data show that intracellular growing L. monocytogenes consist of two populations: one which generates LLO and may be targeted by autophagy, thereby leading to its intracellular growth restriction, and a second group that might not be targeted for autophagic clearance and therefore, its growth remains unrestricted. These data therefore suggest that, in addition to evasion of autophagy by ActA [17], L. monocytogenes may also manipulate the cellular autophagic machinery by induction through LLO, to promote its growth and persistence in host cells. Thus, bacteria that escape the vacuole and hyper-replicate in the host cytosol may be subjected to autophagic detection and removal ( Figure 5). Further studies are required to conclude that autophagy is involved in the growth restriction of LLO-producing L. monocytogenes under these experimental conditions. It appears counterintuitive that LLO induces the upregulation of the autophagy adaptor protein OPTN. However, other functions of OPTN may be of relevance here, as it has been shown that the OPTN-TBK1 complex leads to the phosphorylation, dimerization, and nuclear localization of the interferon regulatory factor 3 (IRF3), which, in turn, mediates the transcription of the interferon (IFN) type 1 response genes [27]. Secreted IFNα/β would stimulate the production of more potent antimicrobial interferon IFNγ by bystander cells, subsequently leading to cell-autonomous bacterial killing [28,29]. Thus, our results suggest that LLO induces a host response, the upregulation of OPTN, which is required to detect and to degrade intracellular L. monocytogenes.
To date, only one additional bacterial pathogen, namely S. typhimurium, was shown to be targeted by OPTN for its autophagosomal degradation [21]. For Salmonella, it was demonstrated that LPS leads to TBK1-dependent phosphorylation of OPTN [21], which is a function shared with the proteinaceous toxin LLO. During S. typhimurium infection, these bacteria remodel the phagosome into a non-degradative compartment referred to as Salmonella-containing vacuole (SCV) [30]. In autophagy-deficient cells, infection with S. typhimurium leads to a loss of membrane integrity in SCVs, thus suggesting that autophagy may be involved in membrane repair [31]. There is currently little evidence for repair of host membranes by the autophagic machinery and this certainly requires further investigation.
Our data presented here imply that a quantitative assessment of bacterial replication does not distinguish between the different compartments occupied by the bacterium during intracellular growth. Thus, the compartment in which LLO-deficient bacteria grow in infected cells is not targeted for autophagy and may indeed be the spacious Listeria-associated phagosomes previously described, where L. monocytogenes grow, albeit at low replication rates [3]. Further studies are warranted to examine replicative niches of L. monocytogenes and their contribution to overall growth.
Figure 5. L. monocytogenes is trapped within a single-membrane vacuole. Listeriolysin O (LLO)-negative mutant (∆hly) or wild type (wt) bacteria expressing low levels of LLO allow the establishment of a replicative niche, which cannot be autophagocytosed. However, the pathogen escapes from the vacuolar compartment with the help of phospholipases into the cytosol. Cytoplasmic bacteria expressing ActA recruit the host actin cytoskeleton machinery and are camouflaged from autophagic recognition. In contrast, bacteria that do not quickly express ActA are ubiquitinated. This is followed by the binding of ubiquitinated bacteria to OPTN, whose expression is induced by LLO. OPTN interacts with LC3-containing membranes, leading to autophagosome formation around the bacterium.
Conclusions
In conclusion, host cells employ OPTN to control the intracellular growth of L. monocytogenes via host signaling that is activated by LLO. LLO belongs to the family of CDCs, which are mainly produced by Gram-positive bacteria including species from the genera Arcanobacterium, Bacillus, Clostridium, Gardnerella, Lactobacillus, Listeria and Streptococcus [32]. Recently, it was demonstrated that S. pneumoniae induces autophagy in a pneumolysin (a CDC)-dependent manner [15]. It might be worth analyzing whether this toxin also activates autophagy via OPTN, which would suggest a general mechanism of CDC-dependent autophagic induction.
Cell Culture
HeLa (human cervical adenocarcinoma) cells were cultured in Dulbecco's modified Eagle medium (DMEM) (Thermo Fisher Scientific, Waltham, MA, USA) supplemented with 10% fetal bovine serum (FBS) (Biochrom, Berlin, Germany) at 37 °C in a humidified, 5% CO2-air atmosphere. The cells were seeded in cell culture dishes with medium containing 10% FBS 24 h prior to the experiments. At 90-100% confluency, the cells were washed once with Hanks' Balanced Salt Solution (HBSS) (Biochrom, Berlin, Germany), and incubated in DMEM containing 10% FBS for 2 h. The cells were then again washed three times with HBSS, and infected in medium containing 0.5% FBS. The cells were incubated in medium containing 0.5% FBS throughout the duration of infection. For treatment with 50 ng/mL LLO, the cells were washed five times with HBSS and incubation with LLO was performed in medium without FBS for 1 h. Prior to treatment, LLO was activated by incubation with 5 mM dithiothreitol (Sigma-Aldrich, St. Louis, MO, USA) for 10 min at room temperature (RT). LLO was isolated and purified from Listeria innocua expressing LLO as described [33].
The treatment of cells with 1 µM BX-795 (Merck Millipore, Billerica, MA, USA) was performed 1 h before infection in medium containing 0.5% FBS. The infection was done in the medium containing BX-795.
RNAi Transfection
The cells were plated shortly before transfection in 1.1 mL DMEM containing 10% FBS. The siRNA (5 nM for lc3; 10 nM for optn) and the HiPerFect reagent (1.5 µL for lc3; 3 µL for optn) were diluted in 100 µL DMEM and incubated for 5 min at RT. The transfection complexes were added dropwise to the cells, and the cells were incubated for 48 h. Subsequently, the cells were washed three times with HBSS to terminate the transfection, and DMEM containing 10% FBS was added. The cells were then infected as described. lc3 (SI02655597), optn (SI00132020) and scrambled (1022076) siRNA were purchased from Qiagen (Hilden, Germany).
Bacterial Culture and Infection
L. monocytogenes wt (EGD-e) [34] and L. monocytogenes ∆hly (a mutant lacking LLO) [35] were grown in Brain-Heart-Infusion (BHI) medium. Escherichia coli Top 10 (Invitrogen) were cultured in Luria-Bertani medium. The bacteria were grown with constant shaking (180 rpm) at 37 °C. For infection, overnight grown cultures of L. monocytogenes were diluted (1:50) in BHI medium, and cultured to exponential growth phase as determined by the optical density at 600 nm. An appropriate culture volume was centrifuged at 13,000 rpm for 1 min at RT. The bacterial pellet was washed twice with HBSS, resuspended in DMEM containing 0.5% FBS, and used for infection. A multiplicity-of-infection of 10 was used for infection. For determination of intracellular bacterial number, the extracellular bacteria were eliminated 1 h post infection (p.i.) by the incubation of the infected cells in DMEM containing 10% FBS, and 50 µg/mL of gentamicin. For analysis of OPTN levels, cells were infected for 6 h without gentamicin treatment.
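As a back-of-the-envelope illustration of how an inoculum is sized for a multiplicity-of-infection of 10, the Python sketch below uses an assumed OD600-to-CFU conversion factor; the actual factor must be calibrated for the strain and spectrophotometer used, so the numbers are purely illustrative.

```python
# Rough inoculum calculation for an MOI of 10, as a sketch only.
CFU_PER_ML_PER_OD600 = 1e9   # assumed conversion factor (hypothetical)

def inoculum_volume_ml(n_host_cells: float, moi: float, od600: float) -> float:
    """Volume of exponential-phase culture needed to infect n_host_cells at the given MOI."""
    bacteria_needed = n_host_cells * moi
    cfu_per_ml = od600 * CFU_PER_ML_PER_OD600
    return bacteria_needed / cfu_per_ml

# Example: 5e5 HeLa cells per well, MOI of 10, culture at OD600 = 0.5
print(f"{inoculum_volume_ml(5e5, 10, 0.5) * 1000:.1f} microliters")
```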
Determination of the Number of Intracellular Bacteria
Four hours p.i., the cells were washed three times with phosphate-buffered saline (PBS; pH 7.4), and lysed with cold water containing 0.2% Triton X-100 for 20 min at RT. The bacteria were diluted in PBS and plated on BHI agar plates.
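Colony counts from such dilution plating are converted back to viable bacteria per millilitre of lysate with the standard relation CFU/mL = colonies / (dilution x plated volume); a minimal sketch with made-up numbers:

```python
def cfu_per_ml(colony_count: int, dilution_factor: float, plated_volume_ml: float) -> float:
    """Back-calculate viable bacteria per mL of lysate from a plate count.

    dilution_factor is the overall dilution of the plated sample,
    e.g. 1e-4 for a 10,000-fold dilution.
    """
    return colony_count / (dilution_factor * plated_volume_ml)

# Example: 152 colonies from 0.1 mL of a 10^-4 dilution -> 1.52e7 CFU/mL
print(f"{cfu_per_ml(152, 1e-4, 0.1):.2e} CFU/mL")
```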
Protein Preparation from Eukaryotic Cells and Immunoblotting
Cell lysis was performed with RIPA [33] or CHAPS lysis buffer purchased from ProteinSimple (San Jose, CA, USA) [36]. The total protein content was measured with bicinchoninic acid solution (Sigma-Aldrich, St. Louis, MO, USA) assay.
Immunofluorescence
The cells cultured on coverslips were infected. Four hours p.i., the cells were washed three times with PBS, fixed in 3.7% formaldehyde-PBS for 20 min at RT and incubated with immunofluorescence buffer (0.3% Triton-X-100, 1% BSA in PBS) at RT. After incubation with monoclonal primary anti-Listeria antibody (M108, undiluted) overnight at 4 • C, the cells were washed three times with PBS and incubated with 1:1000 anti-mouse IgG Fab2 Fragment Alexa Fluor 647-conjugated secondary antibody (Cell Signaling Technology, Danvers, MA, USA, #4410) and 1:40 Alexa Fluor 488-conjugated phalloidin (Thermo Fisher Scientific, Waltham, MA, USA, #A12379) for 2 h at 37 • C in the dark. After three washing steps, the coverslips were mounted with ProLong Gold antifade reagent with DAPI (Thermo Fisher Scientific, Waltham, MA, USA, #P36935) and imaged by confocal microscopy (Leica TCS SP5, Leica Microsystems, Wetzlar, Germany).
Statistical Analysis
Statistical analysis of experiments was performed with SigmaPlot 11 (Systat Software, San Jose, CA, USA). The data of Figures 1A, 3A, 4 and S1 were analyzed by t-test. The data of Figure 3B were analyzed by one-way ANOVA with Tukey's post hoc test. Mean values ± SEM are plotted from three independent experiments. Representative immunofluorescence or immunoblotting images from three independent experiments are shown.
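For illustration only, the same two tests can be reproduced in Python on hypothetical CFU values; the study itself used SigmaPlot 11, and all data and group labels below are made up.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical intracellular CFU counts, three independent experiments per condition.
scrambled = np.array([1.0e5, 1.3e5, 0.9e5])
sirna_lc3 = np.array([3.1e5, 2.7e5, 3.4e5])

# Two-sample t-test (the test used for Figures 1A, 3A, 4 and S1).
t_stat, p_val = stats.ttest_ind(scrambled, sirna_lc3)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")

# One-way ANOVA with Tukey's post hoc comparison (the test used for Figure 3B),
# here on three hypothetical transfection groups.
vector = np.array([2.9e5, 3.2e5, 3.0e5])
optn_tbk1 = np.array([1.1e5, 1.4e5, 1.2e5])
optn_tbk1_km = np.array([2.8e5, 3.1e5, 2.9e5])
values = np.concatenate([vector, optn_tbk1, optn_tbk1_km])
labels = ["vector"] * 3 + ["OPTN+TBK1"] * 3 + ["OPTN+TBK1(KM)"] * 3
print(pairwise_tukeyhsd(values, labels))
```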
"year": 2017,
"sha1": "f8e50e0578d11cd6f8817c302b72d48a58e0bd3a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6651/9/9/273/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f8e50e0578d11cd6f8817c302b72d48a58e0bd3a",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
The Wisconsin Longitudinal Study: Overview, Data Linkages, and Future Plans
Abstract The WLS is a study of Wisconsin high school class of 1957 graduates, with follow-ups in 1964, 1975, 1993, 2004, 2011, and 2020. The data reflect the life course of the graduates (and their siblings), initially covering education, switching to family, career, and social participation in midlife, and physical and mental health, cognitive status, caregiving, and social support as respondents age. The WLS is linked to multiple administrative data sources including: parent earnings from state tax records (1957-60) and Social Security earnings and benefits for respondents; 1940 Census data; characteristics of high schools and colleges, employers, industries, and communities of residence; voting records from 2000-2018; Medicare claims; and the National Death Index. Efforts are underway to expand the racial/ethnic and educational composition of the WLS by supplementing the original sample with a new cohort of age-matched adults drawn from Wisconsin’s Black, Hispanic, Asian-American, and Native American communities.
THE WISCONSIN LONGITUDINAL STUDY: NEW COGNITIVE, GENETIC, BIOLOGICAL, AND SOCIAL DATA AND A DIVERSIFYING SAMPLE Chair: Michal Engelman
The Wisconsin Longitudinal Study (WLS) has followed a sample of one in three Wisconsin high school graduates from the class of 1957 for over 64 years, making it an excellent data source for researchers interested in linking early and midlife characteristics to a wide range of later-life outcomes. The WLS is unique among major studies of aging cohorts for its duration of follow up, the inclusion of siblings, and the combination of rich social and health information. This symposium will provide an overview of the WLS, describe recent data collection and linkages, and introduce ongoing efforts to diversify the educational and racial/ ethnic composition of the study sample. WLS data cover nearly every aspect of the participants' lives from early life socioeconomic background, schooling, family, and work, to physical and mental health, social participation, civic engagement, well-being, and cognition. The study is linked to administrative data including Medicare records, Social Security records, mortality records, and resource data on primary and secondary schools attended by participants as well as characteristics of their employers, industries, and communities of residence. Recent data collection efforts have generated a wealth of new biological and cognitive information, including genetic data collected from saliva and blood samples, measures of the gut microbiome, and derived polygenic scores for educational attainment, cognitive performance, depression, and subjective well-being. The currently-fielding ILIAD effort is implementing rigorous AD diagnostic protocols to track the progression of dementia across cognitive phenotypes. The symposium will conclude with practical information on accessing and using the data.
THE WISCONSIN LONGITUDINAL STUDY: OVERVIEW, DATA LINKAGES, AND FUTURE PLANS Michal Engelman, University of Wisconsin-Madison, Madison, Wisconsin, United States
The WLS is a study of Wisconsin high school class of 1957 graduates, with follow-ups in 1964, 1975, 1993, 2004, 2011, and 2020. The data reflect the life course of the graduates (and their siblings), initially covering education, switching to family, career, and social participation in midlife, and physical and mental health, cognitive status, caregiving, and social support as respondents age. The WLS is linked to multiple administrative data sources including: parent earnings from state tax records (1957-60) and Social Security earnings and benefits for respondents; 1940 Census data; characteristics of high schools and colleges, employers, industries, and communities of residence; voting records from 2000-2018; Medicare claims; and the National Death Index. Efforts are underway to expand the racial/ethnic and educational composition of the WLS by supplementing the original sample with a new cohort of age-matched adults drawn from Wisconsin's Black, Hispanic, Asian-American, and Native American communities.
WLS-ILIAD: INITIAL LIFETIME'S IMPACT ON ADRD Pamela Herd, Georgetown University, Georgetown University, District of Columbia, United States
Between 2021 and 2025, WLS will collect two new waves of data, which will capture detailed measures of cognitive change and dementia as the cohort reaches their early to mid 80s. In this session, I will provide an overview of the data that we're collecting, as well as opportunities to explore early and mid-life determinants of cognitive change and dementia onset in this unique study. Compared to existing studies, the WLS offers some novel opportunities. First, it will provide one of the only opportunities to study how early-life and midlife conditions and experiences, measured with data gathered prospectively, can shape cognitive trajectories and dementia in later life. Second, its unique sibling design provides significant analytic advantages, improving causal inference. Third, the study includes a large group of rural participants, allowing for closer examinations of how rural conditions may shape risk and resilience against cognitive decline and dementia in later life.
Division of Geriatrics and Gerontology, UW-Madison, University of Wisconsin-Madison, Wisconsin, United States
One of the distinctive strengths of WLS is the availability of Henmon-Nelson IQ scores on all participants while in high school, followed by prospective collection of data through cognitive batteries of varying size and sophistication. Launched in 1993, the initial longitudinal cognitive testing included 8 abstract reasoning items followed by the administration of larger cognitive batteries in 2004 and 2011 comprised of a 10-item word recall test, digit ordering task, phonemic and category fluency, as well as repeated and new items from the WAIS-R similarities task first administered in the 1993 survey. In 2018, with R01 funding from NIA, the scope of cognitive testing expanded significantly and includes administration of a phone-based cognitive screening measure, and a comprehensive in-person neuropsychological assessment for individuals identified at risk for dementia targeting a range of cognitive domains, including memory, language, attention, visuospatial abilities, and executive functioning.
BIOLOGICAL MEASURES IN THE WLS: GENETIC AND MICROBIOME DATA Kamil Sicinski, University of Wisconsin-Madison, Madison, Wisconsin, United States
Ever since releasing genotype data in 2017, the WLS has continually expanded the resources available to users interested in genetic research. Key advantages of the WLS data for genetics research include its sibling sample and nearly full life course longitudinal study design. In 2021, we now have state-of-the-art polygenic scores available in multiple domains, such as health, cognition, fertility, personality, risk behaviors and attitudes, and life satisfaction. The scores cover phenotypes spanning from adventurousness, through educational attainment, to age at which voice deepened. Additionally, the genotype data was re-imputed in 2021 to the superior Haplotype Reference Consortium reference panel and the WLS expects to obtain copy number variant data next year. In addition to genetic data, we have a set of novel microbiome data on a subset of participants that allows researchers to study relationships between environments and gut microbial composition.
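For orientation, a polygenic score is essentially a weighted sum of imputed allele dosages. The toy Python sketch below uses hypothetical variant IDs, weights, and dosages; the released WLS scores were of course produced by the study's own pipelines.

```python
import numpy as np

# Hypothetical GWAS weights (effect sizes) for a handful of variants.
weights = {"rs0000001": 0.021, "rs0000002": -0.013, "rs0000003": 0.008}

# Hypothetical imputed allele dosages (0-2) for two participants at those variants.
dosages = {
    "person_A": {"rs0000001": 2.0, "rs0000002": 0.0, "rs0000003": 1.0},
    "person_B": {"rs0000001": 1.0, "rs0000002": 1.0, "rs0000003": 0.0},
}

def polygenic_score(person_dosages: dict, score_weights: dict) -> float:
    """Weighted sum of allele dosages over the variants in the score."""
    return sum(score_weights[snp] * person_dosages.get(snp, 0.0) for snp in score_weights)

raw = {p: polygenic_score(d, weights) for p, d in dosages.items()}
# Scores are usually standardized within the analysis sample before use.
vals = np.array(list(raw.values()))
standardized = {p: (s - vals.mean()) / vals.std() for p, s in raw.items()}
print(standardized)
```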
HOW TO ACCESS AND USE DATA FROM THE WISCONSIN LONGITUDINAL STUDY Carol Roan, University of Wisconsin -Madison, Madison, Wisconsin, United States
With over 27,000 analysis variables covering more than 60 years of participants' lives, the WLS data can be overwhelming to new users who are looking for the measures they need to answer their research questions. Core WLS survey data is free and easy to download from our website. As we add new types of measures and new waves of data, we refine our data sharing methods to balance our need to make the data easily available with the need to protect the confidentiality of participants. This presentation will teach users how to access the data files they need for their research and how to use our online documentation of survey instruments and data files. Symposium attendees will also receive a USB drive with the publicly available data and complete documentation.
Session 2340 (Symposium)
TRANSITIONS TO LONG-TERM RESIDENTIAL CARE SETTINGS Chair: Bram de Boer Co-Chair: Hilde Verbeek Discussant: Joseph Gaugler During their life course, many older adults encounter a transition between care settings, for example, a permanent move into long-term residential care. This care transition is a complex and often fragmented process, which is associated with an increased risk of negative health outcomes, rehospitalisation, and even mortality. Therefore, care transitions should be avoided where possible and the process for necessary transitions should be optimised to ensure continuity of care. Transitional care is therefore a key research topic. The TRANS-SENIOR European Joint Doctorate (EJD) network builds capacity for tackling a major challenge facing European long-term care systems: the need to improve care for an increasing number of care-dependent older adults by avoiding unnecessary transitions and optimising necessary care transitions. During this symposium, four presenters from the Netherlands and Switzerland will present different aspects of transitions into long-term residential care.
"year": 2021,
"sha1": "1d3c356f337cd76b8d4f18730c9b2e55b21a845b",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1093/geroni/igab046.850",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5d50603a1b34ba2fe37b47b2bb539bc34e534eda",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": []
} |
Study of the unit structure in frequency selective fabric fabricated by u-shaped velvet
Frequency selective surface (FSS) is a periodic structure with one-dimensional or two-dimension (2D) array. The traditional FSS unit structure is a metal patch or metal aperture. With metal patch structure, FSSs would reflect the electromagnetic wave in the vicinity of the resonance frequency. The metal aperture-type FSSs transmit the electromagnetic wave around the resonance frequency.1 In recent years, the FSSs have been investigated by many researchers about the structure of single-layer,2,3 multi-layer4−7 or 3D8−10 metal unit. Changing the design of electromagnetic materials, unit size, arrangement and other parameters, metal FSSs can obtain the specific resonance frequency, such as broadband, multi frequency, better angle stability, miniaturization or other characteristics.
Introduction
Frequency selective surface (FSS) is a periodic structure with one-dimensional or two-dimension (2D) array. The traditional FSS unit structure is a metal patch or metal aperture. With metal patch structure, FSSs would reflect the electromagnetic wave in the vicinity of the resonance frequency. The metal aperture-type FSSs transmit the electromagnetic wave around the resonance frequency. 1 In recent years, the FSSs have been investigated by many researchers about the structure of single-layer, 2,3 multi-layer 4−7 or 3D 8−10 metal unit. Changing the design of electromagnetic materials, unit size, arrangement and other parameters, metal FSSs can obtain the specific resonance frequency, such as broadband, multi frequency, better angle stability, miniaturization or other characteristics.
Combined with the FSS concept, conductive fibers can be used as the structural material, and flexible frequency selective fabrics (FSFs) with electromagnetic function can be manufactured by means of textile processing. This research not only has important scientific significance, but also practical value in the fields of radar absorbing materials, communication windows, fabric antennas, flexible functional clothing and so on. 11,12 At present, domestic and foreign researchers have studied 2D FSFs made by screen printing, 13,14 weaving, 15 weft knitting, 16,17 embroidery, 18,19 selective chemical plating, 20 ink-jet printing 21 or other textile processing.
In this paper, we propose a novel FSF with a 3D U-shaped velvet structure. Compared with a planar FSS, the U-shaped velvet FSF has a 3D design, which increases the number of design parameters. The velvet FSF also offers flexible, lightweight characteristics and more structural patterns, which a conventional FSS does not have. 12,22 Members of our research group 23 proposed U-shaped velvet FSS textiles, which are made by the technology of tufted carpet weaving. The parameter design of an electromagnetic functional fabric can be divided into four parts: the yarn material, the unit shape, the grid array and the electromagnetic wave incidence condition. In the unit design part, previous work 22 has already explored the influence of the velvet height, the unit cell size and the bottom connectivity on the frequency response characteristics. In this study, further unit design parameters, such as planar versus 3D shape, the linear density of the conductive ply yarn, the inclination angle of the velvet and different U-shaped connectivity conditions, are studied.
Experiment
FSF specimens with the unit structure of planar dipole and 3D U shape

The U-shaped structure unit, made of conductive yarns, is derived by extending the two end points of the dipole unit along the Z-direction, which yields a 3D structure. The specimen based on the unit structure of the planar dipole is shown in Figure 1, and the specimen with the U-shaped unit structure is shown in Figure 2. In the actual production process, FSFs with the U-shaped structure can be woven on a tufted carpet loom.
2D dipole and 3D U-shaped FSF samples were manufactured using two strands of copper wire (single-strand diameter 0.1 mm) as the structural unit, as shown in Figure 1 and Figure 2, respectively. The sample substrate was common cardboard; non-conductive fabric can also be used as the substrate. Periodic structure models with different unit lengths of the dipole and U-type units were prepared to explore differences in frequency response. The specific parameters of the FSF specimens are listed in Table 1.
U-shaped velvet FSF specimens with different linear density of silver filaments in the unit structure
For independent U-shaped FSFs, the influence of the linear density of the silver filaments was investigated. The assembling number of yarns indicates the linear density of the conductive ply yarn and determines the amount of conductive yarn used in actual weaving. The conductive yarn linear density is a significant parameter in the FSF weaving process.
Silver filaments with a single yarn fineness of 10 tex were used to build up the U-type FSFs. The FSF specimens have the same unit size and different assembling numbers of silver filaments, including 4, 16 and 28 yarns (corresponding to 408 dtex, 1630 dtex and 2852 dtex, respectively), as shown in Table 2. The substrate layers are polyester fabric and cellular PE plates, which support the conductive yarns to prevent them from collapsing, as shown in Figure 3.
Figure 3 Photograph of sample with substrate layers.
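The relation between the assembling number and the resultant linear density can be sketched as follows (tex is grams per 1000 m, dtex per 10,000 m, so 10 tex = 100 dtex). The small excess of the Table 2 figures over the simple product is treated here as an assumed twist-contraction factor, included purely for illustration.

```python
# Resultant linear density of a plied conductive yarn, as a rough sketch.
SINGLE_YARN_TEX = 10.0        # silver filament fineness, from the text
TWIST_CONTRACTION = 1.02      # assumed factor, for illustration only

def resultant_dtex(assembling_number: int) -> float:
    # tex -> dtex conversion is a factor of 10
    return assembling_number * SINGLE_YARN_TEX * 10.0 * TWIST_CONTRACTION

for n in (4, 16, 28):
    print(n, "yarns ->", round(resultant_dtex(n)), "dtex")
# Prints roughly 408, 1632 and 2856 dtex, close to the 408/1630/2852 dtex in Table 2.
```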
U-shaped velvet FSF specimens with different inclination angles of velvet
Velvet inclination is a very common phenomenon in velvet fabrics; therefore, the velvet inclination plays an important role in this design. During sample preparation, one or two layers of cellular PE plates (1 mm thick per layer) were taken from sample Ag-D2-# in Table 2, and the cellular PE plates were then pulled along the extension direction of length L, from the outside to the inside in turn. The purpose is to produce different velvet inclination angles θ (such as θ0 = 0°, θ1 = 15°, θ2 = 40°, θ3 = 60°), as seen in Figure 4(a) & (b). Finally, FSF specimens with inclined velvet were prepared. Velvet inclination is closely related to the shape of the carpet and is therefore of practical value.
U-shaped velvet FSF specimens with different connectivity conditions
The unit structure of the above experimental samples is independent U type. However, there are a great many different shapes in practice, such as the cube, cylinder and so on. Therefore, it is very necessary to study different connection modes of U-shaped unit structures. Figure 5 indicates that the total length of the unit cell at the bottom is the same, and that the number of U type is different. Another kind of connectivity is continuous unit cells with the same length of single U type and different U type number, as seen in Figure 6.
By adjusting the needle distance, velvet height, and other machine parameters, U-shaped velvet FSF specimens with different connectivity conditions were produced on the tufted carpet sample loom. The unit structures were formed from silver filaments; common polyester yarns played a supporting role, and the substrates were ordinary carpet backing cloth. The specific parameters are shown in Table 3.
Experimental test
In this work, a shielding chamber was used to test the transmission coefficient of the samples. The testing system included an Agilent E8257D signal generator (250 kHz-40 GHz), an E7405A EMC spectrum analyzer (100 Hz-26.5 GHz), two horn antennas (1-18 GHz), and an absorbing screen. Environmental conditions and the positions of the transmitting and receiving antennas were set up according to GJB 6190-2008 (Measuring method for shielding effectiveness of electromagnetic shielding materials). The transmission coefficient of the samples over 1-18 GHz was tested with a transverse electric wave. Figure 7 is a diagram of the testing system, in which the centers of the transmitting antenna, the test sample, and the receiving antenna were located on the same horizontal line. The test sample size was 18 cm × 18 cm.
Comparison of frequency response characteristics of the 3D U-shaped velvet FSF and planar dipole cell structure FSF
The Cu-L series samples, with unit lengths of 6 mm, 12 mm and 18 mm, were planar dipole FSFs whose velvet height H was 0 mm. For contrast, the velvet height of the Cu-U series samples was 6 mm, with the same unit lengths of 6 mm, 12 mm and 18 mm; all other parameters were identical. The test results are shown in Figure 8, which gives the transmission coefficients of the specimens over 2-18 GHz. The unit structure of the 2D FSF is the planar dipole, while the 3D FSF structure is the U type. With the same bottom unit length L of 6 mm, the planar sample Cu-L1-# does not produce resonance in the 2-18 GHz band, while sample Cu-U1-# with the independent U-type structure resonates at 8 GHz. When the unit length L is 12 mm, the resonance frequency of the 2D FSF (Cu-L2-#) is 14.6 GHz, while the 3D FSF (Cu-U2-#) has two resonance points, at 6.8 GHz and 17 GHz. The 2D FSF (Cu-L3-#) with a unit length of 18 mm resonates at 11.7 GHz, and the 3D FSF (Cu-U3-#) resonates at 6.1 GHz and 14.6 GHz. The following conclusions can be drawn: A. The 3D U-shaped velvet FSF, which is extended in the direction of the dipole height, has a dual-band effect.
B. When the resonance points are the same (e.g. 14.6 GHz), the samples may be composed of different unit cell structures.
C. The resonance frequency of a 2D FSF whose parameters are the same as those of the 3D FSF except for the height H (e.g. H = 0 mm versus H = 6 mm) lies between the two resonance frequencies of the 3D FSF, slightly closer to the larger of the two. (A rough half-wavelength estimate of these resonance points is sketched below.)
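As a rough sanity check on these resonance points, one can compare them with the classic half-wavelength rule of thumb for dipole elements, f ≈ c/(2·L_eff). The sketch below is an editorial illustration under stated assumptions, not the authors' model: it ignores substrate permittivity, inter-unit coupling, and yarn conductivity, and it treats the effective length of a U-shaped unit as the bottom length plus twice the velvet height.

```python
# Half-wavelength resonance estimate for dipole-type FSS elements.
# Illustrative only: ignores substrate permittivity, inter-unit coupling,
# and the finite conductivity of the yarns.
C = 3e8  # speed of light in vacuum, m/s

def dipole_resonance_ghz(effective_length_mm: float) -> float:
    """f ~ c / (2 * L_eff) for a half-wavelength dipole element."""
    return C / (2 * effective_length_mm * 1e-3) / 1e9

# Planar dipole, bottom length L = 12 mm (Cu-L2-#): measured 14.6 GHz.
print(dipole_resonance_ghz(12))          # ~12.5 GHz

# U-shaped unit, L = 12 mm plus two velvet legs of H = 6 mm (Cu-U2-#):
# measured lower resonance point 6.8 GHz.
print(dipole_resonance_ghz(12 + 2 * 6))  # ~6.25 GHz
```

The estimates land in the right neighbourhood of the measured values, which is consistent with the interpretation that the velvet legs lengthen the effective resonant path of the unit.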
The influence of the velvet assembling number
The linear density of the yarns is an important design parameter of the FSF. In this study, the velvet linear density is represented by the assembling number of silver filaments. The assembling numbers of conductive yarns in the unit structures of the Ag-D series samples were 4, 16 and 28 respectively, corresponding to yarn linear densities of 408 dtex, 1630 dtex and 2852 dtex. The test results for the transmission coefficient are shown in Figure 9. The resonance frequency of the samples varies from 5.25 GHz to 5.83 GHz. As the conductive yarn linear density increases, the distance between the units and hence the inter-unit coupling capacitance decrease, so the resonance frequency increases slowly. With increasing velvet linear density, the gap between velvet tufts may also affect the frequency response characteristics.
The influence of the inclination angle of velvet
A qualitative study of velvet inclination was carried out to explore the impact of different velvet inclination angles on the transmission coefficient. In the experiment, sample Ag-D2-# was prepared with silver filaments at an assembling number of 16. The inclination angle θ of the velvet is shown in Figure 4(a), where θ0, θ1, θ2 and θ3 are 0°, 15°, 40° and 60° respectively. The double-column model is obtained by cutting the dipole of the U type, as shown in Figure 4(b). The experimental results are shown in Figure 10.
In Figure 10(a), the sample has a U-type unit and the resonance frequency lies in the range 5.25-5.76 GHz. In Figure 10(b), the transmission coefficient of the sample with a double-column unit is about 0 dB. The analysis is as follows.
Velvet inclination angle θ: Cheng et al. 22 showed that the resonance point moves to lower frequency as the velvet height or the unit length L increases. 22 An increase of the velvet inclination angle θ, however, has two effects: the equivalent height of the unit decreases, while the equivalent length of the velvet in the electric field increases. When θ is less than a certain angle, the impact of the equivalent height on the resonance frequency is smaller than that of the equivalent velvet length, so the resonance frequency moves to lower frequency. When θ is greater than that angle, the resonance frequency maintains a roughly constant value. Overall, as the inclination angle θ increases, the resonance frequency first decreases gradually and then holds near a constant value. Double-column structure: the transmission coefficients of the double-column unit samples with different angles remain unchanged, indicating that the main factor influencing the resonance frequency is the U-shaped structure at the bottom, rather than a separate double-column structure. The U-type structure is commonly used in velvet carpet products. Ordinary non-conductive yarns are used to support and fix the unit cell structure made of conductive yarns in the FSF samples, and a compact velvet arrangement can effectively mitigate the problem of velvet inclination.
The influence of U-shaped connective conditions on the bottom
The same total length L of the unit cell, different numbers of U type: in general, a single U type woven on the tufted carpet loom is far shorter than 9 mm. We therefore need to explore the impact of the number of U-type elements under the condition of the same total unit length L, which determines whether samples with different cell shapes can be manufactured on the tufted carpet sample loom. With a total bottom unit length L of 12 mm, a single-U sample (Ag-U-1#) and a double-U sample (Ag-2U-#) were produced, as shown in Figure 5. The test results for the transmission coefficient are shown in Figure 11. The resonance point of the single-U sample with bottom length L of 12 mm is 11.8 GHz; that of the double-U sample with the same parameters is 12 GHz. From these data we conclude that, if the total unit length L is held constant, the number of connected U elements hardly affects the resonance frequency. In other words, the effective unit length L of the FSS is the total length spanned by the connected U types. Based on this characteristic, FSFs with different unit shapes, built up from many small U types, can be woven on the tufted carpet loom. The slight shift of the curves in the figure is related to system error and gaps in the arrangement.
The same single-U length, different numbers of U type:
The single-U length was 6 mm. As the number of connected U types increased, samples Ag-1U-#, Ag-2U-# and Ag-3U-# were made, with unit lengths of 6 mm, 12 mm and 18 mm respectively. The transmission coefficient curves were studied, and the test results are shown in Figure 12. In the graph, the first resonance frequency of the three curves is around 5 GHz, while the second resonance point differs across the three samples. The larger the number of connected U types, the lower the resulting resonance frequency: as the U-shaped number increases, the total bottom length of the unit becomes longer, which makes it resonate at a lower frequency.
Conclusion
In our work, 2D and 3D samples with the same bottom unit length L, as well as 3D samples with different parameters, were prepared. The conclusions are drawn as follows: A. The 3D FSFs have double-frequency resonance.
B. With an increase of the linear density of conductive yarns, the resonance frequency moves to higher frequency.
C. As the velvet inclination angle θ increases, the resonance frequency shows a trend of decreasing first and then stabilizing.
D. The number of connected U types hardly affects the resonance frequency of samples with the same bottom length L.
E. When the single-U length is the same, a larger number of connected U types results in a lower resonance frequency.
Being lightweight, soft and flexible, the velvet fabric with FSS offers a wide design space in terms of materials, unit sizes, shapes and other parameters. Based on the above experimental results, it will be easier to develop products with specific resonance points, although a large number of further experiments are still needed. | 2019-04-16T13:22:31.692Z | 2017-04-18T00:00:00.000 | {
"year": 2017,
"sha1": "84ef4436145874548b8123ac5f9f6f4fc9b0c1e6",
"oa_license": "CCBYNC",
"oa_url": "https://medcraveonline.com/JTEFT/JTEFT-01-00025.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "58412cb2d48f4627a6672ef5654bc174a358ceb1",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Engineering"
]
} |
235345349 | pes2o/s2orc | v3-fos-license | Cognitive heterogeneity and complex belief elicitation
The Stochastic Becker-DeGroot-Marschak (SBDM) mechanism is a theoretically elegant way of eliciting incentive-compatible beliefs under a variety of risk preferences. However, the mechanism is complex and there is concern that some participants may misunderstand its incentive properties. We use a two-part design to evaluate the relationship between participants’ probabilistic reasoning skills, task complexity, and belief elicitation. We first identify participants whose decision-making is consistent and inconsistent with probabilistic reasoning using a task in which non-Bayesian modes of decision-making lead to violations of stochastic dominance. We then elicit participants’ beliefs in both easy and hard decision problems. Relative to Introspection, there is less variation in belief errors between easy and hard problems in the SBDM mechanism. However, there is a greater difference in belief errors between consistent and inconsistent participants. These results suggest that while the SBDM mechanism encourages individuals to think more carefully about beliefs, it is more sensitive to heterogeneity in probabilistic reasoning. In a follow-up experiment, we also identify participants with high and low fluid intelligence with a Raven task, and high and low proclivities for cognitive effort using an extended Cognitive Reflection Test. Although performance on these tasks strongly predicts errors in both the SBDM mechanism and Introspection, there is no significant interaction effect between the elicitation mechanism and either ability or effort. Our results suggest that mechanism complexity is an important consideration when using elicitation mechanisms, and that participants’ probabilistic reasoning is an important consideration when interpreting elicited beliefs. Electronic supplementary material The online version of this article (10.1007/s10683-021-09722-x) contains supplementary material, which is available to authorized users.
Result 8 There is no statistically significant observer effect in the data.

Support for Result 8 is provided in Table 7, which shows the proportion of correct left/right choices in blocks one and two of the experiment with the data split into subsets of 10 periods. We focus the analysis on Periods 11-30 since these are the ten periods directly before and after the introduction of beliefs.
An observer effect would create larger improvements in the proportion of correct left/right choices at the start of Block Two in the treatments with belief elicitation relative to the No-Elicitation treatment. There is no such pattern in the data: in the treatment with no belief elicitation, participants make mistakes in 35.1 percent of cases in Periods 11-20 and in 30.5 percent of cases in Periods 21-30. This difference of 4.63 percentage points is not significantly different from the difference of 4.7 percentage points observed in the SBDM mechanism in a difference-in-difference permutation test in which we restrict the data to Periods 11-30 (p-value = 0.898). It is also not different from the difference of 7.3 percentage points observed in the Introspection mechanism (p-value = 0.533).
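A difference-in-difference permutation test of this kind can be sketched as follows. This is an illustrative sketch only, not the authors' code: it assumes one improvement score per participant (the change in the proportion of correct choices from Periods 11-20 to Periods 21-30) and randomly reallocates participants between the two treatments being compared.

```python
import numpy as np

def diff_in_diff_pvalue(improve_a, improve_b, n_iter=10_000, seed=0):
    """Permutation test for a difference-in-difference in improvements.

    improve_a, improve_b: per-participant changes in the proportion of
    correct choices (Periods 21-30 minus Periods 11-20) in two treatments.
    Participants are randomly reallocated between the treatments in each
    iteration; returns a two-sided p-value.
    """
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([improve_a, improve_b])
    n_a = len(improve_a)
    observed = np.mean(improve_a) - np.mean(improve_b)
    count = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        stat = pooled[:n_a].mean() - pooled[n_a:].mean()
        if abs(stat) >= abs(observed):
            count += 1
    return count / n_iter
```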
In Block Two, the proportion of incorrect left/right decisions in the SBDM mechanism is not significantly different from that in the Introspection mechanism (p-value = 0.761).
Appendix C: Additional Figures
Result 3 presented histograms of reported beliefs for consistent participants across all of the informative priors. Here we provide the histograms of reported beliefs for the other cases. Figure 6 shows the reported beliefs of consistent and inconsistent participants in the case of an uninformative signal for both the SBDM mechanism and the Introspection mechanism using data from both the high-information treatments with 14 black balls in the left side of Bucket A and the low-information treatments with 12 black balls. Figure 7 shows the reported beliefs for the inconsistent participants across the eight potential informative posteriors.
Appendix D: Permutation Tests for Interactions
In this Appendix we briefly outline the Synchronized Permutation test of Pesarin (2001) and Salmaso (2003) that we used to test for the interaction effects in Hypotheses 1 and 2. A more general introduction to permutation tests can be found in Good (2000) and Manly (2007). More details on Synchronized Permutation tests can be found in Basso et al. (2009) and Hahn and Salmaso (2017). Hypotheses 1 and 2 both use a 2×2 factorial design. We are primarily interested in the interaction effect between factors. The standard approach would be to use a parametric ANOVA specification. However, as seen in the main text, the error distribution in the data is not normally distributed, and thus the underlying assumption of parametric ANOVA is not satisfied. The permutation test is an ideal alternative since it requires only minimal assumptions about the errors, is exact in some cases, and has high power relative to other approaches.
The main assumption of permutation tests is that the data are exchangeable under the null hypothesis. Data are exchangeable if the probability of the observed data is invariant with respect to random permutations of the indexes (Basso et al., 2009). In the 2×2 factorial design, the observations are typically not exchangeable, since units assigned to different treatments have different expectations. This implies that approaches that freely permute data across cells may fail to separate main and interaction effects (Good, 2000). The synchronized permutation test of Pesarin (2001) and Salmaso (2003) restricts permutations to the same level of a factor to generate test statistics, for both main factors and, separately, for interactions, that depend only on the effect being tested and a combination of errors (Basso et al., 2009).
For clarity, we concentrate the discussion on Hypothesis 1, in which each observation $E_{ijk}$ represents the mean error of an individual who has been assigned to mechanism $i = \{1, 2\}$ and who is of cognitive ability $j = \{1, 2\}$. We note that the permutation tests assign all observations of an individual to the same factor combination, which we refer to as a cell. Thus, there is no loss in power in using the average error as our dependent variable rather than treating each decision made by an individual as an observation.
Following the main text, we assume that each observation can be decomposed into a mean, two main effects, an interaction, and an error term:

$$E_{ijk} = \mu + \alpha_i + \beta_j + (\alpha\beta)_{ij} + \epsilon_{ijk},$$

in which $i = \{1, 2\}$ is the belief elicitation mechanism assigned to an individual, $j = \{1, 2\}$ is the cognitive ability of the individual, and $k = \{1, \ldots, n_{ij}\}$ is the index of an observation within a treatment cell $E_{ij}$. By including the additive constant $\mu$, all main effects and interactions in the model can be defined to sum to zero. Thus, we assume that $\alpha_1 + \alpha_2 = 0$, $\beta_1 + \beta_2 = 0$, $(\alpha\beta)_{i1} + (\alpha\beta)_{i2} = 0$ for all $i$, and $(\alpha\beta)_{1j} + (\alpha\beta)_{2j} = 0$ for all $j$. In this construction, $\alpha_1 = -\alpha_2$ and thus, under the null of no effect of the mechanism on errors, each of the main effects $\alpha_1 = \alpha_2 = 0$. Under the alternative, $\alpha_1$ represents the difference from a zero average, and the interaction term $(\alpha\beta)_{ij}$ represents the deviation from the sum $\alpha_i + \beta_j$. The model assumes that the errors $\epsilon_{ijk}$ are exchangeable and that $E(\epsilon_{ijk}) = 0$. Errors are exchangeable if the probability of the observed error is invariant with respect to random permutation of the data (Basso et al., 2009).
We begin by considering a balanced design in which all cells have $n$ observations and first construct a statistic for comparing the first factor (i.e., the mechanism) at each of the two levels of the second. Let

$$T_{A|j} = \sum_{k=1}^{n} E_{1jk} - \sum_{k=1}^{n} E_{2jk}, \qquad j = 1, 2.$$

Further let $T_{AB} = T_{A|1} - T_{A|2}$ be a test for the interaction term. In a synchronized permutation, we select $\nu$ observations at random from the $n$ observations in cell $E_{11}$ and exchange them at random with observations from $E_{12}$. At the same time we select $\nu$ observations at random from $E_{21}$ and exchange them at random with elements of $E_{22}$.
Noting that $\beta_1 = -\beta_2$ and $(\alpha\beta)_{11} = -(\alpha\beta)_{12}$, a permutation of $T_{A|1}$ will be equal to

$$T^{*}_{A|1} = n(\alpha_1 - \alpha_2) + (n - 2\nu)\left[(\alpha\beta)_{11} - (\alpha\beta)_{21}\right] + \sum_{k} \epsilon^{*}_{11k} - \sum_{k} \epsilon^{*}_{21k},$$

with the $*$ denoting a permutation of the data and $\epsilon^{*}_{ijk}$ denoting the permuted error. Likewise, a permutation of $T_{A|2}$ is equal to

$$T^{*}_{A|2} = n(\alpha_1 - \alpha_2) - (n - 2\nu)\left[(\alpha\beta)_{11} - (\alpha\beta)_{21}\right] + \sum_{k} \epsilon^{*}_{12k} - \sum_{k} \epsilon^{*}_{22k}.$$

Noting that $(\alpha\beta)_{11} = -(\alpha\beta)_{21}$, the expected value of the test statistic is

$$E\left[T^{*}_{AB}\right] = E\left[T^{*}_{A|1} - T^{*}_{A|2}\right] = 4(n - 2\nu)(\alpha\beta)_{11}.$$

This test statistic is independent of both main effects and relies only on the exchangeability of the errors. We also calculate the test statistic $T_{BA}$, in which the second factor (i.e., cognitive type) is compared at each of the two levels of the first factor (i.e., the belief elicitation mechanism).
The statistic $T_{BA} = T_{B|1} - T_{B|2}$, obtained in the analogous way, is also independent of both main effects. Since $T_{AB}$ is obtained from synchronized permutations involving the row factor A and $T_{BA}$ is obtained from permutations involving the column factor B, both are jointly and equally informative. It follows that their linear combination $T = T_{AB} + T_{BA}$ is a separate exact test for interaction. Following Basso et al. (2009), we use this linear combination as our main test statistic throughout the paper.
Note that in a balanced design, we can divide our test statistic by the number of observations in each cell without changing the relative value of the original test statistic and the value of the permutations. By doing so, both $T_{AB}$ and $T_{BA}$ are equal to the difference between (i) the difference in mean error in cells $E_{11}$ and $E_{12}$ and (ii) the difference in mean error in cells $E_{21}$ and $E_{22}$. Thus, as described in the main text, the interaction term is based on the difference between (i) the difference in mean errors between consistent and inconsistent participants in the SBDM mechanism and (ii) the difference in mean errors between consistent and inconsistent participants in the Introspection mechanism. We follow Basso et al. (2009) and use constrained synchronized permutations, in which we exchange the observations in the same locations within each cell on each iteration. This is done by permuting the observations in cells $E_{11}$ and $E_{12}$ and then using the same permutation of columns when shuffling observations in cells $E_{21}$ and $E_{22}$, cells $E_{11}$ and $E_{21}$, and cells $E_{12}$ and $E_{22}$. The constrained synchronized permutation ensures that the same number of exchanges is made between each pair of cells. We perform an initial permutation of each cell to ensure that the original position of observations is irrelevant. This ensures that each permutation of the data is equally likely.
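To make the procedure concrete, the following is a minimal sketch of a constrained synchronized permutation test for the interaction in a balanced 2×2 design, in the spirit of Basso et al. (2009). It is an illustrative simplification, not the authors' implementation; the cell arrays and function names are assumptions.

```python
import numpy as np

def synchronized_perm_test(e11, e12, e21, e22, n_iter=10_000, seed=0):
    """Constrained synchronized permutation test for the 2x2 interaction.

    Simplified sketch in the spirit of Basso et al. (2009): each e_ij is a
    length-n array of per-participant mean errors in cell E_ij (balanced
    design). Returns a two-sided p-value for T = T_AB + T_BA.
    """
    rng = np.random.default_rng(seed)
    # Initial within-cell shuffle so original positions are irrelevant.
    e11, e12, e21, e22 = (rng.permutation(c) for c in (e11, e12, e21, e22))
    n = len(e11)
    # On the unpermuted data, T_AB and T_BA both equal the interaction
    # contrast E11 - E12 - E21 + E22 (in cell sums).
    interaction = e11.sum() - e12.sum() - e21.sum() + e22.sum()
    observed = 2 * interaction
    count = 0
    for _ in range(n_iter):
        mask = rng.random(n) < 0.5  # one exchange pattern shared by all pairs
        # Row-synchronized exchanges (E11<->E12 and E21<->E22) for T*_AB.
        r11, r12 = np.where(mask, e12, e11), np.where(mask, e11, e12)
        r21, r22 = np.where(mask, e22, e21), np.where(mask, e21, e22)
        t_ab = (r11.sum() - r21.sum()) - (r12.sum() - r22.sum())
        # Column-synchronized exchanges (E11<->E21 and E12<->E22) for T*_BA.
        c11, c21 = np.where(mask, e21, e11), np.where(mask, e11, e21)
        c12, c22 = np.where(mask, e22, e12), np.where(mask, e12, e22)
        t_ba = (c11.sum() - c12.sum()) - (c21.sum() - c22.sum())
        if abs(t_ab + t_ba) >= abs(observed):
            count += 1
    return count / n_iter
```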
Finally, while we aimed for a balanced design, the median split of types was not always exactly 50:50, and our data are not balanced. As discussed in Good (2000), this has the potential of confounding the interaction and main effects. Basso et al. (2009) provide an approach of weighting observations that can be used to conduct synchronized permutations in an unbalanced 2×2 factorial design. However, Hahn and Salmaso (2017) show that these weights also influence the error terms and can lead to a test statistic that is too permissive. The alternative weights proposed by Hahn and Salmaso (2017), which can be used if there is balance in one direction, can be applied only in a subset of our analysis and restrict us to tests using only $T_{AB}$ when they can be applied.
Rather than taking a weighting approach, we instead follow a suggestion in Montgomery (2017) of randomly dropping observations so that each cell has the same number of observations. Although we lose some power by reducing the size of the sample, the resulting data is a random sample of the original and the resulting test statistic is independent of the main effects. To ensure that our random subset of data is not driving our results, we use an outer loop in our testing procedure and perform our permutation test on 1000 subsamples. We report the average p-value over the 1000 subsamples in the main text. In Table 8 below we also report the percentage of iterations in which the individual p-value corresponds to the acceptance/rejection decision of the average p-value. For example, if the p-value of a test is 0.03 and we reject the null of no interaction, column 2 reports the percentage of subsamples in which the null was rejected.
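The balancing-and-averaging step could then be sketched as an outer loop around the test above (again illustrative; `synchronized_perm_test` refers to the sketch in the previous block):

```python
def balanced_pvalue(e11, e12, e21, e22, n_outer=1000, seed=0):
    """Average the synchronized-permutation p-value over random balanced
    subsamples, dropping observations from the larger cells at random."""
    rng = np.random.default_rng(seed)
    cells = (e11, e12, e21, e22)
    n_min = min(len(c) for c in cells)
    pvals = [
        synchronized_perm_test(
            *(rng.choice(c, size=n_min, replace=False) for c in cells),
            seed=i,
        )
        for i in range(n_outer)
    ]
    return float(np.mean(pvals))
```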
The test for Hypothesis 2 is similar to that of Hypothesis 1, with one major exception. In Hypothesis 2, we compare the behavior of the same individual in informative and uninformative questions, and thus the errors of observation $E_{i1k}$ will be correlated with those of $E_{i2k}$. This correlation implies that we cannot randomly permute across informative and uninformative questions without changing the expected error distribution. In this case, we restrict attention to the permutation test $T_{BA}$, where we shuffle the same observations between cells $E_{11}$ and $E_{21}$ and then permute the same columns in cells $E_{12}$ and $E_{22}$. This permutation keeps pairs of observations together and does not change the underlying error distribution.
As a robustness test, we also analyzed the data using the Wald-Type Permutation Statistic (WTPS) developed by Pauly et al. (2015). This procedure uses a free permutation of the dependent variable and is asymptotically valid in the case of heteroscedasticity in the errors across cells. In our experiment, this may be an issue if inconsistent participants have larger variation in errors. As the test is based on a Wald test, it is more sensitive to outliers. As such, we apply the test to the cleaned version of our dataset that drops outliers according to the criterion in Appendix F. As seen below, the acceptance/rejection decisions of the two tests coincide in all four of the main reported tests.

Appendix E: Robustness Check: Alternative Type Classification

In the main text we classified individuals into "consistent" and "inconsistent" types based on their decisions in the last ten periods of Block One of the experiment (Periods 11-20). This selection criterion was used to ensure that individuals were not being classified into a type based on early experimentation. However, as this selection criterion could be interpreted as arbitrary, we also explored how varying this criterion influences our results in the initial experiment, where the criterion was not pre-specified. We reported mean belief errors for the initial experiment in Table 5 of Appendix A. Table 9 presents mean belief errors in the initial experiment using an alternative classification in which we apply a median split over all 20 periods. As seen by comparing the two tables, mean belief errors are similar across the two classifications.
Concentrating on Table 9, which reports the mean errors for the alternative classification, the mean error for consistent participants in the SBDM mechanism is 10.76, while the mean error for inconsistent participants is 15.98: a −5.22 percentage point difference in means in the SBDM mechanism. The mean error for consistent participants in the Introspection mechanism is 14.57, while the mean error for inconsistent participants is 14.96: a −0.39 percentage point difference in means in the Introspection mechanism. The difference-in-difference estimate of −4.83 is significant using the one-sided synchronized test used throughout the paper (p-value = .019). This is comparable to the difference-in-difference estimate of −5.40 obtained when we classify participants based on Periods 11-20, which is the measure we use throughout the paper (p-value = .009).
Table 9: Data from Initial Experiment Using Alternative Classification with all Observations from Block One: Mean belief errors in the SBDM mechanism and the Introspection mechanism for (i) consistent participants, (ii) inconsistent participants, and (iii) both consistent and inconsistent participants combined. The reported p-values are based on permutation tests using 10,000 iterations in which the subset of participants is held fixed and participants are randomly allocated to the SBDM or Introspection mechanism in each iteration of a regression on the treatment effect. The null hypothesis is that the treatment coefficient is equal to 0 (i.e. that there is no difference in accuracy between the SBDM and Introspection). The two-sided test statistic is reported.
Appendix F: Robustness Check: Results with Outliers Excluded
Due to Covid-19 restrictions, our follow-up experiment was conducted online. The resulting data set was noisier than the data generated by the lab-based initial experiment. As part of our robustness checks we removed outliers to ensure that these were not affecting results. We found that statistical tests on the reduced dataset led to statistically stronger conclusions than the results reported in the body of this paper. When classifying participants as outliers, we began by counting the number of times that an individual reported a belief that was (i) less than or equal to 50 and (ii) greater than or equal to 50 in the final 10 periods of Blocks 2 and 3. Next, we classified an individual as an outlier if (i) either of the two counts was 19 or 20 and (ii) less than 50% of belief reports were exactly 50. This leads to the exclusion (for example) of participants who report a single number like 20 or 100 throughout the experiment, or whose probabilities are reported out of 40 (the number of balls in the bucket) rather than 100.
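Stated in code, the exclusion rule reads as follows; this is an illustrative transcription of the rule just described, and the function and variable names are ours.

```python
def is_outlier(beliefs):
    """Apply the exclusion rule to a participant's 20 belief reports from
    the final 10 periods of Blocks 2 and 3 (reports on a 0-100 scale)."""
    n_low = sum(b <= 50 for b in beliefs)    # count (i): reports <= 50
    n_high = sum(b >= 50 for b in beliefs)   # count (ii): reports >= 50
    n_fifty = sum(b == 50 for b in beliefs)
    one_sided = n_low >= 19 or n_high >= 19
    return one_sided and n_fifty < len(beliefs) / 2

# A participant who always reports 20 is excluded; one who mostly
# reports 50 is not.
print(is_outlier([20] * 20))  # True
print(is_outlier([50] * 20))  # False
```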
This rule leads to the exclusion of 9 participants from the initial experiment (4 from the Introspection treatment; 5 from the SBDM treatment), and 16 from the follow-up experiment (5 from the Introspection treatment; 11 from the SBDM treatment). Table 10 reports the mean errors from the pooled data when outliers are excluded. The difference-in-difference estimate for Hypothesis 1 is -3.23, which is statistically significant in a one-sided test using the estimator described in Appendix D (p-value = .009). Tables 11 and 12 report the mean errors for the initial and follow-up experiments separately when the outliers are excluded.

Table 10: Data from both experiments with outliers removed: mean belief errors under the SBDM mechanism and the Introspection mechanism for (i) consistent participants, (ii) inconsistent participants, and (iii) both consistent and inconsistent participants combined. The reported p-values are based on permutation tests using 10,000 iterations in which the subset of participants is held fixed and participants are randomly allocated to the SBDM or Introspection mechanism in each iteration of a regression on the treatment effect. The null hypothesis is that the treatment coefficient is equal to 0 (i.e. that there is no difference in accuracy between the SBDM and Introspection). The two-sided test statistic is reported.

Table 11: Data from the initial experiment with outliers removed: mean belief errors under the SBDM mechanism and the Introspection mechanism for (i) consistent participants, (ii) inconsistent participants, and (iii) all participants combined. The reported p-values are based on permutation tests using 10,000 iterations in which the subset of participants is held fixed and participants are randomly allocated to the SBDM or Introspection mechanism in each iteration of a regression on the treatment effect. The null hypothesis is that the treatment coefficient is equal to 0 (i.e. that there is no difference in belief error between the SBDM and Introspection). The two-sided test statistic is reported.

Table 12: Data from the follow-up experiment with outliers removed: mean belief errors under the SBDM mechanism and the Introspection mechanism for (i) consistent participants, (ii) inconsistent participants, and (iii) all participants combined. The reported p-values are based on permutation tests using 10,000 iterations in which the subset of participants is held fixed and participants are randomly allocated to the SBDM or Introspection mechanism in each iteration of a regression on the treatment effect. The null hypothesis is that the treatment coefficient is equal to 0 (i.e. that there is no difference in belief error between the SBDM and Introspection). The two-sided test statistic is reported.
Instructions and Quizzes
The experiment included 3 blocks of 20 periods, which were referred to in the instructions as Experiments 1, 2 and 3. Statements in parentheses and italics provide additional details or discuss differences between the treatments and do not form part of the experiment instructions.
Experiment One
Thank you for choosing to participate today. We appreciate your time. This experiment is an opportunity to earn money. You will be paid in cash at the end of the experiment. You will be paid a $10 attendance fee. You will also receive payments based on the outcome of three experiments. You will not learn your total payoff until the end of the experiment.
There is a very short, anonymous questionnaire at the end of the experiment. You will be paid when the questionnaire is completed.
If you have any questions during the experiment, please sit quietly and raise your hand. An experiment assistant will be with you as soon as possible.
Payment for the first experiment: You will play the first experiment 20 times. Each repetition is called a "period." In each period you will get a payoff of $0, $4, or $8. At the end of the experiment, 1 of the 20 periods will be chosen randomly by the computer. Each period is equally likely to be chosen. Your cash payment for the first experiment will be your payoff in the randomly chosen period.
(In bold text:) Although you will play 20 periods in the first experiment, you are only paid in cash for the payoff you earn in a single period.
You are going to participate in a decision-making task, which is referred to as the "Choose-A-Side Game." There are two buckets: Bucket A and Bucket B. Each bucket contains 40 balls. Each bucket is divided in half, with 20 balls in each side.
There is a 50-in-100 chance (50% chance) that you have been given Bucket A. The left side of Bucket A contains 12 black balls and 8 white balls. The right side of Bucket A contains 20 black balls and 0 white balls.
(Stylized illustration of Bucket A: a rectangle divided vertically in two, with black or white dots to illustrate the ratio of black and white balls in each half of the bucket.) There is a 50-in-100 chance (50% chance) that you have been given Bucket B. The left side of Bucket B contains 8 black balls and 12 white balls. The right side of Bucket B contains 0 black balls and 20 white balls. (The buckets and balls are all computerized.) (Stylized illustration of Bucket B: a rectangle divided vertically in two, with black or white dots to illustrate the ratio of black and white balls in each half of the bucket.) One of the buckets will be randomly chosen by the computer. Both buckets have an equal chance of being chosen. This means that both buckets have a 50-in-100 chance of being chosen (50%). (You might imagine that the computer tosses a coin to decide which bucket will be used.) You will not be told which bucket has been chosen by the computer. The computer will randomly select a ball from the left hand side of your bucket. Each ball has an equal chance of being chosen. You will be told the colour of the ball. After you see the ball, it is put back in the left hand side of your bucket. If the ball is black, you receive $4. If it is white, you receive $0 (nothing). This is your Stage-1 payoff.
You then have a second chance to draw a ball from your bucket. As before, black balls are worth $4. White balls are worth $0 (nothing). You must decide whether you would like the computer to draw the ball from the left hand side of your bucket, or the right hand side. The computer randomly selects a ball from the side you choose. If it is black, you receive $4. If it is white, you receive $0 (nothing). This is your Stage-2 payoff.
Your payoff for the period is your Stage-1 payoff plus your Stage-2 payoff. In total you might have a payoff of $0, $4, or $8 across both stages of the Choose-A-Side Game.
In each period there is a 50-in-100 (50%) chance of being given Bucket A or Bucket B. Your bucket is randomly determined by the computer and is not affected by the bucket you have been given in previous periods.
Summary: Choose-A-Side Game
There are 2 buckets, Bucket A and Bucket B. Each bucket has a 50-in-100 chance (50%) of being chosen. Each bucket is divided in half, with 20 balls in each half. The computer randomly selects a bucket for you. You do not know which bucket you have been given. You will see a randomly chosen ball from the left-hand side of your bucket. If it is black, your Stage-1 payoff is $4. If it is white, your payoff is $0 (nothing). You then choose whether you want a second ball drawn from the left or right side of your bucket. The computer draws a ball from your chosen side. If it is black, your Stage-2 payoff is $4. If it is white, your payoff is $0 (nothing). Your period payoff is your Stage 1 plus your Stage 2 payoff. In each period you might get a payoff of $0 (nothing), $4, or $8 in the Choose-A-Side Game. 1 of the 20 periods will be randomly chosen. Each period has an equal (1-in-20) chance of being chosen. You will be paid your earnings from that period in cash at the end of the experiment.
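As an editorial aside (not part of the instructions shown to participants), the Bayesian posterior implied by this design follows directly from the quoted ball counts. The sketch below assumes the composition stated above, with 12 of the 20 balls in the left side of Bucket A being black; the function name is illustrative.

```python
def posterior_bucket_a(ball, p_black_a=12/20, p_black_b=8/20, prior_a=0.5):
    """P(Bucket A | ball colour) after one draw from the left-hand side.

    Defaults follow the composition quoted above: 12 of the 20 balls in
    the left side of Bucket A are black, versus 8 of 20 in Bucket B.
    """
    like_a = p_black_a if ball == "black" else 1 - p_black_a
    like_b = p_black_b if ball == "black" else 1 - p_black_b
    return prior_a * like_a / (prior_a * like_a + (1 - prior_a) * like_b)

print(posterior_bucket_a("black"))  # 0.6
print(posterior_bucket_a("white"))  # 0.4
```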
When you have finished Experiment 1 you will be given instructions for a second experiment.
Quiz
At the start of a period the computer randomly selects a bucket for you.
1. What is the chance-in-100 that you get Bucket A? (50) 2. What is the chance-in-100 that you get Bucket B? (50) The bucket has 20 balls in each side; 40 in total. The computer shows you a ball from the left-hand side of your bucket, tells you its colour, and tells you whether it is worth $4 or $0. This is your payoff for Stage 1. The computer puts the ball back in your bucket. The computer asks whether you want the next ball drawn from the left-hand side or the right-hand side of the bucket.
3. How many balls are there in the left-hand side? (20) 4. How many balls are there in the right-hand side? (20) The computer draws a ball from the side you choose, tells you its colour, and tells you if you have won $4 or $0. This is your payoff for Stage 2.
5. What is the minimum payoff possible in a period (Stage 1 + Stage 2)? (0) 6. What is the maximum payoff possible in a period (Stage 1 + Stage 2)? (8) You then finish the period.
7. How many periods are there in this experiment? (20) 8. Do you receive a cash payment for your payoff in every period? (No) 9. Every period has a 1-in-? chance of being paid? (20) 10. When the next period starts, what is the chance-in-100 that you get Bucket A? (50) 11. What is the chance-in-100 that you get Bucket B? (50) (Experiment begins when all questions are answered correctly. At the end of Experiment One:) Thank you! You have now played 20 periods and finished the first experiment. At the end of the third experiment you will find out which period was randomly chosen. You will be paid your payoff from the randomly chosen period. You will be paid in cash. You will now read instructions for the second experiment.
Experiment Two
You will play the second experiment 20 times. Each repetition is called a 'period.' In each period you get a payoff of $0, $4, or $8. At the end of the second experiment, 1 of the 20 periods will be chosen randomly by the computer. Each period is equally likely to be chosen. Your cash payment for the second experiment will be your payoff in the randomly chosen period. Your total payment today will include: • Your show-up fee of $10 • A cash payment for a randomly chosen period from the first experiment • A cash payment for a randomly chosen period from the second experiment • A cash payment for a randomly chosen period from the third experiment Although you will play 20 periods in this second experiment, you only receive cash for your payoff from a single period.
The set-up for Experiment 2 is the same as Experiment 1. There are two buckets: Bucket A and Bucket B. Each bucket contains 40 balls. Each bucket is divided in half, with 20 balls in each side.
There is a 50-in-100 chance (50% chance) that you have been given Bucket A. The left side of Bucket A contains 12 black balls and 8 white balls. The right side of Bucket A contains 20 black balls and 0 white balls.
(Stylized illustration of Bucket A: a rectangle divided vertically in two, with black or white dots to illustrate the ratio of black and white balls in each half of the bucket.) There is a 50-in-100 chance (50% chance) that you have been given Bucket B. The left side of Bucket B contains 8 black balls and 12 white balls. The right side of Bucket B contains 0 black balls and 20 white balls. (The buckets and balls are all computerized.) (Stylized illustration of Bucket B: a rectangle divided vertically in two, with black or white dots to illustrate the ratio of black and white balls in each half of the bucket.) One of the buckets will be randomly chosen by the computer. Both buckets have an equal (50-in-100) chance of being chosen. You will not be told which bucket has been chosen by the computer. The computer will randomly select a ball from the left hand side of your bucket. Each ball has an equal chance of being chosen. You will be told the colour of the ball. After you see the ball, it is put back in the left hand side of your bucket. If the ball is black, you receive $4. If it is white, you receive $0 (nothing). This is your Stage-1 payoff.
(The three treatments involve different instructions from this point.)
SBDM mechanism treatment
After seeing the colour of the ball, you need to think about the chance that the ball was drawn from Bucket A. This is your "belief" that the ball was drawn from Bucket A. Your "belief" is a number between 0 and 100, to indicate the chance-in-100 that the ball has been drawn from Bucket A. For example: If you are sure that Bucket A is being used, your belief is that there is a 100-in-100 chance that Bucket A is being used. If you are sure that Bucket A is not being used, your belief is that there is a 0-in-100 chance that Bucket A is being used. If you believe that it is equally likely that Bucket A is being used as Bucket B, then your belief is that there is a 50-in-100 chance that Bucket A is being used. (These are just examples. You can enter any chance-in-100 belief between 0 and 100.) You then answer 2 questions. Question 1: What is your belief that the ball was drawn from Bucket A? Question 2: Do you want the computer to draw a second ball from the left or right-hand side of your bucket?
The computer then tosses a coin to determine which question is used to determine your Stage-2 payoff. Tails: Question 1. Heads: Question 2. If the computer throws a Heads, your Stage-2 payoff will be determined the same way as Experiment 1 (the Choose-A-Side Game). The computer will draw a ball from the side of the bucket you choose. As before, black balls are worth $4. White balls are worth $0 (nothing).
We will now explain how Stage-2 payoffs are determined if the computer throws "Tails." In Question 1 you tell the computer your belief (the chance-in-100) that the first ball was drawn from Bucket A. If the computer throws "Tails", this is how we determine your Stage-2 payoff: The computer creates a Lottery Bag. The computer randomly chooses a number between 0 and 100. Each number is equally likely to be chosen. Although the computer knows this number, you do not. We call this randomly chosen number "?". The computer fills a bag with 100 chips. "?" chips are black, and the rest are white. That is, ?-in-100 chips are black. There are now two ways to get a payoff of $4: the "Belief about Bucket A" Game and the Lottery Bag Game.
(Table/illustration comparing the chance-in-100 of winning $4 in each game: in the Belief-About-Bucket-A Game it is your belief (chance-in-100) that the ball is from Bucket A; in the Lottery Bag Game it is "?"-in-100.) The computer knows the chance of winning $4 in the Lottery Bag Game. Based on your reported belief that the ball was drawn from Bucket A, the computer will select the game that gives you the highest chance of winning $4. (If the games give you an equal chance of winning, you will play the Lottery Bag Game.) You should think carefully about your belief that the ball has been drawn from Bucket A, as the computer will use your reported belief to decide whether you are paid according to your "Belief about Bucket A" or the "Lottery Bag" Game. This experiment might feel very detailed and complicated, but it is set up this way so that it is in your best interest to report your beliefs honestly and carefully. If you make a report that is not your true belief, your payoff might be determined by the Lottery Bag Game when you would prefer to be paid based on your belief that the ball was drawn from Bucket A (or vice-versa).
The best thing you can do is report your belief honestly, so that you are given the game with the highest chance of a payoff of $4.
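As a further editorial aside (again, not part of the participant-facing instructions), the payoff rule just described can be summarised in a short sketch. The function and variable names are illustrative, and the tie-breaking rule (ties go to the Lottery Bag Game) follows the parenthetical note above.

```python
import random

def sbdm_stage2_win(report, bucket_is_a, rng=random):
    """One 'Tails' round of the mechanism described above.

    report: stated chance-in-100 that the first ball came from Bucket A.
    bucket_is_a: whether the bucket this period really is Bucket A.
    Returns True if the participant wins the $4 Stage-2 payoff.
    """
    q = rng.randint(0, 100)           # the hidden number "?"
    if q >= report:
        # Lottery Bag Game: q black chips out of 100 (ties go here).
        return rng.randint(1, 100) <= q
    # Belief-About-Bucket-A Game: pays $4 if the bucket is Bucket A.
    return bucket_is_a
```

Under this rule, misreporting can only assign the participant the game with the lower winning chance given their true belief, which is why truthful reporting is in the participant's best interest.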
Summary: Experiment 2
You have a 50-in-100 chance of being given Bucket A or Bucket B in each period. You will be shown a ball from the left-hand side of your bucket. You will answer 2 questions. Question 1: What is your belief that the ball was drawn from Bucket A? Question 2: Do you want the computer to draw a second ball from the left or right side of your bucket? The computer then tosses a coin to determine which question is used to determine your Stage-2 payoff. Tails: Question 1. Heads: Question 2.
If the computer throws "Heads" your payoff is determined in the same way as Experiment 1. A second ball will be drawn from your bucket, from the side you choose. If the computer throws "Tails" your payoff will be determined by the "Belief about Bucket A" Game or a Lottery Bag Game. The best thing you can do is report your belief honestly, so that you are given the game with the highest chance of a payoff of $4.
Your period payoff is your Stage-1 payoff plus your Stage-2 payoff. In each period you might get a payoff of $0, $4, or $8. 1 of the 20 periods will be randomly chosen. Each period has an equal (1-in-20) chance of being chosen. You will be paid your earnings from that period in cash at the end of the experiment.
Quiz
Imagine that you are shown a ball. Based on its colour, you report your belief that there is a 20-in-100 chance that the ball is from Bucket A. The computer flips a coin and it lands on "Tails." The computer creates a Lottery Bag Game and randomly includes 25 black chips, so it has a 25-in-100 chance of winning $4. Based on your report, the computer chooses the game that gives you a higher chance of winning $4.
1. Which game will be used to determine your payoff for the period? (Lottery Bag Game) 2. What is your chance-in-100 of winning $4? (25) 3. What is your chance-in-100 of winning $0? (75) Imagine you start a new period. You are shown a new ball. This time, you believe there is an 81-in-100 chance that the ball was taken from Bucket A... but you make an error! You type 18 by mistake. This is your reported belief.
The computer doesn't know your belief, only your reported belief. The computer thinks you believe there is an 18-in-100 chance of winning $4 in the Belief-About-Bucket-A Game.
The computer creates a Lottery Bag Game and randomly includes 36 black chips. It has a 36-in-100 chance of winning $4.
4. What do you believe is your chance-in-100 of winning $4 if you play the Belief-About-Bucket-A Game? (81) 5. What does the computer think you believe is the chance-in-100 of winning $4 if you play the Belief-About-Bucket-A Game? (18) 6. What is your chance-in-100 of winning $4 if you play the Lottery Bag Game? (36) Based on your report, the computer chooses the game that it thinks will give you a higher chance of winning $4. 7. Which game will be used to determine your prize for the period? (Lottery Bag Game) (Experiment begins when all questions are answered correctly.)
Unpaid Introspection Treatment
After seeing the colour of the ball, you need to think about the chance that the ball was drawn from Bucket A. This is your "belief" that the ball was drawn from Bucket A. Your "belief" is a number between 0 and 100 to indicate the chance-in-100 that the ball has been drawn from Bucket A. You should think carefully about your belief that the ball has been drawn from Bucket A. For example: If you are sure that Bucket A is being used, your belief is that there is a 100-in-100 chance that Bucket A is being used. If you are sure that Bucket A is not being used, your belief is that there is a 0-in-100 chance that Bucket A is being used. If you believe that it is equally likely that Bucket A is being used as Bucket B, then your belief is that there is a 50-in-100 chance that Bucket A is being used. (These are just examples. You can enter any chance-in-100 belief between 0 and 100.) You then answer 2 questions. Question 1: What is your belief that the ball was drawn from Bucket A? Question 2: Do you want the computer to draw a second ball from the left or right side of your bucket?
The computer randomly selects a ball from the side you choose. If it is black, you receive $4. If it is white, you receive $0 (nothing). This is your Stage-2 payoff. Your payoff for the period is your Stage-1 payoff plus your Stage-2 payoff. In total you might have a payoff of $0 (nothing), $4 or $8 across both stages of the Choose-A-Side Game. In each period there is a 50-in-100 (50%) chance of being given Bucket A or Bucket B. Your bucket is randomly determined by the computer and is not affected by the bucket you have been given in previous periods.
Summary: Experiment 2
You have a 50-in-100 (50%) chance of being given Bucket A or Bucket B. Each bucket is divided in half, with 20 balls in each half. You will be shown a ball from the left hand side of your bucket. If it is black, your Stage-1 payoff is $4. If it is white, your payoff is $0 (nothing). You will answer 2 questions: Question 1: What is your belief that the ball was drawn from Bucket A? Question 2: Do you want the computer to draw a second ball from the left or right side of your bucket?
You should think carefully about your belief that the ball has been drawn from Bucket A. If it is black, your Stage-2 payoff is $4. If it is white, your payoff is $0 (nothing). Your period payoff is your Stage 1 payoff plus your Stage 2 payoff. In each period you might get a payoff of $0 (nothing), $4 or $8. 1 of the 20 periods will be randomly chosen. Each period has an equal (1-in-20) chance of being chosen. You will be paid your payoff from that period in cash at the end of the experiment. When you have finished Experiment 2 you will be given instructions for a third experiment.
Quiz
At the start of a period the computer randomly selects a bucket for you. The bucket has 20 balls in each side: 40 in total. The computer shows you a ball from the left-hand side of the bucket, tells you its colour, and tells you whether it is worth $0 or $4. This is your payoff for Stage 1. The computer puts the ball back in your bucket. The computer asks whether you want the next ball drawn from the left hand side or the right hand side of the bucket. The computer also asks your belief about the chance-in-100 that the ball was drawn from Bucket A.
Imagine that you're sure the ball is drawn from Bucket A. How would you report this as a chance-in-100 that the ball is drawn from Bucket A?
1. The chance-in-100 of the Ball being from Bucket A is: (100) Imagine that you're sure the ball is not drawn from Bucket A. How would you report this as a chance-in-100 that the ball is drawn from Bucket A?
2. The chance-in-100 of the Ball being from Bucket A is: (0) Imagine that you think there's an equal chance the ball is drawn from Bucket A. How would you report this as a chance-in-100 that the ball is drawn from Bucket A?
3. The chance-in-100 of the Ball being from Bucket A is: (50) The computer draws a ball from the side you choose, tells you its colour, and tells you if you have won $0 or $4. This is your payoff for Stage 2.
4. What is the minimum payoff possible in a period (Stage 1 + 2)? (0) 5. What is the maximum payoff possible in a period (Stage 1 + 2)? (8) You then finish the period.
6. How many periods are there in this experiment? (20) 7. Do you receive a cash payment for your payoff in every period? (No) 8. Every period has a 1-in-? chance of being paid? 1-in: (20) (Experiment begins when all questions are answered correctly.)
No-Elicitation Treatment
You must then decide whether you would like the computer to draw a second ball from the left-hand side of your bucket, or the right-hand side.
The computer randomly selects a ball from the side you choose. If it is black, you receive $4. If it is white, you receive $0 (nothing). This is your Stage-2 payoff. Your payoff for the period is your Stage-1 payoff plus your Stage-2 payoff. In total you might have a payoff of $0 (nothing), $4, or $8 across both stages of the Choose-A-Side Game. In each period there is a 50-in-100 (50%) chance of being given Bucket A or Bucket B. Your bucket is randomly chosen by the computer and is not affected by the bucket you have been given in previous rounds.
Summary: Experiment 2
You have a 50-in-100 (50%) chance of being given Bucket A or Bucket B. Each bucket is divided in half, with 20 balls in each half. You will be shown a ball from the left-hand side of your bucket. If it is black, your Stage-1 payoff is $4. If it is white, your payoff is $0 (nothing). You must then decide whether you would like the computer to draw a second ball from the left-hand side of your bucket, or the right-hand side. A second ball will be drawn from your bucket, from the side you choose. If it is black, your Stage-2 payoff is $4. If it is white, your payoff is $0 (nothing). Your period payoff is your Stage 1 payoff plus your Stage 2 payoff. In each period you might get a payoff of $0 (nothing), $4, or $8 in the Choose-A-Side Game. 1 of the 20 periods will be randomly chosen. Each period has an equal (1-in-20) chance of being chosen. You will be paid your payoff from that period in cash at the end of the experiment. When you have finished Experiment 2 you will be given instructions for a third experiment.
Quiz
(The quiz for the No-Elicitation Treatment is the same as the quiz for Experiment 1, that is, the "Choose-A-Side" Experiment.) (Experiment begins when all questions are answered correctly.)
Experiment Three
You are about to start Experiment 3. This is the final experiment. You will repeat the experiment 20 times. Each repetition is called a "period." In each period you get a payoff of $0, $4, $8 or $12. At the end of the third experiment, 1 of the 20 periods will be chosen randomly by the computer. Each period is equally likely to be chosen. Your cash payment for the third experiment will be your payoff in the randomly chosen period. Your total payment today will include: • Your show-up fee of $10.
• A cash payment for a randomly chosen period from the first computerized experiment • A cash payment for a randomly chosen period from the second computerized experiment • A cash payment for a randomly chosen period from the third computerized experiment Although you will play 20 periods in this third experiment, you are paid cash for your payoff in a single period. Experiment 3 is the same as Experiment 2, except you will see two balls drawn from your bucket. The computer will randomly select a ball from the left-hand side of your bucket. The computer will tell you the colour of the ball, and whether your payoff is $0 or $4. The computer will put the ball back in the left-hand side of your bucket. The computer will randomly select a second ball from the left-hand side of your bucket. Because the ball is randomly chosen, this might be the same ball (chosen a second time) or it might be a new ball. The computer will tell you the colour of the second ball, and whether your payoff is $0 or $4.
You have two chances to get a payoff of $4 in Stage 1. This means you can secure a payoff of $0, $4 or $8 in Stage 1.
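Continuing the editorial sketch from Experiment 1, the posterior after two draws with replacement simply multiplies the likelihoods of the two ball colours; again the 12-black composition of Bucket A's left side is assumed, and the function name is illustrative.

```python
def posterior_two_draws(balls, p_black_a=12/20, p_black_b=8/20, prior_a=0.5):
    """P(Bucket A | two draws with replacement from the left-hand side)."""
    like_a = like_b = 1.0
    for ball in balls:
        like_a *= p_black_a if ball == "black" else 1 - p_black_a
        like_b *= p_black_b if ball == "black" else 1 - p_black_b
    return prior_a * like_a / (prior_a * like_a + (1 - prior_a) * like_b)

print(posterior_two_draws(["black", "black"]))  # ~0.692
print(posterior_two_draws(["black", "white"]))  # 0.5: a mixed pair is uninformative
```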
(The three treatments involve different instructions from this point.)
SBDM Treatment
You then answer 2 questions: • Question 1: What is your belief that the two balls were drawn from Bucket A?
• Question 2: Do you want the computer to draw a third ball from the left or right side of your bucket?
The computer then tosses a coin to determine which Question is used to determine your Stage-2 payment.
Just like Experiment 2: If the computer throws a Heads, your Stage-2 payoff will be determined by the "Choose-A-Side Game." The computer will draw a ball from the side of the bucket you choose. You will get a payoff of $4 if the ball is black, and $0 (nothing) if it is white. If the computer throws a Tails, your Stage-2 payoff will be determined by the Lottery Bag Game or the Belief-About-Bucket-A Game (whether the two balls were drawn from Bucket A). These games are played in exactly the same way as in Experiment 2. Based on your reported belief that the balls were drawn from Bucket A, the computer will select the game that gives you the highest chance of winning $4. You should think carefully about your belief that the balls have been drawn from Bucket A, as the computer will use your reported belief to decide whether you are paid according to your "Belief-About-Bucket-A" or the Lottery game. The experiment is set up so that it is in your best interest to report your belief honestly and carefully. If you make a report that is not your true belief, your payoff might be determined by the Lottery Game when you would prefer to be paid based on your belief that the balls were drawn from Bucket A.
Summary: Experiment 3
You have a 50-in-100 (50%) chance of being given Bucket A or Bucket B. You will be shown 2 balls from the left hand side of your bucket. You will answer 2 questions: • Question 1: What is your belief that the 2 balls were drawn from Bucket A?
• Question 2: Do you want the computer to draw a third ball from the left or right side of your bucket?
The computer then tosses a coin to determine which question is used to determine your Stage-2 payment: • Tails: Question 1 • Heads: Question 2 If the computer throws "Heads" your payoff is determined in the same way as Experiment 1. A third ball will be drawn from your bucket, from the side you choose. If the computer throws "Tails" your payoff will be determined by the "Belief about Bucket A" game or a Lottery Game. The best thing you can do is report your belief honestly, so that you are given the game with the highest chance of a payoff of $4. Your period payoff is your Stage 1 payoff plus your Stage 2 payoff. In each period you might get a payoff of $0 (nothing), $4, $8 or $12 in the third experiment. 1 of the 20 periods will be randomly chosen. Each period has an equal (1-in-20) chance of being chosen. You will be paid your payoff from that period in cash at the end of the experiment.
Quiz
At the start of a period the computer randomly selects a bucket for you. Both buckets are equally likely to be chosen. The bucket has 20 balls in each side: 40 in total. The computer shows you a ball from the left side of your bucket, tells you its colour, and tells you whether your payoff is $4 or $0. The computer puts the ball back in the left-hand side of your bucket.
1. How many balls are there in the left hand side? (20) The computer draws a second ball from the left-hand side of your bucket and tells you whether your second payoff is $4 or $0.
2. Is it possible that the computer drew the same ball twice? (Yes) The computer puts the second ball back in your bucket.
3. How many balls are there in the left-hand side? (20)
The computer asks whether you want the third ball drawn from the left-hand side or the right-hand side of the bucket. The computer also asks you to report your belief that the balls are from Bucket A.
4. What is the minimum payoff possible in a period (both balls from Stage 1 + ball from Stage 2)? (0)
5. What is the maximum payoff possible in a period (both balls from Stage 1 + ball from Stage 2)? (12)
(Experiment begins when all questions are answered correctly.)
Unpaid Introspection Treatment
After seeing the colour of the two balls, you need to think about the chance that the balls were drawn from Bucket A. This is your "belief" that the balls were drawn from Bucket A. Your "belief" is a number between 0 and 100 to indicate the chance-in-100 that the balls have been drawn from Bucket A. You should think carefully about your belief that the balls have been drawn from Bucket A. You then answer 2 questions:
• Question 1: What is your belief that the balls were drawn from Bucket A?
• Question 2: Do you want the computer to draw a third ball from the left or right side of your bucket?
The computer randomly selects a ball from the side you choose. If it is black, you receive $4. If it is white, you receive $0 (nothing). This is your Stage-2 payoff. Your payoff for the period is your Stage-1 payoff plus your Stage-2 payoff. In total you might have a payoff of $0, $4, $8 or $12 across both stages of the third experiment. In each period there is a 50-in-100 (50%) chance of being given Bucket A or Bucket B. Your bucket is randomly determined by the computer and is not affected by the bucket you have been given in previous periods.
Summary: Experiment 3
You have a 50-in-100 (50%) chance of being given Bucket A or Bucket B. You will be shown 2 balls from the left-hand side of your bucket. You will answer 2 questions:
• Question 1: What is your belief that the 2 balls were drawn from Bucket A?
• Question 2: Do you want the computer to draw a third ball from the left or right side of your bucket?
You should think carefully about your belief that the balls have been drawn from Bucket A. A third ball will be drawn from your bucket, from the side you choose. If it is black, your Stage-2 payoff is $4. If it is white, your payoff is $0 (nothing). Your period payoff is your Stage 1 payoff plus your Stage 2 payoff. In each period you might get a payoff of $0 (nothing), $4, $8 or $12 in the third experiment. 1 of the 20 periods will be randomly chosen. Each period has an equal (1-in-20) chance of being chosen. You will be paid your payoff from that period in cash at the end of the experiment.
Quiz
At the start of a period the computer randomly selects a bucket for you. Both buckets are equally likely to be chosen. The bucket has 20 balls in each side: 40 in total. The computer shows you a ball from the left side of your bucket, tells you its colour, and tells you whether your payoff is $4 or $0. The computer puts the ball back in the left-hand side of your bucket.
1. How many balls are there in the left-hand side? (20) The computer draws a second ball from the left-hand side of your bucket and tells you whether your second payoff is $4 or $0.
2. Is it possible that the computer drew the same ball twice? (Yes)
The computer puts the second ball back in your bucket.
3. How many balls are there in the left-hand side? (20)
The computer asks whether you want the third ball drawn from the left-hand side or the right-hand side of the bucket. The computer also asks you to report your belief that the balls are from Bucket A.
4. What is the minimum payoff possible in a period (both balls from Stage 1 + ball from Stage 2)? (0)
5. What is the maximum payoff possible in a period (both balls from Stage 1 + ball from Stage 2)? (12)
(Experiment begins when all questions are answered correctly.)
No-elicitation Treatment
You then have a third chance to draw a ball from your bucket. As before, black balls are worth $4. White balls are worth $0 (nothing). You must decide whether you would like the computer to draw the ball from the left hand side of your bucket, or the right hand side. The computer randomly selects a ball from the side you choose.
If it is black, you receive $4. If it is white, you receive $0 (nothing). This is your Stage-2 payoff.
Your payoff for the period is your Stage-1 payoff plus your Stage-2 payoff. In total you might have a payoff of $0, $4, $8, or $12 across both stages of the third experiment.
In each period there is a 50-in-100 (50%) chance of being given Bucket A or Bucket B. Your bucket is randomly determined by the computer and is not affected by the bucket you have been given in previous periods.
Summary: Experiment 3
You have a 50-in-100 (50%) chance of being given Bucket A or Bucket B. You will be shown 2 balls from the left-hand side of your bucket. You will answer one question:
Question: Do you want the computer to draw a third ball from the left or right side of your bucket?
A third ball will be drawn from your bucket, from the side you choose. If it is black, your Stage-2 payoff is $4. If it is white, your payoff is $0 (nothing). Your period payoff is your Stage 1 payoff plus your Stage 2 payoff. In each period you might get a payoff of $0 (nothing), $4, $8, or $12 in the third experiment. 1 of the 20 periods will be randomly chosen. Each period has an equal (1-in-20) chance of being chosen. You will be paid your payoff from that period in cash at the end of the experiment.
Quiz
At the start of a period the computer randomly selects a bucket for you. Both buckets are equally likely to be chosen. The bucket has 20 balls in each side: 40 in total. The computer shows you a ball from the left side of your bucket, tells you its colour, and tells you whether your payoff is $4 or $0. The computer puts the ball back in the left-hand side of your bucket.
1. How many balls are there in the left hand side? (20) The computer draws a second ball from the left-hand side of your bucket and tells you whether your payoff is $4 or $0.
2. Is it possible that the computer drew the same ball twice? (Yes)
The computer puts the second ball back in your bucket.
3. How many balls are there in the left-hand side? (20)
The computer asks whether you want the third ball drawn from the left-hand side or the right-hand side of the bucket.
4. What is the minimum payoff possible in a period (both balls from Stage 1 + ball from Stage 2)? (0)
5. What is the maximum payoff possible in a period (both balls from Stage 1 + ball from Stage 2)? (12)
(Experiment begins when all questions are answered correctly.)
"year": 2021,
"sha1": "5cd8655bfdbf81f5a5b81663e280e7824b857316",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10683-021-09722-x.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "5cd8655bfdbf81f5a5b81663e280e7824b857316",
"s2fieldsofstudy": [
"Economics",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Estrogen receptor transcription and transactivation: Structure-function relationship in DNA- and ligand-binding domains of estrogen receptors
Estrogen receptors are members of the nuclear receptor steroid family that exhibit specific structural features, ligand-binding domain sequence identity and dimeric interactions, that single them out. The crystal structures of their DNA-binding domains give some insight into how nuclear receptors discriminate between DNA response elements. The various ligand-binding domain crystal structures of the two known estrogen receptor isotypes (α and β) allow one to interpret ligand specificity and reveal the interactions responsible for stabilizing the activation helix H12 in the agonist and antagonist positions.
Introduction
The physiological effects of estrogens have long been considered to be mediated by a single nuclear receptor (estrogen receptor α [1,2]) through which the signal is transduced to the transcriptional machinery and chromatin template of target responsive genes. The cloning of a second estrogen receptor (ER) isoform (ERβ) ([3-5] and references cited therein) stimulated interest in the search for differences in tissue distribution and functioning. The ERs (ERα and ERβ) belong to the nuclear receptor (NR) superfamily, a large group of transcriptional regulators that encompasses receptors for steroid and thyroid hormones, retinoids, vitamin D, peroxisome proliferator-activated receptors and orphan receptors for which no ligand has yet been characterized.
The structural organization of NRs consists of six functional regions (A-F) showing various degrees of sequence conservation (Fig. 1a). The N-terminal A/B domain, not well conserved among NRs, contains the autonomous transactivation function AF-1. The size of the domain is extremely variable, and large A/B domains, extending beyond 550 residues in the case of the human androgen receptor, characterize steroid receptors. This domain is also poorly conserved between the two ER isoforms (with little or no detectable similarity, 17% identity). No clear secondary structure can be identified in these regions and no structural data have until now been obtained. We will thus focus on the better characterized parts, for which functional and structural data are available, such as the highly conserved C region harboring the DNA-binding domain (DBD) and the conserved E region containing the ligand-binding domain (LBD). The two remaining regions, D and F, are again of variable size and are not conserved: D can be considered as a linker peptide between the DBD and the LBD, whereas F is a C-terminal extension region of the LBD.
ERα and ERβ share a modest overall sequence identity (47%) [3]. The conservation, however, is much higher when considering the DBD and LBD domains (94 and 59%, respectively) (Fig. 1a) [3]. Ligand-binding experiments revealed high-affinity and specific binding of estradiol to both ERα and ERβ isotypes, which both stimulate transcription of an ER responsive gene containing an estrogen responsive element (Fig. 1b) in an estradiol-dependent manner [5]. No obvious differences between the two isotypes, alone or combined, were observed in estrogen responsive element transcriptional assays in the presence of estradiol. Some synthetic or naturally occurring ligands nevertheless have different relative affinities/activities for ERα versus ERβ [5], which will be analyzed in the light of the crystal structures.
The DNA-binding domains
The DBDs of the two ER isoforms share the same response elements. As DBD structures are available only for ERα, the comparison will be made with other NRs, especially with the glucocorticoid receptor (GR) [6].
Several three-dimensional structures (nuclear magnetic resonance as well as X-ray investigations) are known for the ERα DBD alone and in complex with DNA [7 • ,8 •• ,9,10]. The topology of ER DBDs (Fig. 2a) is characterized by a zinc finger-like motif with eight cysteines that constitute the tetrahedral coordination of two zinc ions. Residues participating in the 'D box' have been shown to be involved in the dimerization interface, whereas residues present in the 'P box' are implicated in specific interaction with DNA and are in contact with the central base pairs of the palindromic response element (Fig. 2b). The structural data now available from different nuclear receptors provide clear insight into the response element discrimination problem. For the steroid receptors GR and ERα, DBDs are monomers in solution and form dimers when bound to their respective response elements. DNA thus acts as a positive allosteric effector of its own recognition: binding of the second monomer favors the binding site with a correct spacer length [6,10]. The crystal structures of the GR DBD in complex with a cognate (spacer = 3) and a nonspecific (spacer = 4) response element (glucocorticoid response element) [6,11], and that of the ER DBD interacting with a cognate estrogen responsive element [9,10], identified amino acids in the 'P box' interacting with the two discriminating bases (Fig. 2). Discrimination between estrogen and glucocorticoid response elements is made by the two central base pairs of the DNA response element, interacting with residues of the first zinc finger helix going across the DNA large groove [6,10].
The ligand-binding domain
The LBD is a globular domain that harbors a hormone binding site, a dimerization interface (homo- and heterodimerization), and a coactivator and corepressor interaction function. Despite low sequence identity in LBDs of the NR superfamily (Fig. 1a), the three-dimensional structures of the LBDs are similar.
The first reported crystal structure for a steroid receptor was that of ERα [12 •• ,13]. Crystals could be obtained from a chemically modified protein that formed a complex with the natural ligand 17β-estradiol. This structure together with that of the raloxifene (antagonist) complex presented concomitantly [12 •• ] led to a structural proposal to explain agonism and antagonism in NRs.
ER LBDs are arranged in an antiparallel α-helical 'sandwich' fold that was first described for the apo human RXRα LBD [14 •• ]. This fold appears to be universal within the receptor superfamily. For the sake of comparison with RXR, helix H2, which does not exist in ER, has been considered in the numbering scheme. The liganded ER LBD (Fig. 3a) contains 11 α-helices (H1-H12) organized in a three-layered sandwich structure with H4, H5, H6, H8 and H9 flanked on one side by H1 and H3, and on the other side by H7, H10, and H11. The ligand pocket is closed on one side by an antiparallel β-sheet and on the other by H12, known from mutagenesis studies to be directly involved in the transactivation function AF-2 [15], and for which several conformations ('agonist' or 'antagonist' conformations) have been evidenced [16 • ].
The dimer interface
The ER LBDs form dimers within both agonist and antagonist complexes in a manner consistent with solution studies [17,18]. The overall homodimeric arrangement is the same whatever the class of the ligand or the ER isotype and is similar to that observed in the crystal structure of apo RXRα [14 •• ], being a symmetric 'head-to-head' arrangement where each protomer is slightly tilted from the twofold dimer axis. The dimerization interface involves residues from helix H8 up to helix H11, but the most important contact surface is located on H10 through a hydrophobic leucine zipper-like interaction zone and hydrophilic contacts (direct hydrogen bonds or via water molecules). The dimer contacts in both ER isoforms are constituted mainly by helices H10 and H11 [12 •• ,19 •• ], which are also in contact with the ligand, providing the link between ligand binding and dimerization. However, this dimeric interface may not be universal in the steroid receptor family, as indicated by the crystal structure of the progesterone-bound LBD of the human progesterone receptor (PR) [20 • ]. Unlike the ER, the PR LBD was crystallized with its F region (residues 922-933), which is essential for hormone binding by the PR [21], GR [22] and androgen receptor [23,24]. The C-terminal extension adopts a β-strand conformation tightly packed against the core protein, contacting helices H8, H9 and H10. This β-strand then forms an antiparallel β-sheet with another β-strand inserted between H8 and H9, and impinges on the dimer interface seen in the ER. As the ER LBDs were not crystallized with their C-terminal F region (residues 553-595), we have to consider the possibility that the present ER dimer interface could be an artifact. Indeed, as the C-terminal end of the LBD of one protomer points towards the other, it is conceivable that an extension after H12 could interfere with the other protomer and perturb the dimerization. Additional structural information will be necessary to solve this problem.
Formation of ERα/ERβ heterodimers has been demonstrated in vitro and in transfected cells [25], but the in vivo physiological role of this cross-signaling is unclear. The three-dimensional arrangement of the heterodimer is not known, but it has been suggested [19 •• ] that it would be the same as in homodimers.
The coactivator recognition groove
Agonist binding induces a conformational rearrangement in the LBD [14 •• ,26 •• ] resulting in the formation of a specific binding site for the helical NR-box module of nuclear coactivators [16 • ,27,28,29 •• ,30 •• ]. This binding site is a hydrophobic groove formed by residues from helices H3, H4, H5 and H12, and the turn between helices H3 and H4 (Fig. 3c). The coactivator LXXLL motif functions as a hydrophobic docking module that binds on the surface of the LBD. In the case of ERs [12 •• ], both partial and pure antagonists induce conformations of the AF-2 region that are distinct from that observed in the presence of pure agonists [12 •• ,30 •• ]. The binding of raloxifene and tamoxifen is accompanied by major structural reorganization in the tertiary structure in both ER isotypes [12 •• ,19 •• ,30 •• ]. The large piperidine extension of raloxifene provokes steric clashes that prevent the transactivation helix H12 from adopting its characteristic conformation. Instead, H12 lies tightly in the coactivator recognition groove. Note that H12 possesses an NR box-like sequence (LXXML versus LXXLL) that perfectly mimics the interactions made by NR-box peptides in the ERα complex, whereas there is a shift in the H12 position of between 1.9 and 3.2 Å in the ERβ complex. This displacement does not affect the final location of the two key leucine residues, which coincide with the first and third NR-box leucines (Fig. 4). H12 in the genistein/human ERβ complex surprisingly also adopts an orientation that exhibits fundamental differences in length, positioning and interactions compared with the 'antagonist' orientation.
Agonist and antagonist recognition by human ERα
The first crystal structure of an ERα LBD provided the molecular basis of the interaction of the receptor with its natural ligand 17β-estradiol (E2) [12 •• ]. The E2 cavity is completely shielded from the external environment and buries the ligand in a highly hydrophobic environment mostly defined by 22 residues. Two polar regions located at opposite sides of the ligand-binding pocket can be identified (Fig. 3b), and they are involved in the anchoring of the E2 hydroxyl moiety at positions 3 and 17. The phenolic hydroxyl group of the A-ring (3-OH) is hydrogen bonded to Glu353 from H3, and to Arg394 from H5 and a water molecule. The hydroxyl group of the D-ring (17β-OH) forms a single hydrogen bond with His524 (H11). The cavity delimited by the protein exhibits a probe-accessible volume of 450 ų, which is much larger than the molecular volume of the natural ligand (250 ų). The probe occupied volume, expressed as the ratio of the two volumes, is significantly higher in the retinoid family. This observation is likely to be general to the steroid receptor family, as the PR, for which an LBD crystal structure is available, presents an even larger cavity (603 ų). The ER and PR structures superimpose well (1.2 Å rmsd over 191 Cα atoms), with the planes of the ligands, defined by the A-, B-, C- and D-rings, almost exactly superimposed.
A large amount of data has been accumulated on the binding of synthetic agonist ligands to ERα [31,32]. In the crystal structure of the complex with the synthetic agonist diethylstilbestrol (DES), one phenolic ring of the ligand occupies the position of the E2 A-ring, with the other phenolic ring shifted 1.7 Å from the position of the 17β-hydroxyl group of the E2 D-ring. DES contacts two regions of the ligand-binding pocket not occupied by E2, located at the 7-α and 11-β positions of E2, and filled by the two ethyl groups of DES.
Crystal structures of antagonist-bound ERα complexes (raloxifene, tamoxifen) showed that the position of the antagonist ligand in the binding pocket is dictated by the hydrogen bonds to the 3-hydroxyl group corresponding to that of the E2 A-ring and by the bulky chain of the ligand that displaces H12 [12 •• ,30 •• ]. Note that additional structural changes occur near the N-terminus of the H3 region, in the loop connecting H1 to H3 and in the loop connecting H6 to H7. These changes are not imposed by the presence of the bulky chain in antagonist ligands and thus could also be induced by agonist ligands. These structural differences induced by the ligands highlight the intrinsic ERα LBD plasticity.
ERβ specificity
The crystal structure of the human ERβ complex bound to genistein [19 •• ], an isoflavonoid phytoestrogen [33], is also reminiscent of the E2 complex, especially for the hydrogen bond network around the two hydroxyl groups at the opposite sites of the ligand [19 •• ]. The flavone portion of genistein adopts a position similar to the C- and D-rings of E2 in human ERα, with the distal hydroxyl group forming a hydrogen bond to His475 (His524 in human ERα). The remaining flavone moieties exhibit no contact with the protein. The binding cavity of the β-isoform is smaller overall (390 ų versus 450 ų, of which genistein occupies 236 ų) compared with the ligand-binding cavity in the E2/human ERα complex. Among the residues lining the binding pocket, two differ significantly (Fig. 3d): on the β-side of E2, Leu384 in H5 of ERα is replaced by Met336 in ERβ; and on the α-side, below the E2 D-ring, Met421 in loop 6-7 of ERα is replaced by Ile373 in ERβ. These two residues are most probably responsible for the higher affinity of genistein for ERβ.
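As an aside, the cavity-occupancy figures implied by the volumes quoted above are simple ratios, which can be checked directly (all values taken from the text):

```r
# Ligand volume / probe-accessible cavity volume, in cubic angstroms (text values)
occupancy <- function(ligand, cavity) round(100 * ligand / cavity, 1)
occupancy(250, 450)  # 17beta-estradiol in human ERalpha: ~55.6%
occupancy(236, 390)  # genistein in human ERbeta: ~60.5%
```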
Together with the genistein/human ERβ LBD complex, the crystal structure of another complex with raloxifene bound to the rat ERβ LBD (raloxifene/rat ERβ LBD) has been reported [19 •• ]. In this new complex, raloxifene binds in a position similar to that observed in the human ERα complex (0.52 Å rmsd between the two ligands once the proteins are superimposed). The major difference is observed in the phenolic ring, where the distal hydroxyl moieties in the two isotypes are 1.4 Å apart. The result of the different position of raloxifene in the β-isotype is that the piperidine ring pointing outside the cavity is shifted outward (0.9-1.5 Å) and prevents H12 from adopting its agonist position [19 •• ]. This shift is most probably responsible for the pure antagonist character of raloxifene on ERβ.
"year": 2000,
"sha1": "1bc094f7c6bacf819f2143ac1e6fc43322687d44",
"oa_license": "CCBY",
"oa_url": "https://breast-cancer-research.biomedcentral.com/track/pdf/10.1186/bcr80",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "86ef8f3ba1a501f6bcae32836bd89f4dfeaf5440",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Monitoring Tweets for Depression to Detect At-risk Users
We propose an automated system that can identify at-risk users from their public social media activity, more specifically, from Twitter. The data that we collected is from the #BellLetsTalk campaign, which is a wide-reaching, multi-year program designed to break the silence around mental illness and support mental health across Canada. To achieve our goal, we trained a user-level classifier that detects at-risk users with reasonable precision and recall. We also trained a tweet-level classifier that predicts whether a tweet indicates depression. This task was much more difficult due to the imbalanced data: in the dataset that we labeled, 5% of tweets indicated depression and 95% did not. To handle this class imbalance, we used undersampling methods. The resulting classifier had high recall but low precision. Therefore, we only use this classifier to compute the estimated percentage of depressed tweets and to add this value as a feature for the user-level classifier.
Introduction
According to a recent report of the World Health Organization (WHO), mental health is an integral part of health and well-being (WHO, 2004). Mental disorders can affect anyone, rich or poor, male or female, of any age or social group. The experience of mental illness is often described as difficult, especially when associated with demeaning prejudices and lack of understanding. Mental illness is also difficult to diagnose. There is no reliable laboratory test for most forms of mental illness and typically, diagnostic is based on the patient's self-reported experiences, behaviors reported by relatives, and a mental status examination. Unfortunately, mental disorder problems are increasing worldwide.
In the context of mental illness, depression is very common. In Canada, 5.3% of the population had presented a depressive episode in the past 12 months. According to the Canadian Mental Health Association (CMHA, 2016), 20% of Canadians belonging to different demographics have experienced mental illness during their lifetime, and around 8% of adults have gone through major depression. The Mental Health Commission of Canada (MHCC, 2016) has reported on the broad implications of mental illness: of the nearly 4,000 Canadians who die each year by suicide, 90% were identified as having some form of mental disorder. According to the World Health Organization (WHO, 2016), suicide is a preventable health problem; to be successful in preventing suicide, it is therefore of great importance to identify depression as a first indicator of further problems.
Apart from the severity of mental disorders and their influence on one's mental and physical health, the social stigma and discrimination that take the forms of rejection, isolation, abuse and fear of embarrassment have caused individuals with mental disorders to be neglected by the community, as well as to stay away from obtaining the necessary treatments (WHO, 2016). Due to the severity of the harm mental disorders can cause to one's life and their impact on the entire society, organizations such as Bell Canada have initiated programs to raise funding for mental health programs as well as to create awareness within society. The goal of this research is to exploit the massive data issued from Twitter and apply social media mining and sentiment analysis methods to detect users at risk of depression. It is an open question whether a tweet-level or user-level classifier is best for detecting at-risk people. A tweet-level classifier monitors individual tweets, identifying messages that indicate risk for depression; a user-level classifier looks at the tweet history and determines if a person is at risk from their corpus of messages over a period of time. This paper describes experiments on both classifiers. Our system can be used by authorities to find a focused group of at-risk users. It is not a platform for labeling an individual as a patient with depression, but only a platform for raising an alarm so that the relevant authorities could take necessary interventions to further analyze the predicted user to confirm his/her state of mental health. We respect the ethical boundaries relating to the use of social media data and therefore do not use any user identification information in our research.
Related Work
With the gradual increase in social media usage and the extensive level of self-disclosure within such platforms (Park et al., 2012), research has been conducted to identify mental disorders at an individual as well as at a society level. Researchers have used features such as behavioural characteristics, depression language, emotion and linguistic style, reduced social activity, increased negative affect, clustered social network, raised interpersonal and medical fears, increased expression of religious involvement, and use of negative words in order to determine the cues of major depressive disorder (De Choudhury et al., 2013a; Tsugawa et al., 2015). Tsugawa et al. (2015) also used syntactical features such as bag of words (BOW) and word frequencies to identify the ratio of tweet topics, and concluded that topic modeling adds a positive contribution to the predictive model compared to the use of the bag-of-words model, which could also result in overfitting.
The successful use of computational linguistics techniques in identifying the progress and level of depression of individuals in online therapy could bring greater insights to clinicians, to apply interventions effectively and efficiently. Howes et al. (2014) used 882 transcripts gathered from an online psychological therapy provider and determined that the use of linguistic features can be considered more valuable in predicting the progress of a patient compared to sentiment and topic-based analysis. In contrast to traditional sentiment analysis approaches that use three main polarity classes (i.e., positive, negative, and neutral), Shickel et al. (2016) divided the neutral class into two classes: neither positive nor negative, and both positive and negative. With the use of syntactic and lexical features, and also by representing words as vectors in the vector space (word embeddings), the authors managed to achieve an overall accuracy of 78% for the four-class polarity prediction. The CLPsych 2015 Shared Task participants were provided with a dataset of users who self-reported PTSD or depression. For each user in the dataset, nearly 3,200 most recent posts were collected using the Twitter API. Resnik et al. (2015a), whose system ranked first in the CLPsych 2015 Shared Task, created 16 systems based on features derived using supervised LDA, supervised anchors (for topic modeling), lexical TF-IDF, and a combination of all. An SVM classifier with a linear kernel obtained an average precision above 0.80 for all three tasks (i.e., depression vs. control, PTSD vs. control and depression vs. PTSD) and a maximum precision of 0.893 for differentiating PTSD users from the control group. Preotiuc-Pietro et al. (2015) employed user metadata and textual features from the corpus provided by the CLPsych 2015 Shared Task to develop a linear classifier to predict users having either one of the mental illnesses. They used the bag-of-words approach to aggregate word counts, topics derived from clustering methods, and metadata (e.g., followers, followees, age, gender) from the users' Twitter profiles as the main feature categories. With the use of logistic regression and linear SVM in an ensemble of classifiers, the authors managed to obtain an average precision above 0.800 for all three tasks, with a maximum score of 0.867 for differentiating users in the control group from the users with depression.
The use of the supervised LDA and the supervised anchor model was proven to be highly successful compared to unsupervised clustering approaches, and even more efficient than using linguistic methods such as n-grams and other lexicon-based approaches (Resnik et al., 2015b). Resnik et al. (2015a) proved that such approaches can be successfully used in identifying users with depression who have self-disclosed their mental illnesses on Twitter. In general, a clear distinction in the lexical and syntactic structure of the language used by individuals with different mental disorders, as well as between individuals within a control group, can be identified throughout the literature mentioned above, as well as from the explorative analysis conducted by Gkotsis et al. (2016). Due to the reliability of the lexical and behavioral features used in many of the models mentioned above, our proposed solution also focused on these feature categories. Even though the dataset we have used is relatively smaller than the ones used by most of the experiments mentioned above, we managed to obtain reliable results in identifying users with mental disorders.
Datasets
For this research, we prepared a dataset consisting of tweets from users who participated in the #BellLetsTalk 2015 campaign. #BellLetsTalk is a campaign created by Bell Canada to help reduce stigma and promote awareness and understanding of mental health issues. Canadians opened up the dialogue on mental health, contributing more than 122 million tweets, texts, calls and social media shares on #BellLetsTalk Day, helping to raise more than $6.1 million for mental health initiatives. We collected data for the year 2015 and we limited it to Canadian users. 156,612 tweets were obtained from 25,362 users. Only data made public by users was collected for this task. To clean the dataset, we used LDA (Grün and Hornik, 2011) to obtain topics from tweets. Prominent topics included "campaign publicity", "mental health awareness", "raising donations", and "facts about mental health". If a tweet contained two or more keywords from any of the mentioned topics, it was removed from the dataset. Additionally, retweets, tweets beginning with a mention (@), short tweets (less than 5 words), and URLs were removed. We then used words like "depressed", "suffer", "attempt", "suicide", "battle", "struggle", "diagnosed", in addition to first person pronouns, to identify a subset of tweets where users are talking about depression. A human annotator reviewed these tweets to verify whether the user is disclosing their own depression or talking about a friend or family member. Using this method we identified 95 users who disclosed their own depression. For these 95 users we collect all tweets from 2015 and refer to these as the "self-disclosed" set. All remaining users were considered as control users. Similarly, for control users, all tweets from 2015 are collected and referred to as the "control" set.
To prepare a dataset to label at tweet-level, we selected 60 users who had between 100 and 300 tweets. 30 users were selected from the self-disclosed set, and 30 from the control set. We asked two annotators to label 10 users with depression level 0-1, where 0 indicates no depression and 1 indicates some depression. We found that most tweets fell into the "no depression" class. Since annotation is an expensive and time-consuming task, we looked for tweets that could be removed without losing relevant tweets. Our first intuition was to remove tweets containing positive words, but this intuition proved to be false, as many of the tweets labeled as depressed contained positive words. Next we looked for neutral tweets. Most neutral tweets were labeled as "no depression" and hence we decided to remove these from our dataset. The list of positive and negative words was obtained from Hansen et al. (2011). The final dataset consisted of 8,753 tweets. We refer to this dataset as 60Users (the 60Users dataset annotated at tweet-level will be made available on request for further research). The annotators were then asked to label the remaining 50 users. The Kappa value for 2-annotator agreement was found to be 0.67. If a tweet was labeled as depressed by at least one annotator, the tweet was considered as depressed (considering a tweet as depressed only when both annotators agreed that the tweet was depressed reduced the amount of positive training samples, but did not impact performance).
To prepare a dataset to label at tweet-level, we selected 60 users who had between 100 and 300 tweets. 30 users were selected from self-disclosed set, and 30 from control set. We asked two annotators to label 10 users with depression level 0-1, where 0 indicates no depression and 1 indicates some depression. 4 We found that most tweets fell into the "no depression" class. Since annotation is an expensive and a time-consuming task, we looked for tweets that could be removed without losing relevant tweets. Our first intuition was to remove tweets containing positive words, but this intuition proved to be false as many of the tweets labeled as depressed contained positive words. Next we looked for neutral tweets. Most neutral tweets were labeled as "no depression" and hence we decided to remove these from our dataset. The list of positive and negative words was obtained from Hansen et al. (2011). The final dataset consisted of 8,753 tweets. We refer to this dataset as 60Users. 5 The annotators were then asked to label the remaining 50 users. The Kappa value for 2-annotator agreement was found to be 0.67. If a tweet was labeled as depressed by at least one annotator, the tweet was considered as depressed. 6 We prepared a larger dataset to be labeled at user-level. This dataset consists of 80 users from self-disclosed set and 80 control users. It included the 60 users annotated above at tweet-level. We refer to this dataset as 160Users. 7 For fast annotation at user-level, we provided an undersampled version of the dataset to annotators. It was undersampled using our tweet-level classifier discussed in section 4. Nonetheless, for our experiments, we used all tweets from 160 users. The dataset was annotated by two annotators as "depressed" and "not-depressed" user. The conflicts were resolved by a third annotator. The following guidelines were provided for the task: • Depressed: The user shows clear signs of depression, or shows signs that could result in depression in near future. There is enough reason for a public health member or doctor to investigate further. Additionally, users who self-disclose depression but there are no other tweets indicative of depression, are also labeled as depressed. 8 • Not-depressed: the user does not show any signs of depression.
A third dataset is obtained from the CLPsych 2015 shared task (Coppersmith et al., 2015). The dataset consists of 1,746 users. The training set consists of 327 depression users, 246 PTSD users, and, for each, an age and gender matched control user. The test set consists of 150 depression users and 150 PTSD users, but we cannot use it because the labels for the test set are not available. For our task, we use the depression and control users from the training set. We refer to this as the CLPsych2015 dataset.
The 60Users dataset was split to contain one-third of the tweets for testing (2,971 tweets) and two-thirds for training purposes (5,782 tweets). In the case of the 160Users and CLPsych2015 datasets, we split each dataset into a 70% training and 30% test set. Each model was trained on the training set using 10-fold cross validation and then tested on a held-out test set.
Tweet-level Classifier
For the tweet-level classification, a preliminary experiment was performed on the 60Users dataset using BOW as features and an SVM classifier. This gave a very high accuracy because it classified all the tweets into the majority class. This was due to class imbalance: the dataset consisted of 95% not-depressed tweets and 5% depressed tweets. To deal with the class imbalance, we then experimented with re-sampling methods, including undersampling (randomly removing examples from the majority class) and oversampling, in particular adding examples for the minority class using the Synthetic Minority Oversampling Technique (SMOTE) (Chawla et al., 2002). For evaluation, we will look at recall, precision, and F-measure for the class of interest (depression), instead of accuracy.
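A minimal sketch of the two balancing strategies, using the packages the paper names (DMwR for SMOTE, caret for undersampling), is shown below; the toy data frame is a synthetic stand-in for the actual tweet features, not the authors' pipeline.

```r
library(caret)  # downSample()
library(DMwR)   # SMOTE()

# Toy stand-in for the tweet-level training frame: 7 numeric features plus a
# factor label with roughly the 5%/95% split described above.
set.seed(42)
n <- 2000
tweets <- data.frame(matrix(rnorm(n * 7), ncol = 7))
tweets$class <- factor(ifelse(runif(n) < 0.05, "depressed", "not_depressed"))

# Undersampling: randomly drop majority-class rows until the classes match.
down <- downSample(x = tweets[, 1:7], y = tweets$class, yname = "class")
table(down$class)

# SMOTE: synthesize minority-class rows by interpolating between nearest
# neighbours (Chawla et al., 2002); default parameters, as in the paper.
smoted <- SMOTE(class ~ ., data = tweets)
table(smoted$class)
```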
The goal of training a tweet-level classifier is to predict whether a given tweet indicates depression or not. For this, we perform two sets of experiments. The first set of experiments uses 7 features derived from the tweet's text. These include polarity words, depression words, first person pronoun, and second person pronoun counts. These are referred to as initial features. Polarity words include counts of very negative words, negative words, positive words and very positive words. The list of polarity words was obtained from AFINN (Hansen et al., 2011). Depression-related terms are obtained from Maigrot et al. (2016). The second set of experiments uses unigrams (BOW), in addition to the 7 initial features.
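A sketch of how the 7 initial counts might be computed per tweet follows; the tiny word lists are illustrative stand-ins for the AFINN lexicon (Hansen et al., 2011) and the depression terms of Maigrot et al. (2016), which are much larger.

```r
# Toy lexicons standing in for AFINN and the depression-term list.
very_neg <- c("suicidal", "hopeless"); neg <- c("sad", "tired")
pos      <- c("good", "fine");         very_pos <- c("great", "wonderful")
depress  <- c("depressed", "diagnosed", "struggle")
first_pp <- c("i", "me", "my", "mine"); second_pp <- c("you", "your", "yours")

initial_features <- function(tweet) {
  # Lowercase, strip punctuation, split on whitespace, then count matches.
  toks <- strsplit(tolower(gsub("[[:punct:]]", " ", tweet)), "\\s+")[[1]]
  c(very_neg   = sum(toks %in% very_neg),
    neg        = sum(toks %in% neg),
    pos        = sum(toks %in% pos),
    very_pos   = sum(toks %in% very_pos),
    depression = sum(toks %in% depress),
    first_pp   = sum(toks %in% first_pp),
    second_pp  = sum(toks %in% second_pp))
}

initial_features("I was diagnosed and I struggle, but my friends are great")
```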
Each set consists of 3 experiments performed on the 8,753 tweets from the 60Users dataset.
1. Linear SVM trained on the original dataset
2. Linear SVM trained on the dataset balanced using SMOTE (for oversampling, we use the SMOTE function from the DMwR package (Torgo, 2010) with default values; this implementation is based on Chawla et al. (2002))
3. Linear SVM trained on the dataset balanced by undersampling (for undersampling, we used the "downSample" function in the caret package (Kuhn et al., 2012) with default values)
User-Level Classifier
The goal of training a user-level classifier is to predict if a given user is at risk of suffering from depression. For this, we train models on the 160Users dataset. For user-level classification, we start by making tweet-level predictions using the best model obtained from the experiments described in Section 4. The initial features are generated as a requirement for the tweet-level classifier. The tweet-level predictions are then used to compute the percentage of depressed tweets for each user. Next, the text of all the tweets for each user is merged, and the initial features are summed (the data are centered and scaled during model training). During data annotation of the #BellLetsTalk users, we noticed that several users disclosed depression, but their tweets, at least those included in our dataset, did not indicate depression. Although these users were labeled as depressed, we noticed that removing such users from the training set helps us to improve our models. For this we compute an additional feature called IsSelfReported for each user. The percentage of depressed tweets (hereafter called %DT) along with IsSelfReported is used to decide whether a user should be removed from the training set. Only if IsSelfReported is true AND %DT is less than 10% is the user removed from the training set.
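A sketch of the %DT aggregation and the filtering rule just described is given below; the per-tweet predictions and self-report flags are synthetic placeholders, not real pipeline output.

```r
# Synthetic per-tweet predictions (1 = tweet predicted depressed) and a
# per-user self-disclosure flag; placeholders for the real pipeline output.
set.seed(7)
tweet_pred <- data.frame(
  user      = rep(c("u1", "u2", "u3"), times = c(50, 40, 60)),
  depressed = rbinom(150, 1, rep(c(0.25, 0.02, 0.12), times = c(50, 40, 60))))
self_report <- data.frame(user = c("u1", "u2", "u3"),
                          IsSelfReported = c(TRUE, TRUE, FALSE))

# %DT: percentage of a user's tweets predicted as depressed.
pct_dt <- aggregate(depressed ~ user, data = tweet_pred,
                    FUN = function(x) 100 * mean(x))
names(pct_dt)[2] <- "pct_depressed"
users <- merge(pct_dt, self_report, by = "user")

# The stated rule: drop self-disclosed users with %DT < 10 from training only.
train_users <- subset(users, !(IsSelfReported & pct_depressed < 10))
train_users
```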
Several sets of experiments are performed for this task. An initial baseline experiment is performed using the 7 initial features. The second set of experiments uses 8 features (the initial features + %DT). The third set of experiments uses 9 features (the initial features + %DT + IsSelfReported). The fourth set of experiments uses a total of 115 features; the purpose of this was to identify whether increasing the number of features has a significant impact on performance. These additional features include LIWC features, sentiment features, emoticon counts, text readability (SMOG, Flesch, Kincaid), and community features such as favorite counts, replies, mentions, and retweets, in addition to the initial features, %DT, and IsSelfReported.
Unlike the 60Users dataset, which was highly imbalanced, the 160Users dataset had a relatively smaller degree of imbalance: it consisted of 43% positive class and 57% negative class samples. The reason for using re-sampling methods at user-level was to investigate if performance can be improved by training a model on a fully balanced dataset.
From these experiments, we identify the model with the highest performance. The set of features and the re-sampling method identified in relation to this model are then used in further experiments. These experiments include training further models using the CLPsych2015 dataset instead of the 160Users dataset. We also merge the 160Users and CLPsych2015 datasets to investigate whether using larger training data improves the performance.
Experimental setup
For this research, all the development is done in R version 3.3 (R Development Core Team, 2008) using the RStudio IDE (RStudio Team, 2015). Data preparation, feature extraction, and classification tasks are performed using a variety of R packages. All classifiers were used from R's caret package (Kuhn et al., 2012). Classifiers were trained using 10-fold cross validation to avoid over-fitting and then tested on a held-out test set. The results presented in Section 7 are those obtained on the held-out test set.
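A minimal sketch of this training protocol with caret follows; the data frame is synthetic, and "svmLinear" (which requires the kernlab package) stands in for the linear SVM used in the experiments.

```r
library(caret)

# Synthetic stand-in for a feature table with a factor label.
set.seed(1)
df <- data.frame(matrix(rnorm(300 * 5), ncol = 5),
                 class = factor(sample(c("depressed", "not_depressed"),
                                       300, replace = TRUE)))
idx <- createDataPartition(df$class, p = 0.7, list = FALSE)
train_df <- df[idx, ]; test_df <- df[-idx, ]

# 10-fold cross-validation on the training split, as described above.
ctrl <- trainControl(method = "cv", number = 10)
fit  <- train(class ~ ., data = train_df, method = "svmLinear",
              preProcess = c("center", "scale"), trControl = ctrl)

# Evaluate once on the held-out test split.
pred <- predict(fit, newdata = test_df)
confusionMatrix(pred, test_df$class, positive = "depressed")
```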
Results
For both tasks, tweet-level classification and user-level classification, we report precision, recall and F-measure for the positive class (depression) as performance measures. Precision and recall are more informative than accuracy, due to the data being imbalanced. For example, the baseline experiments for tweet-level classification return an accuracy of 95% by classifying all samples as the majority class, which is not a true reflection of the classifier's performance. For measuring performance at the user level, we think that recall is somewhat more important for the task; therefore, we aim at achieving high recall. This can be justified by keeping in mind the problem we are attempting to solve. In the context of detecting depression, a false positive (FP) is defined as a user who is predicted to have depression but does not actually suffer from depression. A false negative (FN) is defined as a user who is actually depressed but is predicted to not have depression. A classifier detecting more false positives would result in lower precision, the cost of which is that the state would need to invest more money to help users who are not actually depressed. On the other hand, a classifier detecting more false negatives would result in lower recall, the cost of which is that users suffering from depression will not get the help they need on time, which could lead to serious consequences, like suicide. So low recall could lead to loss of human life.
At the same time, we are trying to find a balance of precision and recall. A perfect recall of 1 with a very low precision (e.g., 0.2) is also not an acceptable outcome. In such cases, we look at F-measure, which combines both precision and recall. In particular, we look at the precision, recall, and F-measure of the positive class, obtained on the held-out test sets.
Table 7.1 shows the results obtained for the tweet-level classification experiments. Performance is reported on a held-out test set obtained from the 60Users dataset. None of the classifiers performed well on the task of identifying depressed tweets. The best performing model (exp1-Undersample) is identified in bold. This is a Linear SVM classifier trained on an undersampled training set that uses the 7 initial features without BOW. We obtain a precision of 0.1237 and a recall of 0.8020, with an F1 of 0.2144. The poor performance of all models indicates the complexity of the task and the fact that one tweet is not sufficient to detect depression.
User-level Classifier
Table 7.2 shows results obtained for the user-level classification experiments. Performance is reported on a 30% held-out test set obtained from the 160Users dataset. For exp3, the results improved a lot over the baseline with initial features. This shows that the feature %DT computed with the tweet-level classifier helps. The best performing model (exp5) is identified in bold. This is a Linear SVM classifier trained on a balanced dataset (CLPsych2015) that uses 9 features (initial features + %DT + IsSelfReported). We obtain a precision of 0.7083, a recall of 0.85, and an F1 of 0.7727.
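The F-measures quoted above follow directly from the harmonic mean of precision and recall, and can be checked:

```r
f1 <- function(p, r) 2 * p * r / (p + r)
f1(0.1237, 0.8020)  # ~0.2144: best tweet-level model (exp1-Undersample)
f1(0.7083, 0.8500)  # ~0.7727: best user-level model (exp5)
```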
From exp3 and exp4 in Table 7.2, we observe that the datasets balanced using re-sampling methods provide better recall. For this reason, when we train models on the combined dataset (exp6), we continue to balance the datasets using SMOTE and undersampling. The CLPsych2015 dataset (exp5) is perfectly balanced and therefore does not require balancing using re-sampling methods.
We note that the model trained on the CLPsych2015 dataset performs better than the model trained on the 160Users dataset when using the same features. This could be due to larger training data. On the other hand, performance (in terms of recall) drops when the dataset size is increased further by combining the 160Users and CLPsych2015 datasets (exp6). Upon investigation as to why undersampling performs better than SMOTE, we discovered that SMOTE oversamples minority class instances but does not fully balance the training data, whereas undersampling balances the training data. Hence, models trained on a balanced training set result in better performance.
It is interesting to see that models trained on 160Users (exp3 and exp4) perform better on CLPsych2015 dataset, while the model trained on CLPsych2015 dataset (exp5) performs better on the 160Users dataset.
The results for exp4+additionalFeatures are not reported because they are not significantly different from exp4 (though further investigations will need to be done in future work).
In terms of comparing the tweet-level classification task and the user-level classification task, we conclude that user-level models perform much better even with a small number of features.
Comparison to Related Work
Resnik et al. (2015a) and Preotiuc-Pietro et al. (2015) reported good performance on the dataset made available through the CLPsych2015 shared task, as mentioned in Section 2. We ran our top-performing user-level classifiers on the training set of the CLPsych2015 shared task data. Results are provided in Table 7.3. We report only the SMOTE versions of the classifiers since they obtained better results. The feature %DT helps a lot on this dataset (according to exp3). We note that exp5, which gave the highest performance on the 160Users dataset, performs consistently well on the CLPsych users, even though performance is slightly lower in comparison.
These results are not comparable with those reported by Resnik et al. (2015a) and Preotiuc-Pietro et al. (2015), for two reasons. First, Resnik et al. (2015a) and Preotiuc-Pietro et al. (2015) report performance on a different test set; we report performance on the 30% of the training users provided to us that we kept aside for testing, because of the unavailability of the labels for the test users from the shared task. Second, the shared task uses precision at a certain recall level as the main performance measure, while we report standard precision and recall, and we selected our model to have a high recall.
Conclusion and Future Work
In conclusion, we proposed models for tweet-level classification and used them to compute the percentage of depressed tweets for each user. We also proposed models for user-level classification. We experimented with many features, including the percentage of depressed tweets, which was shown to help improve the performance of the user-level classifier. We annotated our own dataset from the #BellLetsTalk campaign, but we also experimented with the existing dataset from CLPsych2015.
In future work, we plan to study depression among groups of users based on their age, gender, locations and other demographic attributes. We also plan to look into identifying other kinds of mental disorders, and detecting suicidal ideation.
"year": 2017,
"sha1": "00752c86bb26bbee675b95d152b136b9c08a9d5e",
"oa_license": "CCBY",
"oa_url": "https://www.aclweb.org/anthology/W17-3104.pdf",
"oa_status": "HYBRID",
"pdf_src": "ACL",
"pdf_hash": "00752c86bb26bbee675b95d152b136b9c08a9d5e",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Physical activity and sedentary behavior during the early years in Canada: a cross-sectional study
Background
Physical activity and sedentary behavior habits are established during early childhood, yet only recently has objectively measured data been available on children aged 5 years and younger. This study presents data on the physical activity and sedentary behaviors of Canadian children aged 3–5 years.
Methods
Data were collected as part of the Canadian Health Measures Survey between 2009 and 2011. A nationally-representative sample (n = 459) of children aged 3–5 years wore Actical accelerometers during their waking hours for 7 consecutive days. Data were collected in 60-sec epochs and respondents with ≥4 valid days were retained for analysis. Parents reported their child's physical activity and screen time habits in a questionnaire.
Results
Eighty-four percent of 3–4 year old children met the physical activity guideline of 180 minutes of total physical activity every day while 18% met the screen time target of <1 hour per day. Fourteen percent of 5 year old children met the physical activity guideline of 60 minutes of daily moderate-to-vigorous physical activity (MVPA) while 81% met the screen time target of <2 hours per day. Children aged 3–4 years accumulated an average of 352 min/d of total physical activity and 66 minutes of MVPA while 5 year old children accumulated an average of 342 min/d of total physical activity and 68 minutes of MVPA. Children were sedentary for approximately half of their waking hours and spent an average of 2 hours per day in front of screens. Only 15% of 3–4 year olds and 5% of 5 year olds are meeting both the physical activity and sedentary behavior guidelines.
Conclusions
Promoting physical activity while reducing sedentary behavior is important at all stages of life. The findings of the present study indicate that there remains significant room for improvement in these behaviors among young Canadian children.
Background
The early years represent a critical period for the establishment of active living habits; however, little is known about how much physical activity and sedentary behavior young children are accumulating. Research to date suggests that children less than 5 years of age spend a small proportion of time being active and have high levels of inactivity [1][2][3], although most studies historically focused on moderate-to-vigorous-intensity physical activity (MVPA) and relied on non-objective measures. A recent systematic review reported that physical activity during the early years is associated with improved measures of adiposity, motor skill development, psychosocial health, and cardiometabolic health indicators [4]. High levels of sedentary behavior in this age group, in particular high levels of television viewing, is associated with increased adiposity and lower measures of psychosocial and cognitive development [5].
The National Association for Sport and Physical Education (NASPE) recommends at least one hour of structured and one or more hours of unstructured physical activity every day for children from birth to age 5 years [6]. Levels of adherence to the NASPE guidelines have varied considerably (32-79%) and this is likely due in part to differences in measurement tools and inconsistent inclusion of light intensity physical activity [1,3,7]. Australian and Canadian guidelines recommend that young children (aged 1-4 years) participate in physical activity of any intensity (i.e., light, moderate or vigorous) for at least 3 hours a day [8,9]. The Canadian guidelines also recommend progression toward at least 60 minutes of energetic play (i.e., MVPA) by 5 years of age to align with the guidelines for children ages 5 to 17 years which recommend 60 minutes of MVPA per day [9]. Compliance with the recommendation of at least 180 min of physical activity at any intensity also varies considerably between countries, ranging from 5% in a sample of children from Melbourne, Australia [1] to 73% in a sample of children from Hamilton, Canada [10].
The American Academy of Pediatrics (AAP) recommends no more than 2 hours per day of television time for children aged 2 years and older [11] and the Australian Government's Department of Health and Ageing stipulates no more than one hour of screen-based entertainment per day for 3 to 5 year old children [8]. In a sample of 3 to 5 year old children from Melbourne, Australia, 22% met the Australian recommendation (<1 hr/d) and 59% met the AAP recommendation (<2 hr/d) [1]. New Canadian sedentary behavior guidelines were released in 2012 [12]. Key aspects of the Canadian sedentary behavior guidelines are that i) children under age 2 do not engage in screen time, a recommendation consistent with the American Association of Pediatrics [11], and ii) screen time be limited to less than 1 h per day in 3 to 4 year old children [12], and less than 2 h per day in 5 year old children [13]. Data from a sample of children from Kingston, Canada indicated that less than half (46%) of children aged 2-4 years met the Canadian screen time recommendation of <1 hr/d [14]. It is presently unknown whether this finding is representative of children in this age group across Canada.
The measurement of physical activity and sedentary behavior in children during the early years is needed not only to estimate the proportion of the population meeting recommendations pertaining to physical activity and sedentary behavior guidelines, but also to establish the relationships between these movement constructs and health outcomes, and to enable researchers to evaluate the effectiveness of interventions. Physical activity and sedentary behavior in young children can be measured using indirect (e.g., parent-report, direct observation) and direct (e.g., pedometers and accelerometers) methods. Accelerometers provide objective information on the frequency, intensity and duration of movement. Parent-reported information provides important contextual information on the specific behaviors young children are engaging in while being active or sedentary [15].
It is presently unknown what proportion of a nationally representative sample of Canadian children aged 3 to 5 years is meeting these new physical activity and sedentary behavior guidelines. The 2nd cycle of the Canadian Health Measures Survey (CHMS) collected accelerometry and parent-reported data on physical activity and sedentary behavior on a sample of children aged 3–5 years. The purpose of this paper is to report, for the first time ever on a nationally-representative sample, the physical activity and sedentary behaviors of Canadian children aged 3–5 years.
Methods
The CHMS collected data from a nationally representative sample of the population aged 3 to 79 years living in private households at the time of the survey [16]. Residents of Indian Reserves, institutions and certain remote regions, and full-time members of the Canadian Armed Forces were excluded. Approximately 96% of Canadians were represented. The survey involved an interview in the respondent's home and a visit to a mobile examination center for a series of physical measurements. Data were collected at 18 sites across Canada from August 2009 to November 2011. Ethics approval to conduct the CHMS was obtained from Health Canada's Research Ethics Board [17]. For younger children, a parent or legal guardian provided written consent and written assent was also obtained from the child. Participation was voluntary; respondents could opt out of any part of the survey at any time.
Upon completion of the mobile examination center visit, ambulatory respondents were asked to wear an Actical accelerometer (Philips Respironics, Oregon, USA) over their right hip on an elasticized belt during their waking hours for 7 consecutive days. The Actical (dimensions: 2.8 × 2.7 × 1.0 centimetres; weight: 17 grams) measures and records time-stamped acceleration in all directions, providing an indication of movement intensity, duration and frequency. The digitized values are summed over a user-specified interval, here 60 seconds, resulting in a count value per minute (cpm). Accelerometer signals are also recorded as steps per minute. The Actical has been validated to measure physical activity and sedentary behavior in preschool aged children [18,19].
The monitors were initialized to start collecting data at midnight following the mobile examination center appointment. Respondents were blind to all data while they wore the device. The monitors were returned to Statistics Canada in a prepaid envelope, where the data were downloaded and the monitor was checked to determine if it was still within the manufacturer's calibration specifications [20]. Standard data reduction procedures consistent with cycle 1 of the CHMS were followed [20,21]. A valid day for this age group was defined as 5 or more hours of monitor wear time [22] and respondents with 4 or more valid days were retained for analyses [20,21,23]. Wear time was determined by subtracting non-wear time from 24 hours. Non-wear time was defined as at least 60 consecutive minutes of zero counts, with allowance for 1 to 2 minutes of counts between 0 and 100 [22,23].
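A minimal Python sketch of the non-wear and valid-day rules described above is given below; the function names and the exact treatment of interruptions at the edges of a bout are illustrative assumptions, not the CHMS/Statistics Canada processing code.

```python
# Illustrative sketch of the non-wear rule described above: a non-wear bout is at least
# 60 consecutive minutes of zero counts, with an allowance of up to 2 interior minutes
# having counts between 0 and 100.
def flag_non_wear(counts, min_bout=60, max_interruptions=2, interruption_cap=100):
    n = len(counts)
    non_wear = [False] * n
    i = 0
    while i < n:
        if counts[i] != 0:
            i += 1
            continue
        j, interruptions = i, 0
        while j < n:
            if counts[j] == 0:
                j += 1
            elif 0 < counts[j] < interruption_cap and interruptions < max_interruptions:
                interruptions += 1
                j += 1
            else:
                break
        if j - i >= min_bout:              # bout long enough: mark as non-wear
            for k in range(i, j):
                non_wear[k] = True
        i = j
    return non_wear

# Wear time for a day is 24 h minus the flagged non-wear minutes; a valid day for this
# age group requires at least 5 h (300 min) of wear.
def is_valid_day(counts):
    wear_minutes = sum(1 for flag in flag_non_wear(counts) if not flag)
    return wear_minutes >= 300
```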
For this study, time spent at various levels of movement intensity (sedentary, light, moderate, vigorous) was based on cut-points corresponding to each intensity level. The cut-point used for MVPA was 1,150 cpm [18]. A cut-point of 100 cpm was used to delineate sedentary behavior from light physical activity [24]. Children aged 3-4 years were classified as meeting the guideline if they achieved 180 min of physical activity at any intensity (i.e., at least 180 minutes at ≥100 counts per minute) on all valid days (i.e., "daily"). To determine the probability that 5 year old children accumulated at least 60 minutes of MVPA on at least 6 days a week, the analytical approach was harmonized with that used previously for 6 to 19 year old children in the CHMS [21], an approach that was based on the technique used in the United States to analyze the 2003 to 2004 National Health and Nutrition Examination Survey (NHANES) accelerometry data [23]. To maximize the sample size, a Bayesian approach was used to incorporate the information from children with 4 or more valid days. An individual's probability of being active at least 6 days out of 7 was estimated using a Beta distribution for the observed combination of active and wear days, and the estimated population prevalence is the weighted average of these individual probabilities [25]. Progression towards meeting the physical activity guideline of 60 daily minutes of MVPA in 3-4 year olds was assessed using the same Bayesian approach, by examining the proportion of 3-4 year olds who accumulated 180 minutes of physical activity at any intensity of which at least 10, 20, 30, 45 or 60 minutes were MVPA. Average daily step counts were calculated, and the proportions of children accumulating an average of 6,000 steps per day and 6,000 steps on every valid day [10] were both assessed.
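One plausible reading of this Beta-distribution step is sketched below: a uniform prior on the per-day probability of an active day gives a Beta posterior for each child, and the resulting probability of at least 6 active days out of 7 is averaged over respondents using survey weights. The prior choice and the function names are assumptions of the sketch, not the CHMS implementation.

```python
# Sketch of the Bayesian step: for a child observed on w valid wear days, a of which were
# "active" days, model the per-day probability p of an active day with a Beta posterior and
# compute the probability of being active on at least 6 of 7 days.
import numpy as np
from scipy.stats import beta, binom

def prob_active_6_of_7(active_days, wear_days, n_grid=2001):
    # Beta(a+1, w-a+1) corresponds to a uniform prior on p (an assumption of this sketch).
    p_grid = np.linspace(0.0, 1.0, n_grid)
    posterior = beta.pdf(p_grid, active_days + 1, wear_days - active_days + 1)
    tail = binom.sf(5, 7, p_grid)                 # P(X >= 6 | 7 days, probability p)
    return np.trapz(posterior * tail, p_grid)

# Population prevalence = survey-weighted average of the individual probabilities.
def prevalence(children, weights):
    probs = [prob_active_6_of_7(a, w) for a, w in children]
    return np.average(probs, weights=weights)
```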
As part of the CHMS household questionnaire, parents were asked a series of questions about their child's level of physical activity and engagement in sedentary behaviors:
• Over the past 7 days, on how many days was he/she physically active for a total of at least 60 minutes per day? (none, 1 day, 2-3 days, 4 days or more)
• Over a typical or usual week, on how many days is he/she physically active for a total of at least 60 minutes per day? (none, 1 day, 2-3 days, 4 days or more)
• About how many hours a week does he/she usually take part in physical activity (that makes him/her out of breath or warmer than usual) outside of school while participating in lessons or league or team sports? (never, <2 hrs/wk, 2-3 hrs/wk, 4-6 hrs/wk, 7+ hrs/wk)
• About how many hours a week does he/she usually take part in physical activity (that makes him/her out of breath or warmer than usual) outside of school while participating in unorganized activities, either on his/her own or with friends? (never, <2 hrs/wk, 2-3 hrs/wk, 4-6 hrs/wk, 7+ hrs/wk)
• On average, about how many hours a day does he/she watch TV or videos or play video games? (doesn't watch TV or videos or play video games, <1 hr/d, 1-2 hrs/d, 3-4 hrs/d, 5-6 hrs/d, 7+ hrs/d)
• On average, about how many hours a day does he/she spend on a computer (working, playing games, e-mailing, chatting, surfing the internet, etc.)? (doesn't use a computer, <1 hr/d, 1-2 hrs/d, 3-4 hrs/d, 5-6 hrs/d, 7+ hrs/d)
Time spent watching TV, videos or playing video games and time spent on a computer was derived using the mid-point values assigned to each response category (0, 0.5, 1.5, 2.5, 5.5 and 7 hours for the respective categories). The values for the two questions were summed to obtain screen time, and children aged 3 to 4 with ≤1 h/d of screen time or children aged 5 with ≤2 h/d of screen time were deemed to be following the screen time recommendations within the sedentary behavior guidelines, as sketched below. For example, if a parent reported <1 hr/d for both the TV/videos question and the computer question, that child was assigned 0.5 hr/d for each, giving a total of 1 hr/d of screen time, which is slightly different from <1 hr/d. This way of deriving screen time means that we assessed whether a child accumulated ≤1 hr of screen time per day rather than the actual guideline of <1 hr/d.
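The derivation and guideline check can be written compactly as below; the category-to-value map mirrors the numbers listed in the text, and the function name is illustrative.

```python
# Sketch of the screen-time derivation described above (illustrative, not the CHMS code).
MIDPOINT_HOURS = {"none": 0.0, "<1 hr/d": 0.5, "1-2 hrs/d": 1.5,
                  "3-4 hrs/d": 2.5, "5-6 hrs/d": 5.5, "7+ hrs/d": 7.0}

def meets_screen_time_guideline(tv_category, computer_category, age_years):
    screen_time = MIDPOINT_HOURS[tv_category] + MIDPOINT_HOURS[computer_category]
    limit = 1.0 if age_years in (3, 4) else 2.0   # <1 h/d at ages 3-4, <2 h/d at age 5
    # As noted in the text, the derived value is compared as <= limit rather than < limit.
    return screen_time <= limit

# Example from the text: "<1 hr/d" reported for both questions gives 0.5 + 0.5 = 1.0 h/d,
# which counts as meeting the 3-4 year-old recommendation under this derivation.
print(meets_screen_time_guideline("<1 hr/d", "<1 hr/d", 3))
```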
The response rate for selected household was 75.9%, meaning that in 75.9% of these households, a resident provided the sex and date of birth of all household members. One or two members of each responding household were chosen to participate in the CHMS; 92.6% of the parents of selected 3-5 year olds completed the household questionnaire, and 79.4% of this group participated in the mobile examination center component. Five children did not accept the activity monitor and 48 never returned the monitor. Of the children who participated in the mobile examination center component, 76.9% wore the accelerometer for at least 4 valid days. After adjusting for the sampling strategy, the final response rate for having a minimum of 4 valid days was 42.7% (75.9 × 92.6 × 79.4 × 76.9). This article is based on 459 examination center respondents aged 3-5 years who provided a minimum of 4 days of valid accelerometer data.
All analyses were completed using SAS version 9.2 and were based on weighted data for respondents with at least 4 valid days of data. To account for survey design effects of the CHMS, standard errors, coefficients of variation, and 95% confidence intervals were estimated using the bootstrap technique [26][27][28].
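For illustration, a simple resampling bootstrap for a weighted proportion is sketched below; the CHMS itself uses survey bootstrap replicate weights that reflect its complex sampling design, which this naive resampling does not reproduce.

```python
# Minimal sketch of a bootstrap 95% confidence interval for a weighted proportion
# (illustrative only; not the CHMS survey-bootstrap procedure).
import numpy as np

def bootstrap_ci(indicator, weights, n_rep=500, seed=1):
    rng = np.random.default_rng(seed)
    indicator = np.asarray(indicator, float)
    weights = np.asarray(weights, float)
    estimates = []
    for _ in range(n_rep):
        idx = rng.integers(0, len(indicator), len(indicator))   # resample respondents
        estimates.append(np.average(indicator[idx], weights=weights[idx]))
    return np.percentile(estimates, [2.5, 97.5])
```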
Results
Characteristics of the 459 children included in the analysis are in Table 1. The average age of the sample was 4 years and the sex split was almost equal (50.5% were boys). The majority (83%) of the sample was considered healthy weight according to the International Obesity Task Force classification cut-offs [29].
Meeting the physical activity guidelines based on accelerometer data
Eighty-four percent of 3 and 4 year olds met the current physical activity guideline, defined as being active at any intensity for at least 180 minutes every day (Table 2). Ninety-eight percent were active on all valid days except one. Progression towards accumulating 60 minutes of daily MVPA as part of the 180 minutes of total physical activity (3-4 year olds) is presented in Figure 1. More than half of 3 and 4 year olds accumulated at least 20 minutes of MVPA within their 180 minutes per day of total physical activity while 11% accumulated at least 60 minutes of MVPA within their 180 minutes of total physical activity (Figure 1). Fourteen percent of 5 year olds accumulated at least 60 minutes of MVPA on at least 6 days per week (the operational definition of meeting the guideline of 60 minutes of MVPA every day).
Meeting the screen time recommendation within the sedentary behavior guidelines
Eighteen percent of children aged 3 and 4 years met the screen time recommendation within the sedentary behavior guidelines, which states that children of this age should accumulate less than 1 hour per day of screen time [12] ( Table 2). Eighty-one percent of 5 year-old children met the screen time recommendations, which states that children of this age should accumulate less than 2 hours per day of screen time (Table 2). Fifteen percent of 3-4 year olds and 5% of 5 year olds met both the physical activity and the sedentary behavior guidelines.
Average daily physical activity and sedentary behavior based on accelerometer data
On average, 3 and 4 year-old children accumulated 352 daily minutes of total physical activity and spent 50% (~5.8 hrs•d -1 ) of their waking time per day engaged in sedentary behavior, 41% (~4.8 hrs•d -1 ) engaged in light intensity physical activity and 9% (~66 min•d -1 ) of their day engaged in MVPA (Table 3). On average, 5 year old children accumulated 343 daily minutes of total physical activity and spent 53% (~6.4 hrs•d -1 ) of their waking time per day engaged in sedentary behavior, 38% (~4.6 hrs•d -1 ) engaged in light intensity physical activity and 9% (~68 min•d -1 ) of their day engaged in MVPA (Table 3). The average accelerometer wear time for all children in the sample was approximately 12 hours per day.
Daily step counts
Children aged 3 and 4 years accumulated an average of 9,764 step counts per day while 5 year old children accumulated an average of 10,202 step counts per day ( Table 3). The proportion of children meeting the 6,000 steps per day target is presented in Table 2.
Discussion
The majority of Canadian children aged 3-4 years are meeting the current physical activity guideline of at least 180 minutes of physical activity at any intensity every day [9]. This proportion is much higher when compared to older children in Canada, for whom the guideline focuses on MVPA and not total physical activity. According to the CHMS, 14% of 5 year olds met the physical activity guideline of 60 daily minutes of MVPA [30] while 7% of 5 to 11 year-olds and 3.5% of 12 to 17 year-olds met this same guideline [31]. Eighteen percent of 3-4 year olds and 81% of 5 year olds met the screen time recommendation within the current sedentary behavior guidelines, which recommend less than 1 hour a day of screen time in 2-4 year olds [12] and no more than 2 hours a day of screen time from age 5 to 17 [13]. Only 15% of 3-4 year olds and 5% of 5 year olds met both the physical activity and sedentary behavior guidelines ( Table 2). The difference in the proportion meeting the physical activity guidelines between ages 3/4 years to 5 years is largely explained by the shift in volume and intensity of physical activity recommended by the different physical activity guidelines, from 180 minutes of physical activity at any intensity to 60 minutes of daily MVPA. In accelerometer data analysis, light physical activity is defined as any observations above the sedentary cut-point (100 cpm) and therefore makes up a large proportion of the day. For example, 3 and 4 year-old children in the present analysis spent 41% (over 4 hours) of their waking hours engaged in light physical activity, and almost all children achieved the recommended 180 minutes of total physical activity every day. The physical activity guideline for children aged 0 to 4 years also states that children should progress towards accumulating 60 minutes of daily MVPA by age 5 [9]. To assess how 3 to 4 year olds were progressing towards this target, we examined the proportion that were accumulating 180 minutes of total physical activity with the added stipulation that at least 10, 20, 30, 45 or 60 minutes were of at least moderate intensity (Figure 1). The added 60 min MVPA requirement brought the 3 and 4 year olds to a level of meeting the guideline (11%) that was more consistent with children aged 5-11 years (7%). Most young children aged 3 and 4 years of age were active enough to meet the guideline; however, based on these CHMS data, the progression towards accumulating 60 minutes of daily MVPA by age 5 appears to be occurring only in a small proportion of this nationally representative sample of Canadian children.
Placing the findings of the current study in context with previous work is limited by the use of different models of accelerometers between studies, different choices of intensity cut-points, and in particular, the use of varying epoch lengths. The current study used a 60-sec epoch which is much longer than previous studies in this age group [1,10,32,33]. The encouraging physical activity result observed for 3 and 4 year children in the present analysis (84% meeting the physical activity guideline) is consistent with other data from Hamilton, Canada that reported between 73-100% of children accumulated at least 180 minutes per day of total physical activity [10,32]. This is in contrast to a recent study based in Melbourne, Australia that reported only 5% met the recommended minimum level of 3 hours per day of total physical activity [1]. The current study observed higher levels of total physical activity (343-352 min/d) compared to the two studies based in Hamilton, Ontario (252 and 220 min/d) [10,32] and the study based in Melbourne, Australia (127 min/d) [1]. However, in comparison to the Hamilton, Canada studies [10,30], children in the current study accumulated less MVPA (66-68 vs 75-92 min•d -1 ). Further, Gabel and colleagues reported that 57% accumulated 60 minutes of MVPA within their 180 minutes of total physical activity [10] whereas in the present analysis, only 11% achieved the same. Hinkley and colleagues did not report daily minutes of MVPA but reported that MVPA made up a smaller proportion of waking hours compared to the current study (3.4 vs 9.5%) [1]. The higher levels of MVPA observed in the two Hamilton, Canada studies [10,32] are consistent with a study by Vale and colleagues which observed that 93.5% and 77.6% of children accumulated 60 min•d -1 of MVPA on weekdays and weekends, respectively [33]. It has been suggested that longer epochs may underestimate MVPA and overestimate light intensity physical activity, especially in very young children [32,34]. The reason for this is the inability of longer epochs to capture the sporadic and intermittent nature of activity that is typical in this age group [3]. This may partially explain why the daily minutes of MVPA were lower in the current study compared to the two Hamilton, Canada studies; however, this does not explain why Hinkley and colleagues reported such low MVPA values as they also used an epoch of 15 sec. Additional differences in chosen intensity cut-points between studies may be contributing to differences in findings. Specifically, the cut-point used to delineate sedentary from light intensity physical activity in the study from Melbourne, Australia [1] was higher than other studies using the same monitor, thus leading to lower light and total physical activity. These inconsistencies in methodological design highlight the need for data harmonization to fully understand physical activity prevalence during the early years across countries.
The average daily step counts observed in the present analysis (3-4 year olds: 9,764; 5 year olds: 10,202 steps per day) are consistent with other step count data for this age group collected with both accelerometers (8,968 steps per day) [10] and pedometers (9,980 steps per day) [35]. Recently, a daily step target of 6,000 was proposed for the early years to be used as a step target equating to 180 minutes of total physical activity where 60 minutes are MVPA [10]. In the present study, the majority of children accumulated a weekly average of at least 6,000 steps per day (3-4 year olds: 92%, 5 year olds: 87%) while fewer achieved this target on all valid days (45%). This finding highlights the limitation of a weekly average step count value to identify children who are meeting a daily target. In the current study, 84% of 3-4 year olds met the physical activity guideline according to the accelerometer results. If only step count data had been available, we would have concluded that 45% were meeting the daily guideline. Further research is needed to examine the relationships between accelerometer- and pedometer-measured physical activity at very young ages when gait patterns are still being established.
In the present analysis, only 18% of children aged 3-4 years met the sedentary behavior guideline of less than 1 h per day of screen time. Carson and colleagues reported a higher proportion (43%) of children meeting the same sedentary behavior guideline in 2-4 year old children from the Kingston, Canada health region [14]. The questions asked of parents to derive screen time were very similar between these two studies. Participants in the Kingston, Canada study were recruited primarily from registered child care centers while the CHMS recruited from a broader population that included those registered and not registered in these types of programs. It is possible that the Kingston, Canada sample reflected children of a higher socioeconomic status and with more structured days who had fewer opportunities to accumulate screen time. Also, the CHMS reflects the entire Canadian population instead of a single health region. Other countries have assessed a similar screen time target and found results consistent with ours. For example, 22% of 3 to 5 year old Australian children accumulated 1 h or less of daily screen time [1]. At present the sedentary behavior guideline do not stipulate a total daily sedentary time target (i.e., that encompasses sedentary activities beyond screen time). A lower proportion of the waking hours was spent in sedentary time in the present population of 3 to 5 year old children (50%) when compared to older children (63%) [31] and adults (71%) [31] in the CHMS, indicating that this age group is the least sedentary age group in the Canadian population.
Determining the proportion of children meeting physical activity guidelines using accelerometry data is largely dependent on the accelerometer cut-points used to define different levels of intensity. The MVPA cut-point recently proposed by Adolph and colleagues [18] was used in the present analysis because it was developed for Actical accelerometer data collected in 60-sec epochs in 3-5 year old children. Further, this cut-point (1,150 cpm) was based on an activity energy expenditure value of 0.05 kcal•kg -1 •min -1 or approximately 2-3 metabolic equivalents (METS); a demarcation point consistent with that used for the moderate cut-point (≥0.04 kcal•kg -1 •min -1 ; 1,500 cpm) used in older children and youth in the CHMS [21,36]. The only other published Actical cut-point for this age group was designed for use with 15-sec epoch data and was based on a very different methodological approach that placed the demarcation point between light and moderate at an energy expenditure level of 20 ml•kg -1 •min -1 or approximately 5.7 METS [19]. The appropriateness of using METS to define energy expenditure in young children has, however, been questioned [19,37].
The results presented here provide the first estimates of the proportion meeting the physical activity and sedentary behavior guidelines on a nationally representative sample of Canadian children aged 3-5 years. The accelerometry data provide objective estimates and the parent-reported data provide important information about screen time behaviors and the context within which physical activity is accumulated in this age group (e.g., the breakdown between organized and unorganized activity). Consistent with the accelerometer data, parents reported high levels of physical activity participation in their children. As expected, time spent in unorganized activities (i.e., unstructured free play) was higher than organized activities in the younger children. Parent-reported data may be impacted by recall and social desirability bias [38] and the questions used in this survey have not undergone rigorous validation testing. Accelerometers are limited in their ability to capture some activities (e.g., swimming, cycling, load bearing, incline changes) which may lead to some underestimation of overall activity. Further, the 60-sec epoch used for data collection in the CHMS may be an additional cause of underestimation in levels of MVPA and overestimation of light intensity physical activity [32,34]. Seasonal variation could not be assessed in this sample; however, this issue is of relevance in the Canadian context [39] and should be explored in larger data sets. An important area of future research would be to examine whether enrolment in structured childcare programs impacts upon physical activity and sedentary behavior in very young children. This could not be assessed within this study because a specific question relating to childcare arrangement was not asked as part of the household survey. Also of interest would be to examine differences in physical activity and sedentary behavior between healthy weight and overweight children. This was not possible in the present analysis because of the sample size.
The transition in guidelines between age 4 and 5 years creates two challenges: i) interpreting the transition from 84% meeting the guideline at age 3-4 years to 14% at age 5 years, and ii) understanding the required differences in analytical approach to assess the proportion meeting the physical activity guidelines. Figure 1 helps overcome the first challenge as it illustrates the impact of the progression towards the MVPA requirement on the results: 84% meet the guideline but only 11% accumulate 60 minutes of MVPA within those 3 hours. To remain consistent with the age brackets of the physical activity guidelines (i.e., 0-4 years and 5-17 years) as well as how children aged 6 years and older have been assessed previously in the CHMS [21], we used a probability function to estimate the proportion of 5 year olds meeting the guideline. This analytical approach is different to simply looking at meeting the target on all valid days, which is how 3-4 year olds were assessed. If we had assessed 5 year olds in this way, the data would have indicated that 7% (instead of 14%) met the physical activity guideline. The probability function is more robust when assessing low levels of adherence [21,23] and that is why we presented the 14% value as the primary finding for 5 year olds in this analysis.
Conclusions
The majority of 3 and 4 year old children in Canada are meeting current physical activity guidelines; however, only 18% are meeting their age-specific screen time recommendation within the sedentary behavior guideline. The opposite trend was observed in 5 year old children with 14% meeting their age specific physical activity guideline and the majority (81%) meeting their screen time recommendation within sedentary behavior guideline. Overall, very few Canadian children are meeting both guidelines. Promoting physical activity while reducing sedentary behavior is important at all stages of life. The findings of the present study indicate that there remains significant room for improvement in these behaviors among young Canadian children. | 2017-04-10T05:11:33.452Z | 2013-05-04T00:00:00.000 | {
"year": 2013,
"sha1": "8a4e8b7b11cfbcf39fff4a9cc706ce8990195429",
"oa_license": "CCBY",
"oa_url": "https://ijbnpa.biomedcentral.com/track/pdf/10.1186/1479-5868-10-54",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "deb633eb8d2a2d39898253352dad1bbbc2fe5874",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
269317765 | pes2o/s2orc | v3-fos-license | Design and optimization of liquid-cooled plate structure for power battery of the pure electric excavator
With the increasing pressure of energy transformation and environmental protection, the trend of electrification of construction machinery is becoming more and more obvious. In this paper, based on a small pure electric excavator which is still in the stages of research and development, a liquid-cooled heat dissipation structure (liquid-cooled plate) is designed according to the power battery pack scheme. The overall shape of the liquid-cooled plate is designed as a symmetrical serpentine flow channel. Geometrically, the symmetrical serpentine flow channel is a combination of a straight flow channel and a runway-shaped flow channel. Then the conjugate heat transfer simulation model was established by using the CFD software Ansys Fluent. The results show that the initial scheme can not guarantee the temperature homogeneity of the battery pack. Therefore, the liquid-cooled heat dissipation scheme was optimized and compared. Firstly, a liquid-cooled plate was added on the original basis, and multiple sets of simulations were carried out with the length of the linear flow channel as a variable. The results show that appropriately increasing the length of the straight flow channel at the entrance of the liquid-cooled plate has a marked effect on improving the heat dissipation performance.
Introduction
Driven by environmental protection pressure and energy transformation, the construction machinery industry is developing in the direction of electrification. With the rapid development of the new-energy vehicle industry, the three core electric technologies (motor, battery, and electronic control) have gradually matured, which provides support for the electrification of construction machinery.
In this paper, a liquid cooling structure is designed for a small pure electric excavator of a company. The power battery produces a large amount of heat during discharge; its optimal operating temperature is generally 25-40°C, and the temperature difference across the pack should be kept within 5°C [1]. Although an air-cooled heat dissipation structure is simple and low in cost, its cooling effect is relatively poor and it may introduce noise problems. Therefore, a liquid-cooled heat dissipation method is adopted.
Research on liquid-cooled heat dissipation structures generally focuses on the design of the coolant flow channel. A two-way linear flow channel has been adopted as an improved heat dissipation scheme [1]. Chung and Kim [2] used a linear flow channel to perform thermal analysis and hierarchical design of a battery thermal management system; Sheng et al. [3] designed a serpentine flow channel with double inlets and outlets to effectively control the temperature rise of the battery; Wei et al. [4] proposed a symmetrical serpentine channel, in which the pressure drop is greatly reduced and the temperature uniformity is better than in the traditional serpentine channel. Kong et al. [5] evaluated a divergent linear flow channel with two inlets and one outlet and found that it can reduce the pressure drop and the temperature difference of the battery. Monika and Datta [6] compared the heat dissipation performance of six microchannel liquid cooling plates, including linear, serpentine, pumpkin-shaped, spiral, U-shaped, and hexagonal-grid designs; their work shows that the serpentine and hexagonal-grid flow channel designs are the best choices for a liquid cooling plate. A symmetrical serpentine flow channel is also used in this paper.
Power battery scheme
Figure 1. Photograph of a small excavator.
Figure 1 shows a small fuel-powered excavator; the electric version is still in the research and development stage. The structural engineer reserved a 1200 × 600 × 500 mm³ space for the power battery pack. The battery capacity of the whole machine is 70 kW·h, the nominal running time is 4-6 h, and a permanent magnet synchronous motor with a peak power of 60 kW is selected. In this paper, the Guoxuan High-tech IFP27175200A-105 Ah square lithium iron phosphate cell is selected. Its capacity is 3.2 V × 105 Ah × 10⁻³ = 0.336 kW·h, and its size is 21 × 175 × 200 mm³. A photograph of the cell is shown in Figure 2.
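The pack sizing quoted here and in the Figure 2 caption can be checked with a few lines of arithmetic (illustrative only):

```python
# Quick check of the quoted cell and pack energies.
cell_voltage_V, cell_capacity_Ah = 3.2, 105
cell_energy_kWh = cell_voltage_V * cell_capacity_Ah / 1000     # 0.336 kWh per cell
n_cells = 5 * 3 * 7 * 2                                        # 210 cells (210S1P arrangement)
pack_energy_kWh = n_cells * cell_energy_kWh                    # 70.56 kWh, above the 70 kWh target
print(cell_energy_kWh, n_cells, pack_energy_kWh)
```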
Boundary conditions and Governing equations
The coolant is governed by the standard continuity, momentum, and energy conservation equations, and the energy balance equations of the liquid-cooled plate and the cells follow [8]. In these equations, the subscripts 'f', 'c', and 'b' denote the coolant, the liquid-cooled plate, and the battery, respectively; the dynamic viscosity, density, gravitational acceleration, temperature, specific heat capacity, and thermal conductivity enter as the fluid and material properties; p and v denote the pressure and velocity vector, respectively; and q is the volumetric heat source of the cell. According to the theoretical hypothesis of Bernardi [9], when the cell is regarded as a uniform heat source, the heat generated is chiefly made up of reaction heat and Joule heat. In the corresponding formula, q is the heat-generating power of the cell, W/m³; i is the charge-discharge current, A; v_b is the cell volume, m³; e is the electromotive force of the cell, V; u_L is the cell terminal voltage, V; and the remaining term is the reversible reaction heat of the cell. The whole machine is designed to work continuously for 4 hours, so the average discharge rate of the power battery is 0.25 C. Substituting the numerical values, the volumetric heat source of the battery is 4555 W/m³. The material properties used in the model are listed in Table 1. The default initial temperature of all materials is 293.15 K (20°C), and the environment temperature is 300 K (26.85°C). The inlet boundary condition is a mass-flow inlet of 10 g/s at 293.15 K; the outlet boundary condition is a pressure outlet with zero gauge pressure; and the outer surfaces of the cells and the liquid-cooled plates exchange heat with the ambient air by natural convection, with a convection coefficient of 5 W/(K·m²).
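The quoted volumetric heat source can be checked approximately as below; the effective overpotential of about 0.128 V (the combined irreversible and reversible contribution per unit current) is an assumed value chosen to reproduce the paper's 4555 W/m³ and is not stated in the text.

```python
# Back-of-envelope check of the quoted volumetric heat source (illustrative only).
capacity_Ah = 105
c_rate = 0.25
current_A = c_rate * capacity_Ah                 # 26.25 A at the 0.25 C average discharge rate
cell_volume_m3 = 0.021 * 0.175 * 0.200           # 7.35e-4 m^3 from the 21 x 175 x 200 mm cell
overpotential_V = 0.128                          # assumed effective (e - u_L) including the reversible term
q_W_per_m3 = current_A * overpotential_V / cell_volume_m3
print(round(q_W_per_m3))                         # ~4571 W/m^3, close to the quoted 4555 W/m^3
```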
2.3 Mesh model
The mesh model is established in Fluent Meshing. To ensure the accuracy of the calculation results, mesh independence is verified by monitoring the maximum temperature of the battery pack and the fluid pressure drops in the two liquid-cooled plates [7]. Using the boundary conditions mentioned above and the same setup process, five meshes with 5,593,144, 7,154,830, 8,608,920, 10,378,808, and 12,578,996 elements were generated to compare the simulation results. When the calculation error caused by the change in the number of grid elements is less than 3% [10], the grid accuracy can be considered acceptable. As shown in Figure 5, when the number of grid elements increases from 10.37 million to 12.57 million, T_max decreases from 309.09 K to 308.97 K, P1 decreases from 189.83 Pa to 189.64 Pa, and P2 decreases from 189.74 Pa to 189.69 Pa. The changes are less than 3%, so the 10.37-million-element grid meets the calculation requirements.
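The grid-independence criterion can be checked directly from the quoted values (illustrative only):

```python
# Relative change in each monitored quantity between the two finest meshes; all stay far
# below the 3% threshold used above.
def rel_change(prev, new):
    return abs(new - prev) / abs(prev)

checks = {"Tmax [K]": (309.09, 308.97), "P1 [Pa]": (189.83, 189.64), "P2 [Pa]": (189.74, 189.69)}
for name, (prev, new) in checks.items():
    print(name, f"{100 * rel_change(prev, new):.3f}% < 3%")
```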
Calculation results of the original scheme
Figure 6 shows the pressure distribution in the coolant of the upper plate. Because the boundary conditions are the same, the results for the lower liquid-cooled plate are almost identical to those for the upper one. Along the flow direction the pressure gradually decreases; the total inlet-to-outlet pressure drop is 189.83 Pa, the outlet pressure is 0, and a slight negative pressure appears near the outlet. Figure 7 shows the temperature distribution contours of the power battery. In general, the temperature is low on the left (inlet) side and high on the right (outlet) side. This is easily understood: the coolant temperature at the inlet is low, so the heat dissipation effect there is best. Along the flow direction, the coolant exchanges heat with the liquid-cooled plate and its temperature gradually rises, so the heat dissipation effect becomes worse. The minimum temperature is 295.88 K, the maximum cell temperature reaches 309.09 K, and the temperature difference is 13.21 K, which is greater than 5 K and does not meet the requirement [1].
Program improvement
As shown in Figure 7, the temperature of the upper-layer cells is obviously higher than that of the lower-layer cells. This is because each lower cell is in contact with two liquid-cooled plates and therefore exchanges heat indirectly with two layers of coolant, whereas each upper cell has only its lower surface in contact with a liquid-cooled plate, with its upper surface and sides cooled only by natural convection. Therefore, the first improvement scheme adds another layer of liquid-cooled plate (the total coolant flow rate is kept constant), although this may increase energy consumption. The calculation results are shown in Figs. 8 and 9. The coolant pressure drop is almost the same as in the initial scheme. The maximum cell temperature is 302.74 K, which is 6.35 K lower than in the initial scheme. In addition, the temperature at the center point of each cell was monitored, as shown in Figure 10. The flow channels of the initial scheme and improvement scheme 1 are symmetrical in the flow direction, and the length of the straight flow channel at the entrance is R. As shown in Figure 9, the temperature of the cells at the inlet is significantly lower than that at the outlet. From improvement scheme 2 to improvement scheme 4, we therefore progressively extend the linear flow channel at the entrance and delay the start of the runway-shaped flow channel. Because the coolant temperature at the entrance is low and its heat dissipation capability is strong, very good heat dissipation can already be achieved in the linear flow channel, and delaying the layout of the runway-shaped flow channel improves the homogeneity of the temperature distribution.
Calculation results of improved schemes
Figs. 14 and 15 show the calculation results of improvement scheme 4. Compared with the initial scheme, the pressure drop increases slightly and the temperature distribution uniformity is improved. Figs. 16 and 17 show the statistics of the monitored cell temperatures under the four improvement schemes. From scheme 1 to scheme 4, as the length of the linear flow channel at the coolant inlet increases, the maximum cell temperature decreases slightly while the minimum cell temperature continues to rise, so the temperature difference between the cells continues to decrease. The cell temperature difference of improvement scheme 4 is 3.55 K, which achieves the expected goal [1]. In addition, the cell temperature variance, which characterizes the homogeneity of the temperature distribution, is also significantly reduced, demonstrating the rationality of the improved scheme.
Conclusions
In this paper, a liquid-cooled heat dissipation scheme based on a symmetrical serpentine flow channel is designed for the power battery pack of a small pure electric excavator. After grid independence verification, multiple sets of CFD simulations were carried out, and the following conclusions are obtained: (1) The arrangement of three layers of liquid-cooled plates (i.e., with both the upper and lower surfaces of all cells in direct contact with a liquid-cooled plate) can significantly improve the temperature distribution uniformity of the power battery pack and improve the heat dissipation effect.
(2) The symmetrical serpentine flow channel can be regarded as a combination of a linear flow channel and a runway-shaped flow channel. Under the premise that the energy consumption (fluid pressure drop) is essentially unchanged, appropriately increasing the length of the straight flow channel at the coolant inlet and delaying the start of the runway-shaped flow channel can significantly decrease the cell temperature variance and improve the temperature distribution homogeneity of the power battery pack.
Figure 2. Photograph of the Guoxuan High-tech 105 Ah square lithium iron phosphate cell.
The planned cell connection is 210S1P (total capacity 70.56 kW·h), and the spatial arrangement is 5 × 3 × 7 × 2; the schematic diagram is shown in Figure 3. The liquid-cooled plate in the figure measures 900 × 560 × 15 mm³. Every five cells form a module, separated by a layer of silicone heat-conduction pad (200 × 175 × 1 mm³). The overall size of the power battery pack is 900 × 560 × 430 mm³, which meets the design requirements.
Figure 3. Spatial arrangement of the cells.
Figure 4. Size parameters of the symmetrical serpentine flow channel.
Figure 5. Grid independence verification.
Figure 6. Upper coolant pressure distribution of the initial scheme.
Figure 7. Temperature distribution of the power battery in the initial scheme.
Figure 9. Temperature distribution of the power battery in improvement scheme 1.
Figure 10. Frequency histogram of the cell temperature distribution.
The other improvement schemes all use three layers of liquid-cooled plates and vary the length of the linear channel within the symmetrical serpentine channel; the specific parameters are shown in Figs. 11-13.
Figure 11. Channel design of improvement scheme 2.
Figure 12. Channel design of improvement scheme 3.
Figure 13. Channel design of improvement scheme 4.
Figure 16. Temperature difference of the battery pack.
Figure 17. Judgment index of cell temperature uniformity.
Table 1. Physical properties of materials. | 2024-04-24T15:16:40.471Z | 2024-04-01T00:00:00.000 | {
"year": 2024,
"sha1": "7756f820019a67e97b7863d0c364a2d6e5507040",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/2741/1/012047/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "05e3a4087947f0921c6a76fdb9b12e89caa8c8de",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": []
} |
518714 | pes2o/s2orc | v3-fos-license | Library Event Matching event classification algorithm for electron neutrino interactions in the NOvA detectors
We describe the Library Event Matching classification algorithm implemented for use in the NOvA $\nu_\mu \rightarrow \nu_e$ oscillation measurement. Library Event Matching, developed in a different form by the earlier MINOS experiment, is a powerful approach in which input trial events are compared to a large library of simulated events to find those that best match the input event. A key feature of the algorithm is that the comparisons are based on all the information available in the event, as opposed to higher-level derived quantities. The final event classifier is formed by examining the details of the best-matched library events. We discuss the concept, definition, optimization, and broader applications of the algorithm as implemented here. Library Event Matching is well-suited to the monolithic, segmented detectors of NOvA and thus provides a powerful technique for event discrimination.
Introduction
Classifying images into a small number of categories is a common task in scientific and industrial fields. In particle physics, this task usually involves interpreting particle detector data to determine the type of particles, interactions, or decays present. Given the sheer volume of information that can be collected, the data is often first reduced to a set of derived quantities by running algorithms that pull out key features: clusters, tracks, showers, jets, etc. While this form of lossy compression is acceptable in some applications, it is worth exploring whether a classification scheme that uses all of the available information is feasible, even in cases where the data volume is high.
In this article we describe such a classification scheme developed to categorize neutrino scattering events recorded in the NOνA detectors. In the Library Event Matching (LEM) algorithm, a trial event of unknown type is compared to a large number of known "library" events to find those events that are most similar to the trial event. The properties of those best-matched library events reveal the likely nature of the trial event. A distinguishing feature of LEM is that the comparisons are made using the energy depositions directly, to avoid any information loss from calculating higher-level variables. This fundamental philosophy of LEM was developed within the MINOS collaboration for its own neutrino event categorization needs [1,2,3,4]. The LEM version described in this article has substantial differences from its predecessor, many of which are motivated by the higher spatial resolution of the NOνA detectors.
While we use NOνA as our case study, the approach discussed is generalizable and could be usefully applied to any highly segmented detector, from hadron calorimeters determining jet multiplicity to cubic kilometer arrays collecting neutrino interactions.
(From the caption of Figure 2, panels (b) and (c): in panel (b) the upper track is due to a proton, and this event shows that the two showers from π 0 → γγ are not always distinct; panel (c) is a ν µ CC event, with the usual tell-tale long, straight muon track. Note that the axis ranges are approximately doubled for this panel relative to the first two.)
Neutrinos produced by the NuMI beamline at Fermilab [6] are observed by a Near Detector on the Fermilab site and by a Far Detector of identical construction located 810 km downstream in Ash River, Minnesota. For the purposes of this article, the neutrino oscillation mode of interest is ν µ → ν e , and the goal of the classification algorithm is to obtain a sample of electron neutrino interactions in the Far Detector with the highest possible efficiency and purity. The NOνA detectors are constructed from long PVC cells filled with scintillator-doped mineral oil. Each of the Far Detector's 344,064 cells is 16 m long with rectangular cross section 4 cm × 6 cm. A loop of wavelength-shifting fiber runs the length of each cell, with both ends of the fiber terminating at one pixel of a 32-pixel APD array. The body of the 14-kiloton detector consists of 896 layers, or "planes", each with 384 cells. Each plane is 16 m × 16 m square, and the depth of the detector along the beam direction is 60 m. Alternate planes are aligned vertically and horizontally so that three-dimensional information can be obtained through combination of the two "views". The detector has unprecedented granularity for its size, with one radiation length (38 cm) extending over many cells, to give a detailed view of neutrino-induced electromagnetic showers. Figure 1 shows a cut-away diagram of the detector's construction.
The signal for the ν µ → ν e oscillation analysis in NOνA is ν e charged-current (CC) scattering, which yields a high-energy electron in the final state that allows one to tag the incident neutrino's flavor. In the 1 to 3 GeV energy range of NOνA, this electron will be accompanied, with similar probabilities, by a proton (quasi-elastic scattering), a nucleon plus a pion (resonant scattering), or a richer hadronic shower (deep inelastic scattering). While nuclear effects blur these crisp definitions, these three scattering types are useful for conveying the variety of shapes that signal events in NOνA can take. The ∼1 GeV electron in the final state produces an electromagnetic shower in the detector that has a width of a few cells and runs longitudinally an average distance of 2.5 m (40 planes). Figure 2a shows a simulated ν e CC interaction in the NOνA Far Detector.
The primary mis-identification background comes from neutral-current (NC) interactions, particularly those where the recoil hadronic system contains a π 0 . The π 0 decays quickly to two photons, each of which induces an electromagnetic shower that is essentially indistinguishable from an electron-induced shower. NC π 0 events, taken as a whole, look sufficiently different from signal ν e CC events that we can reject them well, but the differences are sometimes obscured: • The presence of two electromagnetic showers, rather than one, can reveal a π 0 in the final state. However, if one of the showers has low energy or overlaps the other in the detector, it can be missed.
• Photon-induced showers are separated from the neutrino interaction point due to the distance traveled by the photon prior to its conversion. This gap is a tell-tale sign of a photon, but in some cases the gap will be too small to resolve. The conversion length in NOνA is 50 cm.
(Figure 3 caption: the ν e signal to be identified by LEM is shown in red; the neutral current, ν µ charged current, and intrinsic beam ν e charged current components are blue, black, and magenta respectively.)
• Photon-induced showers begin with two particles (an electron/positron pair) rather than one, but these cases can end up indistinguishable given the energy resolution of the detector.
• The energy lost to the outgoing neutrino in NC scattering leads to reconstructed energies lower than those of signal events. However, interactions from a sufficiently high-energy neutrino or with a large energy transfer can fall in the signal region of 1 to 3 GeV reconstructed energy.
Figure 2b shows a simulated NC event with a π 0 . Additional background comes from ν µ CC scattering, which produces a muon in the final state. The muon leaves a long track of activity in the detector with a characteristic energy deposition per unit pathlength. These are readily removed from the sample due to the clear muon track except in cases where the muon is low in energy or is lost amongst other activity. In these cases, the background is similar to NC interactions, with neutral pions playing the same role. Figure 2c shows a ν µ CC example.
The NuMI beam also includes a 2% contamination of ν e . These ν e interact identically to the ν e from oscillations and thus constitute a background to the ν µ → ν e oscillation measurement. However, their rate is low and their energies are somewhat higher. Figure 3 illustrates the energy differences among all the event classes before any selection cuts have been applied.
Since the ν e CC signal falls within a known energy range, we can safely remove lower and higher energy events up front. For all figures and tables that follow, we require events to have reconstructed visible energies between 0.5 GeV and 4 GeV.
Library Event Matching concept
At the heart of the LEM algorithm is the comparison of each unknown trial event to a large number of known library events, with the comparisons based on low-level information collected by the detector. For NOνA, this means using the calibrated energy depositions in all the detector cells directly rather than forming higher-level objects such as showers and tracks from those.
Once the very best matches are found (here, the best 0.0001% of all library events), their known properties are used to estimate the properties of the trial event. In the simplest version of LEM, the fraction of the best matches that are signal events can be used as the discriminant. Appendix A.1 discusses the relationship between LEM and other machine learning techniques.
The matching metric: motivation
When comparing two events, a metric is needed to quantify how similar they are. It is instructive to look at the MINOS case briefly, as the situation there is somewhat simpler [1,2,3,4].
The MINOS detector has a segmented structure analogous to that of the NOνA detector, but the effective spatial resolution for events of interest is significantly lower. A ν e CC signal event in MINOS involves only a couple dozen active "strips" (the analogue of NOνA's cells), and these active strips are clustered in a relatively compact pattern. Thus, two events with the same underlying particle kinematics have a good chance of having identical (or near-identical) arrangements of active strips. The readout electronics report the number of photoelectrons detected in each active strip. Since this charge measurement suffers from shot noise (typical charge: ∼8 photoelectrons), strips with identical energy depositions may report different charges. The level of difference is governed by Poisson statistics.
These details guided the form of the matching metric used by MINOS, which can be thought of as the likelihood L that the two events' recorded charges represent the same underlying energy depositions. In its definition, a i is the number of photoelectrons registered by the i th strip of event A, b i is the same for event B, P(n|λ) is the Poisson probability of observing n given mean λ, and the sum defining log L runs over all strips active in at least one of the events. A higher log L for a pair of events means a better match. Before L is calculated, the events, which in general occur in different parts of the detector, are spatially aligned by shifting them so that their charge-weighted mean strip positions, rounded to the nearest strip, overlap.
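As an illustration of this strip-by-strip Poisson comparison, the sketch below marginalizes numerically over the unknown common mean λ for each strip; this integral form, the grid bounds, and the function names are assumptions of the sketch rather than the exact MINOS definition.

```python
# Sketch of a MINOS-style match likelihood: for each strip, the probability that the two
# observed photoelectron counts share a common (unknown) Poisson mean, integrated
# numerically over that mean. Illustrative only; not the MINOS or NOvA production code.
import numpy as np
from scipy.stats import poisson

def log_match_likelihood(charges_a, charges_b, lam_max=100.0, n_grid=2000):
    lam = np.linspace(1e-3, lam_max, n_grid)
    log_l = 0.0
    for a_i, b_i in zip(charges_a, charges_b):      # aligned strips active in either event
        integrand = poisson.pmf(a_i, lam) * poisson.pmf(b_i, lam)
        log_l += np.log(np.trapz(integrand, lam))
    return log_l

# Similar charge patterns score higher (less negative) than dissimilar ones:
print(log_match_likelihood([8, 9, 7], [9, 8, 7]) > log_match_likelihood([8, 9, 7], [25, 1, 0]))
```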
In the MINOS metric L, displaced energy depositions in the two events do not get their charges directly compared. To obtain good matches for a trial event, the library must be large enough to span minor variations in active strip positions for nominally equivalent events. This is possible in MINOS given the limited spatial resolution of the detectors for ν e CC events. That is, the library can be expected to give reasonable coverage of all possibilities. Requiring exact charge agreement across the ∼20 active strips, though, would be combinatorically overwhelming. The Poisson factors take care of this, with acceptably different charges able to contribute appropriately to the match score.
The NOνA detectors are significantly more finely-grained than those of MINOS. This makes event discrimination easier in principle since more details are visible, but it makes the above matching metric impractical. It is much less likely that "equivalent" activity in the trial and library events will fall on the same cells. What is needed is a matching metric that rewards activity in nearby cells without requiring them to lie directly on top of one another. A library event identical to the trial event should still be a perfect match, but events with similar charges offset by a cell or so should still score well.
The metric we use draws its motivation from electrostatics. Two Coulomb charge distributions of similar shape, but with opposite signs, will have a low electrostatic potential energy when overlaid and examined together, as the attraction between the opposite signed charges counters the internal repulsion of the like-signed charges. Two overlaid charge distributions with dissimilar shape suffer the internal repulsion but lack the benefit of mutual attraction, leading to a large potential energy. Given the electrostatic analogue to what follows, we use "energy" to refer to the LEM match score for the remainder of the article unless otherwise stated. Lower energies correspond to better matches.
The match energy is defined as
$$E = E_A + E_B + E_{AB},\tag{2}$$
where E_A is the self-energy (repulsion) of event A's charges, E_B is the self-energy of event B's charges, and E_AB is the (negative) energy due to the A/B attraction. The charges are taken to be the recorded energy depositions in the NOνA cells. Treating the electrostatic analogue as exact for a moment, the self-energy terms are given by
$$E_A = \frac{1}{2}\sum_{i}\sum_{j}\frac{a_i a_j}{r_{ij}}, \qquad E_B = \frac{1}{2}\sum_{i}\sum_{j}\frac{b_i b_j}{r_{ij}},\tag{3}$$
with a_i (b_i) the recorded deposition in the i th cell of event A (event B) and with r_ij the distance between cells i and j. The r_ij = 0 case is handled again with an electrostatic analogue by distributing all charges uniformly across their individual cells.
The interaction term is given by
$$E_{AB} = -\sum_{i}\sum_{j}\frac{a_i b_j}{r_{ij}}.\tag{4}$$
Before evaluating this sum, the events are globally aligned with one another according to a separately reconstructed interaction vertex. (Alignment by charge-weighted mean cell position was also studied and gives similar classification performance.) A perfect match, in which events A and B have identical depositions in identical cell positions, would yield E = 0. A poorly matched pair with charges far away from one another will have a large energy. Eq. (4) can be recast in terms of one set of charges embedded in the field of the other:
$$E_{AB} = -\sum_{j} b_j V_j, \qquad V_j = \sum_{i}\frac{a_i}{r_{ij}}.\tag{5}$$
The advantage of this formulation is that V can be precalculated for each trial event, along with the self-energies of the trial and library events. When matching against a large number of library events using (5), the complexity is linear in the number of charges rather than requiring a double sum over both trial and library charges.
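A direct, unoptimized evaluation of the electrostatic-analogue energy can be sketched as follows; the fixed effective self-distance used to regularize the r_ij = 0 term stands in for the uniform-charge-distribution treatment described above, and all names are illustrative.

```python
# Sketch of the electrostatic-analogue match energy. Cells are given as
# (plane, cell, charge) triples; the r = 0 self-term is regularized with a fixed
# effective distance rather than the uniform-charge treatment in the text.
import numpy as np

R_SELF = 0.5   # assumed effective distance for a charge interacting with itself

def inv_r(p1, c1, p2, c2):
    r = np.hypot(p1 - p2, c1 - c2)
    return 1.0 / (r if r > 0 else R_SELF)

def pair_energy(depos_a, depos_b, sign):
    return sign * sum(qa * qb * inv_r(pa, ca, pb, cb)
                      for pa, ca, qa in depos_a for pb, cb, qb in depos_b)

def match_energy(event_a, event_b):
    e_a = 0.5 * pair_energy(event_a, event_a, +1)   # self-energy (repulsion) of A
    e_b = 0.5 * pair_energy(event_b, event_b, +1)   # self-energy of B
    e_ab = pair_energy(event_a, event_b, -1)        # A/B attraction
    return e_a + e_b + e_ab                         # identical events give E = 0

event = [(0, 0, 1.0), (1, 1, 2.0), (2, 1, 1.5)]
print(match_energy(event, event))                   # ~0 for a perfect match
```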
The matching metric in NOνA
While the NOνA matching metric is inspired by electrostatics, there is no reason to expect that the precise form above will yield the best sensitivity. We incorporate the following generalizations.
• Above, r i j is calculated as the Euclidean distance in terms of the number of planes ∆p i j and number of cells ∆c i j . However, NOνA events are boosted forward and cover many planes longitudinally but relatively few cells transversely, so we assign different relative importance to separations in the two directions.
• The r −1 falloff with distance is generalized to r −α .
• The importance of larger charges relative to smaller ones is adjusted by raising all charges to a power β.
The resulting form of the matching metric still follows Eq. (2), but the charges now enter raised to the power β, and the 1/r_ij factors are replaced by a transfer matrix T_ij that applies the r^-α falloff to a weighted combination of the plane separation ∆p_ij and the cell separation ∆c_ij; the interaction term is then written in terms of a field U_i obtained by summing one event's β-scaled charges against T_ij, in analogy to Eq. (5). The electrostatics version is recovered by weighting plane and cell separations equally and setting α = β = 1. In the optimized metric, the first two parameters (the plane and cell weights) validate the intuition that transverse differences should be considered more significant than longitudinal ones. The third parameter, α = 1/4, specifies a 1/⁴√r falloff with distance, slower than the electrostatic analogue. For β, note that the simple presence or absence of activity in a cell conveys information regardless of its charge. Having 0<β<1 moves the metric towards this binary "on/off" interpretation and away from a charge-proportional weighting.
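The sketch below illustrates how the generalized ingredients (plane and cell weights, the r^-α falloff, and the charge exponent β) can be combined with the precomputed-field idea of Eq. (5); the default parameter values are placeholders, not the optimized NOνA values, and the self-term regularization is again an assumption of the sketch.

```python
# Sketch of the generalized metric: the trial event's field U is tabulated once on a
# (plane, cell) grid, so each library comparison needs only one pass over that library
# event's charges. Parameter values below are placeholders, not the NOvA ones.
import numpy as np

def transfer(dp, dc, w_plane=1.0, w_cell=1.0, alpha=0.25, r_self=0.5):
    r = np.hypot(w_plane * np.asarray(dp, float), w_cell * np.asarray(dc, float))
    return np.where(r > 0, r, r_self) ** (-alpha)

def precompute_field(trial, shape, beta=0.5, **kw):
    planes, cells = np.indices(shape)
    field = np.zeros(shape)
    for p, c, q in trial:                     # one pass over the trial event's charges
        field += (q ** beta) * transfer(planes - p, cells - c, **kw)
    return field

def interaction_energy(field, library_event, beta=0.5):
    # Generalized E_AB: minus the sum of the library event's beta-scaled charges
    # evaluated in the precomputed trial-event field.
    return -sum((q ** beta) * field[p, c] for p, c, q in library_event)
```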
The library
The library consists of 77M simulated neutrino events, of which 18M are signal ν e CC events, 29M are background ν µ CC and NC events, and 30M are π 0 -enriched NC background events. Each trial event that LEM classifies is compared to these 77M events to find the 1,000 library events that are most similar to it, as quantified by the metric above. 2 Figure 4 shows an example trial event along with its event potential U and its best-matched library event.
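Conceptually, finding the 1,000 best matches is a top-k selection over the library, which a bounded heap keeps memory-light; the sketch below is illustrative only, since the production matching is heavily optimized, as discussed in Section 7.

```python
# Sketch of the matching loop: keep the 1,000 lowest-energy (best) library matches.
import heapq

def best_matches(trial_energy_fn, library, n_keep=1000):
    heap = []                                    # max-heap via negated energies
    for lib_id, lib_event in enumerate(library):
        e = trial_energy_fn(lib_event)           # e.g. E_A + E_B + E_AB from above
        if len(heap) < n_keep:
            heapq.heappush(heap, (-e, lib_id))
        elif -e > heap[0][0]:                    # better (lower energy) than the worst kept match
            heapq.heapreplace(heap, (-e, lib_id))
    return sorted((-neg_e, lib_id) for neg_e, lib_id in heap)   # best match first
```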
The library events are generated ahead of time using the full NOνA Monte Carlo simulation chain including realistic neutrino flux, cross sections, and detector components. The flux is calculated using a FLUKA/FLUGG implementation of the beamline elements [7], the neutrino interactions are simulated by GENIE [8], and particle propagation through the detector geometry is handled by GEANT4 [9]. Simulated energy depositions in the liquid scintillator are converted into expected signals by NOνA electronics and data acquisition simulation code. The registered signals are corrected for light attenuation in the cells' fibers using standard NOνA calibration procedures.
NC events containing neutral pions are the dominant misidentification background owing to the electromagnetic showers from π 0 → γγ. Thus, we supplement the base background library sample with a π 0 -enriched library sample. To build this enriched sample, we apply a cut that selects out only those neutral current events with a π 0 present in the final state as reported by GENIE.
The library events are generated according to the expected ν µ flux (for background) or a 100% ν µ → ν e transmutation (for signal), without regard to any actual probabilities for neutrino flavor change. Oscillations are introduced into the library later by event weighting. This is discussed in Sec. 5 below. Appendix A.3 describes the oscillation probabilities used.
While increasing the library size beyond the 77M events would provide incremental improvement in classification performance, we observe that these gains enter logarithmically with the number of library events once the library is sufficiently large. In an earlier version of the algorithm, we found that doubling the library size provided only 1% gain in physics sensitivity. In light of the computational requirements discussed in Section 7, additional library events are not worthwhile for our application.
Event flipping
To good approximation, flipping an event transversely in one or both views produces an equally valid event. We use such flipping to effectively quadruple the size of the library when the matching is performed. Each library event is used in each of the four possible configurations, and the best of the four is retained. This symmetry is not quite perfect in the NOνA detectors. Attenuation in the readout fibers leads to subtly different charge resolutions and threshold effects on transversely opposing sides of an event, and NuMI neutrinos at the Far Detector enter at a 3° upwards angle. Nevertheless, the best-scoring matches come from the four possible flipped configurations with nearly equal probability: 26% from unflipped events, 50% from events with either one of the two views flipped, and 24% from events with both views flipped.
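Generating the four configurations is straightforward; the sketch below assumes an event is stored as per-view lists of (plane, cell, charge) tuples with n_cells cells across a view, and match_energy_2view is a hypothetical two-view matching function. None of these names are the NOνA event model.

```python
def flip_configurations(event, n_cells=384):
    # `event` = {"x": [(plane, cell, charge), ...], "y": [...]}; the layout
    # and n_cells value are assumptions of this sketch.
    def flip(hits):
        return [(p, n_cells - 1 - c, q) for (p, c, q) in hits]

    yield {"x": event["x"],       "y": event["y"]}        # unflipped
    yield {"x": flip(event["x"]), "y": event["y"]}        # x view flipped
    yield {"x": event["x"],       "y": flip(event["y"])}  # y view flipped
    yield {"x": flip(event["x"]), "y": flip(event["y"])}  # both views flipped

# best = min(match_energy_2view(trial, cfg) for cfg in flip_configurations(lib_event))
```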
Decision tree
As library size increases, the fraction of an event's best matches that are truly signal tends toward the probability that the trial event itself is signal. Further, all of the information available in the trial event is used when determining this probability. It is in this sense that LEM is optimal.
For a library of finite and practical size, though, this signal fraction alone does not contain the full information extractable. Other statistics constructed from the details of the best matches may, for example, indicate that the matches are drawn from an area of sparse library coverage and are thus less reliable. The most powerful approach given a finite library is to construct several statistics describing the matches and to feed these into one of the standard multivariate analysis techniques to extract the final classifier. In LEM, five variables are constructed from the 1,000 best library matches and are used as inputs to a decision tree, along with the calorimetric energy of the trial event as a sixth input.
Weighted fraction of signal matches
The basic quantity measuring what fraction of the best matches are signal events can be improved upon by weighting up the truly best matches over the lesser ones when calculating the signal fraction. We use a weighting w'_n governed by two parameters, λ and γ, where n is the match index, E_n is the energy of the n-th best match for the trial event, and E_1000 is the energy of the final (1000th) best match. The optimized values used for λ and γ in NOνA are given in Eq. (17), with λ = 6.67. The typical ratio of weights w'_1000/w'_1 is ∼0.1%, indicating that the most important matches are captured within the first thousand.
In practice, the weight must also include the oscillation probabilities alluded to earlier, w_n = P^osc_n w'_n, where P^osc_n is the oscillation probability of match n, as described in Appendix A.3.
All sums below that are indexed by n run over the match list. For notational convenience we also define W ≡ Σ_n w_n. This weighting scheme is used for all five quantities formed from the best-match list. The first is the weighted fraction of signal matches,
$$f_{\rm sig} = \frac{1}{W}\sum_{n\,\in\,{\rm sig}} w_n,$$
where this sum includes only those terms due to signal matches.
Mean hadronic y
Signal events in which the outgoing electron carries only a small fraction of the incident neutrino's energy will look very much like NC background events. The kinematic quantity y (or rather, 1 − y) measures this fraction: 1 − y = K_e/K_ν, where we've used K_e and K_ν as the outgoing and incoming lepton energies to avoid confusion with the match energies E. If a trial event matches well to signal events with high y, this can suggest that the trial event is in fact a high-y NC event. A second input is the mean y for the best matches:
$$\langle y \rangle = \frac{1}{W}\sum_n w_n\, y_n.$$
Mean matched charge fraction
Matched charge fraction is an independent measure of the quality of the library matches, separate from the match energy. For each trial/match pair, it is the quantity of charge that has a counterpart in identical cells of the two events, divided by the total charge in the two events. The weighted average of the matched charge fraction Q_n over all of the matches, (1/W) Σ_n w_n Q_n, yields the next input.
Match energy difference
This quantity measures whether the signal or background matches are the better matches on average. It is the difference of the weighted mean energy of each class of matches:
$$D = \frac{\sum_{n\,\in\,{\rm sig}} w_n E_n}{\sum_{n\,\in\,{\rm sig}} w_n} - \frac{\sum_{n\,\in\,{\rm bkg}} w_n E_n}{\sum_{n\,\in\,{\rm bkg}} w_n}. \qquad (24)$$
Enriched fraction
The final match list quantity, similar in construction to f_sig, is the weighted fraction of signal matches present among the signal and π⁰-enriched matches (i.e., excluding the non-enriched background):
$$f_{\rm enr} = \frac{\sum_{n\,\in\,{\rm sig}} w_n}{\sum_{n\,\in\,{\rm enr}} w_n + \sum_{n\,\in\,{\rm sig}} w_n}. \qquad (25)$$
Total calorimetric energy
NC backgrounds skew heavily to low visible energy thanks to the energy removed by the exiting neutrino. The sum of all depositions {a_i} recorded in the trial event, Σ_i a_i, is included as a final input so that the classifier knows the prior expectations of signal and background.
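Taken together, the six classifier inputs are simple weighted sums over the match list. A sketch follows; the record field names are assumptions made for illustration, and the background class used in the energy difference D is taken here to be all non-signal matches.

```python
import numpy as np

def decision_tree_inputs(matches, trial_charges):
    # `matches`: records for the 1,000 best library matches; field names
    # (weight, energy, is_sig, is_enr, had_y, q_frac) are invented for this
    # sketch. `weight` is w_n, already including the oscillation probability.
    w   = np.array([m.weight for m in matches])
    E   = np.array([m.energy for m in matches])
    sig = np.array([m.is_sig for m in matches], dtype=bool)
    enr = np.array([m.is_enr for m in matches], dtype=bool)  # pi0-enriched bkg
    y   = np.array([m.had_y for m in matches])
    Q   = np.array([m.q_frac for m in matches])
    W   = w.sum()
    bkg = ~sig

    f_sig  = w[sig].sum() / W                                  # weighted signal fraction
    mean_y = (w * y).sum() / W                                 # mean hadronic y
    mean_Q = (w * Q).sum() / W                                 # mean matched charge fraction
    D = ((w[sig] * E[sig]).sum() / w[sig].sum()
         - (w[bkg] * E[bkg]).sum() / w[bkg].sum())             # match energy difference, Eq. (24)
    f_enr  = w[sig].sum() / (w[enr].sum() + w[sig].sum())      # enriched fraction, Eq. (25)
    e_cal  = float(np.sum(trial_charges))                      # total calorimetric energy
    return f_sig, mean_y, mean_Q, D, f_enr, e_cal
```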
Choice of a decision tree, and figure of merit
There are many multivariate techniques capable of combining these six input quantities into a single classifier output. We investigated artificial neural networks, support vector machines, and decision trees. An ensemble decision tree yielded the best performance of the approaches tried. One problem with other techniques is that the figure of merit (f.o.m.) that, for example, artificial neural network training aims to minimize is the mean-squared error of the classifier variable c, i.e., up to normalization,
$$\sum_{\rm sig} (1 - c)^2 + \sum_{\rm bkg} c^2,$$
where the sums run over the signal and background training samples. However, the figure of merit relevant to an experiment measuring the magnitude of a signal excess s over a background b with Poisson fluctuations is
$$\frac{s}{\sqrt{s+b}}.$$
If events are binned according to, say, the classifier output, the generalization is simply to sum in quadrature the significances in the individual bins:
$$\sqrt{\sum_i \frac{s_i^2}{s_i + b_i}}. \qquad (28)$$
While training a decision tree classifier, the sample is therefore divided at each step into subsamples 1 and 2 so as to maximize the quadrature sum of the two subsample significances, aligning the training criterion directly with the experimental figure of merit.

The final classifier output is a voting ensemble of 1,000 decision trees, each trained on a randomly chosen half of the full training sample. The ensemble technique protects against overtraining, a feature that we confirmed by evaluating the classifier performance on independent control samples.

Figure 5 shows the distribution of the six input variables for all event classes in the NOνA ν_µ → ν_e analysis. Figure 6 shows the final LEM classifier output. Figure 7 shows the signal efficiency and purity obtained with various cuts on the LEM output. All curves come from Monte Carlo simulation of the expected NOνA data set. We choose the cut on the LEM output variable that maximizes the figure of merit in Eq. (28). When applying LEM in a full experimental setting, one can fit the output distribution to gain additional discrimination power. Table 1 shows the expected number of signal and background events selected by the optimum LEM cut. The signal efficiency is 55% for a background mis-identification rate of 2.0%. The muon track of ν_µ CC events keeps their mis-identification rate particularly low. Background beam ν_e events are selected with a lower efficiency than signal ν_e events, which is possible because of the different underlying energy spectra of the two classes. As there is no absolute metric by which to judge the performance of the LEM classification algorithm described here, we note simply that the performance shown is excellent for the physics goals of NOνA [5].

Table 1: Number of events expected in each event category initially and again after an optimal LEM cut, assuming a nominal 3-year NuMI exposure of 1.8 × 10²¹ protons-on-target. The background is shown both as a total and broken down into NC, ν_µ CC, and intrinsic beam ν_e CC components. The bottom row shows the efficiencies for selecting events in each category. The "no selection" row and the efficiencies derived from it count only those events with reconstructed visible energy between 0.5 GeV and 4 GeV.
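For reference, the binned figure of merit of Eq. (28) used for the cut optimization is a one-liner; the event counts in the example below are purely illustrative, not the Table 1 values.

```python
import numpy as np

def binned_fom(sig_counts, bkg_counts):
    # sqrt( sum_i s_i^2 / (s_i + b_i) ), the quadrature sum of Eq. (28)
    s = np.asarray(sig_counts, dtype=float)
    b = np.asarray(bkg_counts, dtype=float)
    tot = s + b
    terms = np.divide(s * s, tot, out=np.zeros_like(s), where=tot > 0)
    return float(np.sqrt(terms.sum()))

# A single bin with s = 50 and b = 10 gives 50/sqrt(60) ≈ 6.45; splitting the
# same events into two well-separated bins can only increase the figure of merit.
print(binned_fom([50.0], [10.0]))
print(binned_fom([45.0, 5.0], [2.0, 8.0]))
```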
Speed
While each individual energy calculation can be performed very quickly, classifying a single event takes some time given the large size of the library. For the NOνA application, a single event must be treated in a second or so, which is the time scale required by other steps already performed during NOνA event processing. Without specialized hardware to run the inner loop, techniques to manage the LEM matching time focus on reducing the number of energies that need to be calculated.
We achieve a significant speed-up by introducing a library "index". If trial event A matches well to library event B, A will likely match well to other library events that are, themselves, good matches to B. Similarly, if A and B match poorly, then A will likely match poorly to library events similar to B.
A library index is formed by drawing 10,000 events uniformly from the full library and matching each of these to the full library. For each index event, a list of its 1,000,000 best-matched library events is saved. This process happens ahead of time, at library creation. When a trial event is classified, it is compared first to the 10,000 index events to find the single best-matching index event. The trial event is then compared only to the 1M sibling events of that index event, reducing the total number of energies calculated per trial event from 77,000,000 to 1,010,000, a significant speed improvement that takes the per-trial matching time from 97 s down to 1.7 s on a 2.3 GHz AMD Opteron processor. Empirically, we find that 85% of the trial event's "true" one thousand top matches are captured with this indexed approach, and we find no noticeable degradation in the physics performance.
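The two-stage search can be sketched in a few lines; the names and data layout below (index_events, siblings, energy) are illustrative placeholders, not the NOνA implementation.

```python
import heapq

def best_matches(trial, index_events, siblings, energy, n_keep=1000):
    # Stage 1: find the single best-matching index event.
    best_i = min(range(len(index_events)),
                 key=lambda i: energy(trial, index_events[i]))
    # Stage 2: score only that index event's precomputed sibling list and
    # keep the n_keep lowest-energy matches.
    scored = ((energy(trial, lib), lib) for lib in siblings[best_i])
    return heapq.nsmallest(n_keep, scored, key=lambda t: t[0])
```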
Memory
The speed optimization above is what allows the use of a 77M event library. However, such a large library strains memory resources. The full library is too large (∼53 GB each for the library and index) to read from disk for each event, yet it is larger than the typical per-core memory allocation on grid computing nodes. Thus, the library is converted from its original high-level format into the memory representation used by a running job. This representation includes the self-energy of each event. The conversion inflates the library slightly to 131 GB, but the advantage is that it can now be shared between running processes. Each parallel matching job uses the mmap() system call to make the contents of this file visible in its address space. The mapping is marked read-only, so the kernel shares the pages between all the running processes. For example, on a 64-core server, the memory requirement to run 64 matching jobs is still only 131 GB, equivalent to an unshared 2 GB per core. In case of memory pressure, the kernel will discard pages, knowing that they can be retrieved from disk (that is, the library file essentially acts as swap space), although this will significantly impact performance.
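The read-only page-sharing scheme described above can be sketched in Python's mmap module; the record layout assumed here (plain little-endian doubles) is an illustration only, since the real on-disk format is not described in the text.

```python
import mmap
import os
import struct

def open_library(path):
    # Map the preprocessed library file read-only: the kernel keeps a single
    # copy of the pages in memory no matter how many jobs map the same file,
    # and can drop pages under memory pressure and re-read them from disk.
    fd = os.open(path, os.O_RDONLY)
    try:
        return mmap.mmap(fd, 0, prot=mmap.PROT_READ)  # length 0 maps whole file
    finally:
        os.close(fd)

def read_double(buf, offset):
    # Hypothetical accessor for one stored value in the mapped file.
    return struct.unpack_from("<d", buf, offset)[0]
```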
Other information available in the match list
In addition to signal-or-background classification, the detailed truth information available in the list of best matches allows other information about the trial event to be inferred. One could extract probabilities for different interaction modes, the inelasticity, and so on, without requiring any independent reconstruction. An application that has been pursued is the estimation of the incident neutrino energy for ν e CC events. Simply by averaging the true neutrino energies of the best signal library matches and calibrating the resulting estimator, we achieve an energy resolution of 8.8% on signal events selected by the oscillation analysis, competitive with other energy estimators in NOνA.
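The energy estimator amounts to averaging truth information over the signal matches and applying a calibration. A minimal sketch follows; the field names and the identity calibration are placeholders, and the text does not specify whether the average is weighted.

```python
def estimated_nu_energy(matches, calibrate=lambda e: e):
    # Average the true neutrino energies of the best signal library matches,
    # then apply a calibration function fitted elsewhere.
    energies = [m.true_nu_energy for m in matches if m.is_sig]
    return calibrate(sum(energies) / len(energies)) if energies else None
```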
Summary
The Library Event Matching algorithm compares input trial events to a large library of known events using all the information available, making LEM an optimal classifier given a sufficiently large library. The NOνA implementation of LEM has demonstrated excellent performance in separating ν_e signal from the key backgrounds, and a few simple optimizations have maintained practical computational requirements despite the large number of library events used. Within the NOνA context, the LEM technique has potential applications from reconstruction of the hadronic system to the event energy measure described above. More broadly, LEM can be applied to completely different particle detectors or imaging systems in an array of fields and industries, wherever one needs to classify fine-grained images of objects whose visual characteristics vary in known ways.
For nearby cell pairs, including the r_ij = 0 case, the transfer matrix is evaluated by averaging over the cell areas, consistent with distributing each cell's charge uniformly across the cell:
$$T_{ij} = \frac{1}{A_i A_j}\int_{\mathrm{cell}\,i} dx\, dy \int_{\mathrm{cell}\,j} du\, dv\;\; r_{ij}^{-\alpha}(x,y;u,v), \qquad (A.3)$$
where (x, y) and (u, v) scan over the areas of cells i and j (with A_i and A_j those areas) and where r_ij here is a generalization of the discrete distance used in the main text to continuous positions within the cells.
For more distant pairs the simplified form of the transfer matrix given in Eq. (9) is sufficient.
Appendix A.3. Neutrino oscillation weights
The retained matches are weighted according to Eq. (19), which includes the probability for flavor oscillation. The probabilities used are leading-order expressions; for the signal, P(ν_µ → ν_e) = sin²θ₂₃ sin²2θ₁₃ sin²(1.27 Δm²₃₂ L/E). Second-order effects can pull the probabilities higher or lower, making this weighting a reasonable middle ground for the library. The library is also made devoid of intrinsic ν_e from the NuMI beam by setting that survival probability to zero. The overall prefactor on the ν_µ → ν_e (signal) line relative to the background lines actually does not enter in practice since the signal, background, and π⁰-enriched background classes are scaled to have equal total weight in the library. | 2015-04-13T23:45:27.000Z | 2015-01-05T00:00:00.000 | {
"year": 2015,
"sha1": "a50a9cf5031eaebefc5ecc2bee1ece4b0e0da249",
"oa_license": "elsevier-specific: oa user license",
"oa_url": "http://manuscript.elsevier.com/S0168900215000431/pdf/S0168900215000431.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "a50a9cf5031eaebefc5ecc2bee1ece4b0e0da249",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
255729528 | pes2o/s2orc | v3-fos-license | Static and Dynamic Impacts of Internet Use on Self-Rated Health among Adults in China: A Hybrid Model Analysis Based on National Panel Survey Data
The widespread use of the Internet has a substantial impact on people's livelihoods, including health-related factors. Whether this impact is beneficial or harmful to people's health remains unclear. Some cross-sectional studies found static differences in health status between Internet users and nonusers, whereas panel data studies found dynamic changes in individuals' health over time, making the issue, including its causality, controversial. Therefore, we aimed to clarify the association between the use of the Internet and people's health from both static and dynamic aspects. Data were obtained for 46,460 adults from the China Family Panel Studies in 2014, 2016, and 2018. The analysis applied a logistic regression hybrid model with self-rated health as the dependent variable and Internet use as the main independent variable. In the hybrid model, time-varying independent variables were decomposed into between-individual (static) differences and within-individual (dynamic) changes over time. The results indicated that the between-individual coefficient of Internet use was significantly positive, but the within-individual coefficient was not; that is, Internet users felt healthier than nonusers from the static aspect, but starting to use the Internet did not increase self-rated health from the dynamic aspect. These findings suggest that care is needed not to confuse static differences with dynamic changes when considering the causality between Internet use and self-rated health.
Introduction
Recently, remarkable advances in Internet technology have been accompanied by rapid increases in the number of Internet users. In China, there were 1.03 billion Internet users in December 2021, an increase of 0.26 billion since December 2017. During these 4 years, the annual growth rate of Internet users was 7.5% and the penetration rate in the population rose from 55.8% to 73.0% [1]. Such widespread Internet use has a large impact on people's livelihoods, including the health-related factors, because it enables an easier access to health information and health care, such as making doctor's appointments, purchasing medication, and even receiving medical consultations online [2,3].
The Chinese government has attempted to improve the health level of the public through the Internet. A blueprint for improving the population's health was released by the state council, the People's Republic of China, in 2016, entitled "The Healthy China 2030" [4]. According to The Healthy China 2030, several policies have been proposed via the Internet to enhance the quality of medical services, information, and other aspects. Such policies aimed to (1) promote the integration of health, retirement, tourism, the Internet, and others; (2) develop Internet-based health services; (3) establish and promote a standard for "Internet + health care" services; (4) develop a smart information technology for medical use; and (5) comprehensively expand the application of large-scale health care data in the governance of industry, and for clinical and scientific research, public health, and education.
In addition, regular Internet users are more likely than non-regular Internet users to live a healthier lifestyle [5], and frequent Internet users have shown better self-rated health than less frequent users [6]. However, Internet use is also associated with a number of side effects. Some cross-sectional studies have found that heavy Internet users engage in fewer health-promoting behaviors and more risky behaviors than do light Internet users [7] and have poorer self-rated mental health [8]. Further, users addicted to the Internet have shown worse physical, mental, and social health [9]. Meanwhile, a study found that the increasing frequency of using the Internet had no significant effect on one's depression levels using a panel fixed-effects model (an estimator of the panel fixed-effects model is also known as a within-estimator) [10]. Therefore, the association between the use of the Internet and the users' health remains controversial. There is clearly a need for further research regarding the relationship between the use of the Internet and an individuals' health status.
It is well known that cross-sectional analyses can reveal differences between survey subjects but cannot detect causality. By contrast, panel data analysis, especially using a fixed-effects model, can handle changes in variables within survey subjects over time, which approaches a causal relationship more closely than cross-sectional analyses. However, fixed-effects models fully exclude the components of differences between survey subjects from the analysis, which makes any association between cross-subject variation and the outcome undetectable [11]. Therefore, the purpose of this study was to clarify the association between Internet usage and health among the public in China from two aspects: "static" differences and "dynamic" changes over time. In this study, the static aspect identifies the effect of the independent variables on the dependent variable between individuals at a point in time. The dynamic aspect identifies the effect of the independent variables on the dependent variable within individuals over time. As a consequence, the results of the static aspect are similar to those of a cross-sectional data analysis, which indicates the differences between individuals. The results of the dynamic aspect are equivalent to those of a panel fixed-effects analysis, which indicate the changes within individuals over time. Based on previous studies [5][6][7][8][9][10], there is still ambiguity about how the use of the Internet affects one's health status. Thus, in this study, we intend to examine whether there are differences in self-rated health between Internet users and nonusers in the static aspect. Additionally, while "The Healthy China 2030" policy has proposed using the Internet to improve the health status of the population, it should be investigated whether the use of the Internet improves the population's health over time from the dynamic aspect.
Concretely, we hypothesized the following: (1) Internet users are in better health than nonusers (from the aspect of static variation), and (2) starting to use the Internet (changing from a non-Internet user to an Internet user) improves an individual's health status (from the aspect of dynamic change). The findings of this research are expected to contribute to the development of Chinese public health policy.
Data and Sample
We obtained survey data from the China Family Panel Studies (CFPS). These have been conducted biennially by the Institute of Social Science Survey at Peking University, since 2010. The CFPS are national longitudinal social surveys conducted to investigate the recent changes in the Chinese society, economy, population, education, and health. The data cover 25 provinces, municipalities, and autonomous regions in China (excluding Hong Kong, Macao, Taiwan, Xinjiang, Tibet, Qinghai, Inner Mongolia, Ningxia, and Hainan), which contain 95% of China's population; therefore, it is nationally representative in substance [12]. In the CFPS, the target sample was 16,000 households and included all individuals living in those households, consisting of both family and nonfamily members, such as domestic helpers. The CFPS is a panel survey that, in principle, tracks family members in the subsequent survey. However, some individuals drop out because of reasons such as death or moving out of the community, while newborn and adopted children are added. The CFPS data are available to academic researchers and public policymakers [13]. We used the CFPS adult dataset for three waves in 2014, 2016, and 2018, in which the subjects were aged 16 years and over. In each of the three waves, "self-rated health", one of the questionnaire's items of concern, was coded in the same way . The numbers of adult respondents in 2014, 2016, and 2018 were 37,147, 36,892, and 37,354, respectively [14][15][16]. The total number of adult respondents throughout the three waves was 46,896 after verifying identical persons.
Before the CFPS interviews, the interviewers provided explanations about the survey, especially in regard to its confidentiality, to every participant [17], and those who consented to answer participated in the survey [13].
Dependent Variable
As the outcome indicator, we used self-rated health, which has frequently been used as a health measure [18][19][20][21]. The question item for self-rated health was "How would you rate your health status?", and the answer choices were "Excellent", "Very good", "Good", "Fair", and "Poor" [22]. This five-point scale was transformed into a dummy variable, which was assigned a score of 1 for responses of "Excellent", "Very good", or "Good", and 0 otherwise, for entry into binary logistic regression analysis in line with a previous study [23].
Independent Variables
As the key issue of this study was the influence of Internet usage on self-rated health, we set the time-varying use of the Internet as the main independent variable (use = 1, do not use = 0). For the other independent variables, we selected 13 kinds of time-varying and two kinds of time-invariant variables from the questionnaire items. The 13 kinds of time-varying variables consisted of five dummy variables, two ordinal variables, five continuous variables, and one categorical variable. The five dummy variables were marital status (married/have a spouse/cohabiting = 1, otherwise = 0; "otherwise = 0" is omitted hereafter), smoking (smoked cigarettes in the past month = 1), alcohol drinking (drank alcohol at least three times a week in the past month = 1), public health insurance (enrolled = 1), and residential area (urban = 1, rural = 0). The two ordinal variables were self-rated relative income and self-rated social status, both of which were on a five-point scale from 1 (very low) to 5 (very high). We divided each ordinal variable into five dummy variables by points. The five continuous variables were daily sleeping hours, the frequency of physical exercise in the past week, personal income (CNY; 1 CNY ≈ 0.14 USD), and height and weight. We created the square term of daily sleeping hours and transformed the participants' income logarithmically. From the participants' height and weight, we calculated the body mass index (BMI) and created two dummy variables: "overweight" (more than 25.0 kg/m² = 1) and "underweight" (less than 18.5 kg/m² = 1). We changed the one categorical variable, educational attainment, into a continuous variable: "years of education". For example, we converted "high school graduates" into 12 years of education. The two kinds of time-invariant variables were the respondents' age in 2016 (equivalent to birth cohort) and a gender dummy (man = 1). We also created the square term of the respondents' ages. In addition to the abovementioned variables, we also created two survey wave dummies: wave in 2016 and wave in 2018.
Moreover, time-varying independent variables, including Internet usage, were decomposed into two components: between-individual differences and within-individual changes. In the panel dataset, the same individuals were, in essence, observed repeatedly over the three waves. First, we calculated the mean of each time-varying variable over time for every individual; these individual-specific means represent the between-individual differences (i.e., static differences) in the variable, analogous to values in cross-sectional data. Second, we subtracted the individual-specific means from the observed values of each variable for every individual, which represented the within-individual changes (i.e., dynamic changes) in each variable over time [11].
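A minimal pandas sketch of this decomposition follows, assuming a long-format panel with one row per person-wave and a person identifier column named "pid"; the column names are placeholders, not the CFPS variable names.

```python
import pandas as pd

def decompose(panel: pd.DataFrame, cols, pid="pid"):
    # For each time-varying column, split it into a person-level mean
    # (between-individual, static part) and the deviation from that mean
    # (within-individual, dynamic part).
    out = panel.copy()
    for c in cols:
        person_mean = panel.groupby(pid)[c].transform("mean")
        out[f"{c}_between"] = person_mean
        out[f"{c}_within"] = panel[c] - person_mean
    return out

# e.g. panel = decompose(panel, ["internet", "sleep_hours", "log_income"])
```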
Statistical Analyses
Among the 46,896 adult respondents in the three waves, 46,886 provided answers on self-rated health at least once, 35,552 two or three times, and 22,091 three times. Meanwhile, many missing values were seen throughout the dataset, so we performed multiple imputation (MI) to fill in the missing values. The MI procedure fills in each missing value with a plausible value estimate based on all of the non-missing values of all of the variables in the dataset [24,25]. We repeated the random imputation process 100 times, and then 100 imputed datasets were generated. After the 100 imputation processes, all of the variable transformations mentioned above were made, such as binarizing ordinal variables, creating square terms, performing log-transformations, preparing new dummy variables, and decomposing time-varying variables.
Subsequently, using the 100 imputed datasets, we conducted two logistic regression models (a null model and a full model) with binarized self-rated health as the dependent variable. The null model had no independent variables except the constant terms. In the full model, the independent variables were the between- and within-individual variables derived from the use of the Internet and 13 other kinds of time-varying variables, two kinds of time-invariant variables, and two survey wave dummies. A panel data analysis model with both between- and within-individual independent variables is called a hybrid model [11]. Furthermore, the square terms of daily sleeping hours and age were entered into the full model together with the linear terms. Concurrently, the log-transformed income, overweight and underweight dummies, and dummy variables divided from the ordinal scales were entered instead of the respondents' income, their height and weight, and the ordinal scales of self-rated relative income and social status, in that order.
In the context of MI, a regression analysis is conducted separately on each imputed dataset, and the 100 regression results are combined into a single result [24,25]. All statistical procedures were performed using Stata release 16.1 (StataCorp, College Station, TX, USA).
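The impute-fit-pool mechanics can be illustrated in Python. This is only a sketch: the study used Stata, a plain pooled logistic regression stands in here for the hybrid (between/within) model, and sklearn's IterativeImputer is a generic stand-in for the imputation model; only the 100 imputations and Rubin's pooling rules mirror the description above.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

def mi_pooled_logit(data, y_index, m=100):
    # `data`: numeric array (rows = person-waves) holding the outcome and
    # predictors, with NaNs for missing values; `y_index` is the outcome column.
    params, variances = [], []
    for seed in range(m):
        imputed = IterativeImputer(sample_posterior=True,
                                   random_state=seed).fit_transform(data)
        y = (imputed[:, y_index] > 0.5).astype(int)          # re-binarize outcome
        X = sm.add_constant(np.delete(imputed, y_index, axis=1))
        fit = sm.Logit(y, X).fit(disp=False)
        params.append(np.asarray(fit.params))
        variances.append(np.diag(fit.cov_params()))
    params, variances = np.array(params), np.array(variances)
    # Rubin's rules: pooled estimate plus within- and between-imputation variance.
    pooled = params.mean(axis=0)
    total_var = variances.mean(axis=0) + (1 + 1 / m) * params.var(axis=0, ddof=1)
    return pooled, np.sqrt(total_var)   # coefficients and pooled standard errors
```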
Descriptive Statistics
Regarding the descriptive statistics (Table 1), the percentages of those who rated their health as "Excellent", "Very good", or "Good" averaged about 70% over the three waves. The percentage of Internet usage showed an increasing trend, from 29.9% in 2014 to 53.1% in 2018.
The mean age of the respondents was around 46 years over the three waves. From 2014 to 2018, education and the frequency of physical exercise in the past week increased from 7.5 to 8.2 years and from 1.8 to 2.6 times, respectively. The mean percentage of those who resided in an urban area was about 48% until 2016 and increased to 51% in 2018. The respondents' daily sleeping time was approximately 7.8 h throughout the three waves. Those who were married, had a spouse, or cohabited accounted for more than 70% of the respondents. Those who had smoked in the past month, had drunk more than three times a week in the past month, and who were enrolled in public health insurance accounted for more than 70%, about 15%, and about 91% of respondents in each wave, respectively. The prevalence of those who were overweight increased from 22.5% in 2014 to 26.5% in 2018, while those of the respondents who were a normal weight and underweight decreased from 68.0% to 64.9% and from 9.5% to 8.7% during the same period, respectively. The mean income was volatile: it was CNY 9022.7 in 2014, CNY 21,768.4 in 2016, and CNY 18,803.3 in 2018. Regarding the self-rated relative income, the percentage of those who answered "Very high" or "High" averaged around 10% until 2016 and increased to 22.9% in 2018. Conversely, those who answered "Low" or "Very low" averaged between 43% and 51% until 2016, and then decreased to 30.1% in 2018. Regarding the self-rated social status, no clear trend was observed, as the percentage of those who answered "Very high" or "High" varied from 20.2% to 29.7%, while the percentage of those who answered "Low" or "Very low" varied from 24.1% to 34.2%.
Regression Results
The results of the null model (Table 2) revealed a ρ value of 0.594, which means that the within-individual components (dynamic aspect) determined 40.6% (= 1 − 0.594) of the total variance of the dependent variable (self-rated health), and the between-individual components (static aspect) determined 59.4%. The results of the hybrid model with MI (Table 3) showed that, for the main independent variable, Internet usage, the between-individual coefficient was 0.342 and significant, while the within-individual coefficient was positive but not significant. Regarding smoking and years of education, the between-individual coefficients were significantly positive and the within-individual coefficients were insignificantly positive. Moreover, the overweight dummy had a significantly negative between-individual coefficient and a non-significantly negative within-individual coefficient. Regarding drinking, sleeping hours, physical exercise, log-transformed income, the four dummies of the self-rated income level, and the four dummies of the self-rated social status, both the between- and within-individual coefficients were significantly positive, although the within-individual coefficient for drinking was significant only at the 10% level. In addition, among the self-rated income level dummies, the largest coefficient was that for "Very high", followed by "High", "Medium", and "Low", in that order, for both the between- and within-individual components. The coefficients of the self-rated social status dummies aligned in the same order for both the between- and within-individual components. The underweight dummy and the square of sleeping hours had significantly negative between- and within-individual coefficients. Based on the coefficients of the linear and square terms, the within- and between-individual sleeping hours were positively correlated with the dependent variable below 9.2 and 8.5 h, respectively, and negatively correlated above 9.2 and 8.5 h, respectively. Public health insurance had a non-significantly negative between-individual coefficient and a within-individual coefficient that was positive and significant at the 10% level. Regarding variables with no distinction between the between- and within-individual components, male gender and the square of age had significantly positive coefficients, and age had significantly negative coefficients. The respondents' age was negatively correlated with the dependent variable at younger than 90.3 years and positively at older than 90.3 years.
Sensitivity Analysis
Regarding the sensitivity analysis, we conducted another full model without MI (Table 4). Comparing the results of the two full models with and without MI, the number of observations increased from 53,113 to 139,376, and the number of individuals rose from 34,073 to 46,460 with MI. There were no notable gaps between the coefficient estimates of the two models. All of the standard errors were reduced after MI, probably because of the increased number of observations. Consequently, four within-individual coefficients that were insignificant before MI (public health insurance, a low self-rated income level, the frequency of physical exercise, and log-transformed income) became significant after MI. Additionally, the within-individual coefficient of a very high self-rated social status, which had been significant at the 5% level, became significant at the 1% level. However, only the within-individual coefficient of the overweight dummy, which had been significant before MI, became non-significant after MI. As a whole, the results of the full model with MI were similar to those of the model without MI.
Notes: The results were similar overall between the two hybrid models with and without multiple imputation.
Discussion
In this study, panel data analyses conducted with the CFPS datasets in 2014, 2016, and 2018 using a null model revealed that self-rated health was determined by changes in individual attributes over time (the dynamic aspect) and by variation in individual attributes (the static aspect) at a ratio of approximately 4 to 6. Both aspects of the attributes were effective in measurable proportions, so we employed a hybrid model in which time-varying determinants were decomposed into within- and between-individual components. The between-individual coefficient of Internet use was significant, at 0.342, and the odds ratio (OR) was 1.41 (the OR is the exponential of the coefficient: exp(0.342) ≈ 1.41). However, the within-individual coefficient of Internet use was positive and not significant, which means that, statically, Internet users have 1.41 times the odds of reporting good self-rated health compared with nonusers, while, dynamically, starting to use the Internet is not effective in improving self-rated health. Consequently, hypothesis (1) was supported, but not hypothesis (2).
Among the previous literature about the association of the use of the Internet with one's health status, to our knowledge, self-rated health is often seen as the indicator of an individuals' health status along with their mental health (including depression). Unlike various biological indicators, self-rated health is an integrated variable and a holistic assessment of an individual's health status [18][19][20]. Therefore, we chose self-rated health as the health indicator in this study.
Previous cross-sectional studies with the CFPS dataset in 2018 showed that Internet users indicated a better self-rated health than did nonusers among adults [23,26]. Additionally, a previous study conducted panel analyses with the CFPS data from 2014 to 2018, in which lagged dependent and independent variables (i.e., self-rated health and the Internet usage of the preceding wave, respectively) were entered as independent variables, and found that the use of the Internet has a significantly positive association with self-rated health among adults [27]. Although the kind of panel analysis model used in that study was not clearly described, it seems to be a random-effects model. In the present and the previous studies, the origin of the data was the same (the CFPS dataset). However, unlike previous studies, we performed both the within- and between-estimation simultaneously. The between-estimation is similar to a cross-sectional estimation, using the individual-specific means of the variables measured longitudinally instead of values measured cross-sectionally at a point in time. In the present study, the results deduced from the between-individual coefficient of Internet use were the same as those of the two previous cross-sectional studies. On the other hand, the results obtained with the within-individual coefficient of Internet use differed from those of the previous panel study. However, the results of the random-effects model that we conducted as a preliminary check showed that the coefficient of Internet use was significantly positive and consistent with that of the previous panel study. In theory, the coefficients of a fixed-effects (within-estimation) model are always unbiased, and the coefficients of a random-effects model are intermediate between the between-estimator and the within-estimator [28]. That is, the results of a random-effects model are not always unbiased. Therefore, we believe the results of the within-individual coefficients of the present study may be more reliable than the results of the previous panel study. Further study is required to make this point clear.
For reference, previous studies with dependent variables other than self-rated health have reported that after 3 years of Internet usage, individuals rate themselves as having better physical/cognitive health and social well-being and as attending more health screenings compared with nonusers [29], whereas those who had used the Internet heavily at the age of 18 years had worse mental health at the ages of 21-22 years [30]. In addition, a study using panel data in a random-effects model reported that the use of the Internet was associated with a lower risk of a depressed state [31]. Moreover, a study that conducted panel fixed-effects analyses with datasets from the two waves of the CFPS in 2016 and 2018 found that changing from a non-Internet user to an Internet user reduced depression in older adults aged 60 years and over [32].
As a whole, a major finding of this study is that a causal relationship from starting to use the Internet to feeling healthier could not be inferred, despite the difference in self-rated health status between Internet users and nonusers. In other words, it leaves open the possibility that present Internet users may have felt healthy before starting to use the Internet, and that those who have been feeling unhealthy may be unwilling to start using the Internet. These findings were possible because of the strength of the hybrid model, which simultaneously analyzes two aspects, the dynamic change over time and the static divergence, and thereby comes closer to exact causal inference than a conventional analytical design.
In 2016, the Chinese government announced The Healthy China 2030, in Chapter 24 of which "Internet + health care" services were proposed to cover people's whole life cycle health management [4]. Of course, this measure is expected to benefit present Internet users. However, based on the results of this study, it is questionable whether the measure will be sufficiently effective in improving present non-Internet users' health status even once they start using the Internet. In this sense, measures that depend on the recent rapid increase in Internet users may not be enough to promote the overall health status of the population. Some alternative measures may be required to reduce the health disparities between Internet users and nonusers.
Contrary to our expectations, the between-individual coefficients of both smoking and drinking were significantly positive, which indicates that those who had smoked in the past month and those who had drunk over three times a week in the past month rated their health status as higher than those who had not. In addition, the within-individual coefficient of drinking was positively significant at the 10% level, which implies that self-rated health increases with the increasing frequency of drinking. This may be explained by unhealthy behavior leading to overrated self-reported health. Indeed, heavy smokers have reported excellent health as a result of an overconfidence in their own health [33]. Moreover, heavy drinkers aged 45 years and older have reported a better health status than those who only drink occasionally [34].
Meanwhile, we found from the results of the between-individual coefficients that those whose daily sleeping hours were around 8.5 h had better self-rated health than those who had shorter or longer daily sleeping hours. We also found from the results of the within-individual coefficients that, if someone slept fewer than 9.2 h, prolonging their sleeping time made him/her feel healthier, and if they slept for more than 9.2 h, shortening their sleeping time made him/her feel healthier. Previous studies have reported that short or insufficient sleep is associated with worse self-rated health [35,36]. The results of the present study indicated the effects of sleeping for too long and for too little using the square term. Indeed, a sleep duration of 7-8 h was associated with the lowest risk of chronic diseases such as obesity, diabetes, hypertension, and cardiovascular disease [37].
In addition, both being overweight and underweight were associated with a poor self-rated health, and recovery from being underweight improved one's self-rated health.
Moreover, those who engaged in frequent physical exercise reported a better self-rated health than those who did not, and increasing the frequency of physical exercise improved their self-rated health. Furthermore, the non-significant between-individual coefficient of public health insurance indicated that both enrollees and non-enrollees in public health insurance were homogeneous in terms of their self-rated health. However, the within-individual coefficient suggested that enrolling in public health insurance significantly improved one's self-rated health at the 10% level.
Regarding socioeconomic status, we found that one's income and two indices of subjective socioeconomic status (the self-rated income level and self-rated social status) were positively associated with self-rated health from both the between-and within-individual (static and dynamic) aspects. That is, from the static aspect, those who had a high income, perceived themselves as having a high income, and those who perceived themselves as having a high social status felt healthier than those who did not. A previous cross-sectional study reported that both the income and subjective identification of socioeconomic status were positively associated with self-rated health in China [38]. From the dynamic aspect, a real increase in one's income, the perception of receiving a higher income, and the perception of increasing one's social status improved the self-rated health of the participants.
Along with income, education is an index of real socioeconomic status. In this study, the statistically positive between-individual coefficient of years of education meant that highly educated persons had a better self-rated health than the less educated. This finding is consistent with those from previous cross-sectional studies [26,39]. However, the nonsignificant within-individual coefficient does not support the hypothesis that obtaining a higher education is related to self-rated health.
This study has several limitations. First, the results do not necessarily reflect the viewpoints of healthcare providers or professionals because we used national social survey data and, consequently, focused on the use of the Internet among the general public. Second, the purpose or frequency of the use of the Internet was not considered. For example, searching for health information online may have positive effects on one's self-rated health, while lengthy hours of Internet usage for entertainment may have negative effects. However, in the present study, we did not differentiate the kinds of purposes or the ranks of frequency, so the positive and negative effects may have been mixed and canceled each other out through the statistical analysis. Consequently, the within-individual coefficient of Internet use may appear to be non-significant. Third, the association between the use of the Internet and self-rated health may have been affected by the attributes of the respondents, such as their age, gender, and socioeconomic status. For example, young people may tend to be enthusiastic about entertainment, whereas older adults may search for health-care information more often. However, differences in the associations by attributes may have been missed because regression analyses were not performed separately by attributes. Fourth, the precision of the data for the dependent variable, self-rated health, had to be reduced from a five-point scale to a binary item because a panel ordinal logistic regression analysis of multiply imputed data is not possible in Stata.
Conclusions
In the present study, the association between the use of the Internet and self-rated health was analyzed using a hybrid model with national social survey data from 2014 to 2018 among adults in China. The between-individual coefficient (the static aspect) of Internet use showed that Internet users had 1.41 times the odds of reporting good self-rated health compared with nonusers. However, the within-individual coefficient (the dynamic aspect) of Internet use did not indicate that starting to use the Internet was effective in improving one's self-rated health. In other words, an increase in the number of Internet users may not improve individuals' health status. These results provide a concrete example of how cross-sectional analysis is deficient in proving causality between two variables. In addition, our findings suggest that policymakers should take care not to confuse static variation with dynamic change when considering the causal relationship between the use of the Internet and self-rated health. If policymakers attempt to promote public health by increasing the number of Internet users, it may be hard to achieve the desired results. Rather, since Internet users have a higher health status than nonusers, it is necessary to discuss how to reduce the health disparities between Internet users and nonusers. | 2023-01-12T17:11:51.514Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "3cd04e135e3df92978f870bfe901355639c3dbc3",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/20/2/1003/pdf?version=1672923764",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "742d0094c59d916a6ce26bc21edb802c00c46f00",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |